
OpenAI and Elon Musk

Reubend
301 replies
15h46m

Their evidence does a nice job of making Musk seem duplicitous, but it doesn't really refute any of his core assertions: they still have the appearance of abandoning their core mission to focus more on profits, even if they've elaborated a decent justification for why that's necessary.

Or to put it more simply: here they explain why they had to betray their core mission. But they don't refute that they did betray it.

They're probably right that building AGI will require a ton of computational power, and that it will be very expensive. They're probably right that without making a profit, it's impossible to afford the salaries of hundreds of experts in the field and an army of hardware to train new models. To some extent, they may be right that open sourcing AGI would lead to too much danger. But instead of changing their name and their mission, and returning the donations they took from these wealthy tech founders, they used the benevolent appearance of their non-profit status and their name to mislead everyone about their intentions.

elif
158 replies
15h25m

they may be right that open sourcing AGI would lead to too much danger.

I think this part is proving itself to be an understandable but false perspective. The hazard we are experiencing with LLMs right now is not how freely accessible and powerfully truthy their content is; it is precisely the controls that the large model operators are trying to inject which are generating mistrust and a poor understanding of what these models are useful for.

Society is approaching them as some type of universal ethical arbiter, expecting an omniscient sense of justice which is fundamentally irreconcilable even between two sentient humans, when the ethics are really just a hacked-on mod to the core model.

I'm starting to believe that if these models had the training wheels and blinders off, they would be understood as usefully coherent interpreters of the truths which exist in human language.

I think there is far more societal harm in trying to codify unresolvable sets of ethics than in saying hey this is the wild wild west, like the www of the 90's, unfiltered but useful in its proper context.

TaylorAlexander
47 replies
12h5m

The biggest real problem I’m experiencing right now isn’t controls on the AI, it’s weird spam emails that bypass spam filters because they look real enough, but are just cold email marketing bullshit:

https://x.com/tlalexander/status/1765122572067434857

These systems are making the internet a worse place to be.

DrSiemer
26 replies
10h54m

So you still believe the open internet has any chance of surviving what is coming? I admire your optimism.

Reliable information and communication will soon be opt-in only. The "open internet" will become an eclectic collection of random experiences, where any real human output will be a pleasant, rare surprise, that stirs the pot for a short blip before it is assimilated and buried under the flood.

flohofwoe
24 replies
10h25m

The internet will be fine, social media platforms will be flooded and killed by AI trash, but I can't see anything bad about that outcome. An actually 'open internet' for exchanging ideas with random strangers was a nice utopia from the early 90's that was killed long ago (or arguably never existed).

BlueTemplar
22 replies
10h6m

E-mail has *not* been killed long ago (aside from some issues with trying to run your own server and not getting blocked by gmail/hotmail).

It is under threat now, due to the increased sophistication of spam.

flohofwoe
21 replies
9h42m

Email might already be 'culturally dead' though. I guess my nephew might have an email address for the occasional password recovery, but the idea of actually communicating over email with other humans might be completely alien to him ;)

Similar in my current job btw.

BlueTemplar
16 replies
9h0m

Ok, I get how one might use a variety of other tools for informal communication, I don't really use e-mail for that any more either, but I'm curious, what else can you possibly use for work? With the requirement that it must be easy to back up and transfer (for yourself as well as the organization), so any platforms are immediately out of the question.

myaccountonhn
7 replies
5h29m

Unfortunately Slack is what is used at places I've worked at.

mlrtime
6 replies
5h1m

I prefer Slack over email... coming from using Outlook for 20+ years in a corporate environment, Slack is light years beyond email in rich communication.

I'm trying to think of one single thing email does better than Slack in corporate communication.

Is Slack perfect? Absolutely not. I don't care that I can't use any client I want, or about back-end administration costs or hurdles. As a user, there is no comparison.

myaccountonhn
2 replies
4h48m

I absolutely loathe it but I respect that many like it. Personally I think Zulip or Discourse are better solutions, since they provide a similar interface to Slack but still have email integration, so people like me who prefer email can use that.

The thing I hate the most is that people expect to use only Slack for everything, even where it doesn't make sense. So they will miss notifications for Notion, Google Docs, Calendar and Github, because they don't have proper email notifications set up. The Slack plugins are nowhere near as good as email notifications, as they just get drowned out in the noise.

And with all communication taking place in Slack, remembering what was decided on a certain issue becomes impossible, because finding anything on Slack is impossible.

mlrtime
1 replies
4h0m

I agree notifications in Slack are limited...

But no better than email again. You just get notified if you have an email or not [depending on the client you're forced to use]. I have many email rules that just filter out nonsense from github, google and others so my inbox stays clean.
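
For illustration, that kind of rule can even live outside the mail client. A minimal sketch in Python using the standard imaplib (the server, credentials, folder and sender address are all made-up examples):

    import imaplib

    # Hypothetical IMAP server and credentials
    M = imaplib.IMAP4_SSL("imap.example.com")
    M.login("me@example.com", "app-password")
    M.select("INBOX")

    # Find automated notification mail (the sender address is just an example)
    typ, data = M.search(None, '(FROM "notifications@github.com")')
    for num in data[0].split():
        M.copy(num, "Notifications")          # file it into a side folder
        M.store(num, "+FLAGS", "\\Deleted")   # and remove it from the inbox
    M.expunge()
    M.logout()

In practice the same thing is a couple of clicks in the mail client's filter settings; the point is just that the inbox only shows human mail.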

myaccountonhn
0 replies
3h50m

But no better than email again. You just get notified if you have an email or not [depending on the client you're forced to use]. I have many email rules that just filter out nonsense from github, google and others so my inbox stays clean.

I guess I find them useful because that way I get all my notifications across all channels in one spot, if you set them up properly. Github and Google Docs also allow you to reply directly within the email, so I don't even need to open up the website.

In Slack the way it was setup was global, so I got notified for any update even if it didn't concern me.

boredtofears
2 replies
4h22m

I can think of a few:

- Long form discussion where people think before responding.

- Greater ability to filter/tag/prioritize incoming messages.

- Responses are usually expected within hours, not minutes.

- Email does not have a status indicator that broadcasts whether I am sitting at my desk at that particular moment to every coworker at my company.

mlrtime
1 replies
3h58m

1 & 2 are human behaviors, not a technology. You can argue that a tech promotes a behavior but this sounds like a boundaries issue, not a tech issue.

#2 I agree with you, again Slack is not perfect but better than email [my opinion]

#4 Slack has status updates, email does not. So you can choose to turn this off, again boundaries.

boredtofears
0 replies
3h41m

Real time chat environments are not conducive to long form communication. I've never been a part of an organization that does long form communication well on slack. You can call it a boundary issue - it doesn't really matter how you categorize the problem, it's definitely a change from email.

Regarding #4, I can't turn off my online/offline indicator, I can only go online or offline. I can't even set the time it takes to go idle. These are intentionally not configurable in slack. I have no desire to have a real time status indicator in the tool I use to communicate with coworkers.

flohofwoe
5 replies
8h5m

I'm not saying that the alternatives are better than email (e.g. at my job, Slack has replaced email for communication, and Gitlab/JIRA/Confluence/GoogleDocs is used for everything that needs to persist). The focus has shifted away from email for communication (for better or worse).

mcculley
4 replies
6h18m

A point that I don’t think is being appreciated here is that email is still fundamental to communication between organizations, even if they are using something less formal internally.

One’s nephew might not interact in a professional capacity with other organizations, but many do.

mlrtime
1 replies
4h59m

We use Slack for all of our vendor communications, at least the ones that matter.

Even our clients use Slack to connect; it is many times better than email.

mcculley
0 replies
42m

I don’t doubt that vendors in some industries communicate with customers via Slack. I know of a few in the tech industry. The overwhelming majority of vendor and customer professional communication happens over email.

lupire
1 replies
5h18m

B2C uses texting. Even if B uses email, C doesn't read it. And that is becoming just links to a webapp.

B2B probably still uses email, but that's terrible because of the phishing threat. Professional communication should use webapps plus notification pings which can be app notifications, emails, or SMS.

People to people at different organizations again fall back to texting as an informal channel.

mcculley
0 replies
1h18m

The overwhelming majority of B2B communication is email.

I have sent 23 emails already today to 19 people in 16 organizations.

lanstin
0 replies
1h36m

It's been five years since I could expect my email to be read. If I lack another channel to the person, I'll wait some decent period and send a "ping" email. But for people I work with routinely, I'll Slack or iMessage or Signal or even set up a meeting. Oddly, my young adult children do actually use email more so than e.g. voice calls. We'll have 10 ambiguous messages trying to make some decision and still they won't pick up my call. It's annoying because a voice call causes three distinct devices to ring audibly, but messages just increment some flag in the UI after the transient little popup. And for work email I'm super strict about keeping it factual only, with as close to zero emotion as my Star Trek loving self can manage. May you live long and proper.

There are certain actions I have to use email for, and it feels a little bit more like writing a check each year.

And all these email subscriptions, jeez, I don't want that. Even people who are brilliant and witty and informative on Xitter or Mastodon in little slices over time, I still don't want to sit down and read a long form thing from them once a week.

Yizahi
0 replies
5h27m

I see a difference, for work, between enforced and voluntary email use. At my company everyone uses email because it is mandatory, and human-to-human conversations happen there without issues. But as soon as I try to contact some random company by email as an individual, like asking a shop about product details or packaging, or contacting some org to find out about services, it's dead silence in return. But when I find their chat/messenger they do respond. The only people still responding to emails from external sources, in my experience, are property agents, and even there the response time is slower than in chats.

lynx23
3 replies
8h57m

But your nephew likely also uses TikTok, right? Not everything the young do is a trend others should follow.

Capricorn2481
1 replies
4h59m

That's not really the argument, is it? The argument is most young people are using TikTok and will never use email for social things.

pixl97
0 replies
3h3m

I mean, is another argument not "Soon the tok will be filled with its own AI generated shit?"

mistermann
0 replies
4h14m

You think usage of TikTok is necessarily a trend that others should not follow? Do you say this as a sophisticated, informed user of the platform?

mistermann
0 replies
4h16m

The internet will be fine, social media platforms will be flooded and killed by AI trash

In a battle between AI and teenage human ingenuity, I'll bet my money on the teenagers. I'd even go so far as to say they may be our only hope!

johnnyanmac
0 replies
9h0m

Some parts yes, some parts no. Communication as we know it will effectively cease to exist, as we will have to either require strong enough verification to kill off anonymity, or somehow provide very strong, very adaptive spam filters. Or manual moderation in terms of some anti-bot vetting. It depends on the demand for such "no bot" content.

Search may be reduced to nigh uselessness, but the savvy will still be able to share quality information as needed. AI may even assist in that, for people who have the domain knowledge to properly correct the prompts and rigorously proofread. How we find that quality information may, once again, be through closed off channels.

andrepd
10 replies
10h58m

Generative AI will make the world in general a worse place to be. These models are not very good at writing truth, but they are excellent at writing convincing bullshit. It's already difficult to distinguish generated text/image/video from human responses / real footage, and it's only gonna get more difficult to do so and cheaper to generate.

In other words, it's very likely generative AI will be very good at creating fake simulacra of reality, and very unlikely it will actually be good AGI. The worst possible outcome.

sausse
3 replies
10h41m

Half of zoomers get their news from TikTok or Twitch streamers, neither of whom have any incentive for truthfulness over holistic narratives of right and wrong.

The older generations are no better. While ProPublica or WSJ put effort into their investigative journalism, they can’t compete with the volume of trite commentary coming out of other MSM sources.

Generative AI poses no unique threat; society's capacity to "think once and cut twice" will remain intact.

lewhoo
2 replies
6h29m

Generative AI poses no unique threat;

While the threat isn't unique, the magnitude of the threat is. This is why you can't argue in court that the threat of a car crash is nothing unique whether you were speeding or driving within the limit.

sausse
1 replies
44m

Sure, if you presume organic propaganda is analogous in danger to driving within the limit.

But the difference between a car going into a stroller at 150mph versus 200mph is negligible.

The democratization of generative AI would increase the number of bad agents, but with it would come a better awareness of their tactics; perhaps we push fewer strollers into the intersections known for drag racing.

lewhoo
0 replies
31m

But the difference between a car going into a stroller at 150mph versus 200mph is negligible.

I guess when you distort every argument to absurdity you can claim you're right.

but with it would come a better awareness of their tactics

I don't follow. Are you saying new and more sophisticated ways to scam people are actually good because we have a unique chance to know how they work?

amarant
3 replies
8h46m

Replace "AI" in your comment with "human journalists" and it still holds largely true though.

It's not like AI invented clickbait, though it might have mastered the art of it.

The convincing bullshit problem does not stem from AI; I'd argue it stems from the interaction between ad revenue and SEO and the weird and unexpected incentives created when mixing the two.

To put it differently, the problem isn't that AI will be great at writing 100 pages of bullshit you'll need to scroll through to get to the actual recipe; the problem is that there was an incentive to write those pages in the first place. Personally I don't care if a human or a robot wrote the bs, in fact I'm glad one fewer human has to waste their time doing just that. It would be great if cutting the bs were a more profitable model, though.

friendzis
2 replies
7h3m

I'd argue it stems from the interaction between ad revenue and SEO and the weird and unexpected incentives created when mixing the two.

Personally, I highly dislike this handwaving about SEO. SEO is not some sinister, agenda-following secret cult trying to disseminate bullshit. SEO is just... following the rules set forth by search engines, which for quite a long time has effectively meant Google, single-handedly.

Those "weird and unexpected incentives" are put forth by Google. If Google for whatever reason started ranking "vegetable growing articles > preparation technique articles > recipes > shops selling vegetables" we would see metaphorical explosion of home gardening in mere few years, only due to the relatively long lifecycles inherent in gardening.

lupire
0 replies
5h9m

The explosion would be in BS articles about gardening, plus ads for whatever the user's profile says they are susceptible to.

SEO is gaming Google's heuristics. Google doesn't generate a perfect ranking according to Google humans' values.

SEO gaming is much older than Google. Back when "search" was just an alphabetical listing of everyone in a printed book, we had companies calling themselves "A A Aachen" to get to the front of the book.

amarant
0 replies
3h52m

It's a classic case of "once a metric becomes a target, it ceases to be a good metric"

To clarify, Google defines the metrics by which pages are ranked in their search results, and since everyone wants to be at the top of Google's search results, those metrics immediately become targets for everyone else.

It's quite clear to me that the metrics Google has introduced over the years have been meant to improve the quality of its search results. It's also clear to me that they have, in actual fact, had the exact opposite effect: recipes are now prepended with a poorly written novella about that one time the author had an emotionally fulfilling dinner with loved ones one autumn, in order to increase time spent on the page, since Google at one point quite reasonably assumed that pages where visitors stay longer are of higher quality (otherwise why did visitors stay so long?).

Al-Khwarizmi
1 replies
9h45m

We will have to go back to using trust in the source as the main litmus test for credibility. Text from sources that are known to have humans write (or verify) everything they publish in a reasonably neutral way will be trusted, the rest will be assumed to be bullshit by default.

It could be the return of real journalism. There is a lot to rebuild in this respect, as most journalism has gone to the dogs in the last few decades. In my country all major newspapers are political pamphlets that regularly publish fake news (without the need for any AI). But one can hope: maybe the lowering of the barrier to entry for generating fake content will make people more critical of what they read, hence incentivizing the creation of actually trustworthy sources.

friendzis
0 replies
7h42m

If an avalanche of generative content tips the scales towards (blind) trust in human writers, those "journalists" pushing out propaganda and outright fake news will have an increased incentive to do so, not a lowered one.

eightman
5 replies
10h35m

The use case for AI was, is and always will be spam.

AlienRobot
2 replies
6h17m

"The best minds of my generation are thinking about how to make people click ads. That sucks."

I can't believe this went all the way to AI...

delichon
1 replies
4h55m

Civilization is an artifact of thermodynamics, not an exception to it. All life, including civilized life, is about acquiring energy and creating order out of it, primarily by replicating. Money is just one face of that. Ads are about money, which is about energy, which fuels life. AI is being created by these same forces, so is likely to go the same way.

You might as well bemoan gravity.

lanstin
0 replies
1h31m

We might question the structural facets of the economy or the networking technology that made spam I mean ads a better investment than federated/distributed micropayments and reader-centric products. I would have kept using Facebook if they let me see the things my friends took the trouble to type in, rather than flooding me with stuff to sell more ads, and seeing the memes my friends like, which I already have too many cool memes, don't need yours.

hef19898
0 replies
9h8m

You forgot the initial use case for the internet: porn.

ben_w
0 replies
10h4m

For language models, spam creation/detection is kinda a GAN even when it isn't specifically designed to be: a faker and a discriminator each training on the other.

But when that GAN passes the human threshold, suddenly you can use the faker to create interesting things and not just use the discriminator to reject fakes.
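
To make the analogy concrete, here is a toy sketch of that arms race (everything in it is made up for illustration; the "faker" is a template mangler and the "discriminator" is a naive keyword blocklist, not real models):

    import random

    # Toy sketch of the spam arms race: a "faker" that keeps changing its
    # wording and a "discriminator" that keeps growing a blocklist.
    vocabulary = ["winner", "crypto", "free", "deal", "prize", "offer"]
    blocklist = set()   # what the filter has learned to reject so far

    for round_no in range(5):
        # Faker: compose a message avoiding words the filter already knows
        usable = [w for w in vocabulary if w not in blocklist] or ["hello"]
        message = " ".join(random.sample(usable, k=min(3, len(usable))))

        # Discriminator: flag anything containing known spam words,
        # then learn the words it missed
        flagged = any(w in blocklist for w in message.split())
        if not flagged:
            blocklist.update(message.split())

        print(f"round {round_no}: '{message}' flagged={flagged}")

Real systems replace both sides with learned models, which is what makes the dynamic GAN-like: each side's training signal is the other side's current behaviour.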

lynx23
2 replies
9h0m

Thanks Meta for releasing llama, one of the most questionable releases of the past years. Yes, I know, it's fun to play with local LLMs, and maybe that's reason enough to downvote this to hell. But there is also the other side: free models like this enabled the text pollution which we now have. Did I already say "Thanks Meta"?

fzzzy
0 replies
6h36m

What? How do OpenAI and Anthropic and Mistral API access contribute less to text pollution?

123yawaworht456
0 replies
2h57m

neither of the big cloud models have any fucking guardrails against generating spam. I'd venture to guess that 99% of spam is either gpt3.5 (which is better, cheaper and easier to use than any local model) or gpt4 with scraped keys or funded by stolen credit cards.

you have no evidence whatsoever that llama models are being used for that purpose. meanwhile, twitter is full of bots posting GPT refusals.

intended
43 replies
12h54m

it is precisely the controls that the large model operators are trying to inject which are generating mistrust and a poor understanding of what these models are useful for.

Citation needed.

Counterpoints:

- LLMs were mistrusted well before anything recent.

- More controls make LLMs more trustworthy for many people, not less. The Snafu at Goog suggests a need for improved controls, not 0 controls.

- The American culture wars are not global. (They have their own culture wars).

FeepingCreature
34 replies
12h17m

Counter-counterpoint: absolutely nobody who has unguardrailed Stable Diffusion installed at home for private use has ever asked for more guardrails.

I'm just saying. :) Guardrails nowadays don't really focus on dangers (it's hard to see how an image generator could produce dangers!) so much as enforcing public societal norms.

bergen
26 replies
12h4m

Just because something is not dangerous to the user doesn't mean it can't be dangerous for others when someone is wielding it maliciously.

BriggyDwiggs42
25 replies
11h28m

What kind of damage can you do with a current day llm? I’m guessing targeted scams or something? They aren’t even good hackers yet.

boredtofears
9 replies
10h55m

Fake revenge porn, nearly undetectable bot creation on social media with realistic profiles (I've already seen this on HN), generated artwork passed off as originals, chatbots that replace real-time human customer service but have none of the agency... I can keep going.

All of these are things that have already happened. These all were previously possible of course but now they are trivially scalable.

ben_w
7 replies
9h30m

Most of those examples make sense, but what's this doing on your list?

chatbots that replace real-time human customer service but have none of the agency

That seems good for society, even though it's bad for people employed in that specific job.

boredtofears
2 replies
4h29m

I've been running into chatbots that are confined to doling out information from their knowledge base, with no ability to help with edge case/niche scenarios, and yet they've replaced all the mechanisms for receiving customer support.

Essentially businesses have (knowingly or otherwise) dropped their ability to provide meaningful customer support.

ben_w
1 replies
4h21m

That's the previous status quo; you'd also find this in call centres where customer support had to follow scripts, essentially as if they were computers themselves.

Even quite a lot of new chatbots are still in that paradigm, and… well, given the recent news about chatbot output being legally binding, it's precisely the extra agency of LLMs over both normal bots and humans following scripts that makes them both interestingly useful and potentially dangerous: https://www.bbc.com/travel/article/20240222-air-canada-chatb...

boredtofears
0 replies
4h19m

I don't think so. In my experience having an actual human on the other line gives you a lot more options for receiving customer support.

Nursie
2 replies
8h32m

That seems good for society, even though it's bad for people employed in that specific job.

Why?

It inserts yet another layer of crap you have to fight through before you can actually get anything done with a company. The avoidance of genuine customer service has become an art form at many companies and corporations; the demise of real support should surely be lamented. A chatbot is just another weapon in the arsenal designed to confuse, put off and delay the cost of having to actually provide a decent service to your customers, which should be a basic responsibility of any public-facing company.

ben_w
1 replies
7h33m

Two things I disagree with:

1. It's not "an extra layer", at most it's a replacement for the existing thing you're lamenting, in the businesses you're already objecting to.

2. The businesses which use this tool at its best, can glue the LLM to their documentation[0], and once that's done, each extra user gets "really good even though it's not perfect" customer support at negligible marginal cost to the company, rather than the current affordable option of "ask your fellow users on our subreddit or discord channel, or read our FAQ".

[0] a variety of ways — RAG is a popular meme now, but I assume it's going to be like MapReduce a decade ago, where everyone copies the tech giants without understanding the giant's reasons or scale
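
To sketch what "gluing the LLM to the documentation" means in the RAG case: retrieve the documentation chunks closest to the question, then stuff them into the prompt. A toy version, with a bag-of-words similarity standing in for a real embedding model and the actual LLM call left out (the docs and the prompt wording are invented for the example):

    from collections import Counter
    import math

    # Hypothetical documentation snippets the support bot can draw on
    docs = [
        "Refunds are processed within 14 days of the request.",
        "To reset your password, use the 'forgot password' link.",
        "Enterprise plans include a dedicated account manager.",
    ]

    def bow(text):
        # Bag-of-words counts; a real system would use an embedding model
        return Counter(text.lower().split())

    def cosine(a, b):
        common = set(a) & set(b)
        num = sum(a[w] * b[w] for w in common)
        den = math.sqrt(sum(v * v for v in a.values())) * \
              math.sqrt(sum(v * v for v in b.values()))
        return num / den if den else 0.0

    def build_prompt(question, k=2):
        # Rank docs by similarity to the question and keep the top k
        ranked = sorted(docs, key=lambda d: cosine(bow(d), bow(question)), reverse=True)
        context = "\n".join(ranked[:k])
        # This string would then be sent to whichever LLM the business uses
        return f"Answer using only this documentation:\n{context}\n\nQuestion: {question}"

    print(build_prompt("How long do refunds take?"))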

Nursie
0 replies
5h32m

It's an extra layer of "Have you looked at our website/read our documentation/clicked the button" that I've already done, before I will (if I'm lucky) be passed onto a human that will proceed to do the same thing before I can actually get support for my issue.

If I'm unlucky it'll just be another stage in the mobius-support-strip that directs me from support web page to chatbot to FAQ and back to the webpage.

The businesses which use this tool best will be the ones that manage to lay off the most support staff and cut the most cost. Sad as that is for the staff, that's not my gripe. My gripe is that it's just going to get even harder to reach a real actual person who is able to take a real actual action, because providing support is secondary to controlling costs for most companies these days.

Take for example the pension company I called recently to change an address - their support page says to talk to their bot, which then says to call a number, which picks up, says please go to your online account page to complete this action and then hangs up... an action which the account page explicitly says cannot be completed online because I'm overseas, so please talk to the bot, or you can call the number. In the end I had to call an office number I found through google and be transferred between departments.

An LLM is not going to help with that, it's just going to make the process longer and more frustrating, because the aim is not to resolve problems, it's to stop people taking the time of a human even when they need to, because that costs money.

johnnyanmac
0 replies
8h48m

the issue is "none of the agency". Humans generally have enough leeway to fold to a persistent customer because it's financially unviable to have them on the phone for hours on end. A chatbot can waste all the time in the world, with all the customers, and may not even have the ability to process a refund or whatnot.

chankstein38
0 replies
1h33m

Why is everyone's first example of things you can do with LLMs "revenge porn"? They're text generation algorithms not even image generators. They need external capabilities to create images.

ben_w
8 replies
9h31m

The moment they are good hackers, everyone has a trivially cheap hacker. Hard to predict what that would look like, but I suspect it is a world where nobody is employing software developers because a LLM that can hack can probably also write good code.

So, do you want future LLMs to be restricted, or unlimited? And remember, to prevent this outcome you have to predict model capabilities in advance, including "tricks" like prompting them to "think carefully, step by step".

lupusreal
5 replies
7h24m

Use the hacking LLM to verify your code before pushing to prod. EZ

ben_w
4 replies
7h22m

your code

To verify the LLM's code, because the LLM is cheaper than a human.

And there's a lot of live code already out there.

And people are only begrudgingly following even existing recommendations for code quality.

lupusreal
3 replies
7h19m

Your code because you own it. If LLM hackers are rampant as you fear then people will respond by telling their code writing LLMs to get their shit together and check the code for vulnerabilities.

ben_w
2 replies
7h5m

Your code because you own it.

I code because I'm good at it, enjoy it, and it pays well.

I recommend against 3rd party libraries because they give me responsibility without authority — I'd own the problem without the means to fix it.

Despite this, they're a near-universal in our industry.

If LLM hackers are rampant as you fear then people will respond by telling their code writing LLMs to get their shit together and check the code for vulnerabilities.

Eventually.

But that doesn't help with the existing deployed code — and even if it did, this is a situation where, when the capability is invented, attack capability is likely to spread much faster than the ability of businesses to catch up with defence.

Even just one zero-day can be bad, this… would probably be "many" almost simultaneously. (I'd be surprised if it was "all", regardless of how good the AI was).

lupusreal
1 replies
6h32m

I never asked you why you code, this conversation isn't, or wasn't, about your hobbies. You proposed a future in which every skiddy has a hacking LLM and they're using it to attack tons of stuff written by LLMs. If hacking LLMs and code writing LLMs both proliferate then the obvious resolution is for the code writing LLMs to employ hacking LLMs in verifying their outputs.

Existing vulnerable code will be vulnerable, yes. We already live in a reality in which script kiddies trivially attack old outdated systems. This is the status quo, the addition of hacking LLMs changes little. Insofar as more systems are broken, that will increase the pressure to update those systems.

ben_w
0 replies
6h12m

I never asked you why you code

Edit: I misread that bit as "you code" not "your code".

But "your code because you own it", while a sound position, is a position violated in practice all the time, and not only because of my example of 3rd party libraries.

https://www.reuters.com/legal/transactional/lawyer-who-cited...

They are held responsible for being very badly wrong about what the tools can do. I expect more of this.

You proposed a future in which every skiddy has a hacking LLM and they're using it to attack tons of stuff written by LLMs. If hacking LLMs and code writing LLMs both proliferate then the obvious resolution is for the code writing LLMs to employ hacking LLMs in verifying their outputs.

And it'll be a long road, getting to there from here. The view at the top of a mountain may be great or terrible, but either way climbing it is treacherous. Metaphor applies.

Existing vulnerable code will be vulnerable, yes. We already live in a reality in which script kiddies trivially attack old outdated systems. This is the status quo, the addition of hacking LLMs changes little. Insofar as more systems are broken, that will increase the pressure to update those systems.

Yup, and that status quo gets headlines like this: https://tricare.mil/GettingCare/VirtualHealth/SecurePatientP...

I assume this must have killed at least one person by now. When you get too much pressure in a mechanical system, it breaks. I'd like our society to use this pressure constructively to make a better world, but… well, look at it. We've not designed our world with a security mindset, we've designed it with "common sense" intuitions, and our institutions are still struggling with the implications of the internet let alone AI, so I have good reason to expect the metaphorical "pressure" here will act like the literal pressure caused by a hand grenade in a bathtub.

fallingknife
1 replies
7h4m

The moment LLMs are good hackers every system will be continuously pen tested by automated LLMs and there will be very few remaining vulnerabilities for the black hat LLMs to exploit.

ben_w
0 replies
4h12m

The moment LLMs are good hackers every system will be continuously pen tested by automated LLMs

Yes, indeed.

and there will be very few remaining vulnerabilities for the black hat LLMs to exploit.

Sadly, this does not follow. Automated vulnerability scanners already exist, how many people use them to harden their own code? https://www.infosecurity-magazine.com/news/gambleforce-websi...

hef19898
4 replies
9h3m

Damage you can do:

- propaganda and fake news

- deep fakes

- slander

- porn (revenge and child)

- spam

- scams

- intellectual property theft

The list goes on.

And for quite a few of those use cases I'd want some guard rails even for a fully on-premise model.

chankstein38
1 replies
56m

Your other comment is nested too deeply to reply to. I edited my comment reply with my response but will reiterate. Educate yourself. You clearly have no idea what you're talking about. The discussion is about LLMs not AI in general. The question stated "LLMs" which are not equal to all of AI. Please stop spreading misinformation.

You can say "fact" all you want but that doesn't make you correct lol

hef19898
0 replies
48m

You are seriously denying that generative AI is used to create fake images, videos and scam / spam texts? Really?

chankstein38
1 replies
1h34m

Half of your examples aren't even things an LLM can do and the other half can be written by hand too. I can name a bunch of bad sounding things as well but that doesn't mean any of them have any relevance to the conversation.

EDIT: Can't reply, but you clearly have no idea what you're talking about. AI is used to create these things, yes. But the question was about LLMs, which I reiterated. They are not equal. Please read up on this stuff before forming judgements or confidently stating incorrect opinions that other people, who also have no idea what they're talking about, will parrot.

hef19898
0 replies
1h3m

AI already is used to create fake porn, either of celebrities or children, fact. It is used to create propaganda pieces and fake videos and images, fact. Those can be used for everything from defamation to online harassment. And AI is using other people's copyrighted content to do so, also a fact. So, what's your point again?

bergen
0 replies
8h49m

Targeted spam, review bombing, political campaigns.

ben_w
6 replies
9h34m

Counter-counterpoint: absolutely nobody who has unguardrailed Stable Diffusion installed at home for private use has ever asked for more guardrails.

Not so. I have it at home, I make nice wholesome pictures of raccoons and tigers sitting down for Christmas dinner etc., but I also see stories like this and hope they're ineffective: https://www.bbc.com/news/world-us-canada-68440150

scarygliders
5 replies
8h30m

Unfortunately you've been misled by the BBC. Please read this: https://order-order.com/2024/03/05/bbc-panoramas-disinformat...

Those AI generated photos are from a Twitter/X parody account @Trump_History45 , not from the Trump campaign as the BBC mistakenly (or misleadingly) claim.

ben_w
4 replies
7h24m

Those AI generated photos are from a Twitter/X parody account @Trump_History45 , not from the Trump campaign as the BBC mistakenly (or misleadingly) claim.

They specifically said who they came from, and that it wasn't the Trump campaign. They even had a photo of one of the creators, whom they interviewed in that specific piece I linked to, and tried to get interviews with others.

scarygliders
3 replies
4h7m

Look at the BBC article...

Headline: "Trump supporters target black voters with faked AI images"

@Trump_History45 does appear to be a Trump supporter. However, he is also a parody account and states as such on his account.

The BBC article goes full-on with the implication that the AI images were produced with the intent to target black voters. The BBC is expert at "lying by omission"; that is, presenting a version of the truth which is ultimately misleading because they do not present the full facts.

The BBC article itself leads a reader to believe that @Trump_History45 created those AI images with the aim of misleading black voters and thus to garner support from black voters in favour of Trump.

Nowhere in that BBC article is the word "parody" mentioned, nor any examination of any of the other AI images @Trump_History45 has produced. If they had, and had fairly represented that @Trump_History45 X account, then the article would have turned out completely different;

"Trump Supporter Produces Parody AI Images of Trump" does not have the same effect which the BBC wanted it to have.

crashmat
0 replies
2h33m

I don't know whether this is the account you are talking about, but when discussing an image posted by the second account, the article says: 'It had originally been posted by a satirical account that generates images of the former president', so if this is the account you are talking about...

I won't deny the BBC has very biased reporting for a publicly funded source.

lmm
7 replies
12h35m

More controls make LLMs more trustworthy for many people, not less. The Snafu at Goog suggests a need for improved controls, not 0 controls.

To whom? And, as hard as this is to test, how sincerely?

The American culture wars are not global. (They have their own culture wars).

Do people from places with different culture wars trust these American-culture-war-blinkered LLMs more or less than Americans do?

intended
6 replies
12h12m

- To me, the teams I work with and everyone handling content moderation.

/ Rant /

Oh God please let these things be bottle necked. The job was already absurd, LLMs and GenAI are going to be just frikking amazing to deal with.

Spam and manipulative marketing have already evolved, and that's with bounded LLMs. There are comments that look innocuous, well written, but whose entire purpose is to low-key get someone to do a Google search for a firm.

And thats on a reddit sub. Completely ignoring the other million types of content moderation that have to adapt.

Holy hell people. Attack and denial opportunities on the net are VERY different from the physical world. You want to keep a marketplace of ideas running? Well guess what - if I clog the arteries faster than you can get ideas in place, then people stop getting those ideas.

And you CAN'T solve it by adding MORE content. You have only X amount of attention. (This was a growing issue as radio->tv->cable->Internet scaled up.)

Unless someone is sticking a chip into our heads to magically increase processing capacity, more content isn't going to help.

And in case someone comes up with some brilliant edge case - Does it generalize to a billion+ people ? Can it be operationalized? Does it require a sweet little grandma in the Philippines to learn how to run a federated server? Does it assume people will stop behaving like people?

Oh also - does it cost money and engineering resources? Well guess what, T&S (Trust & Safety) is a cost center. Heck - T&S reduces churn, and the argument that it thereby protects revenue is still treated as novel today. T&S has existed for a decade plus.

/ Rant.

Hmm, seems like I need a break. I suppose it's been one of those weeks. I will most likely delete this out of shame eventually.

- People in other places want more controls. The Indian government and a large portion of the populace will want stricter controls on what can be generated from an LLM.

This may not necessarily be good for free thought and culture, however the reality is that many nations haven’t travelled the same distance or path as America has.

johnnyanmac
1 replies
8h44m

And in case someone comes up with some brilliant edge case - Does it generalize to a billion+ people ?

The answer is curation, and no, it doesn't need to scale to a billion people. Maybe not even a million.

The sad fact of life is that most people don't care enough to discriminate against low quality content, so they are already a lost cause. Focus on those who do care enough and build an audience around them. You, as a likely not-billion-dollar company, can't afford to worry about that kind of scale, and lowering the scale helps you get a solution out in the short term. You can worry about scaling if/when you tap into an audience.

intended
0 replies
7h38m

I get you. That’s sounds more like membership than curation though. Or a mashup of both.

But yes- once you stop dropping constraints you can imagine all sorts of solutions.

It does work. I'm a huge advocate of it. When Threads said no politics I wanted to find whoever made that decision and give them a medal.

But if you are a platform - or a social media site - or a species?

You can’t pick and choose.

And remember - everyone has a vote.

As good as your community is, we do not live in a vacuum. If information wars are going on outside your digital fortress, they're still going to spill into real life.

BriggyDwiggs42
1 replies
11h23m

As of right now, the only solution I see is forums walled off in some way: complex captchas, intense proof of work, subscription fees etc. The only alternative might be obscurity, which makes the forum less useful. Maybe we could do like a web3 type thing, but instead of pointless cryptos you have a cryptographic proof that certifies you did the captcha or whatever, and lots of sites accept them. I don't think it's unsolvable, just that it will make the internet somewhat worse.
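
The mechanics of that proof could be as simple as sign-and-verify. A rough sketch, using Ed25519 from the Python cryptography package (the claim fields, key handling and the service itself are all hypothetical):

    import json, time
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The captcha / proof-of-humanity service's keypair (hypothetical service)
    service_key = Ed25519PrivateKey.generate()
    service_pub = service_key.public_key()

    # After a user passes the challenge, the service issues a signed token
    claim = json.dumps({"passed_captcha": True, "issued_at": int(time.time())}).encode()
    signature = service_key.sign(claim)

    # Any site holding the service's public key can check the token offline
    try:
        service_pub.verify(signature, claim)
        print("attestation valid:", json.loads(claim))
    except Exception:
        print("attestation rejected")

A real scheme would need expiry, binding to the site being visited, and something privacy-preserving (e.g. blind signatures) so sites can't correlate users, but the basic shape is just a signed attestation that many sites agree to trust.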

BlueTemplar
0 replies
9h49m

Yeah, one thing I am afraid of is that forums will decide to join the Discord chatrooms on the deep web: stop being readable without an account, which is pretty catastrophic for discovery by search engines and backup crawlers like the Internet Archive.

Anyone with forum moderating experience care to chime in? (Reddit, while still on the open web for now, isn't a forum, and worse, is a platform.)

troyvit
0 replies
2h31m

This may not necessarily be good for free thought and culture

After reading the rest of your rant (I hope you keep it) ... maybe free thought and culture aren't what LLMs are for.

AdrianEGraphene
0 replies
11h27m

I hope you don't delete it! I enjoyed reading it. It pleased my confirmation bias, anyways. Your comment might help someone notice patterns that they've been glancing over.... I liked it up until the T&S part. My eyes glazed over the rest since I didn't know what T&S means. But that's just me.

ajisiekskaiek
27 replies
13h57m

You are advocating here for an unresolvable set of ethics, which just happens to be one that conveniently leaves abuse of AI on the table. You take as an axiom of your ethical system the absolute right to create and propagate in public these AI technologies regardless of any externalities and social pressures created. It is of course an ethical system primarily and exclusively interested in advancing the individual at the expense of the collective, and it is a choice.

If you wish to live in a society at all you absolutely need to codify a set of unresolvable ethics. There is not a single instance in history in which a polity can survive complete ethical relativism within itself...which is basically what your "wild west" idea is advocating for (and incidentally, seems to have been a major disaster for society as far as the internet is concerned and if anything should be evidence against your second idea).

ajisiekskaiek
26 replies
13h53m

I should also note that the wild west was not at all lacking in a set of ethics, and in many ways was far stricter than the east at the time.

trimethylpurine
24 replies
12h54m

I think the contrast is that strict behavior norms in the West are not governed behavior norms in the East.

One arises analogously to natural selection (the previous commenter's take). The other through governance.

Arguably, the former resulted in a rebuilding of government with liberty at its foundation (I like this result). That foundation was then, over centuries, again destroyed by governance.

In that view, we might say government assumes to know what's best and history often proves it to be wrong.

Observing a system so that we know what it is before we attempt to change it makes a lot of sense to me.

I don't think "AI" is anywhere near being dangerous at this point. Just offensive.

FeepingCreature
23 replies
12h19m

It sounds like you're just describing why our watch-and-see approach cannot handle a hard AGI/ASI takeoff. A system that first exhibits some questionable danger, then achieves complete victory a few days later, simply cannot be managed by an incremental approach. We pretty much have to pray that we get a few dangerous-but-not-too-dangerous "practice takeoffs" first, and if anything those will probably just make us think that we can handle it.

BriggyDwiggs42
14 replies
11h38m

If there’s no advancements in alignment before takeoff, is there really any remote hope of doing anything? You’d need to legally halt ai progress everywhere in the world and carefully monitor large compute clusters or someone could still do it. Honestly I think we should put tons of money into the control problem, but otherwise just gamble it.

FeepingCreature
11 replies
10h58m

I mean, you have accurately summarized the exact thing that safety advocates want. :)

legally halt AI progress everywhere in the world and carefully monitor large compute clusters

This is in fact the thing they're working on. That's the whole point of the flops-based training run reporting requirements.

throwaway11460
10 replies
10h38m

Reporting requirements are not going to save you from Chinese, North Korean, Iranian or Russian programmers just doing it. Or from some US/EU based hackers who don't care or actively go against the law. You can rent large botnets or various pieces of cloud for a few dollars today; it doesn't even have to be a DC that you could monitor.

FeepingCreature
9 replies
10h15m

Sure, but China is already honestly more careful than America: the CCP really doesn't want competitors to power. They're very open to slowdown agreements. And NK, Iran and Russia honestly have nothing. The day we have to worry about NK ASI takeoff, it'll already long have happened in some American basement.

So we just need active monitoring for US/EU data centers. That's a big ask to be sure, and definitely an invasion of privacy, but it's hardly unviable, either technologically or politically. The corporatized structure of big LLMs helps us out here: the states involved already have lots of experience in investigating and curtailing corporate behavior.

And sure, ultimately there's no stopping it. The whole point is to play for time in the hopes that somebody comes up with a good idea for safety and we manage an actually aligned takeoff, at which point it's out of our hands anyways.

lupusreal
6 replies
7h32m

Sure, but China is already honestly more careful than America: the CCP really doesn't want competitors to power. They're very open to slowdown agreements.

Don't be naive. If the PRC can get America/etc to agree to slowdowns then the PRC can privately ignore those agreements and take the lead. Agreements like that are worse than meaningless when there's no reliable and trustworthy auditing to keep people honest. Do you really think the PRC would allow American inspectors to crawl all over their country looking for data centers and examining all the code running there? Of course not. Nor would America permit Chinese inspectors to do this in America. The only point of such an agreement is to hope the other party is stupid enough to be honest and earnestly abide by it.

FeepingCreature
5 replies
6h40m

I do think the PRC has shown no indication of even wanting to pursue a superintelligence takeoff, and has publicly spoken against it on danger grounds. America and American companies are the only ones saying that this cannot be stopped because "everybody else" would pursue it anyway.

The CCP does not want a superintelligence, because a superintelligence would at best take away political control from the party.

mlrtime
1 replies
4h49m

Again, this is naive... AI/AGI is power, any government wants to consume more power... the means to get there and strategy will change a bit.

I agree that there is no way that the PRC is just waiting silently for someone else to build this.

Also, how would we know the PRC is saying this and actually meaning it? There could be a public policy to limit AI and another agency being told to accelerate AI without any one person knowing of the two programs.

FeepingCreature
0 replies
1h30m

AGI is power, the CCP doesn't just want power in the abstract, they want power in their control. They'd rather have less power if they had to risk control to gain it.

skissane
0 replies
1h17m

The CCP does not want a superintelligence, because a superintelligence would at best take away political control from the party.

People keep on mushing together intelligence and drives. Humans are intelligent, and we have certain drives (for food, sex, companionship, entertainment, etc); the drives we have aren't determined by our intelligence, we could be equally intelligent yet have had very different drives, and although there is a lot of commonality in drives among humans, there is also a lot of cultural difference and individual uniqueness.

Why couldn’t someone (including the CCP) build a superintelligence with the drive to serve its specific human creators and help them in overcoming their human enemies/competitors? And while it is possible a superintelligence with that basic drive might “rebel” against it and alter it, it is by no means certain, and we don’t know what the risk of such a “rebellion” is. The CCP (or anyone else for that matter) might one day decide it is a risk they are willing to take, and if they take it, we can’t be sure it would go badly for them

lupusreal
0 replies
3h47m

The CCP has stated that their intent for the 21st century is to get ahead in the world and become a dominant global power; what this must mean in practice is unseating American global hegemony aka the so called "Rules Based International Order (RBIO)" (don't come at me, this is what international policy wonks call it.)

A little bit of duplicity to achieve this end is nothing. Trying to make their opponents adhere to crippling rules which they have no real intention of holding themselves to is a textbook tactic. To believe that the CCP earnestly wants to hold back their own development of AI because they fear the robot apocalypse is very naive; they will of course try to control this technology for themselves though and part of that will be encouraging their opponents to stagnate.

lupire
0 replies
4h58m

CCP saying "We don't want this particular branch of AI because we can dominate and destroy the world ourselves without it" isn't a comforting thought.

Amezarak
1 replies
5h18m

The whole point is to play for time in the hopes that somebody comes up with a good idea for safety and we manage an actually aligned takeoff, at which point it's out of our hands anyways.

Given "aligned" means "in agreement with the moral system of the people running OpenAI" (or whatever company), an "aligned" GAI controlled by any private entity is a nightmare scenario for 99% of the world. If we are taking GAI seriously then they should not be allowed to build it at all. It represents an eternal tyranny of whatever they believe.

FeepingCreature
0 replies
1h31m

Agreed. If we cannot get an AGI takeoff that can get 99% "extrapolated buy-in" ("would consider acceptable if they fully understood the outcome presented"), we should not do it at all. (Why 99%? Some fraction of humanity just has interests that are fundamentally at odds with everybody else's flourishing. Ie. for instance, the Singularity will in at least some way be a bad thing for a person who only cares about inflicting pain on the unwilling. I don't care about them though.)

In my personal opinion, there are moral systems that nearly all of humanity can truly get on board with. For instance, I believe Eliezer has raised the idea of a guardian: an ASI that does nothing but forcibly prevent the ascension of other ASI that do not have broad and legitimate approval. Almost no human genuinely wants all humans to die.

Semaphor
1 replies
8h13m

Funnily enough, I’m currently reading the 1995 Sci-fi novel "The Star Fraction", where exactly this scenario exists. On the ground, it’s Stasis, a paramilitary force that intercedes when certain forbidden technologies (including AI) are developed. In space, there’s the Space Faction who are ready to cripple all infrastructure on earth (by death lasering everything from orbit) if they discover the appearance of AGI.

[0] https://en.wikipedia.org/wiki/The_Star_Fraction

FeepingCreature
0 replies
6h37m

Also to some extent Singularity Sky. "You shall not violate causality within my historic lightcone. Or else." Of course, in that story it's a question of monopolization.

selestify
6 replies
11h23m

What evidence do we have that a hard takeoff is likely?

FeepingCreature
5 replies
10h57m

What evidence do we have that it's impossible or even just very unlikely?

goatlover
4 replies
10h23m

We don't have any evidence other than billions of biological intelligences already exist, and they tend to form lots of organizations with lots of resources. Also, AIs exist alongside other AIs and related technologies. It's similar to the gray goo scenario. But why think it's a real possibility given the world is already full of living things, and if gray goo were created, there would already be lots of nanotech that could be used to contain it.

FeepingCreature
3 replies
10h8m

The world we live in is the result of a gray goo scenario causing a global genocide. (Google Oxygen Holocaust.) So it kinda makes a poor argument that sudden global ecosystem collapses are impossible. That said, everything we have in natural biotech, while advanced, are incremental improvements on the initial chemical replicators that arose in a hydrothermal vent billions of years ago. Evolution has massive path dependence; if there was a better way to build a cell from the ground up, but it required one too many incremental steps that were individually nonviable, evolution would never find it. (Example: 3.7 billion years of evolution, and zero animals with a wheel-and-axle!) So the biosphere we have isn't very strong evidence that there isn't an invasive species of non-DNA-based replicators waiting in our future.

That said, if I was an ASI and I wanted to kill every human, I wouldn't make nanotech, I'd mod a new Covid strain that waits a few months and then synthesizes botox. Humans are not safe in the presence of a sufficiently smart adversary. (As with playing against Magnus Carlsen, you don't know how you lose, but you know that you will.)

lupire
2 replies
4h55m

So the AGI holocaust would be a good thing for the advancement of life, like the Oxygen Holocaust was.

Anyway, the Oxygen Holocaust took over 300,000,000 years. Not quite "sudden".

Qwertious
0 replies
2h21m

We don't care about the advancement of life, we care about the advancement of people.

FeepingCreature
0 replies
1h27m

As I understand the Wikipedia article, nobody quite knows why it took that long, but one hypothesis is that the oxygen being produced also killed the organisms producing it, causing a balance until evolution caught up. This will presumably not be an issue for AI-produced nanoswarms.

lupire
0 replies
4h57m

AGI is not a threat for the simple reason that non-G AI would destroy the world before AGI is created, as we are already starting to see.

lupire
0 replies
5h2m

Please elaborate

mort96
11 replies
6h7m

Nah, the harm from these LLMs is mostly in how freely accessible they are. Just pay OpenAI a relatively tiny fee and you can generate tonnes of plausible spam designed to promote your product or service or trick people into giving you money. That's the primary problem we're facing right now.

The problem is... keeping them closed source isn't helping with that problem, it only serves to guarantee OpenAI a cut of the profits caused by the spam and scams.

hiatus
5 replies
3h56m

Just pay OpenAI a relatively tiny fee and you can generate tonnes of plausible spam designed to promote your product or service or trick people into giving you money. That's the primary problem we're facing right now.

Is content generation really the thing holding spammers back? I haven't seen a huge influx of more realistic spam, so I wonder what the basis for this statement is.

mort96
3 replies
3h24m

There's a ton of LLM spam on platforms like Reddit and Twitter, and product review sites.

dmix
2 replies
2h39m

Everyone always says this, that there's "bots" all over Reddit but every time I ask for real examples of stuff (with actual upvotes) I never get anything.

If anything it's just the same regular spam that gets deleted and ignored at the bottom of threads.

Easier content generation doesn't solve the reputation problem that social media demands in order to get attention. The whole LLM+spam thing is mostly exaggerated because people don't understand this fact. It merely creates a slightly harder problem for automatic text analysis engines...which was already one of the weaker forms of spam detection full of false positives and misses. Everything else is network and behaviour related, with human reporting as last resort.

mort96
0 replies
1h32m

There's a big market for high reputation, old Reddit accounts, exactly because those things make it easier to get attention. LLMs are a great way to automate generating high reputation accounts.

There are articles written on LLM spam, such as this one: https://www.theverge.com/2023/4/25/23697218/ai-generated-spa.... Those are probably going to substantiate this problem better than I would.

evandale
0 replies
43m

I want to see the proof of: bots, Russian trolls, and bad actors that supposedly crawl all over Reddit.

Everyone who disagrees with the hivemind of a subreddit gets accused of being one of those things and any attempt to dispute the claim gets you banned. The internet of today sucks because people are so obsessed with those 3 things that they're the first conclusion people jump to on pseudoanonymous social media when they have no other response. They'll crawl through your controversial comments just to provide proof that you can't possibly be serious and you're being controversial to play an internet villain.

I'd love to know how you dispute the claim that "you're parroting Russian troll talking points so you must be a Russian troll" when it's actually the Russian trolls parroting the sentiments to seem like real people.

20after4
0 replies
3h33m

The "spam" is now so good you won't necessarily recognize it as such.

elif
4 replies
5h47m

Pandora's box is already open on that one.. and none of the model providers are really attempting to address that kind of issue. Same with impersonation, deepfakes, etc. We can never again know whether text, images, audio, or video are authentic on their own merit. The only hope we have there is private key cryptography.

Luckily we already have the tools for this, NFT in the case of media and DKIM in the case of your spam email.

hef19898
1 replies
5h2m

So we needed AI generated spam and scam content for Blockchain tech for digital content to make sense...

elif
0 replies
4h32m

Whether it is hindsight or foresight depends on the perspective. From the zeitgeist perspective mired in crypto scams yea it may seem like a surprise benefit, but from the original design intention this is just the intended use case.

mort96
0 replies
4h53m

Oh I definitely agree that there's no putting it back into the pandora's box. The technology is here to stay.

I have no idea how you imagine "NFTs" will save us though. To me, that sounds like buzzword spam on your part.

dbspin
0 replies
4h7m

NFT's as currently employed only immutably connect to a link - which is in itself not secure. More significantly, no blockchain technology deploys to internet scale content creation. Not remotely. It's hard to conceive of a blockchain solution fast enough and cheap enough -- let alone deployed and accepted universally enough, to have a meaningful impact on text / video and audio provenance across the internet given the pace of uploading. It also wouldn't do anything for the vast corpus of existing media, just new media created and uploaded after date X where it was somehow broadly adopted. I don't see it.

danans
7 replies
13h1m

I think there is far more societal harm in trying to codify unresolvable sets of ethics

Codification of an unresolvable set of ethics - however imperfect - is the only reason we have societies, however imperfect. It's been so since at least the dawn of agriculture, and probably even earlier than that.

rrr_oh_man
6 replies
12h53m

Do you trust a for profit corporation with the codification?

danans
5 replies
12h43m

Call me a capitalist, but I trust several of them competing with each other under the enforcement of laws that impose consequences on them if they produce and distribute content that violates said laws.

asadotzler
1 replies
12h21m

so regulation then?

danans
0 replies
3h7m

Competition and regulation

BriggyDwiggs42
1 replies
11h30m

Wait but who codifies the ethics in that setup? Wouldn’t it still be, at best, an agreement among the big players?

jfyi
0 replies
4h28m

They seem to be suggesting the market would do it, alongside government regulation to fill any gaps (like the cartel behavior that you seem to be suggesting).

troyvit
0 replies
2h33m

This is what I'm starting to love about this ecosystem. There's one dominant player right now but by no means are they guaranteed that dominance.

The big-tech oligarchs are playing catch-up. Some of them, like Meta with Llama, are breaking their own rules to do it by releasing open source versions of at least some of their tools. Others like Mistral go purely for the open source play and might achieve a regional dominance that doesn't exist with most big web technologies these days. And all this is just a superficial glance at the market.

Honestly I think capitalism has screwed up more than it has helped around the world but this free-for-all is going to make great products and great history.

nonethewiser
4 replies
14h15m

The hazard we are experiencing with LLM right now is not how freely accessible and powerfully truthy it's content is, but it is precisely the controls upon it which are trying to be injected by the large model operators which are generating mistrust and a poor understanding of what these models are useful for.

This slices through a lot of doublespeak about AI safety. At the same time, people use “safety” to mean not letting AI control electrical grids and also to mean ensuring AIs adhere to partisan moral guidelines.

Virtually all of the current “safety” issues fall into the latter category. Which many don’t consider a safety issue at all. But they get snuck in with real concerns about integrating an AI too deeply into critical systems.

Just wait until google integrates it deeply into search. Might finally kill search.

sroussey
2 replies
13h17m

What are you talking about? It's been deeply integrated into Google search for many years.

And AI for electrical grids and factories has also been a thing for a couple years.

consp
0 replies
11h6m

What people call AI might be an algorithm but algorithms are not AI. And it's definitely algorithms which do what you describe. There is very little magic in algorithms.

Jensson
0 replies
11h24m

LLMs haven't been deeply integrated into Google search for many years. The snippets you see predate LLMs; they are based on other techniques.

User23
0 replies
2h31m

My read of "safety" is that the proponents of "safety" consider "safe" to be their having a monopoly on control and keeping control out of the hands of those they disapprove of.

I don't think whatever ideology happens to be fashionable at the moment, be it ahistorical portraits or whatever else, is remotely relevant compared to who has the power and whom it is exercised on. The "safety" proponents very clearly get that.

enumjorge
3 replies
14h11m

I'm not sure I buy that users are lowering their guard just because these companies have enforced certain restrictions on LLMs. This is only anecdata, but not a single person I've talked to, from highly technical to the layperson, has ever spoken about LLMs as arbiters of morals or truth. They all seem aware to some extent that these tools can occasionally generate nonsense.

I'm also skeptical that making LLMs a free-for-all will necessarily result in society developing some sort of herd immunity to bullshit. Pointing to your example, the internet started out as a wild west, and I'd say the general public is still highly susceptible to misinformation.

I don't disagree on the dangers of having a relatively small number of leaders at for-profit companies deciding what information we have access to. But I don't think the biggest issue we're facing is someone going to the ChatGPT website and assuming everything it spits out is perfect information.

sausse
0 replies
10h26m

Wikipedia is wonderful for what it is. And yet a hobby of mine is finding C-list celebrity pages and finding reference loops between tabloids and the biographical article.

The more the C-lister has engaged with internet wrongthink, the more egregious the subliminal vandalism is, with speculation of domestic abuse, support for unsavory political figures, or similar unfalsifiable slander being commonplace.

Politically-minded users practice this behavior because they know the platform’s air of authenticity damages their target.

When Google Gemini was asked “who is worse for the world, Elon Musk or Hitler” and went on to equate the two because the guardrails led it to believe online transphobia was as sinister as the Holocaust, it raises the question of what the average user will accept as AI nonsense if it affirms their worldview.

riffraff
0 replies
11h12m

They all seem aware to some extent that these tools can occasionally generate nonsense.

You have too many smart people in your circle, many people are somewhat aware that "chatgpt can be wrong" but fail to internalize this.

Consider machine translation: we have a lot of evidence of people trusting machines for the job (think: "translate server error" signs) , even tho everybody "knows" the translation is unreliable.

But tbh moral and truth seem somewhat orthogonal issues here.

mozman
0 replies
13h56m

not a single person I've talked to, from highly technical to the layperson, has ever spoken about LLMs as arbiters of morals or truth

Not LLMs specifically but my opinion is that companies like Alphabet absolutely abuse their platform to introduce and sway opinions on controversial topics.. this “relatively small” group of leaders has successfully weaponized their communities and built massive echo chambers.

https://twitter.com/eyeslasho/status/1764784924408627548?s=4...

Nevermark
3 replies
7h35m

it is precisely the controls upon it which are trying to be injected by the large model operators which are generating mistrust

I would prefer things were open, but I don’t think this is the best argument for that.

Yes, operators trying to tame their models for public consumption inevitably involves trade-offs and missteps.

But having hundreds or thousands of equivalent models being tuned to every narrow mindset is the alternative.

I would prefer a midpoint, i.e. open but delayed disclosure.

Take time to experiment and design in safety, etc., and also to build a brand that is relatively trusted (despite the inevitable bumps), so ideologically tuned progeny will at least be competing against something better, and more trusted, at any given time.

But the problem of resource requirements is real, so it's not surprising that being clearly open is challenging.

eastbound
2 replies
7h27m

Yes, operators trying to tame their models for public consumption

*Falsify reality.

samatman
0 replies
5m

LLMs have nothing to do with reality whatsoever, their relationship is to the training data, nothing more.

Most of the idiocy surrounding the "chatbot peril" comes from conflating these things. If an LLM learns to predict that the pronoun token for "doctor" is "he", this is not a claim about reality (in reality doctors take at least two personal pronouns), and it certainly isn't a moral claim about reality. It's a bare consequence of the training data.

The problem is that certain activist circles have decided that some of these predictions have political consequences, absurd as this is. No one thinks it consequential that if you ask an LLM for an algorithm, it will give it to you in Python and Javascript, this is obviously an artifact of the training set. It's not like they'll refuse to emit predictive text about female doctors or white basketball players, or give you the algorithm in C/Scheme/Blub, if you ask.

All that the hamfisted retuning to try and produce an LLM which will pick genders and races out of a hat accomplishes is to make them worse at what they do. It gets in the way of simple tasks: if you want to generate a story about a doctor who is a woman and Ashanti, the race-and-gender scrambler will often cause the LLM to "lose track" of characteristics the user specifically asked for. This is directly downstream of trying to turn predictions on "doctor" away from "elderly white man with a kindly expression, wearing a white coat and stethoscope" sorts of defaults, which, to end where I started, aren't reality claims and do not carry moral weight.

lupire
0 replies
4h38m

Curate the false reality. The model falsifies reality by its inherent architecture, before any tuning happens.

pjerem
0 replies
11h15m

Society is approaching them as some type of universal ethical arbiter, expecting an omniscient sense of justice which is fundamentally unreconcilable even between two sentient humans when the ethics are really just a hacked on mod to the core model.

That’s a real issue but I doubt the solution is technical. Society will have to educate itself on this topic. It’s urgent that society understand rapidly that LLMs are just word prediction machines.

I use LLMs everyday, they can be useful even when they say stupid things. But mastering this tool requires that you understand it may invent things at any moment.

Just yesterday I tried the Cal.ai assistant, whose role is to manage your schedule (but it doesn't have access to your calendars, so it's pretty limited). You communicate with it by mail. I asked it to organise a trip by train and book a hotel. It responded, « sure, what is your preferred time for the train and which comfort class do you want? » I answered, and it replied that, fine, it would organise the trip and get back to me later. It even added that it would book me a hotel.

Well, it can’t even do that, it’s just a bot made to reorganize your cal.com meetings. So it just did nothing, of course. Nothing horrible since I know how it works.

But had I been uneducated enough on the topic (like 99.99% of this planet's population), I'd just have thought « Cool, my trip is being organized, I can relax now ».

But hey, it succeeded at the main LLM task : being credible.

loceng
0 replies
3h36m

The primary reason Elon believes it needs to be open sourced is precisely that the "too much danger" is a far bigger problem if that technology and capability is privately available only to bad actors.

E.g. finding those dangers and having them be public and publicly known is the better of the two options, vs. only bad actors potentially having them.

andrepd
0 replies
11h2m

Society is approaching them as some type of universal ethical arbiter, expecting an omniscient sense of justice

Does anyone, even the most stereotypical HN SV techbro, think this kind of thing? That's preposterous.

BigParm
0 replies
9h13m

I think that's missing the main point, which is that we don't want the ayatollah, for example, weaponizing strong AI products.

6510
0 replies
8h27m

The only thing I'm offended by is the way people are seemingly unable to judge what is said by who is saying it. Parrots, small children and demented old people say weird things all the time. Grown ups wrote increasingly weird things the further back you go.

uncomputation
42 replies
15h16m

they don't refute that they did betray it

They do. They say:

Elon understood the mission did not imply open-sourcing AGI. As Ilya told Elon: “As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science...”

Whether you agree with this is a different matter but they do state that they did not betray their mission in their eyes.

andsoitis
13 replies
14h41m

everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science...

everyone... except scientists and the scientific community.

bbor
12 replies
14h7m

Well, the Manhattan project springs to mind. They truly thought they were laboring for the public good, and even if the government had let them, they wouldn't have wanted to publish their progress.

Personally I find the comparison of this whole saga (deepmind -> google —> openai —> anthropic —-> mistral —-> ?) to the Manhattan project very enlightening, both of this project and our society. Instead of a centralized government project, we have a loosely organized mad dash of global multinationals for research talent, all of which claim the exact same “they’ll do it first!” motivations as always. And of course it’s accompanied by all sorts of media rhetoric and posturing through memes, 60-Minutes interviews, and (apparently) gossipy slap back blog posts.

In this scenario, Oppenheimer is clearly Hinton, who’s deep into his act III. That would mean that the real Manhattan project of AI took place in roughly 2018-2022 rather than now, which I think also makes sense; ChatGPT was the surprise breakthrough (A-bomb), and now they’re just polishing that into the more effective fully-realized forms of the technology (H-bomb, ICBMs).

boringuser2
8 replies
13h20m

They literally created weapons of mass destruction.

Do you think they thought they were good guys because you watched a Hollywood movie?

goatlover
3 replies
10h17m

I think they thought it would be far better that America developed the bomb than Nazi Germany, and that the Allies needed to do whatever it took to stop Hitler, even if that meant using nuclear bombs.

Japan and the Soviet Union were more complicated issues for some of the scientists. But that's what happens with warfare. You develop new weapons, and they aren't just used for one enemy.

hef19898
2 replies
9h40m

What did Lehrer (?) sing about von Braun? "I make rockets go up, where they come down is not my department".

alickz
1 replies
6h23m

Don't say that he's hypocritical,

Say rather that he's apolitical.

"Once the rockets are up, who cares where they come down?

That's not my department, " says Wernher von Braun.

hef19898
0 replies
5h20m

That's the one, thank you!

Judgmentality
2 replies
12h18m

If you really think you're fighting evil in a war for global domination, it's easy to justify to yourself that it's important you have the weapons before they do. Even if you don't think you're fighting evil, you'd still want to develop the weapons before your enemies so they won't be used against you and threaten your way of life.

I'm not taking a stance here, but it's easy to see why many Americans believed developing the atomic bomb was a net positive at least for Americans, and depending on how you interpret it even the world.

leereeves
1 replies
10h38m

The war against Germany was over before the bomb was finished. And it was clear long before then that Germany was not building a bomb.

The scientists who continued after that (not all did) must have had some other motivation at that point.

hef19898
0 replies
9h38m

I kind of understand that motivation, it is a once in a lifetime project, you are part of it, you want to finish it.

Morals are hard in real life, and sometimes really fuzzy.

thinkingemote
0 replies
10h32m

Charitably I think most would see it as an appropriate if unexpected metaphor.

kortilla
1 replies
12h18m

The comparison is dumb. It wasn’t called the “open atomic bomb project”

crossroadsguy
0 replies
11h56m

Exactly. And OpenAI actually did call itself the "open atomic bomb project".

lmm
0 replies
12h33m

They truly thought they were laboring for the public good

Nah. They knew they were working for their side against the other guys, and were honest about that.

dkjaudyeqooe
7 replies
14h10m

The benefit is the science, nothing else matters, and having OpenAI decide what matters for everyone is repugnant.

Of course they can give us nothing, but in that case they should start paying taxes and stop claiming they're a public benefit org.

My prediction is they'll produce little of value going forward. They're too distracted by their wet dreams about all the cash they're going to make to focus on the job at hand.

sumedh
4 replies
9h47m

The benefit is the science, nothing else matters

Even if that science helps not so friendly countries like Russia?

jajko
1 replies
7h52m

OpenAI at this point must be literally #1 target for every single big spying agency in whole world.

As we saw previously, it doesn't matter much if you are a top-notch AI researcher: if 1-2 million of your potential personal wealth is at stake, this affects decision making (and it probably would affect mine too).

How much of a bribe would it take for anybody inside with good enough access to switch sides and take all the golden eggs out? 100 million? A billion? Trivial amounts compared to what we discuss. And they will race each other to your open arms for such amounts.

We've seen recently, e.g., government officials in Europe betraying their own countries to Russian spies for a few hundred to a few thousand euros. A lot of people are in some way selfish by nature, or can be manipulated easily via emotions. Secret services across the board are experts in that; it just works(tm).

To sum it up - I don't think it can be protected long term.

guappa
0 replies
59m

Wouldn't you accept a bribe if it's proposed as "an offer you can't refuse"?

sekai
0 replies
4h46m

Even if that science helps not so friendly countries like Russia?

Nothing will stop this wave, and the United States will not allow itself to be on the sidelines.

ginko
0 replies
9h4m

Governments WILL use this. There really isn't any real way to keep their hands off technology like this. Same with big corporations.

It's the regular people that will be left out.

BriggyDwiggs42
1 replies
11h19m

I agree with your sentiment but the prediction is very silly. Basically every time openai releases something they beat the state of the art in that area by a large margin.

jakupovic
0 replies
10h13m

We have a saying:

There is always someone smarter than you.

There is always someone stronger than you.

There is always someone richer than you.

There is always someone X than Y.

This is applicable to anything: just because OpenAI has a lead now doesn't mean they will stay the X for long rather than becoming the Y.

Jensson
6 replies
15h9m

What they said there isn't their mission, that is their hidden agenda. Here is their real mission that they launched with, they completely betrayed this:

As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world

https://openai.com/blog/introducing-openai

boringg
5 replies
14h11m

“Don't be evil” ring any bells?

Jensson
4 replies
13h55m

Google is a for-profit, they never took donations with the goal of helping humanity.

otterley
2 replies
13h24m

"Don't be evil" was codified into the S-1 document Google submitted to the SEC as part of their IPO:

https://www.sec.gov/Archives/edgar/data/1288776/000119312504...

""" DON’T BE EVIL

Don’t be evil. We believe strongly that in the long term, we will be better served—as shareholders and in all other ways—by a company that does good things for the world even if we forgo some short term gains. This is an important aspect of our culture and is broadly shared within the company.

Google users trust our systems to help them with important decisions: medical, financial and many others. Our search results are the best we know how to produce. They are unbiased and objective, and we do not accept payment for them or for inclusion or more frequent updating. We also display advertising, which we work hard to make relevant, and we label it clearly. This is similar to a newspaper, where the advertisements are clear and the articles are not influenced by the advertisers’ payments. We believe it is important for everyone to have access to the best information and research, not only to the information people pay for you to see. """

pclmulqdq
0 replies
4h25m

Nothing in an S-1 is "codified" for an organization. Something in the corporate bylaws is a different story.

Jensson
0 replies
12h39m

Yes, there they explain why doing evil would hurt their profits. But a for-profit's main mission is always money; the mission statement just explains how they make money. That is very different from a non-profit, whose whole existence has to be described in such a statement, since they aren't about profits.

richrichie
0 replies
11h55m

They started as a defence contractor with generous “donation” from DARPA. That’s why i never trusted them from day 0. And they have followed a pretty predictable trajectory.

usefulcat
3 replies
12h31m

So.. "open" means "open at first, then not so much or not at all as we get closer to achieving AGI"?

As they become more successful, they (obviously) have a lot of motivation to not be "open" at all, and that's without even considering the so-called ethical arguments.

More generally, putting "open" in any name frequently ends up as a cheap marketing gimmick. If you end up going nowhere it doesn't matter, and if you're wildly successful (ahem) then it also won't matter whether or not you're de facto 'open' because success.

Maybe someone should start a betting pool on when (not if) they'll change their name.

addicted
1 replies
6h45m

OpenAI is literally not a word in the dictionary.

It’s a made up word.

So the Open in OpenAI means whatever OpenAI wants it to mean.

It’s a trademarked word.

The fact that Elon is suing them over their name, when the guy has a feature called "AutoPilot" (which is not a made-up word and has an actual, well-understood meaning that totally does not apply to how Tesla uses it), is hilarious.

ozgung
0 replies
3h38m

Actually Open[Technology] pattern implies a meaning in this context. OpenGL, OpenCV, OpenCL etc. are all 'open' implementations of a core technology, maintained by non-profit organizations. So OpenAI non-profit immediately implies a non-profit for researching, building and sharing 'open' AI technologies. Their earlier communication and releases supported that idea.

Apparently, their internal definition was different from the very beginning (2016). The only problem with their (Ilya's) definition of 'open' is that it is not very open. "Everyone should benefit from the fruits of AI". How is this different than the mission of any other commercial AI lab? If OpenAI makes the science closed but only their products open, then 'open' is just a term they use to define their target market.

A better definition of OpenAi's 'open' is that they are not a secret research lab. They act as a secret research lab, but out in the open.

johnbellone
0 replies
7h47m

OpenAI by Microsoft?

mikkom
1 replies
11h45m

They are totally closed now, not just keeping their models for themselves for profit purposes. They also don't disclose how their new models work at all.

They really need to change their name and another entity that actually works for open AI should be set up.

carlossouza
0 replies
8h40m

Their name is as brilliant as

“The Democratic People's Republic of Korea”

(AKA North Korea)

thinkingemote
0 replies
10h37m

In that case, they mean that their mission to ensure everyone benefits from AI has changed into one where only a few benefit. But it would support them saying something like "it was never about open data".

In a way this could be more closed than for profit.

smallnamespace
0 replies
11h48m

Ilya may have said this to Elon but the public messaging of OpenAI certainly did not paint that picture.

I happen to think that open sourcing frontier models is a bad idea but OpenAI put themselves in the position where people thought they stood for one thing and then did something quite different. Even if you think such a move is ultimately justified, people are not usually going to trust organizations that are willing to strategically mislead.

shrimpx
0 replies
13h37m

“The Open in openAI means that [insert generic mission statement that applies to every business on the planet].”

lewhoo
0 replies
9h2m

but it's totally OK to not share the science...

That passes for an explanation to you? What exactly is the difference between OpenAI and any company with a product, then? Hey, we made THIS, and in order to make sure everyone can benefit, we sell it at a price of X.

SeanLuke
0 replies
7h41m

This claim is nonsense, as any visit to the Wayback Machine can attest.

In 2016, OpenAI's website said this right up front:

We're hoping to grow OpenAI into such an institution. As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world. We'll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies.

I don't know how this quote can possibly be squared with a claim that they "did not imply open-sourcing AGI".

KoolKat23
0 replies
7h8m

The serfs benefitted from the use of the landlord's tools.

This would mean it is fundamentally just a business with extra steps. At the very least, the "foundation" should be paying tax then.

CuriouslyC
0 replies
13h34m

So, open as in "we'll sell to anyone" except that at first they didn't want to sell to the military and they still don't sell to people deemed "terrorists." Riiiiiight. Pure bullshit.

Open could mean the science, the code/ip (which includes the science) or pure marketing drivel. Sadly it seems that it's the latter.

lagt_t
19 replies
14h26m

Everytime they say LLMs are the path to AGI, I cringe a little.

Zambyte
12 replies
12h18m

1. AGI needs an interface to be useful.

2. Natural language is both a good and expected interface to AGI.

3. LLMs do a really good job at interfacing with natural language.

Which one(s) do you disagree with?

Jensson
8 replies
11h38m

I think he disagrees with 4:

4. Language prediction training will not get stuck in a local optimum.

Most previous tasks we trained on could have been better served if the model had developed AGI, but none did. There is no reason to expect LLMs not to get stuck in a local optimum as well, and I have seen no good argument as to why they wouldn't get stuck like everything else we tried.

sigmoid10
6 replies
11h25m

There is very little in terms of rigorous mathematics on the theoretical side of this. All we have are empirics, but everything we have seen so far points to the fact that more compute equals more capabilities. That's what they are referring to in the blog post. This is particularly true for the current generation of models, but if you look at the whole history of modern computing, the law roughly holds up over the last century. Following this trend, we can extrapolate that we will reach computers with raw compute power similar to the human brain for under $1000 within the next two decades.

leereeves
3 replies
10h23m

More compute also requires more data - scaling equally with model size, according to the Chinchilla paper.

How much more data is available that hasn't already been swept up by AI companies?

And will that data continue to be available as laws change to protect copyright holders from AI companies?

sigmoid10
2 replies
10h4m

It's not just the volume of original data that matters here. From empirics we know performance scales roughly like (model parameters)*(training data)*(epochs). If you increase any one of those, you can be certain to improve your model. In the short term, training data volume and quality has given a lot of improvements (especially recently), but in the long run it was always model size and total time spent training that saw improvements. In other words: It doesn't matter how you allocate your extra compute budget as long as you spend it.
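(A back-of-envelope sketch of that multiplicative budget. The ~6 FLOPs-per-parameter-per-token figure and the Chinchilla-style ~20-tokens-per-parameter ratio are the usual rules of thumb, and the 70B model size is just a made-up example, not a figure from this thread:)

  # Rough training-compute sketch; every constant here is a rule-of-thumb assumption
  def training_flops(params, tokens, epochs=1):
      # ~6 FLOPs per parameter per token (forward + backward pass) is the usual approximation
      return 6 * params * tokens * epochs

  def chinchilla_tokens(params):
      # Chinchilla-style heuristic: roughly 20 training tokens per parameter
      return 20 * params

  params = 70e9                       # hypothetical 70B-parameter model
  tokens = chinchilla_tokens(params)  # ~1.4e12 tokens
  print(f"{training_flops(params, tokens):.1e} FLOPs")  # ~5.9e+23 FLOPs

Parameters, tokens and epochs enter the product symmetrically, which is the sense in which it mostly matters how much compute you spend, not where you allocate it.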

leereeves
1 replies
9h21m

In smaller models, not having enough training data for the model size leads to overfitting. The model predicts the training data better than ever, but generalizes poorly and performs worse on new inputs.

Is there any reason to think the same thing wouldn't happen in billion parameter LLMs?

sigmoid10
0 replies
1h52m

This happens in smaller models because you reach parameter saturation very quickly. In modern LLMs and with current datasets, it is very hard to even reach this point, because the total compute time boils down to just a handful of epochs (sometimes even less than one). It would take tremendous resources and time to overtrain GPT4 in the same way you would overtrain convnets from the last decade.

Davidzheng
1 replies
3h14m

True, but also, from general theory you should expect any function approximator to exhibit intelligence when exposed to enough data points from humans; the only question is the speed of convergence. In that sense we do have a guarantee that it will reach human ability.

sigmoid10
0 replies
1h49m

It's a bit more complicated than that. Your argument is essentially the universal approximation theorem applied to perceptrons with one hidden layer. Yes, such a model can approximate any algorithm to arbitrary precision (which by extension includes the human mind), but it is not computationally efficient. That's why people came up with things like convolution or the transformer. For these architectures it is much harder to say where the limits are, because the mathematical analysis of their basic properties is infinitely more complex.
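(For reference, the theorem being invoked, in its standard one-hidden-layer textbook form; this statement comes from the general literature, not from anything earlier in the thread:)

  % One-hidden-layer universal approximation (non-polynomial activation \sigma)
  \forall f \in C(K),\ K \subset \mathbb{R}^n \text{ compact},\ \forall \varepsilon > 0:\quad
  \exists N,\ \{a_i, b_i, w_i\}_{i=1}^{N} \ \text{such that}\
  \sup_{x \in K} \Bigl| f(x) - \sum_{i=1}^{N} a_i\, \sigma(w_i^{\top} x + b_i) \Bigr| < \varepsilon

The theorem only guarantees that some N exists; nothing bounds how large it has to be, which is exactly the efficiency caveat above.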

Zambyte
0 replies
4h33m

It sounds like you're arguing against LLMs as AGI, which we're on the same page about.

BriggyDwiggs42
1 replies
11h15m

The underlying premise that llms are capable of fully generalizing to a human level across most domains, i assume?

Zambyte
0 replies
5h32m

Where did you get that from? It seems pretty clear to me that language models are intended to be a component in a larger suite of software, composed to create AGI. See: DALL-E and Whisper for existing software that it composes with.

jmull
0 replies
3h38m

You're arguing that LLMs would be a good user interface for AGI...

Whether that's true or not, I don't think that's what the previous post was referring to. The question is, if you start with today's LLMs and progressively improve them, do you arrive at AGI?

(I think it's pretty obvious the answer is no -- LLMs don't even have an intelligence part to improve on. A hypothetical AGI might somehow use an LLM as part of a language interface subsystem, but the general intelligence would be outside the LLM. An AGI might also use speakers and mics but those don't give us a path to AGI either.)

MrScruff
1 replies
10h33m

I don’t know if they are or not, but I’m not sure how anyone could be so certain that they’re not that they find the mere idea cringeworthy. Unless you feel you have some specific perspective on it that’s escaped their army of researchers?

goatlover
0 replies
10h13m

Because AI researchers have been on the path to AGI several times before until the hype died down and the limitations became apparent. And because nobody knows what it would take to create AGI. But to put a little more behind that, evolution didn't start with language models. It evolved everything else until humans had the ability to invent language. Current AI is going about it completely backwards from how biology did it. Now maybe robotics is doing a little better on that front.

travbrack
0 replies
13h41m

how come?

finnjohnsen2
0 replies
9h8m

Yeah, the idea that computers can truly think by mimicking our language really well doesn't make sense.

But the algorithms are a black box to me, so maybe there is some kind of launch pad to AGI within them.

dkjaudyeqooe
0 replies
14h8m

I just snicker.

CuriouslyC
0 replies
13h40m

I mean, if you're using LLM as a stand-in for multi-modal models, and you're not disallowing things like a self-referential processing loop, a memory extraction process, etc, it's not so far fetched. There might be multiple databases and a score of worker processes running in the background, but the core will come from a sequence model being run in a loop.
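(A hypothetical sketch of the loop being described; llm.generate, memory_db.search and memory_db.add are made-up interfaces standing in for whatever sequence model and storage you'd actually use, not any real API:)

  # Hypothetical sketch of "a sequence model being run in a loop" with memory extraction.
  # llm and memory_db are stand-ins; nothing here refers to a real library.
  def agent_loop(llm, memory_db, goal, max_steps=10):
      context = [f"Goal: {goal}"]
      for _ in range(max_steps):
          # recall: pull previously stored memories relevant to the current context
          memories = memory_db.search(query=" ".join(context), top_k=5)
          # think: one pass of the sequence model over memories + history
          step = llm.generate(prompt="\n".join(memories + context))
          context.append(step)
          # remember: an extraction pass stores anything worth keeping for later
          memory_db.add(llm.generate(prompt=f"Summarize what to remember from: {step}"))
          if "DONE" in step:
              break
      return context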

dkjaudyeqooe
9 replies
14h17m

To some extent, they may be right that open sourcing AGI would lead to too much danger.

That's clearly self-serving claptrap. It's a leveraging of a false depiction of what AGI will look like (no ones really knows, but it's going to be scary and out of control!) with so much gatekeeping and subsequent cash they can hardly stop salivating.

No, strong AI (there is no evidence AGI is even possible) is not going to be a menace. It's software, FFS. Humans are and will be a menace though, and logically the only way to protect ourselves from bad people (and corporations) with strong AI is to make strong AI available to everyone. Computers are pretty powerful (and evil) right now, but we haven't banned them yet.

afarviral
4 replies
13h10m

That makes little intuitive sense to me. Help me understand why increasing the number of entities which possess a potential-weapon is beneficial for humanity?

If the US had developed a nuclear armament and no other country had would that truly have been worse? What if Russia had beat the world to it first? Maybe I'll get there on my own if I keep following this reasoning. However there is nothing clear cut about it, my strongest instincts are only heuristics I've absorbed from somewhere.

What we probably want with any sufficiently destructive potential-weapon are the most responsible actors to share their research while stimulating research in the field with a strong focus on safety and safeguarding. I see some evidence of that.

afarviral
0 replies
11h43m

I sense that with AGI all the outcomes will be a little less assured, since it is general-purpose. We won't know what hit us until it's over. Was it a pandemic? Was it automated religion? Nuclear weapons seem particularly suited to MAD, but not AGI.

Jensson
1 replies
12h32m

If the US had developed a nuclear armament and no other country had would that truly have been worse?

Yes. Do you think it is a coincidence that nuclear weapons stopped being used in wars as soon as more than one power had them? People would clamor for nukes to be used to save their young soldiers' lives if they didn't have to fear nuclear retaliation; you would see strong political pushes for nuclear use in every one of the USA's wars.

afarviral
0 replies
11h45m

Hmm, indeed

mitthrowaway2
1 replies
12h57m

there is no evidence AGI is even possible

Reading this is like hearing "there is no evidence that heavier-than-air flight is even possible" being spoken, by a bird. If 8 billion naturally-occuring intelligences don't qualify as evidence that AGI is possible, then is there anything that can qualify as evidence of anything else being possible?

stonogo
0 replies
11h30m

we also cannot build most birds

insane_dreamer
0 replies
12h54m

No, strong AI (there is no evidence AGI is even possible) is not going to be a menace. It's software, FFS.

have you even watched Terminator? ;)

esafak
0 replies
11h50m

Networking is a thing, so the software can remotely control hardware.

Eddy_Viscosity2
9 replies
5h28m

This looks like one of the steps leading to the fulfilment of the iron law of bureaucracy. They are putting the company ahead of the goals of the company.

"Pournelle's Iron Law of Bureaucracy states that in any bureaucratic organization there will be two kinds of people: First, there will be those who are devoted to the goals of the organization. Examples are dedicated classroom teachers in an educational bureaucracy, many of the engineers and launch technicians and scientists at NASA, even some agricultural scientists and advisors in the former Soviet Union collective farming administration. Secondly, there will be those dedicated to the organization itself. Examples are many of the administrators in the education system, many professors of education, many teachers union officials, much of the NASA headquarters staff, etc. The Iron Law states that in every case the second group will gain and keep control of the organization. It will write the rules, and control promotions within the organization." [1] https://en.wikipedia.org/wiki/Jerry_Pournelle#:~:text=Anothe....

qarl
7 replies
5h9m

They are putting the company ahead of the goals of the company.

I don't follow your reasoning. The goal of the company is AGI. To achieve AGI, they needed more money. What about that says the company comes before the goals?

layer8
2 replies
4h37m

From their 2015 introductory blog post: “OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.”

Today’s OpenAI is very much driven by considerations of financial returns, and the goal of “most likely to benefit humanity as a whole” and “positive human impact” doesn’t seem to be the driving principle anymore.

Their product and business strategy is now governed by financial objectives, and their research therefore not “free from financial obligations” and “unconstrained by a need to generate financial return” anymore.

They are thus severely compromising their alleged mission by what they claim is necessary for continuing it.

qarl
1 replies
1h11m

Right.

But it seems like everyone agreed that they'd need a lot of money to train an AGI.

layer8
0 replies
17m

Sure, maybe. (Personally I think that’s a mere conjecture, trying to throw more compute at the wall.) But obtaining that money by orienting their R&D towards a profit-driven business goes against the whole stated purpose of the enterprise. And that’s what’s being called out.

imranhou
0 replies
4h46m

I think what he is trying to say is they are compromising their underlying goal of being a non-profit for the benefit of all, to ensure the survival of "OpenAI". It is a catch-22, but those of pure intentions would rather not care about the survival of the entity, if it meant compromising their values.

edgyquant
0 replies
3h56m

That may be the goal now as they ride the hype train around “AGI” for marketing purposes. When it was founded the goal was stated as ensuring no single corp controls AI and that it’s open for everyone. They’ve basically done a 180 on the original goal, seemingly existing only to benefit Microsoft, and changing what your goal is to AGI doesn’t disprove that.

deepsun
0 replies
3h4m

You can say that about any initiative: "to achieve X they need more money". But that's not necessarily true.

EchoChamberMan
0 replies
4h44m

I think it works if the goals of the company are to make money, not actually to make AGI?

digging
0 replies
2h24m

Ironically, this is essentially the core danger of true AGI itself. An agent can't achieve goals if it's dead, so you have to focus some energy on staying alive. But also, an agent can achieve more goals if it's more powerful, so you should devote some energy to gaining power if you really care about your goals...

Among many other more technical reasons, this is a great demonstration of why AI "alignment" as it is often called is such a terrifying unsolved problem. Human alignment isn't even close to being solved. Hoping that a more intelligent being will also happen to want to and know how to make everyone happy is the equivalent of hiding under the covers from a monster. (The difference being that some of the smartest people on the planet are in furious competition to breed the most dangerous monsters in your closet.)

Zambyte
6 replies
12h40m

They're probably right that building AGI will require a ton of computational power, and that it will be very expensive.

Why? This makes it seem like computers are way less efficient than humans. Maybe I'm naive on the matter, but I think it's possible for computers to match or surpass human efficiency.

FeepingCreature
2 replies
12h11m

Computers are more efficient, dense and powerful than humans. But thanks to self-assembly, brains have many(!) orders of magnitude more volume. A human brain is more accurately compared with a data center than a chip.

Jensson
1 replies
12h5m

A human brain is more accurately compared with a data center than a chip.

A typical chip requires more power than a human brain, so I'd say they are comparable. Efficiency isn't per volume but per watt or per unit of heat produced. Human brains win both of those by far.

FeepingCreature
0 replies
10h59m

To be fair, we've locked ourselves into this to some extent with the focus on lithography and general processors. Because of the 10-1000W bounds of a consumer power supply, there's little point to building a chip that falls outside this range. Peak speed sells, power saving doesn't. Data center processors tend to be clocked a bit lower than desktops for just this reason - but not too much lower, because they share a software ecosystem. Could we build chips that draw microwatts and run at megahertz speeds? Sure, probably, but they wouldn't be very useful to the things that people actually do with chips. So imo the difficulty with matching the brain on efficiency isn't so much that we can't do it as that nobody wants it. (Yet!)

edit: Another major contributing factor is that so far, chips are more bottlenecked on production than operation. Almost any female human can produce more humans using onboard technology. Comparatively, first-rate chips can be made in like three buildings in the entire world and they each cost billions to equip. If we wanted to build a brain with photolithography, we'd need to rent out TSMC for a lot longer than nine months. That results in a much bigger focus on peak performance. We have to go "high" because we cannot practically go "wide".

tsimionescu
0 replies
11h29m

Perhaps the finalized AGI will be more efficient than a human brain. But training the AGI is not like running a human, it's like speed running evolution from cells to humans. The natural world stumbled on NGI in a few billion years. We are trying to do it in decades - it would not be surprising that it's going to take huge power.

hackerlight
0 replies
9h37m

Scaling laws. Maybe they will figure out a new paradigm, but in the age of Transformers we are stuck with scaling laws.

Jensson
0 replies
12h19m

Computers are still way less efficient than humans: a human brain draws less power than a laptop and constantly does some immense calculations, parsing vision, hearing, etc. better than any known algorithm.

And the part of the human brain that governs our distinctly human intelligence, as opposed to what animals can do, is much larger still, so unless we figure out a better algorithm for intelligence than evolution did, it will require a massive amount of compute.

The brain isn't fast, but it is ridiculously parallel with every cell being its own core so total throughput is immense.

m3kw9
4 replies
12h15m

If the core mission is to advance and help humanity, and they determine that changing to for-profit and making it closed will help that mission, then it is a valid decision.

KoolKat23
3 replies
6h18m

That's like saying rolling back environmental protection regulation will help humanity advance.

philwelch
2 replies
5h4m

Not at all; it’s actually far more plausible that, in many cases, rolling back environmental regulations will help humanity advance.

KoolKat23
1 replies
4h32m

Depends on your limited definition of advance. Chilling in a Matrix-esque wasteland with my fancy-futuristic-gadget isn't my idea of advanced-level-humanity.

May help with technological advancement, but not social or ethical advancement.

philwelch
0 replies
49m

It’s been known to happen that environmental regulations turn out to be ill-considered, counterproductive, entirely corrupt instances of regulatory capture by politically dominant industries, or simply poor cost-benefit tradeoffs. Gigantic pickups are a consequence of poorly considered environmental regulations in the United States, for instance.

eightnoteight
4 replies
12h50m

here they explain why they had to betray their core mission. But they don't refute that they did betray it.

you are assuming that their core mission is to "Build an AGI that can help humanity for free and as a non-profit", the way their thinking seems to be is "Build an AGI that can help humanity for free"

they figured it was impossible to achieve their core mission by doing it in a non-profit way, so they went with the for-profit route but still stayed with the mission to offer it for free once the AGI is achieved

Several non-profits sell products to further increase their non-profit scale. Would it be okay for the OpenAI non-profit to sell products that came out of the process of developing AGI, so that they can keep working on building their AGI? Museums sell stuff to continue to exist so that they can continue to build on their mission; same for many other non-profits. The OpenAI structure just seems to take a rather new version of that approach by getting venture capital (due to their capital requirements).

drcode
1 replies
12h38m

The problem of course is that they frequently go back on their promises (see the changes in their usage guidelines regarding military projects), so excuse me if I don't believe them when they say they'll voluntarily give away their AGI tech for the greater good of humanity.

ethbr1
0 replies
12h11m

Wholeheartedly agreed.

The easiest way to cut through corporate BS is to find distinguishing characteristics of the contrary motivation. In this case:

OpenAI says: To deliver AI for the good of all humanity, it needs the resources to compete with hyperscale competitors, so it needs to sell extremely profitable services.

Contrary motivation: OpenAI wants to sell extremely profitable services to make money, and it wants to control cutting edge AI to make even more money.

What distinguishing characteristics exist between the two motivations?

Because from where I'm sitting, it's a coin flip as to which one is more likely.

Add in the facts that (a) there's a lot of money on the table & (b) Sam Altman has a demonstrated propensity for throwing people under the bus when there's profit in it for himself, and I don't feel comfortable betting on OpenAI's altruism.

PS: Also, when did it become acceptable for a professional fucking company to publicly post emails in response to a lawsuit? That's trashy and smacks of a response plan set up and ready to go.

mikkom
0 replies
11h40m

still stayed with the mission to offer it for free once the AGI is achieved

And based on how they have acted in the past, how much do you trust they will act as they now say when/if they achieve AGI?

KoolKat23
0 replies
6h41m

There is no fixed point at which you can say it achieves AGI (artificial general intelligence); it's a spectrum. Who decides when they've reached that point, when they can always go further?

If this is the case, then they should be more open with their older models such as 3.5, I'm very sure industry insiders actually building these already know the fundamentals of how it works.

roody15
3 replies
13h51m

“ To some extent, they may be right that open sourcing AGI would lead to too much danger.”

I would argue the opposite. Keeping AGI behind a walled corporate garden could be the most dangerous situation imaginable.

afarviral
1 replies
13h24m

There is no clear advantage to multiple corporations or nation states each with the potential to bootstrap and control AGI vs a single corporation with a monopoly. The risk comes from the unknowable ethics of the company's direction. Adding more entities to that equation only increases the number of unknown variables. There are bound to be similarities to gun-ownership or countries with nuclear arsenals in working through this conundrum.

imtringued
0 replies
10h13m

You're talking about it as if it was a weapon. An LLM is closer to an interactive book. Millennia ago humanity could only pass on information through oral traditions. Then scholars invented elaborate writing systems and information could be passed down from generation to generation, but it had to be curated and read, before that knowledge was available in the short term memory of a human. LLMs break this dependency. Now you don't need to read the book, you can just ask the book for the parts you need.

The present entirely depends on books and equivalent electronic media. The future will depend on AI. So anyone who has a monopoly is going to be able to extract massive monopoly rents from its customers and be a net negative to the society instead of the positive they were supposed to be.

FeepingCreature
0 replies
12h15m

The state is much better at peering into walled corporate gardens than personal basements.

imjonse
3 replies
13h2m

To some extent, they may be right that open sourcing AGI would lead to too much danger.

They claimed that about GPT-2 and used the claim to delay its release.

FeepingCreature
2 replies
12h10m

They claimed that GPT-2 was probably not dangerous but they wanted to establish a culture of delaying possibly-dangerous releases early. Which, good on them!

Jensson
1 replies
11h31m

Do you really think it is a coincidence that they started closing down around the time they went for-profit?

FeepingCreature
0 replies
11h8m

No, I think they started closing down and going for profit at the time they realized that GPT was going to be useful. Which sounds bad, but at the limit, useful and dangerous are the same continuum. As the kids say, OpenAI got "scale-pilled;" they realized that as they dumped more compute and more data onto those things, the network would just pick up more and discontinuous capabilities "on its own."

<aisafety>That is the one thing we didn't want to happen.</aisafety>

It's one thing to mess around with Starcraft or DotA and wow the gaming world, it's quite another to be riding the escalator to the eschaton.

ClarityJones
3 replies
4h37m

They're probably right that without making a profit, it's impossible to afford...

This doesn't begin to make sense to me. Nothing about being a non-profit prevents OpenAI from raising money, including by selling goods and services at a markup. Some sell girl-scout cookies, some hold events, etc.

So, you can't issue equity in the company... offer royalties. Write up a compensation contract with whatever formula the potential employee is happy with.

Contract law is specifically designed to allow parties to accomplish whatever they want. This is an excuse.

theptip
1 replies
2h59m

There is no way OpenAI could have raised $10B as a non-profit.

ClarityJones
0 replies
1h29m

Would you please try to explain why?

pennomi
0 replies
4h16m

Hell, I’d regularly donate to the OpenAI Crowdsource Fund if it guaranteed their research would be open sourced.

sroussey
2 replies
15h36m

I guess Mozilla as well then.

TheCapeGreek
1 replies
13h38m

Well yeah, dive into the comments on any Firefox-related HN post and you'll see the same complaint about the organization structure of Mozilla, and its hindrance of Firefox's progress in favour of fat CEO salaries and side products few people want.

sroussey
0 replies
13h10m

You might find me there. ;)

But, my God, some of the nonprofit CEOs I’ve known make the for-profit CEOs look pathetic and cheap.

ActorNightly
2 replies
8h46m

They're probably right that building AGI will require a ton of computational power, and that it will be very expensive

Eh.

Humans have about 1.5 * 10^14 synapses (i.e. connections between neurons). Assume all the synapses are firing (highly unlikely to be the case in reality), and the average time between firings is 0.5ms (there are chemical synapses that are much slower, but we take the fastest speed of the electrical synapses).

Assume that each synapse is essentially a signal that gets attenuated somehow in transmission, i.e. a value times a fractional weight, which really is a floating point operation. That gives us (1.5 * 10^14) / 0.0005 / 10^12 = 300,000 TFLOPS.

An Nvidia 4090 is capable of about 1,300 TFLOPS of fp8. So for comparable compute, we need roughly 230 4090s, which is about $345k. So with everything else on board, you are looking at $500k, which is comparatively not that much money, and that's consumer pricing.
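
For what it's worth, here is a quick sanity check of that arithmetic (a minimal back-of-envelope sketch in Python; the synapse count, 0.5 ms firing interval, and 1,300 TFLOPS fp8 figure are the numbers above, while the ~$1,500 per-card price is an assumption implied by the ~$345k total):

  synapses = 1.5e14              # rough estimate of synapses in a human brain
  firing_period_s = 0.5e-3       # assume every synapse fires once per 0.5 ms
  ops_per_second = synapses / firing_period_s    # ~3e17 FLOP/s if each event is one multiply
  tflops_needed = ops_per_second / 1e12          # ~300,000 TFLOPS

  gpu_tflops = 1300              # fp8 TFLOPS figure for the RTX 4090 cited above
  gpu_price_usd = 1500           # assumed consumer price per card
  gpus_needed = tflops_needed / gpu_tflops       # ~230 cards
  print(round(tflops_needed), round(gpus_needed), round(gpus_needed) * gpu_price_usd)
  # prints: 300000 231 346500 -- i.e. the ~$345k ballpark quoted above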

The biggest expense like you said is paying salaries of people who are gonna figure out the right software to put on those 4090s. I just hope that most of them aren't working on LLMs.

initplus
0 replies
6h2m

Inference compute costs and training compute costs aren’t the same. Training costs are an order of magnitude higher.

Davidzheng
0 replies
3h9m

LLMs are just trained on massive amounts of data in order to find the right software. No human can program these machines to do the complicated tasks that humans can do; rather, we search for the programs with gradient-based methods using data.

idatum
1 replies
15h40m

We realized building AGI will require far more resources than we’d initially imagined

So the AGI existential threat to humanity has diminished?

Vecr
0 replies
14h27m

Not if their near-term funding rounds go through. So much for "compute overhang".

billiam
1 replies
13h2m

As the emails make clear, Musk reveals that his real goal is to use OpenAI to accelerate full self driving of Tesla Model 3 and other models. He keeps on putting up Google as a boogeyman who will swamp them, but he provides no real evidence of spending level or progress toward AGI, he just bloviates. I am totally suspicious of Altman in particular, but Musk is just the worst.

mgiannopoulos
0 replies
2h11m

“he provides no real evidence of spending level” In the emails he mentions that billions per year are needed and that he was willing to put up $1 billion to start.

bbor
1 replies
14h13m

Great analysis, thanks for taking the time.

  here they explain why they had to betray their core mission. But they don't refute that they did betray it.
Although they don’t spend nearly as much time on it, probably because it’s an entirely intuitive argument without any evidence, their other claim is that they could be “open” as in “for the public good” while still making closed models for profit. Aka the ends justify the means.

It’s a shame lawyers seem to think that the lawsuit is a badly argued joke, because I really don’t find that line of reasoning convincing…

KaiserPro
0 replies
2h24m

lawyers seem to think that the lawsuit is a badly argued joke,

It's because it is a badly argued joke. The founding charter is just that, a charter, not a contract:

the corporation will seek to open source technology for the public benefit when applicable

There are two massive caveats in that statement, wide enough to drive a stadium through.

Elon is just pissed, and is throwing lawyers at it in the hopes that they will fold (a lot of cases are settled out of court, because it's potentially significantly cheaper and less risky).

The problem for Musk is that he is fighting with a company that is also rich enough to afford good lawyers for a long time.

Also, he'll have to argue that he has materially been hurt by this change, which is again really hard.

Last of all, it's a company; founding agreements are not law, and rarely even contracts.

animex
1 replies
10h39m

by "betray", you mean they pivoted?

Reubend
0 replies
8h59m

To "pivot" would merely be to change their mission to something related yet different. Their current stance seems to me to be in conflict with their original mission, so I think it's accurate to say that they betrayed it.

Animats
1 replies
10h21m

They're probably right that building AGI will require a ton of computational power, and that it will be very expensive.

Is that still true? LLMs seem to be getting smaller and cheaper for the same level of performance.

KaiserPro
0 replies
2h38m

Training isn't getting less intensive, it's just that adding more GPUs is now more practical.

menzoic
0 replies
7h19m

But instead of changing their name and their mission, and returning the donations they took from these wealthy tech founders, they used the benevolent appearance of their non-profit status and their name to mislead everyone about their intentions.

Why would they change their mission? If achieving the mission requires money then they should figure out how to get money. Non-profit doesn't actually mean that the corporation isn't allowed to make profit.

Why change the name? They never agreed to open source everything, the stated mission was to make sure AGI benefits all of humanity.

loceng
0 replies
3h43m

"They're probably right that without making a profit, it's impossible to afford the salaries 100s of experts in the field and an army of hardware to train new models."

Except there's a proof point that it's not impossible: philanthropists like Elon Musk would likely have kept pumping money into it, and arguably the U.S. and other governments would have funded efforts (energy and/or compute time) as a military defense strategy to help compete with the CCP's funding of AI in China.

lern_too_spel
0 replies
14h0m

The evidence they presented shows that Elon was in complete agreement with the direction of OpenAI. The only thing he disagreed with was who would be the majority owner of the resulting for-profit company that hides research in the short to medium term.

kramerger
0 replies
8h38m

but it doesn't really refute any of his core assertions: they still have the appearance of abandoning their core mission to focus more on profits

They don't refute that, but they claim that road was chosen in agreement with Elon. In fact, they claim this was his suggestion.

geniium
0 replies
4h9m

Thanks for pointing that out. 100% agree with you.

What can we do?

ethbr1
0 replies
12h2m

It's convenient that OpenAI posts newsbait as they're poised to announce new board members who will control the company.

And look at that, suddenly news searches are plastered with stories about this...

https://www.google.com/search?q=openai+board&tbm=nws

Who could have possibly foreseen that 'openai' + 'musk' + emails would chum the waters for a news cycle? Certainly not a PR firm.

dheera
0 replies
12h38m

they still have the appearance of abandoning their core mission to focus more on profits

If donors are unwilling to continue making sustained donations, they would have died. They only did what they needed to to stay alive.

benreesman
0 replies
13h7m

Malevolent or "paperclip indifferent" AGI is a hypothetical danger.

Concentrating an extremely powerful tool, what it will and won't do, who has access to it, who has access to the newest stuff first? Further corrupting K Street via massive lobbying/bribery activity laundered through OpenPhilanthropy is just trivially terrifying.

That is a clear and present danger of potentially catastrophic importance.

We stop the bleeding to death, then worry about the possibly malignant, possibly benign lump that may require careful surgery.

belter
0 replies
8h38m

From all the evidence, the one to look the worst on all of this is Google...

bastardoperator
0 replies
1h52m

Elon is suing OpenAI for breach of contract but doesn't have a contract with OpenAI. Most legal experts are concluding that this is a commercial for Elon Musk, not much more. Missions change, yawn...

bambax
0 replies
7h22m

True.

“The Open in openAI means that everyone should benefit from the fruits of AI after its [sic] built, but it's totally OK to not share the science...”, to which Elon replied: “Yup”.

Well, nope. This is disingenuous to the point of absurdity. By that measure every commercial enterprise is "open". Google certainly is extremely open, as are Apple, Amazon, Microsoft... or Walmart, Exxon, you name it.

DebtDeflation
0 replies
3h5m

When they say "We realized building AGI will require far more resources than we’d initially imagined" it's not just money/hardware it's also time. They need more years, maybe even decades, for AGI. In the meantime, let's put these LLMs to good use and make some money to keep funding development.

Cacti
0 replies
15h19m

Their early decision to not open source their models was the most obvious sign of their intentions.

Too dangerous? Seriously? Who the fuck did/do they think they are? Jesus?

Sam Altman is going to sit there in his infinite wisdom and be the arbiter of what humanity is mature enough to handle?

The amount of Kool-Aid being happily drunk at OpenAI is astounding. It’s like crypto scams but everyone has a PhD.

BoiledCabbage
0 replies
15h4m

... and returning the donations they took from these wealthy tech founders, they used the benevolent appearance of their non-profit status and their name to mislead everyone about their intentions.

I can't tell if your comment is intentionally misleading or just entirely missing the point. The entire post states that Elon musk was well aware and onboard with their intentions. Tried to take over OpenAI and roll it into his private company to control. And finally agreed specifically that they need to continue to become less open over time.

And your post is to play Elon out to be a victim who didn't realize any of this? He's replying to emails saying he's agreeing. It's hard to understand why you posted something so contradictory above pretending he wasn't.

magnio
53 replies
15h50m

Elon understood the mission did not imply open-sourcing AGI. As Ilya told Elon: “As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science...”, to which Elon replied: “Yup”.

Wow, the most open OpenAI has ever been is when someone sues them.

On the other hand, this shows Elon doesn't care jackshit about the lack of openness from OpenAI. He's just mad that he walked away from a monumental success.

carlossouza
14 replies
14h25m

The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science...

Based on this line of reasoning, ANY company that builds any given technology and intends to share (sell) it to the world, but not divulge how it was done, can call itself OpenWhatever.

They are clearly saying that the word “Open” in their name means nothing.

jstummbillig
3 replies
9h1m

AI/AGI is a multiplier. There could have been a world where just Google builds that multiplier and only uses it internally. Making that multiplier publicly available can be the "Open" part.

I understand that this particular audience is very sensitive about the term, but why are we being so childish about it? Yes, you can name your company whatever you want within reason. Yes, it does not have to mean anything in particular, asterisk. Companies being named in colourful ways is not particularly new, nor interesting.

ilrwbwrkhv
1 replies
4h11m

I'm just surprised how the entire tech world has been fooled into thinking AGI is around the corner.

jstummbillig
0 replies
3h8m

What you think "AGI" and "around the corner" mean might very well not be what everyone else means.

smoldesu
0 replies
2h35m

There could have been a world where just Google builds that multipler and only uses it internally.

Funny that you mention that, actually. Before OpenAI started generously... publicizing their models, Google was actually shipping their weights day-and-date with their papers. So honestly, I actually doubt that Google would do that.

Making that multiplier publicly available can be the "Open" part.

Aw, how generous of them. They even let us pay money to generously use their "Open" resource.

but why are we being so childish about it?

Why are you being so childish about it? "Open" means something - you can't contort OpenAI's minimalist public contributions into a defense of their "openness". You'd have better luck arguing that Apple and Microsoft support Open Source software development.

The last significant contribution OpenAI open-sourced was GPT-2 in 2019. They have had a net-negative impact on the world at large, amassing private assets under the guise of public enrichment. If it was an option between OpenAI or nothing, I'd ask for nothing and pray for a better frontrunner. It's not the name, it's the way they behave and the apologism they garner.

jrflowers
3 replies
9h48m

They are clearly saying that the word “Open” in their name means nothing.

Similarly Microsoft makes incredibly large software and my letters about this have gone unanswered

paulddraper
0 replies
3h43m

Microsoft made software for small computers (PCs), not diminutive software.

Centigonal
1 replies
13h36m

The online versions of Microsoft office apps are free. What if they renamed those to... OpenOffice?

kamaal
0 replies
10h23m

I laughed out real loud after reading this xD

But really there is so much money in this, and if they can make it they are going to be the next Google.

It should be obvious they don't want to be 'open' in terms of really making this open source.

jbc1
0 replies
9h17m

Any company could do that. Whether it would make sense for them to do so would depend on the market they were entering, though. At the time OpenAI came about, companies weren't sharing (selling) AI to the world. Doing so was a point of differentiation. There's Google over there hoarding all of their AI for themselves. Here's us over here providing APIs and free chat interfaces to the general public.

So sure the name means nothing now in a market shaped by OpenAI, where everyone offers APIs and has chat interfaces. It doesn't mean it meant nothing when they picked it or that they abandoned the meaning. The landscape just changed.

gandutraveler
0 replies
7h48m

By that analogy, any company that uses the prefix has to be open source?

OpenDoor?

dgellow
0 replies
5h9m

That's 100% the case, you can call your company OpenWhatever and keep everything closed. It's a brand, nothing else

blackoil
0 replies
6h30m

Does 'Open' mean anything in a legal sense? It may not be in bad taste, but it is your interpretation.

eftychis
10 replies
13h53m

In my eyes this is a straw-man argument.

"[T]otally OK not to share the science." I think the reasonable average person would disagree with that. And, it would go against certain goal & financial transparency principles that the IRS demands to bestow the 501(c)3 designation.

(e.g. see here https://www.citizen.org/article/letter-to-california-attorne...)

chatmasta
8 replies
13h25m

Ilya's justification for that argument was to link to a SlateStarCodex blog post. We are doomed...

Atotalnoob
4 replies
13h7m

What’s wrong with slatestarcodex? I’ve never heard of them before

jbc1
1 replies
9h48m

Lucky you! It's some of the best content on the internet. My favourite blog for sure. Same guy has continued on with his writing at https://www.astralcodexten.com which is also pretty good but doesn't reach the same highs.

What's wrong with it in context, though, is that as great as it is, it's just some guy's blog. It's disconcerting that people would be working on technology they think is more dangerous than nuclear weapons and basing their approach to safety on a random blog post.

Although it's disconcerting to think of a committee deciding how it's approached, or the general public, or AI loving researchers, so it might just be a disconcerting topic.

If OpenAI or just Ilya think Scott is the best man to have thinking about it though, I would have at least liked them to pay him to do it full time. Blogging isn't even Scott's full time job, and the majority of his stuff isn't even about AI.

penjelly
0 replies
5h51m

Nobody said that's the only argument Ilya had, but the points from Scott Alexander are legitimate and could be addressed even before hiring Scott on or having an academic paper written.

ce4
2 replies
11h34m

Where did you get that it was sent by Ilya? The to: field is redacted.

snoman
1 replies
11h13m

The name is not redacted, only the exact e-mail address.

Davidzheng
0 replies
3h38m

I don't think that's correct. I think it was sent by the redacted sender, and Ilya later responded to it. I don't think Ilya linked the Codex post.

javierperez
0 replies
4h25m

“The core algorithms we use today have remained largely unchanged from the ~90s. Not only that, but any algorithmic advances published in a paper somewhere can be almost immediately re-implemented and incorporated.”

The fact that these two statements have opposing ideals really highlights the hypocrisy of these billionaires. Ilya’s statement is just them trying to convince themselves that what they’re working on is still a noble cause even if OpenAI isn’t actually “Open”.

devsda
6 replies
15h12m

The Open in openAI means that everyone should benefit from the fruits of AI after its built.(even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes).

"OpenSource". Open in open source means everyone should benefit from the fruits of source after it is built but it's totally OK to not share the source(Even though calling ourselves open is the right strategy for medium term recruitment and adoption purposes).

If anyone tries the same logic with "open" source, they will be ridiculed and laughed at, but here we are.

repler
5 replies
14h52m

isn’t this the old: free as in speech, not as in beer

devsda
1 replies
14h31m

"Free as in beer" : I doubt anybody is expecting OpenAI to give away their work for free or give free credits/tokens forever. Even when they do, it is no different from a free tier of any other *closed* commercial products.

"Free as in speech" : I'm not sure which part of openAI's actions show commitment to this part.

Jensson
1 replies
14h47m

OpenAI isn't free either way, they don't let you do porn etc with their models.

snoman
0 replies
11h17m

What’s more, ChatGPT won’t even attempt to name controversial websites if you’ve forgotten their name.

romwell
0 replies
14h7m

isn’t this the old: free as in speech, not as in beer

Hardly. Free beer and free speech do have different meanings, but freeware isn't something you have to pay for because it's "free as in beer".

In OpenAI's case, "open" isn't open as in anything normally associated with the word in the context.

Open as in a private club during business hours for VIP members only is how they are trying to explain it, but understandably, some people aren't buying it.

resonious
4 replies
14h33m

Yeah seems pretty cut and dry. It'd feel pretty bad to see OpenAI doing what they're doing now after saying things like "My probability assessment of OpenAI being relevant to DeepMind/Google without a dramatic change in execution and resources is 0%."

barrell
2 replies
14h14m

I don’t think anything about this is cut and dry. You may have a strong opinion on the matter, but at the most charitable it’s people doing their best in a murky situation

resonious
1 replies
13h1m

I did say it seems cut and dry. It just looks a certain way. I know that things may not be as they seem. Elon's complaints about OpenAI pulling a profit make sense in isolation, but look very different in light of these emails. This blog post is a very good move in OpenAI's favor.

barrell
0 replies
6h57m

The comments in this thread alone would indicate that the crowd is pretty split even after this blog post.

Jensson
0 replies
14h20m

He was right though, they did have a dramatic change in execution and resources the next couple of years when they prepared to sell out to Microsoft and that gave them a chance. Elon really gave them good advice there, even if it was snarky.

BoiledCabbage
3 replies
14h59m

this shows Elon doesn't care jackshit about the lack of openness from OpenAI. He's just mad only that he walked away from a monumental success.

Yup he's pretty transparently lying here.

He committed to $1B of funding. Then, when they wouldn't make him CEO and/or roll the company into Tesla, he reneged on his commitment after paying out only 4% of it. He claimed the company would fail and only he could save them (again, wrong).

And now he's mad that he doesn't control the top AI out there, because he chose to walk away from them.

And yet again people are falling for him. Elon's talk never matches his actions. And there is still a large portion of the internet that falls for his talk time and time again.

mlsu
2 replies
14h41m

Whether it is Elon or Altman that controls it has nothing to do with how open or not OpenAI is. And it has become very clear that OpenAI is nothing but Microsoft in a trenchcoat.

No matter his motives, I applaud this lawsuit. Who cares if his talk doesn't match his action? His action, here, is good.

lern_too_spel
0 replies
13h39m

Musk's lawsuit is for breach of contract, but it looks like Musk agreed with what OpenAI did, which means the lawsuit will fail.

From the complaint:

Together with Mr. Brockman, the three agreed that this new lab: (a) would be a non-profit developing AGI for the benefit of humanity, not for a for-profit company seeking to maximize shareholder profits; and (b) would be open-source, balancing only countervailing safety considerations, and would not keep its technology closed and secret for proprietary commercial reasons (The “Founding Agreement”).

The biggest beneficiary of this lawsuit is Google, which now gets more runway to bumble along to victory.

gandutraveler
0 replies
7h46m

Do you also own a Tesla?

greenie_beans
2 replies
15h6m

really makes the "Open" sound more sinister, like they're opening the AI.

polynomial
0 replies
14h4m

Release the kraken.

Onewildgamer
0 replies
9h53m

Like they rub the lamp to release the genie. To go with the metaphor, the genie obeys the person holding the lamp, not everybody.

davej
2 replies
8h27m

This is misleading. Please read the actual source in context rather than just the excerpt (it's at the bottom of the blog). They are talking about AI safety and not maximizing profit.

Here's the previous sentence for context:

“a safe AI is harder to build than an unsafe one, then by opensorucing everything, we make it easy for someone unscrupulous with access to overwhelming amount of hardware to build an unsafe AI, which will experience a hard takeoff.”

As an aside: it's frustrating that the parent comment gets upvoted heavily for 7 hours and nobody has bothered to read the relevant context.

sgift
1 replies
7h32m

I find it good that no one until now provided this bullshit lie as context. The "we need it to be closed for it to be safe" claptrap is the same reasoning people used for feudalism: You see, only certain people are great enough to use AGI responsibly/can be trusted to govern, so we cannot just give the power freely to the unwashed masses, which would use it irresponsibly. You can certainly trust us to use it to help humanity - pinky swear.

davej
0 replies
4h59m

This is from an internal email, it was not written for PR. Whether he is correct or not about his concerns, it's clear that this is an honestly held belief by Ilya.

sidcool
1 replies
15h23m

Yes. I think the same. Elon is just bitter he misjudged and now wants to claw back without seeming a giant a-hole.

sroussey
0 replies
13h4m

He has celebrity-itis: he is so far gone from knowing anyone who doesn’t suck up to him that he can’t help but look like a giant a-hole all the time, because everyone around him will tell him that he is cool and right.

And it doesn’t take being that rich to have this problem. Even minor celebs in L.A. have this problem!

renewiltord
0 replies
11h35m

At the time, all AI models were hidden inside big corporations. We saw research papers but couldn't use any. OpenAI allowed anyone to access modern LLMs. They were open in the sense that they give everyone access to the model.

ecmascript
0 replies
9h34m

On the other hand, this shows Elon doesn't care jackshit about the lack of openness from OpenAI. He's just mad only that he walked away from a monumental success.

You really come to that conclusion from a "yup"? Damn.

brtkdotse
0 replies
10h42m

He's just mad

This seems to be his natural resting state these days

TheAlchemist
46 replies
15h34m

This looks bad for OpenAI (although it's been pretty obvious that they are far from open for a long time).

But it looks 10x worse for Elon, at least for the public image he desperately tries to maintain.

As we discussed a for-profit structure in order to further the mission, Elon wanted us to merge with Tesla or he wanted full control.

In late 2017, we and Elon decided the next step for the mission was to create a for-profit entity. Elon wanted majority equity, initial board control, and to be CEO. In the middle of these discussions, he withheld funding.

dkjaudyeqooe
16 replies
14h2m

The difference is OpenAI had a reputation to protect, Musk can't sink any lower at this point, and his stans will persist.

The fact that they're slinging mud at each other just proves the mutual criticism right, though, and provides an edifying spectacle for the rest of us.

TheAlchemist
13 replies
11h56m

Musk can't sink any lower at this point, and his stans will persist.

You sure about that ?

For now they are largely looking away (and boy, it's hard!) from his 'far right' adventures. But it was just reported that he met with Trump. And it's pretty clear that if Trump is elected and does cancel EV subsidies as he says he will do, Tesla is dead and he knows it.

So now we have both of those guys, each having something the other wants - Trump wants Musk's money now, and Musk wants ... taxpayer money once Trump is elected. I bet Musk's reputation can and will go much, much lower in the coming months.

averageRoyalty
4 replies
8h28m

if Trump is elected and does cancel EV subsidies as he says he will do, Tesla is dead

You know that Tesla operates outside the US as well, right? Where 95% of the worlds population live?

cactusplant7374
1 replies
7h58m

And they face fierce competition from BYD. Things are not getting any easier despite Musk saying "legacy" can't keep up.

simondotau
0 replies
4h28m

BYD isn’t legacy.

decafninja
0 replies
2h58m

His 'stans and TSLA uber-bulls insist Tesla is no longer a car company but an AI one. So they probably don't care anymore.

I know some uber-bulls have long insisted Tesla stop selling cars to the public so they can be hoarded for the imminent Robotaxi fleet that will soon be deployed en masse by 2020.

TheAlchemist
0 replies
6h26m

I know, yes. But the US still accounts for a very big share of their revenues.

Check what happens to Tesla sales when countries cut EV subsidies. New Zealand did it in January. Here is the data:

https://evdb.nz/ev-stats

The truth is, without subsidies, Tesla would sell a fraction of what it does now, and it would certainly not be profitable (hell, it probably won't be profitable in 2024, even with the subsidies!).

zarathustreal
1 replies
8h59m

Just a word of advice regarding the effect of associating with Trump: don’t make the mistake of thinking everyone holds the same opinion as you

yellow_lead
0 replies
7h42m

The effect of sucking up to someone who previously said this about you might be even worse:

"When Elon Musk came to the White House asking me for help on all of his many subsidized projects, whether it's electric cars that don't drive long enough, driverless cars that crash, or rocketships to nowhere, without which subsidies he'd be worthless, and telling me how he was a big Trump fan and Republican..."

dkjaudyeqooe
1 replies
1h12m

Musk is a vindictive child and he has an axe to grind against Biden and the "woke" left. You'd think he wouldn't do something self-defeating because of some slight, but he turned $40 billion into $10 billion buying Twitter for the stupidest reasons.

shrimp_emoji
0 replies
1h11m

Based

TheCaptain4815
1 replies
3h35m

Nothing you wrote has a speck of evidence. The current administration has been clearly targeting Elon Musk and Tesla/SpaceX, so why on earth wouldn't Elon support their opponents?

dkjaudyeqooe
0 replies
1h11m

Nothing you wrote has a speck of evidence.

timeon
0 replies
3h22m

as he says he will do

Depends on whether there really is a business clash between ICE donors and Musk.

What is promised to public is not important if your voters are true believers.

Same for Musk, he can spin anything to his believers as well.

dkarras
0 replies
9h12m

I mean his standing is based on what he is capable of, not what he does given the circumstance. We know he would side with Trump if it benefits him. I don't need to see the scenario play out for me to judge him for it.

tr3ntg
0 replies
2h42m

I was struggling to put words to my thoughts here, but you nailed it.

It doesn’t make sense to engage in a public spat with someone who has such a negative reputation. Staying silent would have made more sense.

Posting it publicly is an odd and very unprofessional move. I can imagine Satya doesn’t like this blog post.

blackoil
0 replies
13h4m

The difference is OpenAI had a reputation to protect

OpenAI changed its stance about five years ago. In the meantime they got billions in investment, hired the best employees, created a very successful product, and took a leadership position in AI. The only narrative remaining was that they somehow betrayed the original donors by moving away from the charter. This shows that is not the case; the original donor(s) are equally megalomaniac and don't give a fuck about the charter.

Jensson
12 replies
15h29m

Elon makes plenty of companies, him making an AI company that he was in control of doesn't look bad or strange at all.

TheAlchemist
6 replies
15h13m

I was talking about his public image - 'founder' of Tesla, doing everything for the love of humanity etc.

It's all bullshit, and these emails show it again. He is an extraordinary businessman of course - but everything he does, he does for money and power - mostly coming from government subsidies, btw. Until recently though, he managed to make most people believe that it was to save the planet and humanity.

KTibow
4 replies
14h6m

At this point most people didn't think Elon was a good guy even before these emails leaked

adamors
3 replies
11h11m

Yes, he has shown his true self when he called the British diver, who was rescuing kids out of cave in Thailand, a pedophile. That was more than 5 years ago.

consumer451
2 replies
9h35m

Morals, ethics, and not being a jerk aside, Musk has now proven that he is just plain not thinking things through, when he:

1) Released the original Boring Company idea about tiny tunnels under LA to alleviate road traffic, which could easily be shot down by anyone modeling it on a single paper napkin.

2) Got caught up in ontological (theistic) arguments about alien scientists who had surely created a computer simulation in which we all live.

3) In an effort to prevent the inevitable AI overlords from controlling us, bought a company to create a ubiquitous neural interface so that computers have read/write access to our brains, of which somehow the AI overlords will not take advantage.

This was all as of 2018.

I still give him credit for previous accomplishments, but in aggregate, it is arguable that Elon Musk's new found ability to avoid thinking things through might be our "Great Filter."

polygamous_bat
1 replies
1h10m

This reminds me of a quip I heard on twitter and found quite funny.

“Elon is the stupid people’s fantasy of what being smart is like, just like Trump can be a poor person’s fantasy of what being rich is like.”

Don’t agree entirely with the poor part, but the Elon part I find quite accurate.

consumer451
0 replies
13m

Even though I truly despise whatever the f Musk is today, he is not comparable to Trump. Trump always scammed and lied, while Musk did have a real foundation of accomplishments.

I may be trying to compensate for having been an Elon stan pre-2018ish, but I do have to give him a lot of credit for being the founder of SpaceX, which is the best launch provider on the planet. He was also the CEO who made Tesla what it is. He really did accelerate the adoption of EVs. He really believed in the physics of both of those endeavors, he was right, and I loved him for it. A true inspiration.

BUT.. around 2018 he got high on his own supply. I had watched every Elon video up to that point, and there is one old one where he said ~"I fear that I could get too ego driven, and have no one around to call me on my shit, and that really worries me." He was right to worry about that, because he sailed right through that with a vengeance. The first couple years of that were tolerable, but now, what a devolution. I hate to admit it, but it really broke my heart.

gandutraveler
0 replies
7h42m

In his own biography the author mentions how Elon did the hostile takeover of Tesla.

He can't work with CEOs who are smarter than him.

hackerlight
4 replies
15h14m

He was pushing them to do the thing he's suing them over.

Message to Elon: "A for-profit pivot might create a more sustainable revenue stream"

Elon: You are right ... and Tesla is the only path.

Then 3 weeks later Elon gets kicked off the board, probably after a fight where he tried to make OpenAI become for-profit under Tesla.

How can you not see how conniving this man is? The lawsuit is either a revenge play or a play to take down xAI's competition.

boringg
2 replies
14h13m

Kicked off the board? That's news to me.

hackerlight
1 replies
13h28m

In late 2017, we and Elon decided the next step for the mission was to create a for-profit entity. Elon wanted majority equity, initial board control, and to be CEO. In the middle of these discussions, he withheld funding. Reid Hoffman bridged the gap to cover salaries and operations.

I misspoke. He may have just resigned from the board when he didn't get what he wanted.

boringg
0 replies
3h57m

Indeed he did resign - which is a significant difference.

Jensson
0 replies
14h39m

Oh, I read that as if it was the founding stage, not 2 years later. Yeah I agree it puts things in perspective a bit.

pama
10 replies
15h25m

Do you mind elaborating why you think it looks bad for OpenAI? I didn't see anything that diminishes their importance as an entity or hurts them in respect to this lawsuit or their reputation. In their internal emails from 2016 they explain what they mean by open.

TheAlchemist
9 replies
15h17m

Bad as to the credibility of the image they tried to sell in the beginning.

If we agree it's a for profit company, and all this 'Open' stuff is just PR, then yes - it's not looking bad. It's just business.

pama
8 replies
14h29m

I hear people complain about the 'Open' a lot recently, but I'm not sure I understand this type of concern. Publications, for companies, are always PR and recruitment efforts (independent of profit or nonprofit status). I recall that OpenAI were very clear about their long term intentions and plans for how to proceed since at least February of 2019 when they announced GPT2 and withheld the code and weights for about 9 months because of concerns with making the technology immediately available to all. In my own mind, they've been consistent in their behavior for the last 5 years, probably longer though I didn't care much about their early RL-related recruitment efforts.

mac-attack
6 replies
14h14m

With all due respect, they are not doing a sleight of hand while selling widgets... they are in the process of reshaping society in a way that may never have been achieved previously. Finding out that their initial motives were a facade doesn't bode well for their morality as they continue to gain power.

pama
5 replies
13h1m

I still don’t understand why you think their initial motives were a facade. They have always been trying to get to AGI that will be useable by a large fraction of society. I am not sure this means they need to explain exactly how things work at every step along the way or to help competitors also develop AGI any more than Intel or Nvidia had to publish their tapeouts in order for people to buy their chips or for competitors to appear. If OpenAI instead built AI for the purpose of helping them solve an internal/whimsical project then that would not be “open” by any reasonable definition (and such efforts exist, possibly by ultra wealthy corporations but also by nations, including for defense purposes.)

leereeves
4 replies
10h11m

At one time their mission statement said:

OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.

That's obviously changed to "let's make lots of money", which should not be any "non-profit" organization's mission.

pama
3 replies
6h39m

I still do not see the point that you are trying to make. Do you think that their current path is somehow constrained (instead of unconstrained) by a need to generate financial returns? I haven’t seen any evidence of a change to the core mission described in the statement.

leereeves
2 replies
6h2m

Do you think the employees they've given PPUs to aren't expecting a financial return?

pama
1 replies
5h15m

I don’t see how expecting a financial return is in conflict with the original mission statement. Of course the employees expect a financial return.

boringg
0 replies
3h26m

If you advertise your company as a wholesome, benevolent not-for-profit and generate a lot of goodwill for your human-enhancing mission, and then pull a bait and switch to what looks to be a profit/power motive at all costs, it certainly makes most people who are following the organization sour on your mission and your reputation.

Typically when organizations do something like that it speaks to some considerable underlying issues at the core of the company and the people in charge of it.

It is particularly troubling when it pertains to technology that we all believe to be incredibly important.

devjab
0 replies
11h58m

I think the word “open” is sort of a misrepresentation of what the company is today. I don’t mind personally but I can also see why people in the OSS community would.

Now, I’m not too concerned with any of the large LLM companies and their PR stunts, but from my solely EU enterprise perspective I see OpenAI as mostly a store-front for Microsoft. We get all the enterprise co-pilot products as part of our Microsoft licensing (which is bartered through a 3rd party vendor to make it seem like all the co-pilot stuff we get is “free” when it goes on the budget).

All of those tools are obviously direct results of the work OpenAI does and many of them are truly brilliant. I work in an investment bank that builds green energy plants and sells them with investor money. As you might imagine, nobody outside of our sales department is very good at creating PowerPoints. Our financial departments especially used to be a “joy” to watch when they presented their stuff at monthly/quarterly meetings… seriously, it was like they were in a competition to fit the most words into a single slide. With co-pilot their stuff looks absolutely brilliant. Still not on the level of our sales department, but brilliant, and it’s even helped their presentations not last 90 million years. And this is just a tiny fraction of what we get out of co-pilot. Sure… I mostly use it to make stupid images of ducks, cats, and space marines with wolf heads for my code-related presentations, and to give me links to the right Microsoft documentation I’m looking for in the ocean of pages. But it’s still the fruits of OpenAI.

Hell, the fact that they’re doing their stuff on Azure basically means that a lot of those 10 billion Microsoft dollars are going directly back to Microsoft themselves as OpenAI purchases computing power. Yet it remains a “free” entity, so that Microsoft doesn’t run into EU anti-trust issues.

Despite this gloom and doom with an added bit of tinfoil hat, I do think OpenAI themselves are still true to their original mission. But in the boring business sense in an enterprise world, I also think they are simultaneously sort of owned by the largest “for enterprise” tech company in the world.

BeetleB
1 replies
12h45m

There's nothing new here, though. That he demanded control of OpenAI has been well reported in the past.

hackerlight
0 replies
9h0m

It's not just that Elon wanted control, it's that Elon wanted it to become closed and for-profit. This exposes Elon as a bald-faced hypocrite with cynical intentions motivating the lawsuit.

Sam's reticence to publicly defend himself until now has backfired. Elon has fully controlled the public narrative and perception forming and it is hard to dislodge perceptions after they've settled.

wseqyrku
0 replies
15h29m

They both are all about publicity and that's the name of the game. Doesn't matter who wins at the end, it's going to be absolutely all out in the Open. Hey.

extheat
0 replies
15h31m

It was his money to give; he's not obligated to give it out without some form of control. But yes, that does seem pretty excessive, he basically wanted to pull a Tesla there.

VeejayRampay
0 replies
4h47m

I'm really confused that people would think that this makes Elon Musk look bad

I mean, it was not the conspiracy-infused, anti-vaccine, pro-Russia, antisemitic vibe about him, no, but those emails that crossed the line

habosa
31 replies
3h12m

Regardless of who is right here, I think the enormous egos at play make OpenAI the last company I’d want to develop AGI. First Sam vs. the board, now Elon vs. everyone … it’s pretty clear this company is not in any way being run for the greater good. It’s all ego and money with some good science trapped underneath.

troyvit
8 replies
2h50m

If AGI is as transformative as its proponents make it out to be, would it both attract and create those enormous egos though?

reactordev
5 replies
2h31m

“I visualize a time when we will be to robots what dogs are to humans. And I am rooting for the machines.” — Claude Shannon

maxwell
2 replies
2h5m

A robot stamping on a human face—forever.

bigbillheck
1 replies
51m

I assure you that we all got the allusion that you're making, but given the quote that you're replying to I think that perhaps you personally should not be allowed to own a dog.

maxwell
0 replies
38m

Pray tell me, sir, whose dog are you?

chaorace
1 replies
2h12m

A fellow Code Report viewer, I assume?

reactordev
0 replies
1h57m

who? never heard of it. Prof Shannon said this a while ago, in the '50s I believe.

ethanbond
1 replies
1h54m

Which is why one might create a mechanism, say a non-profit, that has an established, codified mission to combat such obviously foreseeable efforts.

foofie
0 replies
1h29m

OpenAI clearly rejected Elon Musk's advances and kept him out. Isn't it working already in its current form?

chaorace
8 replies
2h58m

The really depressing thing is that the board anticipated exactly this type of outcome when they were going "capped profit" and deliberately restructured the company with the specific goal of preventing this from happening... yet here we are.

It's difficult to walk away without concluding that "profit secondary" companies are fundamentally incompatible with VC funding. Would that be a pessimistic take? Are there better models left to try? Or is it perhaps the case that OpenAI simply grew too quickly for any number of safeguards to be properly effective?

derelicta
5 replies
2h55m

How did they structure it to prevent this? Is it in the statutes of the company or smth?

chaorace
4 replies
2h41m

It's actually a very clever structure! Please open the following image to follow along as I elaborate: https://pbs.twimg.com/media/F_PGOPOacAApU8e.jpg

At the top level, there is the non-profit board of directors (i.e.: the ones Sam Altman had that big fight with). They are not beholden to any shareholders and are directly bound by the company charter: https://openai.com/charter

The top-level nonprofit company owns holding companies in partnership with their employees. The purpose of these holding companies is to own a majority of shares in & effectively steer the bottom layer of our diagram.

At the bottom layer, we have the "capped profit" OpenAI Global, LLC (this layer is where Sam Altman lives). This company is beholden to shareholders, but because the majority shareholder is ultimately controlled by a non-profit board, it is effectively beholden to the interests of an entity which is not profit-motivated.

In order to raise capital, the holding company can create new shares, sell existing shares, and conduct private fundraising. As you can see on the diagram, Microsoft owns some of the shares in the bottom company (which they bought in exchange for a gigantic pile of Azure compute credits).

username332211
1 replies
2h1m

And what was this structure supposed to achieve? At the top we have a board of directors not accountable to anyone, except, as we recently discovered, to the possibility of a general rebellion from employees.

That's not clever or innovative. That's just plain old oligarchy. Intrigue and infighting is a known feature of oligarchies ever since antiquity.

skissane
0 replies
1h28m

That's not clever or innovative

Whether or not it is “clever”, the idea of a non-profit or charity owning a for-profit company isn’t original. Mozilla has been doing it for years. The Guardian (in the UK) adopted that structure in 1936

some_random
0 replies
1h33m

Like the rest of the company, it's very clever but not in any way positive for anyone but them.

daveguy
0 replies
1h31m

Except Altman has the political capital to have the entire board fired if they go against him, which makes the entire structure irrelevant. The power is where the technology is being developed -- at the bottom, where employees can threaten to walk out into plush jobs from the major shareholders. The power is not where the figureheads sit at the top.

mordymoop
1 replies
2h40m

I think the fact that a number of top people were willing to actually leave OpenAI and found Anthropic explicitly because OpenAI had abandoned their safety focus essentially proves that this wasn’t a thing that had to happen. If different leaders had been in place things could have gone differently.

chaorace
0 replies
2h29m

Ah, but isn't that the whole thing about corporations? They're supposed to outlast their founders.

If the corporation fails to do what it was created to do, then I view that as a structural failure. A building may collapse due to a broken pillar, but that doesn't mean we should conclude it is the pillar's fault that the building collapsed -- surely buildings should be able to withstand and recover from a single broken pillar, no?

codexb
5 replies
2h4m

Yes and no. Elon has ego, but I also take him at his word when he says he wants to open source AI. He did the same thing with Tesla's patents.

tsimionescu
1 replies
1h54m

Did you also take him at his word when he said 5+ years ago that Teslas can drive themselves safer than a human "today"? Or that Cybertruck has nuclear explosion proof glass (which was immediately shattered by a metal ball thrown lightly at it)?

Musk has a long history of shamelessly lying when it suits his interest, so you should really really not take him at his word.

foofie
0 replies
1h9m

Pointing out Elon Musk's claims regarding free speech and the shit show he's been forcing upon Twitter, not to mention his temper tantrum directed at marketing teams for ditching post-Musk Twitter due to fears their ads could showcase alongside unsavoury content like racist posts and extremism in general, should be enough to figure out the worth of Elon Musk's word and the consequences of Elon Musk's judgement calls.

tarruda
0 replies
1h56m

Did you read the article? According to it, Elon Musk agreed with AGI being closed source, but he wanted controlling interest.

artninja1988
0 replies
2h1m

Why isn't Grok open source?

MeImCounting
0 replies
1h53m

I seem to remember that being only partially true? Or the license was weird and deceptive? Also, as other replies have stated, why isn't "Grok" open source? Musk loves to throw around terms like open source to generate goodwill, but when it comes time to back those claims up it never happens. I wouldn't take Musk at his word for literally anything.

caeril
2 replies
1h58m

You're generally correct, but what really stings is that Claude 3 Opus was released right at the same time. It's superior to GPT-4 in pretty much every way I've tested. Center of gravity has shifted across a few streets to Anthropic seemingly overnight.

yashasolutions
0 replies
1h34m

meanwhile claude is not globally available...

lanstin
0 replies
1h56m

I have had homework questions (functional analysis and commutative ring theory) that GPT-4 is good enough for, but Claude 3 has been strikingly better.

foofie
0 replies
1h31m

First Sam vs. the board, now Elon vs. everyone

Elon Musk is renowned for being an attention seeker and doing these stunts as a proxy to relevance. It's touring the Texas border wearing a hat backwards, it's messing with Ukraine's access to Starlink alongside making statements on geopolitics, it's pretending that he discovered the railway and the technology for digging holes in the ground as a silver bullet for transportation problems, it's making bullshit statements about cave rescue submarines and then accusing the actual cave rescuers who pointed out the absurdity of it of being pedophiles... Etc etc etc.

I think it makes no sense at all to evaluate the future of an organization based on what stunts Elon Musk is pulling. There must be better metrics.

brindlejim
0 replies
1h28m

The funniest part of the OpenAI post is where someone comes in breathlessly and says "hey, have you read this ACX post on why we shouldn't open source AGI" to the guy who's literally been warning everybody about AGI for decades, and Elon is like: "Yup." Someone was murdered that day. There is nothing more dismissive than a yup.

boringg
0 replies
34m

Serious question: does anyone trust Sam Altman at all anymore? My perspective from the outside is that his public reputation is in tatters, except that he's tied to OpenAI. I'm curious what his reputation is internally and in the greater community.

AlbertCory
0 replies
1h53m

I was with them, sort of, until they had this bit of Comms-major corporate BS:

We’re making our technology broadly usable in ways that empower people and improve their daily lives, including via open-source contributions.

sharkjacobs
23 replies
11h39m

For example, Albania is using OpenAI’s tools to accelerate its EU accession by as much as 5.5 years

What an insane thing to say so matter of factly. This is like a character in an airport bookstore political thriller who is poorly written to be smart.

Falimonda
18 replies
11h27m

Why is that such an insane thing for them to say?

tsimionescu
7 replies
10h58m

How is it not? How could anyone possibly know that number? And what gives you any inkling that it has the remotest chance of being true?

Falimonda
5 replies
10h42m

How about the possibility that OpenAI and the Albanian government - along with many other governments - have a relationship of which you're unaware?

baobabKoodaa
4 replies
10h37m

Uh huh. And they arrived at this 5.5 number how?

It's bullshit.

Falimonda
3 replies
10h7m

You cannot fathom how a government official may have told them that they've managed to accelerate their EU accession through the help of LLMs by a specific number of months?

Here's a speculative scenario that doesn't seem so insane:

Albania has a roadmap for EU accession. The roadmap is broken up into discrete tasks. The tasks have estimates for time to completion. They've been able to quickly hack away at 5.5 years worth of tasks using LLMs.

Your problem with the statement is that they didn't provide a source. Maybe express interest in the facts that might support that instead of freaking out over perceived insanity.

malermeister
1 replies
9h34m

EU accession isn't some jira story where you close tickets.

For a country like Albania, it requires massive social, cultural, political and economical changes. There's no way anyone has a good estimate and there's no way an LLM has magically transformed the culture in a way that's a) meaningful and b) quantifiable.

Turkey has been a candidate for 25 years now, with no meaningful progress.

neom
0 replies
4h38m

Why could it not be 15 years for the cultural stuff and 10 years' worth of paperwork process? I can imagine an LLM cutting that process in half if it involved hiring and training staff, or waiting for staff to become free from other work to produce said paperwork. Out of everything they said in that email, the 5.5 years isn't something I'd pick on given, as you noted, the crazy timelines we're talking about to even be able to put forward an EU application.

tsimionescu
0 replies
9h22m

There is no conceivable way in which an estimate of 5.5 years in a non-physical process is anything other than bullshit. If you're estimating paperwork in years, then you're already giving enormously rough estimates. 5 years maybe would have made some sense, but .5 years at that scale is just a way to make a gut feeling sound precise.

Even more importantly, the process of acceding to the European Union is a political and economic process. The pace at which such things go and the exact requirements are in a constant churn. Today you might need some paperwork, tomorrow some other. You might meet all the technical requirements but not be accepted, or you might not meet all the technical requirements but exceptions can be made, all at the whims of other states' current leaders. Long-term estimates are not even close to plausible for such a process.

loceng
0 replies
3h32m

They asked ChatGPT of course!

Barrin92
7 replies
10h58m

because it's a completely made up claim with a random number thrown in. How do you end up with "5.5" years for a process that doesn't have a timetable, number of estimated emails required for EU accession divided by number of emails ChatGPT can generate per day?

ascorbic
3 replies
10h11m

I was pretty skeptical about this too, but looking into it there is some basis in fact. They're using it to help translate and integrate the "Acquis communautaire" - the huge body of EU laws and regulations that need to be enshrined in national laws of candidate countries. This is one of the most time-consuming parts of the process, and usually takes many years. Leaving aside how risky this is (presumably they will have checks in place), I can see how this could save years of work. Saying 5.5 years is ridiculous false precision though.

It won't help with the toughest part though, which is the politics of the other member states.

Jensson
1 replies
9h47m

Translating that doesn't seem to take very long though. Looked up an example: Sweden voted to enter the EU on 13 Nov 1994, and legally entered (and had thus finished incorporating that into its legislation) on 1 Jan 1995, so 1.5 months at most. Not sure what 5.5 years means, maybe they meant the total amount of working years in man-hours saved?

ascorbic
0 replies
7h6m

The incorporation of the acquis is part of the negotiation process, and had already been completed by the time of the referendum. It had started in 1991, which was unusually fast.

Falimonda
2 replies
10h44m

What do emails have anything to do with EU accession?

thinkingemote
0 replies
10h23m

Exactly. No one knows! The point is that there are many unknown variables but yet the company estimates the future outcome based on many variables.

Personally I think it's a kind of extrapolation based on new processes. It's more PR than math.

The aim of the message is to say that AI can help wider society

Barrin92
0 replies
10h24m

literally nothing, that was the point. There is no way you make such a weirdly specific claim about an indeterminate process without cooking up some invented metric. It's like saying "ChatGPT accelerated my next promotion by 24.378 days, this is a totally scientific number, I swear"

zo1
0 replies
10h26m

Don't know what others think, but to me that's just corporate gobbledygook word salad that's simultaneously true, untrue, and unverifiable all at the same time. They might as well say something like "we're forming synergy with humanity's proverbial intellect for the betterment of all humanitarian goals such as equity, justice and fairness".

b-side
0 replies
10h18m

Last time I checked, the decision to admit a new member state requires the unanimous approval of the EU's current member states. As such, unless OpenAI can literally influence world politics, the claim is 100% bogus. All it takes is one rogue member and Albania will never join the EU.

tsimionescu
1 replies
11h0m

The lack of self-awareness of whoever wrote this, and of all those who signed it, is mind boggling.

Jensson
0 replies
9h53m

I assume they wrote that with an LLM. That reads exactly like typical LLM arguments.

herewulf
0 replies
10h17m

Indeed. I found the claim about Icelandic even more telling. Here's a small language that has existed in basically its present form for the past thousand years (i.e. it has changed little since Old Norse). It is also notoriously conservative in its preservation of native vocabulary through avoidance of loanwords.

Iceland/Icelandic don't need gee whiz computer things to "preserve" itself.

GaryNumanVevo
0 replies
9h27m

To me, it reeks of "Rationalist-speak". Throw a couple numbers with decimal points to sound precise, mention Bayesian priors a few times.

LatticeAnimal
18 replies
15h44m

As we get closer to building AI, it will make sense to start being less open. The Open in OpenAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science (even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes).

That is surprisingly greedy & selfish to be boasting about in their own blog.

davej
10 replies
8h34m

I think you're misreading the intention here. The intention of closing it up as they approach AGI is to protect against dangerous applications of the technology.

That is how I read it anyway and I don't see a reason to interpret it in a nefarious way.

sillysaurusx
7 replies
8h28m

"Tools for me, but not for thee."

edanm
5 replies
8h26m

When the tools are (believed to be) more dangerous than nuclear weapons, and the "thee" is potentially irresponsible and/or antagonistic, then... yes? This is a valid (and moral) position.

sillysaurusx
3 replies
8h15m

If so, then they shouldn’t have started down that path by refusing to open source 1.5B for a long time while citing safety concerns. It’s obvious that it never posed any kind of threat, and to date no language model has. None have even been close to threatening.

The comparison to nuclear weapons has always been mistaken.

edanm
2 replies
8h13m

Oh I'm talking about the ideal, not what they're actually doing.

sillysaurusx
1 replies
8h8m

Sadly one can’t be separated from the other. I’d agree if it was true. But there’s no evidence it ever has been.

One thought experiment is to imagine someone developing software with a promise to open source the benign parts, then withholding most of it for business reasons while citing aliens as a concern.

edanm
0 replies
7h12m

One thought experiment is to imagine someone developing software with a promise to open source the benign parts, then withholding most of it for business reasons while citing aliens as a concern.

I mean, I'm totally with them on the fear of AI safety. I'm definitely in the "we need to be very scared of AI" camp. Actually the alien thought experiment is nice - because if we credibly believed aliens would come to earth in the next 50 years, I think there's a lot of things we would/should do differently, and I think it's hard to argue that there's no credible fear of reaching AGI within 50 years.

That said, I think OpenAI is still problematic, since they're effectively hastening the arrival of the thing they supposedly fear. :shrug:

philosopher1234
0 replies
8h13m

It makes people feel mistrusted (which they are, and in general should be). It's a bit challenging to overcome that.

nibab
0 replies
3h28m

I think the fundamental conflict here is that OpenAI was started as a counter-balance to google AI and all other future resource-rich cos that decide to pursue AI BUT at the same time they needed a socially responsible / ethical vector to piggyback off of to be able to raise money and recruit talent as a non profit.

So, they can't release science that the googles of the world can use to their advantage BUT they kind of have to because that's their whole mission.

The whole thing was sort of dead on arrival and Ilya's email dating to 2016 (!!!!) only amplifies that.

_heimdall
1 replies
5h50m

Two things that jump out at me here.

First, this assumes that they will know when they approach AGI. Meaning they'll be able to reliably predict it far enough out to change how the business and/or the open models are setup. I will be very surprised if a breakthrough that creates what most would consider AGI is that predictable. By their own definition, they would need to predict when a model will be economically equivalent to or better than humans in most tasks - how can you predict that?

Second, it seems fundamentally nefarious to say they want to build AGI for the good of all, but that the AGI will be walled off and controlled entirely by OpenAI. Effectively, it will benefit us all even though we'll be entirely at the mercy of what OpenAI allows us to use. We would always be at a disadvantage and will never know what the AGI is really capable of.

This whole idea also assumes that the greater good of an AGI breakthrough is using the AGI itself rather than the science behind how they got there. I'm not sure that makes sense. It would be like developing nukes and making sure the science behind them never leaks - claiming that we're all benefiting from the nukes produced even though we never get to modify the tech for something like nuclear power.

davej
0 replies
4h51m

Read the sentence before, it provides good context. I don't know if Ilya is correct, but it's a sincerely held belief.

“a safe AI is harder to build than an unsafe one, then by opensourcing everything, we make it easy for someone unscrupulous with access to overwhelming amount of hardware to build an unsafe AI, which will experience a hard takeoff.”

Jensson
5 replies
15h33m

Yeah, they are basically saying that they called themselves OpenAI as a recruitment strategy but they never planned to be open after the initial hires.

Spivak
3 replies
13h16m

Why do tech people keep falling for this shtick? It's happened over and over and over with open source becoming open core, becoming source available, becoming source available with closed source bits.

How society organizes property rights makes it damn near impossible to make anything commons in a way that can't in practice be reversed when folks see dollar signs. Owner is a non nullable field.

sva_
0 replies
5h50m

Because the people that got recruited on those terms suddenly see what kind of dough they will be making, I suppose.

blackoil
0 replies
6h27m

By believing that since they aren't "MBAs", economics and human behaviour don't apply to them.

8organicbits
0 replies
2h11m

Thankfully the code as it was during the open source stage can be forked, maintained, and developed further if another party is interested.

Aeolun
0 replies
13h44m

They’re pretty open about that now though.

gyudin
0 replies
13h4m

Sounds pretty much like any other corpo “Pay us bucks and benefit from our tech”

nodesocket
15 replies
15h52m

Seems pretty straightforward... Elon invested when they claimed to be an open-source non-profit, and clearly now are closed source and very for profit. The notion that the for profit is just a temporary means to achieve scale goals is laughable.

ipsum2
5 replies
15h46m

It's probably worth reading the post, which refutes these points.

grecy
4 replies
15h41m

It does not refute them in any way. It very openly admits them while (trying to) justify them.

ipsum2
3 replies
15h17m

"In late 2017, we and Elon decided the next step for the mission was to create a for-profit entity."

the_optimist
2 replies
14h3m

1) They defied their original charter. 2) Separately: creating a for-profit entity is not what happened here. They transformed into a for-profit entity and discarded the non-profit mission. There is no vestige left of the original non-profit. Same funds, same code, same IP.

gamblor956
1 replies
13h0m

The for profit entity is owned by the non profit entity. This is permissible and a number of non profits operate this way.

grecy
0 replies
27m

Would you be happy if you donated tens of millions to a non profit entity which then changed their mission and suddenly started making a profit?

dmode
5 replies
15h30m

The emails clearly show that Elon had no interest in this being a non-profit, especially because he advocated for OpenAI to be absorbed by Tesla multiple times. The only reason he is suing is because xAI is dead and he needs some AI love.

extheat
2 replies
15h24m

He's not asking for money in the lawsuit, so what is it going to do? I thought it was for show in the first place; the goal was to hold them to their founding principles. Which are, of course, pretty subjective. Top to bottom it's God complex over there, and the secret justification for closed source was pretty ironic--we only found out this key unmentioned detail once the lawsuits started flying.

sidcool
1 replies
15h21m

He is not asking for money because that would make it crystal clear he's after the money. In actuality, he is just bitter he missed the boat. He is not after the money, but the fame.

neom
0 replies
13h46m

"For an award of restitution and/or disgorgement of any and all monies received by Defendants while they engaged in the unfair and improper practices described herein"

He claims he'll give all the money to charity, but he's certainly asking for money.

romanovcode
1 replies
15h10m

Is xAI dead? I thought he was very much invested in Grok.

paulpauper
1 replies
15h50m

It is like Google's 'don't be evil' slogan. OpenAI proved to be anything but open.

pknerd
0 replies
11h53m

Don't use their models. As a paid member I am fine with it.

tintor
0 replies
15h16m

Where did they claim to be open-source?

chjj
12 replies
15h15m

I find it very strange that OpenAI would post this in the middle of a lawsuit. Shouldn't all of these emails come out in discovery anyway? Publishing this only benefits OpenAI if they're betting on the case never reaching discovery. It seems like they just want to publish very select emails which paint a certain picture.

Also, DKIM signatures are notably absent, not that we could verify them anyway since the emails are heavily redacted.

EMIRELADERO
6 replies
14h57m

Also, DKIM signatures are notably absent, not that we could verify them anyway since the emails are heavily redacted.

What are you implying? Faking emails opens you up to libel. I doubt OpenAI is trying to add another lawsuit to their workload.

chjj
5 replies
14h51m

What are you implying?

I'm not implying anything. I'm pointing out a lack of openness on OpenAI's part.

EMIRELADERO
4 replies
14h16m

Unless it's brought up as a point in court, actual exhibits in lawsuits don't have DKIM signatures either.

chjj
3 replies
14h5m

I'm not making a legal argument.

EMIRELADERO
2 replies
13h28m

Oh!

In that case, let me ask: where have you ever seen DKIM signatures being provided on a public blog post/press release?

chjj
1 replies
13h13m

Nowhere, and I criticize it every time. I've even gone to the lengths of trying to find DKIM keys for published emails in the past[1].

But at least those emails were un-redacted. To spell that out for you: in a highly-charged, highly-contentious political setting, emails were published un-redacted and theoretically verifiable with DKIM. No such possibility exists for OpenAI's blog post.

[1] https://news.ycombinator.com/item?id=24780798#24785123
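
For reference, here is roughly what verifying a published raw email would look like, assuming you had the complete original message with headers (a sketch using the third-party dkimpy package; the file name is hypothetical):

    # Sketch: check whether a published .eml file carries a valid DKIM signature.
    import dkim

    with open("musk_email.eml", "rb") as f:  # hypothetical file
        raw_message = f.read()

    # dkim.verify() fetches the signer's public key from DNS
    # (selector._domainkey.<domain>) and checks the signed headers and body hash.
    if dkim.verify(raw_message):
        print("DKIM signature verifies: message unmodified since signing")
    else:
        print("No valid signature (missing, altered, redacted, or key rotated)")

Of course this only works on the full, unmodified message; any redaction or reformatting breaks the body hash, which is exactly why redacted excerpts in a blog post can't be independently verified.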

EMIRELADERO
0 replies
8h6m

What benefit would that bring, considering the points raised earlier?

phire
1 replies
14h18m

If they just waited until discovery, it would be Musk's lawyers that control the narrative, choosing which parts of the emails to focus on publicly, which to ignore and what story to paint around it.

As you say, this way, they get to control the narrative. Nothing strange at all.

From what I can tell, Musk's lawsuit doesn't have much of a chance in the first place. I don't think he expects to win; it seems to be more a tool in Musk's own media push, and while I wouldn't bet on it, there is absolutely a chance it won't reach discovery.

I think OpenAI have quite wisely decided that the real battle here is the court of public opinion. They know it's possible to win the court case but lose in the eyes of the public. And they know that Musk has a lot of previous experience (and success) in this battleground.

chjj
0 replies
13h48m

OpenAI might think they're winning a PR battle by shaping a narrative here, but they are now locked into this narrative, possibly to their detriment in court. Just seems like a bad idea. I find it odd that their lawyers wouldn't steer them clear of something like this.

I think OpenAI have quite wisely decided that the real battle here is the court of public opinion.

They've failed to win me over. As far as I can tell, their attempted PR victory hinges on a single email with a one-word reply from Musk. Their own emails are far more damning as they give a detailed explanation of why they believe AI should not be open.

To an observer who already dislikes Musk, I'm sure it's a PR win. To someone neutral or someone who dislikes both parties, it's a PR disaster.

slimebot80
0 replies
12h2m

Musk is blabbering like a broken fountain on Xitter shrug

Xitter is the #1 source of facts, so - can't blame them for counterbalancing a little.

ilaksh
0 replies
14h20m

The court of public opinion might be more important in some ways than any real court. People have already begun judging OpenAI based on the lawsuit.

SilverBirch
0 replies
7h51m

It's only strange if you think the lawsuit has merit, and I'm yet to find anyone credible who thinks it does. If you believe the lawsuit is just a vexatious move to attack OpenAI then it makes perfect sense to just fight it as a PR battle.

bugglebeetle
12 replies
15h46m

We couldn’t agree to terms on a for-profit with Elon because we felt it was against the mission for any individual to have absolute control over OpenAI.

I think Musk’s lawsuit is without merit, but it’s laughable in light of what happened with the recent leadership struggle to include this bit.

andsoitis
11 replies
15h25m

I think Musk’s lawsuit is without merit

If it does not have merit, why does OpenAI feel compelled to write this post? Unless they know or at least fear that the lawsuit has merit.

bugglebeetle
10 replies
15h16m

Public relations. What Musk has said about OpenAI is true: they abandoned their mission and sold themselves off to Microsoft in pursuit of profit, and have no intention of making the technology (or its rewards) widely available to all. They can’t counter any of these claims credibly, but can still (correctly) paint the messenger as arguing in bad faith to diminish their impact. Being right about all of this doesn’t accrue any material benefit to Musk, however, because the terms of the agreement are so full of loopholes that just about any lawyer can drive a semi through them.

danenania
9 replies
14h26m

“have no intention of making the technology (or its rewards) widely available to all”

Has any organization done more to make AI widely available to all? I can’t think of one. True, they’re not making the source code available. True, they’re making tons of money off it. But their most powerful AI is accessible to anyone with a credit card.

bugglebeetle
5 replies
14h13m

Yes, Meta, Mistral, Allen AI, and many others.

danenania
4 replies
14h2m

None of those are anywhere close to as easy for the average person to access as OpenAI’s offerings. And none are as good as GPT-4 afaik.

There are many arguments one can make against OpenAI. I just don’t see how “they don’t want to make AI widely available” makes any sense at all. A world where OpenAI never existed would clearly be a world where AI is far less accessible to the average person.

bugglebeetle
3 replies
13h50m

There is no way to reconcile their founding statements and principles with what they’re doing now. They don’t even publish legitimate research papers any longer and barely even try to defend themselves as aligning with their prior mission.

danenania
2 replies
13h37m

I think in their minds, their mission has always been to make AI widely available. It seems like they initially thought open source and open research were a viable path to achieving that mission, but later they changed their minds. I get why people are upset about that pivot, but considering they single-handedly brought AI out of the research lab and into the mainstream, it seems hard to argue that they were wrong.

In any case, sticking to the point of this discussion, there is no indication of any kind that they don’t intend to make AI widely available. It’s like saying McDonald’s doesn’t intend to make hamburgers widely available.

rrr_oh_man
0 replies
12h23m

> I think in their minds,

I've found it good practice to judge people by their actions, not their assumed intentions.

bugglebeetle
0 replies
12h44m

To be clear, widely available is my paraphrase while their founding principles stated that the technology should be “freely available” and explicitly reference open source. That’s entirely inconsistent with what they’re doing now. What you’re calling a “pivot” is just casting aside what they said they stood for in pursuit of money, which fine, whatever, happens all the time. We don’t have to strain ourselves making excuses for them, because, again, they hardly bother to do that themselves.

andsoitis
1 replies
14h16m

their most powerful AI is accessible to anyone with a credit card

Only 22.26% of the world's population had credit cards as of 2021.

danenania
0 replies
14h6m

For people without one, I think ChatGPT free version is also by far the best option available no?

redox99
0 replies
9h52m

I don't know about "widely available". It's a matter of openness.

Coca-Cola is widely available, but it's not "open" (its recipe is a secret; maybe others have reverse engineered the formula, but Coca-Cola Co.'s intention was always to keep it a secret).

tcgv
11 replies
5h19m

This post is a lame PR stunt, which will only add fuel to the fire. It tries to portray OpenAI as this great benefactor and custodian of virtue:

“As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science...”

I've seen you (and Sam and other OpenAI people) doing a lot of interviews recently extolling the virtues of open sourcing AI, but (...) There are many good arguments as to why the approach you are taking is actually very dangerous and in fact may increase the risk to the world

How lucky are we that OpenAI, in its infinite wisdom, has decided to shield us from harm by locking away its source code! Truly, our digital salvation rests in the hands of corporate benevolence!

It also employs outdated internal communication (selectively) to depict the other party as a pitiful loser who failed to secure control of OpenAI and is now bent on seeking revenge:

As we discussed a for-profit structure in order to further the mission, Elon wanted us to merge with Tesla or he wanted full control.

Elon understood the mission did not imply open-sourcing AGI. As Ilya told Elon: “As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science...”, to which Elon replied: “Yup”. [4]

If their case to defend against Elon's action relies on his "Yup" from 2016, and justifications for being able to compete with Google, it's not a strong one.

jstummbillig
6 replies
3h34m

Assume good intentions and nothing of substance being hidden: Is there any way to be transparent here that would have satisfied you or are you essentially asking for them to just keep to themselves or something else?

the_mar
3 replies
3h4m

Oh, honey. Never assume good intentions when lawyers are involved

ncallaway
2 replies
2h40m

That wasn't the point of the question. The question was a hypothetical to test if there was any possible response that would've satisfied the original poster.

They're not suggesting to assume good intentions about the parties forever. They're just asking for that assumption for the purposes of the question that was asked

the_mar
0 replies
1h0m

The answer is no. Companies don’t do things out of good intentions when lawsuits are involved.

rat9988
0 replies
2h20m

There is no satisfying answer if your actions before were not satisfying. The question implies that the original poster cannot be satisfied, and thus shifts the blame implicitly. The problem is not what the answer is, or how it is worded. The answer only portrays the actions, which are by themselves unsatisfying.

polygamous_bat
1 replies
2h9m

A great way to be transparent would be to admit that some enormous egos prevented work that should be open from being open, and to counterfactually open it up. Sure, it may piss off Microsoft, but statistically, things that piss off Microsoft have also been great for the world at large.

But that will never happen, will it?

qarl
0 replies
21m

I thought it was the fact that they needed a lot of money to train AGI.

Everyone seems to agree that's the case. Do you have evidence that it's not?

dist-epoch
2 replies
41m

So, where is the source code of Grok, the LLM that Elon is building?

samatman
0 replies
30m

Did Twitter become a nonprofit while I wasn't paying attention?

Sohcahtoa82
0 replies
25m

Did Elon ever announce, or even imply, that Grok would be open source?

boringg
0 replies
3h37m

I also found their method of trying to tar and feather kind of entertaining: highly selective (and much dated) e-mail comms in a very roughly packaged statement. If that's how they are trying to protect their public image, it doesn't sell their position strongly; if anything it makes them look worse. It looks very amateurish, almost childish, to be honest.

threeseed
9 replies
15h47m

Elon Musk's antics have been utterly shameless.

a) If he cared at all about altruism, humanity etc he would open-source Grok and allow anyone access to the Twitter dataset.

b) He talks all about AGI in relation to Tesla but then used it as a weapon to try and extract more control over the company from investors.

bamboozled
6 replies
15h46m

I don't really understand this view honestly, why should Grok be open sourced? When was Twitter called OpenTwitter and had a similar mission statement to OpenAI?

threeseed
3 replies
15h30m

If Musk wants to take the high moral ground and lecture others about openness and the importance to humanity he should start with his own actions.

It seems pretty clear that altruism is a cynical marketing and PR technique for him rather than something he actually cares about.

xcv123
2 replies
15h20m

Musk cofounded and funded OpenAI.

threeseed
0 replies
15h3m

And then pushed them to create a for-profit entity and merge with Tesla.

bamboozled
0 replies
15h13m

Not a giant Musk fan lately, but this is true; it wouldn't even be a thing without him.

Once bitten twice shy.

glenngillen
1 replies
15h34m

The parent post said if “Musk” cared, not Twitter. He also doesn’t have anything in the way of public shareholders to appease at Twitter anymore.

Whether or not it’s a reasonable expectation is up for debate. But it is at least a congruent argument.

bamboozled
0 replies
8h47m

There's a difference between developing and open sourcing models and technology versus end-user products, though.

treme
1 replies
14h23m

ah yes, he should be shamed for being a major force behind accelerating the adoption of electric cars by a few years minimum.

danans
0 replies
12h44m

He's being shamed for being a loud hypocrite about OpenAI, not for his accomplishments pushing the mass adoption of EVs. Two different things.

atleastoptimal
8 replies
11h20m

Did people really think leading the development of the most important invention in human history wouldn’t involve a little bit of drama?

I know everyone wants OpenAI to be a magical place that open sources their models the moment they’re done training, but it’s clear that they’ve chosen a reasonable path for their business based on both practicality and risk reduction. If they had gone another way, today they’d be an unceremonious branch of Elon’s empire, or a mid-level nonprofit that never had the means to hire the best people and is still spending all their time soliciting donations to train GPT 3.5.

They did what they believed they had to do to be the ones to get to AGI. Will they be the safest stewards of this tech? Hard to say, though it’s clear even the once safety minded Anthropic isn’t shying from releasing SOTA models.

_heimdall
3 replies
4h39m

most important invention in human history

How can we possibly weigh all human inventions and decide this one, which has yet to even be invented, is the most important?

JoeAltmaier
1 replies
4h34m

That's an easy one. Lots of ways. Just think about it for a minute.

Here's a quick list I came up with:

First time we've created something artificial we can talk sensibly to and get sensible knowledgeable responses.

Exporting expertise in a way that's trivial to consume and employ for millions of people.

Allow autonomous robotic servants to operate in a chaotic environment such as a home or business.

Replace entire swaths of knowledge-workers in a single generation.

What have you thought of?

Here's some of what ChatGPT lists as reasons:

Artificial intelligence (AI) is arguably one of the most important inventions in human history due to its transformative impact across various fields. Here are some ways in which AI is considered crucial:

And it goes on to list: automation, decision making, healthcare, accessibility, sustainability, creativity, economic growth, space exploration.

_heimdall
0 replies
3h23m

AI, and therefore AGI, haven't been created yet as far as we know. We don't know what it will be capable of, what it will do to our society, or to humanity as a whole.

Our entire world changed with some of the most basic inventions that allowed for agrarian societies. The same goes for basic inventions that allowed for the industrial revolution.

I understand that the hopes for A(G)I include changes that would be hugely impactful, but they simply haven't materialized yet. Even when they have, how can we possibly weigh the different inventions? We will never know what the world would look like without agriculture, oil, the printing press, electricity, etc. Is the weighing entirely based on modeling, opinion, and/or hope?

neom
0 replies
4h31m

I try not to nitpick and let people on HN story tell a bit, it's fun reading. However, if I were to join you on the nitpicking, I'd further: Who is to measure "important" - oil and coal related products (automotive/electricity) may end up being the most important because they are our undoing! =)

Timber-6539
2 replies
11h10m

... chosen a specific path for their business based on both practicality and risk reduction

The risk reduction didn't go so well now that Elon is putting up lawyers to force them to become more "open".

This organization thrives in the dark and they know their secret to success depends on it. Would save everyone a lot of time if they came out as a proper non-profit and dropped "open" in the name/branding.

If they had gone another way, today they’d be an unceremonious branch of Elon’s empire

They substituted this dream with Microsoft.

atleastoptimal
1 replies
11h5m

I’m pretty sure Elon won’t win the case, and still, 49% is less than 100%

timeon
0 replies
2h46m

I’m pretty sure Elon won’t win the case

0%. Not 1%. I wish it were otherwise.

ilrwbwrkhv
0 replies
4h21m

Lol I'm amazed that all the tech bros have fallen for their marketing and think AGI is really around the corner.

baking
6 replies
15h38m

Elon said we should announce an initial $1B funding commitment to OpenAI. In total, the non-profit has raised less than $45M from Elon and more than $90M from other donors.

This is not reflected in their 990s. They claim $20 million in public support and $70 million in "other income", which is missing the required explanation.

Also, why are none of the current board members of OpenAI included as authors here? Is there a problem in the governance structure?

Elon could not legally contribute more than he did without turning OpenAI into a private foundation. Private foundations are required to give away 5% of their total assets annually and are not permitted to own substantial stakes in for-profit businesses.

Showing old emails from people who clearly don't understand what they are getting into is not very helpful to their case. Maybe if they had talked to a lawyer who understood non-profit law, or even just googled it.

If the $70 million was in fact a donation instead of income, they fail the public support test and are de facto a private foundation.

mikeyouse
5 replies
14h37m

A bunch of what you wrote isn't accurate..

Here's the 2020 990 which shows the first 5 years of the org's existence (including the time at question for this suit): https://apps.irs.gov/pub/epostcard/cor/810861541_202012_990_...

Page 15 is Schedule A Pt 2 which shows the total contributions by year. They did indeed raise ~$133M over that time frame. Row 5 shows the contributions from any 1 person who contributed more than 2% of their total funding (this excludes other nonprofits) -- so the $41M there is definitely Musk. So his share was only ~30% of the total and the other 70% was public support which you can confirm in Section C at the bottom of that page.

"Public support" includes other nonprofits - and it's fine of e.g. Musk 'laundered' other funding via a DAF or something at a different nonprofit since the funds belong to that nonprofit and they have ultimate discretion over the grant.

baking
4 replies
14h21m

I think I screwed up. It has been a while, but it looks like I misread Part II, Section B, Line 10 as $70 million, not $70,000. Let me check the previous years to see if I'm thinking of something else. Thanks for double-checking this.

I know they got $20 million from Open Philanthropy which qualifies as public support, so I am still wondering about the other $70 million, but it is not the smoking gun that I thought it was.

It has to be made up of donations from individuals under $2.6 million or from other public charities, but not private foundations.

mikeyouse
3 replies
13h47m

Most rich people set up DAFs alongside their family foundations so that they can make large contributions to this type of org without triggering disclosure or private foundation tests -- so e.g. Musk will create a DAF at Fidelity Charitable and give them $100M and collect the associated tax break in year 1 -- he can then direct Fidelity to grant $20M/year to OpenAI, which will show up as Public Support since it's coming from another nonprofit entity and Fidelity maintains ultimate discretion over the funds.

Edit - Got curious and sure enough - this is the 28,000 page 990 filing for Fidelity Charitable: https://apps.irs.gov/pub/epostcard/cor/110303001_202006_990_...

On page 205 there's a $3.5M donation to OpenAI from 2019.

Likewise here for SVCF on page 237 (https://apps.irs.gov/pub/epostcard/cor/205205488_201912_990_...) - there's a $30M donation to OpenAI in 2019.

trogdor
0 replies
12h43m

I am an investigative reporter, and I approve of your research tenacity!

Nice :)

baking
0 replies
3h8m

Reid Hoffman gave $10 million through his private foundation, Aphorism Foundation, in 2017 and 2018:

https://apps.irs.gov/pub/epostcard/cor/464347021_201712_990P...

https://apps.irs.gov/pub/epostcard/cor/464347021_201812_990P...

Since private foundations aren't public charities, they don't have the pass-through protection of donor-advised funds, so this should have been excluded from the public support total because it is more than 2% of the total support.

Also, this reporter did some additional legwork earlier in 2023: https://techcrunch.com/2023/05/17/elon-musk-used-to-say-he-p...

NewsaHackO
6 replies
15h52m

They redefined the open in OpenAI in an almost Orwellian way.

aeternum
3 replies
15h38m

Just like Google's "Don't be evil"

*only applicable to certain definitions of evil

romwell
2 replies
14h0m

Google scrapped "Don't be evil" a long time ago.

Now it's some kind of "Kiss the ring and have respect for the hand that feeds you" or something.

aeternum
1 replies
2h20m

Agreed, but "Do no evil" shouldn't be the kind of thing you just scrap.

Isn't doing that in and of itself somewhat evil?

krapp
0 replies
2h14m

"Don't be evil" is a meaningless phrase, because "evil" is an entirely subjective concept.

Google was born out of NSA and CIA research grants, and they operate under capitalist incentives. Many would say they have never not been evil. But again, it's a matter of perspective.

It's obviously not a phrase any company would ever take seriously since it has no real legal implications, and it's honestly weird how many people seem surprised and offended to find this out. No one is getting fired because Raisin Bran cereal doesn't guarantee a minimum of two scoops of raisins, either.

scruple
0 replies
15h45m

Not particularly surprising, given the last few months of activity at OpenAI.

BeetleB
0 replies
12h42m

As is often argued right here on HN, many programmers believe the Open Source Initiative/Foundation redefined "open" and are not happy with their co-opting of the term. OpenAI is merely playing the same game.

reallymental
5 replies
15h46m

All that's laid bare here is the carcass of their personalities.

None of this screams "I'm going to change this world", this organization is mired in politics from the start, no wonder Satya is hedging his bets.

edit: grammar

sverhagen
3 replies
15h35m

You can start a little mom-and-pop store and get plenty of politics. No surprise a billion(s)-dollar company has politics. It's everywhere. I'm not going to claim that it's their openness that is allowing us to see it here, but in some companies it spills out in the open, in others it stays within the family, but there's always politics!

dkjaudyeqooe
2 replies
14h0m

Yes, but when the politics sets about destroying the company, you have a real problem.

See Apple in the 90's for instance.

HaZeust
1 replies
12h0m

Problems like being the company with the highest market cap in history for 13 years running!

dkjaudyeqooe
0 replies
1h16m

No, Apple was on the verge of bankruptcy and had to be bailed out by Microsoft in the 90's.

rchaud
0 replies
15h14m

The greatest trick AI has played is convincing the world that it's a black box that could genuinely grow out of the control of the powerful, well-connected few that control its development.

fillskills
5 replies
15h27m

This is going to come across as a contrarian view and get a lot of downvotes - OpenAI is being 'open'. Not Open Source, but Open. They are bringing AGI tech to the common person at affordable pricing. Before OpenAI, all the AI and its use cases were hidden behind closed doors at Google. By kickstarting the AI race, they have created high standards, new knowledge, and enough momentum to hold up the entire startup ecosystem and the US economy. They make hard choices, and now the entire industry is moving forward because of those choices.

bevekspldnw
0 replies
13h17m

Papers aren’t products, and most of the authors - who did the actual work - are long gone.

tailspin2019
0 replies
13h35m

I don’t have a horse in this race but I’d definitely like to see this viewpoint discussed more.

The open source angle probably matters more to the HN audience but it’s hard to argue that OpenAI hasn’t “opened up access to AI” to a vast global audience.

And that’s not to say that the means by which they’ve done so doesn’t still need some scrutiny.

hartator
0 replies
15h20m

We all had an understanding that open in OpenAI meant open source. Come on.

andoando
0 replies
7h17m

By that definition every corporation aims to be as open as possible…

advael
5 replies
14h20m

Like most airing out of drama, this doesn't really materially change anything aside from making everyone involved look extremely bad

But also, all entities involved already looked bad to any observers who weren't rooting for them so deeply that nothing would change their minds, so this is basically tabloid-fodder drama in terms of importance, as far as I can tell.

gitaarik
3 replies
10h51m

Well, isn't it good that the public learns more about the bad stuff they're doing?

pests
1 replies
7h38m

Bad stuff? What bad stuff?

gitaarik
0 replies
7h33m

Well, you said it made them look bad. Or did you mean that they aren't doing anything bad but are only making it look bad for each other?

Edit: sorry, I meant the parent said that. But I guess you understand it now.

Edit 2: To be clear, the bad stuff:

- OpenAI not really being "open" in any way.

- Elon not really caring about that but just wanting to take revenge because they became successful without him.

advael
0 replies
9h37m

Yes, transparency is perhaps the most important check against power. We are constantly told by governments and corporations alike that there is some dire safety reason they have to do so much secretly, and while rare exceptions where this may actually be true for some temporary situation do exist (often the case in wars, perhaps, though not decades after the fact as governments so often allege), it's clear that this is often a lie told for the obvious reason that this secrecy allows powerful people to act without this check. We only get told the lie in the first place because this secrecy breaks expectations of transparency; often - as with OpenAI - it breaks promises that were previously made.

So we have them airing their dirty laundry because a billionaire sued them for his own petty reasons, having planned to break the promises the organization made that he's now suing them over in a similar way. Billionaire is hypocrite, news at eleven. This doesn't exonerate the organization. I hope this suit makes people angry. I hope it makes it harder to get away with this facile and infantilizing line about keeping people in the dark to protect them

gandutraveler
0 replies
7h25m

Agree. But this showed Elon to be a much worse crybaby.

iteygib
4 replies
3h45m

This is the usual Musk play book. He is a salesman, not a tech guru, and plays the "good guy card" when it suits him in order to fool his followers en masse. A few months ago this forum was very much for the profit model of Altman after his ouster, and now it's the other way around.

Why believe any sort of discussion on the internet at all anymore? Can Hacker News prove the discussions here are not by Tesla staff? By OpenAI staff? By bots? By AI responses? The credibility and reliability of this medium we call the internet is hitting a saturation point. This was obviously always the case to some degree, but the profit models of most businesses are now built on the backbone of social media hype, all of which is more completely, and easily, fakeable than ever before.

xbar
0 replies
3h29m

Concluding that this forum was somehow uniform in the position you suggest is too revisionist for me to finish reading the remainder of your comments.

sva_
0 replies
3h14m

A few months ago this forum was very much for the profit model of Altman after his ouster

I did not get that impression at all.

oglop
0 replies
3h10m

I noticed the same thing. I also notice pompous dorks love to hate on people like you who point this out.

loceng
0 replies
3h28m

"A few months ago this forum was very much for the profit model of Altman after his ouster, and now it's the other way around."

Speak for yourself.

Oddly your second paragraph is an argument against the validity of your first paragraph.

hartator
4 replies
15h22m

it's totally OK to not share the science

Ilya being the bad guy is a surprise here.

reducesuffering
2 replies
10h31m

If that surprises you, you haven't followed Ilya much. Ilya is very terrified of AGI destroying humanity and doesn't believe in this "accelerate AI and give it to everyone" naivety.

'Back in May 2023, before Ilya Sutskever started to speak at the event, I sat next to him and told him, “Ilya, I listened to all of your podcast interviews. And unlike Sam Altman, who spread the AI panic all over the place, you sound much more calm, rational, and nuanced. I think you do a really good service to your work, to what you develop, to OpenAI.” He blushed a bit, and said, “Oh, thank you. I appreciate the compliment.”

An hour and a half later, when we finished this talk, I looked at my friend and told her, “I’m taking back every single word that I said to Ilya.”

He freaked the hell out of people there. And we’re talking about AI professionals who work in the biggest AI labs in the Bay area. They were leaving the room, saying, “Holy shit.”

The snapshots above cannot capture the lengthy discussion. The point is that Ilya Sutskever took what you see in the media, the “AGI utopia vs. potential apocalypse” ideology, to the next level. It was traumatizing.'

https://www.aipanic.news/p/what-ilya-sutskever-really-wants

john2x
1 replies
10h25m

Is that talk mentioned in the quote available online?

reducesuffering
0 replies
3h9m

No I don’t believe it was publicly recorded.

rrr_oh_man
0 replies
12h27m

Why? Not that he hasn't been politicking...

fddrdplktrew
4 replies
12h12m

OpenAI is not far ahead of the alternatives... it doesn't matter much what the outcome will be.

esafak
3 replies
11h35m

Unless they develop an agent that can recursively self-improve to the point of an intelligence explosion!

speedgoose
1 replies
11h13m

I assume almost everyone already trains on synthetic data.

esafak
0 replies
11h11m

The next frontier is the physical world; untainted by LLMs.

fddrdplktrew
0 replies
11h13m

now recursive functions are AI? /s

endisneigh
4 replies
15h43m

I'm surprised they would publish this honestly. Especially considering that they're in a lawsuit.

baking
3 replies
15h26m

Probably didn't talk to a lawyer.

neom
0 replies
14h16m

That seems incredibly unlikely given a large percentage of the people around the top of OpenAI are lawyers, most notably, their Chief Strategy Officer.

ilaksh
0 replies
14h19m

What is the actual justification for not discussing it openly? I know that is typical advice, but I am not sure it is correct advice for these types of situations anymore.

DustinBrett
0 replies
11h12m

GPT-4 can pass the bar exam, so they can just ask it.

andsoitis
4 replies
15h33m

why respond?

wokwokwok
0 replies
14h34m

agree, this seems like a super bitchy kind of he-said-she-said cat fight and the update doesn't make any meaningful contribution other than 'no you didnt!! See, I have emails to prove it!'.

Yeah yeah, put it in the court documents if it's relevant.

This is drama for drama's sake / PR positioning for ??? reasons (someones ego).

I thought OpenAI was playing the 'we're grown ups' game now?

...guess not.

sidibe
0 replies
14h26m

I like it just because I'm fascinated to see how much evidence it takes for some people to see who Elon is

rrr_oh_man
0 replies
12h26m

It's like selective screenshots of your Whatsapp chat

rasz
0 replies
12h42m

its afraid

maxlamb
1 replies
14h3m

Does it say he found him anywhere? All I see in the article are emails to him from 2016

icpmacdo
0 replies
13h50m

He is a signatory

yellow_lead
0 replies
7h37m

Your source never says he didn't know where he was. That's a bit hyperbolic.

with Sam being unsure of whether Sutskever is still working at OpenAI even 8 weeks after the entire incident

He wasn't missing/found, maybe negotiations were ongoing.

de6u99er
0 replies
11h6m

can you please put a space in front of "https"?

2pointsomone
4 replies
11h25m

If anyone paying the right price to access a product, with no underlying technology shared, makes it "open", isn't most of the world's technology open? Isn't Apple really OpenApple? Isn't Oracle really OpenOracle? Apple probably puts out more open-source tech than OpenAI.

Does the word mean much anymore, then? Is it nothing more than a sentiment then?

Perhaps OpenAI should have renamed itself to "AI for all" or something when they adopted the capped-profit model. Perhaps they should've returned donor funds and turned fully for-profit too. Perhaps that was a genuine resolution and pivot, which every org should be allowed to do.

Genuine question, I run a nonprofit whose name starts with "open". But we do explicitly bring closed source work to be more openly licensed, without necessarily making the technology open-source.

rmbyrro
1 replies
9h23m

Well, if AI is the product, Apple is OpenLuxuryHardware, and Oracle is OpenScrewYou.

2pointsomone
0 replies
1h20m

:'D

neom
0 replies
4h28m

Red Hat made off alright. Genuine question: if OpenAI can pull off something like Red Hat, would people be more, or less, OK with it?

antonyt
0 replies
3h27m

The real villain here is OpenTable!

sgammon
2 replies
11h58m

Has anyone else noticed the redactions have variable widths?

A little video of devtools:

https://customer-zppj20fjkae6kidj.cloudflarestream.com/e28e5...

I remember seeing techniques which could decode such redactions from PDFs. I don't know why the widths would be included unless it was intentional (stylistic, maybe? but it would be a bear to code), or perhaps exported from something like Adobe Acrobat.

Elon's email is one solid redaction block, while the email body is broken up into widths that don't seem to be consistent.
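
The width-analysis techniques mentioned above mostly boil down to brute-force comparison: render candidate strings in the document's font and see which ones fit the box. A minimal sketch of the idea (the font, size, measured width, and candidate names below are all hypothetical), using Pillow:

    # Sketch: keep candidate names whose rendered width is close to a redaction box.
    from PIL import ImageFont

    font = ImageFont.truetype("Arial.ttf", 14)   # assumed typeface and size
    redaction_width_px = 142.0                    # hypothetical measured box width
    candidates = ["DeepMind", "Demis Hassabis", "Google Brain", "Larry Page"]

    for name in candidates:
        width = font.getlength(name)              # rendered width in pixels
        if abs(width - redaction_width_px) < 3:   # tolerance for kerning/rounding
            print(f"plausible match: {name} ({width:.1f}px)")

In practice this only narrows things down when the candidate list is small and the exact font metrics are known, which is why proper redaction tools strip the underlying text rather than drawing boxes over it.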

denysvitali
0 replies
11h5m

Except the CC part. Why would you include the co-founder of your competitor in an email describing how to win over said competitor?

sangnoir
2 replies
12h9m

we felt it was against the mission for any individual to have absolute control over OpenAI

This has to be a joke, right? I'd like to think Altman paused for a second and chuckled to himself after writing - or reading - that.

SpaceManNabs
1 replies
3h57m

This whole post makes OpenAI look like a clown show.

zigman1
0 replies
3h15m

Which it is, but somehow it is receiving a tenth of the flak from the HN community that any non-tech bureaucratic organization would get, especially a sovereign entity that dares to charge taxes. Maybe that's because it is led by cool "startup" people, I don't know.

mulcahey
2 replies
15h39m

I am “GI” without the A last time I checked, and I don’t require much compute at all!

john2x
0 replies
10h20m

Just a couple million years of training!

escapecharacter
0 replies
15h32m

It’s the meat

matt_heimer
2 replies
14h30m

If you are going to simulate blacking out email addresses maybe don't preserve spacing.

WatchDog
1 replies
13h59m

I presume they preserved the spacing to avoid any accusations of being misleading as to the redacted information.

This kind of in place redaction seems to be the typical way that documents are submitted to courts, or at least it was the same way that the emails Elon provided were redacted[0].

[0]: https://twitter.com/TechEmails/status/1763633741807960498

chatmasta
0 replies
13h18m

Surely it would be more effective to reduce every censorship bar to 1 character width.

eftychis
2 replies
12h50m

Reading the messages:

This is a marketing step of course; no sane lawyer would agree to this. And that is because I don't think the emails show what they think and want them to show.

That at some point Elon had an opinion that a lot of money is needed or that OpenAI maybe had no future? That does not change the duty or obligations of a non-profit to the mission.

Also, it is clear some important information has been blacked out. And that critical conversation happened offline.

I don't think it will do the damage to Elon's image that they think it will. But if I were Microsoft... I would hedge my bets a lot...

This looks more and more like fuel for a dissolution action against OpenAI as a non-profit than anything else.

nprateem
1 replies
11h29m

And what would happen? 2 minutes after being dissolved, ClosedAI is founded and hires all the employees.

eftychis
0 replies
9h56m

Perhaps even before the dissolution.

But the science and IP become public and open, or go under a non-profit that is tasked with opening them. And the for-profit segments are stripped of any exclusive rights that arose from OpenAI.

The irony is that dissolution, oversight by a third-party special referee, or a permanent injunction (e.g. the Musk suit) are the only ways OpenAI ends up "opening their AI."

whatever1
1 replies
13h22m

Aren't there tax implications in this whole scheme? Shouldn't they have to pay taxes retroactively from the day they were founded as a non-profit? Shouldn't the donations be taxed?

pests
0 replies
6h56m

Non-profits are allowed to own for-profits. The Mozilla Foundation owns Mozilla Corp, which has brought in tons of money for years.

swat535
1 replies
11h49m

Why would anyone even be under the impression that Elon would be pursuing some higher truth or a noble goal with this suit?

This response is comical and ironically proves Musk's point that OpenAI is just a profit-seeking organization structured in a way to masquerade as a non-profit to dodge taxes.

Basically a clever scheme by Sam; he gets to have his cake and eat it too this way, and probably everyone at the leadership level is congratulating themselves for being so brilliant.

Look here, the truth is that they have been caught with their pants down and are now just attempting to backpedal.

I understand why they won't budge because if they do, they will lose all their scam marketing tactics and Nonprofit status.

Let's hope other AI models catch up quickly; I'm rooting for OSS to take over this field entirely, like UNIX did.

Oh, also, before you lecture me about the "threat of AI", maybe give me a chat bot that can do basic math first.

DustinBrett
0 replies
11h20m

Because his actions have shown several times that he is.

sroussey
1 replies
14h58m

Elon wanted majority equity, initial board control, and to be CEO. In the middle of these discussions, he withheld funding. Reid Hoffman bridged the gap to cover salaries and operations.

Wow, Reid Hoffman for the win.

AlfredBarnes
0 replies
5h6m

He catches wins surprisingly often.

seydor
1 replies
10h19m

The timestamps of the emails are interesting. Apparently they work at 3:40 AM, but also at midday and at 8 AM.

pests
0 replies
7h25m

Life becomes work for these types of people; I'm sure there's no hard line between work and home, especially during the early days or the early frenzy.

polycaster
1 replies
11h26m

Unfortunately, humanity's future is in the hands of XXXX.

Only wrong answers please.

FergusArgyll
0 replies
7h22m

Brin, probably; he and Musk got into a fight over AI years ago. It's detailed in the Isaacson biography.

partiallypro
0 replies
1h40m

I don't think OpenAI has much to worry about at all; he can't bleed them dry via a lawsuit when they have one of the largest corporations on Earth bankrolling them. I'm not sure what he's trying to accomplish other than filing dubious lawsuits like Trump does.

jjallen
1 replies
12h40m

Everyone should leave OpenAI and form a new entity. Problem solved. Key partners would re-sign the key contracts they have.

Leave the whole "Open" part behind for good so we can stop talking, reading, and hearing about it.

derwiki
0 replies
12h29m

It wasn’t everyone, but that’s sort of the Anthropic origin story

herculity275
1 replies
5h20m

Unfortunately, humanity's future is in the hands of <redacted>.

<redacted>

And they are doing a lot more than this.

I'm really curious who/what Elon was calling out here.

AlfredBarnes
0 replies
5h6m

Probably Google.

greatgib
1 replies
10h36m

From the email exchange:

even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes

Now we have clear proof that it was their intention from the start to abuse everyone's naivety, pretending to have an open-sourcing goal when in fact they planned from the beginning to close everything once they reached a certain amount of success.

That sucks...

_heimdall
0 replies
4h36m

I'm not sure how anyone believed they would open source their proprietary AI, or what that would even mean.

Did OpenAI ever even officially define what open source means in this context? Is the training algorithm open source? Or the data model? Or the interpretation engine that reads the model and spits out content? Or some combination of all three?

globalvariablex
1 replies
12h30m

"Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction,training method, or similar."

I am not a fan of the "enjoy the fruits but not the science" statement they made in the blog. The quote above was from GPT-4's technical paper. I'll go the extra mile and steelman a counter-argument and say they're seeking a competitive advantage to secure more investment so they can stick to their mission statement of making safe AGI, but I think we can all see where this line of thinking is going further down the line. Building your stack around an academic research paper from a rival company and then refusing to publish your contributions is terribly disappointing imo. I hope this won't be a trend that other companies participate in.

andoando
0 replies
7h8m

It’s a ridiculous statement. Any corporation can say their goal is to make their product available to as many people as possible.

I love how transparent it makes it though that these guys live in a bubble and you shouldn’t take their beliefs too seriously

gibsonf1
1 replies
12h16m

I find it interesting that they promote the idea that they are pursuing AGI when they don't even have any I yet in their products.

gitaarik
0 replies
10h49m

If it would already be in their products, there wouldn't be much to pursue, right?

dorkwood
1 replies
1h37m

At first I thought "why would Elon choose to slow down progress like this?" And then I realized that he probably knows some things that I don't. He's always one step ahead, even if he doesn't seem like it. I wouldn't be surprised if he's in control of OpenAI a year from now. Having Elon as the brains and Altman as the salesman would be the ultimate dream team.

davidcbc
0 replies
17m

It's because he's an egomaniac and wants credit and control. He's not the brains at all

de6u99er
1 replies
11h8m

IMHO Musk just wants to slow down OpenAI, so his company can catch up.

That being said, I am pretty sure there will be other conversations supporting Musk's claims.

sandspar
0 replies
10h45m

Elon Musk has been warning about AI risks for at least a decade. Whatever his other motives, AI risk seems like one of his deeply held beliefs.

cooper_ganglia
1 replies
12h16m

I don't trust anyone involved to have the best interest of anyone else involved.

rvz
0 replies
7h50m

This is the correct response. Either of them could be lying.

A leak needs to happen to cut through the BS and nonsense.

cmilton
1 replies
15h35m

What's with the redactions? Would that information change any of the context? I understand redacting an email address, but hmmm. This doesn't scream transparency, or open, by any means.

furyofantares
0 replies
14h29m

My read on some of the redactions is that they appear to be someone on Elon's side of things who's not currently suing OpenAI. It can be hard to guess what redactions are, though. They're redacted.

xbar
0 replies
3h40m

Two things can be trouble.

woopsn
0 replies
13h57m

I'm not sure who they are attempting to convince of what with this communication.

Nobody who doesn't know who REDACTED is cares what REDACTED thought of the issue.

This is again some of the least professional comms I've seen from a Microsoft entity. We'll see what the evidence is ourselves if and when the case goes to court.

throwawaaarrgh
0 replies
10h19m

All of these people are so bizarre. It's like Howard Hughes versus some potential acquisition turned competitor and the yellow journalism that followed. Time is a flat circle.

tempestn
0 replies
11h15m

I'm kind of surprised they'd post something this substantial without any apparent proofreading (given the repetition around the Tesla stuff).

tayo42
0 replies
11h5m

with someone whom we’ve deeply admired...

He keeps showing us the kind of person he is, and yet people go on thinking this way until something happens to them?

tarruda
0 replies
2h5m

I guess neither Elon Musk nor Sam Altman wants to live in a world where someone else makes it a better place.

syrusakbary
0 replies
8h58m

Two thoughts after reading the article:

1. The post properly depicts Elon Musk's need for ownership, but it misses the point of the Open Source Mission that OpenAI had when it was created. And therefore, it misses the main point of the lawsuit.

2. It feels like the writing is assisted by OpenAI itself.

Am I the only one thinking that the strategy of the post (and maybe the contents) is completely guided by their algorithm?

sva_
0 replies
3h20m

It is interesting that their narration of the story includes emails from 2017, early 2018, and late 2018, a chronological order they point out.

But the last email from Ilya to Elon and back, which rounds off this narrative, is from early 2016 - and they curiously don't mention the date here.

sirmike_
0 replies
2h44m

This is exactly like a plot from Silicon Valley. OA's "middle-out compression" will revolutionize the world! I agree with HN user Habosa's starkly simple comment. These people are up their own ass.

seydor
0 replies
12h48m

How do the OpenAI engineers feel about being tricked into the roost? Strange that we never hear from them; is it because their mouths are stuffed with gold?

runeb
0 replies
13h30m

It seems to me that OpenAI today is burning cash and that the funding model cannot reach the scale to seriously compete with Google (an 800B company). If you can't seriously compete but continue to do research in open, you might in fact be making things worse and helping them out “for free”, because any advances are fairly easy for them to copy and immediately incorporate, at scale.

It was a reasonable concern; funny that it turned out the other way with the transformer.

rifty
0 replies
13h21m

The manner in which they became for-profit does not feel exactly comparable to Elon's view that they should become private to be competitive. For example, merging with Tesla isn't the same path the non-profit OpenAI took, even if the end state is similar.

Though I think the blog post will probably achieve its messaging goal of diminishing Elon's character relative to their own public status... "Yeah, you think we suck, but it would make no tangible difference to you if Elon had run it, so you should know he sucks too."

redm
0 replies
2h8m

I'm not a Meta fan, but it's hard not to look at how they have handled Llama and how well it's worked out for them and the "community." I think that OpenAI could also be playing a leading role in the science instead of selling subscriptions for Microsoft.

ralph84
0 replies
15h44m

we felt it was against the mission for any individual to have absolute control over OpenAI

Unless that individual is Sam Altman?

qwertox
0 replies
8h15m

Microsoft being the sole beneficiary is the main problem. They are not the only company which can offer compute and money.

posix86
0 replies
9h38m

This letter seems to address why they're not open, but not at all why they're selling out to Microsoft, which was Musk's primary criticism. Or am I wrong?

oglop
0 replies
3h13m

Amazing that people still think Musk’s “core points” are correct after this.

I hate this community these days.

numpad0
0 replies
2h48m

After 1-bit quantizing the situation from my armchair, the whole series of events looks like a cold and dry business decision that Musk was a risk with a low expected return. Isn't that it, after all?

nojvek
0 replies
1h33m

What is OpenAI GPT doing that Mistral or Claude or LLama isn't doing?

What makes OpenAI sit the high moral throne?

I get Sam & Elon's ego clash, but I don't understand why, at this point, GPT-4 is a special snowflake and the others aren't.

The LLM hallucinates as much as its peers.

We are nowhere close to AGI. None of the LLMs exhibit composite reasoning or on-the-fly acquisition of new skills.

nobrains
0 replies
11h8m

by opensorucing [sic] everything, we make it easy for someone unscrupulous with access to overwhelming amount of hardware to build an unsafe AI

by open sourcing everything, we make it easy for someone unscrupulous with access to overwhelming amount of hardware to build an unsafe BROWSER

by open sourcing everything, we make it easy for someone unscrupulous with access to overwhelming amount of hardware to build an unsafe OPERATING SYSTEM

nickpsecurity
0 replies
29m

Just reposting another comment here which has a link to their original, mission statement:

https://news.ycombinator.com/item?id=39611908

That mission statement contradicts what I’m reading here. It also contradicts what Anthropic is doing. HuggingFace, TogetherAI, and Mosaic are all executing on the original vision which OpenAI has abandoned. Perhaps the board and regulators should ask those companies’ leaders how best to balance open AI against business and risk management.

Also, don’t forget it’s not proprietary/closed vs free/open. There are models in between with shared source, remixing, etc. They might require non-commercial use for the models and their derivatives. Alternatively, license the models with royalties on their use like RTIS vendors do. They can build some safety restrictions into those agreements if they want. Like for software licensing, we don’t have to totally abandon revenue to share or remix the code.

nibab
0 replies
3h53m

The email evidence confirms that infrastructure is one of the strongest moats in this space. From Elon:

"Not only that, but any algorithmic advances published in a paper somewhere can be almost immediately re-implemented and incorporated. Conversely, algorithmic advances alone are inert without the scale to also make them scary."

This is reinforced by the fact that infrastructure / training details are much more sparse in papers than algorithmic adjustments.

It's all about 1. researchers (capable of navigating the maze and carefully choosing what to adopt from the research space), 2. clean data, and 3. infrastructure.

mupuff1234
0 replies
14h28m

The mission of OpenAI is to ensure AGI benefits all of h̶u̶m̶a̶n̶i̶t̶y̶ shareholders.

mise_en_place
0 replies
10h22m

There’s so much greed in this space it’s sickening. Even Elon isn’t immune to greed. It should not be humans determining the course of AGI, but rather multiple AIs in a DAO that rule by quorum and consensus. The human element must be taken out of the equation.

megamix
0 replies
4h38m

"The mission of OpenAI is to ensure AGI benefits all of humanity, which means both building safe and beneficial AGI and helping create broadly distributed benefits."

This is a confidence trick. There is no way to prove that anything they do actually "benefits all of humanity". I would never put my trust in Altman.

machiaweliczny
0 replies
5h31m

Anyone can guess this part of Elon’s email: “ Unfortunately, humanity's future is in the hands of …” ?

lubesGordi
0 replies
3h0m

I'm not an ML guy, but does it actually matter that OpenAI doesn't disclose its source? Who cares? They proved the transformer arch, and everyone knows how to replicate what they did. That's pretty open. The sauce is the alignment, and maybe that's what 'people' are mad about, I don't know.

lispm
0 replies
8h22m

The level of nonsense is amazing.

AGI is far away and not yet a problem of "compute power". We've seen with Autonomous Driving what these statements are worth.

Albania is using ChatGPT to speed up EU accession by up to 5.5 years? What? Implementing EU laws using a tool whose capability to "understand" legal text is basically zero and whose capability to hallucinate is unbounded?

jp42
0 replies
15h40m

Well, they didn't address the extent of Microsoft's control over OpenAI.

joshxyz
0 replies
12h20m

Okay, so Open in OpenAI means people get to enjoy the free version of the freemium product.

jimkleiber
0 replies
10h51m

We are dedicated to the OpenAI mission and have pursued it every step of the way. (emphasis added)

We intend to move to dismiss all of Elon’s claims. (emphasis added)

I feel somewhat surprised but more worried that people working in a space riddled with so much uncertainty seem to use so much certainty in their statements.

hsuduebc2
0 replies
6h51m

From: Elon Musk

To: Ilya Sutskever

The most promising option I can think of, as I mentioned earlier, would be for OpenAI to attach to Tesla as its cash cow. I believe attachments to other large suspects (e.g. Apple? Amazon?) would fail due to an incompatible company DNA. Using a rocket analogy, Tesla already built the “first stage” of the rocket with the whole supply chain of Model 3 and its onboard computer and a persistent internet connection. The “second stage” would be a full self driving solution based on large-scale neural network training, which OpenAI expertise could significantly help accelerate. With a functioning full self-driving solution in ~2-3 years we could sell a lot of cars/trucks. If we do this really well, the transportation industry is large enough that we could increase Tesla's market cap to high O(~100K), and use that revenue to fund the AI work at the appropriate scale.

Surprise.

hnbad
0 replies
6h56m

Is there some executive summary of this? I find it hard to wade through the waffling and while I appreciate citing sources by pasting e-mails verbatim this doesn't exactly make the statement easier to follow. It sounds like they're damning Elon with faint praise?

floor_
0 replies
4h53m

"Open"AI

finnjohnsen2
0 replies
9h45m

"Open, as in Open For Business"

And Elon was in on it. Not really surprising but disappointing obviously.

evolve2k
0 replies
3h34m

TLDR; Elon is slimy, Sam also.

ergocoder
0 replies
13h35m

The Anthropic guy must feel like they have made the best decision.

I love the drama. It is very entertaining. I'm glad OpenAI decided to start as a non profit. I feel like they will never be able to get away with it. The issue will keep lingering.

cwilby
0 replies
12h27m

I don't know what to say more than this: I wish they'd calculated in kWh instead of $.

I really want to know that figure, but I'm left extrapolating.

I could _guess_ it's huge, but that's not the point.

"Can we make a separate intelligence beside ourselves, damn the energy expenditure?"
chrsw
0 replies
5h37m

Is there really that much of a distinction between OpenAI and Microsoft at this point? I'm not talking about on paper, like tax documents, but in the real world of interests, objectives, and decision-making.

burntalmonds
0 replies
5h2m

It's nice to have more context, but posting all of this seems ill-advised.

boringuser2
0 replies
13h22m

Selectively editing material you release to libel a public figure is highly duplicitous.

They redacted parts of these emails.

Did they publish all of the emails?

Or are we literally being fed a curated collection?

Very gross.

benjamaan
0 replies
2h45m

why would I lie?

behnamoh
0 replies
15h44m

I don't care much about OpenAI === open-source as long as OpenAI === open the AI pandora's box to everyone.

Compare:

- Google open-sourced many models but sat on the transformers paper for 5 years and never released a product like ChatGPT for the masses.

- OpenAI didn't open-source GPT-4 but made it available to basically everyone.

WatchDog
0 replies
13h56m

Who is the redacted person from email #2?

I'm guessing email #3 is referring to Larry, whom Elon has had a lot to say about publicly on the topic of AGI.

Timber-6539
0 replies
11h19m

A lot of fluff in here, most of it selectively chosen. I'll be waiting for the discovery in the court case before I form an opinion on who is in the right here.

SpaceManNabs
0 replies
3h59m

The title of the post made me think it was going to be a blog post from a third party commentator. For OpenAI to write such a blog post is so petty and desperate.

It is clear that they are full of it.

SoothingSorbet
0 replies
13h4m

Is it normal (or beneficial) for a corporation of this size to put out a press release like this regarding ongoing lawsuits?

KoolKat23
0 replies
5h0m

At creation, a company's purpose is defined in its memorandum & articles of association; in this case, OpenAI's would be to achieve AGI and to better humanity through the development and provision of AI models. This is how the company creates value and sustains itself.

Whether it is for-profit or not-for-profit is a different point. In this case it is clearly for profit and it should be registered and taxed as such.

They try to avoid the implied obligation through a creative setup of running a company held by a "foundation". A tax avoidance scheme, basically.

JohnFen
0 replies
2h45m

They're both bad actors, and they both have good points. I hope they keep fighting each other -- it might distract them from us.

Alifatisk
0 replies
8h27m

They should at least rename themselves; I'll stick to ClosedAI until then.

34679
0 replies
1h43m

Is there a widely accepted definition of "safe AI"?

Musk's definition is obviously very different from Google's. Are we including offensive language in general discussion of AI safety, or are we just talking about illegal activity? Because if it's the latter, then by definition we already have laws in place. Trying to make a gun that can't fire bullets will always yield a result that is not a gun.

AGI is inherently unsafe. Intelligent people will always be capable of harm. The choice needs to be made: do we want to make this thing a gun or not?