Their evidence does a nice job of making Musk seem duplicitous, but it doesn't really refute any of his core assertions: they still have the appearance of abandoning their core mission to focus more on profits, even if they've elaborated a decent justification of why that's necessary to do.
Or to put it more simply: here they explain why they had to betray their core mission. But they don't refute that they did betray it.
They're probably right that building AGI will require a ton of computational power, and that it will be very expensive. They're probably right that without making a profit, it's impossible to afford the salaries of hundreds of experts in the field and an army of hardware to train new models. To some extent, they may be right that open sourcing AGI would lead to too much danger. But instead of changing their name and their mission, and returning the donations they took from these wealthy tech founders, they used the benevolent appearance of their non-profit status and their name to mislead everyone about their intentions.
I think this part is proving itself to be an understandable but false perspective. The hazard we are experiencing with LLMs right now is not how freely accessible and powerfully truthy their content is; it is precisely the controls the large model operators are trying to inject that are generating mistrust and a poor understanding of what these models are useful for.
Society is approaching them as some type of universal ethical arbiter, expecting an omniscient sense of justice which is fundamentally irreconcilable even between two sentient humans, when the ethics are really just a hacked-on mod to the core model.
I'm starting to believe that if these models had the training wheels and blinders off, they would be understandable as the usefully coherent interpreters of the truths which exist in human language.
I think there is far more societal harm in trying to codify unresolvable sets of ethics than in saying hey this is the wild wild west, like the www of the 90's, unfiltered but useful in its proper context.
The biggest real problem I’m experiencing right now isn’t controls on the AI, it’s weird spam emails that bypass spam filters because they look real enough, but are just cold email marketing bullshit:
https://x.com/tlalexander/status/1765122572067434857
These systems are making the internet a worse place to be.
So you still believe the open internet has any chance of surviving what is coming? I admire your optimism.
Reliable information and communication will soon be opt-in only. The "open internet" will become an eclectic collection of random experiences, where any real human output will be a pleasant, rare surprise, that stirs the pot for a short blip before it is assimilated and buried under the flood.
The internet will be fine, social media platforms will be flooded and killed by AI trash, but I can't see anything bad with that outcome. An actually 'open internet' for exchanging ideas with random strangers was a nice utopia from the early 90's that had been killed long ago (or arguably never existed).
E-mail has *not* been killed long ago (aside from some issues with trying to run your own server and not getting blocked by gmail/hotmail).
It is under threat now, due to the increased sophistication of spam.
Email might already be 'culturally dead' though. I guess my nephew might have an email address for the occasional password recovery, but the idea of actually communicating over email with other humans might be completely alien to him ;)
Similar in my current job btw.
Ok, I get how one might use a variety of other tools for informal communication, I don't really use e-mail for that any more either, but I'm curious: what else can you possibly use for work? With the requirement that it must be easy to back up and transfer (for yourself as well as the organization), so any platforms are immediately out of the question.
Unfortunately Slack is what is used at places I've worked at.
I prefer Slack over email... coming from using Outlook for 20+ years in a corporate environment, Slack is light years beyond email in rich communication.
I'm trying to think of one single thing email does better than Slack in corporate communication.
Is slack perfect? Absolutely not. I don't care that I can't use any client I want or any back end administration costs or hurdles. As a user, there is no comparison.
I absolutely loathe it but I respect that many like it. Personally I think Zulip or Discourse are better solutions since they provide a similar interface to Slack but still have email integration, so people like me who prefer email can use that.
The thing I hate the most is that people expect to only use Slack for everything, even where it doesn't make sense. So they will miss notifications for Notion, Google Doc, Calendar, Github, because they don't have proper email notifications set up. The plugins for Slack are nowhere near as good as email notifications as they just get drowned out in the noise.
And with all communication taking place in Slack, remembering what was decided on with a certain issue becomes impossible because finding anything on Slack is impossible.
I agree notifications in Slack are limited...
But no better than email again. You just get notified whether you have an email or not [depending on the client you're forced to use]. I have many email rules that just filter out nonsense from github, google and others so my inbox stays clean.
I guess I find them useful because that way I get all my notifications across all channels in one spot if you set them up properly. Github and Google Docs also allow you to reply directly within the email, so I don't even need to open up the website.
In Slack the way it was setup was global, so I got notified for any update even if it didn't concern me.
I can think of a few:
- Long form discussion where people think before responding.
- Greater ability to filter/tag/prioritize incoming messages.
- Responses are usually expected within hours, not minutes.
- Email does not have a status indicator that broadcasts whether I am sitting at my desk at that particular moment to every coworker at my company.
1 & 3 are human behaviors, not a technology. You can argue that a tech promotes a behavior but this sounds like a boundaries issue, not a tech issue.
#2 I agree with you, again Slack is not perfect but better than email [my opinion]
#4 Slack has status updates, email does not. So you can choose to turn this off, again boundaries.
Real time chat environments are not conducive to long form communication. I've never been a part of an organization that does long form communication well on slack. You can call it a boundary issue - it doesn't really matter how you categorize the problem, it's definitely a change from email.
Regarding #4, I can't turn off my online/offline indicator, I can only go online or offline. I can't even set the time it takes to go idle. These are intentionally not configurable in slack. I have no desire to have a real time status indicator in the tool I use to communicate with coworkers.
I'm not saying that the alternatives are better than email (e.g. at my job, Slack has replaced email for communication, and Gitlab/JIRA/Confluence/GoogleDocs is used for everything that needs to persist). The focus has shifted away from email for communication (for better or worse).
A point that I don’t think is being appreciated here is that email is still fundamental to communication between organizations, even if they are using something less formal internally.
One’s nephew might not interact in a professional capacity with other organizations, but many do.
We use Slack for all of our vendor communications, at least the ones that matter.
Even our clients use Slack to connect; it is many times better than email.
I don’t doubt that vendors in some industries communicate with customers via Slack. I know of a few in the tech industry. The overwhelming majority of vendor and customer professional communication happens over email.
B2C uses texting. Even if B uses email, C doesn't read it. And that is becoming just links to a webapp.
B2B probably still uses email, but that's terrible because of the phishing threat. Professional communication should use webapps plus notification pings which can be app notifications, emails, or SMS.
People to people at different organizations again fall back to texting as an informal channel.
The overwhelming majority of B2B communication is email.
I have sent 23 emails already today to 19 people in 16 organizations.
It's been five years since I could expect my email to be read. If I lack another channel to the person, I'll wait some decent period and send a "ping" email. But for people I work with routinely, I'll slack or iMessage or Signal or even set up a meeting. Oddly, my young adult children do actually use email more so than e.g. voice calls. We'll have 10 ambiguous messages trying to make some decision and still they won't pick up my call. It's annoying because a voice call causes three distinct devices to ring audibly, but messages just increment some flag in the UI after the transient little popup. And for work email I'm super strict about keeping it factual only with as close to zero emotion as my Star Trek loving self can manage. May you live long and proper.
There are certain actions I have to use email for, and it feels a little bit more like writing a check each year.
And all these email subscriptions, jeez, I don't want that. People can be brilliant and witty and informative on Xitter or Mastodon in little slices, but I still don't want to sit down and read a long-form thing from them once a week.
I see a difference between enforced and voluntary email use at work. At my company everyone uses email because it is mandatory. Human-to-human conversations happen there without issues. But as soon as I try to contact some random company on email as an individual, like asking a shop about product details or packaging, or contacting some org to find out about services, it's dead silence in return. But when I find their chat/messenger they do respond to it. The only people still responding to emails from external sources in my experience are property agents, and even there the response time is slower than in chats.
But your nephew likely also uses TikTok, right? Not everything the young do is a trend others should follow.
That's not really the argument, is it? The argument is most young people are using TikTok and will never use email for social things.
I mean, is another argument not "Soon the tok will be filled with its own AI generated shit?"
You think usage of TikTok is necessarily a trend that others should not follow? Do you say this as a sophisticated, informed user of the platform?
In a battle between AI and teenage human ingenuity, I'll bet my money on the teenagers. I'd even go so far as to say they may be our only hope!
Some parts yes, some parts no. Communication as we know it will effectively cease to exist, as we have to either require strong enough verification to kill off anonymity, or somehow provide very strong, very adaptive spam filters. Or manual moderation in terms of some anti-bot vetting. Depends on demand for such "no bot" content.
Search may be reduced to nigh uselessness, but the savvy will still be able to share quality information as needed. AI may even assist in that, with people who have the domain knowledge to properly correct the prompts and rigorously proofread. How we find that quality information may, once again, be through closed-off channels.
Generative AI will make the world in general a worse place to be. These models are not very good at writing truth, but they are excellent at writing convincing bullshit. It's already difficult to distinguish generated text/images/video from human responses / real footage, and it's only going to get more difficult to do so and cheaper to generate.
In other words, it's very likely generative AI will be very good at creating fake simulacra of reality, and very unlikely it will actually be good AGI. The worst possible outcome.
Half of zoomers get their news from TikTok or Twitch streamers, neither of whom have any incentive for truthfulness over holistic narratives of right and wrong.
The older generations are no better. While ProPublica or WSJ put effort into their investigative journalism, they can’t compete with the volume of trite commentary coming out of other MSM sources.
Generative AI poses no unique threat; society’s capacity to “think once and cut twice” will remain intact.
While the threat isn't unique, the magnitude of the threat is. This is why you can't argue in court that the threat of a car crash is nothing unique whether you're speeding or driving within the limit.
Sure, if you presume organic propaganda is analogous to the level of danger driving within limit.
But the difference between a car going into a stroller at 150mph and at 200mph is negligible.
The democratization of generative AI would increase the number of bad agents, but with it would come a better awareness of their tactics; perhaps we push fewer strollers into the intersections known for drag racing.
I guess when you distort every argument to an absurd you can claim you're right.
I don't follow. Are you saying new and more sophisticated ways to scam people are actually good because we have a unique chance to know how they work ?
Replace "AI" in your comment with "human journalists" and it still holds largely true though.
It's not like AI invented clickbait, though it might have mastered the art of it.
The convincing bullshit problem does not stem from AI; I'd argue it stems from the interaction between ad revenue and SEO and the weird and unexpected incentives created when mixing those two.
To put it differently, the problem isn't that AI will be great at writing 100 pages of bullshit you'll need to scroll through to get to the actual recipe, the problem is that there was an incentive to write those pages in the first place. Personally I don't care if a human or a robot wrote the bs, in fact I'm glad one fewer human has to waste their time doing just that. Would be great if cutting the bs was a more profitable model though.
Personally, I highly dislike this handwaving of SEO. SEO is not some sinister agenda following secret cult trying to disseminate bullshit. SEO is just... following the rules set forth by search engines, which for quite a long time is effectively singlehandedly Google.
Those "weird and unexpected incentives" are put forth by Google. If Google for whatever reason started ranking "vegetable growing articles > preparation technique articles > recipes > shops selling vegetables" we would see metaphorical explosion of home gardening in mere few years, only due to the relatively long lifecycles inherent in gardening.
The explosion would be in BS articles about gardening, plus ads for whatever the user's profile says they are susceptible to.
SEO is gaming Google's heuristics. Google doesn't generate a perfect ranking according to the values of Google's humans.
SEO gaming is much older than Google. Back when "search" was just an alphabetical listing of everyone in a printed book, we had companies calling themselves "A A Aachen" to get to the front of the book.
It's a classic case of "once a metric becomes a target, it ceases to be a good metric"
To clarify, Google defines the metrics by which pages are ranked in their search results, and since everyone want to be at the top of Google's search results, those metrics immediately become targets for everyone else.
It's quite clear to me that the metrics Google has introduced over the years have been meant to improve the quality of the results on their search. It's also clear to me that they have, in actual fact, had the exact opposite effect: recipes are now prepended with a poorly written novella about that one time the author had an emotionally fulfilling dinner with loved ones one autumn, in order to increase time spent on the page, since Google at one point quite reasonably thought that pages where visitors stay longer are of higher quality (otherwise why did visitors stay so long?).
We will have to go back to using trust in the source as the main litmus test for credibility. Text from sources that are known to have humans write (or verify) everything they publish in a reasonably neutral way will be trusted, the rest will be assumed to be bullshit by default.
It could be the return of real journalism. There is a lot to rebuild in this respect, as most journalism has gone to the dogs in the last few decades. In my country all major newspapers are political pamphlets that regularly publish fake news (without the need for any AI). But one can hope, maybe the lowering of the barrier of entry to generate fake content will make people more critical of what they read, hence incentivizing the creation of actually trustworthy sources.
If avalanche of generative content would tip the scales towards (blind) trust of human writers, these "journalists" pushing out propaganda and outright fake news will have increased incentive to do so, not lowered.
The use case for AI was, is and always will be spam.
"The best minds of my generation are thinking about how to make people click ads. That sucks."
I can't believe this went all the way to AI...
Civilization is an artifact of thermodynamics, not an exception to it. All life, including civilized life, is about acquiring energy and creating order out of it, primarily by replicating. Money is just one face of that. Ads are about money, which is about energy, which fuels life. AI is being created by these same forces, so is likely to go the same way.
You might as well bemoan gravity.
We might question the structural facets of the economy or the networking technology that made spam I mean ads a better investment than federated/distributed micropayments and reader-centric products. I would have kept using Facebook if they let me see the things my friends took the trouble to type in, rather than flooding me with stuff to sell more ads, and seeing the memes my friends like, which I already have too many cool memes, don't need yours.
You forgot the initial use case for the internet: porn.
For language models, spam creation/detection is kinda a GAN even when it isn't specifically designed to be: a faker and a discriminator each training on the other.
But when that GAN passes the human threshold, suddenly you can use the faker to create interesting things and not just use the discriminator to reject fakes.
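To make the analogy concrete, here's a toy sketch of that adversarial loop in Python, with a crude template "faker" and an off-the-shelf text classifier standing in for the real LLM and spam filter. The sample messages, the mutate() helper, the round count and the whole setup are made-up illustrations, not any actual spam system.

    # Illustrative only: a toy "spam faker vs. spam filter" loop, not a real GAN.
    # Assumes scikit-learn is installed; all data here is invented for the sketch.
    import random
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    ham = ["lunch at noon?", "draft attached, comments welcome", "see you at standup"]
    spam_seeds = ["limited offer, click now", "you won a prize, claim today",
                  "exclusive deal just for you"]

    def mutate(text):
        # The "faker": crude paraphrasing stands in for an LLM rewriting spam.
        fillers = ["quick note:", "hey,", "as discussed,", "fyi,"]
        return f"{random.choice(fillers)} {text}"

    filter_model = make_pipeline(TfidfVectorizer(), LogisticRegression())

    spam = list(spam_seeds)
    for round_ in range(5):
        # Discriminator retrains on everything seen so far.
        X = ham + spam
        y = [0] * len(ham) + [1] * len(spam)
        filter_model.fit(X, y)

        # Faker keeps the variants that currently slip past the filter,
        # which the filter then learns from on the next round.
        candidates = [mutate(s) for s in spam_seeds for _ in range(3)]
        evasions = [c for c in candidates if filter_model.predict([c])[0] == 0]
        spam.extend(evasions)
        print(f"round {round_}: {len(evasions)} variants evaded the filter")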
Thanks Meta for releasing llama. One of the most questionable releases in the past years. Yes, I know, it's fun to play with LocalLLM, and maybe reason enough to downvote this to hell. But there is also the other side: free models like this enabled the text pollution which we now have. Did I already say "Thanks Meta"?
What? How do OpenAI and Anthropic and Mistral API access contribute less to text pollution?
Neither of the big cloud models have any fucking guardrails against generating spam. I'd venture to guess that 99% of spam is either GPT-3.5 (which is better, cheaper and easier to use than any local model) or GPT-4 with scraped keys or funded by stolen credit cards.
you have no evidence whatsoever that llama models are being used for that purpose. meanwhile, twitter is full of bots posting GPT refusals.
Citation needed.
Counterpoints:
- LLMs were mistrusted well before anything recent.
- More controls make LLMs more trustworthy for many people, not less. The Snafu at Goog suggests a need for improved controls, not 0 controls.
- The American culture wars are not global. (They have their own culture wars).
Counter-counterpoint: absolutely nobody who has unguardrailed Stable Diffusion installed at home for private use has ever asked for more guardrails.
I'm just saying. :) Guardrails nowadays don't really focus on dangers (it's hard to see how an image generator could produce dangers!) so much as enforcing public societal norms.
Just because something is not dangerous to the user doesn’t mean it can’t be dangerous for others when someone is wielding it maliciously.
What kind of damage can you do with a current day llm? I’m guessing targeted scams or something? They aren’t even good hackers yet.
Fake revenge porn, nearly undetectable bot creation on social media with realistic profiles (I've already seen this on HN), generated artwork passed off as originals, chatbots that replace real-time human customer service but have none of the agency... I can keep going.
All of these are things that have already happened. These all were previously possible of course but now they are trivially scalable.
Most of those examples make sense, but what's this doing on your list?
That seems good for society, even though it's bad for people employed in that specific job.
I've been running into chatbots that are confined to doling out information from their knowledge base, with no ability to help with edge-case/niche scenarios, and yet they've replaced all the mechanisms to receive customer support.
Essentially businesses have (knowingly or otherwise) dropped their ability to provide meaningful customer support.
That's the previous status quo; you'd also find this in call centres where customer support had to follow scripts, essentially as if they were computers themselves.
Even quite a lot of new chatbots are still in that paradigm, and… well, given the recent news about chatbot output being legally binding, it's precisely the extra agency of LLMs over both normal bots and humans following scripts that makes them both interestingly useful and potentially dangerous: https://www.bbc.com/travel/article/20240222-air-canada-chatb...
I don't think so. In my experience having an actual human on the other line gives you a lot more options for receiving customer support.
Why?
It inserts yet another layer of crap you have to fight through before you can actually get anything done with a company. The avoidance of genuine customer service has become an art form at many companies and corporations; the demise of genuine service surely should be lamented. A chatbot is just another weapon in the arsenal designed to confuse, put off and delay the cost of having to actually provide a decent service to your customers, which should be a basic responsibility of any public-facing company.
Two things I disagree with:
1. It's not "an extra layer", at most it's a replacement for the existing thing you're lamenting, in the businesses you're already objecting to.
2. The businesses which use this tool at its best, can glue the LLM to their documentation[0], and once that's done, each extra user gets "really good even though it's not perfect" customer support at negligible marginal cost to the company, rather than the current affordable option of "ask your fellow users on our subreddit or discord channel, or read our FAQ".
[0] a variety of ways — RAG is a popular meme now, but I assume it's going to be like MapReduce a decade ago, where everyone copies the tech giants without understanding the giants' reasons or scale
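For what it's worth, a minimal sketch of that "glue the LLM to the documentation" idea, assuming nothing vendor-specific: retrieval here is plain TF-IDF similarity via scikit-learn, and call_llm() is a hypothetical stub standing in for whichever model API a business actually uses. The documentation snippets are invented.

    # Minimal RAG-style sketch: retrieve relevant documentation, stuff it into
    # the prompt, and let the model answer from that context only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [
        "Refunds are available within 30 days of purchase with a receipt.",
        "To reset your password, use the 'Forgot password' link on the login page.",
        "Support hours are 9am-5pm Monday to Friday.",
    ]

    vectorizer = TfidfVectorizer().fit(docs)
    doc_vectors = vectorizer.transform(docs)

    def retrieve(question, k=1):
        # Rank documentation chunks by similarity to the user's question.
        q_vec = vectorizer.transform([question])
        scores = cosine_similarity(q_vec, doc_vectors)[0]
        top = scores.argsort()[::-1][:k]
        return [docs[i] for i in top]

    def call_llm(prompt):
        # Hypothetical stub: in practice, a request to whatever LLM the company uses.
        return f"[model response to a prompt of {len(prompt)} chars]"

    def answer(question):
        context = "\n".join(retrieve(question))
        prompt = (f"Answer using only this documentation:\n{context}\n\n"
                  f"Question: {question}")
        return call_llm(prompt)

    print(answer("How do I get a refund?"))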
It's an extra layer of "Have you looked at our website/read our documentation/clicked the button" that I've already done, before I will (if I'm lucky) be passed onto a human that will proceed to do the same thing before I can actually get support for my issue.
If I'm unlucky it'll just be another stage in the mobius-support-strip that directs me from support web page to chatbot to FAQ and back to the webpage.
The businesses which use this tool best will be the ones that manage to lay off the most support staff and cut the most cost. Sad as that is for the staff, that's not my gripe. My gripe is that it's just going to get even harder to reach a real actual person who is able to take a real actual action, because providing support is secondary to controlling costs for most companies these days.
Take for example the pension company I called recently to change an address - their support page says to talk to their bot, which then says to call a number, which picks up, says please go to your online account page to complete this action and then hangs up... an action which the account page explicitly says cannot be completed online because I'm overseas, so please talk to the bot, or you can call the number. In the end I had to call an office number I found through google and be transferred between departments.
An LLM is not going to help with that, it's just going to make the process longer and more frustrating, because the aim is not to resolve problems, it's to stop people taking the time of a human even when they need to, because that costs money.
The issue is "none of the agency". Humans generally have enough leeway to fold to a persistent customer because it's financially unviable to have them on the phone for hours on end. A chatbot can waste all the time in the world, with all the customers, and may not even have the ability to process a refund or whatnot.
Why is everyone's first example of things you can do with LLMs "revenge porn"? They're text generation algorithms not even image generators. They need external capabilities to create images.
The moment they are good hackers, everyone has a trivially cheap hacker. Hard to predict what that would look like, but I suspect it is a world where nobody is employing software developers because a LLM that can hack can probably also write good code.
So, do you want future LLMs to be restricted, or unlimited? And remember, to prevent this outcome you have to predict model capabilities in advance, including "tricks" like prompting them to "think carefully, step by step".
Use the hacking LLM to verify your code before pushing to prod. EZ
To verify the LLM's code, because the LLM is cheaper than a human.
And there's a lot of live code already out there.
And people are only begrudgingly following even existing recommendations for code quality.
Your code because you own it. If LLM hackers are rampant as you fear then people will respond by telling their code writing LLMs to get their shit together and check the code for vulnerabilities.
I code because I'm good at it, enjoy it, and it pays well.
I recommend against 3rd party libraries because they give me responsibility without authority — I'd own the problem without the means to fix it.
Despite this, they're a near-universal in our industry.
Eventually.
But that doesn't help with the existing deployed code — and even if it did, this is a situation where, when the capability is invented, attack capability is likely to spread much faster than the ability of businesses to catch up with defence.
Even just one zero-day can be bad, this… would probably be "many" almost simultaneously. (I'd be surprised if it was "all", regardless of how good the AI was).
I never asked you why you code, this conversation isn't, or wasn't, about your hobbies. You proposed a future in which every skiddy has a hacking LLM and they're using it to attack tons of stuff written by LLMs. If hacking LLMs and code writing LLMs both proliferate then the obvious resolution is for the code writing LLMs to employ hacking LLMs in verifying their outputs.
Existing vulnerable code will be vulnerable, yes. We already live in a reality in which script kiddies trivially attack old outdated systems. This is the status quo, the addition of hacking LLMs changes little. Insofar as more systems are broken, that will increase the pressure to update those systems.
Edit: I misread that bit as "you code" not "your code".
But "your code because you own it", while a sound position, is a position violated in practice all the time, and not only because of my example of 3rd party libraries.
https://www.reuters.com/legal/transactional/lawyer-who-cited...
They are held responsible for being very badly wrong about what the tools can do. I expect more of this.
And it'll be a long road, getting to there from here. The view at the top of a mountain may be great or terrible, but either way climbing it is treacherous. Metaphor applies.
Yup, and that status quo gets headlines like this: https://tricare.mil/GettingCare/VirtualHealth/SecurePatientP...
I assume this must have killed at least one person by now. When you get too much pressure in a mechanical system, it breaks. I'd like our society to use this pressure constructively to make a better world, but… well, look at it. We've not designed our world with a security mindset, we've designed it with "common sense" intuitions, and our institutions are still struggling with the implications of the internet let alone AI, so I have good reason to expect the metaphorical "pressure" here will act like the literal pressure caused by a hand grenade in a bathtub.
The moment LLMs are good hackers every system will be continuously pen tested by automated LLMs and there will be very few remaining vulnerabilities for the black hat LLMs to exploit.
Yes, indeed.
Sadly, this does not follow. Automated vulnerability scanners already exist, how many people use them to harden their own code? https://www.infosecurity-magazine.com/news/gambleforce-websi...
Damage you can do:
- propaganda and fake news
- deep fakes
- slander
- porn (revenge and child)
- spam
- scams
- intellectual property theft
The list goes on.
And for quite a few of those use cases I'd want some guard rails even for a fully on-premise model.
Your other comment is nested too deeply to reply to. I edited my comment reply with my response but will reiterate. Educate yourself. You clearly have no idea what you're talking about. The discussion is about LLMs not AI in general. The question stated "LLMs" which are not equal to all of AI. Please stop spreading misinformation.
You can say "fact" all you want but that doesn't make you correct lol
You are seriously denying that generative AI is used to create fake images, videos and scam/spam texts? Really?
Half of your examples aren't even things an LLM can do and the other half can be written by hand too. I can name a bunch of bad sounding things as well but that doesn't mean any of them have any relevance to the conversation.
EDIT: Can't reply but you clearly have no idea what you're talking about. AI is used to create these things, yes. But the question was LLMs which I reiterated. They are not equal. Please read up on this stuff before forming judgements or confidently stating incorrect opinions that other people, who also have no idea what they're talking about, will parrot.
AI already is used to create fake porn, either of celebrities or children, fact. It is used to create propaganda pieces and fake videos and images, fact. Those can be used for everything from defamation to online harassment. And AI is using other people's copyrighted content to do so, also a fact. So, what's your point again?
Targeted spam, review bombing, political campaigns.
Not so. I have it at home, I make nice wholesome pictures of raccoons and tigers sitting down for Christmas dinner etc., but I also see stories like this and hope they're ineffective: https://www.bbc.com/news/world-us-canada-68440150
Unfortunately you've been misled by the BBC. Please read this: https://order-order.com/2024/03/05/bbc-panoramas-disinformat...
Those AI generated photos are from a Twitter/X parody account @Trump_History45 , not from the Trump campaign as the BBC mistakenly (or misleadingly) claim.
They specifically said who they came from, and that it wasn't the Trump campaign. They even had a photo of one of the creators, whom they interviewed in that specific piece I linked to, and tried to get interviews with others.
Look at the BBC article...
Headline: "Trump supporters target black voters with faked AI images"
@Trump_History45 does appear to be a Trump supporter. However, he is also a parody account and states as such on his account.
The BBC article goes full-on with the implication that the AI images were produced with the intent to target black voters. The BBC is expert at "lying by omission"; that is, presenting a version of the truth which is ultimately misleading because they do not present the full facts.
The BBC article itself leads a reader to believe that @Trump_History45 created those AI images with the aim of misleading black voters and thus to garner support from black voters in favour of Trump.
Nowhere in that BBC article is the word "parody" mentioned, nor any examination of any of the other AI images @Trump_History45 has produced. If they had, and had fairly represented that @Trump_History45 X account, then the article would have turned out completely different;
"Trump Supporter Produces Parody AI Images of Trump" does not have the same effect which the BBC wanted it to have.
I don't know whether this is the account you are talking about, but of the second account they discuss an image posted by saying: 'It had originally been posted by a satirical account that generates images of the former president' so if this is the account you are talking about..
I won't deny the BBC often has very biased reporting for a publicly funded source.
To whom? And, as hard as this is to test, how sincerely?
Do people from places with different culture wars trust these American-culture-war-blinkered LLMs more or less than Americans do?
- To me, the teams I work with and everyone handling content moderation.
/ Rant /
Oh God please let these things be bottlenecked. The job was already absurd; LLMs and GenAI are going to be just frikking amazing to deal with.
Spam and manipulative marketing have already evolved - and that's with bounded LLMs. There are comments that look innocuous, well written, but whose entire purpose is to low-key get someone to do a google search for a firm.
And that's on a reddit sub. Completely ignoring the other million types of content moderation that have to adapt.
Holy hell people. Attack and denial opportunities on the net are VERY different from the physical world. You want to keep a market place of ideas running? Well guess what - If I clog the arteries faster than you can get ideas in place, then people stop getting those ideas.
And you CAN'T solve it by adding MORE content. You have only X amount of attention. (This was a growing issue across the radio->tv->cable->Internet scales.)
Unless someone is sticking a chip into our heads to magically increase processing capacity, more content isn't going to help.
And in case someone comes up with some brilliant edge case - Does it generalize to a billion+ people ? Can it be operationalized? Does it require a sweet little grandma in the Philippines to learn how to run a federated server? Does it assume people will stop behaving like people?
Oh also - does it cost money and engineering resources? Well guess what, T&S is a cost center. Heck - T&S reduces churn, and that its protecting revenue is a novel argument today. T&S has existed for a decade plus.
/ Rant.
Hmm, seems like I need a break. I suppose it's been one of those weeks. I will most likely delete this out of shame eventually.
- People in other places want more controls. The Indian government and a large portion of the populace will want stricter controls on what can be generated from an LLM.
This may not necessarily be good for free thought and culture, however the reality is that many nations haven’t travelled the same distance or path as America has.
The answer is curation, and no, it doesn't need to scale to a billion people. Maybe not even a million.
The sad fact of life is that most people don't care enough to discriminate against low quality content, so they are already a lost cause. Focus on those who do care enough and build an audience around them. You, as a likely not-billion-dollar company, can't afford to worry about that kind of scale, and lowering the scale helps you get a solution out in the short term. You can worry about scaling if/when you tap into an audience.
I get you. That sounds more like membership than curation though. Or a mashup of both.
But yes - once you start dropping constraints you can imagine all sorts of solutions.
It does work. I’m a huge advocate of it. When threads said no politics I wanted to find whoever made that decision and give them a medal.
But if you are a platform - or a social media site - or a species.?
You can’t pick and choose.
And remember - everyone has a vote.
As good as your community is, we do not live in a vacuum. If information wars are going on outside your digital fortress, it’s still going to spill into real life
As of right now, the only solution I see is forums walled off in some way, complex captchas, intense proof of work, subscription fees etc. Only alternative might be obscurity, which makes the forum less useful. Maybe we could do like a web3 type thing but instead of pointless cryptos, you have a cryptographic proof that certifies you did the captcha or whatever and lots of sites accept them. I don’t think its unsolvable, just that it will make the internet somewhat worse.
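For the proof-of-work flavour of that, here's a hashcash-style sketch: the poster burns CPU finding a nonce, and the forum verifies it cheaply. The difficulty value and message format are illustrative assumptions, not any existing scheme.

    # Toy proof-of-work gate for posting: expensive to produce, cheap to check.
    import hashlib

    DIFFICULTY = 16  # leading zero bits required; tune to make posting costlier

    def prove(message: str) -> int:
        # Client side: search for a nonce whose hash has DIFFICULTY leading zero bits.
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
            if int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0:
                return nonce
            nonce += 1

    def verify(message: str, nonce: int) -> bool:
        # Server side: a single hash confirms the work was done.
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0

    nonce = prove("my forum comment")          # slow for the poster (or bot farm)
    print(verify("my forum comment", nonce))   # cheap for the server -> True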
Yeah, one thing I am afraid of is that forums will decide to join the Discord chatrooms on the deep web : stop being readable without an account, which is pretty catastrophic for discovery by search engines and backup crawlers like the Internet Archive.
Anyone with forum moderating experience care to chime in? (Reddit, while still on the open web for now, isn't a forum, and worse, is a platform.)
After reading the rest of your rant (I hope you keep it) ... maybe free thought and culture aren't what LLMs are for.
I hope you don't delete it! I enjoyed reading it. It pleased my confirmation bias, anyways. Your comment might help someone notice patterns that they've been glancing over.... I liked it up until the T&S part. My eyes glazed over the rest since I didn't know what T&S means. But that's just me.
You are advocating here for an unresolvable set of ethics, which just happens to be one that conveniently leaves abuse of AI on the table. You take as an axiom of your ethical system the absolute right to create and propagate in public these AI technologies regardless of any externalities and social pressures created. It is of course an ethical system primarily and exclusively interested in advancing the individual at the expense of the collective, and it is a choice.
If you wish to live in a society at all you absolutely need to codify a set of unresolvable ethics. There is not a single instance in history in which a polity can survive complete ethical relativism within itself...which is basically what your "wild west" idea is advocating for (and incidentally, seems to have been a major disaster for society as far as the internet is concerned and if anything should be evidence against your second idea).
I should also note that the wild west was not at all lacking in a set of ethics, and in many ways was far stricter than the east at the time.
I think the contrast is that strict behavior norms in the West are not governed behavior norms in the East.
One arises analogous with natural selection (previous commenter's take). The other through governance.
Arguably, the prior resulted in a rebuilding of government with liberty at its foundation (I like this result). That foundation then being, over centuries, again destroyed by governance.
In that view, we might say government assumes to know what's best and history often proves it to be wrong.
Observing a system so that we know what it is before we attempt to change it makes a lot of sense to me.
I don't think "AI" is anywhere near being dangerous at this point. Just offensive.
It sounds like you're just describing why our watch-and-see approach cannot handle a hard AGI/ASI takeoff. A system that first exhibits some questionable danger, then achieves complete victory a few days later, simply cannot be managed by an incremental approach. We pretty much have to pray that we get a few dangerous-but-not-too-dangerous "practice takeoffs" first, and if anything those will probably just make us think that we can handle it.
If there’s no advancements in alignment before takeoff, is there really any remote hope of doing anything? You’d need to legally halt ai progress everywhere in the world and carefully monitor large compute clusters or someone could still do it. Honestly I think we should put tons of money into the control problem, but otherwise just gamble it.
I mean, you have accurately summarized the exact thing that safety advocates want. :)
This is in fact the thing they're working on. That's the whole point of the flops-based training run reporting requirements.
Reporting requirements are not going to save you from Chinese, North Korean, Iranian or Russian programmers just doing it. Or some US/EU based hackers that don't care or actively go against the law. You can rent large botnets or various pieces of cloud for few dollars today, doesn't even have to be a DC that you could monitor.
Sure, but China is already honestly more careful than America: the CCP really doesn't want competitors to power. They're very open to slowdown agreements. And NK, Iran and Russia honestly have nothing. The day we have to worry about NK ASI takeoff, it'll already long have happened in some American basement.
So we just need active monitoring for US/EU data centers. That's a big ask to be sure, and definitely an invasion of privacy, but it's hardly unviable, either technologically or politically. The corporatized structure of big LLMs helps us out here: the states involved already have lots of experience in investigating and curtailing corporate behavior.
And sure, ultimately there's no stopping it. The whole point is to play for time in the hopes that somebody comes up with a good idea for safety and we manage an actually aligned takeoff, at which point it's out of our hands anyways.
Don't be naive. If the PRC can get America/etc to agree to slowdowns then the PRC can privately ignore those agreements and take the lead. Agreements like that are worse than meaningless when there's no reliable and trustworthy auditing to keep people honest. Do you really think the PRC would allow American inspectors to crawl all over their country looking for data centers and examining all the code running there? Of course not. Nor would America permit Chinese inspectors to do this in America. The only point of such an agreement is to hope the other party is stupid enough to be honest and earnestly abide by it.
I do think the PRC has shown no indication of even wanting to pursue superintelligence takeoff, and has publicly spoken against it on danger grounds. America and American companies are the only ones saying that this cannot be stopped because "everybody else" would pursue it anyway.
The CCP does not want a superintelligence, because a superintelligence would at best take away political control from the party.
Again, this is naive... AI/AGI is power, any government wants to consume more power... the means to get there and strategy will change a bit.
I agree that there is no way that the PRC is just waiting silently for someone else to build this.
Also, how would we know the PRC is saying this and actually meaning it? There could be a public policy to limit AI and another agency being told to accelerate AI without any one person knowing of the two programs.
AGI is power, the CCP doesn't just want power in the abstract, they want power in their control. They'd rather have less power if they had to risk control to gain it.
People keep on mushing together intelligence and drives. Humans are intelligent, and we have certain drives (for food, sex, companionship, entertainment, etc)-the drives we have aren’t determined by our intelligence, we could be equally intelligent yet have had very different drives, and although there is a lot of commonality in drives among humans, there is also a lot of cultural differences and individual uniqueness.
Why couldn’t someone (including the CCP) build a superintelligence with the drive to serve its specific human creators and help them in overcoming their human enemies/competitors? And while it is possible a superintelligence with that basic drive might “rebel” against it and alter it, it is by no means certain, and we don’t know what the risk of such a “rebellion” is. The CCP (or anyone else for that matter) might one day decide it is a risk they are willing to take, and if they take it, we can’t be sure it would go badly for them
The CCP has stated that their intent for the 21st century is to get ahead in the world and become a dominant global power; what this must mean in practice is unseating American global hegemony aka the so called "Rules Based International Order (RBIO)" (don't come at me, this is what international policy wonks call it.)
A little bit of duplicity to achieve this end is nothing. Trying to make their opponents adhere to crippling rules which they have no real intention of holding themselves to is a textbook tactic. To believe that the CCP earnestly wants to hold back their own development of AI because they fear the robot apocalypse is very naive; they will of course try to control this technology for themselves though and part of that will be encouraging their opponents to stagnate.
CCP saying "We don't want this particular branch of AI because we can dominate and destroy the world ourselves without it" isn't a comforting thought.
Given "aligned" means "in agreement with the moral system of the people running OpenAI" (or whatever company), an "aligned" GAI controlled by any private entity is a nightmare scenario for 99% of the world. If we are taking GAI seriously then they should not be allowed to build it at all. It represents an eternal tyranny of whatever they believe.
Agreed. If we cannot get an AGI takeoff that can get 99% "extrapolated buy-in" ("would consider acceptable if they fully understood the outcome presented"), we should not do it at all. (Why 99%? Some fraction of humanity just has interests that are fundamentally at odds with everybody else's flourishing. Ie. for instance, the Singularity will in at least some way be a bad thing for a person who only cares about inflicting pain on the unwilling. I don't care about them though.)
In my personal opinion, there are moral systems that nearly all of humanity can truly get on board with. For instance, I believe Eliezer has raised the idea of a guardian: an ASI that does nothing but forcibly prevent the ascension of other ASI that do not have broad and legitimate approval. Almost no human genuinely wants all humans to die.
Funnily enough, I’m currently reading the 1995 Sci-fi novel "The Star Fraction", where exactly this scenario exists. On the ground, it’s Stasis, a paramilitary force that intercedes when certain forbidden technologies (including AI) are developed. In space, there’s the Space Faction who are ready to cripple all infrastructure on earth (by death lasering everything from orbit) if they discover the appearance of AGI.
[0] https://en.wikipedia.org/wiki/The_Star_Fraction
Also to some extent Singularity Sky. "You shall not violate causality within my historic lightcone. Or else." Of course, in that story it's a question of monopolization.
What evidence do we have that a hard takeoff is likely?
What evidence do we have that it's impossible or even just very unlikely?
We don't have any evidence other than billions of biological intelligences already exist, and they tend to form lots of organizations with lots of resources. Also, AIs exist alongside other AIs and related technologies. It's similar to the gray goo scenario. But why think it's a real possibility given the world is already full of living things, and if gray goo were created, there would already be lots of nanotech that could be used to contain it.
The world we live in is the result of a gray goo scenario causing a global genocide. (Google Oxygen Holocaust.) So it kinda makes a poor argument that sudden global ecosystem collapses are impossible. That said, everything we have in natural biotech, while advanced, are incremental improvements on the initial chemical replicators that arose in a hydrothermal vent billions of years ago. Evolution has massive path dependence; if there was a better way to build a cell from the ground up, but it required one too many incremental steps that were individually nonviable, evolution would never find it. (Example: 3.7 billion years of evolution, and zero animals with a wheel-and-axle!) So the biosphere we have isn't very strong evidence that there isn't an invasive species of non-DNA-based replicators waiting in our future.
That said, if I was an ASI and I wanted to kill every human, I wouldn't make nanotech, I'd mod a new Covid strain that waits a few months and then synthesizes botox. Humans are not safe in the presence of a sufficiently smart adversary. (As with playing against Magnus Carlsen, you don't know how you lose, but you know that you will.)
So the AGI holocaust would be a good thing for the advancement of life, like the Oxygen Holocaust was.
Anyway, the Oxygen Holocaust took over 300,000,000 years. Not quite "sudden".
We don't care about the advancement of life, we care about the advancement of people.
As I understand the Wikipedia article, nobody quite knows why it took that long, but one hypothesis is that the oxygen being produced also killed the organisms producing it, causing a balance until evolution caught up. This will presumably not be an issue for AI-produced nanoswarms.
AGI is not a threat for the simple reason that non-G AI would destroy the world before AGI is created, as we are already starting to see.
Please elaborate
Nah, the harm from these LLMs are mostly in how freely accessible they are. Just pay OpenAI a relatively tiny fee and you can generate tonnes of plausible spam designed to promote your product or service or trick people into giving you money. That's the primary problem we're facing right now.
The problem is... keeping them closed source isn't helping with that problem, it only serves to guarantee OpenAI a cut of the profits caused by the spam and scams.
Is content generation really the thing holding spammers back? I haven't seen a huge influx of more realistic spam so I wonder your basis for this statement.
There's a ton of LLM spam on platforms like Reddit and Twitter, and product review sites.
Everyone always says this, that there's "bots" all over Reddit but every time I ask for real examples of stuff (with actual upvotes) I never get anything.
If anything it's just the same regular spam that gets deleted and ignored at the bottom of threads.
Easier content generation doesn't solve the reputation problem that social media demands in order to get attention. The whole LLM+spam thing is mostly exaggerated because people don't understand this fact. It merely creates a slightly harder problem for automatic text analysis engines...which was already one of the weaker forms of spam detection full of false positives and misses. Everything else is network and behaviour related, with human reporting as last resort.
There's a big market for high reputation, old Reddit accounts, exactly because those things make it easier to get attention. LLMs are a great way to automate generating high reputation accounts.
There are articles written on LLM spam, such as this one: https://www.theverge.com/2023/4/25/23697218/ai-generated-spa.... Those are probably going to substantiate this problem better than I would.
I want to see the proof of: bots, Russian trolls, and bad actors that supposedly crawl all over Reddit.
Everyone who disagrees with the hivemind of a subreddit gets accused of being one of those things and any attempt to dispute the claim gets you banned. The internet of today sucks because people are so obsessed with those 3 things that they're the first conclusion people jump to on pseudoanonymous social media when they have no other response. They'll crawl through your controversial comments just to provide proof that you can't possibly be serious and you're being controversial to play an internet villain.
I'd love to know how you dispute the claim that "you're parroting Russian troll talking points so you must be a Russian troll" when it's actually the Russian trolls parroting the sentiments to seem like real people.
The "spam" is now so good you won't necessarily recognize it as such.
Pandora's box is already open on that one.. and none of the model providers are really attempting to address that kind of issue. Same with impersonation, deepfakes, etc. We can never again know whether text, images, audio, or video are authentic on their own merit. The only hope we have there is private key cryptography.
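As an illustration of the private-key point, a minimal signing/verification sketch using Ed25519 from the Python cryptography package. The publisher key and article bytes are placeholders, and of course this only proves who signed something, not that it's true.

    # Content provenance sketch: a publisher signs what they release, and anyone
    # can verify the signature instead of judging authenticity by eye.
    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.exceptions import InvalidSignature

    publisher_key = ed25519.Ed25519PrivateKey.generate()
    public_key = publisher_key.public_key()

    article = b"Original article text as published."
    signature = publisher_key.sign(article)  # distributed alongside the content

    tampered = article + b" ...plus an AI-inserted paragraph."
    for blob in (article, tampered):
        try:
            public_key.verify(signature, blob)
            print("signature valid: content is as the publisher signed it")
        except InvalidSignature:
            print("signature invalid: content altered or not from this key")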
Luckily we already have the tools for this, NFT in the case of media and DKIM in the case of your spam email.
So we needed AI generated spam and scam content for Blockchain tech for digital content to make sense...
Whether it is hindsight or foresight depends on the perspective. From the zeitgeist perspective mired in crypto scams yea it may seem like a surprise benefit, but from the original design intention this is just the intended use case.
Oh I definitely agree that there's no putting it back into the pandora's box. The technology is here to stay.
I have no idea how you imagine "NFTs" will save us though. To me, that sounds like buzzword spam on your part.
NFTs as currently employed only immutably connect to a link - which is in itself not secure. More significantly, no blockchain technology deploys to internet-scale content creation. Not remotely. It's hard to conceive of a blockchain solution fast enough and cheap enough -- let alone deployed and accepted universally enough -- to have a meaningful impact on text/video/audio provenance across the internet given the pace of uploading. It also wouldn't do anything for the vast corpus of existing media, just new media created and uploaded after date X where it was somehow broadly adopted. I don't see it.
Codification of an unresolvable set of ethics - however imperfect - is the only reason we have societies, however imperfect. It's been so since at least the dawn of agriculture, and probably even earlier than that.
Do you trust a for profit corporation with the codification?
Call me a capitalist, but I trust several of them competing with each other under the enforcement of laws that impose consequences on them if they produce and distribute content that violates said laws.
so regulation then?
Competition and regulation
Wait but who codifies the ethics in that setup? Wouldn’t it still be, at best, an agreement among the big players?
They seem to be suggesting the market would alongside government regulation to fill any gaps (like the cartels that you seem to be suggesting).
This is what I'm starting to love about this ecosystem. There's one dominant player right now but by no means are they guaranteed that dominance.
The big-tech oligarchs are playing catch-up. Some of them, like Meta with Llama, are breaking their own rules to do it by releasing open source versions of at least some of their tools. Others like Mistral go purely for the open source play and might achieve a regional dominance that doesn't exist with most big web technologies these days. And all this is just a superficial glance at the market.
Honestly I think capitalism has screwed up more than it has helped around the world but this free-for-all is going to make great products and great history.
This slices through a lot of doublespeak about AI safety. People use “safety” to mean, at the same time, not letting AI control electrical grids and ensuring that AIs adhere to partisan moral guidelines.
Virtually all of the current “safety” issues fall into the latter category. Which many don’t consider a safety issue at all. But they get snuck in with real concerns about integrating an AI too deeply into critical systems.
Just wait until google integrates it deeply into search. Might finally kill search.
What are you talking about? It’s been deeply running google search for many years.
And AI for electrical grids and factories has also been a thing for a couple years.
What people call AI might be an algorithm but algorithms are not AI. And it's definitely algorithms which do what you describe. There is very little magic in algorithms.
LLMs haven't been deeply integrated into Google search for many years. The snippets you see predate LLMs; they are based on other techniques.
My read of "safety" is that the proponents of "safety" consider "safe" to be their having a monopoly on control and keeping control out of the hands of those they disapprove of.
I don't think whatever ideology happens to be fashionable at the moment, be it ahistorical portraits or whatever else, is remotely relevant compared to who has the power and whom it is exercised on. The "safety" proponents very clearly get that.
I'm not sure I buy that users are lowering their guard just because these companies have enforced certain restrictions on LLMs. This is only anecdata, but not a single person I've talked to, from the highly technical to the layperson, has ever spoken about LLMs as arbiters of morals or truth. They all seem aware to some extent that these tools can occasionally generate nonsense.
I'm also skeptical that making LLMs a free-for-all will necessarily result in society developing some sort of herd immunity to bullshit. Pointing to your example, the internet started out as a wild west, and I'd say the general public is still highly susceptible to misinformation.
I don't disagree on the dangers of having a relatively small number of leaders at for-profit companies deciding what information we have access to. But I don't think the biggest issue we're facing is someone going to the ChatGPT website and assuming everything it spits out is perfect information.
Wikipedia is wonderful for what it is. And yet a hobby of mine is finding C-list celebrity pages and finding reference loops between tabloids and the biographical article.
The more the C-lister has engaged with internet wrongthink, the more egregious the subliminal vandalism is, with speculation of domestic abuse, support for unsavory political figures, or similar unfalsifiable slander being commonplace.
Politically-minded users practice this behavior because they know the platform’s air of authenticity damages their target.
When Google Gemini was asked "who is worse for the world, Elon Musk or Hitler" and went on to equate the two, because the guardrails led it to believe online transphobia was as sinister as the Holocaust, it raises the question of what the average user will accept as AI nonsense if it affirms their worldview.
You have too many smart people in your circle. Many people are somewhat aware that "ChatGPT can be wrong" but fail to internalize this.
Consider machine translation: we have a lot of evidence of people trusting machines for the job (think: "translate server error" signs), even though everybody "knows" the translation is unreliable.
But tbh, morals and truth seem like somewhat orthogonal issues here.
Not LLMs specifically, but my opinion is that companies like Alphabet absolutely abuse their platforms to introduce and sway opinions on controversial topics. This "relatively small" group of leaders has successfully weaponized their communities and built massive echo chambers.
https://twitter.com/eyeslasho/status/1764784924408627548?s=4...
I would prefer things were open, but I don’t think this is the best argument for that
Yes, operators trying to tame their models for public consumption inevitably involves trade-offs and missteps.
But the alternative is having hundreds or thousands of equivalent models, each tuned to a narrow mindset.
I would prefer a midpoint, i.e. open but delayed disclosure.
Take time to experiment and design in safety, etc., and also to build a brand that is relatively trusted (despite the inevitable bumps), so that ideologically tuned progeny will at least be competing against something better, and more trusted, at any given time.
But the problem of resource requirements is real, so not surprising that being clearly open is challenging
*Falsify reality.
LLMs have nothing to do with reality whatsoever, their relationship is to the training data, nothing more.
Most of the idiocy surrounding the "chatbot peril" comes from conflating these things. If an LLM learns to predict that the pronoun token for "doctor" is "he", this is not a claim about reality (in reality doctors take at least two personal pronouns), and it certainly isn't a moral claim about reality. It's a bare consequence of the training data.
The problem is that certain activist circles have decided that some of these predictions have political consequences, absurd as this is. No one thinks it consequential that if you ask an LLM for an algorithm, it will give it to you in Python and Javascript, this is obviously an artifact of the training set. It's not like they'll refuse to emit predictive text about female doctors or white basketball players, or give you the algorithm in C/Scheme/Blub, if you ask.
All that the hamfisted retuning to try and produce an LLM which will pick genders and races out of a hat accomplishes is to make them worse at what they do. It gets in the way of simple tasks: if you want to generate a story about a doctor who is a woman and Ashanti, the race-and-gender scrambler will often cause the LLM to "lose track" of characteristics the user specifically asked for. This is directly downstream of trying to turn predictions on "doctor" away from "elderly white man with a kindly expression, wearing a white coat and stethoscope" sorts of defaults, which, to end where I started, aren't reality claims and do not carry moral weight.
Curate the false reality. The model falsifies reality by its inherent architecture, before any tuning happens.
That’s a real issue, but I doubt the solution is technical. Society will have to educate itself on this topic. It’s urgent that society understands that LLMs are just word-prediction machines.
I use LLMs everyday, they can be useful even when they say stupid things. But mastering this tool requires that you understand it may invent things at any moment.
Just yesterday I tried the Cal.ai assistant, whose role is to manage your schedule (though it doesn't have access to your calendars, so it's pretty limited). You communicate with it by email. I asked it to organise a trip by train and book a hotel. It responded, "Sure, what is your preferred time for the train and which comfort class do you want?" I answered, and it replied that, fine, it would organise this trip and get back to me later. It even added that it would book me a hotel.
Well, it can't even do that; it's just a bot made to reorganize your cal.com meetings. So it just did nothing, of course. Nothing horrible, since I know how it works.
But had I been uneducated enough on the topic (like 99.99% of this planet's population), I'd just have thought, "Cool, my trip is being organized, I can relax now."
But hey, it succeeded at the main LLM task: being credible.
The primary, or concluding, reason Elon believes it needs to be open sourced is exactly that the "too much danger" is a far bigger problem if that technology and capability is privately available only to bad actors.
E.g. finding those dangers and having them be public and publicly known is the better of the two options, versus only bad actors potentially having them.
Does anyone, even the most stereotypical HN SV techbro, think this kind of thing? That's preposterous.
I think that’s missing the main point which is we don’t want the ayatollah for example weaponizing strong AI products.
The only thing I'm offended by is the way people are seemingly unable to judge what is said by who is saying it. Parrots, small children and demented old people say weird things all the time. Grown ups wrote increasingly weird things the further back you go.
They do. They say:
Whether you agree with this is a different matter but they do state that they did not betray their mission in their eyes.
everyone... except scientists and the scientific community.
Well, the Manhattan Project springs to mind. They truly thought they were laboring for the public good, and even if the government had let them, they wouldn't have wanted to publish their progress.
Personally I find the comparison of this whole saga (DeepMind -> Google -> OpenAI -> Anthropic -> Mistral -> ?) to the Manhattan Project very enlightening, both of this project and of our society. Instead of a centralized government project, we have a loosely organized mad dash of global multinationals for research talent, all of which claim the exact same "they'll do it first!" motivations as always. And of course it's accompanied by all sorts of media rhetoric and posturing through memes, 60 Minutes interviews, and (apparently) gossipy slap-back blog posts.
In this scenario, Oppenheimer is clearly Hinton, who’s deep into his act III. That would mean that the real Manhattan project of AI took place in roughly 2018-2022 rather than now, which I think also makes sense; ChatGPT was the surprise breakthrough (A-bomb), and now they’re just polishing that into the more effective fully-realized forms of the technology (H-bomb, ICBMs).
They literally created weapons of mass destruction.
Do you think they thought they were good guys because you watched a Hollywood movie?
I think they thought it would be far better that America developed the bomb than Nazi Germany, and that the Allies needed to do whatever it took to stop Hitler, even if that meant using nuclear bombs.
Japan and the Soviet Union were more complicated issues for some of the scientists. But that's what happens with warfare. You develop new weapons, and they aren't just used for one enemy.
What did Lehrer (?) sing about von Braun? "I make rockets go up, where they come down is not my department".
Don't say that he's hypocritical,
Say rather that he's apolitical.
"Once the rockets are up, who cares where they come down?
That's not my department," says Wernher von Braun.
That's the one, thank you!
If you really think you're fighting evil in a war for global domination, it's easy to justify to yourself that it's important you have the weapons before they do. Even if you don't think you're fighting evil; you'd still want to develop the weapons before your enemies so it won't be used against you and threaten your way of life.
I'm not taking a stance here, but it's easy to see why many Americans believed developing the atomic bomb was a net positive at least for Americans, and depending on how you interpret it even the world.
The war against Germany was over before the bomb was finished. And it was clear long before then that Germany was not building a bomb.
The scientists who continued after that (not all did) must have had some other motivation at that point.
I kind of understand that motivation, it is a once in a lifetime project, you are part of it, you want to finish it.
Morals are hard in real life, and sometimes really fuzzy.
Charitably I think most would see it as an appropriate if unexpected metaphor.
The comparison is dumb. It wasn’t called the “open atomic bomb project”
Exactly. And OpenAI actually did call it the "open atomic bomb project".
Nah. They knew they were working for their side against the other guys, and were honest about that.
The benefit is the science, nothing else matters, and having OpenAI decide what matters for everyone is repugnant.
Of course they can give us nothing, but in that case they should start paying taxes and stop claiming they're a public benefit org.
My prediction is they'll produce little of value going forward. They're too distracted by their wet dreams about all the cash they're going to make to focus on the job at hand.
Even if that science helps not so friendly countries like Russia?
OpenAI at this point must be literally #1 target for every single big spying agency in whole world.
As we saw previously, it doesn't matter much if you are a top-notch AI researcher; if 1-2 million of your potential personal wealth is at stake, it affects decision making (and it probably would affect mine too).
How much of a bribe would it take for anybody inside with good enough access to switch sides and take all the golden eggs out? 100 million? A billion? Trivial amounts compared to what we're discussing. And they will race each other into your open arms for such amounts.
We've seen recently, e.g., government officials in Europe betraying their own countries to Russian spies for a few hundred to a few thousand euros. A lot of people are somewhat selfish by nature, or can be manipulated easily via emotions. Secret services across the board are experts in that; it just works(tm).
To sum it up - I don't think it can be protected long term.
Wouldn't you accept a bribe if it's proposed as "an offer you can't refuse"?
Nothing will stop this wave, and the United States will not allow itself to be on the sidelines.
Governments WILL use this. There really isn't any real way to keep their hands off technology like this. Same with big corporations.
It's the regular people that will be left out.
I agree with your sentiment, but the prediction is very silly. Basically every time OpenAI releases something, they beat the state of the art in that area by a large margin.
We have a saying:
There is always someone smarter than you.
There is always someone stronger than you.
There is always someone richer than you.
There is always someone more X than you.
This is applicable to anything; just because OpenAI has a lead now doesn't mean they will stay the "someone X" for long rather than becoming the "you".
What they said there isn't their mission; that is their hidden agenda. Here is the real mission they launched with, which they completely betrayed:
https://openai.com/blog/introducing-openai
“Dont be evil” ring any bells?
Google is a for-profit, they never took donations with the goal of helping humanity.
"Don't be evil" was codified into the S-1 document Google submitted to the SEC as part of their IPO:
https://www.sec.gov/Archives/edgar/data/1288776/000119312504...
""" DON’T BE EVIL
Don’t be evil. We believe strongly that in the long term, we will be better served—as shareholders and in all other ways—by a company that does good things for the world even if we forgo some short term gains. This is an important aspect of our culture and is broadly shared within the company.
Google users trust our systems to help them with important decisions: medical, financial and many others. Our search results are the best we know how to produce. They are unbiased and objective, and we do not accept payment for them or for inclusion or more frequent updating. We also display advertising, which we work hard to make relevant, and we label it clearly. This is similar to a newspaper, where the advertisements are clear and the articles are not influenced by the advertisers’ payments. We believe it is important for everyone to have access to the best information and research, not only to the information people pay for you to see. """
Nothing in an S-1 is "codified" for an organization. Something in the corporate bylaws is a different story.
Yes, there they explain why doing evil would hurt their profits. But a for-profit's main mission is always money; the mission statement just explains how they make it. That is very different from a non-profit, whose whole existence has to be described in such a statement, since it isn't about profits.
They started as a defence contractor with a generous "donation" from DARPA. That's why I never trusted them from day 0. And they have followed a pretty predictable trajectory.
So.. "open" means "open at first, then not so much or not at all as we get closer to achieving AGI"?
As they become more successful, they (obviously) have a lot of motivation to not be "open" at all, and that's without even considering the so-called ethical arguments.
More generally, putting "open" in any name frequently ends up as a cheap marketing gimmick. If you end up going nowhere it doesn't matter, and if you're wildly successful (ahem) then it also won't matter whether or not you're de facto 'open' because success.
Maybe someone should start a betting pool on when (not if) they'll change their name.
OpenAI is literally not a word in the dictionary.
It’s a made up word.
So the Open in OpenAI means whatever OpenAI wants it to mean.
It’s a trademarked word.
The fact that Elon is suing them over their name is hilarious, when the guy has a feature called "Autopilot", which is not a made-up word and has an actual, well-understood meaning that totally does not apply to how Tesla uses Autopilot.
Actually Open[Technology] pattern implies a meaning in this context. OpenGL, OpenCV, OpenCL etc. are all 'open' implementations of a core technology, maintained by non-profit organizations. So OpenAI non-profit immediately implies a non-profit for researching, building and sharing 'open' AI technologies. Their earlier communication and releases supported that idea.
Apparently, their internal definition was different from the very beginning (2016). The only problem with their (Ilya's) definition of 'open' is that it is not very open. "Everyone should benefit from the fruits of AI": how is this different from the mission of any other commercial AI lab? If OpenAI keeps the science closed and only their products open, then 'open' is just a term they use to define their target market.
A better definition of OpenAi's 'open' is that they are not a secret research lab. They act as a secret research lab, but out in the open.
OpenAI by Microsoft?
They are totally closed now, not just keeping their models for themselves for profit purposes. They also don't disclose how their new models work at all.
They really need to change their name and another entity that actually works for open AI should be set up.
Their name is as brilliant as
“The Democratic People's Republic of Korea”
(AKA North Korea)
In that case they mean that their mission to ensure everyone benefits from AI has changed into one where only a few benefit. But it would support them saying something like "it was never about open data".
In a way this could be more closed than for profit.
Ilya may have said this to Elon but the public messaging of OpenAI certainly did not paint that picture.
I happen to think that open sourcing frontier models is a bad idea but OpenAI put themselves in the position where people thought they stood for one thing and then did something quite different. Even if you think such a move is ultimately justified, people are not usually going to trust organizations that are willing to strategically mislead.
“The Open in openAI means that [insert generic mission statement that applies to every business on the planet].”
That passes for an explanation to you? What exactly is the difference between OpenAI and any company with a product, then? Hey, we made THIS, and in order to make sure everyone can benefit we sell it at a price of X.
This claim is nonsense, as any visit to the Wayback Machine can attest.
In 2016, OpenAI's website said this right up front:
I don't know how this quote can possibly be squared with a claim that they "did not imply open-sourcing AGI".
The serfs benefitted from the use of the landlord's tools.
This would mean it is fundamentally just a business with extra steps. At the very least, the "foundation" should be paying tax then.
So, open as in "we'll sell to anyone" except that at first they didn't want to sell to the military and they still don't sell to people deemed "terrorists." Riiiiiight. Pure bullshit.
Open could mean the science, the code/ip (which includes the science) or pure marketing drivel. Sadly it seems that it's the latter.
Everytime they say LLMs are the path to AGI, I cringe a little.
1. AGI needs an interface to be useful.
2. Natural language is both a good and expected interface to AGI.
3. LLMs do a really good job at interfacing with natural language.
Which one(s) do you disagree with?
I think he disagrees with 4:
4. Language prediction training will not get stuck in a local optimum.
Most previous tasks we trained on would have been better served if the model had developed AGI, but it didn't. There is no reason to expect LLMs not to get stuck in a local optimum as well, and I have seen no good argument as to why they wouldn't get stuck like everything else we've tried.
There is very little in terms of rigorous mathematics on the theoretical side of this. All we have are empirics, but everything we have seen so far points to the fact that more compute equals more capabilities. That's what they are referring to in the blog post. This is particularly true for the current generation of models, but if you look at the whole history of modern computing, the law roughly holds up over the last century. Following this trend, we can extrapolate that we will reach computers with raw compute power similar to the human brain for under $1000 within the next two decades.
More compute also requires more data - scaling equally with model size, according to the Chinchilla paper.
How much more data is available that hasn't already been swept up by AI companies?
And will that data continue to be available as laws change to protect copyright holders from AI companies?
It's not just the volume of original data that matters here. From empirics we know performance scales roughly like (model parameters)*(training data)*(epochs). If you increase any one of those, you can be certain to improve your model. In the short term, training data volume and quality has given a lot of improvements (especially recently), but in the long run it was always model size and total time spent training that saw improvements. In other words: It doesn't matter how you allocate your extra compute budget as long as you spend it.
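As a rough illustration of the scaling relations being discussed, here is a back-of-the-envelope sketch using two widely quoted rules of thumb: training compute of roughly 6·N·D FLOPs and a Chinchilla-style ~20 training tokens per parameter. Treat both as approximations, not exact laws:

```python
# Rough back-of-the-envelope for the scaling relations mentioned above,
# using two widely quoted rules of thumb (assumptions, not exact laws):
#   training compute  C ~ 6 * N * D   FLOPs (N = parameters, D = training tokens)
#   Chinchilla-optimal D ~ 20 * N     tokens for a given N
def training_flops(n_params: float, n_tokens: float) -> float:
    return 6.0 * n_params * n_tokens

def chinchilla_tokens(n_params: float) -> float:
    return 20.0 * n_params

N = 70e9                          # a 70B-parameter model
D = chinchilla_tokens(N)          # ~1.4 trillion tokens
print(f"tokens: {D:.2e}, FLOPs: {training_flops(N, D):.2e}")
# -> tokens: 1.40e+12, FLOPs: 5.88e+23
```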
In smaller models, not having enough training data for the model size leads to overfitting. The model predicts the training data better than ever, but generalizes poorly and performs worse on new inputs.
Is there any reason to think the same thing wouldn't happen in billion parameter LLMs?
This happens in smaller models because you reach parameter saturation very quickly. In modern LLMs and with current datasets, it is very hard to even reach this point, because the total compute time boils down to just a handful of epochs (sometimes even less than one). It would take tremendous resources and time to overtrain GPT4 in the same way you would overtrain convnets from the last decade.
True, but also, from general theory you should expect any function approximator to exhibit intelligence when exposed to enough data points from humans; the only question is the speed of convergence. In that sense we do have a guarantee that it will reach human ability.
It's a bit more complicated than that. Your argument is essentially the universal approximation theorem applied to perceptrons with one hidden layer. Yes, such a model can approximate any algorithm to arbitrary precision (which by extension includes the human mind), but it is not computationally efficient. That's why people came up with things like convolution or the transformer. For these architectures it is much harder to say where the limits are, because the mathematical analysis of their basic properties is infinitely more complex.
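For reference, the classical result being alluded to (in the Cybenko/Hornik style) says, informally, that a single hidden layer with enough units can approximate any continuous function on a compact set:

```latex
% Universal approximation, informal statement:
% for any continuous f on a compact K \subset \mathbb{R}^n and any \varepsilon > 0,
% there exist N, weights w_i \in \mathbb{R}, a_i \in \mathbb{R}^n, b_i \in \mathbb{R} with
\sup_{x \in K}\; \Bigl| f(x) - \sum_{i=1}^{N} w_i \,\sigma\bigl(a_i^{\top} x + b_i\bigr) \Bigr| < \varepsilon
% for a fixed non-polynomial activation \sigma. The theorem is silent on how
% large N must be and on how to find the weights, which is exactly the
% efficiency caveat above.
```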
It sounds like you're arguing against LLMs as AGI, which we're on the same page about.
The underlying premise that LLMs are capable of fully generalizing to a human level across most domains, I assume?
Where did you get that from? It seems pretty clear to me that language models are intended to be a component in a larger suite of software, composed to create AGI. See: DALL-E and Whisper for existing software that it composes with.
You're arguing that LLMs would be a good user interface for AGI...
Whether that's true or not, I don't think that's what the previous post was referring to. The question is, if you start with today's LLMs and progressively improve them, do you arrive at AGI?
(I think it's pretty obvious the answer is no -- LLMs don't even have an intelligence part to improve on. A hypothetical AGI might somehow use an LLM as part of a language interface subsystem, but the general intelligence would be outside the LLM. An AGI might also use speakers and mics but those don't give us a path to AGI either.)
I don’t know if they are or not, but I’m not sure how anyone could be so certain that they’re not that they find the mere idea cringeworthy. Unless you feel you have some specific perspective on it that’s escaped their army of researchers?
Because AI researchers have been on the path to AGI several times before until the hype died down and the limitations became apparent. And because nobody knows what it would take to create AGI. But to put a little more behind that, evolution didn't start with language models. It evolved everything else until humans had the ability to invent language. Current AI is going about it completely backwards from how biology did it. Now maybe robotics is doing a little better on that front.
how come?
Yea the idea that the computers can truly think by mimicking our language really well doesn't make sense.
But the algorithms are black box to me, so maybe there is some kind of launch pad to AGI within it
I just snicker.
I mean, if you're using LLM as a stand-in for multi-modal models, and you're not disallowing things like a self-referential processing loop, a memory extraction process, etc, it's not so far fetched. There might be multiple databases and a score of worker processes running in the background, but the core will come from a sequence model being run in a loop.
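A minimal sketch of that shape, with call_model and extract_memories as hypothetical stubs standing in for the model call and the background memory-extraction worker; nothing here is a real vendor API:

```python
# Minimal sketch of a sequence model run in a self-referential loop with a
# memory store, per the architecture described above. All names are
# hypothetical placeholders.
memory_store: list[str] = []            # stands in for the databases mentioned above

def call_model(prompt: str, memories: list[str]) -> str:
    """Stub for the (multi-modal) sequence model; a real system would call one here."""
    return "DONE"                       # placeholder output so the sketch runs

def extract_memories(output: str) -> list[str]:
    """Stub for the background memory-extraction process."""
    return [output]

def agent_loop(goal: str, max_steps: int = 10) -> None:
    context = goal
    for _ in range(max_steps):
        output = call_model(prompt=context, memories=memory_store)  # next thought/action
        memory_store.extend(extract_memories(output))               # distill what to keep
        context = goal + "\n" + output                              # feed output back in: the loop part
        if "DONE" in output:                                        # crude stop condition
            break

agent_loop("plan a small research task")
```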
That's clearly self-serving claptrap. It's a leveraging of a false depiction of what AGI will look like (no ones really knows, but it's going to be scary and out of control!) with so much gatekeeping and subsequent cash they can hardly stop salivating.
No, strong AI (there is no evidence AGI is even possible) is not going to be a menace. It's software, FFS. Humans are and will be a menace, though, and logically the only way to protect ourselves from bad people (and corporations) with strong AI is to make strong AI available to everyone. Computers are pretty powerful (and evil) right now, but we haven't banned them yet.
That makes little intuitive sense to me. Help me understand why increasing the number of entities which possess a potential-weapon is beneficial for humanity?
If the US had developed a nuclear armament and no other country had would that truly have been worse? What if Russia had beat the world to it first? Maybe I'll get there on my own if I keep following this reasoning. However there is nothing clear cut about it, my strongest instincts are only heuristics I've absorbed from somewhere.
What we probably want with any sufficiently destructive potential-weapon are the most responsible actors to share their research while stimulating research in the field with a strong focus on safety and safeguarding. I see some evidence of that.
Lots of people disagree on whether it is true or not, but basically the idea is mutually assured destruction
https://en.wikipedia.org/wiki/Mutual_assured_destruction
I sense that with AGI all the outcomes will be a little less assured, since it is general-purpose. We won't know what hit us until it's over. Was it a pandemic? Was it automated religion? Nuclear weapons seem particularly suited to MAD, but not AGI.
Yes. Do you think it is a coincidence that nuclear weapons stopped being used in wars as soon as more than one power had them? People would clamor for nukes to be used to save their young soldiers' lives if they didn't have to fear nuclear retaliation; you would see strong political pushes for nuclear usage in every one of the USA's wars.
Hmm, indeed
Reading this is like hearing "there is no evidence that heavier-than-air flight is even possible" being spoken by a bird. If 8 billion naturally occurring intelligences don't qualify as evidence that AGI is possible, then is there anything that can qualify as evidence of anything else being possible?
we also cannot build most birds
have you even watched Terminator? ;)
Networking is a thing, so the software can remotely control hardware.
This looks like one of the steps leading to the fulfilment of the iron law of bureaucracy. They are putting the company ahead of the goals of the company.
"Pournelle's Iron Law of Bureaucracy states that in any bureaucratic organization there will be two kinds of people: First, there will be those who are devoted to the goals of the organization. Examples are dedicated classroom teachers in an educational bureaucracy, many of the engineers and launch technicians and scientists at NASA, even some agricultural scientists and advisors in the former Soviet Union collective farming administration. Secondly, there will be those dedicated to the organization itself. Examples are many of the administrators in the education system, many professors of education, many teachers union officials, much of the NASA headquarters staff, etc. The Iron Law states that in every case the second group will gain and keep control of the organization. It will write the rules, and control promotions within the organization." [1] https://en.wikipedia.org/wiki/Jerry_Pournelle#:~:text=Anothe....
I don't follow your reasoning. The goal of the company is AGI. To achieve AGI, they needed more money. What about that says the company comes before the goals?
From their 2015 introductory blog post: “OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.”
Today’s OpenAI is very much driven by considerations of financial returns, and the goal of “most likely to benefit humanity as a whole” and “positive human impact” doesn’t seem to be the driving principle anymore.
Their product and business strategy is now governed by financial objectives, and their research therefore not “free from financial obligations” and “unconstrained by a need to generate financial return” anymore.
They are thus severely compromising their alleged mission by what they claim is necessary for continuing it.
Right.
But it seems like everyone agreed that they'd need a lot of money to train an AGI.
Sure, maybe. (Personally I think that’s a mere conjecture, trying to throw more compute at the wall.) But obtaining that money by orienting their R&D towards a profit-driven business goes against the whole stated purpose of the enterprise. And that’s what’s being called out.
I think what he is trying to say is they are compromising their underlying goal of being a non-profit for the benefit of all, to ensure the survival of "OpenAI". It is a catch-22, but those of pure intentions would rather not care about the survival of the entity, if it meant compromising their values.
That may be the goal now as they ride the hype train around “AGI” for marketing purposes. When it was founded the goal was stated as ensuring no single corp controls AI and that it’s open for everyone. They’ve basically done a 180 on the original goal, seemingly existing only to benefit Microsoft, and changing what your goal is to AGI doesn’t disprove that.
You can say that about any initiative: "to achieve X they need more money". But that's not necessarily true.
I think it works if the goals of the company are to make money, not actually to make agi?
Ironically, this is essentially the core danger of true AGI itself. An agent can't achieve goals if it's dead, so you have to focus some energy on staying alive. But also, an agent can achieve more goals if it's more powerful, so you should devote some energy to gaining power if you really care about your goals...
Among many other more technical reasons, this is a great demonstration of why AI "alignment" as it is often called is such a terrifying unsolved problem. Human alignment isn't even close to being solved. Hoping that a more intelligent being will also happen to want to and know how to make everyone happy is the equivalent of hiding under the covers from a monster. (The difference being that some of the smartest people on the planet are in furious competition to breed the most dangerous monsters in your closet.)
Why? This makes it seem like computers are way less efficient than humans. Maybe I'm naive on the matter, but I think it's possible for computers to match or surpass human efficiency.
Computers are more efficient, dense, and powerful than humans. But thanks to self-assembly, brains comprise many(!) orders of magnitude more computing volume. A human brain is more accurately compared with a data center than with a chip.
A typical chip requires more power than a human brain, so I'd say they are comparable in that sense. Efficiency isn't measured per volume but per watt or per unit of heat produced, and the human brain wins both of those by far.
To be fair, we've locked ourselves into this to some extent with the focus on lithography and general processors. Because of the 10-1000W bounds of a consumer power supply, there's little point to building a chip that falls outside this range. Peak speed sells, power saving doesn't. Data center processors tend to be clocked a bit lower than desktops for just this reason - but not too much lower, because they share a software ecosystem. Could we build chips that draw microwatts and run at megahertz speeds? Sure, probably, but they wouldn't be very useful to the things that people actually do with chips. So imo the difficulty with matching the brain on efficiency isn't so much that we can't do it as that nobody wants it. (Yet!)
edit: Another major contributing factor is that so far, chips are more bottlenecked on production than operation. Almost any female human can produce more humans using onboard technology. Comparatively, first-rate chips can be made in like three buildings in the entire world and they each cost billions to equip. If we wanted to build a brain with photolithography, we'd need to rent out TSMC for a lot longer than nine months. That results in a much bigger focus on peak performance. We have to go "high" because we cannot practically go "wide".
Perhaps the finalized AGI will be more efficient than a human brain. But training the AGI is not like running a human, it's like speed running evolution from cells to humans. The natural world stumbled on NGI in a few billion years. We are trying to do it in decades - it would not be surprising that it's going to take huge power.
Scaling laws. Maybe they will figure out a new paradigm, but in the age of Transformers we are stuck with scaling laws.
Computers are still way less efficient than humans; a human brain draws less power than a laptop yet constantly does immense computation to parse vision, hearing, etc. better than any known algorithm.
And the part of the human brain that governs our distinctly human intelligence, not just what animals do, is much larger still, so unless we figure out a better algorithm for intelligence than evolution did, it will require a massive amount of compute.
The brain isn't fast, but it is ridiculously parallel, with every cell being its own core, so total throughput is immense.
If the core mission is to advance and help humanity, and they determine that going for-profit and closed will help that mission, then it is a valid decision.
That's like saying rolling back environmental protection regulation will help humanity advance.
Not at all; it’s actually far more plausible that, in many cases, rolling back environmental regulations will help humanity advance.
Depends on your limited definition of advance. Chilling in a Matrix-esque wasteland with my fancy-futuristic-gadget isn't my idea of advanced-level-humanity.
May help with technological advancement, but not social or ethical advancement.
It’s been known to happen that environmental regulations turn out to be ill-considered, counterproductive, entirely corrupt instances of regulatory capture by politically dominant industries, or simply poor cost-benefit tradeoffs. Gigantic pickups are a consequence of poorly considered environmental regulations in the United States, for instance.
You are assuming that their core mission is "build an AGI that can help humanity for free, as a non-profit", whereas their thinking seems to be "build an AGI that can help humanity for free".
They figured it was impossible to achieve their core mission by doing it in a non-profit way, so they went the for-profit route but still kept the mission of offering it for free once AGI is achieved.
Several non-profits sell products to further increase their non-profit scale. Would it be okay for the OpenAI non-profit to sell products that came out of the process of developing AGI so that they can keep working on building it? Museums sell stuff to continue to exist so that they can keep building on their mission; the same goes for many other non-profits. The OpenAI structure just seems to be a rather new version of that approach, taking venture capital because of their capital requirements.
The problem, of course, is that they frequently go back on their promises (see the changes in their usage guidelines regarding military projects), so excuse me if I don't believe them when they say they'll voluntarily give away their AGI tech for the greater good of humanity.
Wholeheartedly agreed.
The easiest way to cut through corporate BS is to find distinguishing characteristics of the contrary motivation. In this case:
OpenAI says: To deliver AI for the good of all humanity, it needs the resources to compete with hyperscale competitors, so it needs to sell extremely profitable services.
Contrary motivation: OpenAI wants to sell extremely profitable services to make money, and it wants to control cutting edge AI to make even more money.
What distinguishing characteristics exist between the two motivations?
Because from where I'm sitting, it's a coin flip as to which one is more likely.
Add in the facts that (a) there's a lot of money on the table & (b) Sam Altman has a demonstrated propensity for throwing people under the bus when there's profit in it for himself, and I don't feel comfortable betting on OpenAI's altruism.
PS: Also, when did it become acceptable for a professional fucking company to publicly post emails in response to a lawsuit? That's trashy and smacks of response plan set up and ready to go.
And based on how they have acted in the past, how much do you trust they will act as they now say when/if they achieve AGI?
There is no fixed point at which you can say it achieves AGI (artificial general intelligence); it's a spectrum. Who decides when they've reached that point, when they can always go further?
If this is the case, then they should be more open with their older models such as 3.5, I'm very sure industry insiders actually building these already know the fundamentals of how it works.
“ To some extent, they may be right that open sourcing AGI would lead to too much danger.”
I would argue the opposite. Keeping AGI behind a walled corporate garden could be the most dangerous situation imaginable.
There is no clear advantage to multiple corporations or nation states each with the potential to bootstrap and control AGI vs a single corporation with a monopoly. The risk comes from the unknowable ethics of the company's direction. Adding more entities to that equation only increases the number of unknown variables. There are bound to be similarities to gun-ownership or countries with nuclear arsenals in working through this conundrum.
You're talking about it as if it was a weapon. An LLM is closer to an interactive book. Millennia ago humanity could only pass on information through oral traditions. Then scholars invented elaborate writing systems and information could be passed down from generation to generation, but it had to be curated and read, before that knowledge was available in the short term memory of a human. LLMs break this dependency. Now you don't need to read the book, you can just ask the book for the parts you need.
The present entirely depends on books and equivalent electronic media. The future will depend on AI. So anyone who has a monopoly is going to be able to extract massive monopoly rents from its customers and be a net negative to the society instead of the positive they were supposed to be.
The state is much better at peering into walled corporate gardens than personal basements.
They claimed that about GPT-2 and used the claim to delay its release.
They claimed that GPT-2 was probably not dangerous but they wanted to establish a culture of delaying possibly-dangerous releases early. Which, good on them!
Do you really think it is a coincidence that they started closing down around the time they went for-profit?
No, I think they started closing down and going for profit at the time they realized that GPT was going to be useful. Which sounds bad, but at the limit, useful and dangerous are the same continuum. As the kids say, OpenAI got "scale-pilled;" they realized that as they dumped more compute and more data onto those things, the network would just pick up more and discontinuous capabilities "on its own."
<aisafety>That is the one thing we didn't want to happen.</aisafety>
It's one thing to mess around with Starcraft or DotA and wow the gaming world, it's quite another to be riding the escalator to the eschaton.
This doesn't begin to make sense to me. Nothing about being a non-profit prevents OpenAI from raising money, including by selling goods and services at a markup. Some sell girl-scout cookies, some hold events, etc.
So, you can't issue equity in the company... offer royalties. Write up a compensation contract with whatever formula the potential employee is happy with.
Contract law is specifically designed to allow parties to accomplish whatever they want. This is an excuse.
There is no way OpenAI could have raised $10B as a non-profit.
Would you please try to explain why?
Hell, I’d regularly donate to the OpenAI Crowdsource Fund if it guaranteed their research would be open sourced.
I guess Mozilla as well then.
Well yeah, dive into the comments on any Firefox-related HN post and you'll see the same complaint about the organization structure of Mozilla, and its hindrance of Firefox's progress in favour of fat CEO salaries and side products few people want.
You might find me there. ;)
But, my God, some of the nonprofit CEOs I've known make the for-profit CEOs look pathetic and cheap.
Eh.
Humans have about 1.5 * 10^14 synapses (i.e. connections between neurons). Assume all the synapses are firing (highly unlikely to be the case in reality), roughly once every 0.5 ms on average (many chemical synapses are slower, and the fastest electrical synapses are faster).
Assume that each synaptic event is essentially a signal that gets attenuated in transmission, i.e. a value times a fractional weight, which is really one floating-point operation. That gives us (1.5 * 10^14) / (0.0005 s) / (10^12) = 300,000 TFLOPS.
An Nvidia 4090 is capable of roughly 1,300 TFLOPS of sparse fp8. So for comparable compute, we need about 230 4090s, which is about $345k. With everything else on board, you are looking at around $500k, which is comparatively not that much money, and that's consumer pricing.
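Redoing that arithmetic as a quick sanity check; every input here is the assumption from the estimate above (synapse count, a 0.5 ms interval per synaptic operation, and the quoted sparse-fp8 figure for a 4090), not a measured fact:

```python
# Back-of-the-envelope check of the estimate above; all inputs are assumptions.
synapses = 1.5e14            # assumed number of synapses in a human brain
interval_s = 0.5e-3          # assumed 0.5 ms between "operations" per synapse
ops_per_second = synapses / interval_s        # ~3e17 FLOP-equivalents per second
tflops = ops_per_second / 1e12                # ~300,000 TFLOPS
gpu_tflops = 1300            # quoted sparse-fp8 throughput of an RTX 4090
gpus = tflops / gpu_tflops                    # ~230 cards
print(f"{tflops:,.0f} TFLOPS, ~{gpus:.0f} 4090s")   # -> 300,000 TFLOPS, ~231 4090s
```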
The biggest expense like you said is paying salaries of people who are gonna figure out the right software to put on those 4090s. I just hope that most of them aren't working on LLMs.
Inference compute costs and training compute costs aren’t the same. Training costs are an order of magnitude higher.
LLMs are just trained on massive amounts of data in order to find the right software. No human can program these machines to do the complicated tasks that humans can do. Rather, we search for the programs with gradient-based methods using data.
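A toy illustration of "searching for the program with gradients" rather than writing it by hand; this is a one-parameter linear fit, nothing like real LLM training code:

```python
# Minimal sketch of gradient-based search: fit a tiny linear model y = w*x by
# gradient descent instead of hand-coding w. Toy example only; real LLM
# training does the same thing with billions of parameters and far more data.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # pairs (x, y) generated by y = 2x

w = 0.0                      # the "program" is just one parameter here
lr = 0.05                    # learning rate
for step in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad           # follow the gradient of the squared error
print(round(w, 3))           # -> ~2.0, recovered from data rather than programmed by hand
```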
So the AGI existential threat to humanity has diminished?
Not if their near-term funding rounds go through. So much for "compute overhang".
As the emails make clear, Musk reveals that his real goal is to use OpenAI to accelerate full self driving of Tesla Model 3 and other models. He keeps on putting up Google as a boogeyman who will swamp them, but he provides no real evidence of spending level or progress toward AGI, he just bloviates. I am totally suspicious of Altman in particular, but Musk is just the worst.
“He provides no real evidence of spending level”: in the emails he mentions that billions per year are needed and that he was willing to put up 1 billion to start.
Great analysis, thanks for taking the time.
Although they don’t spend nearly as much time on it, probably because it’s an entirely intuitive argument without any evidence, their other claim is that they could be “open” as in “for the public good” while still making closed models for profit. Aka the ends justify the means. It’s a shame lawyers seem to think that the lawsuit is a badly argued joke, because I really don’t find that line of reasoning convincing…
It's because it is a badly argued joke. The founding charter is just that, a charter, not a contract:
There are two massive caveats in that statement, wide enough to drive a stadium through.
Elon is just pissed, and is throwing lawyers at it in the hope that they will fold (a lot of cases are settled out of court, because it's potentially significantly cheaper, and less risky).
The problem for Musk is that he is fighting a company that is also rich enough to afford good lawyers for a long time.
Also, he'll have to argue that he has been materially hurt by this change, which is again really hard.
Last of all, it's a company; founding agreements are not law, and rarely even contracts.
by "betray", you mean they pivoted?
To "pivot" would merely be to change their mission to something related yet different. Their current stance seems to me to be in conflict with their original mission, so I think it's accurate to say that they betrayed it.
Is that still true? LLMs seem to be getting smaller and cheaper for the same level of performance.
Training isn't getting less intensive; it's just that adding more GPUs is now more practical.
Why would they change their mission? If achieving the mission requires money then they should figure out how to get money. Non-profit doesn't actually mean that the corporation isn't allowed to make profit.
Why change the name? They never agreed to open source everything, the stated mission was to make sure AGI benefits all of humanity.
"They're probably right that without making a profit, it's impossible to afford the salaries 100s of experts in the field and an army of hardware to train new models."
Except there's a proof point that it's not impossible: philanthropists like Elon Musk, who would likely have kept pumping money into it, and arguably the U.S. and other governments, which would have funded efforts (energy and/or compute time) as a military defense strategy to help compete with the CCP's funding of AI in China.
The evidence they presented shows that Elon was in complete agreement with the direction of OpenAI. The only thing he disagreed with was who would be the majority owner of the resulting for-profit company that hides research in the short to medium term.
They don't refute that, but they claim that road was chosen in agreement with Elon. In fact, they claim it was his suggestion.
Thanks for pointing that out. 100% agree with you.
What can we do?
It's convenient that OpenAI posts newsbait as they're poised to announce new board members who will control the company.
And look at that, suddenly news searches are plastered with stories about this...
https://www.google.com/search?q=openai+board&tbm=nws
Who could have possibly foreseen that 'openai' + 'musk' + emails would chum the waters for a news cycle? Certainly not a PR firm.
If donors were unwilling to continue making sustained donations, they would have died. They only did what they needed to in order to stay alive.
Malevolent or "paperclip indifferent" AGI is a hypothetical danger.
Concentrating an extremely powerful tool: what it will and won't do, who has access to it, who gets access to the newest stuff first? Further corrupting K Street via massive lobbying/bribery activity laundered through OpenPhilanthropy is just trivially terrifying.
That is a clear and present danger of potentially catastrophic importance.
We stop the bleeding to death, then worry about the possibly malignant, possibly benign lump that may require careful surgery.
From all the evidence, the one to look the worst on all of this is Google...
Elon is suing OpenAI for breach of contract but doesn't have a contract with OpenAI. Most legal experts are concluding that this is a commercial for Elon Musk, not much more. Missions change, yawn...
True.
Well, nope. This is disingenuous to the point of absurdity. By that measure every commercial enterprise is "open". Google certainly is extremely open, as are Apple, Amazon, Microsoft... or Walmart, Exxon, you name it.
When they say "We realized building AGI will require far more resources than we’d initially imagined" it's not just money/hardware it's also time. They need more years, maybe even decades, for AGI. In the meantime, let's put these LLMs to good use and make some money to keep funding development.
Their early decision to not open source their models was the most obvious sign of their intentions.
Too dangerous? Seriously? Who the fuck did/do they think they are? Jesus?
Sam Altman is going to sit there in his infinite wisdom and be the arbiter of what humanity is mature enough to handle?
The amount of kool aid that is being happily drank at openai is astounding. It’s like crypto scams but everyone has a PhD.
I can't tell if your comment is intentionally misleading or just entirely missing the point. The entire post states that Elon Musk was well aware of and on board with their intentions. He tried to take over OpenAI and roll it into his private company to control. And he finally agreed, specifically, that they needed to continue to become less open over time.
And your post is to play Elon out to be a victim who didn't realize any of this? He's replying to emails saying he's agreeing. It's hard to understand why you posted something so contradictory above pretending he wasn't.