I think one should attribute a good amount of credit for this to Yann LeCun (head of Meta's research). From an early stage he has been vocal about keeping things open source, and voiced that to Mark before even joining (and credit to Mark for keeping it that way).
It probably also stems from his experience working at Bell Labs, where his pioneering work relied heavily on things that were openly available, as is still the case in academia.
The man has been repeating that open source is the better option, and has been very vocal in his opposition to "regulate AI (read: let a handful of US-government-aligned closed-source companies have a complete monopoly on AI under the guise of ethics)". On this point he (and Meta) stands in pretty stark contrast to a lot of big-tech AI mainstream voices.
I myself recently moved some of my research tech stack to Meta's tooling, and unlike platforms that basically stop being supported as soon as the main contributor finishes his PhD, it has been great to work with (and they're usually open to feedback and fixes/PRs).
Regulatory capture would certainly be a bad outcome. Unfortunately, the collateral damage has been to also suppress regulation aimed at making AI safer, advocated by people who are not interested in AI company profits, but rather in arguing that "move fast and break things" is not a safe strategy for building AGI.
It's been remarkably effective for the original "move fast and break things" company to attempt to sweep all safety under the rug by claiming that it's all a conspiracy by Big AI to do regulatory capture, and in the process, leave themselves entirely unchecked by anyone.
When Sam Altman is calling for AI regulation, yes, it is a conspiracy by big AI to do regulatory capture. What is this regulation aimed at making AI safer that you refer to anyway? Because I certainly haven't heard of it. Furthermore, there doesn't seem to be any agreement on whether or how AI, in anything remotely like its current state, is dangerous, or how to mitigate that danger. How can you even attempt to regulate in good faith without that?
Sure; that's almost certainly not being done in good faith.
When numerous AI experts and luminaries who left their jobs in AI are advocating for AI regulation, that's much more likely to be done in good faith.
https://pauseai.info/
You could also write that as "there's no agreement that AI is safe". But that aside...
Most of the arguments about AI safety are not about current AI technology. (There are some reasonable arguments about the use of AI for impersonation, such as that AI-generated content should be labeled as such, but those are much less pressing and they aren't the critical part of AI safety.)
The most critical arguments about AI safety are not primarily about current technology. They're about near-future expansions of AI capabilities.
https://arxiv.org/abs/2309.01933
Pause capabilities research until we have proven strategies for aligning AI to human safety.
We don't know that it's safe, many people are arguing that it's dangerous on an unprecedented scale, there are no good refutations of those arguments, and we don't know how to ensure its safety. That's not a situation in which "we shouldn't do anything about it" is a good idea.
How can you even attempt to regulate biological weapons research without having a good way to mitigate it? By stopping biological weapons research.
> When numerous AI experts and luminaries who left their jobs in AI are advocating for AI regulation, that's much more likely to be done in good faith.
Their big revelation that they left their jobs over is that AI might be used to usurp identities, which, admittedly, is entirely plausible and arguably already something we're starting to see happen. It is humorous that your takeaway from that is that we need to double down on their identity instead of realizing that identity is a misguided and flawed concept.
Is it condescending to describe a differing opinion as "humorous?" It came across as quite rude.
Let's assume, for the sake of discussion, that it is. Is rudeness not rooted in the very same misguided and flawed identity concept? I don't suppose a monkey that you cannot discern from any other monkey giving you the middle finger conjures any feelings of that nature. Yet here the output of software has, I suspect because the software presents the message alongside some kind of clear identity marker. But is it not irrational to be offended by the output of software?
In principle, I'm not opposed to a pause.
However, in practice, enforcing a pause entails a massive global increase in surveillance and restrictions on what can be done with a computer.
So we now have an agency with a register of all computer sales? If you give a computer with a GPU to a friend or family member, that's breaking the law unless you also report it? This takes us in a very scary direction.
This has to be a global effort, so we need a system of international enforcement to make it happen. We've been marginally successful in limiting proliferation of nuclear weapons, but at significant international and humanitarian costs. Nuclear weapons require a much more specialized supply chain than AI so limiting and monitoring adversarial access was easier.
Now we want to use the same techniques to force adversarial regimes to implement sane regulations around the sale and usage of computer equipment. This seems absolutely batshit insane. In what world does this seem remotely feasible?
We've already tried an easier version of this with the biological weapons research ban. The ban exists but enforcement doesn't. In that case, a huge part of the issue was that facilities doing that research are very similar, or identical, to facilities doing all kinds of other research.
An AI Pause has the same issue, but it is compounded by the fact that AI models can grant significant economic advantage in a way that nuclear/biological weapons don't, so the incentives to find a way to skirt regulations are higher. (Edit: it's further complicated by the fact that the AI risks the Pause tries to mitigate are theoretical and people haven't seen them, unlike biological/nuclear ones. This makes concerted global action harder.)
A global pause on AI research is completely unrealistic. Calls for a global pause are basically calls for starting regime change wars around the globe and even that won't be sufficient.
We have to find a different way to mitigate these risks.
Great comment!
It's often overlooked that sometimes any implementation of an Obvious Good Thing requires an Obvious Bad Thing.
In which case we need to weigh the two.
In the case of a prerequisite panopticon, I usually come down against. Even after thinking of the children.
If the power of AI deserves this reaction, then governments are in a race to avoid doing this. We might be able to keep it out of the hands of the average person, but I don't find that to be the real threat (and what harm that does exist at the individual level is from a Pandora's box that has already been opened).
Think of it like stopping our nuclear research program before nuclear weapons were invented. A few countries would have stopped but not all and the weapons would have appeared on the world stage all the same, though perhaps with a different balance of power.
Also, is the threat of AI enough to be willing to kill people to stop it? If not, then government vs government action won't take place to stop it and even intra-government bans end up looking like abuse. If it is... I haven't thought too much on this conditional path because I have yet to see anyone agree that it is.
Then again, perhaps I have a conspiratorial view of governments because I don't believe that those in power stopped biological weapons research as it is too important to understand even if just from a defensive perspective.
This is coincidentally a ridiculously bad-faith argument, and I think that you know that.
The fact that one particular person is advocating for AI regulation does not mean that all calling for AI regulation are doing so due to having the same incentives.
This is exactly the point the parent poster is making. It feels like you only skimmed the comment before replying to it.
When I talk to folks like GP, they often assert that the non-CEO people who are advocating for AI safety are essentially non-factors, that really the only people whose policy agendas will be enacted are those who already have power, and therefore that the opinions of everyday activists don’t need to be considered when discussing these issues.
It’s a darkly nihilistic take that I don’t agree with, but I just wanted to distill it here because I often see people imply it without stating it out loud.
I think the whole "AGI safety" debate is a red herring that has taken attention away from the negative externalities of AI as it exists today. Namely, (even more) data collection from users and questions of IP rights around models and their outputs.
We can do more than one thing at a time. (Or, more to the point, different people can do different things.) We can advocate against misuses of current capabilities, and advocate about the much larger threats of future capabilities.
We really can’t. We are terrible at multitasking.
If you look around you’ll see that there are indeed very many people who are doing very different things from one another and without much centralized coordination.
Parent is likely referring to political/mass pressure behind initiatives.
In which case the lack of a clear singular message, when confronted with a determined and amoral adversary, dissolves into confusion.
Most classically, because the adversary plants PR behind "It's still an open debate among experts."
See: cigarettes, climate change
There's a big fucking difference between people who want to regulate AI because they might become a doomsday terminator paperclip factory (the register-model-with-government-if-they-are-too-big-crowd) and the folks who want to prevent AI being used to indirectly discriminate in hiring and immigration.
Also the potential for massive job losses and even more wealth inequality. I feel a lot of the people who are philosophizing about AI safety are well-off people who are worried about losing their position of influence and power. They don't care about the average guy who will lose his job.
This regulation can only be done through a clueless government body listening to sneaky AI leaders. Shall we turn it into the next healthcare? Premature regulation is the last thing you want.
For the bigger picture, it's time for civilization to realize that speech itself is dangerous and to build something that isn't so prone to "someone with <n>M subs said something and it began". Without such a transformation, it will get stuck in this era of bs forever. "Safety" is the rug. It hides the wires while the explosives remain armed. You can only three-monkey it for so long.
This seems like an uninformed rant to me. I’m not even sure where you’re trying to go with that.
Do you know offhand, approximately what percentage of the White House AI Council members are from the private sector? The government doesn’t need to seek advice from tech bro billionaires.
If we are interested in AGI safety, we should experiment with slightly unsafe things before they become hugely unsafe, instead of trying to fix known unknowns while ignoring unknown unknowns.
We should open source current small models and observe what different people are actually doing with them. How they abuse it. We will never invent some things on our own.
That's all very nice and good, but Meta keeps things open (for now) because it's perfectly aligned with their business goals. It couldn't have happened any other way.
You make it sound like it's a bad thing that it aligns with their business goals. I'd turn this around: if it didn't align with their business goals I would be worried that they would course correct very soon.
No, just that they shouldn't be showered with praise for doing what is in their best interests - commoditizing their complement.
For the same reason Microsoft doesn't deserve credit for going all in on linux/open source when they did. They were doing it to stave off irrelevance that came from being out-competed by open source.
They were not doing it because they had a sudden "come to jesus" moment about how great open source was.
In both cases, it was a business decision being marketed as an ideological decision.
I am sure that they would have found a way to also align a closed approach with their business goals if they wanted to.
However, they chose not to. And I assume the history of e.g. Android, PyTorch, and other open source technologies had a lot to do with it.
I believe in praise for any company that finds a way to profit and do the right thing.
If they don't profit, then they don't have resources to do those things in addition to not being able to provide a livelihood for their workers.
The suggestion was to credit LeCun, not Meta. (Perhaps you were responding to the secondary suggestion also to credit Zuckerberg?)
> Meta keeps things open (for now) because it's perfectly aligned with their business goals.
How?
I would have said a flood of LLM-generated spam is a pretty big threat to Facebook's business. Facebook don't seem to have any shortage of low/medium quality content; it's not like they need open-weights LLMs to increase the supply of listicles and quizzes, which are already plentiful. There isn't much of a metaverse angle either. And they risk regulatory scrutiny, because everyone else is lobotomising their models.
And if they wanted a flood of genai content - wouldn't they also want to release an image generation model, to ensure instagram gets the same 'benefits' ?
Sure there are some benefits to the open weights LLM approach that make them better at LLMs - I'm sure it makes it easier for them to hire people with LLM experience for example - but that's only helpful to the extent that Facebook needs LLMs. And maybe they'll manage to divert some of that talent to ad targeting or moderation - but that hardly seems 'perfectly aligned', more of a possible indirect benefit.
In a recent interview, Mark Zuckerberg said they're spending $10B-$100B on training and inferencing over the next X years. They see open source as a way to get the community to cut that cost. In his view, even just 10% cheaper inferencing pays for a lot.
It is also perfectly aligned with Yann's goals as an (academic) researcher, whose career is built on academic community kudos far more than on, say, building a successful business.
I'd definitely rather build a product on an assumption that a company/individual will continue to act in its own best interest than on its largess.
That's a good thing.
Does open source just not count if you have an alternative business model? Even big open source projects hold on to enterprise features for funding. What company would meet your criteria for a proper open source contributor?
Isn't this more attributable to the fact that whilst OpenAI's business model is to monetise AI, FB has another working business model, and it costs them little to open source their AI work (whilst restricting access by competitors)?
Yeah, the way I see it, Meta is undermining OpenAI's business model because it can. I have serious doubts Meta would be doing what it does with OpenAI out of the picture.
Things like PyTorch help everyone (massively!), including OpenAI.
Another of Meta's major "open source" initiatives is Open Compute which has nothing to do with OpenAI.
I see zero relationship between Meta's open source initiatives and OpenAI. Why would there be? OpenAI is not a competitor, and in fact help push the field of AI forwards which is helpful to Meta.
Meta's advantage in AI is that they have leading-scale and continuous feeds of content to which they have legal entitlement. (Unlike OpenAI)
If the open state of the art keeps pushing forward and stays cutting edge, Meta wins (in English) by default.
Also, Meta's models are nowhere near as advanced, so they couldn't even ask a significant amount of money for them.
This is clear as day. If they had gotten an early lead in the LLM/AI space like OpenAI did with ChatGPT, things would be very different. Attributing the open source to "good will" and Meta being righteous seems like some misguided 16-year-old's overly simplistic view of the world. Meta is a business. Period.
Part that and part Zuckerberg's misanthropy. Zuckerberg doesn't care about Facebook's harms to children and society as long as he makes a quick buck. He also doesn't care about gen AI's potential to harm society for the same reason.
I thought LeCun once said he was not the head of research and that he didn't manage people. Nonetheless, I'm sure he has enormous influence at Meta.
LeCun is a blowhard and a hack, who lies regularly with great bluster and self-assurance.