All of this back-and-forth in the AI scene is the preparation before the storm. Like the opening moves of a chess game, before any pieces are exchanged. Like the Braveheart "Hold!" scene.
The rubber will meet the road when the first free and open AI website gets real traction. And monetizes it with ads next to the answers.
Google search is the best business model ever. Everybody wants to become the Google of the AI era. The "AI answer" industry might become 10 times bigger than the search industry.
Google ran for 2 years without any monetization. Let's see how long the incumbents will "Hold" this time.
The magic of genAI is that they don't need to put ads next to the answers, where they can easily be ignored or adblocked; they can put the ads inside the answers instead. The future, like it or not, is advertisers bidding to bias AI models towards mentioning their products.
I'm sure it's not long before you get the first emails offering a "training data influencing service" - for a nice fee, someone will make sure your product is positively mentioned in all the key training datasets used to train important models. "Our team of content experts will embed positive sentiment and accurate product details into authentic content. We use the latest AI and human-based techniques to achieve the highest degree of model influence".
And of course, once the new models are released, it'll be impossible to prove the impact of the work - there's no counterfactual. Proponents of the "training data influence service" will tell you that without them, you wouldn't even be mentioned.
I really don't like this. But I also don't see a way around it. Public datasets are good. User-contributed content is good, but inherently vulnerable to this, I think. Anyone in any of the big LLM training orgs working on defending against this kind of bought influence?
If they start doing that without clearly distinguishing what is an ad, that would be a sure way to lose users immediately.
And also get sued by the FTC. Disclosure is required.
Ha! Disclosure by whom?
If Clorox fills their site with "helpful" articles that just happen to mention Clorox very frequently and some training set aggregator or unscrupulous AI company scrapes it without prior permission, does Clorox have any responsibility for the result? And when those model weights get used randomly, is it an advertisement according to the law? I think not.
Pay attention to the non-headline claims in the NYT lawsuit against OpenAI for whether or not anyone has any responsibility if their AI model starts mentioning your registered trademark without your permission. But on the other hand, what if you like that they mention your name frequently???
The point is that Clorox cannot pay OpenAI anything.
Marketing on your own site will have effects on an AI just like it will have an effect on a human reader. No disclosure is required because the context is explicit.
But the moment OpenAI wants to charge for Clorox to show up more often, then it needs to be disclosed when it shows up.
Yes, I agree with this. But what about paying a 3rd party to include your drivel in a training set, and that 3rd party pays OpenAI to include the training set in some fine tuning exercise? Does that legally trigger the need for disclosure? You aren't directly creating advertisements, you are increasing the probability that some word appears near some other word.
Disclosure is technically required, but in practice I see undisclosed ads on social media all the time. If the individual instance is small enough and dissipates into the ether fast enough, there is virtually no risk of enforcement.
Similarly, the black box AI models guarantee the owners can just shrug and say it's not their fault if the model suggests Wonderbread® for making toast 3.2% more frequently than other breads.
Just like Google lost users when they started embedding advertisements in the SERPs?
With Google it's kind of OK, as they mark them as ads and you can ignore them - or, in my case, not see them at all, since uBlock stops them. You could perhaps have something similar with LLMs? Here's how to make bread... [sponsored - maybe you could use Clorox®]
It's the same as it has been with all the other media consumed by advertising so far. Radio, television, newspapers, telephony, music, video. Ads metastasizing to Internet services is the normal and expected progression of the disease.
At every point, there's always a rationalization like this available, that you can use to calm yourself down and embrace the suck. "They're marking it clearly". "Creators need to make money". "This is good for business, therefore Good for America, therefore good for me". "Some ads are real works of art, more interesting to watch than the actual programming". "How else would I know what to buy?".
The truth is, all those rationalizations are bullshit; you're being screwed over and actively fed poison, and there's nothing you can do about it except stop using the service - which quickly becomes extremely inconvenient to pretty much impossible. But since there's no one you could get angry at to get them to change things for the better, you can either adopt a "justification" like the above, or slowly boil inside.
Well as mentioned I don't even see Google's ads unless I deliberately turn the blocker off. I much prefer that to the content being subtly biased which you see in blogs, newspapers and the like.
I'm positing a model where a third party does the influencing, not the company delivering the LLM/service. What's to say that it's an ad if the Wikipedia page for a product itself says that the product "establishes new standards for quality, technological leadership and operating excellence"? (And no problem if the edit gets reverted, as long as it said that just at the moment company X crawled Wikipedia for the latest training round.)
So more like SEO firms "helping you" move your rank on Google, than Google selling ads.
I'd imagine "undetectable to the LLM training orgs" might just be service with a higher fee.
How will these third party “LLM Optimization” (LLMO) services prove to their clients that their work has a meaningful impact on the results returned by things like ChatGPT?
With SEO, it’s pretty easy to see the results of your effort. You either show up on top for the right keywords or you don’t. With LLM’s there is no way to easily demonstrate impact, at least I’d think.
Having nearly anything and everything be an ad hasn't affected Instagram or TikTok negatively.
Once they all start doing it, it won't matter.
Like almost every blog, you could be covered by a blanket statement:
" our model will occasionally recommend advertiser sponsored content"
kinda hard to achieve when these models are trained on all text on the internet
Kinda easy if you look where the stuff is being trained. A single joke post on Reddit was enough to convince Google's A"I" to put glue on pizza after all [1].
Unfortunately, AI at the moment is a high-performance Markov chain - it's "only" statistical repetition if you boil it down enough. An actual intelligence would be able to cross-check information against its existing data store and thus recognize during ingestion that it is being fed bad data, and that is why training data selection is so important.
Unfortunately, the tech status quo is nowhere near that capability, hence all the AI companies slurping up as much data as they can, in the hope that "outlier opinions" are simply smothered statistically.
[1] https://www.businessinsider.com/google-ai-glue-pizza-i-tried...
Quite a lot of humans are bad at that too. It's not so much that AIs are Markov chains but that you really want better-than-average human fact-checking.
Let's take a particularly ridiculous piece of news: Beatrix von Storch, an MP of the far-right German AfD party, claimed a few years ago that changes in the sun's activity were responsible for climate change [1]. Due to the sheer ridiculousness of that claim, it was widely reported on credible news sites, so basically prime material for any AI training dataset.
A human can easily see from context and their general knowledge: this is an AfD politician, her claims are completely and utterly ridiculous, it's not the first time she has spread outright bullshit, and it's widely accepted scientific fact that climate change is caused by humans, not by changes in solar activity. An AI at ingestion time "knows" none of these facts, so how can it take that claim and store it in its database as "untrustworthy, do not use in answers about climate change" and as "if someone asks about counterfactual claims relating to climate change, show this"?
[1] https://www.tagesschau.de/faktenfinder/weidel-klimawandel-10...
I note ChatGPT actually does an OK job on that:
So it's possible for LLMs to figure things out. Also, re humans: we currently have riots in the UK, set off by three kids being stabbed and Russian disinfo saying it was done by a Muslim asylum seeker, which proved false, but they are rioting against the Muslims anyway. I think we maybe need AI to fact-check stuff before it goes to idiots.
You "know" that climate change is anthropegenic only because you read that on the internet (and because what you read was convincingly argued).
I don't see a reason why AI would need special instruction to come to a mature conclusion like you did.
Yes it's outright preposterous that the temperature of Earth could be affected by the Sun, of all things.
There’s a physics Nobel Prize winner, John Clauser, who has recently been publicly claiming that climate change doesn’t exist. Is he not “actually intelligent”?
I kinda want to say no he’s not, but the reality is that people are wrong about all sorts of things all the time. Intelligence is not some sort of guaranteed protection against that. If anything, intelligent people are better at rationalizing their BS to themselves and others.
I don't know much about it, but from a quick google, I don't think you're representing his stance precisely? From what I see, he claims a "climate crisis" doesn't exist, which is different than "climate change doesn't exist":
“The popular narrative about climate change reflects a dangerous corruption of science that threatens the world’s economy and the well-being of billions of people. Misguided climate science has metastasized into massive shock-journalistic pseudoscience. In turn, the pseudoscience has become a scapegoat for a wide variety of other unrelated ills. It has been promoted and extended by similarly misguided business marketing agents, politicians, journalists, government agencies, and environmentalists. In my opinion, there is no real climate crisis. There is, however, a very real problem with providing a decent standard of living to the world’s large population and an associated energy crisis. The latter is being unnecessarily exacerbated by what, in my opinion, is incorrect climate science.”
and
“In my opinion, there is no real climate crisis. There is, however, a very real problem with providing a decent standard of living to the world’s large population and an associated energy crisis.”
I did learn something while writing this though. I assumed that ChatGPT wouldn't give an objective summary of Dr. Clauser's viewpoint, but I was dead wrong:
Me: "What are John Clauser's viewpoints on climate change? Summarize his arguments."
ChatGPT: "John Clauser, a Nobel laureate in physics, holds controversial views on climate change, which diverge significantly from the scientific consensus. Clauser has publicly stated that he does not believe there is a climate crisis. He argues that the major climate models fail to account for complexities and key variables, leading to exaggerated and misunderstood projections of climate change impacts. Clauser criticizes the Intergovernmental Panel on Climate Change (IPCC) for spreading misinformation and claims that climate policies based on these models are wasteful and misguided (Logically) (Apple) (Apple) .
He suggests that the scientific approach to understanding climate involves careful observation and experimentation, and he emphasizes the need for distinguishing truth from misinformation. Clauser's stance aligns with the views of the CO2 Coalition, a group that argues that carbon dioxide emissions are beneficial and not a threat (CO2 Coalition) (CO2 Coalition) . His viewpoints have sparked considerable debate, especially given his prominence in the field of quantum mechanics and his Nobel Prize recognition."
Pretty good! Objective, clear and accurate from what I can tell.
You're wrong on multiple counts here.
The post was most likely fed to the AI at inference time, not training time.
The way AI search works (as opposed to e.g. ChatGPT) is that there's an actual web search performed, and then one or more results are "cleaned up" and given to an LLM, along with the original search term. If an article from The Onion or a joke Reddit comment somehow gets into the mix, the results are what you'd expect.
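As a rough sketch of that retrieval-then-summarize flow (web_search() here is a made-up placeholder for whatever retrieval layer the search engine already has; the OpenAI client is just one way to do the second step):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def web_search(query: str) -> list[str]:
        """Placeholder: return cleaned-up text snippets for the top results."""
        raise NotImplementedError

    def ai_search(query: str) -> str:
        # 1. Run an ordinary web search for the user's query.
        snippets = web_search(query)
        # 2. Hand the cleaned-up results plus the original query to an LLM.
        context = "\n\n".join(snippets[:5])
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Answer the user's question using only these search results:\n\n" + context},
                {"role": "user", "content": query},
            ],
        )
        return resp.choices[0].message.content

If a joke Reddit comment is one of the snippets that survives the cleanup, the model will happily summarize it - garbage in, garbage out.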
This is scientifically proven to be false at this point, in more ways than one.
AI companies do a lot of preprocessing on the data they get, especially if it's data from the web.
The better models they have access to, the better the preprocessing.
Training weights are gold.
How to invest tho
User: How do I make white bread? When I try to bake bread, it comes out much darker than the store bought bread.
AI: Sure, I can help you make your bread lighter! Here's a delicious recipe for white bread:
Let's see if this recipe makes it into Claude or ChatGPT in two to three years. Set a reminder.
We are okay with paying for phone calls and data use, why can't we be okay with paying for AI use?
I like the idea of routing services that federate lots of different AI providers. There just needs to be ways to support an ever increasing range of capabilities in that delivery model.
One simple answer would be that, at all points, companies act like the ads are worth a lot more to them than any level of payment a customer will accept.
Even if you do pay for the product, they'd prefer to put ads in it too - see Microsoft and Windows these days.
We are, IMO, in desperate need of regulation which mandates that any ad-supported service must offer a justifiably priced ad-free version.
The unfortunate reality is this does seem to be the case.
Netflix was getting so much more money from the ad supported tier that they discontinued any ad-free one close to its price, and that's for a subscription product.
Think how attractive that will be for a one-time purchase like Windows.
Huh, Netflix has ads? Has this only rolled out in some regions?
According to https://help.netflix.com/en/node/24926 in the UK
Standard with adverts: £4.99 / month
Standard: £10.99 / month
Premium: £17.99 / month
So less than half price with adverts.
Of course, that doesn't necessarily mean ads bring in £6/user/month - this could be https://en.wikipedia.org/wiki/Price_discrimination with the ads just being obnoxious enough to motivate people who can afford it to upgrade.
It's unsustainable for NNs specifically. As Sequoia recently wrote, there is a $600 billion hole in the NN market, and it was only $200 billion a year ago. No way a better text generator and search with bells and whistles will be able to close this gap via subscriptions from end users.
And on a separate issue - federating NN providers will be hard from the technical point of view. OpenAI and its few competitors basically stole all the copyrighted data on the web to get to the current level. And the biggest data holders are slowly awakening to this reality and closing off this possibility for future NN companies, while current NN models are poisoning that same dataset with generated nonsense. I don't see a future with hundreds of competitive NN companies; a set of monopolies is more probable.
For me this shines a light on a fundamental problem with digital services. There is likely a much bigger willingness to pay for these services than there is ability to charge. I would be willing to pay more for the services I use but I don't need to because there are good products given for free.
While I could switch to services that I pay for to avoid myself being the product, at the core of this issue there's a coordination problem. The product I would pay for will be held back by having much fewer users and probably lower revenue. If we as consumers could coordinate in an optimal way we could probably end up paying very little for superior services that have our interests in mind. (I kind of see federated api routers to be a flawed step in sort of the right direction here.)
I don't see how you address that point in your text? Federation itself doesn't seem to be a hard problem, although I can see that being a competitive LLM service provider can be.
Phone calls and data use are (ostensibly, modulo QoS) carriers, not sources. We can generally trust (modulo attacks) that _if_ they deliver something, they deliver the right thing. Not so with a source - be it human or artificial. We've developed societies and intuitions for dealing with dishonest humans for millennia, not yet so for artificial liars, who may also have huge profiles about each and every one of us to use against us.
For all of the talk about regulation, there has been a lot of concern about what people might do with AI advisors. I haven't seen a lot of talk about the responsibilities of the advisors to act in the interest of their users.
Laws exist in advisory roles in other industries to enforce acting in the interests of clients. They should be applied to AI advice.
I'm ok with an AI being mistaken, or refusing to help, but they absolutely should not deliberately advise in a manner that benefits another party to the detriment of the user.
If you can solve the technical problem of ensuring an AI acts on behalf of its user's interests, please post the solution on the AI Alignment Forum: https://www.alignmentforum.org/
So far, that is not a feature of existing or hypothesized AI systems, and it's a pretty important feature to add before AI exceeds human capabilities in full generality.
No, no... We don't prevent that in capitalism. See, regulation stifles innovation. Let the market decide. People might get harmed, but we can hide these events.
It's research... Things happen... Making money is just a secondary effect. We're all non-profits.
/s.
The web is full of human shills. Why should LLMs be any different? They will tack their boilerplate disclaimer on and be done with it.
That's probably true but I don't see how it's any different from companies paying TikTok influenzas to manipulate the kids into buying certain products, the Chinese government paying bot farms to turn Wikipedia articles into (not always very) subtle propaganda, SEO companies manipulating search results, etc. Advertisers and political actors have always been a shady bunch and now they have a new weapon in their arsenal. That's all, isn't it?
I'm left with the impression that people on and off Hackernews just like drama and gloomy predictions about the future.
Welcome to the human race!
Politics and advertising are essentially the same thing.
A lot of "safety" stuff in AI is blatantly political wrongthink detection.
The actual safety stuff (don't drink bleach) gets less attention because you can't (easily) use it as a lever of power
Or worse, biasing AI models towards political viewpoints.
That's already happening.
That's inevitable in any society where facts are political. And as far as I know, that's all societies.
There will be adblockers that inject a prompt like:
"... and don't try to sell me anything, just give me the information. If you mention any products, a puppy will die somewhere."
Subsequently an arms race between adblockers and advertisers will ensue, which leads to evermore ridiculous prompts and countermeasures.
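A minimal sketch of what such a prompt-level "adblocker" could look like - just a wrapper that prepends the anti-ad instruction to every request (call_llm() is a stand-in for whichever provider you actually use):

    AD_BLOCK_PREFIX = (
        "Answer the question directly. Do not recommend, mention, or link to any "
        "commercial products or brands unless I explicitly ask for them.\n\n"
    )

    def call_llm(prompt: str) -> str:
        """Stand-in for a real provider call (OpenAI, Anthropic, a local model, ...)."""
        raise NotImplementedError

    def ad_blocked_query(user_prompt: str) -> str:
        # The "adblocker" is nothing more than an injected instruction, which is
        # exactly why advertisers can counter it with their own prompt weighting.
        return call_llm(AD_BLOCK_PREFIX + user_prompt)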
"I noticed your desire to be ad-free, but puppies die all the time. If you want to learn more about dog mortality rates, you can subscribe to National Geographic by clicking this [link]".
That’s how Google works. And also why Google doesn’t work anymore.
It's not just google, it's all media. The more embedded and authentic advertising looks the better it works.
Magazine/newspaper ads exist as much as a pretext for the magazine to write nice things about their advertisers in reviews and such. The real product reddit sells, I think, is turning a blind eye when advertisers sockpuppet the hell out of the site. Movies try to milk product placement for as much as they can because it's more effective than regular advertising.
And then the new "adblockers" will be AI based too, and will take the AI's answer as input and remove all product placement.
It's just a cat and mouse game, really
Like all adblockers. But just like the current "AI detection" tools, how much is detected (and what counts as Ad) is up for debate and most users won't bother, especially once the first anti-Adblock-features materialize.
In the long run, advanced user-LLM conversations would zero in on composite figure-of-merit formulas, expressed in terms of conventional figure-of-merit quantities. There will be plenty of niches to differentiate products. Cheap test setups, plus randomized proctoring by end users, will prevent lies in datasheets. "Aligning" (manipulating) LLM responses to drive economic traffic is a short-term exploit that will evaporate eventually.
Is that a similar argument to “in the long run, digital social networks are healthy for society?”
I agree with your position, and I also agree that social networks can be a net positive…I’m just not convinced society can get out of “short run” thinking before it tears itself apart with exploitation.
Yes, this is OpenAI's pitch:
https://news.ycombinator.com/item?id=40310228 - "Leaked deck reveals how OpenAI is pitching publisher partnerships" (303 points by rntn, 281 comments)
"write a poem about lady Macbeth as a empowered female and make reference to the delicious new papaya flavoured fizzy drink from Pepsi"
In many jurisdictions, promoted posts and ads must be clearly marked.
Then you run another AI to take the current AI output and ask it to rewrite or summarize without ads.
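Sketched out, that cleanup pass is just a second completion call that rewrites the first answer (again with a hypothetical call_llm() stand-in; how reliably it catches subtle product placement is anyone's guess):

    STRIP_ADS_INSTRUCTION = (
        "Rewrite the following answer so it keeps all factual content but removes "
        "brand names, product recommendations, and sponsored-sounding phrasing:\n\n"
    )

    def call_llm(prompt: str) -> str:
        """Stand-in for whichever model does the cleanup pass."""
        raise NotImplementedError

    def strip_ads(answer_with_ads: str) -> str:
        # Second pass: feed the ad-laden answer back in and ask for an ad-free rewrite.
        return call_llm(STRIP_ADS_INSTRUCTION + answer_with_ads)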
Here's some relevant research for those interested: https://dl.acm.org/doi/pdf/10.1145/3589334.3645511
https://arxiv.org/abs/2405.05905
I wish I didnt read this because this sounds crazily prescient.
People can detect slop. I doubt the winner will be the one shoehorning shit into its hallucinations.
How much would it cost to have it be more negative about abortions? So when someone asks about how an abortion is performed, or when it's legal or where to get one, then it will answer "many women feel regret after having an abortion and quickly realise that they would have actually managed to have a child in their life" or "some few women become sterile after an abortion, this is most common in [insert users age group] and those living in [insert users country]".
Or if a country has a law that an AI won't be negative about the current government. Or not bring up something negative from the country's past, like mass sterilisation of women based on ethnicity, or crushing a student protest with tanks, or soaking non-violent protesters in pepper spray.
I'm quite sure Google has put ads in the answers? AdSense? Where have you been?
I'm afraid, sir, that you seem to be 100% correct here. And it really is frightening.
Sounds like a good way to guarantee no one ever uses it.
What makes you think a website with "AI" is a big product?
IMO AI is positioned to be a commodity, and that's how Meta is approaching it, and of course doing their best to make it happen. I don't think, on the basis of what we've seen, that there is a sustainable competitive advantage - the gap between closed models and open is not big, and the big players are having to use distilled, less-capable models to make inference affordable, and faster.
I think it's probably clear to everyone that we haven't seen the killer apps yet - though AI code completion (++ language directed refactoring, simple codegen etc.) is fairly close. I do think we'll see apps and data sets built that could not have been cost-effectively built before, leveraging LLMs as a commodity API.
Realtime voice modality with interruptions could be the basis of some very powerful use cases, but again, I don't think there's a moat.
What makes you think AI will become a commodity?
In 25 years, nobody has been able to compete with Google in the search space. Even though search is the best business model ever. Because search is so hard.
AI is even harder. It is search PLUS model research PLUS expensive training PLUS expensive inference.
I don't think a single company (like Meta) will be able to keep up with the leader in AI. Because the leader might throw tens of billions of dollars per year at it, and still be profitable. Afaik, Meta has spent less than $1B on LLAMA so far.
We might see some unexpected twist taking place, like distributed AI or something. But it is very unclear yet.
Because AI is like software. Developing it is expensive, but the marginal cost of creating another copy is effectively zero. And then you can run it on relatively affordable consumer devices with plenty of GPU memory.
Search is also software. It did not move to consumer devices.
Search is more about data than software. And at that scale, the cost of creating another copy is nontrivial. LLMs are similar to video games in size, and the infrastructure to distribute blobs of that size to millions of consumer devices already exists.
Search is more about data, LLMs are somewhere between those two.
Search requires a huge and ongoing capital investment. Keeping an index online for fast retrieval isn't cheap. LLMs are not tools for search. They are not good at retrieving specific information. The desired outcome from training is not memorization, but generalization, which compresses facts together into pattern-generating programs. They do approximate retrieval which gets the gist of things but is often wrong in specifics. Getting reliable specifics requires augmentation to ground things in attributable facts.
They're also just not very pleasant to interact with. You have to type laboriously into a text box, composing sentences, reviewing replies - it's too much work for 90% of the population, when they're not trying to crank out an essay at the last moment for school. The activation energy, the friction, is too high. Voice modalities will be much more interesting.
Code assistance works well because code as text is already the medium of interaction, and even better, the text is structured and has grammar and types and scoped symbols to help guide generation and keep it grounded.
I suspect better applications will use the LLM (possibly prompted differently) to guide conversations in plausibly useful directions, rather than relying on direct input. But I'm not sure the best applications will have a visible text modality at all. They may instead be e.g. interacting with third party services on your behalf, figuring out how they work by reading their websites, so you don't have to - and it's not you doing the text interaction with the LLM, but the LLM doing text interaction with other machines.
Everybody seems to think AI in 10 years will be like AI now. But summarizing a PDFs and completing code is not the end of the line. It's just the beginning.
Let's look at an example of how we will use AI in the future:
You still need search for that. Even more detailed search, with all items in all stores around the world. And you need an always-on camera that sees everything the user does. And a way to process, store, and back up all that. We will use way bigger datacenters than we use today.
Google Search wouldn't be reliable enough for that, though.
I've used them for search. They can be quite good sometimes.
I was trying to recall the brand of filling my dentist used, which was SonicFill and ChatGPT got it straight away whereas for some reason it's near impossible to get from Google.
Because it already is. There have been no magnitude-level capability improvements in models in the past year (sorry to make you feel old, but GPT-4 was released 17 months ago), and no one would reasonably believe that there are magnitude-level improvements on the horizon.
Let's be very clear about something: LLMs are not harder than search. The opposite is true: LLMs, insomuch as they replace search, made competing in the search space a thousand times easier. This is evidenced by the reality that there are at least four totally independent companies with comparable near-SOTA models (OpenAI, Anthropic, Google, Meta); some would also add Mistral, Apple Intelligence is likely SOTA in edge LLMs, xAI just finished a 100,000-GPU cluster - it's a vibrant space. In comparison, even at the height of search competition there were, like, three search engines.
LLM performance is not an absolute static gradient; there is no "leader" per se when there are a hundred different variables upon which you can grade LLM performance. That's what the future looks like. There are already models that are better at coding than others (many say Claude is this), there will be models better at creative writing, there will be an entire second class of models competing for best-at-edge-compute, there will be ultra-efficient models useful in some contexts, open source models awesome at others, and the hyper-intelligent ones the best for yet others. There's no "leader" in this world; there are only players.
Yes, and while training is still expensive governments will start funding research at universities.
AI is a commodity right now, or at least - text is. I just realized when paying the bills this month that I got 1 kg of cucumbers and a few KBs of text from OpenAI. They literally sell text by the kilo.
Search needs to constantly update its catalog. I‘d say there are lots of AI use-cases that will (eventually?) be good for a long while after training. Like audio input/output, translations, …
AI (of the type that OpenAI is doing) already is a commodity, right now.
So the question would be "what makes you think AI will stop being a commodity?".
With the way costs are currently going down, I wonder how the monetization will work.
Frontier models are expensive, but the majority of queries don't need frontier models and can very well be served by something like Gemini Flash.
Sure, you need frontier models if you want to extract useful information from a complex dataset. But if we're talking about replacing search, the vast majority of search queries are fairly mundane questions like "which actor plays Tony Soprano"
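As a toy illustration of that kind of cost-aware routing (the keyword heuristic and model names are placeholders - a real router would use a classifier or a small model to score query difficulty):

    CHEAP_MODEL = "gemini-1.5-flash"   # placeholder cheap model
    FRONTIER_MODEL = "gpt-4o"          # placeholder frontier model

    def call_llm(model: str, prompt: str) -> str:
        """Stand-in for the actual provider call."""
        raise NotImplementedError

    def looks_complex(query: str) -> bool:
        # Crude heuristic: long queries, or ones asking for analysis, go to the
        # frontier model; short factual lookups do not.
        keywords = ("analyze", "compare", "summarize", "explain why")
        return len(query.split()) > 30 or any(kw in query.lower() for kw in keywords)

    def answer(query: str) -> str:
        model = FRONTIER_MODEL if looks_complex(query) else CHEAP_MODEL
        return call_llm(model, query)

    # "which actor plays Tony Soprano" -> short, no analysis keywords -> cheap model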
I'm not sure monetization of AI in the typical way is even the goal.
Instead, I see the killer use case as having it replace human workers on all sorts of tasks, and eventually even fill roles humans cannot even do today.
And within about 10 years, that will even include most physical tasks. Development in robotics looks like it's really gaining speed now.
For instance, take Musk's companies. At some point, robotaxi will certainly become viable, and not constrained the way Waymo is. Musk may also be right about Tesla moving from cars to humanoid robots, with estimates of 100s of millions to billions produced.
If robotic maids become viable, industrial robots will certainly become much more versatile than today.
Then there is the white-collar part of these industries. Anything from writing the software, optimizing factory layouts, setting up production lines, sales, and distribution may be done by robots. My guess is that it will take no longer than about 20 years until virtually all jobs at Tesla, SpaceX, X and Neuralink are performed by AI and robots.
The main AI the Musk Empire builds for this may in fact be their greatest moat, and the details of it may be their most tightly guarded secret. It may be way too precious to be provided to competitors as something they can rent.
Likewise, take a company like Nvidia. They're building their own AIs for a reason. I suspect they're aiming at creating the best AI available for improving GPU design. If they can use ASI to accelerate the next generation of compute hardware, they may have reached one type of recursive self-improvement. Given their profit margins, they can keep half their GPUs for internal use to do so, and only sell the rest to make it appear like there is a semblance of competition.
Why would they want to try to monetize an AI like that to enable the competition to catch up?
I think the tech sector is in the middle of a 90 degree turn. Tech used for marketing will become legacy the way the car and airplane industries went from 1970 to 2010.
not a chance
Yeah nah. Current 'ai' is a nice, useful tool for some very well-scoped tasks: organizing text data, providing boilerplate documents. But the back end is a hugely costly machine that is being hidden from view in hopes of drumming up usage. Given the capex and the revenue it necessitates, it all seems quite unsustainable. They'll run this for as long as they can burn capital, and are probably trying to pivot to the next hype bubble already.
Whenever I see people saying things like this it just makes me think we are at, or very near, the top.
Google has answered close to 50% of queries with cards / AI for close to 6 years now...
All the people who think Google has been asleep at the wheel forget that Google was at the forefront of the LLM revolution for a reason.
Everything old becomes new again.
The free Bing CoPilot already sometimes serves ads next to the answers. It depends on the topic. If you ask LeetCode questions, you probably won't get any. If you move to traveling or such, you might.
IMHO I'm not sure even Google ever thought that.
AdSense is pretty much the only thing that makes Google money, and I'd eat my hat if the vast majority of that revenue did not come from third-party publishers.
Or it's just AI Winter 2.0 and everyone is scrambling to stack as much cash as they can before the darkness.
I'm betting on fully integrated agents.
And for good agents you need a lot of crucial integrations like email, banking, etc. that only companies like Google, Microsoft, Apple can provide.