One thing I’ve noticed with the AI topic is how little discussion there is of how the name of a thing ends up shaping how we think about it. There is very obviously a marketing phenomenon happening now where “AI” is being added to the name of every product - not because it’s actually AI in any rigorous or historical sense of the word, but because it’s trendy and helps you get investment dollars.
I think one of the results of this is that the concept of AI itself becomes increasingly muddled until it is indistinguishable from a word like “technology” and therefore useless for describing a particular phenomenon. You can already see this with the usage of “AGI” and “super intelligence”, which, from the definitions I’ve been reading, are not the same thing at all. AGI is/was supposed to be about achieving results of the average human being, not about a sci-fi AI god, and yet it seems like everyone is using them interchangeably. It’s very sloppy thinking.
Instead I think the term AI is going to slowly become less trendy as marketing, and will fade out over time, as all trendy marketing terms do. What will be left are actually useful enhancements to specific use cases - most of which will probably be referred to by a word other than AI.
+1.
What happened to ML? With the relatively recent craze precipitated by ChatGPT, the term AI (perhaps in no small part due to "OpenAI") has completely taken over. ML is a more apt description of the current wave.
OpenAI was betting on AI, AGI, superintelligence.
Look at the Google engineer who thought they had an AI locked up in the basement... https://www.theverge.com/2022/6/13/23165535/google-suspends-...
MS paper on sparks of AGI: https://www.microsoft.com/en-us/research/publication/sparks-...
The rumors were that OpenAI's deal with MS would give them everything until they got to AGI... a perpetual license to all new development.
All the "safety people" have left the OpenAI building. Even Musk isn't talking about safety any more.
I think the bet was that if you fed an LLM enough, got it big enough, it would hit a tipping point and become AGI, or sentient, or sapient. That lines up nicely with the MS deal terms, and with MS's own paper.
I think they figured out that the math doesn't work that way (and never was going to). A prediction of the next token getting better isn't intelligence, any more than weather prediction getting better will become weather.
When my friends talked about how AGI is just a matter of creating a huge enough neural network & feeding it enough data, I have always compared it to this: imagine locking a mouse in a library with all the knowledge in the world & expecting it to come out super intelligent.
GPT-3 is about the same complexity as a mouse brain; it did better than I expected…
… but it's still a brain the size of a mouse's.
Mice are far more impressive though.
I've yet to see a mouse write even mediocre python, let alone a rap song about life in ancient Athens written in Latin.
Don't get me wrong, organic brains learn from far fewer examples than AI, there's a lot organic brains can do that AI don't (yet), but I don't really find the intellectual capacity of mice to be particularly interesting.
On the other hand, the question of if mice have qualia, that is something I find interesting.
I have yet to see a machine that would survive a single day in a mouse's natural habitat. And I doubt I'll see one in my lifetime.
Mediocre, or even excellent, Python and rap lyrics in Latin are easy stuff, just like chess and arithmetic. Humans just are really bad at them.
It doesn't say specifically, but I think these lasted more than a day, assuming you'll accept random predator species as a sufficient proof-of-concept substitute for mice, which have to do many similar things at a smaller scale:
https://www.televisual.com/news/behind-the-scenes-spy-in-the....
Those robots aren't acquiring their own food, and they're not edible by the creatures that surround them. They're playing on easy mode.
Still passes the "a machine that would survive a single day" test, and given that machines run off electricity and we already have PV, food isn't a big deal here.
But you should find their self-direction capacity incredible and their ability to instinctively behave in ways that help them survive and propagate themselves. There isn't a machine or algorithm on earth that can do the same, much less with the same minuscule energy resources that a mouse's brain and nervous system use to achieve all of that.
This isn't to even mention the vast cellular complexity that lets the mouse physically act on all these instructions from its brain and nervous system and continue to do so while self-recharging for up to 3 years and fighting off tiny, lethal external invaders 24/7, among other things it does to stay alive.
All of that in just a mouse.
No, why would I?
Depending on what you mean by self-direction, that's either an evolved trait (with evolution, rather than the mouse itself, as the intelligence behind the bigger-picture question of what even counts as good), or it's fairly easy to replicate even for a much simpler AI.
The hard part has been getting them to be able to distinguish between different images, not this kind of thing.
https://en.wikipedia.org/wiki/Evolutionary_algorithm
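As a toy sketch of the sort of thing that link covers (a made-up bit-counting fitness function, purely illustrative, not a model of any real creature):

    # Toy evolutionary algorithm: "adaptive" behaviour falls out of nothing but
    # mutate-and-select on a made-up fitness (count of 1-bits). Illustrative only.
    import random

    LENGTH = 20                                   # "fittest" genome is all ones

    def mutate(genome):
        child = genome[:]
        i = random.randrange(len(child))
        child[i] ^= 1                             # flip one random bit
        return child

    population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(30)]
    for generation in range(500):
        population.sort(key=sum, reverse=True)    # selection: best bit-counters first
        parents = population[:10]
        population = parents + [mutate(random.choice(parents)) for _ in range(20)]
        if sum(population[0]) == LENGTH:
            print(f"all-ones genome evolved by generation {generation}")
            break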
It's nice, but again, this is mixing up the intelligence of the animal with the intelligence of the evolutionary process which created that instance.
I as a human have no knowledge of the evolutionary process which lets me enjoy the flavour of coriander, and my understanding of the Krebs cycle is "something about vitamin C?" rather than anything functional; while my body knows these things, it would be a stretch to claim that my body knowing them means that I know them.
Evolution is an abstract concept, and abstract concepts cannot be “intelligent” (whatever that means). This is like saying that gravity or thermodynamics are “intelligent”.
Isn't this distinction more about "language" than "intelligence"? There are some fantastically intelligent animals, but none of them can do the tasks you mention because they're not built to process human languages.
I agree with the general sentiment but want to add: Dogs certainly process human language very well. From anecdotal experience of our dogs:
In terms of spoken language they are limited, but they surprise me all the time with terms they have picked up over the years. They can definitely associate a lot of words correctly (if it interests them) that we didn't train them with at all, just by mere observation.
An LLM associates bytes with other bytes very well. But it has no notion of emotion, real-world actions and reactions, and so on in relation to those words.
A thing that dogs are often way better at than even humans is reading body language and communicating through body language. They are hyper-aware of the smallest changes in posture, movement and so on. And they are extremely good at communicating intent or manipulating (in a neutral sense) others with their body language.
This is a huge, complex topic that I don't think we really fully understand, in part because every dog also has individual character traits that influence their way of communicating very much.
Here's an example of how complex their communication is. Just from yesterday:
One of our dogs is for some reason afraid of wind. I've observed how she gets spooked by sudden movements (for example curtains at an open window).
Yesterday it was windy and we went outside (off leash in our yard); she was wary, showed subtle fear and hesitated to move around much. The other dog saw that and then calmly got closer to her, posturing towards the same direction she seemed to want to go. He made very small steps forward, waited a bit, let her catch up, and then she let go of the fear and went sniffing around.
This all happened in a very short amount of time, a few seconds; there is a lot more to the communication that would be difficult and wordy to explain. But since I became more aware of these tiny movements (from head to tail!) I started noticing more and more extremely subtle cues of communication, which can't even be processed in isolation but typically require the full context of all the movements, the pacing and so on.
Now think about what the above example all entails. What these dogs have to process, know and feel. The specificity of it, the motivations behind it. How quickly they do that and how subtle their ways of communications are.
Body language is a large part of _human_ language as well. More often than not it gives a lot of context to what we speak or write. How often are statements misunderstood because they are only consumed via text? The tone, rhythm and general body language can make all the difference.
Prior to LLMs, language was what "proved" humans were "more intelligent" than animals.
But this is beside the point; I have no doubt that if one were to make a mouse immortal and give it 50,000 years' experience of reading the internet via a tokeniser that turned it into sensory nerve stimulation, with rewards depending on how well it could guess the response, it would probably get this good sooner, simply because organic minds seem to be better at learning than AI.
But mice aren't immortal and nobody's actually given one that kind of experience, whereas we can do that for machines.
Machines can do this because they can (in some senses but not all) compensate for the sample-inefficiency by being so much faster than organic synapses.
And they can cook. I know it, I have seen it in a movie /s
The mouse would go mad, because libraries preserve more than just knowledge; they preserve the evolution of it. That evolution is ongoing as we discover more about ourselves and the world we live in, refine our knowledge, disprove old assumptions and theories and, on occasion, admit that we were wrong to dismiss them. Also, over time, we place different levels of importance on knowledge from the past. For example, an old alchemy manual from the Middle Ages used to record recipes for a cure for some nasty disease was important because it helped whoever had access to it quickly prepare some ointment that sometimes worked, but today we know that most of those recipes were random, non-scientific attempts at coming up with a solution to a medical problem, and we have proven that those medicines do not work. Therefore, the importance of the old alchemist's recipe book as a source of scientific truth has gone to zero, but its historic importance has grown a lot, because it helps us understand how our knowledge of chemistry and its applications in health care has evolved.
LLMs treat all text as equal unless they are given hints. But those hints are provided by humans, so there is an inherent bias, and the best we can hope for is that those hints are correct at the time of training. We are not pursuing AGI; we are pursuing the goal of automating the creation of answers that look like the right answers to the given question, but without much attention to factual, logical, or contextual correctness.
No. The mouse would just be a mouse. It wouldn't learn anything, because it's a mouse. It might chew on some of the books. Meanwhile, transformers do learn things, so there is obviously more to it than just the quantity of data.
(Why spend a mouse? Just sit a strawberry in a library, and if the hypothesis that the quantity of data is the only thing that matters holds, you'll have a super intelligent strawberry.)
No need to waste a strawberry. Just test the furniture in the library. Either you have super intelligent chairs and tables or not.
Or a pebble; for a super intelligent pebble.
“God sleeps in the rock, dreams in the plant, stirs in the animal, and awakens in man.” ― Ibn Arabi
That's the question though, do they? One way of looking at gen AI is as a highly efficient compression and search. WinRAR doesn't learn, neither does Google - regardless of the volume of input data. Just because the process of feeding more data into gen AI is named "learning" doesn't mean that it's the same process that our brains undergo.
Why? I think it absolutely can be intelligence.
Intelligence implies a critical evaluation of the statement under examination, before stating it, on considerations over content.
("You think before you speak". That thinking of course does not stop at "sounding" proper - it has to be proper in content...)
An LLM has to do an accurate simulation of someone critically evaluating their statement in order to predict the next word.
If an LLM can predict the next word without doing a critical evaluation, then it raises the question of what the intelligent people are doing. They might not be doing a critical evaluation at all.
Well certainly: in the mind ideas can be connected tentatively by affinity, and they become hypotheses of plausible ideas, but then in the "intelligent" process they are evaluated to see if they are sound (truthful, useful, productive etc.) or not.
Intelligent people perform critical evaluation, others just embrace immature ideas passing by. Some "think about it", some don't (they may be deficient in will or resources - lack of time, of instruments, of discipline etc.).
And who says LLMs are not able to do that (eventually)?
I think it depends on how you define intelligence, but _I_ mostly agree with Francois Collet's stance that intelligence is the ability to find novel solutions and adaptability to new challenges. He feels that memorisation is an important facet, but that it is not enough for true intelligence, and that these systems excel at type-2 thinking but have huge gaps at type-1.
The alternative I'm considering is that it might just be a dataset problem: feeding these LLMs on words alone makes them lack a huge facet of embodied existence that is needed to get context.
I am a nobody though, so who knows....
*Chollet
And who says that LLMs won't be able to adapt to new conditions?
Right now they are limited by the context, but that's probably a temporary limitation.
I agree; LLMs are interesting to me only to the extent that they are doing more than memorisation.
They do seem to do generalisation, to at least some degree.
If it was literal memorisation, we do literally have internet search already.
The "next token" thing is literally true, but it might turn out to be a red herring, because emergence is a real phenomenon. Like how with enough NAND-gates daisy-chained together you can build any logic function you like.
Gradually, as these LLM next-token predictors are set up recursively, constructively, dynamically, and with the right inputs and feedback loops, the limitations of the fundamental building blocks become less important. Might take a long time, though.
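To make the NAND point concrete, here's a minimal sketch in plain Python (nothing assumed beyond the language itself) of NOT, AND, OR and XOR built from nothing but NAND:

    # Functional completeness of NAND: any Boolean function can be built
    # from NAND gates alone. A toy illustration, not an efficient circuit.
    def nand(a: bool, b: bool) -> bool:
        return not (a and b)

    def not_(a: bool) -> bool:
        return nand(a, a)

    def and_(a: bool, b: bool) -> bool:
        return not_(nand(a, b))

    def or_(a: bool, b: bool) -> bool:
        return nand(not_(a), not_(b))

    def xor_(a: bool, b: bool) -> bool:
        n = nand(a, b)                      # the classic four-NAND XOR
        return nand(nand(a, n), nand(b, n))

    for a in (False, True):
        for b in (False, True):
            assert and_(a, b) == (a and b)
            assert or_(a, b) == (a or b)
            assert xor_(a, b) == (a != b)
    print("AND, OR and XOR recovered from NAND alone")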
The version of emergence that AI hypists cling to isn't real, though, in the same way that adding more NAND gates won't magically make the logic function you're thinking about. How you add the NAND gates matters, to such a degree that people who know what they're doing don't even think about the NAND gates.
Exactly this, a thousand times.
There are infinitely more wrong and useless circuits than there are the ones that provide the function you want/need.
But isn't that what the training algorithm does? (Genuinely asking since I'm not very familiar with this.) I thought it tries anything, including wrong things, as it gradually finds better results from the right things.
Responding to this:
It's true that training and other methods can iteratively trend towards a particular function/result. But in this case the training is on next token prediction which is not the same as training on non-verbal abstract problem solving (for example).
There are many things humans do that are very different from next token prediction, and those things we do all combine together to produce human level intelligence.
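For concreteness, "training on next token prediction" boils down to a cross-entropy objective over shifted tokens. A toy sketch in PyTorch (a deliberately trivial stand-in model, not any real lab's setup):

    # Toy sketch of the next-token objective: given tokens t_1..t_n, train the
    # model to assign high probability to t_{i+1} given what comes before.
    # The model here is a trivial embedding+linear stand-in, not a transformer.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    vocab_size, dim = 100, 32
    model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))

    tokens = torch.randint(0, vocab_size, (1, 16))        # a fake "document"
    logits = model(tokens[:, :-1])                        # predictions at each position
    loss = F.cross_entropy(logits.reshape(-1, vocab_size),
                           tokens[:, 1:].reshape(-1))     # target is simply the next token
    loss.backward()                                       # that is the entire training signal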
This presupposes that conscious, self-directed intelligence is at all what you're thinking it is, which it might not be (probably isn't). Given that, perhaps no amount of predictors in any arrangement or with any amount of dynamism will ever create an emergent phenomenon of real intelligence.
You say emergence is a real thing, and it is, but we have not one single example of it taking the form of sentience in any human-created thing of complexity.
After the AI Winter, people started calling it ML to avoid the stigma.
Eventually ML got pretty good and a lot of the industry forgot the AI winter, so we're calling it AI again because it sounds cooler.
There’s a little more to it. Learning-based systems didn’t prove themselves until the ImageNet moment around 2013. The term "machine learning" wasn’t used in previous AI hype cycles because what was being hyped wasn’t a learning system but what we now call good old-fashioned AI - GOFAI - hard-coded features, the semantic web, etc.
Are classical machine learning techniques that don't involve neural networks / DL not considered "learning-based systems" anymore? I would argue that even something as simple as linear regression can and should be considered a learning-based system, even ignoring more sophisticated algorithms such as SVMs or boosted tree regression models. And these were in use for quite a while before the ImageNet moment, albeit not with the same level of visibility.
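To the point that even simple methods "learn": here's a minimal sketch (plain NumPy, made-up data) where the parameters of a linear model are fitted from data rather than hard-coded:

    # Linear regression as a "learning-based system": the parameters are
    # fitted from data rather than hard-coded. Plain NumPy, made-up data.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, size=200)
    y = 3.0 * x + 2.0 + rng.normal(0, 1, size=200)   # noisy line: y ~ 3x + 2

    X = np.column_stack([x, np.ones_like(x)])        # design matrix [x, 1]
    slope, intercept = np.linalg.lstsq(X, y, rcond=None)[0]  # the "learning" step
    print(f"learned y ~ {slope:.2f}*x + {intercept:.2f}")    # close to 3 and 2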
I've just accepted that these broad terms are really products of their time, and use of them just means people want to communicate at a higher level above the details. (Whether that's because they don't know the details, or because the details aren't important for the context.) It's a bit better than "magic" but not that different; the only real concern I have is vague terms making it into laws or regulations. I agree on linear regression, and I remember being excited by things like random forests in the ML sphere that no one seems to talk about anymore. I think under current standards of vagueness, even basic things like a color bucket-fill operation in a paint program count as "AI techniques". In the vein of slap a gear on it and call it steampunk, slap a search step on it and call it AI, or slap a past-data storage step on it and call it learning.
Maybe if you go by article titles it has. If you look at job titles, there are many more ML engineers than AI engineers.
I make it a point to use ML in every work meeting or interaction where AI is the subject. it has not gained traction.
Yep, the difference is that ML doesn’t have the cultural legacy of the AI concept and thus isn’t as marketable.
It's cyclical. The term "ML" was popular in the 1990s/early 2000s because "AI" had a bad odor of quackdom due to the failures of things like expert systems in the 1980s. The point was to reduce the hype and say "we're not interested in creating AGI; we just want computers to run classification tasks". We'll probably come up with a new term in the future to describe LLMs in the niches where they are actually useful after the current hype passes.
In the beginning I think some, if not many, people did genuinely think it was "AI". It was the closest we've ever gotten to a natural language interface, and that genuinely felt really different from anything before, even to an extreme cynic. And I also think there are many people who want us to develop AI and so were trying to actively convince themselves that e.g. GPT or whatever was sentient. Maybe that Google engineer who claimed LaMDA was "sentient" even believed it himself (though I still suspect that was probably just a marketing hoax).
It's only now that everybody's used to natural language interfaces that I think we're becoming far less forgiving of things like this nonsense:
---
- "How do I do (x)."
- "You do (A)."
- "No, that's wrong because reasons."
- "Oh I'm sorry you're 100% right. Thank you for the correction. I'll keep that in mind in the future. You do (B)."
- "No that's also wrong because reasons."
- "Oh I'm sorry you're 100% right. Thank you for the correction. I'll keep that in mind in the future. You do (A)."
- #$%^#$!!#$!
---
Yes, it's a language model, not a reasoning model. But people got confused at the start.
Thank you. Few people really seem to pay attention to the name.
Or they are adherents to the hypothesis of Linguistic Determinism which preexisted LLMs by quite some time.
https://en.wikipedia.org/wiki/Linguistic_determinism
Agreed, but I think we'll soon start to discover that interacting with systems as if they were text adventure games of the 80's is also going to get pretty weird and frustrating, not to mention probably inefficient.
Even worse, because the text adventure games of the '70s and '80s (at least, the ones that were reasonably successful) had consistent internal state. If you told it to move to the Back of Fountain, you were in the Back of Fountain and would stay there until you told it to move somewhere else. If you told it to wave the black rod with the star at the chasm, a bridge appeared and would stay there.
If you tell ChatGPT that your name is Dan, and give it some rules to follow, after enough exchanges, it will simply forget these things. And sometimes it won't accurately follow the rules, even when they're clearly and simply specified, even the first prompt after you give them. (At least, this has been my experience.)
I don't think anyone really wants to play "text adventure game with advanced Alzheimer's".
"So, when will you be updating yourself to not provide the same wrong answer in future, or when I ask it later in a new session"
"No."
I love this. I want a t-shirt with this
There was a great post here about this specific possibility - that the NN crowd has effectively convinced itself of the supernatural abilities of its NNs:
https://softwarecrisis.dev/letters/llmentalist/
Sort of, kind of. Those natural language interfaces that Actually Work aren't even 2 years old yet, and in that time there weren't many non-shit integrations released. So at best, some small subset of the population is used to the failure modes of ChatGPT app - but at least the topic is there, so future users will have better aligned expectations.
Hallucinations!
A generative tool can’t Hallucinate! It isn’t misperceiving its base reality and data.
Humans Hallucinate!
ARGH. At least it’s becoming easier to point this out, compared to when ChatGPT came out.
Calculators compute; they have to compute reliably; humans are limited and can make computing mistakes.
We want reliable tools - they have to give reliable results; humans are limited and can be unreliable.
That is why we need the tools - humans are limited, we want tools that overcome human limitation.
I really do not see where you intended to go with your post.
This. We have trained generations of people to trust computers to give correct answers. LLM peddlers use that trust to sell systems that provide unreliable answers.
Not the poster you’re replying to, but -
I took his point to mean that hallucinate is an inaccurate verb to describe the phenomenon of AI creating fake data, because the word hallucination implies something that is separate from the “real world.”
This term is thus not an accurate label, because that’s not how LLMs work. There is no distinction between “real” and “imagined” data to an LLM - it’s all just data. And so this metaphor is one that is misleading and inaccurate.
Great example. People will say, “oh that’s just how the word is used now,” but its misuse betrays a real lack of rigorous thought about the subject. And as you point out, it leads one to make false assumptions about the nature of the data’s origin.
If "hallucination" refers to mistaking the product of internal processes for external perception, then generative AI can only hallucinate, as all of its output comes from inference against internal, hard-coded statistical models with zero reconciliation against external reality.
Humans sometimes hallucinate, but still have direct sensory input against which to evaluate inferential conclusions against empirical observation in real time. So we can refine ideas, whatever their origin, against external criteria of correctness -- this is something LLMs totally lack.
Amen. But if you do, you'll still be attacked by apologists. Per the norm for any Internet forum... this one being a prime example.
The grifter is a nihilist. Nothing is holy to a grifter and if you let them they will rob every last word of its original meaning.
The problem with AI, as perfectly clear outlined in this article, is the same as the problem with the blockchain or with other esotheric grifts: It drains needed resources from often already crumbling systems¹.
The people falling for the hype beyond the actual usefulness of the hyped object are wishing for magical solutions that they imagine will solve all their problems. Problems that can't be fixed by wishful thinking, but by not fooling yourself and making technological choices that adequately address the problem.
I am not saying that LLMs are never going to be a good choice to adequately address the problem. What I am saying is that people blinded by blockchain/AI/quantum/snake-oil hype are the wrong people to make that choice, as for them every problem needs to be tackled using the current hype.
Meanwhile a true expert will weigh all available technological choices and carefully test them against the problem. So many things can be optimized and improved using hard, honest work, careful choices and a group of people trying hard not to fool themselves, this is how humanity managed to reach the moon. The people who stand in the way of our achievements are those who lost touch with reality, while actively making fools of themselves.
Again: It is not about being "against" LLMs, it is about leaders admitting they don't know, when they do in fact not know. And a sure way to realize you don't know is to try yourself and fail.
¹ I had to think of my childhood friend, whose esoteric mother died of a preventable disease because she fooled herself into believing in magical cures and gurus until the fatal end.
That is funny, because of all the problems with LLMs, the biggest one is that they will lie/hallucinate/confabulate to your face before saying I don't know, much like those leaders.
Is this inherent to LLMs by the way, or is it a training choice? I would love for an LLM to talk more slowly when it is unsure.
This topic needs careful consideration and I should use more brain cycles on it. Please insert another coin.
It's fairly inherent. Talking more slowly wouldn't make it more accurate, since it's a next-token predictor: you'd have to somehow make it produce more tokens before "making up its mind" (i.e., outputting something that's sufficiently correlated with a particular answer that it's a point of no return), and even that is only useful to the extent that it's memorised a productive algorithm.
You could make the user interface display the output more slowly "when it is unsure", but that'd show you the wrong thing: a tie between "brilliant" and "excellent" is just as uncertain as a tie between "yes" and "no".
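To put a number on that: token-level uncertainty can't distinguish a harmless synonym tie from a substantive disagreement. A toy illustration (the probabilities are made up, obviously):

    # Token-level entropy can't tell a harmless synonym tie from a substantive
    # disagreement: both made-up distributions below are equally "uncertain".
    import math

    def entropy(dist):
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    synonym_tie = {"brilliant": 0.5, "excellent": 0.5}   # same meaning either way
    yes_no_tie = {"yes": 0.5, "no": 0.5}                 # opposite meanings

    print(entropy(synonym_tie), entropy(yes_no_tie))     # both exactly 1 bit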
Couldn't agree more. We are seeing a gradual degradation of the term AI. LLMs have stolen the attention of the media and the general population, and I notice that people equate ChatGPT with all of AI. It is not, but an average person doesn't know the difference. In a way, genAI is the worst thing that happened to AI.
The ML part is amazing: it allows us to understand the world we live in, which is what we have evolved to seek and value, because there is an evolutionary benefit to it--survival. The generative side is not even a solution to any particular problem; it is a problem that we are forced to pay to make bigger. I wrote "forced" because companies like Adobe use their dominant position to override legal frameworks developed to protect intellectual property and client-creator contracts, to grab content that is not theirs to train models they resell back to the people they stole content from, and to subject us to unsupervised, unaccountable policing of content.
Adobe is an obnoxious company that does a lot of bad things, but it's weird to me seeing them cast in a negative light like this for their approach to model-training copyright. As far as I know they were the first to have a genAI product that was trained exclusively on data they had rights to under existing copyright law, rather than relying on a fair use argument or just hoping the law would change around them. Out of all the companies who've built AI tooling they're the last one I'd expect to see dragged out as an example of copyright misbehavior.
I was surprised, too: I thought Adobe's main problem was their abusive pricing, and I was actually a little impressed by their take on this hype wave. And yet. https://news.ycombinator.com/item?id=40607442
When we build something, we do not intend to build something that just achieves results «of the average human being»; a slow car, a weak crane, an imprecise clock are built only provisionally, in the process of achieving the superior aid we intend... So AGI expects human-level results provisionally, while the goal remains to go beyond them. The clash you see is only apparent.
Are you aware that we have been using that term for at least 60 years?
And that the Brownian minds of the masses very typically try to interfere while we proceed focusedly and regarding it as noise? Today they decide that the name is Anna, tomorrow Susie: childplay should remain undeterminant.
Building something that replicates the abilities of the average human being in no way implies that this eventually leads to a superintelligent entity. And my broader point was that many people are using the term AGI as synonymous with that superintelligent entity. The concepts are very poorly defined and thrown around without much deeper thought.
Yes, and for the first ±55 of those years, it was largely limited to science fiction stories and niche areas of computer science. In the last ±5 years, it's being added to everything. I can order groceries with AI, optimize my emails with AI, on and on. The term has become exceptionally more widespread recently.
https://trends.google.com/trends/explore?date=today%205-y&q=...
You're going to have to rephrase this sentence, because it's unclear what point you're trying to make other than "the masses are stupid." I'm not sure "the masses" are even relevant here, as I'm talking about individuals leading/working at AI companies.
I honestly never understood AGI as a simulation of Average Joe: it makes no sense to me. Either we go for the implementation of a high degree of intelligence, or why should we give an "important" name to a "petty project" (however complicated, that could only be an effort that does not have an end in itself)? Is it possible that the terminological confusion you see is because we are individually very rooted in our own assumptions (e.g. "I want AGI as a primary module in Decision Support Systems")?
Who has «added to everything» the term AI? The «individuals leading/working at AI companies»? I would have said the onlookers, or relatively marginal actors (e.g. marketing) who have an interest in the buzzword. So my point was: we will go on using the term in «niche [and not so niche] areas of computer science» regardless of the outside noise.
I was reading a famous book about investing some time ago (I don't remember which one exactly; I think it was A Random Walk Down Wall Street, but don't quote me on that), and one chapter at the beginning of the book talks about the .com bubble and how companies, even ones who had nothing to do with the web, started to put .com or www in their name and saw an immediate bump in their stock price (until it all burst, as we know now).
And every hype cycle / bubble is like that. We saw something similar with cryptocurrencies. For a while, every tech demo at a dev convention had to have some relation to the "blockchain". We saw every variation of names ending in -coin. And a lot of companies that weren't even in tech had dumb projects related to the blockchain, which for anyone slightly knowledgeable about the tech was clearly complete BS, and they were almost always quietly killed off after a few months.
To a much lesser extent, we saw the same with "BigData" (who even uses this word anymore?) and AR/VR/XR.
And now it's AI, until the next recession and/or the next shiny thing that makes for amazing demos pops out.
That is not to say that it is all fake. There are always some genuine businesses that have actual use cases for the tech and will probably survive the burst (or get bought up and live on as the MS/Google/AWS Thingamajig). But you have to be pretty naïve to think 99% of the current AI companies will still be around in 5 years, or to believe their marketing material. But it doesn't matter if you manage to sell before the bubble pops, and so the cycle continues.
Long Island Iced Tea renamed itself Long Blockchain Corp for a stock price bump.
Hah, I forgot about that, and their subsequent delisting. That was about as blatantly criminal as you could get and in that particular hype train I remember some noise about that but not nearly enough.
Yeah, happens every time. Remember when people were promising blockchain but had nothing to show for it (sometimes not even an idea)? Or "cloud powered" for apps that barely made API calls? Remember when anything and everything needed an app, even if it was just some static food menu?
It's obvious BS from anyone in tech, but the people throwing money aren't in tech.
it'll die down, but the marketing tends to stick, sadly. we'll have to deal with whether AI means machine learning or LLMs or video game pathfinding for decades to come.
They're still on r/Metaverse_blockchain. Every day, a new "meme coin". Just in:
"XXX - The AI-Powered Blockchain Project Revolutionizing the Crypto World! ... Where AI Meets Web3"
"XXX is an advanced AI infrastructure that develops AI-powered technologies for the Web3, Blockchain, and Crypto space. We aim to improve the Web3 space for retail users & startups by developing AI-powered solutions designed explicitly for Web3. From LLMs to Web3 AI Tools, XXX is the go-to place to boost your Web3 flow with Artificial Intelligence."
They too are a red-listed species now. Just prompt ChatGPT for business speak with maximum buzzword bullshit and be amazed. When they came for the Kool-Aid visionaries I was not afraid, because I was not a grifter.
This is a synergistic, paradigm-shifting piece of content that leverages cutting-edge, scalable innovation to deliver a robust, value-added user experience. It epitomizes next-gen, mission-critical insights and exemplifies best-in-class thought leadership within the dynamic ecosystem
The 'intelligence' label has been applied to computers since the beginning and it always misleads people into expecting way more than they can deliver. The very first computers were called 'electronic brains' by newspapers.
And this delay between people's mental images of what an 'intelligent' product can do and the actual benefits they get for their money once a new generation reaches the market creates this bullwhip effect in mood. Hence the 'AI winters'. And guess what, another one is brewing because tech people tend to think history is bunk and pay no attention to it.
Turbo
AI should be re-acronymed to mean Algorithmic Insights. Artificial intelligence is akin to a ziplock bag doing algebra with no external interactions.
I say LLM when I'm talking about LLMs, I say Generative ML when I'm talking about Generative ML, and I say ML when I'm talking about everything else.
I don't know what AI is, and nobody else does, that's why they're selling you it.
I'd like to think that AI right now is basically a placeholder term, like a search keyword or hot topic and people are riding the wave to get attention and clicks.
Everything that is magic will be labeled under AI for now, until it gets seated into their proper terms and are only closely discussed by those who are actually driving innovation in the space or are just casually using the applications in business or private.
The term "artificial intelligence" was marketing from its creation. It means "your plastic pal who's fun to be with, especially if you don't have to pay him." Multiple disparate technologies all called "AI", because the term exists to sell you the prospect of magic.
Typically at this point in the hype cycle a new term emerges so companies can differentiate their hype from the pack.
Next up: Synthetic Consciousness, "SC"
Prediction: We will see this press release within 24 months:
"Introducing the Acme Juice Squeezer with full Synthetic Consciousness ("SC"). It will not only squeeze your juice in the morning but will help you gently transition into the working day with an empathetic personality that is both supportive and a little spunky! Sold exclusively at these fine stores..."