
Please don't mention AI again

keiferski
85 replies
14h13m

One thing I’ve noticed with the AI topic is how there is no discussion on how the name of a thing ends up shaping how we think about it. There is very obviously a marketing phenomenon happening now where “AI” is being added to the name of every product. Not because it’s actually AI in any rigorous or historical sense of the word, but because it’s trendy and helps you get investment dollars.

I think one of the results of this is that the concept of AI itself increasingly becomes more muddled until it becomes indistinguishable from a word like “technology” and therefore useless for describing a particular phenomenon. You can already see this with the usage of “AGI” and “super intelligence” which from the definitions I’ve been reading, are not the same thing at all. AGI is/was supposed to be about achieving results of the average human being, not about a sci-fi AI god, and yet it seems like everyone is using them interchangeably. It’s very sloppy thinking.

Instead I think the term AI is going to slowly become less marketing trendy, and will fade out over time, as all trendy marketing terms do. What will be left are actually useful enhancements to specific use cases - most of which will probably be referred to by a word other than AI.

elromulous
44 replies
14h10m

+1.

What happened to ML? With the relatively recent craze precipitated by chatgpt, the term AI (perhaps in no small part due to "OpenAI") has completely taken over. ML is a more apt description of the current wave.

zer00eyz
35 replies
13h31m

OpenAI was betting on AI, AGI, superintelligence.

Look at the Google engineer who thought they had an AI locked up in the basement... https://www.theverge.com/2022/6/13/23165535/google-suspends-...

MS paper on sparks of AGI: https://www.microsoft.com/en-us/research/publication/sparks-...

The rumors were that OpenAI's deal with MS would give them everything until they got to AGI... a perpetual license to all new development.

All the "Safety people" have left the OpenAi building. Even musk isnt talking about safety any more.

I think the bet was that if you fed an LLM enough and got it big enough, it would hit a tipping point and become AGI, or sentient, or sapient. That lines up nicely with the MS deal terms, and with MS's own paper.

I think they figured out that the math doesn't work that way (and never was going to). A prediction of the next token being better isn't intelligence any more than weather prediction will become weather.

garretraziel
19 replies
12h2m

When my friends talked about how AGI is just creating huge enough neural network & feeding it enough data, I have always compared it to: imagine locking a mouse in a library with all the knowledge in the world & expecting it to come out super intelligent.

ben_w
13 replies
11h34m

GPT-3 is about the same complexity as a mouse brain; it did better than I expected…

… but it's still a brain the size of a mouse's.

dgb23
12 replies
10h33m

Mice are far more impressive though.

ben_w
10 replies
9h58m

I've yet to see a mouse write even mediocre python, let alone a rap song about life in ancient Athens written in Latin.

Don't get me wrong, organic brains learn from far fewer examples than AI, there's a lot organic brains can do that AI don't (yet), but I don't really find the intellectual capacity of mice to be particularly interesting.

On the other hand, the question of if mice have qualia, that is something I find interesting.

jampekka
3 replies
9h20m

I have yet to see a machine that would survive a single day in a mouse's natural habitat. And I doubt I'll see one in my lifetime.

Mediocre, or even excellent, Python and rap lyrics in Latin are easy stuff, just like chess and arithmetic. Humans just are really bad at them.

ben_w
2 replies
9h2m

It doesn't say specifically, but I think these lasted more than a day, assuming you'll accept random predator species as a sufficient proof-of-concept substitute for mice which have to do many similar things but smaller:

https://www.televisual.com/news/behind-the-scenes-spy-in-the....

wizzwizz4
1 replies
3h56m

Those robots aren't acquiring their own food, and they're not edible by the creatures that surround them. They're playing on easy mode.

ben_w
0 replies
3h4m

Still passes the "a machine that would survive a single day" test, and given machines run off electricity and we have PV already food isn't a big deal here.

southernplaces7
2 replies
3h24m

but I don't really find the intellectual capacity of mice to be particularly interesting.

But you should find their self-direction capacity incredible and their ability to instinctively behave in ways that help them survive and propagate themselves. There isn't a machine or algorithm on earth that can do the same, much less with the same minuscule energy resources that a mouse's brain and nervous system use to achieve all of that.

This isn't to even mention the vast cellular complexity that lets the mouse physically act on all these instructions from its brain and nervous system and continue to do so while self-recharging for up to 3 years and fighting off tiny, lethal external invaders 24/7, among other things it does to stay alive.

All of that in just a mouse.

ben_w
1 replies
3h6m

But you should find their self-direction capacity incredible

No, why would I?

Depending on what you mean by self-direction, that's either an evolved trait (with evolution rather than the mouse itself as the intelligence) for the bigger picture what-even-is-good, or it's fairly easy to replicate even for a much simpler AI.

The hard part has been getting them to be able to distinguish between different images, not this kind of thing.

and their ability to instinctively behave in ways that help them survive and propagate themselves. There isn't a machine or algorithm on earth that can do the same,

https://en.wikipedia.org/wiki/Evolutionary_algorithm

much less with the same minuscule energy resources that a mouse's brain and nervous system use to achieve all of that.

Is nice, but again, this is mixing up the intelligence of the animal with the intelligence of the evolutionary process which created that instance.

I as a human have no knowledge of the evolutionary process which lets me enjoy the flavour of coriander, and my understanding of the Krebs cycle is "something about vitamin C?" rather than anything functional, and while my body knows these things it is unreasonable to claim that my body knowing them means that I know them.

GOD_Over_Djinn
0 replies
1h8m

Evolution is an abstract concept, and abstract concepts cannot be “intelligent” (whatever that means). This is like saying that gravity or thermodynamics are “intelligent”.

AlexandrB
2 replies
7h57m

I've yet to see a mouse write even mediocre python, let alone a rap song about life in ancient Athens written in Latin.

Isn't this distinction more about "language" than "intelligence"? There are some fantastically intelligent animals, but none of them can do the tasks you mention because they're not built to process human languages.

dgb23
0 replies
6h17m

I agree with the general sentiment but want to add: Dogs certainly process human language very well. From anecdotal experience of our dogs:

In terms of spoken language they are limited, but they surprise me all the time with terms they have picked up over the years. They can definitely associate a lot of words correctly (if it interests them) that we didn't train them with at all, just by mere observation.

An LLM associates bytes with other bytes very well. But it has no notion of emotion, real-world actions and reactions, and so on in relation to those words.

A thing that dogs are often way better at than even humans is reading body language and communicating through body language. They are hyper-aware of the smallest changes in posture, movement and so on. And they are extremely good at communicating intent or manipulating (in a neutral sense) others with their body language.

This is a huge, complex topic that I don't think we really fully understand, in part because every dog also has individual character traits that influence their way of communicating very much.

Here's an example of how complex their communication is. Just from yesterday:

One of our dogs is for some reason afraid of wind. I've observed how she gets spooked by sudden movements (for example curtains at an open window).

Yesterday it was windy and we went outside (off leash in our yard); she was wary, showed subtle fear, and hesitated to move around much. The other dog saw that and then calmly got closer to her, posturing in the same direction she seemed to want to go. He made very small steps forward, waited a bit, let her catch up, and then she let go of the fear and went sniffing around.

This all happened in a very short amount of time, a few seconds; there is a lot more to the communication that would be difficult and wordy to explain. But since I got more aware of these tiny movements (from head to tail!) I started noticing more and more extremely subtle cues of communication, which can't even be processed in isolation but typically require the full context of all movements, the pacing and so on.

Now think about what the above example all entails. What these dogs have to process, know and feel. The specificity of it, the motivations behind it. How quickly they do that and how subtle their ways of communications are.

Body language is a large part of _human_ language as well. More often than not it gives a lot of context to what we speak or write. How often are statements misunderstood because they are only consumed via text? The tone, rhythm and general body language can make all the difference.

ben_w
0 replies
7h36m

Prior to LLMs, language was what "proved" humans were "more intelligent" than animals.

But this is beside the point; I have no doubt that if one were to make a mouse immortal and give it 50,000 years' experience of reading the internet via a tokeniser that turned it into sensory nerve stimulation, with rewards depending on how well it can guess the response, it would probably get this good sooner, simply because organic minds seem to be better at learning than AI.

But mice aren't immortal and nobody's actually given one that kind of experience, whereas we can do that for machines.

Machines can do this because they can (in some senses but not all) compensate for the sample inefficiency by being so much faster than organic synapses.

surfingdino
0 replies
10h25m

And they can cook. I know it, I have seen it in a movie /s

surfingdino
4 replies
10h26m

The mouse would go mad, because libraries preserve more than just knowledge, they preserve the evolution of it. That evolution is ongoing as we discover more about ourselves and the world we live in, refine our knowledge, disprove old assumptions and theories and, on occasion, admit that we were wrong to dismiss them. Also, over time, we place different levels of importance on knowledge from the past.

For example, an old alchemy manual from the middle ages used to record recipes for a cure for some nasty disease was important because it helped whoever had access to it quickly prepare some ointment that sometimes worked, but today we know that most of those recipes were random, non-scientific attempts at coming up with a solution to a medical problem and we have proven that those medicines do not work. Therefore, the importance of the old alchemist's recipe book as a source of scientific truth has gone to zero, but its historic importance has grown a lot, because it helps us understand how our knowledge of chemistry and its applications in health care has evolved.

LLMs treat all text as equal unless they are given hints. But those hints are provided by humans, so there is an inherent bias and the best we can hope for is that those hints are correct at the time of training. We are not pursuing AGI, we are pursuing the goal of automating the creation of answers that look like they are the right answers to the given question, but without much attention to factual, logical, or contextual correctness.

immibis
3 replies
8h50m

No. The mouse would just be a mouse. It wouldn't learn anything, because it's a mouse. It might chew on some of the books. Meanwhile, transformers do learn things, so there is obviously more to it than just the quantity of data.

(Why spend a mouse? Just sit a strawberry in a library, and if the hypothesis that the quantity of data is the only thing that matters holds, you'll have a super intelligent strawberry.)

lgas
0 replies
4h2m

No need to waste a strawberry. Just test the furniture in the library. Either you have super intelligent chairs and tables or not.

ghostzilla
0 replies
4h59m

Or a pebble; for a super intelligent pebble.

“God sleeps in the rock, dreams in the plant, stirs in the animal, and awakens in man.” ― Ibn Arabi

AlexandrB
0 replies
8h1m

Meanwhile, transformers do learn things

That's the question though, do they? One way of looking at gen AI is as a highly efficient compression and search. WinRAR doesn't learn, neither does Google - regardless of the volume of input data. Just because the process of feeding more data into gen AI is named "learning" doesn't mean that it's the same process that our brains undergo.

cyberax
8 replies
13h9m

A prediction of the next token being better isn't intelligence

Why? I think it absolutely can be intelligence.

mdp2021
3 replies
10h38m

Intelligence implies a critical evaluation of the statement under examination, before stating it, on considerations over content.

("You think before you speak". That thinking of course does not stop at "sounding" proper - it has to be proper in content...)

roenxi
1 replies
9h14m

A LLM has to do an accurate simulation of someone critically evaluating their statement in order to predict a next word.

If an LLM can predict the next word without doing a critical evaluation, then it raises the question of what the intelligent people are doing. They might not be doing a critical evaluation at all.

mdp2021
0 replies
6h32m

If an LLM can predict the next word without doing a critical evaluation, then it raises the question of what the intelligent people are doing

Well certainly: in the mind ideas can be connected tentatively by affinity, and they become hypotheses of plausible ideas, but then in the "intelligent" process they are evaluated to see if they are sound (truthful, useful, productive etc.) or not.

Intelligent people perform critical evaluation, others just embrace immature ideas passing by. Some "think about it", some don't (they may be deficient in will or resources - lack of time, of instruments, of discipline etc.).

cyberax
0 replies
1h59m

Intelligence implies a critical evaluation of the statement under examination, before stating it, on considerations over content.

And who says LLM are not able to do that (eventually)?

RealityVoid
3 replies
13h1m

I think it depends on how you define intelligence, but _I_ mostly agree with Francois Collet's stance that intelligence is the ability to find novel solutions and adaptability to new challenges. He feels that memorisation is an important facet, but it is not enough for true intelligence, and that these systems excel at type 2 thinking but have huge gaps at type 1.

The alternative I'm considering is that it might just be a dataset problem: feeding these LLMs on words makes them lack a huge facet of embodied existence that is needed to get context.

I am a nobody though, so who knows....

smus
0 replies
11h42m

*Chollet

cyberax
0 replies
2h0m

And who says that LLMs won't be able to adapt to new conditions?

Right now they are limited by the context, but that's probably a temporary limitation.

ben_w
0 replies
11h32m

I agree, LLM are interesting to me only to the extent that they are doing more than memorisation.

They do seem to do generalisation, to at least some degree.

If it was literal memorisation, we do literally have internet search already.

aa-b
5 replies
10h33m

The "next token" thing is literally true, but it might turn out to be a red herring, because emergence is a real phenomenon. Like how with enough NAND-gates daisy-chained together you can build any logic function you like.

Gradually, as these LLM next-token predictors are set up recursively, constructively, dynamically, and with the right inputs and feedback loops, the limitations of the fundamental building blocks become less important. Might take a long time, though.
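
As a minimal illustrative sketch of the NAND point (nothing from the thread, just standard Boolean logic): any function can be built from NANDs, but only with the right wiring, and XOR for instance takes four of them arranged just so.

    # Minimal sketch: XOR built from NAND gates alone. The wiring, not the
    # gate count, is what produces the function.
    def nand(a: bool, b: bool) -> bool:
        return not (a and b)

    def xor(a: bool, b: bool) -> bool:
        # Standard 4-gate NAND construction of XOR.
        n1 = nand(a, b)
        n2 = nand(a, n1)
        n3 = nand(b, n1)
        return nand(n2, n3)

    # Exhaustive check over all inputs.
    for a in (False, True):
        for b in (False, True):
            assert xor(a, b) == (a != b)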

wizzwizz4
3 replies
3h59m

Like how with enough NAND-gates daisy-chained together you can build any logic function you like.

The version of emergence that AI hypists cling to isn't real, though, in the same way that adding more NAND gates won't magically produce the logic function you're thinking of. How you add the NAND gates matters, to such a degree that people who know what they're doing don't even think about the NAND gates.

RaftPeople
2 replies
2h49m

Exactly this, a thousand times.

There are infinitely more wrong and useless circuits than there are the ones that provide the function you want/need.

lcnPylGDnU4H9OF
0 replies
55m

But isn't that what the training algorithm does? (Genuinely asking since I'm not very familiar with this.) I thought it tries anything, including wrong things, as it gradually finds better results from the right things.

RaftPeople
0 replies
42m

Responding to this:

But isn't that what the training algorithm does?

It's true that training and other methods can iteratively trend towards a particular function/result. But in this case the training is on next token prediction which is not the same as training on non-verbal abstract problem solving (for example).

There are many things humans do that are very different from next token prediction, and those things we do all combine together to produce human level intelligence.

southernplaces7
0 replies
3h29m

This presupposes that conscious, self-directed intelligence is at all what you're thinking it is, which it might not be (probably isn't). Given that, perhaps no amount of predictors in any arrangement or with any amount of dynamism will ever create an emergent phenomenon of real intelligence.

You say emergence is a real thing, and it is, but we have not one single example of it taking the form of sentience in any human-created thing of complexity.

floren
3 replies
13h48m

After the AI Winter, people started calling it ML to avoid the stigma.

Eventually ML got pretty good and a lot of the industry forgot the AI winter, so we're calling it AI again because it sounds cooler.

gessha
2 replies
13h14m

There’s a little more to it. Learning-based systems didn’t prove themselves until the ImageNet moment around 2013. Machine learning wasn’t used in previous AI hype cycles because it wasn’t a learning system but what we now call good old fashioned AI - GOFAI - hard coded features, semantic web, etc.

sharma-arjun
1 replies
10h5m

Are classical machine learning techniques that don't involve neural networks / DL not considered "learning-based systems" anymore? I would argue that even something as simple as linear regression can and should be considered a learning-based system, even ignoring more sophisticated algorithms such as SVMs or boosted tree regression models. And these were in use for quite a while before the ImageNet moment, albeit not with the same level of visibility.

Jach
0 replies
9h26m

I've just accepted that these broad terms are really products of their time, and use of them just means people want to communicate at a higher level above the details. (Whether that's because they don't know the details, or because the details aren't important for the context.) It's a bit better than "magic" but not that different; the only real concern I have is vague terms making it into laws or regulations. I agree on linear regression, and I remember being excited by things like random forests in the ML sphere that no one seems to talk about anymore. I think under current standards of vagueness, even basic things like a color bucket-fill operation in a paint program count as "AI techniques". In the vein of slap a gear on it and call it steampunk, slap a search step on it and call it AI, or slap a past-data storage step on it and call it learning.

sk11001
0 replies
11h0m

the term AI (...) has completely taken over

Maybe if you go by article titles it has. If you look at job titles, there are many more ML engineers than AI engineers.

konfusinomicon
0 replies
5h58m

I make it a point to use ML in every work meeting or interaction where AI is the subject. It has not gained traction.

keiferski
0 replies
13h55m

Yep, the difference is that ML doesn’t have the cultural legacy of the AI concept and thus isn’t as marketable.

jhbadger
0 replies
11h2m

What happened to ML?

It's cyclical. The term "ML" was popular in the 1990s/early 2000s because "AI" had a bad odor of quackdom due to the failures of things like expert systems in the 1980s. The point was to reduce the hype and say "we're not interested in creating AGI; we just want computers to run classification tasks". We'll probably come up with a new term in the future to describe LLMs in the niches where they are actually useful after the current hype passes.

somenameforme
9 replies
13h52m

In the beginning I think some, if not many, people did genuinely think it was "AI". It was the closest we've ever gotten to a natural language interface and that genuinely felt really different than anything before, even to an extreme cynic. And I also think there's many people that want us to develop AI and so were trying to actively convince themselves that e.g. GPT or whatever was sentient. Maybe that Google engineer who claimed an early version of Bard was "sentient" even believed himself (though I still suspect that was probably just a marketing hoax).

It's only now that everybody's used to natural language interfaces that I think we're becoming far less forgiving of things like this nonsense:

---

- "How do I do (x)."

- "You do (A)."

- "No, that's wrong because reasons."

- "Oh I'm sorry you're 100% right. Thank you for the correction. I'll keep that in mind in the future. You do (B)."

- "No that's also wrong because reasons."

- "Oh I'm sorry you're 100% right. Thank you for the correction. I'll keep that in mind in the future. You do (A)."

- #$%^#$!!#$!

---

maigret
2 replies
11h13m

Yes, it's a language model, not a reasoning model. But people got confused at the start.

lagrange77
0 replies
8h50m

Thank you. Few people really seem to acknowledge the name.

another2another
1 replies
8h36m

It was the closest we've ever gotten to a natural language interface and that genuinely felt really different

Agreed, but I think we'll soon start to discover that interacting with systems as if they were text adventure games of the 80's is also going to get pretty weird and frustrating, not to mention probably inefficient.

danaris
0 replies
5h40m

Even worse, because the text adventure games of the '70s and '80s (at least, the ones that were reasonably successful) had consistent internal state. If you told it to move to the Back of Fountain, you were in the Back of Fountain and would stay there until you told it to move somewhere else. If you told it to wave the black rod with the star at the chasm, a bridge appeared and would stay there.

If you tell ChatGPT that your name is Dan, and give it some rules to follow, after enough exchanges, it will simply forget these things. And sometimes it won't accurately follow the rules, even when they're clearly and simply specified, even the first prompt after you give them. (At least, this has been my experience.)

I don't think anyone really wants to play "text adventure game with advanced Alzheimer's".

stn_za
0 replies
9h39m

"So, when will you be updating yourself to not provide the same wrong answer in future, or when I ask it later in a new session"

"No."

pineaux
0 replies
11h46m

I love this. I want a t-shirt with this

Yizahi
0 replies
7h4m

There was a great post here about this specific possibility, that NN crowd has effectively convinced themselves in the supernatural abilities of their NNs:

https://softwarecrisis.dev/letters/llmentalist/

TeMPOraL
0 replies
8h40m

It's only now that everybody's used to natural language interfaces that I think we're becoming far less forgiving (...)

Sort of, kind of. Those natural language interfaces that Actually Work aren't even 2 years old yet, and in that time there weren't many non-shit integrations released. So at best, some small subset of the population is used to the failure modes of ChatGPT app - but at least the topic is there, so future users will have better aligned expectations.

intended
6 replies
12h28m

Hallucinations!

A generative tool can’t Hallucinate! It isn’t misperceiving its base reality and data.

Humans Hallucinate!

ARGH. At least it’s becoming easier to point this out, compared to when ChatGPT came out.

mdp2021
2 replies
10h29m

Calculators compute; they have to compute reliably; humans are limited and can make computing mistakes.

We want reliable tools - they have to give reliable results; humans are limited and can be unreliable.

That is why we need the tools - humans are limited, we want tools that overcome human limitation.

I really do not see where you intended to go with your post.

surfingdino
0 replies
10h21m

This. We have trained generations of people to trust computers to give correct answers. LLM peddlers use that trust to sell systems that provide unreliable answers.

keiferski
0 replies
10h19m

Not the poster you’re replying to, but -

I took his point to mean that hallucinate is an inaccurate verb to describe the phenomenon of AI creating fake data, because the word hallucination implies something that is separate from the “real world.”

This term is thus not an accurate label, because that’s not how LLMs work. There is no distinction between “real” and “imagined” data to an LLM - it’s all just data. And so this metaphor is one that is misleading and inaccurate.

keiferski
0 replies
12h4m

Great example. People will say, “oh that’s just how the word is used now,” but its misuse betrays a real lack of rigorous thought about the subject. And as you point out, it leads one to make false assumptions about the nature of the data’s origin.

Gormo
0 replies
5h18m

If "hallucination" refers to mistaking the product of internal processes for external perception, then generative AI can only hallucinate, as all of its output comes from inference against internal, hard-coded statistical models with zero reconciliation against external reality.

Humans sometimes hallucinate, but still have direct sensory input against which to evaluate inferential conclusions against empirical observation in real time. So we can refine ideas, whatever their origin, against external criteria of correctness -- this is something LLMs totally lack.

DidYaWipe
0 replies
12h9m

Amen. But if you do, you'll still be attacked by apologists. Per the norm for any Internet forum... this one being a prime example.

atoav
3 replies
10h45m

The grifter is a nihilist. Nothing is holy to a grifter and if you let them they will rob every last word of its original meaning.

The problem with AI, as clearly outlined in this article, is the same as the problem with the blockchain or with other esoteric grifts: it drains needed resources from often already crumbling systems¹.

The people falling for the hype beyond the actual usefulness of the hyped object are wishing for magical solutions that they imagine will solve all their problems. Problems that can't be fixed by wishful thinking, but by not fooling yourself and making technological choices that adequately address the problem.

I am not saying that LLMs are never going to be a good choice to adequately address the problem. What I am saying is that people blinded by blockchain/AI/quantum/snake-oil hype are the wrong people to make that choice, as for them every problem needs to be tackled using the current hype.

Meanwhile a true expert will weigh all available technological choices and carefully test them against the problem. So many things can be optimized and improved using hard, honest work, careful choices and a group of people trying hard not to fool themselves, this is how humanity managed to reach the moon. The people who stand in the way of our achievements are those who lost touch with reality, while actively making fools of themselves.

Again: it is not about being "against" LLMs, it is about leaders admitting they don't know when they in fact do not know. And a sure way to realize you don't know is to try yourself and fail.

¹ I had to think about my childhood friend, whose esoteric mother died of a preventable disease because she fooled herself into believing in magical cures and gurus until the fatal end.

fragmede
2 replies
9h46m

That is funny, because of all the problems with LLMs, the biggest one is that they will lie/hallucinate/confabulate to your face before saying I don't know, much like those leaders.

hengheng
1 replies
5h35m

Is this inherent to LLMs by the way, or is it a training choice? I would love for an LLM to talk more slowly when it is unsure.

This topic needs careful consideration and I should use more brain cycles on it. Please insert another coin.

wizzwizz4
0 replies
3h50m

It's fairly inherent. Talking more slowly wouldn't make it more accurate, since it's a next-token predictor: you'd have to somehow make it produce more tokens before "making up its mind" (i.e., outputting something that's sufficiently correlated with a particular answer that it's a point of no return), and even that is only useful to the extent it's memorised a productive algorithm.

You could make the user interface display the output more slowly "when it is unsure", but that'd show you the wrong thing: a tie between "brilliant" and "excellent" is just as uncertain as a tie between "yes" and "no".
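
As a small illustrative sketch of that last point (toy numbers only, nothing measured from a real model): two next-token distributions can carry exactly the same uncertainty even though one tie is harmless and the other is consequential, so "uncertainty" alone doesn't tell the interface what to slow down for.

    # Toy example: entropy of the next-token distribution is identical for a
    # stylistic tie ("brilliant" vs "excellent") and a factual tie ("yes" vs "no").
    import math

    def entropy(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    stylistic_tie = {"brilliant": 0.5, "excellent": 0.5}
    factual_tie = {"yes": 0.5, "no": 0.5}

    print(entropy(stylistic_tie.values()))  # 1.0 bit
    print(entropy(factual_tie.values()))    # 1.0 bit -- same uncertainty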

surfingdino
2 replies
12h0m

Couldn't agree more. We are seeing gradual degradation of the term AI. LLMs have stolen the attention of the media and the general population, and I notice that people equate ChatGPT with all of AI. It is not, but an average person doesn't know the difference. In a way, genAI is the worst thing that happened to AI. The ML part is amazing; it allows us to understand the world we live in, which is what we have evolved to seek and value, because there is an evolutionary benefit to it--survival. The generative side is not even a solution to any particular problem, it is a problem that we are forced to pay to make bigger. I wrote "forced", because companies like Adobe use their dominant position to override legal frameworks developed to protect intellectual property and client-creator contracts in order to grab content that is not theirs to train models they resell back to the people they stole content from and to subject ourselves to unsupervised, unaccountable policing of content.

lolinder
1 replies
5h52m

I wrote "forced", because companies like Adobe use their dominant position to override legal frameworks developed to protect intellectual property and client-creator contracts in order to grab content that is not theirs to train models they resell back to the people they stole content from and to subject ourselves to unsupervised, unaccountable policing of content.

Adobe is an obnoxious company that does a lot of bad things, but it's weird to me seeing them cast in a negative light like this for their approach to model training copyright. As far as I know they were the first to have a genAI product that was trained exclusively on data that they had rights to under existing copyright law, rather than relying on a fair use argument or just hoping the law would change around them. Out of all the companies who've built AI tooling they're the last one I'd expect to see dragged out as an example of copyright misbehavior.

wizzwizz4
0 replies
4h2m

Out of all the companies who've built AI tooling they're the last one I'd expect to see dragged out as an example of copyright misbehavior.

I was surprised, too: I thought Adobe's main problem was their abusive pricing, and I was actually a little impressed by their take on this hype wave. And yet. https://news.ycombinator.com/item?id=40607442

mdp2021
2 replies
10h16m

AGI is/was supposed to be about achieving results of the average human being, not about a sci-fi AI god

When we build something we do not intend to build something that just achieves results «of the average human being», and a slow car, a weak crane, a vague clock are built provisionally in the process of achieving the superior aid intended... So AGI expects human level results provisionally, while the goal remains to go beyond them. The clash you see is only apparent.

I think the term AI is going to ... will fade out over time

Are you aware that we have been using that term for at least 60 years?

And that the Brownian minds of the masses very typically try to interfere while we proceed focusedly and regarding it as noise? Today they decide that the name is Anna, tomorrow Susie: childplay should remain undeterminant.

keiferski
1 replies
10h4m

Building something that replicates the abilities of the average human being in no way implies that this eventually leads to a superintelligent entity. And my broader point was that many people are using the term AGI as synonymous with that superintelligent entity. The concepts are very poorly defined and thrown around without much deeper thought.

Are you aware that we have been using that term for at least 60 years?

Yes, and for the first ±55 of those years, it was largely limited to science fiction stories and niche areas of computer science. In the last ±5 years, it's being added to everything. I can order groceries with AI, optimize my emails with AI, on and on. The term has become exceptionally more widespread recently.

https://trends.google.com/trends/explore?date=today%205-y&q=...

And that the Brownian minds of the masses very typically try to interfere while we proceed focusedly and regarding it as noise? Today they decide that the name is Anna, tomorrow Susie: childplay should remain undeterminant.

You're going to have to rephrase this sentence, because it's unclear what point you're trying to make other than "the masses are stupid." I'm not sure "the masses" are even relevant here, as I'm talking about individuals leading/working at AI companies.

mdp2021
0 replies
6h18m

I honestly never understood AGI as a simulation of Average Joe: it makes no sense to me. Either we go for the implementation of a high degree of intelligence, or why should we give an "important" name to a "petty project" (however complicated, it can only be an effort that does not have an end in itself)? Is it possible that the terminological confusion you see is because we are individually very rooted in our assumptions (e.g. "I want AGI as a primary module in Decision Support Systems")?

In the last ±5 years, it's being added to everything // I'm not sure "the masses" are even relevant here, as I'm talking about individuals leading/working at AI companies

Who has «added to everything» the term AI? The «individuals leading/working at AI companies»? I would have said, the onlookers, or relatively marginal actors (e.g. marketing) who have an interest in the buzzword. So my point was: we will go on using the term in «niche [and not so niche] areas of computer science» regardless of the outside noise.

maeln
2 replies
10h12m

One thing I’ve noticed with the AI topic is how there is no discussion on how the name of a thing ends up shaping how we think about it. There is very obviously a marketing phenomenon happening now where “AI” is being added to the name of every product. Not because it’s actually AI in any rigorous or historical sense of the word, but because it’s trendy and helps you get investment dollars.

I was reading one famous book about investing some time ago (I don't remember which one exactly, I think it was A Random Walk Down Wall Street, but don't quote me on that) and one chapter at the beginning of the book talks about the .com bubble and how companies, even ones who had nothing to do with the web, started to put .com or www in their name and saw an immediate bump in their stock price (until it all burst, as we know now).

And every hype cycle / bubble is like that. We saw something similar with cryptocurrencies. For a while, every tech demo at dev conventions had to have some relation to the "blockchain". We saw every variation of names ending in -coin. And a lot of companies that were not even in tech had dumb projects related to the blockchain which, to anyone slightly knowledgeable about the tech, were clearly complete BS, and they were almost always quietly killed off after a few months.

To a much lesser extent, we saw the same with "BigData" (who even uses this word anymore?) and AR/VR/XR.

And now it's AI, until the next recession and/or the next shiny thing that makes for amazing demos pops out.

This is not to say that it is all fake. There are always some genuine businesses that have actual use cases for the tech and will probably survive the burst (or get bought up and live on as MS/Google/AWS Thingamajig). But you have to be pretty naïve to believe the marketing material and think 99% of the current AI companies will still be around in 5 years. Then again, it doesn't matter if you manage to sell before the bubble pops, and so the cycle continues.

immibis
1 replies
8h48m

Long Island Iced Tea renamed to Long Island Blockchain for a stock price bump.

JohnMakin
0 replies
53m

Hah, I forgot about that, and their subsequent delisting. That was about as blatantly criminal as you could get and in that particular hype train I remember some noise about that but not nearly enough.

johnnyanmac
2 replies
13h53m

Yeah, this happens every time. Remember when people were promising blockchain but had nothing to show for it (sometimes, not even an idea)? Or "cloud powered" for apps that barely made API calls? Remember when anything and everything needed an app, even if it was just some static food menu?

It's obvious BS from anyone in tech, but the people throwing money aren't in tech.

I think the term AI is going to slowly become less marketing trendy, and will fade out over time, as all trendy marketing terms do. What will be left are actually useful enhancements to specific use cases - most of which will probably be referred to by a word other than AI.

It'll die down, but the marketing tends to stick, sadly. We'll have to deal with whether AI means machine learning or LLMs or video game pathfinding for decades to come.

Animats
1 replies
13h25m

Remember when people were promising blockchain but had nothing to show for it (sometimes, not even an idea)?

They're still on r/Metaverse_blockchain. Every day, a new "meme coin". Just in:

"XXX - The AI-Powered Blockchain Project Revolutionizing the Crypto World! ... Where AI Meets Web3"

"XXX is an advanced AI infrastructure that develops AI-powered technologies for the Web3, Blockchain, and Crypto space. We aim to improve the Web3 space for retail users & startups by developing AI-powered solutions designed explicitly for Web3. From LLMs to Web3 AI Tools, XXX is the go-to place to boost your Web3 flow with Artificial Intelligence."

Log_out_
0 replies
11h47m

They too are a red-listed species now. Just prompt ChatGPT for business speak with maximum buzzword bullshit and be amazed. When they came for the Kool-Aid visionaries I was not afraid, because I was not a grifter.

This is a synergistic, paradigm-shifting piece of content that leverages cutting-edge, scalable innovation to deliver a robust, value-added user experience. It epitomizes next-gen, mission-critical insights and exemplifies best-in-class thought leadership within the dynamic ecosystem

namaria
0 replies
10h50m

The 'intelligence' label has been applied to computers since the beginning and it always misleads people into expecting way more than they can deliver. The very first computers were called 'electronic brains' by newspapers.

And this delay between people's mental images of what an 'intelligent' product can do and the actual benefits they get for their money once a new generation reaches the market creates this bullwhip effect in mood. Hence the 'AI winters'. And guess what, another one is brewing because tech people tend to think history is bunk and pay no attention to it.

mrcartmeneses
0 replies
13h59m

Turbo

konfusinomicon
0 replies
5h52m

AI should be re-acronymed to mean Algorithmic Insights. Artificial intelligence is akin to a ziplock bag doing algebra with no external interactions.

jszymborski
0 replies
3h56m

I say LLM when I'm talking about LLMs, I say Generative ML when I'm talking about Generative ML, and I say ML when I'm talking about everything else.

I don't know what AI is, and nobody else does, that's why they're selling you it.

jerieljan
0 replies
13h47m

I'd like to think that AI right now is basically a placeholder term, like a search keyword or hot topic and people are riding the wave to get attention and clicks.

Everything that is magic will be labeled under AI for now, until it settles into its proper terms and is only closely discussed by those who are actually driving innovation in the space or are just casually using the applications in business or private.

davidgerard
0 replies
11h22m

The term "artificial intelligence" was marketing from its creation. It means "your plastic pal who's fun to be with, especially if you don't have to pay him." Multiple disparate technologies all called "AI", because the term exists to sell you the prospect of magic.

RaftPeople
0 replies
2h58m

Typically at this point in the hype cycle a new term emerges so companies can differentiate their hype from the pack.

Next up: Synthetic Consciousness, "SC"

Prediction: We will see this press release within 24 months:

"Introducing the Acme Juice Squeezer with full Synthetic Consciousness ("SC"). It will not only squeeze your juice in the morning but will help you gently transition into the working day with an empathetic personality that is both supportive and a little spunky! Sold exclusively at these fine stores..."

cleandreams
64 replies
13h14m

I worked for an AI startup that got bought by a big tech company and I've seen the hype up close. In the inner tech circles it's not exactly a big lie. The tech is good enough to make incredible demos but not good enough to generalize into reliable tools. The gulf between demo and useful tool is much wider than we thought.

beoberha
31 replies
12h57m

I work at Microsoft, though not in AI. This describes Copilot to a T. The demos are spectacular and get you so excited to go use it, but the reality is so underwhelming.

moose_man
17 replies
12h29m

It's a lot like AR before the Vision Pro. The demos and reality didn't match. I'm not trying to claim the Vision Pro is perfect, but it seems to do AR in the real world without the circumstances needing to be absolutely ideal.

Animats
8 replies
11h19m

The Vision Pro is not doing well. Apple has cancelled the next version.[1] As Carmack says, AR/VR will be a small niche until the headgear gets down to swim goggle size, and will not go mainstream until it gets down to eyeglass size.

[1] https://www.msn.com/en-us/lifestyle/shopping/apple-shelves-n...

Yizahi
2 replies
7h26m

Which it probably won't, because real-life physics is not aware of roadmaps and corporate ads.

matthewdgreen
1 replies
6h46m

What physics are you talking about? Limits on power? Display? Sensor size? I ask because I’ve had similar feelings about things like high speed mobile Internet or mobile device screen size (over a couple of decades) and lived to see all my intuition blown away, so I really don’t believe in limits that don’t have explicit physical constraints behind them.

throwup238
0 replies
6h8m

Lens diffraction limits. VR needs lenses that are small and thin enough while still being powerful enough to bend the needed light towards the eyes. Modern lenses need more distance between the screen and the eyes and they’re quite thick.

Theoretically future lenses may make it possible, but the visible light metamaterials needed are still very early research stage.

namaria
1 replies
10h53m

I think both hardware and software in AR have to become unobtrusive for people to adopt it. And then it will be a specialized tool for stuff like maintenance. Keeping large amounts of information in context without requiring frequent changes in context. But I also think that the information overload will put a premium on non-AR time. Once it becomes a common work tool, people using it will be very keen to touch grass and watch clouds afterwards.

I don't think it will ever become the mainstream everyday carry proponents want it to be. But only time will tell...

Turskarama
0 replies
9h47m

Until there is an interface for it that allows you to effectively touch-type (or equivalent), 99% of jobs won't be able to use it away from a desk anyway. Speech-to-text would be good enough for writing (non-technical) documentation but probably not for things like filling spreadsheets or programming.

salad-tycoon
0 replies
9h9m

Your article states this differently. The development has not been canceled fully but refocused:

“and now hopes to release a more standard headset with fewer abilities by the end of next year.”

addandsubtract
0 replies
9h11m

It was always the plan for Apple to release a cheaper version of the Vision Pro next. That the next version of the Pro has been postponed isn't a huge sign. It just seems that the technology isn't evolving quickly enough to warrant a new version any time soon.

pbmonster
6 replies
11h13m

But does what Apple has shown in its demos of the Vision Pro actually meet reality? Does it provide any value at all?

In my eyes, it's exactly the same as AI. The demos work. You can play around with it, and it's impressive for an hour. But there's just very little value.

MrScruff
3 replies
10h17m

The value would come if it was something you would feel comfortable wearing all day. So it would need perfect pass through, be much much lighter and more comfortable. If they achieved that and can do multiple high resolution virtual displays then people would use it.

The R&D required to get to that point is vast though.

pbmonster
1 replies
9h59m

can do multiple high resolution virtual displays

In most applications, it then would need to compete on price with multiple high resolution displays, and undercut them quite significantly to break the inertia of the old tech (and other various advantages - like not wearing something all day and being able to allow other people to look at what you have on your screen).

MrScruff
0 replies
9h44m

I take your point but living in a London flat I don't have the room for multiple high resolution displays. Nor are they very portable, I have a MBP rather than an iMac because mobility is important.

I do think we're 4+ years until it gets to the 'iPhone 1' level of utility though, so we'll see how committed Apple are to it.

Yizahi
0 replies
7h21m

That's what all these companies are peddling though. The question is - do humans actually NEED a display before their eyes for all awake time? Or even most of it? Maybe, but today I have some doubts.

zmmmmm
1 replies
9h53m

It's very sad because it's sort of a so-near-yet-so-far kind of situation.

It would be valuable if it could do multimonitor, but it can't. It would be valuable if it could run real apps but it only runs iPad apps. It would be valuable if Apple opened up the ecosystem, and let it easily and openly run existing VR apps, including controllers - but they won't.

In fact the hardware itself crosses the threshold to where the value could be had, which is something that couldn't be said before. But Apple deliberately crimped it based on their ideology, so we are still waiting. There is light at the end of the tunnel though.

pbmonster
0 replies
8h7m

But Apple deliberately crimped it based on their ideology

It's in a strange place, because Apple definitely also crimped it by not even writing enough software for it inhouse.

Why can't it run Mac apps? Why can't you share your "screen configuration" and its contents with other people wearing a Vision Pro in the same room as you?

timeon
0 replies
9h57m

It is not really AR. Reality is not just augmented but captured first with a camera. It can make someone dizzy.

TeMPOraL
10 replies
10h38m

Copilot isn't underwhelming, it's shit. What's impressive is how Microsoft managed to gut GPT-4 to the point of near-uselessness. It refuses to do work even more than OpenAI models refuse to advise on criminal behavior. In my experience, the only thing it does well is scan documents on corporate SharePoint. For anything else, it's better to copy-paste to a proper GPT-4 yourself.

(Ask Office Copilot in PowerPoint to create you a slide. I dare you! I double dare you!!)

The problem with demos is that they're staged, they showcase integrations that are never delivered, and probably never existed. But you know what's not hype and fluff? The models themselves. You could hack a more useful Copilot with AutoHotkey, today.

I have GPT-4o hooked up as a voice assistant via Home Assistant, and what a breeze that is. Sure, every interaction costs me some $0.03 due to inefficient use of context (HA generates too much noise by default in its map of available devices and their state), but I can walk around the house and turn devices on and off by casually chatting with my watch, and it works, works well, and works faster than it takes to turn on Google Assistant.

So no, I honestly don't think AI advances are oversold. It's just that companies large and small race to deploy "AI-enabled" features, no matter how badly made they are.

wordofx
3 replies
10h34m

lol I can’t help but assume that people who think copilot is shit have no idea what they are doing.

TeMPOraL
1 replies
10h30m

I have it enabled company-wide at enterprise level, so I know what it can and can't do in day-to-day practice.

Here's an example: I mentioned PowerPoint in my earlier comment. You know what's the correct way to use AI to make you PowerPoint slides? A way that works? It's to not use the O365 Copilot inside PowerPoint, but rather, ask GPT-4o in ChatGPT app to use Python and pandoc to make you a PowerPoint.

I literally demoed that to a colleague the other day. The difference is like night and day.

falcor84
0 replies
5h1m

I've gone back to using GitHub Copilot with reveal.js [0]. It's much nicer to work with, and I'd recommend it unless you specifically need something from PowerPoint's advanced features.

[0] https://revealjs.com/

fragmede
0 replies
10h27m

GitHub (which is owned by Microsoft) Copilot or Microsoft Copilot?

NautilusWave
3 replies
7h52m

Basically, functional AI interactions are prohibitively resource intensive and expensive. Microsoft's non-coding Copilots are shit due to resource constraints.

TeMPOraL
1 replies
6h13m

Basically, yes. My last 4 days of playing with this voice assistant cost me some $3.60 for 215 requests to GPT-4o, amounting to a little under 700,000 tokens. It's something I can afford[0], but with costs like this, you can't exactly give GPT-4 access out to people for free. This cost structure doesn't work. It doesn't work with GPT-4o, so it certainly didn't with earlier model iterations that cost more than twice as much. And yet, that is what you need if you want a general-purpose Copilot or Assistant-like system. GPT-3.5-Turbo ain't gonna cut it. Llamas ain't gonna cut it either[1].

In a large sense, Microsoft lied. But they didn't lie about capability of the technology itself - they just lied about being able to afford to deliver it for free.

--

[0] - Extrapolated to a hypothetical subscription, this would be ~$27 per month. I've seen more expensive and worse subscriptions. Still, it's a big motivator to go dig into the code of that integration and make it use ~2-4x fewer tokens by encoding "exposed entities" differently, and much more concisely.

[1] - Maybe Llama 3 could, but IIRC license prevents it, plus it's how many days old now?
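
For rough context, a back-of-the-envelope check of those figures (using only the numbers quoted above, not real pricing data):

    # Sanity-check of the quoted costs: roughly $0.017 and ~3,250 tokens per
    # request, and roughly $27/month if the 4-day usage pattern held up.
    requests = 215
    total_cost = 3.60        # USD over ~4 days
    total_tokens = 700_000

    cost_per_request = total_cost / requests      # ~0.017 USD
    tokens_per_request = total_tokens / requests  # ~3,256 tokens
    monthly_estimate = total_cost / 4 * 30        # ~27 USD/month

    print(cost_per_request, tokens_per_request, monthly_estimate)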

falcor84
0 replies
5h5m

they just lied about being able to afford to deliver it for free.

But they never said it'll be free - I'm pretty sure it was always advertised as a paid add-on subscription. With that being the case, why would they not just offer multiple tiers to Copilot, using different models or credit limits?

pdimitar
0 replies
7h16m

Contrary to what the corporations want you to believe -- no, you can't buy your way out of every problem. Most of the modern AI tools are mostly oversold and underwhelming, sadly.

dsauerbrun
1 replies
4h21m

Whoa, that's very cool. Can you share some info about how you set up the integration in HA? Would love to explore doing something like this for myself.

TeMPOraL
0 replies
3h53m

With the most recent update, it's actually very simple. You need three things:

1) Add OpenAI Conversation integration - https://www.home-assistant.io/integrations/openai_conversati... - and configure it with your OpenAI API key. In there, you can control part of the system prompt (HA will add some stuff around it) and configure model to use. With the newest HA, there's now an option to enable "Assist" mode (under "Control Home Assistant" header). Enable this.

2) Go to "Settings/Voice assistants". Under "Assist", you can add a new assistant. You'll be asked to pick a name, language to use, then choose a conversation model - here you pick the one you configured in step 1) - and Speech-to-Text and Text-to-Speech models. I have a subscription to Home Assistant Cloud, so I can choose "Home Assistant Cloud" models for STT and TTS; it would be great to integrate third party ones here, but I'm not sure if and how.

3) Still in "Settings/Voice assistants", look for a line saying "${some number} entities exposed", under "Add assistant" button. Click that, and curate the list of devices and sensors you want "exposed" to the assistant - "exposed" here means that HA will make a large YAML dump out of selected entities and paste that into the conversation for you[0]. There's also other stuff (I heard docs mentioning "intents") that you can expose, but I haven't look into it yet[1].

That's it. You can press the Assist button and start typing. Or, for much better experience, install HA's mobile app (and if you have a smartwatch, the watch companion app), and configure Home Assistant as your voice assistant on the device(s). That's how you get the full experience of randomly talking to your watch, "oh hey, make the home feel more like a Borg cube", and witnessing lights turning green and climate control pumping heat.

I really recommend everyone who can to try that. It's a night-and-day difference compared to Siri, Alexa or Google Now. It finally fulfills those promises of voice-activated interfaces.

(I'm seriously considering making a Home Assistant to Tasker bridge via HA app notification, just to enable the assistant to do things on my phone - experience is just that good, that I bet it'll, out of the box, work better than Google stuff.)

--

[0] - That's the inefficient token waster I mentioned in the previous comment. I have some 60 entities exposed, and best I can tell, it generates a couple thousand tokens' worth of YAML, most of which is noise like entity IDs and YAML structure. This could be cut down significantly if you named your devices and entities cleverly (and concisely), but I think my best bet is to dig into the code and trim it down. And/or create synthetic entities that stand for multiple entities representing a single device or device group, like e.g. one "A/C" entity that combines multiple sensor entities from all A/C units.

[1] - Outside the YAML dump that goes with each message (and a preamble with the current date/time), which is how the assistant knows the current state of every exposed entity, there's also an extra schema exposing controls via the "function calling" mechanism of the OpenAI API, which is how the assistant is able to control devices at home. I assume those "intents" go there. I'll be looking into it today, because there's a bunch of interactions I could simplify if I could expose automation scripts to the assistant.

throwaway2037
1 replies
12h14m

I never considered this angle. (Yeah, I am a sucker -- I know.) Are you saying that they cherry pick the best samples for the demo? Damn. I _still_ have high hopes for something like Copilot. I work on CRUD apps. There are so many cases where I want Copilot to provide some sample code to do X.

beoberha
0 replies
11h33m

Sorry I didn’t mean GitHub Copilot. Code generation is definitely one of the better use cases for AI. I meant the “Copilot” brand that Microsoft has trotted out into pretty much every one of its products and rolled together in this generic “Copilot” app on Windows.

yawnxyz
16 replies
11h55m

I just used Groq / llama-7b to classify 20,000 rows of Google sheets data (Sidebar archive links) that would have taken me way longer... Every one I've spot checked right now has been correct, and I might write another checker to scan the results just in case.

Even with a 20% failure rate it's better than not having the classifications
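For anyone wondering what this looks like in practice, here's a rough sketch of the kind of loop involved - the categories, CSV layout and model name are placeholders, not my actual setup:

  import csv
  from groq import Groq

  client = Groq()  # reads GROQ_API_KEY from the environment
  CATEGORIES = ["article", "tool", "video", "paper", "other"]  # placeholders

  def classify(title: str, url: str) -> str:
      prompt = (
          f"Classify this link into exactly one of {CATEGORIES}.\n"
          f"Title: {title}\nURL: {url}\n"
          "Answer with the category name only."
      )
      resp = client.chat.completions.create(
          model="llama3-8b-8192",  # whichever Llama model Groq hosts at the time
          messages=[{"role": "user", "content": prompt}],
          temperature=0,
      )
      answer = resp.choices[0].message.content.strip().lower()
      return answer if answer in CATEGORIES else "other"  # fall back when it goes off-script

  with open("links.csv") as f, open("classified.csv", "w", newline="") as out:
      writer = csv.writer(out)
      for title, url in csv.reader(f):  # assumes a two-column sheet export
          writer.writerow([title, url, classify(title, url)])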

physicsguy
5 replies
8h39m

The problem isn't that it's not useful for self driven tasks like that, it's that you can't really integrate that into a product that does task X because when someone buys a system to do task X, they want it to be more reliable than 80%.

p1necone
4 replies
8h35m

Stick a slick UI on it that lets the end user quickly fix up just the bits it got wrong and flip through documents, and 80% correct can still be a massive timesaver.

kgeist
2 replies
8h17m

We're thinking about adding AI to the product and that's the path I'd like to take. View AI as an intern who can make mistakes, and provide a UI where the user can review what the AI is planning to do.

whiplash451
1 replies
5h58m

Except that people hate monitoring an intern all day, regardless of whether it is a human or a machine.

techostritch
0 replies
5h8m

I think this is going to be a heavy lift, and it's one of the reasons I think a chatbot is not the right UX. Every time someone says “all you need to do to get ChatGPT working is provide it explicit requirements and iterate”, I think: for a lot of coding tasks it’s much easier to just hack on code for a while than to be a manager to an 80%-right intern.

physicsguy
0 replies
7h43m

I think that can kind of work for B2C things, but is much less likely to do so for B2B. Just as an example, I work on industrial maintenance software, and customers expect us to catch problems with their machinery 100% of the time, and in time to prevent it. Sometimes faults start and progress to failure faster than they're able to send data to us, but they still are upset that we didn't catch it.

It doesn't matter whether that's reasonable or not, there are a lot of people who expect software systems to be totally reliable at what they do, and don't want to accept less.

lifeisstillgood
5 replies
11h22m

Sorry, this actually sounds like a real use case. What was the classification? (I tried googling “sidebar archive”.) I assume you somehow visited 20,000 web pages and it classified the text on each page? How was that achieved? Did you run a local llama?

karles
2 replies
10h2m

We had ChatGPT look at 200,000 products and make a navigation structure in 3 tiers based on the attributes of each product. The validation took 2% of the time it would have taken to manually create the hierarchy ourselves.

I think that even the simple LLMs are very well suited to classification tasks, where very little prompting is needed.

lifeisstillgood
1 replies
8h51m

Sorry to harp on.

So you had a list of products (what sort - I am thinking like widgets from a wholesaler and you want to have a three tier menu for an e-commerce site?)

I am guessing each product has a description - like from Amazon - and ChatGPT read the description and said “aha, this is a Television/LCD/50inch or Underwear/flimsy/bra”.

I assume you sent in 200,000 different queries - but how did you get it to return three tiers? (Maybe I need to read one of those “become a ChatGPT expert” blogs.)

t-writescode
0 replies
4h44m

I'm not this person; but, I've been working on LLMs pretty aggressively for the last 6ish months and I have some ideas of how this __could__ be done.

You could plainly ask the LLM something like this as the query goes on:

"Please provide 3 categories that this product could exist under, with increasing specificity in the following format:

  {
     "broad category": "a broad category that would encompass this product, as well as others, for example 'televisions' for a 50" OLED LG with Roku integration",
     "category": "a narrower category that describes this product more aggressively, for example 'Smart Televisions'",
     "narrow category": "an even narrower category that describes this product and its direct competitors, for example OLED Smart televisions"
  }

The next question you'll have pretty quickly is, "Well, what if sometimes it returns 'Smart televisions' and other times it returns 'Smart TVs', won't that result in multiple versions of the same category?" And that's a good and valid question, so you then have another query that takes the categories that have been provided to you and asks for synonyms, alternative spellings, etc., such as:

"Given a product categorization of a specific level of specificity, please provide a list of words and phrases that mean the same thing".

In OpenAI's backend - and many of the others, I think - you can have the API run the query multiple times and get back multiple answers. Enumerate over those answers, build the graph, and you can have all that data in an easy to read and follow format!

It might not be perfect, but it should be pretty good!
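A minimal sketch of that flow, assuming the OpenAI Python SDK - the model name, JSON keys and product data here are placeholders, not a tested pipeline:

  import json
  from collections import defaultdict
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  def categorize(description: str) -> list[dict]:
      resp = client.chat.completions.create(
          model="gpt-4o",  # placeholder; any JSON-mode-capable model
          n=3,             # several answers per product, as suggested above
          response_format={"type": "json_object"},
          messages=[{
              "role": "user",
              "content": "Return JSON with keys broad_category, category and "
                         f"narrow_category for this product: {description}",
          }],
      )
      return [json.loads(c.message.content) for c in resp.choices]

  # Build the 3-tier graph, counting how often each path gets proposed
  tree = defaultdict(int)
  for description in ["50-inch OLED smart TV with Roku"]:  # your product list
      for a in categorize(description):
          tree[(a["broad_category"], a["category"], a["narrow_category"])] += 1
  print(dict(tree))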

hobofan
1 replies
10h0m

It sounds like a real use case, but possibly quite overkill to use an LLM.

Unless you need to have some "reasoning" to classify the documents correctly, a much more lightweight BERT-like model (RoBERTa or DistilBERT) will perform on par in accuracy while being a lot faster.
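For example, if you have no labelled data yet, a zero-shot pipeline from Hugging Face transformers gets you going in a few lines (with labelled examples you'd fine-tune a DistilBERT/RoBERTa checkpoint as an ordinary classifier instead); the model name and labels here are just illustrative:

  from transformers import pipeline

  # An NLI model doing zero-shot classification; swap in a fine-tuned
  # DistilBERT/RoBERTa checkpoint once you have labelled examples.
  clf = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

  labels = ["article", "tool", "video", "paper"]  # hypothetical categories
  result = clf("An interactive demo of diffusion models in the browser",
               candidate_labels=labels)
  print(result["labels"][0], result["scores"][0])  # best label and its score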

t-writescode
0 replies
9h41m

"while being a lot faster", yes; but something that LLMs do that those other tools don't is being hilariously approachable.

LLMs can operate as a very, very *very* approachable natural language processing model without needing to know all the gritty details of NLP.

runeks
2 replies
9h20m

Every one I've spot checked right now has been correct, and I might write another checker to scan the results just in case.

If you already have the answers to verify the LLM output against why not just use those to begin with?

kees99
0 replies
8h47m

Not GP, but I would imagine "another checker to scan the results" would be another NN classifier.

The thinking being that you'd compare the outputs of the two; under the assumption that the results are statistically independent of each other and of similar quality, a difference of, say, 1% between the two in that comparison would suggest roughly a 0.5% error rate from "ground truth" for each.
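Back-of-the-envelope, the check is as simple as comparing the two label lists - assuming independent errors of similar magnitude, the disagreement rate is roughly twice the per-model error rate:

  # Disagreement between two roughly-independent classifiers of similar quality
  # is about the sum of their error rates, so halve it for a per-model estimate.
  def estimate_error(labels_a: list[str], labels_b: list[str]) -> float:
      disagree = sum(a != b for a, b in zip(labels_a, labels_b)) / len(labels_a)
      return disagree / 2

  print(estimate_error(["x", "y", "z", "x"], ["x", "y", "z", "y"]))  # -> 0.125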

TeMPOraL
0 replies
8h47m

Maybe their problem is using LLM to solve f:X→Y, where the validation, V:{X,Y}→{0,1}, is trivial to compute?

smusamashah
0 replies
9h52m

I classified ~1000 GBA game ROM files by using their file names to put each in a category folder. It worked like 90% correctly. Used GPT 3.5, and therefore it didn't adhere to my provided list of categories, but they were mostly not wrong otherwise.

https://gist.github.com/SMUsamaShah/20f24e80cfe962d26af5315e...

rob74
7 replies
11h20m

Looks like this is another application of the ninety-ninety rule: getting to the stage where you can make incredible demos has required 90% of the effort, and the last 10% to make it actually reliable will require the other 90% (https://en.wikipedia.org/wiki/Ninety%E2%80%93ninety_rule).

com
3 replies
11h3m

Excellent find, I’d never heard of Zipf’s law.

GP was talking about something else though, the 90:90 rule is related to an extremely common planning optimism fallacy around work required to demo, and work required to productise.

0xEF
1 replies
9h27m

Can you elaborate? I am curious. In my line of work, the 80/20 rule is often thrown about, that being "to do 80% of the work, you only need 20% of the knowledge." I thought the other reply was talking about the same diminutive axiom, but now I am not sure.

com
0 replies
8h26m

The sibling post gives a good account of the 90:90 challenge.

The last part of any semi-difficult project nearly always takes much longer than the officially difficult “main problem” to solve.

It leads to the last 10% of the deliverables costing at least 90% of the total effort for the project (not the planned amount; the total as calculated after completion, if that ever occurs).

This seems to endlessly surprise people in tech, but also many other semi-professional project domains (home renovations are a classic)

Sakos
0 replies
9h34m

It's not just demos though. It's that the final 10% of any project, which largely consists of polishing, implementing feedback, ironing out bugs or edge cases, and finalization and getting to a point where it's "done" can end up taking as much effort as what you needed to complete the first 90% of the project.

viking123
1 replies
10h6m

Isn't it a bit of a similar situation with self-driving cars?

huppeldepup
0 replies
9h11m

I'm going to copy my answer from zellyn in a thread some time ago:

  "It’s been obvious even to casual observers like myself for years that Waymo/Google was one of the only groups taking the problem seriously and trying to actually solve it, as opposed to pretending you could add self-driving with just cameras in an over-the-air update (Tesla), or trying to move fast and break things (Uber), or pretending you could gradually improve lane-keeping all the way into autonomous driving (car manufacturers). That’s why it’s working for them. (IIUC, Cruise has pretty much also always been legit?)"
https://news.ycombinator.com/item?id=40516532

rsynnott
3 replies
7h48m

The gulf between demo and useful tool is much wider than we thought.

This is _always_ the problem with these things. Voice transcription was a great tech demo in the 1990s (remember DragonDictate?), and there was much hype for a couple of years that, by the early noughties, speech would be the main way that people use computers. In the real world, 30 years on, it has finally reached the point where you might be able to use it for things provided that accuracy doesn't matter at all.

jraph
2 replies
7h39m

Assuming it works perfectly, speech still couldn't possibly be the main way to use a computer:

- hearing people next to you speaking to the computer would be tiring and annoying. Though remote work might be a partial solution to this

- hello voice extinction after days of using a computer :-)

rsynnott
0 replies
7h36m

Oh, yeah, I mean, it would've been awful had it actually happened, even if it worked properly. But try telling that to Microsoft's marketing department circa 1998.

(MS had a bit of a fetish for alternate interfaces at the time; before voice they spent a few years desperately trying to make Windows for Pen Computing a thing).

acuozzo
0 replies
1h37m

Same here, but I'm hoping it takes off for other people.

I get requests all the time from colleagues to have discussions via telephone instead of chat because they are bad at typing.

xwolfi
1 replies
13h0m

So a large cluster of nvidia cards cannot predict the future, generate correct http links, rotate around objects from a single source picture with the right lighting, or program a million lines of code from 3 lines of prompt?

Color me surprised. Maybe we should ask Mira Murati to step aside from her inspiring essays about the future of poetry and help us figure out why the world spent trillions on nvidia equity and how to unwind this pending disaster...

waciki
0 replies
8h29m

it also can't reliably add two numbers to each other.

help us figure out why the world spent trillions on nvidia equity and how to unwind this pending disaster..

There are many documented examples of the market being irrational.

surfingdino
0 replies
12h20m

The tech is good enough to make incredible demos but not good enough to generalize into reliable tools. The gulf between demo and useful tool is much wider than we thought.

One thing it is good at is scaring people into paying to feed it all the data they have for a promise of an unquantifiable improvement.

pech0rin
55 replies
15h0m

Author makes good points but suffers from “i am genius and you are an idiot” syndrome, which makes it seem mostly like the ranting of an asshole rather than a coherent article about the state of AI.

pushfoo
11 replies
14h3m

TL;DR: This is intentional hyperbole and satire

1. "ludic" means playful[1].

2. The blog's tagline implies this is satire:

"Wow, if I was the leader of this person's company I would immediately terminate them." [2]

It seems like most of the comment thread failed to pick up on this.

That's understandable. The post's humor is a style which won't make sense if you're not fluent in both English and online culture.

Even if you understand the style, you also might not like it.

1. https://www.merriam-webster.com/dictionary/ludic

2. https://ludic.mataroa.blog/

jsnell
3 replies
13h44m

The post's humor is a style which won't make sense if you're not fluent in both English and online culture.

Oh, please. That's like saying that only native speakers with a university degree can understand a 6 year old's fart jokes.

The humor in this article is juvenile shock-jocking. It starts from the trashy clickbait headline, and is never elevated past that. There's no particular sophistication needed to understand it. It's just not particularly funny or insightful; it's just taking some rote complaints about AI and the hype cycle, and threatening to kill people in various graphic ways. Hilarious.

metabagel
2 replies
13h6m

At least you understood that it was an attempt at humor. Humor is subjective.

theGeatZhopa
0 replies
12h45m

It's called irony/satire, actually. And irony can't be understood without shared knowledge about the thing. So understanding the irony and humor in this means the thinking expressed in the post is aligned with the reader's. If one can understand it, one already shows the same thinking patterns about the topic (which corresponds to having the same knowledge).

ThrowawayR2
0 replies
3h38m

"'t ain't funny, McGee." The author isn't talented enough to pull off the humor angle so it just comes across as what kids these day refer to as "cringe".

JasonSage
3 replies
13h51m

It seems like most of the comment thread failed to pick up on this.

At that point, is it a problem of most of the comment thread, or the way it was written?

I may say something that comes across really snarky to my coworker. Just because I didn't mean it to be snarky does not mean that it won't be interpreted that way.

Also, I have a feeling a lot of the comment thread are fluent in both English and online culture. This doesn't come across as a good-faith argument.

It's like I say something that comes across as snarky, my coworker confronts me about it, and I say "oh don't worry about it, if you came from low-context culture you would understand." It's very demeaning. Not to mention unsympathetic.

pushfoo
1 replies
13h27m

At that point, is it a problem of most of the comment thread, or the way it was written?

Maybe OP's fault for posting it here, then. It's angry cathartic humor, so you're right not everyone will appreciate it.

This doesn't come across as a good-faith argument.

It was meant to be, but you're also right that it no longer seems to be true:

* the aggressive knee-jerk stuff is getting flagged quickly

* more comments have been posted

It's very demeaning.

I see your point given the way the thread is evolving. However, the posts I was referring to were implying OP is a schizophrenic[1] or bipolar.

[1]: https://news.ycombinator.com/item?id=40734705

JasonSage
0 replies
13h21m

I take your points here, and I agree.

intended
0 replies
12h31m

Many people think satire or similar humor is serious, it happens in real life as you mentioned. There are many times the Onion has been quoted as a source.

However, since we are talking about people being rational, the points and links above that show this is satirical should lead people to make their own decision.

Hopefully, this doesn’t become a place for people to draw lines in the sand.

aakresearch
1 replies
13h58m

won't make sense if you're not fluent in both English and online culture.

I am not fluent in either and I'm in love with his style and substance!

theGeatZhopa
0 replies
12h49m

It was also a good read for me.

busterarm
0 replies
14h1m

There's so much about this guy's work that just flies over most people's heads, but that's fine by me. Most people don't get it, regardless of what _it_ is.

This is the only blog that I actively look forward to reading.

bdw5204
11 replies
13h47m

I think he's polarizing because he's right about the industry and everybody on both sides knows it. Many people in the industry are just selling snake oil. There's also a ton of idiots such as the people who misconfigure cloud software to waste half a million dollars of company money. A truth teller comes off as an a-hole to people who don't want particular truths to be told.

margalabargala
7 replies
12h52m

A truth teller comes off as an a-hole to people who don't want particular truths to be told

If someone is repeatedly threatening physical violence, as the author of this post is, that also tends to come off as an a-hole to some people even if the threats are not genuine.

I agree the author of this post is saying accurate things, and that will piss people off, too.

So we have two completely separate ways in which someone might think the author is an a-hole. They aren't all trying to hide some truth, like you imply.

pineaux
3 replies
11h37m

It is not serious. You shouldn't see it as a literal threat. It's a writing tool. Just like adding "fucking" to something doesn't literally mean that that thing is copulating.

throwaway_ab
0 replies
51m

I find the writing comes across as far too aggressive and angry.

Why is this writing style in use if their argument/point/observation stands on its own?

Same goes for "fucking", which I feel is not appropriate with this type of writing.

margalabargala
0 replies
3h14m

I'm aware that it's a writing tool, not a literal threat. That's why I pointed out the threats are not genuine. Thank you for your explanation.

The choice to use the writing tool in question makes the author come off like an a-hole.

emptyfile
0 replies
8h3m

Yeah but it makes you sound like an asshole, same as constantly talking about punching people.

csomar
2 replies
11h6m

If someone is repeatedly threatening physical violence, as the author of this post is

Really? I think most people will agree that it's a writing style (not that I enjoy it) rather than the author really threatening actual violence.

throwaway_ab
0 replies
1h0m

Sure it's a writing style, just not a good one.

I struggled to read it all the way through; the points they were trying to make became increasingly lost as they made yet another threat.

I know it's not real threats but it's confronting in ways I don't think the writer intended and it put me off.

I would imagine some people just find it funny, in others it might just manifest as something being off with the writing, there are surely more people like me who found it not so subtle and super distracting.

Also consider people who have been through physical abuse, those readers might be triggered with each "threat"

And for what?

At best it's juvenile, surely if you have a good point to make, wrapping it up in this way does more harm than good?

margalabargala
0 replies
3h17m

Yes, including myself, which is why I ended that sentence with

even if the threats are not genuine

My point is that using that writing style makes the author come off as an a-hole.

boffinAudio
1 replies
9h40m

Many people in the industry are just selling snake oil

We have always been selling snake oil - it's just the inexperienced and those who have never shipped anything of any value to the world that feel that the snake oil is where the buck stops - but those of us who have shipped tons of snake oil know that eventually that oil congeals and becomes an essential substance in the grinding wheels of industry.

Which this wanker (Disclaimer: Australian, can use it if I wanna, since I know a lot about snakes, too..) seems to not have fully understood yet, as there is a great deal of evidence to support the fact that their experience is mostly academic, and hasn't actually resulted in anything being shipped.

Academics seem too often to forget that software is a service industry and in such a context, snakes and oil are very definitely par for the course.

Nobody cares if you implemented the important bits all by yourself - what are your USERS doing with it? Oh, you don't have actual users? Then please STFU and let the snake wranglers get on with it ..

techostritch
0 replies
5h2m

“ Nobody cares if you implemented the important bits all by yourself - what are your USERS doing with it? Oh, you don't have actual users? Then please STFU and let the snake wranglers get on with it ..”

I got the opposite impression of the article - that it was mostly about the fact that companies thought they needed to be too theoretical and academic, when in fact taking advantage of AI should be looked at very practically. Granted it’s a long article and he makes lots of points, but I felt like most of section 4 was that you don’t need to implement it yourself and gluing libraries together was probably the right tack, and that most companies were ignoring this in the gold rush of “AI good”.

rplnt
0 replies
12h11m

I have read only a few sentences, so it can't be the hard truths that give off this vibe. Saying you are one of the greats based on things someone inexperienced would list, underlined by a very short time in the industry, comes off as arrogant.

torginus
9 replies
12h50m

Author sounds like a young person who feels like he's a god among men just for the fact that he's implemented the algorithms and understands the math and engineering behind the libraries most DS's just pip install.

Which is weird coming from a generation of devs, where actually doing this work yourself was the norm.

As for DS, from what little I've experienced from the field, he sounds right. Most people come in without a mathematically rigorous education, they talk fancy, but what they end up doing is pulling in dependencies from a pre-written library and using those without understanding the theory behind them.

They also ignore the fact that 99% of the value in data science is created by taking good data, understanding the domain, in which case fancy algorithms are unnecessary. And the acquisition of said things needs good data engineering, not data science.

But more often than not, the credit and prestige goes to folks who pull in fancy ML algorithms and run extensive experiments and build massive ML pipelines, feeding in truckloads of tangentially relevant data.

camillomiller
7 replies
11h32m

Americans not getting Aussies is the best part of this thread

bozey07
2 replies
11h24m

I'm Australian and concur that OP sounds full of himself.

Telling self-righteous... friends, to wind their neck in is far more Australian than OP's behaviour.

[0] https://en.wikipedia.org/wiki/Tall_poppy_syndrome

Edit:

To clarify, I have no problem with his style of writing, which is great, but "I am clearly better than most of my competition"? Lord, get a grip.

torginus
0 replies
10h13m

Not sure about tall-poppy syndrome, but I think it's somewhat justified (this could be argued though) that success most often doesn't look like what we think it should look like.

In most people's minds success should come from a combination of talent and hard work. We think people who work hard and come up with good ideas should become successful. But usually working 'within the system' limits your ability to be succesful. If you save the day at your current job, you might get a 20% raise if you're lucky. If you are mediocre but change jobs often, you will probably beat that.

In software, getting a high paying job usually hinges on your ability to get someone willing to pay you a lot of money.

I'm sure there are people who are getting paid 10x more or less for doing work that is fundamentally the same, just with different presentation.

For example I know a guy who's a mediocre PHP dev, but managed to snag a couple of high paying clients, and got into OE over covid, and brings in a ton of money, despite the fact that somehow he still doesn't seem to be working that hard.

Does he deserve that money? Is he someone we should look up to? I don't wanna say no, but I also don't wanna say yes.

andrei_says_
0 replies
10h35m

I think he was mentioning winning a specific competition?

hu3
0 replies
7h2m

I have Australian friends and they are not like this.

Sorry but, being Australian doesn't get you a free pass to banter everywhere and still expect to be taken seriously. Let alone spill self-diagnosed superiority in form of text.

exodust
0 replies
3h38m

He grew up in Penang, moved to Australia in 2013 according to his blog.

I'm not a fan of the "I'll break your neck" theme. He doesn't want people talking about AI but his own business website says he'll talk to you about AI in exchange for money.

Does he want to be Louis CK Live at the Beacon Theater AND a data scientist consultant? I don't think it's possible to be both.

dang
0 replies
1h32m

Please don't take HN threads into nationalistic flamewar. I know you didn't intend to but it's what this kind of internet comment leads to (in the general case), and we don't want that here.

In fact, since your comment is a putdown both of a nationality and of the community, it might be good to quote this from https://news.ycombinator.com/newsguidelines.html: "Please don't sneer, including at the rest of the community."

Exoristos
0 replies
11h8m

Silicon Valley is not "Americans." Otherwise, I agree.

pineaux
0 replies
11h40m

You start quite condescending but then basically acknowledge what the author is saying. Most DS's, even "from your generation" probably don't write their own tools. I bet you are even guilty of this too. No need to do some implicit grandstanding.

aakresearch
6 replies
14h12m

I've recently read through many of the author's articles and also through his LinkedIn content, and came to the opposite conclusion. The intentional "In-Your-Face-Trolling" style is intended as a cover for "Impostor Syndrome in Overdrive", which lots of us suffer. Yet he was able to fool so many! Just check the "Compliments" section on his website :)

I made the following comment about him in a conversation with a coworker: " The guy who authored the article is mad. Certifiably mad. Just spewing around pure unadulterated truth. (LinkedIn link goes here) What does it tell about me (or anybody) who so far hadn't found anything in his writings to disagree about? "

Krazam (search YouTube) is the other example of largely the same. But because it is visual it is a bit more obvious.

camillomiller
4 replies
11h32m

LOL nah, mate’s just Australian

boffinAudio
2 replies
9h43m

Tall poppy syndrome and a HUGE piling spoonful of cultural cringe.

phist_mcgee
0 replies
9h20m

Australian dream.

pcollins123
0 replies
6h38m

Just bloody shootin for bronze… lookin for a Steven Bradbury.

aakresearch
0 replies
11h23m

Ah-ha, "Boys just havin' fun"™. Being Australian is not incompatible with impostors' or being a good troll!

TeMPOraL
0 replies
10h22m

What does it tell about me (or anybody) who so far hadn't found anything in his writings to disagree about?

It tells they're too excited by the delivery and aren't thinking about the merit of what's being said.

Same mistake people made with all those "tells it as it is" vloggers and pundits.

ehnto
2 replies
11h23m

I don't disagree entirely, but there is a pretty strong hint of the self-aware dry humour typical of Australians. I think he believes what he's saying, but they're probably not taking themselves that seriously or literally.

ehnto
0 replies
8h10m

I am talking about people, not politics. Unless you think individual Australians, are well known for their personal authoritarianism?

I don't find myself conducting much authoritarianism but admittedly I do keep a pretty tight grip on the movements of my budgies. It's for their own good you see.

boffinAudio
0 replies
9h35m

humour

The word you should have used is authoritarianism, which this writer has, alas, in spades.

Your users are more important than your sense of self worth, in this industry.

Nobody ships ego. We ship working software: to users who find it valuable.

torginus
1 replies
12h6m

“i am genius and you are an idiot” syndrome

which is a weird thing, since I think in fields where most people can be assumed to be smart, there's usually not that much differentiation in cognitive ability.

Just for reference, if we take IQ as a proxy measure for intelligence, then in an average group of people (say, a high school class or a council meeting), the worst 10% will have an IQ below 80 while the best 10% will have an IQ above 120.

That's a difference of 40 points, and it's a common enough scenario for most people to get a feel of what it's like.

In contrast, let's say you have a room of professionals who have been screened to be in the top 10% of the population (not a huge stretch) as a cutoff. In this scenario, you'd need 100k people in this hypothetical room to get a similarly large IQ gap.

While I think the author might be a sharp guy, and probably studied his field deeper than most, to say there's an insurmountable chasm between him and the rest of his readers might be a bit of a stretch.

But hey, if you want to sell your unique genius as your upscale consulting brand, I guess this is how you market yourself.

pineaux
0 replies
11h32m

I don't think that is the goal of this blog. I think he is bragging a bit because he is afraid people will ask: "Who is this nobody that talks about AI as if he knows anything about it?" He is trying to qualify himself to give this opinion. He uses a hyperbolic style. I for one like it. But I like style and care less about the background of people to be honest. I think a good analysis is a good analysis regardless of who makes it.

intended
1 replies
12h35m

Author has a writing style that he likes to use, which gives him the ability to speak about certain topics with less struggle.

It's getting into the art/performance category of code blogs.

charles_f
0 replies
3h49m

And that makes it enjoyable to read if you are inclined to that style

soraminazuki
0 replies
9h17m

When I see people getting hyped up about AI, I just roll my eyes and move on. But now, quite a few people in the anti-AI camp are getting more hyped up than its promoters, jumping at the throats of anyone who dares mention the term. The recent harassment campaign from Mastodon folks toward the iTerm2 dev was a particularly disturbing one.

I personally would stay far far away from either of these two camps.

ppeetteerr
0 replies
13h34m

It's a polarizing style of communication but the message is on point.

matrix87
0 replies
9h49m

but suffers from “i am genius and you are an idiot” syndrome

But he's still nowhere near as unhinged as the rabid AI bullshittery shitting up the airwaves for the past year

Not a lot of room for nuance when the subject matter is this polluted. Typical HN convention of preferring nuance to outright dismissal is bad at filtering BS

bongodongobob
0 replies
14h19m

Especially since most "data scientists" turn spreadsheets into reports for the C suite, I'd argue that his entire role fits into the same arguments he makes against AI. Like he says, unless you're doing things on the cutting edge, I don't think most businesses have seen positive outcomes from employing data "scientists" or "engineers". They just take people's excel spreadsheets and make them prettier, taking approximately one quarter to implement each into Power BI.

Also being 5 years into his career thinking he actually groks how it all works is adorable. I get the impression he has the idea that work is supposed to be a rewarding passion project rather than getting shit done for your boss. Give me all the cushy bullshit AI projects please. I can play with the toys for 6 months and come back with whatever and it will be perfectly acceptable. Either "this is great, super helpful for the company" or "welp, the tech isn't there yet, but at least we tried". That's called riding the gravy train.

andrei_says_
0 replies
10h37m

I feel his frustration with the grifters, the kool aid drinkers and makers.

For me the writing felt authentic and entertaining. Emotionally charged but rightfully so. It is incredibly disturbing to see people lying with a straight face and getting insane investments.

Gormo
0 replies
5h5m

It seems like “I am genius and you are an idiot syndrome" and "ranting of an asshole" are on opposite ends of the spectrum, and directly in opposition to each other.

The very sort of hypemongers and grifters the author complains about often hide unsustainable claims behind complicated language and opaque terminology, with the intent of portraying themselves as experts and making clear-headed criticism seem uneducated or uninformed in comparison.

The author here is making a deliberate choice to use a ranty tone to cut through that sort of bullshit, and in doing so, successfully expressed his frustration with the pervasive level of hype in AI discussions.

CyberDildonics
0 replies
7h39m

suffers from “i am genius and you are an idiot” syndrome which makes it seem mostly the ranting of an asshole vs a coherent

Very true. A thread about it here is a hat on a hat.

habosa
46 replies
14h12m

This post has an unnecessarily aggressive style but has some very good points about the technology hype cycle we're in. Companies are desperate to use "AI" for the sake of using it, and that's likely not a good thing.

I remember ~6 years ago wondering if I was going to be able to remain relevant as a software engineer if I didn't learn about neural networks and get good with TensorFlow. Everyone seemed to be trying to learn this skill at the same time and every app was jamming in some ML-powered feature. I'm glad I skipped that hype train, turns out only a minority of programmers really need to do that stuff and the rest of us can keep on doing what we were doing before. In the same way, I think LLMs are massively powerful but also not something we all need to jump into so breathlessly.

zer00eyz
20 replies
13h45m

This post has an unnecessarily aggressive style

I'm not sure it's "unnecessary".

He is, very clearly, venting into an open mic. He starts with his bona fides (a Masters, he's built the tools, not just been an API user). He adds more throughout the article (talking about peers).

His rants are backed by "anecdotes"... I can smell the "consulting" business oozing off them. He can't really lay it out, just speak in generalities... And where he can, his concrete examples and data are on point.

I don't know when anger became socially unacceptable in any form. But he is just that. He might have a right to be. You might have the right to be as well in light of the NONSENSE our industry is experiencing.

Maybe it's time to let the AI hate flow through you...

advael
18 replies
13h30m

As someone who spent an inordinate amount of time trying very hard to be less angry despite having a lot of good reasons to be, a chunk of which overlap with this piece, I get a lot of dismissal from people who seem to think any expression of any negative emotion, especially anger, deeply discredits a person on its own. It's so pervasive that I find even addressing it to be refreshing, so thank you

roenxi
7 replies
12h56m

If you want a theory; a man who isn't in control of his emotions can present anything up to an immediate mortal danger to the people around him (particularly if they are female).

Being able to control negative emotions isn't a nice-to-have trait or something that can be handled later. There is an urgent social pressure that men only get angry about things that justify it - a class of issues which includes arguably nothing in tech. Maybe a few topics, but not many.

Anger isn't a bad thing in itself (and can be an effective motivator in the short term). But people get very, very uncomfortable around angry people for this obvious reason.

zer00eyz
3 replies
11h1m

If you want a theory; a man who isn't in control of his emotions can present anything up to an immediate mortal danger to the people around him (particularly if they are female).

What emotions do you really control?

We expect men to suppress this emotion. And there is 400k years of survival and reproductive success tied up with that emotion. We didn't get half a percent of the population with Genghis Khan's Y chromosome with a smile, balloons and a cake.

It's not like violence doesn't exist. But we seem to think that we can remove it just like the murder in the meat case. Are we supposed to put anger on a foam tray and wrap it in plastic and store it away like a steak because the reality of it upsets people?

It's to the point where the words murder, suicide, rape and porn are "forbidden words"... we're saying unalive, grape and corn, so as not to offend advertisers and people's precious sensibilities. Failing to see this behavior is a major plot point in 1984.

I think we all need to get back to the reality of the world being "gritty" and having to live in it.

roenxi
2 replies
9h47m

1) If you are comparing people's behaviour to Genghis Khan, don't expect positive social reinforcement. The man was a calamity clothed in flesh, we could do without anything like him happening ever again.

2) Violence != anger [0]. I don't know much about him, but Genghis Khan could have been an extremely calm person. It is hard to build an empire that large and win that many campaigns for someone prone to clouded thinking, which is a point in favour of him being fairly calculating.

What emotions do you really control?

3) In terms of what gets expressed? Nearly all of them. Especially in a written setting, there is more than enough time to take a deep breath and settle.

We expect men to suppress this emotion.

4) As an aside, I advise against suppressing negative emotions if that means trying to hold them back or something. That tends to lead to explosions sooner or later. It is better to take a soft touch, let the emotion play out but disconnect it from your actions unless it leads to doing something productive. Reflect on it and think about it; that sort of thing.

[0] Although maybe I should note that angry violence is a lot more dangerous than thoughtful violence; angry violence tends to be harder to predict and leads to worse outcomes.

zer00eyz
1 replies
8h8m

If you want a theory; a man who isn't in control of his emotions can present anything up to an immediate mortal danger to the people around him

You can't posit this and then go on to try and claim Violence != anger.

"The man was a calamity clothed in flesh" - nice, well said!!! He was also likely brilliant. It's rare that stupid people make it to the top!

I hope that Genghis Khan NEVER happens again... But I think society is just a thin veil between us and those monsters. The whole idea of pushing down anger is just moving us one more step from that reality!

roenxi
0 replies
7h32m

Sure I can posit both. They're both true.

There is a Venn diagram here. One circle is violence and one is anger.

advael
2 replies
12h7m

Okay but we're not talking anger that's expressed by violent behavior or even clear significant loss of control, I'm talking people on the internet can pick up the mildest hint of anger from your tone or even subject matter. As a woman and a pretty scrawny one at that, as well as being, well, obviously very opinionated and belligerent, I have experienced every flavor of the threatening behavior you're invoking and I can assure you this has nothing to do with why people reflexively dismiss people who they think are being "emotional". More and more, the accusation of being angry specifically seems to be all people think they need to say to smugly claim to be speaking from a "rational" high ground, often despite having contributed nothing of substance to the topic at hand. Like pointing out that this person's blog post aimed at no one particular person did not really have to contend with the perception that this person was going to actually become violent at anyone, although actually I could see getting that impression from this post more than most, since it frequently explained the anger as cartoonish threats of hypothetical violence. I'm not exaggerating. When I see this in person and can make better assumptions about the genders of the people involved, this seems disproportionately likely to be leveraged against women, as are most arguments to "obvious" or "apparent" disqualifying irrationality, and this is not a shock because we are within living memory of much of work culture treating it as conventional wisdom that this should be assumed of all women by default. People really be trying to win epistemic pissing contests by posting something that looks like running "u mad" through google translate and back once, unironically, just as surely as you're trying to do that obnoxious thing of trying to invoke the gravity of situations in which people genuinely fear for their safety, hoping that gravity will somehow make it harder to question what you said for fear of seeming chauvanistically oblivious or whatever that's supposed to do

I propose the alternate theory that as in-person interaction becomes a smaller portion of most people's social experience, many have gotten worse at handling even mild interpersonal conflict without the kind of impersonal mediating forces that are omnipresent online, and this kneejerk aversion reaction can rationalize itself with the aid of this whole weird gilded age revivalist-ass cartoon notion of "rationality" that's become popular among a certain flavor of influential person of late and, especially in a certain kind of conversation with a certain kind of smug obnoxious person, seems kind of like classic Orwellian doublespeak

Also this position that "arguably almost nothing" in tech warrants anger seems super tonedeaf in a context where most of the world has become a panopticon in the name of targeting ads, you need a mobile phone owned by a duopoly to authenticate yourself to your bank, and large swaths of previously functional infrastructure is being privatized and stripmined to function as poorly as the companies that own them can get away with while the ancillary benefit of providing employees with subsistence and purpose wherever possible, while still managing to nickel and dime you for the privilege with all manner of junk fees, and offer poorly-designed phone trees in place of any meaningful documentation or customer service

roenxi
1 replies
9h35m

Just going through your last paragraph; the logical implication of getting angry about any of that is either living in a state of ignorance or getting angry all the time. Either of those options is far inferior to just taking note of what is happening and calmly suggesting some improvements or working to make things better when the opportunity arises.

And these issues are just minor compared to all the terrible stuff that happens routinely. If we're ranking issues from most to least important things like "you need a mobile phone owned by a duopoly to authenticate yourself to your bank" are just so far down it is laughable (the wry type, like "why do I even care"). The fact that you need a bank at all is a far more crippling issue. Let alone all the war, death, cruelty and disinterest in suffering that is just another day in a big world.

advael
0 replies
2h38m

Two things can be true at once. We live in a big world and in that world, there are many things that warrant our anger, some of which are more important or urgent than others. Yes, it's probably more important that there are two wars going on or that the rich country that I live in has become a police state that jails millions of people on dubious and often bigoted pretenses or that the capital that owns the industrial capacity that won the last major era of technological progress is hell-bent on continuing business as usual in a way that we're now pretty sure will drastically harm the ecological infrastructure we depend on to survive, and has been engaged in decades of attacking the scientific and political capacity to dismantle them. Also, many of these problems are directly aided and abetted by the owners of the current wave of technological advances, who have also created and continue to iteratively worsen a pervasive network of surveillance and control, as well as an experiential environment that reliably produces apathy and learned helplessness, while destroying significant hard-won freedoms and infrastructure in the process (including uber rolling back labor rights gains, amazon crippling public delivery infrastructure it views as competition, etc)

Epictetus wrote of concerning oneself more with that which one may be able to control than that which one can't, and people who aren't familiar with the Enchiridion have nonetheless internalized this wisdom. It pops up in lots of places, like in various schools of therapy, or in the serenity prayer. My career is in computers, and this website is a nexus wherein people who do computers for a living gather to discuss articles. Therefore, the shared context we have is disproportionately about issues surrounding computers. We are all of us likely better positioned to enact or at least advocate for change in how computer things are done in the world, and in each of the last 7 decades this has become a larger share of the problems affecting the world, and anger is difficult to mask when talking about problems precisely because one of the major ways we detect anger in these text conversations devoid of body language or vocal tone is expressing a belief that something is unacceptable and needs to be changed

xg15
2 replies
11h5m

There is at least one German comedian who made an entire career solely out of being angry...

xg15
0 replies
6h37m

Yup, exactly this guy! Known and beloved in his stage persona Herr Hassknecht...

zer00eyz
1 replies
13h9m

Thanks. I will get downvoted to oblivion for it.

Cause getting angry at a problem and spending 2 days coding to completely replace a long-standing issue isn't something that happens...

People need to be less precious. You can't be happy all the time. In fact you probably should not be (life without contrast is boring). A little anger about the indignities and bullshit in the world is a good thing. As long as you're still rational and receptive, it can be an effective tool for communicating a point.

advael
0 replies
11h54m

Or just communicating the emotion! I think aligning on an emotional layer of perception is important for shaking people out of automated behaviors when it's necessary to, and I dislike this cultural shift toward punishing doing any of that from the standpoint of its mechanism design implications almost as much as I hate it on vibes

xwolfi
1 replies
13h10m

Be angry more. I work in China but I'm French, so people assume (and I nudge them to think) that it's a culture thing for me to express anger publicly at injustice or idiocy.

But it's liberating to be angry at bullshit (and God knows China is the bullshit Mecca), and AI is the top bullshit these days. We're not anti innovation because we say chatgpt is not gonna maintain our trading systems or whatever you work on. It's a funny silly statistical text generator that uses hundreds of thousands of video cards to output dead http links.

We're far from anything intelligent but it's at least indeed very artificial.

advael
0 replies
11h36m

As someone who was in academia at the right time to really contextualize the seismic shift in the capabilities of automated natural language processing models that came of both attention and the feasible compute scale increase that allowed for them to use long enough context windows to outpace recurrent models in this regard, I really didn't think I'd end up having to roll my eyes at people falling over themselves to exaggerate the implications, but in retrospect it's clear that this was more me being unconscionably naive then than it being that unpredictable

mns
1 replies
12h19m

Not only that, but the thing is that it's all fake in our industry and the companies we work at. People seem to be very sensitive today to showing any kind of actual emotion or feelings, be it anger or frustration. Everyone puts on the fake American service-industry smile and says words like "I hear you", "we're a team", "we must be constructive". Then in the background they all do the most insane political backstabbing, shit-talk other teams, projects and people, and walk over the careers and futures of others just to advance themselves - but as long as you put a smile on your face in the meetings and in public, none of that matters.

advael
0 replies
11h56m

I mean you make some very good points but you sound like you could be kind of mildly upset if I squint at it right so I think you should really be more mindful and adopt a positive attitude before I will even consider listening to anything you have to say

surfingdino
0 replies
10h15m

He did what some of us want to do in meetings with clients. I hear and read all those BS arguments he's used as headings every week. It's insane.

johnnyanmac
15 replies
13h59m

I empathize with it, but ultimately it's fruitless. This happens with every big tech hype. They very much want people to keep talking about it. It's part of the marketing, and tech puts a lotta money into marketing.

But that's all it is, hype. It'll die down like web3, Big Data, cloud, mobile, etc. It'll probably help out some tooling but it's not taking our jobs for decades (it will inevitably cost some jobs from executives who don't know better and ignore their talent, though. The truly sad part).

zer00eyz
8 replies
13h51m

Go back further:

At a point in time the database was a bleeding edge technology.

Ingres (Postgres)... (the offspring of Stonebraker), Oracle, ... Db2? MSSQL? (Heavily used but not common)... So many failed DBs along the way; people keep trying to make "new ones" and they seem to fade off.

When was the last time you heard someone starting a new project with Mongo, or Hadoop? Postgres and Maria are the go to for a reason.

daemonologist
3 replies
13h21m

There's a team at my company that chose Mongo for a new data transform project about a year ago. They didn't create a schema for their output (haven't to this day) and I'm convinced they chose it purely because they could just not handle any edge cases and hope nobody would notice until it was already deployed, which is what happened. For example maybe one in a thousand of the records are just error messages - like they were calling an API and got rate limited or a 404 or whatever and just took that response and shoved it into the collection with everything else.

ikari_pl
0 replies
13h14m

still, I would have put that mess in jsonb in Postgres :D

threatofrain
2 replies
13h27m

Mongo is still very big, including for greenfield.

xwolfi
1 replies
13h12m

It's tiny compared to 10 years ago when it was all the rage and DBs were dead...

patrickhogan1
0 replies
12h55m

Postgres is awesome, and part of its charm is the extensibility it offers, enabling the adoption of innovative features introduced by competing DBs.

Postgres adopted a lot of Mongo's features when it released the JSON data type and support for path expressions.

roenxi
5 replies
13h5m

It'll die down like web3, Big Data, cloud, mobile, etc

At least half of those the promise was realised though - mobile is substantially bigger than the market for computers and cloud turned out to be pretty amazing. AWS is not necessarily cost effective but it is everywhere and turned out to be a massive deal.

Big Data and AI are largely overlapping, so that is still to play. Only web3 hasn't had a big win - assuming web3 means a serious online use case for crypto.

"Die down" in this context means that the hype will come, go and then turn out to be mostly correct 10 years later. That was largely what happened in the first internet boom - everyone could see where it was going, the first wave of enthusiasm was just early. I don't think any technology exists right now that will take my job, but I doubt that job will exist in 20 years because it looks like AI will be doing it. There are a lot of hardware generations still to land.

roenxi
1 replies
9h57m

To a first approximation, I expect companies to spend nothing on AI and get put out of business if they are in a sector where AI does well. Over the medium-long term the disruption looks so intense that it'll be cheaper to rebuild processes from the ground up than graft AI onto existing businesses.

rsynnott
0 replies
7h43m

Which sectors are these?

physicsguy
0 replies
8h28m

AI and 'Big Data' (as trends) aren't really overlapping in my view. Of course training these LLM models requires a huge amount of data but that's very different from the prospect of spinning up a Spark cluster and writing really badly performing Python code to process something that could have easily been done in a reasonable time anyway on a decent workstation with 128gb of RAM and a large hard drive/SSD, which was a large part of what the hype train was a few years ago.

Terr_
0 replies
11h50m

At least half of those the promise was realised though

I dunno, I think there might be different sets of "promises" here.

For example, "cloud infrastructure" is now a real thing which is useful to some people, so one could claim that "the promise of cloud infrastructure" was fulfilled.

However that's not really the same promises as when consultants preached that a company needed to be Ready For The Cloud, or when marketing was slapping "Cloud" onto existing product marketing, or unnecessary/failed attempts to rewrite core business logic into AWS lambda functions, etc.

nutrie
3 replies
13h53m

I’ve been doing just fine ignoring AI altogether and focusing on my thing. I only have one life. Fridman had a guy on his podcast a while ago, I don’t remember his name, but he studies human languages, and the way he put it was the best summary of the actual capabilities I’ve heard so far. Very refreshing.

blast
2 replies
13h40m

Who was that?

selcuka
0 replies
13h20m

Could it be Edward Gibson [1]?

I work on all aspects of human language: the words, the structures, across lots of languages. I mostly works [sic] with behavioral data: what people say, and how they perform in simple experiments.

(I find it ironic to see a grammatical error in his bio. Probably because of a mass find/replace from "He" to "I" but still...)

[1] http://tedlab.mit.edu/ted.html

nutrie
0 replies
13h19m

I think #426

throwaway2037
0 replies
12h21m

I agree. A bunch of his other posts have similar style. I like it. It is witty, mixed in with serious technical subjects!

yosito
0 replies
12h33m

Honestly, I couldn't get past the violent language. Why do we give people who speak like this any respect. It's completely inappropriate.

openmajestic
0 replies
12h27m

I agree that the world isn't changing tomorrow like so much of the hype makes it out to be. I think I disagree that engineers can skip this hype train. I think it's like the internet - it will be utterly fundamental to the future of software, but it will take a decade plus for it to be truly integrated everywhere. But I think many companies will be utterly replaced if they don't adapt to the LLM world. Engineers likewise.

Worth noting that I don't think you need to train the models or even touch the PyTorch level, but you do need to understand how LLMs work and learn how (if?) they can be applied to what you work on. There are big swaths of technology that are becoming obsolete with generative AI (most obviously/immediately in the visual creation and editing space) and IMO AI is going to continue to eat more and more domains over time.

freeopinion
0 replies
12h40m

In the last few years I have come to think of AI as transformative in the same way as relational databases. Yes, right now there's a lot of fad noise around AI. That will fade. And not everyone in IT will be swimming in AI. Just like not everyone today is neck deep in databases. But databases are still pretty fundamental to a lot of occupations.

Front-end web devs might not write SQL all day, but they probably won't get very far without some comprehension. I see AI/ML becoming something similarly common. Maybe you need to know some outline of what gradient descent is. Maybe you just need some understanding of prompt engineering. But a reasonable grasp of the principles is still going to be useful to a lot of people after all the hype moves to other topics.

Tiberium
16 replies
21h44m

I myself have formal training as a data scientist, going so far as to dominate a competitive machine learning event at one of Australia's top universities and writing a Master's thesis where I wrote all my own libraries from scratch. I'm not God's gift to the field, but I am clearly better than most of my competition - that is, practitioners who haven't put in the reps to build their own C libraries in a cave with scraps, but can read textbooks and use libraries written by elite institutions.

I really didn't have any illusions on the article after reading this - apparently the author believes that anyone who hasn't written a C library is below him.

And also, this author is known to make articles that are full of ranting and have rage titles, for example https://news.ycombinator.com/item?id=34968457

smoothbenny
11 replies
14h34m

Maybe I'm misinterpreting but it seems like he considers himself a part of the "practitioners who haven't put in the reps to build their own C libraries in a cave with scraps, but can read textbooks and use libraries written by elite institutions" not above them

lolinder
10 replies
14h29m

Except that in the sentence right before that, he says that he did write his own libraries from scratch, which I think means that the only reasonable interpretation of the "practitioners..." clause is that those are the people he describes as his "competition" who he is "clearly better than".

I'd really like to read it that way, but I'm afraid he actually did come across that arrogant.

smoothbenny
9 replies
14h26m

I'm not seeing where he says he wrote libraries in C in a cave with scraps. That sounds like a few steps beyond writing libraries from scratch (non-specified language). One's competition is their equals, no? It's not a competition otherwise.

lolinder
7 replies
14h25m

"I am clearly better than most of my competition"?

smoothbenny
6 replies
14h20m

Still not seeing where he said he himself wrote C in a cave with scraps. Not suggesting he's humble, but I still think he sees himself as one of them (he doesn't say he's "better than everyone [at that level]")

hluska
5 replies
14h16m

The entire sentence is:

“I'm not God's gift to the field, but I am clearly better than most of my competition - that is, practitioners like myself who haven't put in the reps to build their own C libraries in a cave with scraps, but can read textbooks and use libraries written by elite institutions.“

It’s quite poorly written, but they note they’re clearly better than most of their competition.

smoothbenny
4 replies
14h3m

they note they’re clearly better than most of their competition.

Yup, I've noted that three times. Are we still claiming he wrote libraries in C in a cave with scraps? Or just moving on?

Poorly written? Maybe. Poorly read? Equally likely. Maybe we should just ask the guy, he'll know.

Did the downvoter see me as their "competition"? "Clearly better"? Both? I don't have enough karma yet to respond in kind, so they're right about something at least.

lol they did it again. mods!

jsnell
3 replies
13h36m

The section starts with him bragging about having written libraries from scratch. Given that, I really don't understand how you can arrive at them considering themselves one of the people who hasn't written libraries.

smoothbenny
2 replies
6h38m

I really don't understand how you can arrive at them considering themselves one of the people who hasn't written libraries.

There seems to be a comprehension issue on one side or the other. Can someone point me to where he says his competition has never written a single library? I only see that he says they haven't written a library "in C in a cave with scraps". Where does he say he's written libraries in C? I don't see that either. Maybe the words "scraps" and "scratch" are too close together? If a subset of readers are inclined to dismiss him out of hand for this perceived slight, nothing I write will convince them otherwise, but that doesn't make their uncharitable interpretation of his words the correct one.

jsnell
1 replies
5h36m

Yes, there is a comprehension issue here. Everyone else understood this as a discussion on whether the author is being arrogant and dismissive in this section. You seem to be looking for a discussion on whether anyone really writes C libraries in a cave with scraps.

Nobody here but you is taking the "cave" and "scraps" literally. It'd be total nonsense if taken literally. Like, what would it even mean? It's obviously the author trying to make their writing punchy. You should not take it any more seriously than their threats to snap people's necks for talking about AI.

If you want to ignore the actual discussion and steer it toward an interpretation of the text that's so literal that the text doesn't even make sense, you should probably be very explicit about it.

smoothbenny
0 replies
5h18m

No one's discussing whether he's arrogant or not, they've all made up their minds. I gave an alternate, more charitable interpretation of his words, that would offer an otherwise offended reader a way to reframe the blog, and not dismiss it out-of-hand if they took his words as a slight. I believe this interpretation.

"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

:)

eichin
0 replies
10h46m

It's just an Iron Man (2008) movie reference. (Though it was the Big Bad saying that a problem wasn't that hard since Our Hero managed it without any resources - which doesn't exactly fit what the author seems to be saying here - I read it mostly as lightening the mood a bit regardless.)

financltravsty
0 replies
14h16m

Please note that Ludic is writing partly for himself, to vent about the massive disappointment that working in tech has been after all he's been through, and partly as a rallying cry for others who can relate to being primarily surrounded by well-spoken incompetents.

If you subdue the urge to write him off based on his emotionally expressive writing, you'll find a lot of poignant observations you wouldn't get from more civilized venues (e.g. HN, famous tech influencers, thinkers, and execs, etc.).

Ludic's blog is a one-man 4chan board, minus the racism, sexism, and so on.

anigbrowl
0 replies
21h23m

It worked for DHH. The substance is significantly better than I expected from the opener.

acureau
0 replies
21h14m

Glad I'm not the only one put off by that. Though it's nice of him to signal so early on that his words aren't worth reading

dieselgate
14 replies
15h8m

I probably agree with a lot of the points the author makes but abhor the style and tone this is written in.

oofbey
11 replies
14h29m

I abhor the style and disagree with most of their comments.

dang
10 replies
14h7m

You deserve credit for saying that so even-keeledly. Usually people just do the same thing back.

pushfoo
6 replies
13h53m

TL;DR: Do you have a stance on HN's change in culture?

It seems like HN comments are shifting from technical focus toward:

* Early Reddit's low-effort but tame "snark"

* Aggressively moralizing posts dismissing sardonic criticism as dangerous mental illness

I haven't seen much of OP's style lately, especially since n-gate[1] went inactive.

I'm wondering whether that's a bad thing. Although the tone is hostile on the surface, there's usually some aspiration toward competence associated with it.

[1]: http://n-gate.com/

dang
2 replies
13h0m

I don't think it's changed much. I think perceptions of the kind you're describing (HN is turning into reddit, comments are getting worse, etc.) are more a statement about the perceiver than about HN itself, which to me seems same-as-it-ever-was. I don't know, however.

pushfoo
1 replies
11h27m

TL;DR: I think you're right. Ty for maintaining HN!

are more a statement about the perceiver than about HN itself

I may have some rose tint to my oldest memories of lurking HN. Not only was I younger, but I was also seeing hours-old threads instead of comments arriving in real-time before any sorting or flagging.

In other words, thank you for maintaining the site all these years.

dang
0 replies
11h25m

Yes exactly! These perceptions are strongly conditioned by each generation's "rose tint".

It's ok though, because the perception "HN isn't what it used to be" is somehow part of the overall system.

AtlasBarfed
2 replies
13h36m

n-gate was the best. Referencing it was always worth the downvotes.

I get we are semi-autistic nerds and can't appreciate comedy/satire/sarcasm, but comedy/satire/sarcasm is a potent means of criticism and analysis, especially since we are in an ever-increasing torrent of bullshit.

kjkjadksj
1 replies
13h25m

HN is a bit too serious I think. More comedy would be better than the usual cynicism which can bleed over into your mindset even after visiting the site. You are what you read, after all. Maybe even this comment comes off as cynical; no doubt I’ve been infected too.

AtlasBarfed
0 replies
12h3m

What the world needs very badly in the generative spam world is a healthy dose of skepticism.

What qualifies as skepticism versus cynicism is often in the eye of the beholder.

highlaif
2 replies
1h57m

Did you shadowban me for asking a question about frontpage ranking? That's a bit ridiculous, don't you think?

dang
1 replies
1h27m

It would indeed be ridiculous; also stupid and counterproductive. Those could be clues to the fact that we didn't do it.

Some of your comments are getting killed by software filters. Those are tuned more strictly for new accounts, for reasons which aren't hard to figure out.

I don't think any mods even saw your comments (before now)

highlaif
0 replies
46m

Well, thank you, software filters. Are you sure they're working correctly? Because all my comments except my first are getting killed. What did I do? This is no fun.

boffinAudio
0 replies
9h29m

I used AI to strip out the hatred and self-promotion. It did a pretty good job of it:

(My prompt: The following text is written in a very aggressive and hostile style. Please re-write it, maintaining the core points being made, in fashion that removes the rudeness and inherent violence in the statements being made, such that it no longer feels offensive or ego-driven: <paste of vitriolic article text>)

The AI's response:

*Ludicity* *Thoughts on AI Innovations* Published on June 19, 2024

The recent advancements in AI, particularly with models like GPT-4, have significant implications for society. These range from potentially eliminating tedious tasks to raising concerns about the livelihoods of artists and even existential risks to humanity.

As someone with formal training as a data scientist, who has excelled in competitive machine learning events and authored a Master's thesis by creating custom libraries in MATLAB, I have some perspective on the field. While I'm not claiming to be the best, I believe my experience gives me some authority on the subject.

It is concerning to see the rampant enthusiasm for AI deployment, often without a clear understanding of its limitations and appropriate use cases. The AI landscape is filled with hype, leading many to push for AI initiatives without fully grasping the complexities involved.

During my time as a data scientist, I noticed that many leaders lacked a deep understanding of the technology they promoted. This led to numerous AI projects being launched without practical applications, driven more by a desire to ride the wave of AI hype than by a genuine need or strategy. Consequently, many companies invested heavily in AI without seeing significant returns.

I've since transitioned to data and software engineering, finding more stability and meaningful work. Unlike those chasing AI trends, we focus on building reliable systems and fostering genuine professional relationships.

The AI field has seen many initiatives fail, yet the hype continues. This disconnect is troubling, as it often leads to resources being wasted on projects with little chance of success. A more pragmatic approach would involve improving foundational systems and processes before diving into sophisticated AI technologies.

AI does have the potential to revolutionize various industries, but its implementation should be thoughtful and driven by clear, achievable goals. Many organizations struggle with basic IT operations, making it unrealistic to expect them to successfully deploy advanced AI solutions.

In summary, while AI can offer significant benefits, it is crucial to approach its integration with caution, ensuring that foundational issues are addressed first. The focus should be on solving real problems and improving operational efficiency rather than chasing the latest technological trends.

...

Thank FNORD for AI!

ajkjk
0 replies
12h50m

Love the tone and the comments, personally.

blast
11 replies
14h4m

you do not need AI for anything

I'd so have preferred this to be true, and to ignore the AI thing (mainly to avoid any effort to change any of my habits in any way). But as an end user I can say that this is wrong. I definitely need LLMs for one critical thing: search that works.

Google has become clogged with outright spam and endless layers of indirection (useless sites that point to things that point to things that point to things, never getting me to the information that actually fucking matters), but I can ask the best LLMs queries like "what's the abc that does xyz in the context of ijk" and get meaningful answers. It only works well when the subject has a lot of "coverage" (a lot of well-trodden ground, nothing cutting-edge) but that's 80% of what I need.

I still have to check that the LLM found a real needle in the haystack rather than making up a bullshit one. (Ironically, Google works great for that once you know what the candidate needle actually is—it just sucks at finding any needle, even a hallucinated one, in the first place.) For shortest path from question to answer, LLMs are state of the art right now. They're not only kicking Google's ass, they're the first major improvement in search since Google showed up 20+ years ago.

Therefore I think this author is high on his own fumes. It reminds me of the dotcom period: yeah there was endless stupid hype and cringey grifters and yeah there were excellent rants about how stupid and craven it all was—but the internet really did change everything in the end. The ranters were right about most of the battles but ended up wrong about the war, and in retrospect don't look smart at all.

Always42
4 replies
13h47m

It's incredible if you keep in mind it's just going to find the most "common" words that come alongside your query in some jumbled-together answer.

tavavex
3 replies
13h25m

Machine learning isn't just a word frequency machine. If that's all it was, we'd have had this technology decades ago.

blast
2 replies
13h20m

I'm really wondering about this. Is it just "recycling things people have said before", or is it genuinely generative?

"Recycling things people have said before" is basically search, and that is hugely valuable and quite enough for me. If it's genuinely generative, that's a cosmic leap beyond search.

My guess is that it's not genuinely generative, but rather that the long tail of "everything that everyone has said before" is so vast that it feels like magic when it's retrieved.

But LLMs have shocked me enough on the search front that I'm no longer smugly confident about this.

tavavex
0 replies
12h48m

I mean, if the question is if they act as just pure, 100% search and nothing else, the answer is pretty self-evident.

I'm not much of an LLM user, but the few times that I did turn to it for programming advice was in rare and obscure situations that weren't really discussed anywhere on the internet (usually because they contained multiple issues in one). The LLM tended to produce something I'd call a reasonable answer, especially on topics that weren't completely obscure.

But we don't even need to go that deep to answer the question. For example, if an LLM was pure search, you couldn't make one generate text in some specific style or with specific constraints, unless that exact answer already existed somewhere on the internet. They can mash up ideas or topics, and still output good or reasonable data.

The billion dollar question isn't whether it's generative - it's whether the generative capabilities are "enough". Machine learning is about finding patterns, and a complex enough pattern finder will be very good at approximating answers accurately. LLMs don't actually have an accurate "model of the world" learned - but they have something that's just close enough on certain topics to make people use it as if it does.

fenomas
0 replies
12h48m

Have you seen the Othello paper? [1] To me it really puts paid to the idea that LLMs could just be stochastic parrots, in the sense of just rearranging things they've seen before. They at least can apply world models they devised during training (though whether or not one does for some given prompt can be a different question).

[1] https://thegradient.pub/othello/

trustno2
1 replies
13h34m

I think you missed the point of the article.

It's not meant as a criticism of ChatGPT and some LLMs, more like a criticism of corporate drones and "thought leaders" that latch on latest hype instead of fixing their core problems.

blast
0 replies
13h34m

Oh, I probably did, because none of that is interesting to me. Every 5 years another crop of "omg corporate drones are so awful" consciousness appears, but once you've been through 3 or 4 cycles, everything new is old again. The rants aren't more interesting than the suits are. (Well, the rants at least have their fun side.)

What interests me is that there is new technology here that does something relatively magic - that just doesn't happen often. Corporate crap and MBAs and suits and founder hucksters are forever with us. I'm sure it was equally true decades before me and you showed up.

I'm rather annoyed that this is so, because now I have to slightly get off my ass and learn something, so as to reach a better state of lethargy later.

I'm a ranter by character, but you know who I'd pick between the ranters and the drones? The drones. The drones are the 100 million sperm racing to fertilize the egg. They're all losers (save one infinitesimal winner) and it's easy to point that out. The ranters stand on the side saying "those fucking idiots. those morons. what losers they are. as opposed to me, who sees this." But the ranters are the dead end, smart as they may be. The sperm are at least in the race.

boxed
1 replies
9h48m

I definitely need LLMs for one critical thing: search that works.

Hmm, that seems weird.

Google has become clogged with outright spam and endless layers of indirection

So... the problem isn't search, it's that Google's ad business has destroyed Google's search and created spam to try to get ad clicks.

I switched to Kagi a few months ago and I'm much happier. Most of the blog spam is just gone, and the things that slip through I can nuke.

blast
0 replies
49m

yeah I should try it, I'm just a laggard.

Still though, the way that LLMs can give customized answers to complex specific questions, questions that probably don't have any specific page on the web, feels like a leveling up beyond traditional search engines.

here's a recent example: I asked how to call a particular function in some library from a little-used language, and it not only told me exactly how to do it but wrote the FFI wrapper. I could definitely have dug that information up but it would have taken a lot longer and then I'd still have had to pore over documentation to write that tedious code.

rixrax
0 replies
10h28m

I wholeheartedly agree (both that you should first ask ChatGPT and then double-check with whatever, and that Google results have become junk - mostly). Alas, I can totally see how AI companies will monetize this by allowing advertisers to influence the answers. And not in an ad-in-a-sidebar fashion, but by baking the advertisers' products etc. into the answers. Personally, I find this somewhat disturbing.

Indeed, I think this is a key area where regulation is needed. If an answer from AI is influenced by 'something unexpected' - whatever that might be - then it has to be clearly highlighted in the answer.

kjkjadksj
0 replies
13h31m

The issue with Google is true. That being said, AI isn't the answer in my experience. Wikipedia is a better source of casual information, I find. And if I want to dig further I look for the real sources if I can - not the slop written by journalists, but the actual material people in the field are engaging with: the primary sources, the documentation. If I can't find or understand that material, then that's that and I move on, considering anything 'accessible' is going to be an example of Gell-Mann amnesia and not worth my time.

softwaredoug
10 replies
19h46m

The jump to AI capabilities from data-illiterate leadership is such a familiar pattern...

It reminds me of every past generation of focusing on the technology, not the underlying hard work + literacy needed to make it real.

Decades ago I saw this - I worked at a hardware company that tried to suddenly be a software company. Not at all internalizing - at every level - what software actually takes to build well. That leading, managing, executing software can't just be done by applying your institutional hardware knowledge to a different craft. It will at best be a half effort as the software craftspeople find themselves attracted to the places that truly understand and respect their craft.

There's a similar thing happening with data literacy, where the non-data-literate hire the data literate but don't actually internalize those practices or learn from them. They want to continue operating like they always have, but just "plug in AI" (or whatever new thing) without changing fundamentally how they do anything.

People want to have AI, but those companies' leaders struggle with a basic understanding of statistical significance and the fundamentals of experimentation, and thus essentially destroy any culture needed to build the AI-thing.

a_bonobo
6 replies
14h11m

Do they struggle with the basics, or do they just not care?

I'm in a similar situation with my own 'C-suite' and it's impossible to try and make them understand, they just don't care. I can't make them care. It's a clash of cultures, I guess.

pushfoo
4 replies
13h40m

TL;DR: Yes, and I think that's why some of these comments are so hostile to OP.

it's impossible to try and make them understand, they just don't care. I can't make them care. It's a clash of cultures, I guess.

That seems to be what OP's cathartic humor is about. It's also (probably) a deliberate provocation since that sub-culture doesn't deal well with this sort of humor.

If that's the case, you can see it working in this thread. Some of the commenters with the clearest C-suite aspirations are reacting with unironic vitriol as if the post is about them personally.

I think most of those comments already got flagged, but some seemed genuine in accusing OP of being a dangerously ill menace to society, e.g. "...Is OP threatening us?"

In a sense, OP is threatening them, but not with literal violence. He's making fun of their aspirations, and he's doing so with some pretty vicious humor.

tavavex
2 replies
13h0m

I think it's a bit reductive to flatten the conversation so much. While I don't have as much of an extreme reaction as the people you talk about, the post left a bit of a sour taste in my mouth. Not because I'm one of "those people" - I agree with the core of the post, and appreciate that the person writing it has actual experience in the field.

It's that the whole conversation around machine learning has become "tainted" - mention AI, and the average person will envision that exact type of an evil MBA this post is rallying against. And I don't want to be associated with them, even implicitly.

I shouldn't feel ashamed for taking some interest and studying machine learning. I shouldn't feel ashamed for having some degree of cautious optimism - the kind that sees a slightly better world, and not dollar signs. And yet.

The author here draws pretty clear lines in what they're talking about - but most readers won't care or even read that far. And the degree of how emotionally charged it is does lead me to think that there's a degree of further discontent, not just the C-suite rhetoric that almost everyone but the actual C-suites can get behind.

pushfoo
1 replies
12h42m

I think it's a bit reductive to flatten the conversation so much.

Is that because I added a TL;DR line, or my entire post?

I shouldn't feel ashamed for taking some interest and studying machine learning. I shouldn't feel ashamed for having some degree of cautious optimism - the kind that sees a slightly better world, and not dollar signs. And yet.

I agree with this in general. I didn't mean to criticize having interest in it.

And the degree of how emotionally charged it is does lead me to think that there's a degree of further discontent

Do you mean the discontent outside the C-suite? If so, yes, I agree with that too. But if we start discussing that, we'll be discussing the larger context of economic policy, what it means to be human, what art is, etc.

tavavex
0 replies
12h23m

Is that because I added a TL;DR line, or my entire post?

The TL;DR was a fine summary of the post, I was talking about the whole of it. Though, now that I re-read it, I see that you were cautious to not make complete generalizations - so my reply was more of a knee-jerk reaction to the implication that most people who oppose the author's style are just "temporarily embarrassed C-suites", unlike the sane people who didn't feel uncomfortable about it.

I didn't mean to criticize having interest in it.

I don't think you personally did - I was talking about the original post there, not about yours. The sentiment in many communities now is that machine learning itself (or generative AI specifically) is an overhyped, useless well that's basically run dry - and there's no doubt that the dislike of financial grifters is what started their disdain for the whole field.

Do you mean the discontent outside the C-suite?

Yes.

jiggawatts
0 replies
11h3m

the post is about them personally.

There is a decent chance that, yes, this rant is quite literally aimed at the people that frequent Hacker News. Where else are you going to find a more concentrated bunch of people peddling AI hype, creating AI startups, and generally over-selling their capabilities than here?

zer00eyz
0 replies
13h23m

'C-suite' and it's impossible to try and make them understand,

I think we should do a HN backed project, crowd funded style.

1. Identify the best P-hackers in current science with solid uptake on their content (citations).

2. Pay them to conduct a study proving that C levels who eat crayons have higher something... revenue, pay, job satisfaction, all three.

3. buy stock in crayons

4. Publish and hype, profit.

5. Short crayons and out your publication as fraud.

6. Profit

Continue to work out of spite, always with a crayon on hand for whenever someone from the C-suite demands something stupid, and offer it to them.

A man can dream... I feel like this is the plot to a Hunter S. Thompson-writes-Brave New World set in the universe of Silicon Valley.

I should be a prompt engineer.

ranger207
0 replies
51m

Senior management's skill set is fundamentally not technical competence, business competence, financial competence, or even leadership competence. It's politics and social skills (or less charitably, schmoozing). Executives haven't cared about the "how" of anything to do with their business since the last generation of managers from before the cult of the MBA aged out

kjkjadksj
0 replies
13h19m

The issue is that it's not just with technology, but with absolutely anything that could be loosely defined as an expert-client relationship. Management always budgets less time and money than anyone with expertise in the subject would feel is necessary. So most things are compromised from the outset, and if they succeed it's miraculous that the uncredited experts who moved heaven and earth overcame such an obstinate manager. It's no wonder most businesses fail.

jiggawatts
0 replies
11h5m

This is a common problem across all fields. A classic example is that you don't change SAP to suit your particular business, but instead you change your business to suit SAP.

remoquete
10 replies
1d1h

Tone aside, the post contains some hard truths. I'm curious to see what the HN audience think of the point the author makes.

thrillgore
8 replies
19h40m

The audience isn't getting to see it because it's getting flagged as it shows up.

oefrha
3 replies
14h12m

IMO this should be flagged. This is outrage porn, and comments overwhelmingly fall into two camps: ones discussing the author and writing style, completely devoid of intellectual curiosity; and ones venting their own pent up energy based on the title/topic. Very few are interested in the actual points of the article, because it’s written in a way that maximally discourages civil discussion.

I know occasionally a “discuss the title” thread is allowed, but this one is almost strictly worse than just the title without the link, since we don’t get useless comments on the author in the latter case.

Oh, and I say this as someone who vaguely agrees with the sentiment.

defrost
1 replies
12h35m

It's been flagged.

Seven or eight or more prior submissions have been flagged. They've been vouched for, flagged again, marked dead, and resubmitted .. by many different, unaligned, non-bot people.

As dang noted above, the HN community wants to thrash this out .. some want it deader than a parrot, others want to comment that much of the AI hype appears to have no clothes, and this range of opinion comes from 10-year-old active accounts (and more recent ones).

rcarmo
0 replies
10h37m

This is better than the Monty Python dead parrot sketch. Which, now that I think about it, could be a metaphor for AI.

dang
0 replies
12h24m

You're right, but this seems to be a rare boundary condition where some users think it's fun/interesting and others are triggered by it and the two groups are the same order of magnitude.

In such cases the story is going to keep showing up no matter what we do. If we don't yield after 14 submissions we're destined to yield after 140! I'd rather yield after 14. It's just one thread, and easy enough to move on from.

jubalfh
1 replies
10h22m

and of course you had to lie about the title, because that's how pitiful you are.

dang
0 replies
1h17m

Pitiful as I am, I need to ask you to stop breaking HN's guidelines. Your account has been doing this repeatedly, and in fact is way over the line at which we ban people.

I'm not banning you at the moment because it wouldn't feel sporting to do so in response to a personal remark. But if you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules going forward, that would be good.

remoquete
0 replies
12h40m

This is what great moderation looks like. Thank you, dang.

marssaxman
0 replies
23h15m

I love his point and I love his exuberantly colorful tone.

Kiro
10 replies
10h52m

I didn't find this article refreshing. If anything, it's just the same dismissive attitude that's dominating this forum, where AI is perceived as the new blockchain. An actually refreshing perspective would be one that's optimistic.

andrei_says_
7 replies
10h29m

Is it possible that this forum may be dismissive of the AI bubble because the people on HN tend to have better understanding of the technology, its limitations, and the deceptive narrative around it?

Kuinox
2 replies
9h48m

Did you hear about HN's opinion of Dropbox?

sureglymop
1 replies
9h17m

The one infamous comment you mean? Because otherwise people in that thread thought it was pretty cool! I see this often mentioned here but it really just seemed like one comment in a whole thread.

Kuinox
0 replies
5h39m

It was also the most upvoted comment.

Kiro
2 replies
10h17m

We like to think we're above everyone else but in reality our technical expertise is mostly irrelevant when evaluating the utility. I've been programming for 20 years and I can't think of a single thing I know that puts me in a better position to predict the future of AI.

mu53
0 replies
9h12m

Compared to other populations, the HN group will know about AI tools being released, use them, and have some vague understanding of their fundamental basis.

The tools are impressive but have their limitations. I think the demos are more for investors looking for the next unicorn. Like self-driving cars, it's a hard problem.

andrei_says_
0 replies
2h9m

I was not speaking about the future however. And especially not about the distant future. AI is such a wide term that it’s hard to even discuss the topic.

It is already used for surveillance and racial profiling (China etc.) and that’s already a disruption in oppression and control.

Will likely disrupt driving.

Image recognition and generation are clearly influential.

Probabilistic text generation tech is making huge promises and getting a lot of investment, but application is lacking - it seems like a technology in search of an application.

TeMPOraL
0 replies
8h29m

That still requires having tight blinders over your eyes, the same kind that were needed to not be dismissive of the blockchain.

Or maybe we're confusing two things. AI and blockchain are complete opposites in real terms; the former represents an unexpected and qualitative jump in technological capabilities of humanity, and already delivers spectacular results, while the latter is just a purposefully inefficient database, neat on mathematical grounds, but useless in the real world. However, in business terms, both are in fact the same - a way for grifters to get rich on peddling bullshit.

To this I can only say: please remember that the nature of a thing is not affected by how much the fraudsters can lie to you about it. Hustlers gonna hustle. So sure, it's a business-level hype - but that doesn't affect the merit of the underlying tech.

cainxinth
0 replies
6h4m

It reads a bit like someone at the turn of the 20th century describing the shortcomings of newfangled automobiles.

__rito__
0 replies
6h59m

That's far from what the article actually tries to say. Did you read the full thing?

HN is disproportionately dismissive, where one comment in late 2023 was this:

    Ruby is so much better than Python, and Python is only pumped up by AI hype, and the AI hype will die down soon. Ruby will regain the throne again.
Imagine that!

This article is not that. This article just tells you to get your basics correct as a company, and not to think about using AI before you are absolutely sure where and how you will use it. And non-technical people are the main drivers of AI hype (which is, besides, true).

blast
9 replies
14h17m

I swear this particular rant style is derived from earlier writers who I've read, probably many times, but don't remember. It feels completely familiar, way more so than someone who started working in 2019 could possibly have invented ex nihilo. They're good at it though! And somebody has to keep the traditions going.

fenomas
3 replies
13h29m

I get what you mean but I have to disagree about this one being good. Epic rants are fun when there's a cold hard point being made, and the author uses the rant format to make the point irrefutably clear and drive it cruelly home.

Here, if you strip away the guff about how smart the author is and how badly he wants to beat up people who disagree, I have no idea what he's trying to say. The rest reads like "these companies who want to use AI are bad, and instead of using AI they should try not being bad", and such. Ok?

blast
2 replies
13h15m

I was trying to be nice but yeah, "i'm smarter" and "i crush your skull" are not witty. There are nice twists of phrase in there though. The kid has potential!

richardatlarge
0 replies
10h14m

In short, get him an editor

fenomas
0 replies
12h54m

To be honest I'm not 100% sure it was meant to be satire - but if it was then I agree with you :D

zzeniou86
0 replies
13h9m

Maybe Wilhelm Reich with his book 'Listen, Little Man!'?

p1necone
0 replies
14h12m

It feels a little bit like a James Mickens rant, but not as funny (not a jab at the author of this, James Mickens is just really funny).

metabolian
0 replies
10h29m

It has a kind of Seanbaby, Something Awful vibe.

labrador
0 replies
12h12m

Maddox was the originator of this style afaik. He's been at it for over 20 years.

https://maddox.xmission.com/

busterarm
0 replies
13h47m

A little bit of Maddox, a little bit of BOFH.

exabrial
8 replies
14h19m

There you have it - what are you most interested in, dear leader? Artificial intelligence, the blockchain, or quantum computing?6 They know exactly what their target market is - people who have been given power of other people's money because they've learned how to smile at everything, and know that you can print money by hitching yourself to the next speculative bandwagon.

Nailed it.

anon291
5 replies
14h17m

I don't see how AI, quantum and block chain are at all equivalent.

Block chain has no use.

AI and quantum have obvious uses if they work.

Quantum is not close to working now. It's where AI was at in the 80s/90s.

AI may not be perfect but there is no denying that the GPTs were a dramatic shift.

cfeduke
2 replies
14h8m

I don't see how AI, quantum and block chain are at all equivalent.

It's not that there is any claim to equivalency; it's that these are the technology trends that are most useful - the trend itself, never mind any sort of usable technology - for those who grift.

tavavex
1 replies
13h41m

It's not an uncommon argument to try and draw parallels between these - X was a fairly useless, yet extremely speculative new tech scam, and Y is speculative new tech. Therefore Y is also a useless scam, QED.

There's a difference between something being an extremely hyped development and it being an actual grift down to the core. The internet was an extremely overhyped development, but ultimately not a grift. Cryptocurrency was, to a large extent, both. Whether generative AI is one or the other won't be apparent until a bubble truly starts growing.

kjkjadksj
0 replies
12h58m

Something can be both a good development and an overhyped grift. You have to keep in mind that for a lot of people and their businesses, the grift is literally the entire angle. Not building a technology. Not building a 100-year business. But getting rich as fast as you can while you have the opportunity, tech and business be damned. Ironically, both the real technologists and the grifters benefit from this preaching of misleading overstatements and half-truths from the rooftops. It's therefore tolerated, almost as a funding mechanism for the industry at large. macOS 15: same as it ever was, now with AI under the hood. Sounds like a good seller to me.

zer00eyz
0 replies
14h11m

GPTs were a dramatic shift

It was a massive leap forward for a 50-year-old idea.

If it takes another 50 years to make a leap of equal size, we might get to AGI before the heat death of the universe.

I don't see how AI, quantum and block chain are at all equivalent.

If we shut them off tomorrow what do YOU need to replace in your life without them?

financltravsty
0 replies
14h8m

Blockchain is used as a stand-in financial system for those who don't or can't use the incumbent ones (for banking, commerce, gambling, and so on).

AI is nowhere close to delivering on its promises. But it's pretty useful for many tasks.

Quantum computers are vaporware. Quantum sensors are already here and game-changing. But just like AI, the field has failed to live up to the hype.

---

Crypto is already here and makes good on its promises: decentralized finance.

Full disclosure: I do not own any crypto besides negligible sums in forgotten wallets.

safety1st
1 replies
13h45m

Yeah, this is pretty spot on, unfortunately.

Most people don't really comprehend how much money there is in the hands of people at the top that just... falls down to whatever random stuff those people are getting worked up about at the moment. The vast majority of it ends up being nonproductive and it really does get allocated based on what those people see in their Twitter feeds. This is a much more pronounced problem than it was 20 years ago because of all the money printing governments have done, in general if you are connected to the government and banks, you will be the largest beneficiary of that type of action. None of this stuff is really subjected to market economics, it either flows through some kind of government/NGO bureaucracy or from someone who controls a monopoly or something similar to one. The waste and inefficiency in this modern pseudo-command-economy is mind blowing to behold.

kjkjadksj
0 replies
12h53m

Not only do people merely get worked up, of course. There are grifters and plenty of nepotism. Sometimes people in positions of power expand the system solely to create contracts for people connected to them - ostensibly for some benefit, but with the amount of waste and unaccounted-for money once it leaves one org and enters another, intentions are sufficiently masked. And even if people's intentions were plainly unmasked, everyone at the top of the org is probably just as leveraged and isn't going to stop the music just to appease the little man that propaganda defanged 100 years ago in this country.

arbfay
6 replies
10h40m

Started a career in ML/AI years before ChatGPT changed everything.

At the time, we only used the term AI if we were referring to more than just machine/deep learning techniques to create models or research something (think operations research, Monte Carlo simulations, etc.). But that was already starting to change.

I think startups and others will realise that to make a product successful, you need clean data and data engineers; the rest will follow. Fundamentals first.

All the startups trying to sell "AI" to traditional industries: good luck!

I've worked as an AI engineer for a big insurance company and as a contractor with a bank, and oh gosh!

alecco
5 replies
10h35m

I bet the old guard will refuse to change course and new companies using the new tools will displace the old ones. Like what is happening with online banks and similar. Like what happened with low-cost airlines.

physicsguy
3 replies
8h19m

I actually think the example of banks is a good one, because what's happened in places with a competitive banking landscape is just that the big players have upped their game and the benefit of the challenger banks has diminished, with them struggling to become profitable.

Monzo is the biggest new player in the UK and it's not making much of a profit. Revolut doesn't have a banking license because it can't comply with the regulatory requirements. Starling has taken much more of a conservative path and is being led by an ex-Barclays person, but even it is being investigated by the FCA for having poor controls around financial crime. All of those giving loans have an unacceptably high % of defaults from an investor perspective.

alecco
2 replies
6h11m

In Europe there are other more successful cases like Wise. Or big banks starting whole subsidiaries from the ground up.

physicsguy
1 replies
5h32m

Wise aren't a bank though, again - they don't have a European or UK banking license.

alecco
0 replies
43m

I didn't say it was. I was saying they are displacing the old guard.

varjag
0 replies
10h7m

I think the new hot upstarts will find that a combination of intricate regulation, physical reality, and institutional inertia will only allow them to make a tiny dent over years.

AIorNot
6 replies
14h3m

So what about:

1. Vector search

2. RAG pipelines with your data

3. Semantic, image, video, and object detection

4. Robotics

5. Code generation/review

6. AI language translation

7. AI research agents

8. ChatGPT

Clickbaity article aside there’s tons of legitimate uses of LLMs for corporations of all sizes

Now blockchain on the other hand…

zer00eyz
5 replies
13h55m

> RAG pipelines with your data

FTA

"Everyone is talking about Retrieval Augmented Generation, but most companies don't actually have any internal documentation worth retrieving. Fix. Your. Shit."

5. Code generation /Review

FTA

"If another stupid motherfucker asks me to try and implement LLM-based code review to "raise standards" instead of actually teaching people a shred of discipline, I am going to study enough judo to throw them into the goddamn sun.

I cannot emphasize this enough. You either need to be on the absolute cutting-edge and producing novel research, or you should be doing exactly what you were doing five years ago with minor concessions to incorporating LLMs. Anything in the middle ground does not make any sense unless you actually work in the rare field where your industry is being totally disrupted right now."

The man trots out his bona fides at the start and in the article. He's on the inside and backing up that rage.

AIorNot
3 replies
13h24m

What the hell does that mean, fix your docs?

Companies have tons of document libraries and documentation that need sifting through, and they are generating more content regularly, so RAG and vector search is a game changer with real value there.

E.g. we implemented RAG + vector search at a manufacturing company and it changed their workflows entirely.

And as for coding with LLMs: say what you will about AI coding, but code review/linting and LLM-created unit tests are in themselves as game-changing as IDE IntelliSense. That value is worth at least one junior developer on the team - that's a 70k yearly salary of benefit alone.
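For concreteness, the retrieval half of that is conceptually tiny. Here's a minimal Python sketch - the document names and 4-dimensional vectors are made up purely for illustration; in a real pipeline the embeddings would come from an embedding model, and the retrieved text would then be pasted into the LLM prompt:

  import numpy as np

  # Toy "embeddings": in practice these come from an embedding model.
  # The document names and vectors below are invented for illustration.
  docs = {
      "maintenance manual":  np.array([0.9, 0.1, 0.0, 0.2]),
      "holiday policy":      np.array([0.1, 0.8, 0.3, 0.0]),
      "machine setup guide": np.array([0.8, 0.2, 0.1, 0.3]),
  }

  def cosine(a, b):
      return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

  def retrieve(query_vec, k=2):
      # Rank documents by cosine similarity to the query embedding
      # and return the k closest ones.
      ranked = sorted(docs, key=lambda name: cosine(query_vec, docs[name]), reverse=True)
      return ranked[:k]

  # A query embedding that sits near the "machine" documents.
  print(retrieve(np.array([0.85, 0.15, 0.05, 0.25])))
  # ['maintenance manual', 'machine setup guide']

The similarity search is the easy bit; the hard part - and the article's point - is having documents worth retrieving and keeping them current.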

zer00eyz
0 replies
10h57m

What the hell does that mean fix your docs??

If you asked a company to run for a week based on what its docs said and nothing else, I suspect it would be bankrupt before Friday.

Knowledge is tribal, human, and adaptive.

AI did this once already: harvesting the data from professionals for expert systems... The problem is you need to keep feeding it data... the people don't go away, they aren't doing the job any more, they are just documenting the job at that point.

kjkjadksj
0 replies
13h13m

A measly 70k saved today to make fewer positions for junior talent that you will so desperately need to be senior when your current crop retires. You might as well burn the office furniture for heat this winter and save on hvac.

eichin
0 replies
10h39m

In manufacturing, maybe you haven't done ISO9000 but you've heard of it and have all sorts of regulated documentation as a baseline - a "bias for writing process details down" that is absent in a bunch of other industries, with software at the top of the list. "Documentation // it is written by strangers // throw it in the trash" is a software haiku that I keep running into (not as a policy or anything, just a recurring meme about how bad/uninformative/absent software docs generally are.)

xarope
0 replies
13h30m

"Everyone is talking about Retrieval Augmented Generation, but most companies don't actually have any internal documentation worth retrieving. Fix. Your. Shit."

I read up about Copilot as part of some internal research, and absolutely the first things to do for Copilot (I'll copy-paste just the line-item headings from the section entitled "Prepare your data for Copilot for M365 searches") are:

  - Clean out redundant, outdated, and trivial (ROT) content.   
  - Organize content into logical folders and sites.
  - Tag files with keywords.
  - Standardize file names.
  - Consolidate multiple versions.
  - Promote data hygiene habits.
Sigh. If I could do all this at an organizational level, I wouldn't need copilot at all.

rKarpinski
5 replies
14h14m

Please don't mention AI again

Editorialized & not the title of the post.

dang
4 replies
14h13m

You're right, but "I Will Fucking Piledrive You If You Mention AI Again" is definitely linkbait, so we had to change it by the site guideline "Please use the original title, unless it is misleading or linkbait; don't editorialize." - https://news.ycombinator.com/newsguidelines.html

I confess to being a bit self-indulgent in coming up with the edit—I thought it would be hilarious to flip it to something meek and passive aggressive.

Normally when we edit a title according to that guideline, we search earnestly for a representative and neutral phrase from the article itself. In this case "earnest" and "neutral" aren't much of a fit, so I felt it more in the authorial spirit to troll a little.

loeg
2 replies
12h13m

How did you decide that the original title is linkbait?

dang
1 replies
12h8m

I decided it by spontaneous reflex. But we can unpack it reflectively like this:

I Will

Ok in principle but already pattern matches to dodginess

Fucking

That's a noise amplifier—can be ok if the rest of the title is whimsical, but in most cases it's just ponderous

Piledrive

Wtf? That's extremely aggressive. "I will do to you what professional wrestlers do to those whom they are violently defeating, except without the training not to be terminally injured by it"

Hot language like that may be ok if it gets balanced by other things (but in this case there turn out to be no other things)

You

"You" is a linkbait trope. In this case it doubles down on the aggression—not just "i will piledrive" but "I will piledrive you". What did you have to do with it?

If You Mention

The superfluous you again, plus the word "mention"—what's wrong with mentioning things? This is a rhetorical trick to drive up the menace: "don't you dare mention $thing you piece of shit"

AI

The commonest hot topic du jour. Ok, but there'd better be something substantive to balance the buzzword. Is there?

Again

Rhetorical escalation. You are mentioning AI AGAIN? I will fucking piledrive you.

Conclusion: not a single word in that title isn't linkbait. Take it all out and you end up with the empty string.

muzani
0 replies
11h52m

Ahahaha, please add this comment to the highlights.

rKarpinski
0 replies
14h7m

Ah didn't realize you modified it. That makes sense.

Since the original author submitted it, I thought it was misleading.

delichon
5 replies
1d6h

No, actually, it isn't cute to threaten to break someone's neck if they mention a topic you don't like.

fuzzfactor
1 replies
1d1h

You're correct.

It would be so much bolder and more mature if he threatened to run with scissors or hold his breath until he turned blue ;)

He may have some good points, but the delivery might detract more than it helps them stick.

nonplus
0 replies
14h34m

Some of us won't ever know about his good points since we tuned out (left his web page) due to the author's juvenile tone.

yifanl
0 replies
1d4h

Things would be substantially more productive if boardroom meetings allowed more definitive responses than "We'll circle back to that later". This is the other extreme.

remoquete
0 replies
1d1h

I hear you, the tone might be too much for some. But it's also part of what makes it so fresh and almost a McSweeney's article.

busterarm
0 replies
14h0m

As opposed to the rampant _passive aggression_ that permeates every discussion and meeting of every workplace.

This entire comments section is exemplary of the industry at large. We're doing fucking _engineering work_ "professionally" and the bitching is 50:1 on his tone versus his actual ideas.

The entire professional class has been brought to its knees by a culture that demands we tiptoe around everyone's insecurities.

bryanrasmussen
5 replies
21h12m

this guy keeps threatening violence in his blog posts, has anyone ever had to fight him? How tough is he actually?

ludicity
2 replies
12h19m

Author here. Someone sent me this comment this morning (along with a note saying: "I am not afraid of you") and it absolutely sent me. Probably because he knows I am of exactly average height and exactly average build, and I quit Muay Thai after one day of getting conditioned against knees-to-the-stomach.

But if you don't tell anyone, I won't tell anyone.

whoknowsidont
0 replies
1h8m

I think it's lame.

chibi_giant
0 replies
14h8m

No one that has fought him is here to tell the tale.

aakresearch
0 replies
13h24m

Judging by what he puts in the open in his other articles, an accomplished fencer he is, so there must be a few who had to fight!

timlod
4 replies
9h35m

I'm a Data Scientist currently consulting for a project in the Real Estate space (utilizing LLMs). I understand the article is hyperbole, perhaps for comedic purposes, and I actually align with perhaps 80% of the author's views, but it's a bit much.

There is industry-changing tech which has become available, and many orgs are starting to grasp it. I won't deny that there's probably a large percentage of projects which fall under what the author describes, but these claims are doing a bit of a disservice to the legitimately amazing projects being worked on (and the competent people performing that work).

runeks
1 replies
9h13m

I'm a Data Scientist currently consulting for a project in the Real Estate space (utilizing LLMs).

Consultants are obviously making huge amounts of money implementing LLMs for companies. The question is whether the company profits from it afterwards.

timlod
0 replies
8h26m

Time will tell, but I would cautiously say yes.

Note that I don't usually work in that particular space (I prefer simple solutions and don't follow the hype), didn't sell myself using 'AI' (I was referred), and also would always tell a client if I believe there isn't much sense in a particular ask.

This particular project really uniquely benefits from this technology and would be much harder, if possible at all, otherwise.

lagrange77
1 replies
7h19m

Would you recommend still getting into freelance consulting (with an ML background) at this point in time? Or will the very technology you're consulting about replace you very soon? AutoML, LLMs, etc.

timlod
0 replies
6h40m

I'd say it depends on what your other options are. I don't think the technology will replace me soon, even at the rate I see it improving. At this point it's still a tool we can use to deliver faster, if we use it wisely.

Especially about ChatGPT et al. - I use it daily, but having the proper foundation to discern and verify its output shows me that it's still very far from being a competent programmer for any but the 'easy' tasks which have been solved hundreds of times over.

Like I hinted, I also view all of this hype sceptically. I dislike the 'we need AI in our org now!' types and am not planning on taking on projects if I don't see their viability. But there's obviously still a lot of demand and people offering services like those in TFA who're just looking to cash in, and that seems to work.

If you can find projects you believe you can make a difference in with your background, why not give it a shot?

vundercind
3 replies
13h31m

How stupid do you have to be to believe that only 8% of companies have seen failed AI projects?

That’s because most of them are still in progress. Enterprise moves slow and only started on this recently. They still think it’s going great because they’re riding the high of imagination.

whoknowsidont
0 replies
1h5m

That’s because most of them are still in progress.

What? You know AI has been around since before ChatGPT.

arbfay
0 replies
10h27m

That is not true unfortunately.

ML has been around for decades, DL for more than a decade.

In 2019, I had to explain to executives that 95% of AI projects fail (based on some other survey), top 1 reason is bad or missing data and top 2 is misaligned internal processes. I probably still have the slides somewhere.

One project I worked on was impossible because the data was so bad that after cleaning, we went from 4M rows to 10k usable rows in 4 languages. We could have salvaged a lot more if we restricted the use case but then the benefits of the projects would be not so interesting anymore. The internal sponsor gave up and understood the problem. Instead, they decided to train everyone on how to improve data entry and quality! In just 6 months I could see the data was getting better indeed. But I had to leave this company, the IT dep was too toxic.

So I think the author is right. According to Scale, we'd have gone from 95% failures to 95% successes in just 4-5 years just thanks to LLMs? This is of course ridiculous, knowing the problem was never poor models.

Barrin92
0 replies
13h14m

No, it's because the numbers are made the f** up. That same chart, as the author notes, also claims that a third of companies have allegedly outsourced "strategic decision making" to AI. That is so offensive to any person who has a brain that the author's much-criticized tone is completely warranted if you have any love for truth at all.

I mean, contempt is literally the only sane and appropriate emotional reaction to the amount of lies, and it is intentional lies, that are being marketed to people.

swedonym
3 replies
19h45m

Wow, thoroughly entertaining (though slightly deranged) read. Did not expect to see an R.A. Salvatore reference in this; that brought me back.

ggm
2 replies
18h35m

I deleted my comment about the author's chutzpah. It had a bit of an "IamTheMainCharacter" and "IamAveryStableGenius" quality to it. The thing is, I agree with them, even if I want to say "dial the ego down, mate", and after all, if you cannot parade your worth on your own blog, where can you?

Still. I prefer self-deprecating. Maybe you don't get seniority in his space if you don't sell.

aakresearch
1 replies
13h49m

May I point you to the "Compliments" section of his blog, in your search for self-deprecation.

ggm
0 replies
12h24m

Fair. But he's unrepentant. It's like the book author who adds bad-review blurbs to their book on the basis that controversy simply feeds readership.

But yes. Clearly self-aware. Just doesn't care. Which is fine too: it's his blog.

snazz
3 replies
19h56m

If you're interviewing for a job at a company you're not familiar with, what are some good heuristics (and/or questions to ask) to politely get a sense of whether it's run by buzzword-bingo enthusiasts?

kjkjadksj
0 replies
13h8m

Easy. Is the interview process dehumanizing, or is it not a process at all, and you are treated as a potential friend and colleague? Are they trying to sell you on the team and project, or are they merely hazing their applicants? That will tell you more than any heuristic about the culture of working at this company.

cfeduke
0 replies
14h5m

If it's tech and they want a 30-45 minute interview with live coding on something like HackerRank (especially if whatever brain teaser they've chosen has absolutely nothing to do with the field they operate in), I'd put the chances around 80%.

__s
0 replies
18h45m

If you ask how they handle hard problems, they'll tell you about AI solving them soon.

llm_trw
3 replies
14h32m

started working as a data scientist in 2019

Just Use Postgres, You Nerd. You Dweeb.

I'm really sick and tired of kids coming in and shitting on what we had to do to search a TB worth of data in 2009 (or 2004).

A computer in 2019 (or 2024) has enough power to run Postgres queries to extract statistics from columns.

Yeah, great.

Now try running that stack on an iPhone 7 and report your results back. We didn't create all that complexity for shits and giggles; we did it because it was at the edge of what was possible, and the companies that got it right made billions.

jayd16
2 replies
14h27m

try running that stack on an iPhone 7

Why?

llm_trw
1 replies
14h18m

Because that's the average machine we had to use back then.

jayd16
0 replies
13h33m

I think one of us misinterpreted this part. Isn't the author echoing other people's sentiments here?

imchillyb
3 replies
14h30m

Quite vulgar for a topic that doesn't require any vulgarity. I was put off by the f-bombs.

popalchemist
1 replies
14h25m

Disgruntled aspy with no social awareness vibes.

nrki
0 replies
14h16m

Or just Australian.

jauntywundrkind
0 replies
14h21m

It doesn't require it, but it sure feels good!

Mostly the world has to sit back and suffer through this clown car of over-enthused business types promising us that their AI will make all our lives better. Mostly the world has to let this weirdly quasi-cultish hype run unchecked.

I'm frelling pissed about it. I frelling miss personal computing being an aspiration; I wish we were actually improving systems. Instead we're investing billions to make machines good bullshitters. It's enormously relieving seeing such rancorous disgust on display here.

arisAlexis
3 replies
9h25m

"With God as my witness, you grotesque simpleton, if you don't personally write machine learning systems and you open your mouth about AI one more time, I am going to mail you a brick .."

I really dislike articles that rage against (x) and try to appear smart and authoritative, telling you that everyone else is stupid and they know better, when deep down they are just anti-everything, because contrarianism is the real target here, not AI or anything else that articles like this talk about. The idea is to just say "no, it's not like that".

whoknowsidont
1 replies
1h0m

Strangely, the most boring people are the ones that seem to hate the article.

arisAlexis
0 replies
54m

You mean you prefer haters and jihadists so as not to be bored? OK for Game of Thrones.

brap
0 replies
9h7m

To me it sounds like a typical neckbeard internet power-trip.

moorow
2 replies
11h58m

Probably the most entertaining thing about the article (aside from the article itself) is seeing a comment section full of Americans who have precisely zero understanding of Australian humour or writing style.

As a former-PhD-Data-Scientist-who-quit-the-industry-because-it-was-full-of-fraud-and-went-back-into-software-engineering-and-is-now-an-Australian-consulting-to-Americans, this is even more hurty. Someone sent this to me and I thought I'd dreamposted it.

Great article, the anecdotes physically pain me in the same way that watching Utopia does.

whoknowsidont
0 replies
1h2m

It's mainly Midwestern Americans who like to tone-police. People from the East Coast, especially the northern East Coast, read this as comfort literature.

WickyNilliams
0 replies
8h59m

Similarly, as a Brit, I don't find the tone and humour unusual at all. It feels like someone blowing off steam down the pub with their mates.

It is totally lost on many here, some of whom equate the rant with serious threats of violence.

Personally, I found it very funny and an accurate portrayal of industry trends.

mellosouls
2 replies
10h38m

The editorialising of the title is amusing. The writer comes across like a teenager still developing his style.

boxed
1 replies
9h51m

The writer? Or the person editorializing the headline? :P

mellosouls
0 replies
9h4m

Yes, the writer :)

lucubratory
2 replies
14h23m

Deleted because it was unnecessarily inflammatory.

tymscar
1 replies
13h27m

Funny how academics and professionals like François Chollet and Yann LeCun are on the skeptic side, while a vast number of the vocal proponents are old NFT pushers.

lucubratory
0 replies
12h21m

Or Geoffrey Hinton, or Ilya Sutskever, or Yoshua Bengio; oh look, those are already the three most cited researchers in deep learning. François and LeCun shouldn't be categorised together anyway. François is a reasonable person, while LeCun was the second-best head of AI at Meta until the best head of AI at Meta used theories LeCun has railed against for years to release some of the best AI research Meta has ever released, only to get replaced as head by Yann because Yann was better at corporate politics.

infrawhispers
2 replies
14h28m

This rings true and is also hilarious. I am tired of the AI hype/grift that seems to pervade the industry; funnily enough, some of these same promoters were on the blockchain/crypto hype that was going to revolutionize every industry on earth.

There is meat to recent advances in machine learning, but the revenue and savings for actual businesses will need to start coming in if this hype is going to be sustainable for the next 16-24 months.

moose_man
1 replies
14h17m

I feel like OpenAI took this thing out of the oven too soon, or packaged it in a way which misrepresents its nature. The power of AI isn't to answer your technical support questions or be your AI personal assistant; it's in the manic creativity that convinced people they were talking with a human (albeit a slightly unhinged one). But that energy had to be nerfed for business use cases and is, in and of itself, problematic for monetization. So what we're left with after they've been nerfed is basically an iteration on what we already had, not the world-changing technology that we were promised.

tavavex
0 replies
13h35m

I honestly don't think OpenAI anticipated it becoming as big a deal as it did, and they had to lean into grandiose expectations once it did become a big deal. They didn't have a grandiose marketing campaign, to my memory; the only reason their tech exploded overnight was that it was something that made an average person interested. News of previous GPT versions and primitive image generators was passed around among techy people for years before that point. Maybe there's a threshold of generation quality at which a user perceives it as something that can rudimentarily "understand" the input, rather than just being a generic chatbot.

asveikau
2 replies
21h15m

by 2021 I had realized ... it was also largely fraudulent. Most of the leaders that I was working with clearly had not gotten as far as reading about it for thirty minutes ... Most of the market was simply grifters and incompetents (sometimes both!) leveraging the hype to inflate their headcount so they could get promoted, or be seen as thought leaders.

This was true of web tech in the 2000s, social media and mobile apps in the 2010s, crypto, now AI...

There's substance behind most of these topics; they're intellectually interesting, useful to people, etc., but they tend to be largely grift, hype, office politics, narcissists tryina narcissist.

ToucanLoucan
1 replies
21h12m

Tech as an industry is addicted to hype bubbles, and addiction, in people and in large groups of people alike, fosters bad decision-making and skewed priorities, and over the long haul does great harm.

asveikau
0 replies
2h36m

I feel like a simpler explanation is that the money involved attracts lots of shallow people trying to make a buck, who do not care about the substance or technology.

TZubiri
2 replies
14h38m

Someone's angry and resentful at software developers and business people interested in AI. Ok. The author also takes some kind of authority stance as a "data scientist" being uniquely qualified to talk about the subject.

The article follows the format of sections whose titles are interrupted quotes from people "mentioning AI", with the author shutting them up. Let's see what amazing arguments they offer for "not talking about AI".

" "III. We've Already Seen Extensive Gains From-" When I was younger, I read R.A Salvatore's classic fantasy novel, The Crystal Shard. There is a scene in it where the young protagonist, Wulfgar, challenges a barbarian chieftain.... "

Just at a glance, we have business people on one side talking about income and how to increase it. On the other, we have an enraged nerd talking about his favourite high-fantasy fake world. Gee, I wonder whose side I should be on. Is this an ad for AI hype?

" Well this is me. Begging you. To stop lying. I don't want to crush your skull, I really don't.

But I will if you make me."

... Is OP threatening us?

I say this unironically now, OP: this sounds schizophrenic. I won't suggest you "take your meds", but at least consult a psychiatrist if you haven't already.

globalnode
0 replies
14h9m

oh jeez, stop it, it's embarrassing.

__loam
0 replies
14h34m

So business people are seeing increased income related to AI

They sure are spending a lot of money on graphics cards!

yawpitch
1 replies
1d8h

so I'm going to choose to take that personally and point out that using the word AI as some roundabout way to sell the labor of people that look like me to foreign governments is fucked up, you're an unethical monster, and that if you continue to try { thisBullshit(); } you are going to catch (theseHands)

And (theseFeets) in (yourNutz).

I'm going to ask ChatGPT how to prepare a garotte and then I am going to strangle you with it, and you will simply have to pray that I roll the 10% chance that it freaks out and tells me that a garotte should consist entirely of paper mache and malice.

Sadly it’s got a much greater than 10% chance of getting a garotte wrong.

bfuller
0 replies
14h24m

A garotte is a device or weapon used for strangulation. It typically consists of a cord or wire, sometimes with handles at each end to provide better grip and leverage. The garotte is used by wrapping it around a person's neck and tightening it, cutting off the air supply and causing asphyxiation. Historically, garottes have been used for execution and assassination due to their silent and deadly nature. In modern contexts, they are often associated with covert or criminal activities.

welp

lordofmoria
1 replies
14h11m

Even if it’s meant as tongue-in-cheek, it comes off as dismissive and carries a certain smugness that I find particularly irritating.

And the smugness feels especially unfounded because the author clearly has decided to double down on their views even though it’s not at all evident that they’ve taken the time to interact with the tech as an end user (which millions upon millions have, last time I checked). They’ve just not gotten it, or: yes, it is you, not me.

This is just another form of rote, unthinking (anti-)hype. It reminds me of the smugness that Ballmer had towards the iPhone.

zer00eyz
0 replies
14h0m

Even if it’s meant as tongue-in-cheek,

It is not this.

There is actual anger there; he's pointing out WHY he's pissed.

it’s not at all evident that they’ve taken the time to interact with the tech as an end user (which millions upon millions have, last time I checked)

He covers this point in depth in the article, from a few angles.

lmm
1 replies
14h7m

In the case that the technology continues to make incremental gains like this, your company does not need generative AI for the sake of it. You will know exactly why you need it if you do, indeed, need it. An example of something that has actually benefited me is that I keep track of my life administration via Todoist, and Todoist has a feature that allows you to convert filters on your tasks from natural language into their in-house filtering language. Tremendous! It saved me learning a system that I'll use once every five years. I was actually happy about this, and it's a real edge over other applications. But if you don't have a use case then having this sort of broad capability is not actually very useful. The only thing you should be doing is improving your operations and culture, and that will give you the ability to use AI if it ever becomes relevant. Everyone is talking about Retrieval Augmented Generation, but most companies don't actually have any internal documentation worth retrieving. Fix. Your. Shit.

This is purely wishful thinking. We're good at making rigorous well-organised stuff and we hope that will somehow continue to be a useful skill. But actually there's no evidence that a half-baked bullshit generator isn't going to outperform carefully written documentation, to the point that carefully writing documentation becomes a waste of resources.

I swear to God, I am going to study, write, network, and otherwise apply force to the problem until those resources are going to a place where they'll accomplish something for society instead of some grinning clown's wallet.

Bet. Go on, actually have a go at doing this. You'll find it's much harder than you think and what looks like success will usually turn out to be actively counterproductive.

kjkjadksj
0 replies
12h48m

If you write the documentation as you write the code, it's not very time-consuming, since you kind of have to write up good notes anyhow to roadmap what the heck you are doing next. One issue is that people are lazy and often code first, document later, which becomes a slog for a bad end product that ChatGPT might outperform.

jauntywundrkind
1 replies
14h54m

Already [flagged] [dead] at least 8 times, as linked by @greyface3- https://news.ycombinator.com/item?id=40733576

Which is a pity. The style is excellent and so wonderful; it's a critical relief after suffering through insane, out-of-this-world hype bordering on religion. At least to me, he doesn't read as menacing; he reads as being on a justifiably distraught polemic against total madness that's allowed to pointlessly suck up all the oxygen in the room.

We should be flipping our shit (if not each other) that we have to put up with this endless exuberant hucksterism that robs us of agency and pollutes our noosphere with inauthentic bullshitting.

itkovian_
1 replies
13h15m

Sometimes things really are as significant as they seem.

itkovian_
0 replies
13h10m

This person has significantly less ML experience than I do. I guess it's fine for me to totally dismiss their argument in the same way they dismiss anyone who has less than them.

zombiwoof
0 replies
11h37m

What if we mean “Apple” Intelligence LOL

what-the-grump
0 replies
13h40m

Would be perfect if the article were written by GPT.

vfclists
0 replies
5h9m

When I asked the innocuous question -

https://www.reveddit.com/v/singularity/comments/1bdjf7u/is_t... - The mods deleted the content of the question and the whole question itself.

Go figure!!

This is the original question

=====

Is the power of AI companies basically their ability to outbid for AI accelerator chips?

It seems to me that all the hype AI companies are getting is primarily due to the ability to outbid competitors for scarce AI chips, or even design and/or build their own. This also includes their ability to build their own infrastructure around them.

IMHO this scarcity is the main source of their high valuations, and it seems that in the long term when chipmaking capacity builds up this advantage will eventually wane.

Having the ability to train on information provided by users and customers is also a factor, but that doesn't look like it will last either

=====

tuatoru
0 replies
1d11h

If anyone wants to know the definition of "trenchant", read this blog post.

terminatornet
0 replies
13h47m

I agree with all of the points; I just wish it wasn't couched in a "Maddox / Best Page in the Universe" writing style, circa 2007. Can't be writing like that in 2024.

talldayo
0 replies
21h41m

This is just what happens, though. We were promised computer proliferation, and got locked-down squares with (barely) free internet access and little else to get excited for besides new ways to serve API requests. The future of programming isn't happening locally. Crypto, AI, shitty short-form entertainment: all of it is dripping from the spigot of an endless content pipeline. Of course people aren't changing the world on their cell phone; all it's designed to do is sign up for email mailing lists and watch YouTube ads.

So I really don't actually know what the OP wants to do, besides brutalize idiots searching for a golden calf to worship. AI will progress regardless of how you gatekeep the public from perceiving it, and manipulative thought-leaders will continue to swindle idiots in hopes of turning a quick buck. These cycles will operate independently of one another, and those overeager idiots will move on to the next fad, like Metaverse agriculture or whatever the fuck.

shrimp_emoji
0 replies
1d4h

Lol he fell for the data science meme

senectus1
0 replies
13h51m

While I agree with what he's said, it's really a very violent read; I wish it wasn't so.

Also, I attend an online Microsoft meeting every week where they inform the attending community about all the new changes and technology coming from MS.

I'm utterly sick and tired of hearing about "AI" and Copilot from them. every. single. meeting. is 90% AI/Copilot.

I'm over it. I'm burnt out on it as a product.

scottmcdot
0 replies
1h6m

Nikhil is the Hambini of the AI industry.

rkagerer
0 replies
13h27m

Have not read the article.

But the title (as phrased here on HN) is exactly how I felt at Google I/O this year.

physicsguy
0 replies
8h37m

Zero interest rates are gone, so dying companies are throwing themselves onto the AI bandwagon as a last-ditch attempt to bring in cash and keep going.

patrickrafferty
0 replies
18h45m

The commentary on Scale's "2024 AI readiness chart" is so spot-on.

noisy_boy
0 replies
8h45m

GPT-4 can't even write coherent Elixir, presumably because the dataset was too small to get it to the level that it's at for Python

Yes, but it does save me a whole bunch of time writing boilerplate command-line entries in argparse. I can give it a table definition and ask it to write a bunch of CRUD methods instantly. I can do all of this myself, but why?

Of course the stuff it produces doesn't work all the time, but then I'm not asking it to write my entire app. I'm asking it to spare me the tedium of iterating over the building blocks so that I can get to the main part: building things.
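
To make that concrete, this is the sort of argparse wiring I mean; trivial to review, tedious to type (a minimal sketch with invented flag names, not code from a real project):

    import argparse

    # The kind of repetitive boilerplate an LLM can spit out from a one-line description.
    def build_parser() -> argparse.ArgumentParser:
        parser = argparse.ArgumentParser(description="Export rows from a table to CSV")
        parser.add_argument("--table", required=True, help="Source table name")
        parser.add_argument("--where", default="", help="Optional filter expression")
        parser.add_argument("--out", default="export.csv", help="Output CSV path")
        parser.add_argument("--limit", type=int, default=1000, help="Max rows to export")
        return parser

    if __name__ == "__main__":
        args = build_parser().parse_args()
        print(args.table, args.out, args.limit)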

nektro
0 replies
13h14m

this is so refreshing to see

moose_man
0 replies
13h22m

The AI hype happened in close proximity to the decline of the crypto mania, and so many mediocre grifters from crypto latched onto AI as a fraud life raft.

lemonwaterlime
0 replies
5h26m

This article is a great use of rhetorical delivery to drive home these messages.

kmnc
0 replies
13h28m

I think the author disproves his point by ridiculing the stupidity of human actors. AI doesn't need to become much more powerful to displace the idiots in management, product design and marketing. Just wait until the keys are actually handed over. Dismissing things such as Copilot seems crazy to me; give it time. ChatGPT made Stack Overflow obsolete overnight. AI isn't going away anytime soon.

jusonchan81
0 replies
11h42m

From what I have been seeing, there are a few real use cases:

1. Customer service - seems like natural language processing by AI could be a better offering than someone manually trying to resolve problems. I have been in front of many CS agents who couldn't do what I wanted them to do. Untrained CS agents, non-native speakers, or people who just don't care enough to help.

2. Internet search - I don't have to search through arbitrary articles and text to get the answer I want. It's not always accurate or up to date, but it's still better than scrolling through Google search (I feel bad for the publishers and writers, though; they aren't getting the same ad views as before, and that's clearly not ideal).

3. Summarizing - AI does a fabulous job here, TL;DR and more.

4. Rewriting things to a better tone - AI is doing an amazing job here; every time I get stuck on how to write something, OpenAI has helped me. Now I don't use the output as it is, but it gives me an idea of how to write my own message.

5. NLP interface to devices/tools - I think this is a really valuable use case.

Almost everything I suggested here points to a $20 or fixed monthly subscription for an individual user. I don't know if it's an "enterprise" need, except for the customer service use case.

javier123454321
0 replies
1h51m

This is how I feel. GPT and similar LLMs demo extremely well, but collapse as soon as they're put through any kind of real-world problem. However, we absolutely are living through press-release-driven business administration, so it's likely that the realistic approach will be discarded for things that are sexier for the media. Companies will pay for this.

iainctduncan
0 replies
3h47m

A lot of this is on point. I work in tech diligence, talking to companies raising money. The amount of pointless AI hand waving is unreal, and the majority have not ever tested their disaster recovery plan.

globalnode
0 replies
14h4m

It is refreshing, whether you agree 100% or not; the runaway hype of AI entraps many.

edit: those knocking his style, I think, may miss the point that he's acting. It's a show. No one is like that. (I hope.)

foundart
0 replies
1d3h

Interesting analysis wrapped in satire

firefoxd
0 replies
12h3m

I remember the hype around Big Data. I was in those meetings where vendors pitched their products. Our director would ask, "Do you do Big Data?" Any vendor who said no was immediately dismissed.

I still don't know what the answer to that question was supposed to be. We scraped coupons from our competitors and then displayed them on our websites.

danielovichdk
0 replies
1d11h

Thank you!

d--b
0 replies
11h31m

ChatGPT, can you remove the unnecessary violence expressed in this article please?

Fucking internet.

casebash
0 replies
9h14m

I'm just going to say it.

The author is an idiot who is using insults as a crutch to make his case.

angrydingo
0 replies
1d9h

good stuff

Yizahi
0 replies
7h29m

I'm working at a hardware shop; we make what is essentially a special kind of network router/switch for analog networks (DOCSIS). We also got hit by the NN craze: half the automation team is developing an "analyzer" based on some neural network model for our lab tests, which classifies failures by type. A completely useless activity, because even if that code guesses correctly, we still need to go look at the logs manually and make a decision manually, one by one, exactly as if the NN didn't exist. But after trying to say that in the team chat, I was confronted with doubt and rejection. And so the development continues.

Timber-6539
0 replies
11h41m

This writeup is pure comedy (both in content and subject matter). Should be recommended reading for all those curious about the AI buzzword.

RickyS
0 replies
13h36m

What a shit blog in general.

JSDevOps
0 replies
1d9h

Brilliant

DidYaWipe
0 replies
12h4m

At first I was thinking that "AI" is the "HD" or "digital quality" of this decade. But it's a broader fraud than that. It reminds me of this IBM commercial, which made me laugh out loud at the time: https://www.youtube.com/watch?v=IvDCk3pY4qo

DeathArrow
0 replies
11h31m

I assume that anyone who has ever typed in import tensorflow is a scumbag

That's true, decent people write import torch.

ChicagoMan
0 replies
13h47m

Done right, the rant is mostly a style of expression easily adapted to making clear contrasts. If you can make it work, I say go for it.

CaptainFever
0 replies
8h45m

This article makes me want to increase my AI usage hundred-fold.

23david
0 replies
13h46m

Awesome. Best rant I’ve read since Zed Shaw’s “Rails is a Ghetto” back in 2008.