I firmly believe that LLMs are stochastic parrots and also that humans are too. To the point where I actually think even consciousness itself is a next-token predictor.
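For anyone who wants to see what "next-token prediction" means mechanically, here's a toy sketch (a bigram counter of my own invention, nothing like a real transformer): the whole trick is sampling the next token from a probability distribution conditioned on the context.

```python
import random
from collections import defaultdict, Counter

def train_bigram(corpus: list[str]) -> dict[str, Counter]:
    """Count, for each token, which tokens tend to follow it."""
    counts: dict[str, Counter] = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1
    return counts

def next_token(counts: dict[str, Counter], context: str) -> str:
    """Sample the next token in proportion to how often it followed `context`."""
    followers = counts.get(context)
    if not followers:
        return "<eos>"
    tokens, weights = zip(*followers.items())
    return random.choices(tokens, weights=weights)[0]

corpus = "the parrot repeats what the parrot hears".split()
model = train_bigram(corpus)
tok = "the"
out = [tok]
for _ in range(6):
    tok = next_token(model, tok)
    out.append(tok)
print(" ".join(out))  # stochastic parroting of the training text
```

A real LLM replaces the lookup table with a neural network and a vastly larger context, but the sampling loop is the same shape.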
Almost every time I'm on Hacker News I end up baffled by software engineers feeling entitled to an unfounded opinion on scientific disciplines outside their own field of expertise. I've literally never encountered that level of hubris from anyone else. It's always the software people!
Consciousness is far from fully understood, but having a body and sensorimotor interaction with the environment are already established as fundamental preconditions for cognition, and in turn for consciousness.
Margaret Wilson's 2002 paper "Six Views of Embodied Cognition" is a good read: https://link.springer.com/content/pdf/10.3758/BF03196322.pdf
peace
Often the "stochastic parrot" line is used as a reductive dismissal of what an LLM truly is.
Look at where the industry is headed: multi-modal models. That, I think, is the remaining frontier of LLM <> human parity.
I also have a 15-month-old son. It's totally obvious to me that he's learning by repetition. But his sources of training data are far higher bandwidth than whatever we're training our LLMs on.
It's been a couple of years since GPT-3. It's time to abandon "stochastic parrot" as a derogatory label. Anyone stuck in that mindset is going to be hindered from making significant progress in getting real utility out of AI.