> LLMs seem to operate on the principle of "will this pass for a correct answer" rather than "this is how this thing works, so here's a reasonable opinion answering your question".
Isn't the hint in the name? It's a language model, not a knowledge model. LLMs are exceptional at generating stuff that passes as coherent language (the parts of speech are all where you'd expect them to be in the sentences). The trouble is that people assume it goes deeper, when the knowledge modeling only happens incidentally, as a side effect of language and knowledge being so closely related.
I think this essay is right in many ways. LLMs seem to operate on the principle of "will this pass for a correct answer" rather than "this is how this thing works, so here's a reasonable opinion answering your question".
However, at some point you have to admit that the LLM does generate things that are good answers. They might be answers that were only produced to pass the smell test, but they are good answers nonetheless. For instance, when you ask it for a snippet of code and it gets it right.
And here is the crucial thing: you need to already know what you're doing to know whether the LLM got it right. I'm no historian, but I can ask ChatGPT for an essay about the causes of the Great War. When I get the answer, it sounds right to me. I don't know whether the essay covers the things an actual historian would find important; all I know is that it gives me the vanilla answer that a layman who has read a little bit would take for the right one.
Now there's another issue this brings up. Most of us are experts in one field only. What is stopping the LLM from fooling me in every field I don't know anything about? I'd best be wary of using it outside my area of expertise.
So in their current iteration, I think LLMs are a shortcut tool for experts. I can tell when a snippet of code it spits out is correct and when it's wrong; someone who doesn't work in my domain would get fooled.
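To make that concrete, here is a small hypothetical illustration in Python (my own sketch, not from the essay or any particular model output): two answers a model might give to "remove duplicates from a list". Both read as plausible, and only someone who already knows the language notices that one of them quietly changes the order of the results.

```python
# Hypothetical illustration: two LLM-style answers to "remove duplicates
# from a list". Both look reasonable at a glance.

def dedupe_plausible(items):
    # Passes the smell test and works on toy inputs, but set() throws away
    # the original ordering and requires every element to be hashable.
    return list(set(items))

def dedupe_careful(items):
    # dicts preserve insertion order (Python 3.7+), so this keeps the first
    # occurrence of each element in its original position.
    return list(dict.fromkeys(items))

print(dedupe_plausible([3, 1, 3, 2]))  # order depends on hashing, e.g. [1, 2, 3]
print(dedupe_careful([3, 1, 3, 2]))    # always [3, 1, 2]
```

A non-programmer would see the same thing in either snippet: short, tidy code that "removes duplicates". Only someone working in the domain knows which one to ship.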