I was going to make an elaborate analogy between LLMs and taxidermy, but I think a bunch of short, direct sentences will do a better job.
LLMs do not replicate human Language¹. Humans don’t simply chain a bunch of words²; we refer to concepts, and use words to convey those concepts. It’s silly to dismiss the incorrect output as “just hallucination” and assume it’ll be fixed later, when it’s a symptom of deeper internal issues.
So at the start, people got excited and saw a few potential uses for the underlying tech. Then you got overexcited morons³ hyping the whole thing up. Now we’re in the rebound, when plenty of people roll their eyes and move on. Later on, at least, I expect two things to happen:
People will be in a better position to judge the usefulness of LLMs.
Text generation will move on to better technologies.
When I say “Language” with a capital “L”, I’m referring to the human faculty underlying languages (lowercase “l”) like Kikongo, English, Mandarin, Javanese, Arabic, etc.
I’m not going into that “what’s a word” discussion here.
My bad, I’m supposed to call them by a euphemism - “early adopters”.
Yeah, pretty much.