I can believe it.
In economics, the Jevons paradox occurs when technological progress increases the efficiency with which a resource is used (reducing the amount necessary for any one use), but the falling cost of use induces enough additional demand that total resource use rises rather than falls.
Too many AI language models are just word salad. They spit out very long responses that add nothing of substance. Sometimes it’s kind of like a high schooler desperately trying to hit the paragraph requirement on an essay.
My favorite example so far was when someone asked a car dealership’s chatbot, intended for talking to customers about cars, to write a Python script, and it complied.
Why does the NPC audio not match the text in the ‘tutorial’ video?
For starters:
“Fewer.”
This technology is so goofy that the simple solution might be to prompt with a story that ends right before the NPC says something. Doesn’t necessarily have to be a different story per-character, or even change much beyond appending that character’s dialog and yours. If you feed an LLM most of a chapter from The Hobbit and then end the prompt at “Then Thorin said,” you’re very likely to get some sentences that are in-theme and even in-character.
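The trick above is just string assembly: keep one shared narrative, append the player's line, and cut the prompt off right where the character would speak, so the model's most likely continuation is the reply. A minimal sketch, with the function name and story snippet invented for illustration:

```python
# Sketch of the "end the prompt right before the NPC speaks" idea.
# build_npc_prompt and the story text are made up for illustration;
# the resulting string would go to any completion-style LLM endpoint.

STORY_CONTEXT = (
    "The company marched through the dripping dark of the goblin tunnels, "
    "and at last came out under the stars on the far side of the mountains."
)

def build_npc_prompt(context: str, player_line: str, npc_name: str) -> str:
    """Append the player's dialog, then stop mid-sentence right where
    the named character would answer -- the continuation is the reply."""
    return (
        f"{context}\n"
        f'"{player_line}" you said.\n'
        f'Then {npc_name} said, "'
    )

prompt = build_npc_prompt(STORY_CONTEXT, "Which way now?", "Thorin")
print(prompt)
```

Note that the same context string works for every character; only the trailing `Then {npc_name} said, "` changes, which is why it doesn't need a different story per character.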
Telling the machine what to do with abstract directions produces very silly errors, like how “draw a room with absolutely no elephants” will predictably draw a room with a high positive number of elephants. The great thing about this technology is that it works kinda like how human intelligence works. Too bad we have no goddamn idea how human intelligence works.
I don’t believe it, but okay.