phoneymouse@lemmy.world to People Twitter@sh.itjust.works · 1 month ago
Why is no one talking about how unproductive it is to have to verify every "hallucination" ChatGPT gives you?
WalnutLum · 1 month ago
All LLMs are text-completion engines, no matter what fancy bells they tack on. If your task is some kind of text completion, or repetition of text provided in the prompt context, LLMs perform wonderfully. For everything else, you are wading into territory you could probably handle more easily with other methods.
burgersc12@mander.xyz · 30 days ago
I love the people who are like "I tried to replace Wolfram Alpha with ChatGPT, why is none of the math right?" and then blame ChatGPT, when all they really needed was a fucking calculator.