That looks good on paper, but while I find ChatGPT good for encouraging critical thinking, I’ve found Meta’s products (Facebook and Instagram) to be sources of disinformation. That makes me have reservations about Meta’s intentions with LLMs. As the article says, the model comes pre-trained, so it’s mostly made up of information gathered by Meta.
Neither Meta nor anyone else is hand-curating their dataset. The fact that Facebook is full of grandparents sharing disinformation doesn’t impact what’s in their model.
But all LLMs are going to have accuracy issues because they’re 1) trained on text written by humans who themselves are inaccurate and 2) designed to choose tokens based on probability rather than any internal logic as to whether an answer is factual.
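That second point can be sketched in a few lines. This is a toy illustration, not a real model: the vocabulary and the probabilities are made up, standing in for what a trained network would output. The point is that the sampling step weighs only probability, never truth.

```python
import random

# Hypothetical next-token distribution for the prompt
# "The capital of France is ___" — numbers are invented for illustration.
vocab = ["Paris", "London", "Berlin"]
probs = [0.7, 0.2, 0.1]  # learned from training text, accurate or not

def next_token(rng=random):
    # random.choices samples proportionally to the weights.
    # Nothing here asks whether the chosen token is factually correct.
    return rng.choices(vocab, weights=probs, k=1)[0]

print(next_token())  # usually "Paris", but sometimes a confident wrong answer
```

A real LLM does this over tens of thousands of tokens at every step, but the principle is the same: high-probability text, not verified text.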
All LLMs are full of shit. That doesn’t mean they’re not fun or even useful in some applications, but you shouldn’t trust anything they write.