My point is that it doesn’t know for the reasons you gave. It is just hallucinating. So I don’t think the answer is trustworthy. Of course, it seems like a reasonable answer, but how do you get to the conclusion that the answer is truthful? It could just be made up because that was what sounded the most likely to the AI.
I don’t know that the original commenter believed the AI was legitimately giving them industry secrets or anything. It came across as more of a joke than anything. Admittedly, almost no AI answer is trustworthy, but in this case it’s kinda funny that it’s giving an answer that is probably accurate but probably not “brand safe”.