Well, but is this really a truthful answer by the AI or is it just predicting what some human would say?
It doesn’t “know” anything. It contains complex associations, but it doesn’t actually have knowledge. Even if it did, the AI cannot “know the truth” unless it was in the meetings that took place regarding its creation or it was explicitly informed of that by someone that was there. It predicted an answer and that’s about the extent of it. For what it’s worth, the answer is definitely accurate in this case. What is your question even asking?
My point is that it doesn’t know for the reasons you gave. It is just hallucinating. So I don’t think the answer is trustworthy. Of course, it seems like a reasonable answer, but how do you get to the conclusion that the answer is truthful? It could just be made up because that was what sounded the most likely to the AI.
I don’t know that the original commenter believed the AI legitimately was giving them industry secrets or anything. It came across as more of a joke than anything. Admittedly, almost no AI answer is trustworthy, but in this case it’s kinda funny that it’s giving an answer that is probably accurate but is probably not “brand safe”.
Does it really matter? The simulacrum is more real than real people these days, at least as far as the system as a whole is concerned.
Why not just let their constructs post at one another while we (mostly-)organics ignore them? Just like Usenet.
That’s the same thing, innit?
No, one is an actual answer to a question and the other is a hallucination with no inherent information to answer the question.