geneva_convenienceOP · 23 hours ago

ChatGPT undoubtedly gives markedly different answers when asked about Palestine than when asked about other human rights violations or genocides. Normally ChatGPT is happy to quote human rights organisations as expert opinions, but when it comes to Israel those organisations hold views less convenient to its narrative.

What I think happens is that ChatGPT does not have interns judging it, but an additional oversight AI that looks at the final response and determines whether its emotional tone falls within allowed bounds. If the generated response is exceedingly negative about a subject, the system either aborts the response or regenerates it until it produces something that passes the emotion check.
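To make the speculation concrete, here is a minimal sketch of the kind of generate-then-check loop being described. This is purely hypothetical: `generate_draft`, `emotion_score`, the threshold, and the retry count are all assumptions standing in for whatever generator and oversight model such a system might actually use; nothing here reflects any confirmed OpenAI implementation.

```python
# Hypothetical sketch of an oversight loop: a second model scores the
# draft's emotional tone and the system retries or refuses if the draft
# falls outside the allowed bounds. All functions are stand-ins.

MAX_RETRIES = 3
NEGATIVITY_THRESHOLD = 0.8  # assumed cutoff for "too negative"


def generate_draft(prompt: str) -> str:
    """Stand-in for the primary generator model."""
    return f"Draft answer to: {prompt}"


def emotion_score(text: str) -> float:
    """Stand-in for the oversight model; 0.0 = neutral, 1.0 = very negative."""
    return 0.5


def answer(prompt: str) -> str:
    for _ in range(MAX_RETRIES):
        draft = generate_draft(prompt)
        if emotion_score(draft) <= NEGATIVITY_THRESHOLD:
            return draft  # passes the emotion check, return to the user
    # every draft exceeded the bound: refuse instead of returning one
    return "I'm sorry, I can't help with that."


if __name__ == "__main__":
    print(answer("Example question"))
```

In this sketch the oversight model never edits the text itself; it only gates what the generator produced, which would explain why answers on some topics come back blander or get replaced with a refusal.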