• amemorablename@lemmygrad.ml · 2 days ago

      Probably tuned on a lot of the same kind of slop that models like ChatGPT get tuned on. They (meaning companies generally; I don’t know specifically what Tencent says about it) frame it as being about “safety and ethics,” but in practice it amounts to corporate sanitizing: reducing as much as possible the chance of the model saying something that could reflect badly on the company that made it. There may not even be any tuning specific to Palestine, just “this is a nuanced and complicated issue” generic slop that covers politics more broadly.

      • amemorablename@lemmygrad.ml · 2 days ago

        I may have underestimated how shitty it is on this. Did some playing around and it got all high and mighty about me describing Israel as committing genocide, saying there isn’t evidence for one, blah blah blah. Mind you, LLMs can get up to all kinds of bullshit (what some term “hallucination”) and take one position one time and a different one another time, so this doesn’t mean it’s deterministically tuned to take the stance it did. Earlier in the test convo it also said something about China’s position that more or less matched what I’ve heard about China’s stance on Palestine. But it really wasn’t happy with throwing around the word genocide. Anyway, I still stand by the general point: these corps tune models to try to act neutral so as not to be controversial, even though they definitely aren’t neutral, because everything has bias. I could really get into a rant about how unethical it is to present a model as striving to be neutral when it obviously never can be.