Gg ez, the communists, fascists and ancaps will never be able to survive their own phones and laptops bringing liberalism/democracy to their households

It’s actually left liberal, but the joke is that lemmyml considers us liberals so they’ll eat up the title hopefully

  • Cold Hotman@nrsk.no · 2 years ago

    Small mix-up of terms: they’ve been trained on material that allows them to make certain statements, and then blocked from making them, not retrained.

    It’s dangerously easy to use human terms in these situations; a human who made racist statements at work would possibly be sent for “workplace training”. That’s what I was alluding to.

    Would the effect be that they were blocked from making such statements or truly change their point of view?

    • anova (she/they/it)@beehaw.org · 2 years ago

      > Would the effect be that they were blocked from making such statements or truly change their point of view?

      Does ChatGPT have a point of view? If not, and if we were to, say, block all possible racist statements, then you might be able to say that ChatGPT isn’t “racist,” at least if OpenAI has a reductive sense of what it means to be racist (quite possible, considering they made ChatGPT). That also assumes people only use it as a machine that generates statements, which they haven’t been doing, so there’s a pretty good argument either way that it can behave in a racist manner, even if it can’t make explicit racist statements. That, like you said, is pretty scary.

      It’d probably be easier to think of ChatGPT being racist in the same way we’d say that the US legal system is racist. But that changes a bit if you ascribe it personhood.

      • Cold Hotman@nrsk.no · 2 years ago

        > Does ChatGPT have a point of view?

        Even if it isn’t from a place of intelligence, it has enough knowledge to pass the bar exam (and technically qualify as a lawyer in NY), per OpenAI. Even if it doesn’t come from a place of reasoning, it makes statements as an individual entity. I’ve seen the previous iteration of ChatGPT produce statements with better arguments and reasoning than quite a lot of people.

        Yet, as I understand the way large language models (LLMs) work, it’s more like mirroring the input than reasoning in the way humans think of it.

        Given what seems like rather uncritical use of training material, perhaps ChatGPT doesn’t have a point of view of its own, but rather presents a personification of society, with the points of view that follow.

        A true product of society?
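
The “mirroring” idea above can be illustrated with a toy sketch. This is emphatically not how ChatGPT works (real LLMs use neural networks over token probabilities, not word-pair counts); it’s only a minimal bigram model showing how a system can produce fluent-looking continuations purely by echoing patterns in its training text, with no point of view at all:

```python
from collections import Counter, defaultdict

# Hypothetical tiny "training corpus" for illustration only.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the training data.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length):
    """Greedily emit the most frequent continuation seen in training."""
    out = [start]
    for _ in range(length):
        options = follows[out[-1]]
        if not options:  # no known continuation: stop
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(generate("the", 3))  # echoes a pattern from the corpus
```

Everything the model “says” is recombined training data; whatever biases the corpus carries, the output carries too, which is the personification-of-society point in miniature.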