• sznowicki@lemmy.world · 23 hours ago

    If you use the model, it literally tells you when it will not tell something to the user. Same as the guardrails on any other LLM on the market; just different topics are censored.

    • FolknForage@lemm.ee · 23 hours ago (edited)

      So we are relying on the censor to tell us what they don’t censor?

      AFAIK, and I am open to being corrected, the American models seem to mostly decline requests regarding current political discussions (I am not sure if this is even still true), but I don’t think they treat other topics as taboo (besides violence, drug/explosives manufacturing, and harmful sexual content).

      • sznowicki@lemmy.world · 20 hours ago

        I don’t think they taboo specific topics, but I’m sure the model has a bias towards what people say on the internet, which might not be correct according to people who challenge some views of historical facts.

        Of course the Chinese censorship is super obvious and there by design. The American kind is rather a side effect of certain cultural facts or beliefs.

        What I wanted to say is that all models are shit when it comes to fact checking or seeking the truth. They are good at generating words that look like the truth and in most cases represent the overall consensus of that cultural area.

        I asked the smallest DeepSeek model about the Tiananmen events, and at first it refused to talk about them (while thinking out loud that it should not give me any details because it’s political). Later, when I tried to get it to compare those events to the Solidarity protests, where the former Polish government used violence against the people, it started talking about how a government sometimes has to use violence when the leadership thinks it’s required to bring peace or order.

        Fair enough, Mister Model made by an autocratic country!

        However, compared to GPT and some others I tried, it did correctly count the Rs in the word “tomato”, which is zero. All the others told me it has two Rs.
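
        If you want to double-check that kind of claim without trusting any model, plain string counting settles it (a minimal Python sketch, unrelated to any of the models mentioned):

            # count letter occurrences directly instead of asking an LLM
            word = "tomato"
            print(word.count("r"))  # 0 - no letter R in "tomato"
            print(word.count("o"))  # 2 - the letter that actually appears twice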