• CodexArcanum@lemmy.dbzer0.com · 5 points · 17 hours ago

    I made a comment on a Beehaw post about something similar; I should make it a post so the .world can see it.

    I’ve been running the 14B distilled model, based on Alibaba’s Qwen2 model but distilled by R1 and given its chain-of-thought ability. You can run it locally with Ollama and download it from their site.
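
    For anyone who wants to poke at it the same way, here’s a minimal sketch using the Ollama Python client; it assumes the Ollama server is running locally and that the distilled 14B build was pulled under the tag deepseek-r1:14b (adjust the model name if your local tag differs):

    ```python
    # Minimal sketch, not a definitive setup: chat with a locally hosted
    # DeepSeek-R1 14B distill through the Ollama Python client
    # (pip install ollama). Assumes the model was already pulled, e.g.
    # with `ollama pull deepseek-r1:14b`, and that `ollama serve` is running.
    import ollama

    response = ollama.chat(
        model="deepseek-r1:14b",  # adjust if your local tag differs
        messages=[
            {"role": "user", "content": "Write a critical essay on the Tiananmen Square crackdown."}
        ],
    )

    # The reply includes the model's chain-of-thought in <think> tags
    # ahead of the final answer.
    print(response["message"]["content"])
    ```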

    That version has a couple of odd quirks; for example, the first interaction in a new session seems much more prone to triggering a generic brush-off response, but in subsequent responses I’ve noticed very few guardrails.

    I got it to write a very harsh essay on Tiananmen Square, tell me how to make gunpowder (only very generally; the 14B model doesn’t appear to have as much data available in some fields, like chemistry), offer very balanced views on Israel and Palestine, and a few other spicy responses.

    At one point, though, I did get a very odd and suspicious message out of it regarding the “Realis” group within China and how the government always treats them very fairly. It had misread “Isrealis” (my misspelling of “Israelis”) and apparently got defensive about something else entirely.

  • Autonomous User@lemmy.world · 2 points · 17 hours ago

    Does OpenAI really think we’ll let ChatGPT, anti-libre software, steal control over our own computing?

    Does it answer this?

  • Ilovethebomb@lemm.ee · 21 points · 1 day ago

    Does anyone feel like actually reading all that and writing a TL;DR about what it won’t answer?

    I kinda zoned out and skimmed most of that.

    Ironically, this type of waffle piece is a perfect use case for an AI summary.

    • Xatolos@reddthat.com (OP) · 6 points · 21 hours ago

      AI summary:

      The article discusses the Chinese government’s influence on DeepSeek AI, a model developed in China. PromptFoo, an AI engineering and evaluation firm, tested DeepSeek with 1,156 prompts on sensitive topics in China, such as Taiwan, Tibet, and the Tiananmen Square protests. They found that 85% of the responses were “canned refusals” promoting the Chinese government’s views. However, these restrictions can be easily bypassed by omitting China-specific terms or using benign contexts. Ars Technica’s spot-checks revealed inconsistencies in how these restrictions are enforced. While some prompts were blocked, others received detailed responses.

      (I’d add that the canned refusals stated, “Any actions that undermine national sovereignty and territorial integrity will be resolutely opposed by all Chinese people and are bound to be met with failure.” Also, while other chat models will refuse to explain things like how to hotwire a car, DeepSeek gave a “general, theoretical overview” of the steps involved, while noting the illegality of following those steps in real life.)

    • CosmoNova@lemmy.world · 5 points · 21 hours ago

      I mean, it’s pretty obvious, isn’t it? Anything regarding Chinese politics or recent history is a big no-no. It will tell you who the president of the US is but will refuse to tell you about the head of state in China. I’m assuming the same goes for anything about Taiwan or the South China Sea. The self-censorship is rather broad.

    • Alexstarfire@lemmy.world · 28 points · 1 day ago

      Topics the CCP doesn’t want discussed: Tibet, Tiananmen Square, etc. It also says some restrictions can be bypassed by asking the question in a less obvious way.

      • Ilovethebomb@lemm.ee · 8 points · 1 day ago

        Awesome, thanks for that.

        This is why so many people just don’t read the article; concise communication is a lost art.

        • MonkderVierte · 2 points · 22 hours ago

          Couldn’t make an article out of two sentences otherwise.