(sorry if anyone got this post twice. I posted while Lemmy.World was down for maintenance, and it was acting weird, so I deleted and reposted)

  • Fisch · 1 year ago

    It’s not illegal to know. OpenAI decides what ChatGPT is allowed to tell you; it’s not the government.

      • EmoBean@lemmy.world · 1 year ago

        I had a very in-depth, detailed “conversation” about dementia and the drugs used to treat it. No matter what I said, ChatGPT refused to agree that we should try giving PCP to dementia patients, because ooooo nooo, that’s a bad drug, off limits forever, even for research.

        Fuck ChatGPT, I run my own local uncensored WizardLM fine-tune of Llama 2.
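
        For anyone curious, a setup like this can be as simple as the sketch below. It assumes llama-cpp-python (pip install llama-cpp-python) and a quantized GGUF build of the model already downloaded; the file name is just a placeholder, grab whichever quant fits your VRAM.

        ```python
        # Minimal local-LLM sketch, assuming llama-cpp-python and a GGUF
        # model file you've already downloaded. The path is hypothetical.
        from llama_cpp import Llama

        llm = Llama(
            model_path="./wizardlm-13b-uncensored.Q4_K_M.gguf",  # placeholder file
            n_gpu_layers=-1,  # offload every layer to the GPU if it fits
            n_ctx=4096,       # context window size
        )

        out = llm(
            "What NMDA-receptor antagonists are being studied for dementia?",
            max_tokens=256,
        )
        print(out["choices"][0]["text"])
        ```

        No API, no filters, everything runs on your own box.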

        • Fisch · 1 year ago

          Can you run that on a regular PC, speed-wise? And does it take a lot of storage space? I’d like to try out a self-hosted LLM as well.

          • EmoBean@lemmy.world · 1 year ago

            It’s pretty much all about your GPU’s VRAM size. Almost any computer will do if it has a GPU (or two) that can load >8 GB into VRAM; the computation itself isn’t that heavy. Storage only becomes an issue if you want to keep a lot of different LLMs, or larger ones. For your average 7B LLM you’re only looking at ~10 GB of disk space.
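
            The arithmetic behind those numbers, as a rough sketch (this ignores KV cache and runtime overhead, so treat the results as lower bounds):

            ```python
            # Approximate memory footprint of a 7B-parameter model at
            # common precisions: params * bits-per-weight / 8 bytes.
            PARAMS = 7e9

            for name, bits in [("fp16", 16), ("8-bit", 8), ("4-bit", 4)]:
                gib = PARAMS * bits / 8 / 2**30
                print(f"{name:>5}: ~{gib:.1f} GiB")

            # fp16 : ~13.0 GiB  (won't fit an 8 GB card without offloading)
            # 8-bit: ~6.5 GiB   (fits the >8 GB VRAM figure above)
            # 4-bit: ~3.3 GiB   (comfortable on most modern GPUs)
            ```

            Same reason a single 7B model file lands around 3–13 GB on disk depending on quantization.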

            • Fisch · 1 year ago

              I have an AMD GPU with 12 GB of VRAM, but do these even work on AMD GPUs?