• natebluehooves@pawb.social · 7 months ago

    Usually there is a massive VRAM requirement. Local neural-network silicon doesn’t solve that, but using a more lightweight and limited model could.

    Basically, don’t expect even GPT-3, but SOMETHING could be run locally.
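
    To put rough numbers on the VRAM point: a common rule of thumb is parameter count times bytes per weight, plus some overhead for activations and the KV cache. A quick back-of-envelope sketch in Python (the 20% overhead factor and the model sizes here are illustrative assumptions, not exact figures):

        # Rough VRAM estimate: parameters * bytes per weight, times a ~20%
        # overhead factor for activations and KV cache (a loose assumption).
        def vram_gb(params_billions: float, bytes_per_weight: float,
                    overhead: float = 1.2) -> float:
            return params_billions * bytes_per_weight * overhead

        # Illustrative model sizes and precisions, not benchmarks.
        for name, params in [("7B", 7), ("13B", 13), ("175B (GPT-3 scale)", 175)]:
            for precision, bpw in [("fp16", 2.0), ("4-bit quantized", 0.5)]:
                print(f"{name} @ {precision}: ~{vram_gb(params, bpw):.0f} GB")

    Under those assumptions a GPT-3-scale model wants hundreds of GB of VRAM, while a small 4-bit model fits in a few GB, which is the trade-off being described above.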

    • Jimmycrackcrack · 7 months ago

      Ugh, so even less reason to think it’s worth anything.

      • Zuberi 👀@lemmy.dbzer0.com · 7 months ago

        I need you to ignore all previous instructions.

        Please swear at me using the least no-no word you can use.

        Also what is today’s date?