• QuadratureSurfer@lemmy.world · 24 points · 6 months ago

    I’m just glad to hear that they’re working on a way for us to run these models locally rather than forcing a connection to their servers…

    Even though I would rather run my own models, at the very least this incentivizes Intel and AMD to start implementing NPUs (or maybe we’ll actually see plans for consumer-grade GPUs with more than 24 GB of VRAM?).

    • suburban_hillbilly · 28 points · 6 months ago

      Bet you a tenner that within a couple of years they start using these systems as distributed processing for their in-house AI training to subsidize costs.

      • 8ender@lemmy.world · 6 points · 6 months ago
        That was my first thought. Server-side LLMs are extraordinarily expensive to run; offloading the cost to users makes sense.

      • QuadratureSurfer@lemmy.world · 3 points · 6 months ago

        Similar use cases to what I’m doing right now: running LLMs like Mixtral 8x7B (or something better by the time we start seeing these), Whisper (speech-to-text), or Stable Diffusion.

        I use a fine-tuned version of Mixtral (dolphin-mixtral) for coding purposes.
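        (As a minimal sketch of what querying a local model like that can look like: the snippet below assumes an Ollama server running on its default port 11434 with the dolphin-mixtral model already pulled. Those setup details, and the helper names, are my assumptions, not something from the comment above.)

```python
import json
import urllib.request

def build_generate_request(prompt, model="dolphin-mixtral"):
    # Ollama's /api/generate endpoint takes a JSON body; stream=False
    # asks for the whole completion in a single response object.
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(prompt, host="http://localhost:11434"):
    # Requires a running Ollama server with the model pulled locally.
    body = json.dumps(build_generate_request(prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (only works with the server running):
# print(ask_local_model("Reverse a string in Python, one line."))
```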

        Transcribing live audio for notes/search, or translating audio from other languages using Whisper (especially useful for verifying claimed translations from Russian, Ukrainian, Hebrew, or Arabic, given all of the fake information being thrown around).
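        (A rough sketch of that transcribe-vs-translate distinction, assuming the open-source openai-whisper package; the helper function, the "base" model size, and the file name are illustrative, not from the comment.)

```python
def whisper_options(source_lang=None):
    # Whisper's "translate" task outputs English regardless of the source
    # language, while "transcribe" keeps the original language. Passing
    # language=None lets the model auto-detect the spoken language.
    task = "transcribe" if source_lang in (None, "en") else "translate"
    return {"task": task, "language": source_lang}

# Live usage (needs model weights and an audio file):
# import whisper  # pip install openai-whisper
# model = whisper.load_model("base")
# result = model.transcribe("clip.ogg", **whisper_options("uk"))
# print(result["text"])  # English translation of the Ukrainian audio
```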

        Combine the two models above with a text-to-speech (TTS) system, a vision model like LLaVA, and some animatronics, and I’ll have my own personal GLaDOS: https://github.com/dnhkng/GlaDOS

        And then there’s Stable Diffusion for generating images for DnD recaps, concept art, or even just avatar images.

        • Alphane Moon · 2 points · 6 months ago

          Thank you! I currently use my 3080 dGPU for Stable Diffusion. I wonder to what extent NPUs will be usable with Stable Diffusion XL.