• 474D@lemmy.world
    4 hours ago

    I mean, obviously you need to run a lower-parameter model locally; that's not a fault of the model, it just doesn't have the same computational power behind it.

    • AtHeartEngineer@lemmy.world
      2 hours ago

      In both cases I was talking about local models: the 32B-parameter DeepSeek-R1 vs an equivalent uncensored model from Hugging Face.
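
      For anyone wanting to try that kind of side-by-side locally, here's a minimal sketch using the Hugging Face transformers library with 4-bit quantization. It assumes the comparison used the distilled DeepSeek-R1-Distill-Qwen-32B checkpoint; the uncensored repo id is a placeholder you'd swap for whichever community variant you actually pulled.

      ```python
      # Minimal sketch: load a 32B model locally with 4-bit quantization so it
      # fits on consumer GPU hardware. Assumes transformers, accelerate, and
      # bitsandbytes are installed and a CUDA GPU is available.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

      # Official distilled 32B R1 checkpoint on Hugging Face.
      model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
      # Hypothetical placeholder: swap in the repo id of the uncensored variant you use.
      # model_id = "some-user/DeepSeek-R1-Distill-Qwen-32B-uncensored"

      quant = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(
          model_id,
          device_map="auto",          # spread layers across available GPU/CPU memory
          quantization_config=quant,  # 4-bit weights so 32B parameters fit locally
      )

      prompt = "Your test prompt here"
      inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
      outputs = model.generate(**inputs, max_new_tokens=256)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))
      ```

      Running the same prompt through both repo ids is the quickest way to see where their outputs diverge.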