• Corngood · 1 hour ago (edited)

      I keep seeing this sentiment, but in order to run the model on a high end consumer GPU, doesn’t it have to be reduced to like 1-2% of the size of the official one?

      Edit: I just did a tiny bit of reading and I guess model size is a lot more complicated than I thought. I don’t have a good sense of how much it’s being reduced in quality to run locally.
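For a rough sense of the scale involved, here is some back-of-the-envelope arithmetic. The parameter counts below are approximate public figures and assumptions (a full model around 671B parameters, a hypothetical 14B distilled variant), and the calculation covers only the weights themselves, ignoring KV cache and activation memory:

```python
# Back-of-the-envelope VRAM math for running an LLM locally.
# Parameter counts are rough/assumed figures, not official numbers.

def model_vram_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate memory (GB) needed just to hold the weights."""
    bytes_per_weight = bits_per_weight / 8
    return params_billions * 1e9 * bytes_per_weight / 1e9

full    = model_vram_gb(671, 16)  # full ~671B-param model at fp16
quant   = model_vram_gb(671, 4)   # same model, 4-bit quantized
distill = model_vram_gb(14, 4)    # hypothetical 14B distilled model, 4-bit

print(f"full fp16:   {full:,.0f} GB")   # ~1,342 GB
print(f"full 4-bit:  {quant:,.0f} GB")  # ~336 GB
print(f"14B distill: {distill:,.0f} GB")  # ~7 GB, fits a 24 GB consumer GPU
```

So a consumer GPU can't hold the full model at any common precision; what runs locally is either a heavily quantized version or a much smaller distilled model, which is where the "1-2% of the size" impression comes from (14B is about 2% of 671B).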

    • ☆ Yσɠƚԋσʂ ☆OP · 5 hours ago

      What they’re actually in a panic over is companies using a Chinese service instead of US ones. The threat is that DeepSeek becomes the standard everyone uses and gets entrenched. At that point nobody would want to switch to US services.