• be_gt@lemmy.world · +43 · 9 days ago

    It is a Chinese LLM with the same focus as ChatGPT. The main threat is that it claims to be cheaper to create and run: less hardware needed, and so on. Edit: it is also mostly open source, so that is another factor.

  • CaptainBasculin · +20 · 9 days ago

    It’s a model that was built on a far smaller budget and uses fewer computing resources than OpenAI’s models. It’s also open source (you can run it locally or adapt it to your needs), unlike OpenAI’s models.

    The way AI usage is typically sold for integration into other tools is pay-as-you-go: you pay for what you use. DeepSeek’s models significantly undercut OpenAI’s in pricing while offering similar performance.
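    For a sense of what pay-as-you-go means in practice, here is a minimal sketch of the arithmetic; the per-million-token rates and the monthly volume below are made-up placeholders, not quoted prices from either provider.

        # Back-of-the-envelope pay-as-you-go cost comparison.
        # Rates are $ per 1M tokens and are placeholders, not real prices.
        def monthly_cost(input_tokens: float, output_tokens: float,
                         in_rate: float, out_rate: float) -> float:
            return (input_tokens / 1e6) * in_rate + (output_tokens / 1e6) * out_rate

        usage = dict(input_tokens=50e6, output_tokens=10e6)  # hypothetical monthly volume
        print("provider A:", monthly_cost(**usage, in_rate=15.00, out_rate=60.00))
        print("provider B:", monthly_cost(**usage, in_rate=0.55, out_rate=2.20))

    A cheaper per-token rate compounds quickly at that kind of volume, which is the whole “undercut” argument.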

  • peto (he/him)@lemm.ee · +9/−1 · 9 days ago

    Threaten in the sense that it competes with them, and was built under a different economic system. Got the free-market folk all ruffled because it isn’t playing the game the way they want it played.

    It has its own limitations of course, and the full-strength version isn’t quite “run at home”, but it is pretty close, and there are weaker versions that can even be run on an SBC (if slowly). It’s definitely within reach of the enthusiast, though.

    It’s also supposed to have some strong reasoning capabilities (haven’t tested it myself yet), which is a big thing compared to your common-or-garden chatbots that at best sometimes know facts.

    • Bronzebeard@lemm.ee · +10 · 9 days ago

      As if Western AIs didn’t also censor things.

      This one is open source: you can train over the omissions and run your own instance, unlike the big closed models here.
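      A rough sketch of what “training over the omissions” could look like, assuming the Hugging Face transformers/peft stack and a distilled checkpoint id like deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B (both the checkpoint choice and the hyperparameters are assumptions, not something stated in this thread):

          # Hypothetical LoRA fine-tune over topics the stock weights avoid.
          # Model id, dataset, and hyperparameters are illustrative assumptions.
          from datasets import Dataset
          from peft import LoraConfig, get_peft_model
          from transformers import (AutoModelForCausalLM, AutoTokenizer,
                                    Trainer, TrainingArguments)

          model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed checkpoint
          tokenizer = AutoTokenizer.from_pretrained(model_id)
          model = AutoModelForCausalLM.from_pretrained(model_id)

          # Low-rank adapters: only a small set of extra weights gets trained.
          model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                                   task_type="CAUSAL_LM"))

          # Toy stand-in for whatever Q&A data covers the omitted topics.
          texts = ["Q: <a question the stock weights dodge>\nA: <the answer you want>"]

          def tokenize(batch):
              enc = tokenizer(batch["text"], truncation=True, max_length=512)
              enc["labels"] = [ids.copy() for ids in enc["input_ids"]]
              return enc

          ds = Dataset.from_dict({"text": texts}).map(tokenize, batched=True,
                                                      remove_columns=["text"])

          trainer = Trainer(
              model=model,
              args=TrainingArguments(output_dir="deepseek-tuned",
                                     per_device_train_batch_size=1,
                                     num_train_epochs=1),
              train_dataset=ds,
          )
          trainer.train()
          model.save_pretrained("deepseek-tuned")  # adapters you can reload later

      None of that is possible with a closed, API-only model, which is the point being made here.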

    • Bahnd Rollard@lemmy.world · +10 · 9 days ago

      If you’re using the mobile app or a web app that runs your request on a server in China, then yes, the responses you get will be shaped by their laws.

      You can go get your own copy off of Hugging Face (assuming it’s not getting its own hug of death), run it locally, and train your own model/weights with whatever data you want, and it will tell you anything you want. (You do have the option to use their pre-trained weights, in which case you will get the same result as in the first paragraph.)
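      For the “run it locally” part, here is a minimal sketch using the Hugging Face transformers library; the distilled checkpoint id (deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) is an assumption chosen because it is small enough for ordinary hardware, and the full-size model needs far more memory than this.

          # Minimal local-inference sketch with a small distilled checkpoint.
          # The model id is an assumption; the full-size model is far larger.
          from transformers import pipeline

          generator = pipeline(
              "text-generation",
              model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
              device_map="auto",  # use a GPU if present, otherwise CPU
          )

          out = generator("Explain what an SBC is in one sentence.",
                          max_new_tokens=128)
          print(out[0]["generated_text"])

      The weights download from Hugging Face on the first run and are cached on your machine after that.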

      This is the reason Sam Altman has been crying himself to sleep for the last week: if you want an equivalent model from OpenAI, it’s in their most expensive plan (IIRC that’s ~$200/mo).

    • Album@lemmy.ca · +7/−4 · 9 days ago

      Look man, I’m all down for anti-Chinese-government sentiment, but at least say something that isn’t this fucking asinine.

      Have you tried a Western LLM? It’s so fucking censored it’s basically useless.