cross-posted from: https://lemmy.ml/post/24102825

DeepSeek V3 is a big deal for a number of reasons.

At only $5.5 million to train, it cost a fraction of what comparable models from OpenAI, Google, or Anthropic cost, where training budgets often run into the hundreds of millions.

It breaks the AI-as-a-service business model that OpenAI and Google have been pursuing by making state-of-the-art language models accessible to smaller companies, research institutions, and even individuals.

The code and weights are publicly available, allowing anyone to use, study, modify, and build upon them. Companies can integrate the model into their products without paying usage fees, and the open release invites collaboration and rapid iteration.
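
For illustration, here is a minimal, hypothetical sketch of what such an integration could look like with Hugging Face transformers, assuming the weights live under the deepseek-ai/DeepSeek-V3 repo id and that you have a multi-GPU machine large enough to hold them:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-V3"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # keep the checkpoint's native precision
    device_map="auto",       # shard the model across all available GPUs
    trust_remote_code=True,  # the repo ships its own model code
)

prompt = "Explain mixture-of-experts in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

In practice most users would go through a hosted API instead, since the full model needs on the order of hundreds of gigabytes of GPU memory.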

The model goes head-to-head with, and often outperforms, models like GPT-4o and Claude-3.5-Sonnet across various benchmarks. It excels in areas that have traditionally been challenging for AI, like advanced mathematics and code generation. Its 128K-token context window means it can process and understand very long documents, and it generates text at 60 tokens per second, twice as fast as GPT-4o.

The Mixture-of-Experts (MoE) approach is key to its performance. While the model has a massive 671 billion parameters in total, it activates only 37 billion for any given token, making it remarkably efficient. Compared to Meta's Llama 3.1, a dense model whose 405 billion parameters are all active at once, DeepSeek V3 uses over 10 times fewer parameters per token yet performs better.
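
To make that concrete, here is a toy top-k routing layer in PyTorch. It is not DeepSeek's actual implementation (which adds shared experts and load-balancing tricks); it just shows the core MoE idea that each token is routed to only a few experts, so compute scales with the active parameters rather than the total count:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Toy Mixture-of-Experts layer with top-k routing (illustrative only)."""
    def __init__(self, dim=512, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(dim, n_experts)  # scores every expert for each token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (n_tokens, dim)
        scores = self.router(x)                     # (n_tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)  # keep only the top-k experts per token
        weights = F.softmax(weights, dim=-1)        # normalize the surviving scores
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e            # tokens whose slot-th choice is expert e
                if mask.any():                      # only the chosen experts do any work
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

layer = MoELayer()
tokens = torch.randn(16, 512)  # 16 tokens, hidden size 512
print(layer(tokens).shape)     # torch.Size([16, 512])
```

Here each token runs through only 2 of the 8 experts, a quarter of the expert parameters; DeepSeek V3 scales the same trade-off much further, touching only 37 billion of its 671 billion parameters per token.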

DeepSeek V3 can be seen as a significant technological achievement by China in the face of US attempts to limit its AI progress. China once again demonstrates that resourcefulness can overcome limitations.

  • FaceDeer@fedia.io · 22 hours ago

    Last year’s leaked “We Have No Moat, And Neither Does OpenAI” memo from inside Google continues to age like fine wine. The big industry leaders spend umpteen billions of dollars forcing their way up to the top of the leaderboards and then just a few weeks or months later some little upstart is nipping at their heels with competition that cost only millions to build. I love it.

    • cyd@lemmy.world · 19 hours ago

      The moat is probably mostly inertia. Microsoft or whoever will offer a code assistant that routes to OpenAI’s model, and users will just use that. Most software moats are like that, rather than being based on intrinsic technological superiority.

  • errer@lemmy.world · 21 hours ago

    I’m 90% sure this article was written by AI. It’s repetitive and unnecessarily long-winded. People realize this sort of writing is crap, right?

    • SayCyberOnceMore@feddit.uk · 18 hours ago

      Yeah, that was my thought too - there are basically the same few details reworded many times.

      I was looking for the part where they’ll want to earn their $5m back…

  • cyd@lemmy.world · 19 hours ago (edited)

    Kudos to Deepseek for continuing to release the code and model under a permissive license. It would be nicer if the weights were under an MIT license rather than a custom one, but I guess they’re afraid of liability. Strange situation we’re in, where the future of open AI (as opposed to “open but actually closed” AI) almost entirely depends on Chinese companies.

    In practice, though, I wonder how many people will actually self-host and tinker with this, since the model is way too large to run on any desktop. It would be very interesting to see downstream use cases and modifications, which are supposed to be a strength of the open-source model. Deepseek themselves don’t seem much concerned with applications; from my understanding, they’re basically funded by a sugar daddy and are happy to just do R&D (funnily enough, that’s kinda what OpenAI was originally supposed to be before they sold out to Microsoft).