If they can really train a 600B-parameter model for $6M USD, it's an absolutely mind-boggling achievement. No wonder it's getting downvoted; it puts the capital-intensive US firms to shame for lack of innovation. I'd love to see some benchmarks.
As a side note, this would open up the market for non-English-language LLMs, which could get by with far lower requirements than the mammoth necessities of current models.
There's some more info with benchmarks here; it does as well as, and in some cases better than, top-tier commercial models: https://www.analyticsvidhya.com/blog/2024/12/deepseek-v3/
The trick that makes it possible is the mixture-of-experts approach. While it has 671 billion parameters overall, it only activates 37 billion at a time, making it very efficient. For comparison, Meta's Llama 3.1 uses all 405 billion of its parameters at once. It also has a 128K-token context window, which means it can process and understand very long documents, and it generates text at 60 tokens per second, twice as fast as GPT-4o.
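To make the mixture-of-experts idea concrete, here's a minimal toy sketch. This is not DeepSeek's actual architecture; the layer sizes, expert count, and top-2 routing below are illustrative assumptions. The point is just that the router sends each token to a few experts, so only a fraction of the total parameters do any work per token.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    """Toy mixture-of-experts layer: many experts exist, but each token
    is routed to only top_k of them, so only a fraction of the layer's
    parameters are active for any given token."""
    def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Router scores each token against every expert.
        self.router = nn.Linear(d_model, n_experts)
        # Each expert is a small feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (n_tokens, d_model)
        gate_logits = self.router(x)                          # (n_tokens, n_experts)
        weights, chosen = gate_logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)                  # renormalize over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e                   # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Only top_k of n_experts run per token, so roughly top_k / n_experts of the
# expert parameters are active per token (analogous to 37B active out of 671B total).
layer = ToyMoELayer()
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64])
```

Same idea at scale: the full parameter set has to sit in memory, but the compute per token only scales with the active experts, which is where the efficiency comes from.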
Ty for the benchmarks and extra info. Much appreciated!
no prob