cross-posted from: https://lemmy.ml/post/24102825
DeepSeek V3 is a big deal for a number of reasons.
At only $5.5 million to train, it's a fraction of the cost of models from OpenAI, Google, or Anthropic, whose training runs often cost hundreds of millions.
It breaks the whole AI-as-a-service business model that OpenAI and Google have been pursuing by making state-of-the-art language models accessible to smaller companies, research institutions, and even individuals.
The code is publicly available, allowing anyone to use, study, modify, and build upon it. Companies can integrate it into their products without paying for usage, making it financially attractive. The open-source nature fosters collaboration and rapid innovation.
The model goes head-to-head with and often outperforms models like GPT-4o and Claude-3.5-Sonnet in various benchmarks. It excels in areas that are traditionally challenging for AI, like advanced mathematics and code generation. Its 128K token context window means it can process and understand very long documents. Meanwhile it processes text at 60 tokens per second, twice as fast as GPT-4o.
The Mixture-of-Experts (MoE) approach used by the model is key to its performance. While the model has a massive 671 billion parameters, it only activates 37 billion of them for any given token, making it incredibly efficient. Compared to Meta's Llama 3.1 (a dense model that uses all 405 billion of its parameters at once), DeepSeek V3 activates over 10 times fewer parameters yet performs better.
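To make the MoE idea concrete, here's a toy sketch of top-k routing. This has nothing to do with DeepSeek's actual architecture or sizes; the dimensions, expert count, and PyTorch layers are made up purely for illustration. The point is just that a router scores the experts per token and only the top-scoring ones run, so most of the layer's parameters sit idle for any given token.

```python
import torch
import torch.nn as nn

class ToyMoELayer(nn.Module):
    """Toy Mixture-of-Experts layer: a router picks the top-k experts for each
    token, so only a small fraction of the layer's parameters is active per token."""

    def __init__(self, dim=512, n_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
             for _ in range(n_experts)]
        )
        self.router = nn.Linear(dim, n_experts)  # one score per expert, per token
        self.top_k = top_k

    def forward(self, x):                          # x: (tokens, dim)
        weights, idx = self.router(x).topk(self.top_k, dim=-1)
        weights = weights.softmax(dim=-1)          # mixing weights for the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e           # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

layer = ToyMoELayer()
tokens = torch.randn(16, 512)
print(layer(tokens).shape)  # torch.Size([16, 512])
```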
DeepSeek V3 can be seen as a significant technological achievement by China in the face of US attempts to limit its AI progress. China once again demonstrates that resourcefulness can overcome limitations.
It’s a real game changer, and the trick of using a window into a larger token space is pretty clever. This kind of stuff is precisely why I don’t take arguments that LLMs are inherently wasteful and useless very seriously. We’re just starting to figure out different techniques for using and improving them, and nobody knows what the actual limits are. I’m also very optimistic that open-source models are consistently catching up to and surpassing closed ones, meaning that the tech continues to stay available to the public. This was a pretty fun write-up from a little while back, but it still holds up well today: https://steve-yegge.medium.com/were-gonna-need-a-bigger-moat-478a8df6a0d2
Ah yeah, I remember when the “We Have No Moat” article dropped. It’s wild because for years I was on the cutting edge of what was going on: tinkering with Java-based neural network apps, then Python-based tensor libraries. Right around when transformers dropped I was pulled away from my hobbies for familial reasons, and I’ve been playing catch-up ever since. Everything is happening very fast, and I’ve got so much to do that I just can’t find time to stay on top of it all. Or the money, tbh. But yeah, there’s a lot of potential here that the Left (in these parts) has plugged its ears about. Especially as resistance is moving in a more physical direction, while the infrastructure of our oppression is built on the cloud.
I saw this interesting video the other day. Basically, since some of these mini-PCs share their memory with the onboard GPU, they can load up 70B models. Slow as hell, but if you’re running everything through a queue it’d be pretty handy.
https://www.youtube.com/watch?v=xyKEQjUzfAk
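For what it’s worth, here’s roughly what I imagine the “run everything through a queue” setup looking like. This is just a sketch: `run_model` is a placeholder for whatever local inference call you’d actually use (llama.cpp bindings, Ollama, etc.), and the point is only that a single slow worker chewing through a backlog is fine when nothing needs to be interactive.

```python
import queue
import threading

def run_model(prompt: str) -> str:
    # Placeholder for a real local-inference call; on a shared-memory mini-PC
    # running a 70B model this could easily take minutes per prompt.
    return f"(model output for {prompt!r})"

jobs = queue.Queue()
results = {}

def worker():
    # One prompt at a time; throughput is terrible but the queue keeps it orderly.
    while True:
        job_id, prompt = jobs.get()
        results[job_id] = run_model(prompt)
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

for i, prompt in enumerate(["Summarize this report.", "Draft a reply to the landlord."]):
    jobs.put((i, prompt))

jobs.join()  # block until the backlog is drained
print(results)
```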
I’ve kind of given up trying to keep up with the details as well; stuff is moving way too fast for that. I’m really encouraged by the fact that open-source models have consistently managed to keep up with, and often outperform, commercial ones.
There’s also stuff like Petals that’s really exciting. It’s basically a similar idea to SETI@home and torrents, where you have a big network doing the computing so the work gets amortized across many machines. This seems like a really good approach for running big models by leveraging volunteer resources.
https://github.com/bigscience-workshop/petals
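The client side is supposed to be pretty minimal, something like the snippet below, going from memory of the Petals README. The class name and the model being served on the public swarm may well have changed since I last looked, so treat it as a sketch rather than working code.

```python
# pip install petals transformers
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

# A model hosted on the public Petals swarm (name taken from the README;
# check the repo for what's currently being served).
model_name = "petals-team/StableBeluga2"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

# The forward pass is split across volunteer machines on the network;
# locally it looks like an ordinary transformers generate() call.
inputs = tokenizer("A distributed swarm of GPUs is", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```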