The bubble must be repaired. Pump more cash in!
I tried DeepSeek, and immediately fell in love… My only nitpick is that images have to have text on them, otherwise it complains, but for the price of free, I’m basically just asking for too much. Contemporaries be damned.
Come on, OP, Altman is still a billionaire. If he got out of the game right now, with OpenAI still unprofitable, he’d still have enough wealth for a dozen generations.
He’s a billionaire based on the valuation of OpenAI, if the company fizzles so does his wealth.
🙏🏾🙏🏾🙏🏾
We doing paid promotions or something on Lemmy now? You sure seem to be pushing this DeepSeek thing pretty hard, op.
That’s right I’m a huge open source shill.
None of this has anything to do with the model being open source or not, plenty of other people have already disputed that claim.
It’s a model that outperforms the other ones in a bunch of areas with a smaller footprint and which was trained for less than a twentieth of the price, and then it was released as open source.
If it were European or US made, nobody would deem it suspicious if somebody talked about it all month, but it’s a Chinese breakthrough and god forbid you talk about it for three days.
It has everything to do with the tech being open. You can dispute it all you like, but the fact is that all the code and research behind it is open. Anybody could build a new model from scratch using open data if they wanted to. That’s what matters.
I’m commenting on the odd nature of the post and your behavior in the comments, pointing out that it comes across as more a shallow advertisement than a sincere endorsement, that is all. I don’t know enough about DeepSeek to discuss it meaningfully, nor do I have enough evidence to decide upon its open source status.
I don’t really care what you think bud. Stay in your lane.
You might have a far more positive interaction with the community if you learned to listen first before jumping on the defensive
Pretty much all my interactions with the community here have been positive, aside from a few toxic trolls such as yourself. Maybe take your own advice there champ.
Altman didn’t really make his money from tech. He’s basically a magic bean seller. He’ll be fine no matter what happens to AI. He’ll find a new grift and new suckers (famously one born every minute after all)
I’m sure he will, but at least this grift has run its course.
What’s a deepseek? Sounds like a search engine?
Deepseek is a Chinese AI company that released Deepseek R1, a direct competitor to ChatGPT.
You forgot to mention that it’s open source.
Nice! What are they competing for? I’m new to this AI business thing.
Market share, in a market that’s still largely speculative at this point.
So far, they are training models extremely efficiently even with the US gatekeeping their GPU supply and doing everything it can to slow their progress. Any innovation that makes models more efficient to train and operate is great for the accessibility of the technology and for reducing the environmental impact of this (so far) very wasteful tech.
Deepseek is a Chinese AI company
Oh. So, military, then.
How nice of the Chinese military to make their weapon open source and release it to the world lmao
Based on what info?
You can say the same thing about any US AI company. Of course the local terrorists want in
DeepSeek collects and processes all the data you send to their LLM, even from API calls. That’s a no-go for most business applications. For example, OpenAI and Anthropic do not collect or process data sent via the API at all, and there is an opt-out button in their settings that lets you avoid processing of data sent via the UI.
DeepSeek is an open source project that anybody can run, and it’s efficient enough that even running the full model is affordable for any company.
Since it’s open source is there a way for companies to adjust so it doesn’t intentionally avoid saying anything bad about China?
Anybody can adjust the weights any way they want.
That doesn’t mean it’s straightforward, or even possible, to entirely remove the censorship that’s baked into the model.
It doesn’t mean it’s easy, but it is certainly possible if somebody was dedicated enough. At the end of the day you could even use the open source code DeepSeek published and your own training data to train a whole new model with whatever biases you like.
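As a toy illustration of the “anybody can adjust the weights” point (this is a generic gradient-descent sketch, nothing to do with DeepSeek’s actual training code): if you have a model’s published weights, you can keep training on your own data, and the behaviour drifts toward whatever your data encodes.

```python
# Toy fine-tuning sketch: start from "published" weights and continue
# training on your own data to shift the model's behaviour.
# Generic gradient descent on a one-parameter linear model, purely
# illustrative -- not DeepSeek's real training pipeline.

def predict(w, b, x):
    return w * x + b

def fine_tune(w, b, data, lr=0.01, steps=1000):
    """Continue training (w, b) on (x, y) pairs with squared-error loss."""
    for _ in range(steps):
        for x, y in data:
            err = predict(w, b, x) - y
            w -= lr * err * x   # gradient of (err**2)/2 w.r.t. w
            b -= lr * err       # gradient of (err**2)/2 w.r.t. b
    return w, b

# "Published" weights approximate y = 2x.
w0, b0 = 2.0, 0.0
# Our own data encodes a different relationship: y = 3x + 1.
own_data = [(x, 3 * x + 1) for x in range(-3, 4)]
w1, b1 = fine_tune(w0, b0, own_data)
print(round(w1, 2), round(b1, 2))  # weights have moved toward y = 3x + 1
```

Doing this for a 671B-parameter model is obviously a very different engineering problem, but the principle is the same: open weights mean the starting point is yours to change.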
“It’s possible, you just have to train your own model.”
Which is almost as much work as you would have to do if you were to start from scratch.
It’s obviously not since the whole reason DeepSeek is interesting is the new mixture of experts algorithm that it introduces. If you don’t understand the subject then maybe spend a bit of time learning about it instead of adding noise to the discussion?
People saying truisms that confirm their biases about shit they clearly know nothing about? I thought I’d left reddit.
It should be repeated: no American corporation is going to let their employees put data into DeepSeek.
Accept this truth. The LLM you can download and run locally is not the same as what you’re getting on their site. If it is, it’s shit, because I’ve been testing r1 in ollama and it’s trash.
It should be repeated: anybody can run DeepSeek themselves on premise. You have absolutely no clue what you’re talking about. Keep on coping there though, it’s pretty adorable.
You can run ’em locally, tho, if their GitHub page is to be believed. And this way you can make sure nothing even gets sent to their servers, rather than just trusting that nothing is processed.
I got it running with ollama locally, works as advertised
I’m too lazy to look for any of their documentation about this, but it would be pretty bold to believe privacy or processing claims from OpenAI or similar AI orgs, given their history flouting copyright.
Silicon Valley more generally just breaks laws and regulations to “disrupt”. Why wouldn’t an org like OpenAI at least leave a backdoor for themselves to process API requests down the road as a policy change? Not that they would need to, but it’s not uncommon for a company to leave an escape hatch in its policies.
Where do I find this “opy ouy” button? Sounds tasty
Ok, I still don’t trust them… especially when they have a former NSA chief on their board of directors.
why are you so heavily and openly advertising Deepseek?
Because it’s an open source project that’s destroying the whole closed source subscription AI model.
I don’t think you or that Medium writer understand what “open source” means. Being able to run a local stripped down version for free puts it on par with Llama, a Meta product. Privacy-first indeed. Unless you can train your own from scratch, it’s not open source.
Here’s the OSI’s helpful definition for your reference https://opensource.org/ai/open-source-ai-definition
Thanks for clarification!
You can run the full version if you have the hardware, the weights are published, and importantly the research behind it is published as well. Go troll somewhere else.
All that is true of Meta’s products too. It doesn’t make them open source.
Do you disagree with the OSI?
What makes it open source is that the source code is open.
My grandma is as old as my great aunts, that doesn’t transitively make her my great aunt.
A model isn’t an application. It doesn’t have source code. Any more than an image or a movie has source code to be “open”. That’s why OSI’s definition of an “open source” model is controversial in itself.
It’s clear you’re being disingenuous. A model is its dataset and its weights too, but the weights are also open, and if the source code were as irrelevant as you say, DeepSeek wouldn’t be this much more performant, and “Open” AI would have published theirs instead of closing the whole release.
What part of OSI are you claiming DeepSeek doesn’t satisfy specifically?
The data part. ie the very first part of the OSI’s definition.
It’s not available from their articles https://arxiv.org/html/2501.12948v1 https://arxiv.org/html/2401.02954v1
Nor on their github https://github.com/deepseek-ai/DeepSeek-LLM
Note that the OSI only asks for transparency about what the dataset was - a name and the fee paid will do - not that full access to it be free and Free.
It’s worth mentioning too that they’ve used the MIT license for the “code” included with the model (a few YAML files to feed it to software), but they have created their own unrecognised non-free license for the model itself. Why they put this misleading label on their GitHub page would only be speculation.
Without making the dataset available, nobody can accurately recreate, modify or learn from the model they’ve released. This is the only sane definition of open source available for an LLM, since a model is not in itself code with a “source”.
Uh yeah, that’s because people publish data to Hugging Face. GitHub isn’t made for huge data files, in case you weren’t aware. You can scroll down to the datasets here: https://huggingface.co/deepseek-ai
So… as far as I understand from this thread, it’s basically a finished model (Llama or Qwen) which is then fine-tuned using an unknown dataset? That would explain the claimed $6M training cost, hiding the fact that the heavy lifting was done by others (US of A’s Meta in this case). Nothing revolutionary to see here, I guess. Small improvements are nice to have, though. I wonder how their smallest models perform, are they any better than llama3.2:8b?
What’s revolutionary here is the use of a mixture-of-experts approach to get far better performance. While it has 671 billion parameters overall, it only uses 37 billion at a time, making it very efficient. For comparison, Meta’s Llama 3.1 uses 405 billion parameters, all active at once. It does as well as GPT-4o in the benchmarks, and excels in advanced mathematics and code generation. Its 128K-token context window means it can process and understand very long documents, and it processes text at 60 tokens per second, twice as fast as GPT-4o.
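The mixture-of-experts idea can be sketched in a few lines of plain Python (a toy illustration of the routing concept, not DeepSeek’s actual architecture): a router scores the input, and only the top-k of the n experts actually run, so the active parameter count stays a small fraction of the total.

```python
# Toy mixture-of-experts routing: only the top_k of n experts run per input.
# Illustrative sketch only -- not DeepSeek's real router or experts.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(x, router_weights, experts, top_k=2):
    """Route input x to the top_k highest-scoring experts only."""
    scores = softmax([w * x for w in router_weights])
    # Rank experts by router score and keep only the top_k.
    ranked = sorted(range(len(experts)), key=lambda i: scores[i], reverse=True)
    active = ranked[:top_k]
    # Weighted sum of the active experts' outputs; inactive experts never run.
    total_gate = sum(scores[i] for i in active)
    out = sum(scores[i] / total_gate * experts[i](x) for i in active)
    return out, active

# Eight "experts", each just a cheap stand-in function here.
experts = [lambda x, k=k: (k + 1) * x for k in range(8)]
router_weights = [0.1 * k for k in range(8)]

out, active = moe_forward(3.0, router_weights, experts, top_k=2)
print(active)  # only 2 of the 8 experts were evaluated for this input
```

Scaled up, this is how a 671B-parameter model can get away with evaluating only ~37B parameters per token: the router picks a small subset of experts for each input, and the rest sit idle.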
I think deepseek opens up new efficient ways for LLM training which in turn increases competition.
Paid influencers are subtle.
🤡