Deepseek collects and processes all the data you send to their LLM, even from API calls. It is a no-go for most business applications. For example, OpenAI and Anthropic do not collect or process data sent via API in any way, and there is an opy-ouy button in their settings that lets you avoid processing of the data sent via the UI.
You can run 'em locally, tho, if their gh page is to be believed. That way you can make sure nothing even gets sent to their servers, instead of just trusting that nothing is processed.
I got it running with ollama locally, works as advertised
https://medium.com/@pedro.aquino.se/how-to-install-and-use-deepseek-r1-a-free-and-privacy-first-alternative-to-openai-save-c838d2e5e04a
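To make the "nothing leaves your machine" point concrete: once a model is pulled into ollama, you can talk to it entirely over localhost. Below is a minimal sketch in Python using only the standard library and ollama's `/api/generate` endpoint on its default port 11434. The model tag `deepseek-r1` and the port are assumptions about your local install, so adjust them to match whatever `ollama list` shows.

```python
# Minimal sketch: query a locally running ollama server over HTTP, so no
# prompt data ever leaves your machine. Assumes you've already run
# `ollama pull deepseek-r1`; the model tag and port are assumptions
# about your setup, not guarantees.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"


def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build the POST request for ollama's non-streaming generate API."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for a single JSON reply instead of a token stream
    }).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def ask(model: str, prompt: str) -> str:
    """Send the prompt to the local server and return the model's reply text."""
    req = build_generate_request(model, prompt)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    print(ask("deepseek-r1", "Summarize mixture-of-experts in one sentence."))
```

The only network traffic here is to localhost; if "make sure nothing gets sent to their servers" is the goal, a packet capture while this runs is an easy way to verify it.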
DeepSeek is an open source project that anybody can run, and it’s efficient enough that even running the full model is affordable for any company.
Since it’s open source, is there a way for companies to adjust it so it doesn’t intentionally avoid saying anything bad about China?
If it was actually programmed that way then yes, you could go in and adjust that, but the model itself is not censored that way and has no problem describing all sorts of Chinese taboo subjects.
Anybody can adjust the weights any way they want.
That doesn’t mean it’s straightforward, or even possible, to entirely remove the censorship that’s baked into the model.
People saying truisms that confirm their biases about shit they clearly know nothing about? I thought I’d left reddit.
It doesn’t mean it’s easy, but it is certainly possible if somebody was dedicated enough. At the end of the day you could even use the open source code DeepSeek published and your own training data to train a whole new model with whatever biases you like.
“It’s possible, you just have to train your own model.”
Which is almost as much work as you would have to do if you were to start from scratch.
It’s obviously not since the whole reason DeepSeek is interesting is the new mixture of experts algorithm that it introduces. If you don’t understand the subject then maybe spend a bit of time learning about it instead of adding noise to the discussion?
I understand the subject well enough to know you can’t back up your claims with evidence. You clearly have an agenda here…
It should be repeated: no American corporation is going to let their employees put data into DeepSeek.
Accept this truth. The LLM you can download and run locally is not the same as what you’re getting on their site. If it is, it’s shit, because I’ve been testing r1 in ollama and it’s trash.
It should be repeated: anybody can run DeepSeek themselves on-premises. You have absolutely no clue what you’re talking about. Keep on coping there though, it’s pretty adorable.
I’m too lazy to look for any of their documentation about this, but it would be pretty bold to believe privacy or processing claims from OpenAI or similar AI orgs, given their history flouting copyright.
Silicon Valley more generally just breaks laws and regulations to “disrupt”. Why wouldn’t an org like OpenAI at least leave a backdoor for themselves to process API requests down the road as a policy change? Not that they would need to, but it’s not uncommon for a co to leave an escape hatch in their policies.
Where do I find this opy ouy button? Sounds tasty
Ok, I still don’t trust them… especially when they have a former NSA chief on their board of directors.