You can also download it and run a local version where you remove all the censors for free
I haven’t seen a way to do that that doesn’t wreck the model
KoboldCpp, Hugging Face, grab a model that fits your VRAM in GGUF format. I think it’s two clicks after it’s downloaded.
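If you’d rather script it than click through, here’s a rough sketch using llama-cpp-python (same GGUF backend that KoboldCpp wraps). The repo and file names are placeholders, swap in whatever quant actually fits your VRAM:

```python
# Rough sketch: fetch a GGUF quant from Hugging Face and run it locally.
# The repo_id and filename below are placeholders -- swap in whatever
# DeepSeek distill quant actually fits your VRAM.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python huggingface_hub

model_path = hf_hub_download(
    repo_id="someuser/DeepSeek-R1-Distill-Qwen-14B-GGUF",  # placeholder repo
    filename="model-Q4_K_M.gguf",                          # placeholder file
)

# n_gpu_layers=-1 offloads every layer to the GPU if it fits.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)
out = llm("Say hello in five words.", max_tokens=32)
print(out["choices"][0]["text"])
```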
I know how to download and run models. What I’m saying is that all the “uncensored” deepseek models are abliterated and perform worse
It’s the same model, your pc just sucks lmfao
I’m not talking about the speed, I’m talking about the quality of output. I don’t think you understand how these models are turned into “uncensored” models, but a lot of the time abliteration messes them up.
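For anyone wondering what abliteration actually does: roughly, you estimate a “refusal direction” from activation differences and project it out of the weights, and that same projection can clip legitimate behavior too, which is where the quality loss comes from. Toy sketch of the math (random tensors, not a real pipeline):

```python
# Toy illustration of the abliteration math (random numbers, not a real model).
import torch

torch.manual_seed(0)
d_model = 64

# Stand-ins for mean residual-stream activations collected on prompts the
# model refuses vs. prompts it answers (real pipelines use thousands of each).
acts_refused = torch.randn(d_model) + 2.0
acts_complied = torch.randn(d_model)

# "Refusal direction" = normalized difference of the means.
r = acts_refused - acts_complied
r = r / r.norm()

# Abliterate a weight matrix that writes into the residual stream:
# W <- W - r r^T W, so the layer can no longer output anything along r.
W = torch.randn(d_model, d_model)
W_abl = W - torch.outer(r, r) @ W

x = torch.randn(d_model)
print(torch.dot(W @ x, r))      # some nonzero component along r
print(torch.dot(W_abl @ x, r))  # ~0: that direction is gone everywhere
```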
Buddy, I have been running and testing 7B and 14B compared to the cloud deepseek. Any sources, any evidence to back what you’re saying? Or just removed and complaining?
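For what it’s worth, that kind of side-by-side test is easy to script: same prompt to any OpenAI-compatible local server (LM Studio, llama.cpp, KoboldCpp, etc.) and to the DeepSeek cloud API. The local base_url and model name here are assumptions, adjust to your setup:

```python
# Sketch of a side-by-side test: send one prompt to a local
# OpenAI-compatible server and to the DeepSeek cloud API, compare outputs.
from openai import OpenAI  # pip install openai

local = OpenAI(base_url="http://localhost:1234/v1", api_key="unused")
cloud = OpenAI(base_url="https://api.deepseek.com", api_key="YOUR_DEEPSEEK_KEY")

prompt = "Summarize the trade-offs of 4-bit quantization."
for name, client, model in [
    ("local", local, "local-model"),    # whatever the local server loaded
    ("cloud", cloud, "deepseek-chat"),  # DeepSeek's hosted chat model
]:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {name} ---\n{resp.choices[0].message.content}\n")
```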
I’m not talking about the cloud version at all. I’m talking about the 32B and 14B models vs. the ones people have “uncensored”.
I was hoping someone knew of an “uncensored” version of deepseek that was good and could run locally, because I haven’t seen one.
I don’t know what you mean by “removed”.
You can do it in LM Studio in like 5 clicks, I’m currently using it.
Running an uncensored deepseek model that doesn’t perform significantly worse than the regular deepseek models? I know how to download and run models; I haven’t seen an uncensored deepseek model that performs as well as the baseline deepseek model.
I mean, obviously you need to run a lower-parameter model locally; that’s not a fault of the model, it’s just not having the same computational power.
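Back-of-the-envelope for why that is: the weights alone take roughly params × bits-per-weight / 8 bytes, so the bigger quants just don’t fit in consumer VRAM. (The ~4.8 bits/weight figure for Q4_K_M is approximate, and this ignores KV-cache overhead.)

```python
# Back-of-the-envelope VRAM math: weights alone take roughly
# params * bits_per_weight / 8 bytes (KV cache and overhead not included).
def approx_weights_gb(params_b: float, bits_per_weight: float) -> float:
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

for params_b in (7, 14, 32):
    gb = approx_weights_gb(params_b, 4.8)  # Q4_K_M is ~4.8 bits/weight
    print(f"{params_b}B @ ~4.8 bpw: ~{gb:.1f} GB of weights")
```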