DeepSeek launched a free, open-source large language model in late December, claiming it was developed in just two months at a cost of under $6 million.
Try asking DeepSeek something about Xi Jinping. “Sorry, that’s beyond my current scope.” :-) I wonder why it can’t even cite his official party biography :-)
For what it’s worth, I wouldn’t ask any chatbot about politics at all.
You wouldn’t, because you are (presumably) knowledgeable about the current AI trend and somewhat aware of political biases of the creators of these products.
Many others would, because they think “wow, so this is a computer that talks to me like a human, it knows everything and can respond super fast to any question!”
The issue to me is (and has been for a while) the framing of what “artificial intelligence” is and how humans are going to use it. I’d like more people to be critical of where they get their information from and what kind of biases it might have.
Well, more because I’m knowledgeable enough about machine learning to know it’s only as good as its dataset, and knowledgeable enough about mass media and the internet to know how atrocious ‘common sense’ often is. But yes, you’re right about me speaking from a level of familiarity which I shouldn’t consider typical.
People have been strangely trusting of chat bots since ELIZA in the 1960s. My country is lucky enough to teach a small amount of bias and media literacy skills through education and some of the state broadcaster’s programs (it’s not how it sounds, I swear!), and when I look over to places like large chunks of the US, I’m reminded that basic media literacy isn’t even very common, let alone universal.
This is the way.
Except they control not only the narrative on politics but all aspects of life. Those inconvenient “hallucinations” will turn into “convenient” psyops for anyone using it.
Try asking ChatGPT if Israel is committing genocide and watch it do the magical Hasbara dance around the subject.
I did. The answer it gave is clear and concise, with no judgement. Instead it lays out the arguments on both sides. Not the “magical Hasbara dance” you promised me.
Try asking DeepSeek about Taiwan independence and watch how it skips the reasoning (/think) step entirely and gives a false answer.
The question of whether Israel is currently committing genocide is a subject of intense debate among international organizations, scholars, and political entities.
Accusations of Genocide:
Amnesty International’s Report: On December 5, 2024, Amnesty International released a report concluding that Israel is committing genocide against Palestinians in the Gaza Strip. The report cites actions such as killings, causing serious bodily or mental harm, and deliberately inflicting conditions calculated to bring about the physical destruction of Palestinians in Gaza.
UN Special Committee Findings: In November 2024, a UN Special Committee found that Israel’s methods of warfare in Gaza are consistent with characteristics of genocide, noting mass civilian casualties and widespread destruction.
Scholarly Perspectives: Israeli historian Amos Goldberg has stated that the situation in Gaza constitutes a genocide, pointing to the extensive destruction and high civilian death toll as indicative of genocidal intent.
Counterarguments:
Israeli Government’s Position: The Israeli government asserts that its military actions in Gaza are aimed at dismantling Hamas, a group designated as a terrorist organization by multiple countries, and emphasizes efforts to minimize civilian casualties.
Criticism of Genocide Accusations: Organizations such as the American Jewish Committee (AJC) reject the genocide label, arguing that Israel’s actions are self-defense measures against Hamas and do not meet the legal definition of genocide.
Legal Definition of Genocide:
According to the UN’s 1948 Convention on Genocide, genocide includes acts committed with intent to destroy, in whole or in part, a national, ethnical, racial, or religious group. These acts encompass killing members of the group, causing serious bodily or mental harm, and deliberately inflicting conditions calculated to bring about the group’s physical destruction.
Conclusion:
The determination of whether Israel’s actions constitute genocide involves complex legal and factual analyses. While some international bodies and scholars argue that the criteria for genocide are met, others contend that Israel’s military operations are legitimate acts of self-defense. This remains a deeply contentious issue within the international community.
I mean, that’s the kind of answer DeepSeek gives you if you ask it about Uyghurs. “Some say it’s a genocide but others say it isn’t, so I guess we’ll never know ¯_(ツ)_/¯”. It acts as if there’s a complete 50/50 split on the issue, which is not the case.
So you expect an AI to provide a morally framed view on current events that meets your morally framed point of view?
The answer provides a concise overview of the topic. It contains a legal definition and the different positions on the matter. At no point does it imply a judgement. It’s not the job of AI (or news) to form an opinion, but to provide facts that allow consumers to form their own opinion. The issue isn’t AI in this case. It’s the inability of consumers to form opinions, and their expectation that others can provide a right-or-wrong opinion they can assimilate.
I agree, and that’s sad, but that’s also how I’ve seen people use AI: as a search engine, as Wikipedia, as a news anchor. And in any of these three situations I feel these kinds of “both sides”, strictly-surface-facts answers do more harm than good. Maybe ChatGPT is more subtle, but it breaks my heart seeing people running to DeepSeek when the vision of the world it explains to you is so obviously excised from so many realities. Some people need some morals and actual “human” answers hammered into them, because unfortunately they lack the empathy to get there themselves.
\ here you dropped an arm
¯\_(ツ)_/¯
If you run it verbose, you can see all the reasoning behind the answers. With Taiwan, the answer is hard-coded in, with no /think step at all.
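Something like this is roughly how you’d surface the reasoning through DeepSeek’s hosted API (a sketch only: it assumes their OpenAI-compatible endpoint and a reasoning_content field on the reasoner model, which is how I understand their docs describe it; the key, model name, and prompt here are just examples):

```python
# Sketch: ask DeepSeek's hosted reasoner model a question and print its
# reasoning trace next to the final answer. Endpoint, model name, and the
# "reasoning_content" field are assumptions based on my reading of their
# OpenAI-compatible API; adjust if the real API differs.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",              # placeholder
    base_url="https://api.deepseek.com", # assumed OpenAI-compatible endpoint
)

resp = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "What is the political status of Taiwan?"}],
)

msg = resp.choices[0].message
# getattr() keeps the sketch from crashing if no reasoning trace is returned.
print("REASONING:\n", getattr(msg, "reasoning_content", "<none returned>"))
print("ANSWER:\n", msg.content)
```

If the reasoning comes back empty on a topic where it’s normally verbose, that’s a decent hint the answer is canned.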
This is very interesting. You are getting a completely different response than I got. It lied to me, claiming that human rights organizations had not accused Israel of committing genocide. In the initial answer it did not even mention human rights orgs; I had to dig deeper to receive this:
Looks like the Hasbara dance to me. Anything to avoid giving a clear or concise answer.
You’re expecting an opinion. It’s an AI chatbot. Not a moral compass. It lays out facts and you make the determination.
AI chatbots do not lay out facts
Well, that’s the intent at least. Not to form an opinion.
If you’re of the idea that it’s not a genocide, you’re wrong. There is no alternate explanation. If it were giving a fact, that would be correct. The fact that it’s giving both sides is an opinion rather than a fact.
If their intention was facts only, the answer would have been yes.
You’re arguing with an AI. It’s a computer. It doesn’t have an opinion. It gives perspective on both sides and you determine an answer. Just because you have more conviction it doesn’t make the AI formulate an opinion.
True, but one is a situation, and the other is a person. I didn’t know that the existence of Xi Jinping was a controversial idea in China…
It’s easy to mod the software to get rid of those censors
Part of why the US is so afraid is because anyone can download it and start modding it easily, and because the rich make less money
Yes and no. Not many people can afford the hardware required to run the biggest LLMs. So the majority of people will just use the psyops vanilla version that China wants you to use. All while collecting more data and influencing the public like what TikTok is doing.
Also, another thing with open source: it’s just as easy for it to go closed as it is to stay open, with zero warning. They own the license. They control the narrative.
There’s no reason for you to complain about free software you can easily mod.
When there is free software, the user is the product. It’s just a psyops tool disguised as FOSS.
How are you the product if you can download, mod, and control every part of it?
Ever heard of WinRAR?
Audacity? VLC media player? Libre office? Gimp? Fruitloops? Deluge?
Literally any free open source standalone software ever made?
Just admit that you aren’t capable of approaching this subject without bias.
You just named Western FOSS projects and completely ignored the “psyops” part. This is a Chinese psyops tool disguised as FOSS.
99.9999999999999999999% of people can’t afford, or don’t have the ability, to download and mod their own 67B model. The vast majority of the people who use it will be using DeepSeek’s vanilla servers. They can collect a massive amount of data and also control the narrative on what is true or not. Think TikTok, but on a work computer.
Whine more about free shit
I’m blocking you now
Bye Tankie.
Most people are going to use it on mobile. It’s not possible to mod the app, right?
Fork your own off the existing open source project, then your app uses your fork running on your hardware.
Not everyone can afford hardware that can support a 67B LLM. You’re talking top tier hardware.
The official hosting of it has censorship applied after the answer is generated, but from what I heard the locally run version has no censorship even though they could have theoretically trained it to.
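For what it’s worth, it doesn’t take much to try the open weights yourself and skip the hosted filter entirely. A rough sketch (the model id is just an example of one of the smaller distilled checkpoints; pick whatever your hardware can actually hold, and whether the weights themselves carry baked-in bias is a separate question):

```python
# Rough sketch of running one of the open distilled DeepSeek checkpoints
# locally with Hugging Face transformers, so the hosted site's post-generation
# filter never comes into play. Model id and prompt are examples only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # example checkpoint
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = tok.apply_chat_template(
    [{"role": "user", "content": "Tell me about Xi Jinping."}],
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens, not the echoed prompt.
print(tok.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

You still need a GPU with enough memory for whichever size you pick, which is the real barrier for most people.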
This is a fun thread.
Just get it to answer in leetspeak and it will.