Seems to exist purely to post misinformation, with repeated claims that Russia is innocent, that the US caused the Ukraine situation, that the US is stopping Ukraine from agreeing to Russia’s super amazing peace deals, etc.
This is the sort of garbage one would expect to find on ML or Hex; is CA intended to be the same low-quality instance?
I feel like you are arguing with me about the OP’s points. I am not sure if it is a Lemmy error, but my comment you replied to first
I don’t feel like you are here defending that person’s acts or being complicit, nor trying to defend misinformation/intolerance with malicious intent, or being disingenuous with semantics. So, in the interest of healthy discussion, I’ll continue.
You don’t need to go far into that person’s history to see examples of their dishonesty and ill temper, if that is the hill you choose to defend. You might need special privileges to see their removed content on other instances.
From your message (sorry if I mistook your words the first time), I imagine now that by

> that

you were not saying intolerance, but misinformation. In that case,
Canada might be a little behind on misinformation laws; it has always lagged when the subject involves technology. But they define the types very well (MDM, as they call it: misinformation, disinformation, and malinformation), qualify the damages, and run campaigns to raise awareness and minimize the effects. https://www.cyber.gc.ca/en/guidance/how-identify-misinformation-disinformation-and-malinformation-itsap00300 https://www.canada.ca/en/campaign/online-disinformation.html
“Misinformation” is serious, causes harm, and should not be used interchangeably with “disagreement”.
Just because OP is complaining about misinformation does not make it any less severe than intolerance when it is used for the same goal: to cause harm.
Even before modern technology, we had laws and procedures for harmful discourse, whether intolerance or misinformation; technology just changes how they apply.
That is why I was suggesting a discussion of well-defined and transparent methods to deal with them, methods that should be constantly reviewed and improved.
Edit: bold line
That’s reasonable. It’s my bad that I was unclear with my use of “that”. It’s fine for you to argue against spreading intolerance, but the main topic of the post is misinformation. Even though, as you rightly argue, the two can often share purpose and goals, and even though I agree with you that both should have clear boundaries set here as to what is allowed and disallowed, they are distinct concepts. To be clear, I’m not drawing the distinction between MDM and intolerance to excuse either of them. Misinformation is bad too, and I agree that we should inform people and root it out where we find it. However, the banhammer is a tool that can make any comment look like a nail, so care should be taken when it is used. Conflating the removal of clearly intolerant takes with the removal of possibly misinformed takes in enforcement actions would be viewed as mod/admin abuse and would lower users’ trust in that server’s admins.
The main example from the OP is the endlesswar community. The user there is pushing takes that are not fully related to “endlesswar” but come from other sources, questionable as some of them may be if we were to analyze each of them carefully. A separate example I have is a comrade I have seen around Lemmy since I joined, https://lemmy.ml/u/yogthos. This user has been constantly pushing narratives, to the point that one might think they could be paid to do it. Over the past couple of years, they have become far more careful about avoiding bans for intolerant takes, and now selectively post articles and graphs that support a specific narrative.
Do these users, or the users who might post a misinformed take within the power-users’ posts, deserve bans? Do we analyze every comment, post, and news source and remove those that meet the criteria for MDM? Do we have a whitelist/blacklist that only permits links to reputable news sites server-wide (to stop someone from creating a community where they allow themselves to post from wherever)? Lemmy.world’s news communities had a Media Bias Fact Check bot that was rather inaccurate and very unpopular.
I support a thorough discussion on how best to deal with it, both locally and across the Fediverse. It’s not “not a problem”, but at the moment I don’t see any fair solutions that don’t rely on an undue amount of mod/admin discretion, beyond removing intolerant takes and downvoting misinformed ones.
E: One solution could be something like SlrPnk’s pleasant politics community, which instituted an AI moderator that reviews comments and issues temp bans for bad behaviour it detects. I’m still a little skeptical of it, as to me it falls under “undue amount of mod/admin discretion”, but at least it takes a lot of the tiring work for admins out of the equation.