Hey all,
Moderation philosophy posts started out as a personal exercise to put down some of the thoughts on running communities that I’d picked up over the years. As they continued, I started to involve the other admins more heavily in the writing and brainstorming. This most recent post involved a lot of moderator voices as well, which is super exciting! This is a community, and we want voices at all levels to represent the community and how it’s run.
This is probably the first of several posts on moderation philosophy, how we make decisions, and an exercise to bring additional transparency to how we operate.
Oh, you’re definitely right that I’ve seen communities go the other way, and you’re also right to be concerned that the slide into alt-right-friendly territory is more common than the slide into left-ideological purity. Which way a community slides, or whether it slides at all, comes down almost exclusively to the community’s moderation policies and enforcement.
What you’ll also see a lot of the time is a community where cryptofascists infiltrate the discussion with carefully-phrased bigotry, walking up to the ban-line and putting just the tip of their big toe on it. Then, when other community members (rightly and validly) tell them to fuck off, those community members risk getting moderated if their request for off-fucking is phrased too harshly. The alt-right basically uses that kind of bright-line moderation as a shield, and won’t hesitate to report every negative comment they receive as a reply to “But what positive benefits to society do trans people provide? I’m just asking questions.”
So moderating a safe community is hard work, no doubt. There’s a fine line between over- and under-moderating, and we can’t easily rely on a rules-as-written approach to walk it effectively. There always has to be some degree of subjective discretion, but that discretion can’t go so far that it becomes a purity test.
So, yes, I agree with you that the terms “persecution complex” and “echo chamber” can be effectively weaponized, but it’s not the words themselves that are the problem. In their appropriate context, they’d be perfectly accurate and useful for building models and making predictions. But the alt-right is famous for taking everyday words and phrases and trying to use them against us, because, to them, words have no real meaning. They’re used like magical incantations expected to ward us off and confuse us, rather than as tools of communication. See: “You’re being racist against white people!”, “I identify as an attack helicopter!”, or “You’re supposed to be tolerant!”