The current state of moderation across various online communities, especially on platforms like Reddit, has been a topic of much debate and dissatisfaction. Users have voiced concerns over issues such as moderator rudeness, abuse, bias, and a failure to adhere to their own guidelines. Moreover, many communities suffer from a lack of active moderation, as moderators often disengage due to the overwhelming demands of what essentially amounts to an unpaid, full-time job. This has led to a reliance on automated moderation tools and restrictions on user actions, which can stifle community engagement and growth.

In light of these challenges, it’s time to explore alternative models of community moderation that can distribute responsibilities more equitably among users, reduce moderator burnout, and improve overall community health. One promising approach is the implementation of a trust level system, similar to that used by Discourse. Such a system rewards users for positive contributions and active participation by gradually increasing their privileges and responsibilities within the community. This not only incentivizes constructive behavior but also allows for a more organic and scalable form of moderation.

Key features of a trust level system (sketched in code after the list) include:

  • Sandboxing New Users: Initially limiting the actions new users can take to prevent accidental harm to themselves or the community.
  • Gradual Privilege Escalation: Allowing users to earn more rights over time, such as the ability to post pictures, edit wikis, or moderate discussions, based on their contributions and behavior.
  • Federated Reputation: Considering the integration of federated reputation systems, where users can carry over their trust levels from one community to another, encouraging cross-community engagement and trust.
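
To make that ladder concrete, here is a minimal sketch of how sandboxing and gradual privilege escalation might fit together. The levels, thresholds, and privilege names below are illustrative assumptions, not Discourse’s actual configuration.

```python
# A minimal sketch of a trust-level ladder in the spirit of Discourse's
# levels 0-4. Thresholds and privilege names are illustrative assumptions.
from dataclasses import dataclass

PRIVILEGES = {
    0: {"post_text"},                                    # sandboxed newcomer
    1: {"post_text", "post_images"},
    2: {"post_text", "post_images", "edit_wiki"},
    3: {"post_text", "post_images", "edit_wiki", "flag_posts"},
    4: {"post_text", "post_images", "edit_wiki", "flag_posts", "moderate"},
}

# Promotion thresholds per level: (days active, posts made).
THRESHOLDS = [(0, 0), (2, 5), (15, 50), (50, 200), (100, 500)]

@dataclass
class Member:
    days_active: int = 0
    posts: int = 0
    trust_level: int = 0

    def recompute_level(self) -> None:
        # Promote to the highest level whose thresholds are all met.
        for level, (days, posts) in enumerate(THRESHOLDS):
            if self.days_active >= days and self.posts >= posts:
                self.trust_level = level

    def can(self, action: str) -> bool:
        return action in PRIVILEGES[self.trust_level]

member = Member(days_active=20, posts=60)
member.recompute_level()
print(member.trust_level, member.can("edit_wiki"))  # 2 True
```

Federated reputation could plug into the same structure, for example by letting an instance accept, perhaps at a discount, the trust level vouched for by another instance it already federates with.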

Implementing a trust level system could significantly alleviate the current strains on moderators and create a more welcoming and self-sustaining community environment. It encourages users to be more active and responsible members of their communities, knowing that their efforts will be recognized and rewarded. Moreover, it reduces the reliance on a small group of moderators, distributing moderation tasks across a wider base of engaged and trusted users.

For communities within the Fediverse, adopting a trust level system could mark a significant step forward in how we think about and manage online interactions. It offers a path toward more democratic and self-regulating communities, where moderation is not a burden shouldered by the few but a shared responsibility of the many.

As we continue to navigate the complexities of online community management, it’s clear that innovative approaches like trust level systems could hold the key to creating more inclusive, respectful, and engaging spaces for everyone.

Related

  • OpenStars@startrek.website · 9 months ago

    This is the model that Wikipedia uses and, while there are most definitely drawbacks, there are also significant benefits. Email spam filters work this way too.

    In one sense, it is a lot like irl democracy - with all the perks and pitfalls therein. For one, it could lead to echo chamber reinforcement, though I don’t think that is a huge deal b/c our current moderator setup can do the same, and if anything a trust system may be less susceptible, by virtue of spreading out the pool of available “moderators” for each category of action.

    The single greatest challenge I can think of to this working is that, like democracy, it is vulnerable to outsider attack: e.g. if someone could fake 100k bots to upvote a particular person’s posts, they could elevate them to high status artificially in a fairly short time. Perhaps this issue could be dealt with by using a weighted voting scheme, so that not all upvotes are equal - e.g. an upvote from a higher-status account would count significantly more than one from an account that is only a few hours old. Note that ofc this only reinforces the echo chamber issue all the more, b/c if you just joined, how could you possibly hope to argue against a couple of people who have been on the platform for many years? The answer, ofc, is that you go elsewhere and start your own place, as is tradition. Which exacerbates still further the issue of finding “good” places, but… that is somewhat a separate matter, needing a separate solution in place for it (or maybe that is too naive of me to say?).
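
    A rough sketch of that weighted-voting idea: an upvote’s weight grows with the voter’s account age and trust level, so a swarm of hours-old bot accounts barely moves the score. The formula and constants are purely illustrative assumptions, not any platform’s real algorithm.

    ```python
    # Sketch of reputation-weighted voting: an upvote's weight saturates with
    # account age and scales with trust level. Illustrative assumptions only.
    import math

    def vote_weight(account_age_days: float, trust_level: int) -> float:
        age_factor = 1.0 - math.exp(-account_age_days / 30.0)  # near 0 for brand-new accounts
        return age_factor * (1.0 + 0.5 * trust_level)

    def weighted_score(votes: list[tuple[float, int, int]]) -> float:
        # Each vote is (account_age_days, trust_level, direction), direction in {+1, -1}.
        return sum(d * vote_weight(age, lvl) for age, lvl, d in votes)

    bot_swarm = [(0.2, 0, +1)] * 1000   # a thousand hours-old accounts
    veterans  = [(400.0, 3, +1)] * 5    # five long-standing, high-trust accounts
    print(round(weighted_score(bot_swarm), 1), round(weighted_score(veterans), 1))  # 6.6 12.5
    ```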

    Btw the word “politics” essentially means “how we agree”, and just as irl we are all going to have different ideas about how to achieve our enormous variety of goals, so too would that affect our preferences for social media. And at least at first, I would expect that many people may hate it, so I would hope that this would be made an opt-in feature by default.

    Also, and for some reason I expect this next point to be quite unpopular, especially among some of the current moderators: we already have a system in place for distinguishing b/t good vs. bad content, or at least popular vs. unpopular - it is called “voting”. I have seen some fairly innocuous replies get removed, citing “trolling” or some such, when someone dares to, get this, innocently ask a question, or perhaps state a known fact out of context (I know, sea-lioning exists too, I don’t mean that). Irl someone might patiently explain why the other person was wrong or insensitive, or just ignore it and move on, but a mod feels a burden to clean up their safe spaces. So now I wonder: will this effect be exaggerated even further, and worse, become capricious as a result?

    Personally I have had several posts that got perhaps 5 downvotes in the first few minutes, but then over the next few hours got >10-100x more upvotes. So are the people looking at something RIGHT NOW more important than the 100 people who would look at it an hour later? Even trickier, what about the order the votes arrive in - would a post survive if the up- and down-votes came in more evenly, or, like a gambler playing their hand, would it get removed if it ever took too many losses in a row, preventing it from ever reaching whatever its true weight would have been? If so, people will aim to always talk in a “safe” manner, b/c nothing else would ever be allowed to be discussed, on the off-chance that someone (or 5 someones) could be offended by it - even if a hundred more studious people would have loved to see it, had they been offered the chance, but, being busier irl, were never given one by the “winner take all” nature of social media posts: a post is either removed or it is not, there really is no middle ground… so far.
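
    To make the ordering question concrete, here is a toy illustration (the -5 removal threshold is an arbitrary assumption): the same set of votes survives or dies depending purely on the order in which they arrive, if removal is triggered by an early running total.

    ```python
    # Toy illustration of the vote-ordering problem: identical votes, different
    # arrival order, different fate. The removal threshold is an arbitrary assumption.
    REMOVAL_THRESHOLD = -5

    def survives(vote_sequence: list[int]) -> bool:
        score = 0
        for vote in vote_sequence:              # +1 for an upvote, -1 for a downvote
            score += vote
            if score <= REMOVAL_THRESHOLD:      # removed the instant it dips too low
                return False
        return True

    early_pile_on = [-1] * 5 + [+1] * 50        # five quick downvotes, then the upvotes arrive
    evenly_spread = ([+1] * 10 + [-1]) * 5      # the very same 50 up / 5 down, interleaved
    print(survives(early_pile_on), survives(evenly_spread))  # False True
    ```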

    So to summarize that last point: mods can be fairly untrustworthy (I say this as a former one myself :-P), but so too can regular people, and since HARD removal takes away people’s option to make up their own minds, why not leave most posts in and let voting do its work? Perhaps a label could be added, with a setting that lets users choose not to be shown “potentially controversial” material.
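
    One way to picture that label-instead-of-removal suggestion is a reader-side filter like the following; the field names and the “potentially controversial” label are purely illustrative assumptions.

    ```python
    # Sketch of soft moderation: flagged posts are labeled, not deleted, and each
    # reader's own setting decides whether labeled posts are shown.
    from dataclasses import dataclass

    @dataclass
    class Post:
        body: str
        labels: set[str]

    @dataclass
    class Preferences:
        hide_labels: set[str]

    def visible_posts(posts: list[Post], prefs: Preferences) -> list[Post]:
        # Keep every post whose labels don't intersect the reader's hidden labels.
        return [p for p in posts if not (p.labels & prefs.hide_labels)]

    feed = [Post("innocuous question", set()),
            Post("spicy take", {"potentially controversial"})]
    prefs = Preferences(hide_labels={"potentially controversial"})
    print([p.body for p in visible_posts(feed, prefs)])  # ['innocuous question']
    ```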

    These are difficult and weighty matters to try to solve.