Moderation on the Fediverse

Right now, when people install federated server instances of any kind that are open for others to join, they take on the job of instance admin. When membership grows, they attract additional moderators to help with maintenance and with ensuring a healthy community.

I haven’t been admin or mod myself, but AFAIK the moderation work is mostly manual, based on the administrative UI features offered by a particular app. Metrics are collected about instance operation, and federated messages come in from members (e.g. Flag and Block). There’s a limited set of moderation measures that can be taken (see e.g. Mastodon’s Moderation docs). The toughest action that can be taken is to blocklist an entire domain (here’s the list for mastodon.social, the largest fedi instance).

The burden of moderating

I think (but please correct me) that in general there are two important areas for improvement from a moderator’s perspective:

  • Moderation is very time-consuming.
  • Moderation is somewhat of a thankless, underappreciated job.

It is time-consuming to monitor what happens on your server, to act in a timely manner on moderation requests, to answer questions, and to stay informed about other instances that may have to be blocked.

It is thankless / underappreciated because your instance members take the work for granted, and because you are often the bad guy when acting against someone who misbehaved. Moderation decisions are often seen as unfair and are fiercely argued against.

For these reasons instances are closed down, or they are under-moderated and toxic behavior can fester.

(There’s much more to this, but I’ll leave it here for now)

Federating Moderation

From the Mastodon docs:

Moderation in Mastodon is always applied locally, i.e. as seen from the particular server. An admin or moderator on one server cannot affect a user on another server, they can only affect the local copy on their own server.

This is a good, logical model. After all, you only control your own instance(s). But what if the moderation tasks that are bound to the instance got help from ActivityPub federation itself? Copying from this post:

The whole instance discovery / mapping of the Fediverse network can be federated (a rough sketch follows the list below). E.g.:

  • A new server is detected
  • Instance updates internal server list
  • Instance federates (Announce) the new server
  • Other instances update their server list
  • Domain blocklisting / allowlisting actions are announced (with reason)
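
To make this concrete, here is a minimal sketch of what such announcements could look like. This assumes a hypothetical vocabulary extension: the Server object type and the field names are illustrative, not part of ActivityStreams or any existing app.

```typescript
// Hypothetical "new server detected" announcement. The Server object type
// and its properties are illustrative extensions, not standard vocabulary.
interface ServerAnnounce {
  "@context": string[];
  type: "Announce";
  actor: string;              // the announcing instance actor
  object: {
    type: "Server";           // hypothetical extension type
    id: string;               // domain of the newly discovered instance
    firstSeen: string;        // ISO timestamp of first contact
  };
}

// Hypothetical domain-block announcement, carrying a public reason.
interface DomainBlockAnnounce {
  "@context": string[];
  type: "Block";
  actor: string;              // instance actor taking the action
  object: string;             // the blocked domain
  summary?: string;           // public reason, e.g. "spam"
}

const newServer: ServerAnnounce = {
  "@context": ["https://www.w3.org/ns/activitystreams"],
  type: "Announce",
  actor: "https://example.social/actor",
  object: {
    type: "Server",
    id: "https://newly-seen.example",
    firstSeen: new Date().toISOString(),
  },
};
```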

In addition to that, Moderation Incidents can be collected as metrics and federated as soon as they occur (a rough sketch follows this list):

  • User mutes / blocks, instance blocks (without PII, as it is the metric counts that are relevant)
  • Flags (federated after they are approved by admins, without PII)
  • Incidents may include more details (reason for blocking, topic e.g. ‘misinformation’)
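
A minimal sketch of how such an incident metric could be shaped, stripped of any PII; the type and field names here are assumptions for illustration only.

```typescript
// Sketch of a federated moderation-incident metric, stripped of any PII.
// The type and field names are illustrative assumptions, not a standard.
interface ModerationIncident {
  type: "ModerationIncident";       // hypothetical extension type
  instance: string;                 // instance the metric refers to
  kind: "user_block" | "user_mute" | "instance_block" | "flag";
  count: number;                    // aggregate count, no user identities
  topic?: string;                   // e.g. "misinformation", "spam"
  period: { start: string; end: string }; // aggregation window
}

const incident: ModerationIncident = {
  type: "ModerationIncident",
  instance: "https://problematic.example",
  kind: "flag",
  count: 17,
  topic: "misinformation",
  period: { start: "2021-04-01T00:00:00Z", end: "2021-04-08T00:00:00Z" },
};
```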

So a new instance pops up, and all across fedi people start blocking its users. There’s probably something wrong with that instance which may warrant blocklisting. The instance admin goes to the server list, sees a large incident count for that particular server, clicks the entry and gets a more detailed report on the nature of the incidents, then makes the decision whether or not to block the domain for their own instance.
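
The admin-side part of that workflow could boil down to something as simple as the sketch below: aggregate incoming metrics per remote domain and surface the outliers for manual review. The threshold and all names are illustrative assumptions.

```typescript
// Sketch of the admin-side aggregation: sum incoming incident metrics per
// remote domain and surface the ones above a review threshold. The threshold
// and all names are illustrative assumptions; the decision stays manual.
interface IncidentMetric {
  instance: string;   // remote domain the incidents relate to
  count: number;      // aggregate count, no PII attached
}

function domainsNeedingReview(metrics: IncidentMetric[], threshold = 50): string[] {
  const totals = new Map<string, number>();
  for (const metric of metrics) {
    totals.set(metric.instance, (totals.get(metric.instance) ?? 0) + metric.count);
  }
  // Only highlights where to look first; blocking remains the admin's call.
  return [...totals.entries()]
    .filter(([, total]) => total >= threshold)
    .map(([instance]) => instance);
}
```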

Delegated moderation

With Federated Moderation in place it may also be possible to delegate moderation tasks to admins of other instances who are authorized to do so, or even to have ‘roaming moderators’ that are not affiliated with any one instance.

I have described this idea already, but from the perspective of Discourse forums having native federation capabilities. See Discourse: Delegating Community Management. Why would you want to delegate moderation:

  • Temporarily, while looking for new mods and admins.
  • When an instance is under attack by trolls and the like, to ask for extra help
  • When there is a large influx of new users

Moderation-as-a-Service

(Copied and extended from this post)

But this extension to the Moderation model goes further… we can have Moderation-as-a-Service. Experienced moderators and admins gain reputation and trust. They can offer their services, and can be rewarded for the work they do (e.g. via donations, or otherwise). They may state the timeslots in which they are available, so I could invoke their services to provide 24/7 monitoring of my instance.

The reputation model of available moderators might even be federated, so I can see the history of their work, satisfaction levels / reviews by others, amount of time spent / number of Incidents handled, etc.
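
As a rough illustration of what a service offering and a federated reputation summary might contain; all types and field names are hypothetical, not an existing vocabulary.

```typescript
// Sketch of a moderator's service offering and a federated reputation summary.
// All types and field names are hypothetical, not an existing vocabulary.
interface ModeratorOffer {
  type: "ModeratorOffer";
  actor: string;                               // the moderator's fediverse actor
  availability: { day: string; from: string; to: string }[];
  donationLink?: string;
}

interface ModeratorReputation {
  actor: string;
  hoursWorked: number;                         // total time spent moderating
  incidentsHandled: number;
  vouches: number;                             // e.g. number of admins vouching
  reviews: { by: string; rating: number; comment?: string }[];
}

const offer: ModeratorOffer = {
  type: "ModeratorOffer",
  actor: "https://example.social/users/xyz",
  availability: [
    { day: "Mon", from: "18:00", to: "22:00" },
    { day: "Sat", from: "09:00", to: "17:00" },
  ],
  donationLink: "https://liberapay.example/xyz",
};
```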

All of this could be an intrinsic part of the fabric of the Fediverse, and extend across different application types.

There would be much more visibility for the under-appreciated task of the moderator, and as the model matures more features can be added, e.g. in the form of support for Moderation Policies. Just as with their Code of Conduct, different instances will want different governance models (think democratic voting mechanisms, or Sortition; see also What would a fediverse “governance” body look like?).

Note: I highly recommend also checking the toot thread about this post, as many people have great insights there: https://mastodon.social/web/statuses/106059921223198405

  • polymerwitch · 4 years ago

    Does this not actually centralize moderation with a few who have “reputation and trust”?

    I think there are some features of what you cover which would be good, like having opt-in federation with maybe a little flag that lets you know that other servers you federate with have intentionally defederated from an instance. Still, I would want to be watchful that we don’t moderate by populism. It is possible that a minority group could start an instance and get ostracized.

    Also, each instance has slightly different moderation needs. Perhaps your instance is not OK with people posting sexual content on the public TL, but I welcome it. Both positions are fine, and I could understand why your instance might defederate from a group that posts sexual content, but that doesn’t mean that my instance should too. I don’t know if a centralized set of moderators could make those decisions for a whole network of diverse instances.

    • smallcirclesOPM · 4 years ago

      Thanks for your feedback!

      Does this not actually centralize moderation with a few who have “reputation and trust”?

      I do not think that is the case, but there are things to beware of. The model cannot be implemented on a whim. When done right there’s more decentralization than there is currently.

      Right now either the mods and admins implicitly have your trust, or you don’t even know who they are. Anyone can be a mod. With these extensions they have an easier time choosing instances to work for, while the instances (and the members on them) stay in control of who they allow in for this job. For each instance they work for, they’d have to stick to that instance’s Code of Conduct.

      I sometimes hear accusations of mods being toxic (I haven’t experienced this myself), and if that person were a ‘roaming mod’ a bad reputation might keep an instance admin from taking the person on board. The whole reputation model is a very delicate thing, of course. You don’t want mods to be trolled out of existence. So maybe it should just consist of indications like “I vouch for this moderator” and the ability to retract that statement later on (still tricky, with ways to be gamed; I’m not saying this is easy… moderation-as-a-service is something to be added last).

      I could understand why your instance might defederate from a group that posts sexual content, but that doesn’t mean that my instance should too

      This model doesn’t change how defederation choices are currently made. In Federated Moderation only metrics about blocklisting and allowlisting of domains are automatically collected. The final decision is a manual one by the admin (or others having this authority).

      • nutomicA · 4 years ago

        An automated reputation model is almost impossible to implement on the fediverse, because anyone can create a few dozen accounts to increase the reputation of their own account, or create their own instance, etc. There are some ways to avoid that, such as user identity verification, trusted instances or cryptocurrency, but none of them seem desirable.

        • smallcirclesOPM · 4 years ago

          Yes, that is certainly the case. It is indeed very tricky. As we look to the fedi future, where more application and business domains are represented and we’d like to see them deeply integrated, things like these and all kinds of other governance issues will ultimately need to be tackled in some way or other.

        • smallcirclesOPM · 4 years ago

          Note that there need not be a lot of automation in the reputation model. The important decision points would all still be manual, just as they are now. Something like…

          • “I am a fedizen XYZ, and I announce that I offer my services as Moderator. Here’s some info about me, and a donation link for if you like what I do”
          • Other fedizens: “I can vouch for this person, they have earned my trust”
          • Instance admin ABC who needs help: “Lemme see who’s available. Oh, fedizen XYZ has NNN hours of experience and M fedizens vouching for them, let’s check them out”
          • (Instance admin ABC makes the same judgment call that admins are already making right now, but it is a more informed decision)
          • Instance admin ABC sends request for help during 3 days of Conference B to help moderate on-topic forum discussion, and fedizen XYZ accepts.

          In terms of vouching for someone… it may be limited to only people who were actually moderated by fedizen XYZ before, or restricted even further to just instance admins vouching.
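
          As a rough sketch of the decision support this could give an admin; the shapes and names are purely illustrative, and the final call stays a manual one.

          ```typescript
          // Sketch of decision support for the flow above: keep only vouches
          // from admins this instance already trusts, then order the shortlist.
          // All shapes and names are illustrative assumptions.
          interface Candidate {
            actor: string;              // the candidate moderator's fediverse actor
            hoursExperience: number;
            vouchedBy: string[];        // actors who vouched for this moderator
          }

          function rankCandidates(
            candidates: Candidate[],
            trustedAdmins: Set<string>,
          ): Candidate[] {
            return candidates
              .map((c) => ({
                ...c,
                // keep only vouches from admins this instance already trusts
                vouchedBy: c.vouchedBy.filter((a) => trustedAdmins.has(a)),
              }))
              .sort((a, b) => b.vouchedBy.length - a.vouchedBy.length);
          }

          // The admin still makes the final call; this only orders the shortlist.
          ```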

          • nutomicA · 4 years ago

            Other fedizens: “I can vouch for this person, they have earned my trust”

            Here’s the problem: it’s extremely easy for me to create a couple dozen fake accounts which endorse me as a moderator. So in the end the admin still has to decide if they trust the potential mod, and it’s probably not much different from the way things work right now.

            In terms of vouching for someone… it may be limited to only people who were actually moderated by fedizen XYZ before, or restricted even further to just instance admins vouching.

            These are also relatively easy to fake: I can just moderate some of my fake accounts. And starting a new instance is pretty easy too.

            • smallcirclesOPM · 4 years ago

              You are right on those points. But the actual decision to take someone on board as a mod remains unchanged, i.e. a manual decision based on “do I trust this person enough”. Moderation-as-a-Service serves on the one hand to make it easier to find people willing to do the job, and on the other hand it makes Moderation an integral part of the fediverse, not something that remains out of sight.

              That last bit is important. ‘Decentralized’ moderation (I mean, what we have now) is an important ‘unique selling point’ of fedi, in relation to traditional social media (where, profit-wise, they can’t afford to have enough mods, and algorithms do a lot of the work). By making it part of the fediverse (btw, modeled as an AP vocabulary extension, not part of the standards themselves) you give visibility to this otherwise thankless work, offer more incentives (ranging from ‘being recognized for your work’ to donations to, maybe, paid services) and find more willing people to help.

  • smallcirclesOPM · 2 years ago

    I bumped into A better moderation system is possible for the social web by Erin Alexis Owen, one of the draft authors of ActivityPump in 2014, which has some interesting observations.

    On fedi the #FediBlock process has become kinda popular, but it has its issues. From the article on the topic of blocklists specifically:

    The trust one must place in the creator of a blocklist is enormous, because the most dangerous failure mode isn’t that it doesn’t block who it says it does, but that it blocks who it says it doesn’t and they just disappear.

    I’m not going to say that you should not implement shared blocklist functionality, but I would say that you should be very careful when doing so. Features I’d consider vitally important to mitigate harms:

    • The implementation should track the source of any blocks; and any published reason should also be copied
    • Blocklists should be subscription based - i.e. you should be subscribing to a feed of blocks, not doing a one-time import
    • They should handle unblocking too - it’s vitally important for a healthy environment that people can correct their mistakes
    • Ideally, there would be an option to queue up blocks for manual review before applying them

    That said, shared blocklists will always be a whack-a-mole scenario.
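
    To illustrate the mitigations the article lists, here is a minimal sketch of what a subscribable blocklist feed entry could look like; the shape is my own assumption, not an existing format.

    ```typescript
    // Sketch of a subscribable blocklist feed entry reflecting the mitigations
    // above: tracked source, copied reason, and explicit unblock entries so
    // mistakes can be corrected. The shape is an assumption, not a real format.
    interface BlocklistEntry {
      domain: string;
      action: "block" | "unblock";   // unblocks are first-class entries
      source: string;                // who published this entry
      reason?: string;               // published reason, copied verbatim
      publishedAt: string;           // ISO timestamp
    }

    // A subscriber polls the feed and can queue new entries for manual review
    // instead of applying them automatically.
    function pendingReview(feed: BlocklistEntry[], since: string): BlocklistEntry[] {
      return feed.filter((entry) => entry.publishedAt > since);
    }
    ```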

    Posted a toot to them, where I dropped a link to Christine Webber’s OcapPub: Towards networks of consent, which goes in a similar direction wrt current moderation practices.

  • smallcirclesOPM · 3 years ago

    This comment is in follow-up to @dessalines@lemmy.ml’s post on the need for more moderation to combat brigading.


    Having some reputation metrics be only visible to admins would be useful (maybe that’s already present, idk).

    I don’t know to what extent moderation activity, either by individual lemmians or admins, is natively federated across instances right now. @dessalines@lemmy.ml is warning/advising “I’d adopt similar moderation policies”, but what if this information came in automatically via metrics streaming in across instances?

    Suppose @lemmianA on instance lemmy.xyz interacts here on lemmy.ml and has good reputation metrics on lemmy.ml (visible to admins only). Now on other instances @lemmianA is doing a lot of blocking and reporting. These moderation activities federate (probably best based on an allowlist) and are collected on lemmy.ml, such that when @lemmianB is reported on lemmy.ml the admins are able to see metrics like “@lemmianB is blocked on N instances and has been reported M times by O trusted lemmians of lemmy.ml”.

    Care needs to be taken with the privacy of lemmians doing moderation actions. Their account names need not propagate to other instances in this design; only “a respectable lemmian of lemmy.xyz blocked @lemmianB”, and that metric is aggregated as a side effect on @lemmianB’s user account.
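
    A minimal sketch of such an aggregated, privacy-preserving metric and the summary an admin might see; all names and shapes are illustrative assumptions.

    ```typescript
    // Sketch of an aggregated, privacy-preserving moderation metric: remote
    // instances only report counts, never the accounts that blocked or
    // reported someone. All names are illustrative assumptions.
    interface RemoteModerationMetric {
      subject: string;            // e.g. "@lemmianB@lemmy.ml"
      reportingInstance: string;  // e.g. "lemmy.xyz"
      blocks: number;             // how many of its users blocked the subject
      reports: number;            // how many local reports were filed there
    }

    // What a lemmy.ml admin might see when reviewing a report on the subject.
    function summarize(metrics: RemoteModerationMetric[]): string {
      const instances = new Set(metrics.map((m) => m.reportingInstance)).size;
      const reports = metrics.reduce((total, m) => total + m.reports, 0);
      return `blocked on ${instances} instances, reported ${reports} times`;
    }
    ```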

    • smallcirclesOPM · 2 years ago

      Great article and paper @ntnsndr@social.coop, and I wholly agree with this notion. For some time in my advocacy I have made the distinction between social networking, which is what humans have done for thousands of years and which now extends online, and corporate Social Media. For the latter ‘Media’ is appropriate because, due to the optimization for engagement / extraction, people ‘broadcast’ themselves and the algorithms expose them to a flood of information that is not to their benefit.

      OTOH a social network is a personal thing. It is manageable and fits one’s day-to-day activity, one’s daily life. It supports and reflects your interests and the social relationships that matter to you. There are many groups and communities you interact with in different kinds of roles and relationships, same as offline.

      I call this vision of an online and offline world that seamlessly intertwine in support of human activity a Peopleverse. The Peopleverse can be established on the Fediverse as it evolves.

  • smallcirclesOPM · 2 years ago

    An update to this topic… in the context of Code Forge Federation there was another discussion where I dropped a link to this Lemmy post:

    https://layer8.space/@RyunoKi/108520016228507552

    An interesting angle, from the perspective of the software development domains related to Code Forges, is what Federated Moderation and Delegated Moderation bring within reach. With some imagination this can be extended to encompass Software Project Governance (to give the domain a name): in other words, the domain in which the Maintainers of a software project operate. In FOSS projects this is an important and delicate subject. There are countless examples where e.g. a BDFL maintenance model, or a sole maintainer going missing, leads to project failure or forks.

    I won’t elaborate this idea further, just leaving it as-is. The forge federation community can be found on Matrix in the Forge Federation General chatroom.

  • smallcirclesOPM · 2 years ago

    Here’s an article by Bluesky on “Composable Moderation”:

    Centralized social platforms delegate all moderation to a central set of admins whose policies are set by one company. This is a bit like resolving all disputes at the level of the Supreme Court. Federated networks delegate moderation decisions to server admins. This is more like resolving disputes at a state government level, which is better because you can move to a new state if you don’t like your state’s decisions — but moving is usually difficult and expensive in other networks. We’ve improved on this situation by making it easier to switch servers, and by separating moderation out into structurally independent services.

    We’re calling the location-independent moderation infrastructure “community labeling” because you can opt-in to an online community’s moderation system that’s not necessarily tied to the server you’re on.

  • smallcirclesOPM · 2 years ago

    IFTAS: “Independent Federated Trust and Safety”, a non-profit, has started to deal with “everything moderation”.