Introduction

In Improving fediverse culture and social behavior along the way I introduced two ideas that I think are interesting enough to warrant a separate thread. The brainstorm will start in the Lemmy Fediverse Futures ideation space and be elaborated here (if there’s interest). See Lemmy: https://lemmy.ml/post/60475

Note: there’s ongoing research on moderation by @Audrey and @robertwgehl. See: Experiences and aspirations about content moderation in the fediverse

Moderation on the Fediverse

Right now, when people install federated server instances of any kind that are open for others to join, they take on the job of instance admin. When membership grows, they attract additional moderators to help with maintenance and with keeping the community healthy.

I haven’t been an admin or mod myself, but AFAIK the moderation work is mostly manual, based on the administrative UI features offered by a particular app. Metrics are collected about instance operation, and federated messages come in from members (e.g. Flag and Block). There’s a limited set of moderation measures that can be taken (see e.g. Mastodon’s Moderation docs). The toughest action that can be taken is to blocklist an entire domain (here’s the list for mastodon.social, the largest fedi instance).

The burden of moderating

I think (but please correct me) that, in general, there are two important areas for improvement from the moderators’ perspective:

  • Moderation is very time-consuming.
  • Moderation is a somewhat thankless, underappreciated job.

It is time-consuming to monitor what happens on your server, to act promptly on moderation requests, to answer questions, and to stay informed about other instances that may have to be blocked.

It is thankless and underappreciated because your instance members take it for granted, and because you are often the bad guy when acting against someone who misbehaved. Moderation decisions are often seen as unfair and are fiercely argued.

For these reasons instances close down, or become under-moderated places where toxic behavior can fester.

(There’s much more to this, but I’ll leave it here for now)

Federating Moderation

From the Mastodon docs:

Moderation in Mastodon is always applied locally, i.e. as seen from the particular server. An admin or moderator on one server cannot affect a user on another server, they can only affect the local copy on their own server.

This is a good, logical model. After all, you only control your own instance(s). But what if the federation tasks that are bound to the instance got help from ActivityPub federation itself? Copying from this post:

The whole instance discovery / mapping of the Fediverse network can be federated. E.g.:

  • A new server is detected
  • Instance updates internal server list
  • Instance federates (Announce) the new server
  • Other instances update their server list
  • Domain blocklisting / allowlisting actions are announced (with reason)
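As a rough sketch of the steps above, here is what the Announce of a newly detected server could look like. Nothing here is an existing vocabulary: the field choices (e.g. using the standard ActivityStreams "Service" type to represent a whole server) are assumptions made for this brainstorm, built as a plain Python dict for illustration.

```python
# Hypothetical sketch: the Announce an instance might federate when it
# detects a new server. "Service" is the closest standard ActivityStreams
# type for a server-level actor; everything else here is an assumption.

def make_server_announce(local_instance: str, new_domain: str, software: str) -> dict:
    """Build an Announce activity informing peers of a newly detected server."""
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Announce",
        "actor": f"https://{local_instance}/actor",
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
        "object": {
            "type": "Service",
            "id": f"https://{new_domain}/actor",
            "name": new_domain,
            "summary": f"Detected fediverse server running {software}",
        },
    }

activity = make_server_announce("example.social", "newserver.example", "mastodon")
```

On receipt, other instances would add the domain to their local server list and could relay the Announce further, which is how the shared network map would spread.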

Then, in addition to that, Moderation Incidents can be collected as metrics and federated as soon as they occur:

  • User mutes / blocks, instance blocks (without PII, as it is the metric counts that are relevant)
  • Flags (federated after they are approved by admins, without PII)
  • Incidents may include more details (reason for blocking, topic e.g. ‘misinformation’)

So a new instance pops up, and all across fedi people start blocking its users. There’s probably something wrong with the instance that may warrant blocklisting. The instance admin goes to the server list, sees a large incident count for a particular server, clicks the entry, and gets a more detailed report on the nature of the incidents. They then decide whether or not to block the domain for their own instance.
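This scenario can be sketched in a few lines. The "ModerationIncident" object and the "modfed" namespace below are hypothetical extension vocabulary invented for illustration; the key point is that an incident carries no PII, only the reporting instance, the target domain, the kind of action, and an optional topic, so that per-domain tallies can be shown in the admin’s server list.

```python
from collections import Counter

# Hypothetical "ModerationIncident" extension object (all field names are
# assumptions for this brainstorm, not an existing vocabulary), plus the
# per-domain tally an admin UI could present.

def make_incident(reporting_instance, target_domain, kind, topic=None):
    incident = {
        "@context": ["https://www.w3.org/ns/activitystreams",
                     {"modfed": "https://example.org/ns/modfed#"}],  # made-up namespace
        "type": "modfed:ModerationIncident",
        "actor": f"https://{reporting_instance}/actor",
        "modfed:targetDomain": target_domain,
        "modfed:kind": kind,                # e.g. "block", "mute", "flag"
    }
    if topic:
        incident["modfed:topic"] = topic    # e.g. "misinformation"
    return incident

def tally_by_domain(incidents):
    """Count incidents per target domain, as the admin's server list would."""
    return Counter(i["modfed:targetDomain"] for i in incidents)

incidents = [
    make_incident("a.example", "bad.example", "block"),
    make_incident("b.example", "bad.example", "flag", topic="misinformation"),
    make_incident("a.example", "ok.example", "mute"),
]
counts = tally_by_domain(incidents)
```

The tally is only decision support: the admin still reviews the detailed incidents before blocking anything.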

Delegated moderation

With Federated Moderation in place it may also be possible to delegate moderation tasks to admins of other instances who are authorized to do so, or even to have ‘roaming moderators’ who are not affiliated with any one instance.

I have described this idea already, but from the perspective of Discourse forums having native federation capabilities. See Discourse: Delegating Community Management. Why would you want to delegate moderation:

  • Temporarily, while looking for new mods and admins.
  • When an instance is under attack by trolls and the like, to ask for extra help.
  • When there is a large influx of new users.

Moderation-as-a-Service

(Copied and extended from this post)

But this extension to the Moderation model goes further: we can have Moderation-as-a-Service. Experienced moderators and admins gain reputation and trust. They can offer their services and be rewarded for the work they do (e.g. via donations, or otherwise). They may state the timeslots in which they are available, so that I could invoke their services and provide 24/7 monitoring of my instance.

The reputation model of available moderators might even be federated, so that I can see the history of their work, satisfaction levels / reviews by others, amount of time spent, number of Incidents handled, etc.
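A minimal sketch of what such a federated moderator record could contain. All the field names (offersModeration, vouchedBy, hoursModerated, etc.) are hypothetical, not existing vocabulary; the point is only that the record carries availability, track record, and endorsements.

```python
# Hypothetical federated moderator profile for Moderation-as-a-Service.
# Every extension field below is an assumption made for this brainstorm.

moderator_profile = {
    "id": "https://home.example/users/xyz",
    "type": "Person",
    "offersModeration": True,                          # assumed extension flag
    "availability": ["Mon 18:00-22:00 UTC", "Sat 10:00-16:00 UTC"],
    "hoursModerated": 320,                             # self-reported track record
    "incidentsHandled": 145,
    "vouchedBy": [                                     # endorsements by other fedizens
        "https://a.example/actor",
        "https://b.example/users/admin",
    ],
}

def vouch_count(profile):
    """Number of fedizens vouching for this moderator."""
    return len(profile.get("vouchedBy", []))
```

An admin browsing available moderators would see these fields next to the donation link and reviews, and still make the hiring decision manually.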

All of this could be an intrinsic part of the fabric of the Fediverse, and extend across different application types.

There would be much more visibility for the underappreciated task of the moderator, and as the model matures more features can be added, e.g. support for Moderation Policies. Just as with their Code of Conduct, different instances will want different governance models (think democratic voting mechanisms, or Sortition; see also What would a fediverse “governance” body look like?).

Note: I highly recommend also checking the toot thread about this post, as many people share great insights there: https://mastodon.social/web/statuses/106059921223198405

@humanetech (mod, creator) · 14d

An interesting paper to refer to from the Rebooting the Web of Trust 9 - Prague archives is:

@humanetech (mod, creator) · 12M

@nutomic@lemmy.ml posted a thread to SocialHub that highlights functionality in the Delegated Moderation domain: Activities for Federation Application?, dealing with an instance sending a request to another instance to federate together in an allowlist-based federation setup.

@humanetech (mod, creator) · 13M

Oww, I 💖 love the discussion that @macgirvin and @weex are having on Moderation from this comment onwards: Problem: Network-level moderation of content on federated networks leads to fragmentation and lower total value for users.

There’s a lot in Mike’s clear description that warrants further elaboration and documentation for this Federated Moderation brainstorm.

Recommended reading, folks 📚

@humanetech (mod, creator) · 15M

Bob Mottram started an interesting discussion thread on the fediverse, which I encourage everyone to read: https://epicyon.freedombone.net/@bob/106200493840204587

@polymerwitch · 36M

Does this not actually centralize moderation with a few who have “reputation and trust”?

I think there are some features of what you cover that would be good, like having opt-in federation with a little flag that lets you know that other servers you federate with have intentionally defederated from the instance. Still, I would want to be watchful that we don’t moderate by populism. It is possible that a minority group could start an instance and get ostracized.

Also, each instance has slightly different moderation needs. Perhaps your instance is not OK with people posting sexual content on the public TL, but I welcome it. Both positions are fine, and I could understand why your instance might defederate from a group that posts sexual content, but that doesn’t mean that my instance should too. I don’t know if a centralized set of moderators could make those decisions for a whole network of diverse instances.

@humanetech (mod, creator) · 16M

Thanks for your feedback!

Does this not actually centralize moderation with a few who have “reputation and trust”?

I do not think that is the case, but there are things to beware of. The model cannot be implemented on a whim. When done right, there’s more decentralization than there is currently.

Right now either the mods and admins implicitly have your trust, or you don’t even know who they are. Anyone can be a mod. With these extensions they have an easier time choosing instances to work for, while the instances (and the members on them) stay in control of who they allow in for this job. For each instance they’d have to stick to the instance’s Code of Conduct.

I sometimes hear accusations of mods being toxic (I haven’t experienced this myself), and if such a person were a ‘roaming mod’, a bad reputation might keep an instance admin from taking them on board. The whole reputation model is a very delicate thing, of course. You don’t want mods to be trolled out of existence. So maybe it should just consist of indications like “I vouch for this moderator” and the ability to retract that statement later on (still tricky, with ways to be gamed; I’m not saying this is easy… Moderation-as-a-Service is something to be added last).

I could understand why your instance might defederate from a group that posts sexual content, but that doesn’t mean that my instance should too

This model makes no changes to how defederation choices are currently made. In Federated Moderation only metrics about blocklisting and allowlisting of domains are automatically collected. The final decision is a manual one by the admin (or others having this authority).

@nutomic (admin) · 26M

An automated reputation model is almost impossible to implement on the fediverse, because anyone can create a few dozen accounts to increase the reputation of their own account, or create their own instance, etc. There are some ways to avoid that, such as user identity verification, trusted instances, or cryptocurrency, but none of them seem desirable.

@humanetech (mod, creator) · 26M

Note that there need not be a lot of automation in the reputation model. The important decision points would all still be manual, just as they are now. Something like…

  • “I am a fedizen XYZ, and I announce that I offer my services as Moderator. Here’s some info about me, and a donation link for if you like what I do”
  • Other fedizens: “I can vouch for this person, they have earned my trust”
  • Instance admin ABC who needs help: “Lemme see who’s available. Oh, fedizen XYZ has NNN hours of experience and M fedizens vouching for them, let’s check them out”
  • (Instance admin ABC makes the same judgment call that admins are already making right now, but it is a more informed decision)
  • Instance admin ABC sends a request for help during the 3 days of Conference B to help moderate on-topic forum discussions, and fedizen XYZ accepts.

In terms of vouching for someone… it may be limited to only people that were actually moderated by fedizen XYZ before, or even further by just instance admins vouching.
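That restriction could be sketched as a simple filter. The data shapes here (who vouched and from where, the moderator’s work history, the set of known admins per instance) are hypothetical; the logic is just: a vouch counts only if the voucher is an admin of an instance where the moderator actually worked.

```python
# Hypothetical sketch of restricted vouching: keep only vouches from admins
# of instances that appear in the moderator's work history.

def valid_vouches(vouches, work_history, instance_admins):
    """Filter (voucher_id, instance_domain) pairs down to trusted vouches."""
    valid = []
    for voucher, instance in vouches:
        worked_there = instance in work_history
        is_admin = voucher in instance_admins.get(instance, set())
        if worked_there and is_admin:
            valid.append(voucher)
    return valid

vouches = [
    ("https://a.example/admin", "a.example"),       # admin of a past workplace
    ("https://fake.example/u/sock", "fake.example"), # sockpuppet instance
]
work_history = {"a.example"}                         # where XYZ actually moderated
instance_admins = {"a.example": {"https://a.example/admin"}}
trusted = valid_vouches(vouches, work_history, instance_admins)
```

This doesn’t stop someone from spinning up a fake instance and "working" there, as noted below in the thread, so it narrows the attack surface rather than closing it.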

@nutomic (admin) · 26M

Other fedizens: “I can vouch for this person, they have earned my trust”

Here’s the problem: it’s extremely easy for me to create a couple dozen fake accounts that endorse me as a moderator. So in the end the admin still has to decide whether they trust the potential mod, and it’s probably not much different from the way things work right now.

In terms of vouching for someone… it may be limited to only people that were actually moderated by fedizen XYZ before, or even further by just instance admins vouching.

These are also relatively easy to fake; I can just moderate some of my fake accounts. And starting a new instance is pretty easy too.

@humanetech (mod, creator) · 16M

You are right on those points. But the actual decision to take someone on board as a mod remains unchanged, i.e. a manual decision based on “do I trust this person enough?”. Moderation-as-a-Service serves, on the one hand, to make it easier to find people willing to do the job, and on the other to make Moderation an integral part of the fediverse, not something that remains out of sight.

That last bit is important. ‘Decentralized’ moderation (I mean, what we have now) is an important ‘unique selling point’ of fedi compared to traditional social media (where, profit-wise, they can’t afford enough mods, and algorithms do a lot of the work). By making moderation part of the fediverse (modeled as an AP vocabulary extension, by the way, not as part of the standards themselves) you give visibility to this otherwise thankless work, offer more incentives (ranging from being recognized for your work, to donations, to maybe even a paid service), and find more people willing to help.

@humanetech (mod, creator) · 26M

Yes, that is certainly the case. It is indeed very tricky. Ultimately, as we look to the fedi future, where more application and business domains are represented and we’d like to see them deeply integrated, things like these and all kinds of other governance issues will need to be tackled one way or another.


