Right now, when people install federated server instances of any kind that are open for others to join, they take on the job of instance admin. When membership grows, they attract additional moderators to help with maintenance and with ensuring a healthy community.
I haven’t been an admin or mod myself, but AFAIK the moderation work is mostly manual, based on the specific administrative UI features offered by a particular app. Metrics are collected about instance operation, and federated messages come in from members (e.g. Flag and Block). There’s a limited set of moderation measures that can be taken (see e.g. Mastodon’s Moderation docs). The toughest action that can be taken is to blocklist an entire domain (here’s the list for mastodon.social, the largest fedi instance).
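For illustration: a member report federates as an ActivityStreams Flag activity. Below is a simplified TypeScript sketch of roughly that shape. The URLs are made-up placeholders and this is not Mastodon’s exact serialization, just an indication of the kind of federated message a moderator has to act on.

```typescript
// Simplified shape of an ActivityStreams Flag activity (used for reports).
// All URLs are hypothetical placeholders, not real accounts or instances.
const reportActivity = {
  "@context": "https://www.w3.org/ns/activitystreams",
  type: "Flag",
  actor: "https://example.social/users/reporter",   // who files the report
  object: "https://other.example/users/offender",   // account (or post) being reported
  content: "Spam and harassment in replies",        // reason given by the reporter
};

console.log(JSON.stringify(reportActivity, null, 2));
```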
I think (but pls correct me) that in general there are two important areas for improvement from the moderators’ perspective:
It is time-consuming to monitor what happens on your server, to act in a timely manner on moderation requests, to answer questions, and to stay informed about other instances that may have to be blocked.
It is thankless / underappreciated, because your instance members take it for granted, and because you are often the bad guy when acting against someone who misbehaved. Moderation is often seen as unfair, and your decisions are fiercely argued.
For these reasons instances are closed down, or they become under-moderated and toxic behavior can fester.
(There’s much more to this, but I’ll leave it here for now)
From the Mastodon docs:
Moderation in Mastodon is always applied locally, i.e. as seen from the particular server. An admin or moderator on one server cannot affect a user on another server, they can only affect the local copy on their own server.
This is a good, logical model. After all, you only control your own instance(s). But what if the moderation tasks that are bound to the instance got help from ActivityPub federation itself? Copying from this post:
The whole instance discovery / mapping of the Fediverse network can be federated, e.g. by announcing (Announce) the new server as it is discovered. Then, in addition to that, Moderation Incidents can be collected as metrics and federated as soon as they occur:
So a new instance pops up, and all across fedi people start blocking its users. There’s probably something wrong with the instance that may warrant blocklisting. The instance admin goes to the server list, sees a large incident count for a particular server, clicks the entry and gets a more detailed report on the nature of said incidents, and then decides whether or not to block the domain for their own instance.
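As a minimal sketch of what “incidents collected as federated metrics” could look like on the receiving instance, the snippet below tallies incoming moderation incidents per remote domain. All type and function names are hypothetical; this is not an existing ActivityPub extension or API.

```typescript
// Hypothetical sketch: aggregate federated moderation incidents per remote domain.
type IncidentType = "Flag" | "Block";

interface ModerationIncident {
  type: IncidentType;
  actorDomain: string;   // domain of the reporting / blocking actor
  targetDomain: string;  // domain the incident is about
}

// In-memory tally; a real instance would persist this in its database.
const incidentCounts = new Map<string, number>();

function recordIncident(incident: ModerationIncident): void {
  const count = incidentCounts.get(incident.targetDomain) ?? 0;
  incidentCounts.set(incident.targetDomain, count + 1);
}

// What an admin dashboard could show: domains sorted by incident count,
// so a suspicious new server stands out before any blocklist decision is made.
function topIncidentDomains(limit: number): Array<[string, number]> {
  return [...incidentCounts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, limit);
}
```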
With Federated Moderation it may also be possible to delegate moderation tasks to admins of other instances who are authorized to do so, or even to have ‘roaming moderators’ who are not affiliated with any one instance.
I have described this idea already, but from the perspective of Discourse forums having native federation capabilities. See Discourse: Delegating Community Management. Why would you want to delegate moderation:
(Copied and extended from this post)
But this extension to the Moderation model goes further… we can have Moderation-as-a-Service. Experienced moderators and admins gain reputation and trust. They can offer their services and be rewarded for the work they do (e.g. via Donations, or otherwise). They may state the timeslots in which they are available, so that I could invoke their service to get 24/7 monitoring of my instance.
The Reputation model of available moderators might even be federated, so I can see the history of their work, satisfaction levels / reviews by others, the amount of time spent / number of Incidents handled, etc.
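As a rough illustration, a federated moderator reputation record might look something like the sketch below. The field names and the scoring formula are my assumptions for brainstorming purposes, not part of any existing vocabulary.

```typescript
// Hypothetical data shape for a federated moderator profile; all fields are
// illustrative assumptions, not an existing ActivityPub vocabulary.
interface ModeratorProfile {
  actor: string;               // e.g. "https://example.social/users/mod-alice" (made-up)
  availability: string[];      // stated timeslots, e.g. ["Mon 18:00-22:00 UTC"]
  incidentsHandled: number;    // aggregated across instances that published reviews
  hoursSpent: number;
  reviews: Array<{ byInstance: string; rating: 1 | 2 | 3 | 4 | 5; comment?: string }>;
}

// A simple, naive reputation score an instance could compute locally from
// federated review data before inviting a moderator.
function reputationScore(p: ModeratorProfile): number {
  if (p.reviews.length === 0) return 0;
  const avgRating = p.reviews.reduce((sum, r) => sum + r.rating, 0) / p.reviews.length;
  return avgRating * Math.log1p(p.incidentsHandled);
}
```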
All of this could be an intrinsic part of the fabric of the Fediverse, and extend across different application types.
There would be much more visibility for the under-appreciated work of the moderator, and as the model matures more features can be added, e.g. in the form of support for Moderation Policies. Just as with their Code of Conduct, different instances will want different governance models (think democratic voting mechanisms, or Sortition. See also What would a fediverse “governance” body look like?).
Note: Highly recommend to also check the toot thread about this post, as many people have great insights there: https://mastodon.social/web/statuses/106059921223198405
This is a companion to Fediverse Futures on Social Coding, meant to elaborate the Fediverse from high-level, non-technical perspectives and to brainstorm our visions and dreams.
We need a more holistic approach to fedi development and evolution. We need product designers, graphic artists, UX / UI / interaction designers, futurists and visionaries to join the dev folks. Everyone is encouraged to join here and enrich our views on what the Fediverse can be with diverse and different viewpoints, and to stimulate brainstorming, creativity, out-of-the-box thinking and crazy, wild ideas.
Please read the Social Coding Community Participation Guidelines for more information.
#Peopleverse #FediverseFutures #Web0 #SocialNetworkingReimagined #UnitedInDiversity #Fedivolution2022 #SocialCoding #ActivityPub
Though it is only indirectly related to Moderation, this StackExchange Q&A, The Stack Exchange reputation system: What’s working? What’s not?, has some nice feedback and ideas. It was tooted by Codinghorror, a.k.a. Jeff Atwood, co-founder of Stack Overflow and the Discourse forum software.
I bumped into A better moderation system is possible for the social web by Erin Alexis Owen, one of the draft authors of ActivityPump in 2014, which has some interesting observations.
On fedi the #FediBlock process has become kinda popular, but it has its issues. From the article on the topic of blocklists specifically:
I posted a toot to them, where I dropped a link to Christine Webber’s OcapPub: Towards networks of consent, which goes in a similar direction wrt current moderation practices.
A whole set of projects around FediBlock is emerging. This GitHub repo tracks them: https://github.com/ineffyble/mastodon-block-tools
@humanetech Maybe of interest:
https://www.noemamag.com/mastodon-isnt-just-a-replacement-for-twitter/
https://journals.sagepub.com/doi/10.1177/20563051221126041
PS. Note that Christine Webber does not see a future in the Fediverse as it currently is. And I tend to agree, albeit maybe for different reasons.
Great article and paper @ntnsndr@social.coop and I wholly agree with this notion. For some time in my advocacy I have made the distinction between social networking, which is what humans have done for thousands of years and which now extends online, and corporate Social Media. For the latter ‘Media’ is appropriate because, due to optimization for engagement / extraction, people ‘broadcast’ themselves there and the algorithms expose them to a flood of information that is not to their benefit.
OTOH a social network is a personal thing. It is manageable and fits one’s day-to-day activity, one’s daily life. It supports and reflects your interests and the social relationships that matter to you. There are many groups and communities you interact with in different kinds of roles and relationships, same as offline.
I call the vision of an online and offline world that seamlessly intertwine in support of human activity a Peopleverse. A Peopleverse can be established on the Fediverse as it evolves.
This is golden: Hey Elon: Let Me Help You Speed Run The Content Moderation Learning Curve
(Federated) Gitea Moderation
There’s an open issue by @Ta180m@exozy.me who is working on Gitea forge federation: Moderation #10. Interesting moderation features are discussed, such as learning from Discourse forum moderation (as suggested by @dachary@lemmy.ml), like supporting Trust Levels.
An update to this topic… in the context of Code Forge Federation there was another discussion where I dropped a link to this Lemmy post:
https://layer8.space/@RyunoKi/108520016228507552
An interesting angle, from the perspective of the software development domains related to Code Forges, is what Federated Moderation and Delegated Moderation bring within reach. With some imagination this can be extended to encompass Software Project Governance (to give the domain a name), in other words the domain where Maintainers of a software project operate. In FOSS projects this is an important and delicate subject. There are countless examples where e.g. a BDFL maintenance model, or a sole maintainer going missing, leads to project failure or forks.
I won’t further elaborate this idea, just leaving it as-is. The forge federation community can be found on Matrix in the Forge Federation General chatroom.
This comment is in follow-up to @dessalines@lemmy.ml’s post on the need for more moderation to combat Brigading.
Having some reputation metrics be only visible to admins would be useful (maybe that’s already present, idk).
I don’t know to what extent moderation activity, either by individual lemmians or by admins, is natively federated across instances right now. @dessalines@lemmy.ml is warning/advising “I’d adopt similar moderation policies”, but what if this informing happened automatically via metrics streaming in across instances?
Suppose @lemmianA on instance lemmy.xyz interacts here on lemmy.ml and has good reputation metrics on lemmy.ml (visible to admins only). Now on other instances @lemmianA is doing a lot of blocking and reporting. These moderation activities federate (probably best based on an allowlist) and are collected on lemmy.ml, such that when @lemmianB is reported on lemmy.ml the admins are able to see metrics like “@lemmianB is blocked on N instances and has been reported M times by O trusted lemmians of lemmy.ml”.
There needs to be care for the privacy of lemmians doing moderation actions. Their account names need not propagate to other instances in this design; only “A respectable lemmian of lemmy.xyz blocked @lemmianB” is shared, and that metric is aggregated as a side-effect on @lemmianB’s user account.
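Here is a minimal sketch of that aggregation, assuming a hypothetical stream of federated block/report signals. The shapes and names are made up for illustration and this is not how Lemmy federates moderation today.

```typescript
// Privacy-preserving aggregation sketch: the receiving instance stores only
// counts per reported account, never the remote reporters' account names.
interface RemoteModerationSignal {
  kind: "block" | "report";
  targetActor: string;     // e.g. "@lemmianB@lemmy.ml"
  sourceInstance: string;  // e.g. "lemmy.xyz"; must be on the allowlist
}

interface TargetMetrics {
  blockedOnInstances: Set<string>;
  reportCount: number;
}

// Remote instances whose moderation signals we trust (hypothetical).
const allowlist = new Set(["lemmy.xyz", "other.example"]);
const metricsByTarget = new Map<string, TargetMetrics>();

function ingestSignal(signal: RemoteModerationSignal): void {
  if (!allowlist.has(signal.sourceInstance)) return; // ignore untrusted sources
  const m = metricsByTarget.get(signal.targetActor)
    ?? { blockedOnInstances: new Set<string>(), reportCount: 0 };
  if (signal.kind === "block") m.blockedOnInstances.add(signal.sourceInstance);
  else m.reportCount += 1;
  metricsByTarget.set(signal.targetActor, m);
  // Note: only the source instance is stored, never the individual reporter.
}

// What an admin sees: "@lemmianB is blocked on N instances, reported M times".
function adminSummary(targetActor: string): string {
  const m = metricsByTarget.get(targetActor);
  if (!m) return `${targetActor}: no federated moderation signals`;
  return `${targetActor} is blocked on ${m.blockedOnInstances.size} instance(s) `
    + `and has been reported ${m.reportCount} time(s) by trusted instances`;
}
```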
I am absolutely no fan of the idea of having a global blacklist.
Based on my experience with lawyers, judges and this disgusting hashtag “FediBlock” on Mastodon, I can only say that this exact scenario is just plain wrong. It only takes one “trusted” person who has a conflict with someone to fill the list with false information and lock someone innocent out of multiple instances.
On the other hand, anyone who is stupid enough to fall for hate speech without forming their own opinion belongs off the Internet anyway. So it would be no loss for their victims.
If one can’t manage moderation, one should think twice about whether it makes sense for them to provide a federating instance at all.
I am no fan either. But a global blacklist is not what this idea is about. Rather, it is about making Moderation a first-class citizen of the fediverse, instead of something that happens ‘behind the scenes’ in other channels. By federating metrics about moderation-related activity, one can be informed and then decide how one wants to act on that information. The fact that the information is available to anyone can lead - when done well - to more democratization and new forms of governance, where communities on the fediverse can make collective decisions.
You mean only metrics to be able to find unmoderated instances? 🤔
It can be any kind of metric that helps a person make up their own mind on how to act wrt moderation. And the way these metrics are used, i.e. the features that are built on top of them, can be very app-specific too.
Like, for instance, if a person responds to my toot but that person is massively blocked by people I follow, or by instances I federate with, then a warning icon might be shown. The icon could serve as a prompt to check their timeline. If they are a clear troll I can decide not to respond, or to silence or block them. The app I use might also allow me to vote or downvote on a suggestion to block a particular, very controversial instance. Things like that.
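A hedged sketch of how a client app might implement such a warning heuristic is below. The threshold and data shapes are assumptions; this data would only be available to the client if moderation metrics federated along the lines discussed earlier.

```typescript
// Client-side heuristic: warn about a reply author if many of the accounts I
// follow (or instances my instance blocks) have blocked them. Purely illustrative.
interface LocalView {
  followedAccounts: string[];                 // actors I follow
  blocksByFollowed: Map<string, Set<string>>; // followed actor -> accounts they block
  instanceBlocks: Set<string>;                // domains my instance has defederated from
}

function shouldShowWarning(view: LocalView, replyAuthor: string, authorDomain: string): boolean {
  if (view.instanceBlocks.has(authorDomain)) return true;
  let blockers = 0;
  for (const followed of view.followedAccounts) {
    if (view.blocksByFollowed.get(followed)?.has(replyAuthor)) blockers++;
  }
  // Assumed threshold: warn when more than 20% of the people I follow block this author.
  return blockers / Math.max(view.followedAccounts.length, 1) > 0.2;
}
```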
I think the main focus should be to empower individuals to make the decisions. That way full freedom of speech is possible, but one can just decide not to listen. Though I think that instance blocks are also part of the picture. After all why should someone who graciously hosts a “cat lover” instance for a community of followers federate with a “kill all cats” instance? There’s the freedom of the provider here too, to make the instance as topical or generic as they wish.
GoToSocial has an issue that is related to this topic: Subscribe-able allowlists / denylists
An interesting paper to refer to from the Rebooting the Web of Trust 9 - Prague archives is:
There’s a long discussion ongoing with many people on the thread, caused by a recent outburst of spam stemming from mastodon.social.
Independently, @csdummi@lemmy.ml was discussing the paper above with Serge @emacsen. Related to this, in the Social Coding chat @yusf dropped a link to the very interesting thesis:
TrustNet: Trust-based Moderation (Full 103-page PDF) by Alexander Cobleigh @cblgh
And another great article by @roko@social.trom.tf detailing social aspects of moderation.
As a further follow-up to the thread, @roko@social.trom.tf posted a link to an elaboration of some moderation ideas.
@nutomic@lemmy.ml posted a thread to SocialHub that highlights functionality in the Delegated Moderation domain: Activities for Federation Application?, dealing with an instance sending a request to another instance to federate together in an allowlist-based federation setup.
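Purely as an illustration of the kind of message involved, such a federation application could be expressed along ActivityStreams lines. The Offer/Relationship pairing and the “federatesWith” term below are my assumptions, not the proposal from that SocialHub thread.

```typescript
// Hypothetical federation application, loosely modeled on ActivityStreams
// conventions; URLs and the "federatesWith" relationship term are made up.
const federationRequest = {
  "@context": "https://www.w3.org/ns/activitystreams",
  type: "Offer",
  actor: "https://newinstance.example/actor",      // the applying instance
  object: {
    type: "Relationship",
    subject: "https://newinstance.example/actor",
    relationship: "federatesWith",                 // assumed term, not standardized
    object: "https://allowlisted.example/actor",   // the instance being applied to
  },
  summary: "Request to be added to your allowlist",
};

// The receiving admin could then answer with an Accept or Reject activity
// referencing this request, and update their allowlist accordingly.
console.log(JSON.stringify(federationRequest, null, 2));
```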
Oww, I 💖 love the discussion that @macgirvin and @weex are having on Moderation from this comment onwards: Problem: Network-level moderation of content on federated networks leads to fragmentation and lower total value for users.
There’s a lot in the clear description that Mike is giving that warrants further elaboration and documentation for this Federated Moderation brainstorm.
Recommend reading, folks 📚
Bob Mottram started an interesting discussion thread on fediverse, that I encourage anyone to read: https://epicyon.freedombone.net/@bob/106200493840204587