Mastodon, a decentralized alternative to Twitter, has a serious problem with child sexual abuse material, according to researchers from Stanford University. In just two days, the researchers found over 100 instances of known CSAM across more than 325,000 posts on Mastodon. They also found hundreds of posts containing CSAM-related hashtags and links pointing to CSAM trading and the grooming of minors. One Mastodon server was even taken down temporarily because of CSAM posted to it. The researchers suggest that decentralized networks like Mastodon need to implement more robust moderation tools and reporting mechanisms to address the prevalence of CSAM.

  • zephyrvs · 1 year ago

    Total tangent, but we kid ourselves if we think the fediverse is somehow censorship-immune in comparison to Reddit or Twitter.

    But it actually is, just not in the way you imagine: as long as someone can publish and distribute content via ActivityPub and federate with other instances, regardless of their size, instance admins and moderators can only contain the spread of that information by defederating from/blocking those instances.
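
    For illustration, here’s a minimal sketch (Python, with hypothetical names like `BLOCKED_DOMAINS` and `handle_inbox`; this is not Mastodon’s actual code) of what defederation boils down to at the inbox level:

    ```python
    # Minimal sketch of enforcing a domain block at an ActivityPub inbox.
    # BLOCKED_DOMAINS and handle_inbox are hypothetical names for illustration.
    from urllib.parse import urlparse

    BLOCKED_DOMAINS = {"spam.example", "abuse.example"}  # this instance's blocklist

    def is_defederated(actor_uri: str) -> bool:
        """True if the activity's actor belongs to a blocked (defederated) domain."""
        return urlparse(actor_uri).hostname in BLOCKED_DOMAINS

    def handle_inbox(activity: dict) -> int:
        # Refuse activities from defederated instances; accept everything else.
        # The content still exists on the origin server -- blocking only stops
        # it from spreading here.
        if is_defederated(activity.get("actor", "")):
            return 403  # rejected
        return 202  # accepted for processing
    ```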

    Let’s look at this from the perspective of email, which is also based on open protocols where stricter policies (DKIM, DMARC, SPF, etc.) were only bolted on after the fact: Gmail, Outlook, Apple Mail, and other big freemail providers may make it more and more difficult for people running their own mail servers to reach the big players’ userbases, but they cannot deny communication between other privately run mail servers, no matter what they do. It’s the same with the Fediverse: big instances could agree on importing shared block lists to defederate from any instances that don’t pledge to follow certain rules (perhaps by becoming a free member of some non-profit “Better Internet™” NGO or whatever), but smaller instances would still be able to federate with each other.
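
    As a rough sketch of what subscribing to shared block lists could look like (the URLs and the plain-text, one-domain-per-line format are assumptions for illustration, not a specific Mastodon feature):

    ```python
    # Hedged sketch: merge shared blocklists published as plain text files,
    # one domain per line. The URLs below are placeholders, not real endpoints.
    import urllib.request

    BLOCKLIST_URLS = [
        "https://example.org/shared-blocklist.txt",  # e.g. an NGO-curated list
        "https://example.net/community-blocklist.txt",
    ]

    def fetch_blocklist(url: str) -> set[str]:
        with urllib.request.urlopen(url) as resp:
            text = resp.read().decode("utf-8")
        # Skip blank lines and comment lines; normalize case.
        return {
            line.strip().lower()
            for line in text.splitlines()
            if line.strip() and not line.strip().startswith("#")
        }

    def merged_blocklist() -> set[str]:
        # Union of all subscribed lists. Each instance remains free to add or
        # remove entries locally, which is why smaller instances can still
        # federate with each other regardless of what the big players import.
        domains: set[str] = set()
        for url in BLOCKLIST_URLS:
            domains |= fetch_blocklist(url)
        return domains
    ```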

    In both cases, the big players can severely limit the spread of whatever they deem undesirable, but they cannot censor the content altogether. They can only leverage their userbase and make it more difficult for their users to see and interact with the users and content of undesirable instances.

    In the end, they can deplatform but not actively censor, because the content will still be published. On Reddit or Twitter, there is a single gatekeeper who can deny access to the platform and thus make it impossible to share undesirable content with other users of the platform.