Mastodon, a decentralized alternative to Twitter, has a serious problem with child sexual abuse material (CSAM), according to researchers at Stanford University. In just two days, the researchers found over 100 instances of known CSAM across more than 325,000 posts on Mastodon, along with hundreds of posts containing CSAM-related hashtags and links pointing to CSAM trading and the grooming of minors. One Mastodon server was even taken down temporarily because CSAM had been posted to it. The researchers suggest that decentralized networks like Mastodon need more robust moderation tools and reporting mechanisms to address the prevalence of CSAM.

  • shinjiikarus@mylem.eu · 11 months ago

    Total tangent, but we kid ourselves if we think the fediverse is somehow censorship-immune in comparison to Reddit or Twitter.

    There are more moderators and administrators across all the instances, each of whom can federate/defederate at will, delete posts, and propagate those deletions through the network. At the same time, governments don’t need to negotiate with a large company; they only need to hint that they could destroy one person’s livelihood to get undesirable content removed from the network. And to avoid the Streisand effect, instead of requesting the deletion of one specific piece of subversive content (which could backfire), they can simply insinuate that some illegal material (CSAM being the most obvious, but anything goes, really) has been found, to force the shutdown or takeover of the whole instance.

    The same goes for big companies instead of governments: if a large corporation launched its own Mastodon clone, the first thing it would reasonably fund is smear pieces by “journalists” and/or “scientists” hinting at the harm that could befall server owners who continue to host Mastodon instances.

    I personally hate what crypto has become (if I wanted to destroy crypto, I’d have invented crypto bros as a psy-op), but the fediverse isn’t really federated enough to be resistant to influence by corporations and governments, and something blockchain-adjacent could have been the solution. For example: if the server admin and their hoster were totally unable to decrypt whatever is stored on their own server, and the network as a whole distributed all content probabilistically across every federated server, the network would only get stronger and more censorship-resistant with each new instance. If the government forces you to take down your server for any reason, your content is not gone but stored on all the other nodes. If you are able to retrieve your key, you could even move to a new instance and authenticate as your old instance (don’t forget: you are not “sending” BTC from one wallet to another, you are only telling as many nodes as sensible that the BTC on the chain now belongs to a new key; the same would go for content. Taking down one node holding a “wallet” doesn’t change which wallet the BTC on the chain belongs to. I propose the same, just with content). If federation between instances worked in a way comparable to how it does now, this would additionally increase the odds of rooting out bad-faith actors trying to flood the whole network with illegal content, since their content would be stored on far fewer nodes in a pseudo-predictable way: as soon as every major instance defederated from them, their content would no longer be stored on those instances’ nodes or on unfederated third-party nodes.
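    The probabilistic-distribution idea above can be made concrete. Here is a minimal Python sketch — nothing the fediverse actually implements, and all instance names are invented — using rendezvous (highest-random-weight) hashing as one possible scheme: each content item deterministically maps to k replica nodes, and removing one node only reassigns the replicas that node held, so the rest of the placement (and the content) survives a takedown.

```python
import hashlib

def replica_nodes(content_id: str, nodes: list[str], k: int = 3) -> list[str]:
    """Rendezvous (highest-random-weight) hashing: every node gets a
    deterministic pseudo-random score per content item; the top-k
    scorers hold the replicas."""
    scored = sorted(
        nodes,
        key=lambda n: hashlib.sha256(f"{n}:{content_id}".encode()).hexdigest(),
        reverse=True,
    )
    return scored[:k]

# Hypothetical federation of ten instances.
nodes = [f"instance{i}.example" for i in range(10)]
placement = replica_nodes("post-abc123", nodes)

# If the top replica node is forced offline, only its replica moves;
# the other placements stay stable, so the content is not lost.
survivors = [n for n in nodes if n != placement[0]]
new_placement = replica_nodes("post-abc123", survivors)
```

Note that the placement gets *more* redundant as instances join, which matches the comment’s claim that such a network would strengthen with each new node.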

    • zephyrvs · 11 months ago

      Total tangent, but we kid ourselves if we think the fediverse is somehow censorship-immune in comparison to Reddit or Twitter.

      But it actually is, though, just not in the way you imagine: as long as someone can publish and distribute content via ActivityPub and federate with other instances, regardless of their size, instance admins and moderators can only contain the spread of that information by defederating from/blocking those instances.

      Let’s look at this from the perspective of email, which is also based on open protocols where stricter policies (DKIM, DMARC, SPF, etc.) were only bolted on after the fact: Gmail, Outlook, Apple Mail and the other big freemail providers may make it more and more difficult for people running their own mail servers to interact with the big players’ userbases, but they cannot deny communication between other privately run mail servers, no matter what they do. It’s the same with the fediverse: big instances could agree on importing shared block lists to defederate from any instance that doesn’t pledge to follow certain rules (perhaps by becoming a free member of some non-profit “Better Internet™” NGO or whatever), but smaller instances would still be able to federate with each other.
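      The shared-block-list scenario can be modeled in a few lines of Python (a toy sketch — the instance names and the policy are made up): only instances that import the shared list enforce it, so the big players cut off listed servers while smaller instances remain free to federate with anyone.

```python
# Toy model of a shared block list: only instances that *import* the
# list enforce it; all other federation decisions are unaffected.
shared_blocklist = {"undesirable.example"}
importers = {"bigserver1.example", "bigserver2.example"}  # hypothetical big players

def can_federate(a: str, b: str) -> bool:
    """True unless one side imports the shared list and the other is on it."""
    if a in importers and b in shared_blocklist:
        return False
    if b in importers and a in shared_blocklist:
        return False
    return True

# The big players refuse the listed instance...
big_view = can_federate("bigserver1.example", "undesirable.example")      # False
# ...but a small instance can still federate with it.
small_view = can_federate("smallserver.example", "undesirable.example")   # True
```

The design point mirrors the email analogy: the policy is enforced per-receiver, not network-wide, so it limits reach without removing anything from the network.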

      In both cases, big players can severely limit the spread of whatever they deem undesirable, but they cannot censor the content altogether. They can only leverage their userbase and make it more difficult for their users to see and interact with the users, instances, and content of undesirable instances.

      In the end, they can deplatform but not actively censor, because the content will still be published. On Reddit or Twitter, there is a single gatekeeper who can deny access to their platform and thus make it impossible to share undesirable content with other users of that platform.