Gonna post some quotes whenever I stumble on anything interesting
While defederation offers one scaled mechanism for addressing repeated or prolific harmful conduct, federated platforms largely lack industry-standard capabilities for broad or automated content moderation. Mastodon, for example, does not provide moderators with the ability to block harmful links from being shared on the service. This prevents moderators from being able to ingest lists of known-bad URLs (such as spam and phishing sites) in order to programmatically restrict them. Mastodon also lacks essential tools for addressing media-based harms, like child sexual exploitation, such as media hashing and matching functions (although a number of third parties, including Cloudflare, make such tools available to customers of their content-delivery services). Critically, many of the existing federated platforms have not implemented moderator-facing tools for deploying automation and machine learning to streamline and scale repeated content-moderation actions — functions that are an essential part of the moderation toolkit at all of the existing large, centralized platforms.
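For context, the missing capability they describe is conceptually simple. Here's a minimal sketch of what link-blocklist ingestion and filtering could look like; the blocklist file name and helper names are made up for illustration, not anything Mastodon actually ships:

```python
import urllib.parse

def load_blocklist(path="known_bad_domains.txt"):
    # Hypothetical feed of known spam/phishing domains,
    # e.g. ingested from a third-party blocklist provider.
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def extract_domains(post_text):
    # Naive URL extraction; real tooling would use a proper parser.
    return {
        urllib.parse.urlparse(tok).netloc.lower()
        for tok in post_text.split()
        if tok.startswith(("http://", "https://"))
    }

def should_hold_for_review(post_text, blocklist):
    # Flag the post if any linked domain appears on the blocklist.
    return not extract_domains(post_text).isdisjoint(blocklist)
```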
Apparently we’re supposed to let some inverse-ChatGPT monstrosity determine what speech belongs on the Fediverse.
As Ben Werdmuller puts it, “While software is provided to technically moderate, there are very few ecosystem resources to explain how to approach this from a human perspective.” The results are predictable for anyone familiar with the challenges of social media content moderation. Users report erroneous or inexplicable bans, with limited recourse from volunteer admins moonlighting as content moderators. Larger-scale harassment campaigns can overwhelm victims and admins alike. Driven by business imperatives, virtually all centralized platforms at least attempt to mitigate these harmful behaviors. But, absent the financial support that goes along with centralized, corporate social media, few parts of the fediverse have been able to successfully marshal the human and technological resources required to execute proactive, accurate content moderation at scale.
AHAHAHAHAHAHAHA. It feels like half of Mastodon is gay or trans. As a cishet guy I host a Mastodon instance which isn’t specifically about LGBT issues and half of it is trans (y’all are great posters <3). There is a reason it is this way. It is because of the FAILURES of commercial platform moderation. It is because the commercial platforms would prefer to keep your racist fucking uncle on there to view their ads even if he replies to random people with slurs once a week.
Even when we know content has been created by a troll farm, addressing it as content is challenging (if not impossible). For example, researcher Josh Russell captured hundreds of examples of memes created by the IRA on Instagram in 2018; those same exact memes resurfaced a year later in a network of spammy Facebook pages operated out of Ukraine. If Meta, possessing all the relevant data about these campaigns and having extensive capabilities to detect similar media, couldn’t catch this, how can we expect Mastodon instance moderators to keep pace, particularly given the lack of media-hashing and matching functions?
Some interesting details about the digital forensic techniques employed against memes deemed harmful to the alliance. They are creating digital fingerprints of memes and using them to track their appearance in different communities in different time periods.
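For the curious, this kind of fingerprint-and-track pipeline can be sketched with perceptual hashing. This uses the real Python `imagehash` library; the file names, community labels, and threshold are made up for illustration:

```python
from collections import defaultdict

import imagehash                # pip install ImageHash
from PIL import Image

THRESHOLD = 8  # max Hamming distance to treat two images as the same meme

def fingerprint(path):
    # Perceptual hash: robust to re-encoding, resizing, and minor edits.
    return imagehash.phash(Image.open(path))

# Hypothetical sightings: (image file, community, time period)
sightings = [
    ("meme1.jpg", "instagram_2018", "2018-06"),
    ("meme1_repost.jpg", "fb_pages_ukraine", "2019-07"),
]

clusters = defaultdict(list)    # representative hash -> list of sightings
for path, community, when in sightings:
    h = fingerprint(path)
    for rep in clusters:
        if h - rep <= THRESHOLD:   # imagehash overloads '-' as Hamming distance
            clusters[rep].append((community, when))
            break
    else:
        clusters[h].append((community, when))

# Each cluster now shows where and when the "same" meme appeared.
for rep, hits in clusters.items():
    print(rep, hits)
```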
Detection of behavioral manipulation relies, in large part, on access to data about on-platform activity—and the openness of federated platforms has largely resulted in the ready availability of application programming interfaces (APIs) to enable this kind of access. For example, Mastodon has a robust set of public APIs that would allow researchers to study the conversations happening on the service. But federation complicates the use of these APIs to study ecosystem-level threats. Whereas Twitter’s APIs offer a single channel for collecting data about all the activity happening globally across the Twitter service, Mastodon’s APIs are mostly instance-specific. As a result, many data-collection efforts involve either focusing on a handful of the largest instances or going down an essentially limitless rabbit hole of collecting data from successively smaller instances until you reach a point of diminishing returns—with no guarantee that the threats you’re hunting aren’t lurking on the n+1th instance from which you’d collect data.
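To make "instance-specific" concrete: Mastodon's public timeline endpoint (GET /api/v1/timelines/public, which does exist) has to be queried per instance, so any cross-fediverse collection is a loop over a hand-maintained instance list. A rough sketch, with a made-up instance list:

```python
import requests

# Hypothetical starting list; in practice you can never enumerate them all.
INSTANCES = ["mastodon.social", "fosstodon.org", "hachyderm.io"]

def fetch_public_timeline(instance, limit=40):
    # Mastodon's public timeline API is per-instance; there is no
    # single endpoint covering the whole fediverse.
    resp = requests.get(
        f"https://{instance}/api/v1/timelines/public",
        params={"limit": limit, "local": "true"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

posts = []
for instance in INSTANCES:
    try:
        posts.extend(fetch_public_timeline(instance))
    except requests.RequestException:
        # Instances go down, defederate, or block crawlers all the time.
        continue

print(f"collected {len(posts)} posts from {len(INSTANCES)} instances")
```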
It won’t take them long to find us
This also assumes that instance moderators have the time, knowledge, tools, and governance frameworks necessary to do the highly specialized work of disinformation detection and analysis. Training programs at large platforms to get even technically proficient analysts fully up to speed on advanced analytic techniques can take months.
Training programs, huh?
A failure state for the promise of the fediverse is homogeneity of moderation as a product of convenience. But leaving it to individual moderators to assess, designate, and track troll farms and other bad actors for themselves is hardly a reasonable alternative.
Answers to these questions will help structure responses across three critical constituencies: the developers of open-source fediverse services, and the developers of complementary tools and features that enable effective moderation of federated social media; the individuals and groups engaged in investigations, analysis, and moderation of federated services; and investors, funders, and donors engaged with platform governance and counter-manipulation efforts.
Notably absent: The END FUCKING USERS!
This is probably the biggest blind spot in their analysis. They assume everyone is a helpless victim being manipulated by ‘evil’ governments like Russia, China, and Venezuela. Mastodon is a relatively small platform. There are very few celebrities. There are very few news organizations. People make an active choice to go there. Choosing which social media website you want to spend your time on is, in fact, a demonstration of media literacy. The ‘threats’ inherent to mass-media platforms like Facebook and Twitter are substantially different in nature from the ‘threats’ the alliance will encounter from small-ish, tight-knit internet communities which stand by their principles.
trans people are heavily overrepresented on these platforms because people harass us too much in meatspace. society has made us obligate shut-ins at large
I haven’t read the pdf but I read your excerpts and this is infuriating
I wonder which lemmy users belong to the Atlantic Council, there must be at least one
:monkey-look-away:
lemmy.world
This for sure
They really see themselves as the mandarins and us as NPCs with an excess of democracy to be managed.
AFAICT you & I are the only cishets on TMD, NTTAWWT.