From the article:

NAIROBI, Kenya (AP) — On the verge of tears, Nathan Nkunzimana recalled watching a video of a child being molested and another of a woman being killed.

Eight hours a day, his job as a content moderator for a Facebook contractor required him to look at horrors so the world wouldn’t have to. Some overwhelmed colleagues would scream or cry, he said.

The mental cost of this sort of large-scale content moderation isn’t discussed often, and as the article highlights, the work is frequently exported to the developing world. Can the online commons be kept safe without causing this sort of harm? Can AI/ML help solve this problem? What about federated social networking?
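
On the AI/ML question, here's a minimal sketch of the kind of automated triage that usually gets proposed, assuming a hypothetical harm-scoring model (the `score_harm` callable below is a stand-in, not any real classifier): auto-action the high-confidence cases at both ends so human moderators only ever see the ambiguous middle band.

```python
# Sketch of ML-assisted triage, under the assumption that a harm classifier
# exists. Goal: shrink the set of items a human ever has to look at.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ContentItem:
    item_id: str
    text: str  # in practice this would also cover images/video


def triage(item: ContentItem,
           score_harm: Callable[[ContentItem], float],
           remove_threshold: float = 0.98,
           approve_threshold: float = 0.05) -> str:
    """Route an item: auto-remove, auto-approve, or queue for human review."""
    score = score_harm(item)  # estimated probability the item violates policy
    if score >= remove_threshold:
        return "auto_remove"    # high-confidence violation, no human exposure
    if score <= approve_threshold:
        return "auto_approve"   # high-confidence benign
    return "human_review"       # uncertain: the only bucket humans see


if __name__ == "__main__":
    # Stub scorer standing in for a real model, purely for illustration.
    stub_scorer = lambda item: 0.99 if "violence" in item.text else 0.5
    print(triage(ContentItem("1", "graphic violence clip"), stub_scorer))   # auto_remove
    print(triage(ContentItem("2", "vacation photo caption"), stub_scorer))  # human_review
```

The open question the article raises is whether that "human_review" bucket can ever be made small enough, and who ends up staffing it; federation doesn't obviously change that unless instances can share tooling instead of each re-exposing its own moderators.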