Almost exactly six months after Twitter got taken over by a petulant edge lord, people seem to be done grieving the communities this disrupted and the connections they lost, and are ready, eager even, to jump head-first into another toxic relationship. This time with BlueSky.

  • Arthur Besse · 2 years ago

    > ActivityPub has over 20k different independent instances, mostly federating with one another. BlueSky has one, and if you try to set up an independent one, it won’t federate.

    I’m guessing you still haven’t read this post I linked to? Here is the first paragraph:

    > Moderation is a necessary feature of social spaces. It’s how bad behavior gets constrained, norms get set, and disputes get resolved. We’ve kept the Bluesky app invite-only and are finishing moderation before the last pieces of open federation because we wanted to prioritize user safety from the start.

    It’s a little surprising that the person you’re linking to managed to install and operate their own Personal Data Server without reading enough of the BlueSky website to see that federation isn’t turned on yet!

    > You are confusing content warnings (not exposing others to potentially triggering content you post) with moderation (making it hard to harass users). These are two very different things.

    Why should they be different? If a user neglects to label their own post, shouldn’t other people be able to label it? (And shouldn’t the reader be able to decide whose labels to give what importance to?)
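
    To make that concrete, here is a toy sketch of what reader-chosen label weighting could look like. The types, DIDs and thresholds are invented for illustration; this is not the actual AT Protocol labeling API.

    ```typescript
    // Toy sketch of reader-chosen label weighting; not the actual AT Protocol
    // labeling API. All names, DIDs and thresholds here are invented.

    type Label = { src: string; val: string }; // who applied the label, and what it says
    type Post = { uri: string; labels: Label[] };

    // Each reader decides how much weight each labeler's opinion carries for them.
    const labelerWeights: Record<string, number> = {
      "did:example:moderation-coop": 1.0,
      "did:example:random-account": 0.1,
    };

    // Hide a post if labelers *this reader* trusts have flagged it strongly enough,
    // whether or not the author labeled it themselves.
    function shouldHide(post: Post, threshold = 0.8): boolean {
      const score = post.labels
        .filter((l) => l.val === "graphic-content")
        .reduce((sum, l) => sum + (labelerWeights[l.src] ?? 0), 0);
      return score >= threshold;
    }
    ```

    The point is simply that both the labeling and the weighting happen on the reader’s side, independent of what the author chose to do.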

    • rysiek@szmer.info (OP) · 2 years ago

      > Moderation is a necessary feature of social spaces. It’s how bad behavior gets constrained, norms get set, and disputes get resolved. We’ve kept the Bluesky app invite-only and are finishing moderation before the last pieces of open federation because we wanted to prioritize user safety from the start.

      I do hope I will eat my words as far as moderation on BlueSky is concerned. I do doubt I will, though.

      > It’s a little surprising that the person you’re linking to managed to install and operate their own Personal Data Server without reading enough of the BlueSky website to see that federation isn’t turned on yet!

      Until federation is turned on, they don’t get to call BlueSky a decentralized/federated social network. And until an actually decentralized DID method is used, they don’t get to call it a decentralized protocol. And until they actually implement moderation and anti-harassment features, they don’t get to claim they care about moderation. They cared enough about “free speech” to design a whole protocol around it, so I believe I am quite correct to say that moderation is an afterthought in BlueSky.

      All of this is basically “trust us, this time we will not screw people over” coming from a Twitter-funded startup founded by Jack Dorsey. I don’t believe they deserve the benefit of the doubt.
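
      On the DID point above, here is a rough sketch of the difference I mean (illustration only, not actual atproto code; it assumes plc documents are served at plc.directory/<did> and that did:web follows the standard .well-known rule for a bare domain): did:web resolves against the domain inside the identifier itself, while did:plc resolves against one directory service.

      ```typescript
      // Rough illustration, not actual atproto code. Assumes did:plc documents
      // are served at https://plc.directory/<did> and did:web follows the
      // standard .well-known rule for a bare domain.

      async function resolveDid(did: string): Promise<unknown> {
        if (did.startsWith("did:web:")) {
          // The identifier's own domain serves the DID document.
          const domain = did.slice("did:web:".length).split(":")[0];
          const res = await fetch(`https://${domain}/.well-known/did.json`);
          return res.json();
        }
        if (did.startsWith("did:plc:")) {
          // Every lookup goes through a single directory service.
          const res = await fetch(`https://plc.directory/${did}`);
          return res.json();
        }
        throw new Error(`unsupported DID method: ${did}`);
      }
      ```

      Whoever runs that single directory gets to decide, in practice, whether an identity keeps resolving at all.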

      > Why should they be different? If a user neglects to label their own post, shouldn’t other people be able to label it? (And shouldn’t the reader be able to decide whose labels to give what importance to?)

      It’s not about labeling; it’s about protecting people using a given network from malicious/harassing behaviour. That is always contextual. Putting a label on a post doesn’t mean much; it loses a lot of the context. Saying “you’re not welcome in this community” after reviewing the broader context (multiple posts, etc.) is a much more effective way to do this.

      You’re also completely missing the point that it’s not just about “whose content I see” but also about “who sees my posts”. As I wrote in the blogpost:

      What actual difference would being able to choose between different recommendation/discoverability algorithms make for at-risk folks who are constantly harassed on Twitter? There is no way to opt out of “reach” algorithms indexing one’s posts, as far as I can see in the ATproto and BS documentation. So fash/harassers would be able to choose an algorithm that basically recommends targets to them.

      On the other hand, harassment victims could choose an algo that does not recommend harassers to them. But the problem for them is not that they are recommended to follow harassers’ accounts; it’s that harassers get to jump into their replies and pile on using quote-posts and so on, aided and abetted by recommendation algorithms whose indexing one cannot opt out of to protect oneself.
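
      To put that structural point in toy-code form (a deliberately crude model, not actual ATproto code): choosing your own feed algorithm only changes what you read; it does nothing about what a shared index serves to everyone else’s algorithm.

      ```typescript
      // Deliberately crude toy model, not actual ATproto code: it only exists
      // to show where the filtering happens.

      type Post = { author: string; text: string };

      const globalIndex: Post[] = []; // every public post ends up here

      function publish(post: Post) {
        globalIndex.push(post); // the author has no opt-out hook at this step
      }

      // The victim's chosen algorithm filters what *they* see...
      function safeFeed(blocked: Set<string>): Post[] {
        return globalIndex.filter((p) => !blocked.has(p.author));
      }

      // ...while a harasser's chosen algorithm reads the exact same index, so
      // the victim's choice above does nothing to limit what it can surface.
      function targetFeed(keywords: string[]): Post[] {
        return globalIndex.filter((p) =>
          keywords.some((k) => p.text.toLowerCase().includes(k))
        );
      }
      ```

      The only lever the protocol gives the person being harassed is on the read side; the write/index side is the same for everyone.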

      Anyway, we won’t agree. I rarely find common ground with free-speech maximalists. I see fedi admins and moderators as people helping protect and nurture their communities; you see them as “hostage-holders”. We might as well stop here.