An effect of federated communities is that an instance can become a “safe space” for people of a similar mindset: for example, all the left-wing supporters could be on one instance and all the right-wing supporters on another, and they would largely not be able to affect each other. An example right on this domain is communism.lemmy.ml

What is your opinion on this? Personally, I think it can go either way. It’s generally good to have places where like-minded individuals can share their thoughts without the opposition attacking them, and it’s generally good to have safe spaces for marginalized or potentially vulnerable people like the LGBTQ community, but unhealthy echo chambers and toxicity can also form. Worse, what if an instance pops up for people who are, for example, highly bigoted or discriminatory, or who advocate committing serious crimes?

  • @dreamland · 4 points · 4 years ago

    Generally if something is going to affect my life in a structural way, I want to have a say in that something. So if a group’s primary aim is political activism, then I am less inclined to favor special protections for the discussions of said group.

    Of course, determining the intent of a group is not easy, and not all that is potentially dangerous is going to materialize into a real danger, so dealing with a lot of false positives can be counterproductive too.

    If in doubt, I lean toward freedom of speech (and thought, and conscience).

    Like many people, I don’t want to live in a toxic society, but trying to discipline and control every little thing can in and of itself become toxic and oppressive.

    Every rhetorical or conceptual device we use to protect the vulnerable can be (and already has been) hijacked and weaponized against our own interests. So for example, the corporate interests use weaponized identity politics when they deliberately select minority talking heads to promote the views of the exploiter class to the masses. Every good thing we have developed has also been used in evil ways. Promoting morality generally makes people behave better but it also makes people more (intellectually and emotionally) exploitable by those who cynically weaponize morality for private gain.

    So the more we make ourselves sensitive to abuse, the more effective the weaponized forms of “sensitivity” will also become.

    I don’t see any obvious or easy solutions to any of these problems. It all seems like a balancing act to me.

  • @appa · 3 points · 4 years ago

    I had similar thoughts upon reading /u/dessalines’ mission statement and observing the mod logs. How are you going to feel when the platform you built finds its own Gab?

    • Dessalines (admin) · 3 points · 4 years ago

      I’m def not gonna be pleased, and it’s probably unavoidable, unfortunately, even though I’ve hardcoded a slur filter.

      The best we can do is implement block lists to stop federation with the racist ones.
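
      Purely as an illustration of that approach (not Lemmy’s actual code): a hardcoded slur filter can boil down to one case-insensitive regex, compiled once and applied to user-submitted text before it is stored or federated. A minimal Rust sketch, assuming the regex crate and placeholder patterns:

      ```rust
      // Illustrative sketch of a hardcoded slur filter; the pattern below
      // uses placeholders, not an actual slur list.
      use regex::Regex;

      const SLUR_PATTERN: &str = r"(?i)\b(slur1|slur2|slur3)\b";

      /// Replace any match of the hardcoded pattern before the text is saved.
      fn remove_slurs(text: &str) -> String {
          let re = Regex::new(SLUR_PATTERN).expect("slur regex must be valid");
          re.replace_all(text, "*removed*").into_owned()
      }

      fn main() {
          let comment = "a comment containing slur2";
          assert_eq!(remove_slurs(comment), "a comment containing *removed*");
      }
      ```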

      • @AgreeableLandscape (OP) · 2 points · 4 years ago (edited)

        We should also implement a blacklist of community names, for example “fatpeoplehate” or “<race>hate”.
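
        A hypothetical sketch of that check (the fragment list and function name are made up for illustration): when a community is created, reject any name containing a blacklisted fragment, case-insensitively.

        ```rust
        // Hypothetical community-name blacklist; fragments are examples only.
        const BLOCKED_NAME_FRAGMENTS: &[&str] = &["fatpeoplehate", "hate"];

        /// Returns false if the proposed community name contains a blocked fragment.
        fn community_name_allowed(name: &str) -> bool {
            let lower = name.to_lowercase();
            !BLOCKED_NAME_FRAGMENTS.iter().any(|f| lower.contains(f))
        }

        fn main() {
            assert!(!community_name_allowed("FatPeopleHate"));
            assert!(community_name_allowed("cooking"));
        }
        ```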

  • @AgreeableLandscape (OP) · 2 points · 4 years ago (edited)

    Related: /u/dessalines, /u/nutomic, once federation gets going, will there be a way for instances to blacklist truly toxic or despicable instances from being relayed via their servers? I’m assuming the greater fediverse already has mechanisms for this; is that correct? Is there a global blacklist, or is it per instance only?

    • @nutomic (admin) · 3 points · 4 years ago

      Yes, that is definitely something we will have to implement. Blocklists in Mastodon and PeerTube are all per instance, and I don’t see any reason to do it differently.
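
      For concreteness, a rough sketch of what such a per-instance blocklist could look like on the receiving side, in the spirit of Mastodon’s and PeerTube’s domain blocks (the struct and method names here are invented, not Lemmy’s actual federation code): incoming activity is dropped when the sending actor’s host is on the local blocklist.

      ```rust
      // Hypothetical per-instance blocklist: activity from blocked hosts is
      // rejected before it is processed further.
      use std::collections::HashSet;

      struct FederationSettings {
          blocked_instances: HashSet<String>,
      }

      impl FederationSettings {
          /// Accept an activity only if the actor's host is not blocklisted.
          fn accepts_actor(&self, actor_id: &str) -> bool {
              // actor_id is an ActivityPub URL, e.g. "https://bad.example/u/troll";
              // take the host component and compare it against the blocklist.
              actor_id
                  .split('/')
                  .nth(2)
                  .map(|host| !self.blocked_instances.contains(host))
                  .unwrap_or(false)
          }
      }

      fn main() {
          let settings = FederationSettings {
              blocked_instances: vec!["bad.example".to_string()].into_iter().collect(),
          };
          assert!(!settings.accepts_actor("https://bad.example/u/troll"));
          assert!(settings.accepts_actor("https://lemmy.ml/u/nutomic"));
      }
      ```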