Anonymity and privacy seem to be at odds with a social platform’s ability to moderate content and control spam.

If users have sufficient privacy and anonymity, then they can simply use another identity to come back, or use multiple identities.

Are there ways around this? It seems that any method of keeping a banned user off the platform would require the platform to know information about the user and their identity.

  • CyclohexaneOP · 1 year ago

    Maybe if, somehow, content moderation focused more on CONTENT rather than the users themselves? I suppose that may be the best way, but that’s easier said than done.

  • plisken@lemmy.fmhy.ml · 1 year ago

    Anonymity and privacy are not inherently universal. Your true identity can be known to some and unknown to others, in the latter case masked via an alias.

    Thus, I propose a hypothetical arrangement: separating Content Instances and Identity Instances.

    Content Instances host the main communities and discussions. There must still be “many” (hundreds, maybe even thousands) of these so that none can wield power over the others.

    Within Identity Instances you are known, or at least verified and vetted. External to the Identity Instance, a user is known only by their alias from that instance. There should be many more of these, with a maximum user size of ~100 (see Dunbar’s number).

    Further, federation should not be open by default. New Identity Instances are quarantined initially: they can subscribe to communities on Content Instances, but their posts and comments are not federated back to the Content Instances.

    The goal here is to employ a heavily distributed Divide & Conquer approach to moderation and community management. The users of an Identity Instance are responsible to one another, as any of their actions may cause the entire instance’s users to be affected (e.g. defederation). Even better, if you know each other you should feel some real social pressure that your actions online will impact your social life IRL.
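
    A rough sketch of how a Content Instance might gate incoming activity under this arrangement (purely hypothetical; the class, fields, and thresholds are invented for illustration, not an existing Lemmy/ActivityPub API):

        # Hypothetical sketch: how a Content Instance might decide whether
        # to federate a post arriving from an Identity Instance.
        from dataclasses import dataclass

        @dataclass
        class IdentityInstance:
            domain: str        # e.g. "identity.example.org"
            user_count: int    # the proposal caps this near Dunbar's number
            quarantined: bool  # new instances start out quarantined
            defederated: bool  # the collective penalty for bad behaviour

        MAX_USERS = 100  # the ~100 ceiling proposed above

        def accept_incoming_post(source: IdentityInstance) -> bool:
            """Should this Content Instance federate the post?"""
            if source.defederated:
                return False   # an instance-wide ban affects every member
            if source.user_count > MAX_USERS:
                return False   # too large for members to hold each other accountable
            if source.quarantined:
                return False   # may read and subscribe, but posts don't federate back
            return True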

    But to be honest and pragmatic, I don’t think this will form organically, nor do I think it could be enforced. And even in practice it probably wouldn’t work. But perhaps it’s a nice dream.

    • Cracks_InTheWalls@sh.itjust.works · 1 year ago

      I think something like this could occur. Something I kicked around in my local city community was the possibility of our local non-profit ISP (National Capital Freenet in Ottawa, Canada) hosting an instance. In practice, it would likely be an identity instance more than anything else. It would likely require membership, so a) there’s a donation required, which is fine; NCF is a good group, and b) they do need your actual identity, because part of their membership involves an agreement to certain conduct.

      NCF is something of a relic from an earlier internet age in many respects, but this kind of thing still exists elsewhere. Maybe this is a role other such organizations can take on, both increasing their relevance and adding another layer of accountability on users re: not being shitheads.

      Idk, something to think about.

      Edit to acknowledge I’m not a member of NCF right now and have no involvement with them. I just think they’re neat and this could be a neat thing for them to do for my city’s residents.

    • Catsrules · 1 year ago

      New Identity Instances are quarantined initially

      What would the process be for an identity instance to become trusted? Like would you need to get approval from multiple other identity instances or something?

      • plisken@lemmy.fmhy.ml · 1 year ago

        I can’t say what it should be. I’d argue that each Content Instance should have its own path to becoming trusted. An example could be: demonstrating quality post/comment content during the quarantine period.
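
        For instance, a Content Instance could track a quarantined instance’s activity and lift the quarantine once some quality bar is met. A toy sketch, with thresholds invented purely for illustration:

            # Toy sketch of one possible "path to trusted": lift the quarantine
            # once a quarantined Identity Instance has shown enough well-received
            # activity. All numbers here are invented.
            def should_lift_quarantine(days_quarantined: int,
                                       posts_reviewed: int,
                                       removal_rate: float) -> bool:
                return (days_quarantined >= 30      # minimum observation window
                        and posts_reviewed >= 50    # enough activity to judge
                        and removal_rate < 0.02)    # almost nothing was removed

            # should_lift_quarantine(45, 120, 0.01) -> True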

  • Margot Robbie@lemmy.world · 1 year ago

    If they show disruptive behavior that you don’t think is acceptable, ban them. There are very few people who would be motivated enough to spend the time remaking an account just to troll and get instantly banned again, and for those people an IP ban should deter them enough.

  • doylio@lemmy.ca · 1 year ago

    I just posted about a system where users put down a deposit for their accounts. Bad actors lose the deposit. Imposes a cost on spammers and trolls and essentially no cost on honest users.
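
    Roughly this, as a sketch (entirely hypothetical; no real payment mechanism or Lemmy API implied):

        # Hypothetical sketch of the deposit idea: an account escrows a small
        # deposit at signup; a ban forfeits it, a clean closure refunds it.
        from dataclasses import dataclass

        @dataclass
        class Account:
            name: str
            deposit_cents: int
            banned: bool = False

        def refund_on_close(acct: Account) -> int:
            """How much of the deposit to return when the account closes."""
            return 0 if acct.banned else acct.deposit_cents

        # refund_on_close(Account("spambot", 500, banned=True))  -> 0
        # refund_on_close(Account("alice", 500))                 -> 500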

    Also, how do I link to a post in such a way that it’s cross-instance friendly?

    • CyclohexaneOP · 1 year ago

      The problem is that you’re paying money in the hope that a random stranger on the Internet doesn’t decide you’re a bad actor.

      Plus, this is another system that only works against poor people.

  • CyclohexaneOP · 1 year ago

    What is even the value of content moderation being handled by a separate entity (the platform admins or community mods)? Why not just filter content on your own? Why do we like having others choose for us what content we see?

    • tr00st@lemmy.tr00st.co.uk · 1 year ago

      Two main points personally:

      • with self-moderation, you can’t really say “I don’t want to see this sort of content”, you can only say “I don’t want to see this content again”. A well-stated set of rules for a community lets you know what to expect, so you get to make that choice in advance. This is a massive difference in preventing distress and general unpleasant feelings. It’s not absolutely necessary, but it’s a lot nicer.
      • it avoids massive duplication of effort. If you have a reader-to-moderator ratio of 1000:1, you save the vast majority of the self-moderation work those people would otherwise be doing. Yes, reporting exists, but it’s a tiny fraction of the time one would spend “moderating” for yourself.
      • CyclohexaneOP · 1 year ago

        Great points; thanks! On that, there are a couple of ways I could see content moderation involving more personal freedom and choice:

        • give users more general content filters. For example, “block all content containing ____ racial slur”. This could be made more complex as well, especially with how open-source language models are coming along (rough sketch after this list)

        • give users the ability to follow another user’s content-moderation choices. Consequently, users can be part of a group where, if one user flags content or a type of content, the flag applies to the others. The nice thing about this is that it would be extremely fluid, and you can opt out with a button.
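
        A bare-bones sketch of the first idea, a user-side keyword filter (the blocked-terms list and the feed variable are placeholders; a local language model could replace the simple substring check):

            # Bare-bones sketch of a user-side content filter.
            # BLOCKED_TERMS is the user's own list; a language model could
            # replace this naive substring match for fuzzier detection.
            BLOCKED_TERMS = {"example_slur_1", "example_slur_2"}

            def user_wants_to_see(post_text: str) -> bool:
                lowered = post_text.lower()
                return not any(term in lowered for term in BLOCKED_TERMS)

            # feed = [p for p in incoming_posts if user_wants_to_see(p.text)]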

        This could lead to better moderation in my opinion, and less disconnect between moderators and users.

        Does not solve the anonymity issue, but that’s for another comment.

        • tr00st@lemmy.tr00st.co.uk · 1 year ago

          Those are reasonable options, though I’m pessimistic enough to believe that trolls will outsmart every automated system, so we’d probably want some manual options. I wouldn’t say it’s not possible; it would just require quite a bit of work, and would likely be an ongoing battle to improve your auto-moderator.

          It feels like I’m moving the goalposts, so apologies, but your response got me thinking further. The other big advantage I can think of for central censorship is that it can actually prevent hosting of content, which has two benefits:

          • legal concerns: many countries will require the removal of some amount of content, extreme stuff of all the usual sorts. Some jurisdictions will also require that minors be prevented from accessing certain content, at least to a reasonable degree; refusing to host that kind of content is an easy solution.
          • community unity and protection: this is a lot more abstract, and debatable, but I’d contend that central moderation can give a certain “this content isn’t wanted in our community” signal that individual censorship won’t. Really difficult to define, though.
          • CyclohexaneOP · 1 year ago

            First, just to clarify, I am not saying all moderation should be automatic. That is what I said in my first point, but in my second point, moderation is still manual and delegated to another person. The only difference is that you can very easily opt out of it without losing anything else, or you can override it.

            So, instead of moderation being something tightly coupled with a community or space where people post, it becomes something separate. You can “subscribe” to a moderation policy managed by someone or a group of people, and anything they ban (automatically or manually) applies to you without extra effort. The benefit of this is that if you ever regret this “subscription”, you don’t lose out on the entire community. You can simply change the moderation policy.
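
            A minimal sketch of what “subscribing” to a moderation policy could look like on the client side (hypothetical; none of this maps to an existing Lemmy feature):

                # Minimal sketch of client-side, subscribable moderation policies.
                # A policy is just a set of removal decisions published by its
                # maintainers; switching policies never removes you from a community.
                from dataclasses import dataclass, field

                @dataclass
                class ModerationPolicy:
                    name: str
                    removed_post_ids: set = field(default_factory=set)

                @dataclass
                class UserSettings:
                    policy: ModerationPolicy                     # the current "subscription"
                    overrides: set = field(default_factory=set)  # posts the user un-hid

                def visible(post_id: str, settings: UserSettings) -> bool:
                    if post_id in settings.overrides:
                        return True                              # the user overrode the policy
                    return post_id not in settings.policy.removed_post_ids

                # Changing moderation is just: settings.policy = some_other_policy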

            To answer your other points:

            • legal concerns: I think it will always be hard to please all lawmakers. But I think this approach would be coupled with a censorship-proof model. It is a protocol that is hard to ban outright, as another instance can spring up anywhere to provide a gateway to the rest.
            • community unity: this is still possible. The “community” in this case is the group of people subscribing to a particular moderation policy. The key is that unsubscribing from this policy is extremely easy without much loss. User freedom is satisfied.
    • Catsrules · 1 year ago

      What is even the value of content moderation being handled by a separate entity (the platform admins or community mods)?

      The value is I don’t have to do anything. Dealing with spam and the other garbage that people and bots post sucks. I just want to view my relevant content, not wade through hundreds of off-topic or spam posts. Unfortunately, in order to have that experience I do need to put some trust in the platform’s admins, mods, and automated filtering systems.

      That said, it would be interesting if there were an option to turn on a raw feed: zero filtering, for the people who are interested in doing it themselves.

  • 0xCAFe@feddit.de · 1 year ago

    I’ve read that the healthiest discussions arise with stable pseudonyms. They allow users to stay anonymous, i.e. they don’t have to use their real name and identity, but since they have a recognizable name there’s still reputation involved, which prevents an overly free-for-all culture.

    • CyclohexaneOP · 1 year ago

      I agree that this is good, but depending on your threat model, this may not really be sufficient anonymity.

      Post and comment history may be plenty to identify you, and your IP and other personal data can still be visible to instance admins.