An in-depth report reveals an ugly truth about isolated, unmoderated parts of the Fediverse. It’s a solvable problem, with challenges.

  • Elise@beehaw.org · 1 year ago

    I wonder what kind of computing resources that Microsoft service needs. Isn’t it essentially just a set of hashes? My point is that centralization doesn’t have to be an issue.

    • Sean TilleyOPM · 1 year ago

      It’s a bit of an unknown, since the service is a proprietary black box. With that being said, my guess:

      • A database with perceptual hash data for volumes and volumes of CSAM
      • A means to generate new hashes from media
      • Infrastructure for adding to and auditing that database
      • A REST API for hash comparisons and reporting
      • Integration for pushing reports to NCMEC and law enforcement

      None of those things are impossible or out of reach… but collecting a new database of hashes is challenging. Where do you get it from? How is it stored? Do you allow the public to access the hash data directly, or do you keep it secret like all the other solutions do?

      I’m imagining a solution where servers aggregate all of this data up to a dispatch platform like the one described above, possibly run by a non-profit or NGO, which then dispatches the data to NCMEC directly.

      The other thing to keep in mind is that solutions like PhotoDNA operate at a HUGE scale. I’m talking hundreds of thousands of pieces of reported media per year. It’s something that would require a lot of uptime and the ability to handle a significant number of requests every day.
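
      To make the “REST API for hash comparisons and reporting” item above a bit more concrete, here’s a deliberately minimal Python sketch. Everything in it is a hypothetical stand-in (the route, the JSON field names, the distance threshold); PhotoDNA’s real interface is proprietary, and nothing here describes it.

```python
# Hypothetical sketch only: a minimal hash-comparison endpoint of the kind the
# list above imagines. The route, field names and threshold are invented for
# illustration and do not reflect any real service's API.
from flask import Flask, jsonify, request

app = Flask(__name__)

# In a real deployment this would be a large, access-controlled database of
# perceptual hashes; here it is just an in-memory set of 64-bit integers.
KNOWN_HASHES: set[int] = set()
MATCH_THRESHOLD = 6  # max Hamming distance to count as a match (assumed value)


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")


@app.post("/v1/check")
def check_hash():
    submitted = int(request.json["phash"], 16)  # hex-encoded perceptual hash
    matched = any(
        hamming(submitted, known) <= MATCH_THRESHOLD for known in KNOWN_HASHES
    )
    # A production service would also queue a report to NCMEC here on a match,
    # and would sit behind authentication rather than being open to anyone.
    return jsonify({"match": matched})
```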

      • Elise@beehaw.org · 1 year ago

        Thanks for the thought you put into your answer.

        I’ve been thinking: CSAM is just one of the many problems communities face. For example, YouTube is unable to moderate transphobia properly, which has significant consequences as well.

        Let’s say we had an ideal federated copy of the existing system. It would still not detect many other types of antisocial behavior. All I’m saying is that the existing approach by M$ feels a bit like moral tunnel vision: trying to solve complex human social issues with some kind of silver bullet. It lacks nuance. Whereas in fact this is a community management issue.

        Honestly I feel it’s really a matter of having manageable communities with strong moderation. And the ability to report anonymously, in case one becomes involved in something bad and wants out.

        Thoughts?

    • modulus · 1 year ago

      IMO the hardest part is the legal side, and in fact I’m not very clear on how MS skirted that issue other than through lax US enforcement on corporations. In order to build a DB like this, one must store material that is, ordinarily, illegal to store. Because the so-called perceptual hashes are imperfect, and because the algorithm may be updated, I don’t think one can get away with simply storing the hash of each file. Some kind of computer vision/AI-ish solution might work out, but I wouldn’t want to be the person compiling that training set…

      • Elise@beehaw.org · 1 year ago

        Perhaps the manual reporting tool is enough? Then that content can be forwarded to the central MS service. I wonder if that API can report back to say whether a match is positive.

        Can you elaborate on the hash problem?

        Personally I was thinking of generating a federated set based on user reporting. Perhaps enhanced by checking with the central service as mentioned above. This db can then be synced with trusted instances.
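
        As a rough sketch of that idea (all names assumed, nothing Lemmy-specific): each instance keeps its own set of reported perceptual hashes and periodically merges in the sets published by instances it trusts.

```python
# Sketch of a federated, report-driven hash set, under assumed names.
# Each instance maintains a local set and merges in sets from trusted peers.
from dataclasses import dataclass


@dataclass(frozen=True)
class ReportedHash:
    phash: int   # 64-bit perceptual hash of the reported media
    source: str  # instance that originated the report, e.g. "beehaw.org"


def merge_reports(
    local: set[ReportedHash],
    remote_sets: dict[str, set[ReportedHash]],
    trusted: set[str],
) -> set[ReportedHash]:
    """Union the local set with reports coming from trusted instances only."""
    merged = set(local)
    for instance, reports in remote_sets.items():
        if instance in trusted:
            merged |= reports
    return merged


# Example usage (hypothetical instance names):
# merged = merge_reports(local_reports, fetched_sets, trusted={"lemmy.ml", "beehaw.org"})
```

        A check against the central service, as mentioned above, could then be used to confirm entries before they propagate further.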

        • modulus · 1 year ago

          Perhaps the manual reporting tool is enough? Then that content can be forwarded to the central MS service. I wonder if that API can report back to say whether a match is positive.

          The problem with a lot of this tooling is that you need some sort of accreditation to use it, because it somewhat relies on security through obscurity. As far as I know, you can’t just hit MS’s servers and ask “is this CSAM?” If something like that were possible, it might work.

          Can you elaborate on the hash problem?

          Sure. When you have an image, you can do lots of things to it that change it in some way: change the compression, change the format, crop it, apply a filter… This all changes the file, and so it changes the hash. The perceptual hash system works on the basis of some computer vision techniques, and the idea is that it will try to generate the same hash for pictures that are substantially the same. But this tech is imperfect and will probably change over time. So if there’s a change in the way the hash gets calculated, it wouldn’t be enough to keep the hashes; you’d have to keep the original files to recalculate them, which means storing CSAM, which is ordinarily not allowed, and for good reason.
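
          To make that concrete, here is a small Python illustration (assuming the Pillow and imagehash packages are installed, and using a placeholder file name): re-encoding a picture changes its cryptographic hash completely, while its perceptual hash barely moves.

```python
# Illustration: cryptographic vs. perceptual hashes under re-encoding.
# Assumes Pillow and imagehash are installed; "example.jpg" is a placeholder.
import hashlib
import io

import imagehash
from PIL import Image

original = Image.open("example.jpg")

# Re-encode the same picture at a different JPEG quality.
buf = io.BytesIO()
original.save(buf, format="JPEG", quality=50)
recompressed = Image.open(io.BytesIO(buf.getvalue()))

# The cryptographic hashes of the two files have nothing in common.
sha_original = hashlib.sha256(open("example.jpg", "rb").read()).hexdigest()
sha_recompressed = hashlib.sha256(buf.getvalue()).hexdigest()
print(sha_original == sha_recompressed)  # False

# The perceptual hashes are (almost) identical; subtracting two imagehash
# values gives the Hamming distance between them in bits.
p_original = imagehash.phash(original)
p_recompressed = imagehash.phash(recompressed)
print(p_original - p_recompressed)  # small number, typically 0-4 bits
```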

          For a hint at how bad these hashes can get: they are reversible, vulnerable to pre-image attacks, and so on.

          Some of this is probably inevitable in this type of system. You don’t want to make it easy for someone to hit the servers with a large number of hashes and then use IPFS or the BitTorrent DHT to retrieve the positives (you’d be helping people get CSAM). The problem is hard.

          Personally I was thinking of generating a federated set based on user reporting. Perhaps enhanced by checking with the central service as mentioned above. This db can then be synced with trusted instances.

          Something like that could work, maybe obscuring some of the hash content (random parts of it) so that it doesn’t become a way to actually find the stuff.
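
          As a sketch of that masking idea (all parameters here are assumptions): an instance could zero out a random subset of bits before publishing a 64-bit perceptual hash, so the published value is not useful for going out and finding the material, while still allowing an approximate first-pass comparison.

```python
# Sketch of publishing deliberately degraded perceptual hashes.
# Bit counts and the threshold are assumptions chosen for illustration.
import random

HASH_BITS = 64
MASKED_BITS = 16  # how many bits to withhold from the published hash


def obscure(phash: int, seed: int) -> tuple[int, int]:
    """Return (masked_hash, mask) with MASKED_BITS randomly chosen bits zeroed."""
    rng = random.Random(seed)
    mask = 0
    for bit in rng.sample(range(HASH_BITS), MASKED_BITS):
        mask |= 1 << bit
    return phash & ~mask, mask


def could_match(candidate: int, masked_hash: int, mask: int, threshold: int = 6) -> bool:
    """Compare only the published (unmasked) bits, tolerating a small distance."""
    diff = (candidate & ~mask) ^ masked_hash
    return bin(diff).count("1") <= threshold
```

          The trade-off is that the more bits are withheld, the more false positives the first pass produces, so a match would still need to be confirmed through some accredited channel rather than acted on directly.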

          Whatever decisions are made have to be well thought through so as not to make the problem worse.

            • Elise@beehaw.org · edited · 1 year ago

            Perhaps this technical approach is the wrong way entirely. In a scale-free network it might seem like a good approach because of the seemingly infinite number of edges the hub nodes serve (YouTube, Twitter). The numbers are so large that you tend to reach for a technical solution.

            However, a network can be laid out in a way that is more conducive to meaningful moderation. By meaningful I mean that there are people involved rather than algorithms. This requires small-world communities with influential core members or moderators.

            This allows for broader, more inclusive and more nuanced moderation. For example, I assume YouTube detects and removes CSAM, yet it still hosts CSAM-adjacent content because that content is legal, even though a human community would likely filter it out. Likewise, issues such as transphobia are not legal problems and thus are not properly moderated. On the flip side, content gets removed that has nothing wrong with it. When different communities create their own meaning through values, and principles based on those values, we will have more diversity, and that allows for social progress in the long run.

            This might be the case for the federated structure of Lemmy.

            Of course this ignores communities that break off and do their own thing and polarize into a more extreme form. I feel that is a different problem that requires a different solution.

            Excuse me for being all over the place with this post, but I have to run :)

              • modulus · 1 year ago

              Well, in a way that’s what we’re doing now, and by and large it works. Obviously there’s some leakage, which is impossible to bring down to zero, but which is worth working to reduce.

              The other side of the coin is that the price of this moderation model is subjecting a lot more people to a lot more horrible shit, and I unfortunately don’t know any way around that.

                • Elise@beehaw.org · 1 year ago

                Maybe it would be good to know more about this leakage. Are these isolated communities? I’ve personally not encountered any CSAM so far. The only thing I’ve seen was a transphobe, and they were banned quickly.

                And as for subjecting moderators to bad stuff: is that true? Why would anyone keep posting CSAM in a place where it constantly gets removed and their accounts get banned?

                ATM it seems to me like these are isolated instances?