Is there anything stopping someone from making 1000 accounts or bots to artificially upvote posts on the Lemmy network?

I guess a single instance can moderate its users using CAPTCHAs etc., but since it’s federated, an evil actor could set up an instance without these restrictions.

An instance could maybe protect its users against this by blocking the domains of evil instances, but does this approach scale?

A solution might be to limit the number of upvotes accepted from a single instance in a certain time frame, but that won’t work if the other instance is very large and the upvotes are legitimate.
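
To be concrete, here’s a rough sketch (in Python, purely hypothetical; the class name, limits, and sliding-window approach are all made up by me, not anything Lemmy actually does) of the per-instance cap I mean:

```python
from collections import defaultdict, deque
import time

class InstanceVoteLimiter:
    """Hypothetical sketch: cap upvotes accepted from one remote instance
    within a sliding time window."""

    def __init__(self, max_votes, window_secs):
        self.max_votes = max_votes
        self.window_secs = window_secs
        # instance domain -> deque of timestamps of recently accepted votes
        self.votes = defaultdict(deque)

    def accept(self, instance, now=None):
        """Return True if a vote from `instance` should be accepted now."""
        now = time.monotonic() if now is None else now
        q = self.votes[instance]
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] > self.window_secs:
            q.popleft()
        if len(q) >= self.max_votes:
            return False  # over the cap; reject (or queue for review)
        q.append(now)
        return True
```

This also makes the weakness obvious: a fixed cap can’t distinguish 1000 legitimate votes from a huge instance from 1000 fake ones from a sock-puppet instance.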

I’d like to hear if this issue has already been thought out or what ideas that you might have.

  • @lemmy_check_thatOP
    42 years ago

    Thank you for the reply. I’m happy to hear that it sounds like a more or less solved problem. I guess Mastodon has proven that these methods do in fact work.

    Ban manipulated accounts.

    I guess it’s an entire field of study, how to automate spam detection. It will be nice to see how this will be applied to open-source federation in the future. Maybe it’s already used?

    Remove the manipulated posts / comments.

    I guess this applies to upvotes as well?

    • DessalinesA
      42 years ago

      Yeah, bots and spam will be an ever-present problem that becomes magnified in federated networks… I’m sure we’ll have to get creative with figuring out how to stop them as things grow.

      I guess this applies to upvotes as well?

      Yep, but it’s tricky. I mean, currently one person could make several accounts, possibly even on the same server, and upvote their own content. We don’t track IPs or fingerprints, so we wouldn’t really be able to tell that they’re the same person. But we can at least stop automated bots via CAPTCHAs and other measures, to make sure that someone can’t create 1000s of accounts to upvote their own stuff.
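
      One heuristic that doesn’t need IPs at all (this is just a sketch of my own, not something Lemmy implements; the function name and thresholds are invented) is to flag voters whose upvotes are overwhelmingly concentrated on a single author:

      ```python
      from collections import Counter

      def flag_suspected_vote_ring(votes, min_votes=20, concentration=0.8):
          """Hypothetical heuristic: flag voters whose upvotes mostly
          target one author.

          votes: iterable of (voter, post_author) pairs.
          Returns the set of voters worth a manual review.
          """
          per_voter = {}
          for voter, author in votes:
              per_voter.setdefault(voter, Counter())[author] += 1

          flagged = set()
          for voter, targets in per_voter.items():
              total = sum(targets.values())
              # Count of votes aimed at this voter's single favorite author.
              top = targets.most_common(1)[0][1]
              if total >= min_votes and top / total >= concentration:
                  flagged.add(voter)
          return flagged
      ```

      It would only surface candidates for human moderators, since heavy fans of one author would trip it too.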