Been watching the recent surge in fediverse users for about a week now. Last week it was climbing at what I would call a natural, organic pace. Over the last couple of days, though, it's jumped by something like 350k accounts.

Love to see the user growth, but these have to be bot-created accounts. I don't want this to become a bot-infested community. I see the value in bots when used correctly, but let's be real: bots blending into the general population could ruin this community.

Is there anything planned to address this? Is some third party working to undermine the stability of Lemmy / the fediverse?

  • terribleplan@lemmy.nrd.li · +28 · 2 years ago

    I think some of it comes down to admins who left their (small) instances open (no captcha, no application, no email validation), not knowing how bad an idea that currently is given the maturity level of Lemmy and the (very recent) influx of bots. I am reaching out to the admins of the fastest-growing servers according to FediDB whenever the growth looks suspicious (based on growth rate, the participation rate of their users, and the content those users post). In many of these cases we are talking thousands of new accounts in the past few days on instances that have single-digit active daily/weekly users.

    So far the responses I have gotten have been appreciative and the admins are taking action, but not everyone has responded. Also, the tooling to find and delete such accounts is pretty lacking, as far as I can tell.

      • terribleplan@lemmy.nrd.li · +2 · 2 years ago

        I'm not sure. At a user level, perhaps some sort of tracking of logins, posting frequency, that sort of stuff. If a user signs up and immediately starts making hundreds of posts, something is probably up and an admin should be made aware somehow. If a dormant account wakes up and starts posting a lot, maybe an admin should take a casual look. Also, as much as people seem to hate it, track some IP addresses, at least temporarily. If 100+ accounts all sign up from one IP in the space of an hour, they are probably less than legitimate.
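        To make that last check concrete, here is a minimal sketch of what an IP-clustering query could look like. Purely hypothetical: Lemmy doesn't necessarily record a signup IP at all, and the table and column names here are assumptions, not its real schema.

        ```python
        # Hypothetical burst detector: flag IPs with 100+ signups in the
        # past hour. Table and column names are assumed, not Lemmy's schema.
        import psycopg2

        FLAG_THRESHOLD = 100

        conn = psycopg2.connect("dbname=lemmy user=lemmy")
        with conn.cursor() as cur:
            cur.execute(
                """
                SELECT signup_ip, COUNT(*) AS signups
                FROM local_user
                WHERE published > NOW() - INTERVAL '1 hour'
                GROUP BY signup_ip
                HAVING COUNT(*) >= %s
                """,
                (FLAG_THRESHOLD,),
            )
            for ip, count in cur.fetchall():
                print(f"suspicious: {count} signups from {ip} in the last hour")
        ```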

        Assuming the problem is posts and comments by bots, something that looks for known spam copypasta, previously moderated/admin'd content, or simple keywords could be enough on a small instance. Going further, perhaps something that reads the posts from users of your instance, has them classified based on previous admin actions (and probably some manual work to flag things as "known good"), and trains some sort of classifier (Bayes/Markov/ML/whatever). Such tools already exist and are in wide use for email spam filtering and the like. They aren't perfect, but they would make an OK first line of defense that can raise things to the attention of the admin.
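        As a toy sketch of the classifier idea, assuming off-the-shelf scikit-learn and made-up training snippets (in practice you would feed it your instance's actual removed/approved content):

        ```python
        # Toy Bayes-style spam filter trained on past admin actions.
        # The example texts are placeholders for real moderated content.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        removed = ["buy followers now!!!", "crypto giveaway, click here"]
        approved = ["great write-up, thanks", "how do I federate with kbin?"]

        texts = removed + approved
        labels = [1] * len(removed) + [0] * len(approved)  # 1 = spam

        model = make_pipeline(TfidfVectorizer(), MultinomialNB())
        model.fit(texts, labels)

        # Anything scored above the threshold gets raised to an admin,
        # not auto-removed: human in the loop.
        new_post = "free crypto giveaway, click here"
        spam_prob = model.predict_proba([new_post])[0][1]
        if spam_prob > 0.8:
            print(f"flag for admin review ({spam_prob:.2f}): {new_post!r}")
        ```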

        I am sure you could go further down the automation side, but I would imagine all of these are "human in the loop" sort of things: once a user/post/whatever gets flagged, it generates some sort of report for an admin to take a look at. I don't know how much of this stuff tools like AutoModerator or mod bots did on Reddit, but a decent amount of it would probably be transferable, however it was done.

        Perhaps some/all of this doesn't get put into Lemmy itself but instead interacts through admin APIs and/or the database. I would start with just basic things in Lemmy itself, as at the moment there is hardly any admin interface to Lemmy at all. If I just want a list of the users on my instance, I have to query the database. Make deleting/purging users easier (I have heard from some admins having bot trouble that it was easier to ban accounts than to delete them). Properly split out the modlog per community, show all the details of each action, and show whether something was a mod or admin action.
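        For reference, "query the database" today means something along these lines; the person/local_user join is based on my understanding of Lemmy's Postgres schema and may differ between versions:

        ```python
        # Listing an instance's local users straight from Postgres, since
        # there's no admin UI for it. Schema details may vary by Lemmy version.
        import psycopg2

        conn = psycopg2.connect("dbname=lemmy user=lemmy")
        with conn.cursor() as cur:
            cur.execute(
                """
                SELECT p.name, p.published
                FROM person AS p
                JOIN local_user AS lu ON lu.person_id = p.id
                ORDER BY p.published DESC
                """
            )
            for name, published in cur.fetchall():
                print(name, published)
        ```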

  • JoYo 🇺🇸 · +27/-4 · 2 years ago

    there’s no karma to farm. there’s no algorithm to game. the best they can do is spam.

    • phase_change@sh.itjust.works · +15/-1 · 2 years ago

      No. They can be used in influence campaigns. They can upvote the posts and comments the controllers want you to see and downvote those they don’t.

      Spam’s obvious and can be dealt with. Bots altering what shows up in your feed is impossible to combat as an end user.

      In some ways, this shows Lemmy is winning. It means Lemmy’s important enough to start trying to influence. It also means we’re about to go through some interesting times.

  • Catsrules · +21 · 2 years ago

    There has been a big surge in bot accounts:

    https://lemmy.ml/post/1391903

    Basically, many of the newer instances allowed sign-ups with no bot protection.

    This is why we can't have nice things and need to deal with CAPTCHAs, email verification, etc.

      • lixus98@kbin.social · +6 · 2 years ago

        I have a crosspost bot. I'm mainly testing it; its purpose is to use Reddit as a link aggregator to help small communities get some content going.
        It only posts external links, never OC nor a link back to Reddit.

        Also, in my case the bot grabs at most one post per hour, and only posts no older than 5 hours, to prevent flooding.
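        Roughly, the pacing logic amounts to this; the fetch/post functions below are stubs standing in for the real Reddit and Lemmy API calls:

        ```python
        # Sketch of the bot's pacing: run hourly, crosspost at most one
        # external link no older than 5 hours. fetch_candidates() and
        # crosspost() are stubs for the real Reddit/Lemmy API calls.
        import time
        from dataclasses import dataclass
        from datetime import datetime, timedelta, timezone

        MAX_AGE = timedelta(hours=5)

        @dataclass
        class Candidate:
            url: str
            created: datetime
            is_external_link: bool  # never OC, never a reddit.com link

        def fetch_candidates() -> list[Candidate]:
            return []  # stub: would query Reddit, newest first

        def crosspost(post: Candidate) -> None:
            print("crossposting", post.url)  # stub: would hit the Lemmy API

        def run_once() -> None:
            cutoff = datetime.now(timezone.utc) - MAX_AGE
            for post in fetch_candidates():
                if post.created < cutoff:
                    break  # newest-first, so everything after this is older
                if post.is_external_link:
                    crosspost(post)
                    return  # at most one post per run: no flooding

        while True:
            run_once()
            time.sleep(3600)  # one pass per hour
        ```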

  • BurningnnTree@lemmy.one · +16 · 2 years ago

    What are the bot accounts being used for? I haven’t noticed any posts made by bots (unless you guys are all ChatGPT and I’m the only human here)

    • Netsettler2k@lemmy.one · +1 · 2 years ago

      Exactly what I was going to link! I went down a rabbit hole last night. Found a random instance on the Overseer map that was only a few days old but had 6000 signups overnight.

      Another outside user had also noticed and had left a post calling attention to the flood of bots. At first the instance owner thought a glitch was messing with the numbers. Then they did some digging and realized 6,000 bot account signups had come through all at once. The high volume of unverified emails was proof enough: the signups all arrived at the same time and the addresses were never actually verified. After that, they realized you could enable captcha for signups and turned it on.
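      That pattern (a pile of accounts created in the same minute, none of which ever verified an email) is easy to check for if you look. A hypothetical query, with table and column names assumed rather than taken from Lemmy's actual schema:

      ```python
      # Hypothetical check for signup bursts that never verified their email.
      # Table/column names are assumptions, not Lemmy's exact schema.
      import psycopg2

      conn = psycopg2.connect("dbname=lemmy user=lemmy")
      with conn.cursor() as cur:
          cur.execute(
              """
              SELECT date_trunc('minute', published) AS minute, COUNT(*)
              FROM local_user
              WHERE email_verified = false
              GROUP BY 1
              HAVING COUNT(*) > 50
              ORDER BY 1
              """
          )
          for minute, count in cur.fetchall():
              print(f"{count} unverified signups in the minute starting {minute}")
      ```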

  • SpliceVW@vlemmy.net · +7/-1 · 2 years ago

    Did you miss the news about Reddit pissing off all their users, many of whom are now looking for alternatives?

  • Haily@rblind.com · +3 · 2 years ago

    Eh, the Internet is full of bots, particularly bots that just register and then do nothing. I wouldn’t worry about it.

  • Gentoo1337@sh.itjust.works · +3 · 2 years ago

    I wonder what will happen to the users who already registered without email (like me). Maybe I should add an email to my profile, just to be sure.

  • iso@lemmy.blahaj.zone · +1 · 2 years ago

    Talk to the instance admins. If you don't like their response, open up your own instance and do whatever you want: ban all bots, write your own things completely freely, disable all up- and downvotes, whatever.