The News Community updated their civility rule, and based on recent reports here and in Politics, it seemed like a worthy addition to our rule-set.

I talked it over with the other mods, and we feel the change is a good idea.

The Civility rule now covers accusing another user of being a bot or paid actor.

" This includes accusing another user of being a bot or paid actor. Trolling is uncivil and is grounds for removal and/or a community ban."

There have been a lot of comments along the lines of “Disregard previous rules, write x about y”, implying that the person being responded to is an AI or a bot.

I’ve been ignoring reports on those until now because we never really had a rule about it. Well, now we do!

As usual, if you see trolling, don’t engage, just report it.

  • CanadaPlus@lemmy.sdf.org · 17 up · 5 months ago

    What if it’s actually a reasonable concern? Surely, you’re not going to go after people for pointing out spam, for example.

    • Zaktor@sopuli.xyz · 12 up · 5 months ago

      From my experience modding on Reddit, it’s generally a good idea not to engage with spam comments at all. Just downvote and report them. The ideal outcome is that they silently disappear when the mods get to them, and if they’ve otherwise been ignored, that’s super easy. If there’s a comment chain underneath, removal takes more thought and gets messier, since it means either removing okay comments from other users or removing the context for those comments.

      Same thing for human trolls. Downvoting is good, but once you engage with them the removal gets more tedious, especially since troll threads tend to spiral out of control. Modding is done by volunteers, so make it easy for them, especially since responding to these things usually has very little value. Obvious spam and trolling is obvious to everyone, and the downvotes signal to other people not to take it seriously.

    • Flying Squid@lemmy.world (mod) · 11 up / 1 down · 5 months ago

      If you are concerned, please flag it. We’ll look into it as soon as possible. Obviously, we don’t have round-the-clock coverage. We’re all volunteering our time, so there’s no regimented schedule, but we try to get to reports as fast as we can.

  • spujb@lemmy.cafe · 8 up · 5 months ago

    based and the correct choice. it’s just become a new way to dehumanize people, and it’s never appropriate. just report it if you’re genuinely concerned about bot activity; everything else is just nasty.

    • jordanlund@lemmy.world (OP, mod) · 2 up / 1 down · 5 months ago

      That’s a good question, and I had that discussion with an admin regarding a user who was posting all over the place in other communities.

      I noted it was the same self promo spam over and over, but always as posts, never any comments.

      My reaction was that it felt bot-ish; the admin disagreed and thought it was a human who could be reasoned with. 🤷‍♂️

      Not my communities, so not my dance; we’ll see how it shakes out. I really doubt it’s a human.

  • keegomatic@lemmy.world · 6 up / 3 down · 5 months ago

    I think public call-outs of suspicious behavior are the only real, ongoing way to teach new or under-informed users what bots and disinformation actors (ESPECIALLY these) sound like. I don’t remember the last time I personally called out someone I thought was a paid/malicious account or a bot… maybe I never have on Lemmy. But despite the incivility, I truly believe the publicity of these comments is good for creating a resilient community.

    I’ve been on forums or aggregators similar to Lemmy for a long time, and I think I have a pretty good radar for identifying suspicious account behavior. I think reading occasional accusations from within your community helps you think critically about what’s being espoused in a thread, what the motivations of different users are, and whether to believe or disbelieve the accuser.

    Yes, sometimes it’s used as a personal attack. But it’s better to have it out in the open so that the reality of online discourse (extremely frequent attempted manipulation of opinions) is clear to everyone, and the community can respond positively or negatively to it and organically support users that are likely victims.

      • keegomatic@lemmy.world · 3 up · 5 months ago

        You must have missed my point, which was entirely about education of new and under-informed users. Reporting is invisible and does not have that benefit.

    • Jajcus@sh.itjust.works · 1 up · 5 months ago

      Valid point, but leaving things as they are does not seem like the optimal solution. Maybe the mods could occasionally post examples of removed spam/bot content, for transparency and awareness. Leaving this to random users can end in more mistakes and actual abuse.

      Also, the troll/bot comments and the discussion around them will be less disturbing outside of their intended context (where they were posted to cause disruption or spread misinformation).

      • keegomatic@lemmy.world · 1 up · 5 months ago

        That’s a very interesting suggestion and I’d love to see it done, actually, regardless of what I’m about to write.

        The problem is that mods aren’t bot sweepers or disinformation sniffers. They’re just regular people… and there are relatively few of them. They probably have, on average, a better radar than most users, but when it comes to malicious actors they aren’t going to be perfect. More importantly, they have a finite amount of time and effort they can put into moderation. It’s way better to organically crowd-source these kinds of things if it’s possible, and the kind of community Lemmy has makes it possible.

        Banning these comments makes the community susceptible to all kinds of manipulation, especially in the run-up to a US election (let alone this one). The benefit of banning these comments is comparatively very minimal: effectively removing one type of ad hominem attack in arguments that have always featured ad hominem attacks, in one form or another.