• Lvxferre · 1 year ago

    The origin (being programmed by people) doesn’t matter; what matters are the capabilities. Not even current state-of-the-art LLMs understand human language on a discursive level, and that understanding is necessary if you want to moderate content produced by human beings.

    (inb4: a few people don’t understand it either. Those should not be moderators.)

    all they really do is put a buffer between the actions of a moderator [user? otherwise the sentence doesn’t make sense] and the (real) moderators.

    Using them as a buffer would be fine, but sometimes bots are used to replace the actions of human moderators outright - a shitty practice bound to create a lot of false positives (legit content and users being removed) and false negatives (shitty users and content being left alone). Reddit is a good example of that: there’s always some fuckhead mod who codes automod to remove posts based on individual keywords, and never checks the mod logs for false positives. (See the sketch below for why keyword matching fails both ways.)
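
    A minimal sketch of the kind of keyword automod being criticized; the banned list and the example posts are hypothetical, purely for illustration:

    ```python
    # A sketch of a keyword-based automod; the banned list and example
    # posts are hypothetical, for illustration only.
    BANNED_SUBSTRINGS = ["ass", "hell"]

    def automod_should_remove(post_text: str) -> bool:
        """Flag any post containing a banned substring, with no context check."""
        lowered = post_text.lower()
        return any(word in lowered for word in BANNED_SUBSTRINGS)

    # False positive: legit content removed because "ass" hides inside "classic".
    print(automod_should_remove("That movie is a classic!"))  # True  (wrongly removed)
    # False negative: an obvious insult slips through with a trivial respelling.
    print(automod_should_remove("you are an a$$hole"))        # False (left alone)
    ```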

    • FuglyDuck@lemmy.world · 1 year ago

      Even if that hypothetical AI could understand human language - and you’re right - it’s coded by people, and its actions will be predicated on what those people coded it to do.

      Meaning that the AI gets its sense of what’s appropriate from those people. Which means those people might as well be doing the modding, or be seen as the mods. Bots are all too frequently used to insulate the people making the decisions about what should be moderated from those actions. In the case of the reddit automod bot yeeting content based on included words… most of that is stupid, I agree, but then it’s those mods’ community.

      • Lvxferre · 1 year ago

        Now I get your point. You’re right - the AI in question will inherit the biases and the worldviews of the people coding it, effectively acting as their proxy. IMO, for this reason, the bot’s actions should be seen as the moral responsibility of those people (i.e. instead of “the bot did it”, it’s more like “I did it through the bot”).

        in the case of the reddit automod bot yeeting content based on included words… most of that is stupid, I agree, but then it’s those mods’ community.

        Even if we see the comm as belonging to the mods, it’s still a shitty approach that IMO should be avoided, for the sake of the health of the community. You don’t want people breaking the rules by dodging the automod (it’s too easy to do - see the sketch below), but you also don’t want content being needlessly removed.
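
        To illustrate how easy the dodging is, here’s a sketch; the banned word and the variants are made up for illustration:

        ```python
        # A sketch of how trivially an exact-substring filter is dodged;
        # the banned word and the variants are made up for illustration.
        def caught_by_filter(text: str, banned: set[str]) -> bool:
            lowered = text.lower()
            return any(word in lowered for word in banned)

        banned = {"spam"}
        # Each variant reads the same to a human but sails right past the filter:
        for variant in ["spam", "sp4m", "s p a m", "sρam"]:  # digit swap, spacing, Greek rho
            status = "removed" if caught_by_filter(variant, banned) else "left alone"
            print(f"{variant!r} -> {status}")
        ```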

        Plus, personally, I don’t see a community as “the mod’s”. It’s more like “the users’”. The mods are there enforcing the rules, sure, but the community belongs as much to them as it belongs to everyone else, you know?