I made a robot moderator. It models trust flow through a network that’s made of voting patterns, and detects people and posts/comments that are accumulating a large amount of “negative trust,” so to speak.
In its current form, it is supposed to run autonomously. In practice, I have to step in and fix some of its boo-boos when it makes them, which happens sometimes but not very often.
I think it’s working well enough at this point that I’d like to experiment with a mode where it can form an assistant to an existing moderation team, instead of taking its own actions. I’m thinking about making it auto-report suspect comments, instead of autonomously deleting them. There are other modes that might be useful, but that might be a good place to start out. Is anyone interested in trying the experiment in one of your communities? I’m pretty confident that at this point it can ease moderation load without causing many problems.
Is there a way of tailoring the moderation to a community's needs? One problem I can see arising is that it could lead to a monoculture of moderation practices. If there were a way of making the auto-reports relative to the community, that would be interesting.
I tried that early on. It does have a "perspective," in terms of which communities are the trusted ones. What I found was that more data is simply better. It's able to sort out for itself who the jerks are, and which are the widely trusted social networks, when it looks at the global picture. Telling it to interpret the data in a certain configured way, or curtailing the input, only increased the chance of error when I tried it, without making it any better tuned to the specific community it's looking at.
I think giving people some insight into how it works, and ability to play with the settings, so to speak, so they feel confident that it’s on their side instead of being a black box, is a really good idea. I tried some things along those lines, but I didn’t get very far along.
Maybe it’d be nice to set it up so it’s more transparent. Instead of auto-banning, it can send auto-reports to the moderators with comments which it considers to be bad, and an indication of how bad or why. And then, once a week, it can publish a report of what it’s done and why, some justification for anyone who it took action against, so that everyone in the community can see it, so there aren’t surprises or secrets.
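The weekly report could be generated mechanically from the bot's action log. Here's a minimal sketch of that idea; the function name and all the field names are invented for illustration, not taken from the bot's actual code:

```python
from datetime import date, timedelta

def weekly_report(actions, today=None):
    """Render a digest of the last week's moderation actions.

    actions: list of dicts with 'when' (a date), 'user', 'action', 'reason'.
    """
    today = today or date.today()
    cutoff = today - timedelta(days=7)
    recent = [a for a in actions if a["when"] >= cutoff]
    lines = [f"Moderation report for week ending {today}:"]
    for a in recent:
        # one line per action, with the justification spelled out
        lines.append(f"- {a['action']} {a['user']}: {a['reason']}")
    if not recent:
        lines.append("- no actions taken this week")
    return "\n".join(lines)
```

The output is just a text post the bot could publish in the community, so everyone can see what was done and why.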
I thought about some other ideas, such as opening up an “appeal” community where someone can come in and talk with people and agree not to be a jerk, and get unbanned as long as they aren’t toxic going forward. That, coupled with the idea that if you come in for your appeal and yell at everyone that you are right and everyone else is wrong and this is unfair, your ban stays, could I think be a good thing. Maybe it would just be a magnet for toxicity. But in general, one reason I really like the idea is that it’s getting away from one individual making decisions about what is and isn’t toxic and outsourcing it more to the community at large and how they feel about it, which feels more fair.
But in general, one reason I really like the idea is that it’s getting away from one individual making decisions about what is and isn’t toxic and outsourcing it more to the community at large and how they feel about it, which feels more fair.
Yeah, that does sound useful. It's just that there are some communities where it isn't necessarily clear who is a jerk and who has a controversial minority opinion. For example, how do you think the bot would've handled the vegan community debacle that happened? There were a lot of trusted users who were not necessarily on the side of the vegans, and it could've made those communities revert back to a norm of what most users think is good and bad.
I think giving people some insight into how it works, and ability to play with the settings, so to speak, so they feel confident that it’s on their side instead of being a black box, is a really good idea. I tried some things along those lines, but I didn’t get very far along.
If you want, I can help with that. Like you said, it sounds like a good way of decentralizing moderation so that we have fewer problems with power-tripping moderators and more transparent decisions. I just want communities to be able to keep their specific values while easing their moderation burden.
For example, how do you think the bot would've handled the vegan community debacle that happened?
That’s not a situation it’s completely equipped to handle. It can decide what the community’s opinion of someone is, but it’s not going to be able to approach any kind of judgement call, in terms of whether a post by a permitted user is unexpectedly dangerous misinformation that the admins need to remove. That’s a judgement call that humans can’t effectively come to a conclusion on, so definitely the bot won’t be able to do any better.
There is some interesting insight to be had. One of the big concerns that people had about the bot’s premise was that it would shut down minority opinions, with vegans as a perfect example.
I tried going back and having it judge https://lemmy.world/post/18691022, but there may not be recent activity for a lot of those users, so there’s a risk of false negatives. The only user it found which it wanted to do anything to was EndlessApollo@lemmy.world, who it wanted to greylist, meaning they’re allowed to post, but anything of theirs that gets downvotes will get removed. That sounds right to me, if you look at their modlog.
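For anyone wondering what greylisting amounts to mechanically, here is a hypothetical sketch of the rule as described (allowed to post, but net-downvoted items get removed); the function name and threshold are invented, not the bot's real code:

```python
# Greylist rule sketch: a greylisted user's content stays up unless it
# starts collecting net downvotes. Threshold is an invented placeholder.

def greylist_action(user_status, upvotes, downvotes, threshold=0):
    """Return 'remove' if a greylisted user's item has net downvotes."""
    if user_status != "greylisted":
        return "keep"  # normal users are unaffected by this rule
    return "remove" if (upvotes - downvotes) < threshold else "keep"
```

So a greylisted user who posts something the community receives well keeps it up; only their poorly received items get removed.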
I also spent some time just now asking it to look at comments from vegantheoryclub.com and modern comments from !vegan@lemmy.world, and it didn’t want to ban or greylist anybody. That’s in keeping with how it’s programmed. Almost all users on Lemmy are fine. They have normal participation to counterbalance anything unpopular that they like to say, or any single bad day where they get in a big argument. The point is to pick out the users that only like to pick fights or start trouble, and don’t have a lot that they do other than that, which is a significant number. You can see some of them in these comments. I think that broader picture of people’s participation, and leeway to get a little out of pocket for people who are normal human people, is useful context that the bot can include that would be time-prohibitive when human mods are trying to do it when they make decisions.
The literal answer to your question is that I don’t think it would have done anything about the Vegan cat food issue other than letting everyone hash it out, and potentially removing some comments from EndlessApollo. But that kind of misinformation referee position isn’t quite the role I envisioned for it.
Like you said, it sounds like a good way of decentralizing moderation so that we have fewer problems with power-tripping moderators and more transparent decisions.
I wasn’t thinking in these terms when I made it, but I do think this is a very significant thing. We’re all human. It’s just hard to be fair and balanced all of the time when you’re given sole authority over who is and isn’t allowed to speak. Initially, I was looking at the bot as its own entity with its own opinions, but I realized that it’s not doing anything more than detecting the will of the community with as good a fidelity as I can achieve.
I just want communities to be able to keep their specific values while easing their moderation burden.
This was a huge concern. We went back and forth over a big number of specific users and situations to make sure it wasn’t going to do this, back in the early days of testing it out and designing behaviors.
I think the vegan community is a great example. I think there was one vegan user who was a big edge case in the early days, and they wound up banned, because all they wanted to talk about was veganism, and they kept wanting to talk about it to non-vegans in a pretty unfriendly fashion. I think their username was vegan-related also. I can’t remember the specifics, but that was the only case like that where the bot was silencing a vegan person, and we hemmed and hawed a little but wound up leaving them banned.
The point is to pick out the users that only like to pick fights or start trouble, and don’t have a lot that they do other than that, which is a significant number. You can see some of them in these comments.
OK, that makes sense as to why you chose these specific mechanics for how it works. Does that mean hostile but popular comments in the wrong communities would get a pass, though?
For example, let's assume that most people on Lemmy love cars (probably not the case, but let's go with it) and there are a few commenters who consistently show up in the !fuck_cars@lemmy.ml or !fuckcars@lemmy.world community to show why everyone in that community is wrong. Or vice versa.
Since most people scroll All, it could be the case that those comments get elevated, and comments from the people that community is supposed to be for get downvoted.
I mean, it's not that big of a deal now, because most values are shared across Lemmy, but I can already see that starting to shift a bit.
I was reminded of this meme a bit.
Initially, I was looking at the bot as its own entity with its own opinions, but I realized that it’s not doing anything more than detecting the will of the community with as good a fidelity as I can achieve.
Yeah, that's the main benefit I see coming from this bot. Especially if its output is just given in the form of suggestions, humans are still making most of the judgement calls, and the way decisions are made is transparent (like the appeal community you suggested).
I still think that instead of the bot considering all of Lemmy as one community, it would be better if moderators could provide focus for it, because there are differences in values between instances and communities that I think should be reflected in the moderation decisions that are taken.
However, if you aren't planning on developing that side of it more, I think you could still let the other moderators who want to test the bot see notifications from it any time it has a suggestion for a community user ban (edit: for clarification), as a test run. Good luck.
Does that mean hostile but popular comments in the wrong communities would get a pass, though?
They have no effect. The impact of someone’s upvote is dependent on how much trust from the wider community that person has. It’s a huge recursive formula, almost the same as PageRank. The upshot is that those little isolated wrong communities have no power unless the wider community also gives them some upvotes. It’s a very clever algorithm. I like it a lot.
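To make that concrete, here's a toy sketch (not the bot's actual code) of PageRank-style trust propagation over vote edges. Every name and constant below is invented for illustration; the real formula is presumably more involved:

```python
# Toy PageRank-style trust propagation: each user's rank is the damped sum
# of incoming votes, each weighted by the voter's own rank, iterated to a
# fixed point. Damping and iteration count are invented placeholders.

def trust_ranks(votes, iterations=50, damping=0.85):
    """votes: list of (voter, target, sign) tuples with sign = +1 or -1."""
    users = {u for v in votes for u in (v[0], v[1])}
    rank = {u: 1.0 for u in users}
    out = {}  # votes cast per user, to normalize each voter's influence
    for voter, _, _ in votes:
        out[voter] = out.get(voter, 0) + 1
    for _ in range(iterations):
        new = {u: 1.0 - damping for u in users}
        for voter, target, sign in votes:
            # only positive trust propagates: a negative-rank user's
            # votes count for nothing, so isolated cliques have no power
            share = max(rank[voter], 0.0) / out[voter]
            new[target] += damping * sign * share
        rank = new
    return rank
```

The key property shows up even in toy data: a sockpuppet upvoted only by a negative-rank user gains nothing from it, because that voter's trust share is clamped to zero.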
For normal minority communities like vegans, that’s not a problem. They still get some upvotes, because the occasional conflict isn’t the normal state, so they count as normal users. They post stuff, people generally upvote more than they downvote by about 10 to 1, and they are their own separate thing, which is fine. For minority communities that are totally isolated from interactions with the wider community, they just have more or less 0 rank, so it doesn’t matter what they think. They’re not banned, unless they’ve done something, but their votes do almost nothing. For minority communities that constantly pick fights with the wider community, they tend to have negative rank, so it also doesn’t matter what they think, in terms of the impact of them mutually upvoting each other.
I think it might be a good idea to set up “canary” communities, vegans being a great example, with the bot posting warnings if users from those communities start to get ranked down. That can be a safety check to make sure it is working the way it’s supposed to. Even if that downranking does happen, it might be fine, if their behavior is obnoxious and the community is reacting with downvotes, or it might be a sign of a problem. You have to look up people’s profiles and look at the details. In general, people on Lemmy don’t spend very much time going into the vegan community and spreading hate and downvotes just for the sake of hatred, because they saw some vegans being vegans. Usually there’s some reason for it.
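That canary check could be as simple as diffing ranks between runs for members of the designated communities. A sketch, with the function name and thresholds invented for illustration:

```python
# "Canary community" safety check sketch: if members of a designated
# minority community start trending downward in rank between runs, flag
# them for a human to review. The drop threshold is an invented number.

def canary_alerts(members, old_ranks, new_ranks, drop=0.25):
    """Return canary-community members whose rank fell by more than `drop`."""
    return [u for u in members
            if old_ranks.get(u, 0.0) - new_ranks.get(u, 0.0) > drop]
```

A flagged user isn't necessarily a problem; it's just the trigger for the manual profile check described above.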
One thing that definitely does happen is people from that minority community going out and picking fights with the wider community, and then beginning to make a whining sound when the reaction is negative, and claiming that the heat they’re getting is because of their viewpoint, and not because they’re being obnoxious. That happens quite a lot.
I think some of the instances that police and ban dissent set up a bad expectation for their users. People from there feel like their tribe is being attacked if they come into contact with a viewpoint that they've been told is the "wrong" one, and then they make these blanket proclamations about how their own point of view is God's truth while attacking anyone who disagrees, and then they sincerely don't expect the hostile response that they get. I think some of them sincerely feel silenced when that happens. I don't know what to do about that, other than being transparent and supportive about where the door to being able to post is, if they want to go through it, and otherwise minimizing the amount they can irritate everyone else for as long as that's their MO.
I still think that instead of the bot considering all of Lemmy as one community, it would be better if moderators could provide focus for it, because there are differences in values between instances and communities that I think should be reflected in the moderation decisions that are taken.
It definitely does that. It just uses a more sophisticated metric for “value” than a hard-coding of which are the good communities and which are the bad ones.
I think the configuration options to give more weight or primacy to certain communities are still in the code; I'm not sure. I do see what you're saying. I think it might be wise for me, if anyone does wind up wanting to play with this, to give as many tools as possible to moderators who want to use it, and just let them make the decision. I think the bot is capable of working without being configured with which communities are the good ones, but if someone can replicate the checking I did, they'll be happier with the outcome, whether or not they wind up with the same conclusions as me.
And yes, definitely making it advisory to the moderators, instead of its own autonomous AI drone banhammer, will increase people’s trust.
This is an extremely important tool for moderating large forums; thank you for working on this! It's one of those mod tools that people may need in order to enable large-scale, moderated, federated forums!
Interesting concept
So it would delete people’s posts if they get downvoted a lot or if the poster tends to upvote heavily downvoted posts?
You’ve automated the suppression of dissenting voices.
So it would delete people’s posts if they get downvoted a lot
No.
or if the poster tends to upvote heavily downvoted posts?
No.
You’ve automated the suppression of dissenting voices.
Am not.
It’s a perfectly fair concern. I’m trying to be careful to make sure I’m not doing that. There’s quite a lot of explanation in the FAQ, and some conversations you can look back over with people who were concerned, because they’ve had experience with exactly that happening to them.
At one point I tried to illustrate with data just how big a jerk you have to be before it starts banning you. If you’re interested, I can start doing that again. Being a dissenting voice on its own is nowhere near enough to anger the bot. You can look over !pleasantpolitics@slrpnk.net and see quite a few dissenting voices. I’ve also offered to delve, for any user who feels that this has happened to them, into the breakdown of why they’re being ranked down, which almost always is because they’re being a jerk about their “dissenting” opinion, and not the opinion itself.
Also, I think it’s hilarious that someone coming from lemmy.ml is accusing me of trying to suppress dissenting voices. Lemmy.ml has been suppressing dissenting voices since its inception. The degree to which I’m bending over backwards not to suppress dissenting voices is something I think you should absorb and carry over to the lemmy.ml moderators as a good replacement for their current banhammer circus.
Hehe, good roast on lemmy.ml
The guy set himself up for it for saying a god damn word about “suppression of dissenting voices” while he’s logging into SuppressionOfDissentingVoices.ml every day to access his Lemmy account.
Sure, no need to explain. I think it’s been appropriate to point it out.
And wow, quite some comments you got. I'm not sure I agree with the negative ones. We've been requesting better moderation tools for a long time now, and I wouldn't immediately do away with your effort. I share some concern about privacy and about introducing "algorithms" and bots into the platform instead of making it more human… but nonetheless, we need good moderation. A lot of the issues are just technical in nature and can be solved, and you seem pretty aware of them. And there's always a balance, and a potential for abuse that comes with power…
I think we should experiment and try a few things. A bot is a very good idea, since we won’t get that into the Lemmy core software. I think mostly due to personal reasons. And that relates to the lemmy.ml situation. I’ll have a look at the code. But I’m using PieFed instead of Lemmy. Which already attributes reputation scores to users. So this might be aligned with PieFed’s project goals, maybe we can take some inspiration from your ideas.
A tool that detects unreasonable people and is effective at combating them is, naturally, something a whole lot of unreasonable people really don't like, and they're being really unreasonable in how they approach the conversation. Go figure.
It wouldn't be hard to make it work on PieFed. A first step, having it load up the voting flow patterns and make its judgements, would be very easy. It just needs a PieFed version of db.py, which would take 10-20 minutes. Is that something you're interested in me working up? If I did that, it would be pretty simple for someone to get it working on PieFed: just fill in .env and run the script. Then you'd have to fire up the interpreter, unpickle user_ranks.pkl, and start poking around in there, but I could give you some guidance.

That's where I would start with it. Getting it to speak to the PieFed API to enact its judgements would be a separate thing, but checking it out and seeing what it thinks of your users and how easy it is to work with, as a first step, is very easy.
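To be concrete about how small that adapter is: all a PieFed version of db.py really has to do is yield vote edges. Here's a hypothetical shape for it; the table and column names below are invented placeholders, not PieFed's actual schema:

```python
# Hypothetical db.py adapter sketch: yield (voter_id, author_id, sign)
# tuples for every comment vote. The schema here is made up for
# illustration; a real adapter would query PieFed's actual tables.

import sqlite3

def load_votes(db_path):
    """Yield (voter_id, author_id, sign) for every vote on a comment."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT v.user_id, c.author_id, v.sign "
            "FROM comment_votes v JOIN comments c ON c.id = v.comment_id"
        )
        yield from rows
    finally:
        conn.close()
```

Everything downstream of that (the ranking itself) wouldn't need to know which platform the edges came from.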
I had this vague vision of augmenting Lemmy so that it has a user-configurable jerk filter, which can be switched to filter out the avowed jerks from your view of the Lemmyverse regardless of whether the moderators are getting the job done. I think putting the control in the hands of the users instead of the mods and admins would be a nice thing. If you want to talk about that for PieFed, that sounds grand to me.
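That jerk filter could live entirely client-side: each reader sets their own threshold, and it only changes their own view, leaving moderation untouched. A sketch, with names and numbers invented for illustration:

```python
# User-configurable "jerk filter" sketch: hide content from authors whose
# community-trust rank falls below a per-reader threshold. The default
# threshold is an invented placeholder.

def filter_feed(items, user_ranks, threshold=-0.5):
    """items: list of (author, text). Hide authors ranked below threshold."""
    return [(a, t) for a, t in items if user_ranks.get(a, 0.0) >= threshold]
```

Unknown authors default to a rank of 0.0, so new users are visible until the community's votes say otherwise.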
Careful, some might think you’re a jerk 🤣
At one point I tried to illustrate with data just how big a jerk you have to be before it starts banning you. If you’re interested, I can start doing that again.
Would you?
My understanding is that downvotes reflect whether or not someone agrees with a post or comment much more than whether the comment is constructive, so they can only be used to infer how agreeable the comment is.
I’ve also offered to delve, for any user who feels that this has happened to them, into the breakdown of why they’re being ranked down, which almost always is because they’re being a jerk about their “dissenting” opinion, and not the opinion itself.
Use me as an example, I regularly get dozens of downvotes for such hot takes as “facilitating genocide hurts the dems chances of getting elected, we need them to stop that if we want them to win.”.
Sure. Here’s you. Red is downvotes, blue is upvotes. The left-right axis is time, with the past on the left.
The bar right below the red/blue bar code is the key to what comments were in what posts.
One thing that jumps out at me is that almost all of your participation is in political threads, and the majority of it is getting downvoted. It would be different if you were just participating in Lemmy, and then also you had some views that were unpopular. That happens to a lot of people, and I’ve bent over backwards trying to preserve their right to do that when I’ve been making and tuning the bot. This isn’t that. This is almost all just you going in and arguing with people.
One thing I say a lot when talking to people about this is, “It’s not your opinion, it’s your delivery.” I’m going to be honest, when I read your first message here, it annoyed me. You’re coming out of the gate hostile. Most people, when they receive that, are going to be hostile back. It’s just how people work. You’re not going to convince them of your point of view, you’re not going to be able to fine-tune your own point of view to let them poke holes in any mistakes in it. You’re just going to irritate everyone. That’s a choice you’re making in how you approach things, and I think it’s completely fair for people to react to that choice by closing the door on you.
It’s the difference between going to a party when you’re in a fringe political party, and having conversations about it, versus showing up to the party with a bunch of flyers and handing one to every person and making almost every conversation over the course of the night revolve around your chosen fringe political party. The first one is fine, or should be, at a decent party. The second one, people are going to remove you from the party for. I think if you want to make an impact on people’s thinking, you’re going to need to recognize and respect that reality of human nature.
Having an unpopular political opinion is fine. Being a little bit combative with people is fine. Doing both at once is going to collect a tidal wave of downvotes, and also I think is going to make it harder for you to make any progress convincing anyone of anything.
I regularly get dozens of downvotes for such hot takes as “facilitating genocide hurts the dems chances of getting elected, we need them to stop that if we want them to win.”.
I’m going to stop you right there.
You’re playing a little game where you claim you said one thing and got downvoted for it, when I can guarantee you actually said something different. You probably said that we need to not vote for the Democrats, because they’re facilitating genocide. That’s different. You can say that, sure. Someone might say back to you that not voting for the Democrats is going to make the genocide 20 times worse, and that’s why they’re voting for the Democrats. They can say that, too. That’s progress, that’s people talking to each other. Maybe one or the other of you will learn something from the exchange.
Where it gets difficult is where you go off into this alternate reality where they said, “I love genocide, and I love the Democrats, I’m going to give you downvotes because you don’t support genocide which I love,” and then you start arguing against that thing that they didn’t say. That’s not progress. That’s just people shouting and trying to twist the conversation around so that they can “win.” It only takes a little bit of that before people are going to stop talking to you.
I know you do that, because you did it to me in your first message in this conversation.
I looked over some of your posting history, and I think you’ve got some valuable things to say. I learned some things about how bad Liz Cheney was before she for some reason found her principles and broke with the Republican party over Trump. I saw some debates people were having with you about Russian and Chinese history, where I don’t think you’re right, but it didn’t seem like any kind of badly intentioned thing.
I think if you built up the habit of always responding honestly to what people said, and telling the truth about your own views and the world outside the best way you can, the bot wouldn’t treat you harshly, and you’d also make more progress in convincing people of what you’re trying to say.
Try again: What’s the last thing you said that got dozens of downvotes, and what did you actually say that got dozens of downvotes? What was the opposing side’s core argument, honestly summarized?
Two things:
You’ve accused them of being hostile here, and of arguing elsewhere.
This very post by you comes across as hostile to me.
Paradigm is everything, and here you are attempting to be the arbiter of what's acceptable.
You’ve also made your own bias clear by labelling someone as “coming from lemmy.ml”. You’re attacking the person from the start.
Try not to be hypocritical.
All I can think about is how this bot is immediately a non-starter because this is the kind of attitude I can expect from the author when asking for support or collaboration. It’s not just in this post, either.
Even if the parent comment here was hostile (it's borderline, at worst), I can't possibly understand the mentality of being argumentative in a post trying to encourage the use of a service.
Your 1-star review is noted. When I open a Yelp page for the bot, I’ll be sure to let you know, and you can speak to my manager about it.
My understanding is that downvotes reflect whether or not someone agrees with a post or comment much more than whether the comment is constructive, so they can only be used to infer how agreeable the comment is.
I never responded to this part, and I should have. Yes, people definitely vote in exactly that fashion. They do, however, upvote about 10 times more than they downvote. And, the bot takes into account everything you say. It’s not just those controversial topics. You have to be talking about only, or majority, things that people don’t want to hear in order to trigger it. And, Lemmy is all those minority political takes on things. There are a lot of communities where you’ll get straight-up banned for saying things that are mainstream American points of view. The people who tend to be argumentative like to maintain a fiction that people on Lemmy just can’t handle someone who’s anti-genocide, or something like that, when they’re showing up right next to a “fuck Israel” meme or a “fuck Biden for arming Israel” meme that has 1,500 upvotes.
It’s hard for me to make a convincing argument that it’s tolerant of dissenting voices who aren’t jerks about it without listing off accounts. I can do some version, though, if you’re interested, listing examples of banned and not-banned accounts to illustrate where the boundary line is.
Lemmy just can’t handle someone who’s anti-genocide
They can’t when that stance conflicts with their party. Hence why “The dems need to stop the genocide, people are not going to vote for genocide” gets you downvoted.
“fuck Biden for arming Israel” meme that has 1,500 upvotes
Those exist on .world? I see too many “You have to vote for genocide because trump would do genocide and also other bad things” type posts, it’d be weird if they coexisted.
can do some version, though, if you’re interested
Sure if it’s trivial I’d be interested, but don’t put too much work into it.
I don’t know how much I want to go around this merry-go-round. I’m losing some of my good humor about it. I’ll try though.
If you need evidence, here it is:
https://lemmy.world/search?q=fuck biden&type=All&listingType=All&communityId=1384&page=1&sort=TopAll
Let’s look at the first page:
Fuck Biden and fuck Putin.
(Even though I did vote for Biden in 2020 and plan to again in 2024 if he’s the Democratic nominee.)
118 upvotes (inb4 you pretend that the other three also included that little disclaimer, even though they didn’t)
Also genocide. Never forget that Biden is aiding and abetting a genocide. Don’t fucking look away because he’s your guy, motherfuckers
81 upvotes
Obligatory: Fuck Biden, Fuck Putin, Fuck Netanyahu/IDF and anybody else complicit in killing innocent people and/or oppressing people.
51 upvotes
Broadly, Biden supporting this genocide in the way that he has is costing him the election. Acknowledging this doesn’t mean you support Trump. Arguing that if you don’t support Biden in-spite of this position is headspinning, and some posters here (@PugJesus@lemmy.world ) are doing the work of trying to separate the left from Democrats in this regard.
49 upvotes
Expressing the viewpoint that you claim is banned is incredibly popular.
You said, “They can’t when that stance conflicts with their party.” That’s backwards. I can’t speak for everybody, but for me, it’s exactly the other way around. Because I dislike genocide, and because Trump getting elected will accelerate the genocide tenfold, I support Harris. I’m not clinging to the Democrats even though they’re enabling genocide. I’m voting Democratic in this election because the alternative is more genocide. Much, much more.
You can understand and deal with that viewpoint head-on without caricaturing it into something else. You could say it doesn't make sense, you could criticize the logic, you could try to argue for some other strategy that results in no genocide, instead of Harris or Trump. All fine. Instead, you're doing a little dodge where you pretend that the only reason someone might say that is that they love Democrats and are okay with genocide. For as long as that's your debate style, you are not welcome, as far as I'm concerned. Learn to respect the point of view of people you disagree with, if you like; I think it'll help you. Or don't, and get used to not being listened to in some forums, and banned from some others.
You can take that or leave it. I'm not trying to debate you. But I'm now pointing out, for the second time, that rather than the issue being your viewpoint, which is popular on Lemmy, the issue is that you are caricaturing your opponent's also-popular viewpoint into something nutty, so that you can send messages which have no possibility of any productive impact. That's disrespectful and inflammatory. That's why you are banned: not because of your viewpoint, which is very popular on Lemmy.
Hell. I’m on the opposite end of the spectrum. I bet I get banned for hot takes like “I should be able to defend myself legally” and “Illegal immigrants should be deported”
You got banned for hot takes like, “A lot more pedophiles endorsing Harris though. It would cancel out if they were about the same, but they aren’t.”
I’m curious to know what you said under the posts about the Harris campaign HQ getting shot with bullets, or the disinformation project which produced the video of illegal immigrants saying they’re registered to vote, but not curious enough to look into it any further.
Sounds like the bot knows its job. The paradox of tolerance is tempting, but it’s resisting.
When was Harris shot at? Was that after Trump was shot at? Was it a copycat?
Tell you what. I can respond to this in two different ways. I really don’t agree with silencing people who have a different point of view than I do. On other platforms, I’ve spent a ton of time arguing with conservatives. So much time. It’s not unfamiliar to me. I don’t think people need to limit their interactions to only the people who are “right,” if that makes sense. It’s okay for someone to be saying something that I think is wrong, as long as they’re open to a conversation about it. I can say where I think some of your sources have a long history of deliberately misleading people, as a way of making the case that they are misleading you, too. I don’t view you as the enemy, necessarily, but I do think you’re mistaken.
If you're going to have that conversation with me, then you don't have to agree with me or be nice to me, but I do need you to be willing to hear me out. In exchange, I'll promise to hear you out, too, and take what you're saying seriously, enough to disagree with it honestly and respectfully when I disagree with it. Is that something you're interested in? Because we have about as different a set of viewpoints as you could imagine, but I'm still fine talking with you and having a real exchange of views.
If you don’t want to do that, and just want to emit your viewpoint and belittle other viewpoints until people remove you from the community, then I can respond to you accordingly. But I would prefer to do the first thing. I don’t think this is the forum for it, but we can surely find one, and I can spend a while talking with you about the viewpoints you seem to think are getting you censored.
Edit: Grammar
Tbh, I don’t really care to engage in arguments anymore. It never goes anywhere. It’s pretty dumb.
Maybe, but conservatism is considered much more acceptable among Americans than anything left of liberalism. Particularly now, when the Dems are trying to reach out to conservatives with policies such as closing the border, “tough on crime” rhetoric, unlimited support for Israel, etc. One way to check is whether you’ve been banned from PleasentPolitics.
legally” and “Illegal
Adding “this should be done according to the law” doesn’t divorce an action from its morality.
Rounding up millions of immigrants, some of whom have been here for decades, and nearly all of whom are here because they’re fleeing the effects of the US constantly couping their governments and training/funding terrorists, is an immoral action, whether they’re legal or illegal.
I must say I don’t like the idea of a social-credit-score bot.
Regarding your implementation, I saw the summary of your own comments elsewhere in this post and I noticed all the annotations were on upvoted/blue segments. Other summaries you posted focused more on negative/red segments. Would it be possible to enforce a minimum of 1 or 2 from both categories?
Also, would you be kind enough to read my tea leaves? Am I an acceptable citizen of the Lemmy community?
I’m in agreement. The social credit idea really doesn’t bode well. Karma restrictions are one of the bad parts of Reddit, and I for one am glad they’re not a thing here.
You’re fine. Why would you not be? You left 15 comments in the last month, and they were all upvoted. It doesn’t even really have much to go on to rank you, but your rank is positive: nowhere near 0, much less far enough into the negatives to even be greylisted.
99% of Lemmy is made of acceptable citizens. That’s a real number. Only 1% of the users that it evaluates, which is itself only a tiny fraction of the total Lemmy population, ever get blacklisted. You have to be very obnoxious before it starts targeting you. I can understand the worry that this is going to arbitrarily start attacking people because of some vague AI bot decision, but that’s not what is happening.
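To make the greylist/blacklist distinction concrete, the tiering could look something like the sketch below. The function name and the cutoff values are invented for illustration; the bot’s real thresholds aren’t published here.

```python
# Hypothetical sketch of rank-based moderation tiers.
# The cutoffs (-1.0, 0.0) are invented, not the bot's real values.
def classify(rank: float) -> str:
    """Map a user's global trust rank to a moderation tier."""
    if rank < -1.0:   # far into negative territory: blacklisted
        return "blacklist"
    if rank < 0.0:    # mildly negative: greylisted, extra scrutiny
        return "greylist"
    return "ok"       # positive rank: no action taken

assert classify(0.4) == "ok"
assert classify(-0.2) == "greylist"
assert classify(-2.5) == "blacklist"
```

The point of the two-tier structure is that almost everyone lands in “ok,” a small slice gets watched more closely, and only the extreme tail is ever banned.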
The visualization of someone’s social credit score just picks the 5 most impactful posts; it doesn’t discriminate based on positive or negative. If you want to see what the red corresponds to on my graph, the most negative things I have done within the time window are:
They both contributed some red to the graph, I think. The red at the far right end is comments within this post that people are taking exception to.
What sample size of comments, or what time window, does the bot use to make a decision? Is it safe to assume that those new troll accounts would fly under this bot’s radar?
Edit: Also, is there functionality to look up a user with this tool? I would be interested in seeing some users I’ve interacted with in some political and news related communities.
It’s the last 30 days of comments. That’s long enough to be robust, but short enough that someone can realistically rehabilitate their image with the bot by not being a jerk for 30 days, and restore their posting ability.
I was hoping that it would be a good tool for self-reflection and fairness in moderation. In practice, the people who get banned for being jerks are totally uninterested in revising their commenting strategy, and choose instead just to yell at me that I’m awful and my bot is unfair and it should be their right to come in and be a jerk if they want to, and banning them means I am breaking Lemmy. Then they restart one of the arguments that got them banned in the first place. I don’t know what I was thinking, expecting anything different, but that’s what happened. You can see some of it happening in these comments.
New accounts, or accounts that have been recently inactive, are a hard problem. I think I’ve got it mostly worked out now. If the bot has limited information, it won’t ban you, but it will be super-strict if you have a generally negative reception, and if its unclear impression of you is negative and you also make a comment that gets downvoted, it’ll delete the comment. I think it should work fairly well, but it’s still in development. It’s hard to test, because that situation only comes up a few times a month, so I basically just have to wait a while every time I do it.
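The limited-information regime described above might be sketched like this. The thresholds, field names, and `min_history` cutoff are all invented for illustration; only the shape of the logic comes from the description.

```python
# Hypothetical sketch of the stricter regime for accounts with little
# history. All thresholds and names are invented for illustration.
def action_for(user_rank, comment_score, history_size, min_history=20):
    if history_size < min_history:
        # Limited information: never ban outright. But if the overall
        # impression is negative AND this particular comment is being
        # downvoted, remove just the comment.
        if user_rank < 0 and comment_score < 0:
            return "delete_comment"
        return "none"
    # Established accounts fall through to the normal rank-based rules.
    return "ban" if user_rank < -1.0 else "none"

assert action_for(-0.3, -4, history_size=5) == "delete_comment"
assert action_for(-0.3, 4, history_size=5) == "none"
assert action_for(-2.0, 0, history_size=100) == "ban"
```

Requiring both signals to be negative before acting is what keeps a new user from being punished for a single unpopular comment.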
You can check a user by searching the modlog for their username, with santa@slrpnk.net as the moderator, and seeing what comes up. If you see that they’ve been banned at any point, then they are probably a reprobate of one sort or another.
How many new accounts do you want? I delete and recreate my account all the time. I’d be happy to make new accounts and put a bunch of comments in them for you. I could make one a week or something.
You can do that now, and evade human moderation in the same way.
Part of me wants you to give it a try in the Santa communities, since it would be a badly-needed test of the system. The code that’s supposed to detect and react to that doesn’t get much action. Mostly it’s been misfiring on the innocent case, and attacking innocent people because they’re new and they said one wrong thing one day. I think I fixed that, but it would be nice to test it in the other case, with some participation that I know is badly intended, and make sure it’s still capable of reacting and nuking the comments.
But no, please don’t. The remedy for that kind of thing is for admins to have to do work to find and ban you at the source, or to look at banning VPNs or something, which is sad for other reasons, so I don’t want that. Just leave it until real bad actors do it for real, and then the admins and I will have to work out how to get rid of them when it happens.
Thanks for the information. I took a look at the bot’s community, and for what it’s worth, I appreciate the amount of effort you put into fine-tuning it, as well as being as transparent as possible.
(edit: I accidentally a word and didn’t realize you wrote ‘auto-report instead of deleting them’. Read the following with a grain of salt)
I’ve played (briefly) with automated moderation bots on forums, and the main thing stopping me from going much past known-bad profiles (e.g. visited the site from a literal spamlist) is not just false positives but malicious abuse. I wanted to add a feature which would censor an image immediately with a warning if it was reported for (say) porn, shock imagery or other extreme content, but if a user noticed this, they could falsely report content to censor it until a staff member dismisses the report.
Could an external brigade of trolls get legitimate users banned or their posts hidden just by gaming your bot? That’s a serious issue which could make real users have their work deleted, and in my experience, users can take that very personally.
It’s possible. I think it’s more difficult than people think. You have to do it on a scale which is blatantly obvious to anyone who’s looking, so you’re just inviting a ban.
One person swore to me that it would be really easy, so I invited them to try, and they made a gang of bots which farmed karma and then mass-downvoted me, trying to get me banned from my own place. If you look at my profile you’ll see some things which have -300 score because of it. I welcomed the effort, since I’m interested in how well it will resist that kind of attack. Their first effort did exactly nothing, because none of the downvote bots had any rank within the algorithm. I gave them some pointers on how they could improve for a second time around, and they went radio silent and I haven’t heard from them since then.
Haha they thought it was too easy and were proven wrong!
Honestly, if a place is obscure enough, even small barriers to entry help, like forums that don’t let you post on important boards until you build a reputation. There’s only so much effort an adversary is willing to put in, and if there isn’t a financial incentive or huge political incentive, that barrier can be low.
Finally, what the Fediverse needed:
AIs.
Good to know they’ve caught up to us from the walled gardens. Welp, I guess that’s it folks. Let’s pack up and go back to Usenet.
One of the first few instances I heard of was botsin.space, which has been around since at least 2017. Bots aren’t new. (Not sure where you’re pulling “AI” from, this is old* tech, and I don’t mean that negatively)
The core algorithm, a robust way to determine global trust rankings based on a network of relative trust, was first codified and used for this purpose in the late 1800s. It would be difficult to come up with a way to attack this problem that is further separated from the era of AI algorithms. I think you would need help from a math historian.
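For anyone curious what “global trust rankings based on a network of relative trust” looks like mechanically, here is a generic eigenvector-style sketch using power iteration. This is illustrative only, not the bot’s actual code; the seed set, damping factor, and the idea of treating downvotes as negative-weight edges are all assumptions made for the example.

```python
# Generic power-iteration sketch of trust propagating through a vote
# graph. Trust flows from initially trusted "seed" users along their
# votes; downvotes transfer negative trust. Not the bot's real code.
def trust_ranks(votes, seeds, damping=0.85, iters=50):
    """votes: {voter: {target: weight}}; seeds: initially trusted users."""
    users = set(votes) | {t for v in votes.values() for t in v}
    base = {u: (1.0 if u in seeds else 0.0) for u in users}
    rank = dict(base)
    for _ in range(iters):
        nxt = {u: (1 - damping) * base[u] for u in users}
        for voter, targets in votes.items():
            total = sum(abs(w) for w in targets.values()) or 1.0
            for target, w in targets.items():
                # Each voter distributes its own rank across its votes.
                nxt[target] += damping * rank[voter] * w / total
        rank = nxt
    return rank

ranks = trust_ranks(
    {"alice": {"bob": 1, "troll": -1}, "bob": {"alice": 1}},
    seeds={"alice"},
)
# The downvoted account ends up with negative trust; the upvoted one, positive.
assert ranks["troll"] < 0 < ranks["bob"]
```

The key property, as described above, is that rank is relative: a downvote only hurts you to the extent that the person casting it is themselves trusted, which is why the karma-farmed downvote bots mentioned elsewhere in this thread accomplished nothing.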