I’m thinking of starting a community for thoroughly researched investment and trading proposals (crypto/stocks/etc).
My idea was to have a severely restricted subreddit and/or community here that would have some or all of the following criteria:
- No posts from users with account age under e.g. 2 years
- No posts that don’t meet the requirements set in the community guidelines
- No more than 1 post per member each week
- Posts must be approved to be visible
- etc…
The goal is that it can’t be easily targeted by spam bots or easily gamed, and that every post is approved by a valued member.
The main problem I have with subreddits is the amount of power mods have to do shady behind-the-scenes deals and/or pump and dump schemes. I don’t want anyone to even question whether I might be doing that, or for it to even be possible.
My idea on Reddit was to create the subreddit with a new account and livestream that account’s inbox to show that there’s no funny business going on. That’s obviously pretty stupid and inconvenient and would require any other mod to do the same.
What I’m after is a community whose content is very carefully curated whilst being as trustworthy as possible by making it heavily locked down and in fact quite trustless.
One part of the solution I’ve thought of is for the platform to have a feature whereby mods can be anonymous and personally uncontactable whilst at the same time making all messages and post requests public for anyone to see. This would remove the ability for scammers and con artists to secretly contact mods and conduct shady deals because anyone would be able to see that taking place.
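That “shadow transparency” inbox could be sketched as an append-only, publicly readable log. This is a minimal illustration of the idea, not any real platform’s API; the class and method names are all hypothetical:

```python
class PublicModInbox:
    """Append-only, publicly readable log of all messages sent to mods.

    Sketch of the idea above: mods stay anonymous and privately
    uncontactable, but every message or post request addressed to
    them is visible to anyone, so a secret deal can't be arranged
    through the platform without leaving a public trace.
    """

    def __init__(self):
        self._log = []  # only ever appended to, never edited or deleted

    def send(self, sender, body):
        entry = {"sender": sender, "body": body}
        self._log.append(entry)
        return entry

    def public_view(self):
        # anyone, not just mods, can read the full log
        return list(self._log)
```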
However, mods would still be able to reach out to shady actors off-platform and prove who they are simply by logging in and revealing the mod dashboard.
The solution to this would be to give many longstanding members of the community lesser mod privileges to approve posts. These members would have to have been in the community for a while and have submitted posts to it. Under this system, they would also have to approve each post before it goes live, which removes an element of control from the mods. My thinking was that the system could choose 3 currently online members with this lesser mod privilege at random and request their approval. If those accounts go offline or don’t respond within a few minutes, it then requests approval from other members instead.
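The random-quorum idea above could look something like this. Everything here is a sketch under my own assumptions: the quorum size, the timeout, and the `is_online` / `ask_for_approval` callbacks are all hypothetical stand-ins for whatever the platform would actually provide:

```python
import random

APPROVALS_NEEDED = 3   # assumed quorum size from the proposal
TIMEOUT_SECONDS = 180  # "a few minutes" before moving on to others

def request_approvals(post, trusted_members, is_online, ask_for_approval):
    """Randomly pick online trusted members to approve a post.

    `trusted_members` is the pool of longstanding members holding the
    lesser mod privilege; `is_online` and `ask_for_approval` are
    hypothetical platform callbacks. `ask_for_approval` is assumed to
    return True/False, or None if the member doesn't respond in time.
    """
    pool = [m for m in trusted_members if is_online(m)]
    random.shuffle(pool)  # random selection makes targeted collusion harder
    approvals = 0
    for member in pool:
        answer = ask_for_approval(member, post, timeout=TIMEOUT_SECONDS)
        if answer is True:
            approvals += 1
        # on timeout (None) or decline, simply try the next random member
        if approvals >= APPROVALS_NEEDED:
            return True  # quorum reached: post becomes visible
    return False  # not enough online approvers; post stays pending
```

Because the approvers are picked at random from a large pool, a scammer can’t know in advance whom to bribe, which is the point of the design.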
Are there any better solutions that have been thought of or are already implemented?
Is this something that you guys have thought about or would consider?
Would love to hear your thoughts on this
At least we’re trying to solve a people problem with technology, which is always hard and unpredictable, whether or not the Lemmy devs implement something like this. I have thought about similar issues a bit, though I didn’t come to anything very specific. I really like your idea of “shadow transparency” (as you call it).
My idea was to have as granular a permission system as possible, so that admins can assign/remove permissions for whatever actions they want: post approvals, sending/receiving private messages, posting/commenting/reading, etc. Discourse tried to implement something similar, though I’m not sure how well they succeeded. It’s not that flexible — a level in Discourse represents a fixed set of permissions — but maybe that’s OK if the levels are configurable. The multi-level approvals and inbox disclosure you’re talking about, though, are definitely features beyond a granular permission system, as is Discourse’s automatic level upgrade on an event such as “the account is more than 1 year old”.
In terms of what I think is relatively easily achievable in Lemmy, it’s the following combination:
- a permission system which is as granular as possible
- an API which is also as granular as possible
- a custom bot or a set of bots using the API in the way required for that specific community/instance
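A bot in that combination might enforce the community rules from the opening post before anything reaches the human approval queue. To be clear, this sketch doesn’t use Lemmy’s actual API — the `author` shape, the thresholds, and the function name are all illustrative assumptions:

```python
from datetime import datetime, timedelta, timezone

MIN_ACCOUNT_AGE = timedelta(days=730)  # the "2 years" rule from the proposal
MAX_POSTS_PER_WEEK = 1                 # the "1 post per member each week" rule

def should_queue_for_approval(author, posts_this_week):
    """Apply the community's automatic posting rules.

    `author` is assumed to be a dict with a timezone-aware `created`
    datetime; `posts_this_week` is how many posts the member already
    made this week. Returns True only if the post should go on to
    human approval.
    """
    age = datetime.now(timezone.utc) - author["created"]
    if age < MIN_ACCOUNT_AGE:
        return False  # account too young: reject outright
    if posts_this_week >= MAX_POSTS_PER_WEEK:
        return False  # over the weekly quota
    return True  # passes automatic checks; trusted members approve next
```

The point of keeping the bot this thin is that its whole logic fits on one screen, so a community could audit it directly instead of trusting the dev’s description of it.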
The above isn’t my thinking in full; I’ll add more later if anything comes to mind.
Yep, that looks like it could be a good step towards the goal. However, there would still be so many layers of abstraction that it would be hard to prove that whatever is calling the API is fully trustworthy. Without users seeing the journey from git to implementation, it would be very easy for a dev to dupe a (non-technical) community by implementing a bot that filters posts before they’re ever seen — e.g. an admin looking to manipulate a price could block any post mentioning the competition.
That Discourse blog post is great food for thought! Thanks very much.
Maybe what I’m thinking of would require a fundamentally different, less configurable system where all parameters of a community are publicly on display in natural language, and these social features are built in and configurable from the start, on a per-community basis rather than site-wide.
Like a super granular permissions system with no API. But I would want an API… Not sure if I like my own idea haha!
I might just be thinking too much about it, but I’d really like there to be somewhere on the internet with a trustworthy forum for sharing knowledge or thoughts, in an organised way, that can’t be manipulated and doesn’t rely on external authority as the basis of trust (i.e. institutions/governments/etc).
It is indeed a people problem! I think we almost always use anonymity to protect ourselves, but it could also be used to protect a community from the admins/mods by diluting their power. I’m sure there are people far cleverer than me who have figured it out already! Would love to see it.
deleted by creator
No need to expose the whole database; a subset of it could work. Then again, there’s no guarantee the data isn’t preprocessed to look nicer and more polished before it reaches the public.