I wish they didn’t incentivise deception and bad behavior.
Who should be responsible for compliance or for setting rules?
The platform of course but I’m aware it would most likely be against their best interest. I don’t really have a solution, this is just wishful thinking.
That’s pretty much Reddit’s approach. On that platform, the community handles moderation of all posts without any financial compensation, which is rather unusual among the larger platforms. But this approach also presents major difficulties: Reddit has moderators who each manage several very large communities/subreddits. In the past, this has led to the problem of moderators selling their “influence” to advertisers and other interest groups. The social media company, in this case Reddit, has little to no control over this - after all, a moderator is not an employee of the company.
Chronological. Completely uncensored. Allow easy blocking of others, including blocking posts/comments from your personal feed using categories or keyword recognition.
Done.
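A minimal sketch of what the category/keyword blocking described above could look like client-side. All names here are hypothetical, and Python is just for illustration; the point is that the platform censors nothing and each user maintains their own block rules:

```python
# Toy sketch of per-user feed filtering: the platform removes
# nothing; each user maintains their own block rules.
# All names here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    category: str
    text: str

@dataclass
class FilterRules:
    blocked_users: set[str] = field(default_factory=set)
    blocked_categories: set[str] = field(default_factory=set)
    blocked_keywords: set[str] = field(default_factory=set)

    def allows(self, post: Post) -> bool:
        if post.author in self.blocked_users:
            return False
        if post.category in self.blocked_categories:
            return False
        text = post.text.lower()
        return not any(kw in text for kw in self.blocked_keywords)

def personal_feed(posts: list[Post], rules: FilterRules) -> list[Post]:
    # Posts are assumed to arrive newest-first (chronological feed);
    # we only drop what *this user* chose to block.
    return [p for p in posts if rules.allows(p)]
```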
I initially rejected this idea with something like “You seem to forget how vile certain parts of the internet can be,” but the more I think about it, the more I agree, given a few conditions - namely that children should not be allowed access.
Forbidding children access to the internet would solve many problems, such as social media addiction (potentially leading to depression), the spreading of misinformation, and the general amount of child exploitation online. I don’t deny that such an action may introduce other issues that I have yet to consider, but I still feel that the main points are very compelling.
I am also aware that such a system is not perfect and that people will undoubtedly circumvent it, but a much larger number of people will not (if it is made difficult to do so). Unfortunately, the only conceivable way to do such a thing is some kind of age-verification system, which I am against for various privacy-related reasons.
Kinda like cohost
You feed me topics. I comment on them. Everyone thinks I’m hilarious. That’s all.
Haha! You’re so witty and funny!
God damn I love this place!
OP you are absolutely hysterical! I’m laughing my ass off!
Aww, shucks, I’m just trying to do my part and spread some joy. You all are too kind! ☺️
For anyone who’s willing to spend ~15 mins on this, I’d encourage you to play Techdirt’s simulator game Trust & Safety Tycoon.
While it’s hardly comprehensive, it’s a fun way of thinking about the balance between needing to remain profitable/solvent whilst also choosing what social values to promote.
It’s really easy to say “they should do [x]”, but sometimes that’s not what your investors want, or it takes a toll in other ways.
Personally, I want to see more action on disinformation. In my mind, that is the single biggest vulnerability that can be exploited with almost no repercussions, and the world is facing some important public decisions (e.g. elections). I don’t pretend to know the specific solution, but it’s an area that needs way more investment and recognition than it currently gets.
How can this be funded? A workforce is needed for all matters that cannot be automated.
Funding/resourcing is obviously challenging, but I think there are things that can support it:
- State it publicly as a proud position. Other platforms are too eager to promote “free speech” at all costs, when in fact they are private companies that can impose whatever rules they want. Stating a firm position doesn’t cost anything at all, whilst also playing a role in attracting a certain kind of user and giving them confidence to report things that are dodgy.
- Leverage AI. LLMs and other AI tools can be used to detect bots and deepfakes and to run sentiment analysis on written posts (see the sketch below). Obviously it’s not perfect and will require human oversight, but it can be an enormous help in surfacing things for staff faster than they would otherwise catch them.
- Punish offenders. Acknowledging the complexities of enforcing this consistently, there are still things you can do to remove the most egregious bad actors from the platform and send a signal to others.
- Price it in. If you know you need humans to enforce the rules, then build that into your advertising fees (or other revenue streams) and sell it as a feature (e.g. companies pay extra so they don’t have to worry about reputational damage when their product appears next to racists, etc.). The workforce you need isn’t that large compared to the revenue these platforms can potentially generate.
I don’t mean to suggest it’s easy or failsafe. But it’s what I would do.
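To illustrate the “Leverage AI” point above, here’s a rough triage sketch. It assumes the Hugging Face transformers library; the generic sentiment model and the 0.9 threshold are stand-ins (a production system would use purpose-built toxicity and bot-detection models):

```python
# Rough sketch of AI-assisted triage: score incoming posts and
# queue the most negative ones for human moderators rather than
# auto-removing anything. Assumes the Hugging Face `transformers`
# library; the generic sentiment model and 0.9 threshold are
# stand-ins for purpose-built toxicity/bot-detection models.

from transformers import pipeline

classifier = pipeline("sentiment-analysis")

def triage(posts: list[str], threshold: float = 0.9) -> list[str]:
    """Return posts flagged for human review."""
    flagged = []
    for post, result in zip(posts, classifier(posts)):
        if result["label"] == "NEGATIVE" and result["score"] >= threshold:
            flagged.append(post)
    return flagged
```

Flagged posts go to a human queue rather than being auto-removed, which keeps the oversight the bullet calls for.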
They should proactively defederate from Threads. 👍
I bloody hate Meta as a business, but I think instances shouldn’t defederate from them by default.
It should be a personal choice really. The user should choose whether or not they want to block Threads as an instance.
Should be a personal choice rather than mandated by an instance.
By federating with them, your instance is providing them with free content to profit off of. Every post you make is another post for their users to scroll through, another chance for them to inject ads even if you personally block Threads.
I agree with you. Fucking hate Meta. Still, I think it should be a personal choice for users. But then again, Lemmy is all about choices, and users can move from one instance to another.
I think we might be mostly on the same page, but to clarify: I believe an instance admin who chooses to federate with Threads deprives their users of personal choice more than one who chooses not to federate, because federation forces users to opt out of having their content used by a for-profit company (by changing instances).
Nah. They knowingly and deliberately house hate groups. They get actively defederated.
Why?
Immediate concern is difference in scale - we’re a drop compared to Meta’s ocean, and I don’t see how we can have any shred of hope moderating the tsunami of content that’ll be heading our way.
Long term, it’s EEE (embrace, extend, extinguish). I have zero expectation that Meta would handle a union with the fediverse ethically, and that’s their ticket to killing it off before it has the chance to grow into any kind of real competition.
I think social media should be 18+ only. In fact, I don’t think anyone under 18 should have phones that connect to the internet at large, only things like maps or whatnot to get around. I think this would solve a lot of fundamental phone addiction problems we’re seeing from our youth.
I also think filters of any kind should be banned on social media. They’re fun, but not worth the damage they cause.
The ultimate social media site, in my perspective, would probably have the simplicity and functionality of Side 7, the content execution methodology of TV Tropes, the expandability of Discord, the rule enforcement of ProBoards, the fanbase of YouTube, the adaptability of Hypothesis, and the funding of Pogo (classic Pogo, not modern Pogo, and no I don’t mean Pokémon Go).
Advertising revenue should at least pay a proportion of the cost of getting sausage lips.
Secondly, interacting with social media should be conducted using rotary dial phones. That’ll fcuk every generation which is overly keen on using it.
Remove voting. Remove likes. Remove any semblance of a point based system.
How would you determine which posts are displayed on the front page, if it’s supposed to be a platform that works similarly to Reddit or Lemmy?
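One possible answer, sketched below: without votes or likes, the front page could rank purely on recency and discussion activity. The decay formula is an assumption (loosely in the spirit of time-decay ranking), not how Reddit or Lemmy actually rank:

```python
# Hypothetical sketch of a vote-free frontpage: rank purely by
# recency and discussion activity (comment count), with a time
# decay so the page keeps turning over. No likes, no points.

import math
import time

def frontpage_score(posted_at: float, num_comments: int,
                    now: float | None = None) -> float:
    now = time.time() if now is None else now
    age_hours = max((now - posted_at) / 3600.0, 0.0)
    # log1p dampens runaway mega-threads; the +2 offset keeps
    # brand-new posts from dividing by ~zero.
    return math.log1p(num_comments) / (age_hours + 2.0) ** 1.5

posts = [
    {"title": "new, quiet", "posted_at": time.time() - 3600, "num_comments": 2},
    {"title": "old, busy", "posted_at": time.time() - 86400, "num_comments": 120},
]
posts.sort(key=lambda p: frontpage_score(p["posted_at"], p["num_comments"]),
           reverse=True)
print([p["title"] for p in posts])
```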