“Their data” HA.
Hey! It took years of hard work to develop the good will necessary to get into a position to take advantage of their data!
Mark Zuckerberg: “Yeah so if you ever need info about anyone at Harvard just @ me. I have over 4,000 emails, pictures, addresses, SNS.”
“What? How’d you manage that one?” a friend asked.
“People just submitted it,” Zuckerberg replied. “I don’t know why. They ‘trust’ me. Dumb fucks.”
edit: copy/paste cleanup
Name, address, and so on I can, well, understand if you buy something online and have to fill that in. But SNS and ID??? That’s all kinds of stupid. Why would you give that willingly to FB? It’s not a government entity or even a bank.
It could be that some of that data was scraped from FB messenger.
That quote comes from the time when Facebook wasn’t open to the public yet, just fellow students of his.
Unfortunately, when we sign up to their EULAs we “willingly” give everything up… So technically it ends up being legally theirs 🥺
Not quite, but pretty close. You still hold the copyright in anything protected by copyright, for example; they just have a perpetual license to use your work. We really ought to be working on laws that protect privacy and limit corporate content piracy in the absence of explicitly clear opt-ins.
Maybe we should charge them for the emails they send us. Want me to sign up for a newsletter? That will be 20€ per email. Or something.
Back in the paper-spam days, some folks would stuff the “postage paid” envelopes with junk and mail them back to troll the companies. Setting up a junk address with an autoresponder would be satisfying, but would probably get tagged as illegal.
It’s a shame that it’s legal when a big corp does it but illegal when I do it. That always seems weird to me.
I’m on Lemmy due to this!
I literally use this platform just to run from bots and cooperate greed.
I don’t think Lemmy is well prepared to handle bots or more sophisticated spam; for now we’re just too small to target. I usually browse by new and see spam staying up for hours, even in the biggest communities.
Just chiming in here: there are some problems with federation at the moment. I’m an admin on LW, and we generally remove spam pretty quickly, but removals currently don’t federate quickly. We are working on temporary fixes until the Lemmy devs themselves fix it.
Sure, spam is bad, but I can just ignore it. Last week, though, there was an attack with CSAM which showed up while I was casually browsing new; that made me not want to open Lemmy anymore.
I think that is what needs to be fixed before we can tackle spam.
Whatever is done to fight spam should be useful in fighting CSAM too. The latest “AI” boom could prove fortunate for non-commercial social networks, as content recognition is something that can leverage machine learning. Obviously it’s a significant cost, so pitching in to cover running costs will have to become more common.
Admins are actively looking into solutions, nobody wants that stuff stored on their server, and there’s a bunch of legal stuff you must do when it happens.
One of the problems is the cost of the compute power needed to run CSAM-detection programs on pictures before upload, making it unviable for many instances. Lemmy.world is moving towards only allowing images hosted on whitelisted sites, I think.
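To make the pre-upload check idea concrete, here is a minimal sketch assuming a simple blocklist of known-bad image hashes. This is a toy: real scanning systems (PhotoDNA, PDQ, and the like) use perceptual hashing so that re-encoded or slightly altered copies still match, and the ML-based detection mentioned above is much more expensive than a hash lookup.

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known-bad files.
# (This digest is just the hash of b"test", used for illustration.)
BLOCKLIST = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_blocked(upload_bytes: bytes) -> bool:
    """Return True if an uploaded file matches a known-bad hash."""
    digest = hashlib.sha256(upload_bytes).hexdigest()
    return digest in BLOCKLIST
```

The cheap part is the lookup; the expensive part in practice is maintaining the hash list and running robust perceptual matching or classifiers at upload time, which is exactly the compute cost small instances struggle with.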
Be diligent with reporting, and consider switching instance if your admins aren’t really active.
The reports go to the community mods, not your instance admins, though, don’t they?
Any reports you make are visible to the admins of your instance.
E.g. if you make a report, the community mods may choose to ignore it while your admins choose to remove it for everyone using their instance.
Everything you see on Lemmy is through the eyes of your instance; people on other instances may see different stuff. E.g. some instances censor certain slurs, but that doesn’t affect users outside that instance. (De)federation also dictates which comments you will see on a post.
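For the curious: instances talk to each other over ActivityPub, so a removal travels as an activity sent to peer instances. A rough, hand-written illustration of what such a message might look like; all the IDs and URLs here are made up, and the exact activity types Lemmy uses may differ:

```python
import json

# Hypothetical ActivityPub "Delete" activity, roughly the kind of
# message an instance sends its peers when content is removed.
# Every URL below is fabricated for illustration only.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Delete",
    "actor": "https://example-instance.social/u/some_admin",
    "object": "https://example-instance.social/post/12345",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
}

payload = json.dumps(activity)  # what would be POSTed to peers' inboxes
```

Whether you actually see the removal then depends on your instance receiving and processing that activity, which is exactly why slow federation (mentioned above by the LW admin) leaves spam visible on some instances after it has been removed at the source.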
But they do go to the community mods, even on a different instance? And if the community mods remove the content, does that removal federate?
I prefer to rely on the community mods to remove most ‘spam’ as it’s their role to decide what is spam in their community. (Obviously admins can/should remove illegal content etc)
Admins for the most part shouldn’t have to remove content on their copy of other instances communities.
It goes to the community mods too, yeah. But when it comes to spam/scams being posted, admins (at least on programming.dev) will remove it immediately and not wait for community moderators. Spammers will usually spam multiple communities at once, and only admins have the ability to ban users entirely from the site/their instance.
A few days ago a person created multiple accounts and spammed scat content across multiple communities. Moderators can’t effectively stop those kinds of things.
Lol, well it’s not immune to either. As soon as anyone thinks Lemmy has ROI, it will be targeted by bots, corporate greed, and scrapers.
But all of our posts are publicly available on the Internet and, in my opinion, should be fair game for web crawlers, archivists, or whoever wants to use them. That’s the free and open Internet.
What’s shitty is when companies like reddit decide it’s “their” data.
…corporate?
Long live Lemmy!
Testify! 👏🏻👊🏻
Fuck spez.
That’s Steve Huffman folks!
Does this mean the monetary value of personal data is falling? I’m thinking this may be some sort of price fixing.
It’s the opposite.
They’re hoarding more of it because they want to capitalize on it.
Sharing your capital for free is a bad business move.
It’s probably more like when Amazon gets into yet another business and kills the competition. Whatever those 3rd party devs are doing the social networks can do themselves and make more money.
Data has always been valuable, even before Surveillance Capitalism. But now, with the rise of AI, the owners of social platforms that used to be easily accessible are making the data harder to get because they realize they can use it for their own LLM training.
Not to mention data’s various other uses, like advertising/marketing, selling it to foreign governments/adversaries/law enforcement agencies, etc.
I suspect it’s a similar story with AI
Before AI took off, openness was necessary to make groundbreaking discoveries. Pretty much all the architectures, and most if not all of the training data, were released open source.
Now that AI is taking off, these companies don’t want to help their competition. So their data and algorithms are becoming more and more closed off
We’re scanning the very last email! It surely has all the passwords!
Ohh fuck! Another fuckin cat picture zip file!
This is the best summary I could come up with:
However, in May, Christian Selig, the developer of the popular iOS client Apollo, had a call with the company where he learned that the cost demanded by the platform was so high that his app would go out of business.
As he wrote in detail on his blog, Threads launching with ActivityPub integration doesn’t automatically mean that we’ll see a flourishing ecosystem of apps by default.
“Again though, the integration of Threads into this ecosystem doesn’t necessarily equate to a larger market — as Meta may not need to use many of the back-end services, and most likely will not initially allow their users to use alternative clients,” Coates said.
“On the other hand, Meta’s integration may cause a lot more interest in self-hosting and other companies and communities may join the larger Mastodon ecology and that would increase the opportunities and possibility for services and products of this kind,” he noted.
The Iconfactory’s principal developer Ged Maheux told TechCrunch that the company has learned to diversify its revenue across different apps after the bad experience of Twitterrific’s shutdown.
Earlier this week, Maheux and the Iconfactory launched a Kickstarter campaign for a new app called Tapestry, which will let you connect your social media accounts and RSS feeds in a single chronological timeline.
The original article contains 2,203 words, the summary contains 216 words. Saved 90%. I’m a bot and I’m open source!