A machine learning librarian at Hugging Face just released a dataset composed of one million Bluesky posts, complete with when they were posted and who posted them, intended for machine learning research.
Daniel van Strien, the librarian behind the dataset, posted about it on Bluesky on Tuesday.
“This dataset contains 1 million public posts collected from Bluesky Social’s firehose API, intended for machine learning research and experimentation with social media data,” the dataset description says. “Each post contains text content, metadata, and information about media attachments and reply relationships.”
The data isn’t anonymous. In the dataset, each post is listed alongside the user’s decentralized identifier, or DID; van Strien also made a search tool for finding users based on their DID and published it on Hugging Face. A quick skim through the first few hundred of the million posts shows people doing normal types of Bluesky posting—arguing about politics, talking about concerts, saying stuff like “The cat is gay” and “When’s the last time yall had Boston baked beans?”—but the dataset has also swept up a lot of adult content.
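Because every record carries its author’s DID, reassembling a per-account posting history from a dump like this takes only a few lines. Here is a minimal sketch; the record layout and field names (`did`, `text`) are assumptions for illustration, and the real dataset’s columns may differ:

```python
from collections import defaultdict

# Hypothetical records mimicking the dataset's shape; the actual
# column names in the released dataset may differ.
posts = [
    {"did": "did:plc:abc123", "text": "arguing about politics"},
    {"did": "did:plc:xyz789", "text": "talking about concerts"},
    {"did": "did:plc:abc123", "text": "The cat is gay"},
]

# Group post texts by author DID -- this is all it takes to link
# every post in the dump back to a single account.
by_author = defaultdict(list)
for post in posts:
    by_author[post["did"]].append(post["text"])

print(len(by_author["did:plc:abc123"]))  # two posts trace back to one account
```

Pairing this grouping with a DID-to-handle lookup, like the search tool van Strien published, is what makes the dataset effectively non-anonymous.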