• PizzaFacia@lemmy.world · 10 months ago

    $60mm a year seems really cheap, no? I know it’s shit data from the bot posters, but I’d still think it would be like $100-150mm.

    • potatopotato@sh.itjust.works · 10 months ago

      Honestly it’s probably the best search dataset in existence right now. You can make Google suck far less by appending “reddit” to most searches, because you’ll get results written by a much higher ratio of actual humans to bots.

      Yeah, reddit is shit, but the rest of the internet is 10x worse at this point. Pretty much any writing that isn’t a labor of love on someone’s personal page, or users interacting with each other in a semi-organic way, is rapidly becoming 100% GPT vomit as every company in existence lays off its writing staff.

      Whoever bought this got a fucking bargain.

    • ilinamorato@lemmy.world · 10 months ago

      It’s ludicrously cheap for the size and quality of the dataset. A set of 829 academic papers at the University of Michigan is priced at $25,000, about 1/2400 of this sale. If you scale that dollar value up to the Reddit sale price, you’d expect the Reddit dataset to contain about 2 million academic papers’ worth of data.
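      Quick sketch of that scaling, using the figures above (the $60M price is the reported deal value from this thread):

      ```python
      # Scale the U of M per-dataset price up to the reported Reddit deal.
      umich_price = 25_000         # USD for the 829-paper U of M dataset
      umich_papers = 829
      reddit_price = 60_000_000    # USD, reported deal value

      scale = reddit_price / umich_price       # ≈ 2400x
      implied_papers = scale * umich_papers    # ≈ 2 million papers' worth
      print(f"{scale:.0f}x the U of M price ≈ {implied_papers / 1e6:.1f}M papers")
      ```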

      But Reddit has almost two decades of text written by 200 million chronically online people. And sure, most Reddit users probably don’t write an academic paper’s worth of content every year; but the average is probably closer to that than not, especially when you consider that subreddits like AskHistorians and AskScience really are generating the equivalent of dozens of academic papers per day. Just based on the amount of text alone, Reddit should’ve sold us out for 50-100x what they got for just a single year of data, and 1000-2000x for the full twenty years (though, granted, they didn’t have that much data for that entire time, so let’s say half that).
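      Rough sketch of where the 50-100x figure lands; the half-a-paper-per-user-per-year rate here is an assumption for illustration, not a measured number:

      ```python
      # Value one year of Reddit text at the U of M per-paper price.
      price_per_paper = 25_000 / 829       # ≈ $30 per paper
      users = 200_000_000                  # user count cited above
      papers_per_user_year = 0.5           # assumed output rate (pure guess)

      one_year = users * papers_per_user_year * price_per_paper
      print(f"one year ≈ ${one_year / 1e9:.1f}B, ~{one_year / 60e6:.0f}x the $60M deal")
      # A full paper per user-year gives ~100x; ~20 years of data,
      # halved for early-years growth, lands in the 500-1000x range.
      ```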

      Furthermore, those 829 papers in the U of M dataset are disconnected, unlinked text representing a tiny fraction of what U of M’s 50,000 students generate in even a single year. Reddit has data with links, images, conversational responses, prompt responses, Q&As, flash fiction, slash fiction, historical deep-dives, investigations, memes, inside jokes, a development of style and consensus over time, and a comprehensive understanding of what it means to interact online, generated by people around the world over the course of 18 years. It’s much better data for almost any LLM purpose that isn’t just writing academic papers from the perspective of students at a medium-sized 4-year undergrad institution in the Midwestern US. The quality of the dataset should’ve made the value even higher. It’s hard to say exactly how much higher, but let’s be extremely conservative and say it should have doubled the total.

      That means that, conservatively, the value of Reddit’s dataset—or, rather, our dataset, which Reddit freebooted from us—was about 1000x what they were paid, based on the proportional value of the U of M dataset.

      They should’ve sold us out for billions.

      Of course, we don’t know what exclusivity terms or what subset of the data were included in this deal. It might be only one year of data, with only 6 months of exclusivity. But assuming they sold the rights to the entire dataset, we got sold for pennies.

    • ormr@feddit.de · 10 months ago

      Is data access exclusive to that one company? If not, then it’s no wonder they’re opting for a subscription-based model lol