• pixxelkick@lemmy.world

    I 100% can see it easily selling for that much.

    You want to know why it’s worth that much?

    Petabytes of raw training data for LLMs. Arguably, reddit is at the moment one of the better gold mines of LLM training data on the internet: bazillions of posts already formatted as post-response chains, which is the exact type of format an LLM wants to train on.

    Can you imagine how valuable those servers loaded with posts are to a company like OpenAI, Google, or Microsoft?

    5 billion is quite reasonable to harvest every reddit post that has ever been made and to cut it off from your competitors.
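    To make the “post-response chain” point concrete, here’s a minimal sketch of how one post/top-comment pair could be turned into a chat-style training record. The JSON field names here are illustrative, not any vendor’s actual schema.

```python
import json

def to_training_example(post_title: str, post_body: str, top_comment: str) -> dict:
    """Turn one post/response pair into a chat-format training record.
    Field names ("messages", "role", "content") are illustrative only."""
    return {
        "messages": [
            {"role": "user", "content": f"{post_title}\n\n{post_body}".strip()},
            {"role": "assistant", "content": top_comment},
        ]
    }

example = to_training_example(
    "ELI5: Why is the sky blue?",
    "Always wondered this.",
    "Shorter (blue) wavelengths scatter more in the atmosphere.",
)
print(json.dumps(example, indent=2))
```

    The appeal of forum data in this framing is that the user/assistant pairing comes for free from the post/reply structure.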

    • MajorMajormajormajor@lemmy.ca

      The worst part is that AI chatbots will start responding like redditors. I can’t wait for ChatGPT to regale me with a story about his dad beating him with jumper cables, or jolly ranchers, or Hell in a Cell.

    • duncesplayed@lemmy.one

      Has reddit not already been scraped? With all of that information exposed bare on the public Internet for decades, and apparently so valuable, I find it hard to believe that everybody’s just been sitting there twiddling their thumbs, saying “boy I sure hope they decide to sell us that data one day so that we don’t have to force an intern to scrape it for us”.

      • sheogorath@lemmy.world

        I asked Bard whether it could make a bastardization of a rogersimon10 post, and it made this:

        Ah, the 2012 Fiesta EcoBoost. A pocket rocket some called it, a ticking time bomb I preferred. Owned one myself, sunshine yellow with a dent shaped like a screaming Muppet (courtesy of a rogue shopping cart). Let me tell you, that car was less a vehicle and more a high-octane game of Russian roulette with pistons.
        
        Every morning was a gamble. Would she purr to life like a hungover kitten, or unleash a symphony of coughs, bangs, and the ominous "Check Engine" light disco? One day, the radio spontaneously switched to polka, which, let's be honest, is only two steps away from a thermonuclear detonation in a Fiesta's internal combustion fiesta.
        
        Speaking of fiestas, the fuel gauge was as reliable as a politician's promise. One minute it was brimming, the next it was flashing "Bone Dry, Mate," even if I'd just filled it. Learned to carry a jerrycan like a Prohibition-era bootlegger, never knowing when the engine's thirst would strike.
        
        But oh, the things that little lemon could do! On a good day, that turbocharged engine was like a caffeinated hummingbird on steroids. Weaving through traffic, leaving Audis in the dust (until their turbos spooled up, of course). Handling? Razor sharp, unless you hit a pebble, then it was suddenly a drunken ballerina on roller skates.
        
        Comfort? Ha! The seats were about as plush as a park bench, and the suspension was tuned for "Feel Every Pebble in the County." But hey, it kept you awake, which was essential considering the constant fear of spontaneous combustion.
        
        In the end, I sold the Fiesta to a circus clown for a fistful of juggling pins and a lifetime supply of those tiny, uncomfortable hats. He seemed thrilled, probably because the polka radio was a bonus feature for him.
        
        So, would I recommend a 2012 Fiesta EcoBoost? Only if you enjoy existential dread, questionable fuel efficiency, and the thrill of living life on the edge (of a breakdown). Otherwise, stick to something safer, like a heavily sedated hamster. You'll thank me later.
        
        Now, if you'll excuse me, I have a date with a tow truck and a very suspicious mechanic who keeps asking about "jumper cables." Wish me luck.
        
        P.S. Don't forget the jumper cables. Seriously. You'll thank me later.
        
      • pixxelkick@lemmy.world

        Scraped data isn’t legal to resell; scraping isn’t even legal in the first place.

        Just because you can scrape the data doesn’t mean it’s worth anything.

        Companies like MS, Google, OpenAI, and FB make money by selling the usage of their LLM services to other companies, who then use that service to make their own products.

        If it came to light that MS/Google/OAI/FB were using illegal training data for their LLMs, it would get all those other companies hit in the crossfire.

        So these companies have to do a shit tonne of due diligence to assure their investors and clients that their LLMs are trained purely on legally obtained data and are safe to use.

        And you know what is a super easy way to assure them of that?

        If they literally own the original data themselves.

        • duncesplayed@lemmy.one

          Scraping is legal

          Have you been following any of the court battles involving LLMs lately?

          The New York Times suing OpenAI. Getty Images suing Stability AI. Sarah Silverman and George R.R. Martin suing OpenAI.

          All of those cases involve data that has been scraped. (In the latter two cases, the memoir/novels were scraped from excerpts and archives found online).

          It’s too early to say with complete certainty that it’s all legal (the appeal processes haven’t all finished yet), but at this point it looks like using scraped and copyrighted data to train LLMs is legal. Even if it turns out not to be legal, it’s very clear that nobody’s shying away from doing it, because the courts are showing, as a statement of fact, that it’s been happening for years.

          Everything you’ve written is just fantasy. We have a lot of reality that contradicts it. Every LLM company has relied primarily on scraped data (which we know to be completely legal) and has incorporated copyrighted, scraped data into its data sets (which is still legally a grey area, but is happening anyway).

          • pixxelkick@lemmy.world

            NYT hasn’t actually won that case yet, so it’s pointless to bring up. OpenAI has publicly stated that NYT has heavily misrepresented their findings.

            OpenAI’s value would plummet and crash if they gained a reputation for using illegal material to train their AI; investors would drop them fast.

            This is just a simple fact: LLM providers’ reputations are heavily staked on the legality of their data.

            So far the courts have ruled in these companies’ favor.

            But it’s extremely likely that illegally scraped data from reddit would not pass the sniff test and would devastate an offending company’s reputation.

            If you don’t understand why, you need to brush up on why these LLM services are worth so much, and on who is using them and for what. Once you understand that, it becomes extremely apparent why legally owning the entire history of every reddit post ever would be extremely valuable, and why a $5 billion price tag is actually not that crazy.

    • LittleBorat2

      This data was out in the open for a decade and still is. People could train their LLMs on it without problems.

        • Corkyskog@sh.itjust.works

          And then everyone started deleting accounts, comments and even rewriting and poisoning their comments. The data was way better before the API change.

          • pixxelkick@lemmy.world

            Do you actually think this has any impact? That’s silly.

            Reddit’s servers undoubtedly have the original copy of every single post made, and every time you edit your post, they store that copy too.

            So not only has everyone “poisoned” their data ineffectively, they have literally created paired “before” and “after” examples that can be used to train an LLM to be robust against poisoned data.

            Whoever buys the right to that is going to have a pretty huge goldmine, and perhaps they will rent it out, or perhaps they’ll use it themselves.
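            As a toy illustration of how “before vs after” copies could be used, here’s a rough sketch that flags an edit as likely poisoning when the edited text shares almost nothing with the original. The 0.3 threshold is an arbitrary guess, and a real pipeline would use trained classifiers rather than string similarity.

```python
from difflib import SequenceMatcher

def looks_poisoned(original: str, edited: str, threshold: float = 0.3) -> bool:
    """Flag an edit as likely 'poisoning' when the edited text shares
    almost nothing with the original (threshold is an arbitrary guess)."""
    similarity = SequenceMatcher(None, original, edited).ratio()
    return similarity < threshold

# A typo fix keeps most of the original text, so it is not flagged;
# a wholesale overwrite shares almost nothing, so it is.
typo_fix = looks_poisoned("I love this car, runs great",
                          "I love this car, it runs great")
overwrite = looks_poisoned("I love this car, runs great", "[removed]")
```

            The point of the edit history being stored server-side is that exactly this kind of original/edited comparison stays available even after users overwrite their comments.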

      • pixxelkick@lemmy.world

        Not legally / free.

        And yes, that very much matters if you intend to sell the service to companies that don’t want to get hit in the crossfire of potential lawsuits for building their products on top of stolen info.

        So if you own the data itself (by buying reddit), you have an ENORMOUS quantity of prime training data that your investors and potential customers know is legally clean, because you literally own it.

    • AJ Sadauskas@aus.social

      @pixxelkick @ardi60 Well, if anyone wants to buy it for that purpose, then I just hope they remember to screen out the more NSFW parts of Reddit.

      Otherwise, their bots are going to start giving some rather unfortunate responses to customer questions…

      • I am looking forward to the hilarity of it for a while though.

        “Cooking bot, I have found this cucumber I need to use before it goes bad. What can I do with it?”

        “Shove it up your rectum”

        Could lead to a lot of interesting lawsuits and make a lot of MBA bros look rather stupid.

      • pixxelkick@lemmy.world

        Most LLMs have tonnes of NSFW data in their training.

        Typically, if this needs to be blocked, a secondary RAG or LoRA layer is run on top to act as a filtering mechanism that catches, blocks, and regenerates explicit responses.

        Furthermore, restricting the allowed output lexicon is a whole thing in itself.

        Unfiltered LLMs without these layers added on are actually quite explicit and very much capable of generating extremely NSFW output by default.
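        For what it’s worth, here’s a toy sketch of that “filter layer on top” idea: check each draft response against a blocklist and regenerate when it trips. Real systems use trained safety classifiers rather than word lists, and would re-sample the model instead of substituting a canned refusal as this sketch does.

```python
# Illustrative blocklist; production systems use trained classifiers.
BLOCKLIST = {"rectum", "nsfw"}

def generate_draft(prompt: str) -> str:
    """Stand-in for the raw, unfiltered model call."""
    return "Shove it up your rectum"

def is_explicit(text: str) -> bool:
    """Naive check: does the text contain any blocklisted word?"""
    return any(word in text.lower() for word in BLOCKLIST)

def safe_generate(prompt: str, max_retries: int = 3) -> str:
    """Return the first draft that passes the filter, else a refusal."""
    draft = generate_draft(prompt)
    for _ in range(max_retries):
        if not is_explicit(draft):
            return draft
        # A real filter layer would re-sample the model here; this toy
        # version just substitutes a canned refusal.
        draft = "Sorry, I can't help with that."
    return draft
```

        The key design point is that the filter sits outside the model: the raw generation stays explicit-capable, and safety comes from the wrapper.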