• Willer@lemmy.world · 11 months ago

    No, companies are only just now realizing how powerful it is and are throttling the shit out of its capabilities to sell it to you later :)

    • Marzepansion@programming.dev · 11 months ago

      “We purposefully make it terrible, because we know it’s actually better” is close to conspiracy-theory-level thinking.

      The internal models they are working on might be better, but they are definitely not making the product that’s publicly available right now shittier. It’s exactly the thing they released, and these are its current limitations.

      This has always been the type of output it would give you; we even gave it a term for this really early on: hallucinations. The only thing that has changed is that the novelty has worn off, so you are now paying a bit more attention to it. It’s not a shittier product, you’re just not enthralled by it anymore.

      • UndercoverUlrikHD@programming.dev · 11 months ago

        Researchers have shown that the performance of the public GPT models has decreased, likely due to OpenAI trying to optimise energy efficiency and adding filters to what they can say.

        I don’t really care about the why, so I won’t speculate, but let’s not pretend the publicly available models aren’t purposefully getting restricted either.

        • Marzepansion@programming.dev · edited · 11 months ago

          likely due to OpenAI trying to optimise energy efficiency and adding filters to what they can say.

          Which is different from

          No, companies are only just now realizing how powerful it is and are throttling the shit out of its capabilities to sell it to you later :)

          One is a natural thing that can happen in software engineering; the other is an assertion of malicious intent without facts. That’s why I said it’s close to conspiracy-level thinking. That paper does not attribute the decline to some deeper cabal of AI companies colluding to make a shittier product, all degraded by just enough that they’re equally shitty (so none outcompetes the others unfairly), so that they can sell the better version later (apparently this doesn’t hurt their brand or credibility somehow?).

          but let’s not pretend the publicly available models aren’t purposefully getting restricted either.

          Sure, not all optimizations are without costs. Additionally, you have to keep in mind that a lot of these companies are currently being kept afloat by VC funding. OpenAI isn’t profitable right now (they lost $540 million last year), and if investments take a downturn (like they did a little while ago in the tech industry), then they need to cut costs like any normal company. But it’s magical thinking to assume this is malicious by default.