• Pyro@programming.dev · +171/−4 · 8 months ago

    GPT doesn’t really learn from people; it’s the over-correction by OpenAI in the name of “safety” that likely caused this.

    • lugal@sopuli.xyz · +66/−2 · 8 months ago

      I assumed they reduced capacity to save power due to the high demand.

      • MalReynolds@slrpnk.net · +52/−5 · 8 months ago

        This. They could obviously reset to the original performance (what, they don’t have backups?); it’s just more cost-efficient to serve crappier answers. Yay, turbo AI enshittification…

        • CommanderCloon · +40/−2 · 8 months ago

          Well, they probably did power down the performance a bit, but censorship is known to nuke LLMs’ performance as well.

    • rtxn@lemmy.world · +49/−5 · 8 months ago

      Sounds good, let’s put it in charge of cars, bombs, and nuclear power plants!

          • OpenStars@startrek.website · +2 · 8 months ago

            I mean… some might argue that even 98% wasn’t enough!? :-D

            What are people supposed to do - ask every question 3 times and take the best 2 out of 3, like this was kindergarten? (And that is the best-case scenario, where the errors are evenly distributed across the entire problem space - the least likely model of all. Much more often, some problems will be wrong 100% of the time while others are correct more like 99% of the time, and crucially you will never know in advance which is which.)
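            The “best 2 out of 3” joke is, incidentally, a real mitigation (majority voting over repeated samples), and the caveat above is exactly its failure mode. A purely illustrative sketch - `toy_model` is a made-up stand-in, not any real API - showing that when a model is systematically wrong on a question, voting just returns the same wrong answer unanimously:

```python
from collections import Counter
from typing import Callable

def majority_vote(ask: Callable[[str], str], question: str, n: int = 3) -> str:
    """Ask the same question n times and return the most common answer."""
    answers = [ask(question) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Toy stand-in for a model whose errors are NOT evenly distributed:
# it is always wrong on one particular question.
def toy_model(q: str) -> str:
    return "wrong" if q == "hard one" else "right"

print(majority_vote(toy_model, "easy one"))  # right
print(majority_vote(toy_model, "hard one"))  # wrong - unanimous, but still wrong
```

            Voting only helps when errors are independent and rarer than 50% per question; with correlated or systematic errors it gives you confident unanimity instead of correctness.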

            Actually that touches on a real issue: some schools teach the model of “upholding standards”, where the kids actually have to know stuff (& like, junk, yeah totally) - whereas the competing model says that if they just learn something, anything at all during the year, that is good enough to pass them and make them someone else’s problem down the line (it’s a good thing that professionals don’t need to uh… “uphold standards”, right? Anyway, the important thing is that the school still receives the federal funding in the latter case but not the former, and I am sure we can all agree that when it comes to the next generation of our children, the profits for the school administrators are all that matters… right? /s)

            All of this came up when Trump appointed one of his top donors, Betsy DeVos, to be in charge of all edumacashium in America, even though she had literally never set foot inside a public school in her entire life. I am not kidding you - watch the Barbara Walters special to hear it from her own mouth. Appropriately (somehow), she had never even heard of either of these two main competing models. Yet she still stepped up and figured that somehow, as an extremely wealthy (read: successful) white woman, she could do that task better than literally all of the educators in the entire nation - plus all those with PhDs in education, jeering from the sidelines.

            Anyway, why we should expect “correctness” from an artificial intelligence, when we cannot seem to find it anywhere among humans either, is beyond me. These were marketing gimmicks to begin with, then we all rushed to ask it to save us from the enshittification of the internet. It was never going to happen - not this soon, not this easily, not this painlessly. Results take real effort.