Black Mirror creator unafraid of AI because it’s “boring”
Charlie Brooker doesn’t think AI is taking his job any time soon because it only produces trash

  • state_electrician@discuss.tchncs.de
    9 months ago

    LLMs are awful for facts, because they don’t understand what facts are. You should never rely on them if you require factual correctness.

    They are OK for text summarization, formatting, and just making shit up. For summarization, a human with experience still produces nicer output, because they understand the content and don’t just look at words. As for making shit up, you will get the statistically most likely output, so it’s usually trite and boring. I think the progress is amazing, but there are still so many problems to be solved.

    Right now I use them for boilerplate stuff, like writing a text with some parameters and then I polish it. For code I find them quite useless: with an IDE I can write boilerplate just as fast as I can polish prompts until the LLM delivers something useful. And with the IDE I don’t get references to methods or entire libraries that simply don’t exist.

    • banneryear1868@lemmy.world
      9 months ago

      Right now I use them for boilerplate stuff, like writing a text with some parameters and then I polish it

      It’s actually great for D&D, producing NPC dialogue or names on the fly. We also tried using it to calculate areas of effect for spells, e.g. “how many average-sized humans in armor with swords could fit in a circle with a diameter of 30 ft?” We were rolling with it until someone pointed out that it hadn’t calculated the area of a circle correctly, although it got the rest more or less right. So we don’t use it for that anymore, and it’s funny how what looks like the simplest component of a question is often the thing it gets wrong.
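
      The circle-area slip above is easy to check by hand. A minimal sketch of the arithmetic (the 25 sq ft per creature figure assumes each medium creature occupies a 5 ft × 5 ft square, the common tabletop convention; packing losses are ignored):

      ```python
      import math

      def creatures_in_circle(diameter_ft: float, space_per_creature_sqft: float = 25.0) -> int:
          """Rough head count for a circular area of effect.

          Assumes each creature controls a 5 ft x 5 ft square (25 sq ft);
          that figure and the function itself are illustrative, not a rule.
          """
          radius = diameter_ft / 2            # the step the LLM botched:
          area = math.pi * radius ** 2        # area uses the radius, not the diameter
          return int(area // space_per_creature_sqft)

      print(creatures_in_circle(30))  # ~706.9 sq ft / 25 -> 28 creatures
      ```

      Using the diameter instead of the radius would inflate the area fourfold, which is exactly the kind of “simple” error that slips past a quick read.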

    • darth_helmet@sh.itjust.works
      9 months ago

      People are also kind of shit at facts. There are so many facts, and it isn’t practical for everyone who needs to assess a fact’s accuracy to actually do so. But it isn’t structurally impossible to mimic how humans learn to gauge truthfulness; we just have to accept that such a system will be bound by the limitations of language, and by the risk inherent in trusting data it has not independently verified.