It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.

  • CarbonatedPastaSauce@lemmy.world · 7 months ago

    > I can use what I understand about pytorch and other libraries to infer specific aspects of the library that I am not familiar with.

    This is what LLMs can’t do, though. They can’t use what they understand, because they don’t understand anything. They can’t infer, they can’t reason, they can’t evaluate or compare. They can spit out words that make it look like they did those things, but they didn’t.
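
    For concreteness, here is a minimal sketch (my own, not from any paper) of the kind of pattern transfer the quoted commenter describes: every PyTorch nn.Module follows the same construct-then-call convention, so knowing one layer lets you correctly guess how an unfamiliar one is used. The specific layers chosen are illustrative.

    ```python
    import torch
    import torch.nn as nn

    # The nn.Module contract: construct with a config, then call on a tensor.
    familiar = nn.Linear(16, 8)    # a layer you already know well
    unfamiliar = nn.LayerNorm(8)   # docs unread, but the same contract holds

    x = torch.randn(4, 16)
    y = unfamiliar(familiar(x))    # identical construct-then-call pattern
    print(y.shape)                 # torch.Size([4, 8])
    ```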

    • UnpluggedFridge@lemmy.world · edited · 7 months ago

      Here I think you are behind on the literature. LLMs can infer and reason, and there is a whole body of papers that evaluates LLMs for these properties in exactly the same way we evaluate humans. So if you can’t trust the metrics, then you cannot even assert that humans can reason, infer, and understand.
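
      For example (a hypothetical sketch, not any specific paper’s benchmark): the evaluation is symmetric because the grader only sees answers, so a human and an LLM are scored by the exact same metric. The items and the answer_fn callables here are made-up placeholders.

      ```python
      from typing import Callable

      # Two toy deductive-reasoning items; real benchmarks have thousands.
      ITEMS = [
          ("All squids are cephalopods. All cephalopods are molluscs. "
           "Are all squids molluscs? (yes/no)", "yes"),
          ("If the switch is up, the light is on. The light is off. "
           "Is the switch up? (yes/no)", "no"),
      ]

      def accuracy(answer_fn: Callable[[str], str]) -> float:
          """Score any answerer, human or model, on the same items."""
          correct = sum(answer_fn(q).strip().lower() == a for q, a in ITEMS)
          return correct / len(ITEMS)

      # accuracy(lambda q: input(q + " "))  # a human at the keyboard
      # accuracy(my_llm_call)               # an LLM behind the same interface
      ```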