• TommySoda@lemmy.world
    8 months ago

    You know the reason they wanna do this is to make it hands-off so they can ignore it. And it will work until the AI starts confidently giving people false advice and misinformation about whatever they need help with. If a human makes a mistake, they can correct it. In a lot of cases with AI, if it makes a mistake it will just double down. And if you have no human element to fact-check it, it will just spiral downward while you ignore it. I just hope the AI starts telling people to cancel their services as the most effective way to solve their problems and they don’t even notice.

    • FigMcLargeHuge@sh.itjust.works
      8 months ago

      Isn’t there already a case where an LLM assistant quoted a wrong price and the person sued when the company tried to go back on the offer? Maybe it was an airline, I can’t quite remember, but it stood up in court and the company had to honor it, as far as I remember.