• @whoisearth@lemmy.ca

      Corporate IT here. You’re assuming they’re smart enough to budget for this. They aren’t. They never are. Things are rarely, if ever, implemented with any thought put into any scenario that isn’t the happy path.

      • @Patches@sh.itjust.works

        As a corporate IT person also. Hello.

        We do put thought into what can go wrong. But no, we don’t budget for it, and as far as we’re concerned a 99% success rate is 100% correct 100% of the time. Never mind that the remaining 1% of 7 billion transactions per year is a fuck-ton of failures.
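For scale, that failure math spelled out (a trivial sketch; the 7 billion figure is the commenter’s own number):

```python
# Even a 99% success rate leaves 1% failing.
# At 7 billion transactions per year, that is:
transactions_per_year = 7_000_000_000

failures = transactions_per_year // 100  # 1% as exact integer division
print(f"{failures:,} failed transactions per year")  # 70,000,000
```

"Almost always right" at that volume still means tens of millions of failures.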

        • @whoisearth@lemmy.ca

          Amen. FWIW, at my work we have an AI steering committee. No idea what they’re doing, though, because you’d think the steady stream of articles and lawsuits against OpenAI and Microsoft over shady practices (most recently allowing their AI to be used by militaries, potentially to kill people) would have prompted some action. I love knowing my org supports companies that enable the war machine.

  • @TehWorld@lemmy.world

    Great! Please make sure that your server system is un-racked and physically present in court for cross-examination.

  • 𝕯𝖎𝖕𝖘𝖍𝖎𝖙

    “Airline tried arguing virtual assistant was solely responsible for its own actions”

    that’s not how corporations work. that’s not how ai works. that’s not how any of this works.

    • @Pantoffel@feddit.de

      Oh, it is if they are using a dumb integration of an LLM in their chatbot that is given more or less free rein. Lots of companies do that, unfortunately.

      • Fushuan [he/him]

        If it’s integrated in their service, unless they have a disclaimer and the customer has to accept it to use the bot, they are the ones telling the customer that whatever the bot says is true.

        If I contract a company to do X and one of their employees fucks shit up, I will seek damages from the company, and they internally will have to deal with the worker. The bot is the worker in this instance.

        • @lunar17@lemmy.world

          So what you’re saying is that companies will start hiring LLMs as “independent contractors”?

          • Fushuan [he/him]

            No, the company contracted the service from another company, but that’s irrelevant. I’m saying that in any case, the company is responsible for any service it provides unless there’s a disclaimer, be that service a chatbot, a ticketing system, a store, or workers.

            If an Accenture contractor fucks up, the one liable to the client is Accenture. Now, Accenture may sue the worker, but that’s beside the point. If a store mismanages products and sells the wrong stuff or lists incorrect prices, you go after the store chain, not the individual store, nor the worker. If a ticketing system takes your money but sends you an invalid ticket, you complain to the company that manages it, not the ones that programmed it.

            It’s pretty simple actually.

    • @Delta_V@lemmy.world

      My 2024 bingo card didn’t have a major corporation litigating in favor of AI rights in order to avoid liability, but I’m not disappointed to see it.

  • @Son_of_dad@lemmy.world

    Why would Air Canada even fight this? He got a couple hundred bucks, and they paid at least 50k in lawyer fees to fight paying it. They could have just given him the cost of the lawyers’ fees and been done with it.

      • tryptaminev 🇵🇸 🇺🇦 🇪🇺

        What’s fascinating is that they themselves thought there was any doubt about it, or that they could argue such a doubt.

        This is like arguing “It wasn’t me who shot the mailman dead. It was my automated home self-defense system.”

        • @frunch@lemmy.world

          Agree 100%. I mean, who are you gonna fine, the bot? The company that sold you the bot? This is a simple case of garbage in, garbage out: if they had set it up properly and vetted its operation, they wouldn’t be trying to make such preposterous objections. I’m glad this went to court, where it was definitively shut down.

          Fuck Air Canada. The guy already lost a loved one, and now they wanna drag him through all this over a pittance? To me, this is the corporate mindset: going to absolutely any length necessary to hoover up more money, even the smallest of scraps.

      • brianorca

        A settlement would cost less, can be kept private, and doesn’t set precedent. Now they have an actual court judgment, and that does set precedent.

    • @RobotToaster@mander.xyz

      I think some companies have a policy of fighting every lawsuit and making everything take as long as possible, simply to discourage more lawsuits.

    • @intensely_human@lemm.ee

      Because there is something far nastier in the world than self interest. This airline seems to me like it was operating from a place of spite.

    • @SpaceCowboy@lemmy.ca

      Just how Air Canada does things now. I think it largely stemmed from the pandemic where people gave them leeway on things being a bit messed up. But now they’ve fallen into a habit of not taking responsibility for anything.

  • Андрей Быдло

    That’s an important precedent. Many companies have turned to LLMs to cut costs and dodge liability for whatever the model says. It’s great that they get rekt in court.

  • Optional

    Lol. “It wasn’t us - it was the bot! The bot did it! Yeah!”

    • TxzK

      “See officer, we didn’t make these deepfakes, the AI did. Arrest the AI instead”

  • @RobotToaster@mander.xyz

    That seems like a stupid argument.

    Even if a human employee did that, aren’t organisations normally vicariously liable?

    • @atx_aquarian@lemmy.world

      That’s what I thought of, at first. Interestingly, the judge went with the angle of the chatbot being part of their web site, and they’re responsible for that info. When they tried to argue that the bot mentioned a link to a page with contradicting info, the judge said users can’t be expected to check one part of the site against another part to determine which part is more accurate. Still works in favor of the common person, just a different approach than how I thought about it.

      • Carighan Maconar

        I like this. LLMs are powerful tools, but rebranding them as “AI” and cramming them into ~everything is just bullshit.

        The more rulings like this, where the deploying entity is responsible for the (lack of) accuracy, the better. At some point companies will notice they cannot guarantee that the correct information is the only information provided, because that’s not how LLMs work in their function as stochastic parrots, and they’ll stop using them for a lot of things. Hopefully sooner rather than later.
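The “cannot guarantee” point is just probability. A toy sketch (the answer texts and the 99/1 split are made up for illustration) of why a system that samples its output can never promise the correct answer every time:

```python
import random

# Hypothetical toy model: the bot picks an answer by sampling, the way an
# LLM samples tokens. Even a 99%-likely correct answer fails sometimes.
answers = ["correct refund policy", "hallucinated refund policy"]
weights = [0.99, 0.01]

random.seed(0)  # reproducible run
replies = random.choices(answers, weights, k=1_000_000)
wrong = replies.count("hallucinated refund policy")
print(f"wrong answers in 1,000,000 chats: {wrong}")
```

Multiply that small error rate by real traffic volumes and “almost always right” becomes a steady stream of confidently wrong answers.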

        • lad

          This would actually be a very good outcome, if achievable: leave LLMs for cases where nothing important is on the line, or have humans supervise them.

  • @kandoh@reddthat.com

    A computer can never be held responsible so a computer must never make management decisions

    • IBM in the 80s and 90s

    A computer can never be held responsible so a computer must make all management decisions

    • Corporations in 2025

  • @Baggie@lemmy.zip

    Hey dumbasses, maybe don’t let a loose LLM represent your company if you can’t control what it’s saying. It’s not a real person; you can’t shift blame onto a non-sentient being.

  • @Mr_Blott@lemmy.world

    If you type “biz” instead of “business” in the first couple of lines, surely you’re not expecting me to actually keep reading?!

    • @frunch@lemmy.world

      I went ahead and read it anyway. I actually had to Google the last word of the article: natch. It’s slang for “naturally”. We’re living in interesting times. Glad the guy got compensated after going through that ordeal.

    • Doofus Magoo

      That’s just “El Reg’s” style; they’ve been that way for years. Don’t let their pseudo-informality fool you, though; they know their stuff.

      • @Mr_Blott@lemmy.world

        Yeah, you mean they’ve been getting worse for years! I’d expect better from a UK-based publication that isn’t a tabloid, tbh.

  • Flying Squid

    Oh good, we’ve entered into the “we can’t be held responsible for what our machines do” age of late-stage capitalism.

  • @rbesfe@lemmy.ca

    Par for the course for this airline, in my experience. They’re allergic to responsibility.

  • @shiroininja@lemmy.world

    I can’t wait for something like this to hit SCOTUS. We’ve already declared corporations are people and money is free speech, why wouldn’t we declare chatbots solely responsible for their own actions? Lmao 😂😂💀💀😭😭

    • Bob Robertson IX

      money is free speech

      Can someone explain this to me? I assume this is in relation to campaign finance, but what was the actual argument that makes “(spending/accepting/?) money is free speech”?

      • lad

        Maybe something along the lines of “if you can afford fines you can say whatever you want including but not limited to offence, lies, hate speech, and slander”

  • LinkOpensChest.wav

    If big companies want to give jobs to bots instead of humans, they need to reap the consequences.

    Side note: personally, I’ve never found a chatbot helpful. They typically only provide information that I can find for myself on the website. If I’m asking someone for help, it’s solely because I can’t find it myself.