• @ArbiterXero@lemmy.world · 15 points · 16 days ago

      Nah, they’ll just make the AI racist to compensate.

      Also, until they can’t turn off the camera, it’s worth nothing.

    • @octopus_ink · 12 points · 16 days ago (edited)

      They’d better be careful; the AI could actually make things more impartial. They wouldn’t want that.

      I dunno, when the cops scream “stop resisting” 400 times while kicking a man in the fetal position on the ground, will it conclude he’s resisting or conclude excessive force is being used? I know where my money is at.

    • FaceDeer · 5 points · 16 days ago

      My first thought too, “finally something in the chain that’s honest.”

      It’d be good to audit it now and then, of course.

      • @remotelove@lemmy.ca · 8 points · 16 days ago

        They are probably going to train the AI on existing reports and videos. Why train an AI to work against you?

    • @brlemworld@lemmy.world · 2 points · 16 days ago

      I mean, if it’s based on the audio, the police officer can just say “I’m under attack, I feel threatened” even when they’re not, before they walk up to somebody. It’s very, very easy to manipulate this.

  • @harsh3466 · 20 points · 16 days ago

    Probably using the Arya AI prompt filter.

  • Deebster · 13 points · 16 days ago

    It feels off that the headline talks about body cam footage when the AI actually only uses the audio. Technically audio alone may count as footage, but I think most people take “footage” to mean the audio and video together.

    Anecdotally, I’ve found that AI systems set up to summarise are reliable, probably using that “turn off creativity” setup that’s mentioned.
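    For context, the “turn off creativity” setup usually means temperature-0 (greedy) decoding: the model always takes its single highest-scoring next token instead of sampling, so the same input yields the same summary every time. A minimal sketch of the idea, using made-up token names and toy scores:

    ```python
    import math

    # Toy next-token scores (logits) a model might produce; the tokens and
    # numbers here are purely illustrative, not from any real model.
    logits = {"complied": 2.1, "resisted": 0.3, "fled": -1.0}

    def softmax(scores):
        """Convert raw scores into a probability distribution."""
        m = max(scores.values())
        exps = {tok: math.exp(s - m) for tok, s in scores.items()}
        total = sum(exps.values())
        return {tok: e / total for tok, e in exps.items()}

    def greedy_pick(scores):
        """Temperature-0 decoding: always take the highest-scoring token."""
        return max(scores, key=scores.get)

    probs = softmax(logits)          # sampling would draw from these probabilities
    print(greedy_pick(logits))       # greedy decoding ignores them and picks the max
    ```

    With sampling, low-probability tokens occasionally get chosen, which is where “creative” (and hallucinated) wordings come from; greedy decoding removes that randomness, though not the underlying bias in the scores themselves.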

      • Deebster · 8 points · 16 days ago

        It’s already a report written by the police - they can make it say whatever they want with or without AI.

  • kamenLady. · 5 points · 16 days ago

    In 2022, nine out of the 12 international ethics board members resigned following the announcement — and prompt reversal — of having Taser-wielding drones patrol US schools.

    This ain’t no futurism anymore, it’s already time for an ancient_dystopia community‽

  • AutoTL;DR (bot) · 2 points · 16 days ago

    This is the best summary I could come up with:


    As Forbes reports, it’s a brazen and worrying use of the tech that could easily lead to the furthering of institutional ills like racial bias in the hands of police departments.

    “It’s kind of a nightmare,” Electronic Frontier Foundation surveillance technologies investigations director Dave Maass told Forbes.

    Axon claims its new AI, which is based on OpenAI’s GPT-4 large language model, can help cops spend less time writing up reports.

    But given the sheer propensity of OpenAI’s models to “hallucinate” facts, fail at correctly summarizing information, and replicate the racial biases from their training data, it’s an eyebrow-raising use of the tech.

    “This is going to seriously mess up people’s lives — AI is notoriously error-prone and police reports are official records,” another user wrote.

    In 2022, nine out of the 12 international ethics board members resigned following the announcement — and prompt reversal — of having Taser-wielding drones patrol US schools.


    The original article contains 555 words, the summary contains 152 words. Saved 73%. I’m a bot and I’m open source!