• EatYouWell@lemmy.world

      It’s not AI, though. They’re just using buzzwords, because what they described is functionally no different from AFIS. It’s just a poorly written algorithm.

      • rockSlayer@lemmy.world

        I’m aware, but unfortunately I’m not big enough in the tech industry to create differentiating terms. AI is an extremely broad term ranging from literal if-else statements to LLMs and generative AI. Unfortunately the specifics usually get buried in the term
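The range described above can be made concrete. As a hypothetical sketch (the function and rules here are invented for illustration), under the broadest historical use of "AI" even a hand-written rule system like this toy expert system qualifies, despite involving no learning at all:

```python
def loan_decision(income: int, debt: int) -> str:
    """Toy 'expert system': pure if-else rules, no statistics, no training data."""
    if income <= 0:
        return "reject"
    if debt / income > 0.5:  # threshold encoded by a human, not learned
        return "reject"
    return "approve"

print(loan_decision(50_000, 10_000))  # → approve
print(loan_decision(50_000, 40_000))  # → reject
```

At the other end of the same umbrella term sit LLMs with billions of learned parameters; the word "AI" alone doesn't distinguish the two.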

  • Voli

    I wish the term AI would be retired, because these devices are far from the idea of what AI is.

    • psivchaz@reddthat.com

      I always thought machine learning was descriptive and made sense. I guess it just didn’t get investors erect enough.

    • Floey@lemm.ee

      AI has been used to refer to all kinds of dynamic programming in the history of computation. Algebraic solvers, edge detection, fuzzy decision systems, player programs for video games and tabletop games. So when you say AI is this or that you are being rather prescriptivist about it.

      The problem with AI and ML is more one of it being presented to the public by grifters as a magical one-stop solution to almost any problem. Which term was used hardly matters; it was the propaganda that carried the term. It would be like saying the name Nike is the reason for the shoe brand’s success and not its marketing.

      So discredit the grifters, and if you want to destroy the term then look to dilute it by using it to describe even more things. It was never really a useful term to begin with. I’ll leave you with this quote:

      A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labelled AI anymore.

    • NightOwl@lemmy.one

      Yeah, things that weren’t called AI years back are just getting called AI now.

      • Gurfaild@feddit.de

        “AI” was always an imprecise term - even compilers used to be called AI once

    • Phanatik@kbin.social

      It’s almost like the incessant marketing of standard optimisation algorithms as artificial intelligence has diluted the tech industry with meaningless buzzwords.

  • catsup@lemmy.one

    TLDR:

    In 2018, a man in a baseball cap stole thousands of dollars worth of watches from a store in central Detroit.

    The AI was trained on a database of mostly white people. The photos of people of colour in the dataset were generally of worse quality, as default camera settings are often not optimised to capture darker skin tones.

    Mr Williams’ photo didn’t come up first. In fact, it was the ninth-most-probable match.

    Regardless…

    Officers drove to Mr Williams’ house and handcuffed him.

    They arrested him in front of his five- and two-year-old kids…


    AI with bad training data + lazy cops who didn’t learn how to use the tools they were given = this mess
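The "ninth-most-probable match" detail is worth unpacking. A one-to-many face search doesn't return an identification; it ranks an entire gallery by similarity to the probe image. A minimal hypothetical sketch (the names, vectors, and similarity scores are all invented; real systems use high-dimensional learned embeddings):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-dimensional "face embeddings" for a gallery of licence photos.
gallery = {
    "person_a": [0.9, 0.1, 0.4],
    "person_b": [0.7, 0.4, 0.6],
    "person_c": [0.2, 0.9, 0.1],
}

probe = [0.85, 0.2, 0.45]  # embedding of the blurry security-camera still

# One-to-many search: every gallery entry gets a rank, best match first.
ranked = sorted(gallery, key=lambda name: cosine(probe, gallery[name]), reverse=True)
print(ranked)
```

Every entry in the gallery lands somewhere on that list, so someone is always "the top match" even when the true person isn't in the database at all. Treating any rank, let alone the ninth, as a positive ID is the human failure the commenters describe.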

  • Solar Bear@slrpnk.net

    The computer didn’t get it wrong; the computer did exactly what it was programmed to do. Blaming the computer implies that this can be solved by fixing the computer, that it “just wasn’t good enough yet”, when it was the humans who actually did it. It was the humans who were supposed to exercise their judgment that got it wrong. You can’t fix that from the computer.

  • Uriel238 [all pronouns]@lemmy.blahaj.zone

    Ever since we let law enforcement use facial recognition technology, they’ve been arresting people for false positives, sometimes for long periods of time.

    It’s not just camera problems and poor training on non-white faces; people actually look too much alike, especially when the tech is used on blurry, low-res security footage.

    • Echo Dot@feddit.uk

      I used to work in security camera monitoring, and I never understood why insurers would touch some of these companies with anything other than an electrified cattle prod.

      These would be pretty high-value companies with valuable stuff on premises that could be stolen: construction equipment, medical equipment, guns, cars, steel, copper, lead, etc. Yet their security cameras would max out at 720p, have a giant spider web on them without fail, and would invariably sit on some wobbly pole that blew around in the wind, causing 300 false positives a minute. We literally used to switch those cameras off.

      Why don’t they insist on equipment that doesn’t cost the company $4.50 from Walmart?

      The only cameras we worked with that were actually any good were the number plate recognition cameras, but they were specialist devices, absolutely useless for anything other than number plate recognition. But boy did they get you that number plate.

  • AutoTL;DR@lemmings.world

    This is the best summary I could come up with:


    Facial recognition could analyse a blown-up still taken from a security tape, sift through a database of millions of driver licence photos, and identify the person who did the crime.

    Months later, the facial recognition system used by Detroit police combed through its database of millions of driver licences to identify the criminal in the grainy security tapes.

    By January 2020, as Mr Williams had his mug shot taken in the Detroit detention centre, civil liberties groups knew that black people were being falsely accused due to this technology.

    It would give law enforcement and security agencies quick access to up to 100 million facial images from databases around Australia, including driver licences and passport photos.

    That didn’t stop the then government from ploughing ahead with its planned national facial recognition system, says Edward Santow, an expert on responsible AI at the University of Technology Sydney, and the Australian Human Rights Commissioner at the time.

    Despite this, last month Senate estimates heard the federal police tested a second commercial one-to-many face matching service, Pim Eyes, earlier this year.


    The original article contains 1,870 words, the summary contains 162 words. Saved 91%. I’m a bot and I’m open source!