Senate bill aims to stop Uncle Sam using facial recognition at airports / Legislation would eliminate TSA permission to use the tech, require database purge in 90 days

  • paysrenttobirds@sh.itjust.works · ↑28 ↓9 · 11 months ago

    The TSA’s use of CAT-2 involves scanning a passenger’s face and comparing it to a scanned ID card or passport. The system can detect fake IDs “very quickly,” a TSA official told us in July, and is also able to verify the person is on any additional screening lists and is actually scheduled to travel in the next 24 hours.
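    The flow described above can be sketched in a few lines. This is a minimal illustration, not the TSA's actual implementation: all names, the embedding representation, and the match threshold are hypothetical, and real systems tune thresholds against measured false-match rates.

```python
# Hypothetical sketch of the three checks the article describes:
# face-to-ID match, screening-list lookup, and travel within 24 hours.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Passenger:
    name: str
    id_photo_embedding: list   # face embedding extracted from the scanned ID
    live_embedding: list       # face embedding from the checkpoint camera
    departure: datetime

MATCH_THRESHOLD = 0.9  # illustrative value only

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

def checkpoint_decision(p, watchlist, now):
    """Return (identity_verified, flagged_for_extra_screening)."""
    face_ok = cosine_similarity(p.id_photo_embedding, p.live_embedding) >= MATCH_THRESHOLD
    travels_soon = now <= p.departure <= now + timedelta(hours=24)
    flagged = p.name in watchlist
    return face_ok and travels_soon, flagged
```

    Note this is a 1:1 verification (does this face match this ID?), which is a much easier problem than 1:N identification (who is this face?).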

    This I’m actually ok with? The airport is already a place where you expect to have to give your real identity, and for the unfortunate people who share a name with someone on a watchlist, this technology helps them travel normally without hours-long interviews at every stop. I think that’s mainly because the TSA agent can say the computer ok’d it instead of having to stick their neck out personally.

    I guess the problem would be if the new scans of your face collected by this software are connected to your identity and/or travel data and then exported to third parties who didn’t already have that info.

    Because by itself it isn’t really giving the TSA any new information: they have your ID and your boarding pass. The government already knows who you are and where you’re going, and this bill doesn’t stop them acquiring or keeping that information.

    • NocturnalMorning@lemmy.world · ↑39 ↓2 · 11 months ago

      Facial recognition is bad for a multitude of privacy reasons. But the biggest reason is that it is often simply wrong, and it is often trained on biased data (which is almost impossible to completely remove).

      • bobgusford@lemmy.world · ↑8 ↓1 · 11 months ago

        Sorry, this needs more clarification! Do you mean “intent recognition”, where some AI trained on biased data will assume that some brown person is up to no good? Or do you mean that they will misidentify black and brown people more often because of how cameras work? Because the latter has nothing to do with biased data.

        • yeather@lemmy.ca · ↑7 ↓1 · 11 months ago

          Both, in fact. Training data for things like this regularly mixes up minority people. If Omar is an upstanding citizen but gets his face mixed up with Haani, a known terrorist, Omar gets treated unfairly, potentially to the point of lethality.

          • bobgusford@lemmy.world · ↑3 ↓1 · 11 months ago

            For “intent recognition”, I agree. A system trained on data showing mostly black people committing crimes might flag more black people as having ill intent.

            But for the sake of identification at security checkpoints, if a man named Omar - who has an eerie resemblance to Haani the terrorist - walks through the gates, then they probably need to do a more thorough check. If they confirm with secondary data that Omar is who he says he is, then the system needs to be retrained on more images of Omar. The bias was only that they didn’t have enough images of Haani and Omar for the system to make a good enough distinction. With more training, it will probably be less biased and more accurate than a human.
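            The failure mode being debated is easy to demonstrate. A toy sketch (all names and embedding numbers are made up): identification against a gallery picks the *closest* stored face, so two people whose embeddings land near each other collide, exactly the Omar/Haani case.

```python
# Toy 1:N identification: return whichever gallery identity is
# nearest to the probe embedding, even if the probe is someone else.
gallery = {
    "Haani": [0.90, 0.10, 0.30],  # watchlisted identity (hypothetical)
    "Alice": [0.10, 0.80, 0.50],
}

def nearest(probe, gallery):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(gallery, key=lambda name: dist(gallery[name], probe))

omar = [0.85, 0.15, 0.32]  # Omar's embedding happens to land near Haani's
print(nearest(omar, gallery))  # Omar is misidentified as "Haani"
```

            More and better training data spreads the embeddings apart, which is the retraining argument above; the CAT-2 checkpoint itself is the easier 1:1 case, comparing one face against one ID.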

        • paysrenttobirds@sh.itjust.works · ↑4 ↓6 · 11 months ago

          There is nothing in the article to suggest that the TSA program’s errors have inconvenienced people, since the agent is right there to correct them, and more scans improve the accuracy. I get what you’re saying, but the same biases are undoubtedly programmed into the brains of the agents and are just as hard to eradicate.

        There are many places I don’t want to see facial recognition employed, but where people are already mandated to positively identify themselves seems like a natural fit. I think the senators and the ACLU can find much more persuasive examples of overreach.

      • inclementimmigrant@lemmy.world · ↑8 ↓1 · 11 months ago

      Remember when those millimeter wave scanning machines rolled out and we were all reassured that the technology, which would create very detailed body scans, would blur out genitalia, would not be saved, and employees would not have access to the scanned data?

        We then found out that nothing was blurred, the data was saved, and the data was available to TSA agents to be copied onto goddamn flash drives and traded.

        Yeah, fuck this. I don’t trust the shitty security theater that is the TSA not to abuse this technology, not to mention that facial recognition has a myriad of problems with false positives for POC, due to the well-recognized racial bias baked into these systems by the programmers who build and train them.