• fubarx (+2, −31) · 1 month ago

    At some point in the future, there will be LLMs that scan every voice, video, or text call and run sentiment analysis. If one flags a call as hostile, it could signal all intermediate network conduits that this person is a threat, essentially ‘unmasking’ their origin. Authorities can then take appropriate action.

    If we’re lucky, laws will be passed to enable it just for cases of assholes threatening someone they don’t like. And after a few public lawsuits, the whole doxxing/swatting/death-threat thing will cease to exist.

    More likely, it’ll be used by governments to stifle dissent. But you gotta dream 😔

      • fubarx (+1, −10) · 1 month ago

        How to stop violent death threats.

      • MagicShel@programming.dev (+3) · 1 month ago

        An LLM could do this, but running it on every single communication would be very expensive, and it wouldn’t be particularly good at it. Humans are good at communicating through subtext: “You have such a lovely wife, I would just be gutted if something tragic were to happen to her.”

        ChatGPT picked up the veiled threat there, but that’s a very unsubtle example.
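The cost point can be made concrete with a toy sketch: run a cheap keyword pre-filter on everything and pay for the LLM call only on messages that trip it. Everything here (the keyword list, the `llm_flags_threat` stub) is a hypothetical illustration, not a real moderation system, and it also exhibits the weakness described above: a sufficiently veiled threat may contain no trigger words at all.

```python
# Hypothetical two-stage screen: a cheap keyword filter gates access to
# the expensive LLM classifier. Keywords and the stub are assumptions
# made up for illustration.

CHEAP_TRIGGER_WORDS = {"kill", "hurt", "tragic", "gutted"}

def cheap_prefilter(message: str) -> bool:
    """Fast, inexpensive screen run on every message."""
    text = message.lower()
    return any(word in text for word in CHEAP_TRIGGER_WORDS)

def llm_flags_threat(message: str) -> bool:
    """Placeholder for the expensive LLM classifier; a real system
    would call a hosted model here."""
    raise NotImplementedError("would invoke an LLM API")

def screen(message: str) -> bool:
    """Escalate to the LLM only when the cheap filter trips."""
    if not cheap_prefilter(message):
        return False  # the vast majority of traffic stops here, cheaply
    return llm_flags_threat(message)
```

Note that the gating only reduces cost to the extent benign traffic avoids the trigger words, and a threat phrased entirely in innocuous language sails straight past the first stage.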

    • Nuke_the_whales@lemmy.world (+8) · 1 month ago

      Sounds like the kinda thing my grandpa would say back in the 90s. I’m still waiting for the microchip that’s supposed to make me worship Satan