The Pentagon has its eye on the leading AI company, which this week softened its ban on military use.

  • Fedizen@lemmy.world · 11 months ago

    I can’t wait until we find out AI trained on military secrets is leaking military secrets.

    • Jknaraa · 11 months ago

      I can’t wait until people find out that you don’t even need to train it on secrets for it to “leak” secrets.

        • Jknaraa · 11 months ago

          Large language models are all about identifying patterns in how humans use words and copying them. Thing is, that’s also how people tend to do things a lot of the time. If you give the LLM enough tangentially related data, it may end up ‘accidentally’ (read: randomly) outputting things you don’t want people to see.
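
          For a concrete picture, here’s a minimal sketch of what that kind of “accidental” regurgitation looks like, assuming a small off-the-shelf model loaded via Hugging Face transformers. The gpt2 checkpoint and the prompt below are just placeholders, not anything tied to the article:

          ```python
          # Minimal sketch: probe a small causal LM for memorized continuations.
          # Model and prompt are placeholders; the point is that greedy decoding
          # tends to reproduce whatever text most often followed the prefix in training.
          from transformers import AutoModelForCausalLM, AutoTokenizer

          tokenizer = AutoTokenizer.from_pretrained("gpt2")
          model = AutoModelForCausalLM.from_pretrained("gpt2")

          prompt = "The internal access code for the facility is"  # hypothetical prefix
          inputs = tokenizer(prompt, return_tensors="pt")

          output = model.generate(
              **inputs,
              max_new_tokens=40,
              do_sample=False,  # greedy: favors the most strongly "copied" continuation
              pad_token_id=tokenizer.eos_token_id,
          )
          print(tokenizer.decode(output[0], skip_special_tokens=True))
          ```

          This is basically the same idea behind published training-data-extraction attacks: the more often a string shows up in the training set, the more likely a plain prompt pulls it back out verbatim.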

    • AeonFelis@lemmy.world · 11 months ago

      In order for this to happen, someone will have to use that AI to make a cheatbot for War Thunder.

    • Bezerker03@lemmy.bezzie.world · 11 months ago

      I mean, even with ChatGPT Enterprise you can prevent that.

      It’s only the consumer versions that train on your data and submissions.

      Otherwise no legal team in the world would consider ChatGPT or Copilot.

      • Scribbd@feddit.nl · 11 months ago

        I will say that they still store and use your data in some way. They just haven’t been caught yet.

        Anything you have to send over the internet to a server you do not control will probably not work for an infosec-minded legal team.