Anyone else getting tired of all the clickbait articles about PoisonGPT, WormGPT, etc. that never provide any sort of evidence to back up their claims?

They’re always talking about how the models are so good and can write malware, but damn near every GPT model I’ve seen can barely write basic code - no shot it’s writing actually valuable malware, let alone FUD (fully undetectable) malware as some are claiming.

Thoughts?

  • Dudewitbow
    1 year ago

    I don’t know, as I’m not someone deep into cybersecurity or anything, but the idea makes sense. Their idea is having a machine learn what “normal traffic” in a system looks like and react when abnormal traffic arrives.

    “Their” being Darktrace’s security product.
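
    The idea described above (learn a baseline of “normal” traffic, flag deviations) can be sketched as a toy z-score anomaly detector. This is just an illustration of the general technique; the function names, thresholds, and data here are made up for the example, not how Darktrace actually works:

    ```python
    from statistics import mean, stdev

    def build_baseline(samples):
        # Learn what "normal" looks like from historical traffic
        # volumes (e.g. bytes per minute): mean and std deviation.
        return mean(samples), stdev(samples)

    def is_abnormal(value, baseline, threshold=3.0):
        # Flag traffic more than `threshold` standard deviations
        # from the learned mean (a simple z-score test).
        mu, sigma = baseline
        if sigma == 0:
            return value != mu
        return abs(value - mu) / sigma > threshold

    # Hypothetical "normal" traffic: roughly 100 units per interval.
    normal_traffic = [98, 102, 99, 101, 100, 97, 103, 100, 99, 101]
    baseline = build_baseline(normal_traffic)

    print(is_abnormal(100, baseline))  # typical volume -> False
    print(is_abnormal(500, baseline))  # sudden spike   -> True
    ```

    Real products do this over many features at once (ports, destinations, timing) with far more sophisticated models, but the core “baseline then flag outliers” loop is the same.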