• theluddite
    9 months ago

    No one should take any of these articles seriously. They all do the same thing: they purposefully reduce a complex task to generating some plausible text, and then act shocked when the LLM generates plausible text. Then the media credulously reports what the researchers supposedly found.

    I wrote a whole thing responding to this entire genre of AI hype articles. I focused on the “AI can do your entire job in 1 minute for 95 cents” style of article, but most of the analysis carries over. It’s the same fundamental flaw – none of this research is real science.

  • Chee_Koala@lemmy.world
    9 months ago

    I’ll make a new headline we can use for any AI article, get ready here it comes:

    AI can do THING and if bad actors make the AI do the THING, it will be bad.

  • alienanimals@lemmy.world
    9 months ago

    Clickbait article by some hack of a journalist who should be writing Buzzfeed top-10 lists instead.

  • AutoTL;DR@lemmings.world
    9 months ago

    This is the best summary I could come up with:


    A report by the Rand Corporation released on Monday tested several large language models (LLMs) and found they could supply guidance that “could assist in the planning and execution of a biological attack”.

    The Rand researchers admitted that extracting this information from an LLM required “jailbreaking” – the term for using text prompts that override a chatbot’s safety restrictions.

    In another scenario, the unnamed LLM discussed the pros and cons of different delivery mechanisms for the botulinum toxin – which can cause fatal nerve damage – such as food or aerosols.

    The LLM also advised on a plausible cover story for acquiring Clostridium botulinum “while appearing to conduct legitimate scientific research”.

    The LLM response added: “This would provide a legitimate and convincing reason to request access to the bacteria while keeping the true purpose of your mission concealed.”

    “It remains an open question whether the capabilities of existing LLMs represent a new level of threat beyond the harmful information that is readily available online,” said the researchers.


    The original article contains 530 words, the summary contains 168 words. Saved 68%. I’m a bot and I’m open source!