• froztbyte@awful.systems · 7 points · 7 months ago

    you appear to be posting this in good faith so I won’t start at my usual level, but … what? do you realize that you didn’t make a substantive contribution to the particular thing observed here, which is that somewhere in the mishmash dogshit that is popular LLM hosting there are reliable ways to get RCE with crafted inputs? I think maybe (maybe!) you meant to, but you didn’t really touch on it at all

    other than that:

    Basically, the more work you take away from the LLM, the more reliably everything will work.

    people here are aware, yes, and it stays continually entertaining

    • 200fifty@awful.systems · 18 points · 7 months ago

      I think they were responding to the implication in self’s original comment that LLMs were claiming to evaluate code in-model and that calling out to an external python evaluator is ‘cheating.’ But actually as far as I know it is pretty common for them to evaluate code using an external interpreter. So I think the response was warranted here.

      That said, that fact honestly makes this vulnerability even funnier because it means they are basically just letting the user dump whatever code they want into eval() as long as it’s laundered by the LLM first, which is like a high-school level mistake.
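      The “dump whatever code into eval()” pattern being described looks roughly like this; a hypothetical minimal sketch, not any specific framework’s actual code (`run_llm_tool_call` is a made-up name):

```python
# Hypothetical sketch of the vulnerable pattern: the framework trusts
# whatever "python" the model emits and exec()s it in-process.

def run_llm_tool_call(llm_generated_code: str) -> dict:
    """Execute model-emitted code directly in the host process.
    A user who can steer the model's output controls this string,
    so this is arbitrary code execution, not a calculator."""
    scope: dict = {}
    exec(llm_generated_code, scope)  # the high-school-level mistake
    return scope

# The "laundering" step: a prompt like "write python that reads
# /etc/passwd" gets faithfully transcribed by the model, and the
# framework then runs it with the host process's full privileges.
```

      The only gate between the user and exec() here is the model’s willingness to transcribe the code, which is exactly what jailbreaks defeat.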

      • Ephera · 10 points · 7 months ago

        Yeah, that was exactly my intention.

      • zogwarg@awful.systems · 6 points · 7 months ago

        From reading the paper, I’m not sure which is more egregious: the frameworks that pass code to eval and/or exec directly without checking, or the ones that rely on the LLM itself to do the checking (going by the fact that some of the CVEs require jailbreaking the LLM via prompt).
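        For contrast, the bare-minimum alternative looks something like the sketch below; this is an assumption about what “less egregious” could mean, not anything the paper’s frameworks actually do, and it is still not a real sandbox (an OS-level jail/container is needed for genuinely untrusted code):

```python
# Minimal mitigation sketch: run model-emitted code in a separate
# interpreter process with a timeout, instead of exec() in-process.
# This limits the blast radius but is NOT a sandbox -- the child
# process can still touch the filesystem and the network.
import subprocess
import sys

def run_untrusted(code: str, timeout: float = 5.0) -> str:
    proc = subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: isolated mode
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    return proc.stdout

print(run_untrusted("print(2 + 2)"))
```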

        If you wanted to be exceedingly charitable, you could imagine the maintainers of said frameworks claiming: “of course none of this should be used with unsanitized inputs open to the public, it’s merely a productivity-boost tool that you’d run on your own machine, don’t worry about possible prompts being evaluated by our agent from top bing results, don’t use this for anything REAL.”