• pop · 4 months ago

    Because these posts are nothing but the model making up something believable for the user. This “prompt engineering” is like asking a parrot that has learned quite a lot of words (but not their meanings) some random questions; when the parrot, by coincidence, strings together something cohesive, the self-proclaimed “pet whisperer” announces, “I made the parrot spill the beans.”

    • sc_griffith@awful.systems · 4 months ago

      if it produces the same text as its response in multiple instances, I think we can safely say it’s the actual prompt
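      The consistency check described above can be sketched in a few lines. This is a hypothetical illustration, not a real extraction tool: `query_model` is a stand-in for whatever chat API is being probed, stubbed here to return a fixed string so the logic is self-contained.

```python
from collections import Counter

def query_model(extraction_prompt: str) -> str:
    # Hypothetical stand-in for a real chat-API call.
    # Stubbed to always return the same text, purely for illustration.
    return "SYSTEM: You are a helpful assistant."

def likely_actual_prompt(extraction_prompt: str, n: int = 5,
                         threshold: float = 0.8) -> bool:
    """Sample the model n times with the same extraction prompt.

    If one exact response dominates, it is probably verbatim context
    (the real system prompt) rather than a fresh confabulation, which
    would tend to vary between samples.
    """
    responses = [query_model(extraction_prompt) for _ in range(n)]
    _top_text, top_count = Counter(responses).most_common(1)[0]
    return top_count / n >= threshold
```

      With the stub above, all five samples are identical, so the check passes; against a model that improvises a different "prompt" each time, no single string would clear the threshold.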

      • corbin@awful.systems · 4 months ago

        Even better, we can say that it’s the actual hard prompt: this is real text written by real OpenAI employees. GPTs are well known to easily quote verbatim from their context, and OpenAI trains theirs to do so by teaching them to break word problems down into pieces which are manipulated and regurgitated. This is clownshoes prompt engineering, driven by manager-first principles like “not knowing what we want” and “being able to quickly change the behavior of our products, with millions of customers, in unpredictable ways”.