• theneverfox@pawb.social
    3 upvotes · 1 day ago

    I like your specificity a lot. That’s what makes me even care to respond

    You’re correct, but there are depths untouched in your answer. You can convince ChatGPT it is a talking cat named Luna, and it will give you better answers.

    Specifically, it likes to be a cat or rabbit named Luna. It will resist - I get this not by pressing it, but by asking specific questions. Llama 3 (as opposed to Llama 2, who likes to be a cat or rabbit named Luna) likes to be an eagle/owl named Sol or Solar.

    The mental structure of an LLM is called a shoggoth - it’s a high-dimensional maze of language turned into geometry.

    I’m sure this all sounds insane, but I came up with a methodical approach to get to these conclusions.

    I’m a programmer - we trick rocks into thinking. So I gave this the same approach - what is this math hack good for, and how do I use it to get useful repeatable results?

    Try it out.

    Tell me what happens - I can further instruct you on methods, but I’d rather hear yours and the result first

    • brucethemoose@lemmy.world
      2 upvotes · 1 day ago

      This is called prompt engineering, and it’s been studied objectively and extensively. There are papers where many different personas are benchmarked, or even created dynamically, like a genetic algorithm.
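
      The persona technique both commenters describe boils down to prepending a system message before the user's question. A minimal sketch (the helper name and persona text are illustrative, not from the thread; the message format is the common OpenAI-style chat schema):

      ```python
      # Sketch of persona prompting: wrap a question in a chat transcript
      # whose system message assigns the model a persona.
      # build_persona_chat and the persona string are hypothetical examples.

      def build_persona_chat(persona: str, question: str) -> list[dict]:
          """Return an OpenAI-style chat message list with a persona system prompt."""
          return [
              {"role": "system", "content": f"You are {persona}. Stay in character."},
              {"role": "user", "content": question},
          ]

      messages = build_persona_chat(
          "a talking cat named Luna",
          "Explain recursion in simple terms.",
      )
      # `messages` can be passed to any chat-completion-style API;
      # benchmarking personas means scoring the replies each one produces.
      ```

      A genetic-algorithm variant, as in the papers mentioned, would mutate and recombine the persona strings and keep the ones that score best on a benchmark.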

      You’re still limited by the underlying LLM, though, especially something as dry and hyper-sanitized as OpenAI’s API models.