I might be a bit late to the party, but for those of you that like ERP and fiction writing:

Introducing Pygmalion-2

The people from Pygmalion have released a new model, usable for roleplay, conversation, and storywriting. It is based on Llama 2 and has been trained on SFW and NSFW roleplay, fictional stories, and instruction-following conversations. It is available in two sizes, 7B and 13B parameters. They’re also releasing Mythalion 13B, a merge with MythoMax-L2.

Furthermore, they’re (once again) announcing a website with character sharing and inference (launching later in October).

For reference: Pygmalion-6b was a well-known dialogue model for (lewd) roleplay in the times before LLaMA. It was followed by an underwhelming LLaMA-based successor (Pygmalion-7b). In their new blog post they promise that the new model is a real improvement.

(Personally, I’m curious how it performs compared to MythoMax. There aren’t many models around that excel at roleplay or have been designed specifically for that use case.)

  • ffhein@lemmy.world · 1 year ago

    I did some quick testing yesterday, and my initial impression was that Mythalion and Pyg2 (13B q5_K_M versions, btw) were a bit more eloquent and verbose in some situations, but they would often take this too far and start writing novels instead of a dialogue. They also felt more prone to taking a sentence and repeating it verbatim in all of their subsequent turns. It’s possible these issues could be toned down by adjusting generation parameters, but MythoMax, by comparison, has been very easy to get good results out of.

    It’s interesting that you can specify which “mode” Pyg2 should operate in via the system prompt, but I didn’t test how much difference it actually makes to generation. I told it to be in “instruction following mode” and it seemed good enough at general tasks as well.

    If I understand Pyg2’s model card correctly, you’re supposed to prefix all turns with <|user|> or <|model|>, which I didn’t manage to get text-generation-webui to do in chat-instruct mode, so I just used the notebook tab instead.
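    Based on that description of the model card, here’s a minimal sketch of the prefix scheme (the helper name and the example turns are my own, not anything from the card):

```python
# Sketch of the Pyg2 prefix scheme: every turn gets a role tag, and the
# prompt ends with <|model|> so the model knows it should respond next.
def format_turns(system_prompt, turns):
    prompt = f"<|system|>{system_prompt}"
    for role, text in turns:  # role is "user" or "model"
        prompt += f"<|{role}|>{text}"
    return prompt + "<|model|>"

print(format_turns("Enter chat mode.", [("user", "Hi there!")]))
# <|system|>Enter chat mode.<|user|>Hi there!<|model|>
```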

    • micheal65536@lemmy.micheal65536.duckdns.org · 1 year ago

      text-generation-webui’s chat and chat-instruct modes are… weird and badly documented when it comes to using a specific prompt template. If you don’t want to use the notebook mode, use instruct mode, set your turn template with the required tags, and put your system prompt in the context box (I forget exactly what it’s labeled).

      • rufus@discuss.tchncs.de (OP) · 1 year ago

        Seems easier with SillyTavern. They’ve included screenshots with recommended settings for that in the blog post.

        • micheal65536@lemmy.micheal65536.duckdns.org · 1 year ago

          TBH, my experience with SillyTavern was that it merely added another layer of complexity and confusion to prompt formatting and templating, since it runs on top of text-generation-webui anyway. It was easy for me to end up with configurations where, e.g., the SillyTavern turn template got wrapped inside the text-generation-webui one, and it is very difficult to verify what the prompt actually looks like by the time it reaches the model, as it isn’t displayed in any UI or log.

          For most purposes I have given up on any UI/frontend and I just work with llama-cpp-python directly. I don’t even trust text-generation-webui’s “notebook” mode to use my configured sampling settings or to not insert extra end-of-text tokens or whatever.
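          Going direct can look something like this sketch (the model path, the stop-tag list, and both helper names are my assumptions for illustration, not anything from llama-cpp-python’s docs):

```python
def cut_at_next_turn(text, stop_tags=("<|user|>", "<|system|>")):
    """Truncate raw output at the first role tag, since nothing stops
    the model from writing the user's next turn itself."""
    cut = len(text)
    for tag in stop_tags:
        i = text.find(tag)
        if i != -1:
            cut = min(cut, i)
    return text[:cut]

def run_direct(prompt, model_path):
    """Drive a GGUF model with llama-cpp-python directly, printing the
    exact prompt string before generation. Not called here: it needs
    llama-cpp-python installed and a local model file."""
    from llama_cpp import Llama  # pip install llama-cpp-python
    llm = Llama(model_path=model_path)
    print(repr(prompt))  # verify exactly what the model will see
    raw = llm(prompt, max_tokens=128)["choices"][0]["text"]
    return cut_at_next_turn(raw)
```

          The upside over any frontend is that the full prompt string exists as a plain variable you can print, so there is no hidden template wrapping.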

          • rufus@discuss.tchncs.de (OP) · 1 year ago

            I had exactly the same experience. I use Koboldcpp, and oftentimes the notebook mode as well. SillyTavern is super complex and difficult to understand; in this case that’s okay, since I can copy the recommended settings from the screenshots (unless the UI changes).