Testing my new comfy workflow.

    • Itrytoblenderrender@lemmy.world (OP) · 1 year ago

      A lot:

      • optional OpenPose ControlNet

      • optional prompt cutoff

      • optional LoRA stacker

      • optional face detailer

      • optional upscaler

      • optional negative embeddings (positive embeddings not yet implemented)

      The prompt, together with the selected options, goes into 3 models, each of which generates 3 images. Every image variant per model can have its own parameters, like sampler or steps.
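
      Roughly, the fan-out looks like this. This is a Python sketch only, with made-up checkpoint names and settings; the actual routing is done by the node graph, not by a script:

      ```python
      from itertools import product

      # Made-up checkpoint names and per-variant settings; the real routing
      # happens inside the ComfyUI node graph, this only shows the fan-out:
      # 3 checkpoints x 3 variants = 9 images per run.
      checkpoints = ["modelA.safetensors", "modelB.safetensors", "modelC.safetensors"]
      variants = [
          {"sampler": "euler", "steps": 20, "cfg": 7.0},
          {"sampler": "dpmpp_2m", "steps": 30, "cfg": 6.5},
          {"sampler": "ddim", "steps": 40, "cfg": 8.0},
      ]

      jobs = [
          {"checkpoint": ckpt, **settings, "seed": 1234 + i}
          for i, (ckpt, settings) in enumerate(product(checkpoints, variants))
      ]

      for job in jobs:  # 9 jobs in total
          print(job)
      ```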

      • RandomLegend [He/Him]@lemmy.dbzer0.com · 1 year ago

        Do you combine those three images after that, or do you simply produce three separate images with each run? Also, AFAIK positive embeddings are implemented already.

        • Itrytoblenderrender@lemmy.world (OP) · 1 year ago

          I generate 9 images in total, 3 per model.

          These then go into the (optional) detailer and upscale pipeline.

          Embeddings: I use an embedding picker node which appends the embedding to the prompt, as I am too lazy to look up the embedding filenames. I just pick them from a list. You can stack the node and add multiple embeddings this way, without the embedding:[name] hassle.
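
          For illustration, that is roughly all the picker has to do under the hood. A sketch with made-up embedding names; ComfyUI reads them from the prompt text as embedding:<filename without extension>:

          ```python
          # Made-up embedding filenames; ComfyUI references a textual inversion
          # in the prompt as "embedding:<filename without extension>".
          picked = ["EasyNegative", "bad-hands-5"]

          negative = "lowres, blurry"
          negative += "".join(f", embedding:{name}" for name in picked)

          print(negative)
          # lowres, blurry, embedding:EasyNegative, embedding:bad-hands-5
          ```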

          • RandomLegend [He/Him]@lemmy.dbzer0.com · 1 year ago

            So it wouldn’t look that complex if you only did one image, right? :-D

            Do you know if any extension exists that allows me to have a “gallery” for LoRAs, hypernetworks and embeddings, just like A1111 does? I really, really like that I could have example images to show me what a specific LoRA or whatever will do to the style… I really miss that in ComfyUI.

            • Itrytoblenderrender@lemmy.world (OP) · 1 year ago

              A) Correct, but I like to get multiple variants and then proceed with the seed / model / sampler that gives the best results.

              B) You could create a gallery of your embeddings yourself with an X/Y workflow from the Efficiency Nodes. I will look into it and send you an update in this thread.
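
              Until then, here is a rough sketch of how such a gallery sheet could be stitched together from per-embedding preview images with Pillow. The folder layout and filenames are just assumptions, and this is not the Efficiency Nodes X/Y plot itself:

              ```python
              from pathlib import Path
              from PIL import Image, ImageDraw

              # Assumed layout: one preview PNG per embedding, named after the
              # embedding file, e.g. previews/EasyNegative.png.
              previews = sorted(Path("previews").glob("*.png"))
              cols, thumb, label_h = 4, 256, 20
              rows = max(1, (len(previews) + cols - 1) // cols)

              sheet = Image.new("RGB", (cols * thumb, rows * (thumb + label_h)), "white")
              draw = ImageDraw.Draw(sheet)

              for i, path in enumerate(previews):
                  x, y = (i % cols) * thumb, (i // cols) * (thumb + label_h)
                  sheet.paste(Image.open(path).resize((thumb, thumb)), (x, y))
                  draw.text((x + 4, y + thumb + 4), path.stem, fill="black")

              sheet.save("embedding_gallery.png")
              ```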

              • RandomLegend [He/Him]@lemmy.dbzer0.com · 1 year ago

                I mean, I already have all the images… it’s just that in A1111 I can click on the image and it will automatically use whatever I clicked on. Here I would have to manually look up the corresponding filename and use that.

                It works, it’s just not as quick and… well, automatic ^^

                • Itrytoblenderrender@lemmy.world (OP) · 1 year ago

                  It would be a cool node, but unfortunately my programming skills are not good enough to build something like this. A workaround would be a cheat sheet like this. And you could skip the typing with the embedding picker.
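
                  For what it’s worth, such a cheat sheet can also be generated from the embeddings folder itself. A small sketch; the ComfyUI paths here are assumptions, adjust them to your install:

                  ```python
                  from pathlib import Path

                  # Assumed ComfyUI locations; adjust to your install.
                  emb_dir = Path("ComfyUI/models/embeddings")
                  preview_dir = emb_dir / "previews"

                  lines = ["# Embedding cheat sheet", ""]
                  for emb in sorted(emb_dir.glob("*.safetensors")) + sorted(emb_dir.glob("*.pt")):
                      preview = preview_dir / f"{emb.stem}.png"
                      note = f" (preview: {preview.name})" if preview.exists() else ""
                      lines.append(f"- embedding:{emb.stem}{note}")

                  Path("embedding_cheatsheet.md").write_text("\n".join(lines) + "\n")
                  ```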

                  Here is the workflow: Link

                  I strongly suggest that you install the ComfyUI Manager custom node. It lets you download custom nodes via an interface and also update and manage them. Very useful.