What are the hardware requirements to run SDXL?

In particular, how much VRAM is required?

This is assuming A1111 and not using --lowvram or --medvram.

  • Vivarevo@sopuli.xyz · 11 months ago

    3070 8GB VRAM, 16GB RAM.

    In ComfyUI, about 16-17 seconds to do everything at 1024x1024 with 20 steps plus 5 refiner steps.

    A1111 is about the same, using a batch of 4 and then batch img2img for refining. Just more clicks without extensions, etc. I get the same it/s between ComfyUI and A1111 with --medvram.
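
    For what it's worth, the same base-then-refiner split can be reproduced outside either UI. A minimal diffusers sketch, assuming the public SDXL 1.0 checkpoints; the prompt is hypothetical, the 20 + 5 step split mirrors the numbers above, and enable_model_cpu_offload() plays roughly the role of --medvram:

    ```python
    # Minimal sketch of the base + refiner handoff in plain diffusers.
    # Assumes an fp16-capable GPU and the accelerate package for offloading.
    import torch
    from diffusers import DiffusionPipeline

    base = DiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
    )
    base.enable_model_cpu_offload()  # trades speed for VRAM, much like --medvram

    refiner = DiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-refiner-1.0",
        text_encoder_2=base.text_encoder_2,  # share weights to save memory
        vae=base.vae,
        torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
    )
    refiner.enable_model_cpu_offload()

    prompt = "a cinematic photo of a lighthouse at dusk"  # hypothetical prompt
    # 25 steps with a 0.8 split = 20 base steps, then ~5 refiner steps.
    latents = base(prompt=prompt, num_inference_steps=25,
                   denoising_end=0.8, output_type="latent").images
    image = refiner(prompt=prompt, num_inference_steps=25,
                    denoising_start=0.8, image=latents).images[0]
    image.save("sdxl_1024.png")
    ```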

    • Thanks4Nothing@lemm.ee · 11 months ago

      I am using a 3070 8GB FTW and 32GB DDR5… I have yet to get SDXL to generate without an error. I assumed it was my hardware.

      • Altima NEO@lemmy.zip · 11 months ago

        Nah, Auto1111 seriously struggles with SDXL for me, but ComfyUI manages it without issue.

        • Thanks4Nothing@lemm.ee · 11 months ago

          I tried ComfyUI last night, but unless I am just missing something, I couldn't get past the workflow screen. Thanks for the tip, I will keep tinkering with things.

          I tried InvokeAI but was having the same problem. I got the safetensors file directly from Stability AI's Hugging Face page, so I have to be doing something wrong… three different UIs and I couldn't get any of them to work. I am losing my technical aptitude :)

          • Altima NEO@lemmy.zip · 11 months ago

            The best thing with ComfyUI is that you can drag in a PNG it generated and it will replicate the workflow nodes. They have some example images on their GitHub.
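
            The graph really is stored in the file itself: ComfyUI writes the node graph as a "workflow" text chunk in the PNG metadata. A minimal sketch for inspecting it, assuming Pillow is installed (the filename is hypothetical):

            ```python
            # Minimal sketch: read the workflow JSON that ComfyUI embeds in its PNGs.
            import json
            from PIL import Image

            img = Image.open("comfyui_output.png")  # hypothetical filename
            workflow = img.info.get("workflow")  # PNG text chunk written by ComfyUI
            if workflow:
                graph = json.loads(workflow)
                print(f"embedded workflow has {len(graph.get('nodes', []))} nodes")
            else:
                print("no embedded workflow found")
            ```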

  • amenotef@lemmy.world · 11 months ago

    I don't have A1111, but in ComfyUI, using a shared workflow that runs the base model and then the refiner, SDXL 0.9 was using 12GB of VRAM and 22GB of RAM on Ubuntu for me, doing images of around 1024x1024. GPU: AMD RX 6800.

    • xenla@programming.dev · 11 months ago

      Also have a 6800 XT and 32GB RAM. SDXL 1.0 runs with A1111, but I can only generate images using --medvram. This is on Windows, admittedly.

    • Scew@lemmy.world · 11 months ago

      Also using ComfyUI. I have been able to get away with 6GB of VRAM doing 1024x1024; it took a bit longer, but I've done a couple of 1024x2048s and they're coming out good :3

  • FactorSD@lemmy.dbzer0.com · 11 months ago

    It's hard to give precise figures because there are always tricks to getting a little more or less, but from my (admittedly limited) testing, SDXL is significantly more demanding, and 10+GB of VRAM is probably the minimum to run it. I don't remember exactly what I was doing, but I run an RTX A4500, and I managed to max out its 20GB of VRAM with just one SDXL process, whereas I can normally run a LoRA training and 512x768 image generation at the same time.
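
    For anyone who wants hard numbers instead of guesswork, PyTorch can report peak usage directly. A minimal sketch, assuming you drive the pipeline from your own script on a CUDA (or ROCm) build:

    ```python
    # Minimal sketch for measuring peak VRAM use of a single generation run.
    import torch

    torch.cuda.reset_peak_memory_stats()
    # ... run one SDXL generation here ...
    free, total = torch.cuda.mem_get_info()
    peak = torch.cuda.max_memory_allocated()
    print(f"peak allocated: {peak / 2**30:.1f} GiB "
          f"(card: {total / 2**30:.1f} GiB, free now: {free / 2**30:.1f} GiB)")
    ```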

  • ehsanrt@lemmy.dbzer0.com · 10 months ago

    And I'm trying to make SDXL work on my 1660 Ti laptop, lol. ComfyUI runs it at about 1:30 min per image; A1111 can't even load the VAE. However, yesterday I saw an update on Stability AI's Hugging Face page saying they changed SDXL 1.0 to the 0.9 VAE; it seems there was an issue with their provided 1.0 VAE.
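
    Outside the UIs you can also swap the VAE in yourself. A minimal diffusers sketch; the community fp16-fix VAE repo here is my assumption, standing in for whichever fixed VAE you prefer:

    ```python
    # Minimal sketch of loading SDXL base with a replacement VAE in diffusers.
    import torch
    from diffusers import AutoencoderKL, DiffusionPipeline

    # Assumption: the community fp16-fix VAE; substitute the VAE you trust.
    vae = AutoencoderKL.from_pretrained(
        "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

    pipe = DiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        vae=vae, torch_dtype=torch.float16, variant="fp16", use_safetensors=True)
    pipe.enable_model_cpu_offload()  # helps on low-VRAM cards like a 1660 Ti
    ```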

  • InterSynth@lemmy.dbzer0.com · 11 months ago

    I have a 6600 XT (so 8GB of VRAM) and had no luck with A1111 or Vladmandic's fork; they would crash. ComfyUI worked with no fiddling.

  • Altima NEO@lemmy.zip · 11 months ago

    I can run it on my 3080 10GB card, but it's ridiculously slow. I HAVE to use --medvram or I get out-of-memory errors and NaN errors. And I mean ridiculously slow: loading the model takes a few minutes, and generating an image requires me to minimize the browser window or Stable Diffusion just stalls. Switching to the refiner isn't even an option because it takes so long to switch between models.

    This is on a 5930K with 32GB RAM and a 3080 10G, trying to generate 1024x1024 images.

    However, with ComfyUI it runs just fine, the PC doesn't struggle, and it generates images in about 40 seconds at 50 steps base, 10 refiner.