In addition to the online platforms linked by the other commenters, it’s also pretty straightforward to run Stable Diffusion locally, if your hardware is beefy enough: https://github.com/AUTOMATIC1111/stable-diffusion-webui
Various fine-tuned checkpoints for different content and art styles can be downloaded from Civitai.
Best way right here. Free, open source, and you won't get judged for your outputs.
Absolutely this. I have been messing around with it for about a week now.
Super fun and easy to set up. I used this since I wanted a Docker environment.
(Side note: does anyone know why I can't upload pictures directly from the web? I'm getting
SyntaxError: JSON.parse: unexpected character at line 1 column 1 of the JSON data
)
Edit: it's because of size…
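That JSON.parse error usually means the server replied with something other than JSON, e.g. an HTML error page after rejecting an oversized upload. A minimal sketch of the failure mode in Python (the browser raises it from JavaScript's JSON.parse, but the parse behavior is the same; the response body here is made up for illustration):

```python
import json

# When an upload exceeds the server's size limit, many servers answer
# with an HTML error page rather than JSON. Feeding that body to a JSON
# parser fails on the very first character -- "line 1 column 1".
body = "<html><body>413 Request Entity Too Large</body></html>"  # hypothetical response

try:
    json.loads(body)
except json.JSONDecodeError as err:
    print(err.lineno, err.colno)  # 1 1, matching the browser's error
```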
This is the way. The really top-tier AI art is almost guaranteed to use this; most online tools and other frontends just don't have the features. Also, here is a link to a fork of that with an improved UI (no other changes).
Beefy can mean different things to different people, too. I have a mobile 1660 Ti, and it can generate images in decent time (about 40 seconds for a 20-iteration image from a prompt).
I'm slightly lacking in VRAM, though; something with 8 GB of VRAM would let you use most models.
Fun fact: it can run on as little as 2 GB of VRAM! It works out of the box with the --lowvram parameter, and with some extra fiddling with extensions you can even generate high-resolution images.
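For reference, startup flags like --lowvram are usually set via COMMANDLINE_ARGS in webui-user.sh (webui-user.bat on Windows); a sketch of that config fragment, assuming a default install:

```shell
# webui-user.sh: extra flags picked up by webui.sh at startup.
# --lowvram trades speed for memory (for very small cards);
# --medvram is a milder middle ground for mid-range cards.
export COMMANDLINE_ARGS="--lowvram"
```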
Yeah, that's fair enough on the wording.
I'm rocking a 3070 and an 11th-gen i7, but only 16 GB of RAM.
Still pretty quick, IMO.
Took longer for my browser to download the image than it took for you to generate it. :)