- cross-posted to:
- cursed_ai
- technology@lemmy.zip
- technology@lemmy.world
- stablediffusion
This is the best summary I could come up with:
Stable Diffusion 3's arrival has been ridiculed online, however, because it generates images of humans in a way that seems like a step backward from other state-of-the-art image-synthesis models like Midjourney or DALL-E 3.
Hands have traditionally been a challenge for AI image generators due to a lack of good examples in early training data sets, but more recently, several image-synthesis models appeared to have overcome the issue.
In that sense, SD3 appears to be a huge step backward for the image-synthesis enthusiasts who gather on Reddit—especially compared to recent Stability releases like SDXL Turbo in November.
Basically, any time a prompt homes in on a concept that isn’t represented well in its training dataset, the image-synthesis model will confabulate its best interpretation of what the user is asking for.
Stability first announced Stable Diffusion 3 in February, and the company plans to make it available in a variety of model sizes.
Stability AI as a company recently fell into a tailspin, with the resignation of its founder and CEO, Emad Mostaque, in March, followed by a series of layoffs.
The original article contains 730 words, the summary contains 180 words. Saved 75%. I’m a bot and I’m open source!