Seems like a bit of a stretch to call 4 seconds per frame, on a 3060, “realtime” / “as fast as you can type”.
I tried it on a 6900 XT recently and generation time was well under half a second.
Results are not as good as with SDXL but for the time it needs it’s very impressive.
The author can’t type very quickly
A rapid dark-tan mammalian with a bushy tail propels itself upward off the ground, to an elevation greater than that of the canine resting below, who has a disposition contrary to productivity.
Well, it is technically as fast as you can type if you’re running a better GPU. The 3060 is pretty mid-tier at this point.
Low end card.
I’ll get crucified for saying that because people will interpret that as an attack on their PC or something daft like that. It’s not.
It’s Ampere, a GPU architecture from 3.5 years ago. And even then, here’s what the desktop stack was like:
- 3090 Ti (GA102)
- 3090 (GA102)
- 3080 Ti (GA102)
- 3080 12GB (GA102)
- 3080 (GA102)
- 3070 Ti (GA102/GA104)
- 3070 (GA104)
- 3060 Ti (GA104/GA103)
- 3060 (GA106/GA104)
- 3050 (GA106/GA107)
It was almost at the bottom of Nvidia’s stack 3 years ago. It was a low end card then (because, you know, it was at the bottom end of what they were offering). It’s an even more low end card now.
People are always fooled by Nvidia’s marketing into thinking they’re getting a mid-range card when in reality Nvidia is giving people the scraps and pretending it’s a great deal. People need to demand more from these companies.
Nvidia takes a low end card, slaps a $400 price tag on it, calls it mid range, and people lap it up every time.
The pricing makes it a mid range card, because the budget end is just gone these days.
Nvidia conning people into paying what used to be mid range/high end pricing for a low end card does not make it a low end card.
The 3060 was always a low end card. Because it was on the low end of the product stack, both for Nvidia and against AMD.
I know it’s low-end compared to the newer generations, but if we call a 3060 low-end, then what do we call older GPUs like a 1070?
Should we not compare the 3060 against its own generation/the current one? To me that makes more sense than including the 1000 series or 900 series or something. How far would we go back? Are all cards sold now high end because they’re faster than a GTX 960? Earlier?
Personally, my cutoff was cards still on sale either right now or very recently, say within the past year.
-
deleted by creator
I’m on a 3060 and with 4x upscaling it takes about a second and a half.
XL Turbotastic Mega Ginormous, etc. Hate naming schemes like this. Why not just make it v2.0 or the Pro version instead? Why use multiple words that make it sound bigger and better? Marketing BS that just sounds dumb.
Not sure why you have a problem with it, the naming here makes a lot of sense if you know the context.
Stable Diffusion --> The original SD with versions like 1.5, 2.0, 2.1 etc
Stable Diffusion XL --> A version of SD with much bigger training data and support for much larger resolutions (hence, XL)
Stable Diffusion XL Turbo --> A version of SDXL that is much faster (hence, Turbo)
They have different names because they’re actually different things, it’s not exactly a v1.0 --> v2.0 scenario
Thanks for the context. That does make it much less redundant.
Yeah, but the next version will have a yet bigger training set, so what then? XXL? And what about the one after that? Turbo was already used, so do we call it Nitro? This isn’t the “New Kids” movies, you know…
Why not just make it v2.0 or the Pro version instead?
“Pro version” is equally cringe.
Yeah I get that. Would just have made more sense given that it’s widely used. Though I’ve been told why the name is so weird and it makes some sense now
Here are my suggestions:
Stable Diffusion Free
Stable Diffusion Paid with Limitations
Stable Diffusion Paid Unlimited
I agree with you in general, but for Stable Diffusion, “2.0/2.1” was not an incremental direct improvement on “1.5” but was trained and behaves differently. XL is not a simple upgrade from 2.0, and since they say this Turbo model doesn’t produce as detailed images it would be more confusing to have SDXL 2.0 that is worse but faster than base SDXL, and then presumably when there’s a more direct improvement to SDXL have that be called SDXL 3.0 (but really it’s version 2) etc.
It’s less like Windows 95->Windows 98 and more like DOS->Windows NT.
That’s not to say it all couldn’t have been better named. Personally, instead of ‘XL’ I’d rather they start including the base resolution and something to reference whether it uses a refiner model etc.
(Note: I use Stable Diffusion but am not involved with the AI/ML community and don’t fully understand the tech – I’m not trying to claim expert knowledge this is just my interpretation)
Yeah I got some good replies to my comment explaining it. Makes more sense now.
I’m just glad we’re moving away from purposely misspelled product-name SEO hacks.
deleted by creator
deleted by creator
I heard they were all child murderers! 😱
This isn’t free BTW folks
I haven’t messed with any AI imaging stuff yet. Any free recommendations to just have some fun?
Bing Image Creator if you just want to create some images quick (free, Microsoft account required). It’s using DALLE3 behind the scenes, so it’s pretty much state-of-the-art, but rather limited in terms of features otherwise and rather heavy on the censorship.
If you wanna generate something locally on your PC with more flexibility: Automatic1111 along with one of the models from CivitAI. It needs a reasonably modern graphics card with enough VRAM (8GB+) to be enjoyable, and installation can be a bit fiddly (check Youtube & Co. for tutorials). But once past that you can create some pretty wild stuff.
Bing and OpenAI still have free stuff. Bing’s is actually really good.
Great, even more online noise that I can look forward to.
And the resulting faces still all have lazy eyes, asymmetric features, and significantly uncanny issues.
Humans have asymmetric features. No one is perfectly symmetrical.
These features are abnormally asymmetric to the point of being off-putting. General symmetry of features is a significant part of what attracts people to one another, and why facial droops from things like Bell’s palsy or strokes can often be psychologically difficult for the patients who experience them.
General symmetry, not exact symmetry.
Anecdote: I think Denzel Washington is supposed to have one of the most symmetrical faces.
You can easily get incredibly canny stuff.
This is great news for people who make animations with deforum as the speed increase should make Rakile’s deforumation GUI much more usable for live composition and framing.
I’ve tried to install this multiple times but always manage to fuck it up somehow. I think the guides I’m following are outdated or pointing me to one or more incompatible files.
Do you use comfyui ?
This is the best summary I could come up with:
Stability detailed the model’s inner workings in a research paper released Tuesday that focuses on the ADD technique.
One of the claimed advantages of SDXL Turbo is its similarity to Generative Adversarial Networks (GANs), especially in producing single-step image outputs.
Stability AI says that on an Nvidia A100 (a powerful AI-tuned GPU), the model can generate a 512×512 image in 207 ms, including encoding, a single de-noising step, and decoding.
This move has already been met with some criticism in the Stable Diffusion community, but Stability AI has expressed openness to commercial applications and invites interested parties to get in touch for more information.
Meanwhile, Stability AI itself has faced internal management issues, with an investor recently urging CEO Emad Mostaque to resign.
Stability AI offers a beta demonstration of SDXL Turbo’s capabilities on its image-editing platform, Clipdrop.
The original article contains 553 words, the summary contains 138 words. Saved 75%. I’m a bot and I’m open source!
Removed by mod
Because all the boomer clipart it’s replacing was so endearing…
Now we get ai generated boomer art instead, and at a faster pace
The clouds don’t have ears
There’s a fair chance we’ll see (or actually won’t see) a lot more offline use. AI apps are coming to desktop PCs and phones, which means in the long run people won’t have to get some of this entertaining stuff from the web anymore. Like if you want a cool pic of a dragon for a wallpaper, you can just ask the AI app on your PC and it will make a bunch to choose from.
What’s out there that actually works offline? Stable Diffusion is the only one I’ve heard about, everyone else is more interested in exclusively selling AI as a service.
deleted by creator
You might be waiting a long time. This is one of those things that aren’t going back in the box. So the best course of action is to prepare for it, learn to live with it, and make sure it’s not used to oppress us.
Agreed. It is similar to waiting for Photoshop to die. It’s not going away.
People talk like it’s a plague. The media have distorted this topic and people are running with it.
Lmao I’m old enough to remember “the internet is just a fad”
Does it actually run any faster though? For instance, if I manually spun up a model with the diffusers library and ran it locally on DML, would there be any difference?
Edit: Assuming we’re normalizing the output to something reasonable, e.g. a recognizable picture of a dog.
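For reference, here’s roughly what the manual diffusers route looks like. This is a sketch based on the publicly documented sdxl-turbo model card (single denoising step, guidance disabled); I haven’t tried it on DML, and the device string and fp16 variant are assumptions that may need changing for your setup:

```python
# Minimal sketch: running SDXL Turbo locally via Hugging Face's `diffusers`
# library. Model ID and sampling settings follow the sdxl-turbo model card;
# actual speed depends entirely on your hardware.

def turbo_call_args(prompt: str) -> dict:
    """Sampling settings SDXL Turbo was distilled for."""
    return {
        "prompt": prompt,
        "num_inference_steps": 1,  # single-step output, the whole point of Turbo
        "guidance_scale": 0.0,     # classifier-free guidance is disabled for Turbo
    }

def generate(prompt: str, device: str = "cuda"):
    # Heavy imports kept inside the function so the settings above can be
    # inspected without torch/diffusers installed.
    import torch
    from diffusers import AutoPipelineForText2Image

    pipe = AutoPipelineForText2Image.from_pretrained(
        "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
    ).to(device)  # "cuda" assumed; swap for your DML/CPU device, much slower
    return pipe(**turbo_call_args(prompt)).images[0]

# Example (downloads several GB of weights on first run):
# generate("a corgi wearing sunglasses").save("corgi.png")
```

Whether that beats this release’s numbers on your card is exactly the open question, since the quoted timings are from an A100.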