This may be more of an “out of the loop” thing, but I’m new to this site and I’m noticing that lemmy.world seems surprisingly bereft of any substantial NSFW content. I’m surprised! Isn’t the adage that porn motivates technological progress?
What’s even more surprising is that the NSFW instance seems brand spanking new.
Is there some code-of-conduct thing which has prevented NSFW community growth? Or is it just a demographic thing where there wasn’t much/any demand until the Reddit exodus?
No idea what you’re talking about. It’s just pictures of naked people. It needs to be in its own place.
i agree, that comment reads like a fever dream. i have no idea what they’re talking about at all.
https://en.m.wikipedia.org/wiki/Stable_Diffusion
This has the most basic guide to what is happening, if you wish to crawl out from under that rock. It includes a built-in SFW text-to-image prompt:
https://stable-diffusion-art.com/beginners-guide/
All you have to do is look at the NSFW images marked as AI-generated and note the watermark to find where they were made. Once you generate a few yourself, you’ll start to see the small problems that show up across many image categories. The main issues are that you have to exclude certain prompt keywords to make the output look real, and details like genitalia are hard to get dialed in well unless you are running the software on your own hardware, which requires a powerful video card and a lot of storage space.

Once you know this, a lot of images become obviously AI-generated. There are tells in the lighting (especially the easy stock lighting most text prompts produce), eyes, fingers and toes, and other small details that are hard to avoid in the output. These start to stand out more once you know what to look for.
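One concrete tell beyond eyeballing fingers and lighting: most local Stable Diffusion front ends (the AUTOMATIC1111 web UI, for instance) write the full generation settings into a PNG `tEXt` chunk named `parameters`. A minimal pure-stdlib sketch that scans a PNG for those chunks — the 1×1 demo image here is fabricated for illustration, and note that many sites strip metadata on upload, so its absence proves nothing:

```python
import struct
import zlib

def png_text_chunks(data: bytes) -> dict:
    """Extract tEXt chunks (keyword -> value) from raw PNG bytes.

    Stable Diffusion tools commonly store prompt, sampler, seed, etc.
    under the keyword "parameters", a strong hint the image is AI-made.
    """
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    chunks = {}
    pos = 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            keyword, _, value = body.partition(b"\x00")
            chunks[keyword.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return chunks

def _chunk(ctype: bytes, body: bytes) -> bytes:
    # Assemble one PNG chunk: length, type, data, CRC over type+data.
    return (struct.pack(">I", len(body)) + ctype + body
            + struct.pack(">I", zlib.crc32(ctype + body)))

# Fabricated minimal 1x1 PNG carrying a "parameters" chunk like SD tools write.
demo = (b"\x89PNG\r\n\x1a\n"
        + _chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
        + _chunk(b"tEXt", b"parameters\x00a photo, Steps: 20, Sampler: Euler")
        + _chunk(b"IEND", b""))

print(png_text_chunks(demo))
# {'parameters': 'a photo, Steps: 20, Sampler: Euler'}
```

On a real file you’d pass `open("image.png", "rb").read()` instead of the demo bytes.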
This tech is moving very fast right now. The next iteration of Stable Diffusion is set to release this month, and it will likely make it impossible to tell what is real and what is fake. Right now SD must start with a low-res image that is then upscaled; SDXL will be able to start with a high-res image and modify details directly, which hasn’t been possible before. With a bit of effort, it will be possible to modify video frame by frame and use a simple text prompt to alter details. I doubt people will do more than clips at first, but with some good scripting using Blender, I could see it working for larger projects.
Follow the second posted link. And read it. This is FOSS. Combine this with an open-source text-to-text LLM running on your own hardware and you have a real game-changing set of technology.
https://generativeai.pub/how-to-setup-and-run-privategpt-a-step-by-step-guide-ab6a1544803e
“crawl out from under that rock” ya nah im good ill stay right here
The thing is, no one understands how your ramblings are relevant to the OP, or even to the comment you originally replied to.
A little less than half do understand, based on votes alone. This is like a programmer talking about the wonders of the coming internet in a post office in the early ’90s. It is a big deal, but most people can’t intuitively connect the dots yet. Text-to-image AI will be extremely disruptive in the coming years. Stable Diffusion is too new for most people to have heard about it passively; this is the bleeding edge of publicly available tech, and it will impact the digital lives of everyone. Follow the links provided or learn the hard way. I’ve just told you about the internet while you can’t see past a world of postage stamps and newspaper classified ads.

Humans are primarily visual. Text-generated imagery that is indistinguishable from the real thing is here now. This could greatly enrich, alter, influence, or degrade lives. The next version of Stable Diffusion will be out this month. It is the real game changer because it can be used to realistically edit high-resolution images. The output can be perfect, beyond anything you will be able to detect.

Think about what all of this means for politics specifically. The broader implications of LLMs applied to political strategy are even worse. This should be blatantly obvious, and at the very least, you should know not to trust images at face value no matter how real they look. I don’t just mean Photoshop-edited “not real”; everything about the people, places, actions, and content can be faked now.