Abstract
Existing personalized generation methods, such as Textual Inversion, DreamBooth, and LoRA, have made significant progress in custom image creation. However, they require expensive computation and time for fine-tuning, as well as multiple reference images, which limits their real-world applicability. InstantID addresses these limitations with a plug-and-play module that handles image personalization in any style from a single face image while maintaining high fidelity. To preserve face identity, we introduce a novel face encoder that retains the intricate details of the reference image. InstantID's performance and efficiency across diverse scenarios demonstrate its potential for real-world applications. Our work is compatible with common pretrained text-to-image diffusion models such as SD1.5 and SDXL as a plugin. Code and pre-trained checkpoints will be made public soon!
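The abstract doesn't detail the architecture, but identity-preserving plugins in this family typically inject a face embedding through an extra cross-attention branch that runs alongside the text-prompt tokens inside the diffusion UNet. Below is a toy NumPy sketch of that decoupled cross-attention pattern, purely to illustrate the idea; the function names, shapes, and the `id_scale` knob are illustrative assumptions, not InstantID's actual API.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q, k, v):
    # Scaled dot-product attention: queries come from image latents,
    # keys/values from the conditioning tokens (projections omitted for brevity).
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

def decoupled_cross_attention(latents, text_tokens, id_tokens, id_scale=1.0):
    # Two parallel attention branches: one attends to the text prompt,
    # the other to face-identity tokens from a face encoder. Their outputs
    # are summed, with id_scale controlling identity strength (hypothetical knob).
    out_text = cross_attention(latents, text_tokens, text_tokens)
    out_id = cross_attention(latents, id_tokens, id_tokens)
    return out_text + id_scale * out_id

# Toy shapes: 4 latent "pixels", 7 text tokens, 3 identity tokens, dim 8.
rng = np.random.default_rng(0)
latents = rng.standard_normal((4, 8))
text_tokens = rng.standard_normal((7, 8))
id_tokens = rng.standard_normal((3, 8))
out = decoupled_cross_attention(latents, text_tokens, id_tokens, id_scale=0.5)
```

Setting `id_scale=0` recovers plain text-conditioned attention, which is why this style of module can ship as a plugin on top of an unmodified pretrained diffusion model.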
Paper: https://instantid.github.io/
Code: https://github.com/InstantID/InstantID (coming soon)
Project Page: https://instantid.github.io/
Showcased capabilities (with examples on the project page): Stylized Synthesis, Novel View Synthesis, Stacking Multiple References, and Multi-ID Synthesis in a Single Style.
This is pretty nice, but it appears to be limited to faces. I hope we get something that can preserve body and outfit in the future; that would be helpful for creating stories with consistent characters.
This is one of the first posts I’ve seen that blows my mind about the possibility of AI for art that isn’t just mishmash.
Check out their project page if you haven’t already. There’s a lot more that I didn’t include here.
New Lemmy Post: InstantID: Zero-shot Identity-Preserving Generation in Seconds (https://lemmy.dbzer0.com/post/10469403)