Stability AI Introduces Stable Diffusion SDXL 1.0 Model


Stability AI has announced the release of Stable Diffusion SDXL 1.0, a new version of its popular image generation model. SDXL 1.0 consists of a 3.5-billion-parameter base model and an ensemble pipeline of models with a combined 6.6 billion parameters. The model is now available on GitHub and Amazon Bedrock, through the Stability AI API, and in user applications such as Clipdrop and DreamStudio.

A new beta fine-tuning feature in the Stability API will require as few as five images to adapt the model to a specific style or task. The feature is currently in limited testing with select partners and is expected to be released in the coming weeks.

Updates in Stable Diffusion SDXL 1.0

Stability AI claims that the updated model architecture and training process improve coherence in generated images, reduce artifacts and distortions, and increase fidelity to text prompts. Photorealism has also advanced, with faces and clothing details rendered with greater clarity and precision. The model handles ambiguous or abstract image requests better and more accurately emulates different artistic styles.

Example of an image created by Stable Diffusion SDXL 1.0

“The latest SDXL model represents the next step in Stability AI’s heritage of innovation and its ability to bring the most cutting-edge open-access models to market for the AI community,” commented Emad Mostaque, the CEO of Stability AI. “Unveiling 1.0 on Amazon Bedrock demonstrates our strong commitment to working alongside AWS to provide the best solutions for developers and our clients.”

Since the beta launch of SDXL on April 13, Clipdrop users have generated over 35 million images with the model, and the Stability AI Discord community averages 20,000 image creations per day.
