Generative AI models and frameworks are among the hottest trends of 2022, as new approaches have come to market that let users and organizations generate images and text.
Among the organizations building generative AI technologies is Stability AI, which raised $101 million in funding in October. Stability AI develops foundational open-source models, including the popular Stable Diffusion model. Stable Diffusion enables anyone to generate creative images by entering a text prompt describing the desired image. Building a generative AI model like Stable Diffusion requires a significant amount of computing power, both for training and for inference.
At the AWS re:Invent conference this week in Las Vegas, Stability AI formally announced that it had chosen AWS as its cloud platform of choice for building generative AI tools. As it turns out, Stability AI is no stranger to AWS and has already used the cloud platform.
“Last week we released Stable Diffusion 2.0, developed by Stability AI, which is another step forward in cleaning our dataset, with better quality, less bias and faster [speeds]”, said Emad Mostaque, founder and CEO of Stability AI, during a session at the re:Invent 2022 conference. “We built all of this on AWS.”
The cloud and generative AI go hand in hand
Stability AI isn’t the only generative AI vendor relying on the public cloud to build foundational models.
OpenAI, the organization behind the GPT-3 large language model for text and DALL-E for image generation, already relies on the public cloud. But instead of using AWS, OpenAI has relied heavily on Microsoft Azure to help build and deliver its capabilities.
OpenAI’s reliance on Microsoft Azure is not just about technology. There is also a financial incentive. In 2019, Microsoft invested $1 billion in OpenAI to help develop AI technologies on Azure.
Not to be outdone, Google is also using its cloud for generative AI efforts, including its own Imagen text-to-image initiative.
Stable Diffusion 2.0 uses the cloud to generate AI images faster
Building Stable Diffusion is an exercise that involves several steps and components. At the most basic level, it’s about data.
Mostaque said Stable Diffusion started with 100,000 GB of images and labels, and was able to compress that down to just 2 GB of data for the AI model.
Stable Diffusion 2.0 provides more control over how images are generated at a higher level of detail. And with the 2.0 version, Stable Diffusion has become faster. With the first release, Mostaque said it took about 5.6 seconds to generate an image. Today it only takes 0.9 seconds. He said the technology is set to get even faster as it moves towards real-time generation of high-resolution images.
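Taken together, the figures Mostaque quoted imply both a large dataset-to-model compression ratio and a sizable generation speedup. A quick back-of-the-envelope check of those numbers:

```python
# Figures quoted by Mostaque at re:Invent 2022.
dataset_gb = 100_000   # images plus labels used for training
model_gb = 2           # size of the resulting Stable Diffusion model
v1_latency_s = 5.6     # seconds per image at the first release
v2_latency_s = 0.9     # seconds per image with Stable Diffusion 2.0

compression = dataset_gb / model_gb      # 50,000x reduction from data to weights
speedup = v1_latency_s / v2_latency_s    # ~6.2x faster image generation

print(f"Compression: {compression:,.0f}x, speedup: {speedup:.1f}x")
```

That roughly 6x latency drop is what puts real-time, high-resolution generation within reach.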
Using AWS SageMaker to build generative AI
Stability AI is now working with the AWS SageMaker toolkit to continue building and improving Stable Diffusion and other foundational models.
Mostaque said GPT-NeoX, which comes from the EleutherAI community that Stability supports, is a popular foundation for language models. With SageMaker, Stability runs training for it across 1,000 Nvidia A100 GPUs to make the model train faster.
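Mostaque did not say how those 1,000 GPUs are provisioned. As a rough sketch, assuming AWS’s A100-backed p4d.24xlarge instances (8 GPUs per instance; the talk did not name an instance type), the cluster size works out to:

```python
total_gpus = 1_000     # A100 count quoted by Mostaque
gpus_per_instance = 8  # a p4d.24xlarge carries 8 A100s (assumption: the
                       # instance type was not stated in the talk)

# Integer division: full instances needed to supply the quoted GPU count.
instances = total_gpus // gpus_per_instance

print(f"{instances} instances of 8 GPUs each")  # 125 instances
```

In a SageMaker training job, a figure like this would correspond roughly to the estimator’s `instance_count` setting, with SageMaker handling the distribution of the training run across the fleet.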
“Scaling our infrastructure is incredibly difficult and making these models available is incredibly difficult,” Mostaque said. “We believe that with SageMaker and the broader Amazon suite, we can bring this technology to everyone [not only] make … one model for someone, but make models all over the world and make it available.”