Diffusion transformers are the key behind OpenAI’s Sora — and they’re set to upend GenAI

OpenAI’s Sora, which can generate videos and interactive 3D environments on the fly, is a remarkable demonstration of the cutting edge in GenAI — a bona fide milestone.

But curiously, one of the innovations that led to it, an AI model architecture colloquially known as the diffusion transformer, arrived on the AI research scene years ago.

The diffusion transformer, which also powers AI startup Stability AI’s newest image generator, Stable Diffusion 3, appears poised to transform the GenAI field by enabling models to scale up beyond what was previously possible.

Saining Xie, a computer science professor at NYU, began the research project that spawned the diffusion transformer in June 2022. Together with William Peebles, then his mentee interning at Meta’s AI research lab and now the co-lead of Sora at OpenAI, Xie combined two concepts in machine learning — diffusion and the transformer — to create the diffusion transformer.

Most modern AI-powered media generators, including OpenAI’s DALL-E 3, rely on a process called diffusion to output images, videos, speech, music, 3D meshes, artwork and more.

It’s not the most intuitive idea, but basically, noise is slowly added to a piece of media — say, an image — until it’s unrecognizable. This process is repeated to build a data set of noisy media. When a diffusion model trains on these examples, it learns how to gradually subtract the noise, moving closer, step by step, to a target output piece of media (e.g. a new image).
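
To make that concrete, here is a minimal sketch of the standard noise-prediction training objective in PyTorch. The linear schedule, the variable names (betas, alphas_cumprod) and the placeholder model are assumptions that follow common DDPM conventions; this is an illustration, not the actual training code behind Sora or DALL-E 3:

```python
import torch
import torch.nn.functional as F

T = 1000                                            # number of noising steps
betas = torch.linspace(1e-4, 0.02, T)               # simple linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)  # fraction of signal kept at step t

def add_noise(x0, t):
    """Forward process: blend clean images x0 with Gaussian noise at step t."""
    noise = torch.randn_like(x0)
    a = alphas_cumprod[t].view(-1, 1, 1, 1)         # per-example signal level
    return a.sqrt() * x0 + (1 - a).sqrt() * noise, noise

def training_loss(model, x0):
    """The backbone learns to predict the noise that was added at step t."""
    t = torch.randint(0, T, (x0.shape[0],))         # random step per example
    noisy, noise = add_noise(x0, t)
    return F.mse_loss(model(noisy, t), noise)       # predict-the-noise objective
```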

Diffusion models typically have a “backbone,” or engine of sorts, called a U-Net. The U-Net backbone learns to estimate the noise to be removed — and does so well. But U-Nets are complex, with specially designed modules that can dramatically slow the diffusion pipeline down.

Fortunately, transformers can replace U-Nets — and deliver an efficiency and performance boost in the process.
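
As a rough sketch of what that swap looks like, the toy backbone below chops the noisy image into patch tokens, runs them through a stock transformer encoder and projects the tokens back into a noise estimate. The class name TinyDiT, the sizes and the simple additive timestep embedding are all illustrative assumptions; the published DiT architecture conditions on the timestep via adaptive layer norm, among other details omitted here.

```python
import torch
import torch.nn as nn

class TinyDiT(nn.Module):
    """Toy transformer backbone for diffusion: patch tokens in, noise estimate out."""

    def __init__(self, img_size=32, patch=4, dim=256, depth=6, heads=8, steps=1000):
        super().__init__()
        self.patch = patch
        n_tokens = (img_size // patch) ** 2
        self.to_tokens = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))   # learned positions
        self.t_embed = nn.Embedding(steps, dim)                  # timestep conditioning
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)
        self.to_pixels = nn.Linear(dim, patch * patch * 3)

    def forward(self, noisy, t):
        B, C, H, W = noisy.shape
        p = self.patch
        x = self.to_tokens(noisy).flatten(2).transpose(1, 2)     # (B, tokens, dim)
        x = x + self.pos + self.t_embed(t).unsqueeze(1)          # add step info
        x = self.blocks(x)                                       # global self-attention
        x = self.to_pixels(x)                                    # tokens -> patch pixels
        h, w = H // p, W // p
        x = x.view(B, h, w, p, p, 3).permute(0, 5, 1, 3, 2, 4)   # reassemble patches
        return x.reshape(B, 3, H, W)                             # noise estimate
```

Under these assumptions, a TinyDiT instance could stand in for the placeholder model in the training sketch above.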

A Sora-generated video.

Transformers are the architecture of choice for complex reasoning tasks, powering models like GPT-4, Gemini and ChatGPT. They have several unique characteristics, but by far transformers’ defining feature is their “attention mechanism.” For every piece of input data (in the case of diffusion, image noise), transformers weigh the relevance of every other input (other noise in an image) and draw from them to generate the output (an estimate of the image noise).
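
In code, the heart of that mechanism is only a few lines. Below is a minimal, self-contained sketch of scaled dot-product self-attention in PyTorch; the shapes and the framing of inputs as noisy image patches are assumptions for illustration, not a claim about any particular model:

```python
import torch

def attention(q, k, v):
    """Scaled dot-product attention over (batch, tokens, dim) tensors."""
    d = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / d ** 0.5  # each token's relevance to every other
    weights = scores.softmax(dim=-1)             # normalize relevance into weights
    return weights @ v                           # mix values by those weights

# Self-attention example: 64 tokens (e.g. patches of a noisy image), 256-dim each.
x = torch.randn(1, 64, 256)
out = attention(x, x, x)                         # every token attends to all tokens
```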

Not only does the attention mechanism make transformers simpler than other model architectures, but it also makes the architecture parallelizable. In other words, larger and larger transformer models can be trained with significant but not unattainable increases in compute.

“What transformers contribute to the diffusion process is akin to an engine upgrade,” Xie told TechCrunch in an email interview. “The introduction of transformers … marks a significant leap in scalability and effectiveness. This is particularly evident in models like Sora, which benefit from training on vast volumes of video data and leverage extensive model parameters to showcase the transformative potential of transformers when applied at scale.”

Generated by Stable Diffusion 3.

So, given that the idea for diffusion transformers has been around for a while, why did it take years before projects like Sora and Stable Diffusion began leveraging them? Xie thinks the importance of having a scalable backbone model didn’t come to light until relatively recently.

“The Sora team really went above and beyond to show how much more you can do with this approach on a big scale,” he said. “They’ve pretty much made it clear that U-Nets are out and transformers are in for diffusion models from now on.”

Diffusion transformers should be a simple swap-in for existing diffusion models, Xie says — whether the models generate images, videos, audio or some other form of media. The current process of training diffusion transformers potentially introduces some inefficiencies and performance loss, but Xie believes this can be addressed in the long run.

“The main takeaway is pretty straightforward: forget U-Nets and switch to transformers, because they’re faster, work better and are more scalable,” he said. “I’m interested in integrating the domains of content understanding and creation within the framework of diffusion transformers. At the moment, these are like two different worlds — one for understanding and another for creating. I envision a future where these aspects are integrated, and I believe that achieving this integration requires the standardization of underlying architectures, with transformers being an ideal candidate for this purpose.”

If Sora and Stable Diffusion 3 are a preview of what to expect with diffusion transformers, I’d say we’re in for a wild ride.
