VentureBeat October 23, 2024
Carl Franzen

A pair of researchers at OpenAI has published a paper describing a new type of model, a continuous-time consistency model (sCM), that generates multimedia, including images, video, and audio, roughly 50 times faster than traditional diffusion models, producing an image in about a tenth of a second compared to more than five seconds for standard diffusion.

With the introduction of sCM, OpenAI has achieved sample quality comparable to diffusion models using only two sampling steps, accelerating the generative process without compromising quality.
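The speedup comes from collapsing the long iterative denoising chain of a diffusion model into one or two direct calls to a consistency function. Below is a minimal, hypothetical sketch of two-step consistency sampling in the style of earlier consistency-model work: the model function, noise levels, and schedule here are illustrative stand-ins, not OpenAI's actual sCM implementation.

```python
import numpy as np

# Hypothetical stand-in for a trained consistency model: f(x, t) is
# supposed to map a noisy sample x at noise level t directly to a
# clean sample in a single call. This toy version just rescales,
# purely so the sampling loop below is runnable.
def consistency_fn(x, t, sigma_min=0.002):
    return x / (1.0 + t - sigma_min)

def two_step_sample(shape, sigma_max=80.0, sigma_mid=0.8, seed=0):
    """Two-step consistency sampling (illustrative schedule).

    Step 1: denoise pure noise at sigma_max in a single model call.
    Step 2: re-noise to an intermediate level sigma_mid, denoise again.
    A diffusion sampler would instead take tens to hundreds of steps.
    """
    rng = np.random.default_rng(seed)
    # Start from pure Gaussian noise at the maximum noise level.
    x = rng.standard_normal(shape) * sigma_max
    # First model call: jump straight from noise toward a clean sample.
    x = consistency_fn(x, sigma_max)
    # Second step: perturb to sigma_mid, then denoise once more to
    # refine the result at low extra cost.
    x = x + rng.standard_normal(shape) * sigma_mid
    x = consistency_fn(x, sigma_mid)
    return x
```

The two calls to `consistency_fn` are the "two sampling steps" referenced above; a one-step sampler would simply stop after the first call, trading a little quality for even lower latency.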

The work is described in a pre-peer-review paper published on arXiv.org and a blog post released today, authored by Cheng Lu and Yang...
