Runway debuts AI video generation API for developers

As enterprises continue to increase their investments in generative AI, Runway is going all in to give them the best it has on offer. Today, the New York-based AI startup announced that it is making its ultra-fast video generation model, Gen-3 Alpha Turbo, available via API.

The move makes Runway among the first companies to allow developers and organizations to integrate a proprietary AI video generation model into their platforms, apps, and services — powering internal or external use cases requiring video content. Imagine an advertising company being able to generate video assets for campaigns on the fly.

The launch promises to significantly enhance the workflows of video-focused enterprises. However, Runway noted that the API will not be immediately available to everyone. Instead, the company is following a phased approach, gradually rolling it out to all interested parties.

What do we know about the Runway API?

Currently available to select partners, the Runway API comes via two main plans: Base for individuals and small teams, and Enterprise for larger organizations.

Depending on the plan chosen, users will receive endpoints to integrate the model into their respective products and initiate various video generation tasks, with the interface clearly displaying “powered by Runway” messaging.

The base price for the API starts at one cent per credit, with five credits required to generate a one-second video.
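Based on the stated pricing, a quick back-of-the-envelope calculation shows what clips of various lengths would cost. This is a rough sketch only: it assumes credit consumption scales linearly with clip duration, which Runway has not confirmed, and the constants simply restate the figures above.

```python
# Cost estimate from Runway's stated API pricing:
# $0.01 per credit, 5 credits per second of generated video.
# Assumes linear scaling with duration (unconfirmed by Runway).

CREDIT_PRICE_USD = 0.01   # base price per credit
CREDITS_PER_SECOND = 5    # credits to generate one second of video

def estimate_cost(seconds: float) -> float:
    """Estimate the API cost in USD for a clip of the given duration."""
    return seconds * CREDITS_PER_SECOND * CREDIT_PRICE_USD

if __name__ == "__main__":
    for duration in (1, 5, 10):
        print(f"{duration:>2}s clip: ~${estimate_cost(duration):.2f}")
```

Under these assumptions, a 10-second clip would run about 50 credits, or roughly $0.50.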

It’s important to note that, at this stage, the company is only providing access to the Gen-3 Alpha Turbo model via the API. Other models, including the original Gen-3 Alpha, are not yet available on the platform.

The Turbo model debuted in late July as an accelerated version of Gen-3 Alpha, capable of producing videos from images seven times faster while being more affordable. Runway co-founder and CEO Cristóbal Valenzuela noted at the time that the model could generate videos almost in "real-time," producing a 10-second clip in just 11 seconds.

Until now, the model was only available to users on the Runway platform. With the API, the company hopes to see broader adoption across various enterprise use cases, which could ultimately boost its revenues.

Runway said in a blog post that marketing group Omnicom is already using the API, although it did not say exactly how the group is putting the video generation technology to use. The names of other existing partners have also not been revealed.

Either way, with this announcement, the messaging is pretty clear: Runway is taking a proactive step to stay ahead of competitors in the market, including the likes of OpenAI's yet-to-launch Sora and DeepMind's Veo, and to win over enterprise customers.

Notably, despite all the criticism surrounding AI video generation, from copyright cases to questions about data collection for training, the company has been aggressively moving to expand its product capabilities. Just a couple of days ago, it launched Gen-3 Alpha Video to Video on the web for all paid subscribers.

“Video to Video represents a new control mechanism for precise movement, expressiveness and intent within generations. To use Video to Video, simply upload your input video, prompt in any aesthetic direction you like, or, choose from a collection of preset styles,” the company wrote in a post on X.

While it remains to be seen when Runway will add its other models, including Gen-3 Alpha, to the API platform, interested parties can already sign up on the company’s waitlist to get access. 

Runway says it is currently gathering feedback from early partners to further refine the offering but plans to initiate a wider release in the coming weeks to open up access for all waitlisted customers. 
