Runway started as a side project inside an NYU grad school program and somehow became the AI video company that Hollywood actually uses. Their Gen-2 model let anyone type a sentence and get a video back — before most people knew that was possible. They were quietly embedded in the VFX pipeline for 'Everything Everywhere All at Once', which won seven Oscars, while the rest of the world was still arguing about whether AI art was real. Now they're in a three-way arms race with OpenAI and Google for the future of video generation, and they have a two-year head start.
Founded: 2018
HQ: New York, USA
Total Raised: $237 million
Founders: Cristóbal Valenzuela, Anastasis Germanidis, Alejandro Matamala-Ortiz
Status: Private
Website: runwayml.com

THE ORIGIN STORY
In 2018, three NYU Tisch School of the Arts students — Cristóbal Valenzuela, Anastasis Germanidis, and Alejandro Matamala-Ortiz — were deep inside a graduate program for interactive telecommunications. They were the kind of people who thought about what machine learning could do for creative tools, not just enterprise software.
While their classmates were building art installations, they were building models.
They started Runway as a research project. The original pitch was simple: make machine learning tools accessible to artists and designers who had no idea how to write code.
Not a tool for engineers to make art — a tool for artists to use AI. That distinction mattered.
The first version was basically a web interface where creatives could run pre-built ML models without touching a terminal.
They graduated, kept building, and raised a small seed round in 2018. The product evolved fast.
By 2021 they were working on generative video. In early 2023 they shipped Gen-1, which could apply visual styles to existing footage.
Weeks later, in March 2023, Gen-2 dropped: text to video, from scratch, with results that genuinely surprised people. Not because it was perfect, but because it worked at all.
The company that started as a grad school side project had become the most serious AI video lab outside of the big tech giants. Three art school kids beat the world to it.
WHAT THEY ACTUALLY DO
Runway makes money by selling access to its AI creative tools — primarily through a subscription model aimed at individual creators, creative professionals, and enterprises.
The consumer tier starts at around $15 per month and gives users access to video generation, image editing, green screen removal, motion tracking, and a growing suite of AI-powered tools. Higher tiers unlock more credits, longer video outputs, and faster generation.
It is essentially a SaaS model layered on top of very expensive compute.
The enterprise side is where the real money is. Film studios, production companies, advertising agencies, and game developers pay significantly more for custom integrations, API access, and dedicated capacity.
Runway has been used in major film productions — most famously the Oscar-winning 'Everything Everywhere All at Once' — which gave them credibility that no amount of marketing spend could buy.
The core tension in the business is that running these models costs a lot. GPU compute is not cheap, and every video generated burns real money.
Like most AI companies at this stage, the bet is that scale and model efficiency improve faster than costs compound. Whether that math works long-term is the question every AI video company is trying to answer at the same time.
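The shape of that bet can be sketched in a few lines of arithmetic. Every number below is purely hypothetical, chosen for illustration; none of them are Runway's actual prices, usage figures, or compute costs.

```python
# Toy unit-economics model for an AI video subscription.
# ALL numbers are hypothetical illustrations, not Runway's real figures.

def monthly_margin(price, clips_per_user, gpu_seconds_per_clip, cost_per_gpu_second):
    """Gross margin per subscriber per month: revenue minus compute cost."""
    compute_cost = clips_per_user * gpu_seconds_per_clip * cost_per_gpu_second
    return price - compute_cost

# Today: generation is expensive, so a heavy user can be unprofitable.
today = monthly_margin(price=15.0, clips_per_user=100,
                       gpu_seconds_per_clip=60, cost_per_gpu_second=0.004)

# The bet: efficiency halves GPU time per clip and hardware gets cheaper,
# even as usage grows 50 percent.
later = monthly_margin(price=15.0, clips_per_user=150,
                       gpu_seconds_per_clip=30, cost_per_gpu_second=0.003)

print(f"margin today: ${today:.2f}, margin later: ${later:.2f}")
```

The point is not the specific numbers but the race they encode: the business only works if the per-clip cost curve falls faster than per-user demand rises.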
THE PRODUCTS
Gen-2 was the breakout product: a text-to-video model that takes a written prompt and produces short video clips from scratch. When it launched in 2023, it was the most capable publicly accessible text-to-video tool in the world.
You type 'a lone astronaut walking through a neon-lit Tokyo street at night' and it generates something that looks like a scene from a film. It is not perfect, but it is remarkable.
Gen-3 Alpha followed in 2024, pushing fidelity, consistency, and control significantly further. Longer clips, more coherent motion, better understanding of complex prompts.
Each generation has been a meaningful step forward, not just a marketing refresh.
Beyond the headline video generation, Runway offers a full creative suite: AI-powered video editing, background removal that actually works without a green screen, motion tracking, slow motion generation, image-to-video conversion, and inpainting tools that let you remove or replace objects within footage. It is trying to be an entire post-production workflow, not just one impressive demo.
Runway also has an API that lets developers and studios integrate its models directly into their own tools and pipelines. That B2B layer is increasingly important — it is how Runway gets embedded into workflows rather than just used for one-off experiments.
HOW THEY GREW
The move that changed everything was going deep into Hollywood before Hollywood knew it needed them. While competitors were chasing consumer virality, Runway was building relationships with VFX artists and indie filmmakers who were looking for any edge they could find.
The 'Everything Everywhere All at Once' connection was not a marketing stunt: the film's small VFX team actually used Runway's tools in post-production. When the film swept the Oscars in 2023, Runway got name-dropped in conversations it had no right to be in yet.
Suddenly studios were calling.
They also played the research credibility game well. Cristóbal Valenzuela and the team published their work, positioned themselves as serious AI researchers, not just a product company, and built a reputation in the ML community that attracted top talent.
That credibility compounded into partnerships, press, and eventually a $141 million Series C led by Google in 2023, which was its own kind of endorsement.
The free tier with limited credits created a word-of-mouth loop that social media accelerated. When Gen-2 launched, the demo videos spread organically because they were genuinely impressive.
People were posting 'I made this with Runway' content across Twitter and TikTok before Runway spent a dollar on advertising. The product was the growth strategy.
THE HARD PART
The problem that keeps Runway's leadership up at night is the same one facing every AI model company: the big players have more compute, more data, and more money, and they are coming for the same market.
OpenAI's Sora was announced in February 2024 with demo clips that were, honestly, a generation ahead of what was publicly available. Google has Lumiere and VideoPoet.
Meta has its own labs. All of them have infrastructure advantages that Runway cannot match dollar for dollar.
The question is whether being first, being focused, and being embedded in real production workflows is enough of a moat when trillion-dollar companies decide to compete directly.
There is also the intellectual property problem. AI video generation requires training data, and the question of what that data is and whether using it is legal remains entirely unsettled.
Runway, like every generative AI company, faces the real possibility of significant litigation from artists, studios, and rights holders who argue their work was used without consent or compensation. One unfavorable court ruling could reshape the cost and legality of training the next generation of models.
And underneath all of it: the unit economics. Generating video is computationally expensive in a way that generating text is not.
Every Runway user who creates a ten-second video clip consumes real GPU time. Making that sustainable while keeping prices low enough to attract creators is a genuinely hard problem, and nobody has fully solved it yet.
MONEY TRAIL
Seed · 2018 · Led by Felicis Ventures · $2M raised
Series A · 2021 · Led by Amplify Partners · $9M raised
Series B · 2021 · Led by Coatue Management · $35M raised · $0.5B valuation
Series C · 2023 · Led by Google · $141M raised · $1.5B valuation
Series D · 2023 · Led by General Atlantic · $50M raised · $1.5B valuation
WHO BACKED THEM
Runway has raised $237 million across multiple rounds, but the most important check came from an unexpected source: Google. The $141 million Series C in 2023, co-led by Google and including Salesforce Ventures, was a signal to the entire industry that Runway was being taken seriously at the highest level.
When one of the largest AI labs in the world puts money into your video AI startup, it changes how everyone else looks at you.
Earlier backing came from Felicis Ventures and Amplify Partners in the seed and Series A stages, investors known for early bets on infrastructure and developer tools who saw what Runway was building before the AI content wave became obvious. NVIDIA also participated in later rounds, which makes strategic sense: NVIDIA's GPUs power all of this, and the company has an obvious interest in keeping the AI application layer healthy and well-funded.
The investor base reflects Runway's unusual positioning: part creative tool company, part serious AI research lab, part Hollywood infrastructure play. That mix of backers — from consumer-focused VCs to deep tech players to strategic investors with GPU and cloud interests — suggests the company has convinced multiple different types of capital that it occupies a genuinely defensible position in a rapidly moving market.