SambaNova looked at the AI chip wars and decided the problem wasn't the chip — it was the entire stack. While everyone else was fighting over who makes the best GPU, SambaNova built the hardware, the software, and the system together from scratch. They raised over a billion dollars on that bet. The AI boom didn't hurt their case.
Founded
2017
HQ
Palo Alto, USA
Total Raised
$1.1 billion
Founders
Rodrigo Liang, Kunle Olukotun, Christopher Ré
Status
Private
Website
sambanova.ai
THE ORIGIN STORY
In 2017, three people who had spent their careers thinking about the fundamental limits of computing decided those limits were wrong. Rodrigo Liang had been a senior VP at Oracle, where he watched enterprise software get more complicated without getting more intelligent.
Kunle Olukotun was a Stanford professor who had spent decades researching chip architecture and parallel computing; his pioneering work on chip multiprocessors helped lay the groundwork for modern multicore and parallel processors. Christopher Ré was another Stanford professor, one of the minds behind foundational work in machine learning systems.
Together they asked a simple question: what if the reason AI is so slow and expensive is that we're running fundamentally new workloads on fundamentally old hardware designed for something else? The answer they came to was: yes, that's exactly the problem.
They founded SambaNova with the thesis that the AI era needed purpose-built infrastructure — not just a faster GPU, but a completely different way of designing how hardware and software talk to each other. They called their approach the Reconfigurable Dataflow Architecture.
Which sounds like a PhD thesis, because it basically is.
WHAT THEY ACTUALLY DO
SambaNova sells AI infrastructure to enterprises and governments — the kind of organizations that need to run large AI models at scale but cannot or will not send their data to a public cloud. Think national labs, financial institutions, healthcare systems, and defense-adjacent organizations.
They offer two things: the SambaNova Suite, which is their full-stack hardware-and-software system deployed on-premises, and SambaNova Cloud, which is an API-access service for running frontier models like their own versions of Llama. The on-premises model is where the serious money is.
A customer pays for the DataScale system — SambaNova's proprietary server rack built around their RDU chip — plus the software to run it and the support to keep it running. It's a high-ticket, long-contract enterprise sale, the kind of deal that involves procurement teams and multi-year agreements. The cloud product is newer and more accessible: developers can hit their API and run inference on large language models at competitive speeds without buying hardware.
SambaNova makes money on both, but the hardware deployments are what made them a unicorn.
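For a sense of what that developer-facing access looks like, SambaNova Cloud exposes an OpenAI-style chat-completions API. The sketch below is illustrative only: the endpoint URL and model name are assumptions, so check the current SambaNova Cloud documentation for the real values.

```python
import json
import os
import urllib.request

# Assumed endpoint and model name for illustration only; consult the
# current SambaNova Cloud documentation for the real values.
API_URL = "https://api.sambanova.ai/v1/chat/completions"
DEFAULT_MODEL = "Meta-Llama-3.1-8B-Instruct"


def build_request(prompt: str, model: str = DEFAULT_MODEL) -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }


def run_inference(prompt: str) -> str:
    """POST the payload and return the model's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={
            # Assumes an API key is set in the environment.
            "Authorization": f"Bearer {os.environ['SAMBANOVA_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The point of the sketch is the business model, not the code: the barrier to trying the cloud product is an API key, while the barrier to DataScale is a procurement cycle.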
THE PRODUCTS
The DataScale system is SambaNova's flagship product — a full server rack built around their proprietary Reconfigurable Dataflow Unit chip. It's designed for enterprises that want to train and run large AI models on their own infrastructure without depending on cloud providers.
The RDU chip is the core innovation: unlike a GPU, which is a general-purpose parallel processor repurposed for AI, the RDU is designed specifically for the dataflow patterns that AI models use. SambaNova claims this makes it more efficient per watt and faster for specific workloads.
SambaNova Cloud is their API service — launched properly in 2023 and positioned around serving open-source models like Meta's Llama family at high speed. They made headlines by posting some of the fastest publicly benchmarked inference speeds for Llama 2 and later Llama 3, which drove developer attention and gave them a foothold in the developer market beyond their enterprise base.
The SambaNova Suite is the software layer that runs on top of the hardware — it handles model training, fine-tuning, and deployment, and is designed so that enterprises don't need a team of ML engineers to operate the system. The whole pitch is: this is AI infrastructure that actually works out of the box.
HOW THEY GREW
SambaNova's first big move was going after the customers everyone else ignored. When OpenAI and Google were racing to build consumer products and cloud APIs, SambaNova went to the U.S. Department of Energy and said: here's a system you can run inside your own walls. Argonne National Laboratory and Lawrence Livermore National Laboratory both became early customers.
Those are not flashy logos — but they are the kind of customers who sign large, long contracts and don't churn. The national lab wins gave SambaNova credibility with the federal market and with enterprise buyers who care about security and data sovereignty above everything else.
The second move was speed. In late 2023, SambaNova launched what they claimed was the fastest inference service for large language models available — running Llama models faster than anyone else publicly benchmarked.
In AI, inference speed matters enormously because it translates directly into cost and user experience. Being the fastest, verifiably, is a marketing campaign that writes itself.
They leaned into it hard, publishing benchmarks and positioning themselves as the performance option in a market full of 'good enough' offerings.
THE HARD PART
SambaNova is competing in one of the most capital-intensive and strategically contested markets in technology history. Nvidia has a stranglehold on the AI chip market that goes beyond hardware — CUDA, the software ecosystem that Nvidia built over 15 years, is what most AI development is written for.
Getting enterprises to adopt a completely different architecture means asking them to bet that SambaNova's stack will still be around and supported in five years. That's a hard sell even when the product is good.
Then there's the hyperscaler problem. AWS, Google, and Microsoft are all building their own custom AI chips — TPUs, Trainium, Maia — and they have virtually unlimited budgets and captive customer bases.
SambaNova has to convince enterprises to buy hardware and run it themselves rather than just scaling up their cloud spend, which is a harder argument to make in an era when CFOs love OpEx over CapEx. The on-premises bet requires customers to believe that data sovereignty and performance justify the complexity and upfront cost.
Most enterprise buyers are still figuring out whether they want to run AI at all, let alone own the infrastructure.
MONEY TRAIL
Seed
2017 · Led by Intel Capital
$56M raised
Series A
2019 · Led by Intel Capital
$56M raised
Series B
2020 · Led by GIC
$250M raised
$1.0B valuation
Series C
2021 · Led by SoftBank Vision Fund
$676M raised
$5.1B valuation
Series D
2023 · Led by Walden International
$125M raised
$5.0B valuation
WHO BACKED THEM
SambaNova raised over a billion dollars from a group of investors that includes some of the heaviest names in both venture capital and strategic investment. SoftBank Vision Fund led the Series C in 2021, putting in a reported $676 million and pushing SambaNova's valuation to $5.1 billion.
That was SoftBank doing what SoftBank does: making an enormous bet on infrastructure it believes will be foundational, and writing a check large enough to scare away competition. Intel Capital was an early backer, which is interesting given that Intel is technically a competitor in the semiconductor space — a sign that even the incumbents believed SambaNova's architecture was doing something genuinely different.
GV, Google's venture arm, participated in early rounds too — another strategic bet from a company that has its own TPU program but clearly wanted exposure to alternative AI hardware approaches. BlackRock, GIC (Singapore's sovereign wealth fund), and Temasek also participated in later rounds, adding the institutional weight that signals a company being taken seriously as a long-term infrastructure player rather than a speculative startup.
The fundraising trajectory — seed in 2017, Series C at a $5.1B valuation by 2021 — reflects how fast the AI infrastructure investment thesis moved from niche academic bet to mainstream capital allocation priority.