SambaNova and Intel have announced a planned multi-year strategic collaboration to deliver high-performance, cost-efficient AI inference solutions for AI-native companies, model providers, enterprises and government organisations worldwide, built around Intel® Xeon® based infrastructure.
Intel Capital is participating in SambaNova’s Series E financing round.
Demand for more heterogeneous infrastructure
The collaboration responds to AI workloads becoming more diverse and complex, with many global organisations looking for different solutions for different needs. This is driving demand for more heterogeneous infrastructure, built on diverse compute, memory and networking with a consistent software foundation, to support inference at scale across the data centre.
For customers with AI workloads well-suited to SambaNova’s approach, the combination of Intel CPUs and SambaNova’s AI platform can provide a compelling rack-level inference option as Intel’s GPU-based solutions come online.
This collaboration complements Intel’s existing data centre GPU commitments and does not alter its path to competing in AI. Intel continues to invest across GPU IP, architecture, products, software and systems, and to strengthen its roadmap as part of its edge-to-cloud AI engagements.
The SambaNova SN50 AI chip
The SambaNova SN50 AI chip is claimed to reach a maximum speed five times that of “competitive” chips.
Positioned as the most efficient chip for agentic AI, the SN50 is being marketed as offering enterprises a “3X lower total cost of ownership” (presumably based on internal testing). The SN50 will ship to customers later this year.
Rodrigo Liang, co‑founder and CEO of SambaNova, has commented:
“AI is no longer a contest to build the biggest model. With the SN50 and our deep collaboration with Intel, the real race is about who can light up entire data centers with AI agents that answer instantly, never stall, and do it at a cost that turns AI from an experiment into the most profitable engine in the cloud.”
GPU Alternatives
SambaNova claims that the SN50 delivers five times more compute per accelerator and four times more network bandwidth than the previous generation. It links up to 256 accelerators over a multi‑terabyte‑per‑second interconnect, cutting time‑to‑first‑token and supporting larger batch sizes. As a result, enterprises can deploy bigger, longer‑context AI models with higher throughput and responsiveness while keeping costs and latency under control.
Kevork Kechichian, EVP and General Manager of Intel’s Data Center Group, commented:
“Customers are asking for more choice and more efficient ways to scale AI. By combining Intel’s leadership in compute, networking, and memory with SambaNova’s full-stack AI systems and inference cloud platform, we are delivering a compelling option for organizations looking for GPU alternatives to deploy advanced AI at scale.”
SambaNova’s Reconfigurable Data Unit (RDU) architecture
Built on SambaNova’s Reconfigurable Data Unit (RDU) architecture, the SN50 is set to deliver:
- Instant AI experiences: ultra‑low latency delivers real‑time responsiveness for next‑gen enterprise apps like voice assistants.
- Scale: the ability to power hundreds or thousands of simultaneous AI sessions with consistently high performance.
- A three‑tier memory architecture that unlocks 10T+ parameter models and 10M+ context lengths for deeper reasoning and richer outputs.
- Higher hardware utilisation, lowering cost‑per‑token and driving greater performance and ROI (return on investment).
- Resident multi‑model memory and agentic caching that optimise the three‑tier architecture, cutting infrastructure costs for enterprise‑scale AI deployments.
In response to the news, Peter Rutten, Research Vice-President, Performance Intensive Computing at analyst firm IDC, commented:
“The new SambaNova SN50 RDU changes the tokenomics of AI inference at scale. By delivering both high performance and high throughput with a chip that uses existing power and is air cooled, SambaNova is changing the game.”
The news follows SambaNova’s record bookings and revenue as it closed out 2025, reflecting accelerating demand for production-ready AI systems across financial services, telecommunications, energy, and sovereign deployments worldwide.