
OpenAI Signs Ten Billion Dollar Deal For Cerebras Systems Wafer Scale Chips

March 3, 2026
in Business and Funding
Reading Time: 3 mins read

Most computer chips are the size of a fingernail, but Cerebras Systems builds a chip the size of a dinner plate. It is a piece of engineering that ignores standard manufacturing rules, yet investors just bet another billion dollars that this giant square of silicon is the only real threat to Nvidia. The question is whether the physics or the politics will break first.

Key Takeaways

  • Cerebras raised $1 billion in capital, reaching a $23 billion valuation.
  • Benchmark Capital invested $225 million through two dedicated Benchmark Infrastructure funding vehicles.
  • Cerebras signed a $10 billion deal to provide 750 megawatts of computing power to OpenAI.

Cerebras Systems has raised $1 billion in new capital. This pushes the company’s valuation to $23 billion, which is nearly triple what it was worth six months ago. The round was led by Tiger Global, but the most interesting check came from Benchmark Capital.

Benchmark usually caps its funds at $450 million. To make this investment work, the firm created special vehicles just to funnel $225 million into Cerebras, a sign of high conviction from one of Silicon Valley’s most disciplined firms.

The big deal

Nvidia currently controls the market for AI hardware. Tech companies are desperate for a viable alternative to keep costs down and supply up. Cerebras is positioning itself as that alternative. The company claims its systems are faster than Nvidia’s chips for specific AI tasks.


This is not just talk. OpenAI has signed a multi-year agreement worth roughly $10 billion with Cerebras. The deal secures 750 megawatts of computing power through 2028. OpenAI CEO Sam Altman is also an investor in the company. This partnership validates the technology in a way that press releases cannot.

How it works

Standard chip manufacturing takes a 300-millimeter silicon wafer and cuts it into hundreds of tiny, separate chips. Cerebras does the opposite. They use the entire wafer to make a single, massive processor called the Wafer Scale Engine. It is about 8.5 inches wide and holds 4 trillion transistors.

Think of a standard computer cluster like a fleet of delivery trucks. The trucks are fast, but they spend a lot of time stuck in traffic moving packages between different warehouses. Cerebras is like building one giant warehouse where all the workers are in the same room. No trucks are needed.

Because the data does not have to travel between separate chips, the system avoids the communication bottlenecks that slow down traditional GPU clusters. Cerebras says this design makes AI inference more than 20 times faster.
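To make that claim concrete, here is a toy back-of-envelope sketch in Python. The bandwidth figures are illustrative assumptions (a high-end chip-to-chip link versus an on-wafer fabric), not vendor specifications; the point is only that keeping data on one piece of silicon cuts transfer time by orders of magnitude.

```python
# Toy model of why on-wafer communication beats chip-to-chip links.
# All bandwidth figures below are illustrative assumptions, not vendor specs.

def transfer_time_s(bytes_moved: float, bandwidth_bytes_per_s: float) -> float:
    """Time to move a tensor over a link at a given bandwidth (ignores latency)."""
    return bytes_moved / bandwidth_bytes_per_s

activations = 1e9  # 1 GB of intermediate activations (hypothetical workload)

chip_to_chip_bw = 900e9  # ~900 GB/s, roughly a top-end GPU interconnect (assumption)
on_wafer_bw = 20e15      # ~20 PB/s on-die fabric (order-of-magnitude assumption)

t_cluster = transfer_time_s(activations, chip_to_chip_bw)
t_wafer = transfer_time_s(activations, on_wafer_bw)

print(f"chip-to-chip: {t_cluster * 1e6:.1f} us")
print(f"on-wafer:     {t_wafer * 1e6:.3f} us")
print(f"ratio:        {t_cluster / t_wafer:.0f}x")
```

A real cluster overlaps communication with computation, so end-to-end speedups are far smaller than this raw ratio; the sketch only shows why the bottleneck exists at all.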

Inference: The process where a trained AI model answers a user’s question or performs a task.

The catch

The technology works, but the business side has baggage. Until recently, a single client in the UAE called G42 provided 87% of Cerebras’ revenue. This heavy reliance on one customer is risky.

It also attracted government attention. G42 has historical ties to Chinese technology companies, which triggered a national security review by the Committee on Foreign Investment in the United States. This scrutiny forced Cerebras to delay its initial plans for an IPO and withdraw a filing earlier in 2025. G42 has since been removed from the investor list to clear the path forward.

What now?

Cerebras is preparing to go public in the second quarter of 2026. The company must now prove it can diversify its customer base beyond the UAE and OpenAI.

If you use ChatGPT, this deal is designed to make the system answer complex queries faster in the coming years. Watch to see if other major AI labs sign similar deals, or if OpenAI remains the primary user of this giant chip.

Tags: enterprise ai, Ilya Sutskever, inference optimization, Microsoft, notion, ONNX, OpenAI, rag, scraping

© 2025 Tomorrow Explained. Built with 💚 by Dr.P
