Nvidia CEO Jensen Huang speaks during a keynote address at Nvidia’s GTC Conference on March 16, 2026 in San Jose, California. Nvidia’s GTC Conference focuses on current developments and future uses of AI.
Benjamin Fanjoy | Getty Images
At Nvidia’s annual developer conference on Monday, CEO Jensen Huang took the stage to a packed house and said he expects purchase orders between Blackwell and Vera Rubin to reach $1 trillion through 2027.
Last year, the company projected a $500 billion revenue opportunity between the two chip technologies. Following Nvidia’s earnings report last month, finance chief Colette Kress said the company expects growth this year to exceed what was included in that estimate.
Huang said demand is booming from startups and big companies alike. Nvidia shares rose about 2% on Monday.
“If they could just get more capacity, they could generate more tokens, their revenues would go up,” Huang said at GTC in San Jose, California.
Nvidia’s graphics processing units for artificial intelligence have turned the brand into a household name and the most valuable public company in the world, worth about $4.5 trillion. As mass AI adoption shifts from chatbots to agentic apps that spawn other agents to accomplish tasks, the number of tokens being generated has exploded, creating even greater need for running inference at faster speeds.
Nvidia is scheduled to roll out Vera Rubin later this year. The system, which is made up of 1.3 million components, will deliver 10 times more performance per watt than its predecessor, Grace Blackwell, the company claims. That’s a significant development, as energy consumption is one of the most critical issues facing the AI build-out.
The chipmaker said in February that year-over-year revenue this quarter will surge about 77% to roughly $78 billion. The company has reported 11 straight quarters of revenue growth above 55%.
Also on Monday, Huang unveiled the Nvidia Groq 3 Language Processing Unit, or LPU, the company’s first chip from the startup that it mostly acquired through a $20 billion asset purchase in December, its largest deal ever.
Groq was founded by the creators of Google’s in-house tensor processing unit, which has gained traction in recent years as a competitor to Nvidia’s graphics processing units. The Groq 3 LPU is built to enhance that technology, with one core optimized for speeding up the GPU.
Huang introduced a full rack dedicated to housing the new Groq accelerators.
The Groq 3 LPX rack will hold 256 LPUs, and is meant to sit beside the Vera Rubin rack-scale system that’s shipping to customers later this year. Huang said the Groq LPX rack can improve the tokens-per-watt performance of its Rubin GPUs by 35 times.
“Every bit of infrastructure you add to your data center is competing for power,” said chip analyst Ben Bajarin of Creative Strategies.
Huang also showed off a prototype of Kyber, Nvidia’s next big rack architecture leap after Rubin. It will integrate 144 GPUs in compute trays that sit vertically instead of horizontally, in order to boost density and lower latency. The Kyber design will be available in Vera Rubin Ultra, Nvidia’s next rack-scale system, expected to ship in 2027.
— CNBC’s Jordan Novet contributed to this report.
