Timothy Morano
Apr 01, 2026 19:17
Together AI’s kernel research group delivers major GPU optimization breakthroughs, cutting inference latency from 281ms to 77ms for enterprise AI deployments.
The group behind FlashAttention has quietly become one of the most consequential teams in AI infrastructure. Together AI’s kernel research unit, now about 15 engineers strong, is solving a problem most people don’t even know exists: the massive performance gap between AI models and the hardware running them.
Their latest win? Taking a voice AI company’s time-to-first-token from 281ms down to 77ms, a 3.6x improvement that translated into 7.2x better unit economics.
The Hidden Bottleneck
Here’s what most AI discourse misses: having great models and expensive GPUs doesn’t guarantee performance. The bottleneck sits in between, at the kernel layer that translates mathematical operations into actual silicon instructions.
“The gap between what researchers design and what actually runs fast on hardware is huge,” explains Dan Fu, who leads a parallel research lab at UCSD. Get kernels right and you unlock the hardware’s full potential. Get them wrong and your expensive GPUs sit partially idle.
For companies building AI-native products, this isn’t academic. When inference costs run 2x higher than necessary, or when latency breaks the user experience, kernel optimization becomes existential.
One Week Versus One Year
The team’s capabilities showed clearly when NVIDIA’s Blackwell GPUs arrived in March 2025. NVIDIA had spent a year with dozens of engineers optimizing kernels for the new architecture. Together AI had a week.
Their secret weapon: ThunderKittens, a library developed with Stanford researchers that reduces kernel code from 1,000+ lines of CUDA to roughly 100-200 lines. The abstraction layer is built around NVIDIA’s tensor cores, the specialized matrix multiplication units on modern GPUs.
Within seven days of hardware access, the team had some of the fastest FP4 and FP8 GEMM kernels available for Blackwell, achieving up to 2x speedups over cuBLAS on H100s.
Real-World Impact
The voice AI case study illustrates what this means in production. The customer had a hard constraint: time-to-first-64-tokens above roughly 100ms breaks conversational flow. Their B200 deployment was hitting 281ms.
Together’s team hand-optimized a “Megakernel” implementation, running an entire model inside a single kernel and targeting the HBM bandwidth ceiling of NVIDIA H100s. Results on Llama-3.2-1B: 77ms. On Qwen 2.5 1.5B: 127ms, down from 292ms.
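To see why HBM bandwidth is the ceiling being targeted, a back-of-envelope estimate helps: during decoding, each generated token must stream (roughly) every model weight from HBM, so bandwidth sets a hard floor on per-token latency. The sketch below uses illustrative assumptions (parameter count for a 1B-class model, FP16 weights, nominal H100 HBM3 bandwidth), not Together AI’s published figures.

```python
# Rough memory-bandwidth floor on time-to-first-64-tokens for a
# 1B-parameter model. All constants are illustrative assumptions.

PARAMS = 1.24e9          # assumed parameter count (Llama-3.2-1B class)
BYTES_PER_PARAM = 2      # FP16/BF16 weights
HBM_BW = 3.35e12         # assumed H100 SXM HBM3 bandwidth, bytes/s
TOKENS = 64              # target: first 64 tokens

# One decoded token ~= one full pass over the weights from HBM.
weight_bytes = PARAMS * BYTES_PER_PARAM
floor_s = TOKENS * weight_bytes / HBM_BW
print(f"bandwidth floor: {floor_s * 1e3:.0f} ms for {TOKENS} tokens")
```

Under these assumptions the floor comes out to a few tens of milliseconds, so a measured 77ms sits within a small factor of the hardware limit; a kernel pipeline that makes redundant round-trips to HBM between operations can easily be several times slower, which is the inefficiency a single fused megakernel is meant to eliminate.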
The approach traces back to FlashAttention’s original insight. That Memorial Day 2022 paper proved the AI establishment wrong about attention being fully optimized. By applying database systems principles such as data locality and memory hierarchies to transformer attention, the team achieved 2-3x speedups where previous sparsity methods had shown only 10% real gains.
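The core trick is to process attention in blocks that fit in fast on-chip memory, maintaining a running softmax normalizer so the full attention matrix never has to be materialized in slow memory. Here is a minimal NumPy sketch of that online-softmax tiling idea; it mirrors the algorithm conceptually but is not the actual CUDA implementation, and the block size and shapes are arbitrary.

```python
import numpy as np

def naive_attention(Q, K, V):
    """Standard attention: materializes the full n x n score matrix."""
    S = Q @ K.T / np.sqrt(Q.shape[-1])
    P = np.exp(S - S.max(axis=-1, keepdims=True))
    return (P / P.sum(axis=-1, keepdims=True)) @ V

def tiled_attention(Q, K, V, block=16):
    """Same result, computed one block of K/V at a time with a
    running max (m) and running softmax denominator (l), so only
    a block-sized slice of scores exists at any moment."""
    n, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    O = np.zeros((n, d))
    m = np.full(n, -np.inf)          # running row-wise max of scores
    l = np.zeros(n)                  # running softmax denominator
    for j in range(0, K.shape[0], block):
        Kj, Vj = K[j:j + block], V[j:j + block]
        S = Q @ Kj.T * scale                     # block of scores
        m_new = np.maximum(m, S.max(axis=-1))
        corr = np.exp(m - m_new)                 # rescale old partials
        P = np.exp(S - m_new[:, None])
        l = l * corr + P.sum(axis=-1)
        O = O * corr[:, None] + P @ Vj
        m = m_new
    return O / l[:, None]
```

On a GPU the blocks live in SRAM rather than HBM, which is where the 2-3x speedup comes from; numerically, the tiled version matches the naive one to floating-point precision.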
Academic-Industry Pipeline
The group operates through an unusual model. Dan Fu runs his UCSD lab on higher-risk fundamental research. Together AI co-founder Tri Dao is at Princeton. Simran Arora is at Caltech. Ideas get de-risked in academia, then productionized at Together AI. PhD students join the company. Interns work on longer-term research in academic labs.
This produces engineers who bridge theory and production: people who, as Fu puts it, “lose sleep over memory access patterns” and “find beauty in data flow diagrams.”
The work isn’t glamorous. No announcements when a kernel optimization lands. Just faster training times, lower costs, higher throughput. But these margins determine whether AI-native products feel instant or sluggish, whether unit economics work or don’t, whether companies scale to millions of users or plateau at thousands.
For enterprise AI deployments where every millisecond matters, and every percentage point of efficiency translates into significant cost savings, this invisible infrastructure layer may be where the real competitive advantage lies.
Image source: Shutterstock
