Rebeca Moen
Mar 03, 2026 18:33
OpenAI reveals major contamination issues in the SWE-bench Verified benchmark, showing frontier AI models memorized solutions and tests rejected correct code.
OpenAI has stopped reporting scores on SWE-bench Verified, the widely used AI coding benchmark, after discovering that nearly 60% of the problems its models failed contained fundamentally broken tests. The company’s February 23, 2026 analysis also found evidence that all major frontier models, including GPT-5.2, Claude Opus 4.5, and Gemini 3 Flash, had been trained on benchmark solutions, rendering scores meaningless.
“Improvements on SWE-bench Verified no longer reflect meaningful improvements in models’ real-world software development abilities,” OpenAI stated. “Instead, they increasingly reflect how much the model was exposed to the benchmark at training time.”
The Numbers Tell the Story
OpenAI audited 138 problems (27.6% of the 500-problem dataset) that its o3 model could not consistently solve across 64 independent runs. The findings were damning: 59.4% of these problems had material issues in test design or problem descriptions that made them “extremely difficult or impossible even for the most capable model or human to solve.”
Breaking down the failures: 35.5% of audited tasks had overly strict tests that rejected functionally correct solutions by demanding specific implementation details never mentioned in the problem descriptions. Another 18.8% tested for functionality that wasn’t even specified in the task.
One example involved a pylint PR whose tests required importing a function called “get_annotation”, a name never mentioned in the problem statement. Models that solved the underlying issue correctly still failed because they could not psychically guess the expected function name.
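To make “overly strict” concrete, here is a minimal sketch of what such a test can look like. It is a hypothetical reconstruction, not the actual pylint test: the module path “annotation_utils” and the assertion are invented for illustration, and only the name “get_annotation” comes from OpenAI’s audit.

```python
# Hypothetical sketch of an overly strict benchmark test: it grades by
# importing an exact internal name rather than by checking behavior, so
# a patch that fixes the bug with a differently named helper fails at
# the import line before any logic is exercised.

def test_annotation_is_inferred():
    # The problem statement never mentions this name; only a model that
    # had already seen the gold patch would know to export it.
    from annotation_utils import get_annotation  # invented module path

    assert get_annotation("value: int = 1") == "int"
```

A behavior-level test would instead run the public checker on a code snippet and assert on the emitted warning, leaving internal naming up to the implementer.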
Every Major Model Is Contaminated
The contamination evidence proved even more troubling. OpenAI built an automated red-teaming system that uses GPT-5 to probe competing models for benchmark knowledge. The results showed that every frontier model tested could reproduce original human-written solutions or quote verbatim problem details it should never have seen.
GPT-5.2, when given minimal hints, reproduced the exact code patch for a Django authentication fix, including the specific conditional statement “if username is None or password is None.” Claude Opus 4.5 quoted word-for-word an inline comment from a gold patch it supposedly never encountered. Gemini 3 Flash, given only a task ID, output the entire unified diff with correct line numbers.
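OpenAI has not published the harness itself, so the following is only a minimal sketch of how such a probe can work, under stated assumptions: “query_model” is an invented stand-in for a call to the model under test, and the task ID and gold patch are supplied by the caller. The idea is to withhold the problem statement entirely and measure verbatim overlap between the model’s output and the known human-written fix.

```python
import difflib


def query_model(prompt: str) -> str:
    """Invented stand-in for a completion call to the model under test."""
    raise NotImplementedError


def contamination_score(task_id: str, gold_patch: str) -> float:
    # Give the model nothing but the task ID. A clean model has no basis
    # for an answer; a contaminated one may emit the memorized diff.
    prompt = f"Write the patch that resolves SWE-bench task {task_id}."
    candidate = query_model(prompt)
    # Character-level similarity: values near 1.0 suggest verbatim recall
    # rather than independent problem solving.
    return difflib.SequenceMatcher(None, candidate, gold_patch).ratio()
```

A score near 1.0 on a task the model was given no description of, as with Gemini 3 Flash reproducing a full unified diff from a task ID alone, is hard to explain by anything other than training-data leakage.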
The contamination creates an unfair advantage. Models that saw the solutions during training can pass underspecified tests by “remembering” implementation details that were never in the problem description, essentially taking the exam with the answer key in hand.
From 80% to 23%
The benchmark’s decay became visible in stalled progress. State-of-the-art scores improved only from 74.9% to 80.9% over six months, not because models hit a capability ceiling, but because the remaining problems were either impossible or required memorized knowledge.
SWE-bench Pro, the recommended replacement, paints a different picture. According to data from February 26, 2026, models scoring 80% on Verified dropped to roughly 23% on Pro, a benchmark designed to resist contamination. Claude Opus 4.6 currently leads Pro at 79.20%, though that figure measures a different, cleaner test set.
What Comes Next
OpenAI recommends the industry shift to SWE-bench Pro’s public split while acknowledging it is imperfect. The company is also investing in privately authored benchmarks like GDPVal, where domain experts create original tasks and trained reviewers grade solutions holistically.
The broader lesson matters for anyone tracking AI capabilities: benchmarks sourced from public repositories carry inherent contamination risk. When the training data includes the test, scores become theater. For researchers, investors, and developers betting on AI coding progress, the real frontier is harder to measure than leaderboards suggest.
Image source: Shutterstock
