Technology & Innovation

OpenAI’s o3 AI model scores lower on a benchmark than the company initially implied

By gossipstoday | April 21, 2025

A discrepancy between first- and third-party benchmark results for OpenAI’s o3 AI model is raising questions about the company’s transparency and model testing practices.

When OpenAI unveiled o3 in December, the company claimed the model could answer just over a fourth of questions on FrontierMath, a challenging set of math problems. That score blew the competition away — the next-best model managed to answer only around 2% of FrontierMath problems correctly.

“Today, all offerings out there have less than 2% [on FrontierMath],” Mark Chen, chief research officer at OpenAI, said during a livestream. “We’re seeing [internally], with o3 in aggressive test-time compute settings, we’re able to get over 25%.”

As it turns out, that figure was likely an upper bound, achieved by a version of o3 with more computing behind it than the model OpenAI publicly launched last week.

Epoch AI, the research institute behind FrontierMath, released results of its independent benchmark tests of o3 on Friday. Epoch found that o3 scored around 10%, well below OpenAI’s highest claimed score.

OpenAI has released o3, their highly anticipated reasoning model, along with o4-mini, a smaller and cheaper model that succeeds o3-mini.

We evaluated the new models on our suite of math and science benchmarks. Results in thread! pic.twitter.com/5gbtzkEy1B

— Epoch AI (@EpochAIResearch) April 18, 2025

That doesn’t mean OpenAI lied, per se. The benchmark results the company published in December show a lower-bound score that matches the score Epoch observed. Epoch also noted its testing setup likely differs from OpenAI’s, and that it used an updated release of FrontierMath for its evaluations.

“The difference between our results and OpenAI’s might be due to OpenAI evaluating with a more powerful internal scaffold, using more test-time [computing], or because those results were run on a different subset of FrontierMath (the 180 problems in frontiermath-2024-11-26 vs the 290 problems in frontiermath-2025-02-28-private),” wrote Epoch.
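
Epoch’s subset point is easy to see with a toy calculation: the same model, graded against differently sized problem sets, can post very different headline percentages. The snippet below is a purely illustrative Python sketch; the subset names and problem counts come from Epoch’s note, but the per-subset “correct” counts are invented to show the arithmetic, not real results.

    # Toy illustration: identical model, different FrontierMath subsets,
    # different headline scores. The "correct" counts are made up purely
    # to show the arithmetic; only the subset names and problem counts
    # come from Epoch's note.
    subsets = {
        "frontiermath-2024-11-26": {"problems": 180, "correct": 46},
        "frontiermath-2025-02-28-private": {"problems": 290, "correct": 29},
    }

    for name, stats in subsets.items():
        accuracy = stats["correct"] / stats["problems"]
        print(f"{name}: {stats['correct']}/{stats['problems']} = {accuracy:.1%}")
    # -> roughly 25.6% on the smaller 2024 subset vs. 10.0% on the larger
    #    2025 private subset, the kind of gap described above.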

According to a post on X from the ARC Prize Foundation, an organization that tested a pre-release version of o3, the public o3 model “is a different model […] tuned for chat/product use,” corroborating Epoch’s report.

“All released o3 compute tiers are smaller than the version we [benchmarked],” wrote ARC Prize. Generally speaking, bigger compute tiers can be expected to achieve better benchmark scores.

Re-testing released o3 on ARC-AGI-1 will take a day or two. Because today’s release is a materially different system, we are re-labeling our past reported results as “preview”:

o3-preview (low): 75.7%, $200/task
o3-preview (high): 87.5%, $34.4k/task

Above uses o1 pro pricing…

— Mike Knoop (@mikeknoop) April 16, 2025
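
Neither OpenAI nor ARC Prize spells out how the extra test-time compute was spent, but one common recipe is to sample many candidate answers per problem and keep the consensus, which tends to push measured accuracy up as the sampling budget grows. The Python sketch below is a generic, hypothetical illustration of that idea (the solve_once stub and its 30% hit rate are invented), not OpenAI’s actual setup.

    import random
    from collections import Counter

    def solve_once(problem: str) -> str:
        """Stand-in for a single model sample; right 30% of the time (invented rate)."""
        return "correct" if random.random() < 0.30 else f"wrong-{random.randint(0, 5)}"

    def solve_with_budget(problem: str, samples: int) -> str:
        """Generic test-time-compute recipe: draw several samples, keep the majority answer."""
        answers = [solve_once(problem) for _ in range(samples)]
        return Counter(answers).most_common(1)[0][0]

    # A bigger per-problem sampling budget (more test-time compute) tends
    # to lift the measured score; a smaller budget, as in the released
    # compute tiers, tends to lower it.
    for budget in (1, 8, 64):
        hits = sum(solve_with_budget(f"p{i}", budget) == "correct" for i in range(200))
        print(f"samples={budget:>2}: accuracy ~ {hits / 200:.0%}")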

OpenAI’s own Wenda Zhou, a member of the technical staff, said during a livestream last week that the o3 in production is “more optimized for real-world use cases” and speed versus the version of o3 demoed in December. As a result, it may exhibit benchmark “disparities,” he added.

“[W]e’ve done [optimizations] to make the [model] more cost efficient [and] more useful in general,” Zhou said. “We still hope that — we still think that — this is a much better model […] You won’t have to wait as long when you’re asking for an answer, which is a real thing with these [types of] models.”

Granted, the fact that the public release of o3 falls short of OpenAI’s testing promises is a bit of a moot point, since the company’s o3-mini-high and o4-mini models outperform o3 on FrontierMath, and OpenAI plans to debut a more powerful o3 variant, o3-pro, in the coming weeks.

It is, however, another reminder that AI benchmarks are best not taken at face value — particularly when the source is a company with services to sell.

Benchmarking “controversies” are becoming a common occurrence in the AI industry as vendors race to capture headlines and mindshare with new models.

In January, Epoch was criticized for waiting to disclose funding from OpenAI until after the company announced o3. Many academics who contributed to FrontierMath weren’t informed of OpenAI’s involvement until it was made public.

More recently, Elon Musk’s xAI was accused of publishing misleading benchmark charts for its latest AI model, Grok 3. Just this month, Meta admitted to touting benchmark scores for a version of a model that differed from the one the company made available to developers.

Updated 4:21 p.m. Pacific: Added comments from Wenda Zhou, a member of the OpenAI technical staff, from a livestream last week.
