Humans beat AI in Math Olympiad showdown

Despite major advances in generative AI, human contestants outperformed Google’s Gemini and OpenAI’s experimental models at the 2025 International Mathematical Olympiad (IMO) held in Queensland, Australia.

While both AI systems achieved gold-level scores — 35 out of a possible 42 points — five human participants scored perfect 42s.

“We can confirm that Google DeepMind has reached the much-desired milestone,” said IMO President Gregor Dolinar. “Their solutions were astonishing… clear, precise, and easy to follow.”

OpenAI hailed its result as a breakthrough in AI reasoning, with researcher Alexander Wei calling it “a longstanding grand challenge in AI.”

Unlike last year, when AI needed days to solve four problems, Gemini solved five of six within the 4.5-hour human limit. However, contest officials stressed that computing resources and possible human assistance could not be verified.

Dolinar said it was “exciting to see progress” but acknowledged human intellect still leads in top-tier math.
