Supporters

“Using hard mathematical problems as a benchmark to determine if AI can match the highest cognitive functions of human beings is a wonderful idea.

Every year, more than 600 of the best students from more than 110 countries are challenged with six extremely difficult IMO problems; typically only one or two students solve all six in the allotted nine hours. These problems test their ability to formulate concepts and engage in long chains of reasoning.

I am sure that many people will be following the AIMO Prize with great excitement, to see when, in the future, AI will match the world's brightest young minds.”

Gregor Dolinar
President, International Mathematical Olympiad

“The ability of modern AI systems to create a facsimile of a human when engaging in a written exchange is remarkable. However, this works best when there is a large corpus of relevant material already placed on the internet by people.

It will be a far more demanding task to mimic a creative mathematician as they engage in abstract reasoning. The global IMO community will be delighted to be involved in supporting this competition, as we seek to measure AI systems against the best young mathematical minds.”

Geoff Smith
Former President, International Mathematical Olympiad
Advisory Committee Member, AIMO Prize

“Despite recent advances, using AI to solve, or at least assist with solving, advanced mathematical problems remains an incredibly complicated and multifaceted challenge. It will be important to experiment with multiple approaches to this goal, and to benchmark the performance of each of them.

The AIMO Prize promises to provide at least one such set of benchmarks, which will help compare different AI problem-solving strategies at a technical level, in a manner that will be accessible and appealing to the broader public.”

Terence Tao
University of California, Los Angeles
Fields Medallist (2006)

“The advancements in large language models have been nothing short of remarkable, demonstrating versatility in numerous domains. However, mathematical reasoning presents a unique and significant challenge, one that these models are still striving to overcome.

The IMO is one of the most celebrated intellectual competitions in the world, respected for its complexity and rigour. It serves as the ultimate grand challenge for artificial intelligence, pushing the boundaries of what AI can achieve in advanced mathematical problem-solving.

The prospect of an AI model winning a gold medal in the IMO is not just a milestone; it is a monumental leap towards powerful AI-driven mathematical reasoning. Such an achievement would mark a profound moment in the journey towards artificial general intelligence, transcending existing limitations and opening new horizons in mathematics.”

Leonardo de Moura
Senior Principal Applied Scientist, Automated Reasoning Group, AWS
Co-Founder and Chief Architect, Lean FRO

“The AIMO Prize is driving progress in an area where large language models are currently weak, namely logic and reasoning. Solving this problem would be an important step towards making intelligent machines.

A big difference between this prize and the IMO Grand Challenge is that an AIMO entrant is only given the questions in human-readable form and must give output also in human-readable form. When the IMO Grand Challenge was set up in 2019, this seemed out of reach, so we instead asked for a machine which was given computer code corresponding to the questions and had to write code corresponding to the answers.

Recent progress in large language models makes it natural to remove this restriction, and this is exactly what the AIMO Prize is doing.”