Approaching gold-medal level! AI starts tackling Olympiad math!


Compiled by | Xu Rui

From playing chess to predicting protein structures, artificial intelligence (AI) keeps gaining ground. Google DeepMind has now set its sights on mathematics, developing an AI geometric reasoning model called AlphaGeometry that solves complex geometry problems at a level approaching that of gold medalists at the International Mathematical Olympiad (IMO). The research was published in Nature on January 17.

The IMO, held each July, is one of the most difficult mathematics competitions in the world. Solving its geometry problems demands a degree of mathematical creativity that AI has long struggled to achieve. Even OpenAI's GPT-4, which has demonstrated extraordinary reasoning abilities in other fields, scores 0 on IMO geometry problems.

AI's long failure to conquer IMO geometry stems not only from the difficulty of the problems but also from a shortage of training data. The IMO has been held once a year since 1959, with only six problems per competition, whereas AI systems for solving geometry problems typically need millions or even billions of training examples; the existing data falls far short.

To address this, Thang Luong and colleagues at DeepMind created a tool that generates billions of machine-readable geometric proofs, sidestepping the shortage of existing data (see the first sketch below).

The researchers trained AlphaGeometry on this data and benchmarked it on 30 IMO geometry problems. AlphaGeometry correctly solved 25 of them within the standard time limit. By comparison, the previously most advanced system solved only 10, while the average IMO gold medalist is estimated to solve 25.9.

Luong explained that AlphaGeometry consists of two parts: a fast, intuitive language-model system, similar to GPT-f, and a slower, more analytical "symbolic engine."

Facing an IMO geometry problem, AlphaGeometry first uses the language model to propose theorems and constructions to try; the symbolic engine then works through those proposals by logical deduction according to mathematical rules. The two systems alternate continuously until the problem is solved (see the second sketch below).

Luong said that although AlphaGeometry has been very successful at solving IMO geometry problems, its answers are often longer than human proofs. However, it can also discover things that humans overlook: for one problem from the 2004 IMO, it found a better and more general solution than the official answer.

Yang-Hui He of the London Institute for Mathematical Sciences pointed out that the system is inherently limited in the mathematical operations available to it, because IMO problems are meant to be solved with theorems taught at the undergraduate level or below. Expanding the mathematical knowledge AlphaGeometry can draw on could therefore improve the system and even help it make new mathematical discoveries.

For now, DeepMind declines to say whether it plans to enter AlphaGeometry in a live IMO competition or to extend the system to solve non-geometry IMO problems.
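First sketch. The article says only that DeepMind built a tool to generate billions of machine-readable proofs; the code below is a heavily simplified illustration of that general idea, not DeepMind's method or API. Every function name (random_premises, forward_deduce, generate_dataset) is a hypothetical placeholder: sample a random diagram, let a symbolic engine derive facts from it, and record each derived fact with the premises it depends on as a (statement, proof) training example.

```python
# Hedged sketch of synthetic proof-data generation, assuming the
# general recipe: random premises -> symbolic deduction -> record
# each derived fact plus the premises it used. Names are illustrative.

import random


def random_premises(rng: random.Random) -> list[str]:
    """Sample a random diagram description (points, lines, circles)."""
    templates = [
        "triangle ABC",
        "midpoint M of BC",
        "circle through A, B, C",
        "line through A parallel to BC",
    ]
    return rng.sample(templates, k=3)


def forward_deduce(premises: list[str]) -> list[tuple[str, list[str]]]:
    """Stand-in for a symbolic engine: derive facts and record, for
    each fact, the premises it actually depends on (the 'proof')."""
    # ... real deduction rules would go here; one placeholder record:
    return [("some derived fact about the diagram", premises[:2])]


def generate_dataset(n: int, seed: int = 0) -> list[dict]:
    """Emit machine-readable (statement, proof) training examples."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        premises = random_premises(rng)
        for fact, used in forward_deduce(premises):
            data.append({"premises": used, "statement": fact, "proof": used})
    return data
```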
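Second sketch. The article describes an alternating loop: the language model proposes constructions, the symbolic engine deduces consequences, and the two switch back and forth until the goal is reached. The code below is a minimal sketch of that loop under stated assumptions; all names (ProofState, deduce_closure, propose_construction, solve) are hypothetical stand-ins, not AlphaGeometry's actual interfaces.

```python
# Minimal sketch (not DeepMind's code) of the alternating loop the
# article describes: a language model proposes auxiliary constructions,
# and a symbolic deduction engine exhausts logical consequences.

from dataclasses import dataclass, field


@dataclass
class ProofState:
    premises: set[str]                 # known facts about the diagram
    goal: str                          # statement to prove
    constructions: list[str] = field(default_factory=list)


def deduce_closure(state: ProofState) -> set[str]:
    """Stand-in for the symbolic engine: forward-chain deduction
    rules over the premises until no new facts appear."""
    facts = set(state.premises)
    # ... apply geometry rules here (angle chasing, similar triangles)
    return facts


def propose_construction(state: ProofState) -> str:
    """Stand-in for the language model: suggest one auxiliary
    object (a midpoint, a circle, a parallel line, ...)."""
    # ... a real system would sample from a trained model
    return "midpoint M of segment AB"  # placeholder suggestion


def solve(state: ProofState, max_rounds: int = 16) -> bool:
    """Alternate between symbolic deduction and LM proposals,
    as the article describes, until the goal is derived."""
    for _ in range(max_rounds):
        facts = deduce_closure(state)
        if state.goal in facts:
            return True                    # proof found
        aux = propose_construction(state)  # widen the search space
        state.constructions.append(aux)
        state.premises.add(aux)
    return False
```

The design point the article emphasizes is the division of labor: the language model supplies the creative leap (which auxiliary object to add), while the symbolic engine supplies the rigor (only facts it can actually derive count toward the proof).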

Related publication information:

https://doi.org/10.1038/s41586-023-06747-5

"China Science Daily" (2024-01-19 2nd Edition International)Editor | Zhao Lu
Typesetting | Guo Gang