A paper and a blog post about DeepMind's new model AlphaGeometry were published yesterday. The model solved 25 out of 30 geometry problems from the IMOs 2000–2022. The previous SOTA solved 10, and the average gold medalist solved 25.9 of the same problems:
![ ](https://codeforces.net/c9afc2/Screenshot from 2024-01-18 14-21-50.png)
The model was trained only on synthetic data, and it seems (to me) that more data would lead to better results:
![ ](https://codeforces.net/c9afc2/Screenshot from 2024-01-18 14-21-50.png)
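To make the synthetic-data idea above concrete, here is a minimal sketch (not DeepMind's code) of the general recipe: sample random premises, run a deduction engine to closure, and turn each derived fact plus its deduction trace into a (premises, goal, proof) training example. Both `sample_premises` and `deduce_with_trace` are hypothetical stand-ins for real components.

```python
import random

def generate_examples(sample_premises, deduce_with_trace, n=1000, seed=0):
    """Generate synthetic (premises, goal, proof) training triples.

    sample_premises(rng)      -> a random list of premises
    deduce_with_trace(prems)  -> iterable of (derived_fact, proof_trace)
    """
    rng = random.Random(seed)
    examples = []
    for _ in range(n):
        premises = sample_premises(rng)
        for goal, proof in deduce_with_trace(premises):
            examples.append((premises, goal, proof))
    return examples

# Toy demo: premises are integers, and the "engine" derives x+1 from x.
sample = lambda rng: [rng.randint(0, 9)]
engine = lambda prems: [(p + 1, f"{p}+1") for p in prems]
data = generate_examples(sample, engine, n=5)
print(len(data))  # 5
```

The appeal of this setup is that every example is correct by construction, so there is no labeling bottleneck.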
A notable thing is that AlphaGeometry combines a language model with a rule-bound deduction engine and solves problems with an approach similar to how humans do: the deduction engine exhaustively derives what follows from the known facts, and the language model proposes auxiliary constructions when deduction gets stuck.
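The loop can be sketched roughly as follows. This is an illustrative toy, not DeepMind's implementation; `deduce_closure` and `propose_construction` are hypothetical placeholders for the symbolic engine and the language model.

```python
def solve(premises, goal, propose_construction, deduce_closure, max_steps=10):
    """Alternate exhaustive symbolic deduction with LM-proposed constructions."""
    facts = set(premises)
    for _ in range(max_steps):
        facts |= deduce_closure(facts)   # rule-based deduction to a fixed point
        if goal in facts:
            return facts                  # goal reached: a proof exists
        # the "LM" suggests an auxiliary construction to unblock deduction
        facts.add(propose_construction(facts, goal))
    return None                           # gave up within the budget

# Toy demo with stand-in functions: deduction derives "b" once "a" is
# known, and the "LM" always proposes "a".
closure = lambda facts: {"b"} if "a" in facts else set()
propose = lambda facts, goal: "a"
print(solve({"x"}, "b", propose, closure) is not None)  # True
```

The key property is that correctness rests entirely on the symbolic engine; the language model only steers the search, so its suggestions can be unreliable without compromising the final proof.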
The paper and the blog post are both freely available online.
Own speculation: I don't see any clear reason why a similar strategy couldn't be applied to other IMO areas (at least number theory and algebra), but I'm not an expert and haven't read all of the details. Generating a lot of data about, for example, inequalities or functional equations doesn't sound much harder than generating data about geometry, but again, I might be missing some key reason why good data is easy to generate for geometry in particular.

I'm not sure whether this has direct implications for competitive programming AIs. Proofs of math problems can be verified automatically, but I'm not sure the same applies to algorithms. Still, a very interesting approach and very interesting results overall.
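On the verification point: the closest thing competitive programming has to automatic proof checking is stress-testing, i.e. comparing a candidate solution against a trusted brute force on random inputs. Unlike a checked proof, this only gives statistical evidence of correctness. A minimal sketch, using maximum subarray sum as the example problem:

```python
import random

def max_subarray_fast(a):
    # Kadane's algorithm, O(n): the candidate solution under test
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

def max_subarray_brute(a):
    # O(n^2) reference: obviously correct, used as the "ground truth"
    return max(sum(a[i:j]) for i in range(len(a))
                           for j in range(i + 1, len(a) + 1))

rng = random.Random(1)
for _ in range(200):
    a = [rng.randint(-5, 5) for _ in range(rng.randint(1, 8))]
    assert max_subarray_fast(a) == max_subarray_brute(a)
print("stress test passed")
```

A model that could produce machine-checkable correctness arguments for algorithms, rather than just passing random tests, would be a much stronger result, and that gap is exactly why I'm unsure the AlphaGeometry approach transfers directly.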