silxi's blog

By silxi, 11 months ago, In English

There has been a lot of recent buzz and discussion surrounding the potential of AI to be competitive in math and programming contests, prompted by initiatives such as the AIMO Prize, which promises a $5 million award to an AI that can earn a gold medal at the IMO, and Google's AlphaCode2, which is claimed to compete in Codeforces contests at roughly the level of a CM.

Which of the following tasks do you think would be the most difficult and most impressive for an AI to accomplish:

  • Winning a gold medal at IMO
  • Winning a gold medal at IOI
  • Winning a Div. 1+2 Codeforces contest
  • Writing/preparing a well-balanced, well-received Div. 1+2 Codeforces contest

(You can vote for multiple options.)

How soon, if ever, do you predict these milestones will be reached? If an AI can beat the best humans in a Codeforces contest, what does that mean for the future of online programming competitions? Is the ability for an AI to solve math and programming contest problems an indicator of its potential to produce novel math/computer science research? Please discuss!



»
11 months ago, # |

I think that if an AI model can write a well-received Div. 1 + 2 contest, then it can pretty much come up with very high-quality, unique ideas. I bet that, at that point, AI models will be advanced enough to do some novel research in STEM fields. Who knows when that will happen, though.

  • »
    »
    11 months ago, # ^ |

    Or perhaps we will discover that competitive programming problems are not such high-quality, unique ideas as many think. For example, training an AI to take existing problems and put them in a new scenario with small modifications seems much more feasible, and the result might still be good enough to be well-received.

    • »
      »
      »
      11 months ago, # ^ |

      I'll tell you though, if an AI made a round and no one wrote blogs afterwards complaining about similar previous problems, that'd be impressive.

»
11 months ago, # |

When an AI can do all of these, they will be the least of your worries.

»
11 months ago, # |

I'll give my answer only to the less common question, because variations of the other ones have already been discussed well in similar blogs.

Is the ability for an AI to solve math and programming contest problems an indicator of its potential to produce novel math/computer science research?

I am pretty sure that the answer is "no", considering the way current "AI" works. It is trained on huge datasets (more data means better coverage) in order to reproduce ideas similar to the ones in the data, with small variance in the output. I mean, it relies heavily on data, and without tons of our submissions, code snippets, and blogs it would be stuck somewhere between sorting a sequence and finding a LIS. This principle just cannot produce anything significantly novel, especially in science, because if there is enough data to train an ML model on something, it's not novel anymore!
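
To make that concrete: "finding a LIS" refers to the longest increasing subsequence problem, a textbook idea that shows up in exactly the kind of submissions and blogs described above. Here is a minimal C++ sketch of the standard O(n log n) approach (the function name and example input are just illustrative):

    #include <bits/stdc++.h>
    using namespace std;

    // Length of a strictly increasing subsequence in O(n log n).
    // tails[k] stores the smallest possible tail of an increasing
    // subsequence of length k + 1 found so far.
    int lisLength(const vector<int>& a) {
        vector<int> tails;
        for (int x : a) {
            auto it = lower_bound(tails.begin(), tails.end(), x);
            if (it == tails.end()) tails.push_back(x);  // extend the longest subsequence
            else *it = x;                               // same length, smaller tail
        }
        return (int)tails.size();
    }

    int main() {
        vector<int> a = {3, 1, 4, 1, 5, 9, 2, 6};
        cout << lisLength(a) << '\n';  // prints 4 (e.g. 1, 4, 5, 9)
    }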

  • »
    »
    11 months ago, # ^ |

    The reason people think the answer to the posed question may be yes, or at least that it would be a step in that direction, is that an AI is unlikely to solve hard contest problems just by training on enough data (otherwise it would already have been possible).

»
11 months ago, # |

Winning the ICPC World Finals

  • »
    »
    11 months ago, # ^ |

    I considered putting that, or some onsite finals like the AtCoder WTF, in the options. But do you think such an achievement would be qualitatively different from winning other kinds of programming contests?

»
11 months ago, # |

Yesterday: AlphaGeometry was announced, claiming to solve a couple of IMO geometry problems.