silxi's blog

By silxi, history, 10 months ago, In English

There has been a lot of recent buzz and discussion about the potential of AI to be competitive in math and programming contests, prompted by initiatives such as the AIMO Prize, which promises a $5 million award to an AI that can earn a gold medal at the IMO, and Google's AlphaCode2, which is claimed to compete in Codeforces contests near the level of a CM.

Which of the following tasks do you think would be the most difficult and most impressive for an AI to accomplish:

  • Winning a gold medal at IMO
  • Winning a gold medal at IOI
  • Winning a Div. 1+2 Codeforces contest
  • Writing/preparing a well-balanced, well-received Div. 1+2 Codeforces contest

(You can vote for multiple options.)

How soon, if ever, do you predict these milestones will be reached? If an AI can beat the best humans in a Codeforces contest, what does that mean for the future of online programming competitions? Is the ability of an AI to solve math and programming contest problems an indicator of its potential to produce novel math/computer science research? Please discuss!

  • +51


»
10 months ago, # | Rev. 2 | +15

I think if an AI model can write a well-received Div. 1 + 2 contest, then it can pretty much come up with very high-quality, unique ideas. I bet that, at that point, AI models will be advanced enough to do some novel research in STEM fields. Who knows when that will happen, though.

  • »
    »
    10 months ago, # ^ | 0

    Or perhaps we will discover that competitive programming problems are not built on ideas as high-quality and unique as many think. For example, if one trained an AI to take existing problems and put them in a new scenario with small modifications, I think that would be more feasible, and the result could still be good enough to be well received.

    • »
      »
      »
      10 months ago, # ^ | +7

      I'll tell you though, if an AI made a round and no one wrote blogs afterwards complaining about similar previous problems, that'd be impressive.

»
10 months ago, # | +15

When an AI can do all of these, they will be the least of your worries.

»
10 months ago, # | +2

I'll give my answer only to the less commonly asked question, because variations of the other ones have been discussed at length in similar blogs.

"Is the ability of an AI to solve math and programming contest problems an indicator of its potential to produce novel math/computer science research?"

I am pretty sure that the answer is "no", considering the way current "AI" works. It's trained on huge datasets (more data means better coverage) in order to reproduce ideas similar to the ones in the data, with small variance in the output. I mean, it relies heavily on data, and without tons of our submissions, code snippets, and blogs it would get stuck somewhere between sorting a sequence and finding a LIS. This principle just cannot produce anything significantly novel, especially in science, because if there is enough data to train an ML model on something, it's not novel anymore!
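(For readers unfamiliar with the reference: "finding a LIS" means computing the longest increasing subsequence of a sequence, a classic textbook task. Below is a minimal C++ sketch of the standard O(n log n) approach, included purely as an illustration of the kind of problem meant here; it is not tied to any particular AI system.)

```cpp
#include <bits/stdc++.h>
using namespace std;

// Length of the longest strictly increasing subsequence, O(n log n).
int lis_length(const vector<int>& a) {
    // tails[k] = smallest possible tail of an increasing subsequence of length k + 1
    vector<int> tails;
    for (int x : a) {
        auto it = lower_bound(tails.begin(), tails.end(), x);
        if (it == tails.end()) tails.push_back(x);  // extend the longest subsequence found so far
        else *it = x;                               // keep the tail as small as possible
    }
    return (int)tails.size();
}

int main() {
    vector<int> a = {3, 1, 4, 1, 5, 9, 2, 6};
    cout << lis_length(a) << "\n";  // prints 4 (e.g. 1, 4, 5, 9)
}
```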

  • »
    »
    10 months ago, # ^ | +5

    The reason people think the answer to the posed question may be yes, or that solving such problems would at least be a sign of being on track, is that it is unlikely for an AI to solve hard contest problems just by training on enough data (otherwise it would have been possible already).

»
10 months ago, # | +3

Winning the ICPC World Finals

  • »
    »
    10 months ago, # ^ | 0

    I considered putting that, or some onsite finals like the AtCoder WTF, in the options. But do you think such an achievement would be qualitatively different from winning other kinds of programming contests?

»
10 months ago, # | Rev. 2 | +18

Yesterday: AlphaGeometry was announced, claimed to solve a couple of IMO geometry problems.