How LGMs could already cheat using current AI. Or, are these not cheating?

Revision en2, by arvindf232, 2024-09-15 17:03:30

I experimented with o1-mini and tried to understand its implications for competitive programming on Codeforces at the 2500+ level. Obviously, current AI cannot outright provide you with solutions; however, there are already ways it can help quite a lot.

Brute-Force Writing

I believe most competitors regularly use brute forces. They are kind of an admission of failure: you cannot find your mistakes on your own. Previously, a brute force came with the cost of taking 5 minutes to write. I would not do it unless it was a very hard problem, or I had tried debugging by hand for 10 minutes and seen nothing. If AI usage is assumed, then these 5 minutes are completely saved.

Those 5 minutes correspond to roughly 10 rating points of performance on the final problem, and increasingly more on earlier problems.
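For concreteness, here is a minimal sketch of the kind of stress-testing harness such a brute force slots into. The concrete problem (maximum subarray sum), the generator, and the names `slow`/`fast` are placeholders of my own, not from any particular contest.

```kotlin
import java.util.Random
import kotlin.math.max
import kotlin.system.exitProcess

// Sketch only: compare an obviously-correct brute force against the real solution
// on many small random inputs, and print the first input where they disagree.

fun randomCase(rng: Random): IntArray = IntArray(rng.nextInt(6) + 1) { rng.nextInt(21) - 10 }

// Brute force: maximum subarray sum in O(n^2), written with no cleverness at all.
fun slow(a: IntArray): Int {
    var best = a[0]
    for (i in a.indices) {
        var cur = 0
        for (j in i until a.size) { cur += a[j]; best = max(best, cur) }
    }
    return best
}

// The solution being debugged: Kadane's algorithm in O(n).
fun fast(a: IntArray): Int {
    var best = a[0]
    var cur = 0
    for (x in a) { cur = max(cur + x, x); best = max(best, cur) }
    return best
}

fun main() {
    val rng = Random(12345)
    repeat(100_000) {
        val test = randomCase(rng)
        if (slow(test) != fast(test)) {
            println("FAILED on ${test.toList()}: expected ${slow(test)}, got ${fast(test)}")
            exitProcess(1)
        }
    }
    println("all random tests passed")
}
```

Small inputs keep a failing case readable; the brute force itself is the part the AI now writes for free.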

Translation between Languages

This may fall under the code-completion tools mentioned in the rules (https://codeforces.net/blog/entry/133941), so maybe it is fine.

This is really for the unfortunate people who, like me, use Kotlin, or who use Python. But if we can use the language most comfortable for us while having its one downside (speed) completely negated, that seems not particularly fair.

When I am using Kotlin, I need to constantly keep constant factors in mind, and I risk TLE in some cases. Some time limit issues can be offset by a well-optimised prewritten library. I personally find this to be worth it (or I would have switched long ago), and these trade-offs have always been there, making (to me) Python vs Kotlin vs C++ a mostly fair and informed decision.

Code Trimming/Rewrite

This may also fall under the code-completion tools mentioned in the rules (https://codeforces.net/blog/entry/133941), so maybe it is fine.

Basically, I ask the AI to "remove all unused functions" or "rewrite the code to be cleaner". This should have been an IDE feature, and IDEs can definitely do "remove all uncalled functions". But now it can be much smarter and much more consistent, like "refactor this to use X instead". Or you could write simple O(n^2) code and ask "optimise this using a segment tree". I don't know where the line is.

Heavy Data Structures

The AI can do dumb things (dumb with respect to the 1900 level) very, very fast. Good competitive programmers today can do more things, but at a moderate speed. There is one place where this mismatch is especially strong: implementing well-known data structures with slight augmentations.

There are a few data structures that are extremely flexible but that you almost never use: red-black trees, treaps, segment tree beats, link-cut trees. They can support so many queries, but previously using one required (1) a working implementation in your coding style, plus either (2A) preparing prewritten code for the exact version you turn out to need, or (2B) going through your code, changing every single related spot, and hoping you don't miss anything.

Option 2B is not very practical for long data structures, and option 2A is possible but limited to a few of the most common tasks, as there are simply too many situations to prepare for. Instead, we just don't use them, since they are typically not the intended solution and there is usually an approach that is harder to think of but easier to implement.

With AI, though, you can generate your 100-300 line treap or red-black tree in no time, supporting whatever highly specific requests you have.

This is the situation at the near-3000 level, but for purple/orange/red people, perhaps regular segment trees, merge sort trees, and 2D data structures all fall into this list.

Example

In 1976E - Splittable Permutations, we need to maintain an array of distinct values that supports two operations: given values v and w, insert w immediately before or after v.

The no-brain option is a treap, as treaps can famously support any dynamic array operations. The second option is to think offline: the operations correspond to a tree, so we can use an idea similar to a Kruskal reconstruction tree. The third option, which is definitely the best, is to realise this can be handled with a basic linked list implementation. During practice, I ended up with the second option.
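For reference, a minimal sketch of what the linked-list option might look like (my own illustration, not my actual submission): since all values are distinct, the prev/next pointers can simply be keyed by value, and both insertions become O(1).

```kotlin
// Sketch only: a doubly linked list over distinct values, keyed by value.
// Assumes the initial array is non-empty and every queried value exists.
class ValueLinkedList(initial: List<Int>) {
    private val next = HashMap<Int, Int?>()
    private val prev = HashMap<Int, Int?>()
    private var head: Int = initial.first()

    init {
        for (i in initial.indices) {
            prev[initial[i]] = if (i > 0) initial[i - 1] else null
            next[initial[i]] = if (i + 1 < initial.size) initial[i + 1] else null
        }
    }

    // Insert new value w immediately after existing value v.
    fun insertAfter(v: Int, w: Int) {
        val nxt = next[v]
        next[v] = w; prev[w] = v; next[w] = nxt
        if (nxt != null) prev[nxt] = w
    }

    // Insert new value w immediately before existing value v.
    fun insertBefore(v: Int, w: Int) {
        val prv = prev[v]
        prev[v] = w; next[w] = v; prev[w] = prv
        if (prv != null) next[prv] = w else head = w
    }

    // Walk the list once to recover the current array.
    fun toList(): List<Int> {
        val out = ArrayList<Int>()
        var cur: Int? = head
        while (cur != null) { out.add(cur); cur = next[cur] }
        return out
    }
}
```

For example, `ValueLinkedList(listOf(3, 1, 2)).apply { insertAfter(1, 5); insertBefore(3, 4) }.toList()` gives `[4, 3, 1, 5, 2]`.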

Yet, the first option certainly requires the least thinking. I asked ChatGPT for a treap implementation supporting those queries and replaced my linked list with it. AC followed rather easily: 281351200. It was not first try, but it only took a quick read of the ChatGPT code to understand that it had written some functions to be O(n) and to ask it to correct the implementation.

For reference, this is my exact prompt:

Build a treap in kotlin that maintains an array that supports the following operations: (1) given a value in the array, insert a value after it or before it (2) output the current array It can be guarantee that the array have no duplicate values

Note in particular that the AI didn't tell me I was being dumb and could have used a linked list; instead, it enabled me to go forward with a very dumb idea.

Additional Example

I don't have another concrete problem in mind. But consider the following task, which has appeared many times:

  • Have a usual ordered set (so you can add and remove elements, with values up to 1e9)
  • Find the sum of the k largest elements, or find the sum/max/min of elements with keys in some range [l, r]

Usually, the best course of action is to (1) go offline and coordinate compress, (2) do some binary search to find the two corresponding cuts, and (3) use a normal segment tree to maintain what you want. This is a complicated multistep process, but it is the simplest one using only typical prewritten libraries.
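A minimal sketch of that route, with names and assumptions of my own (all values are known up front for compression, and stored values are distinct, as in an ordered set): two Fenwick trees over the compressed indices, one for counts and one for sums, answer both the range-sum and the sum-of-the-k-largest queries in O(log n). The max/min variants would swap the sum tree for a segment tree, which this sketch does not cover.

```kotlin
// Sketch only: offline coordinate compression + two Fenwick (BIT) trees.
class CompressedOrderedSet(allValues: List<Long>) {
    private val sorted = allValues.distinct().sorted() // compressed index -> real value
    private val n = sorted.size
    private val cnt = LongArray(n + 1) // Fenwick tree of element counts
    private val sum = LongArray(n + 1) // Fenwick tree of element sums

    private fun update(tree: LongArray, at: Int, delta: Long) {
        var i = at
        while (i <= n) { tree[i] += delta; i += i and (-i) }
    }

    private fun prefix(tree: LongArray, upTo: Int): Long {
        var i = upTo; var res = 0L
        while (i > 0) { res += tree[i]; i -= i and (-i) }
        return res
    }

    // 1-based compressed index of a registered value.
    private fun index(v: Long): Int {
        val i = sorted.binarySearch(v)
        require(i >= 0) { "value $v was not registered up front" }
        return i + 1
    }

    // First 1-based compressed index whose value is >= v.
    private fun lowerBound(v: Long): Int {
        var lo = 0; var hi = n
        while (lo < hi) { val mid = (lo + hi) / 2; if (sorted[mid] < v) lo = mid + 1 else hi = mid }
        return lo + 1
    }

    fun add(v: Long) { val i = index(v); update(cnt, i, 1); update(sum, i, v) }
    fun remove(v: Long) { val i = index(v); update(cnt, i, -1); update(sum, i, -v) }

    // Sum of stored elements whose value lies in [l, r].
    fun sumInRange(l: Long, r: Long): Long {
        val lo = lowerBound(l); val hi = lowerBound(r + 1) - 1
        return if (lo > hi) 0 else prefix(sum, hi) - prefix(sum, lo - 1)
    }

    // Sum of the k largest stored elements (all of them if fewer than k are stored).
    fun sumOfKLargest(k: Long): Long {
        val total = prefix(cnt, n)
        val take = minOf(k, total)
        if (take <= 0) return 0
        val skip = total - take // number of smallest elements to exclude
        if (skip == 0L) return prefix(sum, n)
        // Fenwick walk: largest position whose prefix count is still < skip.
        var pos = 0; var rem = skip
        var step = Integer.highestOneBit(n)
        while (step > 0) {
            if (pos + step <= n && cnt[pos + step] < rem) { pos += step; rem -= cnt[pos] }
            step = step shr 1
        }
        // Indices 1..pos+1 then hold exactly the `skip` smallest elements (values are distinct).
        return prefix(sum, n) - prefix(sum, pos + 1)
    }
}
```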

But this is a trivial task for a (custom) red-black tree. At some point, I decided it was worth it to learn, and I implemented my own version of a red-black tree that can handle the sum version. It took quite a while (4+ hours), so I still haven't implemented the max version.

But now anyone can just get an implementation -- well, you should definitely get one before the contest, but my actual concern is doing this in-contest.

Conclusion

The point is that with AI, we can make use of many more data structures, to their maximal flexibility, with far less knowledge required. We only need to know what is potentially possible, and sometimes the performance in terms of constant factors, and never worry about how painful the implementation is.

Opinions

The first thought I had was "this is disgusting, there is no way this should be allowed". But then I realised I am not qualified to judge and deliver the verdict, and my current opinion is "I don't know".

Prewritten libraries have always been there, and they have always provided serious tactical advantages -- the cost is your preparation, and the preparation requires the same skillset, so it never trivialised anything.

Note the following obvious logical conclusion: a library generated before the contest must be legal.

The distinction then becomes "pre-generated library" vs "library generated in-contest". In that case, pre-generating every single variant seems a sensible thing to do.

In terms of problem solving, it is absolutely clear that I did not provide every single idea to ChatGPT. I may know that a red-black tree can handle a maximum query, but that is not the same as knowing the exact way to maintain those parameters and the exact query functions needed to get it.

Of course, even if the act itself is OK, it may not be possible to separate it from other not-OK uses, such as asking "what data structures could I use to support these queries?".

Future subdivision of Competitive Programming

I believe there are currently two major rulesets: prewritten code allowed (all online competitions) and not allowed (all onsite competitions, though possibly with printed materials). For example, if AtCoder ARC and AGC continue to allow all AI tools, that would become a different field compared to CF, not to mention the other smaller platforms.

Additional Comments

Due to the emergence of AI, we may need to properly distinguish between prewritten libraries and newly AI-generated code. I propose the following simple rule in the face of this: all prewritten libraries must be made public. I don't fully support this; I just add it to get the conversation going.
