I recently got access to GitHub Copilot, and it struck me that it could generate a lot of algorithms that someone might not know, or even solve some of the simpler problems with ease. Despite this technology being very new, I think it's worth adding a rule restricting or prohibiting the use of certain AI code completers. From what I'm seeing in the development of this field, it's only a matter of years before someone trains an AI that can perform in coding contests at an LGM level.
Edit: To clarify, I don't believe it will be Copilot that gets trained to this level. LGM is an exaggeration, but a Master-level AI is certainly possible, given how much the easier problems come down to pattern recognition.
Examples to demonstrate the power of Copilot:
I couldn't really make it write out many advanced algorithms, but this is sure to change in the future. So far I find it's best with very simple, trivial functions (checking if a number is a power of two, converting a string to an integer, checking for duplicates, finding a minimum, etc.)
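To give a sense of what I mean by "trivial functions", here is a rough C++ sketch of the kinds of one-liners Copilot will happily complete from just a signature or a comment (these are my own illustrative versions, not literal Copilot output):

```cpp
#include <algorithm>
#include <string>
#include <unordered_set>
#include <vector>

// Check if n is a power of two (a positive power of two has exactly one set bit).
bool isPowerOfTwo(long long n) {
    return n > 0 && (n & (n - 1)) == 0;
}

// Convert a string to an integer (assumes valid decimal input).
int toInt(const std::string& s) {
    return std::stoi(s);
}

// Check whether a vector contains any duplicate values.
bool hasDuplicates(const std::vector<int>& v) {
    std::unordered_set<int> seen(v.begin(), v.end());
    return seen.size() != v.size();
}

// Find the minimum element of a non-empty vector.
int findMin(const std::vector<int>& v) {
    return *std::min_element(v.begin(), v.end());
}
```

None of these require any real insight, which is exactly why a completer that has seen thousands of copies of them gets them right.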
Personally, I'll be coding without Copilot in competitive programming, because I believe it's just another form of cheating and doesn't help me improve. Regardless, I want to hear what the community thinks about this and whether it should be banned or not.