
unalive's blog

By unalive, 5 weeks ago, In English
  • Vote: +72

»
5 weeks ago, # |
Rev. 2   Vote: +34

Just feel so stupid that I can't even beat ChatGPT :(

»
5 weeks ago, # |
  Vote: +851

Hacked :)

»
5 weeks ago, # |
  Vote: +1

What do you mean?

»
5 weeks ago, # |
  Vote: 0

CP is Chess now.

»
5 weeks ago, # |
  Vote: +108

I am impressed that people even attempted to use ChatGPT on problem F, something rated 1400 points higher than ChatGPT. ChatGPT would feel flattered, if it could feel.

»
5 weeks ago, # |
  Vote: 0

That's some next-level prompting skill, for sure.

»
5 weeks ago, # |
  Vote: +8

Feels like this is more of an L on the authors' part for not making strong enough tests, if even brute force passes.

  • »
    »
    5 weeks ago, # ^ |
      Vote: +28

    If you think about it, this might actually be the way to go, since offering full feedback makes it much easier for people who are unskilled at CP to GPT their way through problems.

    Does it worsen the experience for other people? Yes, but I'd rather have weaker pretests than hundreds of GPT greys above me in the final ranklist.

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: 0

      This actually is a great idea: the authors can feed their problem into GPT and design the pretests in such a manner that its solution passes them. But another problem could arise: people could easily hack these solutions and get points.

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: +60

      Maybe Hacker Cup was right all along...

    • »
      »
      »
      25 hours ago, # ^ |
        Vote: +8

      This can be fixed by stress testing, though. For most problems on Codeforces, writing a brute force and a test generator only takes 5 minutes if you have the boilerplate ready (a minimal sketch is below). Having a 25% chance to do harder problems in a contest is pretty powerful: say you have a 25% chance to solve 3 problems, 50% to solve 4, and 25% to solve 5. If you just do enough contests so that your rating converges, that's some easy CM/M right there.
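
      As an illustration of how little boilerplate this takes, here is a minimal stress-testing sketch in Python for a made-up problem (maximum subarray sum); the problem and the function names are placeholders for illustration, not anything from this round.

```python
# Stress-testing sketch: compare a fast solution against an obviously correct
# brute force on many small random tests (hypothetical problem: max subarray sum).
import random

def brute_solve(a):
    # O(n^2): try every non-empty subarray.
    return max(sum(a[i:j]) for i in range(len(a)) for j in range(i + 1, len(a) + 1))

def fast_solve(a):
    # O(n): Kadane's algorithm.
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

def gen():
    # Small random tests catch most wrong-answer bugs quickly.
    n = random.randint(1, 8)
    return [random.randint(-10, 10) for _ in range(n)]

for _ in range(10000):
    a = gen()
    expected, got = brute_solve(a), fast_solve(a)
    if expected != got:
        print("FAILED on", a, "expected", expected, "got", got)
        break
else:
    print("all random tests passed")
```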

»
5 weeks ago, # |
  Vote: +36

So, is it acceptable to describe greys and greens as borderline retarded? This is clearly rude, insulting and retarded at the same time.
I don't know much about the Codeforces Code of Conduct, but it can't be that one can just insult others like that.
Whether the blog author meant to refer to the particular grey and green participants mentioned in the blog or to all people with these ranks, I think it's equally unacceptable, and this blog should be edited or deleted.

  • »
    »
    5 weeks ago, # ^ |
    Rev. 2   Vote: +76

    I think it's equally unacceptable and this blog should be edited or deleted

    No it's not. Cheaters are borderline retarded. Newbies and pupils are not.

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: +31

      I disagree. Cheaters are just cheaters: if you prove someone cheated, you apply whatever rules you have for that.
      Anyway, the author is clearly describing all greys and all greens, not the two cheaters in the post.

  • »
    »
    5 weeks ago, # ^ |
      Vote: 0

    womp womp

»
5 weeks ago, # |
Rev. 3   Vote: +8

I'm more sad about the fact that it took me 50 minutes to carefully implement D, even though I got the idea instantly, while o1-preview solves it in less than a minute. Guess the "borderline retarded" goes all the way up to at least 1836 rating.

  • »
    »
    5 weeks ago, # ^ |
      Vote: 0

    The glimmer of hope here is that there are people rated way above 1804. Meaning AI still can't beat all of us, and we have the potential to be better than o1.

»
5 weeks ago, # |
  Vote: -19

Side note: I think Educational Rounds should be unrated.

  • »
    »
    5 weeks ago, # ^ |
      Vote: -7

    Take it as an unrated contest if you want to, Codeforces allows you to do that. Why cry?

    • »
      »
      »
      5 weeks ago, # ^ |
      Rev. 2   Vote: +26

      Because many people who do educational rounds to reach $$$X$$$ rating do not have the capability to reach $$$X$$$ rating in regular rounds, including me in the past (namely $$$X=1900$$$). That defeats the point of rating. Knowing some classical tricks does not mean you can solve real problems of the same difficulty.

      • »
        »
        »
        »
        5 weeks ago, # ^ |
          Vote: 0

        omg, do you mean that edu rounds are easier?

        • »
          »
          »
          »
          »
          5 weeks ago, # ^ |
          Rev. 2   Vote: +20

          Their problems are more classical, which means you can usually find similar techniques in other problems or even in books or lectures. To be good at them you need to learn more classical techniques, like binary search, instead of improving your problem-solving mindset.

      • »
        »
        »
        »
        5 weeks ago, # ^ |
          Vote: -20

        Again, how does it affect you? Your rating only depends on the contests YOU choose to take rated. Don't do edu rounds and skip the inflation in YOUR rating, simple fix.

        • »
          »
          »
          »
          »
          5 weeks ago, # ^ |
            Vote: 0

          So what's wrong with giving suggestions? Besides, it doesn't affect me either way, because I'm Div. 1 and forcibly unrated in those contests. And it's not about inflation. It's that educational rounds serve more of an educational purpose than actual competition, so we might want to exclude them from regular ratings.

      • »
        »
        »
        »
        5 weeks ago, # ^ |
          Vote: 0

        I guess that's true: my average rating change in the last 4 educational rounds is +81, but I usually lose rating in regular rounds.

»
5 weeks ago, # |
  Vote: +18

Please don't call me retarded, I am trying to improve :(

  • »
    »
    5 weeks ago, # ^ |
      Vote: 0

    Don't think so. The author used that word just to further satirize those cheaters. You're not "the grey". You are a grey Newbie who intends to improve on their own.

  • »
    »
    5 weeks ago, # ^ |
      Vote: 0

    He meant the "grey" and "green" in the mentioned submissions, not greys and greens as a whole!

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: +10

      No, he meant greys and greens as a whole. And he made that quite clear by not including the third cheater (the blue one).

»
5 weeks ago, # |
Rev. 3   Vote: 0

Ah, don't forget A as well. I hacked 7 submissions that run in $$$O(XY)$$$, $$$O(\min(X,Y)^2)$$$, or $$$O(X^2+Y^2)$$$, which are obviously trash under the current constraints.

Hacks:

  • »
    »
    5 weeks ago, # ^ |
      Vote: 0

    I don't think these people cheated though... I think they just couldn't think of a better construction...

    However, there was also 288542915... this guy fully KNEW the construction, but decided to run some random nonsense loops before outputting it... and the saddest thing is, I couldn't even hack him with the worst case (when the nested loops run to 999 and 1000 respectively, and there are 5000 test cases)...

  • »
    »
    5 weeks ago, # ^ |
      Vote: 0

    How can I be as good at math as you?

»
5 weeks ago, # |
  Vote: +81

This post actually inspires a great way to combat cheating using GPT -- simply make the pretests weaker so that those brute-force solutions by GPT are allowed to pass them. As shown in the earlier OpenAI blog post on CP, the model's performance increases quite significantly as the number of allowed submissions grows; in addition, it is known that AI performs worse when the feedback it receives is not 100% accurate (i.e. pretests passed but FST). This really seems like a plausible way to reduce AI's effectiveness while affecting a genuine human solver much less (any competent contestant submitting an O(n^2) brute force to an n=10^5 problem should know they'll FST anyway).

The above can be done in multiple ways, e.g. not including a max test in the pretests, which also helps reduce the pretest judging time. A downside is that people can now hack all of these brute-force solutions to get a lot of points -- maybe we can redesign the hacking system in some way. I'm sure there are better methods than this, but it's a suggestion for a starting point.

That being said, if CF does want to take this path, it might be beneficial to make an announcement about it, mainly to protect the newer contestants on CF who have grown used to the strong pretests of recent years, so that they do not get frustrated unexpectedly.

  • »
    »
    5 weeks ago, # ^ |
      Vote: +4

    There's another downside to the above example method -- people can now submit a brute force to a difficult problem on an alt, lock it, and copy a legitimate solution from the room on their main.

    Maybe it's time to reconsider in-contest hacking in the GPT era... But maybe someone can come up with a clever method that preserves the hacking system while still making the above cheat-combating method work.

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: +28

      I don't think very many people want to preserve in-contest hacking.

      • »
        »
        »
        »
        5 weeks ago, # ^ |
          Vote: +48

        How about just making pretests weak and not allowing in-contest hacking?

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: 0

      Is this water on her nose or something else?

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: 0

      Maybe it's time to reconsider in-contest hacking

      Yes, maybe it's even possible to create a separate short phase after coding where people can challenge others' solutions and get points for that. Like, imagine being the top coder in your room just based on hacks. But I don't think a single Codeforces round has ever used a format like that, not that I remember.

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: 0

      If there is a concern that participants might hack GPT brute-force solutions, it could be possible to run system tests immediately after the contest and only open up hacking afterward.

  • »
    »
    5 weeks ago, # ^ |
    Rev. 2   Vote: +24

    any competent contestant submitting an O(n^2) brute force to an n=10^5 problem should know they'll FST anyway

    MrDindows will strongly disagree with you

  • »
    »
    5 weeks ago, # ^ |
      Vote: +24

    I like the idea in principle, but there have been several problems where constant factor is a real issue, and having max tests in the pretests is our main line of defense for measuring such things (I don't want to have to make random max tests and run them in custom invocation for every problem).

    For a very recent example, many solutions to 2035F - Tree Operations with the right complexity got TLE on the pretests, as that problem requires a low-constant-factor implementation.

  • »
    »
    5 weeks ago, # ^ |
      Vote: 0

    I don't really think so. You can counter that by local testing, i.e. writing brute-force code to check the correctness of the produced program and benchmarking to see whether it can run within the time limit (a sketch of such a check is below). I'm sure a green or a cyan would be more than competent enough to do those things. The only exceptions where I think this strategy would fail are problems where generating strong tests is extremely difficult, such as graph problems, but they don't appear often enough to prevent cheaters from still posting a high performance. Codeforces' best bet is probably to just let them do what they want, because they are going to leave the platform after like 5 contests to get that juicy interview anyway.
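
    For concreteness, here is a minimal sketch of that kind of local benchmark in Python; solve(), the input size, and the time limit are all made-up placeholders, not tied to any specific problem.

```python
# Local benchmark sketch: run the candidate solution on a worst-case-sized random
# input and compare the wall-clock time against the (assumed) time limit.
import random
import time

def solve(a):
    # Placeholder standing in for the program under test (here: just sort the input).
    return sorted(a)

N = 2 * 10**5       # assumed maximum input size for the problem
TIME_LIMIT = 2.0    # assumed time limit in seconds

a = [random.randint(1, 10**9) for _ in range(N)]

start = time.perf_counter()
solve(a)
elapsed = time.perf_counter() - start

print(f"ran in {elapsed:.3f}s (limit {TIME_LIMIT}s)")
if elapsed > TIME_LIMIT / 2:
    # Leave headroom: the judge machine and the real max tests may both be slower.
    print("too close to the limit -- expect a TLE risk on the real tests")
```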

»
5 weeks ago, # |
  Vote: 0

GPT is such a noob, it doesn't even know how to calculate complexity.