Psyho's blog

By Psyho, 10 years ago, In English

After several months of inactivity I finally added something new to my blog: http://psyho.gg/overview-of-programming-contests/

A long time ago, I set a goal to write a small series of articles about competitive programming, so that most of the commonly asked questions (what's the point of it, what do people do there, how do you practice, can you earn money from it, which sites should you use, etc.) could simply be answered by linking to the appropriate article. Unfortunately, after switching to being a full-time game designer I ran out of time to do so. So I won't promise that I'll ever finish the series, but at least I'll try :)

If you think that I have missed something very important, let me know (either here or on my blog). Just keep in mind that the article is already long and I don't want to write about niche contests.

Also, a small reminder: you have less than one month to gather a team for the Deadline24 & Challenge24 contests. Competing in the Challenge24 qualifier is worth it, even if you don't want to attend the onsite finals.

  • +177

»
10 years ago, # | +43

Nice post :).

One thing I would somewhat disagree with is HackerRank's supposedly low problem quality. Maybe it is true in general, but my experience is limited to participating in the "AdInfinitum" contests, which are devoted mainly to math problems. I have participated in them four times and enjoyed all of them. The problem quality there is pretty high and there are always thorough editorials.

  • »
    »
    10 years ago, # ^ | +8

    +1. HackerRank contests have a few more well-known problems, but usually all of the problems are interesting, even the easy and well-known ones =)

  • »
    »
    10 years ago, # ^ | +8

    I agree that HR can have high-quality contests. But I've also seen there, hands down, the worst ML problems of my life. I cannot recommend a site that often hosts contests that are just a waste of time, considering there are many better (on average) options.

    Keep in mind that you're an experienced guy and your knowledge of the contests surely goes beyond my blog post (at least for the "classic" stuff). My goal was to give a relatively quick overview of the different types of contests in a structured manner. I agree that saying HR sucks is a slight oversimplification, but it's a small trade-off between keeping the size of the post under control and being super accurate.

    • »
      »
      »
      10 years ago, # ^ | 0

      I'll be talking about algorithmic contests here.

      I agree that HR sometimes has a problem with contests that consist only of standard, technical problems, or of 4 standard ones and 1 absolutely insane problem... which is to be expected when you look at the colours of the authors' handles: you can't expect blues/violets to view as many problems as standard as reds do (whoa, that's a lot of "as"-es!). When there are higher-rated authors it's good, but sometimes there aren't any, and that's when the difficulty failures happen.

      You're giving it too much importance, though. Most active coders would probably think "okay, this is also practice" after solving all the problems in an hour. It's kinda disappointing, but that's not enough to consider the site shit as long as there are nice problems from time to time.

      • »
        »
        »
        »
        10 years ago, # ^ | +5

        > It's kinda disappointing, but that's not enough to consider the site shit as long as there are nice problems from time to time.

        Apparently I'm failing to explain my point. I have competed in a few HR contests (and viewed many more). The common problem I saw was poor, often ambiguous problem statements, bad tests, and sometimes incorrectly specified constraints. And that's what I meant by the quality of the problems — not how original they were. Beginners don't care about "nice problems from time to time", since they haven't solved enough problems to fully appreciate such a thing.

        • »
          »
          »
          »
          »
          10 years ago, # ^ | +3

          I admit to tl;dr-ing the blog post and reading just the comments :D

          Yeah, stuff like weak tests and weird statements is HR's problem (not only HR's; India-based contests are notorious for this). That's what happens when you don't have people well-versed in international (English) problemsetting doing at least language checks. CF sometimes has funny Russian English, but it manages to get the point of the problems across; HR often completely fails in that regard.

          It's a good thing my strong point is understanding problem statements *twirls moustache*

          And beginners usually care about books and not contests...

          • »
            »
            »
            »
            »
            »
            9 years ago, # ^ | Rev. 4 | +2

            > Yeah, stuff like weak tests and weird statements is HR's problem (not only HR's; India-based contests are notorious for this). That's what happens when you don't have people well-versed in international (English) problemsetting doing at least language checks.

            What does poor English have to do with bad test suites?

            Check out this example from today's contest. The problem statement is clear enough, and so are the examples. But then the test suite checks for the wrong solution: you get AC by printing the result of A & B, while the correct solution gets you WA.

            To top it off, notice the markup issues with the problem statement, for example the heading.

            This is not just bad English, it's general sloppiness.

            So I completely agree with OP's point: with so many good problems on sites like CodeForces, TopCoder, and UVa, there is simply no reason to waste time and effort on low-quality sites with their defective problems.

        • »
          »
          »
          »
          »
          10 years ago, # ^ | Rev. 2 | 0

          If this is what you mean, then I believe you're probably judging CodeChef as it was a long time ago. Now I would say that the problems in CookOffs (I don't do LunchTimes) are not terribly interesting, but they are quite well prepared, clearly formulated, and very good training for non-expert coders. The monthly challenge is also a very good contest, though too time-consuming for most (certainly for me). The challenge usually does contain hard and interesting tasks.

          My comment does not apply to contests of some Indian universities hosted on CodeChef; these can be pretty abysmal. I'm only talking about the regular series.

»
10 years ago, # | +5

Thanks for the interesting post, I was waiting for it :)

> recreational algorithms.

And now I know what this stuff is called :)

> Both competitions feature an opportunity to find bugs in other people's codes (and get additional points if you're successful at it), but CF designed it horribly and the best thing to do, is to ignore it entirely.

You prefer the TC hacking system? Do you have any ideas on how to improve the CF version of it? I see only 2 weak sides — we don't know what cases are covered by pretests; and when a problem has weak pretests, the number of available challenges strongly depends on your roommates (and their hacking skills — the fastest may grab 5-7 easy hacks in just 1-2 minutes, and the difference between being the fastest and being the second fastest usually isn't so huge at TopCoder). But hacking during the round gives you more possible strategies — at least you need good time management.

  • »
    »
    10 years ago, # ^ | Rev. 2 | +5

    Pretests are the main problem.

    Over a year ago I wrote the following post on the TC forums (yeah, I just googled it):

    > I never understood the point of pretests in CF. They create a horrible challenge experience, because you have no idea which mistakes will get caught by the pretests, and finding that out takes a lot of time. The best approach is usually to closely watch the leaderboard to see if there's anyone with a significant number of challenges and then check which problem is "challenge-heavy". This is really an awful design for a competition.

    And it got +16/-0, so I guess I'm not alone in that opinion.

    I guess the idea behind the pretests was to discourage people from submitting stupid solutions, but I doubt it's actually worth it. I would remove pretests entirely, or leave only some very simple ones.

    Having the challenge phase combined with the submission phase is a completely different thing. Generally, I prefer separate phases, but it's a matter of opinion. In TC the challenge phase is mandatory, so it encourages you to learn some new skills. And since it starts at the same time for everyone, it actually has an interesting dynamic (blind challenges at the start, an exhaustive search for bugs, blind challenges at the end depending on the scoreboard). In CF you're almost always better off working on problems, since otherwise you'll get a lower score for your submissions. I also don't like the mechanic of "locking" a problem, since it looks like an artificial fix for something that just doesn't work too well.

    • »
      »
      »
      10 years ago, # ^ | +3

      The idea of pretests is very clear — to tell people with stupid mistakes that their solution is wrong. It is very sad when someone solves a problem, it passes the samples, and then it fails on the systests because of some stupid bug. The number of "almost correct" solutions on TC failing systests is way too high, and it makes the results much more random. Pretests are there to prevent such things.

      Note the distinction between having a small bug which causes WA in some cases (like a missing long long or no corner-case analysis) and having a bad idea for the algorithm, so-called heuristics. Passing pretests and failing systests in the first case is really sad, while in the second case it is fully deserved.
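
      To make the first case concrete, here is a minimal, hypothetical sketch (the problem, limits and variable names are made up for illustration, not taken from any real round): the algorithm is fine and the tiny samples pass, but the accumulator type is too small, so a max-sized pretest with large values would expose it.

      // Hypothetical illustration of the "small bug" case: summing up to 1e5
      // values of up to 1e9 overflows a 32-bit int (signed overflow, formally UB),
      // so the buggy sum agrees with tiny samples but diverges on a big pretest.
      #include <bits/stdc++.h>
      using namespace std;

      int main() {
          int n;
          scanf("%d", &n);

          int buggySum = 0;        // overflows once the total exceeds ~2.1e9
          long long okSum = 0;     // the one-type fix that pretests try to force

          for (int i = 0; i < n; i++) {
              int a;
              scanf("%d", &a);
              buggySum += a;       // wrong on large tests
              okSum += a;          // correct everywhere
          }

          printf("%d %lld\n", buggySum, okSum);
          return 0;
      }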

      • »
        »
        »
        »
        10 years ago, # ^ | +16

        No, it's not clear. Why aren't they just included in the given examples? You've given the reasons why there should be more tests, not why pretests exist.

        Most mistakes are stupid. What's the point of catching some random subset of them, when no one knows what that particular subset is? Knowing what types of tests are usually included in pretests gives you a small advantage, and thus pretests promote a steeper learning curve, something that should almost always be avoided if it doesn't serve any other purpose. I could make many more arguments for why pretests are stupid, so I'm still curious what the idea behind them is.

        • »
          »
          »
          »
          »
          10 years ago, # ^ | +26

          "Why aren't they just included in the given examples?" — because problemsetters don't want to provide counterexamples to all bad ideas directly. They don't want competitors with a good idea to fail systests because of stupid mistakes, but they also want them to figure out specific bugs / realize that a particular idea is bad by themselves.

          Another reason is a technical one. CF is not TC, so it doesn't have ridiculous constraints like "n <= 50" in every task, and max tests are often included in the pretests in order to prevent people from failing systests because of a too-big constant hidden behind the O notation. Tests with n = 10^6 can't be included in the problem statement. Uploading them separately would also be a bad idea, since there are people with slow Internet connections, and it would also be a challenge from the CF developers' point of view.
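
          To illustrate why such tests end up in the pretests rather than in the statement, here is a rough, hypothetical sketch of a max-test generator (the limits, value range and seed handling are assumptions, not taken from any particular problem); its output is several megabytes long, which obviously cannot be pasted into a statement.

          // Hypothetical max-test generator: n = 10^6 random values.
          #include <bits/stdc++.h>
          using namespace std;

          int main(int argc, char* argv[]) {
              // Optional seed argument so the test can be regenerated deterministically.
              unsigned seed = (argc > 1) ? (unsigned)atoi(argv[1]) : 12345u;
              mt19937 rng(seed);

              const int n = 1000000;                         // assumed maximum constraint
              uniform_int_distribution<int> val(1, 1000000000);

              printf("%d\n", n);
              for (int i = 0; i < n; i++)
                  printf("%d%c", val(rng), i + 1 == n ? '\n' : ' ');
              return 0;
          }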

          "Most mistakes are stupid. What's the point of catching some random subset of them, when no one knows what that particular subset is?" — I agree that it is not fully fair that pretests against bug X may be included while pretests against bug Y are not, but corner cases are very often included, and testing a large subset of bugs is still better than not testing any of them.

          "Knowing what types of tests are usually included in pretests gives you a small advantage, and thus pretests promote a steeper learning curve, something that should almost always be avoided" — I didn't quite get what you meant by promoting a steeper learning curve; could you elaborate on that?

          • »
            »
            »
            »
            »
            »
            10 years ago, # ^ | +10

            I somewhat agree with your points. They make sense, but I wouldn't say they are important enough to ruin the challenges/hacks.

            I meant a steeper learning curve [on CF], i.e. it gives an unfair advantage to people who are more accustomed to the CF system. By knowing what kinds of pretests are usually added, you can better estimate the probabilities of the potential types of wrong solutions that may slip through the pretests. This gives you an advantage in hacking, and in the case when your own solution gets WA on the pretests.

            In TC, I usually know what I'm looking for right at the start of the challenge phase, because I know which corner cases were not included in the examples. In CF I have no freaking idea, since some of those corner cases might have been included in the pretests. This means that I have to invest several minutes just to be able to tell whether challenging (hacking) is actually worthwhile. Although the better approach is to simply wait and see if there's anyone with multiple successful hacks. This makes the whole hacking strategy very simple (from a tactical point of view) and rather uninteresting.

            • »
              »
              »
              »
              »
              »
              »
              10 years ago, # ^ | 0

              So... the steeper learning curve part can be summed up as: you improve more on a specific contest platform the more contests you do on that platform. Sounds completely normal to me and I don't think it applies to CF more than TC or even ACM. I'm using that principle (of the most straightforward path — doing X is the best way to get good at X) all the time.

            • »
              »
              »
              »
              »
              »
              »
              10 years ago, # ^ | Rev. 2 | +6

              "They make sense, but I wouldn't say they are important enough to ruin the challenges/hacks." -> I think that can be called a "generation difference" :D. I'm a CF child and you're not, and hacks are not that important to me; I treat them more as an additional feature. Maybe that's because I don't think that reading some Chinese code where all variable names are at most 2 letters long, there is a lot of copy-paste, and there are 6 instructions on one line is something I can benefit from (not to offend anyone, but I think the average quality of Chinese CP code is definitely lower than the overall average, but that is a separate thread).

              "I meant a steeper learning curve [on CF], i.e. it gives an unfair advantage to people who are more accustomed to the CF system." -> that "knowing what is typically included in pretests" is somewhat fishy to me, and I find it funny that you insist that getting used to CF gives an advantage because of that, while TC has its shitty arena which I can complain about for many hours. There are no numbers big enough to describe how many different problems I've encountered because of it; I often get pretty angry at the arena during a contest, and that fact + shitty samples is the main reason why at the moment I'm strong red on CF and mid-yellow on TC :P. (Maybe I just have bad luck, because other people don't seem to have as many problems as I do.)

              • »
                »
                »
                »
                »
                »
                »
                »
                10 years ago, # ^ | +5

                On TC vs CF: I don't think that TC is strictly better than CF or the other way around. Both have their pros and cons; I'm using TC mainly as an example. The whole point of my ranting was that there's a design flaw, where pretests and hacking don't work well together. And that's really it.

                TC also has many flaws which make it harder for newcomers. Really unnecessary flaws. And a whole bunch of them are related to the arena. But that's completely orthogonal to the issue I'm talking about :)

                Also, I wouldn't call TC's examples shitty. If they're shitty, then what do you call the examples from ACM-ish contests? Your rating disproportion is probably a mix of arena problems, being good at different kinds of problems, a lower success rate (since it matters much more on TC), and caring less about TC (the psychological aspect) :) Still, it's only a rating, and since the competition formats are different, you're forced to learn slightly different skills on each site.

          • »
            »
            »
            »
            »
            »
            10 years ago, # ^ | +12

            One thing this doesn't address — why aren't pretests shown to you after you pass them? That way you wouldn't have to blindly guess at what is included and what isn't.

            • »
              »
              »
              »
              »
              »
              »
              10 years ago, # ^ | +8

              Maybe because a pretest can be too large to be shown?

              And even if Codeforces provided a feature for downloading pretests, we would have to spend a lot of time analyzing what a pretest really covers if it is large.

              • »
                »
                »
                »
                »
                »
                »
                »
                10 years ago, # ^ | +17

                Makes sense, but they already have functionality to show you just the beginning of such tests.

                It's still strictly better than nothing, and usually you can get a good idea of the corner cases covered by checking whether any of them has N=0, N=4, N=max, etc.

      • »
        »
        »
        »
        10 years ago, # ^ | +22

        I've always thought pretests' purpose is to make you solve the problem before you can go hacking.

        • »
          »
          »
          »
          »
          10 years ago, # ^ | Rev. 2 | 0

          Is solving the problem before hacking of any importance? If somebody were allowed to lock a problem without solving it, I don't see any reason why you would need to solve the problem before hacking.

          Pretests are very important from the problem-solving point of view, and that matters more than the hacking point of view, so the design of hacking should be adjusted to the design of problem solving, not the other way around. At least that's my opinion on it.

          • »
            »
            »
            »
            »
            »
            10 years ago, # ^ | +6

            I am not sure exactly what point you are making here, but I think it is important that there are pretests for a problem, to at least make the lives of multi-accounting cheaters harder. If one could lock a problem without solving it, cheaters could use two accounts and simply copy a solution. At least now they have to pass the pretests, which usually means having a solution that is pretty close to correct.

            • »
              »
              »
              »
              »
              »
              »
              10 years ago, # ^ | 0

              Aahhh, ok, I didn't think about it that way. I thought the whole of online CP relies on people's honesty, so it shouldn't be a big problem, but indeed, it would be extremely easy to cheat in that case. Though you can't simply copy code from the hacking window, but either way.

              • »
                »
                »
                »
                »
                »
                »
                »
                10 years ago, # ^ | 0

                Yes, I agree that in general honesty is reasonable to assume; however, I think it is logical to at least make cheating more difficult. For instance, when one turns on unofficial standings for some contests, you see people who just copy other people's solutions and then submit them in virtual contests, even though in a virtual contest it is made very clear that one should not look at actual solutions and should compete as if it were a real contest. And yes, the individual would have to retype the solution, but that obviously would not take much time. I think that both Codeforces and TopCoder, as they are set up right now, make cheating reasonably hard, although in TopCoder Div 2 I sometimes see individuals with 249.xx and 499.xx points on the first two problems, who are obviously multi-accounting. Fortunately, the best competitors on both sites are simply very good, and not just cheaters, who tend to do badly even when cheating.

    • »
      »
      »
      10 years ago, # ^ | 0

      You call watching the leaderboard an awful design, but blind challenges an interesting dynamic. Nice double standard there — both are exploiting the system, just in different ways.

      • »
        »
        »
        »
        10 years ago, # ^ | 0

        Seriously, I can't really have a discussion with you when you're not trying to understand what I wrote there.

        "You call watching the leaderboard an awful design" — nope, please read it again. I'm explaining why pretests ruin the hacking part.

        • »
          »
          »
          »
          »
          10 years ago, # ^ | Rev. 2 | 0

          > The best approach is usually to closely watch the leaderboard to see if there's anyone with a significant number of challenges and then check which problem is "challenge-heavy". This is really an awful design for a competition.

          You describe something and begin the next sentence with "this". I understand the logical structure of that argument perfectly, but it's not my fault that you meant something other than what you wrote. 2/10, low-quality bait.

          And I disagree with the "pretests = awful design" part as well, unless the competition you're talking about is dice rolling. Swistakk already summed it up.

          • »
            »
            »
            »
            »
            »
            10 years ago, # ^ | Rev. 2 | 0

            I'll make it easier for you:

            hacking during the contest: fine

            pretests: fine

            pretests + hacking during the contest: awful design

            • »
              »
              »
              »
              »
              »
              »
              10 years ago, # ^ | 0

              Okay, I'll come back to my initial argument and add 3 words.

              You call watching the leaderboard when pretests exist an awful design, but blind challenges an interesting dynamic. Nice double standard there — both are exploiting the system, just in different ways.

»
10 years ago, # | 0

I can add that CTF hacking competitions are also in many ways programming competitions (though they require much broader knowledge and skills than just programming algorithms), and sometimes there's even a special category called PPC (professional programming and coding), which usually includes tasks where you need to win some game using algorithms (sometimes there are also classical algorithmic problems, and sometimes you need to automate routine operations like recognizing captchas).

  • »
    »
    10 years ago, # ^ | 0

    I think there are very few people (if any?) who perform well in both CTFs and algorithmic contests. I've talked with a few people from the Dragon Sector team about CTFs, and my impression was that those contests actually have very little in common with (algorithmic) programming contests.

    That being said, they are definitely noteworthy. It's just that I focused on stuff loosely related to algorithms, since this is as far as my knowledge goes.

»
10 years ago, # | +9

Very helpful article, thank you psyho!

What kind of game are you working on? I remember that in the TCO materials you said you would write an AI better than any other in AAA titles. I'm interested in this because I'm on my way to becoming a game designer and developer.

Another question: do you know of any contests like the Google AI Challenge, where you create an AI for competitive games with 2 or more players? I'm not very interested in TC Marathons because they are just one-player games, but I am interested in multi-player AI contests.

The last question: have you tried working on AI for real-world games, like Go or Starcraft? You know, we don't have an AI that can defeat professional players yet. Are you interested in working on that kind of AI?

  • »
    »
    10 лет назад, # ^ |
      Проголосовать: нравится +9 Проголосовать: не нравится

    That idea of writing "an AI better than any other in AAA titles" probably won't come to life. During the second half of 2014 I created around 5-7 decent game concepts. One of them can be summed up as "an RTS where the AI is better than a human", which I really wanted to do, but it's too ambitious and too risky for a first project, and I would most probably run out of funds while doing it.

    Ultimately, I've chosen a concept that is based on a unique core game mechanic (I've never seen a similar game, and I've been playing games since I was four). It's a top-down beat 'em up designed for 4-player local play, with an unconventional control scheme and a very high skill cap. Unfortunately, so far I've been acting more as a project manager than as a designer/programmer (I had to fire the lead artist and hire a new one, etc.), which means that I'm already behind schedule :)

    About AI contests: unfortunately, I don't. I remember that doudouille shared such a site on the TC forums, but I've never checked it out, so I don't know if it's good enough. I'm guessing it should be at least decent, considering the amount of traffic the site gets. One problem I often encountered in writing AI for contests is that I don't gain too much from them. I believe that competing in MMs (optimization stuff) will actually help you more with writing good AI, at least in the long run. The biggest downside is that "single-player games" are just not that fun :)

    I had a very short-lived side project with Go. A friend of mine spent a considerable amount of his PhD time researching AI in Go. He knew that back then (it was ~5 years ago) the AIs considered state-of-the-art were written by very small teams (usually a single person). He asked me to help him, and we worked together for a week or two, but ultimately I didn't have enough time (it collided with my poker pro "career", and there were other issues as well) and I resigned from the project.

  • »
    »
    10 years ago, # ^ | +8

    I think this is the contest that you are looking for: http://theaigames.com/

»
10 years ago, # | +22

I would like to add a comment on CF's pretests, as a separate thread.

Basically, if you have ever been a problemsetter for a CF round, you know there is a rule that specifies the kinds of test cases that should be included as pretests. Then, as a regular contestant in other rounds, you have an advantage, because you know in advance that you don't have to consider those cases when trying to hack solutions.

MikeMirzayanov, you might want to consider this?

  • »
    »
    10 years ago, # ^ | Rev. 2 | 0

    Maybe we can just make that document public to all users.

    But I think it is not that helpful — I still fail by not using long long, even knowing that we must include a test in the pretests to catch such an error.

    And I don't think all problemsetters follow those rules; for example, I remember that for the first 2 tasks in Div2 we must add explanations to the first 2 examples, but sometimes they don't.