eatmore's blog

By eatmore, history, 6 years ago, In English

This post in Code Jam group

Hi,

I want to discuss interactive problems in Code Jam. Code Jam didn't have interactive problems until recently, and there are still major differences between interactive problems in Code Jam and in other competitions where they are more established.

One difference is in how sample interactions are presented. I think that in Code Jam, sample interactions are very inconvenient to read. For regular input and output, there is a place where you can read just the input and just the output, but for interactive problems, you have to read a mix of pseudocode, comments and the actual input/output.

Other competitions that have interactive problems usually use more concise formats, which are easier to read quickly. One possibility is to just present the input and the output separately, like here (problems H and I). This doesn't convey the relative order of the input and output lines, but it can be reconstructed from the description of the interaction protocol. Also, it is possible to use empty lines to show the order, like here (problem C). There are multiple approaches that can be used to show a sample interaction in a single column, like showing input and output lines in different colors, or prefixing them with different symbols (for example, left and right arrows). In all cases, it is possible to add an additional column for comments.
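For example, a single-column presentation with arrow prefixes might look like this (the interaction itself is made up purely for illustration):

    > 2        judge announces the limit on queries
    < ? 5      solution asks a query
    > 11       judge replies
    < ! 11     solution reports the answer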

Another feature of interactive problems specific to Code Jam is that when a solution produces an incorrect result, the judge program will communicate this to the solution and wait for it to terminate. This doesn't make sense: once the solution has produced an incorrect result, there is no point in continuing to run it, and in other competitions it would be terminated immediately in such a situation. But apparently the Code Jam team expects contestants to write code that checks the judge's output and handles the case where it indicates that the solution's output is incorrect. This is useless: it doesn't make any solution pass that wouldn't pass otherwise, and that's all that matters for a solution. I for one don't write such code (my solutions always skip this part of the judge's output), but I still need to remember to read (and ignore) that line, something I don't have to do in other competitions.
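For illustration, the read-and-ignore pattern described above boils down to something like this (a minimal Python sketch; the exact format of the judge's reply is an assumption, not taken from any specific problem):

    import sys

    def send_answer(guess):
        # Illustrative helper: print the final answer, then read the judge's
        # verdict line and discard it. Checking its contents would change
        # nothing about whether the solution passes.
        print(guess, flush=True)
        sys.stdin.readline()  # read and ignore the judge's reply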

Finally, I'd prefer sample interactions to be correct. It's simple: for regular problems, sample inputs and outputs are (almost) always correct, so why should it be any different for interactive problems? Yet both of this year's interactive problems available so far have sample interactions in which the solution gives an incorrect answer, apparently to demonstrate the judge's response to it (the right response, as I already said, is to terminate the solution with a WA verdict).

What do you think of all this?


»
6 years ago, # |

"Once the solution produces an incorrect result, there is no point in continuing to run it, and in other competitions, it would be immediately terminated in such a situation."

I don't think this is usually the case. On Codeforces it definitely isn't; see, for example, 872D - Something with XOR Queries. Similarly, in BOI 2018 there's this problem.

It doesn't really bother me that much either, since I usually write a separate function that writes to and reads from the grader, and you can just put an exit(0) there if the grader returns -1.
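(A minimal sketch of that kind of wrapper, assuming a grader that answers each query with one integer and reports a wrong query with -1; the function name and protocol details are only illustrative.)

    import sys

    def ask(query):
        # Illustrative wrapper: send one query, read one reply, and stop
        # immediately if the grader reports a wrong query with -1.
        print(query, flush=True)
        reply = int(sys.stdin.readline())
        if reply == -1:
            sys.exit(0)  # verdict already decided, no point in continuing
        return reply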

  • »
    »
    6 years ago, # ^ |

    Actually, both Kattis and DOMjudge have recently changed the protocol for interactive problems so that when the validator detects a wrong answer, that is also the verdict that will be shown, see this PR.

    I think the interaction protocol on Code Jam would also be a lot cleaner if they split each test case into a separate run of the submission. Does anybody know why they don't do that? Is it because of server load?

»
6 years ago, # |

The visual design thing: I particularly loved the sample interaction in this year's ICPC finals dress rehearsal:

IMHO it leaves very little room for interpretation: you instantly see who writes what and when. Of course, the only problem is that it's not straightforward to recreate in a web browser. :/

Another thing is that I needed some time to get used to the sample interactor. I had to figure out that I needed to look at the interactor's source code and understand some comments in there just to be able to run it. It could be another thing repelling people from interactive problems in Code Jam.

»
6 years ago, # |

You forgot to mention that they provide a tester script, which may be more important than how pretty the sample input/output looks in the statement.
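(For reference, running a solution against that local tester looks roughly like the command quoted in a later comment, minus the extra flag; the binary name here is just a placeholder.)

    python interactive_runner.py python testing_tool.py 0 -- ./my_solution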

»
6 years ago, # |

I wish their script could copy the interaction history to the console so I could see what went wrong...

  • »
    »
    6 years ago, # ^ |

    I modified the interactor for Round 1A to print the messages going through, for debugging. Just pass the -debug flag to the interactor like this:

    python interactive_runner.py -debug python testing_tool.py 0 -- ./my_binary

    interactive_runner.py

    Note that I have only tested this with working programs. I haven't explored what happens when the program crashes (RTE) or the interactor crashes.

    This should print to the terminal something like this:

    output
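
    (The actual modified interactive_runner.py is in the collapsed attachment above. Purely as an illustration of the general idea, and not that script, a standalone wrapper that echoes every line passing between the judge and the solution to stderr could look like the sketch below; all file names are placeholders.)

    # debug_tee.py -- illustration only, not the modified runner above.
    # Start the judge and the solution, forward lines between them, and
    # echo every line to stderr with a prefix showing who sent it.
    import subprocess
    import sys
    import threading

    def pump(src, dst, tag):
        # Copy lines from src to dst, logging each one to stderr.
        for line in iter(src.readline, b''):
            sys.stderr.write(tag + line.decode())
            dst.write(line)
            dst.flush()
        dst.close()

    judge = subprocess.Popen(['python', 'testing_tool.py', '0'],
                             stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    solution = subprocess.Popen(['./my_solution'],
                                stdin=subprocess.PIPE, stdout=subprocess.PIPE)

    threads = [
        threading.Thread(target=pump, args=(judge.stdout, solution.stdin, 'judge> ')),
        threading.Thread(target=pump, args=(solution.stdout, judge.stdin, 'sol>   ')),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()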