The coding phase is finished. Please wait for the final testing and the results.
Good day, everybody.
Nothing written below is an April Fools' joke, though the date is certainly a funny occasion :)
We are glad to announce a new experimental contest on Codeforces, "Problem Parser Contest".
Have you ever run training sessions using the archives of previous contests? Have you ever downloaded contest archives from the Internet to test your solutions locally? If the answer is 'yes', then you could not help noticing that every organizer insists on inventing his own way (format) of distributing problems. Yes, it is true!
I have prepared and organized a huge number of training sessions based on previous contests. It is often a rather monotonous task: you frequently have to write scripts that rename a problem's input/output files into a standard form. Archives name them very differently: in one problem they may be river.in.1, river.out.1, river.in.2, river.out.2, and so on; in another, tests/1.dat, tests/1.ans, tests/2.dat, tests/2.ans, etc.
At some point I got fed up with it and wrote a script that detected the patterns in the paths of the test files, using some heuristics and the regularities I had noticed. The script worked well, but it was not perfect.
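To give a flavour of that kind of heuristic, here is a minimal sketch in Java; the substitution table and the class and method names are my own illustrative assumptions, not the actual script:

import java.util.*;

// Illustrative sketch only: pair candidate input files with answer files
// by trying a few common token substitutions ("in" -> "out"/"ans", "dat" -> "ans").
public class TestPairHeuristic {
    public static Map<String, String> pairTests(List<String> paths) {
        Set<String> all = new HashSet<>(paths);
        Map<String, String> pairs = new LinkedHashMap<>();
        for (String path : paths) {
            for (String[] sub : new String[][]{{"in", "out"}, {"in", "ans"}, {"dat", "ans"}}) {
                // Replace the token only where it is not part of a longer word,
                // e.g. "river.in.1" -> "river.out.1", "tests/1.dat" -> "tests/1.ans".
                String candidate = path.replaceAll(
                        "(?<![A-Za-z])" + sub[0] + "(?![A-Za-z])", sub[1]);
                if (!candidate.equals(path) && all.contains(candidate)) {
                    pairs.put(path, candidate);
                    break;
                }
            }
        }
        return pairs;
    }

    public static void main(String[] args) {
        List<String> files = Arrays.asList(
                "river.in.1", "river.out.1", "tests/1.dat", "tests/1.ans");
        // Prints {river.in.1=river.out.1, tests/1.dat=tests/1.ans}
        System.out.println(pairTests(files));
    }
}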
Codeforces is working on functionality that will make the project a better training ground. We plan to give users the opportunity to prepare and organize training sessions directly on Codeforces. You will be able to prepare a training session yourself or pick a public one to participate in. Have you prepared one yourself? That's great! Make it available to the general public and people will be grateful to you.
We have already done a lot and implemented part of the software. Much remains to be done: there are many technically challenging parts and plenty of plain mundane work. However, I am sure that the community of young and talented programmers can cope with some of these problems no worse than the Codeforces team can. One example is the problem of automatically recognizing the contents of an archive.
In a nutshell, you are given a set of files that belong to a problem. Your program should find a way to pick out the test files (input files and answer files), the solutions, and the checker source code, based on detected patterns, heuristics, or anything else you come up with.
The key point is that the winner's code will be released as a small library distributed under a free license. Codeforces will use this code to automate mundane processes. You will really help us, your name will become part of Codeforces's history of great deeds, and the code you have written will serve the community regularly. Have I persuaded you to take part yet?
Since nearly all of the Codeforces codebase is written in Java, we expect solutions to this problem to be written in Java as well. We do not think you will need any profound knowledge of Java; basic proficiency in the language should be enough.
We have even decided to set up a small prize fund. The prize money will be distributed among the five best participants:
- 1st prize — 280 dollars,
- 2nd prize — 140 dollars,
- 3rd-5th prizes — 70 dollars each.
Problem Statement
Your task is to write a Java program that will be run from a directory containing the files of one or several problems.
The relative paths of all files that belong to the problem in question are listed in the files.lst file, which is located in the root directory. The path separator is the character '/' (the UNIX separator).
For example (files.lst):
tester/check_a.pas
tester/check_a.exe
a/tests/02.in
a/tests/01.ans
a/tests/01.in
a/tests/02.ans
a/tests/gen.exe
a/tests/t.bat
a/tree_mm.exe
a/tree_mm.cpp
a/doall.bat
a/problem.xml
a/validator.cpp
a/tree_dm_wrong.cpp
The program is allowed to read and analyze only the files listed in files.lst.
The program's task is to pick out the set of tests of the given problem (if there are several sets of tests, the program should pick out the official one), the sources of the solutions (if any are found) and the source code of the checker (if it is found).
The result should be written to the standard output stream. The output should contain from one to three sections: the tests section, the solutions section and the checker section (only the tests section is obligatory).
The tests section should start with the line "tests:". It should then contain one or more lines, each consisting of the name of a test's input file and the name of the file with the answer to that test, separated by a ':' character (a colon). The tests should be listed in increasing order of their numbers in the original problem. If several test sets are found, only the official tests should be output.
For example (the tests section):
tests:
a/tests/01.in:a/tests/01.ans
a/tests/02.in:a/tests/02.ans
The solutions section, if present in the output, should come second. The section starts with the line "solutions:", followed by lines containing the names of the solution source files in lexicographical order. Both correct and wrong solutions are regarded as solutions (wrong ones are often present in jury archives as well).
For example (the solutions section):
solutions:
a/tree_dm_wrong.cpp
a/tree_mm.cpp
The checker section, if present in the output, should start with the line "checker:" and should consist of a single element: the name of the file with the checker source.
For example (the checker section):
checker:
tester/check_a.pas
Thus, the complete output should look like this:
tests:
a/tests/01.in:a/tests/01.ans
a/tests/02.in:a/tests/02.ans
solutions:
a/tree_dm_wrong.cpp
a/tree_mm.cpp
checker:
tester/check_a.pas
All file names in the output should appear in exactly the same form in which they are listed in the files.lst file.
Your solution will be run with limited memory: "-Xmx32M -Xss1M" (heap size 32 MB, stack size 1 MB). The time limit is 1 second.
The main class of your program should be named ProblemParser and reside in the default package.
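To make the required format concrete, here is a minimal sketch of such a program. Its heuristics are deliberately naive: the ".in"/".ans" pairing and the "check"/"valid"/"gen" keyword filters below are illustrative assumptions rather than a recommended approach, but the sketch does read files.lst and print the sections in the format described above:

import java.io.IOException;
import java.nio.file.*;
import java.util.*;

public class ProblemParser {
    public static void main(String[] args) throws IOException {
        // files.lst sits in the root of the directory the program is run from.
        List<String> files = Files.readAllLines(Paths.get("files.lst"));

        List<String> tests = new ArrayList<>();
        List<String> solutions = new ArrayList<>();
        String checker = null;

        for (String path : files) {
            String name = path.substring(path.lastIndexOf('/') + 1).toLowerCase();
            boolean source = name.endsWith(".cpp") || name.endsWith(".pas")
                    || name.endsWith(".dpr") || name.endsWith(".java");
            if (name.endsWith(".in")) {
                // Naive guess: the answer lies next to the input, with ".ans" instead of ".in".
                String answer = path.substring(0, path.length() - 3) + ".ans";
                if (files.contains(answer)) tests.add(path + ":" + answer);
            } else if (source && name.contains("check")) {
                checker = path;
            } else if (source && !name.contains("valid") && !name.contains("gen")) {
                solutions.add(path);
            }
        }

        Collections.sort(tests);      // crude stand-in for ordering by test number
        Collections.sort(solutions);  // the statement requires lexicographical order here

        StringBuilder out = new StringBuilder("tests:\n");
        for (String t : tests) out.append(t).append('\n');
        if (!solutions.isEmpty()) {
            out.append("solutions:\n");
            for (String s : solutions) out.append(s).append('\n');
        }
        if (checker != null) out.append("checker:\n").append(checker).append('\n');
        System.out.print(out);
    }
}

Run as java -Xmx32M -Xss1M ProblemParser from the example directory above, this sketch happens to reproduce the sample output; a real submission would, of course, need far more robust pattern detection.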
How it will be held
The contest will last two weeks and will end on April 15 at noon. We have not yet finished writing the system for running the contest, and we will improve and expand it soon.
The tabs you already know well await you: send, my packages, status and the participants' standings. You can submit either a single file (ProblemParser.java) or a zip archive with your project's source code.
During these two weeks, testing will be done on 30 tests. The first ten of them are open, and you can download them (updated). The answers to the tests are available in the archive, in answer.lst. The remaining 20 tests are unknown to you. Participants are sorted by non-increasing number of points; participants with equal points are ordered by decreasing time of the last attempt. You can submit solutions as many times as you like; if a queue builds up, we will limit the submission frequency. Only the score of your last submission counts.
A solution can earn one point on each test. Even a partial mistake on a test may cause that test to fail.
When the main part of the contest is over, we will hold a final testing round. For each participant, we will test their last attempt on an expanded set of tests. The set may contain 20 new tests plus the 20 tests (11th-30th) you have seen before.
All the tests are real problems (or, perhaps, slightly modified versions of them). The problems are taken from various sources; we tried to cover the geography of their origins as widely as possible.
If this experience proves fruitful, we will try to expand the experiment. I have many more interesting tasks in mind whose solutions would be useful to the world.
May the best programmer win, MikeMirzayanov
UPD: The given archive has been updated. It is recommended to download it again.
I'm waiting for the English translation.
TC always describes the random distribution and all ranges. So... the optimal solution = the solution that gets the best total score over all possible input data (if you have infinite time, you can brute-force all of the opponent's possible next moves and, taking their probability distribution into account, choose the best variant).
problem/Source/tester.dpr
problem/Source/TESTER.PAS
Why is the .dpr considered the checker, and not the *.pas?