Badum badum...
The Internet Problem Solving Contest takes place again in 2014! The registration's started (those of you who competed last year should've gotten an e-mail about it) and will be open until the end of the contest.
The contest will take place from 15.6.2014 10:00 UTC to 15.6.2014 15:00 UTC and will be preceded by a short practice session.
You can find the registration form, archive and everything at the contest site.
The rules can be found at the contest site or in my blog post about last year's IPSC. That's why this post is short — you can just read that one and replace the dates.
Also, if you think one blog post isn't a good enough emphasis, here are two more :D
*CONTEST HAS ENDED!* You can find the solutions in the archive.
Here is a summary of some registered teams by CF handles. It's not very long, but better than nothing.
Team name | Member 1 | Member 2 | Member 3 | Place |
---|---|---|---|---|
Zenith | I_love_Hoang_Yen | flashmt | ngfam_kongu | 39 |
MSU Trinity | Zlobober | sankear | malcolm | 31 |
cutie, dropout & useless woman | vadimmm | Rubanenko | baba_beda | 113 |
XZ Team | eatmore | AlexFetisov | winger | 26 |
KPZ | eduardische | popoffka | Alex_2oo8 | 28 |
Break a leg | Olja | Oleg805 | andgein | 270 |
Charles University Legion | fhlasek | k21 | simsa.st | 12 |
SPb SU 4 | Dmitry_Egorov | PavelKunyavskiy | yeputons | 9 |
And separately for single-person teams.
Team name | Member 1 | Place |
---|---|---|
Alexander Udalov | udalov | 28 |
duolCauqA | Aquacloud | 170 |
Xellos | Xellos | 15 |
Snowbear | marat.snowbear | 44 |
Futures Leaguer | gs12117 | 12 |
This post should stay in recent actions.
Check the link to the contest site.
Hm, the link's acting strange: it was written correctly in HTML, but http://codeforces.net/ was added before it when displayed on the page. Adding http:// in it solves the problem.
I have a final exam at that time :(
Shit happens. I had an onsite competition during CF round 248.
Maybe you could write an alternative one later? It's already bad enough when there aren't multiple dates for exams.
It would be nice if we started posting our teams together with our CF handles.
+8 but no one has replied with their team? Well, I'll start :D
My team: Zenith
Members: I_love_Hoang_Yen flashmt ngfam_kongu
My team: TrailBlazers
Members: asdoc nishad94 xennygrimmato
I think the author could start adding teams to the main post.
MSU Trinity:
Zlobober, sankear, malcolm.
Yup, I'll update after queries :D
team: cutie, dropout & useless woman
Members: vadimmm, Rubanenko, baba_beda
Why useless?
she is cute :)
murrr. :3
You mean baba_beda? I think it is she who is referred to as 'cutie'. So the question of who the useless woman is remains open :-)
The cutie is vadimmm, so the handles are mentioned in the same order as in the team name :)
BTW, don't forget to google translate her nickname ;-)
How is he supposed to do that without knowing the Russian alphabet?
If you know that it's Russian, you select the language manually and type in "baba beda"; there will be a message saying "maybe you meant баба беда?", and clicking that will give you the translation. I checked that before posting my previous comment.
like this :)
Havka-papstvo: Egor, pashka and Petr
If the authors read this topic, please publish a PDF version of the statements on some mirror (maybe with a link here on CF) when the contest starts. Every year there are problems with site accessibility.
It's best to write directly to the official contact mailbox — if it doesn't get read there, then it's hopeless anyway :D
I'll at least try to put the statements and archive on Dropbox or somewhere once they're available and I've downloaded them.
They let you download the full package (problem set, input files, etc.) before the contest starts; after that you just wait for a short text password. It's easy, and we haven't had any problems with it (this year or the previous one).
XZ team: eatmore AlexFetisov winger
KPZ: eduardische popoffka Alex_2oo8
Count me in.
single-person team: duolCauqA
As usual, very nice and fun contest. Huge thanks to the organizers!
The only technical downside in my opinion was problem E — we realized too late in the contest that it gives full information about the test cases, which is exploitable to get the full solution, but by that time the queue was too large and we had to wait 2-5 minutes for each test to evaluate. Sure, that problem is unavoidable, but the system also didn't let us submit the next code without waiting for the result of the previous one (which would have helped massively, since all 6-9 submissions needed to get the information are totally independent of each other), so we couldn't make it in time. So I would really prefer if this restriction weren't present in the future.
How to solve problem I hard?
The solutions have been published already. Short idea: fix a vertex; we need to assign a binary string to each son of this vertex so that the sum of (string length * son's subtree size) is minimal; the optimal solution is exactly Huffman coding.
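In case it's useful, here is a rough sketch (not the official code; the function name and the example subtree sizes are made up for illustration) of that per-vertex cost. Given the children's subtree sizes, the minimum of the sum of (code length * subtree size) over binary prefix codes is the classic Huffman cost, computed by repeatedly merging the two smallest weights:

```cpp
#include <functional>
#include <iostream>
#include <queue>
#include <vector>

// Huffman cost: repeatedly merge the two smallest weights; each merge
// adds one bit to the code of every leaf below it.
long long huffmanCost(const std::vector<long long>& weights) {
    std::priority_queue<long long, std::vector<long long>, std::greater<long long>>
        pq(weights.begin(), weights.end());
    long long cost = 0;
    while (pq.size() > 1) {
        long long a = pq.top(); pq.pop();
        long long b = pq.top(); pq.pop();
        cost += a + b;
        pq.push(a + b);
    }
    return cost;  // how a vertex with a single child is scored depends on the exact statement
}

int main() {
    // made-up example: children with subtree sizes 5, 3, 2, 2
    // optimal code lengths 1, 2, 3, 3 -> cost 5*1 + 3*2 + 2*3 + 2*3 = 23
    std::cout << huffmanCost({5, 3, 2, 2}) << "\n";
}
```

Summing this value over all internal vertices then gives the total answer.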
Thanks for the response! Did you know about Huffman coding before the contest?
Yeah, I did. But during the contest, it was more guesswork: I tried various greedy approaches and then realized that what we're trying to do (build a binary tree whose leaves are substrings with cost = depth * number of occurrences) is really similar to what Huffman coding does — I guessed that it could be Huffman and then realized why it works.
The solution for I is not published yet.
Also, fun fact: we somehow managed to solve it without Huffman coding, by sorting the sons' subtree sizes in descending order and then calculating the minimal sum with the following DP: a[i][j][k] — the minimal sum if we have processed i sons and are left with k different strings of length j. So, if (N-i) <= k, we just fill the remainder with j; otherwise we can go from a[i][j][k] to a[i+1][j][k-1] (k > 1) and a[i][j+1][min(N-i, 2*k)].
This is O(N^3), but in reality we don't use most of the cases — the only thing we couldn't solve by this was the root of the final test case in I2, but the sons' subtree sizes are 12479 and 1 (4999 times), so this bit is solvable by hand.
Or you can just let it run for 2-3 minutes.
We used a map for storing DP values for faster execution, and we ran out of memory for it, so we got an exception rather than the code running for too long.
I tried to solve I with a similar O(N^3) DP, except I wasn't able to solve the last case. After some wrong answers, I managed to remove [j] from my DP state by precomputing the suffix sums of the subtree sizes, so a[i][k] goes to a[i+1][k-1] (k > 1) and s[i] + a[i][min(N-i, 2*k)], which runs in O(N^2).
How to solve G2 and M2?
As I said: the solutions have been published... I don't know, anyway :D
They made a preliminary version of the booklet available: http://ipsc.ksp.sk/2014/real/solutions/booklet.pdf
Thanks
Did you manage to solve G2? They only give the formula, but how do I get there?
You should construct, for every winning position, some move that takes it to a losing position. But in this case the description of winning/losing positions is complicated, so it is very hard to write a solution (though it is possible to figure it out by yourself).
They also provide code to some of the problems: http://ipsc.ksp.sk/archive.
Here is G2: http://ipsc.ksp.sk/2014/real/solutions/g-solve.cpp. They don't give a proof of their approach, however.
The proof of G2 will be available in the final version of the booklet, the author is still writing it :)
I'm surprised at the low number of solutions of F hard, considering it was just about googling the first 10^7 digits of π (slightly more doesn't matter, a lot more isn't a good idea anyway), finding all first occurrences of permutations in it and writing a backtrack. I could've solved it if I had just a bit more time...
The condition that we want the smallest occurrence actually helps optimize the naive backtrack — since the order of the 3 triples of lines and that of the lines in each triple doesn't matter, we can limit ourselves to searching sudokus with o[0] > o[3] > o[6] and o[3k] > o[3k + 1] > o[3k + 2] (o[i] is the offset of the i-th row). This way, if we find a solution, we know that it's optimal.
Plus, there are just 3.6 thousand different permutations among the first 10^7 digits of π, so we can precompute the conditions "can lines x and y be rows of one sudoku?" and "can lines x and y be rows of one triple?" (triple: lines which form 3 neighboring squares) to make the code faster.
My code uses only these observations (the rest is raw parsing) and runs in 5 minutes on my computer.
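For illustration, here is a rough sketch (not my contest code; the file name pi_digits.txt is just a placeholder) of the precomputation step: scan the digit string once and remember the first offset at which each 9-digit permutation of 1..9 appears.

```cpp
#include <fstream>
#include <iostream>
#include <map>
#include <string>

int main() {
    std::ifstream fin("pi_digits.txt");  // assumed: one long string of decimal digits of pi
    std::string pi;
    fin >> pi;

    std::map<std::string, size_t> firstOccurrence;  // permutation -> earliest offset
    for (size_t i = 0; i + 9 <= pi.size(); ++i) {
        int mask = 0;
        bool ok = true;
        for (size_t j = i; j < i + 9; ++j) {
            int d = pi[j] - '0';
            if (d == 0 || ((mask >> d) & 1)) { ok = false; break; }  // digit 0 or a repeat
            mask |= 1 << d;
        }
        if (ok) firstOccurrence.emplace(pi.substr(i, 9), i);  // emplace keeps the earliest offset
    }
    std::cout << firstOccurrence.size() << " distinct permutations found\n";
}
```

The backtrack then only ever works with these few thousand (permutation, offset) pairs instead of the raw digit stream.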
Nowadays, most contests only set problems with fancy efficient algorithmic solutions. Maybe the art of writing good brute force solutions is getting lost :)
That's why I love IPSC :)
It's really a PROBLEM SOLVING contest, rather than (academic) algorithmic/programming contest.
Don't forget about the INTERNET part — sometimes, heavy use of Internet is expected, and required. Like in H. And K... I don't know Pokemons, for example — I prefer less sane stuff (warning: NSFW) :D
H1 could be solved easily without the internet (as we did): just open the hashCode implementation in Long and that's it.
And actually it's an interview-style question, like how HashSet or HashMap works.
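For reference, OpenJDK's Long.hashCode() is (int)(value ^ (value >>> 32)), i.e. the XOR of the upper and lower 32 bits. A small C++ transliteration, just for illustration (the function name is made up):

```cpp
#include <cstdint>
#include <iostream>

// Same bits as Java's Long.hashCode(): XOR the upper and lower 32-bit halves,
// then truncate to a 32-bit int.
int32_t javaLongHashCode(int64_t value) {
    uint64_t v = static_cast<uint64_t>(value);
    return static_cast<int32_t>(static_cast<uint32_t>(v ^ (v >> 32)));
}

int main() {
    std::cout << javaLongHashCode(1234567890123LL) << "\n";
}
```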
Who would believe that you can query the Pokédex using a RESTful API?
http://pokeapi.co/api/v1/pokemon/25
http://pokeapi.co/api/v1/pokemon/pikachu
Well, we had the first 100 million digits of pi. So I got 46K rows, which was enough to solve F1, and then I figured out that we could group rows into triples and solve F2 by iterating over 3 triples. But, of course, with 46K rows my triple computation was taking too long, and I didn't think of reducing 46K to 1K (like in the solution).
So, indeed, sometimes less is better.
Can anyone explain why the solution for D1 works? I mean, how could "cat easy.in | bunzip2 | bunzip2 | tr -d '\000'" not take 1 TB of hard disk? What is the mechanism behind this? Thanks :)
The piped programs run in parallel, not sequentially. The pipe can be seen as a temporary buffer; on my Linux, the size of this buffer is 64 kB. Roughly, it works as follows: whenever a program wants to read from an empty pipe or write into a full one, it is suspended until it can do so. In this way, you get a simple synchronization between the piped programs.
So the entire unpacked file is never in memory anywhere. At any moment, each of the bunzips only sees a small part of the file it decompresses.
It's pipe technology. Every input/output passing through the three inner pipes goes through a temporary buffer of small size (64 kB on Linux by default). For example, cat easy.in writes 64 kB of output and then freezes until bunzip2 starts reading this buffer, producing its output to the next pipe, and so on. The main idea is that cat, bunzip2 and tr are stream-processing programs that need only O(1) of input data to produce O(1) of corresponding output data.
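As a toy illustration (obviously not the real tr source), a stream processor like the last stage of that pipeline can look like this: read a small chunk, filter it, write it out immediately, so the whole 1 TB stream never has to exist anywhere at once.

```cpp
#include <unistd.h>

int main() {
    char buf[1 << 16];  // 64 KiB, roughly one pipe buffer
    ssize_t n;
    while ((n = read(STDIN_FILENO, buf, sizeof buf)) > 0) {
        ssize_t m = 0;
        for (ssize_t i = 0; i < n; ++i)
            if (buf[i] != '\0') buf[m++] = buf[i];  // drop NUL bytes, like tr -d '\000'
        if (write(STDOUT_FILENO, buf, m) < 0) return 1;
    }
    return 0;
}
```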