It seems that nobody has posted about the round yet, so I thought I'd post about it.
It will start on Sunday at 17:30 IST, 15:00 MSK, 12:00 UTC. Please check your timezone here.
More details about the contest can be found here. This round is also a regional event, which will be held at ITMO University. At most 40 people will advance to the next round.
Best of luck to all !!!
Let's discuss problems here after the contest !!
I will participate online, not onsite. So:
1. Do I have to register for the event, or is the regular registration, as before an SRM, enough?
2. Are there any prizes (shirts) for online participants?
3. What is the maximum number of online participants? How many of them will advance to the next round?
2 and 3: the onsite competition is completely optional. There is no difference between onsite participants and online participants.
If someone qualifies for round 3 through the 'online' round '2A' today, can he/she still participate (onsite) in the already-registered 'onsite' round '2D'?
There are parallel rounds, so yes — you can definitely solve problems.
What about t-shirts?
Is anyone else facing problems with the applet? Mine does not start.
Redownload, clear the Java cache, reinstall Java, reinstall your OS, try the web arena, jump on your left leg at midnight... or email support (options sorted in order of efficiency).
Keeping older versions of the applet around is useful as well, in case rollbacks are made.
Thanks, redownloading + clearing the cache worked.
How do you clear the cache?
http://lmgtfy.com/?q=clear+java+cache
Stupid question, but how do you solve the 600?
I tried the following: binary search; inside it, iterate over potential roots of the connected component, try to move each fox up as much as possible, then mark all nodes on the paths from the foxes to the root as "must have >= 1 fox", and try to saturate these nodes via max-matching (from the initial positions of the foxes). Magically, this doesn't work on the large samples (which are impossible to debug), finding answers smaller than the jury's. Am I missing something obvious?
Exactly the same solution worked for me just fine.
I see... I feel really stupid: I've just added several asserts and finally found my bug. It turned out I had a bug in my prewritten Floyd O_O (which has been in my code library for about 3 years, but this is the first time I've ever used it: usually it's easier to write the 4 lines of code, but today I was too lazy, and my laziness was punished :( )
Userpic related? :D
Yep, that's what I did as well (working name for this solution: "like wtf", considering how many small things it uses). It passed samples on the first try; I wonder if it'll pass the systests as well.
I submitted exactly the same thing and it's correct on samples; however, the runtime is O(n^4*log(maxanswer)), which sounds like it should pass, but it's already 1.5s on a sample, so it might not. It's possible to optimize it to O(n^3*log(maxanswer)) using even more WTFy tricks, but I was too slow to submit that :(
If it's optimizations you want, I'd use the fact that there are ≤ N^2/2 possible answers to halve the number of binary search passes. That's a big constant factor when we're talking about the difference between a barely passing and a barely failing solution.
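The trick above can be sketched as a binary search over the sorted distinct candidate answers instead of a numeric range. Here `feasible` is a hypothetical stand-in for the actual check from the solution (collect foxes within the candidate distance and verify via matching):

```python
def smallest_feasible(candidates, feasible):
    # Binary search over sorted distinct candidates: for a tree there are
    # at most N^2/2 distinct pairwise distances, so at most that many answers.
    cs = sorted(set(candidates))
    lo, hi = 0, len(cs) - 1
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if feasible(cs[mid]):
            best = cs[mid]   # feasible: try to find something smaller
            hi = mid - 1
        else:
            lo = mid + 1
    return best
```

This assumes `feasible` is monotone (once true, it stays true for larger candidates), which holds for "can everything be gathered within distance d" style checks.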
After the binary search, I iterated over the foxes from the deepest one and stopped each time I was at a vertex with no foxes but some in its subtree.
Could you explain this part of your solution? I could not figure out how to collect all the foxes around some root optimally.
For example, if we have a free root, busy childs nearby, and some childs of childs that are busy too. How do we choose which of them should be moved first and which should remain in their places?
*children
That's exactly what's done by max. matching. As long as you know the initial positions + where you're allowed to move them, and the set of final positions, it's a textbook matching problem.
The only problem is that you need to know the set of final positions, which is done by the part before it — the marked paths must contain at least 1 fox per vertex (otherwise, either our choice of the root is wrong or no solution exists) and any solution that does contain them can be compressed to one that doesn't have foxes in any more vertices simply by moving foxes from unnecessary vertices upwards.
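The matching step can be sketched with Kuhn's augmenting-path algorithm. Here `adj[v]` is assumed to list, for fox `v`, the indices of the marked path vertices it is allowed to end up in (these lists would come from the distance check described above):

```python
def max_matching(adj, n_right):
    # Kuhn's algorithm for maximum bipartite matching.
    # adj[v] = right-side vertices reachable from left vertex v.
    match = [-1] * n_right          # match[u] = left vertex matched to u

    def augment(v, used):
        for u in adj[v]:
            if not used[u]:
                used[u] = True
                if match[u] == -1 or augment(match[u], used):
                    match[u] = v
                    return True
        return False

    return sum(augment(v, [False] * n_right) for v in range(len(adj)))
```

The chosen root then works iff the matching saturates every marked vertex, i.e. the matching size equals the number of marked vertices.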
Wow, I've got an answer for my challenge. Half an hour after it was submitted to TopCoder.
What was the verdict?
Successfully TLEd. I also noticed two other solutions with a similar approach during the challenge phase.
And what about the 950?
I guess that the answer is just the Minkowski sum of the chosen triangles. So we want to know the area of the Minkowski sum of triangles, say T1, T2, ..., Tk. The perimeter of the Minkowski sum will consist of all sides of all triangles, and the answer will be something like
S(T1) + ... + S(Tk) + some parallelograms made from some sides of Ti, Tj
Then the expected value is easy to compute. But which sides? Of which triangles? I guess it's some condition on the angles...
An (x, y) vector adds (sx, sy) × (x, y) to the answer if its triangle is chosen (i.e. with probability p[triangle]), where (sx, sy) is the expected sum of the vectors of the sides that precede (x, y) in convex hull order and that come from chosen triangles. Every triangle contributes to (sx, sy) independently, but with all its sides simultaneously! It is not that every vector is independently added to (sx, sy); rather, every triangle, independently from the other triangles, adds all of its sides that precede (x, y).
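One way to write the contribution described above as a formula (my notation, a sketch rather than the commenter's exact derivation; $t(e)$ is the triangle owning side $e$, $p_{t(e)}$ its probability, and $\prec$ the convex-hull angular order of sides):

$$\mathbb{E}[2S] \;=\; \sum_{e=(x,y)} p_{t(e)} \big( s_x(e)\, y - s_y(e)\, x \big), \qquad \big(s_x(e), s_y(e)\big) \;=\; \sum_{\substack{e' \prec e \\ t(e') = t(e)}} e' \;+\; \sum_{\substack{e' \prec e \\ t(e') \neq t(e)}} p_{t(e')}\, e'$$

The first sum reflects that a triangle's own preceding sides are present whenever the side itself is (they come "simultaneously"), while sides of other triangles enter only in expectation.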
"For advancement to next round, we will be nullifying results (-25 or +50) of the challenge phase. Top 40 placements based on system testing only will be advanced. Official results may be delayed a day or so."
"The match will be unrated. Final results are likely to be very delayed. We apologize for this. Further announcements will be made in the forums."
What's needed for the hard is just a good picture:
We can see that if we add a new triangle, then 3 kinds of new areas appear. One is our new triangle, and there are two groups of new parallelograms. What is crucial here is how the set of sides changes. It turns out that the old sides are unchanged (I mean, if we look at sides as just vectors) and the three sides of our new triangle appear somewhere on the perimeter, so calculating the area of those parallelograms is in fact easy, and all we need is some cross products. You can look at my code here: http://ideone.com/3OY93a
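A minimal sketch of the underlying construction (not the linked code): the Minkowski sum of two convex CCW polygons is obtained by merging their edges sorted by angle, after which the area drops out of the shoelace formula:

```python
def minkowski_sum(P, Q):
    # P, Q: convex polygons, vertices in counter-clockwise order.
    def reorder(poly):
        # Rotate so the bottom-most (then left-most) vertex comes first.
        i = min(range(len(poly)), key=lambda k: (poly[k][1], poly[k][0]))
        return poly[i:] + poly[:i]

    P, Q = reorder(P), reorder(Q)
    P = P + [P[0], P[1]]            # sentinels for wrap-around edges
    Q = Q + [Q[0], Q[1]]
    res, i, j = [], 0, 0
    while i < len(P) - 2 or j < len(Q) - 2:
        res.append((P[i][0] + Q[j][0], P[i][1] + Q[j][1]))
        # Cross product of the current edges decides which advances.
        cross = ((P[i + 1][0] - P[i][0]) * (Q[j + 1][1] - Q[j][1])
                 - (P[i + 1][1] - P[i][1]) * (Q[j + 1][0] - Q[j][0]))
        if cross >= 0 and i < len(P) - 2:
            i += 1
        if cross <= 0 and j < len(Q) - 2:
            j += 1
    return res

def area(poly):
    # Shoelace formula.
    s = 0
    for k in range(len(poly)):
        x1, y1 = poly[k]
        x2, y2 = poly[(k + 1) % len(poly)]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2
```

For example, summing the unit right triangle with itself gives the triangle scaled by 2, so the area goes from 0.5 to 2.0: triangle area twice, plus one parallelogram.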
So sad that I couldn't debug it for half an hour, because I was missing an "&" ;/.
I'll just leave it here: Minkowski sum
I know what it is (I even used it yesterday in GCJ in B, but failed the hard subtask because of precision errors :/), but I think that analyzing how the area changes is not obvious just from the definition, and to grasp the idea such a picture is perfect. What did you want to say by just leaving this link?
There's a part regarding behaviour of convex hulls under Minkowski sum (maybe a link to the part would be better), basically they are "merged" if all the sides are sorted by angle. Given this, there's hardly any picture needed: by linearity of expectation, find the expected area of directed trapezoid under every side of every triangle.
Frankly, I fail to see how anything written there can be applied to this problem directly, apart from that merging, which alone is far from sufficient. I will stick to my picture :P. For me, such a picture is enough to make everything crystal clear, while I don't see how one can come up with the formulas using only an algebraic approach, without any pictures (even in your head :P). How can one derive that we should count those trapezoids according to some angle formulas? Where does this new triangle pop out?
My approach (I guess same as Endagorion's). First, from mathematical point of view:
If you implement it straightforwardly, you'll get O(n^3). Optimization to O(n^2) can be done by looking at the code and pre-calculating one value outside of the loop instead of calculating it each time inside the loop.
Approach for the 300, please?
One of correct solutions:
1) cnt: array [0..R], stores how many times each number occurs. Initially all 1's for 0..R.
2)for every m[i]:
a) "cut" the part above m[i]: for (var j=m[i];j<=R;j++) cnt[j%m[i]]+=cnt[j];
b) R = Math.min(R, m[i]-1);
It's linear time, because we can't cut more than R elements in total.
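The steps above can be sketched in a few lines of Python. Taking the final answer as the sum of the mapped values is my reading of the 300, so treat that convention as an assumption:

```python
def mod_sum(ms, R):
    # cnt[j] = how many of the original numbers 0..R currently map to j.
    cnt = [1] * (R + 1)
    for m in ms:
        # "Cut" the part above m: fold cnt[j] into cnt[j % m].
        for j in range(m, R + 1):
            cnt[j % m] += cnt[j]
        R = min(R, m - 1)
    # Every original number now maps to some j in 0..R with multiplicity cnt[j].
    return sum(j * cnt[j] for j in range(R + 1))
```

Each pass only touches indices from m to the current R, and afterwards R shrinks below m, so the total work over all passes is bounded by the initial R, hence linear. On example #0, `mod_sum([5, 3, 2], 10)` agrees with brute-forcing `x % 5 % 3 % 2` over `x = 0..10`.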
Very creative solution indeed !
Systests are finally done.
Btw, here's how I coded the 300; I sometimes forget that I need to disregard nice observations and code bruteforce instead of something clever :P.
Could you explain your idea? :)
First, I get rid of redundant modulo operations to make the sequence of m[i]'s strictly decreasing. Then I use an approach similar to the O(m^2) one (I assume familiarity with it), but I don't go through all the m's one by one; instead, I binary search for the first modulo operation that will affect my interval. One can easily see that if a > b then 2(a%b) < a, so I need just log R binary searches.
In the 300 problem, I observed a pattern:
In example #0 {5,3,2}, R = 10, the mods were repeating with an interval of 5 (counting from zero).
The 1st number is the input value and the 2nd number is the result I got after applying all the modulo operations. In this case, the interval is 5:
0-0 1-1 2-0 3-0 4-1 5-0 6-1 7-0 8-0 9-1 10-0 11-1 12-0 13-0 14-1 15-0 16-1
in example #1 {33, 15, 7, 10, 100, 9, 5}, R = 64, Repeating interval of 15 —
0-0 1-1 2-2 3-3 4-4 5-0 6-1 7-0 8-1 9-2 10-3 11-4 12-0 13-1 14-0 15-0 16-1 17-2 18-3 19-4 20-0 21-1 22-0 23-1 24-2 25-3 26-4 27-0 28-1 29-0 30-0 31-1 32-2 33-0 34-1 35-2 36-3 37-4 38-0 39-1 40-0 41-1 42-2 43-3 44-4 45-0 46-1
in example #2 {995,149,28,265,275,107,555,241,702,462,519,212,362,478,783,381,602,546,183,886,59,317,977,612,328,91,771,131}, R = 992363, repeating interval of 28 —
0-0 1-1 2-2 3-3 4-4 5-5 6-6 7-7 8-8 9-9 10-10 11-11 12-12 13-13 14-14 15-15 16-16 17-17 18-18 19-19 20-20 21-21 22-22 23-23 24-24 25-25 26-26 27-27 28-0 29-1 30-2 31-3 32-4 33-5 34-6 35-7 36-8 37-9 38-10 39-11 40-12 41-13 42-14 43-15 44-16 45-17 46-18 47-19 48-20 49-21 50-22 51-23 52-24 53-25 54-26 55-27 56-0 57-1 58-2
But I couldn't crack how this pattern was forming. Did anyone else use this idea, or can anyone tell me whether it is right or wrong?
Once we know what the repeating interval is, the sum up to any value of R can be easily calculated.
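Assuming the observed periodicity holds, that last step is only a few lines. Here `period_vals` is a hypothetical helper argument: one full period of the mapped values, starting at 0:

```python
def sum_with_period(period_vals, R):
    # f repeats with period p = len(period_vals); return sum of f(0..R).
    p = len(period_vals)
    full, rem = divmod(R + 1, p)
    return full * sum(period_vals) + sum(period_vals[:rem])
```

For example #0 the period-5 block is [0, 1, 0, 0, 1], and summing it up to R = 10 gives 4, matching the listed values.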
I have a solution based on that. Basically, the pattern will be created by the decreasing subsequence of mods starting with the first element.
I have a table telling me how much each number adds to the result, after going through all these mods.
I mark the multiples of the first element (putting 0) in the whole array and reduce the array to the size of the first element; I do this for all the decreasing elements.
Now I go in the reversed order — I consider the smallest mod first. I count the distance from the previous 0 and repeat that pattern until the end of the next mod.
Example: smallest element = 3, next is 8
1 2 0 1 2 0 1 0 (I copied the pattern for the first 3 elements 2 times; I copied the first element to position 7, but 8 is zero, since it's the next mod).
Once I counted all the values until first mod, I just copy the pattern until R.
In example 1 you can see that result[34] == result[1], result[35] == result[2] and so on.
The mathematical idea is that if there is a number > mod[0], then only v = number % mod[0] matters, so we are interested in the values within 0..mod[0]-1. Then v will ignore any mods >= mod[0] until it meets a smaller one, say mod[k] < mod[0], and again we are interested in the values within 0..mod[k]-1.
I hope it is understandable.
I am not able to understand your approach. Are you trying to find the interval after which the values start repeating? Can you please try running your algorithm on the test case {5,3,4} and R = 10?
So it turns out that it was rated ;d? Even more of a pity that I missed the "&" xd
The successful challenge cases should have been included in the final tests. There were people in my room who were getting TLE on a random large case I created, but their solutions passed the final tests.
I have a feeling that the results of the challenge phase were incorrect. For example, after the challenge phase ended I suddenly got a message saying my submission had been hacked. But now I see that it has passed systests. So either the systests are weak and I am incredibly lucky (which is unlikely, since I tend to have rotten luck), or the results of the challenge phase are not trustworthy (I like to think this is true).
Challenges were nullified because they got processed in random order and extremely slowly.
When you say extremely slowly, do you mean that the result of the challenge was correct and it just took a long while to be displayed, or that people's submissions were run at a slower speed than usual, causing wrong results?
The first, but there are some runtime problems (during systests) that are still being resolved.
When the challenges were not working and there was an announcement that the challenge phase was going to be extended, I manually typed the code I wanted to challenge into my arena and tested it on the large random case using the "test" button, and the message displayed was: "The code execution exceeded the 2.00s Time Limit". So, for your claim to be true, the "test" button would also have had to be working incorrectly during the challenge phase.
Why should the test button work fine if everything else is going haywire? A challenge and a test are intrinsically the same thing. As long as you did all of the typing and testing during the challenge phase, the results cannot be trusted.
Just to be sure, retype the code now and run it on random cases in the practice room.
As far as I remember, the original time limit for this problem was TC's default 2 sec, but now it shows 4 sec. Maybe because of this, those solutions passed the final tests.
On the TC forums it said that it was initially done just to see if large I/O was the reason for the mess-up in the challenge phase, and now it's just left there so that people very close to 2s can see their times. I forget the exact words, but I think the 4s limit was brought up on the forums. I don't think systests were done with 4s, though. Again, I am not sure and don't want to make any false statements, so I'll just make false conjectures :)