Hi everybody,
This Sunday there will be a Moscow programming competition for school students in grades 6 to 9. The contest is prepared by the Moscow Olympiad Scientific Committee, which you may know from the Moscow Open Olympiad, the Moscow Team Olympiad and the Metropolises Olympiad (rounds 327, 342, 345, 376, 401, 433, 441, 466, 469, 507, 516).
The round will be held at 10:05 UTC on Saturday. You will be given 6 problems and 2 hours to solve them. The round will be rated for the second division (rating below 2100). As usual, participants from the first division can take part out of competition.
Problems are prepared by vintage_Vlad_Makeev, grphil, cdkrot, VFeafanov, Sehnsucht, Sender, voidmax under my supervision.
Thanks to cdkrot for coordinating the round and translating the statements, and also thanks to MikeMirzayanov for the Codeforces and Polygon systems, which were used to prepare the problems of this olympiad.
Good luck everybody!
UPD1: Due to some last-minute changes, there will be 7 problems.
UPD2: Winners!
Div 2:
Div.1 + Div.2:
UPD3: Editorial
Happy Coding and high rating.
Hope that everybody has a good time!
Why this unusual timing!
I have classes at the time of the contest. :(
We need the round to overlap with the official contest because we want to prevent cheating by the onsite participants.
Perfect time for Chinese users! Finally we don't need to participate in the contest at midnight! XD
Yes, it is very good.
Good luck & high rating!
Once again with a hope to cross 1200
Good Luck!
May I ask about the score distribution?
What's the score for each problem?
I'm sorry. This comment was sent by my classmates.
Bless all of you!
Div-2 rounds like this are held under extended ACM-ICPC rules, so there is no point distribution.
Sure?
all the best
Extended by 15 min?
-_- sad -_-
why why wait more ..
15 minutes...
Why is it extended by 15 min? I can't register. :(
just the regular delay
15 min delay :((( why always me? :(
Time to read the comments. Credit: Adibov
That's for 5-minute delays, not for 15 minutes.
delayed lol
Can't wait more!
on amoo_safar arm
I skipped school for this. hope i don't get disappointed. High rating for everyone :)
We want the 15 minutes delay in tomorrow's round to watch Liverpool Vs Man United match not in today's round :D
C137 where is your Morty?
Looking for Jessica
Why don't you make a clone of her for his birthday like that robot that you bought him
We want a 'Rick and Morty' special CF Round :D
Okay, I bunked my lecture only to find the round is delayed now. Rip my attendance.
What is the score distribution?
"6 problems"
6 == 7
Mmmmm... I love contests like this. Yes, I LOVE contests with an easy problem F.
Silly me, I only looked at the submission counts just now; I spent most of my time on E thinking it might not be too hard :P
Solving C before B is the only good decision I have made in my life
Actual difficulty: A < B < C < F < E < D < G...
Btw, does anyone know about pretest 6 of D?
The 6th pretest may be 2 2 with the matrix rows "<=" and "<<".
output is Yes
Answer:
Finally done and dealt with my mistake. ;)
Thanks! :D
It's quite unusual to see the number of problems increase due to last-minute changes. Usually, problems are removed, not added.
It can happen only on CF.
What is testcase #4 in B and #7 in F?
Try this: 4, then the pairs 0 0, 1 1, 1 1, 2 2.
I fixed this case to pass test 4)
My code returns 3.
I got an error on #8, then fixed that, but couldn't submit in the last 5 seconds to see the result: 3, then the pairs 2 0, 3 2, 3 4.
My code returns 3 for this case too.
Cool contest, but these difficulty estimates were all over the place. I think it would probably go ACFBDEG. None were too off except F. Was there some intended solution that didn't involve just stitching linked lists together?
Make the DSU tree and do a preorder traversal.
You can do it using only vectors. To unite two sets, just append the shorter vector to the longer one. The overall complexity will be O(n log n).
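For example, something along these lines (a rough sketch of the idea rather than anyone's actual submission; I use a deque so the smaller group can be attached on whichever side it belongs, since the right-hand group is not always the smaller one):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    scanf("%d", &n);
    vector<int> root(n + 1);          // root[v] = current representative of v's group
    vector<deque<int>> kids(n + 1);   // kids[r] = members of group r, left to right (valid for roots)
    for (int v = 1; v <= n; v++) {
        root[v] = v;
        kids[v] = {v};
    }
    for (int q = 0; q < n - 1; q++) {
        int x, y;
        scanf("%d %d", &x, &y);
        // the group of y is placed to the right of the group of x (assumed to be different groups)
        int a = root[x], b = root[y];
        if (kids[a].size() >= kids[b].size()) {
            // b is smaller: append it to the right end of a
            for (int v : kids[b]) { kids[a].push_back(v); root[v] = a; }
            kids[b].clear();
        } else {
            // a is smaller: prepend it to the left end of b, keeping a's internal order
            for (int i = (int)kids[a].size() - 1; i >= 0; i--) {
                root[kids[a][i]] = b;
                kids[b].push_front(kids[a][i]);
            }
            kids[a].clear();
        }
    }
    for (int v : kids[root[1]]) printf("%d ", v);   // after n-1 merges everything is one group
    printf("\n");
}
```

Each element changes its group at most O(log n) times because its group at least doubles whenever it is moved, hence the O(n log n) bound.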
How to solve C?
I think: sort, then reallocate the elements as 0→0, 1→n-1, 2→1, 3→n-2, ...
2, 4, ..., n-3, n-1, n, n-2, n-4, ..., 3, 1
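In code, an arrangement of this kind looks roughly like this (a sketch; after sorting, it walks every second element up and the remaining ones down, so any two circle neighbours are at most two positions apart in sorted order):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    scanf("%d", &n);
    vector<int> a(n);
    for (int &x : a) scanf("%d", &x);
    sort(a.begin(), a.end());

    vector<int> order;
    for (int i = 0; i < n; i += 2)                                // positions 0, 2, 4, ... going up
        order.push_back(a[i]);
    for (int i = (n % 2 == 0) ? n - 1 : n - 2; i >= 1; i -= 2)    // ..., 5, 3, 1 going down
        order.push_back(a[i]);

    for (int x : order) printf("%d ", x);                         // print the circle in order
    printf("\n");
}
```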
sooo booring A-F
Similar problem D.
https://open.kattis.com/problems/chesstournament
C: https://www.hackerearth.com/practice/algorithms/greedy/basics-of-greedy-algorithms/practice-problems/algorithm/hunger-games/description/
A bit mathforces... A stuck me for 5 minutes and B stuck me for 10 minutes...
Would you prefer if two harder problems used math rather than two easier problems?
I have a weird solution for F. Could anyone help me check it?
Build a tree similar to a Kruskal reconstruction tree. When processing (xi, yi), add a new node whose two children are xi's root and yi's root.
After building it, print the preorder traversal of the binary tree. Is that correct?
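In code, the construction I mean looks roughly like this (a rough sketch, not my actual submission):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    scanf("%d", &n);
    // leaves 1..n are the kids; merge number i creates internal node n+1+i
    vector<int> par(2 * n), L(2 * n, 0), R(2 * n, 0);
    iota(par.begin(), par.end(), 0);
    auto find = [&](int v) {
        while (par[v] != v) { par[v] = par[par[v]]; v = par[v]; }   // path halving
        return v;
    };
    for (int i = 0; i < n - 1; i++) {
        int x, y;
        scanf("%d %d", &x, &y);
        int a = find(x), b = find(y), w = n + 1 + i;   // fresh internal node
        L[w] = a; R[w] = b;                            // left child = x's root, right child = y's root
        par[a] = par[b] = w;                           // w becomes the root of the merged component
    }
    // preorder traversal: the leaves come out exactly in the final left-to-right order
    vector<int> st = {find(1)};
    while (!st.empty()) {
        int v = st.back(); st.pop_back();
        if (v <= n) { printf("%d ", v); continue; }
        st.push_back(R[v]);   // push the right child first so the left subtree is printed first
        st.push_back(L[v]);
    }
    printf("\n");
}
```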
That's exactly how I wrote during contest!
My submission's here
I had an issue with hacking some solutions for Problem A. The solutions used plain ints for all the variables: the widths, the heights, and the answer. I copied the solutions to my IDE (they compute areas, which certainly requires multiplication) and tried hacking them with tests where the widths and heights are around 10^7 and 10^8, but the answers were all correct. How come? Why didn't the answers overflow?
Code example:
Test example:
#define int long long int
They overflowed but then came back around. It's like you calculate the answer modulo 2^31: since the answer is always less than 2^31, it will be correct in the end.
Signed overflow is technically undefined behavior.
I am not sure, but in those particular cases the compiler could be doing some optimizations by removing the multiplication (it is possible).
Even if that isn't the case, it still works. Consider that overflow is actually predictable. For example in this particular case:
(10^7 + 2) * (10^7 + 2) - (10^7) * (10^7) = 316447236 - 276447232 = 4 * 10^7 + 4,
as expected (the two products shown are their wrapped 32-bit values). The thing here is that when it overflows, you get "wrapped" around, i.e. it is similar to taking the result modulo 2^32. For example, 102 - 100 = 2 = 102 - 100 (mod 70), but 150 - 60 = 90 != 150 - 60 (mod 70) = 20. When the difference is small enough (smaller than the mod) you still get a valid result for the difference, no matter whether you take the mod (or overflow) or not. Of course you shouldn't rely on that, because it is still UB per the C++ specification, but still.
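Here is a tiny self-contained demonstration of the wrap-around (using uint32_t, whose overflow is well defined as arithmetic modulo 2^32, instead of signed int):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    uint32_t w  = 10000002, h  = 10000002;   // 10^7 + 2
    uint32_t w0 = 10000000, h0 = 10000000;   // 10^7
    uint32_t big  = w * h;                   // wraps modulo 2^32 to 316447236
    uint32_t base = w0 * h0;                 // wraps modulo 2^32 to 276447232
    printf("%u\n", big - base);              // prints 40000004 = 4*10^7 + 4, the true difference
    return 0;
}
```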
What was the approach to solve problem B ?
The number of points with x = y between the coordinates (a,b) and (c,d) is max(0, min(c,d) - max(a,b) + 1). Just make sure to handle duplicate coordinates properly.
Can you explain why this is correct? I need a proof for it.
Consider a grid with top-left point (a,b) and bottom-right point (c,d). Every point in between has its x coordinate in [a,c] and its y coordinate in [b,d], so the points with x = y are exactly those with x = y in [max(a,b), min(c,d)], which gives the formula above.
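For what it's worth, the whole count can be written as a short loop like this (my sketch; it assumes the score checkpoints are given in chronological order and counts the initial 0:0 as a draw):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    scanf("%d", &n);
    long long pa = 0, pb = 0, ans = 1;        // the starting 0:0 is always a draw
    for (int i = 0; i < n; i++) {
        long long a, b;
        scanf("%lld %lld", &a, &b);
        long long lo = max(pa, pb), hi = min(a, b);
        if (hi >= lo) {
            ans += hi - lo + 1;
            if (pa == pb) ans--;              // the draw at the previous checkpoint was already counted
        }
        pa = a; pb = b;
    }
    printf("%lld\n", ans);
}
```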
Am I the only one who found D and F easier than C? :'(
Can someone provide a proof for optimality of the approach for C?
q1: YES
There exists a sequence such that any gap is at most 2 elements, e.g. a[7], a[5], a[3], a[1], a[2], a[4], a[6]; call it (*).
The largest difference in (*) is a[2] - a[1], a[n] - a[n-1], or some a[i] - a[i-2]. For the first two it is clear that any sequence must contain a difference at least that large. For the last case, if we don't want a[i] - a[i-2] to appear, then we have to insert something between them. Note that it is a circle, so we need to consider both the clockwise and anticlockwise directions. Say we insert a[i-1] between them in one direction; then some a[j] with j < i-2 and some a[k] with k > i must meet somewhere in the other direction. The difference between a[j] and a[k] is larger than the largest difference in (*). So we cannot construct any better solution, and (*) is optimal.
E.g. for a[] = {1, 2, 3, 4, 100, 101, 102} we can write it as 102, 100, 3, 1, 2, 4, 101. The largest difference is (100, 3). If we put 4 between them we get 102, 100, 4, 3, 1, 2, 101. Note that the gap in the other direction becomes bad, and the largest difference is now 101 - 2 = 99 > 100 - 3.
Why is n in problem C so small?
It's very strange.
(Sorry for my poor English.)
A funny solution: Binary search for the answer and use the Hungarian algorithm to check.
I hate A, B.
I solved 5 problems but ranked lower than some people who solved 4...
Is it actually possible to solve problem B in 2 hours?
F is gonna wreck my rating. But I solved D at least ...
At the last minute I decided to submit an extraordinarily crappy solution of problem F and it passed......
Can anyone explain why this passed?
Probably because there is no test like
Which should result in TLE. Though you can easily make this solution O(n log n) by merging the smaller vector into the larger one instead of always merging the second one into the first.
Could you please tell me how merging the smaller vector into the larger one gives O(n log n)?
Search for the union-by-size optimisation of DSU.
Hope this helps. https://cp-algorithms.com/data_structures/disjoint_set_union.html#toc-tgt-15
Let's think about how many times an element can be copied in the worst case when merging small to large: an element is copied only when it belongs to the smaller set, so the size of its set at least doubles with every copy, which means each element is copied at most log N times.
We have N elements, so the complexity is O(N log N).
Thanks for the crisp explanation :)
test added to upsolving, thanks!
My approach, 50381829, appends linked lists to each other (instead of copying a vector) while keeping track of the mostRight and mostLeft elements of each linked list, without keeping track of which one is smaller (but still doing path compression). Should it result in TLE too?
No, your approach is the same as DSU with path compression, and according to this it's O(N log N).
I see. Thank you!
Easy problems, but I got WA, WA, WA QAQ
The contest was a little bit unbalanced
Problems A-C were around Div 3 level, and problem D immediately jumped to Div 1 level. Was this done on purpose?
No, because D is very standard.
And there are a lot of problems similar to F.
I didn't get a chance to look at F during the contest, but it looks pretty easy to solve with DSU. I'm not sure about problem D though.
D can be easily solved using DSU and topological sorting.
I think the difficulty of problem D is around problem D of a usual Div-2 contest with 5 problems.
F can also be solved in O(n): for example, we can maintain the first element, the last element, and previous/next pointers for each of the components, and update them accordingly in the unite method.
http://codeforces.net/contest/1131/submission/50389432
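Roughly like this (a stripped-down sketch of the stitching idea, not the linked submission; only the next pointers are actually needed to print the final order):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    scanf("%d", &n);
    vector<int> par(n + 1), head(n + 1), tail(n + 1), nxt(n + 1, 0);
    for (int v = 1; v <= n; v++) par[v] = head[v] = tail[v] = v;

    auto find = [&](int v) {                 // path halving
        while (par[v] != v) { par[v] = par[par[v]]; v = par[v]; }
        return v;
    };

    for (int i = 0; i < n - 1; i++) {
        int x, y;
        scanf("%d %d", &x, &y);
        int a = find(x), b = find(y);        // assumed to be different components
        nxt[tail[a]] = head[b];              // stitch b's list to the right end of a's list
        tail[a] = tail[b];
        par[b] = a;                          // a stays the representative of the merged component
    }

    for (int v = head[find(1)]; v != 0; v = nxt[v])   // walk the final list left to right
        printf("%d ", v);
    printf("\n");
}
```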
It's O(n·α(n)); btw, I have the same solution.
Actually, a union-find set with only path compression has complexity O(n log n).
Right, but it's possibly a good alternative to storing the list of elements of each component :)
Seems the CF predictor played a joke on me :/
The CF rating predictor is showing +86 while the actual rating change I got is +40 :/
How to solve D?
D can be solved using DSU and topological sorting.
Firstly, if a(i, j) = '=', join i and j + n into the same set. Thus there can be at most n + m sets. Let's create a directed graph with n + m vertices, where each vertex represents a set. Now, if a(i, j) = '<', add an edge from the representative of i's set to the representative of j's set, and the reverse edge if a(i, j) = '>'. If we can't toposort this graph (i.e. the graph has a cycle), then the answer is No. Also, if i and j + n are in the same set but a(i, j) != '=', the answer is also No. Otherwise you can toposort the graph and find the answer easily.
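A compact sketch of exactly this plan (0-indexed, my own code rather than the editorial's):

```cpp
#include <bits/stdc++.h>
using namespace std;

vector<int> par;
int find(int v) { return par[v] == v ? v : par[v] = find(par[v]); }

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    int n, m;
    cin >> n >> m;
    vector<string> a(n);
    for (auto &s : a) cin >> s;

    // vertices 0..n-1 are a_i, vertices n..n+m-1 are b_j; merge equal elements
    par.resize(n + m);
    iota(par.begin(), par.end(), 0);
    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++)
            if (a[i][j] == '=') par[find(i)] = find(n + j);

    // build edges "u < v" between component representatives
    vector<vector<int>> g(n + m);
    vector<int> indeg(n + m, 0);
    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++) {
            if (a[i][j] == '=') continue;
            int u = find(i), v = find(n + j);
            if (u == v) { cout << "No\n"; return 0; }   // strict comparison inside one '=' group
            if (a[i][j] == '>') swap(u, v);             // normalize so that u < v
            g[u].push_back(v);
            indeg[v]++;
        }

    // toposort; each component gets 1 + the maximum value among its smaller neighbours
    vector<int> val(n + m, 1);
    queue<int> q;
    int reps = 0, done = 0;
    for (int v = 0; v < n + m; v++)
        if (find(v) == v) { reps++; if (indeg[v] == 0) q.push(v); }
    while (!q.empty()) {
        int u = q.front(); q.pop(); done++;
        for (int v : g[u]) {
            val[v] = max(val[v], val[u] + 1);
            if (--indeg[v] == 0) q.push(v);
        }
    }
    if (done != reps) { cout << "No\n"; return 0; }     // a leftover cycle means a contradiction

    cout << "Yes\n";
    for (int i = 0; i < n; i++) cout << val[find(i)] << " \n"[i == n - 1];
    for (int j = 0; j < m; j++) cout << val[find(n + j)] << " \n"[j == m - 1];
}
```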
Also, the log N factor in the overall complexity could be avoided by building a graph with only '=' edges and finding connected components there instead of using DSU.
Problem C is easier than B, but I did B first :( and wasted so much time fixing B. I only needed one submission to get C accepted.
what is the problem with my code? (Problem F) https://codeforces.net/contest/1131/submission/50382324
It gets WA on testcase #7, which has n = 150000...
Can anyone please tell me why this solution(https://codeforces.net/contest/1131/submission/50393496) for F is giving me memory limit exceeded?
You should merge into the cell with the larger size.
Also, vector::clear() doesn't free memory; if you want to do that, use
But why is it necessary to merge into the cell with the larger size? After the merging operation the resulting vector will be of equal size in both cases.
I could have agreed if the verdict were TLE, but why is the verdict MLE?
Because the clear operation on a vector doesn't free its memory.
Alright, got it, thanks :D
Finally, I became an expert!!!
Congratulations!
Thanks:)
Problem D is quite similar to this problem: Table Compression
Can someone explain the solution for F? I am not able to get it from the editorial.
Can anybody please point out why my solution for F is exceeding the time limit?
Thanks in advance!
50385034
In problem C, I pass the tests using an algorithm that I can't prove correct; please help me. THX!
First, sort the array.
Second, binary search for the minimum discomfort value.
Third, to check whether a value is possible, I take a[0] as the start point and a[n-1] as the end point, build one path from a[0] to a[n-1] greedily, then build another path using the remaining points. Thus I get a circle, and finally I check whether this circle is legal.
Please see this submission for more details 50400061.
Your check function assumes that sorting the array and then distributing the elements the way you mentioned is the optimal way to reach the minimum discomfort value. So, firstly, you don't need to binary-search for the answer (the mid value has no effect on the behaviour of the check function). The proof of the greedy method behind the check function can be found in the editorial (or above in the comments).
I think the details of my check function may differ from the method mentioned in the editorial.
Sorry about that; if that's the case, then I didn't understand what exactly your greedy method is.