Hello everybody! A new round is upcoming and I'm honored to be its author. The round will first appear as the onsite for the Dasha.AI Code Championship, and later we will hold a rated mirror. Everybody is welcome to participate on Codeforces, and I wish good luck to the people in Saint Petersburg and Novosibirsk.
Using the opportunity, I want to thank:
- mnbvmar for helping with the preparation of a great part of everything. Really, without you, I couldn't have done it.
- 300iq for coordination and help with preparation.
- As always, MikeMirzayanov for taking care of international competitive programming and making such great platforms as Codeforces and Polygon.
- KAN, zscoder, Lewin, Jeffrey, Darkstalker, Darko and Gtaumaturgo for testing the problems and great advice.
- Dasha.AI for the organization and sponsorship.
Scoring will appear later.
Good luck and see you during the contest!
UPD1a: Scoring in div2: 500-750-1250-1750-2500-3000
UPD1b: Scoring in div1: 500-1000-1500-2250-(1500+1250)-3500
UPD2: editorial
UPD3: Congratulations to the winners!
In div1:
In div2:
In the onsite competition in Saint Petersburg:
In the onsite competition in Novosibirsk:
Let's upvote this blog to make Radewoosh the top 1 contributor
Will this contest also have some awesome theme (and cool figures)?
Nothing special this time :/
Nothing special, but cool figures definitely — top 9 of Polish coders by Codeforces rating. I wish I was top 9 before the contest not after :P
You have to see my and mnbvmar's Pokémon contest one day, to see what cool statements are :P
Well, it depends on what you think is special. For something as cool as Pokémon you surely have to wait a bit, but it's always OK to give something.
:still angry:
Sorry, you have to write contests more often :P
I think it would be then called a "Replay" not a "Mirror", if it is not happening parallel to the original contest :)
The Rules for Mirror Contests don't say anything like that. Of course, that's because there are no Rules for Mirror Contests and the term has always been used for contests that don't run in parallel as well.
300iq, will there be statements in Russian??
orz orz
Is it Rated?
[DELETED]
Of course he just wants downvotes
me too
Hope for interesting tasks as Radewoosh is the author!
Sorry if I misunderstood, but are the problems the same as in the Dasha Code Championship Finals? If they first appear in the Finals, wouldn't some of the finalists know the problems before this contest starts? Sorry if I'm wrong, I have a poor understanding of English... Thank you!
It's standard for mirror rounds. In the interest of sportsmanship, onsite participants should not participate in the mirror rounds or discuss the problems until the mirror rounds have concluded.
I see. Thank you very much!
first time to div.1 qwq...
scared worried but happy.
Wow! Subtask again.
I hope ethical and beautiful behavior now
I want to do both div1 + div2 at once, so I made 2 accounts. If problem D of div 2 coincides with problem A of div 1 and I submit the same code, will my submissions be skipped?
queueForces
Another queueforces :)
The queue is soooooo slow, 5 mins and I still don't know if it's accepted
Graphforces
codeforces
How to solve Div2 D / Div1 A ?
I was one of the testers of the round. Here is a brief description of my solutions for d2A-d1E (couldn't solve d1F T_T):
d2A: easiest way is to try all possible subsets
d2B: try to make all digits 0 (or 1 if it's the first digit) from the front. Be careful of the 1-digit case (given in the samples)
d2C: try every possible number from 1-6 to be represented by each node and see how many distinct dominoes you can place
d1A: Note that if >= 2 students have the same set of solved problems, you can take them all without affecting anything. Suppose you take them all, and call the current set S. Now, for every other student (with no other student sharing the same set of solved problems with them), you can take them iff their set of solved problems is a subset of the set of solved problems of one of the guys in S (if not, their problems solved must be a proper subset of some other guy in the set, then just induct on size of problems solved).
d1B: Relatively standard idea. Build a sparse table s.t. for each node x, we know the gcd from that node to its 2^i-th parent. Now, for each node u, consider the sequence of gcds to the root. Since the gcd is either 0 or divided by >= 2 each time it changes, we only have O(log 10^12) distinct gcds. So, for each node x, we can use O(log n) time to determine the range of the current gcd and keep moving up to the node where the gcd changes. Complexity: O(n log n log 10^12)
d1C: Straightforward sqrt decomp works. We call a node big if it has >= 400 neighbours and small otherwise. For each node x, we maintain h[x], the number of neighbours with more rubles than x. For each big node, we store the list of small neighbours with more rubles than it (the idea being that when we have an update on the big node, these are the only small nodes we need to update). For each update, if the node u is small, we just update all its neighbours and itself naively, and append u to all the neighbouring big nodes. If node u is big, we update all the small neighbours in the list for u and also update the big neighbours if necessary. One can see that each query can increase the sum of sizes of the "lists of small neighbours" by sqrt(n), so the amortized complexity is O(n sqrt(n)) (treating n~m~q for convenience)
d1D: Pretty straightforward if you have some group theory knowledge. Each shuffling method can be considered as an element of $$$S_5$$$ (the symmetric group on 5 elements). $$$f(l,r)$$$ is literally just asking about the size of the group generated by the $$$l$$$-th to $$$r$$$-th shuffling method.
Note that there are < 200 (actually around 150) subgroups of $$$S_5$$$. Consider the subgroups formed by the elements starting from l. The sequence of distinct subgroups formed will be short (since each subgroup must be contained in the next). Hence, we can use the same idea as Div1B: Just build a sparse table where each element denotes the subgroup formed by [l,l+2^x-1] and then find the set of segments representing the same subgroup (so complexity should be something like nlogn*some small constant).
d1E1: You can just use the method from d1E2 to solve this, or you can be dumb like me and try to solve this subtask alone (this solution looks overcomplicated when you see the d1E2 solution). My idea that works for this subtask is that 6C3=20, and so we can use a meet-in-the-middle approach. For each half of the left vertices, we brute force all possible graphs (2^18 of them since there are <= 3*6 edges) and for each graph, compute the set of all possible perfect matchings. Let L[S] denote the probability that a graph with only the first n/2 left vertices can match the set S of subsets of size n/2 of the right-hand-side (<= 6C3 possible subsets, so # of possible S is <= 2^20). Define R[S] similarly, but as the probability that a graph with only the last n/2 left vertices is such that the set S of subsets of size n/2 of the right-hand-side can possibly be the set that is NOT matched by any of the last n/2 left vertices. Then, we are looking for sum(L[A]*R[B]) where (A&B)!=0, so we can just do a FWT on the arrays L and R to find this sum.
d1E2: Consider the following way of determining if the graph has a perfect matching: Start from the first vertex, and when we consider the first i vertices, keep track of the set of all subsets of vertices that could have been matched by the first i left vertices. It turns out that we can also use the exact same method to compute our answer! The key is that there are actually not many possible values of "the set of all subsets of vertices that could have been matched by the first i left vertices", so if we do something like dp[i][S] = probability that if we consider the first i left vertices, S is the set of all subsets of right vertices that can be matched by the first i left vertices, it will still work in time (there are < 200k possible S for each layer if I'm not mistaken). What's the proof? Generate all possible states by program XD
So in div1A the case when several students have the same set of problems was not part of the 20 pretests, was it? ohh boy...
No, there were tests with an answer different from zero, if that is what you are asking.
Nevermind, I'm just dumb. I was thinking that when decreasing the set you may be eliminating more elements at once with equal masks that could themselves create a potentially valid group, but it's not possible since that would imply one of them beats the other
For Div1B I actually used the same algorithm but got TLE :(( How could you do it in n log n log 10^12?
I had to use binary search, so it was n log^2 n log 10^12
I solved Div2E using the same approach but don't know why I got WA on test 6. Can you please help? code link
Try this task:
~~~~~
6
9 12 4 4 4 8
1 2
2 3
3 4
4 5
5 6
~~~~~
And the real answer is 88 while my code (WA) gets 96.
Problems B and D were very similar; the key insight was the same in both of them.
How would you be able to find the GCD from a node X to the K-th ancestor of X in log n? Shouldn't it be log n (for binary lifting) * log n (for the GCD)?
For d1B, is there not another log(10^12) factor from the gcd operation itself? I mean, when binary lifting, I take the gcd at each step, so for each node it is O(log n log(10^12)) to go from one gcd to the next
In my code, what I did was when I start from one node $$$u$$$, I try to go up 2^18,2^17,...,2^0 steps. When I find a step size that still maintains the same gcd, I move up. To check if the current step size maintains the same gcd, I check if the sparse table entry is divisible by the current gcd (special case handling when gcd=0), so I don't need to do another gcd operation for every step size.
For Div1B, we only need to maintain the list of ancestors where the gcd decreases and remove the $$$\log n$$$ part.
You're right we can just maintain the gcd list when we move down the tree. This indeed seems easier (I got too accustomed to doing binary lifting lol)
can you explain a little bit more please?
Let's solve the problem on an array. Assume you have this array:
6 9 12 30 24 36
Imagine we are at 36 and we want to compute the gcd for every suffix, the gcd array will be:
3 3 6 6 12 36
For repeated values, we can easily compress them by storing the maximum index at which each appears. So what we store would be something like:
(3, 1) (6, 3) (12, 4) (36, 5)
When you add a new value, you can just iterate through the gcd array and update it. Assume we add 18; the gcd array will be:
(3, 1) (6, 3) (6, 4) (18, 5) (18, 6)
And then just remove (6, 3) and (18, 5).
There is an easier solution for problem D1C.
Let's make a directed edge from vertex $$$u$$$ to vertex $$$v$$$ if $$$u$$$ earns more money than $$$v$$$ and they dislike each other. A salary revision of the $$$v$$$-th employee changes the direction of all edges that end in vertex $$$v$$$. We can maintain the list of these edges for each vertex and recalculate the answer by iterating through the list for the current vertex.
Let's prove that the whole algorithm works in $$$O(n \cdot \sqrt{n})$$$ (we assume that $$$O(n) = O(m) = O(q)$$$).
We can note that the number of iterations is equal to the number of edge direction changes. Let $$$x_{i}$$$ be the number of salary revisions of the $$$i$$$-th employee. Edge $$$(u, v)$$$ changes its direction at most $$$O(\min(x_{u}, x_{v}))$$$ times. Let $$$S_{i}$$$ be the subset of vertices $$$v$$$ such that $$$x_{v} \geq i$$$. We can note that the sum of minimums over the edges is equal to the sum of the numbers of edges in the induced subgraphs $$$S_{i}$$$. For each subgraph, the ratio between the number of edges and the number of vertices can't be more than $$$O(\sqrt{n})$$$. Also, the sum of sizes of the $$$S_{i}$$$ is exactly the number of queries ($$$O(n)$$$), so the sum of minimums is at most $$$O(n \cdot \sqrt{n})$$$.
How to solve C (div 2)?
More specifically, is there a better way to do it other than try every possible arrangement?
I don't know if my solution is perfect, but this was my thought process:
Sounds like the model solution. :)
I put values from 1 to n in decreasing order of degrees of nodes, and then if there is a node without a value, I brute forced its value. (and that was when I realised I could have just brute forced the value of all the nodes...)
Were Div 1 E1 and E2 based on Hall's theorem?
Works till n=5. Tried it for n=6, and now I have been trying to fix my PC for the last one and a half hours...
The test cases of Div2 problem B are quite strong.
I don't agree about the test cases, but the pretests are strong.
Pretest cases should be weaker, allowing participants to hack incorrect solutions.
suck contest
MikeMirzayanov please fix
Solved problem E and could not press the submit button in the last second
Waiting for the system testing to finish and be able to submit my solution,
If it is correct then I am gonna kill myself
Hope your solution is not correct.
Did you check it?
Yeah and I am still alive and blue XD
Am I the only one who thought Div. 2 E was about finding the sum over all pairs??? And didn't notice the ancestor part :(
The setters underestimated my dumbness and didn't put this in neon in the statement. I ended up losing more than 1 hour until I found out that only ancestors count.
At first, the full input needs to be read... if the code can't read the full input, how can the problem be AC?! Whether the answer is correct or not is a lower priority... I got 50 negative for this in the hacking phase... ****
No one demands that solutions read the full input. If there is already enough information to output the answer and finish, that is still okay.
Yeah... learned that today... but it cost me.
When can I submit my code? Is it in system testing now?
What is the answer of D2-D for case -> A = 5, 4, 1 and B = 5, 5, 5 ?
0?
Why can't we take the complete set — 1 and 4 obviously cannot dominate 5, and 5 cannot dominate in the presence of both 1 and 4. Is my reasoning flawed?
Actually it is equivalent to the second testcase:
3
1 2 3
1 2 3
5 will think it is better than 4 alone, and will think it is better than 1 alone. So, 5 will think it is better than everyone else in the group.
So for the set {6,6,1,1} we can insert 1 or 2 or 4, but cannot insert 3 because it will think it is better than both 1 and 6. Right, you cleared it up, thanks!
0
0
[deleted]
If I solved a problem after the contest is done and I want to check if it is correct, should I wait until system testing is over?
Yes.
So, is it sufficient in d1C to simply store the list of neighbors with greater value for all nodes? I couldn't come up with any testcase having complexity more than $$$O(q \sqrt m)$$$ for such a solution, so I submitted it. Any counter-examples?
I have the same solution, but came to it in a slightly different way. I say that the edges are directed from highest to lowest, and sometimes we want to flip them. Suppose $$$n = m = q$$$ (for simplicity), and it seems that the number of such flips will be $$$O(n\sqrt n)$$$.
The proof is as follows. Consider $$$w_v$$$ as the number of occurrences of $$$v$$$ in the queries. The vertex is "light" if $$$w_v < \sqrt n$$$, and "heavy" otherwise. Now the following is true: the edge $$$u \rightarrow v$$$ will be flipped at most $$$\max(w_v, w_u)$$$. So, the edges with at least one "light" vertex at the end will be flipped no more than $$$O(\sqrt n)$$$ times. Now consider the edges between two "heavy" vertices. Consider we just removed the "light" vertices. So, now our graph has $$$O(\sqrt n)$$$ vertices, so we will flip at one query no more than $$$O(\sqrt n)$$$ such edges. So, the amount of flips is $$$O(n\sqrt n)$$$.
Let's prove the complexity. Let's say that nodes with degree less than K are light and the others are heavy. Updates of light nodes add no more than nK. Consider one heavy node. Between two sequential updates of it, the number of neighbors updated by the second one is no more than the number of queries between the updates, because all these nodes should have been touched (otherwise nothing changes for them). Thus, all updates except for the first one sum up to n for any node. The first updates are no more than m in total.
Thus, if we set $$$K=\sqrt{n}$$$, we get $$$m+n\sqrt{n}$$$ in total
I don't know how to prove it (or even if it is correct), but I think complexity of that strategy is $$$O(n + m + q)$$$ amortized. Do you know a test that proves it wrong?
Full graph on $$$\sqrt{m}$$$ vertices, and queries 1 2 $$$\ldots$$$ $$$n$$$ 1 2 $$$\ldots$$$. Each query works in $$$\sqrt{m}$$$ time.
Yes. Suppose the complete graph of $$$n$$$ vertices and $$$\frac{n\cdot(n-1)}2$$$ edges. Also there are $$$q = n^2$$$ queries $$$n\ n-1\ \dots\ 1\ 2\ 3\ \dots\ n-1\ n\dots$$$. The number of flips here is $$$O(n^3) > O(n + m + q) = O(n^2)$$$, because on each block of $$$n$$$ queries we flip all the edges.
Let's look at the $$$\sqrt{m}$$$ consecutive queries. We will change each of at most $$$m$$$ edges not inside this group of vertices at most once + at most $$$\sqrt{m}$$$ edges inside this group of vertices for each query, so total complexity is $$$O(q \sqrt{m})$$$
Another way to prove it is to consider a potential function. Call a node heavy if it has at least $$$\sqrt{m}$$$ edges. Direct each edge from higher node to lower node. Let the potential be the total number of edges to a heavy node.
When we access a light node, we change the direction of at most $$$\sqrt{m}$$$ edges and do at most $$$\sqrt{m}$$$ work, so we use at most amortized $$$O(\sqrt{m})$$$ time.
When we access a heavy node, say we do $$$k$$$ work. Then the potential function decreases by $$$k$$$, but can increase by at most $$$\sqrt{m}$$$ (since there are at most $$$\sqrt{m}$$$ heavy nodes). Therefore the time used is $$$O(\sqrt{m} - k + k) = O(\sqrt{m})$$$.
How did you solve Div2 C (Anadi and Domino)?
Try all possible permutations of [1-6] on the first 6 vertices. Also test all possible values for the 7th vertex.
Iterate through the edges and maintain a set of already used dominoes. Return the size of the set.
I just tried 6^7 variants of coloring vertices and tried to add as many dominoes as possible with a simple DFS/BFS, got AC.
First time learning that shifting an unsigned long long by 64 bits is undefined behavior...
Is it? How does it work? Is shifting by 63 bits ok?
Yup. It's only defined when you're shifting by [0, (size in bits) - 1] bits*.
In practice (but you should never, ever, ever rely on this), The Way Our Computers Do It™ is that the processors ignore all but the five least-significant bits of the shift count (or six if we're shifting 64-bit values): ideone.
[*] Disclaimer: small integer types will usually be promoted to int before the shift, so it's totally possible to shift a short 20 bits to the left. Unless int overflows, of course.
I was scared by all the "system test failed" solutions of div2-C. I was completely sure that my solution with no adjacency list + 7 nested loops + (I don't know what I did to save it from TLE) would fail. Surprisingly got Accepted. 61168827 (Weirdest solution ever)
Why can't we choose all three students in the second test case of Div2 D, i.e. 01, 10, 11 (their binary representations)? None of them is better than the other two.
11 is better than 01 and 10.
Can I get full input for 8th test of d2 D?
congratulations codelegend
I got a message from system that my solution 61138460 for the problem 1230A significantly coincides with solutions nafiscode/61138460, tourist_plus_kan/61141927.
It was really a coincidence. I solved the problem by myself. I didn't take help from anyone. I didn't publish the code anywhere.
Are you sure you didn't publish your code? With wrong settings on ideone anyone can see your code.
Yes, I used ideone. I always use it. But I didn't share my source code link with anyone. Then how could anyone see my code??
If you left default settings for visibility (which is "Open"), link to your code is published in the "Recent codes" page.
Why did I use the doubling algorithm (O(n log(n) * log(max(a[i])))) to solve problem B (div 1) and still get TLE?
I looked at your last contest submission: 61172578 and I found a bug that worsened your time complexity to $$$O(n^2)$$$ or something a bit worse. In solve, you've got an incorrect condition testing whether you can use a jump-pointer without changing the gcd. It's not enough to compare your current gcd with the gcd of values on a jump-pointer (why?).
Your submission with this condition fixed passes in 1s/4s: 61238834.
Thank you very much. I realized that I should use gcd(gd[j][i], g) for the check instead of g, because there may be a tree chain with all values 0. Thanks again for your help.
Really amazing contest! Enjoyed a lot. Hope you make another one soon :D
Can anyone please explain the "check all possibilities" solution of C? Actually, I don't understand the part when n > 6. Can you explain a bit for n > 6? An example would help me understand. :) Thanks :)
As we know, if n <= 6 we can build a complete graph, so we just need to place all of the dominoes.
And what about n == 7: you just need to try permutations, taking 6 of the 7 vertices to form the complete graph (the n <= 6 case).
And for each of the permutations you iterate, mapping the unused vertex to one of the 6 taken vertices, and count the number of dominoes which haven't been placed in the graph.
example: n = 7 m = 9
So if we just take vertices 1-6 and ignore vertex 7, no duplicate dominoes will appear (complete graph).
And for vertex 7 you can map it to a vertex in (1-6), but if you map it to vertex 6, only 8 dominoes are used, since domino (6,2) has already been used.
And if you map vertex 7 to 1, you can use all of the dominoes.
here is my solution: https://codeforces.net/contest/1230/submission/61308257
Problem sets from Polish people are like Belgian chocolates :D
Problem sets from Belgian people are like Polish chocolates.
I am not able to understand the case of n > 6 in Div 2 Problem C... Can anyone explain it to me in detail?
Assume that each side of the domino denotes a color, so basically you need to find a coloring of the graph which covers the maximum number of edges from the given set of 21 dominoes. You can do this by checking all possible colorings. Check out my implementation here: https://codeforces.net/contest/1230/submission/61163475
I understood the problem... but I am not able to understand the implementation, that is, the else statement in your code... Can you explain to me how you used the map and the iteration?