I hope you enjoyed the problems! Implementations will be added soon.
UPD: Implementations are added!
Tutorial
Tutorial
1450C1 - Errich-Tac-Toe (Easy Version)
Tutorial
1450C2 - Errich-Tac-Toe (Hard Version)
Tutorial
Tutorial
Tutorial
1450F - The Struggling Contestant
Tutorial
Tutorial
1450H1 - Multithreading (Easy Version)
Tutorial
1450H2 - Multithreading (Hard Version)
Tutorial
Thanks for the superfast editorial
Too difficult contest :(
I always fail B. Did C2 but failed B yet again. The cost of getting C2 instead of D was also not worth it.
I didn't like C1, C2
I didn't either during the contest, but after reading the editorial I loved them. Although I think it would have been better to replace them with D.
It took me about 1 hour to solve C1. Right after I solved it and it passed pretests I was quite content, but then I came to the conclusion that my solution was wrong. And right now it turns out it was correct after all. C1 is quite a confusing task.
This contest was not for those below Pupil, I guess :). The questions were tough (for me at least), but they were good and it was a good learning experience. Thanks for the superfast editorial.
I feel like A and B are still pretty easy, but C is quite hard :(
I felt C1 was fine, but C2 was pretty tough.
Problem C is quite similar to this problem: 1438C - Engineer Artem
Also it's quite similar to problem 764D - Timofey and rectangles
yes. I used this problem as reference
can you add who were the authors of each problem as well?
Up
C is hard this time lol
It is a beautiful problem. You can just try to brute force one of these 3 patterns over the grid, and one of them will always satisfy the condition.
The implementation is also easy.
Same idea, but I really wasn't sure the conclusion was right. I was even pretty sure it would fail the system tests, but it really works. What a surprise :)
Yes, that's what I did for C1, and it wasn't an easy observation to make at first. For example, the first time I submitted, I only used one of the three patterns, and of course it failed lol.
Overkilled B and wasted 1 hour
I was solving a completely different problem for B until the last 20 minutes, because I sadly ignored the "attract all points" statement. Let's say it is possible to "choose" which points a point attracts, rather than attracting all of them. The problem then becomes similar to making all elements equal in a 1D array, but here in a 2D plane, with the k parameter. How would we solve such a problem?
Accidentally solved C, but the proof is really hard to think out
I was able to come up with the strategy to solve C1 but with only few minutes left on the clock. Gave all my time to D.
Having the idea but not having the time is really sad
Even worse is when you start implementing the solution ~45 minutes before the end and still can't make it in time
C1 is like asking for some formula. Either you've seen that one before and can remember it, or not.
No, it isn't. I hadn't seen C1 before, and there are a lot of problems like it.
This is actually a puzzle: sometimes you'll be able to solve it, sometimes you won't. I just needed a simple observation for C2 and I somehow missed it. That's how it goes.
Coloring arguments like this one are very common in math olympiads. That may be where "saw before" comes from.
Even I don't know how I solved D. Just kept adding a bunch of if's and miraculously AC'd.
Beautiful Editorials though :)
I also don't know how I solved D, just added binary search and got AC :)
I used next smaller index. Was gonna lose like -50 rating, D saved the contest for me.
Could you please explain your approach? Thanks in advance :)
I got TLE during the system test for D even though my solution was O(nlog^2(n)) where n<=1e5. Feels very sad man! I hope next time the pretests for TL will be stronger.
During the contest, your code's time was 1824 ms, so with this information you could easily tell the code needed some more speed.
The point distribution suggests that doing both C1 and C2 was as difficult as doing D, but I felt like D was muuuch easier than even C1.
Problem C WAS confusing.
Please give my brain cells back nvm they are already dead.
Can anyone give some suggestions for when my thinking comes to a dead end?
In D, I spent all my time thinking about how to perform min on neighboring elements efficiently, like going from [1,5,3,4,2] to [1,3,3,2], eager to optimize O(n^2) to O(nlogn). I also thought about properties like increasing or decreasing sequences to speed things up.
But when I saw the editorial, it took a totally different approach.
I sometimes run into this on harder CF problems and, at least in my case, the best solution is to just try a completely different approach to the problem. If this different approach is actually correct, congrats: all that's left is to implement the solution. Otherwise, you can go back to your original ideas with a fresh mindset, which means you can notice things you previously hadn't thought about.
One thing I realized is that if we want to gain more information (like simulating the whole process in D, instead of just deciding whether the result is a permutation), it usually requires a higher time complexity. So it may be a good idea to rethink whether we are recording redundant information, and what the minimum necessary factors are that matter.
One of the best contests.
Thanks for the beautiful contest. I think I enjoyed today's contest the most out of all the ones I have attended so far.
Deleted
That's not a sparse table implementation. It's a segment tree. I've a feeling you've probably copied code from somewhere.
I couldn't solve C1 and C2 in contest but after seeing the editorial, I just loved their solution. Two of the most amazing problems I've ever seen on Codeforces. Thanks Monogon
.
My alternative solution to D:
For each element in the array, find the longest contiguous subarray containing it and surrounding elements which are no smaller than it. If that subarray has length $$$L$$$, we know that this element's value occurs as a compressed value for all $$$k \leq L$$$. Then to determine if a $$$k$$$-compressed array is a permutation, we want to know if all elements in $$$[1, N - k + 1]$$$ appear somewhere with $$$L \geq k$$$. It can be handled simply by precomputing the max $$$L$$$ for each distinct element and computing prefix mins.
There is a binary search approach as well: only the first character of the output (k = 1) can be '1' out of pattern; apart from it, we binary search the transition point for the rest of the array.
Alternative approach to D:
Note that if some value $$$a_i$$$ appears in a $$$k$$$-compression, it will be present in the $$$(k-1)$$$-compression. Now for every $$$a_i$$$, we need to compute the greatest $$$k$$$ such that $$$a_i$$$ is present in the $$$k$$$-compression. Take the greatest $$$l < i$$$ such that $$$a_l < a_i$$$, and similarly the least $$$r > i$$$ such that $$$a_r < a_i$$$. In other words, we find the closest two lower elements to the left and to the right. Then $$$a_i$$$ will appear in $$$k$$$-compressions for $$$k \in \{r - l - 1, r - l - 2, \dots, 1\}$$$. Finding $$$l$$$ and $$$r$$$ can be done by adding the indices $$$i$$$ in increasing order of the values $$$a_i$$$. We keep the indices in a set; now we can quickly find the next greater/lower index.
Now we can iterate through every $$$k$$$-compression by going from $$$n$$$ to $$$1$$$ and add each element, that now appears for the first time. We can easily check if the current $$$k$$$-compression is a permutation by keeping it as a set and checking its size, its minimum element and its maximum element.
Can you explain to me how to use a set to get l and r?
Thanks in advance
Sort the elements of the input and make sure you keep their original indices. For example, keep a vector<pair<int,int>>, where first is the value and second is the original index, and sort this.
Now add indices of values in increasing order to the set. To find the next greater, call upper_bound on the set. To find the next lower, you can keep a second set with reverse order. Calling upper_bound on this one finds the next lower.
From the way you add items to the set, you know that the index you find belongs to an element with a lower value.
I had an alternative $$$O(n)$$$ solution for D that uses the "next smaller" problem (see my submission). It turned out to be an overkill but maybe it's of interest for some :)
Consider our input array $$$A$$$. First observe that for any window size $$$k$$$, an integer $$$i$$$, $$$1 \leq i \leq n$$$, will be in the resulting array if and only if there is some subarray of $$$A$$$ with size $$$\geq k$$$ where $$$i$$$ is the minimum.
Now, we can compute the array $$$B$$$, where $$$B[i]$$$ is the maximum value of $$$k$$$ for which $$$i$$$ will appear in the resulting array. So for the example array $$$A = [1, 3, 3, 3, 2]$$$ the corresponding array $$$B$$$ is $$$B = [5, 4, 3, 0, 0]$$$. We can compute this array in $$$O(n)$$$ time by first pre-computing for every index $$$j$$$, the first index to the left and first index to the right that the number is smaller than $$$A[j]$$$ (if you don't know how to compute those indices, see this article).
Finally, for a particular $$$k$$$, notice that the array resulting from the window-min operation will be a permutation if and only if $$$B[i] \geq k$$$ for every $$$1 \leq i \leq n - k +1$$$. Why? It's clear that if it is not fulfilled, the array will not be a permutation. On the other hand, if the condition holds, we'll clearly have at least one of each integer between $$$1$$$ and $$$n - k + 1$$$. At the same time, our array will have exactly $$$n-k+1$$$ integers, therefore it'll have exactly one of each integer.
Therefore, we can compute prefix "sums" (using min instead of addition) on the array $$$B$$$, which gives us an array $$$C$$$ that allows us to check whether the condition above holds in constant time: $$$k$$$ will result in a permutation $$$\iff$$$ $$$C[n - k + 1] \geq k$$$. In our example, we'll have $$$C = [5, 4, 3, 0, 0]$$$ and checking the condition yields the output $$$00111$$$.
Overall, for each step we required linear time so the time complexity is $$$O(n)$$$.
I was thinking along similar lines, maintaining the previous smaller and next smaller elements and finding the maximum length for which an element stays the minimum. But I couldn't connect the dots on how to build the solution from it.
Thanks for the explanation!
Yeah, I was thinking of the same approach. I request Monogon to please add this as an alternate solution.
Thanks for your explanation :)
The moment when you use FFT instead of trivial Vandermonde's identity (https://en.wikipedia.org/wiki/Vandermonde%27s_identity) is the moment you should realize people can be too experienced
How does FFT help compute what we had to compute using vandermonde? I'd love to understand
FFT lets you do the following:
You are given two sequences: $$$a_0, ..., a_n$$$ and $$$b_0, ..., b_m$$$. Compute sequence $$$c_0, \ldots, c_{n+m}$$$ such that $$$c_i = a_0b_i + a_1b_{i-1} + \ldots + a_{i-1}b_1 + a_ib_0$$$.
This is exactly what I needed here, where $$$a_i = {n \choose i}$$$ and $$$b_i = {m \choose i}$$$. If I thought for a little longer I would realize that $$$c_i = {n + m \choose i}$$$ instead of computing it using FFT.
I had another idea for D. It seems to have passed the tests but correct me if I'm wrong.
What I tried to do is to find, for each value x (1 <= x <= n), the maximum value of k such that there exists a segment of length k where x is a minimum value. Let's call this maximum value k_x.
So for example for this case: 1 ? ? 2 ? ? if x = 2 and ? >= 2 then k_x = 5
Of course it is possible that there is more than 1 number equal to x in the array. In this case k_x is equal to maximum of k_x of all positions where a_i = x.
Calculating k_x for all x can be done with a stack.
Then to create answer we do the following:
Let's iterate over x from 1 to n. If x is not present in the array, we break the loop. Otherwise, let's calculate what k must be if we want to create a permutation from 1 to x: it is equal to n - x + 1. Then let's calculate the greatest value of k that can be used: it is equal to min(k_j) over 1 <= j <= x. So if min(k_j) >= n - x + 1, the answer is 1, else it is 0.
Here is my code
Edit: Someone had already written about this approach while I was writing this comment. You can refer to this link to see it
To those who don't know anything about sparse tables or segment trees yet and wanted to solve D, my approach:
A set of observations to start with:
0) The k-th compression is the 2nd compression of the (k-1)-th compression. Try proving this by taking some sample cases :)
1) We find the lowest number that is repeated in the array. To do this, just sort the array and check whether a[i]==i for 1<=i<=n. If it's not, break at that i. Benefit: the integers above this one can never form a permutation, since this number is a duplicate and smaller than them. Store 0's in the answer string.
2) If we have an integer a[i] such that a[i-1] > a[i] and a[i] < a[i+1], it will create a duplicate of itself when we compress, so we need to find the smallest such integer; above that integer, no array can be a permutation, since it would have duplicates. Store 0's in the answer string up to this integer.
3) Now we have an array left which has a[i]=i for 1<=i<=n and the array is free of an a[i] such that a[i-1]> a[i] and a[i]<a[i+1]. The number of elements left in the array is the number of 1's you would have in your answer string.
Feel free to check my code as well here
So many alternate ideas to D in the last few comments
Can somebody please explain the editorial version of problem D
For the test case n = 3, array = [2,2,1], as per my interpretation: the 1-compression is [2,2,1] (not happy), the 2-compression is [2,1] (happy), and the 3-compression is [1] (happy).
But as per editorial we will fix l = 1,r = 3 initially and we iterate over i = 1,2,3:
so for i = 1 , as arr[3] is 1 ,so we decrement r to 2 and min(a[1],a[2]) is 2 which is i+1
and next for i = 2 ,as arr[1] is 2 , so we increment l to 2 but min(a[2]) which is 2 != i+1 , so we are failing at this i,
So as per the editorial, k > 2 are happy and the rest are unhappy, but we know the 2-compression is happy. Did I misunderstand something? Please help me out.
Thanks in advance :)
I did the same thing using two pointers — $$$O(n)$$$.
We start with two pointers $$$left$$$ and $$$right$$$, initially pointing to either ends of the original string.
Let $$$n$$$ be length of original string
The substring of length $$$n$$$ will be a permutation only if it has min element $$$1$$$. If this is not true, all shorter substrings before it will also fail the condition.
Now, after this, there exist two substrings of length $$$n-1$$$ each. One of them must have min element $$$1$$$ and the other $$$2$$$. This is satisfied only when $$$1$$$ is at either endpoint of the string (with only 1 occurrence). Had it not been so, both would have min element $$$1$$$ (think about it).
A similar pattern follows for $$$3,4..$$$ and so on.
So, as long as we keep finding single occurrences like this, we keep advancing the corresponding pointer ($$$left$$$ or $$$right$$$ end).
It can also be observed that after a particular number, if we stop obtaining a permutation, it is guaranteed that no shorter substring can be a permutation (except substring length $$$1$$$, as the entire original string may be a permutation itself).
So just keep searching for the required element at either end until you stop obtaining permutations. Then break and check for length $$$1$$$ separately. Assign $$$0$$$ and $$$1$$$ to the $$$result$$$ string along the way (starting from the back).
string solve(){ // CALM DOWN : — )
ret(""); }
It's the same as explained in the editorial.
you mean C1? min condition is not correct.
Too difficult a contest. I spent a lot of time on C1, but its solution is very nice :)
I expected some text art in some of the test cases in either C1 or C2. Alas!
In problem D, wasn't the sample output a big hint? In my case, I got the hint that except for k=1, the sequence of 1s will be contiguous. Thus I solved k=1 separately and did binary search for the rest, and it got accepted.
On Codeforces I find that the samples often give a lot of hints, compared to CodeChef short contests. That's why most people's accuracy on CodeChef is lower than on CF.
Can you explain the binary search approach?
Just check k=1 separately. Then you can binary search from k=2 to k=n. To check a particular k, you can do what is described in the statement; range minimums can be found using a segment tree. The complexity will be $$$O(n \log^2 n)$$$. You can prove why it works along lines similar to the editorial.
Interestingly enough, it gets TL if you use multiset to squeeze the array inside binary search
Yeah, this is because of the high constant factor of multiset, set, and map. A segment tree, I guess, is faster; the only overhead is the function calls.
I solved D using Binary Search.
Can you explain?
The main observation is that if we are able to find a permutation for a particular K, then for every K+1 to N there always exists a permutation. So to find the threshold K we can apply binary search.
tangodown
For C, what if the color diagonals run the other way, going from top left to bottom right? Shouldn't there be a check for that case as well?
Either way will work, the proof is the same.
Can someone provide an example for which the greedy in C1 fails? The greedy being: swap an 'X' contained in the most 'X'-triples with 'O' until there are no more 'X'-triples. 100579257 is how I tried to solve the problem.
I hate constructive algorithms
Thanks for the constructive feedback
.
Bro, did you find your flaw? I was trying but couldn't find a test case you were missing. But I'm pretty sure your code fails on cases where we can't take the same coloring for both the X's and the O's.
Yeah, I found a case in which I was changing more than k/3 tokens.
Can you give that case.
Yeah sure...
Output:
There are 8 changed tokens, but only (21/3) = 7 are allowed.
Can someone explain problem D?
There are editorials from the author and also from other participants' comments. Try to read them.
I solved D using binary search. Take k=1 as a special case and binary search from 2 to n. Did anyone else do this? It passed, but is it a correct solution?
For problem D, if the check fails on index i, then shouldn't the answer be 1 for n-i+1 <= k <= n ? Consider 1 5 3 4 2. Check fails for i=3 and k=3 gives a permutation.
Thank You Monogon for such a nice contest, really loved the problems
Errichto didn't intimidate Monogon from taking his top contributor spot on Codeforces!
Even though I didn't do well in this contest, I'd like to say that the problems were of great quality.
Kudos to all the authors!
lbyyds
.
Hello! I found this testcase of problem F
The correct answer is 6, but I can't see how to reach it. Can anybody help?
Cut like this:
Then rearrange to:
Got it, thanks!
I can't understand the tutorial for E. I thought a cycle with positive weight would also give a contradiction. For example:
Suppose each edge in the cycle above is 1; then the weight of this cycle is positive. In other words, I can say: u1 > u2 > ... uk > u1 (?)
I think any cycle that exists must be 0-weight. Can anybody explain what I got wrong? T.T
Since it is a bipartite graph, the cycle has an even length.
If all edges are directed along the cycle, there will be a negative cycle (all with weight -1).
Otherwise, you can direct them in an alternating way, so $$$w_2 = w_1 + 1, w_3 = w_2 - 1 = w_1, \dots$$$. So there are no contradictions.
UPD: If the sum of the cycle is positive, you can direct some undirected edges so that the sum is $$$0$$$.
Thank you! I got it. But "The property of shortest paths tells us...": what's the property the author referred to? (If you think I'm too weak to understand E, please tell me and I will skip it :D)
I think it is $$$d(u) + w(u, v) \ge d(v)$$$, where $$$d(u)$$$ is the distance from $$$S$$$ to $$$u$$$.
From this you can see $$$|d(u) - d(v)| \le 1$$$.
Monogon Same problem as C1 in Hackerearth Contest 2 days back. Link
Maybe it has similar ideas, but it's certainly not the same problem.
it's too hard to wait for 12 days without a contest :(
Can anyone explain why, in 1450H1, the final equation has a factor of 1/(2^F) instead of 1/2? I would suppose that since each appropriate parity subset of F is a bijection to a valid colouring with f(c) = 1/2 * |x-{size of subset}|, the constant factor is 1/2.
There are $$$2^{F - 1}$$$ colorings.
Thanks, took me too long. For the others:
There are a total of $$$2^{F-1}$$$ possible colourings, since the last colour is fixed by the rest to maintain parity. We want the expectation / average, so that's why we multiply by $$$\frac{1}{2^{F-1}}$$$
How would one approach this slightly modified problem B?
Instead of attracting all the balls (as in the original problem), what if we can deselect any number of balls to attract in each operation?
Can anyone explain how one gets the idea of (i+j)%3 for problems C1 and C2?
Working on small dense examples by hand, you might come across this pattern of every third diagonal.
I took a different approach to problem E.
I start by assigning $$$a_{src}=0$$$ to a "source" node. According to this, every remaining node $$$i$$$ is associated with a set of possible values for $$$a_i$$$. For example, consider the following graph:
Taking node 1 as the source, the set of possible values of $$$a$$$ associated with node 1 = {$$$0$$$}, node 2 = {$$$-1,1$$$}, node 3 = {$$$-2,0,2$$$}, node 4 = {$$$-1,1,3$$$}.
In fact, I can prove (using induction) that each such set consists of either consecutive odd numbers or consecutive even numbers. So I can conveniently represent it as a pair $$$[i,j]$$$ denoting the range of the associated set {$$$i,i+2,i+4,...,j$$$}, which is stored in the vector $$$range$$$.
Now I start a dfs at the source, for an edge between nodes $$$u$$$ and $$$v$$$, set:
If there is already some value of $$$range$$$ associated with $$$v$$$, take the intersection of the new value with the original value. If this intersection is $$$\phi$$$, the society cannot be capitalist. If the intersection is not the same as the original value, update $$$range[v]$$$ and call dfs on $$$v$$$, since the $$$range$$$ of other nodes may also need updating.
Repeating this process by keeping each node as the source iteratively, there will be at least one such source which gives the maximum income inequality as $$$\max\limits_{1 \leq i \leq n} (range[i].second)$$$. In such a case, setting $$$a_i=range[i].second$$$ for all $$$i$$$ gives a valid construction of the answer.
My submission: 100720392
EDIT: I have now concluded that the complexity of dfs is $$$O(nm)$$$.
For any node, the maximum range that can be assigned to it is $$$[-n,n]$$$. Every single time this range is updated, it must decrease by at least 2, since we always take the intersection of the original range and new range. Thus, each node can be updated at most $$$O(n)$$$ times.
Furthermore, every single time we update a node, we iterate over all its adjacent edges. If the degree of a node is $$$d_u$$$, the dfs will perform at most $$$n*d_u$$$ steps for node $$$u$$$. Summing over all nodes, $$$\sum n*d_u = 2nm = O(nm)$$$.
Thus, the solution runs in overall $$$O(n^2m)$$$ time which is just enough (for me, a privileged C++ user) to pass the time limits. But if you disagree, please feel free to hack the solution if you can. :P
The ranges have size $$$O(n)$$$ and shrink every time dfs is called on it. An edge is processed every time one of its endpoints has a dfs call. This way, every edge is processed $$$O(n)$$$ times, so it seems your complexity guess is correct.
Yes!! Thank you!
Sorry, it seems I was in the middle of editing my comment just as you posted this one (xD). Thanks a ton though!
Nice Problem C!
In the non-optimized version of problem G, where does the 3^C come from? I had trouble with bitmask complexities.
For each bitmask, you iterate over only its submasks, not all masks. For each bit position, the pair (mask, submask) has 3 possibilities: (0, 0), (1, 0), and (1, 1). Therefore there are 3^C bitmask-submask pairs.
Thanks for the fast response!
Also, I would like to ask about the unusual memory limit of G: what kind of solutions did the author want to block?
There is a solution with subset sum convolution that doesn't use the observation about overlapping ranges.
For problem B (Balls of Steel), why can the answer only be 1 or -1? Could anyone please explain this to me?
I proved my claim in the editorial. Basically you assume for contradiction that there is a solution with 2 or more moves, and show that it's impossible.
I have a solution for problem D. I used segment trees to find the largest length where a[i] is the minimum, and I did this for all i from 1 to n. Then I maintained the largest length for each number from 1 to n such that the number is the minimum in its range, using the previous information. Finally, I used a range prefix sum technique: I knew that if value k has a maximum length of a, then it will only be able to be used in 1, 2, 3, ..., min(a, n - k + 1)-compressions. So I will have a prefix sum array where I increase ps[1] by 1 and ps[min(a, n - k + 1)] by -1. Then in O(n), I compute my prefix sum array. Now for each i, all I need to do is make sure that ps[i] = n - i + 1 in order to have a 1 in the string at the ith position. Otherwise, there's a 0 at the ith position.
153769266