I hope you enjoyed the problems! You can give feedback on each problem and also choose your favourite below. This could help me and other problemsetters understand what type of problems you want to see on Codeforces.
Hint 1
Hint 2
Hint 3
Tutorial
Feedback
Hint 1
Hint 2
Hint 3
Hint 4
Hint 5
Tutorial
Feedback
Hint 1
Hint 2
Hint 3
Tutorial
Feedback
Hint 1
Hint 2
Hint 3
Hint 4
Hint 5
Hint 6
Tutorial
Feedback
Hint 1
Hint 2
Hint 3
Hint 4
Tutorial
Feedback
Hint 1
Hint 2
Hint 3
Hint 4
Hint 5
Hint 6
Hint 7
Tutorial
Feedback
Now you can choose your favourite problem from the ones you solved during the contest or upsolved after it.
UPD: Don't forget to check out ak2006's video tutorials on his channel.
Thanks for the fast editorial and the wonderful round! This is the fastest editorial I've ever seen.
A little question about avoiding accidents:
In this round I finished coding my solution to problem E in the last four minutes, but when I tried to submit the code, I found I could not. On some pages I couldn't click the 'Submit' button, and on others I was told 'Put your source into the textarea or choose the file' even though I had already chosen the file. It even returned a 403 error on one attempt.
I think there is something wrong with my network, so I want to ask: have you ever encountered such a situation? How can one avoid such a desperate situation?
Forgive my poor English.
You are not alone. I also get 403 errors and white pages that don't reload... I started noticing this about a year ago, more or less.
I also can't click on submit sometimes and during hacking, each submission takes around 30 seconds to load which is annoying.
:holyfrick: That was lightning fast... Thanks for the hinted editorial!! It helps a lot in upsolving :)
Can someone give reasons for why D was a bad problem?
couldn’t solve = bad ig. I liked it, felt it was interesting
Thanks to Codeforces for providing a fast editorial. I was waiting for a solution to problem D.
Valentine's Day with Codeforces has its own joy.
I spent Valentine's Day with Codeforces but failed the round. That is unrequited love((((
Can anyone tell me why my merge-sort & DSU solution to task C got TLE with code1 146412015, and why after removing the array "rk" (code2) 146425315 I got the pretests passed? :(
Don't use "memset(rk,0,sizeof(rk));" for each test case; if you do so, the time complexity of your code will be O(T*sizeof(rk)).
But if you use "for(int i=1;i<=n;i++)rk[i]=0;", the time complexity will be O(sum(n)), which passes the pretests as well.
(sorry for my poor English)
Got it, thanks!
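The advice above (reset only the prefix you actually used, never the whole global array) can be sketched like this; a minimal sketch, assuming a global helper array, with MAXN and the function name being my own, not from the linked submissions:

```cpp
#include <bits/stdc++.h>
using namespace std;

const int MAXN = 200005;
int rk[MAXN];  // global helper array reused across test cases

// Reset only the first n entries before each test case, so the total
// reset cost over all test cases is O(sum of n) instead of O(T * MAXN).
void resetForTestCase(int n) {
    for (int i = 1; i <= n; i++) rk[i] = 0;
    // avoid: memset(rk, 0, sizeof(rk));  // this is O(MAXN) per test case
}
```

The memset version is correct but touches the whole array every time, which is exactly the O(T*sizeof(rk)) cost described above.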
Where is the editorial of problems A & B? Please check the tutorial sections of A & B.
For B: push all odds into one array and all evens into another array. If both arrays are sorted, print YES; otherwise print NO. The observation is that if you start from a sorted array, perform these swap operations, and consider all arrays reachable this way, then in every one of them the even numbers appear in sorted relative order and the odd numbers also appear in sorted relative order.
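The check described above can be sketched as follows; a minimal sketch, with the function name being my own:

```cpp
#include <bits/stdc++.h>
using namespace std;

// The allowed swaps only exchange an odd value with an even value, so the
// relative order of the odd values and of the even values never changes.
// Hence the array can be sorted iff the odds (in order of appearance) and
// the evens (in order of appearance) are each already non-decreasing.
bool canSort(const vector<int>& a) {
    vector<int> odd, even;
    for (int x : a) (x % 2 ? odd : even).push_back(x);
    return is_sorted(odd.begin(), odd.end()) &&
           is_sorted(even.begin(), even.end());
}
```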
Thank you to all of you. The contest was so fun and challenging for me. The editorial is lightning speed too!
In F, I thought that for N up to 10'000 O(N^2) wouldn't pass. Looks like I should get better at judging constraints :P
Actually, $$$N = 10^4$$$ means to me that "only" $$$O(N^2)$$$ passes.
The strange limit for $$$n$$$ is for failing a heuristic that works like this: find a wrong solution which finds an answer in $$$O(n)$$$ for a given starting position. Also have a brute-force solution that finds the correct answer in $$$O(n^2)$$$ for a given starting position. Then, run the wrong solution on all positions and the correct solution only on the $$$30$$$ most promising positions.
Unfortunately, I could not find a pattern that would fail this, so we changed the limit for $$$n$$$. My solution runs in 748ms/2000ms.
python IO screwed me for D :(( limits were pretty rough
same solution as editorial, just didn't use FastIO library. unfortunate
Could anyone help me? I am not able to find the editorials; when I click on them, only some text is shown, and I can't find code anywhere.
The text is the editorial. It explains the solution of the problem.
I think this is a simpler approach for problem C:
For every $$$i$$$ that $$$a_1, a_2,...,a_i$$$ is a permutation of $$$1,2,...,i$$$ we have a new connected component.
double check problem D
Ah it's C... my bad. Thank you man.
You mean problem C
Yes, I made the edit.
That does work.
how?
Let us take the example 3 2 1 6 5 4. Here, up to the third index a permutation of 1..3 is present, so we can say for sure that it will be a separate component sharing no edge with 6, 5, or 4. But take another example: 3 1 5 2 4. Here, up to the third index we don't have a permutation of 1..3, which implies there exists a number after the third index that shares an edge with both 3 and 5. Likewise we can count the number of connected components.
solution boils down to
Can anyone help me find out why I got TLE for problem A with this solution: 146366602?
Since I got TLE on it, I tried multiple different solutions with similar logic, but got TLE on all of them as well. (146370431, 146389736, 146400809, 146425393)
I spent more than an hour (or an hour and a half) on problem A and still couldn't solve it, but I was able to solve C in about 10 minutes. This made me feel that problem C was much easier than problem A. (Don't downvote me for saying this)
scanner?
For problems B and C as well, I use the same way to get the input, using Scanner. In those 2 problems, the value of n is a lot bigger than this.

There are 500 tests and n up to 500 in problem A; that is up to 250000 elements. In problem B, for example, there is a special limit on the total amount of n: 50000. Scanner is slow.
Problem C: Inversion Graph. I got wrong answer on pretest 2.
Can you help me find where the problem is, and whether this logic is even right? 146422401
Your merge function has a bug.
should be
Though even after the fix the code will probably TLE.
For C, I got Accepted with DSU while maintaining a set of the maximums of the components in the array's prefix. I believe it works in $$$O(n \log n)$$$ amortized.
After competition, we go 0 with DSU. Sed lyf
For problem C, can't we use DSU? I used DSU and got TLE!!! Can anyone explain why DSU is giving TLE?
Thanks in advance!
My DSU solution passes, you can check it here 146394231
Thank you for sharing the submission. But I didn't get what you did! It would be great if you could explain your approach and how it differs from my solution.
See my explanation here. DSU is modified so that we maintain the maximum element of the set in the array 'm', similarly to the size of the set in the array 'sz'.
You hadn't really used DSU. If you remove all the DSU parts, the code still works.
solve() has $$$O(n^2)$$$ complexity.
Nah, it amortizes a lot. I think it's $$$O(n \log n)$$$, because when there's a bunch of maximums bigger than the currently added value, they all get reduced to one component.
Can this be proven somehow? Even I passed it using this but still have no idea why it works.
Yeah, of course. I will refer to my solution 146394231. DSU technically isn't O(1) but for these constraints let's say it is.
So let's consider the inner while loop. How many times will the operations inside it, which btw take constant time, get executed (not in one iteration of the for loop but over the duration of the whole solve())? For this, let's consider the value of comp_maxes.size(). We see that in each iteration of the for loop it increases by at most 1, so it cannot be greater than $$$n$$$; its size is linear. And in the while loop it can't decrease by more than its size. Said differently, the number of deletions (which equals the number of executions of the while loop) cannot be greater than the number of insertions, which is linear. So the body of the while loop gets executed at most $$$n$$$ times overall, and the cost of solve() is $$$n$$$ times the cost of set::lower_bound(), which is $$$O(\log n)$$$.

Your implementation of DSU has an infinite loop. If s1 = s2 in the unite function, then the value of dsu[s2] becomes positive, and if you try to use find_set on any member of this component, it will never find an answer.
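The amortization argument above corresponds to roughly this shape of solution; a sketch in the spirit of the linked submission rather than its actual code, with names being my own:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sweep left to right keeping a set of the maximum element of each current
// component. A new value x must merge with every component whose maximum
// exceeds x (that maximum forms an inversion with x); pop those maxima and
// reinsert the largest. Each maximum is inserted and erased at most once,
// giving O(n log n) amortized.
int countComponents(const vector<int>& a) {
    set<int> maxima;  // maximum of each component built so far
    for (int x : a) {
        int mx = x;
        for (auto it = maxima.upper_bound(x); it != maxima.end();) {
            mx = max(mx, *it);
            it = maxima.erase(it);  // merge this component into x's
        }
        maxima.insert(mx);
    }
    return (int)maxima.size();
}
```

Components whose maximum is below x stay untouched: every element in them precedes x and is smaller, so there is no inversion with x.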
I used DSU and merge sort. Here is my AC code 146425315, which is O(n log n). Hope it will help you.
Thanks, it helps me a lot.
You are getting TLE not because of DSU, but because of nested for loops.
In my opinion, Problem D is just a TopSort. The solution by using BFS is just implementing the TopSort using BFS, they are the same. Am I right?
I'm not sure you can do it as a topsort directly, because there may be multiple different operations that "unblock" a position. However, you only need one operation to unblock a position, but the topsort will enforce that you do all such operations. Maybe there's a different way to frame it as a topsort, but I could not come up with one.
Hackforces...
To me it's more like SpeedForces...
Shit happens:
For problem E, I don't understand why we need to use a data structure such as a Fenwick tree or segment tree. It seems that once we have the lazy[color] array, we don't need to do "range add" operations; we just need to deal with intervals.

I think when you change a color you should utilize the lazy array and add its value on the interval you are considering.

Can anyone tell me how I got memory limit exceeded on this submission for problem B? I guess I messed something up, but everything seems OK when I look at the code.
If n == 1, you don't read in the only element of the array, which then becomes the n on the next pass. Try cin >> n; inside the if block.

I wonder why my solution for C worked. I counted all indexes where the prefix sum is i*(i+1)/2 (1-based index). Can anyone explain or hack my solution?

You will need this observation: if $$$a_1,...,a_i$$$ is a permutation of $$$1,...,i$$$ then there is no edge from the vertices $$$1,...,i$$$ to the rest. Then, call $$$i_0$$$ the first index satisfying our condition. It is clear that $$$a_1,...,a_{i_0}$$$ form a connected component. Then consider $$$i_1$$$, the next such index. We have that $$$a[1..i_0]$$$ is not connected to $$$a[i_0+1..i_1]$$$ and $$$a[1..i_1]$$$ is also not connected to the rest. So we can conclude $$$a[i_0+1..i_1]$$$ is also a connected component.
So now we need to count the number of indices $$$i_j$$$. Your implementation is correct because the sum of $$$k$$$ distinct positive integers is $$$k\cdot(k+1)/2$$$ if and only if they are a permutation of $$$1,...,k$$$.
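The counting described above can be sketched as follows; a minimal sketch assuming the input is a permutation of 1..n, with the function name being my own:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Since the input is a permutation of 1..n, the prefix a[0..i] contains
// exactly the values 1..i+1 iff its sum equals 1 + 2 + ... + (i+1), and
// each such prefix boundary closes one connected component.
int countComponentsBySum(const vector<int>& a) {
    long long sum = 0, need = 0;
    int components = 0;
    for (size_t i = 0; i < a.size(); i++) {
        sum += a[i];
        need += (long long)(i + 1);  // running value of (i+1)(i+2)/2
        if (sum == need) components++;
    }
    return components;
}
```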
The intended solution for C is really cool.
I solved problem C using sets and a Segment Tree, which is overkill, but can anyone help me calculate its complexity? I think it's O(n * log(n)) but I'm not sure... Here is the link
I think Problem D had weak pretests
Problem C short solution -
Submission — 146445712.
Submission During contest — 146419476.
(The only time it was helpful for me in not knowing a particular topic like Segment tree / DSU / Graphs / Etc.)
Problem C can be solved using sparse table 146448264
Creating a sparse table is actually not required. Since you are always calculating minimums over a suffix, you can precompute all suffix minimums.
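The suffix-minimum precomputation suggested above might look like this; a minimal sketch, with the function name being my own:

```cpp
#include <bits/stdc++.h>
using namespace std;

// One backward pass replaces the sparse table when all range-minimum
// queries are suffixes: suffMin[i] = min(a[i..n-1]).
vector<int> suffixMinimums(const vector<int>& a) {
    int n = a.size();
    vector<int> suffMin(n);
    suffMin[n - 1] = a[n - 1];
    for (int i = n - 2; i >= 0; i--)
        suffMin[i] = min(a[i], suffMin[i + 1]);
    return suffMin;
}
```

This is O(n) time and space, versus O(n log n) for building a sparse table.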
FML
Could someone help me understand problem E please?
I understand that we keep contiguous intervals that have the same value and why the total number of intervals is O(q); I don't understand how we handle a value update query. The only way I see is going through all the intervals that have elements between l and r and individually updating those intervals, which is O(q) per query. Do we have a bunch of segment trees over the intervals? How does that work when the intervals are changing all the time?
Each interval has a color id. Let's keep two arrays: pendingColor[c], the updates pending for each color group, and pending[sid], the update delta pending for a particular segment id. When updating a color, do it in O(1): pendingColor[c] += x. When creating a new segment with a given color, I create it with an offset: pending[sid] = -pendingColor[c]. That way, the updates pending for a given segment are pendingColor[c] + pending[sid].
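The bookkeeping described above can be sketched like this; a minimal sketch where the struct name and map-based storage are my own, not from the commenter's code:

```cpp
#include <bits/stdc++.h>
using namespace std;

// pendingColor[c] accumulates every Add(c, x) ever applied to color c.
// pending[sid] stores an offset so that a segment created later is not
// affected by Adds that happened before it existed.
struct ColorLazy {
    unordered_map<int, long long> pendingColor;  // per-color running total
    unordered_map<int, long long> pending;       // per-segment offset

    void addToColor(int c, long long x) { pendingColor[c] += x; }  // O(1)

    void createSegment(int sid, int c) {
        // cancel out everything already added to color c
        pending[sid] = -pendingColor[c];
    }

    long long pendingForSegment(int sid, int c) {
        return pendingColor[c] + pending[sid];
    }
};
```

The key point is that a color update never touches individual segments; each segment reconstructs its own share lazily from the two totals.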
Please, someone help me find why this code is giving RE? Here is the submission link: https://codeforces.net/contest/1638/submission/146451668
for(int i = 0; i < vpe.size() - 1; i++){
vpe.size() is an unsigned 64-bit integer. If the size is 0, the condition is not i < -1 but i < max(unsigned int64).
PS: I looked at your code and could not find the error, so I tried it myself, and my IDE gave me the tip.
Thanks for the tip! It would be great if you could share the name of your IDE :)
Qt Creator. But I think any modern IDE will have similar functionality.
Hey, can anyone explain: if the constraint of question B is n <= 10^5, then why is there a testcase with n = 86227? Is this within the constraint?
Get your maths right.
Fun fact about problem D (because I tested this round): its original constraint was N <= 500, which let a relatively straightforward solution pass. The solution was "while any changes are possible, traverse the whole matrix and update every 2x2 square whose non-special cells are of the same color". Can you see why this always works in O(N^3)?

It doesn't seem like that's the case; what happens in a case like the following?
Is the test idea that a BFS would traverse this in a zig-zaggy fashion with O(N^2) iterations in total? I guess that reduces my fun fact to "I was fooled to think my straightforward solution is O(N^3), and tests were weak" xD
Problem C is just a sub problem of 1270H - Number of Components
Good contest, especially considering that no matter what you do, you will always pass the pretests for C.
Can anyone explain why my C worked, I observed it for few self generated tests and got AC. Can someone hack it or give a proof why it works ?
Code : 146425328
Each component is a segment of the array: consider an inversion with indices i, j. For any k with i < k < j, if a[k] < a[i] then k has an edge with index i; otherwise a[k] >= a[i] > a[j], so k has an edge with index j. This means every element between indices i and j belongs to the component of i and j.

Consider two adjacent segments seg1, seg2 which form different connected components. This means there is no edge from any element of seg1 to any element of seg2, which means the max element of seg1 is less than the min element of seg2.

If you consider the prefix up to index i (one-based indexing) and it contains all the numbers from 1 to i, then the max element of positions 1 to i is i and the min element of the remaining array is i + 1. So the sub-segment up to index i increases the number of connected components by one, which is exactly what your code is doing.
I understood first two paragraphs easily but I am having difficulty in understanding 3rd paragraph. Can you please elaborate on it.
Thank you for your time :)
It's like we are starting with all the indices in one component, and then we form a new component whenever the prefix up to i disconnects from the suffix [i+1..n] (which again may get split in the future). And that happens when max(prefix up to i) = i, that is, the prefix up to index i contains the elements from 1 to i.
Loved the contest, Big Brush was a really good problem.
Can anybody help me see why my program is giving TLE on test case 4? Program link: 146482560
Instead of endl use "\n"; the TLE is due to slow output. My solution was also getting TLE due to the same issue.
Thank you bro it is submitted
yeah, no problem!
Problem B can be solved by finding the minimum on the suffix.
Submission: 146422034
Great contest!! really nice problems. Short story and clear statements.
If you are/were getting a WA/RE verdict on any of the problems from this contest, you can get a small counter example for your submission on cfstress.com
Problems added: "A, B, C, D, E, F".
If you are not able to find a counter example even after changing the parameters, reply to this thread, mentioning the contest_id, problem_index and submission_id.
This is super helpful! Thank you.
Good contest.
Some comments:
Did I miss something? I'm talking about problem E.
146566296
146566646
What's wrong with this solution? It's not cool when you run into this kind of "problem" :/
P.S. tight TL for SegTree?
In regard to my implementation, a BIT is almost 4 times faster than a segment tree. So it is recommended to use a BIT when both are options, because it is faster and shorter.
Another reason that leads to TLE is the excess time spent on IO (n = 10^6). You can use fread to get faster IO, like in 146629883. In my experiments, it saves ~1600ms on this problem.

I think I kinda cheesed C lol (weak tests?). I used a DSU with a simple set. An element can only start its own connected component if it is the greatest in the set so far (if you sweep from left to right). Therefore any element smaller than that will automatically join its connected component. In the set, I only kept the biggest values of each connected component, and for each element I binary search and manually join the components whose max values are bigger than that element.
Here is my code that passes in nearly 800ms lol https://codeforces.net/contest/1638/submission/146576523
Edit: there is a smart optimization for this but its still kinda cool my brute force passed
Edit 2: i appreciate the hack :D
For the input n-1, n-2, n-3, ..., 1 it should fail, since it becomes O(n^2).
I did nearly the same implementation, but deleted the unused elements from the set; then it is O(n log n).
see https://codeforces.net/contest/1638/submission/146446755
Hacked.
In D, are "dead ends" not possible? We can choose every possible square on each step?
I was also thinking about this while upsolving D. I had a blurry, not super clear picture: if there is a dead end, it will be reached within m*n steps, and when we have a choice of which new special cells to form, we can go with any option as long as it creates at least one new special cell, because choosing set A of cells rather than set B does not hurt our answer, as set B can always be chosen later. I mean, choosing one set of cells over another doesn't force us into a restricted choice. I don't know whether what I'm thinking is right or wrong.
Just want to let you guys know that I think it's almost impossible to get Accepted on problem B in Python 3 with regular I/O; you will need some kind of fast I/O. I tried multiple submissions and changed my solution for problem B, but alas, the problem needed fast I/O all along.
Here's the difference. Without fast I/O (TLE): 146596308. With fast I/O (Accepted): 146596576.
No offence really, but I also think it would be better if problem setters kept this kind of thing in mind so that it doesn't put Python coders at a disadvantage. Or the admins could set different time constraints for different languages.
Anyway, I'm not really angry that my solution didn't get Accepted during the contest. Just a little bit annoyed because of the constraint, that's all, peace ✌️✌️
Instead of the fast I/O you've mentioned, there is an ultra-lightweight version (but a bit slower): 146387910
Can anyone prove why there must always be a special square to use, and hence why an answer exists?
Interesting E & F
Div2 D
About problem E: can anyone help me see why my code runs incorrectly? I have been debugging for a long time, but I still haven't found the error. https://codeforces.net/contest/1638/submission/146658516
My solution for C: first build the graph and then run DFS to get the number of connected components. In order to build the graph, I iterated from left to right while storing the maximum. Then at each node i, I would take the minimum value in the range [i + 1, n - 1] using precomputed suffix minimums. I would then add an edge between the current node and the maximum node, and between the maximum node and the minimum value found. I used an adjacency set to weed out duplicate edges and then converted it to an adjacency list to run DFS on.
146678558
Another way to do c:
Notice that if we have two elements where the left one is greater than the right one, they are connected, and all the elements in between them are also connected to them. Start from the left and keep the maximum element of the current segment and the minimum element of the remaining suffix. If the maximum in the segment is less than the minimum of the remaining suffix, then we must stop and create a new segment from the remaining suffix. Otherwise, connect the minimum of the remaining suffix to our current segment and update the maximum of the segment.
Another other way to do C: notice that for any two adjacent segments A and B, MaxA < MinB. Also MinA < MaxA and MinB < MaxB. Basically this means that for any segment A to the left of a segment B, MaxA < MinB. So imagine a split into a prefix and a suffix. If the split falls inside a segment, then the max of the prefix must be greater than the min of the suffix: for anything to be connected across the split this must hold, and since we split inside a segment, the two sides are connected. But if max(prefix) < min(suffix), we cannot connect them, which means we must be splitting between two segments. So for each prefix/suffix split we can just check this condition, and our answer is the number of such splits + 1.
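The split-counting idea above can be sketched as follows; a minimal sketch assuming the array is a permutation of 1..n (as in problem C), with names being my own:

```cpp
#include <bits/stdc++.h>
using namespace std;

// A cut between positions i and i+1 separates two components iff
// max(a[0..i]) < min(a[i+1..n-1]); the answer is (number of such cuts) + 1.
int countComponentsByCuts(const vector<int>& a) {
    int n = a.size();
    vector<int> suffMin(n + 1, INT_MAX);  // suffMin[i] = min(a[i..n-1])
    for (int i = n - 1; i >= 0; i--) suffMin[i] = min(a[i], suffMin[i + 1]);
    int cuts = 0, prefMax = 0;
    for (int i = 0; i + 1 < n; i++) {
        prefMax = max(prefMax, a[i]);
        if (prefMax < suffMin[i + 1]) cuts++;  // no edge crosses this cut
    }
    return cuts + 1;
}
```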
Two alternative solutions for 1638E - Colorful Operations (in some sense harder than the editorial's!).
Solution 1:
Think about an offline solution. This is when you read all the input first and then assemble the answers.
Consider a single cell. What is sufficient to know to answer all queries about it?
For a single cell it is sufficient to know when and by how much its value was changed. But storing this for every single cell is not feasible. Think about what must be known per cell and what can be shared across cells.
It is sufficient to know the periods of colors, but to answer queries fast you also need the value in the cell at the start of each stored period. So let's say the 'history' of a cell is an array of tuples: a span of time, the color it was painted in during that span, and the value it had when the span began. Then, to answer a query you just need to find the appropriate period of time, then add all differences (Add queries) applied to that color from the paint time up to the time you're answering.
You can get all the differences for a color using precomputed prefix sums for each color separately and binary search by time.
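The per-color prefix sums with binary search by time might look like this; a minimal sketch with names of my own, assuming adds for a color arrive in increasing time order:

```cpp
#include <bits/stdc++.h>
using namespace std;

// For each color keep the timestamps of its Add operations and a running
// prefix sum of the added amounts; a binary search over the timestamps then
// answers "total added to color c with time <= t" in O(log q).
struct ColorAdds {
    map<int, vector<long long>> times;  // increasing per color
    map<int, vector<long long>> sums;   // sums[c][k] = total of first k+1 adds

    void add(int c, long long t, long long x) {
        times[c].push_back(t);
        long long prev = sums[c].empty() ? 0 : sums[c].back();
        sums[c].push_back(prev + x);
    }

    long long totalUpTo(int c, long long t) {
        auto it = times.find(c);
        if (it == times.end()) return 0;
        auto& ts = it->second;
        int k = upper_bound(ts.begin(), ts.end(), t) - ts.begin();
        return k ? sums[c][k - 1] : 0;
    }

    // total added to color c in the time window (t1, t2]
    long long inWindow(int c, long long t1, long long t2) {
        return totalUpTo(c, t2) - totalUpTo(c, t1);
    }
};
```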
Think about the following questions: can we change the history? How fast?
We can add a 'Color' query into the history at time t2 if we know t1, the time of the previous 'Color' query, and t3, the time of the next 'Color' query.
We need to remove the contribution of the whole time segment [t1, t3] of the color it had there, then add the contribution of the previous color on [t1, t2] and the contribution of the new color on [t2, t3]. And to update the values in all segments after this time, we change them by those deltas using a Fenwick tree. By the way, t1 and t3 can be found using a balanced tree (set in C++).
Similarly, we can remove a 'Color' query from the history at time t2 if we know t1, the time of the previous 'Color' query, and t3, the time of the next one. How? Completely similarly; almost the same.
Note: we are not allowed to merge equal colors (think "why?"). But it makes our life easier :).
How many changes to the history do we need?
O(q). Details in the solution.
If we sort all 'Color' queries by time, we can perform the 'sweepline' technique. But instead of sorting, because l and r are capped by n, we can use a counting-sort approach and make a vector of events for each coordinate. Then we add and remove 'Color' events in the history according to the events (details in Hint 3) and answer all queries using the history (details in 'Key insight').
Time complexity: O(n + q log q). Source: 146511325. It was kinda hard to implement.
Solution 2 will require Hint 1, 2, Key Insight.
Think about the following. Suppose for a cell you postponed all 'Color' queries since time t, so you know you performed everything up to time t. Can we bring it up to the current time efficiently?
You can concatenate two 'histories'. Suppose the first history ended at time t1 with color c1 and total difference v1, and the other history starts at time t2 and ends at time t3 with color c2. Then the total effect of those histories is the following.
The total contribution is v1 + v2 + all differences (Add operations) on color c1 within time [t1, t2], and the new history starts at the start of the first history and ends at time t3 with color c2.
Just build a Segment Tree with lazy propagation using 'histories' as values and their 'concatenation' as the operation. Then to answer a query you just need to get the 'history' from the Segment Tree at its point and add the corresponding difference.
Note: this solution is online; just build the prefix sums of 'Add' queries on the fly. The time complexity is not higher than O(n + q * (log n) * (log q)). It's hard to justify the log q: each time I concatenate histories, I do a binary search over an array of size up to q.
Source: 146622449. It was much easier to implement, because the duty of the 'set' is taken over by the segment tree.
By the way, has anyone been able to get the following 'naive' solution to pass?
Let's maintain a treap (or another balanced mergeable tree) for each color, and also maintain the segments of the same color using a balanced tree (set in C++). Then, to perform a 'Color' query we cut all segments of the corresponding colors, merge them, and insert them into the corresponding tree. To perform an 'Add' query we do something like a lazy segment-tree update but on a treap (I don't know its name). To perform a 'Query' we just get the value at a point.
Here is my implementation, but it gets TL (it may have bugs): 146454369
If you look closely at which ranges we binary search over, or if you try to borrow some ideas from the editorial's approach, you may come up with the following optimizations.
Notice that when we add a color in the nodes, we create a history with an already-known location in the prefix sums of the corresponding color. Each time we concatenate history h1 to history h2, we need to find the end of h1 and the start of h2 within the ending color of h1. For h2 we cannot know in advance what the ending color of h1 will be, so we cannot precompute for h2 its start index within that color. But for h1 we may maintain the index within the color of h1, because we know the ending color of h1. As an additional observation, holding the index is enough; we don't need the time. This allows us to remove the binary search for 'Query' queries, remove one binary search per concatenation, and narrow down the second one.
Source: 146904800
Each time we have a segment (l, r) in the history, it means it's colored by the color of the l-th query, just because l is the time when it was triggered. So we can store the color at index l. More than that, each time we do a binary search for a segment (l, r), we know it starts from l with the color of the l-th query, so we can precalculate an array idx which holds the location of the prefix sum within the corresponding color.
I also had to shift the indexing of events to avoid numerous if statements handling the case when the first query is 'Color'. If I didn't do that, it would either access an array out of bounds or think that the whole array is colored by the color of the first query.
Source: 146925172
As a result, both solutions run in 2 sec, compared with 2.2 sec and 3.7 sec respectively.
In problem E, can anyone give me a proof of the time complexity when we recurse on a segment tree and stop at nodes that cover only monochrome segments?
https://codeforces.net/contest/1638/submission/165944078
Here's my (probably overcomplicated) implementation of E using only a lazy segtree which ACs, centered around the intuition to keep track of maximal monochromatic segments. Though, I don't have a rigorous proof of its time complexity.
Can anyone please tell me why my solution for B is failing? I am using the same logic as the editorial but the implementation is different. For each parity, I am checking if the current element is the greatest element of the current parity or not. If it is not, then the answer is NO, else the answer is YES. 172802388
Understood. I was not updating the set.
C. Inversion Graph — my DSU working approach 198748310
I solved E using offline queries. I iterate from the 1st element to the last element while maintaining a set storing the color changes from the first query to the last query. When moving to the next element, I update the set (add or remove a color, and also update the value for each timeline using a Fenwick tree). The complexity is about (n+q) log^2(n).