Orn0's blog

By Orn0, 12 hours ago, In English

Hello fellow Codeforcers ! I took part in [contest] last Friday. I hadn't done a Div. 3 contest in a long time, so I was eager to participate. I didn't like the round so much, because I felt it relied too much on implementation rather than problem solving. I was able to find the theoretical solution for A-F during the contest, but I couldn't successfully implement D-F in real time. Understanding better what I missed could lead to a big improvement in future contests.

This is why I share here my thoughts and solutions about the first 6 problems of the contest. Welcome to my humble green editorial !

2033A - Sakurako and Kosuke

Observation

We want to find the parity of the last $$$k$$$ s.t. $$$|S_k|\leq n$$$, which implies $$$k \leq n$$$, i.e. the parity of $$$n$$$. If $$$n\%2 == 1$$$, Kosuke plays the last turn, else it is Sakurako.
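
Here is a minimal sketch of it (assuming the expected output is the name of the player who makes the last move, as I remember from the statement):

// hedged sketch: the answer only depends on the parity of n
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t; cin >> t;
    while (t--) {
        long long n; cin >> n;
        // n odd -> Kosuke makes the last move, n even -> Sakurako (assumed output strings)
        cout << (n % 2 == 1 ? "Kosuke" : "Sakurako") << "\n";
    }
}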

Complexity

$$$\mathcal{O}(1)$$$ complexity

What I've learnt

Not much, Div. 3 A problem, focus on pattern identification.

2033B - Sakurako and Water

Observation : when choosing a square to increase one value on its diagonal, it is always better to consider the largest possible square that increases it.

The formalism here is boring and I don't think it really helps to understand the solution. Each operation can increase all values on one "diagonal" of the matrix by 1, by choosing the largest possible square for that "diagonal". The answer is the sum, over all matrix "diagonals", of the largest increase any of their cells requires.
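
A sketch of this, assuming a 0-indexed $$$n \times n$$$ matrix $$$a$$$ already read, and that (as in the statement) the goal is to raise every negative cell to $$$0$$$:

// each diagonal is indexed by d = i - j; its most negative cell dictates the cost
long long ans = 0;
for (int d = -(n - 1); d <= n - 1; d++) {
    long long need = 0;
    for (int i = max(0, d); i < n && i - d < n; i++)
        need = max(need, -a[i][i - d]);      // operations needed to lift this cell to 0
    ans += need;
}
cout << ans << "\n";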

Complexity

You'll end up iterating over all values in a specific order. $$$\mathcal{O}(n^2)$$$ complexity.

What I've learnt

Not much, the problem statement was misleading for me, and the only difficulty is the implementation.

2033C - Sakurako's Field Trip

I read many unhappy comments on C, and during the contest, fewer people solved it than D. I felt it wasn't a hard problem at all, which means I must be improving :D .

Observation : the contribution of each pair $$$(i, n-i+1)$$$ to the disturbance can be decided based only on the neighbouring pair $$$(i-1, n-(i-1)+1)$$$

This leads to considering pairs iteratively, starting from the middle pair (or middle point) of the array. For the $$$i$$$-th pair $$$(i, n-i+1)$$$, regardless of the positioning of the neighbouring pair that was fixed before it, you can always swap it or not to reach the minimal disturbance, as in the sketch below.
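
A sketch of this greedy (1-indexed $$$a$$$; going from the middle outwards, each pair keeps the orientation that creates the fewest equal neighbours with the already-fixed inner positions):

// decide each pair from the middle outwards, then count the disturbance
for (int i = n / 2; i >= 1; i--) {
    int j = n + 1 - i;
    if (i + 1 >= j) continue;                    // innermost pair: swapping changes nothing
    int keep = (a[i] == a[i+1]) + (a[j] == a[j-1]);
    int swp  = (a[j] == a[i+1]) + (a[i] == a[j-1]);
    if (swp < keep) swap(a[i], a[j]);            // ties don't matter: outer pairs can flip too
}
int disturbance = 0;
for (int i = 1; i < n; i++) disturbance += (a[i] == a[i+1]);
cout << disturbance << "\n";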

Complexity

Iterating over values in $$$a$$$ results in $$$\mathcal{O}(n)$$$ complexity overall.

What I've learnt

This was an easy problem in my opinion, but nothing surprising for a Div. 3 C problem.

2033D - Kousuke's Assignment

Here comes my big frustration. During the contest, I found the statement misleading, since there may be many ways to choose beautiful segments. Once I got my head wrapped around it, the idea for this problem struck me as pretty simple.

Observation : (Very standard) The sum over a segment $$$(l,r)$$$ can be computed as $$$pref[r] - pref[l-1]$$$ where $$$pref$$$ is a prefix sum array.

For any two positions $$$i < j$$$, if $$$pref[i] == pref[j]$$$, then $$$(i+1,j)$$$ is a beautiful segment. To find the maximum number of such segments, one should greedily consider the shortest possible segments. This can be done by iterating over $$$i$$$ and storing the prefixes met. If the current prefix has already been seen since the last beautiful segment was found, a new shortest non-overlapping beautiful segment can be taken.

The loop is in $$$\mathcal{O}(n)$$$, and I used an unordered_set to insert and search elements in $$$\mathcal{O}(1)$$$ on average.

During the contest, I got WA (on the 530-th value of test 2 ...) because I wasn't careful enough in handling beautiful segments with ends at $$$1$$$ or $$$n$$$.

I came back after the contest and wrote correct code, this time getting TLE on test 18. My solution seemed correct and I didn't know how to optimize it.

Code
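
My actual submission differs, but the idea fits in a few lines (a sketch, still with a plain unordered_set):

// greedy: cut a new beautiful segment as soon as the current prefix sum repeats
int n; cin >> n;
long long pref = 0;
int cnt = 0;
unordered_set<long long> seen{0};        // the initial 0 handles segments starting at index 1
for (int i = 1; i <= n; i++) {
    long long x; cin >> x;
    pref += x;
    if (seen.count(pref)) {              // the shortest segment ending at i is beautiful
        cnt++;
        seen.clear();                    // segments must not overlap: forget older prefixes
    }
    seen.insert(pref);
}
cout << cnt << "\n";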

Looking at others' submissions, I came across this submission (TLE) and this one (Accepted) by avi032904, between which he changed from an unordered_set to a set. I did some more research and found this blog by neal. I highly encourage you to read it; I'll summarize the main ideas here.

For insertion operations on an unordered_map to run in $$$\mathcal{O}(1)$$$ time, the number of collisions for each item should be $$$\mathcal{O}(1)$$$ on average. This is most likely true in a context where items are selected at random. Now, in a hacking context, knowing the hash function of the unordered_map, one can deliberately cause as many collisions as possible and make the insertion of $$$n$$$ items an $$$\mathcal{O}(n^2)$$$ operation.

An approach is to reintroduce unpredictability by using a precise clock and some arithmetic operations to prevent targeted attacks on modular key values. The detailed explanation can be read in neal's blog.

This allows my code to pass all tests.

My final code
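
The only change w.r.t. the sketch above is the hash; this is the splitmix64-based one from neal's blog:

// unpredictable hash: splitmix64 mixing plus a time-based offset (from neal's blog)
struct custom_hash {
    static uint64_t splitmix64(uint64_t x) {
        x += 0x9e3779b97f4a7c15;
        x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9;
        x = (x ^ (x >> 27)) * 0x94d049bb133111eb;
        return x ^ (x >> 31);
    }
    size_t operator()(uint64_t x) const {
        static const uint64_t FIXED_RANDOM = chrono::steady_clock::now().time_since_epoch().count();
        return splitmix64(x + FIXED_RANDOM);
    }
};

unordered_set<long long, custom_hash> seen;   // same code as before, hack-proof hash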

Complexity

The code only performs one prefix sum pass over the whole array and basic operations on an unordered_map, resulting in an overall $$$\mathcal{O}(n)$$$ complexity.

What I've learnt

In some (hacking) contexts, things I held to be always true (insertion in $$$\mathcal{O}(1)$$$) can turn out to be wrong. When facing such situations, one must be ready to look into a new level of complexity and understand the related concepts in depth. I'll definitely remember this one.

2033E - Sakurako, Kosuke, and the Permutation

This is not the first time I see a problem about permutations with an operation involving $$$p_{p_i}$$$. I should remember the following observations. My editorial is very similar to the official one from FBI :

Observations

My idea during the contest was to suppose that one was always able to break a cycle of size $$$k$$$ into two cycles of size $$$\frac{k}{2}$$$ and $$$\frac{k}{2}+(k\%2)$$$. I used a recursive function and memoization to get the statement tests right, but got WA on testcase 2.

My assumption and my implementation were correct, but they don't yield the optimal answer. A better assumption is that it is always possible to perform a swap that splits off a $$$2$$$-cycle, iteratively. Proof : consider a permutation $$$[p_1, p_2, p_3, ..., p_n]$$$ with a cycle of size $$$k \geq 3$$$ denoted by $$$(c_1, c_2, ..., c_k)$$$. This can be seen as the sequence of consecutively visited values, and it holds that $$$p_{c_i} = c_{i+1}$$$.

If we swap the positions of the values $$$c_1$$$ and $$$c_3$$$ :

  • $$$p_{c_k}$$$ yields the value at position $$$c_k$$$, which was $$$c_1$$$ and is now $$$c_3$$$

  • $$$p_{c_2}$$$ yields the value at position $$$c_2$$$, which was $$$c_3$$$ and is now $$$c_1$$$

  • $$$\forall i \neq 2,k, p_{c_i} = c_{i+1}$$$

We then have a cycle $$$(c_1, c_2)$$$ of size $$$2$$$ and a cycle $$$(c_3,c_4,...,c_k)$$$ of size $$$k-2$$$

A cycle of size $$$k$$$ needs $$$\lfloor \frac{k-1}{2} \rfloor$$$ operations to be fully split into cycles of size at most $$$2$$$.
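
A sketch of the resulting counting (1-indexed permutation $$$p$$$ already read; the answer is the total number of swaps):

// sum floor((len - 1) / 2) over all cycle lengths of the permutation
vector<bool> visited(n + 1, false);
long long ops = 0;
for (int i = 1; i <= n; i++) {
    if (visited[i]) continue;
    long long len = 0;
    for (int j = i; !visited[j]; j = p[j]) { visited[j] = true; len++; }
    ops += (len - 1) / 2;
}
cout << ops << "\n";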

Complexity

Detecting all cycles is in $$$\mathcal{O}(n)$$$ and each can be handled independently, resulting in an overall $$$\mathcal{O}(n)$$$ complexity.

What I've learnt

My first assumption was suboptimal, and I failed to realize it in time. This led me to prove the result used in the problem; I'll probably have use for it later.

2033F - Kosuke's Sloth

This problem is definitely built like a Project Euler problem, which is not to my displeasure. I took some 30 minutes during the contest to think about it, and I must say that I came pretty close. I submitted an accepted solution shortly after the end of the contest, which I already see as a win.

I got the intuition that Fibonacci modulo $$$k$$$ was periodic. My approach was then to compute the first complete period and store the positions of zeros in an array $$$zeros$$$. The answer can be found by computing $$$\lfloor \frac{n}{size(zeros)} \rfloor$$$ to know the number of complete periods before the $$$n$$$-th zero, and taking the right element of $$$zeros$$$.

With more research and the official editorial, I learnt more properties about the Fibonacci sequence :

  • The period of the Fibonacci sequence modulo $$$n$$$ is called the Pisano period and is written $$$\pi(n)$$$.

  • $$$gcd(F_n, F_m) = F_{gcd(n,m)}$$$. This leads to $$$gcd(F_i, F_{i*j}) = F_i$$$ and finally $$$F_i | F_{i*j}$$$

We can show that zeros in the period cycle are evenly spaced. $$$F(0) = 0$$$, so we can skip it. If we denote by $$$i$$$ the smallest positive integer such that $$$F(i) \equiv 0 [n]$$$, all other zeros will be at indices $$$i*j$$$.

Proof

The simplest solution is to look for $$$i$$$ and consider $$$i*n$$$.
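
A sketch of it (assuming, as I remember from the statement, that the answer is asked modulo $$$10^9+7$$$):

// first index i > 0 with F_i = 0 (mod k); the n-th Fibonacci divisible by k is F_{i*n}
const long long MOD = 1e9 + 7;                 // assumed output modulus
long long solve(long long n, long long k) {
    long long a = 0, b = 1, i = 1;             // (a, b) = (F_0, F_1)
    while (b % k != 0) {                       // the Pisano period guarantees i <= 6k
        long long c = (a + b) % k;
        a = b; b = c; i++;
    }
    return (n % MOD) * (i % MOD) % MOD;
}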

Complexity

It appears that $$$\pi(k) \leq 6*k$$$, so the resulting complexity is $$$\mathcal{O}(k)$$$.

What I've learnt

I really liked this type of problem. I have a computer science and mathematics background, so I should be able to be good at this type of problem. I will definitely try to remember these properties of Fibonacci (as well as the proof).

Conclusion

I currently have less time to write these custom editorials, and I apologize that this one is less prepared. I think it still allows me to learn better from contests and maybe share some thoughts with other participants. I really liked the problems and learnt many very useful things about maps, Fibonacci and more. Thank you again FBI, Vladosiya and all others involved in the contest organization (Parag_AP for green testing :)) and once again MikeMirzayanov for creating Codeforces and Polygon.
See you soon for my next editorial, this time with more graphics I promise.


By Orn0, 3 weeks ago, In English

Hello fellow Codeforcers ! I participated in Codeforces Round 977 (Div. 2, based on COMPFEST 16 - Final Round) last Saturday and I had a great time. I was composed and was able to solve A, B and C1 in just under an hour. I thought I was close to solving C2 as well, but it took me some more time after the contest.

Here are my thoughts and solutions about the first 5 problems of the contest. Welcome to my humble green editorial !

2021A - Meaning Mean

My first thought was that choosing two elements of different parity would result in "losing" a unit because of the floor operation. I then tried toy inputs like $$$8,5,2$$$.

  • $$$\lfloor \frac{\textbf{8}+\textbf{5}}{2} \rfloor = 6$$$, $$$\lfloor \frac{6+\textbf{2}}{2} \rfloor=4$$$
  • $$$\lfloor \frac{\textbf{8}+\textbf{2}}{2} \rfloor = 5$$$, $$$\lfloor \frac{5+\textbf{5}}{2} \rfloor=5$$$
  • $$$\lfloor \frac{\textbf{2}+\textbf{5}}{2} \rfloor = 3$$$, $$$\lfloor \frac{3+\textbf{8}}{2} \rfloor=5$$$

I could conjecture two propositions. First, an order of summation could yield an optimal solution even if it combines odd and even numbers together (like in the third option). Second, it seems important to combine the largest values in the last steps.

Suppose we consider the values in the same order as they are in $$$a$$$, and insert new values at the front. The result for 3 values is :

$$$\lfloor \frac{\lfloor \frac{a_1 + a_2}{2} \rfloor + a_3}{2} \rfloor$$$

Without floor this rewrites as

$$$ \frac{a_1}{4} + \frac{a_2}{4} + \frac{a_3}{2}$$$

The first chosen values appear in more operations, and consecutive divisions by 2 reduce their impact on the result. The floor operations only amplify this effect by occasionally penalizing the result by one unit.

Further hints of proof

My idea was to maintain a sorted array by combining the two smallest values and reinserting the result at the front, as sketched below.
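
Since the combined value $$$\lfloor \frac{a_1+a_2}{2} \rfloor$$$ never exceeds the next smallest element, the "sorted array" never needs real insertions, and a simple fold is enough (a minimal sketch):

// combine the two smallest values first, then fold towards the largest
sort(a.begin(), a.end());
long long x = a[0];
for (int i = 1; i < n; i++) x = (x + a[i]) / 2;   // x <= a[i], so the order is preserved
cout << x << "\n";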

Complexity

The initial sort of the array, plus performing $$$n-1$$$ operations, results in $$$\mathcal{O}(nlog(n) + n) = \mathcal{O}(nlog(n))$$$ complexity.

What I've learnt

For the first problems of a contest, identifying an intuitive strategy by looking at some key examples is often a good idea.

2021B - Maximize Mex

A standard MEX problem. It felt easier than problem A. The general idea here is to iterate over $$$mex$$$ from $$$0$$$ to $$$n$$$ and test whether it is possible to obtain this value. It is possible iff :
- $$$mex \in a$$$, or
- $$$\exists 1 \leq i \leq n$$$ such that $$$a[i] \leq mex$$$ and $$$a[i]-mex \equiv 0 [x]$$$, i.e. $$$mex$$$ can be obtained from $$$a[i]$$$ by performing the operation of adding $$$x$$$ several times.

My approach during the contest was to sort $$$a$$$ and iterate over its elements. While the current element was lower than or equal to the current $$$mex$$$, I'd store it as an available lower value w.r.t. its remainder modulo $$$x$$$ in a vector $$$over$$$. Then, I'd look at the number of stored values at index $$$mex \% x$$$. If it is at least $$$1$$$, I can decrease it by $$$1$$$ and increase $$$mex$$$ by $$$1$$$. Else the real value of $$$mex$$$ has been found.

The official editorial uses a more elegant solution. Values in $$$a$$$ can be sorted and counted in place inside an array $$$freq$$$ in $$$\mathcal{O}(n)$$$. The algorithm then goes as follows :
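
Here is my reconstruction of it (a sketch; $$$freq$$$ needs $$$n+1$$$ entries, since the MEX of $$$n$$$ elements is at most $$$n$$$):

// n and x already read; values above n can never help the MEX
vector<long long> freq(n + 1, 0);
for (int i = 0; i < n; i++) { long long v; cin >> v; if (v <= n) freq[v]++; }
int mex = 0;
while (mex <= n && freq[mex] > 0) {
    if (mex + x <= n) freq[mex + x] += freq[mex] - 1;   // propagate the spare copies upwards
    mex++;
}
cout << mex << "\n";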

Complexity

Initial sort of the array, and iterating over values of $$$mex$$$ and elements of $$$a$$$ results in $$$\mathcal{O}(nlog(n) + m + n) = \mathcal{O}(nlog(n))$$$ complexity.

Official editorial reaches a $$$\mathcal{O}(n)$$$ complexity.

What I've learnt

I have solved several MEX problems, and I learnt that finding its value is generally doable in $$$\mathcal{O}(n)$$$. I liked the idea of propagating the overflow while looking for it.

2021C1 - Adjust The Presentation (Easy Version)

I tried to think of the first steps to check if a slideshow $$$b$$$ was good.

  • In the first step, we must have $$$b_1 == a_1$$$, since we couldn't have made any decision on $$$a$$$ yet

  • In the second step, we can have placed $$$a_1$$$ back at the front and have $$$b_2 == a_1$$$, or placed it somewhere else and have $$$b_2 == a_2$$$. The previously visited $$$a_1$$$ is from now on part of an available set of values for $$$b$$$.

  • In the third step, things get more complex :

    • If we had $$$b_2 == a_1$$$, it means that we had put it back at the front before the second step. The only available values for $$$b_3$$$ are then either $$$a_1$$$ or $$$a_2$$$, the next unvisited one.
    • Else, we could have placed $$$a_2$$$ back at the front or somewhere else, and could have a valid slideshow with $$$b_3 == a_1$$$ or $$$b_3 == a_2$$$ or $$$b_3 == a_3$$$.

My observation was that at each step $$$1 \leq j \leq m$$$, the prefix of the slideshow was valid if :

  • $$$b[j]$$$ was among the $$$k$$$ distinct previously visited values of $$$a$$$, or

  • $$$b[j] == a[k+1]$$$, and $$$a[k+1]$$$ then becomes a newly visited value.

I used an unordered set to keep track of the visited values, and iterated over the elements of $$$b$$$.

Code
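
A minimal sketch of it (with $$$a$$$ read 1-indexed, and assuming the required outputs are "YA"/"TIDAK"):

// p points to the next member of a that was never at the front yet
unordered_set<int> seen;
int p = 1;
bool good = true;
for (int j = 1; j <= m; j++) {
    int x; cin >> x;                                   // x = b[j]
    if (seen.count(x)) continue;                       // already-visited member: always available
    if (p <= n && x == a[p]) { seen.insert(x); p++; }  // x is the next new member: fine
    else good = false;                                 // neither visited nor next: invalid
}
cout << (good ? "YA" : "TIDAK") << "\n";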

Complexity

Iterating over values in $$$b$$$ results in $$$\mathcal{O}(m)$$$ complexity overall.

What I've learnt

This felt like the simplest of all three problems. However, I solved it without considering C2, and I didn't know at the time whether the approach was suitable to tackle the second part of the problem. In future contests, I guess I should try to anticipate the next parts when looking for a solution.

2021C2 - Adjust The Presentation (Hard Version)

It's always challenging for me to start the second part of a problem. In a problem with queries like this one, there are usually two parts to identify :

  • the deep structure of the problem, emphasized in part one, that has to be computed at least once,

  • the shared structure of the problem, that allows queries to be handled sequentially without having to perform costly computation (often leveraging pre-computation).

My first naive solution handled each query independently : update of $$$b$$$ in $$$\mathcal{O}(1)$$$ and complete check in $$$\mathcal{O}(m)$$$. Of course, TL.

I couldn't find a way to reuse the unordered set from one query to another, so I had to find another structure that could be maintained. I identified that for a slideshow $$$b$$$ to be valid, I needed two conditions:

  • $$$b[1] == a[1]$$$

  • $$$\forall 1 \leq i \leq n-1$$$, $$$first[a[i]] \leq first[a[i+1]]$$$, where $$$first[v]$$$ holds the index of the first occurrence of $$$v$$$ in $$$b$$$, and $$$m+1$$$ if it's not in $$$b$$$

I defined $$$nbConflicts$$$, the number of conflicts in $$$a$$$, i.e. the number of $$$i$$$ s.t. $$$first[a[i]] > first[a[i+1]]$$$. I used $$$ra$$$, the inverse permutation of $$$a$$$ ($$$ra[a[i]] = i$$$). To avoid the $$$\mathcal{O}(m)$$$ conflict check, one should see that a query only creates or resolves conflicts locally, and this can be checked in $$$\mathcal{O}(1)$$$.

Before updating $$$b$$$, I decrease $$$nbConflicts$$$ for possible conflicts before and after the value $$$t$$$ in $$$a$$$, and before and after the value $$$b[s]$$$ in $$$a$$$. The four inequalities to check (if they exist) are :

  • $$$first[a[ra[t]-1]] > first[t]$$$,

  • $$$first[t] > first[a[ra[t]+1]]$$$,

  • $$$first[a[ra[b[s]]-1]] > first[b[s]]$$$,

  • $$$first[b[s]] > first[a[ra[b[s]]+1]]$$$.

I then updated $$$first[t]$$$ if needed. BUT. I couldn't find a way to update $$$first[b[s]]$$$, kept doing it in $$$\mathcal{O}(m)$$$, and kept getting TL ...
I then realized I had free will and could scroll the comments under the contest blog, where I saw a comment by DuongForeverAlone with his submission 284562164. His solution was to maintain, for each value $$$a[i]$$$, an ordered set of the positions of $$$a[i]$$$ in $$$b$$$. These sets are computed only once at the beginning and maintained in $$$\mathcal{O}(log(m))$$$ per query, and we have first[b[s]] = *pos[b[s]].begin().
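
Putting everything together, a query can be processed like this (a sketch with the notations above; the trick is to re-count conflicts only around the two affected values, deduplicating shared edges):

// conflict(i) = 1 iff first[a[i]] > first[a[i+1]]
auto conflict = [&](int i) {
    return (i >= 1 && i < n && first[a[i]] > first[a[i + 1]]) ? 1 : 0;
};
// query: set b[s] = t
int old = b[s];
set<int> touched = {ra[t] - 1, ra[t], ra[old] - 1, ra[old]};   // the only edges that may change
for (int i : touched) nbConflicts -= conflict(i);
pos[old].erase(s);
first[old] = pos[old].empty() ? m + 1 : *pos[old].begin();
b[s] = t;
pos[t].insert(s);
first[t] = *pos[t].begin();
for (int i : touched) nbConflicts += conflict(i);
cout << (nbConflicts == 0 ? "YA" : "TIDAK") << "\n";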

My complete code

Complexity

Pre-computation of $$$ra$$$, $$$first$$$, $$$nbConflicts$$$ and $$$pos$$$ is done in $$$\mathcal{O}(n + mlog(m))$$$. Then for each query, all operations are $$$\mathcal{O}(1)$$$, except the updates of $$$pos$$$ in $$$\mathcal{O}(log(m))$$$. The total complexity is $$$\mathcal{O}(n + mlog(m) + qlog(m))$$$.

What I've learnt

  • There is a specific mindset needed to tackle problems in several parts. I should try to anticipate the second part when solving the first.
  • In a query problem, it is always about precomputation and smart updates. Identify the sources of complexity in the query loop, and find how to reduce them.
  • I genuinely didn't think of using std::set to maintain an ordered sequence (with an insertion cost of $$$\mathcal{O}(log(n))$$$). I'll keep this in my toolbox now.
  • Finally, it took me some time to get this straight, and being able to do so within the time of a contest will be challenging for me !

2021D - Boss, Thirsty

This problem was really challenging for me. I understood that the possible scores for day $$$i$$$ were determined by day $$$i-1$$$, and that I should probably build a dynamic programming array and update it day by day. The first challenge is to find a suitable representation of the solution at day $$$i$$$ that can be updated efficiently to produce a solution at day $$$i+1$$$.

I first thought of $$$dp[i][(a,b)]$$$, the best possible score that can be obtained on day $$$i$$$ by choosing the range of products $$$(a,b)$$$. It would be the sum of :

  • the direct score $$$\sum_{c=a}^{c=b}(A_{i,c})$$$,

  • the best possible score of a compatible segment on day $$$i-1$$$.

$$$dp[i][(a,b)] = \sum_{c=a}^{c=b}(A_{i,c}) + \max_{(a',b')}(dp[i-1][(a',b')])$$$

Those segments $$$(a', b')$$$ must be such that $$$(a',b') \cap (a,b) \neq \emptyset$$$ and $$$(a,b) \setminus (a',b') \neq \emptyset$$$, which can be translated to :

  • $$$a \leq a' \leq b$$$, or

  • $$$a \leq b' \leq b$$$,

AND the extra condition that $$$(a,b) \neq (a',b')$$$.

With $$$\frac{m(m-1)}{2}$$$ segments $$$(a,b)$$$ each day, the resulting complexity would be $$$\mathcal{O}(n*m^2)$$$. I thought of introducing $$$dpA[i][a]$$$ (resp. $$$dpB[i][b]$$$) the best possible score at day $$$i$$$ starting in $$$a$$$ (resp. ending in $$$b$$$). However, when trying to connect it with the previous day, I couldn't find a way to enumerate valid $$$(a', b')$$$.

Luckily, the official editorial came out and I started reading their solution. Their idea was to use $$$l$$$ and $$$r$$$ to describe the external edges of a segment rather than its end points.

The conditions on previous segments $$$(l', r')$$$ then write :
- $$$l < l' < r$$$, or
- $$$l < r' < r$$$,
without any further condition. This could be understood as "any compatible previous segment has an extremity strictly inside of the current segment".

We now introduce $$$dpL[i][l]$$$ (resp. $$$dpR[i][r]$$$), the best possible score obtainable on day $$$i$$$ using a segment starting in $$$l$$$ (resp. ending in $$$r$$$). With $$$0 \leq \textbf{l} < \textbf{r}-1$$$ and $$$\textbf{r} \leq m$$$, we get

$$$dpL[i][l] = \max_{l+1 < \textbf{r} \leq m}{(\sum_{p=l+1}^{p=r}A_{i,p} + \max_{l < \textbf{p} < r}{(dpL[i-1][p], dpR[i-1][p]))}}$$$
$$$dpR[i][r] = \max_{0 \leq \textbf{l} < r-1}{(\sum_{p=l+1}^{p=r}A_{i,p} + \max_{l < \textbf{p} < r}{(dpL[i-1][p], dpR[i-1][p]))}}$$$

This looks much nicer ! For the next steps, I'll focus on $$$dpR$$$. For a given $$$r$$$, the exterior $$$\max$$$ can be computed in $$$\mathcal{O}(r)$$$ and the interior $$$\max$$$ in $$$\mathcal{O}(r)$$$. The resulting complexity of the algorithm would then be $$$\mathcal{O}(n*m^2)$$$.

Since we will compute values of $$$dpR$$$ for successive values of $$$r$$$, we should try to share efforts and avoid recalculation.

The sum over $$$A_{i,p}$$$ can be expressed with prefix sums. Let's denote $$$prefS[p] = \sum_{q=1}^{q=p}A_{i,q}$$$. It can be computed in $$$\mathcal{O}(m)$$$ for all $$$0 \leq p \leq m$$$, then

$$$\forall 0 \leq \textbf{l} < \textbf{r} \leq m,\sum_{p=l+1}^{p=r}A_{i,p} = prefS[r] - prefS[l]$$$

We now have

$$$dpR[i][r] = prefS[r] + \max_{0 \leq \textbf{l} < r-1}{(-prefS[l] + \max_{l < \textbf{p} < r}{(dpL[i-1][p], dpR[i-1][p]))}}$$$

The next step requires working with index bounds. I needed a long time to understand that $$$\max$$$ expressions can be commuted following this transformation

$$$\max_{0 \leq l < r-1}(f(l) + \max_{l < p < r}(g(p))) = \max_{0 \leq l < r-1}(\max_{l < p < r}(f(l) + g(p))) = \max_{0 < p < r}(\max_{0 \leq l < p}(f(l)) + g(p))$$$

The leftmost expression can be illustrated with the picture below. It is the max over columns $$$l$$$ of the max over all rows $$$p$$$ such that $$$l<p$$$.

The rightmost expression is the max over all rows $$$p$$$ of the max over all columns $$$l$$$ such that $$$l<p$$$.

[pic]

Both describe the same area, and both values equal the maximum of $$$f(l) + g(p)$$$ over this area.

The expression of $$$dpR$$$ can be written as

$$$dpR[i][r] = prefS[r] + \max_{0 < \textbf{p} < r}{(\max_{0 \leq \textbf{l} < p}(-prefS[l]) + \max{(dpL[i-1][p], dpR[i-1][p]))}}$$$

The rest is gymnastics with prefix sums and prefix maximums.

  • $$$prefM[l] = \max{(prefM[l-1], -prefS[l-1])}$$$

  • $$$prefN[p] = \max{(prefN[p-1], prefM[p] + \max{(dpL[i-1][p], dpR[i-1][p])})}$$$

  • $$$dpR[i][r] = prefS[r]+ prefN[r-1]$$$

For $$$dpL$$$, remarking that the problem is symmetric allows us to simply reverse $$$A_i$$$ and use the same approach (or, equivalently, to run a suffix version of the same precomputations, as in the sketch below).
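
Here is a sketch of the whole day-by-day computation (my reading of the recurrence; $$$dpL$$$ is computed directly with suffix maxima instead of reversing $$$A_i$$$, $$$NEG$$$ plays the role of $$$-\infty$$$, and I assume the constraints guarantee a valid plan exists):

#include <bits/stdc++.h>
using namespace std;
const long long NEG = LLONG_MIN / 4;       // "minus infinity" sentinel

int main() {
    int t; cin >> t;
    while (t--) {
        int n, m; cin >> n >> m;
        vector<long long> dpL(m + 1, NEG), dpR(m + 1, NEG);   // indexed by edges 0..m
        for (int i = 0; i < n; i++) {
            vector<long long> prefS(m + 1, 0);
            for (int p = 1; p <= m; p++) {
                long long x; cin >> x;
                prefS[p] = prefS[p - 1] + x;   // segment (l, r) earns prefS[r] - prefS[l]
            }
            vector<long long> ndpL(m + 1, NEG), ndpR(m + 1, NEG);
            if (i == 0) {
                // day 1: any non-empty segment is allowed
                long long best = NEG;
                for (int r = 1; r <= m; r++) { best = max(best, -prefS[r - 1]); ndpR[r] = prefS[r] + best; }
                best = NEG;
                for (int l = m - 1; l >= 0; l--) { best = max(best, prefS[l + 1]); ndpL[l] = -prefS[l] + best; }
            } else {
                // bst[p]: best previous-day score whose segment has an edge exactly at p
                vector<long long> bst(m + 1);
                for (int p = 0; p <= m; p++) bst[p] = max(dpL[p], dpR[p]);
                long long preM = NEG, prefN = NEG;             // preM = max_{l<p}(-prefS[l])
                for (int r = 2; r <= m; r++) {
                    int p = r - 1;                             // edge strictly inside (l, r)
                    preM = max(preM, -prefS[p - 1]);
                    if (bst[p] > NEG) prefN = max(prefN, preM + bst[p]);
                    if (prefN > NEG) ndpR[r] = prefS[r] + prefN;
                }
                vector<long long> sufS(m + 2, NEG);            // sufS[p] = max(prefS[p..m])
                for (int p = m; p >= 0; p--) sufS[p] = max(sufS[p + 1], prefS[p]);
                long long sufN = NEG;
                for (int l = m - 2; l >= 0; l--) {
                    int p = l + 1;                             // edge strictly inside (l, r)
                    if (bst[p] > NEG) sufN = max(sufN, bst[p] + sufS[p + 1]);
                    if (sufN > NEG) ndpL[l] = -prefS[l] + sufN;
                }
            }
            dpL = ndpL; dpR = ndpR;
        }
        cout << *max_element(dpR.begin(), dpR.end()) << "\n";
    }
}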

I had some trouble finding the right index ranges, recurrence expressions and range end values for these structures, and there may still be some mistakes here, but I got AC !

Complexity

Thanks to all prefix sums and maximums, the complexity is linear for each day and results in a $$$\mathcal{O}(nm)$$$ complexity.

What I've learnt

  • It was a very interesting problem, I loved it.

  • I had a really hard time working with indices, and I really doubt I'd be able to do this during a contest. This is something I need to work on.

  • Prefix sums and prefix maximums allow designing really efficient approaches. Furthermore, they can be combined to build prefixes of prefixes and yield very elegant solutions.

  • Maybe I should take more time to look at others' submissions to learn alternative approaches. I came across this super short solution by ecnerwala that I'm still trying to understand.

Thank you for taking some time to read this, I really appreciated doing it and will eventually continue for some recent contests. Let me know if I was unclear, or if you have suggestions to share on some problems.
Thank you ArvinCiu, CyberSleeper, athin and joelgun14 for writing this contest, and MikeMirzayanov for the amazing Codeforces and Polygon platforms ! See ya


By Orn0, 5 weeks ago, In English

I recently took part in Codeforces Round 976 (Div. 2) and Divide By Zero 9.0, and I performed ... poorly (only A, after 6 bad submissions). Thinking about this traumatizing experience, I first tried to find excuses. I entered 15 minutes late, I was sick and unprepared. But I realized that there was more to learn from it, and I wanted to try to produce a custom editorial of the first 4 problems of the contest, to better understand and share what I did wrong (and perhaps also what I did right :)).

Welcome to my humble editorial

2020A - Find Minimum Operations

The first step was to understand the structure of the optimal solution. For given $$$n$$$ and $$$k$$$, the optimal number of operations is obtained by removing the largest possible portions of $$$n$$$ of the form $$$k^x$$$ at each step, i.e. finding the largest $$$x$$$ such that $$$k^x \leq n$$$.

The algorithm is as follows :

while n > 0
    x = findLargestX(n,k)
    while n >= k^x
        n -= k^x

I was quickly able to see that the answer is the sum of the digits of $$$n$$$ in base $$$k$$$. This helps to understand why the algorithm works, and why we should find the largest $$$x$$$ value at each step.

It is clearer with an example. In the fourth test case, $$$n=100$$$ and $$$k=3$$$. $$$(100)_{10} = 1*3^4 + 2*3^2 + 1*3^0 = (10201)_3$$$. The answer for the test case is then $$$1+2+1=4$$$.
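
In hindsight, this directly gives the whole solution (a sketch; the $$$k=1$$$ guard is needed since then only $$$k^0=1$$$ can ever be subtracted):

// answer = sum of the digits of n written in base k
long long ops = 0;
if (k == 1) ops = n;                          // special case: one operation per unit
else while (n > 0) { ops += n % k; n /= k; }
cout << ops << "\n";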

I know that $$$x = \lfloor log_k(n) \rfloor$$$.

Proof

I wrote it as $$$(\textbf{ull})(log_2(n) / log_2(k))$$$ and got wrong answer on pretest 2. I wasn't sure I understood how to handle the edge cases of double division and casting. A better idea was to look directly for $$$k^x$$$.

My first try was to write a while-loop in $$$\mathcal{O}(x)$$$ to find the largest $$$k^x \leq n$$$. This got TLE. It is only after trying B and coming back that I realized it could be done in $$$\mathcal{O}(log(x))$$$ with a syntax similar to fast exponentiation.

Code

What I learnt

  • Don't rush even into simple problems.
  • It is great to identify known results, but using "black-box" numerical approaches might not be the best idea when designing an exact algorithm
  • When writing a loop in $$$\mathcal{O}(n)$$$, ALWAYS look for a way to simplify it to $$$\mathcal{O}(log(n))$$$ or $$$\mathcal{O}(1)$$$, for example.

2020B - Brightness Begins

It is necessary here to think about the structure of the answer. First of all, considering $$$n+1$$$ bulbs instead of $$$n$$$ doesn't change the state of the first $$$n$$$ bulbs. A bulb at position $$$k$$$ is flipped once for each divisor of $$$k$$$. All on bulbs will be located at positions with an even number of divisors.

I thought of the prime factor decomposition of a number, from which it is easy to compute the number of divisors.

$$$n = \prod_{i=0}^{i=k}p_i^{\alpha_i}$$$
$$$nbDiv(n) = \prod_{i=0}^{i=k}(\alpha_i+1)$$$

However, the answer had to be at least $$$k$$$, which can be up to $$$10^{18}$$$, so complete decomposition was not an option.
In fact, even $$$\mathcal{O}(n)$$$ was probably too slow.

I was missing the fact that, in terms of this product, numbers with an even number of divisors are far more common than those with an odd one.

Proof

If the power of any prime divisor in the decomposition of $$$n$$$ is odd, then the number of divisors of $$$n$$$ is even. In consequence, the only numbers with an odd number of divisors are those with all even powers in the PFD, i.e. perfect squares !
Drawing the bulbs with $$$n=10$$$ really gives this away :

For a given $$$n$$$, the number of perfect squares below or equal to it is $$$\lfloor \sqrt{n} \rfloor$$$, and the number of on bulbs is $$$n-\lfloor \sqrt{n} \rfloor$$$. Using binary search, it is possible to look for the smallest $$$n$$$ with at least $$$k$$$ on bulbs.

My code is as follows.

C++

During the contest I fumbled a lot and wasn't able to find the right condition to update $$$l$$$ or $$$r$$$, that's on me. But when I came back after the contest, I still had WA. For $$$k=854258779689358055$$$, my answer was $$$854258780613619263$$$ when it should have been $$$854258780613619262$$$.

I knew it was once again related to how I used a black-box double sqrt function in an integer program. After investigating, (unsigned long long)sqrt(854258780613619263) = 924261208, which is incorrect. To avoid the numerical error, I should have used (unsigned long long)sqrt((long double)854258780613619263) (or sqrtl(854258780613619263)).
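
A sketch of the fixed search, with an integer-safe square root wrapped around sqrtl:

// floor(sqrt(n)) with correction steps, immune to floating-point error
unsigned long long isqrt(unsigned long long n) {
    unsigned long long r = sqrtl((long double)n);
    while (r > 0 && r * r > n) r--;
    while ((r + 1) * (r + 1) <= n) r++;
    return r;
}

// smallest n with n - floor(sqrt(n)) >= k: the count of on bulbs is monotone in n
unsigned long long lo = 1, hi = 2'000'000'000'000'000'000ULL;
while (lo < hi) {
    unsigned long long mid = lo + (hi - lo) / 2;
    if (mid - isqrt(mid) >= k) hi = mid; else lo = mid + 1;
}
cout << lo << "\n";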

What I learnt

  • Once again, I lost time thinking about a complex PFD solution and didn't see the simpler pattern. Don't rush into complex thinking immediately.
  • Sometimes, using graphical hints can help to identify patterns.
  • I guess I lack experience with numerical programming, and I'll try to remember this edge case in the future.

2020C - Bitwise Balancing

I read the problem during the contest and saw that it would mostly be about writing a truth table, but I didn't have time to really start it.

My solution is basically the same as in the official tutorial. One can see that each bit can be processed independently (after making sure that (a|b) - (a&c) can never be -1 on a single bit). By building a truth table, I was able to write a solution quickly.

b c d | a|b | a&c | (a|b)-(a&c) | a
0 0 0 |  a  |  0  |      a      | 0
0 0 1 |  a  |  0  |      a      | 1
0 1 0 |  a  |  a  |      0      | 0/1
0 1 1 |  a  |  a  |      0      | x
1 0 0 |  1  |  0  |      1      | x
1 0 1 |  1  |  0  |      1      | 0/1
1 1 0 |  1  |  a  |     1-a     | 1
1 1 1 |  1  |  a  |     1-a     | 0
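
Reading the table row by row gives a direct bit-by-bit reconstruction (a sketch; I set $$$a$$$'s bit to $$$0$$$ in the "0/1" rows and return $$$-1$$$ on the impossible "x" rows):

// rebuild a from the truth table, bit by bit; -1 when some bit is impossible
long long solve(long long b, long long c, long long d) {
    long long a = 0;
    for (int i = 0; i < 62; i++) {
        int bb = b >> i & 1, cc = c >> i & 1, dd = d >> i & 1;
        if (!bb && !cc) a |= (long long)dd << i;           // rows 1-2: a = d
        else if (!bb && cc && dd) return -1;               // row 4: impossible
        else if (bb && !cc && !dd) return -1;              // row 5: impossible
        else if (bb && cc) a |= (long long)(1 - dd) << i;  // rows 7-8: a = 1 - d
        // rows 3 and 6: a is free, keep the bit at 0
    }
    return a;
}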

What I learnt

Not much, I didn't find the problem very educational

2020D - Connect the Dots

I tackled the problem a few days after the contest. At first, it looks like a classic Disjoint Set Union problem. Each operation can be decomposed into $$$k$$$ merges by considering the edges ($$$a$$$, $$$a+d$$$), ..., ($$$a+(k-1)*d$$$, $$$a+k*d$$$). I was able to implement it quickly using my DSU template, but got TL without much surprise.

// naive version: one merge per edge, O(sum of all k) DSU operations in total
forn(i,1,m) {
    cin >> a >> d >> k;
    forn(j,0,k-1) dsu.merge(a+j*d, a+(j+1)*d);
}

In a static graph, it seems to me that there is no reason to prefer DSU over DFS (except that I didn't have a DFS template at the time). But this isn't the reason for my TL: counting components using DSU with path compression has a nearly linear complexity.

For each query $$$1\leq q \leq m$$$ I'm creating $$$k_q$$$ edges, resulting in $$$\sum_{q=1}^{q=m}k_q$$$ edges. If ranges are disjoint, this can't be reduced. But since $$$k$$$ can be large w.r.t. $$$n$$$, ranges may overlap a lot. For example, let's take $$$(4,2,6)$$$ and $$$(2,2,5)$$$. They overlap on 4 edges ! This is because they share the same gap $$$d$$$; they could instead have been in phase opposition. These two ranges can overlap because they both start at a position with the same remainder modulo $$$d$$$ ($$$4\%2 == 2\%2 == 0$$$).

I thought I could take advantage of $$$d \leq 10$$$. For a position $$$a$$$ and each value of $$$d$$$, I need to know whether I must create an edge $$$(a,a+d)$$$, i.e. whether there exists at least one range with step $$$d$$$ containing $$$a$$$. For such a range to exist :
- it begins before $$$a$$$ and ends after $$$a$$$,
- it has step $$$d$$$,
- the remainder modulo $$$d$$$ of any of its values is $$$a\%d$$$.

I use a table $$$latestEnd$$$ of size $$$11*10$$$. The first index is associated with the value of $$$d$$$ and the second with the value of $$$a\%d$$$. For each $$$a$$$, I consider the end of the range of each query starting there, stored in $$$queries[a]$$$. If I meet new queries with the same characteristics (possibly overlapping), I record the new latest end.
Then for each value $$$d$$$ in $$$[1,10]$$$, I test whether a range of step $$$d$$$ and offset $$$a\%d$$$ ends after $$$a$$$. If so, I create the edge $$$(a,a+d)$$$.

In the above example, when reaching $$$(2,2,5)$$$, I need to register that I activated a range of step $$$d=2$$$ with offset $$$a\%d = 0$$$ ending at $$$a+d*k = 12$$$. For $$$a=4,6,8,10$$$ and $$$d=2$$$, I can check that a previously started range of step $$$d$$$ and offset $$$0$$$ ends after $$$a$$$, i.e. contains it, and create the edge $$$(a,a+d)$$$.

Later, when reaching $$$(4, 2, 6)$$$, the range shares the values $$$d=2$$$ and $$$a\%d = 0$$$, but ends later, at $$$a+d*k = 16$$$, so I need to notify further values of $$$a$$$ up to $$$16-2 = 14$$$ s.t. $$$a\%2 = 0$$$ to create an edge $$$(a,a+d)$$$.

Code
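
A sketch of the sweep (my DSU template with a merge method assumed, and the operations bucketed by their starting position):

// queriesAt[a] holds the (d, k) pairs of the operations starting at position a
vector<vector<pair<int,int>>> queriesAt(n + 1);
// ... read the m operations into queriesAt ...
long long latestEnd[11][10] = {};                // latestEnd[d][a % d]: furthest end seen so far
for (int a = 1; a <= n; a++) {
    for (auto [d, k] : queriesAt[a])             // open the ranges starting at a
        latestEnd[d][a % d] = max(latestEnd[d][a % d], (long long)a + (long long)d * k);
    for (int d = 1; d <= 10; d++)
        if (latestEnd[d][a % d] >= a + d)        // some open range still covers the edge (a, a+d)
            dsu.merge(a, a + d);
}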

What I learnt

This was a very interesting problem. I found many different solutions in the comments of the official editorial, from which I should maybe learn :
- A first classical algorithm is a good start to get a working solution.
- In case of TL, it's important to identify the main sources of complexity.
- One can take advantage of the parameter domains specified in the problem.
- When working with ranges, you most likely want to identify overlapping ranges to reduce duplicated work, e.g. using dp.

Conclusion

I'll stop here. Being able to solve 4 problems in a Div. 2 contest is a reasonable goal. I really enjoyed writing this blog, and I hope it can benefit or amuse some fellow Codeforcers. I'll eventually write another one after my next contest, even if my performance happens to be less disastrous.
Thank you P.V.Sekhar and nishkarsh for organizing this contest, and MikeMirzayanov for the Codeforces and Polygon platforms.


By Orn0, 6 weeks ago, In English

This is my version of the editorial for problem D of Codeforces Round 973 (Div. 2). It was motivated by GrishinD's great blog about his solution, and it presents a slower yet interesting alternative. I wanted to try this exercise to improve my thinking process and maybe become a Div. 1 Codeforcer along with GrishinD !

I first had a dynamic programming approach and got TL. Then I tried to work with binary search. I used a constructive algorithm inside the search to check the validity of the current value. My solution got AC but I wasn't convinced of its correctness and I wanted to understand it in depth.
This blog explains my solution and brings a comprehensive proof. I'll first describe my algorithm and intuition, and then go through the proof.

Statement

You are given an array $$$a_1,a_2,…,a_n$$$ of length $$$n$$$.

You can perform any number (possibly, zero) of operations on the array.

In one operation, we choose a position $$$i (1 \leq i \leq n-1)$$$ and perform the following actions:
- $$$a_i:=a_i-1$$$,
- $$$a_{i+1}:=a_{i+1}+1$$$

Find the minimum possible value of $$$max(a_1,a_2,…,a_n)-min(a_1,a_2,…,a_n)$$$

I — Notation and vocabulary

I will use some notations I found useful.
- $$$\forall i, 1 \leq i \leq n$$$ I will denote $$$a_i$$$ as $$$a[i]$$$.
- $$$max(a) $$$ and $$$min(a)$$$ will denote respectively $$$max_{i \in [1, n]}{a[i]}$$$ and $$$min_{i \in [1, n]}{a[i]}$$$

I had a pretty mathematical approach, but I will describe the problem as a line of columns of blocks of size $$$a[i]$$$.

II — Constructive algorithm and intuition

The solution first shows that the smallest obtainable maximum (denoted $$$alpha$$$) and the largest obtainable minimum (denoted $$$beta$$$) can be found with binary search in $$$\mathcal{O}(n*log(n))$$$. Then I bring the proof that these two values can be obtained simultaneously using the operation.

I'll describe here my first algorithm, used in the binary search to check whether all values of a rearranged $$$a$$$ array can be at most $$$alpha$$$.

For $$$i=1$$$ to $$$i=n-1$$$, if $$$a[i] > alpha$$$:
- $$$a[i+1] := a[i+1] + (a[i] - alpha)$$$
- $$$a[i] := alpha$$$

Note that this is always a valid algorithm, since it can be decomposed into successive applications of the operation.
Visually, the algorithm goes through columns from left to right. If a column exceeds $$$alpha$$$, the excess is carried over to the next column.
Let's start with an input array $$$a$$$. The first column's excess is carried to the next, and so on.

Eventually we reach the end of the array

At the end of the loop, $$$alpha$$$ is an admissible majorant iff the last column doesn't exceed $$$alpha$$$. Otherwise, $$$alpha$$$ is too low: it can't be the smallest maximum.

Quick proof

The same logic can be applied to show that $$$beta$$$ can be smaller than or equal to all values of a rearranged array, with the following algorithm

For $$$i=n$$$ to $$$i=2$$$, if $$$a[i] < beta$$$:
- $$$a[i-1] := a[i-1] - (beta - a[i])$$$
- $$$a[i] := beta$$$

Starting from another input array $$$a$$$, the propagation goes backwards :

Values in $$$a$$$ can temporarily be negative, as shown in orange; only the value of $$$a[1]$$$ needs to be tested.

At the end of the loop, $$$beta$$$ is an admissible minorant iff the first column is greater than or equal to $$$beta$$$.
The proof is similar.

The intuition is then that we can apply the $$$alpha$$$-algorithm and then the $$$beta$$$-algorithm to get a rearranged array with the smallest maximum $$$alpha$$$ and the largest minimum $$$beta$$$. Then the solution of the problem is $$$alpha - beta$$$.

Code
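
A sketch of both checks and the two binary searches (0-indexed here; canMax pushes the excess right, canMin borrows the deficit from the left):

// alpha-check: cap each column at alpha, carrying the excess to the right
bool canMax(const vector<long long>& a, long long alpha) {
    long long carry = 0;
    for (long long v : a) carry = max(0LL, v + carry - alpha);
    return carry == 0;                           // nothing may spill past the last column
}

// beta-check: raise each column to beta, borrowing the deficit from the left
bool canMin(const vector<long long>& a, long long beta) {
    long long debt = 0;
    for (int i = (int)a.size() - 1; i >= 1; i--) debt = max(0LL, beta - (a[i] - debt));
    return a[0] - debt >= beta;
}

// binary searches over the value range, then answer alpha - beta
long long lo = *min_element(a.begin(), a.end()), hi = *max_element(a.begin(), a.end());
while (lo < hi) { long long mid = lo + (hi - lo) / 2; if (canMax(a, mid)) hi = mid; else lo = mid + 1; }
long long alpha = lo;
lo = *min_element(a.begin(), a.end()); hi = *max_element(a.begin(), a.end());
while (lo < hi) { long long mid = lo + (hi - lo + 1) / 2; if (canMin(a, mid)) lo = mid; else hi = mid - 1; }
long long beta = lo;
cout << alpha - beta << "\n";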

It seemed logical at first, but I couldn't be sure of its validity. I took some time to think, and the rest of the blog is a formal proof of this result. Time for the heavy guns.

III — Formal proof of $$$alpha$$$-$$$beta$$$

The previous parts showed that we can find $$$alpha$$$ and $$$beta$$$ with a constructive algorithm starting from the initial array $$$a$$$. The idea of the proof is to show that applying the $$$alpha$$$-algorithm yields an array to which it is always possible to apply the $$$beta$$$-algorithm as well.

A — Characterization of admissible maximum and minimum

The first step is to find $$$beta= max(min(\tilde{a}))$$$, where $$$\tilde{a}$$$ is the input array after performing an arbitrary number of operations. This is the largest value that can be obtained for the minimum value in the array $$$a$$$.

For a value $$$beta$$$ to be an admissible minimum of $$$\tilde{a}$$$ is equivalent to this statement : $$$\forall i, 1 \leq i \leq n, \sum_{j=1}^{j=i}{a[j]} \geq i * beta$$$

In other words, the following statements are equivalent

  • For any position $$$i$$$, the total amount of blocks up to this position must be at least $$$i$$$ columns of size $$$beta$$$.

  • The columns $$$1$$$ to $$$i$$$ can be rearranged using the operation to be all greater than or equal to $$$beta$$$.

Proof

$$$(H_k) : (\forall i, 1 \leq i \leq k, \sum_{j=1}^{j=i}{a[j]} \geq i * beta) \iff (beta \leq min_{i \in [1, k]}[\tilde{a}[i]])$$$

(Note that this applies to the prefix of size $$$k$$$ of $$$\tilde{a}$$$.)

Initialisation

  • For $$$k=1$$$, the statement is obvious.
  • For $$$k=2$$$, the first column alone has to be at least $$$beta$$$, because it can't ever grow. The second can be smaller, as long as there is enough in $$$a[1]$$$. This writes as : $$$a[1] \geq beta$$$ and $$$a[2] + (a[1] - beta) \geq beta$$$, which is equivalent to $$$(H_2)$$$.

Induction

Suppose $$$(H_k)$$$ is true for an arbitrary value of $$$k$$$. Let's show that $$$(H_{k+1})$$$ is true.

$$$[A \Leftarrow B]$$$ Suppose the right part of $$$(H_{k+1})$$$. If $$$\sum_{j=1}^{j=k+1}a[j] < (k+1)*beta$$$, there are simply not enough blocks in the first $$$k+1$$$ columns for $$$beta$$$ to be an admissible minimum. Therefore, reductio ad absurdum, $$$\sum_{j=1}^{j=k+1}a[j] \geq (k+1)*beta$$$ has to be true.

$$$[A \Rightarrow B]$$$ Suppose the left part of $$$(H_{k+1})$$$. By supposition, we can rearrange $$$a[1], ..., a[k]$$$ to be larger than $$$beta$$$. If $$$a[k+1] \geq beta$$$, the right part of $$$(H_{k+1})$$$ is true. Else,

$$$\sum_{j=1}^{j=k+1}{a[j]} \geq (k+1)*beta \iff \sum_{j=1}^{j=k+1}{a[j]} - k*beta \geq beta \iff \sum_{j=1}^{j=k}{a[j]} - k*beta \geq beta - a[k+1] \geq 0$$$

The quantity $$$\sum_{j=1}^{j=k}{a[j]} - k*beta$$$ represents the excess left of $$$a[k+1]$$$ after rearranging $$$a[1], ..., a[k]$$$, which can be used to get $$$\tilde{a}[k+1] \geq beta$$$

Conclusion

The two statements are equivalent. Moreover, they are also both equivalent to the statement "applying the $$$beta$$$-algorithm yields a feasible solution". I won't write it again, but a symmetrical proof can be written to prove that the three following statements are equivalent :

  • $$$\forall i, 1 \leq i \leq n, \sum_{j=i}^{j=n}a[j] \leq (n-i+1)*alpha$$$
    For any position $$$i$$$, the total amount of blocks from this position to the right must be at most $$$n-i+1$$$ columns of size $$$alpha$$$

  • $$$alpha \geq max(\tilde{a})$$$ for some reachable $$$\tilde{a}$$$
    The columns $$$i$$$ to $$$n$$$ can be rearranged using the operation to be all lower than or equal to $$$alpha$$$

  • The $$$alpha$$$-algorithm yields a valid array for which $$$alpha$$$ is a majorant.

The first interesting result is that this prefix (resp. suffix) sum criterion can be used in the binary search for $$$beta$$$ (resp. $$$alpha$$$) instead of building a modified array. But the best is yet to come !

B — Use of smallest maximum and largest minimum in the solution

Finally, this last part shows that we can always rearrange the initial $$$a$$$ into a solution array that has both $$$alpha$$$ as maximum and $$$beta$$$ as minimum.

With the optimal $$$alpha$$$ found with binary search and the suffix sum criterion, suppose that we apply the $$$alpha$$$-algorithm to the input array $$$a$$$. We will show that this implies that the prefix sum criterion is still respected, so the $$$beta$$$-algorithm can always be applied as well.

$$$beta$$$ being the largest admissible minimum for a rearranged $$$a$$$, the prefix sum criterion is respected before applying the $$$alpha$$$-algorithm.
We denote by $$$a^{(k)}$$$ the state of the rearranged $$$a$$$ after applying the $$$k$$$-th step of the $$$alpha$$$-algorithm, i.e. right after possibly modifying $$$a[k]$$$ and $$$a[k+1]$$$.

$$$(H_k) : \forall i, 1 \leq i \leq k, (a^{(k)}[i] \leq alpha) \wedge (\sum_{j=1}^{j=i}a^{(k)}[j] \geq i*beta)$$$

Initialisation

$$$(H_1)$$$ : after one step, $$$a^{(1)}[1]$$$ was either set to $$$alpha \geq beta$$$ or kept as $$$a[1] \geq beta$$$.

Induction

Suppose $$$(H_k)$$$ true for a fixed $$$k$$$. Consider $$$(H_{k+1})$$$ :

  • either $$$a^{(k)}[k+1] \leq alpha$$$, then $$$a^{(k+1)}[k+1] = a^{(k)}[k+1]$$$, and no blocks were moved past column $$$k+1$$$. The prefix sums up to index $$$k+1$$$ haven't changed, and the second part of $$$(H_{k+1})$$$ is true as well.

  • or $$$a^{(k)}[k+1] > alpha$$$, then $$$a^{(k+1)}[k+1] := alpha$$$ and $$$a^{(k+1)}[k+2] := a^{(k)}[k+2] + a^{(k)}[k+1] - alpha$$$.

By induction, $$$\sum_{j=1}^{j=k}a^{(k)}[j] \geq k*beta$$$ and $$$a^{(k+1)}[k+1] = alpha \geq beta$$$,
$$$\Rightarrow \sum_{j=1}^{j=k+1}a^{(k+1)}[j] \geq (k+1)*beta$$$.

Conclusion

$$$(H_n)$$$ is true, so the complete application of the $$$alpha$$$-algorithm maintains the prefix-sum criterion, and $$$beta$$$ is still the largest possible minimum.
It is thus possible to search for $$$alpha$$$ and $$$beta$$$ separately, and combine them to build the final answer, $$$alpha-beta$$$.

Thank you for reading this far; it took me much too much time, but I loved trying to formalize the proof. Let me know if you find mistakes, or simpler ways of proving the result.
Thank you I_Love_Diar_Narumov for organizing this contest, and MikeMirzayanov for this great platform. See ya !
