Recent actions

I have neither copied the solution from anywhere nor shared my solution with anyone of my own free will. I don't know how it matched. I have a self-made template. Last contest I submitted only one solution; this time I solved 3, so the number of solutions was never a problem. If someone shared my code at such a large scale, then they deserve punishment; otherwise I don't see how there could be so much coincidence. I solved it using an online compiler. I couldn't submit it early because the website was not verifying me as a human, so my solution was submitted by a friend with whom I myself shared my code, and I am sure he wouldn't share it with anyone else. Please don't delete my contest. I would rather lose points than be tagged as a cheater. The online compiler was a normal compiler, so I don't see any chance of my answer getting leaked there. The only way my answer could have leaked is the Codeforces website. I may be wrong, but that's what I feel.

Regarding the alleged coincidences in the solutions 268180337 and 268200502:

I am the author of the first solution. The resemblance between these two codes is due to the fact that my friends and I had upsolved a recent Div.3 contest completely. In that contest, there was a problem titled "Money Buys Less Happiness Now," which had a similar logic (using a multiset greedily) to this D problem. Since we discussed and solved that question together, our approaches in these submissions were also similar. Therefore, I strongly request that I should not be held accountable for plagiarism in this contest.

Thank you for your understanding.

I haven't copied from anyone, nor given my solution to anyone.

I recently got a mail regarding the similarity between my B solution and others'. What should I do about it? How can I get it resolved?

It happens....

Why am I getting mail about copied code? I have not copied from anyone. Still, I got the email and my problems were skipped.

What's the matter? Can anyone please explain?

The similarities between my solution and another participant's may be due to a common approach to the problem, the use of standard algorithms and templates, or simple coincidence. Given that my solution is similar to only one other participant's out of over 10,000 participants, the likelihood of coincidence is high. I really don't care about my rating, but please don't delete my account if you decide otherwise.

Same, need a +2 :))

pls give negative to me

You could solve the problem with a greedy: https://codeforces.net/contest/1987/submission/268268604

I imagine that is why the rating is not as high as you thought.

I feel upset that I find G1 easy for me now... qwq

I still don't know why there are so many accepted solutions to problem D; the dynamic programming is not that easy, it made my head explode, and many of my candidate master friends didn't solve it. However, according to clist.by, it is only a 1600-rated problem; I thought it should be at least 1900.


Thank you so much for this detailed explanation for the dp approach. Finally understood it.

Hi, may I know why this approach failed when the operation started from the root (not from the leaf)?

When I iterate from node $$$1$$$ to $$$n$$$, it fails. Otherwise, it is correct.

The same thing happens when you reverse the order of the official solution to $$$[0, n-1]$$$.

Your explanation is exactly correct. Essentially it's the accumulated turns Bob can use to take several indices, so he can block Alice from taking them.

The term "allowance" may not really make sense, and if so I'm sorry about that. It's abstract concept (kind of), and I wasn't really sure the right term for it.

It's important to remove all cakes of the same level of tastiness, because if Bob eats only some of the cakes at a certain level and leaves the others, Alice will eat the remaining ones and still gain points.

hello

Hey bro, does depth_cnt[i][j] mean that vertex i is at depth j and can be used depth_cnt[i][j] number of times?

Can you confirm? Please.

It means, Bob can complete his step 1 (allowance) before Alice can take her next step.

The "allowance" is defined as the number of moves that Bob hasn't taken yet. If Alice takes the index $$$i$$$, Alice moved but Bob hasn't, so Bob now has an extra move. This means that his "allowance" is increased by $$$1$$$. It's a really abstract idea to me and I personally didn't solve it in the contest (womp womp to me).

In case someone gets WA on test 2 in E, so you don't spend an hour finding the test:

Testcase 580 in test 2 of E (Wonderful Tree):

14

0 5 4 2 4 2 1 2 0 3 3 2 2 2

1 1 1 3 2 2 5 6 4 6 9 4 7

I was using a set, and at some point I had to keep two states with depth 2 and delta 1, but the set obviously only kept one copy... so, use a multiset. :)
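For reference, a tiny demo of that pitfall (the {depth, delta} pairs here are hypothetical, just mirroring the situation above):

#include <cstdio>
#include <set>
using namespace std;

int main() {
    // Two identical {depth, delta} states, as in the failing case above.
    set<pair<int, int>> s;
    s.insert({2, 1});
    s.insert({2, 1});
    printf("%zu\n", s.size());  // 1: the duplicate state is silently dropped

    multiset<pair<int, int>> ms;
    ms.insert({2, 1});
    ms.insert({2, 1});
    printf("%zu\n", ms.size()); // 2: both states survive
}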

I wonder if the rating changes will get temporarily rolled back and I get to pupil after :')

Great

-331 points to CM, I can't wait

Thank you for letting me know, I've corrected it.

Thank you so much for the reply! There is one thing I still don't understand though: when you say "If Alice takes the index i, Bob gains 1 allowance",

what exactly do you mean by this?

After all, it's problem E, not A or B, so since making $$$n\le 2\times 10^5$$$ isn't actually a big increase in difficulty (most people who can solve this problem should know small-to-large merging, and it has appeared in Div2 contests before), why not make it like that?

Setting the constraint to $$$5000$$$ would make people like me who are used to this technique doubt why the constraint was so small, and I had to look back really carefully just to make sure that I hadn't misread anything.

It's so strong. I simulated for more than two hours to make it work, and you can write this dp in a few minutes.

"adds more to a problem" not necessarily in a good way, and here i agree

There is literally no difference between the two except for a harder implementation and knowledge of the technique, of course.

Although the first 2 points are straightforward,

can you explain the 3rd point to me: "Bob should only eat a cake if he intends to eat all of those cakes of that level of tastiness"?

Isn't it optimal for Bob to pick the largest unique values, because no matter what the frequency is, Alice can only eat once for each element?

1,9,5,4,5,2,3,3,5
Greedy: 5-2, 4-3, 1-9
Correct: 4-5, 1-9, 3-3, 2-5

Ratings are calculated differently for new users for their first 6 contests; read this blog for the detailed process: https://codeforces.net/blog/entry/77890

First, create an array $$$c$$$ that contains the count of each value of $$$a_i$$$, in increasing order of $$$a_i$$$. For example, if $$$a = \{ 3, 2, 2, 2, 5, 5, 5, 5 \}$$$, then $$$c = \{ 3, 1, 4\}$$$ (the count of $$$2$$$ is $$$3$$$, the count of $$$3$$$ is $$$1$$$, and the count of $$$5$$$ is $$$4$$$). Let's say the length of $$$c$$$ is $$$m$$$. We forget $$$a$$$ for the rest of the tutorial and refer to cakes with the same value of $$$a$$$ by their index in $$$c$$$ ("cakes $$$i$$$" ($$$i = 0, 1, \ldots, m - 1$$$) means the $$$c_i$$$ cakes with the $$$(i+1)$$$-th smallest value of $$$a$$$). Note we use the $$$0$$$-based index for $$$c$$$ below.

Alice's optimal strategy is simple: it's always optimal for her to take the smallest index (in terms of $$$c$$$) that's available to her. Bob, on the other hand, chooses a set of indices $$$x_1, x_2, \cdots, x_k$$$ ($$$1 \le x_p \le m - 1$$$, $$$1 \le p \le k$$$) and tries to eat all of the cakes $$$x_p$$$ before Alice can eat any of them, and he wants to maximize $$$k$$$ (the size of $$$x$$$).

Here we want to consider when Bob can take the cakes $$$i$$$. Obviously, he doesn't want to take cakes $$$0$$$ (because Alice takes one of them in her first turn, and it doesn't make sense for him to eat the rest if any). He can take cakes $$$1$$$ if and only if $$$c_1 = 1$$$ (if $$$c_1 \ge 2$$$, then Alice can eat one of cakes $$$1$$$ in her second turn). Similarly, he can take cakes $$$2$$$ if $$$c_2 \le 2$$$, and so on.

So we can think of the following algorithm: Bob has $$$0$$$ allowance initially. We look at each $$$i$$$ in the increasing order, and assign Alice or Bob to each index. If Alice takes the index $$$i$$$, Bob gains $$$1$$$ allowance. Let's say Bob has $$$j$$$ allowances so far. Then he can take the index $$$i$$$ if $$$j \ge c_i$$$, and by doing so he loses $$$c_i$$$ allowances.

At this point it's relatively straightforward to build a DP. Let's define $$$\mathtt{dp}[i][j]$$$ as the maximum possible number of indices Bob took so far, where $$$i$$$ is the index of cakes we're going to look at next, and $$$j$$$ represents Bob's allowances. The state transition is:

  • $$$\mathtt{dp}[0][0] = 0$$$,
  • $$$\mathtt{dp}[i + 1][j + 1] = \max (\mathtt{dp}[i + 1][j + 1], \mathtt{dp}[i][j])$$$ (if Alice takes the index $$$i$$$),
  • $$$\mathtt{dp}[i + 1][j - c_i] = \max (\mathtt{dp}[i + 1][j - c_i], \mathtt{dp}[i][j] + 1)$$$ if $$$j \ge c_i$$$ (if Bob takes the index $$$i$$$).

The final answer (the number of Alice's indices) is $$$\displaystyle m - \max_{0 \le j \le m} \mathtt{dp}[m][j]$$$. The table size is $$$(m + 1)^2$$$, and it takes $$$O(1)$$$-time to calculate each cell of $$$\texttt{dp}$$$, so the algorithm requires $$$O(m^2) = O(n^2)$$$ memory and time. Note that the memory limit 256MB is a bit tight for the $$$5001^2$$$ element table, so you might want to avoid allocating the whole table and calculate each row incrementally.
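A minimal sketch of this DP with the row-by-row trick just mentioned (my code, assuming the standard input format for this problem; not the official implementation):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n;
        scanf("%d", &n);
        vector<int> a(n);
        for (auto &x : a) scanf("%d", &x);
        sort(a.begin(), a.end());

        // c[i] = size of the i-th group of equal-tastiness cakes.
        vector<int> c;
        for (int i = 0, j; i < n; i = j) {
            for (j = i; j < n && a[j] == a[i]; j++) {}
            c.push_back(j - i);
        }
        int m = c.size();

        // dp[j] = max number of groups Bob has fully eaten so far, given
        // j unused moves ("allowances"); rows are rolled to keep O(m) memory.
        vector<int> dp(m + 1, -1), nxt(m + 1, -1);
        dp[0] = 0;
        for (int i = 0; i < m; i++) {
            fill(nxt.begin(), nxt.end(), -1);
            for (int j = 0; j <= m; j++) {
                if (dp[j] < 0) continue;
                // Alice takes group i: Bob banks one more move.
                if (j + 1 <= m) nxt[j + 1] = max(nxt[j + 1], dp[j]);
                // Bob takes group i: he spends c[i] banked moves.
                if (j >= c[i]) nxt[j - c[i]] = max(nxt[j - c[i]], dp[j] + 1);
            }
            swap(dp, nxt);
        }
        // Alice ends up with m minus the number of groups Bob took.
        printf("%d\n", m - *max_element(dp.begin(), dp.end()));
    }
    return 0;
}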

It still doesn't make any sense. My other friend has an old account; his previous rating was 1048, he got rank 10667 and only -36 rating. So you're saying that with a better rank and a lower previous rating you get -49, while with a worse rank and a higher previous rating you get only -36? Where's the logic in this?

Except that it didn't actually prevent most of those worse complexity solutions from passing, and it only made a vague boundary between lower and higher constant solutions.

Authors should not only consider their own solution but also whether they would want to allow worse solutions to pass or not, and try to block them if they don't want them to pass. If they didn't want $$$\mathcal{O}(N^2)$$$ memory solutions to pass, then a viable option would be setting $$$N \le 10000$$$ and maybe a bit more TL (and 128 or 64 MB ML is also an option). They should still be fast enough with any $$$\mathcal{O}(N)$$$ memory solution because of high cache hit rate and small I/O. If they had no intention to block $$$\mathcal{O}(N^2)$$$ memory solutions, then the ML should just be higher.

It's like setting an $$$\mathcal{O}(N)$$$ problem with $$$N \le 30000$$$ with 1 second TL. If they don't want $$$\mathcal{O}(N^2)$$$ solutions to pass, they should increase the upper limit of $$$N$$$. It's a very poor preparation of a problem if they just go with "we have a $$$\mathcal{O}(N)$$$ solution, so it's up to you if you're gonna try $$$\mathcal{O}(N^2)$$$ solution or not".

Because it is a new account. In his fifth round, he will have an extra +100 rating. So it looks like he got +51, but actually he got -49.

I liked this problem (and contest), but this type of setter take is so dumb. Optimizing to a completely new complexity class always adds more to a problem...

Isn't the cool part seeing what's the limit possible?

If the authors' solution has a better memory complexity, why should they have to help worse-complexity solutions pass? You can still get lucky, but I don't get why memory should be treated as so flexible when time complexity is held to a stricter standard.

Hello everyone, I am a newbie in competitive programming. I am looking for a friend, preferably ranked higher than me, to ask doubts and discuss problems with. If anyone can help me out, that would be great.

How does the rating even work here? My previous rating was 1019 and my friend's was 1035. We both solved 2 problems, A and B, but I did it faster and got a better rank than him. Still, I got -5 and my friend got +51. Why is that? Both of our solutions got accepted, and the screenshots are evidence of it. Can anyone explain? This doesn't make any sense.

man the rating change in this round is so harsh :(

Please explain the approach for D in a little detail? I saw other comments, but they seemed kind of vague; people have just given the transitions they used in the dp without any proper explanation...

yes, it would be

What is the counterexample for that? I wasn't able to come up with or generate a counterexample and I still don't know why it isn't working :P

DP Forces

Bro is speaking English.

Div1+2 rounds are generally harmful for the lower divisions (below 1900); that's not my opinion but statistics.

Great job, congrats

I dunno if the tasks were bad today or I'm dumb, but I was able to solve only A; somehow I have 1300 on another account. Div2 is always a roulette.

I was struggling greatly with ML in E, but once I switched from a pbds gp_hash_table to a std::map I passed the tests with flying colors. Any ideas why that could be the case? Here's exactly the same code that passed, but using gp_hash_table instead of a regular map, and it gets MLE: 268234524.

UPD: I realized that by looking up different depths in a loop I was implicitly adding more and more keys to my hashmaps, leading to MLE. Also, even my in-contest solution gets TLE now; well played.
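If anyone is curious, a minimal illustration of that implicit insertion (std::map shown; as far as I know, pbds's operator[] also inserts a default value on a miss):

#include <cstdio>
#include <map>
using namespace std;

int main() {
    map<int, long long> byDepth;  // hypothetical depth -> count lookup
    // Reading through operator[] default-constructs the value on a miss,
    // so every probed depth becomes a stored key.
    for (int d = 0; d < 1000; d++)
        if (byDepth[d] > 0) { /* never true here */ }
    printf("%zu\n", byDepth.size()); // 1000: keys created purely by lookups
}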

Let's take the frequency array F = {4, 4, 4, 3, 5, 5, 2}.

Now, according to your approach:

  • Alice {3, 4, 4, 3, 5, 5, 2}
  • Bob {3, 4, 4, 3, 5, 5, 1}
  • Alice {3, 3, 4, 3, 5, 5, 1}
  • Bob {3, 3, 4, 3, 5, 5, 0}
  • Alice {3, 3, 3, 3, 5, 5, 0}

Now, no matter what Bob does, Alice can choose 3 more cakes, so 6 cakes in total. But if Bob starts with the 4th cake type, then he can also finish the last cake type before Alice reaches it, so Alice can take only 5 cakes.

This contest was a literal nightmare for me.

Slow solves on ABC + unable to solve D (too incompetent).

I feel worthless.

Take a look now, it was close and not impossible

I did use small-to-large merging,

although I couldn't AC E because of a stupid silly mistake (forgot to clear the global priority queues) -_-

But it turns out that the setters allowed $$$O(n^2 \log n)$$$ solutions to pass.

ExplodingFreeze Isn't the answer at most $$$n\cdot \max(a)$$$ because you can just increase everything to be equal to $$$\max(a)$$$?

Is there a reason for the easier problems to have a 256 MB limit? Especially for problems D and E, it's easy to come up with various solutions that use $$$\mathcal{O}(n^2)$$$ memory, and a single long long array of size $$$5000 \times 5000$$$ already uses nearly 200 MB. Furthermore, it's super hard for heavier languages (for example, Python) to fit into this tight ML.

Apparently, in problem E, it seems like an oversight that many solutions using $$$\mathcal{O}(n^2)$$$ memory with a very high constant passed, including mine. It's kinda lucky for me that the tests didn't include the simple test with a straight-line tree requiring no operations. I knew that my solution could easily fail on such a test, but I just believed that the tests were weak :) There are dozens of uphacks already (https://codeforces.net/contest/1987/hacks?verdictName=CHALLENGE_SUCCESSFUL ), but there could be more solutions that can be hacked with other patterns.

I personally see no reason to use a tight ML like 256 MB nowadays, unless there is a solution that has to fail with that exact limit and not any more lenient.

def f(l, op):
    t = False
    if len(l) <= 1:
        return op
    for i in range(len(l) - 2, -1, -1):
        # a value equal to its own 1-based position can be removed
        # together with the element right after it
        if l[i] == i + 1 and len(l) >= 2:
            l.pop(i)
            t = True
            l.pop(i)
            op += 1
            print(l, i)  # debug print: this extra output alone breaks the answer format
            break
    if t == False:
        return op
    return f(l, op)


for _ in range(int(input())):
    n = int(input())
    l = list(map(int, input().split()))
    op = 0
    print(f(l, op))

Can someone please explain to me why this is incorrect?

Random algorithms are random no matter the test.

So one could submit several random solutions and the chance of all of them failing would be significantly smaller.

Can anyone please tell me what is wrong with my logic for B? Here is my code: 268197602

yes (at least the way I solved it)

Started 50 min late, great contest tho

For me, the number of cases was the same for G1 and G2. For G2 all I needed to add was if statements before some of the cases from the G1 solution.

Yeah I started with greedy too. The correct approach requires dp. The tutorial is out now.

Still, it is assumed that the pretests are good enough

Because it would be abused by solutions which pass with a certain probability?

I also tried assigning cakes to Bob greedily (minimum frequency first), but I got WA on test 2.

Thank you for the contest!

In particular, most of the problems get away from the common pattern of "solve me much faster than O(input^2)", which is nice and refreshing.

Why can't system tests just test solutions in order?

many problems are easy... once you know the solution

When a contestant realizes there is a bug in their submission and has not yet locked it, they should be able to resubmit. Naturally, the last such attempt will be tested.

If you want to submit an optimized version that shouldn't affect the standings, you can do it after the contest.

orz

For D:

Observation 1: Order does not matter, so move things around however we want

Observation 2: It is always optimal for Alice to eat the smallest cake, so we should sort the cakes

Observation 3: Bob should only eat a cake if he intends to eat all of those cakes of that level of tastiness

So with that, we can reduce this problem down to: for each type of cake (so we aggregate into (tastiness, frequency) pairs), in sorted order of tastiness, will Bob eat all of this type of cake or will Alice eat one of the cakes?

Let's think about the consequences of each action

Alice eats the cake: +1 to Alice's score, but Bob can eat 1 more cake

Bob eats the cake: Bob can eat $$$count[cake]$$$ fewer cakes, but Alice's score does not change.

Let $$$dp[i, j]$$$ represent the minimum number of cakes Alice can eat, starting from the $$$i^{th}$$$ least tasty cake type with Bob being able to eat $$$j$$$ cakes.

This leads to recurrences:

$$$dp[i, j] = \min(\text{Bob}, \text{Alice})$$$

$$$\text{Bob} = \begin{cases} dp[i + 1, j - count[i]] & \text{if } j \geq count[i] \\ \infty & \text{otherwise (Bob cannot afford this group)} \end{cases}$$$

$$$\text{Alice} = dp[i + 1, j + 1] + 1$$$

with the base case $$$dp[m, j] = 0$$$, where $$$m$$$ is the number of cake types.
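A top-down sketch of these recurrences as I read them (my hedged implementation, not the commenter's code):

#include <bits/stdc++.h>
using namespace std;

int m;
vector<int> cnt;
vector<vector<int>> memo;

// Minimum number of cakes Alice eats from group i onward,
// with Bob currently holding j banked moves.
int solve(int i, int j) {
    if (i == m) return 0;
    int &res = memo[i][j];
    if (res != -1) return res;
    // Alice eats one cake of group i: +1 to her score, Bob banks a move.
    res = solve(i + 1, j + 1) + 1;  // j + 1 <= m always holds, since j <= i
    // Bob finishes group i before Alice reaches it, spending cnt[i] moves.
    if (j >= cnt[i]) res = min(res, solve(i + 1, j - cnt[i]));
    return res;
}

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n;
        scanf("%d", &n);
        vector<int> a(n);
        for (auto &x : a) scanf("%d", &x);
        sort(a.begin(), a.end());
        cnt.clear();
        for (int i = 0, j; i < n; i = j) {
            for (j = i; j < n && a[j] == a[i]; j++) {}
            cnt.push_back(j - i);
        }
        m = cnt.size();
        memo.assign(m, vector<int>(m + 1, -1));
        printf("%d\n", solve(0, 0));
    }
    return 0;
}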

What are u reading ? Please don't

lol

In this contest, I definitely reached expert, and at the end of the competition I decided to resubmit problem D with an optimization (just like that; I knew my first solution worked). But I didn't know that only the last attempt is counted. Where is that even stated? And why do some people have multiple submissions tested (in system tests)? Because of this stupid rule, I will not reach expert. Thanks for that; any desire for programming is gone.

Sorry, this might be dumb, but what is a random stress tester? Is it different from the stress testing we usually do?

The second reason is correct, I didn't feel like it added much to the problem)

wouldn't it be the minimum $$$j$$$?

Solved D using simple memoization: just think that if Bob has to remove any element, then he has to remove all occurrences of that element, and then do the dp. Submission: 268191223

Also curious whether it is possible to implement G2 quickly and cleanly.

I'm a little confused why G2 is separated from G1, the feeling of having to add many cases and pre-calculations into the code is really frustrating. (And I failed to debug it in time)

Using only G1 would make the problem cleaner; discarding G1 and using only G2 would make the thinking process less interrupted and would keep the coding from being tedious, in my opinion.

Anyway, the problems are good.

BTW, E could be solved easily in $$$O(n \log n)$$$ with small-to-large merging; just curious, why was the constraint $$$5000$$$?

I suspect it's to prevent irritating issues with overflows. Like, you need a leaf to contribute at least $$$n \cdot \max(a)$$$, but a node can sum up to $$$n \cdot n \cdot \max(a)$$$, which will overflow int64 for $$$n = 2 \cdot 10^5$$$ and $$$a_i \leq 10^9$$$. This is fairly easy to fix by capping the sum to $$$n \cdot \max(a)$$$, but it's also irritating for no real reason.
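For illustration, a hedged sketch of that capping idea (the function and names are mine, not from any linked solution):

#include <algorithm>

// Clamp a running subtree sum: values above cap = n * max(a) are
// interchangeable here, since a leaf never needs to contribute more.
// With cap around 2e14, sum + x stays far below the int64 limit.
long long addCapped(long long sum, long long x, long long cap) {
    return std::min(sum + x, cap); // sum and x assumed non-negative
}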

Also, I personally feel that adding small-to-large merging doesn't make the problem any better; it's a bog-standard application of the technique.

Solved using memoization (dp). The logic: if Bob has to remove any number, he removes all occurrences of that number. Calculate the frequency of every element, store the frequencies in a vector, and do $$$O(n^2)$$$ memoization (knapsack-style). You can see the memoized dp solution here: 268191223

Seems I wasn't even close to the proper solutions.

And thanks.

I actually was convinced the greedy idea was wrong but it passed ~500 rounds / 2 mins of stress testing at $$$N = 16$$$ so I submitted the code, only for it to find a countercase 2 seconds after submission T_T

FST — fail system tests (apparently pretests were actual system tests xdd)

In short, in my solution I was pulling negative-sum nodes up to their ancestors.

merge what?

And what is FST?

thank you so much

Since $$$h_j > 0$$$, it is enough to do:

for (int j = 0; j < n; j++) m = max(m, h[j] + j);

(find the height that is the most costly)
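For context, a complete version of that loop (my sketch, assuming $$$0$$$-indexed heights read per test case):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n;
        scanf("%d", &n);
        vector<long long> h(n);
        for (auto &x : h) scanf("%lld", &x);
        // With h[j] > 0, unrolling t = max(h[j], t + 1) from the right
        // gives exactly the maximum of h[j] + j over all j.
        long long m = 0;
        for (int j = 0; j < n; j++) m = max(m, h[j] + j);
        printf("%lld\n", m);
    }
}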

ahh got it, thanks

Try to find how the answer increases from $$$[i+1, n]$$$ to $$$[i, n]$$$.

If $$$a_i$$$ is smaller than $$$a_{i+1}$$$, there must be some moment when $$$a_i$$$ equals $$$a_{i+1}$$$; when $$$a_{i+1}$$$ becomes $$$0$$$, the work on $$$[i+1, n]$$$ is done, and more importantly $$$a_i$$$ is now $$$1$$$, so ans++. If they are equal, ans++ as well.

If it is bigger, it depends on the current answer. If the current answer is bigger than or equal to $$$a_i$$$, it still means there must be some moment when $$$a_i$$$ equals $$$a_{i+1}$$$, thus ans++. And if the current answer is less, ans += a_i - ans, which I guessed out.

If a[i] > a[i+1], then the i-th tree will take a[i] units of time to reach 0; but if a[i] <= a[i+1], it will take some extra time, equal to the time a[i+1] takes to reach a[i] - 1. So simulate this time taken by each tree from right to left, and the time taken by the leftmost tree will be the answer.

I once read somewhere..

Everybody at Cloudflare needs to be locked up

Traverse $$$a$$$ in reverse order. It is easy to see that the whole process terminates when $$$a_1 = 0$$$. Then, we only need to care about positions $$$i$$$ such that $$$a_{i - 1} \le a_i$$$, because they "delay" the process. In this case we add $$$a_i - a_{i - 1} + 1$$$ to our result, because that is the time required to make $$$a_{i - 1} > a_i$$$. In the case that $$$a_{i - 1} > a_i$$$, we subtract from $$$a_{i - 1}$$$ the time that has already passed, i.e. our current result. The answer is the result plus the final $$$a_1$$$.

Submission: 268158682
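A minimal sketch of that reverse traversal as I understand it (a single running "finish time"; this is my code, not the linked submission):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n;
        scanf("%d", &n);
        vector<long long> a(n);
        for (auto &x : a) scanf("%lld", &x);
        // finish = moment the suffix to the right is fully done; tree i
        // needs either its own height or one second more than that suffix.
        long long finish = 0;
        for (int i = n - 1; i >= 0; i--)
            finish = max(a[i], finish + 1);
        printf("%lld\n", finish);
    }
}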

A random stress tester for F saved me from a penalty on F1. (A wrong greedy (take from the back if possible) passes all the fairly small cases, so I should have tried larger tests; but happily, in random cases the number of possible states is not that large.)

Oh, I understand now. Thank you for your explanation and the clean code.

I thought about finding the increasing run from the back (N, N-1, ...), with currTime being the index where this run breaks, plus 1 (if there is at least one more value than the max value of the sequence), making these segments and taking the maximum time.