Thank you for participating in this round! Problem I was authored by orzdevinwang, and the rest were authored by me.
Sorry for the weak pretests in problem B.
Hint 1
Solution
Hint 1
Hint 2
Hint 3
Solution
Hint 1
Hint 2
Solution
Hint 1
Hint 2
Hint 3
Solution
Hint 1
Hint 2
Hint 3
Solution
2061F1 - Kevin and Binary String (Easy Version)
2061F2 - Kevin and Binary String (Hard Version)
Hint 1
Hint 2
Hint 3
Hint 4
Hint 5
Solution
Hint 1
Hint 2
Hint 3
Solution
2061H1 - Kevin and Stones (Easy Version)
2061H2 - Kevin and Stones (Hard Version)
Hint 1
Hint 2
Hint 3
Solution
Hint 1
Hint 2
Hint 3
Hint 4
Solution
Congrats jiangly on becoming jiangly :))
Fun fact for problem D: when I was testing this round, this problem was Div. 2 B with $$$n \le 500$$$. (=・ω・=)
Fun fact: it is still a Div. 2 B level problem.
Whatever $$$O(n^3)$$$ solution was intended is probably harder to find than the current solution.
Can someone please tell me why this code 302137216 for D got accepted while 302137051 didn't, even though the time complexity is the same?
The base case should be a=1, b=1; otherwise you will try to split 1 into halves and one of them will be 1 again.
Can you please tell me why it's giving TLE for E? https://codeforces.net/contest/2061/submission/302147220
But in the solution 302137216, where I used a delta function instead, the base case is the same, i.e. a=0 | b=0, since if either of them were 0 it would return false.
Oh, I made a very silly mistake: for the boolean condition I used & instead of &&. Otherwise the base case is alright. BTW, THANK YOU :)
Can you explain why this solution https://codeforces.net/contest/2061/submission/302985376 is giving TLE on test 22?
If v[0] = 0, isn't it obvious that he is telling the truth, since there is nobody to the left?
he could still be a liar even if what he says is correct
E is a greed-bait.
but.... greedy passes.... my greedy solution
can you post one such bait test case? :3 I am having issues with my greedy solution
Answer is
0
.Tourist -> <-
Congrats jiangly
Thanks to the authors for the amazing problems! I really enjoyed the round!
For G, a good way to convince yourself that $$$\frac{n + 1}{3}$$$ is attainable is to fix three vertices a, b, c. Say the edge a-b is 1; then by induction there is a matching of size $$$\frac{n + 1}{3} - 1$$$ among the other vertices: if it is of color 1 we are done, and otherwise the edges a-c and b-c have to be 1 as well (or we would get a 0-colored matching). But now we can make another choice for the vertex c, and since a-b is 1 we will still get a-c and b-c to be 1. So every edge from a or b is 1, as well as every edge from c for any c... and finally the whole graph is 1, which is absurd.
I've found this to be helpful, since knowing the answer gives you the idea of getting one edge per 3 vertices, hence the mixed-color triplets from the solution, from which you can find a linear construction.
nvm
nvm
C is a nice problem. Thank you
Same same but different
On problem D, I found a solution that actually does build A into B rather than the reverse, by recursively trying smaller elements to build into B. Definitely way more complicated than the intended solution though.
Let the name of the top rank alternate between "tourist" and "jiangly". Now tourist will have to reach "jiangly" rating xD
Hmm... So it was not intentional? I found this a very good kind of problem to have weak pretests. Some 2017 Codeforces vibes :) There is one simple correct solution and plenty of ways to get an "almost-correct" solution.
Just wondering, is there a way to tell whether main test 11 existed before the round or not? If yes, was it deliberately excluded from pretests?
I did not realize that pretests were this weak, so I just assumed the hacks were all related to hashmaps. :P
Take a look at the first hack, it is indeed that test so it was added by hacks
Oh ok. I assume all of these wrong solutions would have passed if it were not for hacks? So then "weak pretest" should actually be "weak tests."
I was thinking of the same idea for E but couldn't figure out that this is a convex function, hence couldn't reach the conclusion that taking $$$k$$$ largest reductions works. Can someone share how they got the intuition for this being a convex function, or is there any other solution that doesn't use this?
You could do the same thing with brute force, up to storing all "improvements"; say arr[i][j] = num(i, j+1) - num(i, j). Then you could maintain a multiset or some other sorted-list data structure, where initially you add all arr[i][0] to the multiset. You can remove the maximum element, say arr[i][j], and insert arr[i][j+1] back into the multiset (since that move has now become available). Repeat k times and you can get the final sum.
Edit: nvm, this is completely wrong.
But without knowing the fact that this is convex, this greedy can't be proved right. Like in a scenario when this isn't convex, maybe there can be a case where for some $$$i_1$$$, the first improvement is not as good as the first improvement for some other $$$i_2$$$, but say the next improvement for $$$i_1$$$ is very huge. If $$$k=2$$$, then the greedy algorithm will pick the first improvement for $$$i_2$$$, and then the first improvement for $$$i_1$$$, whereas the correct answer should be first and second improvement of $$$i_1$$$.
wait yeah you're totally right. I completely fakesolved it
Consider the list of bits that are turned off as the AND operations are applied one after another. Say the first operation turns off the bit in position 8. After that, no set of operations should modify anything in bit positions 8 or above; otherwise, we would have a better first move. Now note that $$$2^x \ge \sum 2^y$$$ over any subset of $$$y \in \{0, 1, 2, \dots, x-1\}$$$. Thus, convexity.
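Spelled out, $$$2^x = 1 + \sum_{y=0}^{x-1} 2^y > \sum_{y \in S} 2^y$$$ for every subset $$$S \subseteq \{0, 1, \dots, x-1\}$$$, so clearing one higher bit always outweighs everything that later operations could clear below it.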
1
3 3 2
15
5 6 9
The trap: taking the largest reductions greedily gives 15 -> 5 -> 1 -> 0, but we can do 15 -> 6 -> 0 optimally.
it's over
it's over
E: wtf bruteforce?! Using the facts that sum of convex functions is convex and $$$2^b > \sum_{k=0}^{b-1} 2^k$$$ don't justify it being E and I think it would've had a lot more solutions with lower constraint on M and lower placement in contest. It's really hard to see why this simple nested loop should be fast when many other ones with same number of repetitions aren't, especially without knowledge of cache efficiency and assembly instructions (specifically that min/max are their own instructions, not compare+branch). This just isn't a very good algorithmic problem.
Note that you don't need to prove convexity. Think about it as greedy decisions: operations on different $$$i$$$ are independent and it's always better to remove the largest possible bits from any $$$i$$$, so the first $$$k$$$ operations on any $$$i$$$ will "do more" than anything we could possibly do from the $$$(k+1)$$$-th onward. This gives enough intuition and then bruteforce goes brrr.
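For concreteness, here is a minimal sketch of that kind of brute force combined with taking the k largest marginal gains (my own illustration, not any submission from this thread; the names solveE, subAnd and pc are made up): for every a[i] it checks all 2^m subsets of b using precomputed subset ANDs and popcounts, then spends the k operations on the largest gains.

```cpp
#include <bits/stdc++.h>
using namespace std;

long long solveE(int n, int m, long long k,
                 const vector<long long>& a, const vector<long long>& b) {
    int full = 1 << m;
    vector<long long> subAnd(full, ~0LL);   // AND of b[j] over j in the mask
    vector<int> pc(full, 0);                // popcount of the mask, precomputed
    for (int mask = 1; mask < full; ++mask) {
        int low = __builtin_ctz(mask);
        subAnd[mask] = subAnd[mask ^ (1 << low)] & b[low];
        pc[mask] = pc[mask >> 1] + (mask & 1);
    }
    long long answer = accumulate(a.begin(), a.end(), 0LL);
    vector<long long> gains;                // marginal gain of each extra operation
    for (int i = 0; i < n; ++i) {
        vector<long long> best(m + 1, 0);   // best[t] = max reduction of a[i] with <= t ops
        for (int mask = 0; mask < full; ++mask)
            best[pc[mask]] = max(best[pc[mask]], a[i] - (a[i] & subAnd[mask]));
        for (int t = 1; t <= m; ++t) {
            best[t] = max(best[t], best[t - 1]);
            gains.push_back(best[t] - best[t - 1]);  // non-increasing in t by the convexity argument
        }
    }
    sort(gains.rbegin(), gains.rend());     // greedily spend the k operations on the largest gains
    for (int t = 0; t < (int)gains.size() && t < k; ++t) answer -= gains[t];
    return answer;
}
```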
I don't understand your comment. The complexity is of the order 10^8, and expecting it to pass is not at all unreasonable?
Complexity of order 10^8 can easily hit the TL. Constructing 1e7 2D vectors by randomly appending 3e7 elements takes 2 seconds on custom invocation because of cache and indirection; that's still an order of magnitude fewer loop executions than this case.
There's a rule of thumb, yes, but that doesn't have 10^8, it has 10^7 (or 10^6 in older times, but CPU performance hasn't been increasing all that much) and it's still with caveats — even if it's plenty in 99% of cases, there could be many hidden instructions behind C++ expressions sometimes. You can't base real runtime on a number without specific code features, like the ones I mentioned above.
Examples: At IOI 2012, I burned myself by reconstructing std::queue ~10^6 times — literally adding the static keyword made the code several times faster. In 1975G - Zimpha Fan Club, complexity on the order of 10^9 exceeded the TL simply because division with a dynamic modulo can't be optimised by the compiler and therefore uses divq, which has a high CPI. Many times I've sped up complicated DFS processing of a tree by doing a simple preorder renumbering at the start and the end. Then there's this meme I made simply because it's such an efficient and simple optimisation.
302209959: this code does $$$N \cdot 2^M \cdot 7$$$ operations, 7 per loop for no point at all. It still passes.
Your claims are bogus. Yes, indeed $$$10^8$$$ isn't always safe. However, here it clearly should be. Bitwise AND operation is one of the fastest operations. Also it's not like we are measuring the algorithmic complexity here, but the exact number of operations.
You could make an argument for popcount not being fast (even though in my experience, it is also fast), however that is simple to avoid.
Clearly, they wanted to cut (most) $$$O(N \cdot 2^M \cdot M)$$$ solutions. I don't support this decision, but neither am I against it. If you really failed to assume that the intended complexity would pass, I am sorry for you.
Speak for yourself, my rule of thumb is < 10^8 mostly safe, 10^8 and 10^9 unsure depends on operations.
Fuck off with condescending statements like "your claims are bogus" or "I'm sorry for you", ok? Nobody appreciates passive aggressiveness. Stay direct and to the point.
The exact number of operations isn't 10^8. My code is $$$O(N 2^M)$$$ with 2e8 loop executions and 7 instructions per loop, like this:
The code you link has 36 instructions per equivalent loop (not counting labels and no-ops). In addition, one of them on my system (likely different from CF) is call __popcountdi2@PLT. We're at 7e9 for the exact number of operations, or a bit over an order of magnitude above the number of loop executions, which is as expected. That's far above even your rule of thumb.
Not all operations are equal. CPI matters, branch prediction matters, avoiding branching by proper assembly matters. AND is one of the fastest, but it's only one among many. Compare, branch and memory access are the important ones, not AND. Then there's a whole other factor — pipelining.
Correctly predicting that the code you linked would fit in TL is nigh impossible as the exact number of operations is well above even your optimistic rule of thumb and there are some slow operations. Predicting that my code would fit is easier since it's intentionally written to be more constant-optimal, but still unreliable when compared to other problems, and much better to just write and measure. That's what I did.
The issue is that this bruteforce is intended to pass, not that it can pass. Unintended real-time-efficient alternatives to an optimal algorithm happen often enough but this is a completely different situation.
Precisely my point. If orders of magnitude non-trivially higher than your rule of thumb are still OK, maybe it is reasonable to reevaluate it?
The code is not designed to show you that it should pass, but that this can also pass, and thus an argument that the intended is not supposed to (or we can't tell whether it should) is unreasonable. You are not supposed to predict that the code I linked passes, but the actual intended code should?
It is not? The editorial clearly states O(N * 2^m) complexity, not O(N * 2^m * m) [which is what my code is trying to mimic, but actually doing that TLEs without other optimizations, at least].
There are several problems I can link you to where intended complexity is greater than the order of 10^8 per second, and sometimes not even simple operations. (I am not saying that this is always right, but in this example I see nothing wrong). The only difference I suppose as you pointed out, is that they are harder. Ok, so? Looking at submissions, certainly doesn't seem the TL was tight.
Meanwhile, take a look at I. The AC solution in-contest has a complexity of $$$O(N \sqrt{N \log N})$$$ with $$$N = 3 \cdot 10^5$$$. Even with my estimates, it is a stretch to believe it. Also, btw, $$$10^9$$$ simple operations passing is not exactly a new thing.
You mentioned vector push backs, division operations which are somewhat well known to be costly. Comparing them with min and AND operations is ridiculous. It's like comparing indexed_set (ordered_set) and priority queue. (Ok comparing division isn't that extreme).
302227228: I put bitwise and max operations into code and ran them 10^9 times (10^4 testcases, each of size 10^5). I don't know how compilers work, so maybe it's just optimized.
Follow-up: for about 30 seconds during the contest, I considered whether N * 2^m * m would pass. If I had not found the optimization soon enough, I would probably have submitted it as well.
I would not be completely wrong either since it does come close. Some others also did submit that complexity.
To be clear, by bruteforce I mean anything that checks all subsets of operations for each element. Not amortised, not $$$O(N 2^M)$$$ operations, but exactly $$$N$$$ $$$2^M$$$ repetitions of AND, min/max, no optimisations of which subsets are checked. The AND values can be precomputed, the popcount values can be precomputed instead of using an intrinsic in case the intrinsic would be slow on CF judge (a valid concern raised before, though it doesn't seem the case here), but those are implementation details with many possible effects and the core idea is still that we can afford to check everything as long as we don't do it in some stupid way.
There are also many problems where the intended complexity is much smaller than the order of 10^8. It's exactly part of my point that it massively depends on problem, it's quite a high level skill to predict specific code's real runtime beyond "DP tends to be fast, tree processing tends to be slower..." since it requires some understanding of hardware. I don't care that many operations pass here, but that it's artificially increasing the problem's difficulty and making it measure balls rather than skill, since it's much easier to just submit bruteforce as a hail mary approach than to have the required skills.
Again with the snide remarks. I'm comparing them on purpose to show that CPI matters a lot, not just number of operations. You don't have a better reply than trying to ridicule it. Knowing which operations are slow and which are fast, again beyond general patterns like "float arithmetic is slow compared to int", is quite a high level skill and it ties into the argument I made above.
The code you sent is... running the same asm in the loop that I posted, plus minus a few operations.
There is no rule-of-thumb number of operations that'd make someone decide that my submission will easily fit in the TL without also causing a lot of TLEs on many (though not all) other problems. That's not debatable, it's simply (10^9 > 10^8) == true. That requires additional knowledge, which I believe is largely lacking even among people who solved this problem. It should've been treated as either easier or harder, but it isn't like actually hard algorithmic problems that require some of this skill, so decreasing the limit on $$$M$$$ and the score would've been a good decision.
If the intent was to prevent solutions with an additional log-factor ($$$M$$$) from passing, then that's not following a fairly common guidance in problemsetting that we shouldn't try to separate strictly passing and strictly failing solutions based on one log-factor, because it's too difficult, and this problem shall stand as another example of why it's a good guidance.
For simple loops like this, you should really evaluate it as if the loop were unrolled. (Of course, I get a cost-free way of doing this via the JIT.) This brings the operations down to basically 4. In fact, it is lower because of executing ahead of time (there are no dependencies preventing the data from being loaded ahead of the current AND).
it is just a special rule and special case that, for simple and independent branchless sequential array access solutions, 1e9 operations can be achieved.
It felt like you were just stating reasons why it could be hard for a normal person with too pessimistic a runtime estimation to recognize that this passes, without seeing that you also provided all the reasons why it easily passes: branching, memory, and instruction throughput/latency are beyond perfect in this solution. It is not that you can come to this conclusion without considering branching, memory, and instruction types. It is that doing so is very easy in this case, with lots of margin for error, both from first principles and from experience (if you have ever submitted any solution of a similar brute-force nature before; if you felt 1e8 is not fast enough in this case, you should have been surprised by runtimes on many occasions).
The assembly I posted isn't what I thought up, it's exact assembly of my code. The jump at the bottom goes to the top. Sometimes loops could be unrolled, but not in this case. It's exactly 7 operations per loop. It runs in 600 ms along with $$$O(T 2^M M)$$$ preprocessing.
Yes, I mentioned pipelining.
A normal person doesn't have a runtime estimation. A normal person finds A somewhat challenging. All that you mention is additional, harder knowledge that just demonstrates how difficult it truly is to estimate running time without just writing and running the code. In this case, it's not strictly needed: sequential memory access and cheap operations AND, max = this should be pretty fast, it's worth trying. However, that's not very common cp skill and confidently deciding that it will be really fast needs much more skill, so it boils down to "I solved ABCD, stuck on E, might as well try bruteforce, maybe it'll work". If it was a problem that less people try to seriously solve because there's a bunch of mid-level problems before it, that wouldn't be a concern.
For me. For you. Not for 1400 that solved E and who knows how many that didn't try.
I don't think I've done that for years. They're more of a hallmark of low quality ICPC subregionals.
I'm most often negatively surprised by runtimes. TLE would've been in line with my experiences — I gave a few examples far above that do more complicated stuff, but much fewer operations. Problems with 10^8 non-simple operations tend to TLE, problems with well over 10^9 simple operations tend to barely pass or TLE. Depends on the problem. In this case, I concluded that it could pass, but if it doesn't then I know how to optimise it, which isn't always an option.
I agree with you on this part. I was also very hesitant about using __builtin_popcount inside the loop and thought of precomputing it, but it is fast enough.
I think these days it's easier to assume a solution is fast enough instead of using the proper idea in many problems. I was thinking N <= 1e6 and M <= 5 would have been a better choice too.
Can you please check this solution for E? It's giving TLE on test 19: https://codeforces.net/contest/2061/submission/302147220
Your code is O(N * M * 2^M); make it O(N * 2^M) by precomputation.
I think it's 2^m * (n + m) with precomputation, can you please recheck?
Seems like a constant-factor issue; just optimize it. For example, a vector arr with the differences instead of each (i, j) is enough.
Did that, but even then it's only just passing, at around 1850 ms.
You can also do arr.reserve(n * m) to make it a little bit faster.
Speed of __builtin_popcount is pretty hardware dependent (and for me, compiler dependent too), so I would not dare to put it through 1e8 loops. This is why I went for the second option below.
That said, if it does not pass, it is easy to change to a pre-computation. So the options are like
I think it is a fair and balanced (even interesting) decision. It is very different if you are only allowed 1 submission and no chances to correct it if TLE.
How much lower? I feel that even if the brute force were more evident, it is still a harder problem than C and D. I also don't understand the part where you say we don't need to prove convexity. We can't prove that greedily taking the best decisions across all $$$i$$$ is correct without convexity, right? (See the comment regarding the same.)
The part where you mention that it is better to remove the largest bits first: isn't that indirectly the same argument that convexity makes? The convexity proof in the editorial basically says the same, that when we go from $$$g_{i-1}$$$ to $$$g_{i+1}$$$, the largest bit is removed when we first go from $$$g_{i-1}$$$ to $$$g_{i}$$$ and the smaller bits are removed subsequently.
I mean we don't need to formally prove convexity — or even anything remotely close to that. What I described intuitively points towards the proof in the editorial, but isn't itself a proof, just intuition. The kind of thinking that makes sense but could easily overlook something. A solution can be presented in full to demonstrate a problem's difficulty or handwaved to make it seem easier — I know, I've done it.
I think with more obvious constraints, C/D/E are basically interchangeable. C and D have basically equal number of solutions and there's always a dropoff for later problems, though it's not too steep early on.
All one needs to handwave is that as we do subsequent operations, the delta by which a_i changes decreases. Here is a handwavy exchange argument.
Let's say we have a sequence of operations A, A&B1, A&B1&B2, A&B1&B2&B3, and so on. Let's say D1 = A&B1 - A, D2 = A&B1&B2 - A&B1, and so on.
If D3 < D4, it means using B3 on A&B1&B2 decreases the value less than using B4 on A&B1&B2&B3 does. If this happens, why not just use B4 instead of B3? You would definitely remove all the bits that get turned off between A&B1&B2&B3 and A&B1&B2&B3&B4, and potentially some more.
To extend this idea, my initial solution was to just try out all O(M) bi to find the next best Bi for each operation, but it gets WA.
I don't think it is bad to test understanding of big differences in constant factor, especially when that constant factor is as big as a factor of 10 — I am very sure that an iteration of this kind with 1e9 would have passed (in C++) too. (Not all implementations are equal; e.g. the bit counts have to be precomputed.)
It's not too bad and it's a useful skill that helps from time to time in various problems anyway, but I don't think it should have a major role in not-too-hard problems. Sometimes you screw up implementation, don't pay attention to constant factor and your code is slow. Sometimes you can squeeze an inefficient algorithm through very efficient implementation, even with tricks of its own. If it's one of the last problems that requires constant optimisation on top of other stuff — that's fine, I'll just take it as a juicy bone. But in this case I feel like it's artificially inflating complexity and at the level it was placed, it's measuring weight of balls more than that specific skill.
It's a valid contest problem, but there ought to be a tendency to use such problems only rarely, and even then preferably at a higher level.
For my understanding: you don't even need the fact that some removal has a lower reduction than the sum of all previous reductions, right? Just having a lower reduction than the last removal is sufficient (implying that the sequence of reductions is non-increasing)?
I agree with you. It's not the end of the world, but that's how I felt when I was solving E lol
In Problem C, can someone please explain why "If they are a liar, since two liars cannot stand next to each other, the (i−1)-th person must be honest. In this case, there are a(i−1) + 1 liars to their left."
Shouldn't it be a(i-1) liars to the left of i-th person (i-th person being a liar). Because (i-1)th is honest, so there are a(i-1) liars upto position i-1, so there are a(i-1) liars to the left of the i-th person, right??
Yeah, shouldn't that be a(i-1)+1 liars to the left of the (i+1)-th person? (That's what makes the DP transition for the liar case.)
Exactly, that's what I am thinking: if the i-th person is a liar and the (i-1)-th is not, then to the left of the i-th person there will be a(i-1) liars, not a(i-1)+1. I am also confused by this +1 in the editorial. Are they counting the i-th liar as well? But in the question it says strictly to the left, not including the current person.
It's a misprint in the editorial; it will actually be a[i-2]+1.
D was a good exercise in the use of heaps.
Is codeforces problemset really this hard? I have to concentrate hard on B and C problems for a long time in order to arrive at the solution.
you registered 6 years ago lil bro
Fair. I guess I forgot. Back after a long time
My D TLE'd due to hash collisions smh....
If you could not understand the editorial solution for C:
Let dp[i][0] and dp[i][1] denote the number of sequences ending at i where the i-th person is honest or a liar, respectively.
The (i-1)-th person and the i-th person can be:
honest-honest: in this case the condition is that a[i-1] must be the same as a[i], since both are true.
honest-liar: we do not need any condition in this case.
liar-honest: in this case the condition is that the (i-2)-th person must be honest, so a[i-2] + 1 = a[i].
Overall, dp[n-1][0] + dp[n-1][1] will give us the answer.
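To make this concrete, here is a minimal sketch of that DP (my own illustration, not the commenter's code; 0-indexing is used, and MOD = 998244353 is an assumption: substitute whatever modulus the statement specifies):

```cpp
#include <bits/stdc++.h>
using namespace std;

// MOD is an assumption; use the modulus from the problem statement.
const long long MOD = 998244353;

long long countConfigurations(const vector<long long>& a) {
    int n = a.size();
    // dp0[i] = ways with person i honest, dp1[i] = ways with person i a liar
    vector<long long> dp0(n, 0), dp1(n, 0);
    dp0[0] = (a[0] == 0);   // an honest first person must see 0 liars to the left
    dp1[0] = 1;             // a lying first person may claim anything
    for (int i = 1; i < n; ++i) {
        // honest-honest: both counts are true, so they must match
        if (a[i] == a[i - 1]) dp0[i] = (dp0[i] + dp0[i - 1]) % MOD;
        // liar-honest: person i-1 lied, person i-2 (if any) was honest,
        // so the true count seen by person i is a[i-2] + 1 (treat a[-1] as 0)
        long long prev = (i >= 2 ? a[i - 2] : 0);
        if (a[i] == prev + 1) dp0[i] = (dp0[i] + dp1[i - 1]) % MOD;
        // honest-liar: a liar may say anything, but two liars can't be adjacent
        dp1[i] = dp0[i - 1];
    }
    return (dp0[n - 1] + dp1[n - 1]) % MOD;
}
```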
I had the same structure for my dp but couldn't find the transition... Thank you!!
same, I panicked, went into random ideas and stopped seeing things clearly during the contest :(
Lesson learnt!
Could you elaborate a little on how you set the initial conditions in your code?
Helps a lot! Can you explain this sentence in the editorial: "The final answer is given by dp[n]+dp[n-1]"?
I think it's the same as what he meant: the first term dp[n] is for the case where the n-th person tells the truth, and dp[n-1] is for the case where the (n-1)-th person tells the truth, which is when the n-th person lies. So it's basically the same as "dp[n-1][0]+dp[n-1][1]" or "dp[n][0]+dp[n][1]".
excellent approach
Hi, why does the 302120893 solution for D get TLE while 302121766 passes comfortably? The time complexity should be about the same.
Suppose you have 2^30 in the query. You query 2^29 two times; then for each of them you query 2^28, four times in total. Similarly, you reach 1 about 2^30 times.
Can anyone tell me why, in problem D, my 302152821 causes SIGTERM on the testcase given in the statement, while everything else is fine? In the other submission 302153118 I added only one line (mentioned in a comment) and it got accepted.
In problem D, I'm trying to understand the time complexity of this solution. This straightforward DFS using std::map runs in 300ms:
https://codeforces.net/contest/2061/submission/302153025
For each target x, I appear to be doing 2^(log x) recursive calls in the worst case, each with a log n map operation, but the actual runtime looks much better. What am I missing? Is it just that the testcases are not strict enough?
Also, I don't understand why std::map is much faster here. If I use unordered_map with the exact same solution, it TLEs.
Edit: OK, nvm, I got it. The number of recursive calls is bounded by the input size, since it gets consumed. And unordered_map was just targeted with collisions; it works fine with a custom hash function.
Problem C can be solved (overcomplicated) with 2-sat.
Let's figure out what the array should look like. It should look something like:
The algorithm consists of iterating over the array from $$$1$$$ to $$$n$$$. Let's maintain the current variable $$$\texttt{sync}$$$, initially $$$0$$$. An element $$$a_i$$$ is out of sync if it does not equal $$$\texttt{sync}$$$. Naturally, a liar is found nearby and we do $$$\texttt{sync++}$$$. Formally, what this means is that at least one of the $$$i$$$ or $$$(i-1)$$$ is a liar. We can write $$$(p_{i-1} \ \lor \ p_i)$$$. Additionally, we write $$$(\neg p_{i-1} \ \lor \ \neg p_i)$$$, since at least one of them tells the truth. There are a few cases. For example, if $$$a_i > \texttt{sync}+1$$$, then $$$i$$$ is surely a liar. We write down $$$(p_i \ \lor \ p_i)$$$.
A good way to check if a solution exists is to run a standard SCC 2-sat. I personally avoided doing so during the contest and chose to do tedious casework. Afterward, when you are sure at least one valid solution exists, find adjacent clause components. If there are intersections or singular clauses like $$$(p_i \ \lor \ p_i)$$$, the component has exactly $$$1$$$ way to be filled with LIARs. If the component has no intersections, such as in $$$(4,5), (6,7), (8,9)$$$, it has $$$k+1$$$ ways to be filled with LIARs, where $$$k$$$ is the total number of clauses in the component. Let me show an example:
Since the components are independent, just multiply the results of each.
Here is my submission: 302126337
FST contest
Maybe the solution of G can be intuitively understood by considering the induction proof of this: https://math.stackexchange.com/questions/4603759/show-that-rmk-2-mk-2-3m-1
Hi, could someone explain F2 in greater detail?
I don't really understand this "Let the distance between the i-th block and the nearest preceding 1-block be p. The number of 0s between blocks j and i cannot exceed p."
Thanks a lot
Submission 302161510 (https://codeforces.net/contest/2061/submission/302161510): can anyone help, why is my code getting WA on test 6?
Hello everyone, I tried to solve problem C using DP. First, I thought we could use a state $$$dp[i][k][isLiar]$$$, representing the number of ways up to the $$$i$$$-th person with $$$k$$$ liars so far and whether the last person is a liar or not. Since this can go up to $$$O(n \cdot n/2 \cdot 2)$$$ states, it is not feasible. Then I changed the solution to $$$dp[i][isLiar] = \{ways, k\}$$$, where I store the number of ways and the number of liars so far in the DP value rather than in the state, and (surprisingly to me) it passed.
Now, in the editorial a DP solution with only $$$O(n)$$$ states is given. So my question is: is there any way or observation by which we can see that some variables are not necessary in the DP state, so that the state can be made smaller?
The constraints. The constraints should force you to think better; it was obvious O(n*n) won't pass.
Well, from the constraints I could have wrongly concluded that DP might not be possible, so I should try some greedy approach.
It should have at least hinted that if DP were to work it needs to be <= n log(n); now it's up to your DP skills whether you can fit it or not.
Can someone explain F2 in detail?
In E, why doesn't the greedy method for finding num(i, j) (going iteratively) work?
not sure I can visualize this well, but imagine that the "best" operation has two highest bits: A, B (and nothing else on). The second best operation has only bit A on, but also some other bits CD, the third-best operation has bit B and other bits EF.
If you can use exactly two operations on that number, it's better to choose second and third best, skipping the top one (as together they have the bits ABCDEF which is the most you can get with two operations).
It is possible to solve E without proving convexity or really any property of the function g for that matter.
Submission
Can we solve problem C with combinatorics (stars and bars)?
Can someone explain the DP solution of C in a better way? Can't the last person be a liar? Why is this not considered in the transition in the editorial?
If the last person was a liar then the second last person must be honest. Hence the extra $$$dp_{n-1}$$$
Just upsolved F2. It is one of the best problems I have seen. On the other hand, I have no clue how the editorial just skips to the following claim:
I do get there eventually, but the steps are definitely non-trivial. I decided to write my own solution for both F1 and F2, explaining the intermediate steps.
Note that the order of the $$$0$$$ blocks doesn't change. They are always sorted according to the initial order.
Let's number the blocks $$$1, 2, \dots, K$$$. We can view every operation as merging the pairs of blocks $$$(i - 2, i)$$$ and $$$(i - 1, i + 1)$$$. Note that for this reason, the order of the merges does not matter.
Now, a simple strategy works. If $$$S_1 = T_1$$$, we will not move the first block of $$$S$$$; remove it and recurse. Otherwise $$$S_1 \ne T_1$$$, and the second block of $$$S$$$ will definitely move to the first position; move it and recurse again.
Correctly implementing the above process gives us $$$O(n)$$$ complexity. See 302078645 for reference.
We start off similar to the actual editorial. The cost function is the number of "fixed" blocks, and the answer is $$$\frac{(\text{total blocks} - \text{fixed blocks})}{2}$$$. We want to maximize the number of fixed blocks.
Consider $$$dp_i = $$$ the maximum number of fixed blocks with $$$i$$$-th block being fixed. We get transitions $$$dp_i = max(dp_j + 1)$$$ such that $$$j \le i$$$ and $$$(j, i)$$$ is valid. Now, the crux of the problem is to realize when an interval $$$(j, i)$$$ becomes valid (by valid, we mean that $$$i$$$ can be the next fixed index after $$$j$$$).
To solve this, we have to use the powerful property that there is no fixed block between $$$j$$$ and $$$i$$$. This is actually a crucial restriction. Looking at the string formed by the blocks $$$j + 1, \dots, i - 1$$$, we want to find a nice way to represent the strings which can be formed by permuting the substring such that no block goes to its own place.
Claim: a derangement is only possible when the string has an even number of blocks, and the only possible derangement is either $$$0000....11111...$$$ or $$$1111....00000...$$$ depending on the first character of $$$S$$$.
Proof: let $$$P_i$$$ be the position where the $$$i$$$-th block ends up. By the claim in the F1 solution, the $$$0$$$-blocks and $$$1$$$-blocks always end up sorted. Thus, $$$P_i < P_{i + 2}$$$ is a necessary condition. Also, since we want derangements, $$$P_i \ne i$$$.
$$$P_1 \ge 2$$$ because $$$P_1 \ne 1$$$. $$$P_3 > P_1$$$ implies $$$P_3 \ge 3$$$, but $$$P_3 \ne 3$$$ and hence $$$P_3 \ge 4$$$. By induction, $$$P_{2 \cdot i - 1} \ge (2 \cdot i)$$$. For odd $$$K$$$ (where $$$K$$$ represents the number of blocks), this is impossible because we get $$$P_K \ge K + 1$$$.
On the other hand, for even $$$K$$$, we still have $$$P_{2 \cdot i - 1} \ge (2 \cdot i)$$$ for all $$$i$$$. Now, by induction, we can see that the arrangement is in fact $$$1111...0000...$$$ (assuming $$$S_1 = 0$$$). This is because $$$P_1 \ge 2$$$, and thus the block pair $$$(1, 3)$$$ got merged. Further, $$$P_3 \ge 4$$$, meaning that the block pairs $$$(3, 5)$$$ and $$$(2, 4)$$$ got merged as well, using the property that the order of swaps doesn't matter. By induction we can verify that all $$$0$$$-blocks and all $$$1$$$-blocks end up merged.
Now, we can finally solve the problem. An $$$O(n^3)$$$ DP is fairly trivial and we need to speed it up. Let $$$c_0$$$ be the number of $$$0$$$s between the $$$j$$$-th and the $$$i$$$-th block, and $$$c_1$$$ the number of $$$1$$$s. Assume WLOG that the $$$j$$$-th block consists of $$$0$$$s (this means that the $$$i$$$-th block consists of $$$1$$$s). Note that then the derangement between $$$j$$$ and $$$i$$$ will be of the form $$$0000....11111...$$$ because it starts with the $$$(j + 1)$$$-th block, which is $$$1$$$. We require the first $$$c_0$$$ characters of $$$T$$$ to have no $$$1$$$, and the last $$$c_1$$$ characters of $$$T$$$ to have no $$$0$$$.
Going back to the actual editorial now, note that the $$$2$$$ conditions are disjoint, and impose restrictions of the form $$$j \ge l_i$$$ and $$$i \le r_j$$$. Finding $$$l$$$ and $$$r$$$ is easy with a binary search and prefix sums. We can do a sweepline DP on $$$i$$$ maintaining in a segment tree the values of the $$$j$$$ satisfying $$$i \le r_j$$$. Remove them from the segtree (by turning to -INFINITY) when $$$i > r_j$$$. For a certain $$$i$$$, we query the range $$$(l_i, i)$$$.
The ending characters need to be handled properly (evaluating whether the ranges $$$(0, x)$$$ and $$$(x, n + 1)$$$ are valid). One way is to brute-force over the final values of $$$S_1$$$ and $$$S_n$$$, but that too has a few cases (in my approach at least). See 302178819 for reference.
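For anyone who wants to see the shape of that last sweepline step in code, here is a schematic sketch (my own illustration, not the referenced submission): it assumes l[i] and r[j] are already computed by the binary search / prefix sums step described above, that blocks carry sentinels 0 and K+1 at the two ends with 0 <= l[i] <= i, and it leaves out the boundary handling discussed in this paragraph.

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long NEG = LLONG_MIN / 4;

// Max segment tree with point assignment and max query on [l, r).
struct MaxSeg {
    int n; vector<long long> t;
    MaxSeg(int n) : n(n), t(2 * n, NEG) {}
    void assign(int i, long long v) {
        for (t[i += n] = v; i > 1; i /= 2) t[i / 2] = max(t[i], t[i ^ 1]);
    }
    long long query(int l, int r) {
        long long res = NEG;
        for (l += n, r += n; l < r; l /= 2, r /= 2) {
            if (l & 1) res = max(res, t[l++]);
            if (r & 1) res = max(res, t[--r]);
        }
        return res;
    }
};

// dp[i] = max number of fixed blocks if block i is fixed; block 0 is a left
// sentinel with dp[0] = 0. A block j stays usable while i <= r[j].
vector<long long> sweep(int K, const vector<int>& l, const vector<int>& r) {
    MaxSeg seg(K + 2);
    vector<long long> dp(K + 2, NEG);
    vector<vector<int>> expire(K + 3);
    dp[0] = 0;
    seg.assign(0, 0);
    expire[min(K + 2, r[0] + 1)].push_back(0);
    for (int i = 1; i <= K + 1; ++i) {
        for (int j : expire[i]) seg.assign(j, NEG);   // now i > r[j]: deactivate j
        long long best = seg.query(l[i], i);          // best dp[j] with l[i] <= j < i, j still active
        if (best >= 0) {
            dp[i] = best + 1;
            seg.assign(i, dp[i]);
            expire[min(K + 2, r[i] + 1)].push_back(i);
        }
    }
    return dp;   // dp[K + 1] (if reachable) counts fixed blocks among 1..K+1
}
```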
Great explanation!
Could you elaborate more on why it's necessary to "brute-force the final values of $$$S_1$$$ and $$$S_n$$$"? Couldn't we just add dummy characters ("fixed" block) on both ends of $$$S$$$ and $$$T$$$? Thanks a lot.
Yeah, that should be fine too, I think. I think I overkilled fixing this issue.
Hmm. I read jiangly's submission (same as yours) and he both added dummy characters and brute-forced both ends. I think there must be a reason and I'm missing something.
Right, I remembered the issue now.
You will need to calculate whether the interval (0, x) is good regardless of whether you brute-force over the first/last character or not, but (0, x) goodness depends on the choice of the first character.
Any alternative way to calculate (0, x) goodness should be fine.
Could you give an example? Appreciate!
Well, it is specific to the method used for calculating r_i and l_i (because I used the same method to calculate r_0 and l_(n + 1)).
To calculate r_i for S[i] = 1, I found the position of the next 0, say p, and limited the number of 1s in the interval (i, j) to at most (p - i).
For S[i] = 0, the algorithm is the same, but it depends on the position of the next 1 and limits the number of 0s. Hence the difference.
Thanks a lot <3
Can't you add, before s and t, a symbol different from s[0], and do the same at the end? It would mean that you are guaranteed to have 2 fixed positions, at 0 and n+1. I did this in my solution.
I solved A, B, C, D, F1; all of them felt below 1600. Got a delta of +200.
E was kinda easy too.
Hello everyone, I am trying to do problem D of this contest. My idea is to make two multisets of a and b (maps storing frequencies), and then, for every element of the set for b, check:
At the end, if the set for a is also empty, then print "Yes".
According to my understanding, it should take $$$O(n \cdot \log(\max(b_i)))$$$, which should be feasible, but it is giving a TLE verdict. Can anyone help me with this and tell me whether I am missing something important?
Thanks
UPD: I got to know that this approach is the same as the editorial's.
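For reference, here is a minimal sketch of the same reverse/splitting idea, using priority queues instead of frequency maps (my own illustration, not the editorial code or any linked submission; the function name and helper checks are made up for the sketch):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Check whether the multiset b can be built from a by merging values that
// differ by at most 1, equivalently whether b can be split back down into a.
bool canBuild(const vector<long long>& a, const vector<long long>& b) {
    if (accumulate(a.begin(), a.end(), 0LL) != accumulate(b.begin(), b.end(), 0LL))
        return false;                          // merges preserve the total sum
    priority_queue<long long> A(a.begin(), a.end()), B(b.begin(), b.end());
    while (!B.empty()) {
        if (B.size() > A.size()) return false; // more pieces than targets; also bounds the splits
        long long x = B.top(), y = A.top();
        if (x == y) { A.pop(); B.pop(); }      // equal maxima can always be matched
        else if (x < y) return false;          // y can never be produced from smaller pieces
        else {                                 // x must have come from a merge of
            B.pop();                           // floor(x/2) and ceil(x/2)
            B.push(x / 2);
            B.push(x - x / 2);
        }
    }
    return A.empty();
}
```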
yeah i did this way too.
submission: 302094211
Thanks a lot TheArcticWolf. It is a very nice implementation.
What will happen if x is 3? It will search for 0 and 1. And what will happen when x is 0? It will search for 0 and 0?? It will never stop; this may be one of the reasons.
I am checking whether x is less than the lowest number present in set a, so it should stop at 0 at least, because 0 can never be present in set a.
Fixed Code
You are not decreasing the count of num in brr if its current frequency is more than 1.
Thanks a ton gogo13. I was thinking that maybe I was inserting numbers back into set b, so it was increasing the complexity beyond $$$\log(n)$$$. Thanks for pointing out that mistake.
My submission for problem E — https://codeforces.net/contest/2061/submission/302182528. Can anyone tell me why this is wrong?
I've got a question about problem I. I implemented the $$$O(n\sqrt{n\log n})$$$ solution, but the second part ($$$j-(l-j)<-(r-l)$$$) seems not to be convex, because the transition point is not monotonic (only on very rare occasions, but it does occur).
In my friends list, a pupil, a specialist, an expert (me before this contest ;-;), a candidate master and a master all got FST on B. B destroyed everyone equally XD.
In problem E, I thought that at each move I could greedily decide which magic to use with which number. I know this code would receive TLE, but it is getting WA. Submission
Can anyone please help me out here? Is it the assumption that is wrong?
The assumption is wrong because using the most effective number first is not always optimal. For example, if the numbers are 1100, 1010, and 0101 in binary, a greedy algorithm would pick 1100 and 1010 instead of the optimal 1010 and 0101.
I think you got it wrong, aksLolCoding:
if we check, for each number with each magic, what the max reduction is at each step, then this observation looks correct to me.
Can someone explain problem E in simpler words? Couldn't grasp the editorial
Look at the code by Dominater069, and to understand why it passes you can look at the conversation between Dominater069 and Xellos above in this section. The editorial is way more complicated than the problem itself, imo. You can also refer to my code (shameless plug) here.
An interesting fact: is jqdai0815 trying to hide the editorial of H? Why did he post it on Google in Chinese, when Chinese users can't access it and others can't understand Chinese?
How do you solve C in O(n^2)? I am a noob.
Where is the editorial for problems H1 and H2?
Time to learn Chinese.
In problem E, why doesn't just a greedy approach work for each A[i]? For example, why is this approach wrong, and why do we need to iterate over each subsequence of B?
The Editorial is not good :(
Can anyone explain why this solution for problem D, https://codeforces.net/contest/2061/submission/302985376, is giving TLE on test 22?