Hello! I'm happy to announce XXII Open Cup: Grand Prix of Nanjing, which will be held on Dec 12th, 2021.
This contest was mainly prepared by the SUA Programming Contest Problem Setter Team and has already been used as the 2021 ICPC Asia Nanjing Regional Contest, in which around 700 teams participated.
This is the fourth Nanjing Regional and the third Grand Prix of Nanjing in the Open Cup. I feel very grateful and hope everyone enjoys this contest.
We also have an anonymous author this year, who contributed a very interesting problem to this contest.
Authors: chenjb, TsReaper, jiangshibiao, shb123, quailty, Subconscious, oipotato
Testers: KAN, Um_nik, Merkurev, TLE, antontrygubO_o, Rewinding, J.T.J.L., tun, sfiction, cxt, pb0207, Eden_CY, Suika_predator, lwn_16, lxlxl, xpchf, liyang21, AkaiLemon, nothing100, UESTC_Nocturne, Orenji.Sora, Gromah, smax, arvindr9, RandomKami
Contest link: https://official.contest.yandex.ru/opencupXXII/contest/33444/enter (Only visible for users with OpenCup login)
Open Cup Scoreboard: http://opentrains.snarknews.info/~ejudge/res/res10559
Regional Scoreboard: https://board.xcpcio.com/icpc/46th/nanjing
UPD: The contest is in the gym now.
Auto comment: topic has been updated by chenjb (previous revision, new revision, compare).
I Love ChenJB. :)
Thanks for the interesting problems!
How to solve problem L?
Try to solve the problem assuming we may light/extinguish the central torch for each operation, then try to execute the same sequence by only lighting. While there are operations we can make, make them. In the end the remaining operations form consecutive groups, each of size 2. If there's an unlit torch next to such a group, we can light it, do both operations in the group, then light it again, to the same effect. If all 2-groups are 1 apart, don't do anything, as everything is fixed already.
How to solve problem C? I tried dp on positions of x for every x that occurs in the input, but that approach gives WA on test 7.
Note that if you enumerate the mode x, you only care about x and x-k in the end. So enumerate all possible x and it becomes finding a maximum value for a prefix sum formula.
I also got WA on test 7 at first because I mishandled the case $$$k=0$$$; you should check whether your solution works in that case.

Yes, $$$k = 0$$$ was the problem. Got accepted after fixing that case. Thanks for your help!
My code handles $$$k=0$$$ correctly, but gets WA on test 5.
Write a simple $$$O(n^3)$$$ bruteforce solution which tries all possible $$$(l, r)$$$ and applies the operation to each. Find the answer by taking the maximum over all such pairs. Also don't forget that you don't have to use the operation. Once you have this solution, write a random test generator. Use a bash/batch script to generate random tests, run both of your solutions on the generated input, and compare the output files. Repeat until the output files differ. Most likely you will find some input where your fast solution fails while your correct but slow bruteforce solution doesn't. Then all you have to do is analyze the input and understand why your solution is wrong.
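The whole loop can also be sketched in Python without shell scripts. Below is a toy stand-in (my own illustrative code, not anyone's actual solutions): Kadane's algorithm plays the role of the "fast" solution, an $$$O(n^2)$$$ bruteforce is the reference, and `gen`/`stress` are hypothetical names you would adapt to the real problem's input format.

```python
import random

def brute(a):
    # O(n^2) reference: try every subarray (the empty subarray gives 0)
    best = 0
    for l in range(len(a)):
        s = 0
        for r in range(l, len(a)):
            s += a[r]
            best = max(best, s)
    return best

def fast(a):
    # Kadane's algorithm, playing the role of the "fast" solution under test
    best = cur = 0
    for x in a:
        cur = max(0, cur + x)
        best = max(best, cur)
    return best

def gen(rng):
    # small random tests maximize the chance of hitting a countertest
    n = rng.randint(1, 6)
    return [rng.randint(-5, 5) for _ in range(n)]

def stress(iters=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(iters):
        a = gen(rng)
        if fast(a) != brute(a):
            return a  # found a countertest: minimize it by hand and debug
    return None  # no mismatch found
```

Small inputs are key: a failing test of length 5 is far easier to analyze than one of length $$$10^5$$$.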
thanks for your advice
How to solve J?
Apply the solution of ABC231 Problem E on the divisibility graph of the factors of $$$b-a$$$.
Sorry I don't quite understand. How do you bring down a to 1?
I think what you basically construct is the following. Let $$$p_1, p_2, \dots, p_k$$$ be the sequence of primes you use to divide by in the reverse order: $$$p_1$$$ is the last prime to be divided by. Let $$$x_0$$$ be the balance of minus-plus operations you perform after you divide by every prime, $$$x_1$$$ be the balance of minus-plus operations before dividing by $$$p_1$$$ and so on.
Then you should choose $$$x_0, x_1, \dots, x_k$$$ in such a way that $$$x_0 + p_1 \cdot x_1 + p_1 \cdot p_2 \cdot x_2 + \dots = a$$$ and $$$\sum\limits_{i=0}^{k} |x_i|$$$ is minimum possible.
So to reduce to the ABC problem, you make the sequence of coins the sequence $$$1, p_1, p_1 \cdot p_2, \dots$$$.
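For illustration, that coin problem admits a short dp over the coin levels. The sketch below is my own toy code (not the intended implementation): `min_ops(a, coins)` computes the minimum of $$$\sum |x_i|$$$ subject to $$$\sum x_i c_i = a$$$, for a coin sequence where each coin divides the next. At each level, the remainder modulo the next coin is either paid down or overpaid with a carry.

```python
from functools import lru_cache

def min_ops(a, coins):
    # coins must be ascending, coins[0] == 1, and coins[i] must divide coins[i+1];
    # returns min sum of |x_i| with sum x_i * coins[i] == a (x_i may be negative)
    @lru_cache(maxsize=None)
    def rec(rem, i):
        # rem is the remaining amount expressed in units of coins[i]
        if i == len(coins) - 1:
            return rem  # only the largest coin is left: pay rem of them
        m = coins[i + 1] // coins[i]
        r = rem % m
        # either pay r with the current coin, or overpay m - r and carry one up
        return min(r + rec((rem - r) // m, i + 1),
                   (m - r) + rec((rem - r) // m + 1, i + 1))
    return rec(a, 0)
```

For example, with coins $$$(1, 10, 100)$$$ and $$$a = 99$$$ the answer is 2, via $$$100 - 1$$$.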
The only difference is that you forbid the transition to $$$\left\lfloor \frac a x \right\rfloor$$$ if it's equal to $$$0$$$.

I guess it's never optimal to go to $$$0$$$ instead of $$$1$$$.

You will find the TL is even tighter in ECFinal 2020 G. For this problem, we actually set the TL so that code with no optimization of the 4*4 matrix multiplication can pass, while the std code runs within 500ms.
It is trivial to halve the constant since you only need to compute the upper triangle. And you may further find that you only need to compute $$$C(4,2)=6$$$ entries in the end.
UPD: Not sure why it is 3s in the gym for that problem now. When I was testing it, the TL was set to 1s and you had to expand the multiplication to pass.
That's interesting: we didn't manage to fit in the time limit even with an $$$n^3/6$$$ matrix multiplication without the following optimization: if the matrix is the identity, we can skip the push operation.
Could you try submitting your TLE code in Gym again (with 64-bit compiler)?
Now the optimized multiplication version without this push optimization passes in 1.2s (TLE 43 on the contest) (138919648), while an unoptimized one still gets TLE 23 (138920067)
This is the one without any optimization: 138170891
I guess my solution makes two modify operations for each operation, so maybe this matters. Or maybe my style of writing, pushing whenever we enter a vertex, is just too bad :)
In my testing of seg tree codes, there are indeed ways of coding it that can be 70% slower with no apparent reason other than the way things are pushed. I'm sure it can get worse.
No. I am just saying it is painful because I got AC 2 mins after the contest ended.
My TLE is not because of constant time... I used a sqrt solution and forgot to change bucket size to something that is not $$$2$$$ when I was submitting.
I still don't actually know what the intended solution is...
Code lol
You simply use a segment tree to maintain everything you need: the length, the linear sum, the square sum and the historical square sum. You can handle all pushdowns and updates with a 4*4 matrix.
I also used sqrt decomposition and fit in the time limit during the contest.
A team from SJTU also wrote a sqrt decomposition in the Regional and passed it. I didn't generate data for this kind of solution. I am also wondering if there is any sqrt solution running in less than 1 second.
After adding pragmas and tuning the bucket size, I have gotten my solution down to 436ms on the gym. I checked the status and this is actually the current fastest solution on the gym. 139039589
Maybe it can be brought down even further if we made the code more friendly to be vectorized?
I love statements taking INF time to load.
this round
Due to the Yandex system, Snark loads all opencup statements (for most rounds prepared in Polygon) by uploading an image directly. Meanwhile, you can always download a complete PDF by pushing a single button on the right.
How to solve D and H?
H: Observe that if you enter a vertex $$$u$$$ for the first time, it can have at most two children from where you can collect the points. If it has some child $$$v$$$ with $$$t_v = 3$$$, you may go to some other child $$$w$$$, collect its points, come back to $$$u$$$ and then go to $$$v$$$. Alternatively, you may just go to some child and collect its points.
Equivalently, we want to color the tree in three colors: white (did not collect points), black (did collect points) and red (did collect points but "in passing" like from $$$w$$$ above, without chance to collect points from its children). The coloring must satisfy the following constraints:
and must maximize the sum of points in red and black vertices. This can now be solved using a classical tree dp, where $$$\mathrm{dp}[u][k]$$$ is the maximum number of points you can get from the subtree of $$$u$$$ given that $$$u$$$ has the color $$$k$$$.
Hi there, I followed your guide, but maybe there's some bug that I cannot find, can you help me? Thx
If you read the tree like this
Consider this testcase
The answer for it is 20, but it looks like your solution will output 2 because of this.

Definitely, that's what I'm looking for. Thank you so much.
D: This sorting algorithm can be reformulated as follows: construct an increasing subsequence greedily (start from the first element, each time take the next element strictly bigger than the current maximum), cycle shift it by one, then perform an insertion sort. And you need to print the length of the increasing subsequence + the number of swaps performed by an insertion sort. For one array this is easily done in $$$O(n\log n)$$$: find the increasing subsequence in $$$O(n)$$$, cycle shift it, then for each element add the number of distinct values on the prefix, that are bigger than this element.
Now we need to recalculate the answer after adding an element to the array. If this element is not a maximal one, this is pretty easy, since it does not affect any of the previous steps. If this element is a new maximum, the length of the increasing subsequence increases by one, and also one more swap is needed for some iterations of the insertion sort. It is not hard to understand that this swap is needed for the elements that are positioned after the second occurrence of the previous maximum (hard to explain, but easy to understand from some examples). After that everything can be easily recalculated.
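For one fixed array, the reformulated procedure can be simulated directly. This is my own toy sketch (in particular, the direction of the cyclic shift is a guess from the description): it returns the operation count, i.e. subsequence length plus insertion-sort swaps, together with the final array, which is always sorted because insertion sort finishes the job regardless.

```python
def strange_sort_ops(a):
    # Greedily take the prefix maxima as a strictly increasing subsequence,
    # cyclically shift their values by one position (direction is my guess),
    # then run a plain insertion sort and count the swaps it performs.
    a = list(a)
    pos = []  # indices of the greedy strictly-increasing subsequence
    for i, x in enumerate(a):
        if not pos or x > a[pos[-1]]:
            pos.append(i)
    vals = [a[p] for p in pos]
    shifted = vals[-1:] + vals[:-1]  # cyclic shift by one
    for p, v in zip(pos, shifted):
        a[p] = v
    swaps = 0
    for i in range(1, len(a)):  # insertion sort, counting adjacent swaps
        j = i
        while j > 0 and a[j - 1] > a[j]:
            a[j - 1], a[j] = a[j], a[j - 1]
            swaps += 1
            j -= 1
    return len(pos) + swaps, a
```

The swap count of an insertion sort equals the number of inversions of the shifted array, which is what the "distinct values on the prefix" formula computes in $$$O(n\log n)$$$.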
Interestingly this sorting algorithm has been recently published as this
LOL, I also wrote that kind of sort when I just began CP. I thought it was bubble sort.
I thought it was Bubble Sort for a long time when I began CP.
Auto comment: topic has been updated by chenjb (previous revision, new revision, compare).
How to solve B, F, G, K?
Also, are there any solutions for E without any matrix multiplications?
E: I'm pretty sure all solutions would be pretty much equivalent to matrix multiplications. The key here is that the operations (a += k and history += a^2) are linear operations on history, a^2, a, and 1, which lets us compose them succinctly. Matrices and matrix multiplication are just notations for linear operators, but I think all solutions probably rely on this linearity.
F: There are 2 cases, either one polygon contains the other, or there's a separating line through the origin. In the first case, the outer polygon must exactly be the convex hull of all points, so we can check this solution. In the second case, we can sort the points by the angle to the origin, and then sweep the angle of the dividing line. Note that within each convex polygon, the points must be sorted by angle to the origin, so we can precompute whether each triple of neighboring points is actually convex. This lets us efficiently check the validity of any solution, so we can just take the max of all valid ones.
K: This problem asks for the number of sets of 4 points which form an independent set, minus the number of sets which form a clique. Let's focus on counting the number of independent sets. We can count this using PIE: we want the number of sets of 4 points, minus the number of sets of 4 points and 1 edge among them, plus the number of sets of 4 points and 2 edges among them, etc. In particular, we need to add the number of sets of 4 points with 6 edges among them, i.e. cliques. Thus, we only need to count the number of each configuration of 4 points with up to 5 edges. These are:
Of these values, numbers 1 through 5 are trivial, numbers 6, 7, 8, and 10 can be counted by enumerating all triangles, and number 9 requires counting 4-cycles. We do both of these in $$$O(E \sqrt{E})$$$ time. There are a few ways to do this, here's a pretty nice one.
Sort all vertices of the graph by degree, and then direct each edge from the earlier (smaller) vertex to the later one. Then, we claim that each vertex has outdegree at most $$$\sqrt{2E}$$$. This is because all later vertices must have degree at least equal to your outdegree, but the total degree is $$$2E$$$, so there can be at most $$$\sqrt{2E}$$$ out-neighbors.
Now, to enumerate triangles, we can casework on the edge between two smallest vertices; fixing this, there are at most $$$\sqrt{2E}$$$ choices for the outedge from the middle to the largest vertex, so we can enumerate all triangles in $$$O(E \sqrt{E})$$$ time.
To enumerate 4-cycles, we will casework on the diagonal containing the largest vertex; let's denote that vertex by $$$z$$$ and the opposite vertex by $$$x$$$. Fix $$$x$$$, and consider every $$$y$$$ adjacent to $$$x$$$; then, $$$z$$$ must be an out-neighbor of $$$y$$$. Thus, we can enumerate them naively and count the number of times each $$$z$$$ is 2 away from $$$x$$$; taking those counts choose 2 gives the number of squares. The total runtime is also $$$O(E \sqrt{E})$$$, since each (undirected) edge is used twice as the edge between $$$x$$$ and $$$y$$$, and $$$y$$$ has at most $$$O(\sqrt{E})$$$ out-neighbors.
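Both enumerations can be sketched compactly. This is an illustrative Python version of the two procedures above (names are mine): edges are oriented from the lower-degree endpoint to the higher-degree one, triangles are found via marked out-neighborhoods, and 4-cycles via the two-step walks from each $$$x$$$ to candidate largest vertices $$$z$$$.

```python
def count_triangles_and_c4(n, edges):
    # Orient every edge from the lower-degree endpoint to the higher-degree
    # one (ties broken by index), so each out-degree is O(sqrt(E)).
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    key = lambda v: (deg[v], v)
    out = [[] for _ in range(n)]
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
        if key(u) < key(v):
            out[u].append(v)
        else:
            out[v].append(u)
    # Triangles: fix the edge between the two smallest vertices (u -> v),
    # then check out-neighbors of v against out-neighbors of u.
    triangles = 0
    for u in range(n):
        mark = set(out[u])
        for v in out[u]:
            triangles += sum(1 for w in out[v] if w in mark)
    # 4-cycles: for each x, count two-step walks x - y -> z where z is the
    # largest vertex of the cycle; each pair of distinct y's gives one cycle.
    c4 = 0
    cnt = [0] * n
    for x in range(n):
        seen = []
        for y in adj[x]:
            for z in out[y]:
                if key(z) > key(x):
                    cnt[z] += 1
                    seen.append(z)
        c4 += sum(cnt[z] * (cnt[z] - 1) // 2 for z in set(seen))
        for z in seen:
            cnt[z] = 0
    return triangles, c4
```

Each 4-cycle is counted exactly once, for the unique choice of its largest vertex $$$z$$$ and the opposite vertex $$$x$$$ on the other diagonal.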
We had this code in our ICPC book from before: https://github.com/ecnerwala/icpc-book/blob/master/content/graph/cycle-counting.cpp.
G: Instead of diameter, we can maximize the length of some path instead (and don't care about any other paths). We can use $$$dp[p][x][y]$$$, meaning the maximum path length from node $$$x$$$ to node $$$y$$$, and that we are about to place $$$a_p$$$.
For each path $$$(x,y)$$$, we additionally need to store the number of "free" edges we can use without extending $$$(x,y)$$$.
For a fast transition, we add two more states: $$$dx$$$ is a boolean meaning we have not assigned some cost to the edge $$$(x', x)$$$ ($$$x'$$$ is the previous $$$x$$$), and $$$dy$$$ the same thing but for $$$y$$$. Then the transition depends on $$$dx$$$ and $$$dy$$$: in particular, if $$$dx=1$$$ we can either use a current "free" edge or iterate on vertices adjacent to $$$x$$$ for extending the path. If $$$dx=0$$$ we can either use a "free" edge or assign $$$a_p$$$ to the last edge of $$$x$$$ (so increase the value of $$$dp$$$ by $$$a_p$$$). Same for $$$dy$$$.
Total complexity is $$$O(n^3)$$$.
Can you share your code here for better understanding please ?
Code
Thanks :)
Can you explain, what is matrix idea to solve E?
I'll skip the reduction to a single 1D segtree with "add on range" and "get a historic sum of squares on range".
You can store a vector of $$$K$$$ values in each node of a segtree and represent your push (lazy update) operation as a $$$K \times K$$$ matrix that moves some of these values into others with different coefficients. Applying a lazy tag to a node is multiplying a vector by a matrix. Pushing a lazy tag into children is multiplying their matrices by yours.
I believe, the sum of squares is a non-trivial part if you never encountered it. You want to transform $$$\sum \limits_{i=l}^{r} x_i^2$$$ into $$$\sum \limits_{i=l}^{r-1} (x_i + \mathit{push})^2$$$. Rewrite as $$$\sum \limits_{i=l}^{r-1}(x_i^2 + 2 \cdot x_i\cdot \mathit{push} + \mathit{push}^2)$$$ = $$$\sum \limits_{i=l}^{r-1} x_i^2 + 2 \cdot \mathit{push} \cdot \sum \limits_{i=l}^{r-1} x_i + (r - l) \cdot \mathit{push}^2$$$.
Thus, this vector of values in a node responsible for the range $$$[l; r)$$$ can be $$$(\mathit{historic~sum~x^2}, \mathit{sum~x^2}, \mathit{sum~x}, r - l)$$$. So there are $$$4$$$ values and you need a $$$4 \times 4$$$ matrix to deal with them.
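To double-check how these $$$4 \times 4$$$ matrices compose, here is a toy Python verification against a direct simulation (my own sketch, using a column-vector convention I chose and assuming each operation is "add $$$k$$$ to every element, then record the sum of squares"):

```python
def matmul(A, B):
    # multiply two 4x4 matrices
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def matvec(M, v):
    return [sum(M[i][k] * v[k] for k in range(4)) for i in range(4)]

def add_mat(k):
    # state (hist, sum x^2, sum x, length): effect of "x_i += k" on a range
    return [[1, 0, 0, 0],
            [0, 1, 2 * k, k * k],
            [0, 0, 1, k],
            [0, 0, 0, 1]]

# effect of "hist += current sum of squares"
REC = [[1, 1, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 1]]

def via_matrices(xs, ks):
    v = [0, sum(x * x for x in xs), sum(xs), len(xs)]
    tag = [[1 if i == j else 0 for j in range(4)] for i in range(4)]  # identity
    for k in ks:
        # one operation = add k, then record; lazy tags compose by multiplication
        tag = matmul(REC, matmul(add_mat(k), tag))
    return matvec(tag, v)[0]  # historic sum of squares

def direct(xs, ks):
    # brute-force simulation of the same sequence of operations
    xs, hist = list(xs), 0
    for k in ks:
        xs = [x + k for x in xs]
        hist += sum(x * x for x in xs)
    return hist
```

In a real segment tree, `tag` is exactly the lazy matrix stored in a node, and pushing it down multiplies the children's matrices by it.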
B: we will try to reduce both graphs to canonical form. If canonical forms coincide, we perform the reduction for $$$A$$$, and undo the reduction for $$$B$$$ after that.
If you make operations $$$(a, b, c, d, x)$$$ and $$$(b, a, c, d, x)$$$, weight of $$$ab$$$ edge will increase by $$$2x$$$, and weight of $$$cd$$$ will decrease by $$$2x$$$. This allows to change weights pretty much arbitrarily, preserving the total sum, as well as parity of each individual weight. We will "canonize" parity first, and then do some even steps.
If $$$n$$$ is small (say, at most $$$5$$$), we can look for the lex. smallest reachable parity configuration with BFS. If $$$n = 4$$$, sum of opposite edge weights is always preserved, thus after parity fixing we can make all edges not adjacent to vertex $$$1$$$ to be $$$0$$$ or $$$1$$$, and that's canonical form. If $$$n = 5$$$, we can make all edges other than $$$(1, 2)$$$ to be $$$0$$$ or $$$1$$$ by transferring their weights to $$$(1, 2)$$$ with the $$$2x$$$ gadget above, possibly with an intermediate step.
If $$$n \geq 6$$$, we can always make at most one edge have odd weight. If edges $$$ab$$$ and $$$cd$$$ are odd, and $$$e, f$$$ are any other two vertices, then applying operations to $$$acef, adef, bcef, bdef, abcd$$$ (all with $$$x = 1$$$) makes $$$ab$$$ and $$$cd$$$ even while preserving all other parities. Doing this repeatedly (possibly with intermediate steps, like above) we can make all edges, except possibly $$$(1, 2)$$$, have even weights. After that we can gather all even weights into $$$(1, 2)$$$ similar to the above.
UPD: also I'm stupid and we just need to reduce the difference of $$$A$$$ and $$$B$$$ to all zeros lol.
Is it right that scanf/printf are really slow on Codeforces? During the Open Cup round, I used scanf and printf in problem H, and it passed in less than 500ms. But when I submitted the solution to the gym after the contest, it got TLE (the time limit is 2 seconds, so it's at least 4 times slower than Yandex.Contest). Then I replaced scanf with cin and with fread, and both versions got accepted quickly, in almost the same time as on Yandex.Contest.

Maybe you should try submitting with the 32-bit compiler. I've seen this on others' submission attempts to this problem.
Thanks, you are correct. It got accepted.
It's weird that the performance of scanf improves a lot when switching to a 32-bit compiler...
How to solve M?