Hello, everyone!
Japanese Olympiad in Informatics (JOI) Spring Camp 2022 will be held from March 20 to March 23. The contest consists of 4 days:
- Day 1: March 20, 02:00 GMT — March 20, 24:00 GMT
- Day 2: March 21, 02:00 GMT — March 21, 24:00 GMT
- Day 3: March 22, 02:00 GMT — March 22, 24:00 GMT
- Day 4: March 23, 02:00 GMT — March 23, 24:00 GMT
The duration of the contest is 5 hours each day. You can freely choose your start time from the specified range. Please note that if you start competing after 19:00 GMT, the contest still ends at 24:00 GMT, which means the duration is less than 5 hours. For example, if you start competing at 21:00 GMT, the duration is 3 hours.
The contest information is as follows. Details are available on the contest page.
- Number of problems each day: 3 or 4
- Max score for each task: 100 points
- Style: IOI-Style (There may be some partial scores)
- Problem statement: Japanese & English
- There may be some unusual tasks (e.g. output-only tasks, communication tasks, reactive tasks), like in the IOI
The registration page and live scoreboard will be announced 10 minutes before the contest.
We welcome all participants. Good luck and have fun!
Commented so this would remain in recent actions
Now, the contest Day1 starts!
You can freely choose your start time from March 20, 02:00 GMT — 24:00 GMT. Let's participate!
Commented so this would remain in recent actions
Why are you stealing my thing >:( ?
looks like he wants contribution
Looks like you want contribution
Seems the Day 1 contest has finished.
Let's discuss the tasks here. How do you solve Task 2 (Kyoto)?
The edges in the answer must be on the convex hull
There is a solution which does not use convex hull, so I will introduce that.
Subtask 1:
Dynamic programming. Time complexity is $$$O(HW)$$$.
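A sketch of this DP (Python; assuming the standard cost model for this problem, where traversing one block along horizontal road $$$i$$$ costs $$$A_i$$$ and along vertical road $$$j$$$ costs $$$B_j$$$ — the function name is mine):

```python
def min_time(A, B):
    # A[i]: cost per block along horizontal road i; B[j]: along vertical road j
    # dp[i][j]: minimum time from intersection (0, 0) to intersection (i, j)
    H, W = len(A), len(B)
    INF = float("inf")
    dp = [[INF] * W for _ in range(H)]
    dp[0][0] = 0
    for i in range(H):
        for j in range(W):
            if j > 0:  # step east along horizontal road i
                dp[i][j] = min(dp[i][j], dp[i][j - 1] + A[i])
            if i > 0:  # step south along vertical road j
                dp[i][j] = min(dp[i][j], dp[i - 1][j] + B[j])
    return dp[H - 1][W - 1]
```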
Subtask 2:
A horizontal road which satisfies both $$$A_i \geq \min(A_1, A_2, \dots, A_{i-1})$$$ and $$$A_i \geq \min(A_{i+1}, A_{i+2}, \dots, A_N)$$$ will never be used on the optimal route. The same holds for vertical roads. Since $$$A_i, B_j \leq 1000$$$, almost all of the roads will be eliminated, so this problem can be solved by DP on a $$$2000 \times 2000$$$ grid.
Full Score:
Suppose the $$$i$$$-th northernmost road is $$$x = x_i$$$ with speed $$$A_i$$$, and the $$$j$$$-th westernmost road is $$$y = y_j$$$ with speed $$$B_j$$$ (in this problem, $$$x_i = i$$$ and $$$y_j = j$$$ for each $$$i, j$$$). First, consider the case $$$H = W = 2$$$.
Transforming this formula, you find that the condition for which route is better is equivalent to which of $$$\frac{A_2 - A_1}{x_2 - x_1}$$$ and $$$\frac{B_2 - B_1}{y_2 - y_1}$$$ is larger. If the former route is better, it is safe to say that it is "impossible" to turn left at $$$(2, 1)$$$; if the latter route is better, it is safe to say that it is "impossible" to turn right at $$$(1, 2)$$$.
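For concreteness, the $$$H = W = 2$$$ algebra (writing $$$h = x_2 - x_1$$$ and $$$w = y_2 - y_1$$$ for the two gaps — my shorthand, with cost per unit of travel $$$A_i$$$, $$$B_j$$$) goes:

```latex
\underbrace{A_1 w + B_2 h}_{\text{along } A_1 \text{ then } B_2}
  \;\le\;
\underbrace{B_1 h + A_2 w}_{\text{along } B_1 \text{ then } A_2}
\iff (B_2 - B_1)\, h \le (A_2 - A_1)\, w
\iff \frac{B_2 - B_1}{y_2 - y_1} \le \frac{A_2 - A_1}{x_2 - x_1}
```

which is exactly the slope comparison above.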
Consider the general case. We will first calculate $$$\frac{A_{i+1} - A_i}{x_{i+1} - x_i}$$$ and $$$\frac{B_{j+1} - B_j}{y_{j+1} - y_j}$$$ for all $$$i, j$$$. There are $$$H+W-2$$$ values; suppose that the largest among them is $$$\frac{A_{k+1} - A_k}{x_{k+1} - x_k}$$$. In this case, if we apply the condition stated above to all $$$W-1$$$ squares sandwiched between roads $$$x = x_k$$$ and $$$x = x_{k+1}$$$, it can be said that we cannot turn left at $$$(k+1, 1), \dots, (k+1, W-1)$$$, which means that road $$$x = x_{k+1}$$$ will not be used. This eliminates one road. (Note that some part of the road will remain if $$$k = H-1$$$.)
If we keep repeating this elimination procedure, only one path from $$$(1, 1)$$$ to $$$(H, W)$$$ will remain — which is the optimal route. We can simulate this procedure using std::set or a similar structure, and find the optimal route in $$$O((H+W) \log (H+W))$$$.
https://atcoder.jp/contests/jag2015summer-day3/tasks/icpc2015summer_day3_i
[Day 1] My solution for misspelling — AC (100 pts):
Let's look closer at what each requirement means.
Assume we are given that $$$T_{A_j}\le T_{B_j}$$$, where $$$A_j<B_j$$$. We can decompose $$$T_{A_j}$$$ and $$$T_{B_j}$$$ as follows:
$$$T_{A_j}=S_{1..A_j-1}S_{A_j+1..B_j}S_{B_j+1..n}$$$
$$$T_{B_j}=S_{1..A_j-1}S_{A_j..B_j-1}S_{B_j+1..n}$$$
Since the prefix and suffix part of each string is the same, we can rewrite the requirement more concisely as $$$S_{A_j+1..B_j}\le S_{A_j..B_j-1}$$$. Similarly, if $$$A_j>B_j$$$, we can rewrite the requirement as $$$S_{B_j..A_j-1}\le S_{B_j+1..A_j}$$$. (Note that if $$$A_j=B_j$$$, then the two strings are already the same, so we can safely ignore the requirement.)
Now, what we want to do is turn these string-comparison requirements into an easier-to-work-with format. Let's derive an array $$$F$$$ from string $$$S$$$, where for each $$$i$$$ in range $$$[2..n]$$$ we have $$$F_i = 1$$$ if $$$S_i < S_{i-1}$$$, $$$F_i = -1$$$ if $$$S_i > S_{i-1}$$$, and $$$F_i = 0$$$ otherwise.
Then each requirement of the form $$$S_{A_j+1..B_j}\le S_{A_j..B_j-1}$$$ can be rewritten as " the first non-zero element in $$$F_{A_j+1..B_j}$$$ must be $$$1$$$ or there must be no such element ", and each requirement of the form $$$S_{B_j..A_j-1}\le S_{B_j+1..A_j}$$$ can be rewritten as " the first non-zero element in $$$F_{B_j+1..A_j}$$$ must be $$$-1$$$ or there must be no such element ". For convenience let's call the former requirements $$$1$$$-requirements and the latter $$$-1$$$-requirements.
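These rewrites can be checked by brute force. A small sketch (Python, 0-indexed, helper names mine; $$$F_i$$$ taken as $$$1$$$ if $$$S_i < S_{i-1}$$$, $$$-1$$$ if $$$S_i > S_{i-1}$$$, and $$$0$$$ otherwise) verifies that the deletion comparison, the substring comparison, and the first-non-zero rule all agree:

```python
from itertools import product

def F_of(S):
    # F[i] = 1 if S[i] < S[i-1], -1 if S[i] > S[i-1], else 0 (for i >= 1)
    return [0] + [(S[i] < S[i - 1]) - (S[i] > S[i - 1]) for i in range(1, len(S))]

def requirements_agree(S):
    F, n = F_of(S), len(S)
    for A in range(n):
        for B in range(A + 1, n):
            t_a = S[:A] + S[A + 1:]          # delete position A
            t_b = S[:B] + S[B + 1:]          # delete position B  (A < B)
            original = t_a <= t_b
            substrings = S[A + 1:B + 1] <= S[A:B]
            first = next((F[i] for i in range(A + 1, B + 1) if F[i] != 0), None)
            f_rule = first is None or first == 1
            if not (original == substrings == f_rule):
                return False
    return True
```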
With these in mind, we can start figuring out a DP solution. Let $$$dp_{i,c}$$$ be the number of ways to fill in the rest of the string, assuming $$$S_{1..i}$$$ has been determined, $$$S_i=c$$$, and all requirements with range $$$[l..r]$$$ where $$$l \le i$$$ have been satisfied. Let's look at how to calculate $$$dp_{i,c}$$$. If $$$i=n$$$ then $$$dp_{i,c}=1$$$, trivially. Otherwise we have 2 cases:
If $$$S_j=S_i$$$ for all $$$j>i$$$, then all remaining requirements are satisfied, so we can add 1 to $$$dp_{i,c}$$$.
Otherwise, let $$$j$$$ be the lowest position $$$>i$$$ such that $$$S_i\ne S_j$$$, and let $$$S_j=d$$$. Then we know that $$$F_{k}=0$$$ for all $$$i<k<j$$$, and $$$F_{j}=-1$$$ if $$$c<d$$$, otherwise it is equal to $$$1$$$. So if $$$c<d$$$, then there must be no $$$1$$$-requirement with range $$$[l,r]$$$ such that $$$i<l\le j \le r$$$, otherwise that requirement won't be satisfied. Similarly, if $$$c>d$$$ then there must be no $$$-1$$$-requirement with range $$$[l,r]$$$ such that $$$i<l\le j \le r$$$. In either case, once we have determined a suitable pair $$$(j,d)$$$, all other requirements in said range will be of the other type ($$$-1$$$ for $$$c<d$$$, $$$1$$$ for $$$c>d$$$) and so will automatically be satisfied, while all requirements with $$$i<l\le r<j$$$ will also be satisfied, so the assumption in $$$dp_{j,d}$$$ holds and we can add that to $$$dp_{i,c}$$$.
When all is said and done, the number of satisfying strings will be $$$\sum_c dp_{1,c}$$$ over all $$$c$$$.
Now, we can do a sweepline for $$$i$$$ from $$$n$$$ to $$$1$$$, maintaining all suitable $$$j$$$ for each case ($$$c<d$$$, $$$c>d$$$), as well as the sum of $$$dp_{j,d}$$$ for each $$$d$$$ over all suitable $$$j$$$. When we are calculating $$$dp_i$$$, we add $$$i+1$$$ to the suitable sets and process requirements with range $$$[i+1..r]$$$, removing elements from the suitable sets as necessary. With careful implementation, the problem can be solved in $$$O((n+m)\log n + n \cdot |\Sigma|)$$$ (where $$$\Sigma$$$ is the alphabet), since each $$$j$$$ will enter and exit each suitable set at most once.
Also, this is my first attempt at writing an editorial (at least publicly), so any feedback is very much welcome!
thanks, your solution is easier to understand than the official one :))
Contest Day1 is Over
The contest Day1 is finished. Thank you for your participation.
Day2 will start in 2 hours (March 21, 02:00 GMT — 24:00 GMT), so good luck and have fun!
Is there any way to see the day 1 problems? Doesn't seem to be on the contest page for me (only shows countdown for day 2).
I believe you can only see the problems if you participate in the contest and download the pdfs.
Here are the statements for all four contests.
Contest Day2 is Over
The contest Day2 is finished. Thank you for your participation, and now you can discuss the problems.
Day3 will start in 2 hours (March 22, 02:00 GMT — 24:00 GMT), so good luck and have fun!
How to make SetID() in day2 task2 useful?
Here is my 100 points solution for Copy and Paste 3 from Day 2:
Firstly, observe that every string that is put in the clipboard is pasted at least twice later on (otherwise there was no point in constructing it in the first place). This means that every such string is a substring of $$$S$$$.
We can construct $$$S$$$ in stages. In each stage, we cut the string made during the previous stage, and paste it multiple times while creating the next string.
To compute the answer, we can use dynamic programming. Let $$$DP[l][r]$$$ be the minimum time to construct the substring $$$[l, r]$$$ of $$$S$$$. Here are the transitions for computing it:
We can construct the substring $$$[l, r-1]$$$ and simply add one more character.
If while constructing the substring $$$[l, r]$$$ we don't paste any smaller substring, we can simply type one instance of $$$S[l]$$$ at the beginning and then type the remaining characters in $$$[l+1, r]$$$. If we do paste a smaller substring, we can type $$$S[l]$$$ at the beginning of the final stage of construction of $$$[l+1, r]$$$.
There is another type of transition, where we cut and paste strings. If we want to construct some string $$$P$$$ using some string $$$Q$$$, we need to paste $$$Q$$$ as many times as possible. To do this, we need to find the maximum number of disjoint occurrences of $$$Q$$$ in $$$P$$$. (We are assuming that it is more efficient to cut and paste strings instead of typing characters one by one; if this is not the case, the previously described transitions will still compute the correct answer.) We can also assume that a disjoint prefix and suffix of $$$P$$$ are equal to $$$Q$$$, as we can construct any string which contains $$$P$$$ as a substring using the previous two transitions.
We can compute the DP values in the increasing order of $$$s=r-l+1$$$. For a given value of $$$s$$$, let $$$next[i]$$$ be the smallest index such that the substrings $$$[i, i+s-1]$$$ and $$$[next[i], next[i]+s-1]$$$ are equal, and $$$next[i] \geq i+s$$$. If no such index exists, $$$next[i]$$$ is undefined. If we are constructing some bigger string using $$$[i, i+s-1]$$$ it is optimal for the next occurrence of that substring to be $$$[next[i], next[i]+s-1]$$$.
To compute $$$next$$$ efficiently, we can compute hashes for all substrings, and then within each unique hash we can use the two pointers technique.
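This step might look as follows (Python sketch; grouping by the substring itself stands in for hash grouping, and `compute_next` is my name):

```python
from collections import defaultdict

def compute_next(S, s):
    # next_occ[i]: smallest j >= i + s with S[j:j+s] == S[i:i+s], else None
    positions = defaultdict(list)
    for i in range(len(S) - s + 1):
        positions[S[i : i + s]].append(i)     # stand-in for hash grouping
    next_occ = [None] * (len(S) - s + 1)
    for pos in positions.values():            # pos is sorted (built left to right)
        k = 0
        for i in pos:                         # two pointers: k only moves forward
            while k < len(pos) and pos[k] < i + s:
                k += 1
            if k < len(pos):
                next_occ[i] = pos[k]
    return next_occ
```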
Now, for a given $$$s$$$ and $$$l$$$ we can iterate over $$$r' = next[l]+s-1, next[next[l]]+s-1, \ldots$$$ until we reach an undefined value. For each $$$r'$$$, we create a transition from $$$[l, l+s-1]$$$ to $$$[l, r']$$$.
It initially appears that this algorithm is $$$O(N^3)$$$, because for each of the $$$O(N^2)$$$ DP states we use $$$O(N)$$$ transitions. However, that is not the case! For a fixed $$$(l, s)$$$ pair, we can create at most $$$\frac{N}{s}$$$ transitions (because when we go to a $$$next$$$ value, we jump over at least $$$s$$$ elements). So, the total number of transitions is $$$N \times (\frac{N}{1} + ... + \frac{N}{N}) = O(N^2 \log N)$$$.
Where can we upsolve the problems from previous days?
The problems from previous editions of the camp are available here: https://oj.uz/problems/source/47. The ones from the current edition will probably be there as well in a week or two. You just have to wait.
How to solve day2 p2?
Our procedure will be as follows:
We first split the tree into disjoint connected components. For each component we assign to it an integer label $$$comp$$$, and for each vertex within the component we assign another integer label $$$vertex$$$ w.r.t. the component's structure.
For each vertex, we set the ID of it to be the pair $$$(comp, vertex)$$$.
Benjamin communicates to Ali the unordered pair $$$(comp_x, comp_y)$$$.
Ali communicates to Benjamin information about the two components' structure, so that Benjamin can figure out the position of each vertex within its component, and the distance between the two components.
Our goal is to minimize the number of bits used in step 4, and we will try to do this by minimizing the number of different component structures (isomorphism classes), which correlates with the maximum size of a component.
This leads to the idea of splitting the tree into components of size in the range $$$[K, 2K)$$$ for some $$$K$$$ greedily in a bottom-up manner:
Run a DFS: at vertex $$$v$$$, first recursively split each child's subtree. Then, for each child $$$c$$$, if the component containing $$$c$$$ has size $$$\ge K$$$, we leave it alone; otherwise we merge it with the component containing $$$v$$$.
The root's component might have fewer than $$$K$$$ vertices, but it should be the only such component.
Which value of $$$K$$$ should we select? Let $$$C$$$ be the number of components. Since Benjamin is only allowed $$$20$$$ bits in step 3, $$$\binom{C}{2}$$$ can be at most $$$2^{20}$$$, so $$$C \le 1448$$$. Furthermore, $$$C$$$ can be as large as $$$\lceil \frac{N}{K} \rceil$$$, thus we need $$$K \ge 7$$$. So we select $$$K = 7$$$.
Notice that if we assign $$$comp$$$ and $$$vertex$$$ from $$$0, 1, 2, ...$$$, we have $$$comp < C \le \lceil \frac{N}{K} \rceil$$$ and $$$vertex < 2K$$$, thus the number of pairs $$$(comp, vertex)$$$ should be less than $$$2N + 19$$$, so that's step 2 nicely taken care of.
Now, how many different component structures are there? As each component is a tree, if we consider all binary trees of size at most $$$2K - 1 = 13$$$, the total number of structures is pretty big, though that should be good enough for a decent score.
Because of our way of splitting the tree, every subtree of a component (which is a rooted tree) that isn't the whole component has size less than $$$K$$$. Let's define a good tree as a tree that satisfies this condition.
If a good tree is of size less than $$$2K - 1$$$, we can add auxiliary vertices to it without damaging the original structure to create a new good tree of size $$$2K - 1$$$. Thus, we only need to map each component to a good tree of size $$$2K - 1$$$.
The number of good trees of size $$$2K - 1$$$ (two trees are the same if they are isomorphic, i.e. we can morph one into the other by relabeling the vertices), where $$$K = 7$$$, is $$$66$$$: such a tree is a root with two subtrees of size $$$K - 1 = 6$$$ each, there are $$$11$$$ rooted binary trees of size $$$6$$$, and an unordered pair with repetition of them can be chosen in $$$\binom{11+1}{2} = 66$$$ ways. Thus, there are only $$$66$$$ states of component structures.
Now, let's consider step 4. If Ali sends Benjamin the component structures for $$$comp_x$$$ and $$$comp_y$$$, Benjamin still needs to know the distance between those components. (The case where $$$comp_x = comp_y$$$ can be handled easily, so we will focus on the case where $$$comp_x \ne comp_y$$$). More specifically, Benjamin needs to know the shortest distance between two vertices of components $$$comp_x$$$ and $$$comp_y$$$ as well as the $$$vertex$$$ label of the two ends. Notice that if we label the components in DFS order (or BFS order or by depth), the end of the component with the higher $$$comp$$$ label is always the root of the component.
Thus, along with the component structures for $$$comp_x$$$ and $$$comp_y$$$, Ali needs to send to Benjamin the distance between the components and the $$$vertex$$$ label of the lower labeled component's end. The total number of possible states is $$$66^2 \cdot 10^4 \cdot 13 = 2^{29.0769...}$$$, needing $$$30$$$ bits to transmit and giving 94 points.
For the small optimization to 96 points, let's look at the data sent in step 4 for $$$comp_x$$$ and $$$comp_y$$$ separately (assuming $$$comp_x > comp_y$$$). For $$$comp_x$$$, we only need the depths (distances from the component's root) of each $$$vertex$$$ label, so we can group different structures with the same multiset of depths and label $$$vertex$$$ accordingly, reducing from $$$66$$$ to $$$38$$$ states. For $$$comp_y$$$, notice that not all pairs $$$([component \ structure], vertex)$$$ are valid, since a vertex with degree $$$3$$$ can't be the end of a shortest path. So we only need to consider pairs $$$([component \ structure], vertex)$$$ where the degree of $$$vertex$$$ is less than $$$3$$$, reducing from $$$66 \cdot 13$$$ states to $$$690$$$ states. The final total number of possible states is $$$38 \cdot 690 \cdot 10^4 = 2^{27.9660...}$$$, needing $$$28$$$ bits to transmit and giving 96 points.
How to get 100 points?
Is the judging server broken? The submission I sent 25 minutes ago is still compiling...
UPD: seems the CMS server is down :(
I think so too
There was a server problem during Contest Day 3, which prevented submissions from being judged. The problem likely started around 3 AM in Japan. Sorry for the inconvenience!
The server problem is now solved, so I think Contest Day 4 will start on time.
I somehow didn't participate in the first two days, but came excited to participate in the third day. 90% of my enthusiasm and motivation went away once I read the tree problem. The remaining 10% went away after I read the ants problem. Then I read the device problem and it looked nice, but the blow from the tree one was too severe.
Why is that?
This is not 2014; centroid decomposition is not cool for its own sake. This is literally a tutorial problem for a quite complicated technique, and JOI is not supposed to be an educational round. Once, on the final stage of POI, we had a problem whose whole statement was literally "write NTT", and this is a similar approach.
I believe there is a solution for the tree problem (sprinkler) without using centroid decomposition.
Source: I solved it in contest without using centroid decomposition.
How do you do that?
EDIT: D<=40, nevermind >_>
In fact, it turns out that the editorial solution for "sprinkler" is not centroid decomposition, but an $$$O(ND)$$$ solution with simple implementation!
The centroid decomposition would pass subtask 5, but it's likely to get TLE for the full score. The main reason is that the "inverse" is not defined for a composite modulus. Although it is possible to factorize $$$M$$$ and solve for each prime factor, it is still time-consuming.
Oh, so distances were $$$\le 40$$$ >_>...? Great. Such a crucial and very unusual constraint, stated among the sea of obvious constraints only on the 3rd page (especially when the problem looks solvable in the general setting). Such a crucial constraint should definitely be stated in a much more visible way. I don't blame myself that much for not carefully reading the remaining 5 pages after I read the first one :p
Device made me similarly annoyed: it stated that we can use 2000 operations, only to change its mind on the freaking fifth page, where it turns out that this is just a technical limit and you can in fact use at most 140 for the full score.
Also, it is true that multiplying mod something non-prime has no inverse operation, and that is a serious problem for the standard way of doing centroid decomposition. However, it can be done in the same complexity without the inverse operation if you use the "1/3-2/3" approach, which finds a centroid and divides its subtrees into two groups such that each of them has $$$\ge n/3$$$ vertices. But I did not have that one in my library xd
1) I don't remember JOI being particularly known for hiding unusual constraints without telling you in the statement, so this might just be an error that made it past quality assurance.
2) From my experience, writing the limit for getting any points at all in the statement, then having a detailed limits breakdown in the Scoring section, is common practice for interactive/communication problems in OI contests.
So "hiding" unusual constraints without telling you in the statement is low-quality? I disagree with that.
Oof. I shouldn't have been so hasty when writing my opinion out.
I don't think that putting unusual constraints in the Constraints section and not in the statement upfront is wrong. It's where constraints should go, after all.
But I have experienced cases where problemsetters intentionally hid unusual constraints in that section, to lower the chance of contestants noticing it and artificially increasing the difficulty of the problem ( cough ICPC Vietnam National 2017 cough), and so I have built a personal distaste for problems like that.
Sorry for bothering you by saying that such kind of problemsetting is outright low-quality without thinking it through.
Contest Day3 is Over
The contest Day3 is finished. Thank you for your participation, and now you can discuss the problem.
Day4 will start in 2 hours (March 23, 02:00 GMT — 24:00 GMT), so good luck and have fun!
Totally defeated by the Day3 problems:(
By the way, is there an easy way to get a high score in problem device2?
The 80 points solution imo:
String $$$A$$$: "$$$0101\dots01$$$" ($$$90$$$ copies of "01"). String $$$B$$$: for each bit of $$$N$$$ (from 0 to 59), if that bit is 0 then you add three zeros to $$$B$$$, else you add three ones.
The processing part is quite long, and since you are an LGM, I will safely assume that you can work it out for yourself.
(P/s: I don't know if this solution is easy or not)
[Day 3] My solution for device2 — 80 pts:
Without loss of generality let's assume we want to send $$$N=60$$$ bits of information, stored as an array $$$D$$$, where each bit is 1 or -1.
The main idea is to minimize the impact of the second string being merged into the first string on our reading of that first string. We achieve this by using prefix sums.
The array $$$S$$$ Anna sends will always be $$$[1,-1,1,-1...1,-1]$$$ ($$$180$$$ characters).
Meanwhile the array $$$T$$$ will take the form $$$[$$$ $$$D_0$$$ repeated 3 times, $$$D_1$$$ repeated 3 times, ..., $$$D_{59}$$$ repeated 3 times $$$]$$$ (also $$$180$$$ characters).
Let $$$pf(arr,i)$$$ denote the prefix sum of the first $$$i$$$ elements of an array $$$arr$$$. We notice that $$$pf(S,x) \in \{0,1\}$$$ for any $$$x$$$.
Now, we have $$$pf(U,i)=pf(S,a_i)+pf(T,b_i)$$$ for some $$$a_i,b_i$$$ such that $$$a_i+b_i=i$$$, with $$$a_i\le a_j$$$ and $$$b_i\le b_j$$$ for all $$$i\le j$$$ (this is due to the merge preserving the relative order between elements of the same array). From the previous observation $$$pf(S,x) \in \{0,1\}$$$, we can rewrite the above as $$$pf(U,i) \in \{pf(T,b_i),\, pf(T,b_i)+1\}$$$.
Let $$$b_i=3k+x$$$ with $$$0 \le x \le 2$$$; then $$$pf(U,i) = 3 \cdot pf(D,k)+x \cdot D_k+\{0,1\}$$$ (this is due to how the array $$$T$$$ is constructed). Now, let's say that we can extract $$$u$$$ from some $$$v$$$ if $$$v=3u$$$ or $$$v=3u+1$$$. Then,
The proof is not hard (it is just writing down all the possible cases for $$$D_k$$$ and $$$pf(U,i)$$$), and is left as an exercise for the reader. The key takeaways are:
Thus, if we take all the extracted values from the array $$$pf(U)$$$, put it in an array (in the same order as $$$pf(U)$$$), and remove equal adjacent elements, the array we get is exactly $$$pf(D)$$$. From here, getting the original array $$$D$$$ is a trivial task.
Once you understand the solution, the code is actually pretty short.
Would be interested in hearing the full solution!
I'm sorry, but this hint needs only 0 bits to communicate
Brilliant! Thanks to both of you!
Any other different solution? Very interested
My 96-point solution for device2, which got the highest score among all the solutions submitted in JOISC contest 3.
In my opinion, this is just an extension of the 80-point solution which has been mentioned many times.
But I don't think there's an easy way to get a full solution by improving this algorithm.
First, we consider how to transfer a 01-string $$$A$$$ from Anna to Bruno.
Just like in the other solutions, $$$T$$$ is a fixed vector $$$[0,1,0,1,0,1,...]$$$.
For each continuous segment of zeros or ones in $$$A$$$ of length $$$N$$$, we add $$$2 \times N + 1$$$ zeros or ones into $$$S$$$.
For example, if $$$A = [0,0,1,1,1,0,1]$$$ , then $$$S = [0,0,0,0,0,1,1,1,1,1,1,1,0,0,0,1,1,1]$$$.
Now let us consider how to recover $$$A$$$ from a riffle-shuffled 01-string of $$$S$$$ and $$$T$$$.
Assume that the shuffled string is $$$U$$$. We construct $$$V$$$ as follows: scan $$$U$$$ from the front while maintaining the next expected character of $$$T$$$'s alternation $$$[0,1,0,1,\dots]$$$ (starting with $$$0$$$); while the first remaining element of $$$U$$$ is not the expected character, we add it into $$$V$$$ and erase it from $$$U$$$, and when it is the expected character, we keep it and flip the expectation. Assume the elements remaining in $$$U$$$ form $$$U'$$$.
It's easy to see that all the elements moved to $$$V$$$ must come from $$$S$$$ in the process of the riffle shuffle.
And $$$U'$$$ is of the form $$$[0,1,0,1,...]$$$, which has the same starting and ending characters as $$$T$$$. However, the length of $$$U'$$$ may be greater than that of $$$T$$$, because some of the elements remaining in $$$U$$$ may come from $$$S$$$.
Since the form of the remaining elements in $$$U'$$$ is very similar to the form of $$$T$$$, the elements in $$$U'$$$ that come from $$$S$$$ have to satisfy a strong condition: they must form disjoint consecutive segments $$$[0,1]$$$ or $$$[1,0]$$$.
For example, if $$$U' = [0,1,0,1,0,1,0,1]$$$ and $$$T = [0,1,0,1]$$$, then the elements from $$$S$$$ in $$$U'$$$ can be $$$[0,1,(0),(1),0,(1),(0),1]$$$ or $$$[0,1,0,(1),(0),(1),(0),1]$$$ or $$$[(0),(1),0,(1),(0),1,0,1]$$$ or something else, such that $$$U'$$$ minus (the elements from $$$S$$$ in $$$U'$$$) $$$= T$$$.
So comparing $$$V$$$ and $$$S$$$, the only difference is that some consecutive segments $$$[0,1]$$$ or $$$[1,0]$$$ of $$$S$$$ are missing in $$$V$$$ (those mentioned above in $$$U'$$$).
And we can recover that by going through each consecutive segment of zeros or ones in $$$V$$$: if the length of the current segment is even, then we can tell that a character is missing at the end of this segment, since the length of each consecutive segment of zeros or ones in $$$S$$$ is odd. In that case we add $$$[1,0]$$$ or $$$[0,1]$$$ to the end of this segment, and then continue with the next consecutive segment.
After that, we can apply a small trick: the first character of $$$S$$$ doesn't need to be transferred. Since the first three characters of $$$S$$$ have to be the same, the first two characters of $$$V$$$ have to be the same, so we can deduce the first character of $$$S$$$ by looking at the second character of $$$V$$$.
Similarly, the last character in $$$S$$$ isn't necessary to be transferred either.
After knowing how to transfer a 01-string $$$A$$$ from Anna to Bruno, we need to translate an integer $$$x$$$ ($$$x \leq 10^{18}$$$) into a 01-string $$$A$$$, such that the length of the 01-string $$$S$$$ (which Anna constructs from $$$A$$$) is less than $$$N+2$$$ ($$$N = 145$$$ in my solution), since the first and the last characters don't need to be transferred.
We can use a vector $$$P$$$ and a boolean $$$B$$$ to describe a 01-string $$$A$$$: each element of the vector stands for the length of a consecutive segment of zeros or ones, and $$$B$$$ stands for the first element of $$$A$$$.
For example, if $$$A = [1,1,0,0,0,1,0]$$$, then $$$B = 1$$$ and $$$P = [2,3,1,1]$$$.
We can find that the length of $$$S$$$ is less than $$$N+2$$$ if and only if the vector $$$P$$$ satisfies $$$2 \times sum + size \leq N + 2$$$, where $$$sum$$$ is the sum of its elements and $$$size$$$ its length.
So you can just sort these vectors in any order you like and transfer the 01-string related to the $$$x$$$-th vector to Bruno when receiving the integer $$$x$$$. We set $$$B = 1$$$ if and only if $$$x > 5 \times 10^{17}$$$, and subtract $$$5 \times 10^{17}$$$ from $$$x$$$ in this case.
Since there are more than $$$6 \times 10^{17}$$$ vectors satisfying that condition, this is enough to translate each integer $$$x \leq 10^{18}$$$ into a boolean and a vector satisfying the condition, which can then be translated into a 01-string transferable from Anna to Bruno.
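That count can be sanity-checked directly: the number of compositions of $$$s$$$ into $$$k$$$ positive parts is $$$\binom{s-1}{k-1}$$$, and summing over the sums $$$s$$$ allowed by $$$2 \times sum + size \leq N + 2$$$ telescopes, for fixed $$$k = size$$$, to $$$\binom{\lfloor (N+2-k)/2 \rfloor}{k}$$$ (my rearrangement of the condition):

```python
from math import comb

N = 145
# run-length vectors P (compositions) with 2 * sum(P) + len(P) <= N + 2;
# for fixed length k the hockey-stick identity gives C((N + 2 - k) // 2, k)
total = sum(comb((N + 2 - k) // 2, k) for k in range(1, N + 3))
```

The scheme needs at least $$$5 \times 10^{17}$$$ such vectors (one half per value of $$$B$$$), and the sum comfortably clears that bar.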
Sorry for my poor English.
Look forward to a full solution.
How do you avoid the $$$\log$$$ factor in day 3 problem 2? I had an $$$O(Q \cdot 40 \cdot \log N)$$$ solution and failed to optimize it for three hours. :(
You can ensure that the length of the path is exactly 40 or 39 by continuing to move up at the LCA.
ok, thank you for your contribution to the society
your help is greatly appreciated
I don't know what your solution is like; here is mine.
Consider maintaining $$$mul_{i,j}$$$, meaning that all descendants of node $$$i$$$ at depth $$$j$$$ (i.e. at distance $$$j$$$ below $$$i$$$) should be multiplied by $$$mul_{i,j}$$$. It's easy to see the answer for node $$$i$$$ is $$$h_i\times \prod\limits_{j=0}^{maxd=40} mul_{fa_{i,j},j}$$$ ($$$fa_{i,j}$$$ is the $$$j$$$-th ancestor of node $$$i$$$).
As for a modification $$$(i, d, w)$$$: jump to $$$fa_{i,d}$$$ and then process from top to bottom.
In more detail, we multiply $$$mul_{fa_{i,d},0}$$$ by $$$w$$$ first. Then, for $$$mul_{fa_{i,j},k}$$$: if $$$k>d-j$$$, those descendants can't be reached; if $$$k<d-j-1$$$, then $$$fa_{i,j+1}$$$ has already covered them in $$$mul_{fa_{i,j+1},k+1}$$$, so they shouldn't be multiplied twice. Therefore, we simply do $$$mul_{fa_{i,j},d-j}\leftarrow mul_{fa_{i,j},d-j}\times w,\ mul_{fa_{i,j},d-j-1}\leftarrow mul_{fa_{i,j},d-j-1}\times w$$$.
We multiply at most 2 $$$mul$$$ tags by $$$w$$$ for each ancestor, so the time complexity is $$$O(d)$$$ per modification.
Special case: $$$fa_{i,d}$$$ may not exist; in that case, just multiply the remaining tags at the root of the tree.
btw sorry for my poor English.
Thank you, I understand now.
I binary searched on the range of nodes that will be updated for each depth, and updated the range using a segment tree. Guess I've unnecessarily overcomplicated it, sad :(
Could you explain in more detail how you handle updates?
Updated my comment so it is now clearer.
I spent most of my time doing sugar, so I automatically liked it :v
The main idea is using Hall's marriage theorem. Suppose $$$X$$$ is a set of ants, and $$$N(X)$$$ is the set of all sugars reachable by at least one of the ants in $$$X$$$. All ants can be matched to a sugar if and only if $$$|X| \leq |N(X)|$$$ for every subset $$$X$$$. Equivalently, $$$\max_{X \subseteq A}(|X| - |N(X)|)$$$ ants can never be matched, so the answer is $$$|A| - \max_{X \subseteq A}(|X| - |N(X)|)$$$.
Now we are left with the following problem:
Suppose $$$A_x$$$ is the number of ants at coordinate $$$x$$$, and $$$B_x$$$ is the number of sugars at coordinate $$$x$$$. We want to pick disjoint segments $$$[l_1,r_1], [l_2,r_2], \ldots, [l_k,r_k]$$$ such that $$$\left(\sum_{i=1}^{k} \sum_{j=l_i}^{r_i} A_j \right) - \left( \sum_{i=1}^{k} \sum_{j=l_i - L}^{r_i + L} B_j \right)$$$ is maximized. Note that we don't need to worry about some $$$B_x$$$ being double-counted, since in the optimal configuration this will not happen ($$$B_x$$$ contributes negatively to the sum).
Consider:
$$$C_x = \sum_{i \leq x} A_i - \sum_{i \leq x + L} B_i$$$
$$$D_x = \sum_{i \leq x - 1} A_i - \sum_{i \leq x - L - 1} B_i$$$
Then the sum is $$$\sum_{i=1}^{k} C_{r_i} - D_{l_i}$$$. Note that the pattern of $$$C$$$ and $$$D$$$ we pick is always $$$DCDCDCDC\ldots DCDC$$$ (when looking from left to right), and $$$l_i = r_i$$$ is possible.
Now we can formulate this as dynamic programming: $$$dp[n][b]$$$ means we have considered all $$$C_x, D_x$$$ where $$$x \leq n$$$, and the last type we pick is $$$b$$$ (either $$$b = C$$$ or $$$b = D$$$).
Since this is DP, and there are updates, we store it in a segment tree. Each node in a segment tree will store $$$dp[b_1][b_2]$$$, where $$$b_1$$$ is the leftmost picked type and $$$b_2$$$ is the rightmost picked type (whether it is a $$$C$$$ or a $$$D$$$).
Now let's consider what happens if we add $$$1$$$ ant at position $$$x$$$.
But, note that if we update the segment tree DP, the answer will change depending on how many $$$C$$$ and $$$D$$$ that we picked. However, we can simplify the updates to the following:
Note that if we add both $$$C$$$ and $$$D$$$ by the same constant, then in the segment tree DP, the answer changes only depending on $$$b_1,b_2$$$ in $$$dp[b_1][b_2]$$$ (since all matched $$$CD$$$ pairs will "nullify" the constant addition). The other update is simply a point update, which we can do naively.
Next, let's consider what happens if we add $$$1$$$ sugar at position $$$x$$$. This decreases $$$C_y$$$ by $$$1$$$ for all $$$y \geq x - L$$$ and $$$D_y$$$ by $$$1$$$ for all $$$y \geq x + L + 1$$$.
The common suffix $$$y \geq x + L + 1$$$, where both decrease by the same constant, is handled exactly as in the ant case. What remains is a range update on $$$[x - L, x + L]$$$ where only $$$C$$$ changes, and that, as mentioned, depends on the number of $$$C$$$ we picked in the range.
Even without knowing how to handle the case when adding sugar, we can already solve all subtasks except the last one.
Observe that the number of $$$C$$$ that we pick in the range $$$[x - L, x + L]$$$ in the optimal answer will not be more than $$$2$$$ (or perhaps even at most $$$1$$$, but I coded for $$$2$$$ just in case), since the length of the range is only $$$2L + 1$$$. So, in the DP, we additionally index the state by the number $$$k$$$ of $$$C$$$ picked, for all $$$0 \leq k \leq 2$$$.
Now we can solve this problem in $$$O(Q \log Q)$$$.
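A static sketch of the $$$dp[b_1][b_2]$$$ node and its merge may help (names are mine; this folds the leaves left-to-right instead of building a real segment tree, and it omits the lazy updates described above):

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long NEG = LLONG_MIN / 4;  // "impossible" sentinel

// dp[b1][b2] = best value of a picked subsequence inside this node, strictly
// alternating D,C,D,C,..., whose first pick has type b1 and last pick has
// type b2 (0 = D, 1 = C). A picked D_l contributes -D_l, a picked C_r +C_r.
struct Node { long long dp[2][2]; };

Node leaf(long long D, long long C) {
    Node x;
    x.dp[0][0] = -D;     // pick D alone
    x.dp[1][1] = C;      // pick C alone
    x.dp[0][1] = C - D;  // pick both at the same position (l_i = r_i allowed)
    x.dp[1][0] = NEG;    // C before D at one position is impossible
    return x;
}

Node mergeNode(const Node& a, const Node& b) {
    Node r;
    for (int s = 0; s < 2; s++)
        for (int e = 0; e < 2; e++) {
            r.dp[s][e] = max(a.dp[s][e], b.dp[s][e]);  // picks on one side only
            for (int m = 0; m < 2; m++)                // or glued, alternating
                if (a.dp[s][m] > NEG && b.dp[m ^ 1][e] > NEG)
                    r.dp[s][e] = max(r.dp[s][e], a.dp[s][m] + b.dp[m ^ 1][e]);
        }
    return r;
}

// Best total sum of C_{r_i} - D_{l_i} over disjoint segments (0 if none).
long long best(const vector<long long>& D, const vector<long long>& C) {
    Node root = leaf(D[0], C[0]);
    for (size_t i = 1; i < D.size(); i++) root = mergeNode(root, leaf(D[i], C[i]));
    return max(0LL, root.dp[0][1]);  // full pattern starts with D, ends with C
}
```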
How can we prove that a matching of size $$$|A| - \max_{X \subseteq A}(|X| - |N(X)|)$$$ is always achievable?
Contest Day4 is Over
The contest Day4 (final day) is finished. Thank you for your participation, and let's discuss the problems.
The overall ranking is as follows. Details are available on the ranking page.
test data when
Uploaded my code for the problems I have full solutions to here
It was so bizarre when I got 100% on dango3 with a random algorithm...
In fact, as long as the data is randomly shuffled before processing, it behaves randomly on every case.
I also used the random algorithm and passed... but I still don't know how it works...
Then what's the expected number of queries?
I used randomness as well, and I think it is impossible to counter that approach, at least the way I did it. If the checker is not adaptive then my solution clearly works; even if it is adaptive, I think it may be provable that countering it is impossible.
I drew 2000 random dangos and kept my fingers crossed that I would get a nonzero answer. Then I took them out one by one, removing a particular dango whenever doing so doesn't decrease the query answer to 0, and at the end I remained with a proper stick.
Was the intended solution different? It was weird because at no point did I have an idea that would pass for 22 points. The random approach jumps directly from 7 to 100.
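The shrinking step of this approach can be sketched against a simulated, non-adaptive judge (here $$$q(S)$$$ stands in for the real interactor: the minimum color count in $$$S$$$, i.e. how many complete sticks $$$S$$$ can form; starting from the full set is a deterministic stand-in for the random initial draw):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Simulated judge: q(S) = min over the n colors of the number of dangos of
// that color in S = number of complete sticks formable from S.
int q(const vector<int>& color, const set<int>& S, int n) {
    vector<int> cnt(n, 0);
    for (int d : S) cnt[color[d]]++;
    return *min_element(cnt.begin(), cnt.end());
}

// Greedy shrink: start from a set with q >= 1, drop each dango unless
// removing it would destroy the last remaining stick.
vector<int> shrinkToStick(const vector<int>& color, int n) {
    set<int> S;
    for (int d = 0; d < (int)color.size(); d++) S.insert(d);
    for (int d = 0; d < (int)color.size(); d++) {
        S.erase(d);
        if (q(color, S, n) == 0) S.insert(d);  // removing d kills the last stick
    }
    return vector<int>(S.begin(), S.end());    // exactly one dango per color
}
```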
The problem can be solved in $$$n \cdot m \cdot \lceil \log_2(m) \rceil$$$ queries as follows:
The query can be re-described as follows: given a set of dangos $$$S$$$, $$$q(S)$$$ is the number of dangos in $$$S$$$ that have color $$$c$$$, where $$$c$$$ is the color with the lowest count in $$$S$$$.
Now notice that $$$w(S)=m-q(U-S)$$$, where $$$U$$$ is the set of all dangos, returns the number of dangos in $$$S$$$ that have color $$$c$$$, where $$$c$$$ is the color with the highest count in $$$S$$$.
Now we iterate over all dangos, maintaining $$$m$$$ groups, such that at all times the following property holds: let $$$cnt_c$$$ be the number of dangos of color $$$c$$$ processed so far; then each of the groups $$$1$$$ to $$$cnt_c$$$ holds exactly $$$1$$$ dango of color $$$c$$$.
Assume we are processing a dango of color $$$x$$$. We don't know $$$x$$$, but we only need $$$cnt_x$$$ to insert the dango into the correct group. If we insert the dango into group $$$i$$$ and then query $$$w()$$$ on that group, we receive $$$2$$$ iff $$$i \le cnt_x$$$, and $$$1$$$ otherwise. Thus we can simply binary search for $$$cnt_x$$$.
In the end, each group will hold exactly $$$1$$$ dango of each color, so the problem is solved.
Cool one, thanks. I wonder what the expected number of queries of my approach is. There is some chance it could be better, but it's rather hard to optimize and analyze properly, since that 2000 was fairly arbitrary and would need improvements for the general case.
I got 22 points at first because I shuffled the index array only once at the start, but not after each Answer().
When can we upsolve the problems on the CMS platform?
By the way, thanks for the excellent problems.
Analysis
The contest site is in analysis mode from March 24, 15:00 GMT to March 31, 15:00 GMT. You can upsolve all the problems!
The analysis mode has finished. Any chance the problems will be uploaded somewhere for upsolving?
here
I would be interested to hear the solution to the fish eating problem.
Here's my solution: (it's a bit bizarre, but I managed to implement it during the contest and it passed anyhow :)
Let's set $$$a_0 = a_{n+1} = \infty$$$. Consider those segments $$$[l, r]$$$ s.t. $$$ \sum_{i=l}^{r} a_i < \min (a_{l-1}, a_{r+1})$$$. One can prove that:
There exist only $$$O(n)$$$ such segments;
For any pair of segments, they either don't intersect or one is included in the other;
For any $$$i\in [1, n]$$$, there exist at most $$$O(\log A)$$$ segments including $$$i$$$.
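For intuition, the segment family from the definition above can be enumerated by brute force on a small array (0-based indices here, with the out-of-range neighbors treated as infinity):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Enumerate all segments [l, r] with sum(a[l..r]) < min(a[l-1], a[r+1]),
// treating a[-1] and a[n] as infinity. O(n^2): illustration only.
vector<pair<int, int>> validSegments(const vector<long long>& a) {
    int n = a.size();
    const long long INF = LLONG_MAX;
    vector<pair<int, int>> res;
    for (int l = 0; l < n; l++) {
        long long sum = 0;
        for (int r = l; r < n; r++) {
            sum += a[r];
            long long left = (l == 0) ? INF : a[l - 1];
            long long right = (r == n - 1) ? INF : a[r + 1];
            if (sum < min(left, right)) res.push_back({l, r});
        }
    }
    return res;  // forms a laminar family of size O(n)
}
```

On `{1, 1, 5, 2, 10}` this yields the nested/disjoint family `(0,1), (0,3), (0,4), (3,3)`, illustrating the three properties.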
Let's try to maintain the set $$$S$$$ of valid segments under modifications. Consider a modification of $$$a_i$$$. Only segments that either include $$$i$$$ or use $$$i-1$$$/$$$i+1$$$ as their right/left endpoint are affected. We store the segments at their endpoints (using std::set or std::vector), and by binary searching on segment trees to quickly find valid segments, we can simply delete and rebuild the affected segments in a quite straightforward manner, yielding a time complexity of $$$O(\log A \log n)$$$ per modification.
Consider a query $$$[l, r]$$$. Let's find the rightmost $$$i\in [l, r]$$$ s.t. $$$a_i > \sum_{k=l}^{i-1} a_k$$$, and the leftmost $$$j \in [l, r]$$$ s.t. $$$a_j > \sum_{k=j+1}^r a_k$$$. Fish $$$x$$$ can survive iff $$$x \in [i, j]$$$ and $$$x$$$ is not contained in any valid segment lying strictly inside $$$[i, j]$$$.
It's not hard to see that if for each position we maintain an extra counter $$$cnt[\cdot]$$$ counting the number of valid segments containing it, then exactly those $$$x \in [i, j]$$$ whose $$$cnt[x]$$$ attains the minimum of $$$cnt[i \ldots j]$$$ satisfy the conditions described above. We use a segment tree supporting range $$$+1$$$/$$$-1$$$ updates (applied when a segment is inserted into or erased from $$$S$$$) while maintaining the minimum and the number of positions attaining it. Therefore, after finding $$$i$$$ and $$$j$$$, also via binary search on segment trees, we can easily get the answer and thus solve the whole problem!
Thanks for the solution. I had the first idea at the beginning but could not improve it. I didn't see that the number of segments does not exceed $$$O(n)$$$ and that they never cross. Very cool solution.
It's just a little too complicated, though. There exists a rather simple and straightforward solution: basically, we maintain $$$O(\log A)$$$ prefixes/suffixes on segment trees and then try to speed up the node-merging process. Feel free to ask wasa855 for more details if anyone's interested :)
First build a simple segment tree on the array. For each segtree node $$$[L,R]$$$, maintain, for each fish in $$$[L,R]$$$, the leftmost and rightmost fish $$$(l,r)$$$ it can eat.
Consider a valid candidate pair $$$(l,r)$$$ for a segtree node $$$[L,R]$$$. First, either $$$l=L$$$ or $$$r=R$$$ must hold. If $$$l=L$$$, then either $$$\sum_{i=l}^r a_i<a_{r+1}$$$ or $$$r=R$$$; since the sum $$$\sum_{i=l}^r a_i$$$ at least doubles each time a new fish is added, there are only $$$O(\log V)$$$ candidate pairs. Then for each candidate pair, count how many starting fish end up with it.
To merge two segtree nodes, just use simple two pointers.
Check my code here for details.
Also check out my (long and shitty) code :)
How to solve the problem Jail in Day1?
I have tried some greedy methods, but they don't work.
Lemma: if a solution exists, then we can move the prisoners from $$$S_k$$$ to $$$T_k$$$ directly, one by one.
I don't have a formal proof of the above, though. Assuming it is true, we have the following:
Suppose prisoner $$$i$$$ wants to go from $$$S_i$$$ to $$$T_i$$$, and $$$j$$$ wants to go from $$$S_j$$$ to $$$T_j$$$. Then, if $$$S_j$$$ is on the simple path from $$$S_i$$$ to $$$T_i$$$, $$$j$$$ must move before $$$i$$$, so we draw an edge $$$j \to i$$$. If $$$T_j$$$ is on the simple path from $$$S_i$$$ to $$$T_i$$$, $$$i$$$ must move before $$$j$$$, so draw an edge $$$i \to j$$$. Afterwards, we can simply check whether there is a cycle in the directed graph.
Everything else is just implementing data structures to do DFS on the digraph quickly.
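A naive quadratic sketch of this check (assuming the usual setup, a tree on $$$n$$$ vertices and prisoners moving $$$S_i \to T_i$$$; the real solution replaces the pairwise scan and the plain DFS with data structures):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Vertices on the simple path u -> v in the tree, found by BFS parents.
vector<int> treePath(int u, int v, const vector<vector<int>>& adj) {
    int n = adj.size();
    vector<int> par(n, -1);
    queue<int> q; q.push(u); par[u] = u;
    while (!q.empty()) {
        int x = q.front(); q.pop();
        for (int y : adj[x]) if (par[y] == -1) { par[y] = x; q.push(y); }
    }
    vector<int> path;
    for (int x = v; x != u; x = par[x]) path.push_back(x);
    path.push_back(u);
    return path;
}

// Build the "must move before" digraph and check it for a cycle.
bool feasible(int n, const vector<pair<int, int>>& edges,
              const vector<int>& S, const vector<int>& T) {
    int m = S.size();
    vector<vector<int>> adj(n);
    for (auto [a, b] : edges) { adj[a].push_back(b); adj[b].push_back(a); }
    vector<vector<int>> g(m);  // edge u -> v: u must move before v
    for (int i = 0; i < m; i++) {
        auto p = treePath(S[i], T[i], adj);
        set<int> on(p.begin(), p.end());
        for (int j = 0; j < m; j++) {
            if (j == i) continue;
            if (on.count(S[j])) g[j].push_back(i);  // j must leave i's path first
            if (on.count(T[j])) g[i].push_back(j);  // i must pass before j arrives
        }
    }
    vector<int> col(m, 0);  // 0 = unseen, 1 = on stack, 2 = done
    function<bool(int)> dfs = [&](int u) -> bool {
        col[u] = 1;
        for (int v : g[u]) {
            if (col[v] == 1) return false;          // back edge: cycle
            if (col[v] == 0 && !dfs(v)) return false;
        }
        col[u] = 2;
        return true;
    };
    for (int i = 0; i < m; i++)
        if (col[i] == 0 && !dfs(i)) return false;
    return true;
}
```

For instance, on a path graph 0-1-2-3, a single prisoner 0 → 3 is feasible, while two prisoners swapping 0 ↔ 3 create a 2-cycle in the digraph and are not.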
We can prove the lemma "if the solution exists, we can move prisoners directly one by one". The idea is "if there's a prisoner appearing in two different sections of moves, we can contract them".
Let's define a sequence of moves like "move prisoner $$$a_1$$$ for $$$n_1$$$ times, then move prisoner $$$a_2$$$ for $$$n_2$$$ times, then..., then move prisoner $$$a_k$$$ for $$$n_k$$$ times", such that $$$a_i \neq a_{i+1}$$$ for all $$$i$$$, like run-length compression.
The lemma is equivalent to: if there is a valid sequence of moves with $$$k \gt M$$$, we can "contract" it into one with smaller $$$k$$$. Consider a minimal subsequence $$$(a_l, a_{l+1}, \dots, a_r)$$$ such that $$$a_l = a_r$$$ and $$$a_l, a_{l+1}, \dots, a_{r-1}$$$ are all different.
Let's say prisoner $$$X = a_l$$$ moves from $$$V_1$$$ to $$$V_2$$$ in turn $$$l$$$, and moves from $$$V_2$$$ to $$$V_3$$$ in turn $$$r$$$. In this case, the moves in turns $$$l+1, \dots, r-1$$$ will not pass through vertex $$$V_2$$$ (otherwise they would not be valid moves). If we delete vertex $$$V_2$$$ from the tree, it splits into several components; one includes $$$V_1$$$ and another includes $$$V_3$$$. Each of the moves in turns $$$l+1, \dots, r-1$$$ is done exclusively within one component, which means that moves in different components do not interfere with each other.
Let's say the moves in the component including $$$V_3$$$ are done in turns $$$b_1, b_2, \dots, b_s$$$ (in increasing order), and the remaining turns are $$$c_1, c_2, \dots, c_{(r-l-1)-s}$$$ (in increasing order). Then, if we rewrite the subsequence $$$(a_l, a_{l+1}, \dots, a_r)$$$ as $$$(a_{b_1}, a_{b_2}, \dots, a_{b_s}, X, a_{c_1}, a_{c_2}, \dots, a_{c_{(r-l-1)-s}})$$$, it is still a valid sequence of moves, and its length decreases by $$$1$$$.
We can repeat this contraction procedure while $$$k \gt M$$$, eventually obtaining a sequence of moves with $$$k = M$$$, which proves the lemma.
Thank you for the great contest. I really liked flights, team, sugar, fish2, reconstruction.
The competition summary of JOI 2022 Spring Camp has been published! We can now see (1) problem statements, (2) actual test data, (3) sample source codes, (4) Japanese editorials, and (5) the score distribution of the onsite contest on the JOI website.
https://www.ioi-jp.org/camp/2022/2022-sp-tasks/index.html
The page is in Japanese, so feel free to read with translators if you want.
You can submit all problems here: https://oj.uz/problems/source/595
For problem
device2
, currently, the riffle shuffle is only done pseudo-randomly, though the contest announcement said otherwise (i.e. it is not guaranteed that the shuffle is random). This is because the test data doesn't contain any information about how the shuffle should be done. The problem might be rejudged if we manage to find the correct shuffling method for each test.
In fact, problem reconstruction can be solved in $$$O(m\log m+Q)$$$ using the same solution as spanning tree with minimum variance.