Hello everyone!
JOI Open Contest 2021 will be held from Sunday, June 6, 04:00 UTC to Monday, June 7, 04:00 UTC. You can start at any time within that window, and the duration is 5 hours. Note that your window is cut short if you start after Sunday, June 6, 23:00 UTC, i.e. less than 5 hours before the contest ends.
There will be 3 tasks, and the maximum score for each task is 100 points. Since this is an IOI-style contest, there may be subtasks. Each submitted source program must be written only in C++. Problem statements will be provided in both Japanese and English. You can find more details on the contest page.
Past contests are also available on that page, and you can test your code on the oj.uz website.
Good luck and have fun!
Reminder: The contest will start in an hour.
Each submitted source program must be written only in C++
How to solve Monster?
Here is my solution, which uses roughly 12000 queries by my estimate, but gets 100 points for some reason.
The idea is to obtain a Hamiltonian path with this trick. The path can then be partitioned into subsegments, each of which can be rotated to form a valid order.
Do you know how to solve any of the others? I could only do the updates in Crossing and a cubic solution to Financial.
I have no idea how similar my idea is to rama_pang's or ko_osaga's solution, but this is how I did it.
We find a semi-sorted array with blocks of elements of the shape $$$A, A-1, A-2, \dots$$$. An example of a semi-sorted array is $$$0, 2, 1, 5, 4, 3$$$, with three blocks: $$$0$$$; $$$2,1$$$; and $$$5,4,3$$$.
If we're able to find the blocks, we can solve the problem. However, finding the blocks is quite tough so we need another observation.
If we merge any two consecutive decreasing ranges, we get a cyclic rotation of $$$A, A-1, A-2, \dots$$$. This means that we can loosen our condition and find blocks that are cyclic rotations instead.
One way to do this is to iterate left to right. Let $$$a$$$ be the start of the next block, with $$$a=0$$$ initially. We compare $$$a$$$ and $$$a+2$$$; depending on which one wins, we split into two cases.
Case 1: If $$$a$$$ beats $$$a+2$$$, then we sweep right until $$$a$$$ loses to $$$a+k$$$. Then $$$a$$$ to $$$a+k-1$$$ forms a new range.
Case 2: If $$$a$$$ loses to $$$a+2$$$, then we sweep right until $$$a$$$ beats $$$a+k$$$. Then $$$a$$$ to $$$a+k$$$ forms a new range.
A bonus property of this method is that each range has at most two possible configurations for each case.
For case 1, the configurations look like $$$3,2,1,0$$$ or $$$2,1,0,3$$$. For case 2, they look like $$$0,3,2,1$$$ or $$$1,0,3,2$$$. Of course, a block size of $$$3$$$ is a corner case that has to be dealt with separately.
This step should take at most $$$N$$$ queries. (I think it's actually closer to $$$\frac{2}{3}N$$$, because we never compare $$$a$$$ and $$$a+1$$$.)
Since each range has at most two possible configurations per case, we just need to verify which configuration it is.
We solve the first block first. If we query $$$(a,x)$$$ for all $$$x$$$ for some fixed $$$a$$$, and $$$a$$$ wins against exactly one $$$x$$$, then $$$a$$$ is either $$$0$$$ or $$$1$$$. This helps us differentiate one of the configurations from the other. (You can optimize this by querying $$$x$$$ across the first two blocks only.)
The subsequent blocks are much easier. There are only two possible positions for the smallest element of each block. We assume it's one of them, then query that element against the biggest element of the previous block to check whether the assumption is correct. This step should take about $$$\frac{1}{3}N$$$ queries, since each block's size is at least $$$3$$$. (By the way, my code skips this optimization because it got 100 before I added it; I just try the 1st, 2nd, last, and 2nd-last positions.)
I have no idea, actually. Supposedly the sorting takes $$$8977$$$ queries. The second step takes $$$\frac{2}{3}N$$$ queries according to my analysis, and the last steps take $$$N$$$ and $$$\frac{1}{3}N$$$ queries respectively in the worst case. Based on this, I'd expect the total to be closer to $$$11000$$$ queries.
My guess is that the step of finding the first block only needs queries proportional to the sizes of the first two blocks, and that the adaptive grader wasn't able to force my worst case, where the semi-sorted sequence is just a few giant blocks. As such, finding the first block only required a few queries, because the average block size is actually very small.
Use merge sort with the (faulty) comparator. The resulting array will have the form $$$[x_1, x_1+1, x_1+2, \cdots, N-1, x_2, x_2+1, x_2+2, \cdots, x_1-1, x_3, x_3+1, x_3+2, \cdots]$$$ (we can divide the array into groups of consecutive increasing values, where the groups' starting values decrease). For example, $$$[3, 4, 5, 2, 0, 1] = [[3, 4, 5], [2], [0, 1]]$$$.
Also, in this almost-sorted array, the groups of size $$$1$$$ will never be adjacent to each other, and there are at least $$$2$$$ groups.
This takes at most $$$\sum_{i = 1}^{N}{\lceil \log_2{i} \rceil} = N \lceil \log_2 N \rceil - 2^{\lceil \log_2 N \rceil} + 1$$$ queries ($$$8977$$$ queries for $$$N = 1000$$$).
If we can find the position of the maximum element in the almost-sorted array, we can get the sorted array.
Assume the maximum element is at position $$$x$$$. Then the $$$x+1$$$ greatest elements are $$$almost[x], almost[x - 1], \cdots, almost[0]$$$, in decreasing order.
Then, using $$$almost[0]$$$, query $$$almost[i]$$$ for $$$x < i$$$ in increasing order of $$$i$$$ until $$$Query(almost[0], almost[i]) = 0$$$. If this happens, then $$$almost[i] = almost[0] - 1$$$, so we know the second group of consecutive elements.
We can recurse until we're done. This takes a total of $$$N$$$ queries.
We can find the maximum element in $$$2N$$$ queries. Store every $$$i$$$ where $$$Query(almost[0], almost[i]) = 0$$$ in a vector $$$wrong$$$.
We can check whether an element $$$v$$$ is the maximum or second maximum by querying all other elements. If $$$Query(v, i) = 0$$$ only occurs once, then $$$v$$$ is either the maximum or second maximum.
If $$$wrong.size() = 1$$$, then check whether $$$almost[wrong[0]]$$$ is a maximum element or a second maximum element. If it is, then $$$almost[0]$$$ must be the maximum element. If not, then $$$almost[1]$$$ is the maximum element.
If $$$wrong.size() > 1$$$, then $$$almost[wrong[wrong.size() - 2]]$$$ (indexing the second-to-last entry of $$$wrong$$$) must be the maximum element.
After finding the maximum element, we will need an additional $$$N$$$ queries to construct the sorted array.
This uses around $$$N \log N + 2N$$$ queries and yields 92-94 points.
We can use casework so that finding the first $$$1$$$–$$$3$$$ groups of consecutive elements takes at most $$$N$$$ plus a constant number of queries.
It uses the fact that, for a group of size $$$\geq 3$$$, we can easily find the last element in the group (if $$$Query(almost[i], almost[i + 2]) = 0$$$, then $$$i + 1$$$ is the end of the group).
I haven't submitted the 100-point solution, but it's correct for all permutations with $$$N \leq 10$$$ in local testing.
Edit: my solution.
Edit 2: Just submitted in analysis mode, and it got 100 points.
Does anybody know how to solve Crossing or Financial? They sound easy (especially Financial), but I was unable to solve them.
Idea for Crossings
There aren't many possible strings that you can get from the three initial ones. Therefore, you can just generate all the ones that you can get. Let's call these reachable.
You need to check whether your current $$$t0$$$ is one of the reachable ones. You can do this easily in $$$O(n^2)$$$.
In order to optimize this solution, you can calculate hashes for the reachable strings and store them in, for example, a set.
Now you just need a way to quickly recalculate the hash of $$$t0$$$ after every step and check whether that hash is one of those stored in the set.
Also, a quick note. You can transform letters into numbers: 'J' to 0, 'O' to 1 and 'I' to 2. This will make your life a bit easier when implementing the solution.
To calculate the hash of $$$t0$$$, you can use a segment tree. In this segtree, the node that stores information for the range $$$[le; ri]$$$ stores the hash (the sum) of that substring of $$$t0$$$. You should implement two operations:
Update the values in the range $$$[L; R]$$$ to a given value. This requires lazy propagation.
Get the sum of the whole range — i.e., just return the value stored in the root of the tree (since usually it is the node that stores information for the range $$$[0; n-1]$$$).
Both of these operations are standard ones for segtree and the time complexity for both of them is $$$O(\log n)$$$, which means that the total complexity will be $$$O(n + q \log n)$$$.
Thanks, really! I think I was able to do the updates fast with lazy propagation, but there was a test case that didn't pass for some reason: test case 40 in subtask 2. Oddly, it passed when I used an even base for hashing, but then of course many other test cases failed. I also tried to prove that there aren't many strings that can be generated, but failed. Thanks a lot again!
My idea for financials
Traverse the list in increasing order of $$$a[i]$$$. If there is more than one index with the same $$$a[i]$$$, traverse those from the rightmost to the leftmost (so, backwards).
While traversing, store $$$dp[i]$$$ — the answer if you take some numbers before the $$$i$$$th index, you take the $$$i$$$th one, and the $$$i$$$th one is the largest among them. The final answer is the maximum of these $$$dp$$$ values.
When we calculate $$$dp[i]$$$, it is the maximum of $$$dp[j] + 1$$$ over all indices $$$j$$$ reachable from the $$$i$$$th one. What do I call reachable? An index $$$j$$$ is reachable if you can go from the $$$j$$$th index to the $$$i$$$th index by jumping along indices that we have previously traversed, never jumping by more than $$$d$$$. We can maintain this with a DSU.
This gives an $$$O(n^2)$$$ solution, which can be easily optimized to $$$O(n \log n)$$$ with segtree.
Hope this doesn't sound too complicated :)
The problems were not bad, but I was disappointed because my expectations for JOI Open are very high. Financial is very standard, and Crossing is just... meh. Monster is OK and I liked solving it, but the query limit seems unnecessarily tight.
You can solve all problems here: https://oj.uz/problems/source/574
The editorial for the problem Monster is... very sketchy.
Do you have a proof, E869120?
I think this method can get under $$$10000$$$ queries deterministically.
Yeah, I'm just talking about the model solution from the author.
Also, I opened its code, and there were tons of ifs. I don't want to think it's intended.