The statement and dataset of this problem can be found here.
Short statement: there are $$$ N \le 20000 $$$ objects, each with value $$$ 0 $$$ or $$$ 1 $$$. In each query you can select some objects and permute them to form a 0-1 sequence, and the interactor tells you the total number of 0-1 and 1-0 shifts in the sequence (e.g. 011000 has 2 shifts and 000101 has 3 shifts). It is guaranteed that the value of the 1st object is $$$ 0 $$$. You are required to find the number of objects with value $$$ 0 $$$ within $$$ 226 $$$ queries ($$$100\%$$$) / $$$ 904 $$$ queries ($$$>25\%$$$), and the total length of the sequences over all queries is bounded by $$$ 10^5 $$$.
When first encountering the problem, a linear algorithm comes to mind immediately. For each object $$$ x $$$, we can check whether there is a shift in the value sequence $$$ 0, x $$$ (using the 1st object as the known $$$ 0 $$$). If so, $$$ x = 1 $$$; otherwise $$$ x = 0 $$$. Hence we can solve the problem within $$$ N $$$ queries, which gets $$$ 10\% $$$.
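As a sanity check, the linear approach can be sketched against a hypothetical local interactor (the `make_interactor` mock and all function names here are illustrative, not part of the actual judge):

```python
import random

# Hypothetical local interactor: `values` is the hidden array; a query is a
# list of object indices, and the answer is the number of adjacent unequal
# pairs ("shifts") in the corresponding value sequence.
def make_interactor(values):
    def ask(indices):
        seq = [values[i] for i in indices]
        return sum(seq[i] != seq[i + 1] for i in range(len(seq) - 1))
    return ask

def count_zeros_naive(n, ask):
    # Object 0 is guaranteed to have value 0. For every other object x,
    # query the sequence (0, x): one shift means x = 1, no shift means x = 0.
    zeros = 1
    for x in range(1, n):
        zeros += 1 if ask([0, x]) == 0 else 0
    return zeros  # n - 1 queries in total

values = [0] + [random.randint(0, 1) for _ in range(9)]
ask = make_interactor(values)
assert count_zeros_naive(10, ask) == values.count(0)
```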
We can then make some small optimizations, without changing the linearity of the algorithm. After two queries, we have at least two $$$ 0 $$$s or two $$$ 1 $$$s. We can then set the value sequence to $$$ x, 0, y, 0 $$$ or $$$ x, 1, y, 1 $$$, so that the values of $$$ x $$$ and $$$ y $$$ are determined simultaneously. This leads to an approach with $$$ \frac{N}{2} + 1 $$$ queries, which gets $$$ 25\% $$$.
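To see why one answer pins down both unknowns, note that in the value sequence $$$ x, 0, y, 0 $$$ the pair $$$ x,0 $$$ contributes $$$ x $$$ shifts and the pairs $$$ 0,y $$$ and $$$ y,0 $$$ contribute $$$ 2y $$$ shifts, so the answer is $$$ x + 2y $$$. A quick exhaustive check of this decoding (the `shifts` helper is a local stand-in for the interactor):

```python
def shifts(seq):
    # number of adjacent unequal pairs, as reported by the interactor
    return sum(seq[i] != seq[i + 1] for i in range(len(seq) - 1))

# r = x + 2*y, so the low bit recovers x and the rest recovers y
for vx in (0, 1):
    for vy in (0, 1):
        r = shifts([vx, 0, vy, 0])
        assert (r & 1, r >> 1) == (vx, vy)
```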
For higher points, we need a sublinear algorithm. As we only need the sum of the values, we don't have to determine the value of each object. One observation is that we can take advantage of the objects we have already determined, and infer the sum of values by asking for $$$ a, X, b, X, c, X, d, X, \ldots, X $$$, where $$$ X = 0 $$$ or $$$ 1 $$$ depending on which kind of determined object we have more of (so that the sequence can contain more unseen objects for counting). Taking $$$ X = 0 $$$ as an example, the number returned by the interactor is exactly $$$ a + 2b + 2c + 2d + \ldots $$$: every object is counted twice except $$$ a $$$, so the value of $$$ a $$$ is determined by the parity of the result. Hence we also obtain more determined objects (which can be used as $$$ X $$$ to extend the length of later queries) while we count. Directly applying this approach enables us to handle $$$ N \le \sum_{i = 1}^{m + 1} \lceil i / 2 \rceil $$$ within $$$ m $$$ queries, which gets $$$ 80\% $$$.
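The decoding of one such counting query can be sketched as follows (again with a local `shifts` stand-in for the interactor; the variable names are illustrative):

```python
import random

def shifts(seq):
    # number of adjacent unequal pairs, as reported by the interactor
    return sum(seq[i] != seq[i + 1] for i in range(len(seq) - 1))

# One counting query with X = 0: interleave the unknown objects with known
# zeros as a, 0, b, 0, c, 0, ..., 0.  Every unknown except the first
# contributes two shifts; the first contributes one.
unknown = [random.randint(0, 1) for _ in range(6)]
seq = []
for v in unknown:
    seq += [v, 0]
r = shifts(seq)            # r = a + 2b + 2c + ...

a = r % 2                  # parity isolates the first object
total = a + (r - a) // 2   # sum of all unknowns covered by the query
assert a == unknown[0]
assert total == sum(unknown)
```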
However, one should notice that putting more than one unknown between consecutive $$$ X $$$s, or using both $$$ 1 $$$s and $$$ 0 $$$s as separators, seems useless. The former is exactly why we use the pattern $$$ a, X, b, X, \ldots $$$; the latter is because if a query only gives us a variable $$$ p - q $$$, we still need another query to solve for $$$ p $$$ and $$$ q $$$, so why not ask for $$$ p $$$ and $$$ q $$$ individually? This may leave us confused about how to optimize the algorithm further.
In fact, there is still an easy optimization of the current algorithm. We can divide the program into two phases: in the first phase, we use patterns like $$$ x, 0, y, 0 $$$ and determine two variables at a time; in the second phase, we use the determined variables to perform counting queries. Balancing the two phases (roughly 3:4 in the number of queries) gives $$$ 244 $$$ queries (about $$$ 1.75 \sqrt{n} $$$ by calculation), which gets $$$ 92.62\% $$$.
We now reformulate the statement (into a stronger form) according to the discussion above. We hold a value $$$ k $$$, initially $$$ k = 1 $$$. In each step, we may ask for the sum of no more than $$$ \lceil k / 2 \rceil $$$ objects. Once we know the value of some specific object, $$$ k $$$ increases by one; and after each step, $$$ k $$$ automatically increases by one. Our target is to determine the sum of all objects. The two-phase algorithm above simply asks for one object per step in the first phase, and for as many objects as possible in the second phase.
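The $$$ 244 $$$ figure can be reproduced by brute force over this model. The sketch below assumes one particular reading of it: a phase-1 step yields $$$ k \mathrel{+}= 2 $$$ (one object asked and fully determined, plus the automatic increase), while a phase-2 step accounts for $$$ \lceil k / 2 \rceil $$$ objects of the sum but only yields $$$ k \mathrel{+}= 1 $$$ (only the first object is pinned down, via parity):

```python
def total_queries(n, t):
    # t phase-1 steps, then greedy phase-2 steps until all n objects are
    # accounted for in the sum; returns the total number of steps.
    k = accounted = 1 + 2 * t   # object 1 is known, plus 2 per phase-1 step
    steps = t
    while accounted < n:
        accounted += (k + 1) // 2   # one counting query covers ceil(k/2) objects
        k += 1                      # only its first object becomes determined
        steps += 1
    return steps

# try every reasonable phase-1 length and keep the best total
best = min(total_queries(20000, t) for t in range(400))
assert best == 244
```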
Intuitively, the bottleneck of our algorithm seems to be that the first phase asks for only one object at a time, which utilizes only $$$ 1 $$$ bit of information per query. This still seems hard to optimize, but if you have met the following problem before, things become easier: recover a 0-1 sequence of length $$$ n $$$ within $$$ O(n / \log n) $$$ steps by asking for the sum of some subset of elements (for example, $$$ n = 1000 $$$ solved in $$$ 300 $$$ steps).
The solution of that problem is a divide-and-conquer-like approach. Let $$$ f_m $$$ be the length of the longest sequence we can recover within $$$ 2^m $$$ steps, and let $$$ q_m $$$ be its query sequence, with the last step asking for the sum of the whole sequence. Then $$$ f_{m + 1} \ge 2 f_m + 2^m - 1 $$$. This is done by dividing the sequence into two blocks of size $$$ f_m $$$ and one block of size $$$ 2^m - 1 $$$. First we ask for the total number of ones $$$ c $$$ in the second block of size $$$ f_m $$$. For each non-last query $$$ q_{m, i} $$$ in $$$ q_m $$$, applying it to the two $$$ f_m $$$ blocks would give answers $$$ a $$$ and $$$ b $$$ respectively; instead, we construct two queries, asking for $$$ a + b $$$ and $$$ a + (c - b) + i $$$, where $$$ i $$$ is a bit in the last block. Clearly we can recover $$$ a $$$, $$$ b $$$ and $$$ i $$$ from the two answers. The last query obtains the total number of ones in the whole sequence, and the step count checks out: $$$ 1 + 2(2^m - 1) + 1 = 2^{m+1} $$$. Applying this construction recursively, we are done within $$$ O(n / \log n) $$$ steps.
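The recovery of $$$ a, b, i $$$ works because $$$ (a + b) - (a + (c - b) + i) + c = 2b - i $$$: the parity isolates $$$ i $$$, then $$$ b $$$ and $$$ a $$$ follow. A quick exhaustive check of this decoding (the `decode` helper is illustrative):

```python
import itertools

# s1 = a + b, s2 = a + (c - b) + i, where c is the known number of ones in
# the second block, a and b are the answers of one q_m query restricted to
# the two halves, and i is one bit from the third block.
def decode(s1, s2, c):
    d = s1 - s2 + c          # equals 2*b - i
    i = d % 2                # parity isolates the bit i
    b = (d + i) // 2
    a = s1 - b
    return a, b, i

# exhaustive check over small ranges
for a, b, i in itertools.product(range(5), range(5), range(2)):
    for c in range(b, 8):    # c >= b since b counts ones inside that block
        assert decode(a + b, a + (c - b) + i, c) == (a, b, i)
```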
Thus, in the first phase of our algorithm, we can use a batch of $$$ 2^m $$$ queries to obtain $$$ f_m $$$ determined objects, provided we already have enough determined objects to serve as separators in the sums. We can sort the required bits in $$$ q_m $$$ to obtain the number of initially required bits, then do a DP on the number of queries (taking the increase of $$$ k $$$ at each step into account). Finally, we enumerate the position at which the second phase begins. This yields $$$ 209 $$$ queries in total, much smaller than the bound given in the problem. A tiny optimization reducing the number of ones required by the last query of $$$ q_m $$$ further enables us to solve the problem within $$$ 203 $$$ queries.
The implementation of this approach can be found here.