Hello,
For this question: does there exist an optimal solution, i.e., one that processes each query in O(1) or O(log K) time, or in some significantly more efficient way, if the constraints on each query were higher?
Thank you for reading, L
You could answer any single query (qi < K^2) in O(K log(max Ai + max Bi)) time with a binary search on the value of the answer: for each candidate value, a two-pointer sweep counts how many sums are at most that value, and the answer is the smallest candidate for which that count reaches qi.
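In case it helps, here is a minimal sketch of that idea, assuming the query asks for the qi-th smallest pairwise sum A[i] + B[j] over two arrays of length K (the arrays A and B, their size K, and the exact meaning of qi are assumptions here, since the problem statement isn't quoted in this thread):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Count pairs (i, j) with A[i] + B[j] <= v, assuming both arrays are sorted
// ascending. As the A value grows, the admissible prefix of B only shrinks,
// so the whole sweep is O(K).
long long countAtMost(const vector<long long>& A, const vector<long long>& B, long long v) {
    long long cnt = 0;
    int j = (int)B.size() - 1;
    for (long long a : A) {
        while (j >= 0 && a + B[j] > v) --j;
        cnt += j + 1;   // B[0..j] all pair with a to give sums <= v
    }
    return cnt;
}

// q-th smallest pairwise sum (1-indexed, q <= K*K): binary search on the
// value, O(K log(maxA + maxB)) per query.
long long kthSmallestSum(vector<long long> A, vector<long long> B, long long q) {
    sort(A.begin(), A.end());
    sort(B.begin(), B.end());
    long long lo = A.front() + B.front(), hi = A.back() + B.back();
    while (lo < hi) {
        long long mid = lo + (hi - lo) / 2;
        if (countAtMost(A, B, mid) >= q) hi = mid;   // answer is <= mid
        else lo = mid + 1;                           // answer is  > mid
    }
    return lo;   // smallest value with at least q sums not exceeding it
}

int main() {
    vector<long long> A = {1, 3, 5}, B = {2, 4, 6};
    // Sorted sums: 3,5,5,7,7,7,9,9,11 -> 4th smallest is 7.
    cout << kthSmallestSum(A, B, 4) << '\n';
}
```

This doesn't answer the original question about doing better than O(K) per query, but it makes the per-query cost of the binary-search approach concrete: sorting is paid once, and each query costs one binary search over the value range with an O(K) counting sweep per step.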