Assume a Codeforces-like platform with $$$n$$$ users whose ratings vary over time $$$[0, T]$$$. Given an interval $$$[l, r]$$$, we would like to find the top $$$k$$$ profiles with the highest rating delta in that interval.
Is it possible to answer this query without iterating over all $$$n$$$ interval sums? For example, we can get $$$O(n\log(n)\log(T))$$$ per query by maintaining a Fenwick tree per user and sorting the list of interval sums (see the sketch below).
Edit: Added the requirement to avoid looping over all $$$n$$$ users.
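For reference, here is a minimal sketch of the baseline mentioned above: one Fenwick tree per user over the time axis (so rating updates stay cheap as they arrive), then sorting the $$$n$$$ interval sums per query. The structure and names (`Fenwick`, `topK`) are mine, not from the post.

```cpp
#include <bits/stdc++.h>
using namespace std;

// One Fenwick (BIT) tree per user over the time axis, supporting
// point updates (a rating delta at time t) and prefix-sum queries.
struct Fenwick {
    vector<long long> bit;
    Fenwick(int T) : bit(T + 1, 0) {}
    void add(int t, long long d) {           // apply delta d at time t (1-based)
        for (; t < (int)bit.size(); t += t & -t) bit[t] += d;
    }
    long long prefix(int t) const {          // sum of deltas over times 1..t
        long long s = 0;
        for (; t > 0; t -= t & -t) s += bit[t];
        return s;
    }
    long long rangeSum(int l, int r) const { // sum of deltas over times l..r
        return prefix(r) - prefix(l - 1);
    }
};

// Baseline query: O(n log T) to collect the interval sums,
// plus O(n log n) to sort them.
vector<int> topK(const vector<Fenwick>& users, int l, int r, int k) {
    int n = users.size();
    vector<pair<long long, int>> sums(n);    // (interval delta, user id)
    for (int u = 0; u < n; ++u)
        sums[u] = {users[u].rangeSum(l, r), u};
    sort(sums.begin(), sums.end(),
         [](auto& a, auto& b) { return a.first > b.first; });
    vector<int> ids;
    for (int i = 0; i < min(k, n); ++i) ids.push_back(sums[i].second);
    return ids;
}
```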
Can't you just use prefix sums for each profile? Then use a priority queue (or selection): you precompute in $$$O(T \cdot n)$$$ total and each query works in $$$O(n + k)$$$.
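If I understand this suggestion correctly, it looks something like the sketch below (names are mine): build prefix sums once in $$$O(n \cdot T)$$$, then per query compute the $$$n$$$ interval deltas and select the top $$$k$$$ with `nth_element`, which is $$$O(n)$$$ on average plus $$$O(k \log k)$$$ to order the winners.

```cpp
#include <bits/stdc++.h>
using namespace std;

// pre[u][t] = total delta of user u over times 1..t, built once in O(n*T).
vector<vector<long long>> buildPrefix(const vector<vector<long long>>& deltas) {
    int n = deltas.size(), T = deltas.empty() ? 0 : deltas[0].size();
    vector<vector<long long>> pre(n, vector<long long>(T + 1, 0));
    for (int u = 0; u < n; ++u)
        for (int t = 0; t < T; ++t)
            pre[u][t + 1] = pre[u][t] + deltas[u][t];
    return pre;
}

// Query: O(n) average via nth_element (introselect) to isolate the top k,
// then O(k log k) to put the winners in order.
vector<int> topK(const vector<vector<long long>>& pre, int l, int r, int k) {
    int n = pre.size();
    vector<pair<long long, int>> sums(n);
    for (int u = 0; u < n; ++u)
        sums[u] = {pre[u][r] - pre[u][l - 1], u};  // interval delta over [l, r], 1-based
    k = min(k, n);
    nth_element(sums.begin(), sums.begin() + k, sums.end(),
                [](auto& a, auto& b) { return a.first > b.first; });
    sort(sums.begin(), sums.begin() + k,
         [](auto& a, auto& b) { return a.first > b.first; });
    vector<int> ids;
    for (int i = 0; i < k; ++i) ids.push_back(sums[i].second);
    return ids;
}
```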
That's true. What I should have asked is how to reduce the $$$n$$$ factor in the query, since I do not want to iterate over the list of users.
You can store the answers in $$$dp[l][r]$$$ and answer each query in $$$O(k)$$$, but the space complexity will be very, very large. I think there is a smarter way, but this is the first thing that comes to mind.
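For concreteness, a toy version of that DP (my sketch; only viable for tiny $$$T$$$, since the table alone is $$$\Theta(T^2 k)$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Toy sketch of the dp[l][r] idea: precompute the sorted top-k user list
// for every interval, then answer each query in O(k). Memory is
// Theta(T^2 * k), hopeless at T ~ 1e6 (about 5e13 ids for k = 100).
int main() {
    vector<vector<long long>> deltas = {{10, -10, 20}, {5, 10, -20}};
    int n = deltas.size(), T = deltas[0].size(), k = 2;

    // dp[l][r] = ids of the top-k users by delta over [l, r] (1-based).
    vector<vector<vector<int>>> dp(T + 1, vector<vector<int>>(T + 1));
    for (int l = 1; l <= T; ++l) {
        vector<long long> sum(n, 0);             // running delta over [l, r]
        for (int r = l; r <= T; ++r) {
            for (int u = 0; u < n; ++u) sum[u] += deltas[u][r - 1];
            vector<int> ids(n);
            iota(ids.begin(), ids.end(), 0);
            sort(ids.begin(), ids.end(),
                 [&](int a, int b) { return sum[a] > sum[b]; });
            ids.resize(min(k, n));
            dp[l][r] = ids;
        }
    }
    for (int u : dp[1][2]) cout << u + 1 << ' ';  // prints: 2 1
    cout << '\n';
}
```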
Yeah, that DP is not feasible because $$$T$$$ can be quite big: storing a top-$$$k$$$ list for all $$$O(T^2)$$$ intervals takes $$$\Theta(T^2 k)$$$ memory, which is about $$$5 \cdot 10^{13}$$$ entries at $$$T = 10^6$$$, $$$k = 100$$$.
I don't quite understand the problem. Do you just want to find the $$$k$$$ biggest values in a subarray?
No, more like top-$$$k$$$ interval sums. Say user1 has rating deltas $$$[+10, -10, +20]$$$ and user2 has $$$[+5, +10, -20]$$$. We can query highest_delta between time steps:
- For interval $$$[1, 3]$$$ you get the list of users $$$[1, 2]$$$.
- For interval $$$[1, 2]$$$ you get the list of users $$$[2, 1]$$$.
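To pin down the semantics, here is a tiny brute-force check of this example (my sketch; assuming 1-indexed time steps and inclusive intervals):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // The two profiles from the example; deltas are indexed by time step.
    vector<vector<long long>> deltas = {{10, -10, 20}, {5, 10, -20}};

    // Brute-force query: rank users by their total delta over times [l, r].
    auto query = [&](int l, int r) {
        vector<pair<long long, int>> sums;
        for (int u = 0; u < (int)deltas.size(); ++u) {
            long long s = 0;
            for (int t = l - 1; t <= r - 1; ++t) s += deltas[u][t];
            sums.push_back({s, u + 1});          // 1-based user ids
        }
        sort(sums.begin(), sums.end(),
             [](auto& a, auto& b) { return a.first > b.first; });
        for (auto& [s, u] : sums) cout << u << ' ';
        cout << '\n';
    };

    query(1, 3);  // prints: 1 2   (user1: 20, user2: -5)
    query(1, 2);  // prints: 2 1   (user1: 0,  user2: 15)
}
```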
Found this on Stack Overflow; apparently it's called a popularity algorithm: https://stackoverflow.com/questions/1025436/popularity-algorithm
What are the problem constraints?
$$$0 \leq n \leq 10^6$$$, $$$0 \leq T \leq 10^6$$$, $$$0 \leq k \leq 100$$$.
How many queries?