Thank you so much for participating in our round! We really hope you enjoyed it. For us, it was an amazing and educational experience, and we look forward to the possibility of helping with or holding a round again one day. Thanks again to abc864197532 and Vladithur for making the process so smooth and enjoyable.
#include <bits/stdc++.h>
using namespace std;

int main() {
    cin.tie(0)->sync_with_stdio(0);
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        int maxw = 0, maxh = 0;
        for (int i = 0; i < n; i++) {
            int w, h;
            cin >> w >> h;
            maxw = max(maxw, w);
            maxh = max(maxh, h);
        }
        // Nesting all stamps at a common corner gives a single region
        // whose perimeter is 2 * (max width + max height).
        cout << 2 * (maxw + maxh) << "\n";
    }
    return 0;
}
#include <bits/stdc++.h>
using namespace std;

int main() {
    cin.tie(0)->sync_with_stdio(0);
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        vector<int> A(n);
        for (int i = 0; i < n; i++) cin >> A[i];
        // Fix A[i] as the first kept element; every element from i onward
        // that is <= A[i] can be kept as well.
        int best = 0;
        for (int i = 0; i < n; i++) {
            int curr = 0;
            for (int j = i; j < n; j++) {
                if (A[j] <= A[i]) {
                    curr += 1;
                }
            }
            best = max(best, curr);
        }
        // Everything outside the best kept set must be deleted.
        cout << n - best << "\n";
    }
    return 0;
}
#include <bits/stdc++.h>
using namespace std;
using ll = long long;

int main() {
    cin.tie(0)->sync_with_stdio(0);
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        vector<ll> A(n);
        for (int i = 0; i < n; i++) cin >> A[i];
        // Using position i (0-indexed) requires current length |a| = A[i] + i
        // and appends i zeros, giving length u + i: add an edge u -> v.
        map<ll, vector<ll>> adj;
        for (int i = 1; i < n; i++) {
            ll u = A[i] + i;
            ll v = u + i;
            adj[u].push_back(v);
        }
        // DFS over all lengths reachable from the initial length n.
        set<ll> vis;
        function<void(ll)> dfs = [&](ll u) -> void {
            if (vis.count(u)) return;
            vis.insert(u);
            for (ll v : adj[u]) dfs(v);
        };
        dfs(n);
        cout << *vis.rbegin() << "\n";
    }
    return 0;
}
2027D1 - The Endspeaker (Easy Version)
Let's use dynamic programming. Let $$$\operatorname{dp}_{i,j}$$$ be the minimum cost to remove the prefix of length $$$i$$$, where the current value of $$$k$$$ is $$$j$$$. By a type $$$1$$$ operation, we can transition from $$$\operatorname{dp}_{i,j}$$$ to $$$\operatorname{dp}_{i,j+1}$$$ at no cost. Otherwise, by a type $$$2$$$ operation, we remove some contiguous subarray $$$a_{i+1}, a_{i+2}, \dots, a_{x}$$$ (a prefix of the current array) to transition to $$$\operatorname{dp}_{x,j}$$$ at a cost of $$$m - j$$$.
Let $$$r$$$ be the largest possible value of $$$x$$$. Since we spend $$$m - j$$$ regardless of which $$$x$$$ we choose, it's clear that we only need to transition to $$$\operatorname{dp}_{r,j}$$$. To find $$$r$$$ for each value of $$$i$$$ and $$$j$$$, we can either binary search over the prefix sums or simply maintain $$$r$$$ as we increase $$$i$$$ for a fixed value of $$$j$$$. The answer is then $$$\min_j \operatorname{dp}_{n,j}$$$. The latter method solves the problem in $$$\mathcal{O}(nm)$$$.
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
const int inf = 1 << 30;

void chmin(int &a, int b) {
    a = min(a, b);
}

int main() {
    cin.tie(0)->sync_with_stdio(0);
    int t;
    cin >> t;
    while (t--) {
        int n, m;
        cin >> n >> m;
        vector<int> A(n + 1);  // one extra slot so A[++r] below never reads out of bounds
        for (int i = 0; i < n; i++) cin >> A[i];
        vector<int> B(m);
        for (int i = 0; i < m; i++) cin >> B[i];
        // nxt[i][k]: furthest prefix end reachable from i by one removal with
        // sum <= B[k], maintained by a sliding window for each k.
        vector nxt(n, vector<int>(m));
        for (int k = 0; k < m; k++) {
            int r = -1, sum = 0;
            for (int i = 0; i < n; i++) {
                while (r < n && sum <= B[k]) sum += A[++r];
                nxt[i][k] = r;
                sum -= A[i];
            }
        }
        vector dp(n + 1, vector<int>(m, inf));
        dp[0][0] = 0;
        for (int k = 0; k < m; k++) {
            for (int i = 0; i < n; i++) {
                // type 2: remove the longest possible prefix, paying m - k - 1 (0-indexed k)
                chmin(dp[nxt[i][k]][k], dp[i][k] + m - k - 1);
                // type 1: increase k for free
                if (k < m - 1)
                    chmin(dp[i][k + 1], dp[i][k]);
            }
        }
        int ans = inf;
        for (int k = 0; k < m; k++) {
            chmin(ans, dp[n][k]);
        }
        if (ans == inf) {
            cout << "-1\n";
        } else {
            cout << ans << "\n";
        }
    }
    return 0;
}
2027D2 - The Endspeaker (Hard Version)
Following on from the editorial for D1, let's have the $$$\operatorname{dp}$$$ table store a pair of the minimum cost and the number of ways. Since we're now counting the ways, it's not enough to consider only the transition to $$$\operatorname{dp}_{r,j}$$$; we also need to transition to all $$$\operatorname{dp}_{x,j}$$$. Doing this naively is too slow, so let's instead find a way to perform range updates.
Say we want to range update $$$\operatorname{dp}_{l,j}, \operatorname{dp}_{l+1,j}, \dots, \operatorname{dp}_{r,j}$$$. We'll store the updates in another table, registering each one at both ends of its range. Then, for a fixed value of $$$k$$$, as we iterate through increasing $$$i$$$-values, we maintain a map from cost to the number of ways. Whenever the number of ways falls to zero, we can remove that entry from the map. On each iteration, we set $$$\operatorname{dp}_{i,j}$$$ to the smallest entry in the map, and then perform the transitions. This works in $$$\mathcal{O}(nm \log n)$$$.
Bonus: It's also possible to solve without the $$$\log n$$$ factor. We can use the fact that $$$\operatorname{dp}_{i,k}$$$ is non-increasing for a fixed value of $$$k$$$ to make the range updates non-intersecting by updating a range strictly after the previous iteration. Then we can just update a prefix sum array, instead of using a map.
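As a small illustration of the bonus, here is the primitive it relies on: a difference array lets us apply many range additions and then recover all values with a single prefix-sum pass, with no map and no log factor. This is a minimal standalone sketch (array names are mine), not the full $$$\mathcal{O}(nm)$$$ solution.
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n = 10;
    vector<long long> diff(n + 1, 0);
    // "add v to every index in [l, r]" becomes two point updates
    auto rangeAdd = [&](int l, int r, long long v) {
        diff[l] += v;
        diff[r + 1] -= v;
    };
    rangeAdd(2, 5, 3);
    rangeAdd(4, 9, 1);
    // one prefix-sum pass recovers the final values
    long long run = 0;
    for (int i = 0; i < n; i++) {
        run += diff[i];
        cout << run << " \n"[i == n - 1];  // prints: 0 0 3 3 4 4 1 1 1 1
    }
    return 0;
}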
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
const int inf = 1 << 30;
const int MOD = 1000000007;

int main() {
    cin.tie(0)->sync_with_stdio(0);
    int t;
    cin >> t;
    while (t--) {
        int n, m;
        cin >> n >> m;
        vector<int> A(n + 1);
        for (int i = 0; i < n; i++) cin >> A[i];
        vector<int> B(m);
        for (int i = 0; i < m; i++) cin >> B[i];
        vector nxt(n, vector<int>(m));
        for (int k = 0; k < m; k++) {
            int r = -1, sum = 0;
            for (int i = 0; i < n; i++) {
                while (r < n && sum <= B[k]) sum += A[++r];
                nxt[i][k] = r;
                sum -= A[i];
            }
        }
        // dp[i][k] = {min cost, ways}; upd[i][k] stores difference-style range
        // updates {type, cost, ways}: type 0 activates the pair at i, type 1
        // deactivates it again.
        vector dp(n + 1, vector<array<int,2>>(m, {inf, 0}));
        vector upd(n + 1, vector(m, vector<array<int,3>>()));
        upd[0][0].push_back({0, 0, 1});
        upd[1][0].push_back({1, 0, 1});
        for (int k = 0; k < m; k++) {
            map<int, array<int,2>> mp;  // cost -> {active update count, ways mod MOD}
            for (int i = 0; i <= n; i++) {
                for (auto [t, move, count] : upd[i][k]) {
                    if (t == 0) {
                        auto &[a, b] = mp[move];
                        a += 1;
                        (b += count) %= MOD;
                    } else {
                        auto &[a, b] = mp[move];
                        a -= 1;
                        (b += MOD - count) %= MOD;
                        if (a == 0) mp.erase(move);
                    }
                }
                if (mp.empty()) continue;
                auto &[move, info] = *mp.begin();  // smallest active cost
                dp[i][k] = {move, info[1]};
                if (i == n) continue;
                // type 1: move to k+1 for free (range covers only position i)
                if (k < m - 1) {
                    upd[i][k + 1].push_back({0, move, info[1]});
                    upd[i + 1][k + 1].push_back({1, move, info[1]});
                }
                // type 2: reach every position in (i, nxt[i][k]] at cost m - k - 1
                if (nxt[i][k] > i) {
                    upd[i + 1][k].push_back({0, move + (m - k - 1), info[1]});
                    if (nxt[i][k] < n) {
                        upd[nxt[i][k] + 1][k].push_back({1, move + (m - k - 1), info[1]});
                    }
                }
            }
        }
        map<int, int> mp;
        for (int k = 0; k < m; k++) {
            auto &[move, count] = dp[n][k];
            (mp[move] += count) %= MOD;
        }
        auto &[move, count] = *mp.begin();
        if (move == inf) {
            cout << "-1\n";
        } else {
            cout << move << " " << count << "\n";
        }
    }
    return 0;
}
There exists an alternative solution to D1 & D2 using a segment tree. We can actually consider the process in reverse; let's reformulate $$$\operatorname{dp}_{i,j}$$$ to represent the minimum score required to remove all elements after the $$$i$$$-th element, given that the current value of $$$k$$$ is $$$j$$$.
Instead of using a dp table, we maintain $$$m$$$ segment trees, each of length $$$n$$$. The $$$j$$$-th segment tree will represent the $$$j$$$-th column of the dp table.
We precalculate, for each $$$i$$$ and $$$j$$$, the furthest position we can reach by a removal starting from $$$i$$$: specifically, the longest subarray starting at $$$i$$$ with a sum not exceeding $$$b_j$$$. We store this in $$$\operatorname{nxt}_{i,j}$$$. This calculation can be done in $$$\mathcal{O}(nm)$$$ time using a sliding window.
To transition in the dp, we have: $$$\operatorname{dp}_{i,j} = \min\left(\min_{i < x \le \operatorname{nxt}_{i,j}} \operatorname{dp}_{x,j} + (m - j),\ \operatorname{dp}_{i,j+1}\right)$$$.
This transition can be computed in $$$\mathcal{O}(\log n)$$$ time thanks to range queries on the segment tree, so the total complexity is $$$\mathcal{O}(nm \log n)$$$. For D2, we can store the count of minimums within each segment, and simply sum these counts to get the total number of ways.
#include <bits/stdc++.h>
using namespace std;
#define int long long
int modN = 1e9 + 7;

int mod(int n) {
    return (n + modN) % modN;
}

struct SegmentTree {
    struct Node {
        int val = 1e18;
        int cnt = 1;
    };
    vector<Node> st;
    int n;
    SegmentTree(int n): n(n) {
        st.resize(4 * n + 1, Node());
    }
    SegmentTree(vector<int> a): n(a.size()) {
        st.resize(4 * n + 1, Node());
        build(a, 1, 0, n - 1);
    }
    void merge(Node& a, Node& b, Node& c) {
        a.val = min(b.val, c.val);
        if (b.val == c.val)
            a.cnt = mod(b.cnt + c.cnt);
        else if (b.val < c.val)
            a.cnt = b.cnt;
        else if (b.val > c.val)
            a.cnt = c.cnt;
    }
    void build(vector<int>& a, int id, int l, int r) {
        if (l == r) {
            st[id].val = a[l];
            return;
        }
        int mid = (l + r) / 2;
        build(a, id * 2, l, mid);
        build(a, id * 2 + 1, mid + 1, r);
        merge(st[id], st[id * 2], st[id * 2 + 1]);
    }
    void update(int id, int l, int r, int u, int val, int cnt) {
        if (l == r) {
            st[id].val = val; // or st[id].sum += val
            st[id].cnt = cnt;
            return;
        }
        int mid = (l + r) / 2;
        if (u <= mid) update(id * 2, l, mid, u, val, cnt);
        else update(id * 2 + 1, mid + 1, r, u, val, cnt);
        merge(st[id], st[id * 2], st[id * 2 + 1]);
    }
    void update(int idx, int val, int cnt) { // wrapper
        update(1, 0, n - 1, idx, val, cnt);
    }
    Node query(int id, int l, int r, int u, int v) { // give 0, n - 1 as l and r and 1 as id
        if (v < l || r < u) return Node();
        if (u <= l && r <= v) {
            return st[id];
        }
        int mid = (l + r) / 2;
        auto a = query(id * 2, l, mid, u, v);
        auto b = query(id * 2 + 1, mid + 1, r, u, v);
        Node res;
        merge(res, a, b);
        return res;
    }
    Node query(int l, int r) { // wrapper
        return query(1, 0, n - 1, l, r);
    }
};

void solve() {
    int n, m;
    cin >> n >> m;
    vector<int> a(n), b(m);
    for (int &A : a) cin >> A;
    for (int &B : b) cin >> B;
    if (*max_element(a.begin(), a.end()) > b[0]) {
        cout << -1 << '\n';
        return;
    }
    vector<vector<int>> nxt(m, vector<int>(n));
    for (int i = 0; i < m; i++) {
        int curr = 0, r = -1;
        for (int j = 0; j < n; j++) {
            while (r + 1 < n && curr + a[r + 1] <= b[i])
                curr += a[r + 1], r += 1;
            nxt[i][j] = r + 1;
            if (j <= r) curr -= a[j];
            r = max(r, j);
        }
    }
    vector<SegmentTree> dp(m, SegmentTree(vector<int>(n + 1, 1e18)));
    for (int i = 0; i < m; i++)
        dp[i].update(n, 0, 1);
    for (int i = n - 1; i >= 0; i--) {
        for (int j = m - 1; j >= 0; j--) {
            auto q1 = dp[j].query(i + 1, nxt[j][i]);
            int v1 = q1.val + m - (j + 1), ps1 = q1.cnt;
            if (i + 1 <= nxt[j][i]) dp[j].update(i, v1, ps1);
            if (j != m - 1) {
                auto q2 = dp[j + 1].query(i, i);
                int v2 = q2.val, ps2 = q2.cnt;
                auto q3 = dp[j].query(i, i);
                if (v2 < q3.val)
                    dp[j].update(i, v2, ps2);
                else if (v2 == q3.val)
                    dp[j].update(i, v2, mod(ps2 + q3.cnt));
            }
        }
    }
    cout << dp[0].query(0, 0).val << ' ' << dp[0].query(0, 0).cnt << '\n';
}

signed main() {
    cin.tie(0)->sync_with_stdio(false);
    int t;
    cin >> t;
    while (t--)
        solve();
    return 0;
}
2027E1 - Bit Game (Easy Version)
#include <bits/stdc++.h>
using namespace std;

int nimber(int x, int a) {
    int aprime = 0;
    bool goodbit = false;
    for (int bit = 30; bit >= 0; bit--) {
        if (x & (1 << bit)) {
            aprime *= 2;
            if (goodbit || (a & (1 << bit))) {
                aprime += 1;
            }
        } else if (a & (1 << bit)) {
            goodbit = true;
        }
    }
    // g(2^k - 2) = 0, for all k >= 1.
    for (int k = 1; k <= 30; k++) {
        if (aprime == (1 << k) - 2) {
            return 0;
        }
    }
    // g(2^k - 1) = k, for all k >= 1.
    for (int k = 1; k <= 30; k++) {
        if (aprime == (1 << k) - 1) {
            return k;
        }
    }
    // g(2^k) = k + (-1)^k, for all k >= 0.
    // (k = 0, i.e. aprime = 1, is already caught by the 2^k - 1 case above.)
    for (int k = 1; k <= 30; k++) {
        if (aprime == (1 << k)) {
            if (k % 2) return k - 1;
            else return k + 1;
        }
    }
    // g(2^k+1) = g(2^k+2) = ... = g(2^{k+1} - 3) = k + 1, for all k >= 2.
    for (int k = 2; k <= 30; k++) {
        if ((1 << k) < aprime && aprime <= (2 << k) - 3) {
            return k + 1;
        }
    }
    // should never get to this point
    assert(false);
    return -1;
}

int main() {
    cin.tie(0)->sync_with_stdio(0);
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        vector<int> A(n);
        for (int i = 0; i < n; i++) cin >> A[i];
        vector<int> X(n);
        for (int i = 0; i < n; i++) cin >> X[i];
        // XOR the per-pile Grundy values; nonzero means the first player wins.
        int curr = 0;
        for (int i = 0; i < n; i++) curr ^= nimber(X[i], A[i]);
        cout << (curr ? "Alice" : "Bob") << "\n";
    }
    return 0;
}
2027E2 - Bit Game (Hard Version)
#include <bits/stdc++.h>
using namespace std;

int dp[32][32][6][2][2];
const int mod = 1000000007;

int main() {
    cin.tie(0)->sync_with_stdio(0);
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        vector<int> A(n);
        for (int i = 0; i < n; i++) cin >> A[i];
        vector<int> B(n);
        for (int i = 0; i < n; i++) cin >> B[i];
        vector<int> curr(32);
        curr[0] = 1; // identity
        for (int i = 0; i < n; i++) {
            memset(dp, 0, sizeof dp);
            dp[0][0][0][0][0] = 1;
            for (int j = 0; j <= 29; j++) {
                int p = 29 - j; // place we are going to add a bit in
                for (int k = 0; k <= 29; k++) { // position of most significant one in a'
                    for (int type = 0; type < 6; type++) { // 0, 1, 11111, 11110, 10000, else
                        for (int good = 0; good < 2; good++) { // good=1 iff good bit has occurred
                            for (int low = 0; low < 2; low++) { // low=1 iff prefix below b
                                for (int bit = 0; bit < 2; bit++) { // bit in x
                                    if (dp[j][k][type][good][low] == 0)
                                        continue; // no point in transition since count is 0
                                    if (!low && (B[i] & (1 << p)) == 0 && bit == 1)
                                        continue; // x can't go higher than B[i]
                                    if (bit == 0) {
                                        int good2 = good || (A[i] & (1 << p)) != 0; // check good bit
                                        int low2 = low || (B[i] & (1 << p)) != 0; // check if low
                                        // nothing added to a' so nothing else changes
                                        (dp[j+1][k][type][good2][low2] += dp[j][k][type][good][low]) %= mod;
                                    } else {
                                        int bita = good || (A[i] & (1 << p)) != 0; // bit in a'
                                        int k2 = type == 0 ? 0 : k + 1; // increase if MSOne exists
                                        int type2 = bita ?
                                            (
                                                type == 0 ? 1 : // add first one
                                                (type == 1 || type == 2) ? 2 : // 11111
                                                5 // can't add 1 after a 0
                                            ) : (
                                                (type == 0) ? 0 : // 0
                                                (type == 1 || type == 4) ? 4 : // 10000
                                                (type == 2) ? 3 : // 11110
                                                5 // can't have a zero in any other case
                                            );
                                        (dp[j+1][k2][type2][good][low] += dp[j][k][type][good][low]) %= mod;
                                    }
                                }
                            }
                        }
                    }
                }
            }
            vector<int> count(32); // number of x-values for each nimber
            for (int k = 0; k <= 29; k++) { // position of MSOne
                for (int good = 0; good < 2; good++) { // doesn't matter
                    for (int low = 0; low < 2; low++) { // doesn't matter
                        (count[0] += dp[30][k][0][good][low]) %= mod; // 0
                        (count[1] += dp[30][k][1][good][low]) %= mod; // 1
                        (count[k+1] += dp[30][k][2][good][low]) %= mod; // 11111
                        (count[0] += dp[30][k][3][good][low]) %= mod; // 11110
                        (count[k+(k%2?-1:1)] += dp[30][k][4][good][low]) %= mod; // 10000
                        (count[k+1] += dp[30][k][5][good][low]) %= mod; // else
                    }
                }
            }
            count[0] -= 1; // remove when x=0
            vector<int> next(32); // knapsack after adding this pile
            for (int j = 0; j < 32; j++)
                for (int k = 0; k < 32; k++)
                    (next[j ^ k] += 1LL * curr[j] * count[k] % mod) %= mod;
            swap(curr, next);
        }
        cout << curr[0] << "\n";
    }
    return 0;
}
D1 is the worst problem I've seen in recent times
Why?
It was a nice DP problem, according to me.
Damn buddy, you became CM. Congrats!
C was the worst
Bro, your downvotes officially make you the joint 8th top (de-)contributor on Codeforces
Problem B is very tricky.
Same, I found it very hard, tried for 1 hour and 30 minutes in contest and did not get it. In contest I always seem to be able to do problem A but then I struggle with problem B.
Good to know my first approach to 2027B was O(n log n). I couldn't think of the O(n^2) solution for some reason.
Can you explain your solution? I am unable to understand what you did.
Using ordered_set in C++.
O(n²) approach: Iterate from left to right, finding all elements to the right of the current element that are greater than it. For each index i from 0 to n - 1, initialize ans to 0; then for each index j from i + 1 to n - 1, increment ans if ele[i] is greater than ele[j]. The goal is to minimize ans across all elements.
O(n log n) approach: This is equivalent to iterating from right to left, inserting elements into an ordered_set as you go. For each element, use x = order_of_key(element + 1), which returns the count of elements in the ordered_set that are strictly smaller than element + 1, i.e. less than or equal to element, in O(log n). Here is my sol
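For reference, a minimal sketch of the O(n log n) idea described above, using the GNU pb_ds tree (this is my own illustration, not the commenter's linked submission; note the known pb_ds trick that a less_equal comparator turns the tree into a multiset, and order_of_key(x + 1) then counts stored integers <= x):
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
using namespace std;
using namespace __gnu_pbds;

// less_equal allows duplicate keys, turning the ordered set into a multiset
template <class T>
using ordered_multiset = tree<T, null_type, less_equal<T>, rb_tree_tag,
                              tree_order_statistics_node_update>;

int main() {
    cin.tie(0)->sync_with_stdio(0);
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        vector<int> a(n);
        for (int &x : a) cin >> x;
        ordered_multiset<int> s;  // holds the elements to the right of i
        int best = 0;
        for (int i = n - 1; i >= 0; i--) {
            // keep a[i] plus every element to its right that is <= a[i]
            best = max(best, 1 + (int)s.order_of_key(a[i] + 1));
            s.insert(a[i]);
        }
        cout << n - best << "\n";  // everything else must be deleted
    }
    return 0;
}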
Do you know why my solution doesn't work? Basically, any element Ai that is not the maximum of (A1, A2... Ai-1) could be removed from the vector by Stalin sort, so I don't even add it to my vector. Later I just look for the longest non-increasing subsequence in the vector. The answer would be the difference between the size of the (new) vector and the LDS length. It passes the sample but WAs later.
See, we wanted to sort the array in non-increasing order with the minimum deletions, which means we have to have the larger elements at the start, so I sorted the array in decreasing order.
Then I recorded the first occurrence of every element in a map (the first occurrence only, because taking the 2nd occurrence means we would have to delete more elements, and the question asks for the minimum deletions).
Now the total deletions for an element would be (the number of elements greater than it) + (the number of elements before it).
For example, in 3 6 4 9 2 5 2: when you consider the element 6, only one element is greater than 6 (i.e. 9) and one element is behind 6 (i.e. 3); but for 9 no element is bigger, yet 3 elements have to be deleted prior to it.
To get this, I just iterate over the sorted array, add the number of elements behind that element's first occurrence and the number of elements greater than it, take the minimum over all the elements in the array, and print it.
But you have to delete only those greater elements which are to the right of the given element, because the ones on the left are all deleted anyway. Your solution deletes all greater elements, which can lead to extra deletions. Am I misunderstanding something here?
Iterate over each index and count the number of elements greater than a[index] to the right of the index, and delete all the elements to the left of the index. Check this for every index and find the minimum deletions.
B and C were pretty good problems...D1 felt like a common DP problem
dp is always difficult for me
Can anyone explain why changing map to unordered_map in 2027C - Add Zeros causes a TL?
I even took the solution from the editorial and ended up with a TL.
Editorial solution with unordered_map 288191624 (TL 16)
Worst-case access for unordered_map is O(n), whereas the worst case for an ordered map is O(log n). In an unordered map, the values are stored in buckets, indexed modulo a big number. If the input is constructed in such a way that all elements get stored in the same bucket, the worst-case time complexity to access a single element becomes O(n), hence the TLE.
The test cases are made in such a way that they cause collisions; since the worst-case time complexity is then O(n), it can't pass. If you still want to use a hashmap, you have to write your own hash function. https://codeforces.net/contest/2027/submission/288190713
One way to bypass this issue is to declare a custom_hash. The worst-case time complexity formally remains O(n), but you can at least pass the current input values, since a random number generator is used in the hash. Add the following piece of code and let me know if it passes or not.
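The referenced snippet is presumably the splitmix64-based custom_hash from the blog linked in the replies below; a sketch of it (assuming the usual <bits/stdc++.h> include in the surrounding solution):
struct custom_hash {
    // splitmix64 mixes bits thoroughly; the random offset chosen at startup
    // makes the hash differ on every run, defeating fixed anti-hash tests
    static uint64_t splitmix64(uint64_t x) {
        x += 0x9e3779b97f4a7c15;
        x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9;
        x = (x ^ (x >> 27)) * 0x94d049bb133111eb;
        return x ^ (x >> 31);
    }
    size_t operator()(uint64_t x) const {
        static const uint64_t FIXED_RANDOM =
            chrono::steady_clock::now().time_since_epoch().count();
        return splitmix64(x + FIXED_RANDOM);
    }
};

// usage: unordered_map<long long, int, custom_hash> mp;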
Thank you for your replies!
I also found this amazing post explaining that.
Blowing up unordered_map, and how to stop getting hacked on it
Yeah, that's a great post. In the last contest, my submission to D was hacked for the very same reason, so this time I made a custom hash function to minimize the risk of collisions for my solution to C.
I also had the same thing.
My guess is that, based on this blog https://codeforces.net/blog/entry/62393, the test cases were created with large numbers of hash collisions.
Damn, this same doubt after every contest; consider googling sometimes.
Problem B:
4
1 1 3 2
If we apply Stalin sort, then 2 will be automatically deleted, and now we just have to remove 3; the resultant array will be 1 1. The answer should be 1.
You have to remove some (or none) of the elements first, and then apply the operations on the resulting array.
I couldn't find this particular line in the problem statement. Can you please highlight which statement points to this?
read this: **determine the minimum number of integers which must be removed from the array to make it vulnerable.**
They need the minimum number of integers removed for it to be vulnerable; vulnerable means that when Stalin sort is applied, it ends up in non-increasing order.
I also fell for it.
Thanks!
Thanks for fast editorial
B was really difficult to analyze (for me); I was only able to solve A. Hoping to perform better in the upcoming ones.
Because N <= 2000, you should think about an n^2 solution. Now, we have no idea what that solution might be, but every time we think about an n^2 solution, we tend to fix some number in the array and loop. And here comes the idea of fixing the max element of the resulting array, or the leftmost element of the resulting array. This was my thought process.
If anyone could explain D2 to me in a detailed manner, it would be really helpful. Thanks in advance.
I solved this using segment trees. You need to define each node of the tree to have two parameters, "cost" and "count". When you combine the two nodes, if the costs are identical, you return a new node with the same cost and the sum of the two counts, otherwise, you return a copy of the node with the lower cost.
Now, the problem is much simpler. Define dp as a list of length m+1 of these trees (each of size n+1), and initially set all nodes to have infinite cost. Then, update it such that dp[i][n] has cost 0 and frequency 1, for all i. Now, if we define dp[j][i] as the minimum cost to remove all elements from a[i] onwards, using all elements from b[j] onwards, it is clear we need to consider two transitions:
Switching from b[j] to b[j+1] at no cost: we do this by filling in the dp with the outer loop going from n-1 to 0 and the inner loop going from m-1 to 0. Then we can just update dp[j][i] to be equal to dp[j+1][i].
Transitioning from a[i] to a[i+k], for all k for which it is possible in one go (which is solved the same way as D1; I did this using a sliding window approach in O(mn)). This transition is simply equivalent to querying the segment tree dp[j] from i+1 to i+k, because we have built the segment trees such that combining nodes automatically picks the ones with the lowest cost and adds up the frequencies. This is all done in O(log n) time, adding up to an O(mn log n) solution, which passes comfortably.
I am told there is a much nicer solution but I'm far too daft to figure that out myself, so bashing the problem with a few too many segment trees seems to be the only option.
Can anyone please outline the two-pointer approach for D1?
great solution, and very clean code. thank you very much
Can we solve B using a monotonic stack to find the maximum subarray whose first element is the largest? The time complexity would then be reduced to O(n)?
no
Having an answer be 0 mod 1e9+7 for D2 is kind of crazy
Some contests ago there were some copied solutions which started with ans = 1 and printed ans - 1 at the end; but if the answer was mod - 1, then they printed -1 and got it wrong. So I think it was to counter that.
Actually, it's to counter an incorrect solution for D2. Consider the first editorial solution: we have a map of ways to a pair of {num_updates, count}. If you just remove the entry when count falls to zero, it might be the case that num_updates is nonzero if the answer is $$$\equiv 0$$$, so it would be a mistake to remove it. We tried to find testdata to break this solution but it proved to be almost impossible. In the end, we found a test where the number of ways ended on something $$$\equiv 0$$$, and it breaks some solutions which are wrong in a similar way, but sadly we couldn't fully counter it.
It might be the case that some incorrect solutions slipped through, but it turned out that there are many alternative solutions to D2, so this didn't end up making a significant difference.
Why is my solution for problem C getting TLE?
My solution: 288152538
Try referring to this. It might help.
Problem C also has another solution
Can someone please explain the last part of problem C, if one didn't use a map?
problem B "this will always break the non-decreasing property."
this should be "non-increasing property" ?
Yes, you're right. The blog should update soon. Thanks for pointing it out.
Why do these Indian cheaters cheat via YouTube and Telegram groups? This leads to a bad rank even when solving 3 problems ;(
I have a different 2027E1 - Bit Game (Easy Version) solution that I can't really prove, and I don't know if it's actually correct, so I will leave it here in case someone is interested; it is also a little bit simpler than the official solution:
Let's look at some pile $$$i$$$ with numbers $$$a$$$ and $$$x$$$. Denote the largest bit of $$$a$$$ as $$$b$$$; if there is a bit in $$$x$$$ that is higher than $$$b$$$, we can discard it, because we can never do anything with it. We discard all such useless bits.
Now if $$$a \ge x$$$, we can pretend there is no restriction, so our Grundy number will be $$$\operatorname{popcount}(x)$$$.
If $$$a < x$$$, we cannot always do whatever we want, so we use a function $$$solve(i, s)$$$, where $$$i$$$ is just an index and $$$s$$$ is the number of suffix bits that we have turned off in $$$x$$$. Note that both $$$a$$$ and $$$x$$$ will have bit $$$b$$$ on (because $$$a < x$$$ and we discarded the useless bits).
Transitions are:
1) Destroy the largest bit ($$$b$$$) in $$$x$$$ and some number of additional bits (taking care not to exceed $$$a$$$); we go to the no-restriction case $$$a \ge x$$$.
2) Destroy some more bits on the suffix of $$$x$$$ (no restriction here, because the result cannot exceed $$$a$$$); we go to $$$solve(i, s+p)$$$, where $$$p$$$ is the number of bits we destroyed.
Now you calculate the Grundy number using standard Sprague-Grundy theory (take the mex of the Grundy numbers you can transition to). That's literally it. Nothing special.
The intuition is: when you don't destroy the largest bit, it's optimal to destroy a suffix, because it leaves the opponent with the fewest additional moves. I can't prove it, though. When you destroy the largest bit, all that matters is how many more bits you can destroy (without exceeding $$$a$$$, greedily).
Submission: 288167141. Note that it is a little bit messy because I tried to code it fast.
I'm happy I got single digit placement :)
This is very interesting, and I don't know if I can prove the suffix idea either.
The editorial solution is quite complicated because you need to be able to easily classify the SG values in order to count efficiently for E2; I don't think a solution like yours would follow on as nicely. However, taking E1 as its own problem, I think your solution is quite nice, and it avoids casework/inspection :)
I'll have a think about how to prove it!
LOL, very good problem B, it brought my score down :(
For question C:
Can someone tell me why my solution is wrong? What I am doing: first of all, I check at which elements I can do the first operation (if the size required for doing the operation at an element is n, we can start with that element). Then I calculate how many elements we can operate on if I start with that element, and I store the max size of the array ever reached.
For D1, why does my top-down TLE but bottom-up only take 100ms? Shouldn't both have O(nm) states?
You should differentiate between the case where a state has not been computed yet and the case where its value is inf (i.e. no possible answer). Otherwise, it can visit the same impossible state an exponential number of times. Initializing the dp value to -1 works. 288223019
Right, I completely forgot about that. Thanks!
This seems wrong to me. If r == n-1, then won't A[++r] raise an error? Say a = [1, 2] and b = [4].
Thank you for the contest.
MikeMirzayanov I would like to report suspected cases of cheating in Codeforces Round 982 (Div. 2). I have identified multiple pairs of submissions with identical or nearly identical solutions. Here are the relevant submission links:
Pair 1: https://codeforces.net/contest/2027/submission/288161794 https://codeforces.net/contest/2027/submission/288145385
Pair 2: https://codeforces.net/contest/2027/submission/288152323 https://codeforces.net/contest/2027/submission/288137323
Pair 3: https://codeforces.net/contest/2027/submission/288163896 https://codeforces.net/contest/2027/submission/288155729
Each pair of submissions shows a high degree of similarity.
B is very tricky. Anyway, if I understand correctly, only the max-length subsequence which contains the max element as its first element is reducible to a non-increasing array? So we find this max-length subsequence and then subtract its length from the total n, which gives us the min number of elements to be deleted?
that's right
In problem B, I bet the explanation of the sample test cases made it even trickier. BTW, a good logical problem for B.
Can anyone explain the binary search approach for problem D1?
See my implementation 288212421
It is DP with Binary Search. If you cannot understand anything, let me know.
Without using DP, only binary search?
Hey, first of all, your code is really clean! In my opinion the editorial didn't do a great job explaining the implementation, and I was wondering if you could explain your solution further, mainly this part:
Sure. We define a 2D dp array dp[i][j]. It means the minimum cost to solve the problem if we consider only the first i elements of array b and up to the first j elements of array a. Then we look at whether the current element in array a is bigger than our current element in array b. If it is, then we cannot use the current element of array b here, so we take our dp value for the current j from the previous element of array b, which is basically dp[i - 1][j]; because array b is reverse sorted, the previous element is sure to be bigger.
Now, if we can use our current element of array b in dp[i][j], we will use the upper_bound method to search how far back we can stretch our reach. We will use a prefix array of the reversed array a (because we want to see how far back we can go). After that, we will get an index in our prefix array where our reach ends. Now we have two choices: either include the new value that we got in our dp[i][j], or use the value from the previous element of array b, that is dp[i-1][j]. We take the minimum of both. It will look something like this: dp[i][j] = min(dp[i-1][j], dp[i][the first element back that we cannot reach] + the cost to do this current operation).
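To make this concrete, here is a minimal sketch of such a binary-search DP for D1 (my own variable names, not the commenter's code; indices are 0-based, so the cost of an operation at b-index i is m - i - 1, and it uses the fact that dp[i][.] is non-decreasing in j, so the best predecessor is the leftmost reachable one):
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
const ll INF = 1e18;

int main() {
    cin.tie(0)->sync_with_stdio(0);
    int t;
    cin >> t;
    while (t--) {
        int n, m;
        cin >> n >> m;
        vector<ll> a(n), b(m);
        for (auto &x : a) cin >> x;
        for (auto &x : b) cin >> x;
        vector<ll> pre(n + 1, 0);  // pre[j] = a[0] + ... + a[j-1]
        for (int j = 0; j < n; j++) pre[j + 1] = pre[j] + a[j];
        // dp[i][j]: min cost to delete the first j elements of a using only
        // b[0..i] (raising k is free, so allowing a larger i never hurts).
        vector dp(m, vector<ll>(n + 1, INF));
        for (int i = 0; i < m; i++) {
            dp[i][0] = 0;
            for (int j = 1; j <= n; j++) {
                if (i > 0) dp[i][j] = dp[i - 1][j];
                // leftmost jp with pre[j] - pre[jp] <= b[i], via binary search
                int jp = lower_bound(pre.begin(), pre.end(), pre[j] - b[i]) - pre.begin();
                if (jp < j && dp[i][jp] < INF)
                    dp[i][j] = min(dp[i][j], dp[i][jp] + (m - i - 1));
            }
        }
        cout << (dp[m - 1][n] >= INF ? -1 : dp[m - 1][n]) << "\n";
    }
    return 0;
}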
while (r < n && sum <= B[k]) sum += A[++r];
(from the D1 editorial) If r = n-1 and the condition is satisfied, A[n] will be accessed, leading to an RTE, right?
Fixed (increased size of A to n+1)
Isn't there another solution for C using a greedy method? I didn't do it, but I saw one on YouTube.
For E1 testcase 2, the last move that Alice did was to take 12 out of 14 from the last pile. Why didn't she take all 14, so that Bob could not move anymore?
The sample explanation is an example of how a game might progress, to ensure you understand the conditions on $$$d$$$. It does not accurately reflect any optimal strategy; indeed, Alice could have taken all $$$14$$$ so that Bob could not move anymore — but in an optimal strategy, Bob would ensure that they never got to that point.
Caution: they have test cases in problem C which hit the worst-case complexity for unordered maps and sets!
Upsolving after the contest, this code gave me TLE and I couldn't figure out a better approach, so I went to the tutorial. The approach is exactly the same, but it uses a normal map and set instead of the unordered ones; I changed my code to do the same and it was accepted. So glad this didn't happen in a contest, I'd be enraged post-contest over this.
Can anyone explain the problem D2 solution without the segment tree method?
Has anyone tried question B in O(n log n)? I am not able to get an idea '_'
What you can do is maintain a multiset of the elements to the right of index i, then just find the upper_bound of the current element in that multiset and use distance(it, s.end()) to get how many elements in the multiset are greater than a[i]; as you move forward, keep erasing the previous elements.
How does one get the intuition behind problem C? Yes, after knowing it's a graph problem it was easy, but how can someone develop intuition like this?
Think like this: choose a test case and draw what happens if you choose a valid index. If you do this, you'll draw a directed graph (tree)-like structure, and from there it becomes obvious that you need the max-sum path (memoized DFS).
Thanks!!
Is there any DP solution for C? My thought was dp[number] = whether this number can be reached. But I have a problem with where the DP can start: I can go neither left to right nor right to left, and going from the largest number to the smallest also does not work.
I made a directed graph from the data, then applied memoized DFS on it. DP[node] tells the max number of zeros creatable if you choose that node (that index).
I am wondering whether there is any DP solution that works without the graph concept.
My solution was a DP. DP(len) returns the length of the longest array that can be built starting from an array of length len. Below I explain how to compute DP(len):
Let result be the result for DP(len). You can initialize it as len, because if you cannot extend the array any further, then its maximum length is len itself. Then, for every index i that fits the question's requirements, you do
result = max(result, DP(len + i));
But how to find all i that fit the requirements? Well, from the question statement, you have
a_i = |a| + 1 - i (one-indexed)
So the following is true:
a_i = |a| - i (zero-indexed)
a_i + i = |a|
And you can precompute a_i + i for every i and put it in a map<int, vector> so you can efficiently find all i that have this sum equal to |a|.
Implementation: 288133528
Sorry for necro-posting, but can someone else confirm that for D1/D2 the constraint on b being decreasing is not necessary to solve the problem?
IMO, if you solve D1/D2 fully by range-update dynamic programming, that condition isn't required. It's there to enable at least the greedy observation in D1, which cuts down the segment tree stuff (or anything doing similar range updates).
Got it, thanks for corroborating. I was wondering why that condition was there at all.
Considering how much I overcomplicated D1 using DP tabulation, precomputation, and binary search, only to find out it was a basic DP implementation :(
I tried an O(n) approach; please, someone review this code, as it is failing on some cases: https://codeforces.net/contest/2027/submission/291305482
Can anyone tell me what approach I am missing in my code? Here is my submission: https://codeforces.net/contest/2027/submission/294786831 and here is the question: https://codeforces.net/contest/2027/problem/B