Idea & preparation: Evirir
2049A - MEX Destruction
Case 1: All elements are $$$0$$$. Then the answer is $$$0$$$.
Case 2: Some element is non-zero, and all non-zero elements form a contiguous subarray. Then the answer is $$$1$$$ since we can choose that subarray and replace it with a $$$0$$$.
Case 3: Otherwise, the answer is $$$2$$$.
- We can replace the entire array with a single non-zero element (since $$$0$$$ is in the array, the $$$\operatorname{MEX}$$$ of the whole array is non-zero), then replace the entire array again with a $$$0$$$ (since the only element left is non-zero, its $$$\operatorname{MEX}$$$ is $$$0$$$).
- $$$1$$$ operation is not enough: the selected subarray would have to contain all non-zero elements, and since the non-zero elements do not form a contiguous subarray, it must also contain a $$$0$$$, so its $$$\operatorname{MEX}$$$ will be non-zero, as in the example below.
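For example, with $$$a = [1, 0, 2]$$$: one operation is not enough, since the chosen subarray must cover both the $$$1$$$ and the $$$2$$$ and hence the $$$0$$$ between them, giving $$$\operatorname{MEX}(\{1, 0, 2\}) = 3 \ne 0$$$. Two operations work: replace the whole array with $$$\operatorname{MEX} = 3$$$ to get $$$[3]$$$, then replace $$$[3]$$$ with $$$\operatorname{MEX} = 0$$$.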
#include <bits/stdc++.h>
using namespace std;

void solve()
{
    int n; cin >> n;
    vector<int> a(n);
    for (int i = 0; i < n; i++)
        cin >> a[i];
    // Strip trailing zeros, then leading zeros (via double reversal).
    while (!a.empty() && a.back() == 0)
        a.pop_back();
    reverse(a.begin(), a.end());
    while (!a.empty() && a.back() == 0)
        a.pop_back();
    reverse(a.begin(), a.end());
    if (a.empty()) // Case 1: all elements were 0
    {
        cout << 0 << '\n';
        return;
    }
    // A zero strictly inside means the non-zero elements are not contiguous.
    bool hasZero = false;
    for (const auto x : a)
        hasZero |= x == 0;
    if (hasZero)
        cout << 2 << '\n'; // Case 3
    else
        cout << 1 << '\n'; // Case 2
}

int main()
{
    int t;
    cin >> t;
    for (int i = 0; i < t; i++)
        solve();
    return 0;
}
Idea & preparation: Evirir
2049B - pspspsps
Since the entire $$$p$$$ must be a permutation, if $$$s_1$$$ is 's' (the required suffix is all of $$$p$$$), we can set $$$s_1$$$ to '.', and if $$$s_n$$$ is 'p' (the required prefix is all of $$$p$$$), we can set $$$s_n$$$ to '.'.
After that, the answer is YES if and only if the non-dot characters in $$$s$$$ are all 'p' or all 's'.
If all non-dot characters are 'p', we can choose the permutation $$$p = [1, 2, \ldots, n]$$$. If all non-dot characters are 's', we can choose $$$p = [n, n - 1, \ldots, 1]$$$.
Otherwise, there exist both a 'p' and an 's'. Suppose for contradiction that there is a solution. Let $$$a$$$ and $$$b$$$ be the subarrays required by the 'p' and the 's' respectively. Without loss of generality, suppose $$$a$$$ is the shorter subarray (the other case is symmetric).
- Since $$$a$$$ and $$$b$$$ are permutations with $$$|a| \le |b|$$$, $$$a$$$ contains exactly the values $$$1, \ldots, |a|$$$ and $$$b$$$ contains exactly $$$1, \ldots, |b|$$$, so every element of $$$a$$$ also appears in $$$b$$$. Since $$$p$$$ is a permutation, each value occurs in it only once, so these occurrences coincide: $$$a$$$ must be a subarray of $$$b$$$.
- However, $$$b$$$ cannot contain $$$a$$$: since $$$b$$$ is not the entire $$$p$$$, $$$b$$$ does not contain $$$p_1$$$, while $$$a$$$, being a prefix, contains $$$p_1$$$. Contradiction.
#include <bits/stdc++.h>
using namespace std;

void solve()
{
    int n; cin >> n;
    string s; cin >> s;
    // 's' at the front and 'p' at the back constrain all of p,
    // which is always a permutation, so they can be ignored.
    if (s[0] == 's') s[0] = '.';
    if (s.back() == 'p') s.back() = '.';
    bool found_p = false;
    bool found_s = false;
    for (const auto c : s)
    {
        switch (c)
        {
        case 'p':
            found_p = true;
            break;
        case 's':
            found_s = true;
            break;
        }
    }
    cout << (found_p && found_s ? "NO" : "YES") << '\n';
}

int main()
{
    int t; cin >> t;
    for (int i = 0; i < t; i++) solve();
    return 0;
}
Idea and solution: Evirir and Kaey
Preparation: Evirir
2049C - MEX Cycle
There are many possible solutions. The simplest one we can find (thanks to Kaey) is as follows:
- Set $$$a_x = 0, a_{x+1} = 1, a_{x+2} = 0, \ldots$$$, alternating between 0 and 1, wrapping around accordingly. Formally, using 0-based indexing, set $$$a_{(x+i) \bmod n} = i \bmod 2$$$ for all $$$i$$$ ($$$0 \le i \le n - 1$$$).
- If $$$n$$$ is odd or if $$$x - y$$$ is even, set $$$a_x = 2$$$.
Why this works:
- If $$$n$$$ is even and $$$x - y$$$ is odd, all $$$0$$$'s are only friends with $$$1$$$'s and vice versa.
- If $$$n$$$ is odd, $$$a_x$$$ will be adjacent to $$$0$$$ and $$$1$$$, so we set $$$a_x = 2$$$. Now $$$a$$$ is valid ignoring the extra friendship. Adding in the extra friendship, $$$a$$$ is still valid since $$$a_x = 2 > a_y$$$, so it will not affect $$$a_y$$$.
- If $$$n$$$ is even and $$$x - y$$$ is even, the extra friendship connects two $$$0$$$'s or two $$$1$$$'s. Setting $$$a_x = 2$$$ works because dragon $$$x$$$'s friends still have another neighbor to maintain their $$$\operatorname{MEX}$$$; a small worked example follows.
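As a quick check, take $$$n = 4$$$, $$$x = 1$$$, $$$y = 3$$$ ($$$n$$$ even, $$$x - y$$$ even): the alternation gives $$$a = [0, 1, 0, 1]$$$, and setting $$$a_x = 2$$$ gives $$$a = [2, 1, 0, 1]$$$. Dragon $$$1$$$ sees values $$$\{1, 1, 0\}$$$ from friends $$$2, 4, 3$$$, so its $$$\operatorname{MEX}$$$ is $$$2 = a_1$$$; dragon $$$3$$$ sees $$$\{1, 1, 2\}$$$ from friends $$$2, 4, 1$$$, so its $$$\operatorname{MEX}$$$ is $$$0 = a_3$$$; dragons $$$2$$$ and $$$4$$$ each see a $$$0$$$ and a $$$2$$$, so their $$$\operatorname{MEX}$$$ is $$$1$$$.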
#include <iostream>
#include <vector>
using namespace std;

void solve() {
    int n, x, y;
    cin >> n >> x >> y;
    --x; --y; // 0-based
    vector<int> ans(n);
    // Alternate 0, 1, 0, ... starting from x, wrapping around the cycle.
    for (int i = 0; i < n; ++i) ans[(x + i) % n] = i % 2;
    if (n % 2 || (x - y) % 2 == 0)
        ans[x] = 2;
    for (auto v : ans) cout << v << ' ';
    cout << endl;
}

int main() {
    int T;
    cin >> T;
    while (T--) solve();
}
Idea: Evirir and Kaey
Solution: CSQ31, Evirir, Kaey
Preparation: Evirir and CSQ31
2049D - Shift + Esc
Let $$$f(i,j)$$$ be the minimum cost to move to cell $$$(i,j)$$$, and $$$g(i,j,x)$$$ be the minimum cost to move to $$$(i,j)$$$ assuming row $$$i$$$ is shifted to the left by $$$x$$$.
For simplicity's sake, we add a row of all zeros above the first row (row $$$0$$$). Also note that the operations on states denoting columns are all under modulo $$$m$$$; I omit the notation to avoid clutter.
The transitions are as follows.
Base cases: $$$f(0, 0) = 0$$$ and $$$f(0, j) = \infty$$$ for $$$j > 0$$$.
From row $$$i-1$$$ to row $$$i$$$, for each shift amount $$$x$$$ ($$$0 \le x < m$$$):
$$$g(i, j, x) = \min\big(g(i, j - 1, x),\; f(i - 1, j) + kx\big) + a_{i, j + x} \quad (*)$$$
$$$f(i, j) = \min_{0 \le x < m} g(i, j, x)$$$
In $$$(*)$$$, the $$$f(i-1,j) + kx$$$ term is from the case where you move from $$$(i-1,j)$$$ to $$$(i,j)$$$, paying the cost of row $$$i$$$'s shift once upon entering the row. Similarly, the $$$g(i,j-1,x)$$$ term is from the case where you move from $$$(i,j-1)$$$ to $$$(i,j)$$$ (for $$$j = 0$$$ this term does not exist, since the path cannot wrap around within a row).
The final answer is $$$f(n,m-1)$$$. The overall complexity is $$$O(nm^2)$$$.
#include <bits/stdc++.h>
using namespace std;
typedef long long int ll;

ll dp[511][511], a[511][511];

void solve()
{
    int n, m, k;
    cin >> n >> m >> k;
    for (int i = 1; i <= n; i++) {
        for (int j = 0; j < m; j++) cin >> a[i][j];
    }
    for (int i = 0; i <= n; i++) {
        for (int j = 0; j < m; j++) dp[i][j] = 1e18;
    }
    dp[0][0] = 0; // the all-zero row added above the grid
    for (int i = 1; i <= n; i++) {
        for (int shift = 0; shift < m; shift++) {
            // tmp[j] = g(i, j, shift): enter column j from the row above,
            // paying the shift cost k * shift once.
            vector<ll> tmp(m, 1e18);
            for (int j = 0; j < m; j++) tmp[j] = dp[i - 1][j] + a[i][(j + shift) % m] + k * 1LL * shift;
            // Relax rightward moves within the (shifted) row; the pass is
            // repeated once more as in the original code (see the comments
            // below on whether the second pass is needed).
            for (int j = 0; j < m; j++) tmp[j] = min(tmp[j], tmp[(j + m - 1) % m] + a[i][(j + shift) % m]);
            for (int j = 0; j < m; j++) tmp[j] = min(tmp[j], tmp[(j + m - 1) % m] + a[i][(j + shift) % m]);
            for (int j = 0; j < m; j++) dp[i][j] = min(dp[i][j], tmp[j]);
        }
    }
    cout << dp[n][m - 1] << endl;
}

int main()
{
    int t; cin >> t;
    for (int i = 0; i < t; i++) solve();
}
Idea & preparation: Evirir
Solution: Kaey
2049E - Broken Queries
Make 2 queries $$$[1, n/4]$$$ and $$$[n/4+1, n/2]$$$. This tells us which half the 1 is in: it is in $$$[1, n / 2]$$$ if the query results are different and $$$[n / 2 + 1, n]$$$ otherwise.
Make 1 more query on the half that does not contain the 1: query $$$[n/2+1, n]$$$ if the 1 is in $$$[1, n/2]$$$, and $$$[1, n/2]$$$ otherwise. The true answer is $$$0$$$, so the result is $$$1$$$ exactly when the answer is flipped. This tells us that $$$k \le n / 2$$$ if the result is $$$1$$$ and $$$k > n / 2$$$ otherwise.
Without loss of generality, assume that the 1 is in $$$[1, n / 2]$$$. Now we can binary search for $$$k$$$ in $$$[1, n / 2]$$$ or $$$[n / 2 + 1, n]$$$. Let $$$k'$$$ be our guess.
- If $$$k \le n / 2$$$, query $$$[n / 2 + 1, n / 2 + k']$$$ (a range not containing the 1). The result is $$$1$$$ if $$$k' \ge k$$$ and $$$0$$$ otherwise.
- If $$$k > n / 2$$$, query $$$[1, k']$$$ (a range containing the 1). The result is $$$0$$$ if $$$k' \ge k$$$ and $$$1$$$ otherwise.
In both cases, the binary search takes $$$\log n - 1 \le 29$$$ queries. Overall, this takes at most $$$2 + 1 + 29 = 32$$$ queries.
The limit of $$$33$$$ queries (instead of $$$32$$$) is to allow less optimized solutions and other solutions. For example, one can instead do 3 queries in the beginning $$$[1, n / 4]$$$, $$$[n / 4 + 1, n / 2]$$$, $$$[n / 2 + 1, 3n / 4]$$$ to determine which quarter the 1 is in.
#include <bits/stdc++.h>
using namespace std;

// Ask about [l, r) (0-based); if rev is set, ask about the mirrored
// range in the other half instead.
int qry(int l, int r, bool rev = 0, int n = 0) {
    if (rev) {
        int t = n - l;
        l = n - r;
        r = t;
    }
    cout << "? " << l + 1 << ' ' << r << endl;
    cin >> r;
    return r;
}

void solve() {
    int n;
    cin >> n;
    // Two same-length queries: results differ iff the 1 is in [1, n/2].
    int a = qry(0, n / 4);
    int b = qry(n / 4, n / 2);
    bool kSmall = 1;
    bool firstHalf = 1;
    if (a == b) firstHalf = 0;
    int bs = 0;
    // Query the half that does not contain the 1: the true answer is 0,
    // so the result is 1 exactly when n/2 >= k.
    if (qry(0, n / 2, firstHalf, n) == 0) kSmall = 0;
    if (kSmall) {
        // Binary search on 1-free ranges: result 1 means length >= k.
        for (int k = n / 4; k; k /= 2)
            if (qry(0, bs + k, firstHalf, n) == 0) bs += k;
    } else {
        bs = n / 2 - 1;
        // Binary search on ranges containing the 1: result 0 means length >= k.
        for (int k = n / 4; k; k /= 2)
            if (qry(0, bs + k, 1 - firstHalf, n) == 1) bs += k;
    }
    cout << "! " << bs + 1 << endl;
}

int main() {
    int T = 1;
    cin >> T;
    while (T--) solve();
    return 0;
}
Idea & preparation: YouKn0wWho
2049F - MEX OR Mania
Let's figure out when a sequence is good. Let $$$m$$$ be the maximum element of the sequence. Notice that the bitwise OR of the sequence is at least $$$m$$$, and as $$$\operatorname{MEX} - \operatorname{OR} = 1$$$, the $$$\operatorname{MEX}$$$ has to be at least $$$m + 1$$$. This means all elements from $$$0$$$ to $$$m$$$ have to be present in the sequence. As the $$$\operatorname{MEX}$$$ can't be greater than $$$m + 1$$$, it has to be exactly $$$m + 1$$$.
Now we need to check for which $$$m$$$ the bitwise OR of the elements from $$$0$$$ to $$$m$$$ is exactly $$$m$$$. It's not hard to see that this is true for $$$m = 2^k - 1$$$ for some integer $$$k \geq 0$$$. The reason is that all bits from $$$0$$$ to $$$k - 1$$$ have to be set in $$$m$$$ for the OR to be $$$m$$$ and it's only possible if $$$m$$$ is of the form $$$2^k - 1$$$.
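For example, $$$0 \mid 1 \mid 2 \mid 3 = 3 = 2^2 - 1$$$, so $$$m = 3$$$ works, while $$$0 \mid 1 \mid 2 \mid 3 \mid 4 = 7 \ne 4$$$, so $$$m = 4$$$ does not.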
So, a sequence is good if the maximum element is $$$m = 2^k - 1$$$ for some integer $$$k$$$ and all elements from $$$0$$$ to $$$m$$$ are present in the sequence.
Now, let's see how to answer the queries without any updates. To find the longest good subarray, we could use a two-pointers approach. But a better way is to fix the power $$$k$$$ ($$$0 \leq k \leq \log_2 n$$$) and find the longest good subarray with maximum element $$$2^k - 1$$$. To do this, ignore the elements greater than $$$2^k - 1$$$, which splits the array into maximal segments of consecutive positions whose elements all lie in $$$[0, 2^k - 1]$$$. To check whether a segment is good, track the number of distinct elements in it: if the number of distinct elements is $$$2^k$$$, the segment is good.
So to sum it up, for each power $$$k$$$, we will track some segments/components and the number of distinct elements in them and also the lengths of the segments to get the longest one during queries.
Now regarding the updates: it is hard to track everything if we do the updates normally, but it's easier if we look at them in reverse order!
Then each update decreases the value of $$$a_i$$$ by $$$x$$$. For each power $$$k$$$, we will have to add a new element to a component or merge two components. For tracking distinct elements, we can use a map or unordered map, and to merge we can use DSU with small-to-large merging. And that's pretty much it.
Please check my code for more details.
Overall complexity is $$$O((n + q) \log^2 n)$$$ or $$$O((n + q) \log^3 n)$$$, depending on whether you use an unordered map or a map.
#include <bits/stdc++.h>
using namespace std;
const int N = 1e5 + 9, Q = 3e5 + 9;
using ll = long long;

struct GoodSet { // insert, erase and track distinct and total elements
    map<int, int> mp;
    int size;
    int k;
    GoodSet() {}
    GoodSet(int _k): k(_k), size(0) { }
    void insert(int x, int c = 1) {
        mp[x] += c;
        size += c;
    }
    void erase(int x) {
        if (mp[x] == 1) {
            mp.erase(x);
        }
        else {
            mp[x]--;
        }
        size -= 1;
    }
    void merge(GoodSet oth) {
        for (auto [x, c] : oth.mp) {
            insert(x, c);
        }
    }
    bool is_good() { // check if all elements from 0 to 2^k - 1 exist in the set
        return (int) mp.size() == (1 << k);
    }
    int get_value() {
        if (is_good()) return size;
        return 0;
    }
};

struct MaxSet { // insert, erase and track max element
    map<int, int> mp;
    MaxSet() {}
    void insert(int x) {
        mp[x]++;
    }
    void erase(int x) {
        mp[x]--;
        if (mp[x] == 0) mp.erase(x);
    }
    int get_max() {
        return mp.rbegin()->first;
    }
};

struct DSU { // DSU for each power of 2
    int n;
    int k;
    vector<int> par;
    vector<GoodSet> comp;
    MaxSet good_lengths;
    DSU() {}
    DSU(int _n, int _k): n(_n), k(_k) {
        par.resize(n + 1);
        comp.resize(n + 1);
        for (int i = 1; i <= n; i++) {
            par[i] = i;
            comp[i] = GoodSet(k);
            good_lengths.insert(comp[i].get_value());
        }
    }
    int find(int u) {
        return par[u] = (par[u] == u ? u : find(par[u]));
    }
    void merge(int u, int v) {
        u = find(u); v = find(v);
        if (u == v) return;
        good_lengths.erase(comp[u].get_value());
        good_lengths.erase(comp[v].get_value());
        // small to large merging
        if (comp[u].mp.size() < comp[v].mp.size()) {
            comp[u].mp.swap(comp[v].mp);
            swap(comp[u].size, comp[v].size);
        }
        comp[u].merge(comp[v]);
        comp[v].mp.clear(); // clear to save up memory
        good_lengths.insert(comp[u].get_value());
        par[v] = u;
    }
    // insert or erase an element from the component that u belongs to
    void update_in_component(int u, int x, bool insert = true) {
        u = find(u);
        good_lengths.erase(comp[u].get_value());
        if (insert) comp[u].insert(x);
        else comp[u].erase(x);
        good_lengths.insert(comp[u].get_value());
    }
};

DSU f[18];
ll a[N]; // make it long long as the total sum can be huge
int id[Q], x[Q], ans[Q];

void solve() {
    int n, q; cin >> n >> q;
    for (int i = 1; i <= n; i++) {
        cin >> a[i];
    }
    for (int i = 1; i <= q; i++) {
        cin >> id[i] >> x[i];
        a[id[i]] += x[i];
    }
    MaxSet se;
    for (int k = 0; (1 << k) <= n; k++) {
        f[k] = DSU(n, k);
        for (int i = 1; i <= n; i++) {
            if (a[i] < (1 << k)) {
                f[k].update_in_component(i, a[i], true);
            }
        }
        for (int i = 2; i <= n; i++) {
            if (a[i] < (1 << k) and a[i - 1] < (1 << k)) {
                f[k].merge(i - 1, i);
            }
        }
        se.insert(f[k].good_lengths.get_max());
    }
    for (int qid = q; qid >= 1; qid--) {
        ans[qid] = se.get_max();
        int i = id[qid], sub = x[qid];
        for (int k = 0; (1 << k) <= n; k++) {
            se.erase(f[k].good_lengths.get_max());
            if (a[i] < (1 << k)) f[k].update_in_component(i, a[i], false);
            if (a[i] - sub < (1 << k)) f[k].update_in_component(i, a[i] - sub, true);
            if (a[i] >= (1 << k) and a[i] - sub < (1 << k)) {
                if (i > 1 and a[i - 1] < (1 << k)) {
                    f[k].merge(i - 1, i);
                }
                if (i + 1 <= n and a[i + 1] < (1 << k)) {
                    f[k].merge(i, i + 1);
                }
            }
            se.insert(f[k].good_lengths.get_max());
        }
        a[i] -= sub;
    }
    for (int i = 1; i <= q; i++) {
        cout << ans[i] << '\n';
    }
}

int32_t main() {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    int t = 1;
    cin >> t;
    while (t--) {
        solve();
    }
    return 0;
}
first.
B was brutal
I overcomplicated it and am crying now.
i spent so long missing a case for my B solution lmao
I haven't participated but thought b was easy bro?
C is also awful in my opinion.
jocker
second.
E was interesting.
B broke my back
B broke me completely
Third.
I will quit competitive programming...
b was absolute shit
I didn't really learn anything from this round. B was either brute-forced implementation (like I did) or the editorial's edge-case-heavy direct implementation, and C was just observations derived by dry-running multiple cases. Yesterday's global round had a better C.
There was a simple logic in B: can we fit a certain set of numbers, which has to be smaller than the gap between the last s and the first p!
You can check my submission for a simple implementation!
Check my submission for an elegant approach to B
For F, my solution does actually do the updates in the normal order. I maintained for each power $$$k$$$ a set of ranges, where each range stores a frequency array of size $$$2^k$$$, as well as a multiset of the lengths of valid ranges (to get the answer). A range is valid if every element in its frequency array is nonzero.
Then each update either modifies or splits $$$O(\log n)$$$ ranges. For each power $$$k$$$, find the range that position $$$i$$$ is contained in (or skip if it doesn't exist). If $$$a_i + x$$$ is still within the range, just update its frequency array. Otherwise we need to split it into two ranges. We can use "large-to-small splitting": modify the frequency array of the larger half, then build a new range for the smaller half (if its size is at least $$$2^k$$$). So the total number of frequency-array updates per power $$$k$$$ is at most $$$O(q + n \log n)$$$, and the overall complexity is $$$O((n + q) \log^2 n)$$$.
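To make the split step concrete, here is a minimal sketch for one fixed power $$$k$$$ (the Range struct and helper names are illustrative, not taken from the actual submission):
#include <bits/stdc++.h>
using namespace std;

struct Range {
    int l, r;           // inclusive segment of positions
    vector<int> freq;   // freq[v] = count of value v among a[l..r], v < 2^k
    int distinct;       // number of distinct values present
};

Range build(const vector<int>& a, int l, int r, int k) {
    Range rg{l, r, vector<int>(1 << k, 0), 0};
    for (int i = l; i <= r; i++)
        if (++rg.freq[a[i]] == 1) rg.distinct++;
    return rg;
}

// Position p leaves the range (a[p] grew past 2^k - 1): split [l..r] into
// [l..p-1] and [p+1..r]. The larger half keeps its frequency array and is
// updated element by element; the smaller half is rebuilt from scratch.
pair<Range, Range> split(Range rg, int p, const vector<int>& a, int k) {
    if (p - rg.l >= rg.r - p) {                 // left half is the larger one
        Range right = build(a, p + 1, rg.r, k);
        for (int i = p; i <= rg.r; i++)         // peel positions p..r off
            if (--rg.freq[a[i]] == 0) rg.distinct--;
        rg.r = p - 1;
        return {rg, right};
    } else {                                    // right half is the larger one
        Range left = build(a, rg.l, p - 1, k);
        for (int i = rg.l; i <= p; i++)         // peel positions l..p off
            if (--rg.freq[a[i]] == 0) rg.distinct--;
        rg.l = p + 1;
        return {left, rg};
    }
}

int main() {
    int k = 2;                                  // values 0..2^k - 1 = 0..3
    vector<int> a = {0, 1, 2, 3, 1, 0, 2, 3};
    auto [L, R] = split(build(a, 0, 7, k), 4, a, k);
    cout << L.distinct << ' ' << R.distinct << '\n';  // prints "4 3"
}
Since the smaller half is always the one rebuilt from scratch, each position can be rebuilt at most $$$O(\log n)$$$ times per power, which is where the $$$O(q + n \log n)$$$ update count above comes from.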
I used BFS in problem C. I thought I would get MLE or TLE, but I passed the tests. :)
How?? I didn't think of it. Can you explain your approach a little?
I solved it using dfs but I think the idea should be pretty much the same. First, I make a graph where the edges connect friends. Then, I run dfs and every time some vertex is visited, I set its value to the mex of already visited neighbors. It works because I never break the condition for the vertices that were already set
ahh ok. I'll see your code for the details.
Very similar. For each node $$$u$$$, if it is not the MEX of its neighbors, I set its value to the MEX. If it is, I set the unassigned neighbors to MEX $$$+ 1$$$.
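A minimal sketch of the idea in this thread (visit vertices in index order and assign each the MEX of its already-assigned neighbors; as noted further down in the comments, correctness depends on the visit order, and the helper names here are mine):
#include <bits/stdc++.h>
using namespace std;

int mex_of(const vector<int>& vals) {
    set<int> s(vals.begin(), vals.end());
    int m = 0;
    while (s.count(m)) m++;
    return m;
}

int main() {
    int n = 5, x = 1, y = 3;                    // 1-based, as in the problem
    vector<vector<int>> adj(n + 1);
    for (int i = 1; i <= n; i++) {              // cycle edges i -- i%n+1
        int j = i % n + 1;
        adj[i].push_back(j);
        adj[j].push_back(i);
    }
    adj[x].push_back(y);                        // the extra friendship
    adj[y].push_back(x);
    vector<int> a(n + 1, -1);
    for (int u = 1; u <= n; u++) {              // visit in index order
        vector<int> seen;
        for (int v : adj[u])
            if (a[v] != -1) seen.push_back(a[v]);
        a[u] = mex_of(seen);
    }
    for (int u = 1; u <= n; u++) cout << a[u] << ' ';   // prints 0 1 2 0 1
}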
ahh B...
got d right after the contest ended
Same.
IMO D would've fit its position better if the time limits had cut $$$O(nm^3)$$$ solutions! Or maybe I'm just salty that I spent way too long finding the $$$O(nm^2)$$$ solution XD
The $$$n^4$$$ and $$$n^3$$$ versions are exactly the same: just maintain an array of minimums for each column at each row. I didn't even realize my code was $$$n^4$$$; I submitted twice before realizing that adding that vector was enough.
I think you misunderstood me; some $$$O(nm^3)$$$ solutions passed: 297536216.
Wait!!! How is that possible? $$$5 \cdot 10^4 \cdot (200 \cdot 200)$$$ is around $$$2 \cdot 10^9$$$ iterations. How is the solution getting accepted?
I had a similar solution, $$$O(nm^3)$$$, but mine TLEd on tc5. I could not figure out why.
Submission: 297577834
It passes in 3.3s when I tried to submit it in a mashup gym.
Still, how could I make it pass?
(Also, I take inf = 1e15.)
nvm, I wrote an iterative dp and it works just fine in under 1s. yayy
Submission: 297581320
Never had it crossed my mind to put the absolute full path in the include directive. That's cursed af.
haha, cursed but it works! I was not able to figure out alternatives with my IDE, so I thought to leave it be; why should it matter.
wait... $$$O(nm^3)$$$ actually works?... you're right, limits cutting those solutions would've been nice.
This is brutally cruel. I got WA because my max query count was $$$34$$$... Why is there a need to outright destroy implementations that simply lack a bit of thinking that isn't crucial to the problem idea?
(Of course, I am a bit salty, and I don't think I should really blame it, since I upsolved either way, but the "allow less optimized solutions" intention feels quite not-so-there.)
while (l <= r) is a very dangerous way of binary searching, because in its most common form you don't make full use of the info you get after each query. Using every bit of info, you could've passed with just the 29 queries you had.
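For reference, here is a small sketch of that query-efficient style: the same power-of-two descent as the for (int k = n / 4; k; k /= 2) loops in the editorial's E code, spending at most one predicate call per halving. The ask predicate is a stand-in for whatever query reports "is this guess large enough":
#include <bits/stdc++.h>
using namespace std;

// Returns the smallest v in [1, hi] with ask(v) == true, assuming ask is
// monotone (false ... false true ... true); hi is a power of two here.
int find_smallest_true(int hi, function<bool(int)> ask) {
    int pos = 0;                                // largest index known "false"
    for (int step = hi / 2; step >= 1; step /= 2)
        if (pos + step <= hi && !ask(pos + step)) pos += step;
    return pos + 1;
}

int main() {
    int hidden = 37;                            // pretend this is k
    auto ask = [&](int v) { return v >= hidden; };
    cout << find_smallest_true(64, ask) << '\n';   // prints 37
}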
So you are telling me that I could only be suboptimal in at most $$$1$$$ place out of $$$2$$$... :,)
Btw, I'm curious about the binsearch implementation. How could it be made to use queries better? I usually code that way because it's the most mathematically coherent algorithm in my head.
I remember a problem called Ruler or something from a Div. 4; not sure if I can find it. It showed me how dogshit this way of binary searching is. I still do it because I don't have to worry about the floor or ceil of the middle point.
Edit: problem 1999G1 - Ruler (easy version)
...I ternary searched that problem, and with ternary search I had a completely different setup...
Still, I think I kinda understand what you're getting at here...
My binary search is like this
btw, your exact same WA code passes if you change x4 = 1^x1^x2^x3
Yeah, I did something similar to that already. It's not hard to figure out, but it's a pain to squeeze it in sometimes...
B made me want to KMS.
I think this is even more difficult than before
I dislike C so much, how did people come up with the solution?
i thought C was nice lol
my solution was a bit diff, basically I set $$$a_x=0$$$ and $$$a_y=1$$$, then filled in everything between
In such problems, thinking about a few specific cases and then generalizing helps a lot.
Look at the graph formed by the numbers, and first consider the cases where x and y are already friends:
Case 1: N is odd. The smallest case is a cycle of 3; it's easy to see 0-1-2 works. Can you generalise this?
If I were to add 2 nodes between the 0 and 1, they would be 1 and 0, giving us a solution for N = 5. You can extend this to all odd N.
Case 2: N is even. The smallest case is a cycle of 4; here 0-1-0-1 works! If I were to add 2 nodes between a 0 and a 1, they would be 1 and 0.
With this you've generalised the solution for even N.
What's left is the cases where x and y aren't already friends.
Here you can have an odd cycle next to an even cycle, two even cycles, or two odd cycles. As above, starting with cycles of lengths (3,4), (3,3), (4,4) and then generalising helps you see the construction.
297495561
C was, without a doubt, the most utterly repulsive problem I'd ever had the misfortune of encountering.
very cool contest
tho the difficulty had a very sudden jump at E while maintaining a steady growth from A to D (in my opinion, of course)
Where is the cycle in the DP transition for D coming from? Surely you can just do $$$g(i, j, x) = \text{min}(g(i, j - 1, x), f(i - 1, j) + kx) + a(i, j + x)$$$, and there is no cycle?
That was also my solution and it passed. Probably the authors just overcomplicated things
Is the recurrence given in the editorial even correct?
Yes you are right, I got the cycle part mixed up with another solution. (the editorial is updated)
The recurrence in the editorial is still incorrect: the $$$kx$$$ term should be with the $$$f(i-1,j)$$$ portion only.
I am confused. What about $$$g(i,0,x)$$$ depending upon $$$g(i,m-1,x)$$$? Why shouldn't there be a cyclic dependency?
It doesn't. You can only move down or right, you can't wrap around the grid yourself (only the rows cyclically shift). So the only way to get to cell $$$(i, 0)$$$ is from cell $$$(i - 1, 0)$$$, and so $$$g(i, 0, x)$$$ depends only on $$$f(i - 1, 0)$$$.
I was one +-1 away from solving E during the contest... Cool problem though
Editorial for B made me pity meself.
Maybe you over-complicated B during the contest and panicked? A few things were obvious: if there is no 'p' or no 's', then the answer is obviously YES, and if a 'p' comes before an 's', then it's impossible; it's also trivial if we just build the subarrays and check them as described in the problem. The next thing I did was try a few cases like 'sspp' etc., and I found that YES is only possible for strings of the form 'sppp...' or 'ssss...p'; otherwise it won't be possible. Usually I am too dumb for these problems, but maybe yesterday was just a lucky day for me.
Great observation Lve. You weren't lucky, you are improving. Also, that's a lot of maybes; I had a hard time dealing with my maybes, so I prefer to say I sucked.
For F, you can solve in $$$O(n \log n)$$$ as follows. Observe that when checking for $$$2^k$$$, only sets of size at least $$$2^k$$$ can possibly work, so delay building them until the size reaches $$$2^k$$$. With this, it is reasonable to use just an array of size $$$2^k$$$ to store the frequency array. Instead of merging two sets in $$$O(\min(|A|, |B|))$$$ times the set constant, we can merge in exactly $$$2^k$$$. A simple amortization shows that this process is $$$O(n)$$$ for each target power of two, so this part works in $$$O(n \log n)$$$.
We also have to maintain the maximum size over all working sets. This costs $$$O(X \log X)$$$ for $$$X$$$ at most the number of built sets, with the log coming from the constants of map. Notice that when checking for $$$2^k$$$, $$$X$$$ is bounded by $$$\frac{n}{2^k}$$$. Summing this over $$$k$$$, this part is also $$$O(n \log n)$$$.
Edit: I forgot that when you process queries, the same already-built set can be moved in and out of the working sets, so unfortunately the last, seemingly easiest part is actually the slowest at $$$O(n \log^2 n)$$$. I don't see a good way to save the log in this task.
Second edit: it turns out saving this last log isn't as bad as I thought. It is not a true saving, but a vEB tree (rather, the practical base-64 variant) allows this final part to be done in $$$O(n \log n \log \log n)$$$ overall.
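A compact way to see the first bound claimed above: for a fixed power $$$k$$$, at most $$$n / 2^k$$$ sets of size $$$\ge 2^k$$$ can ever be built, and each build or merge costs $$$2^k$$$, so the work per power is $$$\frac{n}{2^k} \cdot 2^k = n$$$; summing over the $$$\log n$$$ powers gives $$$O(n \log n)$$$.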
Alternative solution to C and D.
C: If $$$x > y$$$, swap $$$(x, y)$$$. Set $$$A_x = 0$$$, then alternate between 0 and 1 going right from $$$x$$$, wrapping around, until $$$y$$$ (not including $$$y$$$); do the same on the left side. Then $$$A_y = \max(A_{y-1}, A_{y+1}) + 1$$$. This works because $$$A_y \ge 1$$$ must hold, since $$$y$$$ is connected to $$$x$$$ and $$$A_x = 0$$$.
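A small sketch of this construction on one hard-coded instance (the instance and variable names are mine):
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n = 6, x = 2, y = 5;                     // 1-based, x < y
    vector<int> a(n + 1);
    a[x] = 0;
    int cur = 1;
    for (int i = x % n + 1; i != y; i = i % n + 1, cur ^= 1)
        a[i] = cur;                              // alternate to the right of x
    cur = 1;
    for (int i = (x + n - 2) % n + 1; i != y; i = (i + n - 2) % n + 1, cur ^= 1)
        a[i] = cur;                              // alternate to the left of x
    int left = (y + n - 2) % n + 1, right = y % n + 1;
    a[y] = max(a[left], a[right]) + 1;           // one more than y's neighbors
    for (int i = 1; i <= n; i++) cout << a[i] << ' ';   // prints 1 0 1 0 1 0
}
For the cycle 1-2-3-4-5-6 with the extra edge (2, 5), the printed assignment can be checked against the MEX condition directly.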
D: Very similar to the editorial, but I set up the dp differently.
Let $$$dp[i][j][b] =$$$ the minimum cost to get to $$$(i,j)$$$, shifting the $$$i$$$th row $$$b$$$ times.
Let $$$stored[i][j] =$$$ the minimum cost to get to $$$(i,j)$$$ including operation costs and across all possible shifts.
Initially, $$$dp[i][j][b] = \infty$$$ for all $$$(i,j)$$$, except $$$dp[0][j] = arr[0][j]$$$.
Transitions are as follows:
$$$dp[i][j][b] = min(dp[i][j-1][b] + arr[i][j],stored[i-1][j] + arr[i][j])$$$
$$$stored[i][j] = \min(dp[i][j][b] + b \cdot k)$$$ over all $$$b$$$ from $$$0$$$ to $$$m - 1$$$
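A minimal sketch of this formulation on a tiny hard-coded instance (the instance, names, and bounds are mine; arr[i][(j + b) % m] is the value at column j after b left shifts):
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll INF = 1e18;

int main() {
    int n = 2, m = 3;                           // tiny hard-coded instance
    ll k = 1;
    vector<vector<ll>> arr = {{2, 0, 5}, {1, 7, 0}};
    // stored[j]: min cost to stand on cell (i, j), shift costs included.
    vector<ll> stored(m, INF);
    for (int i = 0; i < n; i++) {
        vector<ll> nstored(m, INF);
        for (int b = 0; b < m; b++) {           // fix the shift of row i
            vector<ll> dp(m, INF);              // dp[j] = dp[i][j][b] without b*k
            for (int j = 0; j < m; j++) {
                ll best = (i == 0 && j == 0) ? 0 : INF;   // entry point
                if (j > 0) best = min(best, dp[j - 1]);   // from the left
                if (i > 0) best = min(best, stored[j]);   // from above
                if (best < INF) dp[j] = best + arr[i][(j + b) % m];
                if (dp[j] < INF)                // pay the shift cost once per row
                    nstored[j] = min(nstored[j], dp[j] + b * k);
            }
        }
        stored = nstored;
    }
    cout << stored[m - 1] << '\n';              // prints 4 for this instance
}
The three nested loops (rows, shifts, columns) give the same $$$O(nm^2)$$$ as the editorial.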
I did the following DP in D: $$$dp[i][j]$$$ is the mincost of getting to cell $$$(i,j)$$$, with $$$dp[i][j] = \min_{t \le j} (dp[i-1][t] + f[t][j])$$$, where $$$f[t][j]$$$ is the mincost of getting from $$$(i,t)$$$ to $$$(i,j)$$$. Computing $$$f$$$ is the hard part: I compute all $$$f[l][l+len-1]$$$ for each given $$$len = 1..m$$$.
First, I pushed $$$b[i][l+len-1] - b[i][l-1] + k \cdot (l-1)$$$ into a queue with minimum, for all $$$l = 1..m$$$. In the queue, for each $$$l$$$, I keep the values for all $$$m$$$ cyclic shifts of row $$$i$$$. Then $$$f[l][l+len-1]$$$ for each $$$l = 1..m-len+1$$$ is $$$f[l][l+len-1] = (\text{min of the queue}) - k \cdot (l-1)$$$.
$$$f[l][r]$$$ for each row $$$i$$$ is calculated in $$$O(m \log m)$$$, and $$$dp[i][j]$$$ is calculated in $$$O(m)$$$ for each $$$(i,j)$$$, so the total complexity is $$$O(nm^2 \log m)$$$.
An interesting approach to finding cheapest paths on a "layer" in a matrix!
I found it in a different way: for each length $$$\text{len}$$$, do the following (assuming $$$i$$$ is fixed):
- Calculate all sums (from each start position $$$j$$$) of length $$$\text{len}$$$: $$$\sum_{k=j}^{j+\text{len}-1} a_{i,k}$$$
- Find the minimal of them; let it be at position $$$j = \text{pos}$$$. Then the cheapest way of passing from $$$(i, \text{pos})$$$ to $$$(i, \text{pos}+\text{len}-1)$$$ is the path without any shifting (just $$$a_{i,\text{pos}} + a_{i, \text{pos}+1} + \ldots + a_{i, \text{pos}+\text{len} - 1}$$$)
- Next, moving from $$$\text{pos}$$$ to the left, choose the cheapest path from:
My submission: 299116684. Total complexity: $$$O(nm^2)$$$, but this solution loses generality, because it uses the fact that the penalty for shifting is linear.
Note that you can do binary search directly on $$$[n/4+1, n-1]$$$ in E. Code: 297545783.
Nice contest... I'm happy with my performance, finally reached specialist!
I don't understand why my dp solution for D did not work; it is failing on 2 of the sample inputs. Can someone please point out where I'm going wrong here? (Assuming I'd have optimized the transition to make it $$$O(nm^2)$$$ by saving the min values for each row.)
oops... I've fixed it now, thanks!
I have done something similar, ig you may take a look.
thanks for the code... I understand your approach; I must've done something wrong in one of my transitions
https://codeforces.net/contest/2049/submission/297595468
I could have submitted B if I had only 5 more seconds, hurts lol
C took 10% of the time compared to B.
For C, I just kept iterating over the answer until it converged. I did this coz I thought it would converge in ~4 steps, but I'm not sure exactly why.
can somebody hack my solution? or help me to prove why this works? 297490012
Yeah. The thought train is simple: friends should not have equal values, so the answer should be at most $$$\text{cnt}_{\text{friends}}$$$. And since you can have at most $$$3$$$ friends (two adjacent, one predetermined by $$$(x, y)$$$), so...
B makes me feel like I am dumb:-*
I am going back to noobie
-121 :D
Nice B
Could anyone suggest similar problems or tell me what topic this falls under? Any tips to solve these types of problems faster?
B provides so many cases. When they provide so many, you know you have to look at them. And then you should do a bit of reasoning, which is really easy, I think.
It was greedy; no other concepts are required.
This contest was more inclined towards implementation. Though, I became specialist thanks to this contest.
For D, the second identical for loop is unnecessary, and also there is no path from [i][m-1] to [i][1], so there is no need to write tmp[(j+m-1)%m].
So simpler code is:
(100% correct)
Also, tmp[(j-1)%m] can be changed to tmp[j-1] (clearer...)
Why is there no path from [m-1] to [1]? For example, what about the case where we shift the $$$i$$$th row to the left $$$m-1$$$ times? Then [i][1] is to the right of [i][m-1].
I think you misunderstood: we can move only down or right, and that's not about shifts. We already know how many shifts we did, and tmp is the row AFTER the shifts, so we can simply write: tmp[j] = min(tmp[j], tmp[j-1] + a[i][(j+shift)%m]). (If you still don't understand, the proof is the code: you can submit it.)
$$$B > D$$$
Codeforces Hot News!
Wow! Coder chenlinxuan0226 competed in Codeforces Round 994 (Div. 2) and gained -160 rating points taking place 4892
Share it!
Congratulations!
I'll upvote this contest for great contribution to chenlinxuan0226's rating.
ts funny asf
My 4th contest, and Div 2 gives me a real reality check. Solved only A... and crossed the 1000 barrier, yay! 2025 will be a year of CP for me. :)
For B, we just need to check if the array [1,2,3,...,n-1,n] or [n,n-1,...,2,1] satisfies the condition, so we don't have to worry about corner cases.
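A quick sketch of that check (helper names are illustrative): brute-force verify a fixed p against s, then try [1..n] and [n..1]:
#include <bits/stdc++.h>
using namespace std;

bool is_perm(vector<int> v) {                   // does v contain exactly 1..|v|?
    sort(v.begin(), v.end());
    for (int i = 0; i < (int)v.size(); i++)
        if (v[i] != i + 1) return false;
    return true;
}

bool ok(const vector<int>& p, const string& s) {
    int n = p.size();
    for (int i = 0; i < n; i++) {
        if (s[i] == 'p' && !is_perm(vector<int>(p.begin(), p.begin() + i + 1)))
            return false;                       // prefix p[0..i] must be a permutation
        if (s[i] == 's' && !is_perm(vector<int>(p.begin() + i, p.end())))
            return false;                       // suffix p[i..n-1] must be a permutation
    }
    return true;
}

int main() {
    string s = "s.p.";                          // sample-style input
    int n = s.size();
    vector<int> inc(n), dec(n);
    iota(inc.begin(), inc.end(), 1);            // [1, 2, ..., n]
    for (int i = 0; i < n; i++) dec[i] = n - i; // [n, ..., 2, 1]
    cout << (ok(inc, s) || ok(dec, s) ? "YES" : "NO") << '\n';
}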
Very nice and simple solution!
Why would these always be valid solutions?
Just being curious: what do problems B and C teach us?
For problem B, my thought process was: the segments that need to form permutations must be in a subset relation; otherwise, there is an element in the smaller segment that is not in the larger segment, and then the larger segment is not a permutation.
Hence all segments should form a chain of subsets. To check this, intersect all the segments and check whether the intersection is one of the segments.
submission
"since a is not the entire p, a does not contain pn. However, b contains pn." In the editorial for problem B, can anyone explain how this contradiction is happening? As per my understanding of this line, if b contains pn, what is the problem? b may be larger, so that b contains some extra elements that a does not have.
This comments section is satisfying. I thought I was the only one struggling with B lol.
In the MEX cycle problem, what I did was: I maintained the friends of each element, then iterated from 1..n, iteratively assigning the MEX by looking at each element's friends. For the answer array, I initially kept -1 for all values. During the iteration, if a friend's value came out to be -1, I just took the element itself, and if the friend's value was not -1 (which is the case where I must have calculated that friend's MEX before), I used the MEX value I had stored in the answer array. This approach works, but only if I loop from 1..n; if I choose any other order, or iterate in reverse, it fails on some cases, and that's why I'm confused. If anyone gets what's happening here, please reply to me.
Submission: 297573068
Submission: 297522704
For the B question, what I did was: I iterate through the string, and if I see an 's' at index i, I push a pair {i, n-1} into a vector, and if I see a 'p' at index i, I push {0, i} into the vector. At the end, I iterate across the vector and check: if either the first element of all the pairs is 0, or the second element of all the pairs is n-1, then the answer is "YES"; otherwise "NO".
This way there is no need to handle EDGE CASES, as we are just checking whether the required permutations either all end at the same index or all start from the same index; in all other cases it is a "NO".
In problem D, why do you attach $$$(*)$$$ to the transition? Is it a subtraction or just something to explain later?
Also, I don't get why "the $$$g(i,j-1,k)$$$ term is from the case where you move from $$$(i,j-1)$$$ to $$$(i,j)$$$"; shouldn't it be $$$g(i,j-1,x)$$$?
The $$$(*)$$$ part has been edited; it's supposed to be a marker for the equation.
This B made me cry
Author of B must get death penalty for such shit crime
Kudos to the authors of this round, I really liked the problems!
My intuition for B was the following:
- create a vector of the required (left, right) indexes between which there must be a permutation, and sort them by the size of the permutation
- then take them one by one: our first permutation will be the first one in the list, and from the second one onwards, we check the condition: if the new permutation we want to write has its borders outside of what we are already assuming is a permutation, then we extend our permutation to cover this new area
- otherwise, this new permutation partially overlaps with our current one, but also partially doesn't, and we run into an incompatibility, meaning the construction of the final permutation is impossible
(Sorry if my thought process isn't understandable; English isn't my native language.)
In problem E, why does "Make 2 queries [1,n/4] and [n/4+1,n/2]. This tells us which half the 1 is in: it is in [1,n/2] if the query results are different and [n/2+1,n] otherwise." work?
Since both queries have the same length, their results will be both correct or both flipped. If the 1 is in $$$[1, n/2]$$$, the only 1 in $$$a$$$ will be in exactly one of the queries, so they will have different results. If not, the two results will be both 0 if not flipped or both 1 if flipped.
Is the proof for B correct?
"However, b cannot contain a: since a is not the entire p, a does not contain pn. However, b contains pn. Contradiction."
This doesn't really make sense.
Note that $$$p$$$ must be a permutation, so elements of $$$a$$$ being in $$$b$$$ implies that $$$a$$$ must be a subarray of $$$b$$$.
I get that, but why would that be a contradiction?
[1, 2] is a subarray of [1, 2, 3]. The first one doesn't contain 3; the second one does.
"b cannot contain a: since a is not the entire p, a does not contain pn. However, b contains pn. Contradiction."
What's the contradiction?
Evirir, can you explain how that is a contradiction? It should be like: some element is a member of a but it is not in b; then the contradiction works, isn't it? But which element of a should we consider?
The contradiction is that $$$a$$$ must be a subarray of $$$b$$$, but $$$b$$$ cannot contain $$$a$$$ (i.e. $$$a$$$ cannot be a subarray of $$$b$$$). The two bullet points contradict each other.
I mean to say: you wanted to show that a cannot be a subarray of b, but the proof is not exactly showing that, because it argues about one element of b which is not in a; for the proof to work, we would need to argue about one element of a which is not in b.
Oh good point, thanks for pointing it out! I have edited it
Problem F is too implementation heavy
i like E a lot
Has anyone approached Problem D using recursive DP not iterative?
Here you go
Thanks, your code is quite clear and understandable
Hello. Can you please explain the reason for checking from every possible position instead of just (n, m)?
Implementing the recursion from (n, m) would lead to exploring m transitions in one call; instead, I just explored every transition independently. This way it is easy to code the recursion.
First 2 probs weren't that hard tho I got +1 each
The problem was indeed challenging, but the tutorial was fantastic! It broke down the logic so clearly that everything made sense and was easy to grasp. Huge thanks to the author for such a straightforward explanation; it really helped me learn and understand the approach better.
I was practicing old problems and noticed that MEX Destruction is exactly the same as 1696B - NIT Destroys the Universe:
https://codeforces.net/problemset/problem/1696/B
F can be solved by adding barriers instead.
I'm surprised how simple the solution to B is. During the contest, I created a list of segments to check the validity of the ranges. I completely overlooked the easy solution.
My approach to F:
Firstly, for the complex equation which means nothing at first look, we do this deduction:
Handling updates requires a major observation: the value of a particular element will only increase. This means that the ranges we calculated in the 2nd part can only be broken down into smaller pieces; they cannot merge or increase in size.
This observation leads to a simple idea:
We stored a freq map corresponding to each Range in the 2nd part; we will use that here:
I spent hours on step 3 thinking it would give me linear complexity, before realizing that the combined splits on a level will always result in $$$n \log n$$$ complexity across the queries.
Submission: 299146450
Can someone recommend dp problems which require optimization like in problem D? Thanks in advance.