Thank you for participating! We put a lot of effort into this contest. Special thanks to TheScrasse for contributing to these problems.
Person | A | B | C1 | C2 | D | E | F | G | H |
---|---|---|---|---|---|---|---|---|---|
null_awe | 800 | 1000 | 1200 | 1700 | 2200 | 2400 | 2500 | 2900 | 3400 |
buffering | 800 | 1000 | 1300 | 1800 | 2300 | 2400 | 2600 | 3100 | 3500 |
redpanda | 800 | 1100 | 1200 | 1800 | | | | | |
TheScrasse | 800 | 1000 | 1200 | 1600 | 2200 | 2500 | 2500 | 3100 | 3400 |
bronze_coder | 800 | 1000 | 1600 | 1800 | 2400 | | | 2700 | 3500 |
awesomeguy856 | 800 | 1100 | 1300 | 1700 | 2200 | 2300 | 2600 | | |
flamestorm | 800 | 1100 | 1200 | 1600 | 2200 | 2400 | 2500 | 3100 | 3200 |
Ste | 800 | 1100 | 1000 | 1600 | 2000 | 2400 | 2400 | | |

 | A | B | C1 | C2 | D | E | F | G | H |
---|---|---|---|---|---|---|---|---|---|
Average | 800 | 1050 | 1250 | 1700 | 2214 | 2400 | 2516 | 2980 | 3400 |
Actual | 800 | 1100 | 1300 | 1700 | 2000 | 2400 | 2500 | 3200 | 3300 |
Also, check out this video editorial for problems A through C2 by one of our testers! https://www.youtube.com/watch?v=hS8Z3k57f6Q
Solutions
1984A - Strange Splitting
Problem Credits: buffering
Analysis: buffering
When is it impossible?
When $$$a = [x, x, \dots, x]$$$ for some $$$x$$$.
Color exactly one element red to make the red range $$$0$$$. Which one do you pick so that the blue range is different?
Read the hints.
It is impossible to color the array when all the elements are the same, because the range for the red and blue elements will always be $$$0$$$.
Otherwise, notice there will always be at least $$$2$$$ distinct values in the array. This means there is a way to get a positive range for the blue elements, by taking the maximum value and the minimum value and coloring them blue. This works because $$$n \geq 3$$$, so there is at least one element left.
Since we can get a positive range, we can then try to get the red elements to have a range of $$$0$$$, by coloring exactly one value red. So, we can color any $$$a_i$$$ for $$$2 \leq i \leq n - 1$$$ red, since it will not affect the positive range we constructed for the blue elements. For simplicity's sake, our solution chooses $$$i = 2$$$. Then, the remaining elements can be colored blue.
Therefore, our final solution is to check if it is impossible, or color $$$a_2$$$ red and the rest blue.
#include <iostream>
#include <vector>
#include <string>
using namespace std;
int main() {
    int T; cin >> T;
    while (T--) {
        int n; cin >> n;
        vector<int> a(n);
        for (int i = 0; i < n; i++) {
            cin >> a[i];
        }
        // a is given sorted, so all elements are equal iff a[0] == a[n - 1].
        if (a[0] == a[n - 1]) {
            cout << "NO" << "\n";
        }
        else {
            cout << "YES" << "\n";
            // Color a_2 red (red range 0) and everything else blue
            // (the blue range is a_n - a_1 > 0, so the two ranges differ).
            string s(n, 'B');
            s[1] = 'R';
            cout << s << "\n";
        }
    }
}
1984B - Large Addition
Problem Credits: flamestorm
Analysis: null_awe
Solution 1
What must the first (largest) digit be?
What must the other non-unit digits be?
What must the last digit be?
Because every digit is large (between $$$5$$$ and $$$9$$$), every column of the addition produces a carry into the next column. The two addends have the same length, so the sum must be one digit longer, with the leading digit equal to $$$1$$$.
For every other digit except the units digit, the value is the sum of two large digits plus the $$$1$$$ carried over from the previous column, i.e. between $$$11$$$ and $$$19$$$, so the digit written down is between $$$1$$$ and $$$9$$$, inclusive.
For the units digit, there is no previous column to carry over a $$$1$$$, so the column sum is between $$$10$$$ and $$$18$$$ and the digit written down is between $$$0$$$ and $$$8$$$, inclusive. For example, $$$1337$$$ satisfies all three conditions, and indeed $$$1337 = 669 + 668$$$ with both addends consisting only of large digits.
#include <iostream>
using namespace std;
#define ll long long
void solve() {
    ll n; cin >> n;
    // Map the last digit d to (d + 1) mod 10: the valid last digits 0..8 become 1..9,
    // and an invalid 9 becomes 0. Now every non-leading digit simply has to be nonzero.
    n = n - n % 10 + (n % 10 + 1) % 10;
    while (n > 9) {
        if (n % 10 == 0) {
            cout << "NO\n";
            return;
        }
        n /= 10;
    }
    // The leading digit must be exactly 1.
    cout << (n == 1 ? "YES\n" : "NO\n");
}
int main() {
    ios_base::sync_with_stdio(false); cin.tie(NULL);
    int t; cin >> t;
    for (int i = 0; i < t; ++i) solve();
    return 0;
}
1984C1 - Magnitude (Easy Version) and 1984C2 - Magnitude (Hard Version)
Problem Credits: buffering
Analysis: null_awe
How many times do we need to pick option $$$2$$$?
We only need to pick it once. How can we calculate the final value for every position we can pick?
We only need to pick option $$$2$$$ once. Why? Assume we picked option $$$2$$$ more than once, and consider the last two times it was picked. Both of these must occur when $$$c + a_i$$$ is negative, otherwise there is no reason to pick option $$$2$$$ over option $$$1$$$. Now suppose we chose option $$$1$$$ at the earlier of these two times instead: the value of $$$c$$$ right before the later operation would be smaller (more negative), so its absolute value would be larger, and the later option $$$2$$$ would produce a value at least as large. Hence it is always at least as good to use option $$$1$$$ at the earlier time, and repeating this argument leaves at most one use of option $$$2$$$ that matters.
Because we only need to use option $$$2$$$ once, we can brute force where we use that operation and compute the final value of $$$c$$$ with prefix sums.
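For concreteness, here is a minimal sketch of that brute force (an illustrative sketch of the reasoning above, not the judge solution shown below). If option $$$2$$$ is used only at step $$$i$$$ and option $$$1$$$ everywhere else, the final value is $$$|p_i| + (p_n - p_i)$$$, where $$$p_i$$$ denotes the $$$i$$$-th prefix sum, so it suffices to try every $$$i$$$ together with the case of never using option $$$2$$$.
#include <bits/stdc++.h>
using namespace std;
int main() {
    int t; cin >> t;
    while (t--) {
        int n; cin >> n;
        vector<long long> pre(n + 1, 0);          // pre[i] = a_1 + ... + a_i
        for (int i = 1; i <= n; ++i) {
            long long x; cin >> x;
            pre[i] = pre[i - 1] + x;
        }
        long long best = pre[n];                  // never using option 2
        for (int i = 1; i <= n; ++i)              // option 2 applied exactly at step i
            best = max(best, llabs(pre[i]) + pre[n] - pre[i]);
        cout << best << '\n';
    }
}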
Read the solution to the easy version first.
Let's think a bit more. Where do we actually end up using option $$$2$$$? We only use option $$$2$$$ (usefully) at a position where the prefix sum attains its minimum over the whole array.
Now, focus on each individual "important" use of option $$$2$$$ (where it actually makes a difference from option $$$1$$$), and say it is done at index $$$i$$$. Let's consider the indices before $$$i$$$. Since the operation at index $$$i$$$ is the only important option $$$2$$$, any option $$$2$$$ we use before it must not differ from option $$$1$$$ (meaning that the value of $$$c + a_j$$$ has to be non-negative). Thus, we have a free choice between option $$$2$$$ and option $$$1$$$ at any index $$$j < i$$$ where the prefix sum is non-negative. Say there are $$$x$$$ such indices $$$j$$$.
Now, let's consider the indices after $$$i$$$. Since index $$$i$$$ is at a minimum prefix sum, no later index will ever see a lower value of $$$c$$$ than what $$$c$$$ is right after the operation at index $$$i$$$. Thus, since we used option $$$2$$$ at index $$$i$$$, we can use either of the two options at every step afterwards, so any index $$$j > i$$$ offers two choices of option. Say there are $$$y$$$ such indices $$$j$$$.
The contribution of index $$$i$$$ to the answer is then $$$2^{x + y}$$$.
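As a small worked check (an illustrative example, not part of the original editorial): take $$$a = [-1, 2, -3]$$$. The prefix sums are $$$-1, 1, -2$$$, so the minimum is $$$-2$$$, attained only at $$$i = 3$$$, and $$$k = -2 - 2 \cdot (-2) = 2$$$. Before index $$$3$$$ there is exactly one index with a non-negative prefix sum (namely $$$j = 2$$$, with prefix sum $$$1$$$), so $$$x = 1$$$, and there are no indices after it, so $$$y = 0$$$. The contribution is $$$2^{1+0} = 2$$$, matching the two optimal procedures: option $$$1$$$ at step $$$1$$$, either option at step $$$2$$$, and option $$$2$$$ at step $$$3$$$.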
Special case: the prefix sum never becomes negative. What is the answer then?
#include <iostream>
#include <vector>
#include <climits>
using namespace std;
#define ll long long
void solve() {
    int n; cin >> n;
    vector<int> arr(n); for (int i = 0; i < n; ++i) cin >> arr[i];
    ll sum = 0, mn = 0;
    for (int i = 0; i < n; ++i) sum += arr[i], mn = min(mn, sum);
    // Applying option 2 at the minimum (non-positive) prefix sum gives sum - 2 * mn.
    cout << sum - 2 * mn << '\n';
}
int main() {
    ios_base::sync_with_stdio(false); cin.tie(NULL);
    int t; cin >> t;
    for (int i = 0; i < t; ++i) solve();
    return 0;
}
#include <iostream>
#include <vector>
#include <climits>
using namespace std;
#define ll long long
const ll MAX_N = 400001;
const ll MOD = 998244353;
vector<ll> p2(MAX_N);
void solve() {
    int n; cin >> n;
    vector<int> arr(n); for (int i = 0; i < n; ++i) cin >> arr[i];
    ll sum = 0, mn = 0, ans = 0, abses = 0;
    for (int i = 0; i < n; ++i) sum += arr[i], mn = min(mn, sum);
    if (mn == 0) {
        // The prefix sum never goes negative: both options always coincide in value,
        // so all 2^n procedures achieve the maximum.
        cout << p2[n] << '\n';
        return;
    }
    sum = 0;
    for (int i = 0; i < n; ++i) {
        sum += arr[i];
        if (sum == mn) {
            // Important option 2 at this index: 2^(free choices before + all indices after).
            ans = (ans + p2[n - i - 1 + abses]) % MOD;
        }
        // abses counts earlier positions with a non-negative prefix sum (free choices before i).
        if (sum >= 0) ++abses;
    }
    cout << ans << '\n';
}
int main() {
    ios_base::sync_with_stdio(false); cin.tie(NULL);
    p2[0] = 1;
    for (int i = 1; i < MAX_N; ++i) p2[i] = 2 * p2[i - 1] % MOD;
    int t; cin >> t;
    for (int i = 0; i < t; ++i) solve();
    return 0;
}
1984D - "a" String Problem
Problem Credits: le0n
Analysis: null_awe
What does $$$t$$$ have to include?
Special case: if the string consists only of "$$$\texttt{a}$$$", then the answer is $$$n - 1$$$.
Otherwise, $$$t$$$ has to contain a character that is not "$$$\texttt{a}$$$". Let's consider one approach to counting: force $$$t$$$ to start with the first non-"$$$\texttt{a}$$$" character of $$$s$$$, and see how many such $$$t$$$ work.
To see if one of these strings $$$t$$$ works, we can start with $$$i = 0$$$. As long as $$$i$$$ does not point to the end of the string, we find the next non-"$$$\texttt{a}$$$" character in $$$s$$$ and then check whether the next $$$|t|$$$ characters in $$$s$$$ match $$$t$$$. This check can be done through hashing or using the Z-function. If it fails, this value of $$$t$$$ doesn't work; otherwise, we update $$$i$$$ to the new current position and continue checking.
Now, if we find a string $$$t$$$ that does work, we need to count its contribution to the answer. We could just add $$$1$$$, but obviously not all working $$$t$$$ start with a non-"$$$\texttt{a}$$$" character. Instead, we find the minimum number of unused "$$$\texttt{a}$$$"s directly before the occurrences of $$$t$$$ (call this $$$m$$$); the current $$$t$$$ can then be extended by up to $$$m$$$ "$$$\texttt{a}$$$"s at its beginning. Thus, its contribution to the answer is $$$m + 1$$$.
How fast is this? Because we check at most one string $$$t$$$ of each length, and the check itself can be made to take $$$O(\frac{n}{|t|})$$$, we have a total time complexity of $$$O(n\log{n})$$$ due to the harmonic sum $$$\sum_{\ell=1}^{n} \frac{n}{\ell} = O(n \log n)$$$.
#include <iostream>
#include <vector>
#include <climits>
#include <set>
using namespace std;
#define ll long long
vector<int> z_function(string s) {
int n = s.size();
vector<int> z(n);
int l = 0, r = 0;
for(int i = 1; i < n; ++i) {
if (i < r) z[i] = min(r - i, z[i - l]);
while (i + z[i] < n && s[z[i]] == s[i + z[i]]) ++z[i];
if (i + z[i] > r) l = i, r = i + z[i];
}
return z;
}
void solve() {
int n; // cin >> n;
string s; cin >> s;
n = s.length();
vector<int> nona(n, n);
for (int i = n - 1; i >= 0; --i) {
if (s[i] != 'a') nona[i] = i;
else if (i + 1 < n) nona[i] = nona[i + 1];
}
if (nona[0] == n) {
cout << n - 1 << '\n';
return;
}
string s2 = "";
int i1 = nona[0];
for (int i = i1; i < n; ++i) s2 += s[i];
vector<int> z = z_function(s2);
ll ans = 0;
for (int len = 1; i1 + len <= n; ++len) {
int cur = i1 + len;
int mn = i1;
bool works = true;
while (cur < n) {
if (nona[cur] == n) break;
int bt = nona[cur] - cur;
mn = min(mn, bt);
cur += bt;
if (z[cur - i1] < len) {
works = false;
break;
}
cur += len;
}
if (works) ans += mn + 1;
}
cout << ans << '\n';
}
int main() {
int t; cin >> t;
for (int i = 0; i < t; ++i) solve();
return 0;
}
1984E - Shuffle
Problem Credits: null_awe, thanks to wavelets for helping with brainstorming
Analysis: null_awe
Excluding the root, the maximum number of leaves we can have is the size of a maximum independent set (MIS) of the rest of the tree. Why? First of all, after rooting tree $$$T_2$$$, no two adjacent nodes can both be leaves: no matter which of the two is added to $$$T_2$$$ first, it must become an ancestor of the other, so the first chosen node will not be a leaf. Furthermore, any chosen independent set of nodes can all become leaves, because we can choose to add all of the nodes not in the set to the tree first, leaving all of the remaining nodes childless, i.e., leaves.
To find the answer, we want the maximum, over all ways of removing one leaf from the original tree, of the MIS of the remaining tree. The answer is one greater than this maximum, because the removed leaf becomes the root of $$$T_2$$$ and, having exactly one child there, is itself a leaf.
There are multiple dynamic programming approaches to compute this efficiently, including rerooting and performing dynamic programming on tree edges.
Time complexity: $$$O(n)$$$ or $$$O(n \log n)$$$, depending on implementation.
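For readers less familiar with the MIS computation itself, here is a minimal sketch of the standard subtree DP that these approaches build on (illustrative only; the full solution below additionally handles removing one leaf and the rerooting, and dfsMis is just a made-up name).
#include <bits/stdc++.h>
using namespace std;
// dp[v][0] = best MIS in v's subtree with v excluded, dp[v][1] = with v included.
void dfsMis(int v, int p, const vector<vector<int>>& adj, vector<array<int, 2>>& dp) {
    dp[v] = {0, 1};
    for (int u : adj[v]) {
        if (u == p) continue;
        dfsMis(u, v, adj, dp);
        dp[v][0] += max(dp[u][0], dp[u][1]);  // v excluded: child may or may not be taken
        dp[v][1] += dp[u][0];                 // v included: child must be excluded
    }
}
int main() {
    int n; cin >> n;
    vector<vector<int>> adj(n);
    for (int i = 0; i < n - 1; ++i) {
        int a, b; cin >> a >> b; --a, --b;
        adj[a].push_back(b);
        adj[b].push_back(a);
    }
    vector<array<int, 2>> dp(n);
    dfsMis(0, -1, adj, dp);
    cout << max(dp[0][0], dp[0][1]) << '\n';  // MIS of the whole tree, rooted at vertex 1
}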
#include <iostream>
#include <vector>
#include <map>
using namespace std;
#define pii pair<int, int>
int n;
vector<vector<int>> adj;
vector<pii> edges;
map<pii, int> mp;
vector<vector<int>> dp;
vector<vector<int>> from;
vector<int> miss;
void dfs(int e) {
if (dp[0][e] >= 0 || dp[1][e] >= 0) return;
int p = edges[e].first, v = edges[e].second;
dp[0][e] = 0, dp[1][e] = 1;
if (miss[v] < 0) {
for (int u : adj[v]) {
if (u == p) continue;
int ne = mp[{v, u}];
dfs(ne);
from[0][v] += max(dp[1][ne], dp[0][ne]);
from[1][v] += dp[0][ne];
}
miss[v] = p;
}
if (miss[v] != p && miss[v] != n) {
int ne = mp[{v, miss[v]}];
dfs(ne);
from[0][v] += max(dp[1][ne], dp[0][ne]);
from[1][v] += dp[0][ne];
miss[v] = n;
}
if (miss[v] == n) {
int nne = mp[{v, p}];
dp[0][e] += from[0][v] - max(dp[1][nne], dp[0][nne]);
dp[1][e] += from[1][v] - dp[0][nne];
} else if (miss[v] == p) {
dp[0][e] += from[0][v];
dp[1][e] += from[1][v];
}
}
void solve() {
cin >> n;
adj.clear(), edges.clear();
adj.resize(n), edges.resize(2 * n - 2);
for (int i = 0; i < n - 1; ++i) {
int a, b; cin >> a >> b; --a, --b;
adj[a].push_back(b);
adj[b].push_back(a);
edges[2 * i] = {a, b};
edges[2 * i + 1] = {b, a};
mp[{a, b}] = 2 * i;
mp[{b, a}] = 2 * i + 1;
}
from = vector<vector<int>>(2, vector<int>(n));
miss = vector<int>(n, -1);
dp = vector<vector<int>>(2, vector<int>(2 * n - 2, -1));
for (int i = 0; i < 2 * n - 2; ++i) dfs(i);
int ans = 0;
for (int i = 0; i < n; ++i) {
if (adj[i].size() != 1) continue;
int e = mp[{i, adj[i][0]}];
ans = max(ans, 1 + max(dp[0][e], dp[1][e]));
}
cout << ans << '\n';
}
int main() {
ios_base::sync_with_stdio(false); cin.tie(NULL);
int t; cin >> t;
for (int i = 0; i < t; ++i) solve();
return 0;
}
1984F - Reconstruction
Problem Credits: buffering
Analysis: buffering
Solve for no question marks in the string first.
Add a $$$0$$$ on both sides of $$$a$$$ and $$$b$$$, and add a $$$\texttt{P}$$$ before the start of the string and a $$$\texttt{S}$$$ after the string. Now you are guaranteed to have a $$$\texttt{PS}$$$ somewhere in the string. How does this help?
You know the sum of the array.
Look at adjacent elements in $$$b$$$ and their corresponding characters in the string, and see if you can determine anything about $$$a$$$.
Go back to the original problem, with question marks. We do not know the sum now, but how many possible ones can there be?
How to solve with no question marks?
To simplify things, let's extend all arrays in both directions by one element. In arrays $$$a$$$ and $$$b$$$, we can keep both new values at $$$0$$$, and in the string $$$s$$$, we can set the left character to "$$$\texttt{P}$$$" and the right character to "$$$\texttt{S}$$$" (the other constraints will still be satisfied; why?).
Now, somewhere within the string, we are guaranteed to see the combination "$$$\texttt{PS}$$$", because we now have a string that starts with "$$$\texttt{P}$$$" and ends with "$$$\texttt{S}$$$", so it must transition somewhere in between.
Notice that if we add the two values of $$$b$$$ at a "$$$\texttt{PS}$$$" position, we obtain the sum of the whole array $$$a$$$.
Now, solving becomes simpler. We can focus on adjacent elements in the array. Say we are currently examining indices $$$i$$$ and $$$i + 1$$$. There are four cases:
- We are currently at a "$$$\texttt{PP}$$$". We then know $$$b_i + a_{i+1} = b_{i+1}$$$, so $$$a_{i+1}$$$ is now known.
- We are currently at a "$$$\texttt{PS}$$$". Again, this must be the sum. All we have to do here is check that our sum is correct.
- We are currently at an "$$$\texttt{SP}$$$". Here, we have $$$b_{i} + b_{i+1} - SUM = a_i + a_{i+1}$$$ (why?). The left side is known, and the only requirement on the right side is that both $$$a_i$$$ and $$$a_{i+1}$$$ have magnitude at most $$$m$$$. Thus, to make their maximum magnitude as small as possible, we make them as close to each other as possible: if the left side is $$$x$$$, we assign $$$a_i = \lfloor x / 2 \rfloor$$$ and $$$a_{i+1} = x - a_i$$$.
- We are currently at an "$$$\texttt{SS}$$$". This is similar to the first case, as we know that $$$b_i = a_i + b_{i+1}$$$. Thus, $$$a_i$$$ will be known.
If all these $$$a_i$$$ stay within the allowed magnitude, then we know our answer exists.
Thus, this solves the version without question marks. For the version with question marks, we can only guess what the sum of the entire array is, since we don't know for sure where a "$$$\texttt{PS}$$$" occurs. There are only $$$n + 1$$$ candidates (after the extension): the sums of adjacent pairs of numbers in $$$b$$$.
For every candidate sum, we can run a DP $$$dp[i][j]$$$, where $$$i$$$ is the current index and $$$j \in \{\texttt{P}, \texttt{S}\}$$$ is the last character used in string $$$s$$$. This dynamic programming runs in linear time because of constant-time transitions and a linear number of states. Note that this is only possible due to the independent nature of adjacent pairs, as described earlier.
With this DP, we can calculate how many paths are possible, adding to our answer.
Time complexity: $$$O(n^2)$$$.
#include <iostream>
#include <vector>
#include <cstring>
#include <assert.h>
#include <set>
using namespace std;
#define ll long long
const int INF = 998244353;
// const int BOUND = 1e9;
void solve() {
int n; cin >> n;
int BOUND; cin >> BOUND;
string s; cin >> s;
s = "P" + s + "S";
vector<ll> b(n + 2);
for (int i = 0; i < n; ++i) cin >> b[i + 1];
ll ans = 0;
set<ll> done;
for (int i = 0; i < n + 1; ++i) {
ll sum = b[i] + b[i + 1];
if (done.count(sum)) continue;
int dp[n + 2][2];
for (int j = 0; j < n + 2; ++j) for (int k = 0; k < 2; ++k) dp[j][k] = -1;
// ["P", "S"]
dp[0][0] = 1;
for (int j = 1; j < n + 2; ++j) {
bool tr[2]; tr[0] = tr[1] = true;
if (s[j] == 'P') tr[1] = false;
else if (s[j] == 'S') tr[0] = false;
if (abs(b[j] - b[j - 1]) <= BOUND) {
for (int k = 0; k < 2; ++k)
if (dp[j - 1][k] > -1 && tr[k]) dp[j][k] = dp[j - 1][k];
}
if (dp[j - 1][0] > -1 && tr[1] && sum == b[j] + b[j - 1]) {
// "P" -> "S":
if (dp[j][1] < 0) dp[j][1] = 0;
dp[j][1] = (dp[j][1] + dp[j - 1][0]) % INF;
}
if (dp[j - 1][1] > -1 && tr[0]) {
// "S" -> "P":
ll add = b[j] + b[j - 1] - sum;
ll large = max(abs(add / 2), abs(add - add / 2));
if (large <= BOUND) {
if (dp[j][0] < 0) dp[j][0] = 0;
dp[j][0] = (dp[j][0] + dp[j - 1][1]) % INF;
}
}
}
if (dp[n + 1][1] < 0) continue;
ans = (ans + dp[n + 1][1]) % INF;
done.insert(sum);
}
cout << ans << '\n';
}
int main() {
ios::sync_with_stdio(false); cin.tie(NULL);
int t; cin >> t;
for (int i = 0; i < t; ++i) solve();
return 0;
}
1984G - Magic Trick II
Problem Credits: null_awe
Analysis: null_awe
Solve what the maximum $$$k$$$ will be for different types of arrays.
The maximum is always close to $$$n$$$.
There exist two trivial cases. If the array is already sorted, $$$k = n$$$ works. If the array is a cyclic shift of the sorted array, $$$k = n - 1$$$ works. Otherwise, $$$k = n - 2$$$ or $$$k = n - 3$$$ is sufficient to sort any array.
Let's assume $$$n$$$ is odd first, and construct an answer in the general case with $$$k = n - 2$$$. Because $$$k$$$ is so large, our operations are limited in choices. If we represent the array as a cyclic array with a divider representing the end of the array, an operation can be seen as two choices:
- Move the divider $$$2$$$ positions in any direction.
- Swap the two numbers around the divider, then move the divider by $$$1$$$ position in any direction.
With this representation, the construction becomes quite easy. Because $$$n$$$ is odd, we can use the first type of operation to put the divider wherever we want. Then, using the second type of operation, if the divider is immediately to the right of a specific number, we can move that number all the way to the right by repeatedly swapping it across the divider and then moving the divider one position to the right.
Because we are able to do this, we can bubble sort. For each number, position the divider in $$$O(n)$$$ moves, then move the number to the very right in another $$$O(n)$$$ moves. There are $$$n$$$ numbers total, so this takes a total of around $$$2n^2$$$ operations, fewer if optimized.
What if $$$n$$$ is even? In this case $$$k = n - 2$$$ is not always guaranteed to work. The motivation for seeing this can come from the fact that you can't place the divider anywhere just by using type $$$1$$$ operations.
As a lower bound, we know that $$$k = n - 3$$$ will always work. We can use operations to move the largest number to the very end, then we basically have an array of length $$$n - 1$$$ with $$$k = n - 3$$$, which is the odd case we presented earlier.
So, when can we use $$$k = n - 2$$$ in the even case? Let's consider the number of inversions in the array at any given time. If $$$n$$$ is even, then $$$k = n - 2$$$ is also even, meaning that the parity of the number of inversions will never change with operations. Thus, since a sorted array will have no inversions, we cannot use $$$k = n - 2$$$ if the initial array had an odd number of inversions.
If we have an even number of inversions, we can sort in a similar manner. Except, now, if the inversion constraint prevents moving the divider to the immediate right of our current number with only type $$$1$$$ operations, we can use a few type $$$2$$$ operations to fix the position of the divider (many possible strategies for this, one possible way is shown in the implementation). Overall, this should also take around $$$2n^2$$$ operations total.
#include <iostream>
#include <vector>
using namespace std;
#define pii pair<int, int>
bool sorted(vector<int> arr, int n) {
for (int i = 1; i < n; ++i) if (arr[i] < arr[i - 1]) return false;
return true;
}
bool cyclic(vector<int> arr, int n) {
for (int i = 1; i < n; ++i) if (arr[i] % n != (arr[i - 1] + 1) % n) return false;
return true;
}
void solve() {
int n; cin >> n;
vector<int> arr(n); for (int i = 0; i < n; ++i) cin >> arr[i];
if (sorted(arr, n)) {
cout << n << "\n0\n";
return;
}
if (cyclic(arr, n)) {
cout << n - 1 << '\n';
int pos;
for (pos = 0; pos < n; ++pos) if (arr[pos] == 1) break;
cout << pos << '\n';
for (int i = 0; i < pos; ++i) {
cout << "2 1\n";
}
return;
}
vector<pii> ops;
if (n % 2 == 0) {
int inv = 0;
for (int i = 0; i < n; ++i) for (int j = i + 1; j < n; ++j) if (arr[i] > arr[j]) ++inv;
if (inv & 1) {
int pos;
for (pos = 0; pos < n; ++pos) if (arr[pos] == n) break;
if (pos < 3) {
for (int i = 0; i <= pos; ++i) ops.push_back({2, 1});
vector<int> tmp = arr;
for (int i = 0; i < n - 2; ++i) arr[i] = tmp[(i + pos + 1) % (n - 2)];
}
for (pos = 0; pos < n; ++pos) if (arr[pos] == n) break;
for (int i = pos; i < n - 1; ++i) ops.push_back({3, 4});
vector<int> tmp = arr;
for (int i = 2; i < n; ++i) arr[i] = tmp[((i + pos - 3) % (n - 2)) + 2];
--n;
}
}
cout << n - 2 << '\n';
int div = 0;
for (int i = n; i > 0; --i) {
int pos;
for (pos = 0; pos < n; ++pos) if (arr[pos] == i) break;
pos += 1;
if (pos == i) continue;
if (div % 2 != pos % 2) {
if (n & 1) {
while (div < n) {
ops.push_back({3, 1});
div += 2;
}
div %= n;
} else {
while (div != pos - 1) {
if (div < pos - 1) {
ops.push_back({3, 1});
div += 2;
} else {
ops.push_back({1, 3});
div -= 2;
}
}
if (pos > 1) {
ops.push_back({2, 3});
swap(arr[(div + n - 1) % n], arr[div]);
div = (div + n - 1) % n;
--pos;
}
ops.push_back({3, 1});
div += 2;
ops.push_back({2, 3});
swap(arr[(div + n - 1) % n], arr[div]);
div = (div + n - 1) % n;
}
}
while (div != pos) {
if (div < pos) {
ops.push_back({3, 1});
div += 2;
} else {
ops.push_back({1, 3});
div -= 2;
}
}
for (int j = pos; j < i; ++j) {
ops.push_back({2, 1});
swap(arr[div - 1], arr[div]);
++div;
}
}
if (div % 2 == 1) {
while (div < n) {
ops.push_back({3, 1});
div += 2;
}
div %= n;
}
while (div > 0) {
ops.push_back({1, 3});
div -= 2;
}
cout << ops.size() << '\n';
for (pii p : ops) {
cout << p.first << ' ' << p.second << '\n';
}
}
int main() {
ios_base::sync_with_stdio(false); cin.tie(NULL);
int t; cin >> t;
for (int i = 0; i < t; ++i) solve();
return 0;
}
1984H - Tower Capturing
Problem Credits: flamestorm
Analysis: flamestorm
Are there any useless points?
Draw all triangles whose circumcircle contains all the points. What do you notice?
Claim. We can't ever pick a tower inside the convex hull.
Proof. A circle can contain all the points only if the points on it are vertices of the convex hull; otherwise, the circle necessarily splits the convex hull into two parts, one of which lies outside the circle.
It follows that if our initial two towers aren't on the convex hull, the answer is $$$0$$$. Also, we can safely ignore all points strictly inside the convex hull, since we'll capture them anyway: the convex hull is the union of all triangles whose vertices are vertices of the convex hull. From now on we'll only consider points on the convex hull.
Now comes the key claim of the problem.
Claim. Call a triangle covering if its circumcircle contains all the points. Draw all covering triangles. We claim that these triangles form a triangulation of the convex hull.
Proof. Recall that a triangulation is a set of triangles with pairwise non-intersecting interiors whose union is the polygon. There are two parts to the proof:
- the triangles are pairwise non-intersecting.
- their union is the polygon.
Let's prove them separately.
First we'll prove point (1) directly.
Consider two circles, through points $$$ABC$$$ and $$$DEF$$$. Of course, the convex hull needs to lie in the intersection of the two circles. In particular, the circle through $$$ABC$$$ must contain the points $$$DEF$$$, while the circle through $$$DEF$$$ must contain the points $$$ABC$$$. It follows that the two circumcircles (say, $$$\Omega$$$ and $$$\Psi$$$ respectively) have the following property:
- the points $$$A$$$, $$$B$$$, $$$C$$$ lie on an arc of $$$\Omega$$$ inside $$$\Psi$$$, and
- the points $$$D$$$, $$$E$$$, $$$F$$$ lie on an arc of $$$\Psi$$$ inside $$$\Omega$$$.
The claim follows. Formally, we can define $$$U$$$ and $$$V$$$ as the intersection points of $$$\Omega$$$ and $$$\Psi$$$. Then if we walk along the digon $$$UV$$$ (whose edges are arcs of $$$\Omega$$$ and $$$\Psi$$$), we will pass through $$$A$$$, $$$B$$$, $$$C$$$ along one of the arcs, and $$$D$$$, $$$E$$$, $$$F$$$ on the other. This means that there is some closed convex loop passing through the points $$$A$$$, $$$B$$$, $$$C$$$ before $$$D$$$, $$$E$$$, $$$F$$$, implying those two triangles don't intersect. The proof remains the same even if some of $$$A$$$, $$$B$$$, $$$C$$$, $$$D$$$, $$$E$$$, $$$F$$$ overlap.
Now we'll move on to the proof of (2). Consider any covering triangle $$$ABC$$$. WLOG suppose $$$AB$$$ is not an edge of the convex hull. Consider the points in the halfplane of $$$AB$$$ not containing $$$C$$$, and let $$$C'$$$ be the point among these such that $$$\angle AC'B$$$ is minimized. Then it follows that $$$ABC'$$$ is also covering. It's easy to see why this works and why $$$C'$$$ is unique by the inscribed angle theorem.
As a result, given any covering triangle, we can recursively triangulate the regions it cuts off by a chord. Thus, inductively, the whole polygon will be triangulated. We are done with the proof.
Note that this implies our original three towers in the problem must form a covering triangle, since we create a covering triangle after every operation; thus, at the end of these operations, all but possibly one of these triangles is covering (the "possibly one" is the initial triangle). But such a covering triangulation exists and is unique, as shown, so our initial triangle in fact must be covering.
Now on to the actual operations present in the problem. Using them, we can "construct" the triangulation one step at a time using operations like the one mentioned. Of course, the triangulation is unique, so the only change we can do is the order in which we construct the triangles.
Consider the dual tree of the triangulation (that is, make a vertex for each triangle and an edge between those vertices corresponding to two triangles that share a diagonal of the convex hull). In an operation, we attach a leaf to any vertex, and in the end we end up with the final dual tree. Note that we can start growing the tree at either triangle adjacent to our original diagonal; that is, if our original points are $$$A$$$ and $$$B$$$, then we need to consider rooting our tree at either $$$T_1$$$ or $$$T_2$$$, where those are the two triangles that contain the edge $$$AB$$$ (note that $$$T_2$$$ may not exist).
Let's reverse time. Then given the final tree (rooted at either $$$T_1$$$ or $$$T_2$$$), in an operation we prune a leaf (a leaf here is a vertex with no children). How many ways can we prune all the leaves?
This is a standard dynamic programming problem, since at each step we need to prune all of our children before we prune ourselves. In particular, if the sizes of our childrens' subtrees are $$$s_1, \dots, s_k$$$, then our answer is $$$\binom{s_1 + \dots + s_k}{s_1, \dots, s_k} \cdot \prod_{i=1}^{k} \mathrm{ans}(\mathrm{child}_i)$$$. This DP runs in $$$\mathcal{O}(n)$$$ time, so it is not a problem to compute.
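For concreteness, here is a minimal sketch of this counting DP on the dual tree (my own illustrative code, not the implementation below; child, fact, and invfact are assumed to be built elsewhere, as they are in the full solution).
#include <bits/stdc++.h>
using namespace std;
// ans(v) = (size(v) - 1)! * prod_u ans(u) / prod_u size(u)!  (mod MOD),
// i.e. the multinomial formula above; sz[v] is filled as a side effect.
long long countOrders(int v, const vector<vector<int>>& child,
                      const vector<long long>& fact, const vector<long long>& invfact,
                      vector<int>& sz, long long MOD) {
    long long res = 1;
    sz[v] = 1;
    for (int u : child[v]) {
        res = res * countOrders(u, child, fact, invfact, sz, MOD) % MOD;
        sz[v] += sz[u];
    }
    res = res * fact[sz[v] - 1] % MOD;                         // order the sz[v]-1 prunings below v
    for (int u : child[v]) res = res * invfact[sz[u]] % MOD;   // interleave the child blocks
    return res;
}
int main() {
    const long long MOD = 998244353;
    // Tiny demo: a root (vertex 0) with two leaf children has 2 valid pruning orders.
    vector<vector<int>> child = {{1, 2}, {}, {}};
    vector<long long> fact = {1, 1, 2, 6};
    vector<long long> invfact = {1, 1, (MOD + 1) / 2, 0};  // only invfact[0..1] is actually used here
    vector<int> sz(3, 0);
    cout << countOrders(0, child, fact, invfact, sz, MOD) << '\n';  // prints 2
}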
We can easily compute the triangulation in $$$\mathcal{O}(n^2)$$$ time as follows: given an edge $$$PQ$$$, we need to find the point $$$R$$$ in a halfplane such that $$$(PQR)$$$ covers all points, and as mentioned before, by the inscribed angle theorem this is precisely the point $$$R$$$ such that $$$\angle PRQ$$$ is minimized. So you can find it with an $$$\mathcal{O}(n)$$$ sweep and add the new edges to our triangulation.
Therefore the solution runs in $$$\mathcal{O}(n^2)$$$, but the validator takes $$$\mathcal{O}(n^3)$$$ to check. We accepted slower solutions in $$$\mathcal{O}(n^3)$$$ as well, and even $$$\mathcal{O}(n^4)$$$ with a decent constant factor (which are relatively hard to cut).
A note about implementation: I'm not very good at it, so my code below is a bit messy. Also, to keep the computations in integers, I needed to use big integers at exactly one point, but it's not so bad: you only need to implement big integer multiplication and comparison, which I shamelessly stole from jeroenodb. You may not need to use it, and can pass using floating-point numbers.
#include <bits/stdc++.h>
using namespace std;
const int MAX = 200007;
const int MOD = 998244353;
struct bignum {
static constexpr long long B = 1LL<<30;
static constexpr int N = 6;
array<long long, N> b = {};
bignum() {}
bignum(long long a) {
b[2] = (a / B) / B;
b[1] = (a / B) % B;
b[0] = a % B;
}
bignum operator*(const bignum& o) {
bignum res;
for (int i = 0; i < N; i++) {
for (int j = 0; j + i < N; j++) {
res.b[i + j] += b[i] * o.b[j];
for (int k = i + j; k + 1 < N; k++) {
auto tmp = res.b[k] / B;
res.b[k + 1] += tmp;
res.b[k] -= tmp * B;
}
}
}
return res;
}
bool operator<=(const bignum& o) const {
if (b == o.b) return true;
return lexicographical_compare(b.rbegin(),b.rend(),o.b.rbegin(),o.b.rend());
}
};
template <class T> int sgn(T x) { return (x > 0) - (x < 0); }
template<class T>
struct Point {
typedef Point P;
T x, y;
explicit Point(T x=0, T y=0) : x(x), y(y) {}
bool operator<(P p) const { return tie(x,y) < tie(p.x,p.y); }
bool operator==(P p) const { return tie(x,y)==tie(p.x,p.y); }
P operator+(P p) const { return P(x+p.x, y+p.y); }
P operator-(P p) const { return P(x-p.x, y-p.y); }
P operator*(T d) const { return P(x*d, y*d); }
P operator/(T d) const { return P(x/d, y/d); }
T dot(P p) const { return x*p.x + y*p.y; }
T cross(P p) const { return x*p.y - y*p.x; }
T cross(P a, P b) const { return (a-*this).cross(b-*this); }
T dist2() const { return x*x + y*y; }
double dist() const { return sqrt((double)dist2()); }
// angle to x-axis in interval [-pi, pi]
double angle() const { return atan2(y, x); }
P unit() const { return *this/dist(); } // makes dist()=1
P perp() const { return P(-y, x); } // rotates +90 degrees
P normal() const { return perp().unit(); }
// returns point rotated 'a' radians ccw around the origin
P rotate(double a) const {
return P(x*cos(a)-y*sin(a),x*sin(a)+y*cos(a)); }
friend ostream& operator<<(ostream& os, P p) {
return os << "(" << p.x << "," << p.y << ")"; }
friend istream& operator>>(istream& is, P& p) {
return is >> p.x >> p.y; }
};
typedef Point<long long> P;
vector<P> convexHull(vector<P> pts) {
if (pts.size() <= 1) return pts;
sort(pts.begin(), pts.end());
vector<P> h(pts.size()+1);
int s = 0, t = 0;
for (int it = 2; it--; s = --t, reverse(pts.begin(), pts.end()))
for (P p : pts) {
while (t >= s + 2 && h[t-2].cross(h[t-1], p) <= 0) t--;
h[t++] = p;
}
return {h.begin(), h.begin() + t - (t == 2 && h[0] == h[1])};
}
int n, t;
long long inv[MAX], fact[MAX], invfact[MAX];
vector<P> v;
void orient(P &a, P &b, P &c) {
// move points a, b, c to be in counterclockwise order
long long val = (b - a).cross(c - a);
assert(val != 0);
if (val < 0) {swap(a, c);}
}
pair<long long, long long> angleComp(P a, P b, P c) {
// get a (scaled) value of f(cos(angle ABC))
P ab = a - b, cb = c - b;
long long dt = ab.dot(cb);
dt *= dt;
int sgn = (ab.dist2() + cb.dist2() >= (a - c).dist2() ? 1 : -1);
return make_pair(sgn * dt, ab.dist2() * cb.dist2());
}
bool inCircle(P a, P b, P c, P d) {
// is D in (or on) (ABC)?
orient(a, b, c);
P ad = a - d, bd = b - d, cd = c - d;
return (
ad.dist2() * (bd.x * cd.y - bd.y * cd.x) -
bd.dist2() * (ad.x * cd.y - ad.y * cd.x) +
cd.dist2() * (ad.x * bd.y - ad.y * bd.x)
) >= 0;
}
pair<bool, int> check(int l, int r) {
int start = l, finish = r;
if (finish < start) {finish += n;}
pair<long long, long long> best = make_pair(-MOD, 1);
int w = -1;
for (int i = start + 1; i < finish; i++) {
pair<long long, long long> val = angleComp(v[l], v[i % n], v[r]);
bignum v1 = bignum(val.first) * bignum(best.second);
bignum v2 = bignum(val.second) * bignum(best.first);
if (!(v1 <= v2)) {
best = val;
w = i % n;
}
}
if (w == -1) {
// cout << v[l] << ' ' << v[r] << " empty?\n";
return make_pair(true, -1);
}
// cout << v[l] << ' ' << v[r] << " connects to " << v[w] << "?\n";
for (P Q : v) {
if (!inCircle(v[l], v[w], v[r], Q)) {return make_pair(false, -1);}
}
return make_pair(true, w);
}
void reset(int n) {
v.clear();
// for (int i = 0; i < n + 5; i++) {
// g[i].clear();
// child[i].clear();
// }
t = 1;
}
void solve() {
cin >> n;
reset(n);
vector<P> pts(n);
for (int i = 0; i < n; i++) {
cin >> pts[i];
}
vector<P> us{pts[0], pts[1]};
vector<int> us_vals;
v = convexHull(pts);
n = v.size();
for (auto P : us) {
int i = 0; bool hit = false;
for (auto Q : v) {
if (P == Q) {us_vals.push_back(i); hit = true;}
i++;
}
if (!hit) {cout << 0 << '\n'; return;}
}
if (v.size() <= 3) {cout << 1 << '\n'; return;}
queue<pair<pair<int, int>, int>> q;
vector<int> child[MAX];
q.push(make_pair(make_pair(us_vals[0], us_vals[1]), -1));
q.push(make_pair(make_pair(us_vals[1], us_vals[0]), -1));
while (!q.empty()) {
auto p = q.front();
q.pop();
pair<bool, int> resp = check(p.first.first, p.first.second);
if (!resp.first) {cout << 0 << '\n'; return;}
if (resp.second == -1) {continue;}
q.push(make_pair(make_pair(p.first.first, resp.second), t));
q.push(make_pair(make_pair(resp.second, p.first.second), t));
if (p.second != -1) {
child[p.second].push_back(t);
}
t++;
}
// for (int i = 1; i <= n - 2; i++) {
// cout << i << ": ";
// for (int j : child[i]) {cout << j << ' ';}
// cout << '\n';
// }
bool edge_case = true; // both 1 and 2 are roots
for (int j : child[1]) {
if (j == 2) {edge_case = false;} // only 1 is root
}
vector<long long> dp(n + 7);
vector<int> sz(n + 7);
auto cnt = [&](auto self, int v) -> int {
if (sz[v] != -1) {return sz[v];}
int res = 1;
if (!child[v].empty()) {
for (int u : child[v]) {
res += self(self, u);
}
}
sz[v] = res;
return res;
};
auto f = [&](auto self, int v) -> long long {
if (dp[v] != -1LL) {return dp[v];}
long long res = 1LL;
if (!child[v].empty()) {
res = (res * fact[cnt(cnt, v) - 1]) % MOD;
for (int u : child[v]) {
res = (res * self(self, u)) % MOD;
res = (res * invfact[cnt(cnt, u)]) % MOD;
}
}
dp[v] = res;
return res;
};
if (edge_case) {child[1].push_back(2);}
fill(dp.begin(), dp.end(), -1LL);
fill(sz.begin(), sz.end(), -1);
long long res = f(f, 1);
if (edge_case) {
child[1].erase(remove(child[1].begin(), child[1].end(), 2), child[1].end());
child[2].push_back(1);
fill(dp.begin(), dp.end(), -1LL);
fill(sz.begin(), sz.end(), -1);
res = (res + f(f, 2)) % MOD;
}
cout << res << '\n';
}
int main() {
ios::sync_with_stdio(false);
cin.tie(nullptr);
inv[0] = inv[1] = 1;
for (int i = 2; i < MAX; i++) {
inv[i] = MOD - (long long)(MOD / i) * inv[MOD % i] % MOD;
}
fact[0] = fact[1] = 1; invfact[0] = invfact[1] = 1;
for (int i = 2; i < MAX; i++) {
fact[i] = (fact[i - 1] * (long long)i) % MOD;
invfact[i] = (invfact[i - 1] * inv[i]) % MOD;
}
int tt; cin >> tt; for (int i = 1; i <= tt; i++) {solve();}
// solve();
}
Judge solutions are currently being posted. Please enjoy the analyses in the meantime.
Thanks for the fast editorial and challenging contest!!
All judge solutions have been posted.
What's MIS?
I was gonna ask the same question!
Maximum Independent Set ?
Maximum independent set (a set containing the maximum number of vertices such that no two are adjacent). The abbreviation is explained in the editorial now.
you are right
Why are the codes not presented?
E is nice
very fast editorial! problems are also really interesting, thank you for the round
super fast editorial with super fast system testing !!
F can be solved in $$$O(n \log n)$$$ time: consider the same DP, and sweep the sum over all possible values. All that changes when the sum changes are the PS transitions (valid when B[i] + B[i+1] == sum) and the SP transitions (valid when sum - 2*M <= B[i] + B[i+1] <= sum + 2*M). Thus, you can store the transition matrices of the DP in a segment tree and do a sliding-window sweep over the sum.
H can also be solved in $$$O(n \log n)$$$ time using a fast farthest-point Delaunay triangulation: the triangles we're allowed to pick are exactly those in the FP-Delaunay triangulation. There are algorithms for this that run in linear time after computing the convex hull.
H is doable in $$$O(n \log n)$$$. It's a fun problem to solve in that time, but takes more effort for sure (but still definitely doable from scratch within contest time)
Can you tell me more details about the $$$O(n\log n)$$$ solution? I'm very curious about that. Thanks!
I do not know where to find any description of FP triangulation, but it indeed relies on the correspondence between them. I want to find the partition of the plane into regions such that every region has one of the input points (which I assume form a convex polygon) as the furthest point (as points where three such regions meet are points determining a triangle whose circumcircle contains everything else). All of these regions are infinite and the only two infinite rays for regions $$$R_i$$$ are bisectors between consecutive pairs of points $$$(P_{i-1}, P_i)$$$ and $$$(P_i, P_{i+1})$$$. It is a good start to understand when a region consists of these two rays only — if I'm not mistaken, that's just a local check for intersections of these two and two neighboring bisectors. When you find such a pair then you can remove both of these bisectors from the set of "active bisectors" kept on a cyclic set and insert bisector between $$$(P_{i-1}, P_{i+1})$$$ and continue in a similar manner
Omg, I get it. It's so magical. Thank you.
I think it works like this:
We use a priority queue to maintain a set of radii of circumcircles formed by every set of three adjacent points. Each time we take out the one with the largest radius, it can be proven that this circle definitely covers all the points.
I have come up with a way to prove it, and the general idea is as follows:
First, we prove a lemma: The circumcircle with the largest radius can always be taken at three consecutive points on the convex hull.
After proving this, we prove that the circumcircle with the largest radius definitely contains all points, which can be done using proof by contradiction.
I apologize for my poor English; the above text was translated by AI.
265618123
I made a submission and it was accepted. I think it's completely correct! Thank you!
Nice, glad to hear that and congrats :)! I am actually not 100% sure about my own claim about local check for when I can commit a triple, but looking at it your way with the radius of the circumcircle certainly sounds convincing :)
Oh , I see it(Farthest-Point-Delaunay-Triangulation) in another comment. Thank you.
What was problem E asking for? Was it saying you recursively do steps 1 and 2 until you can't anymore, and that is considered doing the shuffle exactly once? I know I'm not understanding, because I don't know how they get 6 for example 4 in the sample test cases.
Same. I solved without doing it recursively and it was wrong, so I solved the sample case by drawing it then guessed the solution from it.
That's impressive
My understanding for example 4 in E:
Choose 5 as root, change (5,8) to (5,1)
change (1,8) to (1,7)
change (1,10) to (1,6)
and you have 6 leaves now (includes 5)
Edit: Nvm!! I made a blunder
I couldn't even get test case 2.
My approach:
Let's choose 5th node: 1->2->3->4(root 1),5(root 5)
Then choose 4th node: 1->2->3(root 1), 4(root 4), 5(root 5)
Then choose 3rd node: 1->2(root 1), 3(root 3), 4(root4), 5(root 5)
Then choose 2nd node: ...
Let's connect back
root 2 is connected to 1: 1->2
root 3 is connected to 1: 1->2,1->3
root 4 is connected to 1: 1->2,1->3,1->4
root 5 is connected to 1: 1->2,1->3,1->4,1->5
ans: 4
correct me if I am wrong.
super fast editorial, thanks but i sleep now. Farewell!
I solved D without the Z-algorithm or hashing, with a time complexity of $$$O(m \sqrt{m})$$$, using only brute force. I don't know whether it is legal or not, but it passed the system testing anyway in 93ms (pretty surprised).
Can you link your submission, so I can try to hack it?
Here it is: 264957961 (that account of mine has been blocked from commenting for 48 hours, which is why I'm using this account to comment. I know using multiple accounts is considered bad, but I haven't joined any contest from this account for a long time to avoid this bad behaviour).
If it is legal, I hope you could explain why this time complexity could pass.
Can u explain ur approach ?
Store all indices such that $$$s_i \neq 'a'$$$. Let the number of such indices be $$$m$$$ and the vector storing them be $$$a$$$. Now, if $$$m$$$ is $$$0$$$, then the answer is $$$n - 1$$$; you can see it in sample case $$$1$$$. If $$$m$$$ is not $$$0$$$, then there will be at least $$$1$$$ character $$$\neq 'a'$$$ in $$$s$$$. The problem asks us to partition the vector $$$a$$$ such that the indices of each part form equal substrings in $$$s$$$.
For example, consider $$$s = abcadaaabcadaa$$$. Then we can take $$$bcad$$$ or $$$bcadaaabcad$$$ as a valid string $$$t$$$. It is easy to see that each part needs to be the same size, so we iterate over all divisors $$$i$$$ of $$$m$$$ and check whether we can divide $$$a$$$ into $$$\frac{m}{i}$$$ equal parts. The checking can be done by brute force (corresponding indices must hold the same character, and the gaps between the characters of each part must be the same). After checking that the current partition is valid, we pad some 'a' characters to the left and right of the substring we found and then add the count to the answer.
I am sorry, but can you explain a bit why $$$a$$$ has to be partitioned into equal-sized parts?
https://codeforces.net/contest/1984/submission/264965359
I have also done it in $$$O(n \sqrt{n})$$$ and it passed in 62ms.
This was our original intended solution. It is supposed to pass.
But this one doesn't use any algorithm. I just used "brute force" to check if the substring is good or not.
$$$n$$$ has much less than $$$\sqrt{n}$$$ divisors for $$$n \approx 10^5$$$.
Is $$$\log_2{n}$$$ a good approximation? I've been trying to find a good upper bound for it for a while.
the most common approximation i've seen is $$$n^{1/3}$$$ for the number of divisors
There is an asymptotic bound but it's useless for competitive programming. You can just use $$$n^{1/3}$$$.
But this problem has $$$n$$$ up to about $$$2 \cdot 10^5$$$, and when I estimated the runtime, it was about 700ms. I tried to google and found this: https://codeforces.net/blog/entry/19170. I was pretty nervous before trying to implement it, as I didn't know whether it was legal or not.
What is basically a brute force solution passing for D was a very pleasant surprise.
u can refer to this https://oeis.org/search?q=1344+maximal+divisors
It would be really helpful if you could explain your approach a bit.
C1 detailed video editorial
https://youtu.be/5bWLd6AkHzs?feature=shared
nice
Super fast editorial.
can anyone explain why i can't hack? there is no button
You can't hack after contest in normal Div. 1 and/or Div. 2 rounds (except for Educational rounds). There's a thing called uphacking, but it's available only for Candidate Masters and above.
Thanks for explaining
Wow, the solution to C turned out to be clever. I have a more general solution that doesn't use the fact that we only need to use op 2 once; rather, it's just brute force + dp:
Let's think about operation 2: when do we have to use it? When $$$a_i + c$$$ is negative. And when op 1? When $$$a_i + c$$$ is positive.
Let's solve the problem with dp:
let $$$dp_{0,i}$$$ be the maximum value of $$$c$$$ after the first $$$i$$$ elements,
and $$$dp_{1, i}$$$ be the minimum value of $$$c$$$ after the first $$$i$$$ elements.
The transitions are straightforward (here $$$c$$$ stands for the element being added): $$$dp_{0, i+1} = \max(dp_{0, i} + c, |dp_{1, i} + c|, |dp_{0, i} + c|)$$$ and
$$$dp_{1, i+1} = \min(dp_{1, i} + c, |dp_{0, i} + c|, |dp_{1, i} + c|)$$$.
The answer is obviously $$$dp_{0, n}$$$.
The solution for C2 is exactly the same: maintain a $$$cnt$$$ array where $$$cnt_{i, j}$$$ is the number of ways to get to $$$dp_{i, j}$$$; the rest is just case handling to check which candidate took the place of the current dp value, updating $$$cnt$$$ accordingly.
plz share your solution of c2.
check my comment, I'll be glad to help you:)
dp solutions for C1 && C2 (tap)
I read that already. I got what you said but was not able to understand your code.
I haven't coded it but I can try to explain it.
Assume that $$$dp_{0, i+1}$$$ was equal to $$$|dp_{1, i} + c|$$$ (i.e., this is the maximum of those candidates). Then the N.O.W you can reach the state $$$(0, i+1)$$$ gets increased by the N.O.W you can reach $$$(1, i)$$$, meaning: $$$cnt_{0, i+1} += cnt_{1, i}$$$.
What is N.O.W?
Number of ways
u can just do dp[0][i] = dp[0][i-1] + c bcz it s obvious that others both are greater than it
https://codeforces.net/contest/1984/submission/264912921
I don't think so, maybe abs(dp[1][i-1] + c) be greater
i mean dp[1][i] for smallest one not the biggest one
dp[i][1] is min
$$$dp_{1,i+1} = \min(dp_{1,i} + c, |dp_{0,i} + c|, |dp_{1,i} + c|)$$$: why do we need $$$|dp_{1,i} + c|$$$ for the minimum? Since $$$|dp_{1,i} + c| \geq dp_{1,i} + c$$$, I think it is not needed.
we only need dp[1][i] = dp[1][i-1] + c for minimum
tbh I just copied the dp[0] one and edited it, yes correct
Somehow I also ended up writing a similar solution 266791621, but I couldn't prove why keeping only the min and max values of $$$c$$$ is correct. Do you have any idea?
Update: I got the proof
I don't have a super mathematical proof, I'm just gonna say my reasonings:
When will you use operation 2? When the value is negative and the addition to $$$c$$$ will not increase it, i.e., when the value is the minimum possible. Assume that you want to use operation 2 at the current stage; then you should know the minimum achievable value up to now, to check whether this will be beneficial, so keeping the minimum value is mandatory.
The same goes for op 1, but it's a bit more obvious, so I'm not going to explain it.
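For concreteness, here is a minimal self-contained sketch of the min/max-with-counts DP discussed in this thread (my own illustrative code, not any commenter's submission; it relies on the argument above that every optimal procedure passes only through the currently minimal and maximal values of $$$c$$$).
#include <bits/stdc++.h>
using namespace std;
int main() {
    const long long MOD = 998244353;
    int t; cin >> t;
    while (t--) {
        int n; cin >> n;
        vector<long long> a(n);
        for (auto& x : a) cin >> x;
        long long mn = 0, mx = 0, cntMn = 1, cntMx = 1;  // value and #procedures reaching it
        for (long long x : a) {
            // Candidate new values: apply option 1 or option 2 to each tracked state.
            vector<pair<long long, long long>> cand = {{mn + x, cntMn}, {llabs(mn + x), cntMn}};
            if (mx != mn) {  // avoid double counting when the two tracked states coincide
                cand.push_back({mx + x, cntMx});
                cand.push_back({llabs(mx + x), cntMx});
            }
            long long nmn = LLONG_MAX, nmx = LLONG_MIN;
            for (auto& pr : cand) { nmn = min(nmn, pr.first); nmx = max(nmx, pr.first); }
            long long cmn = 0, cmx = 0;
            for (auto& pr : cand) {
                if (pr.first == nmn) cmn = (cmn + pr.second) % MOD;
                if (pr.first == nmx) cmx = (cmx + pr.second) % MOD;
            }
            mn = nmn; mx = nmx; cntMn = cmn; cntMx = cmx;
        }
        cout << cntMx << '\n';  // number of procedures ending at the maximum final value
    }
}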
DP solutions for C1 and C2:
C1: let dp[i][0] be minimum score when we made i operations, dp[i][1] — maximum score
C2: let dp[i][j] be amount of times we can get score j after i operations. as we can look only at max and min values, let's clear useless conditions after every step, then there will be only O(1) new values every time, so total complexity is O(n) if u use hash table, O(nlog(n)) — if BST (for e.g. std::map)
C1 code: 264967072
C2 code: 264965726
You don't need a set/map or hashtable for coding c2, use IF brother, but this made the implementation way more clear
of course, u are right, but such structures were made to make our lifes easier)
the simpler the code, the fewer errors=)
actually no need u can update dp based on just i-1. Here are my submissions C1:264932150 C2:264949789 Overall complexity is O(n), and the code stays simple
liked your code it's similar to what I was thinking but you made it so much shorter
Thank you. Really neat code!
How did you come up with keeping only minimum and maximum did not get that
To achieve the final maxima, there are two ways: either keep making the value a positive integer as large as possible, or keep making it a negative integer as small as possible (then apply absolute operation to it at the end). Hence their intuition to keeping only min+max.
can u explain more c2 bro ?
of course, I'll try
have you got my c1 solution?
since we can keep only max and min score on every step(you can find prove by AkiLotus upper), let's do it on every step of our DP. we have 2 conditions and from every condition we can chose 1 of to 2 different actions, which means there will be not more than 2 * 2 = 4 new different scores. let's count each of them. but as we said earlier, we don't need to save all of them — we just need to save max and min score. okay, now let's count amount of times we can reach score we saved. initially, dp[steps = 0][score = 0] = 1, because we starts from score = 0. further we can go from score to score + a[i] or to abs(score + a[i]).
for e.g.: cur_max = max(dp[cur_step]), mx_times = dp[cur_step][cur_mx] then dp[cur_step + 1][cur_mx + a[cur_step + 1]] += mx_times and dp[cur_step + 1][abs(cur_mx + a[cur_step + 1]) += max_times, for minimum score we do it in same way
if yours dp is vector<map<ll, ll>>, then u can write it as I did it higher. but y also can use dp as vector<vector>, but IMO my variant is simpler
Correct me if I'm wrong:
you are calculating, for each index, the 4 possible values (in the next dp), counting the ways to achieve them, and you are only storing the maximum and minimum of them (in the current dp),
and so on?
yep, I calculate no more than 4 new values and store no more than 2 of them(sometimes min == max, so we have to store only 1 value)
thanks sir
u are welcome)
great approach. I was finding the dp approach. Thanks
Here is a simplified code
I scrolled into the comments looking for a $$$dp$$$ solution just like this! Thank you.
can sm1 tell me what are the 3 strings that match in this tc , problem D , abbaaaabbb output : 3
1- abbaaaabbb (whole string)
2- bbaaaabbb (whole string except first character)
3- b [ it's clear :) ]
nvm got it : { b , full string , bbaaaabbb }
Wow these editorials were fast. Thanks!
First solve for H was from rainboy.
Rainboy is the real Orz.
Did anyone solve C using dp? It was kind of more obvious for the easy version; then you just add another state to the dp for the number of ways to reach a value. Nice contest, I really liked C1 and C2.
can you please explain c2 ?
I think Coin Change on CSES is very similar to C2; C1 was very similar to knapsack.
Yeah, it is knapsack dp because you are kind of filling a container, adding based on just the previous state. I still think that was a nice problem; we really need such high-quality tasks in Codeforces rounds.
First, I assume you understood how C1 is done using dp. Now let dp[i][0] be the minimum value of c up to index i and dp[i][1] the maximum value. First we compute these using the same formula as in C1, then count the number of ways to achieve them. At every step we have four choices: use the minimum or the maximum value of i-1, each with or without the absolute value; we just update the number of ways accordingly. However, we need to be careful: if the minimum value equals the maximum value at index i-1, we only consider two choices, with or without the absolute value (as the other two are the same and we would be double counting). You can check my submission for implementation details.
can you please elaborate on the last point, I couldnt understand
thanks I got it.
In E, MIS was a big hint. Is MIS very standard in many problems? Once I heard "MIS", the problem became relatively easy, and proving that the MIS (excluding the root) is the answer was also not difficult. But how would someone think that MIS could be the answer? Is it just more problem-solving practice of a similar type?
what is mis
maximum independent set, just google it, its very simple.
thanks
I got there by exploring specific cases. Like, first, I noticed that you can make the answer $$$\frac{n}{2}$$$ on a bamboo, but $$$n - 1$$$ on a star. Then I wondered if I could make at least $$$\frac{n}{2}$$$ on any graph. The two specific types made me think of bipartiteness, since the vertices that become leaves belong to one part in them. So I tried to show that you can take the larger part and make these vertices leaves. Got the construction and figured it's actually only necessary for any two chosen vertices to not be adjacent, which is exactly a MIS problem.
okay, you tried many things narrowing down the answer which is no two chosen vertices are adjacent. basically not even knowing what is MIS one could have solved this, thanks!! i think i just did not try hard enough!
I got there by naturally considering when a node can become a leaf.
Well, either it already was a leaf and got chosen in the first operation. Or, it got chosen as a root after all its neighbours. The second condition obviously gives rise to MIS
Proving that these 2 conditions are sufficient is not hard. If the set of leaves we want is an independent set, then we are guaranteed to find a node we do not want to become a leaf in any tree of size >= 2, thus just choose it and recur till we are left with trees of size 1.
Can you plz explain the problem E's implementation?? Like what is the dp states and how are transition happening?
The sample for F is such garbage.
G is similar to this problem
Why, in C2, do we have the choice of using option 2 or option 1 at any index $$$j<i$$$ where the prefix sum is non-negative?
Can someone help me explain this solution for C2? I just guessed a conclusion during the contest
my submission
Bruh Genius you !
Both C1 and C2 are akin to the knapsack problem in dynamic programming, correct?
Hey, in problem 2, can anyone help me figure out what the problem is with this logic? It's throwing a wrong answer on the 392nd token of test 2.
https://codeforces.net/problemset/submission/1984/265022174
Hi, I got tle for C1 (easy). I tried dp (memorisation using maps)
https://codeforces.net/contest/1984/submission/264928809
Would be great if someone could give me insights on this and share a dp solution that got accepted for C. Thank you!
My comment explaining the dp solution
I used the same approach as yours in the contest and got TLE; it is actually an $$$O(n^2 \log n)$$$ solution. You can see this solution instead: https://codeforces.net/contest/1984/submission/264938764. It is kind of a greedy/dp with time complexity $$$O(n)$$$.
We are told constantly in our college not to use global and static variables; is it a good practice?
Hey guys, I solved C1 (not during the contest) using dynamic programming with two dp arrays.
Transitions:
with base case dp_max[0] = 0, dp_min[0] = 0 (1-based indexing).
This solution got accepted. Now, can anyone tell me how to use these transitions to build the count array that counts all possible ways to achieve this maximum?
PS: I guess it should be solved the same way we count all LISs, or all shortest paths in Dijkstra's algorithm. Please help.
checkout this comment for the code and its parent comments for the idea.
In Problem C2, I am using below code to calculate power of 2, but this is giving stack overflow error. Any idea why?
overflow. you need to use mod. Also, too many recursive calls will give stack overflow.
Where?
Also, note that if you are calling this function multiple times for large values of n and are not caching the results, it will give a stack overflow error. This is because a very large number of function calls will be made. So, it's better to cache the results.
jiangly must be tired of getting second place to tourist. Anyways, their consistency is insane.
Another approach for B problem: As all the digits are large, the sum is bound to increase by 1 digit, so if $$$x$$$ is large and has $$$n$$$ digits, then $$$a$$$ and $$$b$$$ must have $$$n-1$$$ digits.
What is the largest large number with $$$n-1$$$ digits?
Yep, it will be $$$999... n-1$$$ times
After that, I subtracted the largest "large" number of $$$n-1$$$ digits from $$$x$$$, resulting in a number say $$$y$$$.
Now, we have to ensure two conditions:
The 1st condition is pretty obvious from the problem statement itself. To prove the second statement, consider $$$y = y_1y_2...y_{n-1}$$$ and a = $$$9999...n-1$$$ times, where each $$$y_i \in$$$ {$$$0,1,2,...,9$$$}
Necessary part
If $$$y_i = 0$$$ for some $$$i$$$, then at max we can "give" $$$y_i = 5$$$, and $$$a_i = 4$$$ or vice versa, thus making one of these numbers not "large".
Sufficient part
Claim: Any of the other digits work ,i.e., $$$y_i \in$$$ {$$$1,2,\dots , 9$$$}.
Proof: It is similar to the earlier proof, we can see $$$10 \le a_i+y_i \le 18$$$, which can always be "distributed" such that $$$5 \le a_i \le 9$$$ and $$$5 \le y_i \le 9$$$, which is exactly the range of a "large" number! $$$\square$$$
Example: $$$a_i = 9$$$ and $$$y_i = 1$$$ can be redistributed as $$$a_i = 5$$$ and $$$y_i = 5$$$, making them both large.
Hopefully, this is easy to implement and you can see my submission here: 264891168
Please let me know, if there is some fault in the solution, or any other clarification is required?
I think I have a far simpler answer that does not require any integer conversions and works purely with strings, so it can be used for very, very large numbers:
Basically, for any number to be right, 4 conditions must be met:
Number must not be a single digit.
The first digit must be a '1'.
The last digit must not be a '9'.
All remaining digits must not be a '0'.
Please let me know if there's any holes in my logic.
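For what it's worth, here is a minimal sketch of that purely string-based check (illustrative only; couldBeLargeSum is a made-up name, and the four conditions are exactly the ones listed above).
#include <bits/stdc++.h>
using namespace std;
// The four conditions above: more than one digit, leading '1',
// last digit not '9', and no '0' among the remaining digits.
bool couldBeLargeSum(const string& x) {
    if (x.size() < 2) return false;
    if (x.front() != '1') return false;
    if (x.back() == '9') return false;
    for (size_t i = 1; i + 1 < x.size(); ++i)
        if (x[i] == '0') return false;
    return true;
}
int main() {
    int t; cin >> t;
    while (t--) {
        string x; cin >> x;
        cout << (couldBeLargeSum(x) ? "YES" : "NO") << '\n';
    }
}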
Yeah, it's precisely the same :) I just wanted to write a formal proof ><
I am struggling with a test case right now on problem B: the judge is saying that it expected an answer of YES on the number 793, which I believe should be a NO. 265101056
You might have read it wrong. 793 was the 17th testcase. You got a WA at the testcase on the number 1460 instead.
Thank you very much! That indeed helps, although I found out that my approach was flawed :D.
In problem C, why do we need to choose the second operation only once? Can anyone give a test case?
How can I fix the memory limit exceeded for my solution for C2? 265166920
The way you utilized the queue was actually a bruteforce, which will not work. Take this test for example:
Answer is just $$$1$$$, but due to constant doubling, you'll quickly MLE/TLE yourself.
Ohh, I get it. Is there any way I can optimise it? Or should I scrap it and think of something else?
I can only advise you to read the tutorial. Your current solution is practically an exhaustive search, and there are a lot more corners to prune with some mathematical intuitions.
Ok thanks a bunch mate :)
Can someone elaborate on how implementation works for E ? Proving the MIS lemma is easy, but I can't figure out how to implement the dynamic computation. Like what is the dp formula ? I've been trying with rerooting and dp on edges as well
Okay I figured something out, the implementation was harder than what I expected at first. Still, this problem is lovely, I don't regret wasting 2 hours on it instead of solving D during the contest !
Basically my idea is to root the tree at a non-leaf node (one exists if n > 2). Then I compute the MIS for going up and for going down the tree, starting at a given node (with dp). For going down it's an easy dp: it only depends on going down from vertices that are lower. But I need to store the state that I chose for each child when going down, which I will use for going up.
Indeed, when going up from node i, whose parent is p, I need to add the value of going up from p, plus the value of going down from p (maybe minus one if p is set to be in the MIS), minus the value of going down from i, whose state is set by what I chose when going down from p.
It would be clearer with maths formulae, but I wanted to keep my comment short, since idk if anyone needs my insight.
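For anyone who wants the formulae spelled out, here is one standard way to write the rerooting recurrences (my own sketch, not exactly the bookkeeping above; $$$\mathrm{down}$$$ and $$$\mathrm{up}$$$ are made-up names, and the tree is rooted at a non-leaf vertex as in the parent comment). Let $$$\mathrm{down}_1(v)$$$ / $$$\mathrm{down}_0(v)$$$ be the MIS of $$$v$$$'s subtree with $$$v$$$ included / excluded, and for a non-root $$$c$$$ with parent $$$p$$$ let $$$\mathrm{up}_1(c)$$$ / $$$\mathrm{up}_0(c)$$$ be the best number of chosen vertices outside $$$c$$$'s subtree with $$$p$$$ included / excluded. Then
$$$\mathrm{down}_1(v) = 1 + \sum_{u \in \mathrm{ch}(v)} \mathrm{down}_0(u), \qquad \mathrm{down}_0(v) = \sum_{u \in \mathrm{ch}(v)} \max(\mathrm{down}_0(u), \mathrm{down}_1(u)),$$$
$$$\mathrm{up}_1(c) = 1 + \big(\mathrm{down}_1(p) - 1 - \mathrm{down}_0(c)\big) + \mathrm{up}_0(p), \qquad \mathrm{up}_0(c) = \big(\mathrm{down}_0(p) - \max(\mathrm{down}_0(c), \mathrm{down}_1(c))\big) + \max(\mathrm{up}_0(p), \mathrm{up}_1(p)),$$$
with $$$\mathrm{up}_0(\mathrm{root}) = 0$$$ and $$$\mathrm{up}_1(\mathrm{root}) = -\infty$$$. For each leaf $$$\ell$$$ of the original tree, the MIS of the tree with $$$\ell$$$ removed is then $$$\max(\mathrm{up}_0(\ell), \mathrm{up}_1(\ell))$$$, and the answer to the problem is $$$1$$$ plus the maximum of this over all leaves.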
For me I root the tree at a leaf node. a = MIS if choose root
b = MIS if not choose root
If a<=b, return b+1
Else we check if the MIS containing the root also contains all the leaves. If MIS also contains all the leaves, return a, else return a+1 (because we can root the tree at that leaf).
I use DP from a node to each of its children, state (a, b, a_spare_leaf, b_spare_leaf)
Solution https://codeforces.net/contest/1984/submission/266256460
Can anyone help me with problem C2? I am getting WA on test 2 and I can't really figure out where I am wrong. Thank you. Here is my submission
For A, isn't the array [1, 1, 3, 3] also impossible? Hint #1 and the solution code show that it is impossible only if all elements are equal.
Try to color the first 3 elements red and the last one blue.
My code for problem D is failing on test case no. 83; can anyone please explain whether double or multiple hashing is the only way to avoid this? 265689130 The first time I used polynomial hashing with mod 1e9+9 and prime 31, only 28 test cases passed; later, using 1e9+9 and prime 53, it passed 83 test cases. Is it possible that there exists a better prime for this to pass with single hashing? le0n null_awe
Not actually sure — I don't remember ever using string hashing, so I don't have experience with hacks and string hashing. I think I've heard people say to randomize primes so nobody can reverse engineer hack test cases (if you want to stay single hashing).
Again, I don't know string hashing very well, so not sure if this is a robust solution.
i see okay thanks
The problem ratings are out, you can maybe update the prediction table? null_awe
i gotchu
For Div-2 D
I think a much simpler solution can work,
We will try to find candidate $$$t$$$ where the first and the last characters are non-$$$a$$$
Such a $$$t$$$ must start from the first non-$$$a$$$ character. we will iterate over the endpoint to check for validity.
To check if we have a valid $$$t$$$ we will check for all non-$$$a$$$ characters if the number of characters in $$$t$$$ are divisors of the frequency of corresponding alphabet in $$$s$$$, and all of them leave the same quotient.
There will be at most $$$O(\sqrt[3]{n})$$$ such $$$t$$$, which will be even lower in practice.
And you can check the validity of this $$$t$$$ in $$$O(n)$$$.
Also you can compute how many prefix and suffix $$$a$$$ can be added to this $$$t$$$ and still result in a valid $$$t$$$.
Submission — 265896129
Yeah, I did the same but used string hashing for a faster check too. I believe my solution is just $$$\mathcal{O}(n + \sqrt[3]{m} \cdot \log{m})$$$, where $$$m$$$ is the number of non-a characters, 265886340
if my time complexity is incorrect lemme know the correct one
Hello, can someone help me with my doubt in Magnitude (Hard Version)? It would be a great help. My solution is giving a wrong answer on a far test case.
This is the code that I have:
https://codeforces.net/problemset/submission/1984/268592519 Please help me, I am stuck at this.
Why is checking only adjacent elements sufficient for F?
Same question