A — The Third Three Number Problem
Authors: antontrygubO_o, Gheal
An answer exists only when $$$n$$$ is even.
$$$a \oplus a = 0$$$
$$$a \oplus 0 = a$$$
First and foremost, it can be proven that $$$(a \oplus b) + (b \oplus c) + (a \oplus c)$$$ is always even, for all non-negative integers $$$a$$$, $$$b$$$ and $$$c$$$.
Indeed, $$$a \oplus b$$$ and $$$a+b$$$ have the same parity, since $$$a + b = a \oplus b + 2 \cdot (a \text{&} b) $$$. Therefore, $$$(a \oplus b) + (b \oplus c) + (a \oplus c)$$$ has the same parity as $$$(a+b)+(b+c)+(a+c)=2 \cdot (a+b+c)$$$, which is even.
Therefore, if $$$n$$$ is even, one possible solution is $$$a=0$$$, $$$b=0$$$ and $$$c=\frac{n}{2}$$$. In this case, $$$(a \oplus b) + (b \oplus c) + (a \oplus c)= 0+\frac{n}{2}+\frac{n}{2}=n$$$. Otherwise, there are no solutions.
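As a sanity check (not part of the intended solution), a small brute force over all $$$a, b, c \le n$$$ confirms that an answer exists exactly when $$$n$$$ is even:
#include <bits/stdc++.h>
using namespace std;
int main() {
    for (int n = 1; n <= 50; n++) {
        bool found = false;
        for (int a = 0; a <= n && !found; a++)
            for (int b = 0; b <= n && !found; b++)
                for (int c = 0; c <= n && !found; c++)
                    if ((a ^ b) + (b ^ c) + (a ^ c) == n)
                        found = true;
        assert(found == (n % 2 == 0)); // matches the parity argument above
    }
    return 0;
}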
Time complexity per testcase: $$$O(1)$$$.
#include<bits/stdc++.h>
using namespace std;
void testcase(){
    int n;
    cin>>n;
    if(n%2==0)
        cout<<"0 "<<n/2<<' '<<n/2<<'\n'; // a = 0, b = c = n / 2
    else
        cout<<"-1\n"; // no solution when n is odd
}
int main()
{
    ios_base::sync_with_stdio(false); cin.tie(0);
    int t;
    cin>>t;
    while(t--)
        testcase();
    return 0;
}
This is actually the third iteration of the problem, which was suggested by antontrygubO_o.
The first iteration had $$$|a-b|+|b-c|+|a-c|=n$$$, and the second one had $$$\gcd(a,b)+\gcd(b,c)+\gcd(a,c)=n$$$.
B — Almost Ternary Matrix
Author: Gheal
The general construction is a checkerboard of $$$2 \times 2$$$ blocks, surrounded by a border of thickness $$$1$$$. Here is the intended solution for $$$n=6$$$ and $$$m=8$$$:
https://codeforces.net/predownloaded/75/c4/75c46e19cc3cf6f890139b0e74774c3a6fc387db.png
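In text form, the matrix printed by the reference code below for $$$n=6$$$ and $$$m=8$$$ is:
0 1 1 0 0 1 1 0
1 0 0 1 1 0 0 1
1 0 0 1 1 0 0 1
0 1 1 0 0 1 1 0
0 1 1 0 0 1 1 0
1 0 0 1 1 0 0 1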
Time complexity per testcase: $$$O(nm)$$$.
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
void testcase(){
    ll n,m;
    cin>>n>>m;
    for(ll i=1;i<=n;i++){
        for(ll j=1;j<=m;j++){
            // Cell (i, j) is 1 iff exactly one of (i mod 4) and (j mod 4) lies in {0, 1},
            // which produces the 2x2 checkerboard surrounded by a 1-thick border.
            cout<<((i%4<=1)!=(j%4<=1))<<" \n"[j==m];
        }
    }
}
int main()
{
    ios_base::sync_with_stdio(false); cin.tie(0);
    int t;
    cin>>t;
    while(t--)
        testcase();
    return 0;
}
D — Almost Triple Deletions
Authors: antontrygubO_o, Gheal
Consider the opposite problem: What is the smallest possible length of a final array?
For which arrays is the smallest possible final length equal to $$$0$$$?
Following the second hint, it is possible to completely remove certain contiguous segments from the array.
Lemma: An array $$$a_1,a_2,\ldots, a_n$$$ can be fully deleted via a sequence of operations if and only if it satisfies both of the following constraints:
$$$n$$$ is even
The maximum frequency of any element in the array is at most $$$\frac n2$$$.
Proof
If $$$n$$$ is odd, then any final array will also have an odd length, which can't be $$$0$$$.
Since two equal adjacent elements can never be removed by a single operation, each operation deletes at most one occurrence of the most frequent element. Hence, if the most frequent element occurs $$$k \gt \frac n2$$$ times, any final array will have at least $$$n-2 \cdot (n-k)=2\cdot k - n \gt 0$$$ elements. Otherwise, an optimal strategy is to always delete one of the most frequent elements together with one of its neighbours of a different value; this guarantees the full deletion of the array, since after each such operation no element can occur more than $$$\frac {n-2}{2}$$$ times in the remaining array.
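For illustration, the lemma's criterion can be checked directly for a given array; a minimal helper (not taken from the reference solution below) could look like this:
#include <bits/stdc++.h>
using namespace std;

// Returns true iff the array can be fully deleted: its length is even and
// no value occurs more than half of the time (the lemma above).
bool deletable(const vector<int>& a) {
    if (a.size() % 2 != 0) return false;
    map<int, int> freq;
    int frmax = 0;
    for (int x : a) frmax = max(frmax, ++freq[x]);
    return frmax <= (int)a.size() / 2;
}

int main() {
    cout << deletable({1, 2, 1, 3}) << ' ' << deletable({1, 2, 1, 1}) << '\n'; // prints "1 0"
}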
Since the maximum frequency of a value in every subarray $$$[a_l,a_{l+1},\ldots,a_r]$$$ can be computed in $$$O(n^2)$$$ total, it is possible to precompute, for every subarray, whether it can be fully deleted via a sequence of operations.
Let $$$dp[i]$$$ be the maximum length of a final array that ends in $$$a_i$$$, i.e. consists of $$$a_i$$$ preceded by some subsequence of the first $$$i-1$$$ elements. Initially, $$$dp[i]$$$ is set to $$$1$$$ if the prefix $$$[a_1,a_2,\ldots, a_{i-1}]$$$ can be fully deleted, and to $$$0$$$ otherwise.
For every pair of indices $$$i$$$ and $$$j$$$ ($$$1 \le j \lt i \le n, a_i=a_j$$$), if the subarray $$$[a_{j+1},a_{j+2},\ldots a_{i-1}]$$$ can be fully deleted, then $$$a_i$$$ can be appended to any final array ending in $$$a_j$$$, so $$$dp[i] \ge dp[j]+1$$$. This gives us the following recurrence:
$$$dp[i]=\max_{j=1}^{i-1}(dp[j]>0 \text{ and } a_i=a_j \text{ and } [a_{j+1},a_{j+2},\ldots,a_{i-1}] \text{ is deletable}) \cdot (dp[j]+1)$$$
If we append a sentinel element $$$a_{n+1}$$$ to the array and define a final array as a subsequence of equal elements of $$$a$$$ to which $$$a_{n+1}$$$ is forcefully appended, then the final answer can be written as $$$dp[n+1]-1$$$. Note that, when computing $$$dp[n+1]$$$, the condition $$$a_j = a_{n+1}$$$ is waived.
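Below is a minimal sketch of the dynamic programming exactly as described above, with the deletability of every subarray precomputed into a table (the reference solution below interleaves the two steps and avoids the $$$O(n^2)$$$ table); the function name and signature are illustrative:
#include <bits/stdc++.h>
using namespace std;

// a is 1-indexed (a[0] is unused) and contains values in [1, n].
int longestFinalArray(int n, const vector<int>& a) {
    // can_del[l][r] == true iff the subarray a[l..r] can be fully deleted.
    vector<vector<char>> can_del(n + 2, vector<char>(n + 2, 0));
    for (int l = 1; l <= n + 1; ++l) {
        can_del[l][l - 1] = 1; // the empty subarray is trivially deletable
        vector<int> freq(n + 1, 0);
        int frmax = 0;
        for (int r = l; r <= n; ++r) {
            frmax = max(frmax, ++freq[a[r]]);
            int len = r - l + 1;
            can_del[l][r] = (len % 2 == 0 && frmax <= len / 2);
        }
    }
    // dp[i] = longest final array ending at position i; position n + 1 is a
    // sentinel for which the equality condition is waived.
    vector<int> dp(n + 2, 0);
    for (int i = 1; i <= n + 1; ++i) {
        if (can_del[1][i - 1]) dp[i] = 1;
        for (int j = 1; j < i; ++j)
            if (dp[j] > 0 && (i == n + 1 || a[i] == a[j]) && can_del[j + 1][i - 1])
                dp[i] = max(dp[i], dp[j] + 1);
    }
    return dp[n + 1] - 1;
}
For instance, longestFinalArray(4, {0, 1, 2, 1, 1}) returns 2: only the last two 1's can be kept.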
Total time complexity per testcase: $$$O(n^2)$$$.
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll NMAX=5e3+5;
ll dp[NMAX],v[NMAX],fr[NMAX];
void testcase(){
    ll n,ans=0;
    cin>>n;
    for(ll i=1;i<=n;i++){
        cin>>v[i];
        dp[i]=0;
    }
    // dp[0] stays 0 and represents the empty prefix.
    for(ll i=0;i<=n;i++){
        if(i && dp[i]==0) continue; // no valid final array ends at position i
        ll frmax=0;
        for(int j=1;j<=n;j++) fr[j]=0;
        // frmax tracks the maximum frequency inside the gap (i, j) as j advances.
        for(int j=i+1;j<=n;j++){
            if((j-i)%2 && frmax<=(j-i)/2 && (i==0 || v[i]==v[j]))
                dp[j]=max(dp[j],dp[i]+1); // the gap [i+1, j-1] is deletable: extend with v[j]
            frmax=max(frmax,++fr[v[j]]);
        }
    }
    // dp[i] may be the answer only if the suffix [i+1, n] is deletable.
    ll frmax=0;
    for(int j=1;j<=n;j++) fr[j]=0;
    for(int i=n;i>=0;i--){
        if((n-i)%2==0 && frmax<=(n-i)/2) ans=max(ans,dp[i]);
        frmax=max(frmax,++fr[v[i]]);
    }
    cout<<ans<<'\n';
}
int main()
{
    ios_base::sync_with_stdio(false); cin.tie(0);
    int t;
    cin>>t;
    while(t--)
        testcase();
    return 0;
}
E — Three Days Grace
Author: tibinyte
In the final multiset, each number $$$A_i$$$ from the initial multiset is replaced by some values $$$x_1, x_2,\ldots,x_k$$$ whose product is $$$A_i$$$, and every multiset obtained this way can be created. Also, let $$$vmax$$$ be the maximum value in the initial multiset.
Consider iterating over the minimum value used. To get the best possible maximum once this minimum is fixed, one can use the dynamic programming $$$dp[i][j] = $$$ the best possible maximum when decomposing the number $$$i$$$ such that the minimum value in the product is $$$j$$$, where $$$j$$$ is a divisor of $$$i$$$. This dp can be calculated in $$$O(vmax \cdot \log^2(vmax))$$$ for all values, and all updates needed when incrementing the minimum can be processed, while maintaining the result, in $$$O(vmax \cdot \log^2(vmax))$$$ total. Thus we have a total time complexity of $$$O(vmax \cdot \log^2(vmax))$$$. However, this (we hope) won't pass.
Here is a much more elegant solution (thanks to valeriu):
To be clear, decomposing a number just means writing it as a product of numbers. We still fix the minimum value used in our multiset, call it $$$L$$$, and we iterate $$$L$$$ from the greatest possible value (i.e. $$$vmax$$$) down to $$$1$$$. At each iteration we want to compute, for every number, the minimum possible maximum over all of its decompositions into factors that are all at least $$$L$$$.
For each element we therefore keep the minimal maximum value of a decomposition whose factors are all at least $$$L$$$; for element $$$i$$$, this value is stored in $$$dp[i]$$$. After calculating this value for every number, we have to adjust the stored values to account for the fact that, once this iteration is concluded, $$$L$$$ decreases. For simplicity, denote $$$L' = L - 1$$$.
So, the minimum allowed value has changed. What changes now? It is easy to see that any element not divisible by $$$L'$$$ is unaffected by this modification, since $$$L'$$$ cannot appear in any decomposition of such a number. It remains to update the multiples of $$$L'$$$. Take such a number, $$$M$$$. How can $$$dp[M]$$$ change? We may include $$$L'$$$ in the decomposition as many times as we want, and once we stop including it, we are left with a number that still has to be decomposed; the attributed maximum of that number has already been computed, so it is a new candidate for $$$dp[M]$$$. This can be implemented simply by going through the multiples of $$$L'$$$ in increasing order and updating $$$dp[M]$$$ with $$$dp[M / L']$$$ (taking the minimum of the two); using $$$L'$$$ several times is handled transitively, because $$$dp[M / L']$$$ may itself have just been updated in the same pass (for instance, for $$$L' = 3$$$, $$$dp[36]$$$ is updated with $$$dp[12]$$$, which was already updated with $$$dp[4]$$$).
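A minimal sketch of this single update step in isolation (illustrative names; the reference solution below fuses this with the bookkeeping described in the next paragraph). As in the reference code, the loop starts at $$$L'^2$$$ so that the co-factor $$$M / L'$$$ is itself at least $$$L'$$$:
#include <bits/stdc++.h>
using namespace std;

// dp[x] = minimal possible maximum of a decomposition of x into factors that are
// all at least L (for x >= L). After the call, dp[x] holds the same quantity for
// factors that are at least Lp = L - 1.
void relax(vector<int>& dp, int Lp, int vmax) {
    // For a smaller multiple M = Lp * k with k < Lp, the co-factor k cannot be
    // decomposed into factors >= Lp, so such M must not be relaxed.
    for (long long M = (long long)Lp * Lp; M <= vmax; M += Lp)
        dp[M] = min(dp[M], dp[M / Lp]); // use Lp once, then decompose M / Lp optimally
}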
At each iteration we also need the maximum of the attributed maximums over the elements that actually appear in the initial list. This can be done by keeping a count of how many input values currently have each attributed maximum; after all updates, we take the (already known) maximum of the previous iteration and decrease it until we find a value attained by some element of our set (which can be verified by simply checking the counts). This is correct because all the dp values only decrease as $$$L$$$ decreases, so their maximum can only decrease as well.
Final time complexity: $$$O(vmax \cdot \log(vmax))$$$.
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
const int nmax = 5e6 + 5;
int appear[nmax]; // appear[x] = 1 iff the value x occurs in the input multiset
int mxval[nmax];  // mxval[x] = minimal possible maximum of a decomposition of x into factors >= i
int toggle[nmax]; // toggle[v] = number of distinct input values whose current mxval equals v
int main()
{
    cin.tie(nullptr)->sync_with_stdio(false);
    int t;
    cin >> t;
    while (t--)
    {
        int n, m, mn = nmax, mx = 0;
        cin >> n >> m;
        for (int i = 0; i <= m; ++i)
        {
            appear[i] = toggle[i] = mxval[i] = 0;
        }
        for (int i = 0, x; i < n; i++)
        {
            cin >> x;
            appear[x] = 1;
            toggle[x] = 1; // initially mxval[x] = x
            mn = min(mn, x);
            mx = max(mx, x);
        }
        for (int i = 0; i <= mx; i++)
        {
            mxval[i] = i; // trivial decomposition: the number by itself
        }
        int ptr = mx, smax = mx - mn;
        for (int i = mx; i >= 1; i--) // i plays the role of L, iterated downwards
        {
            // Relax the multiples of i; starting from i*i guarantees j / i >= i,
            // so mxval[j / i] already refers to decompositions with factors >= i.
            for (ll j = (ll)i * i; j <= mx; j += i)
            {
                if (appear[j])
                    toggle[mxval[j]]--;
                mxval[j] = min(mxval[j], mxval[j / i]);
                if (appear[j])
                    toggle[mxval[j]]++;
            }
            // mxval only decreases, so the maximum over input values only moves left.
            while (toggle[ptr] == 0)
                ptr--;
            if (i <= mn) // i can be the minimum of the final multiset only if i <= mn
                smax = min(smax, ptr - i);
        }
        cout << smax << '\n';
    }
}