chokudai's blog

By chokudai, 21 month(s) ago, In English

We will hold UNIQUE VISION Programming Contest 2023 New Year (AtCoder Beginner Contest 287).

The point values will be 100-200-300-400-500-500-600-600. We are looking forward to your participation!

»
21 month(s) ago, # |

I can't seem to figure out why my F gets TLE. I used the same trick as the editorial. Link.

Is it some cache-friendliness issue that I am missing? Or is there some other stupid thing I am doing in my code?

  • »
    »
    21 month(s) ago, # ^ |

    It is most probably due to the modulo operator; try using Mint.

    • »
      »
      »
      21 month(s) ago, # ^ |

      That doesn't seem likely to me: modint is just a class-level abstraction over modular operations that makes overflow errors less likely. I don't think it would actually make the operations cheaper (in fact, because of the wrapper, I would guess it makes them slightly more costly).

      I did try it though: it still TLEs on most of the tests: Link

  • »
    »
    21 month(s) ago, # ^ |

    Having lots of small vectors is bad: each one is a separate heap allocation. Simply replacing all your size-2 vectors with array<int, 2> makes it run in 639ms. Submission.
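
    For illustration, here is a tiny toy example of the difference (my own sketch, not the actual submission): a vector<vector<int>> makes one heap allocation per inner vector, while vector<array<int, 2>> stores the pairs in one contiguous block, which is much friendlier to the cache.

    #include <array>
    #include <cstdio>
    #include <vector>
    using namespace std;

    int main() {
        const int n = 200000;

        // n tiny vectors: each inner vector is a separate heap allocation
        // reached through a pointer, so traversal jumps around in memory.
        vector<vector<int>> slow(n, vector<int>(2, 1));

        // One contiguous allocation: the 2-element arrays are stored inline,
        // so traversal is sequential and cache-friendly.
        vector<array<int, 2>> fast(n, {1, 1});

        long long a = 0, b = 0;
        for (const auto &v : slow) a += v[0] + v[1];
        for (const auto &v : fast) b += v[0] + v[1];
        printf("%lld %lld\n", a, b);
        return 0;
    }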

    • »
      »
      »
      21 month(s) ago, # ^ |

      Thanks for your insight. I saw tourist casually use multiple 3D vectors in his streams, and so thought that it was perfectly fine (although later he did change it to arrays after getting a TLE xD)

»
21 month(s) ago, # |

Could have solved G for the first time in contest today. But I am so bad at implementation :((

BTW today's contest seemed a lot less mathy. Good job!

»
21 month(s) ago, # |

I have an $$$O(N \log N + \sum |S_i|)$$$ solution for E. Observe that, once the array $$$S$$$ is sorted, the string with the maximum LCP with $$$S_i$$$ is either the one immediately before it or the one immediately after it. So we maintain two multisets $$$L$$$ and $$$G$$$, one sorted in non-decreasing order and the other in non-increasing order. The rest of the solution is basically computing the LCP values. Submission here.

  • »
    »
    21 month(s) ago, # ^ |

    We do not need two multisets. We can just put the strings in an array, sort it, and for every string check the LCP with its two neighbors. Submission here.
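
    A minimal sketch of that sort-and-check-neighbors approach (my own code and I/O format, not the linked submission): sort (string, original index) pairs, take the LCP with each neighbor in sorted order, and report the answers in the original order.

    #include <bits/stdc++.h>
    using namespace std;

    // Length of the longest common prefix of a and b.
    int lcp(const string &a, const string &b) {
        int k = 0, lim = (int)min(a.size(), b.size());
        while (k < lim && a[k] == b[k]) k++;
        return k;
    }

    int main() {
        int n;
        cin >> n;
        vector<pair<string, int>> s(n);  // (string, original index)
        for (int i = 0; i < n; i++) {
            cin >> s[i].first;
            s[i].second = i;
        }
        sort(s.begin(), s.end());

        vector<int> ans(n, 0);
        for (int i = 0; i < n; i++) {
            // In sorted order, the best match for s[i] is one of its two neighbors.
            if (i > 0) ans[s[i].second] = max(ans[s[i].second], lcp(s[i].first, s[i - 1].first));
            if (i + 1 < n) ans[s[i].second] = max(ans[s[i].second], lcp(s[i].first, s[i + 1].first));
        }
        for (int x : ans) cout << x << '\n';
        return 0;
    }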

    • »
      »
      »
      21 month(s) ago, # ^ |

      Yes, I knew; I was just too lazy to do the work with indices/iterators.

»
21 month(s) ago, # |

After continually getting WA on C, I got impatient and ended up with -58 in this round.

After seeing the editorial, I realized my mistake was quite stupid.

:(

  • »
    »
    21 month(s) ago, # ^ |

    Some people got AC on C even with incorrect solutions.

»
21 month(s) ago, # |

Problem E can be easily solved using a trie, with a complexity of $$$O(\sum |S_i|)$$$. Here is my solution.
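
In case it is useful, a minimal sketch of the trie idea (my own code, assuming lowercase strings and one answer per line; not the linked solution): count how many strings pass through each trie node, then for each string report the deepest prefix whose node is shared by at least two strings.

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<string> s(n);
    for (auto &x : s) cin >> x;

    // Trie over lowercase letters; cnt[v] = number of strings whose
    // prefix ends at node v.
    vector<array<int, 26>> nxt(1);
    nxt[0].fill(-1);
    vector<int> cnt(1, 0);

    for (const string &str : s) {
        int v = 0;
        for (char ch : str) {
            int c = ch - 'a';
            if (nxt[v][c] == -1) {
                nxt[v][c] = (int)nxt.size();
                nxt.push_back({});
                nxt.back().fill(-1);
                cnt.push_back(0);
            }
            v = nxt[v][c];
            cnt[v]++;
        }
    }

    for (const string &str : s) {
        int v = 0, depth = 0, best = 0;
        for (char ch : str) {
            v = nxt[v][ch - 'a'];
            depth++;
            if (cnt[v] >= 2) best = depth;  // some other string shares this prefix
        }
        cout << best << '\n';
    }
    return 0;
}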

  • »
    »
    21 month(s) ago, # ^ |

    True. I was going in the same direction, but then realized that the max LCP is attained by the lexicographically closest string, i.e. the nearest string greater or smaller than the given one.

    This was faster to code but definitely slower than the trie approach.

  • »
    »
    21 month(s) ago, # ^ |

    Decided to use a trie so that I could practice implementing it.
    Did it for the first time in several months, lol.
    The set-based implementation would've been a lot faster to code.

»
21 month(s) ago, # |

I passed E with time complexity $$$O(n \log^{2} \sum |S_i|)$$$. Maybe the time limit should be a little tighter.

»
21 month(s) ago, # |

I used double hashing to solve E. My idea is to insert the double hash of every possible prefix into a map.

For each string I binary search on the answer, checking whether a prefix of that length exists in the map.

Thanks for the problemset! I love E and G (couldn't solve G in contest, but the double Fenwick tree + compression is cool!)
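
For reference, a rough sketch of that idea (my own code; the bases, moduli, and I/O format here are my assumptions): store the double hash of every prefix together with its length in a frequency map, then binary search, per string, for the longest prefix length shared with at least one other string.

#include <bits/stdc++.h>
using namespace std;

int main() {
    const long long M1 = 1000000007LL, B1 = 131;
    const long long M2 = 998244353LL, B2 = 137;

    int n;
    cin >> n;
    vector<string> s(n);
    for (auto &x : s) cin >> x;

    // hs[i][k] = double hash of the length-k prefix of s[i];
    // freq counts how many strings produce each (length, hash1, hash2) key.
    vector<vector<pair<long long, long long>>> hs(n);
    map<tuple<int, long long, long long>, int> freq;
    for (int i = 0; i < n; i++) {
        long long h1 = 0, h2 = 0;
        hs[i].push_back({0, 0});
        for (int k = 0; k < (int)s[i].size(); k++) {
            h1 = (h1 * B1 + s[i][k]) % M1;
            h2 = (h2 * B2 + s[i][k]) % M2;
            hs[i].push_back({h1, h2});
            freq[make_tuple(k + 1, h1, h2)]++;
        }
    }

    for (int i = 0; i < n; i++) {
        // "Some other string shares my length-k prefix" is monotone in k,
        // so binary search the largest k whose key is seen at least twice.
        int lo = 0, hi = (int)s[i].size();
        while (lo < hi) {
            int mid = (lo + hi + 1) / 2;
            auto it = freq.find(make_tuple(mid, hs[i][mid].first, hs[i][mid].second));
            if (it != freq.end() && it->second >= 2) lo = mid;
            else hi = mid - 1;
        }
        cout << lo << '\n';
    }
    return 0;
}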

»
21 month(s) ago, # |

Thanks for the round. Really one of my favorite rounds, with diverse problems on different topics.
BTW, it's my first rated round.

»
21 month(s) ago, # |

@chokudai I think Task C has weak test cases. I have DMed you the details.

»
21 month(s) ago, # |

G is solvable using a treap. Submission

»
21 month(s) ago, # |

For problem F, during the contest I had almost figured out the solution but failed at the complexity analysis. I could not convince myself that it is $$$O(N^2)$$$ rather than $$$O(N^3)$$$ until I read the editorial and realized that maybe it is similar to this problem: https://codeforces.net/contest/581/problem/F.

I had seen this problem/idea during a recent virtual participation but missed that observation during today's contest. It is a pity, but still, thanks to the writers for providing such valuable problems.

»
21 month(s) ago, # |

Explanation for $$$F$$$ (Same Editorial's idea but trying to make it more clear):

Let $$$dp[i][is\_chosen][cnt]$$$ be the number of subsets with $$$cnt$$$ connected components over the subtree rooted at node $$$i$$$. $$$is\_chosen$$$ is $$$0$$$ or $$$1$$$ and represents whether the subsets include $$$i$$$ or not. After processing the $$$k^{th}$$$ child of $$$i$$$, $$$dp[i]$$$ should have the number of subsets over $$$i$$$ and the subtrees rooted at its first $$$k$$$ children.

When processing the $$$(k+1)^{th}$$$ child (let it be $$$j$$$), all we have to do is to merge $$$dp[j]$$$ to $$$dp[i]$$$, i.e., iterate on every $$$cnt_i$$$-$$$cnt_j$$$ pair and add their contribution $$$dp[i][is\_chosen_i][cnt_i]\cdot dp[j][is\_chosen_j][cnt_j]$$$ to $$$merged\_dp[is\_chosen_i][cnt_i+cnt_j-(is\_chosen_i\ \&\&\ is\_chosen_j)]$$$.

After processing the $$$(k+1)^{th}$$$ child, just assign $$$merged\_dp$$$ to $$$dp[i]$$$. Note that we subtract $$$(is\_chosen_i\ \&\&\ is\_chosen_j)$$$ because if both $$$i$$$ and $$$j$$$ are chosen, they are merged into $$$1$$$ component. We need to take care here because the expression $$$cnt_i+cnt_j-(is\_chosen_i\ \&\&\ is\_chosen_j)$$$ can be $$$-1$$$.

Proof that complexity is $$$O(N^2)$$$:

Observe that when we iterate on every $$$cnt_i$$$-$$$cnt_j$$$ pair, we can view this (in terms of iteration count) as iterating over every pair of nodes where the first node is from the currently merged subtrees and the second node is from the new subtree to be merged. Hence, every pair of nodes is considered only once, i.e., at their lowest common ancestor. Since we have $$$\dfrac{N\cdot (N-1)}{2}$$$ pairs in total, the total number of iterations is bounded by that count.

Submission.
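
To complement the explanation, a compact sketch of that DP (my own code; it assumes the task is to print, for each $$$x$$$ from $$$1$$$ to $$$N$$$, the number of vertex subsets whose induced graph has exactly $$$x$$$ connected components, modulo $$$998244353$$$):

#include <bits/stdc++.h>
using namespace std;

const long long MOD = 998244353;

int N;
vector<vector<int>> adj;
vector<int> sz;                          // subtree sizes
vector<array<vector<long long>, 2>> dp;  // dp[v][chosen][cnt]

void dfs(int v, int p) {
    sz[v] = 1;
    // Base case: the subtree-so-far is just v. Not choosing v gives 0
    // components, choosing v gives exactly 1 component.
    dp[v][0] = {1};
    dp[v][1] = {0, 1};
    for (int u : adj[v]) {
        if (u == p) continue;
        dfs(u, v);
        array<vector<long long>, 2> merged;
        merged[0].assign(sz[v] + sz[u] + 1, 0);
        merged[1].assign(sz[v] + sz[u] + 1, 0);
        for (int a = 0; a < 2; a++)
            for (int ci = 0; ci < (int)dp[v][a].size(); ci++) {
                if (!dp[v][a][ci]) continue;
                for (int b = 0; b < 2; b++)
                    for (int cj = 0; cj < (int)dp[u][b].size(); cj++) {
                        if (!dp[u][b][cj]) continue;
                        // If both v and the child's root are chosen,
                        // their components merge into one.
                        int c = ci + cj - (a && b);
                        merged[a][c] = (merged[a][c] + dp[v][a][ci] * dp[u][b][cj]) % MOD;
                    }
            }
        dp[v] = merged;
        sz[v] += sz[u];
        dp[u][0].clear(); dp[u][0].shrink_to_fit();  // free the child's tables
        dp[u][1].clear(); dp[u][1].shrink_to_fit();  // to keep memory small
    }
}

int main() {
    cin >> N;
    adj.assign(N, {});
    for (int i = 0; i < N - 1; i++) {
        int a, b;
        cin >> a >> b;
        --a; --b;
        adj[a].push_back(b);
        adj[b].push_back(a);
    }
    sz.assign(N, 0);
    dp.assign(N, {});
    dfs(0, -1);
    for (int x = 1; x <= N; x++) {
        long long ans = 0;
        for (int chosen = 0; chosen < 2; chosen++)
            if (x < (int)dp[0][chosen].size()) ans = (ans + dp[0][chosen][x]) % MOD;
        cout << ans << '\n';
    }
    return 0;
}

The nested loop over $$$cnt_i$$$-$$$cnt_j$$$ pairs is exactly the pair-counting argument above, so the whole thing runs in $$$O(N^2)$$$.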

»
21 month(s) ago, # |

Does anyone know how the test cases for E were generated?

I hashed every prefix using the modulus $$$10^9+7$$$ with a random base (tracking the hashes of different lengths separately). I thought that, given the low number of test cases on AtCoder, this would be enough, but surprisingly I still failed 3/33 test cases. It passed when I switched to a different (and larger) modulus.

»
21 month(s) ago, # |

For problem C, give me a test case where the following code gives the wrong answer:

#include <bits/stdc++.h>
using namespace std;

int main(){
    int n, m;
    cin>>n>>m;
    vector<int> deg(n, 0);
    for(int i = 0; i<m; i++){
        int a, b;
        cin>>a>>b;
        deg[--a]++, deg[--b]++;
    }
    int deg1 = 0, deg2 = 0;
    for(auto e: deg){
        if(e==1){
            deg1++;
        }
        if(e==2){
            deg2++;
        }
    }
    if(n==1 || (deg1==n-2 && deg2 == 2)){
        cout<<"Yes\n";
        return 0;
    }
    cout<<"No";
    return 0;
}


  • »
    »
    21 month(s) ago, # ^ |

    The second diagram in the editorial

    • »
      »
      »
      21 month(s) ago, # ^ |

      #include <bits/stdc++.h>
      using namespace std;

      int main(){
          int n, m;
          cin>>n>>m;
          vector<int> deg(n, 0);
          for(int i = 0; i<m; i++){
              int a, b;
              cin>>a>>b;
              deg[--a]++, deg[--b]++;
          }
          int deg1 = 0, deg2 = 0;
          for(auto e: deg){
              if(e==1){
                  deg1++;
              }
              if(e==2){
                  deg2++;
              }
          }
          if(n==1 || (deg1==2 && deg2 == n-2)){
              cout<<"Yes\n";
              return 0;
          }
          cout<<"No";
          return 0;
      }

      Why does this one fail?

      • »
        »
        »
        »
        21 month(s) ago, # ^ |

        Try this test case:

        10 9
        1 6
        6 7
        7 1
        2 3
        3 8
        8 2
        4 5
        5 9
        9 10
        

        Answer: No. The degree check passes (two vertices of degree $$$1$$$ and $$$n-2$$$ of degree $$$2$$$), but the graph is two triangles plus the path 4-5-9-10, so it is disconnected and not a single path; you also need to check connectivity.