Fastest Editorial release *_*
There is a feature now that it cannot be published earlier than the system tests, so don't hope for much in the future :P
Wow! Thank you, it's really useful.
When there were 35 minutes left in the contest, I thought "F**k it, let's assume that F is convex and write this thing with crazy treaps". Unfortunately, my treap skills weren't good enough for this to work. Did you assume that we should just believe that it is convex?
C and E are nice, thanks. D is weird and easy, didn't get your intentions.
Well, I think that we should always assume that people have to believe a bit, but ofc during the preparation it's good to have proofs.
A priority queue is enough to maintain the function. You can just maintain the turning points of the increasing and decreasing parts. Adding $$$|x|$$$ results in the movement of $$$O(1)$$$ turning points between the two parts.
61176125
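Not the problem's intended solution, just a minimal slope-trick sketch of the priority-queue idea above: keep the breakpoints of a convex piecewise-linear function in two heaps, and note that adding an absolute-value term moves at most one breakpoint across the minimum. The concrete function $$$f(t) = \sum_i |t - a_i|$$$ and the sample values are made up for illustration.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // Breakpoints of a convex piecewise-linear f, split around its minimum:
    // L holds breakpoints left of the minimum (max-heap), R those right of it (min-heap).
    priority_queue<long long> L;
    priority_queue<long long, vector<long long>, greater<long long>> R;
    long long minValue = 0;

    vector<long long> a = {5, 1, 3, 7};              // hypothetical terms; f(t) = sum |t - a_i|
    for (long long x : a) {
        L.push(x);
        R.push(x);
        // Adding |t - x| moves at most one breakpoint from one side to the other.
        if (L.top() > R.top()) {
            long long l = L.top(), r = R.top();
            L.pop(); R.pop();
            minValue += l - r;                       // the minimum value of f grows by the overlap
            L.push(r);
            R.push(l);
        }
    }
    cout << minValue << '\n';                        // prints 8 = min over t of sum |t - a_i|
}
```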
Good contest with Fast Tutorial :: Thanks Radewoosh
I spent a lot of time thinking how to solve Div2D in $$$O(n)$$$ instead of $$$O(n^2)$$$
WTH is wrong with me!
We actually don't think this problem can be solved in $$$O(n^{2 - \varepsilon})$$$ time for any $$$\varepsilon > 0$$$ — we can prove that it's at least as hard as 2-Orthogonal Vectors problem (definition 6 in this paper). If we assume that the Strong Exponential Time Hypothesis is true, then 2-Orthogonal Vectors (and therefore Div2D as well) cannot be solved faster than $$$O(n^{2 - \varepsilon})$$$.
Chill out, dude. Yeah, I thought the same thing during the contest. Sure.
Dear Radewoosh,
I have a question about the intended solution for E. My E is accepted while I was expecting to get a TLE: 61172291
I knew that the number of distinct prime factors will not exceed $$$\log(10^{12})$$$, and each gcd operation cannot increase it. Therefore, I decided to implement a naive approach, caching the node id and the current gcd of nodes from one of its ancestors. Honestly, I didn't expect to get it accepted (I just tried my luck).
I see better solutions that ended up with less than 400 ms while mine is very close to the time limit.
I am wondering if you purposefully intended to accept solutions like mine.
At every node, we are sending all the possible gcds from the ancestors as well as the current node. Now, the gcds at the current node can take $$$\log$$$ different values, so if the height of the tree is $$$n$$$, wouldn't it become $$$n \log n$$$ different values overall? Can you point out where I am going wrong?
In "**Dawid and Bags of Candies**" problem, how did you come to the conclusion that "**one of them should take only the biggest bag or the biggest and the smallest bag**"?
I have checked all the possibilities, which are as follows (a, b, c, d are the inputs; see the brute-force sketch after this list):
a + b = c + d
a + c = b + d
a + d = b + c
a + b + c = d
a + b + d = c
b + c + d = a
c + d + a = b
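A brute-force sketch of the same check (assuming the problem's usual YES/NO output): give every subset of the four bags to the first brother and test whether the two sums are equal, which covers all the cases listed above.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    array<long long, 4> a;
    for (auto &x : a) cin >> x;
    long long total = a[0] + a[1] + a[2] + a[3];
    bool ok = false;
    for (int mask = 0; mask < 16; ++mask) {          // subset of bags taken by the first brother
        long long s = 0;
        for (int i = 0; i < 4; ++i)
            if (mask >> i & 1) s += a[i];
        if (2 * s == total) ok = true;               // the second brother gets total - s
    }
    cout << (ok ? "YES" : "NO") << '\n';
}
```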
For Div1A, what should be the output of following test :
4
5 5 2 6
3 6 10 1
Shouldn't it be optimal to take all the guys in the group, since their skill sets are such that for everyone, for every $$$j$$$-th skill there is a guy with the same skill? So no one will think he is better than everyone, hence the answer should be 20. But I am getting 9 when running this case on some accepted code. Or I guess I still didn't get the question.
Edit: Sorry, my bad. The question says that any person in the group should not be better than every other person in the group.
I still have not understood. Why not 20?
can you explain this case
10 3 3 5 5 6 6 1 2 4 7 1 1 1 1 1 1 1 1 1 1
ans : 9
Hi, sorry, I was wrong earlier; I edited the previous edit, see it again. According to the problem, the person with $$$a_i = 7$$$ will consider himself the best, so we cannot include him in the group. I hope you get it.
Can somebody explain how to solve the 3rd problem, Anadi and Domino (Div 2), for the $$$n = 7$$$ case?
For $$$n = 7$$$ it's similar to $$$n \le 6$$$, but we can have two nodes with the same number as long as the available dominoes and the given conditions allow it. So iterate over the pair of nodes that share the same number and take the maximum among all cases.
Can you explain how the answer for this test case is 14 and not 13?
7 15 4 6 7 3 3 1 6 5 2 7 3 6 7 6 2 6 7 5 3 5 5 4 4 7 2 1 2 4 2 3
You will get 14. Just consider giving the domino with one dot to the 7th vertex. Also consider the case of giving (1,1) domino to edge 2-7 or 3-7.
Am I the only one who used brute force for C, checking all $$$7^6$$$ possible assignments of numbers to the vertices?
I did that too, and it works
Just keep an array of all dominoes you have left, by their upper half number and their lower half number
I was also thinking along these lines in the contest: assign each vertex a number by recursion over all possible vertex-number combinations. But after that I was confused about how to check it; maybe I am missing something. Can you explain your approach if possible?
61159724 Check this out
I actually did that too, but this code is brutal haha. Follow my implementation : 61155872
[DELETED]
For problem Div1 A: Can someone describe how we decide whether two students can study calmly? For the first test case, I couldn't get it. Thanks.
Edit: A person in the group should not be better than every other person in the group.
Thank you.
Would you explain sample test 2? I'm not able to understand why we can't send all the people. Thanks in advance...
In the second test, in each group of at least two students someone will always think that he is better than everyone else in the subset
001, 010, 011. So if we take a group of all 3 people, then nobody will think that he is better than everybody else. So why can't we send all 3 people?
Maybe, 011 will think that he's better than 001 (skill #2) and better than 010 (skill #3).
Can anyone explain why the number of swaps is proportional to the change in potential for Div1 C? What about all the in-edges going from left to right initially that now point right to left? Also, in the editorial, did the author mean to write that each vertex can have no more than $$$\sqrt{2m}$$$ adjacent vertices to the right, not to the left?
I understood it as follows.
Consider the change in potential after each query: (the number of edges that now point left-to-right but pointed right-to-left before; a positive change) minus (the number of edges that now point right-to-left but pointed left-to-right before; a negative change).
Consider the process of adding the edges at the beginning (while forming the graph, before performing any query) as adding 1 to the potential for every edge that is left-to-right.
Now, the positive change can be at most $$$\sqrt{2m}$$$ per query, hence all the positive changes sum to at most $$$m + q\sqrt{2m}$$$.
The sum of the negative changes can't exceed the sum of the positive changes (as the potential can't be negative), hence its maximum value is also $$$m + q\sqrt{2m}$$$.
One operation of one query increases or decreases the potential by exactly 1, so the total number of operations is (positive change sum) + (negative change sum), which is at most $$$2(m + q\sqrt{2m})$$$.
Oh so you're starting off with an initially empty graph, and treating the process of building the graph as part of the total cost? This is a nice way to guarantee that the initial potential is zero. Thanks for your explanation.
Kamil Making a Stream : YT Advertisement Alert!! Jk thanks for the problems :)
Or maybe it's a subliminal message to make his channel more popular than T-Series. Who knows :P
In Div1 C Editorial:
"It now turns out that each vertex is adjacent with at most sqrt(2m) vertices to its left"
How is this true? A simple counterexample is just a tree with degrees 1, 1, ..., N-1. The last vertex will be adjacent to N-1 vertices from the left.
Or am I misunderstanding something?
Thanks for finding a typo!
s/increasing order/decreasing order/
We'll push a fix shortly.
I am also not sure how the number of flips is proportional to the change of the potential.
For a query about a node u, the edges joining u with the nodes on its left side decrease the potential when they are flipped, while the edges on the right increase it.
So, for example, if the number of edges being flipped on its left side is equal to those on its right side, the potential will not change after the query even though there could be lots of edges being flipped.
I was wondering the same thing as you (I left a comment above, unfortunately no response), but I think I might understand how Radewoosh's solution works. (This is my first time learning amortized analysis so please correct any error I might've made.)
Consider the operation of swapping $$$u$$$'s incoming edges into outgoing edges. If $$$u$$$ has $$$A$$$ incoming edges from the left and $$$B$$$ incoming edges from the right, the change in potential is $$$\Delta \phi = B-A$$$. However, the total cost of removal is $$$C_{actual} = B+A$$$, so the amortized cost is $$$C_{amortized} = C_{actual} + \Delta \phi = (B-A)+(B+A) = 2B \leq 2\sqrt{2m}$$$, since $$$B \leq \sqrt{2m}$$$.
It is not always true that $$$\phi(0) \leq \phi(q)$$$, but it is true that $$$\phi(0) \leq n \sqrt{2m} \leq \phi(q) + n \sqrt{2m}$$$.
Edit: We can also bound $$$\phi(0) \leq m \leq \phi(q) + m$$$ which is much simpler.
Therefore the sum of the amortized costs is $$$\sum_{i=1}^{q} C_{amortized, i}=\phi(q) - \phi(0) + \sum_{i=1}^{q} C_{actual, i} \geq \sum_{i=1}^{q} C_{actual, i} - n\sqrt{2m}$$$, where the first equality follows from a telescoping sum.
Therefore the total cost is bounded by $$$\sum_{i=1}^{q} C_{actual, i} \leq n \sqrt{2m} + \sum_{i=1}^{q} C_{amortized, i} \leq n \sqrt{2m} + 2q\sqrt{2m}$$$
I don't understand why
$$$C_{amortized} = C_{actual} + \Delta\phi$$$ (the change of potential)
The sum of amortized time should be equal to the sum of the actual time, if my understanding about the amortized analysis is correct.
The way I understand it, the amortized cost is just a function with certain properties. I could have replaced $$$C_{amortized}$$$ with $$$f$$$ and the argument would not change.
Ok. I think I now understand the solution as well.
Thanks a lot :D
I also didn't understand how you got $$$C_{amortized} = C_{actual} + \Delta\phi$$$. Can you give some source from where you studied this?
I just read these notes https://www.cs.cmu.edu/afs/cs/academic/class/15451-s07/www/lecture_notes/lect0206.pdf
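For reference, this is just the standard potential-method definition from such notes: the amortized cost is defined as $$$\hat{c}_i = c_i + \Phi_i - \Phi_{i-1}$$$, so the two sums differ only by a telescoping term, $$$\sum_{i=1}^{q} \hat{c}_i = \sum_{i=1}^{q} c_i + \Phi_q - \Phi_0$$$. They coincide only when the potential returns to its initial value; in general the amortized sum upper-bounds the actual one whenever $$$\Phi_q \geq \Phi_0$$$, or, as in the comment above, when the difference $$$\Phi_0 - \Phi_q$$$ is bounded separately.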
mnbvmar
I think there are more typos based on this typo. In the next paragraph, I think "left to right" should be "right to left" and vice versa.
Thanks for the feedback, I'll try to rewrite this part of the editorial today.
Can anyone help me with my solution for Div2D/Div1A? I implemented the same idea as the editorial, but somehow get WA on test 7: https://codeforces.net/contest/1230/submission/61167796
Thanks for the contest!
I got the following cute solution for E1 accepted in upsolving (I had to leave ~35 minutes before the end of the contest, so I didn't debug it in time): let's just run the normal algorithm for maximum matching, and every time it needs to check whether a given edge exists, we branch to try both options. The intuition is that in most branches we won't care about the state of many edges, so this will run in time. In practice it took 5.3 seconds out of 7 in Java :)
To implement this, our recursive search function needs to take the current state of the dfs stack as input: https://codeforces.net/contest/1229/submission/61184672
For Div 2 D, why is the answer for this test case 9 and not 10?
Aren't all the skill sets subsets of the skills present in 7?
The problem states that the '7' can only be in the group if he isn't better than every other member. In that case the 7 is better than all the other numbers (you have to check them one by one).
In problem C, why did I pass the pretests but finally get RE on test 1? Now I submitted the same code again and got "Accepted". I feel puzzled.
The same code: https://codeforces.net/contest/1230/submission/61149579 (RE) and https://codeforces.net/contest/1230/submission/61186531 (AC).
I may be wrong, but doesn't the wording of A. Marcin and Training Camp give the impression that we need to take into account only the entire group and not its subsets?
I thought so and didn't bother checking the "Notes" section. Only too late into the contest did I realize from the explanation of pretest 2 that we have to check that the subsets/subgroups of at least 2 students cannot have anyone who thinks he's better than everyone else.
If you read only the problem statement, it implies that this constraint only applies to the group as a whole, and that they will be participating in the contest together. Doesn't it? 1229A - Marcin and Training Camp
You have to check the entire group. Pretest 2 is 1, 2, 3. If the team is 1, 2, 3, then 3 will think he is better than everyone. If the team is 1, 2, both will think they are better than everyone (there is only one teammate in this case). If it is 1, 3, then 3 will think he is better than everyone.
Help with Div2 D (I'm getting TLE).
My idea:
Build a directed graph with students as nodes. There will be an edge from node $$$A$$$ to node $$$B$$$ if and only if student $$$A$$$ doesn't think that he is better than student $$$B$$$.
With this in mind, the solution to the problem is the sum of the skill levels of all those nodes such that there is a path from that node to a cycle.
I implemented this and I think that the complexity is $$$O(V+E) \approx 2.5 \cdot 10^7$$$ in the worst case (when it is a complete graph).
I tried taking all the students that know exactly the same algorithms as one node, but it didn't work. code
Did I make a mistake in the complexity or did I miss something in my code?
Thanks in advance.
I solved it with the same idea. I add an edge from $$$i$$$ to $$$j$$$ iff $$$j$$$ has all the skills that $$$i$$$ has. As you mentioned, we can add to the answer all the nodes that have a path to a cycle (or are part of a cycle). I run a $$$dfs$$$ from every node but keep a $$$dp$$$ which avoids re-running the $$$dfs$$$ when we already know whether a node leads to a cycle or not. — 61210128.
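A minimal sketch of that memoized DFS (not the linked submission); the graph construction is replaced by a hypothetical toy graph, and in the real problem $$$g[i]$$$ would contain every $$$j$$$ whose skill set contains $$$i$$$'s.

```cpp
#include <bits/stdc++.h>
using namespace std;

vector<vector<int>> g;   // g[v] = successors of v (toy graph below; built from skill masks in the real problem)
vector<int> state;       // 0 = unvisited, 1 = on the current DFS stack, 2 = reaches a cycle, 3 = does not

// Returns true iff v lies on a cycle or has a path to one; memoized in state[].
bool reachesCycle(int v) {
    state[v] = 1;
    bool reaches = false;
    for (int to : g[v]) {
        if (state[to] == 1 || state[to] == 2) reaches = true;            // closes a cycle, or successor already known good
        else if (state[to] == 0 && reachesCycle(to)) reaches = true;
        // state[to] == 3: that successor cannot reach a cycle, nothing to do
    }
    state[v] = reaches ? 2 : 3;
    return reaches;
}

int main() {
    // Toy graph: 0 -> 1 -> 2 -> 1 (1 and 2 form a cycle, 0 reaches it); 3 has no outgoing edges.
    g = {{1}, {2}, {1}, {}};
    state.assign(g.size(), 0);
    for (int v = 0; v < (int)g.size(); ++v)
        if (state[v] == 0) reachesCycle(v);
    for (int v = 0; v < (int)g.size(); ++v)
        cout << v << (state[v] == 2 ? " counts\n" : " does not count\n");   // 0, 1, 2 count; 3 does not
}
```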
I did the same $$$dp$$$ but on a bool array, maybe I made a mistake. I will try it again, thank you for your response.
Now that I saw your solution again, I noticed that when you are building the graph you are doing it in $$$O(60n^2)$$$, which is almost $$$2\cdot10^9$$$ in the worst case, so it is not surprising that you are getting TLE on a test case with $$$7000$$$ nodes. The problem with your implementation is probably validEdge. You need to do it faster.
Yes, you were right! I did it in the same way as you and now I got AC. But I always thought that bitwise operations (and &, or |, xor ^) had the same complexity as looping over the bits, so now I'm a little bit confused: how is doing it this way so much faster?
Modern computers can do a bitwise operation on a whole machine word in about one clock cycle, which is really fast compared to looping over the bits! You can assume that bitwise operations are done in $$$O(1)$$$ time; that's because of the computer's architecture.
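To make that concrete, here is a hypothetical validEdge written as a single mask comparison (the names and signature are made up, not the code from the submissions above):

```cpp
#include <cstdint>
#include <iostream>

// "Student i's skill set is a subset of student j's", as one AND plus one compare
// instead of a loop over all 60 bits.
bool validEdge(uint64_t skillsI, uint64_t skillsJ) {
    return (skillsI & skillsJ) == skillsI;   // every bit set in skillsI is also set in skillsJ
}

int main() {
    std::cout << validEdge(0b011, 0b111) << ' ' << validEdge(0b011, 0b110) << '\n';   // prints "1 0"
}
```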
Hi! I also used the same approach (and the same style of implementation) and was getting TLE. Then I did what you said and was able to reduce the execution time by a significant amount for the first 15 cases, but I'm still getting TLE on the 16th case.
I use Java, so maybe it does this a bit more slowly; would you mind having a quick look at my submission? I'm sure the problem is while building the graph, as I tried some debugging. Here is the submission: 61288151
Hi! I think that you are building the graph fast enough. Try to debug your $$$dfs$$$. You are probably running $$$dfs$$$ multiple times.
I also solved this by building a directed graph, but instead of finding cycles I stored the out-degree of each node and the reverse graph. Then, for each node with out-degree 0, I decreased the out-degree of every node that has an edge to it (using the reversed graph). Finally, I took the values of the nodes with at least one outgoing edge remaining. Code
If you didn't fully understand Div1 C, you can think about it like this:
Assume $$$S$$$ is the list in which employees are sorted in decreasing order of their degrees. Now, when employee $$$v$$$'s salary is revised, we process all edges incident to $$$v$$$ to reverse them. So, what is the upper bound on the total number of reversals?
When we reverse an incident edge at $$$v$$$, we have $$$2$$$ cases:
Could you give me a more elaborate explanation of problem C (Div 2) for the case $$$n = 7$$$? I still don't understand it.
In problem Div2C, since I can't understand the editorial, can anybody explain it in an easier way?
Note that $$$n \leq 7$$$ by problem constraints.
For $$$n \leq 6$$$, the answer is just $$$m$$$, since we can always place dominoes such that the ones point to node 1, the twos point to node 2, etc., and we end up using, on each edge, the domino whose halves match the numbers of its endpoints. Since there are no duplicate edges, every edge can have a domino placed on it.
For $$$n = 7$$$, things get slightly trickier, as two vertices must share the same number. We can brute-force every pair of vertices as a candidate for that shared number. For each such pair $$$u$$$ and $$$v$$$, the number of "colliding" edges is equal to $$$|S_u \cap S_v|$$$, where $$$S_u$$$ is the set of vertices connected to vertex $$$u$$$ (in other words, it is the number of vertices connected to both candidate vertices). Find the pair with the minimum number of collisions, and subtract that from $$$m$$$ for the final answer.
Sample code (in Kotlin) 61213061
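A short C++ sketch of the same idea (assuming the usual "n m followed by m edges" input format); it is only meant to illustrate the pair-of-vertices argument above:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m;
    cin >> n >> m;
    vector<bitset<8>> adj(n + 1);                         // n <= 7, so small bitsets suffice
    for (int i = 0; i < m; ++i) {
        int a, b;
        cin >> a >> b;
        adj[a][b] = adj[b][a] = 1;
    }
    if (n <= 6) { cout << m << '\n'; return 0; }          // every vertex gets its own number
    int best = 0;
    for (int u = 1; u <= n; ++u)
        for (int v = u + 1; v <= n; ++v) {
            int collisions = (adj[u] & adj[v]).count();   // common neighbours = edges needing the same domino
            best = max(best, m - collisions);
        }
    cout << best << '\n';
}
```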
Why was my code for "Ania and Minimising" getting a wrong answer verdict during the contest, failing on test case 5, but giving the right answer when I checked it later with the same inputs on another online compiler?
Codeforces runs on 32-bit systems, hence long int is only 32 bits long here.
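A tiny illustration (what it prints depends on the target, so treat the values in the comments as typical, not guaranteed):

```cpp
#include <cstdint>
#include <iostream>

int main() {
    std::cout << sizeof(long) << ' ' << sizeof(long long) << '\n';   // e.g. "4 8" on a 32-bit Windows target
    int64_t big = 3LL * 1000000000;                                  // fine in 64 bits, would overflow a 32-bit long
    std::cout << big << '\n';
}
```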
While working on problem E2 (matching, hard version), I found this: A227414, which is the number of $$$n \times n$$$ bipartite graphs with a perfect matching for $$$n \le 7$$$.
I actually think our model solution can be used to extend this entry by $$$n=8$$$ (or maybe even $$$n=9$$$, but this may take hours or even a couple of days). The idea is to solve the problem for all probabilities = $$$\frac12$$$, and then multiply the result by $$$2^{n^2}$$$.
I think that the result for $$$n=8$$$ is
17190232260306391039
Can someone confirm it?
Just put those values in Berlekamp–Massey, easy xd
+1
Can anyone please explain the solution for Div 1 B? I am not getting the editorial.
Let $$$S_u$$$ be the multiset of all values $$$f(t, u)$$$ where $$$t$$$ is an ancestor of $$$u$$$. Then if $$$v$$$ is a direct child of $$$u$$$, $$$S_v = \{x_v\} + \mbox{map}(S_u, \{ i \mapsto \gcd(i, x_v) \})$$$. ($$$\mbox{map}$$$ refers to the higher-order function that makes a new collection with each element in the old collection transformed by the given function)
How is this derived? The $$$\{x_v\}$$$ part comes from the new element $$$f(v, v) = x_v$$$, while the $$$\mbox{map}$$$ part comes from the fact that $$$f(t, v) = \gcd(f(t, u), x_v)$$$
The number of distinct values in $$$S_v$$$ for any vertex cannot be more than $$$\log_2 \max(x)$$$, as noted in the editorial.
This transition can thus be efficiently implemented by computing $$$S_u$$$ as a hashmap, then passing that map to the children for use in computing their own $$$S$$$ multisets.
Example code (in Kotlin): 61249254
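A C++ sketch of the same map-passing DFS, in case it helps (the input format "n, then x_1..x_n, then n-1 edges" and the modulus $$$10^9+7$$$ are assumed from the statement; a recursive DFS is kept for brevity, so a very deep tree might need an iterative version or a larger stack):

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1e9 + 7;
int n;
vector<long long> x;
vector<vector<int>> g;
long long answer = 0;

// cur maps "gcd of a path from some ancestor down to the parent of v" -> number of such paths.
// It has at most O(log max(x)) distinct keys, as noted above.
void dfs(int v, int parent, const map<long long, long long> &cur) {
    map<long long, long long> nxt;
    nxt[x[v]] += 1;                                            // the path consisting of v alone
    for (auto [gc, cnt] : cur) nxt[__gcd(gc, x[v])] += cnt;    // extend every ancestor path by v
    for (auto [gc, cnt] : nxt) answer = (answer + gc % MOD * (cnt % MOD)) % MOD;
    for (int to : g[v])
        if (to != parent) dfs(to, v, nxt);
}

int main() {
    cin >> n;
    x.resize(n + 1);
    g.assign(n + 1, {});
    for (int i = 1; i <= n; ++i) cin >> x[i];
    for (int i = 0; i + 1 < n; ++i) {
        int a, b;
        cin >> a >> b;
        g[a].push_back(b);
        g[b].push_back(a);
    }
    dfs(1, 0, {});
    cout << answer << '\n';
}
```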
Can somebody tell me more about the solution to Div1B/Div2E (Kamil and Making a Stream) that uses jump pointers (binary lifting, I guess)? It would be nice.
Hi, I used that approach.
The main idea is that for a fixed node $$$u_0$$$ you can decompose the path from $$$u_0$$$ to $$$1$$$ into several sub-paths like this (let $$$f[v]$$$ be the parent of node $$$v$$$):
From $$$u_0$$$ to $$$u_1$$$, from $$$f[u_1]$$$ to $$$u_2$$$, ..., from $$$f[u_{k-1}]$$$ to $$$u_k = 1$$$, where $$$u_1$$$ is an ancestor of $$$u_0$$$, chosen so that the $$$\gcd$$$ from node $$$u_0$$$ to a node $$$v$$$ is equal to the $$$\gcd$$$ from node $$$u_0$$$ to a node $$$w$$$ if and only if $$$v$$$ and $$$w$$$ are in the same semi-path.
In other words, the nodes of the same semi-path have the same $$$\gcd$$$ to node $$$u_0$$$, and nodes from different semi-paths have different $$$\gcd$$$s to node $$$u_0$$$.
And to do this, you can use binary lifting to find the first node of every semi-path.
I don't know if I explained it well, but here is my code 61246386; ask me anything you want. :D
now understood, thank you!
Can someone explain why I could pass Div2D/Div1A using an $$$O(n^3)$$$ method? 61205254
Unable to understand Div 2 D: why is the answer to the second test case not 6? Please explain, thanks.
Binary representation of $$$1$$$, $$$2$$$ and $$$3$$$ are $$$01$$$, $$$10$$$ and $$$11$$$ respectively. If you put all these people in a group then $$$11$$$ will think he is better than $$$10$$$ because he knows an algorithm that the $$$10$$$ doesn't know. And again $$$11$$$ will think he is better than $$$01$$$ because he knows an algorithm that $$$01$$$ doesn't know. So he will think he is better than everybody else. That's why you can't group $$$1$$$, $$$2$$$, and $$$3$$$. Both $$$10$$$ and $$$01$$$ will think that they are better than the other one. So you can't find a group at all.
Can't understand Div 2 E. Can anybody explain this problem's editorial in an easier way?
In this problem you need to find the cumulative gcd from the current vertex to each of its ancestors and then sum it all up. The key fact is that for a given vertex there won't be more than about $$$\log(\max x)$$$ blocks of ancestors with the same gcd. Take the example [2 3 6 6]: standing at the last 6, there are 3 blocks, namely 6 6 with gcd 6, 3 6 6 with gcd 3, and 2 3 6 6 with gcd 1. Each time you extend the path upwards you either keep the same gcd or get a gcd that divides the current one, and since every proper divisor is at most half as large, there are at most logarithmically many distinct blocks. Seeing this, the rest is implementation: go through each node and store all the gcds along with their counts in a map. Using the parent's map, take the gcd of each stored value with the current vertex's value and add cnt * gcd(prev_accumulated_gcd, x[current]) to the answer. After doing this for all previously accumulated gcds, update the map with the gcds you just found and then recurse with dfs. This way you will get your sum. The overall complexity is roughly $$$O(n)$$$ for the dfs times $$$O(\log)$$$ distinct gcds per vertex times $$$O(\log)$$$ for the map operations.
Thanks, but if I start from the root, then the map will store gcd values from the root down to the vertex u, while I will need the gcds from vertex u to all of its ancestors. Do you mean starting the dfs from the leaves?
No, I mean starting the dfs from the root and storing the values of the upward paths as you go down. Initially the (gcd, cnt) map is empty; in the first dfs call the value of the root is added with cnt 1, then you pass this map down to the dfs calls for the children, and the rest of the procedure I have already explained above. So the dfs goes from top to bottom and calculates the gcds for the children using the gcds calculated above.
I understood you and got accepted, thank you for your help.
np
Hi, I have implemented the same idea.
But I think I am missing a small thing.
The code is very straightforward. Can you help me out? https://codeforces.net/contest/1230/submission/61536704
How to prove the lemma in problem F?
(1) The answer won't change if the $$$x_i$$$ were allowed to be arbitrary real values.
You can model the problem as minimum-cost flow: take $$$n$$$ vertices $$$v_1, \dots, v_n$$$ corresponding to the scales, and add a source $$$s$$$ and a sink $$$t$$$. The units of flow will correspond to the paths taken by each coin. Let $$$M > \max(n, \sum_{j=1}^n a_j)$$$. The edges go as follows:
We can see that the minimum cost of the maximum flow in this network is equal to the minimum time needed to solve the riddle, minus $$$M \cdot \sum_{j=1}^n a_j$$$. Now, it's a well-known fact that in the flow instances with integer capacities, there exists an optimal flow using only integer flows (even if we allow any real flows). This solves part (1).
(2) The arithmetic mean of two feasible solutions is a feasible solution.
If $$$(p_1, p_2, \dots, p_n)$$$ and $$$(q_1, q_2, \dots, q_n)$$$ are feasible solutions, consider the solution $$$(s_i)$$$ where $$$s_i = \frac{p_i + q_i}{2}$$$ (using $$$s$$$ to avoid a clash with the interval bound $$$r_i$$$). We only need to prove that $$$a_i - s_i + s_{i-1} \in [l_i, r_i]$$$. However, the left-hand side is the arithmetic mean of $$$a_i - p_i + p_{i-1}$$$ and $$$a_i - q_i + q_{i-1}$$$. Both these expressions were in the $$$[l_i, r_i]$$$ interval, therefore $$$a_i - s_i + s_{i-1}$$$ is good as well. This solves part (2).
Let's now prove that $$$F(p) + F(q) \geq 2F\left(\frac{p+q}{2}\right)$$$ for any real $$$p$$$, $$$q$$$. Take $$$(p_1, \dots, p_n)$$$ as the optimal solution for which $$$p_1=p$$$, and $$$(q_1, \dots, q_n)$$$ as the optimal solution for which $$$q_1=q$$$. Now, notice that $$$x \to |x|$$$ is a convex function. Therefore, $$$|s_i| = \left|\frac{p_i + q_i}{2}\right| \leq \frac{|p_i|}{2} + \frac{|q_i|}{2}$$$ for each $$$i$$$. As $$$(s_1, \dots, s_n)$$$ is some solution (not necessarily optimal) for which $$$s_1 = \frac{p+q}{2}$$$, we have that $$$F\left(\frac{p+q}{2}\right) \leq \sum |s_i| \leq \frac12 \left(\sum |p_i| + \sum |q_i|\right) = \frac12 (F(p) + F(q))$$$.
Therefore, $$$F$$$ is convex and has an optimum at some integer point. Hence integer ternary search works.
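For completeness, a sketch of integer ternary search over such a convex $$$F$$$; the quadratic $$$F$$$ below is only a placeholder (in the actual solution $$$F(p)$$$ would be the minimal cost with the first variable fixed to $$$p$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Placeholder convex function, for demonstration only.
long long F(long long p) { return (p - 42) * (p - 42); }

long long minimizeOn(long long lo, long long hi) {       // integer ternary search on [lo, hi]
    while (hi - lo > 2) {
        long long m1 = lo + (hi - lo) / 3;
        long long m2 = hi - (hi - lo) / 3;
        if (F(m1) <= F(m2)) hi = m2;                     // by convexity, the minimum is not right of m2
        else lo = m1;                                    // otherwise it is not left of m1
    }
    long long best = F(lo);
    for (long long p = lo + 1; p <= hi; ++p) best = min(best, F(p));
    return best;
}

int main() {
    cout << minimizeOn(-1000000, 1000000) << '\n';       // prints 0 (attained at p = 42)
}
```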
In Div1 A, what is meant by "a smaller set of skills"? Are you talking about the number of algorithms the student knows? If so, the solution doesn't make any sense, because a potential student having a smaller set of skills than another student doesn't guarantee that the condition will be satisfied after he is added.
Radewoosh, I'm having trouble understanding why your Div1D solution has complexity $$$O(nk!k)$$$. It is clear that you calculate the (at most) $$$k$$$ distinct groups that end in $$$r$$$, and obviously do this for every $$$r$$$ from $$$1$$$ to $$$n$$$.
However, you assume that every time you insert a new permutation into a specific group, the DFS/BFS algorithm you apply to obtain the new bigger group has time complexity of $$$O(k!)$$$, but isn't it $$$O((k!)^2)$$$ at worst?
Correct me if/where I'm wrong:
By definition BFS/DFS has complexity $$$O(n + m)$$$, where $$$n$$$ is the number of nodes and $$$m$$$ is the number of edges. It is obvious that $$$n = k!$$$ in this case. But $$$m$$$ can go up to $$$n^2$$$. When you insert a permutation into a group you have to compose it with every other one in the group. This produces several other new permutations for the group, so each of them should similarly be composed with all the existing permutations in the group. This process is repeated on and on, until no new permutations are produced. Therefore, the $$$x$$$-th permutation added was composed with $$$x - 1$$$ other permutations. So in order to obtain a group of cardinality $$$n$$$, the complexity is $$$O(n^2)$$$.
I'm a rookie in group theory, so I don't know if there's some property on groups/composition that brings up a heuristic that reduces the number of operations on the proposed solution. I'd like to know if this is the case, or the actual complexity is $$$O(n(k!)^2k)$$$, and it passes in time thanks to microoptimizations/a good constant factor.
Thanks for the contest btw, it was challenging and interesting.
We create a set of $$$O(k)$$$ generators: $$$g_1, g_2, \dots, g_\ell$$$. Each element of the subgroup is then a product of these generators in some order, possibly with some generators occurring multiple times (or not at all). You can therefore have at most $$$|V| \cdot k$$$ edges in your graph: you only need to multiply by $$$g_1, g_2, \dots, g_\ell$$$ in each BFS/DFS iteration.
Therefore, each BFS/DFS will take at most $$$k!k$$$ time. This would suggest that the total runtime of processing each prefix is $$$k!k^2$$$. It's not true, though: each new subgroup in the subgroup chain is at least twice as large as the previous one. Therefore, the running time is something like $$$\sum_{i=0}^k 2^{-i} k!k = O(k!k)$$$.
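A minimal sketch of that BFS over generators (not the model solution): each found element is expanded only through the $$$O(k)$$$ generators, so the number of edges is at most (size of the subgroup) times (number of generators).

```cpp
#include <bits/stdc++.h>
using namespace std;

// Close a set of generator permutations of {0, ..., k-1} under composition via BFS.
vector<vector<int>> closure(const vector<vector<int>> &gens) {
    set<vector<int>> seen(gens.begin(), gens.end());
    queue<vector<int>> q;
    for (const auto &g : gens) q.push(g);
    while (!q.empty()) {
        vector<int> p = q.front(); q.pop();
        for (const auto &g : gens) {
            vector<int> pg(p.size());
            for (size_t i = 0; i < p.size(); ++i) pg[i] = p[g[i]];   // composition "apply g, then p"
            if (seen.insert(pg).second) q.push(pg);                  // new element -> expand it later
        }
    }
    return vector<vector<int>>(seen.begin(), seen.end());
}

int main() {
    // Example: a transposition and a 3-cycle generate all of S_3.
    cout << closure({{1, 0, 2}, {1, 2, 0}}).size() << '\n';          // prints 6
}
```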
Got it. I thought you needed to keep $$$O(k!)$$$ generators, but you're right, $$$O(k)$$$ is enough. Then you perform a BFS/DFS on all the $$$O(k!)$$$ permutations, but only considering those $$$O(k)$$$ generators as transitions per each node.
However, this means I coded a $$$O(n(k!)^2k)$$$ solution and got AC instead of TLE: 61572593. Weak testing?
Either way, thanks a lot for the quick response.
Is it $$$O(n(k!)^2k)$$$ or $$$O(n(k!)^2)$$$? If you compute a subgroup in $$$O(\mathrm{size}^2)$$$ time, then computing all the subgroups in the chain is $$$O((k!)^2)$$$ (same geometric progression thing).
Actually, it was really hard to make $$$O(n(k!)^2)$$$ much slower than $$$O(nk!k)$$$, so we were okay with someone squeezing the former in TL. Setting $$$k=6$$$ was not really an option, as $$$n$$$ would have to be significantly lower and we had some quick $$$O(n^2 + f(k))$$$ solutions (for some function $$$f$$$) that would pass without any problems.
In Div2D/Div1A, what is meant by "strictly smaller set of skills"? Do you literally just compare the bitmasks like numbers? Also, shouldn't there be a proof for this, as part of the editorial?
Regarding Marcin and Training Camp: it is written that if any student has a set of skills strictly smaller than the skills of everyone in a group, then he can be safely included.
Consider this example in which the group consists of skills 4 (100) and 4 (100); now including 1 (001) will make him think he is better than the others, because no one else in the group has the first bit set. Am I missing something?
I am unable to understand question A, Marcin and Training Camp. Can someone please explain it, especially test case 2?
1 -> "01", 2 -> "10", 3 -> "11". If we take all 3 as a group then no one in the group thinks that he is better than everyone else, right? Why is the answer 0?