Problem E is not original! link
So what? The same could be said about countless other problems. One could say the same thing whenever they encounter an MST or shortest-path problem.
I agree with you! Maybe what I was trying to say is that the other problem's editorial could be a little help to whoever didn't get this one. I'm just trying to point out the similarities, that's all!
it helped me so thank you :)
I had a different idea for E.
First, let's take the vertices whose degree is bigger than N/2. Any two such vertices must be in the same connected component, because their neighbor sets must overlap.
What about the other vertices? Well, their degrees are smaller than N/2. That means that N-deg(v) > deg(v) for each of them. But the sum of N-deg(v) is at most 2M, hence their sum of degrees is at most 2M as well. So we can apply a regular algorithm (DFS, or DSU) to those vertices.
You have to extend the set of vertices whose degree is bigger than N/2 to its maximum, since vertices of degree no more than N/2 may also be in the set.
Yes, I guess I wasn't clear.
For the vertices whose degree is less than N/2, you process ALL their edges: the ones leading to each other, as well as those to the big set.
You're right.
Thank you very much for this explanation.
Now I got E
Sorry, I don't get this part:
but the sum of N-deg(v) is at most 2M, hence their sum of degrees is at most 2M as well
Why is the sum of N-deg(v) at most 2M, what does N-deg(v) mean, and how can we derive "their sum of degrees is at most 2M" from that?
N - deg(v) counts the vertices w such that (v, w) is NOT an edge in the graph.
There are M pairs (x, y) that are NOT edges, and 2M if we count both (x, y) and (y, x).
Thus if we add up (N - deg(v)) for every vertex v, we will get 2M. And if we add up (N - deg(v)) for only some vertices, we will get a number at most as big as 2M.
Now, if deg(v) < N - deg(v), then the sum of deg(v) in the set is also less than the sum of N - deg(v) in the set, which we have already established is at most 2M.
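Written out exactly (a vertex is not its own non-neighbor, so the complement degree is (N-1) - deg(v); here S is the set of vertices with deg(v) < N/2 and M is the number of non-edge pairs in the input):

```latex
\sum_{v \in V} \bigl((N-1) - \deg(v)\bigr) = 2M,
\qquad
\deg(v) < \frac{N}{2} \;\Rightarrow\; \deg(v) \le (N-1) - \deg(v),
\qquad\text{hence}\qquad
\sum_{v \in S} \deg(v) \;\le\; \sum_{v \in S} \bigl((N-1) - \deg(v)\bigr) \;\le\; 2M.
```

So scanning all edges incident to the low-degree vertices touches at most 2M endpoints.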
Nice idea
can you share your code for this logic. ?
https://codeforces.net/contest/920/submission/34908754
Wow, my code looked so different 5 years ago
Does anyone know why I get RE on test 9 when using it = vis.lower_bound(*it), where set<int> vis holds the unvisited nodes and it is an iterator into vis? 34911809. I got AC after changing *it to x (with int x = *it).
I believe it's because you should not keep using the iterator it after you have erased it from the set.
Editorial for E looks unfinished.
Could someone explain it in a better way/more understandable way? Or Can someone provide the Implementation?
Thanks.
Here's the implementation: http://codeforces.net/contest/920/submission/34871219
Just keep all the unvisited nodes in a set and keep erasing vertices as you visit them in the BFS. You now selectively traverse only the unvisited nodes, which brings the complexity down to O(N log N) (the use of sets gives the log N factor).
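A minimal sketch of that technique (my own code and names, assuming the input lists the missing edges as in problem E, so the real graph is the complement of what is given):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Component sizes of the complement graph.
// n: number of vertices (1-based); bad[v]: vertices NOT adjacent to v
// (the pair list given in the problem). Sketch of the set-of-unvisited
// technique described above; names are my own, not any submission's.
vector<int> complementComponents(int n, const vector<set<int>>& bad) {
    set<int> unvisited;
    for (int v = 1; v <= n; ++v) unvisited.insert(v);
    vector<int> sizes;
    while (!unvisited.empty()) {
        int start = *unvisited.begin();
        unvisited.erase(unvisited.begin());
        queue<int> q;
        q.push(start);
        int cnt = 0;
        while (!q.empty()) {
            int u = q.front(); q.pop();
            ++cnt;
            // Scan only still-unvisited vertices; each vertex is erased
            // once, and each skipped pair is one of the listed non-edges.
            for (auto it = unvisited.begin(); it != unvisited.end(); ) {
                if (bad[u].count(*it)) { ++it; continue; }  // no complement edge
                q.push(*it);
                it = unvisited.erase(it);  // erase returns the next iterator
            }
        }
        sizes.push_back(cnt);
    }
    sort(sizes.begin(), sizes.end());
    return sizes;
}
```

Each vertex enters the queue exactly once, and every skipped iteration step corresponds to a listed non-edge, which is where the O((N + M) log N) bound comes from.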
Is it good to use unordered_set instead of set?
Yes, it should make the execution a bit faster
I tried HashSet in Java, which is unordered_set in C++. It actually TLEs. But TreeSet (set in C++) passes the tests.
In C++, using unordered_set reduces time. Here's my submission using unordered_set: http://codeforces.net/contest/920/submission/34926295
The worst-case complexity of unordered_set operations is O(n) because it uses hashing. The average case is O(1), so there must be a test case that triggers its worst-case behavior.
However, TreeSet/set will always be O(log n).
My java submission 34926586 using HashSet TLE. But apar03's c++ submission 34926295 works fine. Maybe it is due to different implementations in different languages.
Add nodes 1..n to an "unvisited nodes" set. Until the set is empty, take out one of the nodes, denote it U, and find the edges "you can go" along using upper_bound(s), i.e. find the smallest integer y in the set such that y > s, as in the tutorial. The nodes you reach from node U this way form one component.
If you keep the graph in an STL set, it is easy to implement.
I tried to implement it and I'm getting a runtime error on CF, whereas it works fine on my computer. I found the error: it is because the set is being modified during the recursion.
You made a mistake in C++ container iteration. When you want to erase something during an iteration, be sure to update the iterator, or it becomes undefined behavior. For example, you should change your code to
All erasing during iteration should be done this way; see the cppreference page for std::set::erase, or any other container's erase function.
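For reference, a minimal illustration of the required pattern (my own example, not the code from the submission being discussed):

```cpp
#include <cassert>
#include <set>

// Remove all even elements from a set while iterating over it.
// erase(it) invalidates it, so we must continue from the iterator
// that erase returns (C++11 and later).
std::set<int> dropEvens(std::set<int> s) {
    for (auto it = s.begin(); it != s.end(); ) {
        if (*it % 2 == 0)
            it = s.erase(it);   // safe: erase returns the next valid iterator
        else
            ++it;               // only advance when nothing was erased
    }
    return s;
}
```

In C++98, where set::erase returns void, the equivalent idiom is s.erase(it++).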
Yeah, thanks, man..! I got. I fixed it using the following.
The advantage of my approach over yours is that it also works in C++98, whereas yours works only in C++11 and above.
Anyway, Thank you.
I got it fully now.
Unfortunately, I'm not sure what to write there additionally.
The algorithm is just typical DFS, but instead of iterating on adjacency list of current vertex and trying to find an unvisited vertex, we have it backwards: we iterate on the set of unvisited vertices and try to find the one that can be reached from current vertex.
Perhaps my implementation can help to understand it better.
I was confused about how we choose next vertex, so you could write about step "try to find the one that can be reached from current vertex" in details.
If the problem were on a directed graph, I think we would have to iterate over all the edges of the set, right? However, if we iterate over all the vertices in the unvisited set in this problem too, I think the complexity stays the same, because after at most deg(v) scans (where v is a vertex in the unvisited set and deg(v) is its degree in the given graph) v will be removed from the unvisited set. So the overall complexity will be O(m log n).
Can someone explain G?
What is p, x and k in the editorial?
Did you even read the problem statement?
What kind of knapsack DP is this? I can't find anything sensible online for that phrase.
Is it really so hard to make the editorial a bit more detailed, seeing that only a handful of people solved the problem, even though it sits at difficulty level D?
Search for "knapsack problem".
For problem E, can the problem be solved with the same complexity if the graph is directed?
If you can't add edge 1->2, it means you can add 2->1, so you could still add the edge 1->2; since the problem asks about components, the direction doesn't matter. I think so...
In the first line of problem G you have a little mistake: not gcd(z, y) = 1 but gcd(z, p) = 1.
Thank you, fixed.
For F, you can do it with a BIT and a set. Store the sums in the BIT; for range updates, keep in the set the indices of all numbers that aren't 1 or 2, and walk the range with lower_bound and traversing forward. If a value drops to 1 or 2, erase its index from the set.
The solution is still O(qlog n) but should be easier/faster to code.
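A sketch of this BIT + set idea (my own code and names, not any particular submission); d(x) is the number-of-divisors function from the problem statement:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Fenwick tree (BIT) over 1-based indices, holding the current sums.
struct Fenwick {
    vector<long long> t;
    Fenwick(int n) : t(n + 1, 0) {}
    void add(int i, long long d) { for (; i < (int)t.size(); i += i & -i) t[i] += d; }
    long long sum(int i) const { long long s = 0; for (; i > 0; i -= i & -i) s += t[i]; return s; }
    long long sum(int l, int r) const { return sum(r) - sum(l - 1); }
};

// `alive` holds the indices whose value is still > 2; applying d(x)
// to 1 or 2 changes nothing, so those indices are dropped forever.
struct SumReplace {
    vector<int> a, d;   // current values, divisor-count table
    Fenwick bit;
    set<int> alive;
    SumReplace(vector<int> init, int maxv) : a(init.size() + 1), bit(init.size()) {
        d.assign(maxv + 1, 0);
        for (int i = 1; i <= maxv; ++i)
            for (int j = i; j <= maxv; j += i) ++d[j];  // sieve: d[x] = #divisors
        for (int i = 1; i <= (int)init.size(); ++i) {
            a[i] = init[i - 1];
            bit.add(i, a[i]);
            if (a[i] > 2) alive.insert(i);
        }
    }
    void replace(int l, int r) {            // a[i] = d(a[i]) for l <= i <= r
        auto it = alive.lower_bound(l);
        while (it != alive.end() && *it <= r) {
            int i = *it;
            bit.add(i, d[a[i]] - a[i]);
            a[i] = d[a[i]];
            if (a[i] <= 2) it = alive.erase(it);  // never needs updating again
            else ++it;
        }
    }
    long long query(int l, int r) { return bit.sum(l, r); }
};
```

Each index is touched only while its value keeps shrinking, which happens O(log log maxv) times, so the total work stays near O((n + q) log n).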
Here's my implementation of this idea with a segment tree and a set: 45337647. Hope it helps anyone looking to implement the problem this way as its much easier than the 2 segment trees idea in the editorial.
Can someone share their model solution to F ? I am not able to understand most of the codes.
Hi guys, I'm having trouble with problem F. Here is my solution. I did everything that was written in the editorial, but my solution doesn't fit in the time limit on test case #69, even though it runs in ~1900 ms on the other test cases. Can somebody give me some advice?
I finally fixed it: I added one more check in the update procedure. If somebody needs it, here is my final solution.
Can someone explain G (List Of Integers) in a bit more detail? I don't understand the editorial solution.
Well, the main difficulty is that you have to be familiar with inclusion-exclusion principle.
Let's use binary search to find the answer. Then we need to somehow calculate the number of good integers (good integers are coprime with p) from integer x to integer mid (the middle element in binary search). It's better to rewrite it as count(mid) - count(x), where count(x) is the number of good integers not exceeding x.
Now we somehow have to calculate this count(x). We need to find all good integers from [1, x]; these are all integers from [1, x] excluding those that are divisible by some prime divisor of p. To find the latter, we use inclusion-exclusion principle: let our sets A1, A2, ..., An be the sets of integers divisible by the first, the second, ..., the n-th prime divisor of p. And then we apply inclusion-exclusion formula as it is — since the maximum number of primes in factorization of p is 7, then we can just iterate through all possible combinations of these sets A1 ... An.
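A sketch of count(x) (my own code and names): factor p, then add and subtract the counts of multiples of every subset of its distinct prime divisors:

```cpp
#include <bits/stdc++.h>
using namespace std;

// count(x): how many integers in [1, x] are coprime with p,
// via inclusion-exclusion over the distinct prime divisors of p.
long long countCoprime(long long x, long long p) {
    vector<long long> primes;
    for (long long q = 2; q * q <= p; ++q)
        if (p % q == 0) {
            primes.push_back(q);
            while (p % q == 0) p /= q;
        }
    if (p > 1) primes.push_back(p);
    // p has very few distinct prime factors, so 2^k subsets is tiny.
    int k = primes.size();
    long long total = 0;
    for (int mask = 0; mask < (1 << k); ++mask) {
        long long prod = 1;
        for (int i = 0; i < k; ++i)
            if (mask & (1 << i)) prod *= primes[i];
        // x / prod = multiples of this subset's product in [1, x];
        // signs alternate with the subset size (inclusion-exclusion).
        total += (__builtin_popcount(mask) % 2 ? -1 : 1) * (x / prod);
    }
    return total;
}
```

With this, the binary search from the comment above just needs count(mid) - count(x) at each step.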
Thank you so much, I finally understand!
Can inclusion-exclusion be used to calculate euler's phi function? Won't it be a special case of it?
Well, actually, if in the A(p, y) mentioned in the editorial we set p = y, then we will get exactly φ(p). So I believe Euler's function can be calculated using inclusion-exclusion; it's just not convenient to do it this way.
I just wrote the same solution described in editorial and got TLE http://codeforces.net/contest/920/submission/34930869
Is the constant factor bad, or have I misunderstood something?
Nevermind, I'm doing range updates wrong.
The thing is that you can't just try updating each of the indices from [l, r] if you have REPLACE query.
You have to use the segment tree for your updates. It is difficult for me to explain. You have to make the queries in a similar manner to mass change queries in segment trees, but instead of pushing some values you need to do the following if you have to update the whole segment:
1) If the maximum element on this segment is 1 or 2, then just stop updating the segment, you don't need to do anything in it.
2) Otherwise, if the segment contains more than one element, divide it into two segments (just the way segment tree does it) and try updating these segments.
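Steps 1) and 2) might look like this in code (a sketch with my own names, keeping (sum, max) in each node and taking a precomputed divisor-count table d[]; not the implementation linked in this thread):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Segment tree over a[0..n-1] storing sum and max per node.
// upd() applies a[i] = d[a[i]] on [ql, qr], but prunes any subtree
// whose maximum is already 1 or 2, since d(1) = 1 and d(2) = 2.
struct SegTree {
    int n;
    vector<long long> sum;
    vector<int> mx;
    const vector<int>& d;  // divisor-count table, d[x] = number of divisors of x
    SegTree(const vector<int>& a, const vector<int>& dd)
        : n(a.size()), sum(4 * n), mx(4 * n), d(dd) { build(1, 0, n - 1, a); }
    void build(int v, int l, int r, const vector<int>& a) {
        if (l == r) { sum[v] = mx[v] = a[l]; return; }
        int m = (l + r) / 2;
        build(2 * v, l, m, a); build(2 * v + 1, m + 1, r, a);
        pull(v);
    }
    void pull(int v) { sum[v] = sum[2*v] + sum[2*v+1]; mx[v] = max(mx[2*v], mx[2*v+1]); }
    void upd(int v, int l, int r, int ql, int qr) {
        if (qr < l || r < ql || mx[v] <= 2) return;   // 1) nothing can change here
        if (l == r) { sum[v] = mx[v] = d[mx[v]]; return; }
        int m = (l + r) / 2;                          // 2) recurse into both halves
        upd(2 * v, l, m, ql, qr); upd(2 * v + 1, m + 1, r, ql, qr);
        pull(v);
    }
    long long query(int v, int l, int r, int ql, int qr) {
        if (qr < l || r < ql) return 0;
        if (ql <= l && r <= qr) return sum[v];
        int m = (l + r) / 2;
        return query(2 * v, l, m, ql, qr) + query(2 * v + 1, m + 1, r, ql, qr);
    }
};
```

Because d(x) shrinks values to 2 or below within a handful of applications, each leaf is rewritten only a few times in total.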
I think that our implementation may help. Take a look at how we have written upd function.
Now I see it. Thank you.
Can someone please explain the editorial for problem D. I don't understand it.
In problem D, how can we handle cases like
3 5000 0 10^15 10^15 10^15
when we can't remove enough water to get V, because 1 ≤ cnt ≤ 10^9.
I think the SPJ of problem D has a bug, because on testcase 5 an output like this: YES 2 6 5 2 6 4 2 6 3 2 2 1 3 2 6 is reported as wrong answer, but actually it is correct.
F can be done by just using a BIT and std set. Use the set to keep track of the numbers that have not yet become 1 or 2 (so it serves the same function as the max segment tree) and use the BIT to query range sums.
Edit: Somebody already suggested this
I'm getting TLE with this logic (35463032). Could you help me out? UPD: solved.
Problem F (Sum and Replace) taught me a lot! If the update function converges rapidly, then we can just keep a max tree and ignore the nodes which have already converged. In the worst case, we will do about 6 O(n) scans!
It's a powerful idea, which can be applied to this SPOJ question too, which also has a rapidly converging update function.
I used DSU to solve problem C. =)))
Thanks for this solution. It helped me a lot
I'm very glad to hear that.
Can you elaborate on your approach? I am curious to find a solution using DSU.
I think my code can help you.
Used an ordered set instead of segment tree beats in 920F - SUM and REPLACE, got TLE on tc39 :(
My submission: 93546773
The runtime is bounded by 6*N for the ordered-set operations plus maxn*log(maxn) for the preprocessing.
Another approach for C: if every number can reach its actual position by adjacent swaps, then the array can be sorted. Suppose we take an element that is greater than the next one and we are allowed to swap it; either this element will reach its actual position, or it will run into an element greater than itself. In either case, if we are able to perform these swaps, we can sort the array. Similarly, if the element on the right is smaller than the one on the left, then we must be able to perform the adjacent swap. https://codeforces.net/contest/920/submission/269323443
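An equivalent way to implement this check (a sketch with my own names, not the linked submission): inside every maximal run of allowed swaps the elements can be reordered freely, so sort each such block and test whether the whole array ends up sorted.

```cpp
#include <bits/stdc++.h>
using namespace std;

// a: a permutation of 1..n; s: string of length n-1 where s[i] == '1'
// means positions i and i+1 (0-based) may be swapped. Within a maximal
// run of '1's any reordering is reachable by adjacent swaps, so sorting
// each such block tells us whether the whole array can be sorted.
bool canSort(vector<int> a, const string& s) {
    int n = a.size();
    for (int i = 0; i < n - 1; ) {
        if (s[i] == '0') { ++i; continue; }
        int j = i;
        while (j < n - 1 && s[j] == '1') ++j;    // block covers positions i..j
        sort(a.begin() + i, a.begin() + j + 1);  // reorder freely inside the block
        i = j;
    }
    return is_sorted(a.begin(), a.end());
}
```

This runs in O(n log n) overall, since the sorted blocks are disjoint.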
https://codeforces.net/contest/920/submission/272617937
Can someone please guide me where does the solution go wrong?