So for the 6th subtask in B, we do centroid decomposition on the tree. Then, for each node in the centroid tree, we store a lazy-propagating segment tree that computes the greatest value over all the nodes in this node's subtree, ordered by their DFS preorder in this component of the original tree when rooted at the centroid. We also keep an ordered set for each node to maintain the two children with the farthest leaf nodes, and one big ordered set tracking the longest path going through each node, so we can find the maximum over all nodes quickly. Right?
What do you think about the problems? I know diameter is a standard-ish problem, but I tried and failed to find it in any contest and it's good enough for CEOI. Skyscrapers was prepared by me and boy was it a pain in the ass. I started by proving that it doesn't work (the original version was with 4-connectivity everywhere), then by finding how it can work and making a formal proof. It was intended as a hard problem and I'm glad it proved to be the case.
I think B was too standard. Also, my code for B was 300 lines; I don't know whether this is much more than needed, but I think code of this length is too long for such a competition.
C was nice. For A, I found $$$t = 1$$$ quite trivial.
If you use this, the code gets quite short.
My code for B was also 300 lines, but 200+ of those are building HLD, doing generic HLD queries, and a fairly standard segment tree (set/add/get_max); the part specific to this problem is maybe 30 lines.
The contest lasts 5 hours, so it's not all that much.
Fun fact: $$$t=2$$$ wasn't originally intended to be in A and none of us saw the simple solution for $$$t=1$$$ until about 2 weeks before the contest. Even then, it was found by simplifying a much more complicated solution. The intended solution was always about supporting arbitrary removals and keeping/updating a set of removable cells, so we changed it (added the $$$t=2$$$ format) to make sure only those solutions would pass. Of course, the input is a random permutation of some valid order (yes, I know it's better to choose the permutation more carefully).
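Since the intended solution is described above as keeping and updating a set of removable cells, here is a minimal sketch of that generic peeling pattern. The removable() rule below is a deliberately dumb placeholder, not problem A's actual criterion (which is a local connectivity test; cf. the 4-reachable/8-articulation discussion further down); the point is only the "remove one cell, re-check just its neighbourhood" loop.

```cpp
#include <bits/stdc++.h>
using namespace std;

const int H = 4, W = 5;
vector<string> grid = {"#####", "#####", "#####", "#####"};

// PLACEHOLDER rule, not problem A's actual criterion: here a cell is
// removable if it is alive and has fewer than 8 alive 8-neighbours.
bool removable(int r, int c) {
    if (grid[r][c] != '#') return false;
    int alive = 0;
    for (int dr = -1; dr <= 1; dr++)
        for (int dc = -1; dc <= 1; dc++) {
            int nr = r + dr, nc = c + dc;
            if ((dr || dc) && 0 <= nr && nr < H && 0 <= nc && nc < W)
                alive += grid[nr][nc] == '#';
        }
    return alive < 8;
}

int main() {
    set<pair<int, int>> cand;               // currently removable cells
    for (int r = 0; r < H; r++)
        for (int c = 0; c < W; c++)
            if (removable(r, c)) cand.insert({r, c});

    vector<pair<int, int>> order;           // removal order found so far
    while (!cand.empty()) {
        auto [r, c] = *cand.begin();
        cand.erase(cand.begin());
        if (!removable(r, c)) continue;     // entry may have gone stale
        grid[r][c] = '.';
        order.push_back({r, c});
        // removing (r, c) can only change the status of its neighbours
        for (int dr = -1; dr <= 1; dr++)
            for (int dc = -1; dc <= 1; dc++) {
                int nr = r + dr, nc = c + dc;
                if (0 <= nr && nr < H && 0 <= nc && nc < W && removable(nr, nc))
                    cand.insert({nr, nc});
            }
    }
    printf("removed %zu cells\n", order.size());
}
```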
I didn't find $$$t=1$$$ all that easy to find, but easy enough to "find" by accident. Turns out it really was pretty easy and it's a good thing I pushed for not making it the 100pt solution...
My code for B was about 150 lines; it's easy to think of but hard to write. I used edge-based divide and conquer, tree chain decomposition, and a binary indexed tree. I wrote and debugged for 2+ hours; I felt desperate. (My English is poor, don't mind my grammar mistakes.)
How does edge-based d&c work? Does it work fast on a tree where degrees can only be $$$x$$$ or 1, for $$$x > 10$$$?
It can work fast on every tree. I rebuilt the tree.
First, you need to build the edge-based d&c tree; it's a binary tree and its depth is $$$O(\log n)$$$.
For a query, you find the edge in the edge-based d&c tree and jump up to the root to update the diameter, and you need to write a tree chain decomposition to calculate the distance between $$$u$$$ and $$$v$$$.
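For readers unfamiliar with the "rebuilding" step: edge-based d&c is usually made degree-independent by binarizing the tree first, splitting each high-degree vertex into a chain of virtual vertices joined by 0-weight edges, so every vertex of the new tree has degree at most 3. A minimal sketch of just that step (all names illustrative, not from the commenter's code):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Binarization for edge-based divide and conquer: after it, removing a
// single well-chosen edge always splits the tree into two parts of
// comparable size, which is what gives the O(log n) depth d&c tree.
vector<vector<pair<int, ll>>> g;  // original tree
vector<vector<pair<int, ll>>> h;  // binarized tree (grows as we add nodes)

int newNode() { h.push_back({}); return (int)h.size() - 1; }

void addEdge(int u, int v, ll w) {
    h[u].push_back({v, w});
    h[v].push_back({u, w});
}

void binarize(int v, int parent) {
    vector<pair<int, ll>> kids;
    for (auto& [to, w] : g[v])
        if (to != parent) kids.push_back({to, w});
    int attach = v;                       // current attachment point
    size_t i = 0;
    while (kids.size() - i > 2) {         // too many children left:
        addEdge(attach, kids[i].first, kids[i].second);
        int nv = newNode();               // push the rest one virtual
        addEdge(attach, nv, 0);           // vertex further down the chain
        attach = nv;
        i++;
    }
    for (; i < kids.size(); i++)          // last <= 2 children fit directly
        addEdge(attach, kids[i].first, kids[i].second);
    for (auto& [to, w] : g[v])
        if (to != parent) binarize(to, v);
}

int main() {
    // tiny demo: star with center 0 and 5 leaves, unit weights
    int n = 6;
    g.assign(n, {});
    h.assign(n, {});
    for (int leaf = 1; leaf < n; leaf++) {
        g[0].push_back({leaf, 1});
        g[leaf].push_back({0, 1});
    }
    binarize(0, -1);
    printf("binarized tree has %d vertices\n", (int)h.size());  // 9
}
```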
Can you share your code? Thanks.
code
Skyscrapers was awesome. It's really cool that the final solution doesn't need any fancy data structures. Unfortunately I only found the solution after the contest, while eating dinner in a much colder room. In general, grid problems where you consider some sort of local structure are nice (seats from IOI 2018 is another such problem).
I had thought of the Dynamic Diameter problem before, but I couldn't find a solution without LCT/HLD/centroid decomposition, so I dismissed the problem as too ugly. (LCT/HLD are excluded from the IOI syllabus, so for our local IOI selection, we avoid using tasks requiring them.) I'm a bit disappointed that the intended solution is just that.
Considering the bipartition in Cubeword is neat. I only thought of BFS-like brute-force orderings.
Trust me, there were more out-of-syllabus proposals. At least HLD is so well-known at this point that everyone should be able to do it. Maybe it'll get added back to the syllabus soon (it was in IOI 2011). If you can see such a solution, you should start writing code from memory immediately. I don't like it all that much either, but not everything can be Skyscrapers.
A and C were really nice! (I was only a bit angry that I had to optimize my $$$|\Sigma|^8$$$ solution to C in order to pass the first subtask.)
B was IMHO a bit too standard. At least it was a good thing that you could solve subtask 5 first (and verify it passes the system tests) and then implement centroid decomposition on top of it. (On the other hand, the existence of this subtask could suggest the solution to the contestants.)
A and B were both implementation-heavy. I have no clue what the tasks on the second day will be, but having these two tasks on two separate days could have been a good idea.
I didn't know that could happen. $$$6^8 \approx 1.7 \cdot 10^6$$$ and the inner loop is simple, without even any cache misses.
Subtasks that suggest the solution, or more generally hint at possible ideas a contestant could be missing, were kiiind of intended, and it's pretty common in IOI-style problems in my experience. The other, more important, purpose is that if there's a bug in a full solution, it will still get some extra points. Imho if someone can figure out a full solution just with the help of the subtask structure in-contest, that's fine and that contestant deserves the points just as much as those who wrote HLD from memory.
I don't see A as particularly implementation-heavy. If you have the main idea (that you need to write one function to locally decide removability and how to do that), which is the hard part, everything else is quite simple. My code is written for clarity and some efficiency rather than code golf and it has "just" 300 lines.
[B] Okay, I think I can agree with you that suggesting the solutions can be kiiind of intended. (In particular, if used correctly, this can be a nice tool to control the difficulty of the problems.)
[A] Sorry, maybe I phrased myself wrong. I agree with you that the implementation is kinda straightforward -- each of the building blocks of the algorithm is rather simple. I however needed some time to get each of them to work correctly. (Disclaimer: this is totally OK for a 5h contest.) If I figured out how to solve A, say, 40-50 minutes before the end of the contest, I'd probably have slim chances to get it accepted. (Again, it's OK. And I'm probably not too good at IOI-style tasks.)
The same applies to B, though. It's not a perfect situation imho, but I think I'm fine with it.
Thanks for today's contest. :)
Yeah, it was kind of annoying to debug due to the size of the code. I found one bug only after Java threw an IndexOutOfBoundsException; C++ worked just fine with it. I'd prefer it if B was the only implementation-heavy problem (HLD is hard to implement when you don't know exactly how), but when you need to start by writing 80 lines of parsing grid -> graph + components, there isn't much to do about that.
You can solve B with the same trick you used on the problem you created!
If you do it this way you can avoid tedious implementation!
https://codeforces.net/contest/1149/problem/C
Awesome! That's exactly the type of 'simpler to implement' solution I was looking for.
I got the 3rd subtask of the 3rd problem with an $$$\mathcal O(n + |\Sigma|^5)$$$ solution.
Where can we see the final standings of this mirror?
For the last subtask of problem C, I didn't use symmetry, but instead, I used pragmas. Strangely
#pragma GCC target ("avx2")
was key to get 100 :D

I submitted something like https://pastebin.com/Na0xdqLb in contest but I got TLE. I also submitted it on the CF mirror now and it also gets TLE. Can you share your code? The codes of others don't seem to be visible right now...
My codes:
"You are not allowed to view the requested page"
Aw damn. Probably because I'm in admin mode with no way out of admin mode. I'll try to do something about it.
UPD: Fixed. Now I can see it in porn mode too.
How can we see other people's submissions?
I also want to see others' submissions.
Hello! May I ask what the standard way of processing the input tree is in problem B? I.e., what data structure should I use for this sort of problem?
Adjacency list. Also, a segment tree can be helpful: https://codeforces.net/blog/entry/53170
Thank you so much! Just to clarify: we usually choose an adjacency list over an adjacency matrix since it saves space, is that correct?
The only time I ever used adjacency matrices was back when I was learning about Floyd-Warshall, which I never used in a contest. So the vast majority of the time, adjacency lists are the way to go, both because they use way less memory and because they are faster for sparse graphs (graphs where the number of edges is significantly less than the maximum of V * (V - 1) / 2). In this problem, you wouldn't be able to fit an adjacency matrix in either the memory limit or the time limit for the full problem.

As far as I know, adjacency matrices are only useful when you need to access the weight of an edge between two nodes in constant time, and of course only if it fits in the time and memory limits.
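To make that concrete, a minimal sketch of reading a weighted tree into an adjacency list and walking it; the input format (n, then n-1 weighted edges, 1-based) is an assumption for illustration, not necessarily problem B's exact format:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    scanf("%d", &n);
    // adjacency list: adj[u] holds (neighbour, edge weight) pairs;
    // O(n) memory for a tree, versus O(n^2) for an adjacency matrix
    vector<vector<pair<int, long long>>> adj(n);
    for (int i = 0; i < n - 1; i++) {
        int u, v;
        long long w;
        scanf("%d %d %lld", &u, &v, &w);
        --u; --v;                          // assuming 1-based vertices
        adj[u].push_back({v, w});
        adj[v].push_back({u, w});
    }
    // example traversal: iterative DFS computing distances from vertex 0
    vector<long long> dist(n, 0);
    vector<char> vis(n, 0);
    vector<int> stk = {0};
    vis[0] = 1;
    while (!stk.empty()) {
        int u = stk.back(); stk.pop_back();
        for (auto [v, w] : adj[u])
            if (!vis[v]) {
                vis[v] = 1;
                dist[v] = dist[u] + w;
                stk.push_back(v);
            }
    }
    printf("%lld\n", *max_element(dist.begin(), dist.end()));
}
```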
I think a bitset of size $$$n(n-1)/2$$$ will not get MLE.
Well, n * (n - 1) / 2 doesn't even fit in a 32-bit std::size_t. I tried to create a 1.2e9-bit bitset just to see if an array of bitsets could possibly fit in the memory limit, but that just made the CF custom test hang, showing "Running..." forever when I used G++. It appears to work with VC++ though.

It's $$$n(n-1)/2$$$ bits, i.e. $$$n(n-1)/16$$$ bytes (bytes being the unit size_t measures). Approximately 600 MB. There are worse problems here, though. Allocation can fail on 32-bit systems, since finding a contiguous 600 MB chunk in virtual memory is much harder than, say, 100 chunks of 6 MB. I'm not sure what happens on 64-bit systems. The number of cache misses will be insane, so it's much easier to get TLE. And of course, you'll probably get MLE due to allocating something else that's huge, with this kind of approach to memory.
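For concreteness, the arithmetic behind that figure, assuming the full constraint is $$$n = 10^5$$$ (which matches the bitset<499995> bs[10000] dimensions below):

$$$\frac{n(n-1)}{2} = \frac{10^5 \cdot (10^5 - 1)}{2} \approx 5 \cdot 10^9 \text{ bits} = \frac{5 \cdot 10^9}{8} \text{ bytes} \approx 625 \text{ MB}.$$$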
The hanging program could be a "while allocation fails, try to allocate" loop. MSVC could simply fail to allocate, which should eventually segfault on reading. Did you try that?
The problem is that when you declare a bitset you have to specify the size in bits instead of bytes, and the template argument is a std::size_t. On VC++, both reading and writing to the 1.2e9-bit bitset appear to work fine, and the memory usage is close to what I would expect. I also tried submitting bitset<499995> bs[10000]; for one of the problems and it appears to work for both G++ and VC++. So I guess you could actually allocate an adjacency matrix for this problem...

Don't use std::bitset, use just a bitset: a plain C array.

Can somebody tell me what's the use of the constraint $$$w \le 10000$$$ in task B? It seems to be useless.
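A minimal sketch of such a plain-array bitset; the size is the 1.2e9 bits (150 MB) from the experiment above, chosen only to show the mechanics, not to claim the full matrix fits:

```cpp
#include <cstdint>
#include <cstdio>

// Plain-array bitset: no std::size_t template parameter to overflow, and
// the array lives in static storage (BSS), not on the stack.
const uint64_t NBITS = 1200000000ULL;   // 1.2e9 bits = 150 MB
static uint64_t bits_[(NBITS + 63) / 64];

inline void set_bit(uint64_t i)   { bits_[i >> 6] |=  (1ULL << (i & 63)); }
inline void clear_bit(uint64_t i) { bits_[i >> 6] &= ~(1ULL << (i & 63)); }
inline bool test_bit(uint64_t i)  { return (bits_[i >> 6] >> (i & 63)) & 1; }

int main() {
    set_bit(NBITS - 1);
    printf("%d %d\n", (int)test_bit(NBITS - 1), (int)test_bit(0));  // 1 0
    clear_bit(NBITS - 1);
}
```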
You can use int32_t instead of int64_t :thonk: (with $$$w \le 10^4$$$ and $$$n \le 10^5$$$, any path sum is at most $$$10^9$$$, which fits in a signed 32-bit integer).
Problem A: What exactly are a 4-reachable cell and an 8-articulation cell?
For example, an 8-articulation is an articulation point in a graph where cells (vertices) that share a side or a corner have an edge between them. I didn't want to write just "articulation" or "connected".
Can somebody explain what we store in the segment tree in each node of the centroid tree and how we perform the updates? Thanks in advance.
For every node, we do a DFS and order the leaves by Euler tour. The segment tree holds the distance of each leaf from the node, in Euler-tour order. It has to be capable of lazy range-add updates and range-max queries.
Each node also maintains a multiset of the longest paths from each immediate child. E.g., if a node has 3 children named A, B, C, then the multiset contains 3 elements: the longest path from the node to a leaf in the subtree of A, the longest path to a leaf in subtree B, and so on.
When an update comes, you do a range update on the segment tree, then recalculate the max value for the affected child and update it in the multiset.
The longest path going through the node is the sum of the 2 biggest values in the multiset.
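A hedged sketch of those two ingredients: a lazy range-add / range-max segment tree over the leaves in Euler-tour order, plus the multiset bookkeeping for a single centroid. Wiring this into the full centroid decomposition is omitted, and all names (update_child, range, ...) are illustrative, not from anyone's actual solution.

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Segment tree over the leaves in Euler-tour (DFS) order. All leaves of
// one child's subtree form a contiguous range, so an edge-weight change
// is a range-add; the farthest leaf in a subtree is a range-max.
struct SegTree {
    int n;
    vector<ll> mx, lz;
    SegTree(int n_) : n(n_), mx(4 * n_, 0), lz(4 * n_, 0) {}
    void push(int x) {
        for (int c : {2 * x, 2 * x + 1}) { mx[c] += lz[x]; lz[c] += lz[x]; }
        lz[x] = 0;
    }
    void add(int x, int l, int r, int ql, int qr, ll d) {
        if (qr < l || r < ql) return;
        if (ql <= l && r <= qr) { mx[x] += d; lz[x] += d; return; }
        push(x);
        int m = (l + r) / 2;
        add(2 * x, l, m, ql, qr, d);
        add(2 * x + 1, m + 1, r, ql, qr, d);
        mx[x] = max(mx[2 * x], mx[2 * x + 1]);
    }
    ll query(int x, int l, int r, int ql, int qr) {
        if (qr < l || r < ql) return LLONG_MIN;
        if (ql <= l && r <= qr) return mx[x];
        push(x);
        int m = (l + r) / 2;
        return max(query(2 * x, l, m, ql, qr),
                   query(2 * x + 1, m + 1, r, ql, qr));
    }
    void add(int l, int r, ll d) { add(1, 0, n - 1, l, r, d); }
    ll query(int l, int r) { return query(1, 0, n - 1, l, r); }
};

// One update at one centroid, following the steps described above.
// 'range' is the Euler-tour interval of the affected child's subtree,
// 'best' is the multiset of farthest-leaf distances per child, and
// 'oldBest' is the affected child's previous value in that multiset.
// Returns the longest path through this centroid after the update.
ll update_child(SegTree& st, multiset<ll>& best,
                pair<int, int> range, ll oldBest, ll delta) {
    st.add(range.first, range.second, delta);         // 1. range update
    best.erase(best.find(oldBest));                   // 2. fix the multiset
    best.insert(st.query(range.first, range.second));
    auto it = best.rbegin();                          // 3. two biggest values
    return best.size() >= 2 ? *it + *next(it) : *it;
}
```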
Maybe in problem B it was meant that we consider all centroids that are ancestors (in the centroid decomposition) of the deepest vertex from the root? Because otherwise we'll consider $$$O(N)$$$ vertices, as I understand it.
I solved subtasks 1, 2, 3, 4 and 6 in B, but my code failed to pass subtask 5. Any ideas how this happened, or what I may be missing in my code? Are there any tricky corner cases? UPDATE: I just had one int instead of long long.
good
I found that some people wrote an $$$O(n \log n)$$$ method for B with a segment tree, like https://codeforces.net/contest/1192/submission/57854306. I seem to understand how it works. It looks great! But it needs the $$$w_i$$$ to be positive (or zero). Is there any idea to solve problem B in $$$O(n \log n)$$$ when $$$w_i$$$ can be negative? Or is there anything wrong with it? If you know, please tell me. Thx.
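For what it's worth, a sketch of what such an $$$O(n \log n)$$$ segment tree appears to maintain; this is my reading of the technique, not that exact submission. Over the Euler-tour depth array $$$d$$$, the diameter is $$$\max_{a \le b \le c} (d_a - 2 d_b + d_c)$$$, and this needs $$$w_i \ge 0$$$ so that the minimum depth between two tour positions is attained at their LCA. The merge rules:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// d[i] = depth at step i of the Euler tour (2n-1 entries). Each segment
// tree node stores partial maxima of d[a] - 2*d[b] + d[c] over its range.
struct Node {
    ll mx;    // max d[i]
    ll mn;    // min d[i]
    ll lmax;  // max d[a] - 2*d[b]   over a <= b
    ll rmax;  // max -2*d[b] + d[c]  over b <= c
    ll best;  // max d[a] - 2*d[b] + d[c] over a <= b <= c
};

Node leaf(ll d) { return {d, d, -d, -d, 0}; }

Node combine(const Node& L, const Node& R) {
    Node res;
    res.mx   = max(L.mx, R.mx);
    res.mn   = min(L.mn, R.mn);
    res.lmax = max({L.lmax, R.lmax, L.mx - 2 * R.mn});
    res.rmax = max({L.rmax, R.rmax, R.mx - 2 * L.mn});
    res.best = max({L.best, R.best, L.lmax + R.mx, L.mx + R.rmax});
    return res;
}

// An edge update adds a delta to d[i] on the contiguous Euler-tour range
// of one subtree; on a fully covered node this is simply
//   mx += delta, mn += delta, lmax -= delta, rmax -= delta, best unchanged,
// which composes, so lazy propagation works (tree plumbing omitted here).
int main() {
    // tiny check: path u -1- v -2- w, Euler-tour depths starting from u
    ll d[5] = {0, 1, 3, 1, 0};
    Node acc = leaf(d[0]);
    for (int i = 1; i < 5; i++) acc = combine(acc, leaf(d[i]));
    printf("%lld\n", acc.best);  // prints 3, the diameter
}
```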
Read the comments. Here is the Shame Cube Algorithm.
Thx. Nice trick!