The round starts in 45 minutes.
Sorry, just a little misunderstanding.
I think in this context tee means T-shirt :) (https://www.urbandictionary.com/define.php?term=tee)
You should suggest that Google produce the famous Code Jam tea as a reward for the contestants.
I don't have any contacts at Google, so there is no way I can suggest it to them (except by email).
There's always the Codejam Google Group.
The constraints for B were a bit unfortunate, I think it would have been preferable to include a second visible set with positive values of $$$X$$$.
It would also have been helpful to have a test case in D where some of the numbers were not 1 ...
(I was super lucky in the sense that I was doing a problem quite similar to D just before the contest, and that I managed to catch an error 5 min before the end of the contest without actually constructing additional test cases...)
How to solve B2?
The important idea is that, for any list of (numerical) latencies, there always exists a way to set the weights on the edges so that they conform to the latency requirement: for an edge $$$(x, y)$$$, the weight is the absolute difference of the two latencies if they are not equal, or any nonzero number otherwise.
The rest is as gultai4ukr described.
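To make that rule concrete, here is a minimal sketch (not the actual B2 solution; the input format, the latency array $$$X$$$, and the 1-indexed edge list are my own assumptions):

```cpp
// Sketch of the weight rule above: for a tree edge (u, v) output |X[u] - X[v]|,
// or any nonzero value when the two latencies coincide.
// Assumed input: n, then n latencies, then n-1 edges as 1-indexed pairs.
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<long long> X(n);              // required latency of each node
    for (auto &x : X) cin >> x;
    for (int i = 0; i + 1 < n; i++) {    // n - 1 tree edges
        int u, v;
        cin >> u >> v;
        --u; --v;
        long long w = llabs(X[u] - X[v]);
        if (w == 0) w = 1;               // any nonzero weight works here
        cout << w << '\n';
    }
}
```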
Me waking up at 14:06 UTC
(Thus, those who wear shirts are shameful)
WTF, B has two samples? Don't do things like that...
What's wrong with having to test your solution yourself?
What's wrong is putting the second sample at the end of the statement, way below the first sample, so that some people could have missed it (and some did).
+1
LOL, I didn't notice it either, until now.
Same here. I submitted assuming that sample 1 contained cases with $$$X > 0$$$.
Breaking news: According to Google UI/UX team working on the GCJ website, this is a great decision that will attract more people due to how user-friendly it is.
I did, and it cost me the qualification because of a stupid mistake when copy-pasting part of the code. This is so frustrating.
I still don't get it. I also didn't notice it until now but I don't know how it affected the competition.
And no sample for D-hard. I would understand this from some random competition but I always treated GCJ as the best competition in the world and now this shit...
EDIT: well, it's obviously a small detail in the whole competition but next time I would be happy to see useful samples. It makes the competition less random. (I had another opinion a few years ago btw.) It's just hard to imagine organizers deciding that it's better not to provide a test for D-hard.
Could you help explain to me why uncommenting line 41 in this source code gives WA for problem C?
https://pastebin.com/JkaeuBRi
The third argument of std::unique must be ==, not <, maybe that's why.
Wow, ok. Good to know. Thanks!
Edit: What a combo-breaker for the sort -> unique paradigm; I wonder why they chose it like that.
Because unique has a weaker contract on the data.
I was mistaken, please ignore.
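Since the question came up, here is a tiny self-contained illustration (plain C++17, not anyone's contest code) of the difference: std::sort takes a less-than comparator, while the third argument of std::unique is an equality predicate, so passing operator< to unique compiles but keeps the wrong elements.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<pair<int,int>> pts = {{3,1}, {1,2}, {3,5}, {1,2}, {3,1}};

    // sort expects a "less than" comparator...
    sort(pts.begin(), pts.end(),
         [](const auto &a, const auto &b) { return a < b; });

    // ...but unique expects an *equality* predicate; using < here would
    // silently drop the wrong elements instead of the adjacent duplicates.
    pts.erase(unique(pts.begin(), pts.end(),
                     [](const auto &a, const auto &b) { return a == b; }),
              pts.end());

    for (auto [x, y] : pts) cout << x << ' ' << y << '\n';   // (1,2) (3,1) (3,5)
}
```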
Does anyone have an idea of what's wrong with my solution for C? I've tested this with tens of thousands of random inputs against Scott Wu's solution and I can't find a case that makes it fail.
Edit: I got it to work after reading the analysis, but I'm not exactly sure why my solution is wrong.
Probably overflow.
Testing cases with large coordinates also doesn't seem to work...
Random tests are shit in this problem, you won't get any special combinations of collinear points.
I also did random tests with small coordinates to try to increase the chances of those. I figured with 7 points there aren't that many combinations and using points in a 20x20 square should hit at least some of them. Manual testing also didn't work.
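In case it helps anyone stress-testing later, a generator along those lines could look like this (the input format for C is assumed to be a test count, then $$$N$$$ and $$$N$$$ points; the small grid and the forced collinear triples are the ideas from the comments above):

```cpp
// Random generator sketch: up to 7 points with coordinates in [0, 20),
// occasionally extending an existing pair to force a collinear triple.
#include <bits/stdc++.h>
using namespace std;

int main(int argc, char **argv) {
    mt19937 rng(argc > 1 ? atoi(argv[1]) : 12345);   // seed from command line
    auto rnd = [&](int lo, int hi) { return (int)(rng() % (hi - lo + 1)) + lo; };

    int n = rnd(3, 7);
    set<pair<int,int>> pts;
    while ((int)pts.size() < n) {
        if (pts.size() >= 2 && rng() % 2) {
            // pick two existing points and try to add a third collinear one
            vector<pair<int,int>> v(pts.begin(), pts.end());
            auto [x1, y1] = v[rnd(0, (int)v.size() - 1)];
            auto [x2, y2] = v[rnd(0, (int)v.size() - 1)];
            int k = rnd(-3, 3);
            int x3 = x1 + k * (x2 - x1), y3 = y1 + k * (y2 - y1);
            if (0 <= x3 && x3 < 20 && 0 <= y3 && y3 < 20) pts.insert({x3, y3});
        } else {
            pts.insert({rnd(0, 19), rnd(0, 19)});
        }
    }
    cout << 1 << '\n' << n << '\n';                  // one test case (assumed format)
    for (auto [x, y] : pts) cout << x << ' ' << y << '\n';
}
```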
I think this is wrong:
Try this:
answer is 6
That was it, thank you!
Can someone help me to understand the following case for the third problem (Wormhole in One)?
I see accepted solutions printing $$$6$$$, but my solution outputs $$$7$$$ and here's how:
Connect points $$$(0, 0) - (0, 5)$$$.
Connect points $$$(0, 3) - (8, 8)$$$.
Connect points $$$(0, 4) - (11, 11)$$$.
Here is a figure to visualize the scenario. The same colored points are connected.
Now from the point $$$(11, 11)$$$, we hit the ball straight up. It first goes to $$$(0, 4)$$$ because they are connected, from there to $$$(0, 5)$$$, from there to $$$(0, 0)$$$ because of the wormhole, to $$$(0, 2)$$$, then $$$(0, 3)$$$ which teleports it to $$$(8, 8)$$$ finally visiting all the holes.
What am I missing here? Did I misunderstand the problem?
The ball gets stuck at $$$(0,2)$$$ for your construction. When a ball enters a hole that is not connected to another hole, it stops.
The ball stops at (0, 2). Note they are holes, so once the ball hits a hole, it will stop unless that hole is connected to another hole via a link.
Thanks FieryPhoenix and ll931110, my bad. This simple misunderstanding ruined the contest for me; I should have re-read the statement instead of trying to debug a correct solution to a different problem.
Have you noticed the second sample test in problem B?
strawpoll
Lol, it took me a minute to realise what you are talking about because I really had no idea about the second one. RIP to me.
I did, but pretty late.
Well, here is my screencast with some commentary. I tried to solve all problems and ended up getting the 525th place https://www.youtube.com/watch?v=kvzvsyUZp7Q
My solution in D large was similar to my recent problem: Giant Penguin
First, note that we can change $$$CC$$$ to $$$(CC)$$$ by adding additional brackets with zero $$$l_i, r_i$$$ and infinite $$$p_i$$$. This way, we can change $$$CCCCC$$$ to $$$(((CC)C)C)C$$$, so our "bracket tree" becomes binary.
In this binary tree, we can find a subtree with size between $$$\frac{1}{3}n$$$ and $$$\frac{2}{3}n$$$. Note that if we delete the two border vertices of the segment corresponding to this subtree, we get two independent problems of size $$$\leq \frac{2}{3}n$$$, so we can build something similar to a "centroid decomposition", like in the problem I mentioned above.
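For the curious, the subtree-finding step above can be sketched like this (the tree representation and the names are mine, and it covers only this one step, not the whole solution):

```cpp
// In a rooted binary tree with n nodes, walk from the root towards the heavier
// child; the first vertex whose subtree size drops to <= 2n/3 has size roughly
// >= n/3, because its parent's subtree was still larger than 2n/3.
#include <bits/stdc++.h>
using namespace std;

int computeSizes(const vector<array<int,2>> &child, vector<int> &sz, int v) {
    if (v == -1) return 0;
    return sz[v] = 1 + computeSizes(child, sz, child[v][0])
                     + computeSizes(child, sz, child[v][1]);
}

int findSplitVertex(const vector<array<int,2>> &child, const vector<int> &sz, int root) {
    int n = sz[root], v = root;
    while (3 * sz[v] > 2 * n) {
        int l = child[v][0], r = child[v][1];
        if (l == -1 && r == -1) break;                     // leaf; only for tiny n
        int szl = (l == -1 ? 0 : sz[l]), szr = (r == -1 ? 0 : sz[r]);
        v = (szl >= szr ? l : r);                          // heavier child
    }
    return v;
}

int main() {
    // A left-leaning chain, like the binarised bracket tree (((CC)C)C)C.
    int n = 9;
    vector<array<int,2>> child(n, {-1, -1});
    for (int i = 0; i + 1 < n; i++) child[i] = {i + 1, -1};
    vector<int> sz(n);
    computeSizes(child, sz, 0);
    int v = findSplitVertex(child, sz, 0);
    cout << "split at node " << v << ", subtree size " << sz[v] << '\n';  // size 6
}
```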
What is your username there? I was just curious to look at your solution. Thanks in advance for replying.
CCCiq
Don't thank me (I've got insider info) *_*
Why did I get WA instead of RE for an index-out-of-bounds issue in problem B? Such a misleading verdict cost me a lot of time.
In C++, reading or writing outside an array's declared bounds is undefined behavior. As far as I know, declaring an array simply allocates a chunk of memory, and an out-of-bounds access just hits whatever happens to be there (unlike Java, where arrays are objects and out-of-bounds access is checked and throws an exception). So if you don't go too far, the program may not even crash.
I remember in problem B of round 1A (Pascal Walk) I had an int[500][500] and I went as far as index 1000 in the first testcase, but got AC (lol), so it is just something to pay attention to.
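For anyone who wants such bugs to show up locally: a tiny sketch of the same situation, where operator[] on a raw array silently goes out of bounds while vector::at() throws (the 500x500 array and index 1000 are just the numbers from the comment above; -fsanitize=address on g++/clang++ also reports the raw access at run time):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // Raw arrays do no bounds checking: the commented-out read below is
    // undefined behavior and may silently "work", as described above.
    static int tri[500][500];
    // int bad = tri[1000][0];                 // UB: garbage value or crash
    (void)tri;

    // vector::at() checks bounds and throws, turning the silent bug into an
    // immediate, visible error while testing locally.
    vector<vector<int>> v(500, vector<int>(500, 0));
    try {
        int y = v.at(1000).at(0);              // throws std::out_of_range
        (void)y;
    } catch (const out_of_range &e) {
        cerr << "caught: " << e.what() << '\n';
    }
}
```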
In fact, different compilers give different verdicts on the same issue. I once got RTE for exceeding the memory limit. But I wish the GCJ compiler had been advanced enough to catch the index-out-of-bounds issue; I wasted 45 minutes only to find I hadn't declared enough memory.
Btw, have you stress-tested your code? I mean, it's unlikely you would get AC with out-of-bounds behavior, so......
Nope. I got AC after declaring enough memory.
My screencast without much commentary, where I learn that non-AC verdicts on sample tests aren't counted towards the penalty, and that trying to read numbers forever from an empty stream under their constraints causes MLE, not TLE.
I don't like hidden verdicts :(. Failed A's second test case because of integer overflow and lost my shot at a t-shirt.
What's wrong with immediate feedback?
Me too, I strongly don't like it, as I always write buggy code.
A set instead of a multiset in B, and 1e9 instead of $$$\sqrt{2 \cdot 10^{18}}$$$ in A, were enough to fail the hidden tests in both and kick me out of qualification, even though my time was more than good enough.
Though I don't like hidden verdicts either, A could have been easily tested by trying out max values (I myself caught about 3 bugs related to this), so I think it's kinda unfair to blame the hidden verdicts for that.
Who said there is something wrong with immediate feedback? Each contest (or rather, each website) has its own way. It's nice that there are different variants of contests, so it doesn't get boring with the same ICPC format everywhere.
Yes. There's nothing like the awesome AtCoder, where you can see all tests. Then Yandex, where you can see the number of the first failed test among all tests. Then Code Jam and Codeforces, but Codeforces is a bit better as the pretests are diverse. Then, the worst, Topcoder, where I can't even guarantee a positive score. IT'S A PERSONAL OPINION.
I actually miss the days when a lot more competitive programming contests were evaluating the skill of testing your implementation. In the long run I feel that it certainly helped me to write working code on the first try more often.
So the last 15 minutes of this round, for example, instead of focusing on D.small, I specifically was testing my solutions (1-line max test for A, multiple manual tests for B.large since B.small is useless for testing that etc.). Every contest has different rules and you have to be able to adapt your strategy to each one of them.
That moment when your A is 1 character away from being correct, and you miss an opportunity to make round 3 :(
I know that feel bro: screencast. Fixed one char and got AC 50 seconds after the end
That screencast is so sad yet op. You should get a t-shirt just for being the closest to making Round 3 without getting there, with video proof. Rip. At least you can probably make a failure meme template out of that lol.
Another way to think about D-large:
Construct the weighted directed graph $$$G$$$ which is naturally implied by the problem. $$$G$$$ has treewidth $$$2$$$. In other words, there exists a tree $$$\mathcal{T}$$$ ("tree decomposition") whose nodes are bags of at most $$$3$$$ vertices of $$$G$$$, such that every edge of $$$G$$$ has both endpoints in some bag and, for every vertex of $$$G$$$, the bags containing it form a connected subtree of $$$\mathcal{T}$$$.
For instance, the example test:
(The graph on the left consists of bronze and blue edges (actually, pairs of edges); blue edges move between adjacent characters, while bronze edges move between matching parentheses. Tree decomposition on the right.)
Now, the main idea of the algorithm boils down to the centroid decomposition of $$$\mathcal{T}$$$. In each recursive call, we find a centroid bag in $$$\mathcal{T}$$$, run (at most) $$$3$$$ Dijkstras from and $$$3$$$ Dijkstras to each vertex in this bag, answer the queries we can, and carefully recurse with the remaining queries. This still solves the problem in $$$O(n \log^2 n + q \log n)$$$ time.
I figured it might be worth mentioning this method, as it works for any graph with bounded treewidth ("looking kinda like a thick tree").
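In case the graph from the figure is easier to see in code, here is a sketch of its unweighted skeleton only (the actual costs $$$l_i, r_i, p_i$$$ and the edge directions are omitted, and the example string is made up):

```cpp
// Vertices are character positions; "blue" edges join adjacent positions,
// "bronze" edges join matching parentheses.
#include <bits/stdc++.h>
using namespace std;

int main() {
    string s = "(C(CC))";                          // example bracket/character string
    int n = s.size();
    vector<pair<int,int>> blue, bronze;
    vector<int> open;                              // stack of '(' positions
    for (int i = 0; i < n; i++) {
        if (i + 1 < n) blue.push_back({i, i + 1});
        if (s[i] == '(') open.push_back(i);
        else if (s[i] == ')') { bronze.push_back({open.back(), i}); open.pop_back(); }
    }
    for (auto [a, b] : blue)   cout << "blue   " << a << " - " << b << '\n';
    for (auto [a, b] : bronze) cout << "bronze " << a << " - " << b << '\n';
}
```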
Why do you complicate things by introducing treewidth when you can just say that removal of any pair of matching parentheses disconnects the graph, so you can take the centroid of the tree on the left and forget about the right one? Of course I am a big fan of treewidth, but I don't see any profit here.
EDIT: OK, I understand. Removal of any pair of matching parentheses indeed disconnects the graph, but not in the same way the left tree does, because siblings of the removed vertex stay connected. Now I see the point.