kostka's blog

By kostka, 3 years ago, In English

The Code Jam Qualification Round has officially begun! → goo.gle/cj2022

You've got until 02:00 UTC on April 3 to register AND score enough points (at least 30) to advance to Round 1.

Registration closes at the end of the round!

  • Vote: I like it
  • +10
  • Vote: I do not like it

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

lol they actually implemented Punched Card Python. I'm curious if anyone used it to solve a problem.

  • »
    »
    3 years ago, # ^ |
    Rev. 2   Vote: I like it +45 Vote: I do not like it

I sent an e-mail to them on 20th November 2021, requesting that they add the -frelease option to their GDC compiler command line:

    Spoiler

    They thanked me for my feedback, but did nothing. Now I know that they were too busy implementing this Punched Card Python thingie... Because priorities do matter.

Edit: This is resolved now and the GDC command line has been updated to include the "-frelease" option. I also got a notification e-mail from the GCJ people. Looks like they are reading Codeforces blogs :)

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Hey, are you guys also facing the same problem with Twisty Little Passages? My code scores well above the required threshold with the local testing tool, but gets a wrong answer when submitted.

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +45 Vote: I do not like it

Maybe you need to check the contents of the local testing tool: it generates tests randomly with a few strategies, while the official test cases may not be random (they may be hand-crafted or adversarially generated).

  • »
    »
    3 years ago, # ^ |
    Rev. 2   Vote: I like it -22 Vote: I do not like it

Yep, it looks like the testing tool does test, but only with graphs of a certain structure, unlike the system tests. You can see how I handled this at 1:25:25 in my screencast/solution video, which will be available as soon as the contest ends.

  • »
    »
    3 years ago, # ^ |
    Rev. 5   Vote: I like it +10 Vote: I do not like it

    The local tool is really weak (IMO they could have explained that more clearly). E.g. I tested a solution which did not use the W command at all (basically equivalent to the first paragraph of the official editorial), and it passed locally.

    I edited the local testing tools to generate only stars (https://en.wikipedia.org/wiki/Star_(graph_theory) ). Then my first submission which passed that locally also AC'd. In fact I'm wondering if the remote test is also somewhat weak as I did not expect my solution to pass.

    I guess they did not want to add a star generator to the local tool, as realising that uniform sampling converges too slowly for such graphs (i.e. containing high degrees) is a part of the solution.
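To make the star intuition concrete, here is a quick back-of-the-envelope check (plain Python arithmetic; $$$n$$$ and the query budget $$$k$$$ are chosen to match the problem's limits) of why uniform sampling alone fails on a star:

```python
# Star on n vertices: n-1 leaves of degree 1, one center of degree n-1.
n = 100_000          # vertices
k = 8000             # query budget
true_edges = n - 1   # a star on n vertices has n-1 edges

# Probability that k uniform teleports all miss the single center:
p_miss_center = (1 - 1 / n) ** k          # ~ 0.923

# If the center is never sampled, every sampled degree is 1, so the naive
# estimate n * (average sampled degree) / 2 collapses to n / 2:
naive_estimate_if_missed = n * 1 / 2      # 50000.0
rel_error = naive_estimate_if_missed / true_edges - 1   # ~ -0.5
```

So with probability around 92% the naive estimate is off by nearly a factor of two, far outside the allowed ±1/3 relative error.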

»
3 years ago, # |
  Vote: I like it +11 Vote: I do not like it

Is there system testing? I have enough points to make round 1 if everything passes, and I'm too tired to get more than that.

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +18 Vote: I do not like it

If the problem has the green check mark, it passed all the tests. If it has a question mark, the results won't be shown to you. (Since most problems don't have hidden tests in this round, if you already have enough points, you're good to go.)

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Is there still a way to edit the list of users one is following? If I remember correctly, there was a star-like button in the standings, but now I don't see anything like that.

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

Just below your name, where the search button is, click into the search box and wait for a pop-up menu. Then click on "Following". You'll get a list of all your starred participants.

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

My question was about editing that list, i.e. how to add a new user to it.

      • »
        »
        »
        »
        3 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

Search for the person's name in the same place you search for countries, then click the star beside their name to add them to your list.

        • »
          »
          »
          »
          »
          3 years ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

Oh, thank you. Apparently, one of my browser extensions blocked these stars for some reason, so the standings page looked like this to me. Now I've disabled the extension and the stars are back. Thank you again.

»
3 years ago, # |
Rev. 2   Vote: I like it +18 Vote: I do not like it

So, I solved Twisty Little Passages with an intuitive guess. It appears to be correct, since the analysis lists it as a valid approach, but that doesn't help with proving correctness, since it offers no proof of why it works:

My idea is to perform the following $$$\frac{k}{2}$$$ times:

  • Teleport to a random node we haven't teleported to before.
  • Walk to one adjacent node.

Then the guess is $$$(\frac{2n}{k} \cdot (\text{sum of degrees for teleport nodes})) + \text{sum of degree of walk nodes that haven't been reached by teleport}$$$.

My intuition for this was that:

  • Randomly chosen teleport nodes represent the average, so it makes sense to scale them.
  • If a few large-degree nodes exist, they are far more likely to be reached by walks, so it doesn't make sense to scale them.
  • Since the error is relative, a few smaller degree nodes barely matter at all.

But I have zero clue how to prove anything about the accuracy or precision of this heuristic.

I even ignored this idea at first and left the contest, since I didn't think it would work, only coming back to code and submit it 12 hours later when I was far more bored.
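A sketch of this estimator against a local adjacency list instead of the real interactor. I read the final formula as an estimate of the total degree sum, which then gets halved to obtain the passage count (my assumption, since passages = (sum of degrees) / 2); the function name and interface are mine:

```python
import random

def estimate_edges(adj, k, seed=0):
    """Sketch of the comment's estimator on a local adjacency list:
    k/2 rounds of (teleport to a fresh random room, walk once)."""
    rng = random.Random(seed)
    n = len(adj)
    teleported = rng.sample(range(n), k // 2)       # distinct teleport targets
    teleport_set = set(teleported)
    teleport_deg_sum = sum(len(adj[v]) for v in teleported)

    walk_deg_sum = 0
    walked = set()
    for v in teleported:
        w = rng.choice(adj[v])                      # one walk step per round
        # count walk endpoints not reached by teleport, once each
        if w not in teleport_set and w not in walked:
            walked.add(w)
            walk_deg_sum += len(adj[w])

    degree_sum_estimate = (2 * n / k) * teleport_deg_sum + walk_deg_sum
    return degree_sum_estimate / 2                  # passages = degree sum / 2
```

On a cycle (every degree 2) this lands close to the true edge count; the interesting case is a star, where the walk term picks up the center's huge degree that teleports almost never see.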

  • »
    »
    3 years ago, # ^ |
    Rev. 3   Vote: I like it 0 Vote: I do not like it

My solution was slightly different, but used the same ideas.

    • Rather than walking one room and then teleporting away, I kept walking until I had visited two already-seen rooms in a row.
    • I didn't actually differentiate by how I reached a node when recording its degree. So, to accommodate large-degree nodes, I took the median instead of the average.

On the proof: I had flashbacks to my Master's thesis on sublinear algorithms. For example, if you want to tell with a certain confidence whether a graph is bipartite, it can be proven that if you choose a sample of nodes of a specific sublinear size and check whether the sampled subgraph is bipartite, the answer will hold for the whole graph with a certain probability.

    So I had a hunch that you just need to sample, and the fact that they gave us walks was a big hint.

Although I imagine there's a certain element of luck to it, and some sensible ideas probably wouldn't work in practice.
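As a toy illustration of that sampling idea (my own simplified version, not the construction from the thesis): sample $$$s$$$ vertices uniformly and check whether the induced subgraph is bipartite with a BFS 2-colouring.

```python
import random
from collections import deque

def induced_bipartite_sample(adj, s, seed=0):
    """Sample s vertices uniformly and test whether the subgraph they
    induce is bipartite, using BFS 2-colouring."""
    rng = random.Random(seed)
    nodes = rng.sample(range(len(adj)), s)
    in_sample = set(nodes)
    colour = {}
    for start in nodes:
        if start in colour:
            continue
        colour[start] = 0
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in in_sample:
                    continue          # edge leaves the sample, ignore it
                if w not in colour:
                    colour[w] = colour[v] ^ 1
                    queue.append(w)
                elif colour[w] == colour[v]:
                    return False      # odd cycle inside the sample
    return True
```

If the whole graph is bipartite, every sample passes; if it is far from bipartite, a large enough sample contains an odd cycle with high probability, which is the kind of guarantee the comment alludes to.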

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      For example if you want to tell with a certain confidence whether the graph is bipartite, it can be proven that if you choose a specific sublinear size sample of nodes and check whether the sample graph is bipartite or not, you can actually prove that that answer will hold for the whole graph with certain probability.

Cool! I hope this kind of problem appears in the future (e.g. in GKS; they seem to like such statistics-flavoured problems). (But maybe not this specific one; it seems easy to get AC on it without a proof.)

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +45 Vote: I do not like it

    Section 3 of this paper gives a solution to this exact problem. It is indeed very similar.

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      There is also this paper. It seems that one should also be able to solve the problem with random walks.

    • »
      »
      »
      3 years ago, # ^ |
      Rev. 2   Vote: I like it 0 Vote: I do not like it

      idk why but whenever I visit that site it shows me this screen (I've never downloaded, and couldn't download, anything from there)


  • »
    »
    3 years ago, # ^ |
      Vote: I like it +1 Vote: I do not like it

I also did the teleport to a random node X (didn't even check that it wasn't visited before), then walked to an adjacent node Y, repeating k/2 times.

For the sum, I passed the tests with an even simpler idea: sum degree(X) if and only if degree(X) <= degree(Y), and then multiply by N / (number of rounds).

The idea comes from a double-counting-avoidance trick: assign each edge an orientation from its lower-degree end to its higher-degree end. Since this method of sampling is inherently oriented, you only count when you hit the right orientation. This is a very simplified version of the methods in these papers: https://arxiv.org/pdf/2011.14291.pdf and https://arxiv.org/pdf/1604.03661.pdf
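A sketch of that estimator on a local adjacency list, with one detail made explicit: I break degree ties by vertex index, so every edge gets exactly one orientation (the comment compares degrees only, which double-counts edges between equal-degree endpoints):

```python
import random

def oriented_degree_estimate(adj, rounds, seed=0):
    """Teleport to a random X, walk to a uniform neighbour Y, and count
    deg(X) only when the edge is oriented X -> Y, i.e. when
    (deg(X), X) < (deg(Y), Y).  Tie-breaking by index is my assumption;
    the comment just compares degrees."""
    rng = random.Random(seed)
    n = len(adj)
    total = 0
    for _ in range(rounds):
        x = rng.randrange(n)             # "T": uniform teleport
        y = rng.choice(adj[x])           # "W": uniform neighbour
        if (len(adj[x]), x) < (len(adj[y]), y):
            total += len(adj[x])
    return total * n / rounds
```

In expectation, a round starting at X contributes its out-degree under this total order, and out-degrees sum to exactly the number of edges, which is why multiplying by N / rounds gives an unbiased edge estimate.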

»
3 years ago, # |
Rev. 3   Vote: I like it 0 Vote: I do not like it

Here's my solution to Twisty Little Passages:

First note that if $$$N \le K$$$ then we can visit all the rooms and compute the number of passages exactly as (sum of degrees) / 2.

Now if $$$K < N$$$: if we know the average degree $$$D_{avg}$$$, then the answer is $$$\frac{D_{avg} \cdot N}{2}$$$. How can we estimate $$$D_{avg}$$$? An initial approach is to teleport to $$$K$$$ random rooms and guess that $$$D_{avg}$$$ is the average degree of the visited rooms. However, this does not work well on a 'star' input with one vertex of degree 99999 and 99999 vertices of degree one. (In general, it fails on graphs where most vertices have low degree and very few have high degree, because the few high-degree vertices shift the average significantly yet rarely make it into our sample.)

So we save a few commands (I used 50) just for finding the ultra-high-degree vertices. Note that if most vertices have low degree but a few have high degree, we are likely to find one of the high-degree vertices just by walking from a random node. Here is the algorithm sketch:

  • Visit 7950 random points, and let their average degree be $$$D_{avg}$$$ and their maximum degree be $$$D_{max}$$$.
  • Set $$$S = 0$$$; it will hold the sum of degrees of the ultra-high-degree vertices, which we define as vertices with degree > $$$D_{max}$$$.
  • While visiting the last 50 of our initial random points, also issue one "W" command each. If any of these hits a previously unseen vertex with ultra-high degree, add its degree to $$$S$$$. Let $$$U$$$ be the number of ultra-high-degree vertices seen.
  • Final guess is $$$\frac{S+D_{avg}*(N-U)}{2}$$$.
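The sketch above, run against a local adjacency list (the real problem is interactive, so the T/W interactor calls are simulated with list lookups; the 7950/50 split is a parameter and the names are mine):

```python
import random

def estimate_passages(adj, k=8000, walks=50, seed=0):
    """Simulate the scheme: k - walks teleports ("T"), then one walk ("W")
    from each of the last `walks` teleport targets, k operations in total."""
    rng = random.Random(seed)
    n = len(adj)
    sampled = [rng.randrange(n) for _ in range(k - walks)]   # the "T" visits
    degs = [len(adj[v]) for v in sampled]
    d_avg = sum(degs) / len(degs)
    d_max = max(degs)

    seen = set(sampled)
    s = 0   # degree sum of newly found ultra-high-degree vertices
    u = 0   # how many of them were found
    for v in sampled[-walks:]:          # one "W" from each of the last 50
        w = rng.choice(adj[v])
        if w not in seen and len(adj[w]) > d_max:
            seen.add(w)
            s += len(adj[w])
            u += 1
    return (s + d_avg * (n - u)) / 2
```

On a regular graph the walk phase finds nothing ultra-high, and the formula reduces to the exact $$$\frac{D_{avg} \cdot N}{2}$$$.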
»
3 years ago, # |
Rev. 2   Vote: I like it +1 Vote: I do not like it

Hi,

Could anyone provide the intuition behind Chain reaction ?

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +11 Vote: I do not like it

The given graph is a directed tree (actually a forest). Imagine the abyss as the root and the modules with in-degree 0 as leaves. Picture a person standing at every leaf who wants to walk to the root. At a junction where, say, 3 people (3 children) are waiting, only one person can pass through (only one can trigger the junction). For everyone else, the journey ends there and their values are added to the answer. So, for every junction, sort the values of its children, let the minimum pass through, assign max(min_value, val[junction]) to the junction, and add the rest to the answer.
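The junction rule above can be sketched as a bottom-up greedy. This assumes the usual Chain Reaction input shape, where module $$$i$$$ points at a lower-indexed module or at the abyss (encoded as parent 0); the function and variable names are mine:

```python
from collections import defaultdict

def max_total_fun(f, parent):
    """f[i] is the fun factor of module i (index 0 unused); parent[i] is the
    module it points at, 0 meaning the abyss.  At every junction the smallest
    incoming chain continues as max(chain, f[junction]); all other incoming
    chains end there and are banked into the answer."""
    n = len(f) - 1
    children = defaultdict(list)
    for i in range(1, n + 1):
        children[parent[i]].append(i)

    answer = 0
    val = [0] * (n + 1)          # val[i] = value of the chain leaving module i
    for i in range(n, 0, -1):    # parent[i] < i, so children are done first
        incoming = [val[c] for c in children[i]]
        if not incoming:
            val[i] = f[i]        # an initiator: the chain starts here
        else:
            m = min(incoming)
            answer += sum(incoming) - m   # everyone else stops at this junction
            val[i] = max(m, f[i])
    answer += sum(val[c] for c in children[0])   # chains that reach the abyss
    return answer
```

For example, with fun factors [10, 2, 3] and modules 2 and 3 both pointing at module 1: the smaller chain (value 2) passes through and becomes max(2, 10) = 10, chain 3 is banked with value 3, and the total is 13.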

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

The following solution for Chain Reaction was leaked yesterday during the contest:
https://www.codepile.net/pile/ae18xmX8
A lot of contestants submitted this code with or without modifications.

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +54 Vote: I do not like it

It's a bit silly, but as someone pointed out to me last year, collaboration is actually allowed during qualification. These people will have no chance in the more time-limited rounds. I wouldn't worry.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Did someone manage to solve the last problem using the importance sampling approach from the editorial?
As I understand it, the following pseudocode should work:

sum = 0.0;
count = 0.0;

for (i = 0; i < maxAttempts; i++) {
    if (i % 2 == 0) {
        currentRoom, currentRoomLinks = interactor.Teleport(rnd());
        // Sample |currentRoomLinks| with weight 1.0
        sum += currentRoomLinks;
        count += 1;
        prevRoomLinks = currentRoomLinks;
    } else {
        currentRoom, currentRoomLinks = interactor.Walk();
        // Sample |currentRoomLinks| with weight A/B equal to |prevRoomLinks / currentRoomLinks|
        sum += 1.0L * prevRoomLinks;
        count += 1;
    }
}

return sum / count * n * 0.5;

It passes the local testing tool samples with a small error, but fails in system tests.

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

In count you need to accumulate the weights rather than the number of samples. Inside the else block, if you replace the count += 1; line with count += prevRoomLinks / currentRoomLinks;, it should pass the system tests.
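Putting the fix into a runnable sketch (interactor calls replaced by adjacency-list lookups; the names are mine):

```python
import random

def importance_sampling_estimate(adj, max_attempts, seed=0):
    """Corrected importance-sampling loop: a room reached by walking from a
    room of degree d_prev is sampled with probability proportional to its own
    degree, so it carries weight d_prev / d_cur, and that same weight (not 1)
    must also go into the denominator."""
    rng = random.Random(seed)
    n = len(adj)
    total, weight = 0.0, 0.0
    cur = None
    for i in range(max_attempts):
        if i % 2 == 0:
            cur = rng.randrange(n)         # "T": uniform teleport, weight 1
            total += len(adj[cur])
            weight += 1.0
        else:
            prev_deg = len(adj[cur])
            cur = rng.choice(adj[cur])     # "W": degree-biased step
            w = prev_deg / len(adj[cur])
            total += w * len(adj[cur])     # equals prev_deg
            weight += w                    # the fix: add the weight, not 1
    return total / weight * n / 2
```

The point of the weight is that the degree-biased walk samples get down-weighted in both the numerator and the denominator, which is what keeps high-degree rooms from skewing the average.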

»
3 years ago, # |
  Vote: I like it +9 Vote: I do not like it

I think there is a family of test cases that can hack (or challenge) the "importance sampling" solution explained in the editorial for the last problem, Twisty Little Passages. I modified the local testing tool provided in the problem statement to generate these test cases, on which the official solution should fail. It can be downloaded from here.

The idea is to fix $$$N = 100000$$$ vertices and have:

  • a complete graph $$$A$$$ on a minority fraction of them (say, 10%); and
  • a complete bipartite graph $$$B$$$ over the rest of the vertices, where one of the $$$2$$$ sets/parts has very few vertices (say, a pair).

In the modified local testing tool I didn't make graph $$$A$$$ complete. I made every vertex have the same high degree, but not as high as $$$|A| - 1$$$, so that the Python code could run faster. With the degrees high enough, the test cases should work fine.

For instance, let's take:

  • subgraph $$$A$$$ has $$$10\text{k}$$$ vertices with degree $$$100$$$ (when ordering the vertices, each one is connected to previous $$$50$$$ and following $$$50$$$ vertices, cyclically).
  • subgraph $$$B$$$ has $$$90\text{k}$$$ vertices and has edges connecting 2 of them (let's call them special) to the other $$$89998$$$ vertices.

Average vertex degree for graph is:

$$$ \frac{10\text{k} \times 100 + 2 \times (90\text{k} - 2) + (90\text{k} - 2) \times 2 }{ 100\text{k} } \sim 0.1 \times 100 + 0.9 \times 4 = 13.6 $$$

Let's see what the estimate of the importance sampling solution could be. Recall that we have $$$K = 8000$$$ operations available: $$$4000$$$ will be teleports ("T") to a random vertex and $$$4000$$$ will be walks ("W") through a random edge from those vertices. Each teleport operation determines in which subgraph ($$$A$$$ or $$$B$$$) the next walk will take place:

  • With $$$0.1$$$ probability we are in subgraph $$$A$$$. Both vertices visited will have degree $$$100$$$, the first vertex (after "T") will be added with weight $$$1$$$ and the second one (after "W") with weight $$$\frac{100}{100} = 1$$$ as well. This means we increment our sum by $$$200$$$, and our weight by $$$2$$$.
  • With $$$0.9$$$ probability we are in subgraph $$$B$$$.
    • If the teleported vertex is one of the $$$2$$$ special ones, it will be added with degree $$$89998$$$ and weight $$$1$$$, and the other vertex (after "W") will be added with degree $$$2$$$ and weight $$$\frac{89998}{2}$$$. This means we increment our sum by $$$89998 + 89998 \sim 180\text{k}$$$, and our weight by $$$45\text{k}$$$.
    • If the teleported vertex is not a special one, it will be added with degree $$$2$$$ and weight $$$1$$$, and the other vertex (after "W") will be special and it will be added with degree $$$89998$$$ and weight $$$\frac{2}{89998}$$$. This means we increment our sum by $$$2 + 2 = 4$$$, and our weight by $$$\sim 1$$$ (a little bit more than one).

Since there are just $$$2$$$ special vertices among the $$$100\text{k}$$$ total vertices, it is unlikely that one of them is chosen in any of the $$$4000$$$ teleportations. We have 2 cases:

  • If neither of them is ever chosen (as will happen most of the time), the estimate provided by the solution is expected to be around:
$$$ \sim \frac{0.1 \times 200 + 0.9 \times 4}{0.1 \times 2 + 0.9 \times 1} = \frac{23.6}{1.1} \sim 21.45 $$$

$$$\implies$$$ The relative error of this estimation is approximately $$$ \frac{21.45}{13.6} - 1 \sim 1.577 - 1 = +0.577 $$$, way above the maximum allowed of $$$+0.333$$$.

  • If one of the special vertices is ever chosen (as may happen occasionally), it is much more likely to be chosen exactly once than twice or more. Moreover, if it is chosen more than once, the estimate is expected to be even worse. If it is chosen once, then since there are just $$$4000$$$ teleportations, a relative weight of $$$\frac{1}{4000} = 0.00025$$$ will be assigned to these samples. The estimate provided by the solution is then expected to be around (the $$$0.1$$$ and $$$0.9$$$ would be slightly lower):
$$$ \sim \frac{0.1 \times 200 + 0.9 \times 4 + 0.00025 \times 180\text{k}}{0.1 \times 2 + 0.9 \times 1 + 0.00025 \times 45\text{k}} = \frac{23.6 + 45}{1.1 + 11.25} \sim 5.55 $$$

$$$\implies$$$ The relative error of this estimation is approximately $$$\frac{5.55}{13.6} - 1 \sim 0.408 - 1 = -0.592$$$, way below the minimum allowed of $$$-0.333$$$.

Note: since it is convenient not to carry insignificant "epsilons" around, the approximations in the formulas are made to keep them easy to process mentally. Nevertheless, the numbers in the results match the true values up to the number of digits shown. The modified local testing tool generates $$$100$$$ test cases, randomly choosing the number of special vertices (between $$$1$$$ and $$$4$$$), the size of subgraph $$$A$$$ (between $$$10\text{k}$$$ and $$$15\text{k}$$$), and the degree of its vertices. I expect the official importance sampling solution to fail all $$$100$$$ tests with probability near $$$1$$$.
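For readers who want to double-check the arithmetic, here is the no-sampling version of the computation (the numbers mirror the formulas in this comment):

```python
# Reproducing the hack's arithmetic with exact counts, no sampling.
n = 100_000
a = 10_000                 # |A|: vertices of degree 100
special = 2                # high-degree side of the bipartite part B
b_low = n - a - special    # 89_998 degree-2 vertices in B

# Degree sum: A, the special vertices, and the low-degree B vertices.
deg_sum = a * 100 + special * b_low + b_low * special
true_avg = deg_sum / n                     # ~ 13.6

# Expected estimate when no teleport ever lands on a special vertex:
est_num = 0.1 * 200 + 0.9 * 4              # expected added degree per T+W pair
est_den = 0.1 * 2 + 0.9 * 1                # expected added weight per pair
est_avg = est_num / est_den                # ~ 21.45

rel_error = est_avg / true_avg - 1         # ~ +0.58, above the +1/3 limit
```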