Traditionally, SnarkNews runs two New Year Contests.
The New Year Blitz Contest (Multiple Language Round) starts at 16:00 31.12.2021 and ends at 08:00 01.01.2022 Moscow time. The contest consists of 10-14 problems, some of which are based on problems used at official or training contests in 2021. The contest rules are based on the ICPC system with two important modifications.
Penalty time is calculated as the distance between the time of the contestant's first successful submission on a problem and 0:00:00 01.01.2022, i.e. successful submissions at 23:50:00 31.12.2021 and at 0:10:00 01.01.2022 both have a penalty time of 10 minutes (600 seconds).
Multiple Language Round rule: if the contestant's first successful submission on some problem uses a programming language that this contestant has never used in a first successful submission on any other problem, the contestant receives a language bonus: their total penalty time is decreased by 100 minutes (6000 seconds). Note that different implementations of the same language count as the same language, i.e. Java 8 and Java 15 both count as Java, and C++ 32-bit and C++20 64-bit both count as C++. Additionally, C and C++ are counted as the same programming language.
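The penalty rule is easy to misread, so here is a small sketch of how the distance could be computed (a hypothetical helper, not the official scoring code; times are naive Moscow-local datetimes):

```python
from datetime import datetime

# 0:00:00 01.01.2022 Moscow time, the reference point for penalties
MIDNIGHT = datetime(2022, 1, 1, 0, 0, 0)

def penalty_seconds(first_ac: datetime) -> int:
    """Penalty for a problem = absolute distance of the first
    successful submission from New Year midnight, in seconds."""
    return abs(int((first_ac - MIDNIGHT).total_seconds()))

# Submissions 10 minutes before and after midnight both cost 600 seconds.
print(penalty_seconds(datetime(2021, 12, 31, 23, 50, 0)))  # 600
print(penalty_seconds(datetime(2022, 1, 1, 0, 10, 0)))     # 600
```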
The contest will run on the Yandex.Contest system. Link for registration and participation.
The 18th Prime New Year Contest starts at 00:00 31.12.2021 and finishes at 23:59 10.01.2022 Moscow time. Traditionally, the Prime New Year Contest consists of problems that were presented at team programming contests in 2021 and were never solved during those contests. The Prime New Year Contest uses plain ICPC rules.
The idea of the Prime Contest was first implemented by Andrey Lopatin and Nick Dourov at the Summer Petrozavodsk training camp in 2003. In Russian, the word "prostoj" means both "prime" and "easy", so the contest was called the "Prostoj contest", yet it was composed of extremely hard problems numbered with prime numbers (2, 3, 5, etc.). Since then, such numbering has traditionally been used for contests consisting of very hard problems only.
The contest will run on the Yandex.Contest system. Link for registration and participation. If you plan to use your OpenCup login (or another Yandex.Contest internal account), use that link instead.
Both contests have English statements. Registration is open until the end of the contest.
Sorry for an unrelated question, but do you plan to revive SNSS/SNWS someday?
Looks like yes: Link
Kind of weird to see NERC Onsite tasks since they were solved in the mirror.
+
Did my yearly ritual of wanting to solve the Prime contest for a day: submitted everything we had solved while upsolving camps and closed it forever. I was surprised to end up submitting 2 problems we had actually got accepted in contest and 2 more we had almost got accepted in contest.
Which ones are these? Btw I also submitted one problem that I solved during the contest. But one which lasted for a week haha: Algorithmic Engagements (Gigantic Tree is from there)
19 and 29 from NERC we got in contest. For 61 we had the right complexity in contest, but the TL was a bit tight (in my opinion); 79 we solved at 5:05 while we were testing the Moscow subregional.
I like how at some moment I had accepted solutions in 13 distinct languages, and then G was rejudged, so now I have 12 without C++ :) Somehow my penalty still decreases though.
I am quite sure that some asserts in my code give me RE and some give me PE; that's weird checker behaviour...
Though I got AC in the end, so it's no longer that important to me... It's quite bad that the checker doesn't distinguish between "query limit exceeded" and "your program ended without sorting the permutation" (and again, asserts won't help you here :>)
Around 1/6 of my submissions get random CEs
I believe that I proved that problem 43 (Lazy Judge) is unsolvable. Is there a mistake in the current version of the statement? (In particular, I believe I would be able to solve the problem if patience had to stay at least 3 instead of at least 2.)
I want to prove that using questions that use up $$$2n-2$$$ patience we can uniquely identify the permutation (so, starting from $$$2n$$$ and ending at $$$2$$$ patience), which gives a contradiction.
If $$$n=2$$$ then we ask a question of type 2 and we are done.
If $$$n \ge 3$$$ then we ask two questions of type 3 about the pairs $$$(1, 2)$$$ and $$$(1, 3)$$$. If the answers to these questions are equal (say with value $$$v$$$), then I know that $$$a_1=v$$$. If the answers are $$$b$$$ and $$$c$$$, then $$$b<c$$$ implies $$$a_2=b$$$ and $$$b>c$$$ implies $$$a_3=c$$$. In each case I reduce $$$n$$$ by one. Q.E.D.
I have also written a bruteforce solution that verifies this impossibility for $$$n=4$$$ lol.
Maybe Um_nik or the author (unknown to me) can answer that?
Yeah, that's one of the reasons we didn't solve it in contest :) At least 3 is correct.
Thanks. Wasted 2 hours because of this >_>. As if this contest doesn't already drain too much of my time
Huh, the PDF version says strictly more than 2 patience remaining (i.e. at least 3), and I think it has since the start of the contest. I guess the wrong image got uploaded.
Do I correctly understand that the answer to problem 3 depends on float precision? If we change sample 3 to
then the answer changes from 5 to 0. Right?
On one hand my code agrees with you. On the other... I looked into it, and I read $$$v$$$ as integers (so you may safely assume you can do so as well), quite contrary to what is written in the statement. I think the line saying that $$$v_j$$$ may be given in scientific notation should have said so about $$$q_j$$$. Weird that I got it accepted and don't remember noticing this
That made a lot of sense to me too. I submitted a clarification request where I listed all that weird shit, so far no answer.
Ok, so a little wrap-up of my 50-100 hour masochistic struggle, with possible spoilers. Not sure if anyone will read it, but since I wasted so much time on it, an additional 0.5h won't hurt. Happy that after all these years and a good number of 2nd places I finally won it :P. There was not much opposition though, basically just Radewoosh. I think if he had tried a bit more he would easily have overtaken me, because 43 and 79 were not hard.
Solved 19/22 problems, missing 2 (The Exam), 13 (Business Semiconductors Units) and 59 (The Goldberg Machine 2).
2: it's very hard coding-wise, but doable, maybe if I had one or two more days...
13: As I think I have proved throughout this contest, I can deal with quite some terrible shit, but even I have my limits and this one is far beyond them.
59: I don't know how to solve it faster than $$$O(n^2)$$$ per query, and that seems to be too slow. (EDIT: Well, it seems that Radewoosh has exactly that and I just lacked plain memoization of answers, which actually plays out very nicely with some other optimizations we had (even though memoization crossed my mind, I wrongly disregarded it as negligible because I did not notice that interplay with the other optimizations). But I reckon the model solution has a better complexity anyway. Does anyone know whether there is an editorial for it, or who the author is?)
Regarding the ones I've solved, I used editorials for 5 (Rectangle Painting), 17 (Count Min Ratio), 67 (The Struggle), in some way for 11 (Number of Colorful Matchings), and I knew 31's (Newton's Symbol) solution beforehand. The others I solved fully on my own.
3 (Revenue), 37 (Gigantic Tree): I had already solved them. Revenue is mainly about understanding the statement, very approachable during the contest if you just give it time to sink in. 37 I initially hated because I struggled a lot with it, but in hindsight it's fine, I think
5 (Rectangle Painting): That's very hard to come up with... The idea is surely cool.
7 (The Hash Table): Very nice one, had fun solving it.
11 (Number of Colorful Matchings): It's surprising it can be done in $$$O(n^3)$$$. Nice
17 (Count Min Ratio): I think this is the peak of competitive programming, in the sense that I don't think I have ever seen a harder problem. Respect to maroonrk or whoever authored it, because it's just completely unsolvable imho
19 (ASCII Automata Art): ???
23 (Fiber Shape): I have no idea what this problem is doing here. I am positive I would have solved it during the contest if given half an hour on keyboard and half an hour off keyboard. And the official editorial is definitely overcomplicated. Just shrink the coordinate system along the longer axis so that the ellipse becomes a circle and everything is super easy: no binary search for intersecting the ellipse, no complicated integrals, nothing, just a trivial $$$O(n)$$$ running in 0.06s. That's my code minus the library: https://ideone.com/OovtSS
29 (Hard Optimization): The main part is indeed quite easy, however restoring the solution is a nightmare that took me two hours, much longer than the main part itself.
31 (Newton's Symbol): After one starting idea this is ez pz
41 (Kingdoms and Quarantine): This is a beautiful problem; it definitely brought me the most joy of all the problems here
43 (Lazy Judge): That's a cool one. Handling just the 3rd type gives a good idea of how the solution should look. Um_nik's mistake in a previous comment threw me for a loop when he said the TL for this was quite tight, while all my ideas were linear-time ifologies (and $$$n \le 50000$$$) xD He later corrected that he meant 61, and then I was finally able to solve it xD
47 (Nimber Sequence): Cool one as well; I was very satisfied when I finally noticed all the tricks required for it. But why did you require optimizing nimber multiplication to the absolute limit??? That took much more coding time than actually solving the main part.
53 (Historic Breakthrough): Oh well, that took me quite some time before I realized how easy it is; definitely one of the easiest in this problemset. The trick is that Pollard's rho works not just in $$$O(n^{0.25} \log n)$$$, but in $$$O(\sqrt{p} \log n)$$$, where $$$p$$$ is the second-largest prime factor, which definitely helps in factoring numbers like $$$pq(p-1)(q-1)$$$, where $$$p, q$$$ are around $$$10^9$$$. Also, I have to say that Pollard's rho is a super beautiful algorithm
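For reference, a minimal sketch of Pollard's rho (Floyd cycle-finding variant; a hypothetical helper of my own, not the code from the problem). The point is that one split takes roughly $$$O(\sqrt{p})$$$ iterations for the smallest prime factor $$$p$$$ being peeled off, which is why full factorization is governed by the second-largest prime factor:

```python
import math
import random

def pollard_rho(n: int) -> int:
    """Return a nontrivial factor of a composite n.

    Expected ~O(sqrt(p)) iterations, where p is the smallest prime
    factor of n; repeated application plus a primality test gives the
    full factorization."""
    if n % 2 == 0:
        return 2
    while True:
        c = random.randrange(1, n)
        f = lambda x: (x * x + c) % n
        x = y = random.randrange(n)
        d = 1
        while d == 1:
            x = f(x)          # tortoise: one step
            y = f(f(y))       # hare: two steps
            d = math.gcd(abs(x - y), n)
        if d != n:            # d == n means the cycle collapsed; retry
            return d
```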
61 (Crab's Cannon): The algorithm looks cool, but actually implementing it... Well, Um_nik said he almost got it during the contest modulo a tight TL, but the way my code looked, it took me about 50 small bug fixes and it is TOP1 EVER in terms of how unlikely I think the code would be to work on the first try. It was an immense pain for me, but I imagine it is possible to solve it in a better way. Also, I have no idea why the premise of my solution is right (i.e. I have a few rules of the form "if x and y are palindromes and they satisfy sth then z is a palindrome as well" and no idea why these are sufficient)
67 (The Struggle): Yes, it was a struggle. It's probably the easiest problem I looked up the editorial for; I was somehow oblivious to the presence of FWHT here, which should have been clear. It's cool that I finally learnt and understood FWHT, because I had never got it before. The idea is cool, but optimizing time and memory was not fun, though it was understandably there in order to keep out $$$O(n \log^2 n)$$$ solutions.
71 (3D-chans are not needed): Super fun; I was proud of my quite crazy-sounding solution. It's easily reduced to counting lattice points within some weird shape in 4D (kind of a generalization of the octahedron to 4D, but not exactly). If I focus on the OX axis and partition it into intervals containing no vertex of this shape, I guess the count should behave polynomially (no proof for that), so I can determine the vertices of that shape (many Gaussian eliminations), sample a few x values from each interval, solve recursively in one fewer dimension (down to 1D) and interpolate polynomials. Radewoosh got some super weird solution that has literally zero overlap with mine though.
73 (Imprecise Permutation Sort): That's probably the most attractive-looking problem in the whole problemset and the solution is cool as well, but the terrible return-communication protocol marred my experience with it
79 (Gleb and Litenyi Avenue): After initial thoughts this looks very easy, as the whole problem reduces to almost independent single crossings or pairs of crossings. But the case of pairs took me much more time than I expected and it was terrible. I actually managed to extract 2 tests from the system through time and memory measurements over those ~70 submissions, and both helped me :P
Thanks snarknews for putting this up, it surely gave me a lot to do
Congratulations on the win! Why isn't this considered as prestigious as an OpenCup win or smth... Maybe award a cup at ptz like for the other competitions.
Editorials are for plebs. I used UTFG for 17, Jensen's (no idea if it's the same Jensen as for Jensen's inequality) formula massively simplifies it.
I cheesed 5 with some sqrt structures and some quadratic stuff, since I was pretty confident there wouldn't be any cases where the quadratic stuff has a bad constant factor. Main idea: it's easy to solve the problem in $$$O(Q \log)$$$ if all queries of type 2 are on the whole range. I think it's one of the easier problems because the huge TL allows a lot of things.
I also squeezed through 11 by writing the $$$O(N^5)$$$ GEM on matrices of polynomials mod 2, then using 64-bit packing for the data and the SSE "carry-less multiplication" extension to cut it down to $$$O(N^5/64^2)$$$. Not doable in-contest without digging through printed Intel docs, but fairly straightforward otherwise if you know the trick for the single-coloured case.
19: No idea why ???, you're told exactly what to do!
67: Are $$$O(N \log^2)$$$ solutions much easier here?
Agreed on 3 and 23, I could've solved them if I didn't have other stuff to do.
17: Huh, how do you use Google for 17? The only Jensen's formula I can find is about some complex analysis.
11: What is that GEM? Btw it is possible to compute the characteristic polynomial of a matrix in $$$O(n^3)$$$ (and it doesn't even have to be mod 2). Not sure why, but the copied code did the job. The problem basically asks you to compute the characteristic polynomial of $$$A+Bx$$$, and by careful Gaussian elimination you can reduce to the case $$$B=I$$$ and then run the blackbox.
67: Yes, $$$O(n \log^2 n)$$$ is much simpler. The additional log allows you to use black-box FWHT, while for $$$O(n \log n)$$$ you need to dig deep into the FWHT internals and understand well what is happening there
17: "jensen binomial", 3rd result, right at the top, I was digging around in papers on polynomial convolutions and found that.
11: Not the characteristic polynomial of $$$A+Bx$$$, but the determinant of $$$A+Bx$$$, which is a polynomial. I guess I did non-careful Gaussian elimination; it's very straightforward with "multiply polynomials mod 2" and "invert polynomial mod 2". Inverting needs the polynomial to be non-divisible by $$$x$$$, but that's easy to achieve since we only need the determinant: if there is no such polynomial in the first column, then we divide them all by $$$x$$$ and multiply the determinant by $$$x$$$.
Sure, I meant determinant of $$$A+Bx$$$, not characteristic polynomial. To be clear, there are two steps. First $$$A+Bx \to A+Ix$$$, which is done by careful Gaussian elimination, and then compute determinant of $$$A+Ix$$$, which is done by ... careful Gaussian elimination :P (I don't know how the second step works though).
Btw there is a cute trick to omit divisions when they are not easily computable (from what I understand this is not very important to you, but maybe useful to remember). If you want to subtract a row with first value $$$c$$$ from a row with first value $$$d$$$ in order to make $$$d$$$ equal to $$$0$$$, you usually multiply the first row by $$$\frac{d}{c}$$$. Instead, you can first multiply the second row by $$$c$$$ and then subtract the first row multiplied by $$$d$$$! Naturally the determinant gets multiplied by $$$c$$$, but sometimes that's not a big problem.
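This trick gives a fully division-free integer determinant; a sketch below (`det_division_free` is a hypothetical name, and the single integer division at the very end, exact by construction, just undoes the accumulated pivot scalings):

```python
def det_division_free(a):
    """Determinant of an integer matrix with no divisions during
    elimination: to kill entry a[i][k] below pivot c = a[k][k], replace
    row i by c*row_i - a[i][k]*row_k; each such step multiplies the
    determinant by c, which we divide out once at the end."""
    n = len(a)
    a = [row[:] for row in a]     # don't mutate the caller's matrix
    mult = 1                      # product of all pivots used as scalings
    sign = 1
    for k in range(n):
        # find a nonzero pivot in column k
        piv = next((i for i in range(k, n) if a[i][k] != 0), None)
        if piv is None:
            return 0
        if piv != k:
            a[k], a[piv] = a[piv], a[k]
            sign = -sign
        c = a[k][k]
        for i in range(k + 1, n):
            d = a[i][k]
            if d == 0:
                continue
            # row_i <- c*row_i - d*row_k : no division, det *= c
            a[i] = [c * x - d * y for x, y in zip(a[i], a[k])]
            mult *= c
    prod = sign
    for k in range(n):
        prod *= a[k][k]
    return prod // mult           # exact by construction
```

Over polynomials mod 2 (as in problem 11) the same scheme works with polynomial multiplication in place of integer multiplication, at the cost of degree growth instead of coefficient growth.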
And ah, GEM is just ... Gaussian elimination xd?
59 is by me. It's the first one in this writeup.
I was unable to spend as much time on it, but here are my short summaries as well. Thank you snarknews, really, I love this stuff
3: The statements are not bad, but it's really painful to actually understand what the problem asks of us. After that it is a simple greedy with a leap of faith
5: Really nice problem. I upsolved it quite a long time ago
7: Can't say anything. rkm0959 solved it for me :) I thought it required some weird number-theoretic knowledge, but in hindsight it doesn't seem too hard. But I couldn't solve it, so I shouldn't say anything about it
11: Also interesting stuff. I solved it quite a long time ago
13: This somewhat reminded me of the task from the last IOI. I wanted to try it a little bit, but it seems it would need significant dedication
17: I also upsolved it quite a long time ago. And I can't recall what the solution is (it was so complicated)
19, 23, 29: Yeah... basically these are just implementation problems. 19 just asks you to implement what is said (the parser is a bit complicated and error-prone IMO). 23 is the same, but as Swistakk said, it helps to think in terms of circles. For 29 I found the solution while participating in the NERC contest virtually, but I don't see a good way to implement it, especially given that I have to do the reconstruction.
41: I gave it a lot of thinking time but failed. :( Still, I really enjoyed thinking about this problem. It was the most interesting problem in this set.
47: Savior-of-Cross solved this for me, kind of. He recently wrote a blog post (in Korean) about the Bostan-Mori algorithm. I thought, "ok cool, but why do I need this stuff?" Turns out I needed it exactly for this problem.
53: 16silver solved it for me (now my account seems like a team account for a bunch of Koreans :)). The idea differs from the apparently "cheesy" solution described by Swistakk; it slightly resembles Miller-Rabin. Let $$$n \phi (n) = 2^k (2m + 1)$$$ and $$$T = a^{2m+1}$$$ for some random $$$a$$$. Then $$$T^{2^k} - 1$$$ is a multiple of $$$n$$$. We can try $$$T - 1, T + 1, T^2 + 1, \ldots, T^{2^{k - 1}} + 1$$$, and with good probability there is some number whose gcd with $$$n$$$ is neither $$$1$$$ nor $$$n$$$, provided $$$n \neq p^k$$$. I was told that this kind of idea is standard in number theory... well, ok.
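The procedure above can be sketched as follows (the function name is hypothetical; assumes $$$n$$$ is composite and not a prime power, so a splitting $$$a$$$ exists with good probability):

```python
import math
import random

def split_with_phi(n: int, phi: int) -> int:
    """Given composite n (not a prime power) and phi = phi(n), find a
    nontrivial factor, Miller-Rabin style: write n*phi = 2^k * (2m+1);
    for random a coprime to n, T = a^(2m+1) satisfies T^(2^k) = 1 (mod n),
    so some T^(2^j) - 1 or T^(2^j) + 1 likely shares a factor with n."""
    e = n * phi
    k = 0
    while e % 2 == 0:          # split off the power of two: e = 2m+1
        e //= 2
        k += 1
    while True:
        a = random.randrange(2, n)
        g = math.gcd(a, n)
        if g != 1:
            return g           # lucky: a already shares a factor with n
        t = pow(a, e, n)       # T = a^(2m+1) mod n
        g = math.gcd(t - 1, n)
        if 1 < g < n:
            return g
        for _ in range(k):     # try T+1, T^2+1, ..., T^(2^(k-1))+1
            g = math.gcd(t + 1, n)
            if 1 < g < n:
                return g
            t = t * t % n
        # all gcds were trivial for this a; pick another one
```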
59: I opened an editorial for this and was just overwhelmed. Probably one of the hardest in this set
67: I think it's not hard to reduce $$$O(n \log^2 n)$$$ to $$$O(n \log n)$$$. And ellipses are meh.
47: Do you have a quick summary of the Bostan-Mori algorithm? Is it just an optimized form of the standard "take $$$x^n$$$ mod the characteristic polynomial", but from the MSB downwards and with common FFT intermediates stored? Do you still need to do convolution over a finite field of characteristic 2? I know there are algorithms for this (and even convolution over arbitrary rings); do you have references for those?
It is a different approach with the same complexity and lower constant factor. This is the link to the original paper.
TLDR version:
Let's say we're given a sequence over a ring $$$R$$$ defined by the initial terms $$$a _ 0 \cdots a _ {d-1}$$$ and the linear recurrence relation
$$$\begin{align} a _ {i + d} = \sum _ {j=0} ^{d-1} c _ j \cdot a _ {i + j} \end{align}$$$
Let $$$F(X)$$$ be its generating function and
$$$\begin{align} Q(X) = 1 - \sum _ {i = 1} ^ d c _ {d - i} X ^ i \in R[ [X] ] \end{align}$$$
Then
$$$\begin{align} P(X) := F(X) \cdot Q(X) \end{align}$$$
is a polynomial with degree less than $$$d$$$, and $$$P(X)$$$ can be computed with a single convolution of polynomials of degree $$$d$$$. Our goal is to find the coefficient $$$[X ^ N] F(X) = [X ^ N] P(X) \cdot Q(X) ^ {-1}$$$ given some $$$N \in \mathbb{Z} _ {\ge 0}$$$.
We use the fact that for an arbitrary $$$S(X) \in R[ [X] ]$$$, there exists a unique pair of series $$$S _ {even}(X), S _ {odd}(X) \in R[ [X] ]$$$ such that $$$S(X) = S _ {even}(X ^ 2) + X \cdot S _ {odd}(X ^ 2)$$$.
Let $$$U(X) = P(X) \cdot Q(-X)$$$ and $$$V(X) = Q(X) \cdot Q(-X)$$$. Clearly, $$$V(X) = V _ {even}(X^2)$$$.
$$$\begin{align} [X ^ N] \frac{P(X)}{Q(X)} &= [X ^ N] \frac{P(X) \cdot Q(-X)}{Q(X)\cdot Q(-X)} \newline &= [X ^ N] \frac{U _ {even}(X ^ 2) + X \cdot U _ {odd}(X^2)}{V _ {even}(X^2)} \newline &= [X ^ N] \frac{U _ {even}(X ^ 2)}{V _ {even}(X ^ 2)} + [X ^ N]X \cdot \frac{U _ {odd}(X ^ 2)}{V _ {even}(X ^ 2)} \newline &= \begin{cases} [X ^ N] \frac{U _ {even}(X ^ 2) }{ V _ {even}(X ^ 2) } \hspace{2cm} \text{if N is even} \newline [X ^ N]X \cdot \frac{U _ {odd}(X ^ 2)}{V _ {even}(X ^ 2)} \hspace{1.5cm} \text{if N is odd} \end{cases} \newline &= \begin{cases} [X ^ {N / 2}] \frac{U _ {even}(X)}{V _ {even}(X)} \hspace{2cm} \text{if N is even} \newline [X ^ {(N - 1) / 2}]\frac{U _ {odd}(X)}{V _ {even}(X)} \hspace{1.5cm} \text{if N is odd} \end{cases} \end{align}$$$
$$$V _ {even}(X)$$$ is a polynomial of degree at most $$$d$$$, and $$$U _ {even}(X)$$$ and $$$U _ {odd}(X)$$$ are polynomials of degree less than $$$d$$$. Therefore, we obtain a subproblem with $$$N$$$ reduced to half. The total complexity is $$$O(M(d) \cdot \log (N))$$$, where $$$M(d)$$$ is the complexity of the convolution of degree-$$$d$$$ polynomials.
You can further optimize the constant factor if the convolution is done with the FFT. In my benchmark on the blog, this algorithm was about 5~6 times faster than the "characteristic poly mod $$$x ^ d$$$" method.
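The TLDR above translates almost line-by-line into code. A sketch with naive $$$O(d^2)$$$ products over $$$\mathbb{Z}/\text{mod}$$$ (the name `kth_term` is mine; swap `mul` for an FFT-based convolution to get the stated $$$O(M(d) \log N)$$$):

```python
def kth_term(a, c, n, mod):
    """Bostan-Mori: N-th term of the linear recurrence
    a[i+d] = sum_j c[j] * a[i+j]  (mod `mod`),
    given initial terms a[0..d-1] and coefficients c[0..d-1]."""
    d = len(a)

    def mul(p, q):  # naive polynomial product mod `mod`
        r = [0] * (len(p) + len(q) - 1)
        for i, x in enumerate(p):
            for j, y in enumerate(q):
                r[i + j] = (r[i + j] + x * y) % mod
        return r

    # Q(X) = 1 - sum_{i=1..d} c[d-i] X^i ;  P = (F*Q) truncated below deg d
    Q = [1] + [(-c[d - i]) % mod for i in range(1, d + 1)]
    P = mul(a, Q)[:d]
    while n:
        Qm = [(-x) % mod if i & 1 else x for i, x in enumerate(Q)]  # Q(-X)
        U = mul(P, Qm)            # U = P(X) * Q(-X)
        V = mul(Q, Qm)            # V = Q(X) * Q(-X), an even polynomial
        P = U[n & 1::2] or [0]    # U_odd if n is odd, else U_even
        Q = V[::2]                # V_even
        n >>= 1
    # answer is [X^0] P/Q; Q[0] stays 1 throughout, so it's just P[0]
    return P[0] % mod
```

Using it on Fibonacci (`a = [0, 1]`, `c = [1, 1]`) gives e.g. `kth_term([0, 1], [1, 1], 10, 10**9 + 7) == 55`.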
Hm, the modular exponentiation algorithm in your blog can definitely be constant-factor optimized in a few ways:
First, we should use the MSB-first approach to exponentiation by squaring, i.e. $$$x^N = (x^{N/2})^2$$$ if $$$N$$$ is even and $$$x^N = x \cdot (x^{(N-1)/2})^2$$$ if $$$N$$$ is odd. Note that multiplying by $$$x$$$ and reducing can be done in linear time, so this cuts out on average 1/3 of the modular polynomial multiplications. This is similar to the Bostan-Mori recursion, and is a generic optimization.
Then, there are a few NTT-specific optimizations. We should definitely precompute the NTTs of $$$Q$$$ and $$$Q^{-1}$$$. Also, our recursion now only squares polynomials instead of multiplying them, so we can save 1 NTT call. Furthermore, the optimized transformation of taking only the first half of an NTT is essentially analogous to the optimized transformation of doubling/halving the size of an NTT, so we could use those as well.
As far as I understand, the Bostan-Mori algorithm is exactly the linear transpose of the exponentiation algorithm, so at least when working in finite fields, they should have an identical number of operations. I'm not sure the runtimes will match in practice, but it's at least true that every Bostan-Mori optimization should have an analogous optimization for the modular exponentiation algorithm.
I've tried implementing your optimizations and I was able to get the factor down to about 2.5 compared to my Bostan-Mori implementation.
What is a "linear transpose" of an algorithm? I couldn't find any relevant result on google.
I was referring to https://specfun.inria.fr/bostan/publications/BoLeSc03.pdf. The Bostan-Mori paper mentions this. I'm not quite sure that what I said about identical number of operations is completely true, though, since this program is only linear in $$$P$$$, not $$$Q$$$.
In the given field, $$$X = -X$$$ and $$$(\sum a_i x^i)^2 = \sum a_i x^{2i}$$$ hold. Using this fact, it suffices to do every polynomial operation naively.
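Over the prime field GF(2) this is easy to check with bitmask polynomials (hypothetical helpers; in a larger field of characteristic 2 the coefficients would additionally get squared by the Frobenius map, but over GF(2) we have $$$a_i^2 = a_i$$$):

```python
def gf2_square(p: int) -> int:
    """Square a GF(2)[x] polynomial given as a bitmask: the Frobenius
    map just spreads the coefficient bit at position i to position 2i."""
    r, i = 0, 0
    while p:
        if p & 1:
            r |= 1 << (2 * i)
        p >>= 1
        i += 1
    return r

def gf2_mul(p: int, q: int) -> int:
    """Carry-less (XOR-accumulate) polynomial product over GF(2)."""
    r = 0
    while q:
        if q & 1:
            r ^= p
        p <<= 1
        q >>= 1
    return r

# (x^2 + x + 1)^2 = x^4 + x^2 + 1 over GF(2): squaring spreads the bits
assert gf2_square(0b111) == 0b10101
assert gf2_mul(0b111, 0b111) == gf2_square(0b111)
```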
So you submitted codes here that your friends had written? Shame :P! My friends got 5, 11 and 13 ACed on our common team account, but my decency rule of thumb is that I need to understand the solution and code it on my own (except for library parts). I sometimes use editorials, but try not to; half the fun is lost in that case :P.
Btw if you want I can try describing 41's solution or I can also give some gradual hints.
And I see nothing wrong with the ellipses in 67. They perfectly fit all the requirements we need from a shape for this problem (nice and convex so that the analysis from the editorial goes through, not too simple, can get tricky) and they are also easily computable.
I think I see the main idea of problem 41: if you label the vertices by their original parity, we get 4 groups A0, A1, B0, B1. The condition tells you that, for each A0 vertex, the edges should alternate between B0 and B1, so the number of B0 edges minus the number of B1 edges must be exactly 0 or 1 (and symmetrically for the other groups).
It turns out these conditions are sufficient, so we can rewrite this into a max-weight circulation problem: direct all original edges from A0 to B0 to A1 to B1 to A0, and let them each have weight 1 and capacity 1. Also, add a weight 0 capacity 1 edge from the source to each A vertex and from each B vertex to the sink (also connect the sink to the source with an infinite capacity weight 0 edge to make the circulation). Then, the set of edges we take is exactly a max-weight circulation.
However, I'm not actually sure how to solve max-cost circulation efficiently. Naively cancelling positive-cost cycles with Bellman-Ford could take up to $$$O(cost \cdot VE)$$$ or something; is there something better we can do?
No idea whether that can be done faster than $$$O(cost \cdot VE)$$$. I copied our very fast SPFA and that was enough
Turns out you can do it in $$$O(f \cdot (E + V \log V))$$$ runtime, where $$$f$$$ is the total flow of all negative edges. You can do this by building the graph up slowly, maintaining a min-cost circulation, the residual graph, and a potential function. Insert negative-weight edges one at a time. Each time you insert a new edge, run Dijkstra-based MCMF from the new edge's sink to the new edge's source, capped at the new edge's weight and capacity.
This extends to $$$O(E \cdot MCMF)$$$ runtime in general, if you have some other MCMF faster than Dijkstra-based.
Ofc I understood the solution and wrote it by myself :)