Hi all,
The second contest of the 2021-2022 USACO season will be running from January 28th to January 31st this weekend. Good luck to everyone! Please wait until the contest is over for everyone before discussing anything contest-related (including but not limited to your scores or anything about the problems).
Link for YouTube
I could not see the contest on the site. Any help?
Yeah, I think it would be great if these posts contained the timezone or a date-time link like CF contest announcements do!
Let's hope I break my streak of solving only 1 problem every single gold contest (seriously — I've taken 4 Gold contests and my scores haven't ever changed, even though my CF rating went up).
UPDATE: Again solved 1 problem rip.
Noooo :(( Which one was it?
Problem 1
IMO p2 << p1, I hope you'll do better next time :)
Message on Contest Page
We are considering making a slight adjustment in scoring in the near future — possibly on this contest — where sample cases are no longer counted towards one's final score (they still must be solved along with all other cases for an in-contest promotion).
Good luck everyone! Hopefully, I don't screw up Silver. Edit: Got 500, no promotion for me
I find the USACO problems pretty good, but Python 3.6 is so slow. I wish PyPy3 were allowed in future contests.
Would people get relegated to a lower division if they perform poorly?
No.
Thanks for answering it!
If someone passes one subset of test cases with one program and a different subset with another program, will the final score be the union of all test cases passed, or is only the best submission counted?
Neither. It's determined by the last submission.
The contest is over! Here are brief summaries of my solutions for the Platinum problems:
Each time, bring to the left end the smallest element for which this can be done, "delete" that element, and repeat. Use a segment tree, where elements are ordered by their value (not position), to keep track of the number of blocking values (differing by more than $$$k$$$ from the current element) to the left of each element.
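To make the bookkeeping concrete, here is a minimal sketch (not the full solution) of counting, for each element, how many earlier elements differ from it by more than $$$k$$$, using a Fenwick tree over compressed values; the names `h`, `k`, and `countBlockers` are mine. The full solution additionally needs a value-ordered structure that can repeatedly extract the smallest unblocked element and update the counts when an element is removed.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Fenwick (BIT) for prefix counts over compressed values.
struct BIT {
    int n; vector<int> t;
    BIT(int n) : n(n), t(n + 1, 0) {}
    void add(int i, int v) { for (++i; i <= n; i += i & -i) t[i] += v; }
    int query(int i) { int s = 0; for (++i; i > 0; i -= i & -i) s += t[i]; return s; } // sum over [0, i]
};

// For each position i, count earlier elements h[j] (j < i) with |h[j] - h[i]| > k:
// the "blocking count" used by the greedy described above.
vector<int> countBlockers(const vector<long long>& h, long long k) {
    int n = h.size();
    vector<long long> vals(h);
    sort(vals.begin(), vals.end());
    vals.erase(unique(vals.begin(), vals.end()), vals.end());
    auto rank_of = [&](long long x) {  // index of first compressed value >= x
        return int(lower_bound(vals.begin(), vals.end(), x) - vals.begin());
    };
    BIT bit(vals.size());
    vector<int> blockers(n);
    for (int i = 0; i < n; i++) {
        int lo = rank_of(h[i] - k);          // first value >= h[i] - k
        int hi = rank_of(h[i] + k + 1) - 1;  // last value <= h[i] + k
        int inside = bit.query(hi) - (lo > 0 ? bit.query(lo - 1) : 0);
        blockers[i] = i - inside;            // earlier elements outside [h[i]-k, h[i]+k]
        bit.add(rank_of(h[i]), 1);
    }
    return blockers;
}
```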
Notice that the subsequence formed by taking all the even-numbered elements never changes, and the same is true for the odd elements. Do a DP where the state is the current odd-numbered element and the number of even-numbered elements that have been swapped with that element (and the direction in which these elements moved). The DP needs to check that each block (bounded by odd-numbered elements or the sequence's ends) of even numbers has nonnegative size, and that all swaps made are valid.
Take the convex hull of each group of vectors, then take the Minkowski sum of the convex hulls (making sure to choose the two smallest convex hulls each time to sum together). It is then enough to check the vertices of the final Minkowski sum.
You can just sort all edges of the polygons and take their union to get the Minkowski sum of everything. You seem to be doing some sort of set merging, which is probably unneeded.
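For reference, a rough sketch of the pairwise edge-merging Minkowski sum for convex polygons given in counterclockwise order (summing all hulls at once by sorting every edge together works the same way). It assumes each hull has at least three vertices; degenerate hulls need extra care, and the function names are mine.

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

struct P {
    ll x, y;
    P operator+(P o) const { return {x + o.x, y + o.y}; }
    P operator-(P o) const { return {x - o.x, y - o.y}; }
};
ll cross(P a, P b) { return a.x * b.y - a.y * b.x; }

// Rotate a convex CCW polygon so it starts at its lowest-then-leftmost vertex.
void normalize(vector<P>& p) {
    rotate(p.begin(), min_element(p.begin(), p.end(), [](P a, P b) {
        return make_pair(a.y, a.x) < make_pair(b.y, b.x); }), p.end());
}

// Minkowski sum of two convex CCW polygons by merging their edge vectors by angle.
vector<P> minkowski(vector<P> a, vector<P> b) {
    normalize(a); normalize(b);
    a.push_back(a[0]); a.push_back(a[1]);   // sentinels to simplify wrap-around
    b.push_back(b[0]); b.push_back(b[1]);
    vector<P> res;
    size_t i = 0, j = 0;
    while (i < a.size() - 2 || j < b.size() - 2) {
        res.push_back(a[i] + b[j]);
        ll c = cross(a[i + 1] - a[i], b[j + 1] - b[j]);
        if (c >= 0 && i < a.size() - 2) i++;  // take the edge with the smaller angle first
        if (c <= 0 && j < b.size() - 2) j++;
    }
    return res;
}
```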
The explanation for the Platinum problems can be found on this YouTube channel by an AlphaStar student.
Does anyone have any idea how to solve P3 Silver?
Bipartite matching, but I think there exists a solution without it.
Consider each pair (x, y) as an edge; for any tree, a DFS order satisfies every edge. Therefore, for each component, you either print the DFS order of any spanning tree, or pick an edge on the cycle first and then print the DFS order of any spanning tree. So it is just DSU and DFS (a minimal skeleton is sketched below).
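A sketch of that DSU + DFS skeleton under my assumptions: the values x and y are vertices, each pair is an edge, and for each component we print the vertices in the DFS order of some spanning tree. The rule for choosing the root (or the starting cycle edge) is problem-specific and omitted here, as is the exact input format.

```cpp
#include <bits/stdc++.h>
using namespace std;

struct DSU {
    vector<int> p;
    DSU(int n) : p(n) { iota(p.begin(), p.end(), 0); }
    int find(int x) { return p[x] == x ? x : p[x] = find(p[x]); }
    bool unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return false;    // this edge closes a cycle in its component
        p[a] = b; return true;
    }
};

int main() {
    int n, m;                        // n vertices (values), m pairs (edges) -- assumed format
    cin >> n >> m;
    vector<vector<int>> adj(n);
    DSU dsu(n);
    for (int e = 0; e < m; e++) {
        int x, y; cin >> x >> y; --x; --y;
        adj[x].push_back(y); adj[y].push_back(x);
        dsu.unite(x, y);             // unite() returning false marks a cycle edge of this component
    }
    // For each component, output vertices in the DFS order of some spanning tree.
    vector<bool> seen(n, false);
    for (int s = 0; s < n; s++) {
        if (seen[s]) continue;       // a problem-specific rule would pick the root / first edge here
        stack<int> st;
        st.push(s); seen[s] = true;
        while (!st.empty()) {
            int u = st.top(); st.pop();
            cout << u + 1 << "\n";
            for (int v : adj[u])
                if (!seen[v]) { seen[v] = true; st.push(v); }
        }
    }
}
```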
I have an interesting method to solve Platinum P2 that uses $$$O(n)$$$ memory.
It relies on the fact that if we perform a certain number of valid operations on the haystacks and then find the answer for that arrangement, the answer will be the same as for the original arrangement. This means that we can perform a certain number of valid operations on the haystacks before we compute the answer. I transformed the array such that if the array looks like this $$$...a...b...$$$ and a possible end result is $$$...b...a...$$$, then all the elements $$$x$$$ in between $$$a$$$ and $$$b$$$ satisfy $$$\vert b-x \vert = 1$$$. This is always possible and can be achieved in $$$O(n^2)$$$ time. Then you can do a DP on how many different end arrays are possible if only the first $$$i$$$ elements are used. The transition uses the same logic as the case where all the heights are at most 3.
Does anybody know how to solve Silver P1?
I solved it in $$$O(\log N \cdot \log N)$$$; I can give you some hints:
Imagine the first time you multiply by 2. Then prove that it is never optimal to use the divide-by-2 operation afterwards. (From that point on, you can only multiply by 2 or add 1.)
Before using any multiplication operation, you can only choose division or adding 1. If the number is odd, obviously you can only add; if the number is even, you can either add 1 or divide by 2. Try to prove that IF you choose to add 1 when you could have divided, then it is never optimal to use the division operation in any subsequent step.
Let's say you want to convert a -> b using multiply by 2 and add 1. Then consider the reverse, b -> a, using divide by 2 and subtract 1. If at any stage the number is odd, you can only subtract 1; if the number is even, you can do either, but IF you choose to subtract, then it is never optimal to divide again (in the next step or in any subsequent step). (Prove this again.)
How does this work for the test case "997 120"?
Let's say opr(a,b) denotes the minimum number of operations required to convert a -> b using only multiplication by 2 and adding 1. If a > b, then opr(a,b) = INF (the conversion is not possible in this case). Then:
997 -> 998 [suppose I stop dividing here; then I need 1+opr(998,120) operations] -> (divide) 499 [2+opr(499,120)] -> (add 1, then divide) 250 [4+opr(250,120)] -> (divide) 125 [5+opr(125,120)] -> (add 1, then divide) 63 [7+opr(63,120)] -> (add 1, then divide) 32 [9+opr(32,120)] -> 16 [10+opr(16,120)] -> 8 [11+opr(8,120)] -> 4 [12+opr(4,120)] -> 2 [13+opr(2,120)] -> 1 [14+opr(1,120)].
Basically, I'm iterating over how many times I divide by 2, computing opr(number obtained just after the last division, 120), and taking the minimum of all those answers. A sketch in code is below.
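A sketch of this approach in C++, assuming the allowed operations are exactly "add 1", "divide by 2 (only when even)", and "multiply by 2" as in the hints above, and assuming the input is a count of pairs followed by the pairs themselves; `opr` and `solve` are my names.

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll INF = 1e18;

// Minimum ops to turn a into b using only "multiply by 2" and "add 1".
// Computed in reverse: from b, divide by 2 when even (and still >= a), otherwise subtract 1.
ll opr(ll a, ll b) {
    if (a > b) return INF;
    ll cnt = 0;
    while (b > a) {
        if (b % 2 == 1) { b -= 1; cnt++; }
        else if (b / 2 >= a) { b /= 2; cnt++; }
        else { cnt += b - a; b = a; }   // can't divide any more: finish with subtractions
    }
    return cnt;
}

// Try every prefix of "descent" operations on a (add 1 if odd, otherwise divide by 2),
// then finish with opr(current, b). Take the minimum over all stopping points.
ll solve(ll a, ll b) {
    ll best = opr(a, b), steps = 0;
    while (a > 1) {
        if (a % 2 == 1) { a += 1; steps++; }
        else            { a /= 2; steps++; }
        best = min(best, steps + opr(a, b));
    }
    return best;
}

int main() {
    int n; cin >> n;                       // assumed: number of (a, b) pairs
    while (n--) { ll a, b; cin >> a >> b; cout << solve(a, b) << "\n"; }
}
```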
For Gold B, I think the hardest part is to notice that the Add operation only occurs between two 'active' nodes, which implies that a node is never relevant again after it becomes irrelevant. Then we can solve it offline with DSU. Though it is the contestants' responsibility to read the statement correctly, I do think it would be more appropriate if this were highlighted.
For Platinum A and B, I think they share the similar idea of reducing the problem to finding/counting topological orders of a graph. Under this reduction, problem A is all about using segment trees (possibly persistent ones) to maintain the graph. Note that problem A of the last Platinum contest (December) is, from my perspective, of the same kind as this one. I think I may try to avoid two problems of the 'same kind' in a row.
For Platinum C, I really don't think it is appropriate to put a raw Minkowski sum problem in such a 'sliding window' contest. But this is just my own opinion.
Silver C and Gold C are cool. Btw, I also heard people saying that systems of difference constraints are not 'gold knowledge'; it would be good to have a proof I can use to argue with next time.
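For context, the standard difference-constraints technique turns each constraint $$$x_v - x_u \le w$$$ into an edge $$$u \to v$$$ with weight $$$w$$$; shortest-path distances from a super-source then give a feasible assignment, and a negative cycle means the system is infeasible. A generic Bellman-Ford sketch, not tied to the actual problem (all names are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Solve a system of difference constraints x[v] - x[u] <= w via Bellman-Ford.
// Returns an empty vector if the system is infeasible (negative cycle).
vector<ll> solveDifferenceConstraints(int n, const vector<array<ll, 3>>& cons) {
    const ll INF = 1e18;
    // Super-source n with 0-weight edges to every variable.
    vector<ll> dist(n + 1, INF);
    dist[n] = 0;
    vector<array<ll, 3>> edges(cons);            // {u, v, w} means x[v] - x[u] <= w
    for (int v = 0; v < n; v++) edges.push_back({n, v, 0});
    for (int it = 0; it <= n; it++) {
        bool changed = false;
        for (auto& [u, v, w] : edges)
            if (dist[u] < INF && dist[u] + w < dist[v]) { dist[v] = dist[u] + w; changed = true; }
        if (!changed) break;
        if (it == n) return {};                  // still relaxing after n+1 passes: negative cycle
    }
    return vector<ll>(dist.begin(), dist.begin() + n);   // x[v] = dist[v] is feasible
}

int main() {
    // Example: x1 - x0 <= 3 and x0 - x1 <= -1, i.e. 1 <= x1 - x0 <= 3.
    auto x = solveDifferenceConstraints(2, {{0, 1, 3}, {1, 0, -1}});
    for (ll v : x) cout << v << " ";             // one feasible assignment, e.g. "-1 0"
    cout << "\n";
}
```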
Can you please explain how Plat A is reduced to a graph? I found a solution, but I don't really see how it reduces to finding a topological order of a graph...
Also, how do you solve Gold C, please? My only idea was that you essentially have some sort of tree (like when i points to j you can add an edge), but I can't find anything else...
Thanks in advance for your explanations :)
For any indices i and j (i < j), if their relative positions cannot be swapped, we add a directed edge from i to j. Then any topological order corresponds to a final sequence.
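If it helps, here is a quadratic sketch of my reading of this reduction for Plat A: add an edge $$$i \to j$$$ whenever $$$i < j$$$ and $$$|h_i - h_j| > k$$$, then output the lexicographically smallest (by height) topological order with a priority queue. The input format is assumed, and the fast solution replaces the $$$O(n^2)$$$ explicit edges with segment-tree machinery.

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

int main() {
    int n; ll k;
    cin >> n >> k;                       // assumed: n, k, then the n heights
    vector<ll> h(n);
    for (auto& x : h) cin >> x;

    // Edge i -> j whenever i < j and the two elements can never pass each other.
    vector<vector<int>> adj(n);
    vector<int> indeg(n, 0);
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (llabs(h[i] - h[j]) > k) { adj[i].push_back(j); indeg[j]++; }

    // Lexicographically smallest (by height, then index) topological order.
    priority_queue<pair<ll, int>, vector<pair<ll, int>>, greater<>> pq;
    for (int i = 0; i < n; i++) if (indeg[i] == 0) pq.push({h[i], i});
    while (!pq.empty()) {
        auto [val, u] = pq.top(); pq.pop();
        cout << val << "\n";
        for (int v : adj[u]) if (--indeg[v] == 0) pq.push({h[v], v});
    }
}
```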
Oh! So then you need to sort of compress the graph by only adding an edge to the 'nearest' swappable value?
Silver P3 had a similar flavor to https://codeforces.net/problemset/problem/1209/D
How to solve Gold P3? I hopelessly tried various bad heuristics in lieu of partials.
Gold P3 can be solved with a tree DP.
Let $$$A$$$ be the input array (transformed so that its values are 0-indexed), and let $$$j = A[i]$$$ be the rightmost index such that, in our resulting solution $$$B$$$, $$$B[i]+k \ge B[j]$$$.
Now, imagine we have a directed graph: Let vertex $$$i$$$ represent the value at index $$$i$$$ in our answer. For every $$$0\leq i < n$$$, let there be a directed edge from vertex $$$A[i]+1$$$ to vertex $$$i$$$.
Observe that once all of these edges are drawn, our graph is simply a rooted tree with root vertex $$$n$$$.
After building our graph, run a depth-first search from the $$$n$$$th vertex (yes, this vertex technically doesn't "exist", but we relax the condition for the sake of this solution). Allow the $$$n$$$th vertex to take on some large value, say $$$B[n]=10^{18}$$$. Then, in our depth-first search, we visit all child vertices $$$v$$$ of $$$u$$$ in descending order w.r.t. their corresponding positions in array $$$B$$$.
Every time we visit a child $$$v$$$ of vertex $$$u$$$, we set $$$B[v] = B[u] - k - 1 - \sum{sz[u]}$$$, where $$$k = 10^{10}$$$ and $$$\sum{sz[u]}$$$ is the sum of the sizes of the subtrees of vertex $$$u$$$'s children that have been processed so far. Once our DFS finishes, our target array will have been constructed.
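A sketch of this construction as I understand it, assuming the input is already the 0-indexed array $$$A$$$ described above, that plain recursion is acceptable for the given limits, and that printing $$$B$$$ is the desired output:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

int main() {
    int n; cin >> n;
    vector<int> A(n);
    for (auto& a : A) cin >> a;          // assumed: already 0-indexed as described above

    // Edge from A[i]+1 to i; vertex n is the root of the resulting tree.
    vector<vector<int>> child(n + 1);
    for (int i = 0; i < n; i++) child[A[i] + 1].push_back(i);
    // Visit children in descending order of index.
    for (auto& c : child) sort(c.rbegin(), c.rend());

    const ll K = 1e10;                   // the large gap k from the writeup
    vector<ll> B(n + 1), sz(n + 1, 1);

    // Assign B[v] = B[u] - K - 1 - (total size of u's children processed so far).
    function<void(int)> dfs = [&](int u) {
        ll processed = 0;
        for (int v : child[u]) {
            B[v] = B[u] - K - 1 - processed;
            dfs(v);
            processed += sz[v];
            sz[u] += sz[v];
        }
    };
    B[n] = (ll)1e18;                     // large value for the virtual root
    dfs(n);

    for (int i = 0; i < n; i++) cout << B[i] << "\n";
}
```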
So, why does this work?
Sketch of Correctness:
1) The time complexity of this solution is straightforward. Our graph is a tree with at most $$$n \leq 10^5$$$ nodes, so surely our solution will run in $$$O(n)$$$.
2) The tree is connected by the nature of the construction, which means array $$$B$$$ will be filled by the end of this process.
3) For every $$$0 \le i \le j < n$$$, $$$B[i] \leq B[j]$$$ holds. To show this, it suffices to prove that the length of the path from vertex $$$n$$$ to vertex $$$i$$$ is always greater than or equal to the length of the path from vertex $$$n$$$ to vertex $$$j$$$. This is due to the fact that $$$k \gg n$$$, so $$$k$$$ is the only value that could potentially shift the inequality, and its total contribution is directly proportional to the length of the path back to the root.
The proof of this is simple. Consider the transpose of this graph. Then the following condition must hold: $$$x \le y \implies p[x] \le p[y]$$$, where $$$p[x]$$$ denotes the parent (next vertex) of $$$x$$$ along its path to the root.
Why is this true? Suppose there exist two indices such that $$$x \le y$$$, but $$$p[x] > p[y]$$$.
However, recall that $$$B[x] \le B[y]$$$, so it must be true that $$$B[p[x]] \leq B[p[y]]$$$ by the monotonic property of our array. But we assumed that $$$p[x] > p[y]$$$, which also means $$$B[p[x]] > B[p[y]]$$$. Contradiction.
Hence, by the nature of our construction, for any vertices $$$x$$$ and $$$y$$$ with $$$0\leq x \leq y < n$$$, we have $$$B[x] \leq B[y]$$$.
Note that this condition is sufficient because there is always a path that leads from vertices $$$i, j$$$ to the root, and the length of the path from vertex $$$j$$$ will be a lower bound for the length of vertex $$$i's$$$ path due to the proof above.
4) For every child $$$v$$$ of a vertex $$$u, (v < u)$$$, setting $$$B[v] = B[u] - k - 1 - \sum{sz[u]}$$$ allows for the condition $$$B[v]+k < B[u]$$$ to be satisfied. Furthermore, notice that $$$B[v]+k \ge{B[u-1]}$$$, because of the contribution of $$$\sum{sz[u]}$$$ and by applying 3).
How is the points threshold for promotion determined? Shouldn't it be available before the contest? Really, I know a handful of people (myself included) who would have tackled Gold A/Gold B instead of Gold C, but since we all assumed this limit would be 750 (as it has been in previous competitions), the only mathematical way to promote would have been by solving C (and one of A and B, plus subtasks of the other). Instead, qualifying for Platinum could in this case have been achieved by merely solving A and B.
(Standard disclaimer: I am not USACO staff.)
The promotion threshold is never determined prior to the contest and depends on the results of the contest: harder contests have lower thresholds and easier contests have higher thresholds. You can clearly see this from last year's Gold contests. Therefore, the promotion threshold has not always "been 750 (as it has been in the previous competitions)."
If you have further concerns about how promotion thresholds are set, you should reach out to the USACO contest director.
You could also do 50% partials on C and fully solve A and B. Anyway, I think that in 4 hours you definitely have the time to try all the problems; it's important (in terms of contest strategy) to avoid being stuck for too long on the same problem (unless you are BenQ and you are 100% sure you can solve it with more thinking).
Lol, I made the same mistake. I had the right idea for B, but figured that I must solve C to promote, so I spent most of my time on C. I didn't leave enough implementation time for B, but if I had, it would've been a promo.