Xellos's blog

By Xellos, 10 years ago, in English

These days (26.-27.3.2015), another national round of the Slovak Olympiad in Informatics took place. Just short problem statements for now; I'll write them up in more detail and with solutions later:

1.

There's a hazelnut chocolate of size N × M, which can be viewed as a 2D grid. In each cell of the grid, there's a given number of nuts. Two players play a game, alternating turns: each player has to cut off either the bottom row or the rightmost column of the remaining part of the chocolate in her turn (and do with it guess what). The cut off row/column has to contain an even number of nuts. The player that can't make a move loses. Determine the winner if both play optimally.

2.

You have two numbers N and K, one input file with N - K distinct numbers between 1 and N, and 2K + 2 empty files. Find the missing K numbers.

Your program has a very limited amount of memory (the files' content isn't stored in that memory), just 10 kB for K ≤ 100; its memory complexity can't depend on N. Primarily, you need to minimise the worst-case number of reads+writes to files; only then are you minimising the time and memory complexity of your algorithm.

Files work like queues and can be erased in small constant time.

3.

a very long introductory text and some heavily theoretical magic with simulating boolean conditions using other conditions and proving that it's always possible/impossible

4.

There are N numbers up to 10^9 and another number R, also up to 10^9. Find up to 4 numbers (one number can't be used multiple times) among these N that sum up to R, or decide that they don't exist. N ≤ 4000.

5.

There's a straight river of constant width, N points on one side of it and M on the other side. You need to connect some of them with (obviously straight) lines that have the minimum total length in such a way that each of these N + M points has at least one line connecting it with another point on the other side of the river. N, M ≤ 20000.

The first 3 problems are theoretical (points for explaining your algorithm), the other 2 practical (typical contest style, partial scoring); the max. time limit was 10s in both.


By Xellos, 10 years ago, in English

Complete problemset + tests + original presentation of solutions.

Official scoreboard, public contest scoreboard, used for training elsewhere.

Short hints first, more detailed solutions afterwards.

A. Avoiding the Apocalypse

(difficulty: medium-hard, code)

Maxflow in a suitably selected graph. Ford-Fulkerson is sufficient.


The vertices of our graph need to correspond to states that one person can be in when traversing the original road network; the number of people moving in a group is then represented by flow along edges. Therefore, the most natural choice is to use vertices corresponding to states (location, time).

The rules from the problem statement say that at most c people can move along an edge e = (u → v) of the original graph at any time. That means if we add an edge from (u, t) to (v, t + s) (s is the time it takes to traverse edge e), it needs to have capacity c; such edges fully describe these rules.

Staying in place is also moving, just into the same vertex; it takes one time unit. That adds some more edges (with infinite capacity, as any number of people can stay in place).

Now, a path of some person is represented by a unit flow in this graph. In order to find the maximum number of people that can get somewhere, we clearly want a maximum flow in this graph.

Wait, people can't wait forever! We still haven't used the restriction on time S till zombification, but that's simple — just don't use vertices corresponding to time > S. This also bounds the number of vertices to O(NS) and edges to O((N + M)S).

Furthermore, we need a source vertex v_S, a sink vertex v_T and edges from/to them; the answer is the maxflow from v_S to v_T. It's obvious that we need 1 edge from the source to the starting vertex of our group of G people (at time 0), and that edge should have capacity G. For the edges to the sink, we'll use the last remaining part of the input (finally, the input is spent!): they need to go from the medical facilities, at time S (there's no harm in waiting for a film-like cliffhanger), and have infinite capacity.

All that's left is applying a standard maxflow algorithm, for example Ford-Fulkerson. That one has complexity O((V + E)F), where F is our answer (the maxflow), V and E are vertices and edges of our graph — in this case, it's O((N + M)SG). It actually runs in time, because its constant is fairly small and most tests are small or have small answers.
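
To make this concrete, here's a rough sketch (not the official code) of the construction plus a DFS-based Ford-Fulkerson. The input shapes (roads as (u, v, capacity, traversal time) tuples, a starting intersection, a list of medical facilities) are assumptions made for the sketch:

```cpp
// A minimal sketch: time-expanded graph + DFS-based Ford-Fulkerson.
// Vertex (u, t) gets index t*N + u; source and sink are appended at the end.
#include <bits/stdc++.h>
using namespace std;

struct MaxFlow {
    struct E { int to; long long cap; };
    vector<E> es; vector<vector<int>> g; vector<bool> vis;
    MaxFlow(int n) : g(n), vis(n) {}
    void add(int u, int v, long long c) {
        g[u].push_back(es.size()); es.push_back({v, c});
        g[v].push_back(es.size()); es.push_back({u, 0});   // residual edge
    }
    long long dfs(int u, int t, long long f) {
        if (u == t) return f;
        vis[u] = true;
        for (int i : g[u])
            if (!vis[es[i].to] && es[i].cap > 0)
                if (long long d = dfs(es[i].to, t, min(f, es[i].cap))) {
                    es[i].cap -= d; es[i ^ 1].cap += d; return d;
                }
        return 0;
    }
    long long run(int s, int t) {            // O((V + E) * flow)
        long long flow = 0, d;
        do {
            fill(vis.begin(), vis.end(), false);
            flow += d = dfs(s, t, LLONG_MAX);
        } while (d > 0);
        return flow;
    }
};

long long evacuate(int N, int S, int G, int start, const vector<int>& medics,
                   const vector<array<int,4>>& roads) {     // (u, v, cap, time)
    auto id = [&](int u, int t) { return t * N + u; };
    int V = N * (S + 1), src = V, sink = V + 1;
    MaxFlow mf(V + 2);
    mf.add(src, id(start, 0), G);                           // G people at time 0
    for (int t = 0; t < S; ++t)                             // waiting in place
        for (int u = 0; u < N; ++u) mf.add(id(u, t), id(u, t + 1), LLONG_MAX);
    for (auto& r : roads)                                   // road edges
        for (int t = 0; t + r[3] <= S; ++t)
            mf.add(id(r[0], t), id(r[1], t + r[3]), r[2]);
    for (int m : medics) mf.add(id(m, S), sink, LLONG_MAX); // saved at time S
    return mf.run(src, sink);
}
```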

B. Button Bashing

(diff: easy-medium, code)

BFS in a graph of cooking times.


Again, we can construct a directed graph of states. In this case, each state will be a time set on the microwave so far; from it, you have N edges corresponding to button presses that lead you to a different time as described in the problem statement.

Obviously, you want to find the minimum distance from vertex (t = 0) to (t = T), or, when necessary, the minimum distance to vertices (t ≥ T) (since you can always keep pressing one positive button, at least vertex (t = 3600) is guaranteed to be reachable). One BFS is enough to find the min. distances of all vertices from (t = 0); then loop over all vertices (t ≥ T) until you find one with non-infinite distance.

There are O(3600·N) edges in our graph and O(3600) vertices, so our BFS takes O(3600·N) time.
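
A minimal sketch of that BFS, assuming buttons[] holds the N button values and that a press clamps the displayed time into [0, 3600] as described in the statement; it returns the number of presses for the first reachable time ≥ T:

```cpp
#include <bits/stdc++.h>
using namespace std;

int minPresses(const vector<int>& buttons, int T) {
    vector<int> dist(3601, -1);              // distance = number of presses
    queue<int> q;
    dist[0] = 0; q.push(0);
    while (!q.empty()) {
        int t = q.front(); q.pop();
        for (int b : buttons) {
            int nt = max(0, min(3600, t + b));   // clamping as in the statement
            if (dist[nt] == -1) { dist[nt] = dist[t] + 1; q.push(nt); }
        }
    }
    for (int t = T; t <= 3600; ++t)          // first reachable time >= T
        if (dist[t] != -1) return dist[t];   // (t - T is the extra waiting time)
    return -1;                               // can't happen if a positive button exists
}
```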

C. Citadel Construction

(diff: med-hard, code)

Convex hull; fix 1 vertex (u) of the quadrilateral, iterate the opposite one (w) along the convex hull and recalculate the furthest vertices v, x from the line u-w on each of its sides. O(N^2).


We need to pick a quadrilateral (possibly degenerated into a triangle) with maximum area. Obviously, a triangle is better than a concave quadrilateral, so let's stick to convex ones.

Suppose that we picked one diagonal of our quadrilateral. Its area is the sum of triangles made by this line segment and 2 vertices on opposite sides from it. Using the well-known formula: area (of a triangle) = base * height / 2, we realize that the most distant points from the line give the largest area, because the chosen diagonal is each triangle's base.

We can imagine taking a line parallel to our diagonal and moving it perpendicularly to it; the last vertices it crosses (depending on the direction of its movement) are the most distant ones. But this is equivalent to saying that these most distant vertices must lie on the convex hull — if we draw a line through a point that's not on the convex hull, there will be points (strictly) on both sides of it, namely the ones belonging to the convex hull. Applied to both diagonals, it means we only need to use points on the convex hull.

Let's now pick a diagonal between 2 vertices on the convex hull and rotate our coordinate system so this diagonal coincides with the x-axis. We need to find the topmost point (farthest from the x-axis on one side, the other side can be done analogously). By definition of convexity, as we go from left to right on the hull, the angle between the x-axis and the side of the hull we're currently moving along will be non-increasing, which means the y-coordinates of points we pass first increase (up to the first segment with negative angle) and then decrease.

If we rotate this diagonal around one vertex u of the hull, starting with the position parallel (angle 0) to one side u-v of the hull in that place, adding vertices to the part "above" the diagonal, the angles of all segments with the diagonal increase equally and the first segment with negative angle moves along the hull in the same direction in which we're adding vertices. Therefore, we can use two pointers to store the other endpoint of the diagonal, w, and the topmost point v.

Since the "bottommost" point x can be recomputed using the 2 pointers' method with w in parallel to recomputing v, and we can calculate the respective area easily using vector cross products, this gives us (together with the convex hull) an O(N^2) algorithm.
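
The only geometric primitive this needs is a cross product; a tiny sketch:

```cpp
// Twice the signed area of triangle (a, b, c) -- positive if c lies to the
// left of the line a->b. This is the primitive used to recompute v and x
// and to evaluate the quadrilateral's area.
struct Pt { long long x, y; };

long long cross2(const Pt& a, const Pt& b, const Pt& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Doubled area of quadrilateral u, v, w, x (v and x on opposite sides of
// the diagonal u-w): abs(cross2(u, w, v)) + abs(cross2(u, w, x)).
```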

D. Dropping Directions

(diff: med-hard, code)

Build a graph with vertices (intersection, direction); it's a specific union of cycles. Adding a signpost points all locations on 0 or 2 cycles to the goal.


The graph we're given is not very good, since the next intersection depends on the previous one. A much better graph is a directed one with vertices (intersection, direction). In this graph, the next vertex is given uniquely and the vertex from which we had to arrive is also given uniquely, so it must be a union of cycles.

Furthermore, if we had a cycle in it along some intersections, then there must be another cycle in which the order of intersections is opposite — it corresponds to traversing the same path in the original graph, just taking the opposite direction at each intersection.

Now, what would happen if we added some signposts? Each vertex would still have outdegree 1, but some would have indegree 0, so each connected component would be a cycle with some trees (directed to the root) appended to its vertices as roots. The situation we want to obtain is for each component's cycle to contain one of the goal's vertices (there are 4 of them, and at most 4 such cycles).

Let's look at the intersections in the goal's cycles, and at signposts in them. Such an intersection corresponds to 4 vertices, two of which already lay in the goal's cycles originally. Adding a signpost would therefore point at most 2 other cycles to the goal. After we added such a signpost, we could look at the graph again, find an intersection that leads tourists to the goal in two directions (either directly along an original goal's cycle, or by finding the previously added signpost), add a signpost to it, and repeat until the goal is always reachable.

In fact, we can always find an intersection such that adding a signpost would point 2 cycles to the goal this way. Thanks to our strategy and the symmetric property of our graph, we know that pointing one cycle to the goal means pointing its opposite direction cycle to the goal as well, so we point 0 or 2 cycles. And if we could only point 0 cycles, that would mean the original graph isn't connected.

Therefore, we can calculate the answer very easily: it's the number of components that don't contain the goal's intersection, divided by 2. All it takes is one BFS for each component, in O(N) time total.

E. Excellent Engineers

(diff: medium, code)

A classical problem, solvable simply using sorting and a minimum-BIT in O(N log N).


The order of engineers doesn't matter, so let's sort them by their first ranks. Now, the only people who can kick an engineer out of the shortlist now have to be before him in the sorted order, so let's process them in that order.

For an engineer (r_2, r_3), we need to find out if, among people processed before him and with smaller r_2, there's someone also with smaller r_3. That's straightforward if we use a minimum-BIT: if we store in the BIT an array A[] with A[r_2] = r_3 for already processed engineers and A[r_2] = ∞ for the rest, then it's equivalent to asking if min(A[1..r_2 - 1]) < r_3. Based on the answer, we can decide whether to add him to the shortlist. Then, we need to add the currently processed engineer by updating A[r_2] to min(A[r_2], r_3).

Sorting can be done in O(N), queries and updates on the BIT work in O(log N), so we have an O(N log N) algorithm.
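
A sketch of the minimum-BIT part; it assumes the ranks are a permutation of 1..N, so strict comparison suffices:

```cpp
// Minimum-Fenwick tree: A[r2] keeps the smallest r3 among already processed
// engineers with that second rank; prefix minima answer the domination query.
#include <bits/stdc++.h>
using namespace std;

struct MinBIT {
    vector<int> t;
    MinBIT(int n) : t(n + 1, INT_MAX) {}
    void update(int i, int v) {                 // A[i] = min(A[i], v)
        for (; i < (int)t.size(); i += i & -i) t[i] = min(t[i], v);
    }
    int query(int i) {                          // min(A[1..i])
        int r = INT_MAX;
        for (; i > 0; i -= i & -i) r = min(r, t[i]);
        return r;
    }
};

// After sorting engineers by r1: engineer (r2, r3) stays on the shortlist
// iff query(r2 - 1) > r3 (ranks are distinct); then call update(r2, r3).
```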

F. Floating Formation

(diff: hard, code)

Boats are edges, designs are edges. Compress the graph to a tree with its root marked, then keep marking the vertex that has the most unmarked vertices on the path from it to the root. It can be done with preorder numbering and a segment tree.


The input graph has a specific form that simplifies stuff a lot — it's connected and has a part that always stays afloat. We can identify this part and compress it into the root of a tree. How? By identifying the part that doesn't stay afloat, and that can be done simply by cutting off leaves of the graph (stored in a queue) and adding their neighbours to the queue if they become leaves.

Now, we have a rooted tree, with only its root marked as "stays afloat". If we connect a boat to its unmarked vertex, that vertex and all on the path from it to the root will also become marked that way. Obviously, we want to connect boats to leaves only; in fact, we want to connect a boat to a deepest vertex (if there are more, any one will do). If we didn't connect a boat to any of the deepest vertices, we could look at the first vertex w on the path from one of them (v) to the root that becomes marked in the end, take a boat from its subtree (connected to u) and connect it to that deepest vertex (v) instead, gaining at least one marked vertex, because we mark all vertices from w to v, unmark at most all vertices from u to w and v must have been deeper than u.

We can connect the first boat to this vertex and mark all vertices on the path from it to the root; this can be done by simply moving along the path until we encounter a previously marked vertex. Where to put the next boat? Again, to the vertex with the most unmarked ones on the path from it to the root (let's call this number "true depth"). The argument is that the cost is the same as if we just took all marked vertices and merged them into the root (we did this at the beginning, remember?), obtaining another tree with all but the root unmarked, which is the same situation as when choosing the vertex for the first boat.

We now have a clear idea of what to do — K times "pick the truly deepest vertex and mark all vertices above it (including itself)". Then, we just need to count unmarked vertices to get the answer. The only question remaining is how to do this.

Marking vertices is straightforward, the only problem is updating true depths and finding the truly deepest vertex. What often helps with this type of problems is renumbering vertices so that each subtree would contain vertices from one interval. For example, with preorder numbering, which can be computed with one DFS — the root gets number 0, its first son gets number 1, then the subtree of the first son is numbered recursively, the second son gets the first unused number, then its subtree is numbered recursively, the 3rd son gets the first unused number again etc. This produces the desired numbering, and updating true depths when vertex v is marked turns into subtracting 1 from all true depths in an interval (corresponding to v's subtree).

So, we need to find the maximum and subtract a constant from an interval. Does that remind you of anything? Yes, a maximum interval/segment tree with lazy updates, able to find the vertex that this maximum belonged to. Lazy propagation is a standard thing, so I won't describe the tree here, but I'll mention that in order to get the vertex, you can keep the maximum of pairs (true depth, corresponding vertex). The time complexity is O(N log N), because all but the query answering/updating takes just linear time and a segment tree can be constructed in O(N).

G. Growling Gears

(diff: easy, code)

Take the derivative of function F(R), find the optimal R and F.


This is one of the most basic problems in calculus. F(R) is a polynomial, which is differentiable everywhere, so its maxima must satisfy F'(R) = 0 and F''(R) < 0.

In this case, it means 2aR = b and -2a < 0. The second condition is always satisfied; the first one has exactly one solution, R_max = b / (2a), with the corresponding maximum F(R_max) = b^2 / (4a) + c.

We want just solutions with R > 0, but in this problem, it's always satisfied.

We get a straightforward O(N) code; doubles are fully sufficient for max. values of F.
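
A minimal sketch of the O(N) loop; the exact I/O handling (and the convention of returning a 1-based index of the best gear) is an assumption here, not taken from the problem statement:

```cpp
// Evaluate each gear's maximum F = b^2 / (4a) + c (attained at R = b / (2a))
// and keep the best one.
#include <bits/stdc++.h>
using namespace std;

int bestGear(const vector<array<double,3>>& gears) {   // (a, b, c) triples
    int best = 0;
    double bestF = -numeric_limits<double>::infinity();
    for (int i = 0; i < (int)gears.size(); ++i) {
        double a = gears[i][0], b = gears[i][1], c = gears[i][2];
        double F = b * b / (4 * a) + c;                // maximum of -a*R^2 + b*R + c
        if (F > bestF) { bestF = F; best = i + 1; }
    }
    return best;
}
```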

H. Highway Hassle

(diff: harder, I'm not sure if "very hard" is accurate)

Lol nope. Look for it down in the comments.

I. Interesting Integers

(diff: medium, code)

N = G_n = F_{n-1}·b + F_{n-2}·a; try all reasonable n and solve the equation N = F_{n-1}b + F_{n-2}a (think about why this identity holds).


We know that N = G_n = F_{n-1}b + F_{n-2}a with F_{-1} = 1, F_0 = 0 (it's not called a linear recurrence without reason); proof by seeing that it satisfies all conditions it needs to satisfy. For a, b > 0, G_n ≥ F_n and Fibonacci numbers increase quickly, so we can try all n for which F_n ≤ N holds.

Let's fix n ≥ 3. Modulo F_{n-1}, we know that N ≡ F_{n-2}·a; consecutive Fibonacci numbers are also coprime (easy proof by induction), so F_{n-2} has a modular inverse and

a = k·F_{n-1} + (N·F_{n-2}^{φ(F_{n-1}) - 1} mod F_{n-1}).

We can precompute Euler's φ of all F_{n-1} ≤ N after factorising them. Then, we can compute the smallest positive a_0 that can lead to a good pair (a, b) using fast exponentiation, and its respective b_0.

We can add any multiple of F_{n-1} to a_0 and subtract the same multiple of F_{n-2} from b_0 to get all the other pairs (a, b). The largest k, for which a_0 + k·F_{n-1} ≤ b_0 - k·F_{n-2} holds, is k = (b_0 - a_0) / (F_{n-1} + F_{n-2}) = (b_0 - a_0) / F_n (integer division). If b_0 < a_0, there's clearly no way to make a good pair (a, b) with this n (a_0 must be the smallest possible a here, but we'd try to subtract something from it). Otherwise, the pair corresponding to this k must be good.

We just need to pick the optimal one of the good pairs (a, b) found this way. There are O(log N) possible n-s; each requires an exponentiation, and all remaining steps are O(1), so we get O(log^2 N) complexity per query, with the factorisations for φ done in precomputation.
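
The fast exponentiation this needs, used to get the modular inverse of F_{n-2} modulo F_{n-1} via Euler's theorem (the modulus isn't prime, so Fermat's little theorem doesn't apply); a minimal sketch:

```cpp
// Binary exponentiation; all values fit in 64 bits since F_{n-1} <= 10^9.
typedef unsigned long long u64;

u64 modpow(u64 b, u64 e, u64 m) {
    u64 r = 1 % m;
    for (b %= m; e > 0; e >>= 1) {
        if (e & 1) r = r * b % m;
        b = b * b % m;
    }
    return r;
}

// a0 = N % F[n-1] * modpow(F[n-2], phi[n-1] - 1, F[n-1]) % F[n-1]
```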

J. Jury Jeopardy

(diff: easy, code)

Trivial simulation of what's described in the statement (walk around the maze and mark cells as empty), just bug-prone. Nothing else to add, you can read my code to understand better.

K. Key to Knowledge

(diff: medium, code)

Meet in the middle, pick one half of correct answers. 31^12 < 10^18, so a vector can be compressed into a 64-bit integer.


The simplest bruteforce would try all possible combinations of correct answers, count the number of correct answers each student got and decide if it's correct. But that's too slow.

A better solution uses the meet-in-the-middle approach. We can bruteforce all combinations of the first M/2 correct answers and count how many of them each student answered correctly, as a vector v of size N. The same can be done for the last M - M/2 answers, obtaining vectors w of how many answers each student needs to answer correctly in the other half. The answer to the problem is the number of pairs (v, w) with v = w.

We could just throw all vectors v into a map<> and compute the answer directly by looping over all w. But to make sure it doesn't TLE, we can convert the vectors into numbers in base M + 1. Since these numbers are at most (M + 1)^N ≤ 10^18 here, 64-bit integers are sufficient for this representation. Comparing vectors in O(N) now turns into comparing integers in O(1).

There are O(2^{M/2}) halves of correct answers to check (O(MN) each), compress (O(N)) and insert into or search in a map (O(M), since the map holds up to 2^{M/2} keys), so the time for this is O(2^{M/2}·MN).
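
The compression step could look like this; counts[] (one entry per student, each at most M) and the function name are mine, not from the problem:

```cpp
// Compressing one "correctly answered in this half" vector into a 64-bit
// key in base M+1.
#include <cstdint>
#include <vector>

uint64_t encode(const std::vector<int>& counts, int M) {
    uint64_t key = 0;
    for (int c : counts) key = key * (M + 1) + c;   // fits: (M+1)^N <= 10^18
    return key;
}
```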


By Xellos, 10 years ago, in English

The last blog got downvote bombed into oblivion, but I have enough contribution to spare.

TC SRM 638 takes place soon (ಠ_ಠ).

Feel free to discuss the problems here after the round ends (because TC apparently doesn't have placeholders for new matches in its forum anymore...).


By Xellos, 10 years ago, translation, in Russian

By Xellos, 10 years ago, in English

This was originally intended as an answer for this comment, but eventually I realized that it's too long (also, the Preview feature takes way too long to process it), so it's a blog post instead.

This has a complexity of O(K^2 log N) and a horrible constant. If there's a better solution, write it in the comments.

UPD: KADR provides a solution using polynomial interpolation in O(K^2).

Problem statement

There are N boxes numbered 1..N; box number i contains i candies and 1 stone. We pick 1 random object from each box. Calculate p mod M (as a modular fraction), where p is the probability that K of these N objects are candies.

Constraints: 1 ≤ K ≤ N ≤ 10^9, 10^9 ≤ M ≤ 2·10^9 and M is prime.

Solution

It's clear that since the probability of picking a candy from box k is k/(k+1) and the probability of not picking one is 1/(k+1), the answer is (S_n denotes the set {1, 2, ..., n})

p = ( Σ_{S ⊆ S_N, |S| = K} ∏_{k ∈ S} k ) / (N+1)! .

Let's denote this sum as P(N, K) and introduce another sum (N / 2 is integer division)

P2(N, K) = Σ_{S ⊆ {N - N/2 + 1, ..., N}, |S| = K} ∏_{k ∈ S} k,

which calculates the same kind of sum as P(N, K), but with an additional restriction: S can only be a subset of the range {N - N/2 + 1, ..., N}, the upper half of S_N.

If that restriction was instead that S can only be a subset of the range {1, ..., N/2}, the sum would obviously be just P(N / 2, K).

Also, let's denote π(S) = ∏_{k ∈ S} k, the product of all elements of S; then P(N, K) = Σ_{S ⊆ S_N, |S| = K} π(S).

Naive solution

If the constraints were small, we could use simple DP on P(N, K). If we don't use N in S, then we sum up π(S') of the same subsets S' as in P(N - 1, K). If N ∈ S, we only need K - 1 more elements from the range S_{N - 1}, so we sum up π(S) = N·π(S') of the same subsets S' as in P(N - 1, K - 1). Together, P(N, K) = P(N - 1, K) + N·P(N - 1, K - 1).

This runs in O(NK) and would obviously give TLE. What we can do instead is splitting the range SN into 2 halves of size N / 2 (and possibly 1 integer N / 2 + 1 in the center for odd N).

Calculating P(N, K) — even N

Suppose we knew how to calculate P2(, ) already. In order to calculate P(N, K) for even N, we can split any disjointly and uniquely into and .

If |S_a| = i, then |S_b| = K - i; any 2 sets S_a and S_b can be merged to form one set S, and so we have, for fixed i,

Σ_{S : |S_a| = i} π(S) = P(N/2, i) · P2(N, K - i).

Since i can be any integer in [0, K], we get

P(N, K) = Σ_{i=0}^{K} P(N/2, i) · P2(N, K - i).

Calculating P(N, K) — odd N

In this case, we can again split S disjointly and uniquely into S_a, S_b as above and another set S_c = S ∩ {N/2 + 1}. We have 2 choices for S_c: either it's empty or it contains N / 2 + 1; if it's empty, then we get the same sum as for even N.

If S_c is non-empty, we can again use that π(S) = π(S_a)π(S_b)π(S_c) = π(S_a)π(S_b)·(N/2 + 1), where we can take N/2 + 1 out of the sum and get a similar sum as for even N — the only change is that if |S_a| = i, then |S_b| = K - 1 - i, and

Σ_{S : S_c ≠ ∅, |S_a| = i} π(S) = (N/2 + 1) · P(N/2, i) · P2(N, K - 1 - i).

Again iterating over all i from 0 to K - 1 and adding the contribution of an empty S_c, we get a formula for odd N:

P(N, K) = Σ_{i=0}^{K} P(N/2, i) · P2(N, K - i) + (N/2 + 1) · Σ_{i=0}^{K-1} P(N/2, i) · P2(N, K - 1 - i).

Calculating P2(N, K)

Let's expand the sum as

P2(N, K) = Σ_{S'' ⊆ S_{N/2}, |S''| = K} ∏_{j ∈ S''} (c + j), where c = N - N/2

(every element of the range {N - N/2 + 1, ..., N} is c + j for exactly one j ∈ S_{N/2}). Expanding each product gives one term c^{K - |S'|}·π(S') for every subset S' ⊆ S''.

It's clear that if |S'| = K - i, then the total contribution of S' to this sum is

c^i · π(S') · #{S : |S| = K, S' ⊆ S ⊆ S_{N/2}},

where #{...} denotes the number of sets S (of size K) satisfying the condition.

How to count that number? We can just exclude set S' from all S and from S_{N/2} (since they all contain S'); then, we can see that we just need to count the number of subsets of S_{N/2} \ S' of size i. That's obviously just C(N/2 - (K - i), i), so

P2(N, K) = Σ_{i=0}^{K} c^i · C(N/2 - K + i, i) · P(N/2, K - i).

Binomial coefficient

The binomial coefficient has kinda big arguments, no? We can't just pre-compute a Pascal triangle or factorials, but i is sufficiently small, so we can just use one of the most basic formulas for binomial coefficients,

C(n, i) = n·(n-1)·...·(n-i+1) / i! ,

and so, for given N, K, compute the necessary binomial coefficients along with corresponding terms in the sum for P2(N, K). It's useful to have modular inverses precomputed; here, we can use that M is a prime larger than K, so by Fermat's little theorem, x^{-1} ≡ x^{M-2} (mod M).
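
A sketch of that computation; here the inverses are obtained on the fly by Fermat's little theorem instead of being precomputed, to keep it short:

```cpp
// C(n, i) mod M for n up to ~1e9 and small i, M prime and larger than i.
// In the real solution you'd precompute the inverses of 1..K instead of
// calling modpow every time.
typedef unsigned long long u64;

u64 modpow(u64 b, u64 e, u64 m) {
    u64 r = 1 % m;
    for (b %= m; e > 0; e >>= 1) {
        if (e & 1) r = r * b % m;
        b = b * b % m;
    }
    return r;
}

u64 binom(u64 n, u64 i, u64 M) {
    u64 r = 1;
    for (u64 j = 1; j <= i; ++j) {
        r = r * ((n - j + 1) % M) % M;      // multiply by (n - j + 1)
        r = r * modpow(j, M - 2, M) % M;    // divide by j via Fermat
    }
    return r;
}
```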

Complexity

We now have a fairly simple way to compute P(N, K) and P2(N, K) in O(K) time with some pre-computation. The important thing is that thanks to integer division, the N in the argument can only be the N from the input divided by powers of 2; since there are just O(log N) such powers that don't lead to the trivial case N = 0, and with fixed N, we only spend O(K^2) time on computing P() and P2() for all possible K, the complexity is O(K^2 log N).

The limits are too tight, though — the biggest problem is that there are a lot of modulo operations, which take a lot of time. One possible improvement is only taking the modulo when computing the sums for P() and P2() after every 4 additions; that gives me a worst-case runtime of around 2.1 seconds (so close to TLE T_T). Also, we can precompute the products c^i·C(N/2 - K + i, i), which saves us some modulo operations compared to precomputing the powers and the binomial coefficients separately. Code


By Xellos, 11 years ago, in English

Badum badum...

The Internet Problem Solving Contest takes place again in 2014! The registration's started (those of you who competed last year should've gotten an e-mail about it) and will be open until the end of the contest.

The contest will take place from 15.6.2014 11:00 UTC to 15.6.2014 16:00 UTC (shifted by an hour from the originally announced 10:00-15:00 UTC) and will be preceded by a short practice session.

You can find the registration form, archive and everything at the contest site.

The rules can be found at the contest site or in my blog post about last year's IPSC. That's why this post is short — you can just read that one and replace the dates.

Also, if you think one blog post isn't a good enough emphasis, here are two more :D

*CONTEST HAS ENDED!* You can find the solutions in the archive.


Here is a summary of some registered teams by CF handles. It's not very long, but better than nothing.

| Team name | Member 1 | Member 2 | Member 3 | Place |
| --- | --- | --- | --- | --- |
| Zenith | I_love_Hoang_Yen | flashmt | ngfam_kongu | 39 |
| MSU Trinity | Zlobober | sankear | malcolm | 31 |
| cutie, dropout & useless woman | vadimmm | Rubanenko | baba_beda | 113 |
| XZ Team | eatmore | AlexFetisov | winger | 26 |
| KPZ | eduardische | popoffka | Alex_2oo8 | 28 |
| Break a leg | Olja | Oleg805 | andgein | 270 |
| Charles University Legion | fhlasek | k21 | simsa.st | 12 |
| SPb SU 4 | Dmitry_Egorov | PavelKunyavskiy | yeputons | 9 |

And separately for single-person teams.

| Team name | Member 1 | Place |
| --- | --- | --- |
| Alexander Udalov | udalov | 28 |
| duolCauqA | Aquacloud | 170 |
| Xellos | Xellos | 15 |
| Snowbear | marat.snowbear | 44 |
| Futures Leaguer | gs12117 | 12 |


By Xellos, 11 years ago, in English

This is a continuation of my blog post about day 1.

You can find the constraints and samples in the original problem statements (in Slovak, but that doesn't matter with numbers :D).

UPD2: The contest is in Gym now (both days merged into 1 contest, one problem omitted due to being too theoretical). It only took a year!

Problems: 4 5 6

Solutions: 4 5 6


By Xellos, 11 years ago, in English

The last USACO contest, that is, US Open, takes place this weekend. You can start the contest anytime in the 3-day window starting at April 4th.

Well, at least it's not during the next weekend (Crazy Weekend of April 2014, there's one every month :D).

You'll be able to access the contest from the USACO front page. You need to register for an account on the site to compete.

I hope I'll be able to post my solutions again here after the contest ends. I also hope that I'll be able to compete at all, because I'll be who knows where (I certainly don't :D) during the weekend and I wonder if there'll be Internet connection...

Also note that this contest lasts 5 hours, unlike the previous ones. For the same number of problems.

Problems and solutions (just gold so far; post the silver/bronze problems if you have access to them):

Problem 1: Fair Photography.

Given N ≤ 10^5 integers in range [1, 8] placed on integer points of the x-axis and an integer K ≥ 2, find the longest interval that satisfies the following conditions:

  • at least K distinct integers occur in the interval,

  • all integers that do occur in it must occur there an equal number of times.

The interval must start and end at some of the given points. Print the maximum length of such an interval, or -1 if there's none.

Solution

If all integers must occur in the interval, the problem can be solved easily in O(NM log N) time, where M = 8 is the number of distinct values. This is basically problem ABCSTR of Codechef; read its editorial for further explanation.

In this case, however, a subset of integers from the range [1, 8] could occur in the chosen interval. Ok, why not just run the algorithm on all subsets? :D We only need to count the differences as in ABCSTR for integers from the present subset, and we'll remove all entries from the map<> whenever we encounter an integer that's not in that subset — this way, we can make sure that none of these integers can occur in the selected substring. We get complexity O(2^M·NM log N), which isn't particularly great.

We can get rid of the logarithmic factor by storing hashes of vector<>s in the unordered_map<> (actually, we can't get rid of it in the contest without coding an entire hash table... shame on USACO for not updating the compiler). A polynomial hash of the elements of the vector<> by 2 moduli (10^9 + 7 and 10^9 + 9, for example) is strong enough.

We can also get rid of the factor M by updating hashes in O(1). The only 2 updates we need to do are: add 1 to one element of the vector; subtract 1 from all its elements. For polynomial hashes (H(A) = Σ_i (A[i] + N)·p^{i-1} of vector A, where the +N is in order to have all terms non-zero and thus remove obvious collisions), these operations correspond to adding p^{i-1} and subtracting Σ_i p^{i-1} (you don't have to calculate that sum, just take the difference of hashes of A filled with 1-s and filled with 0-s).

Notice that so far, we can't check if the selected interval actually contains K distinct integers. We can separately calculate for each right endpoint of an interval the largest possible left endpoint for which this condition is satisfied. That can be done by processing points in sorted order and, for each integer, storing its rightmost occurrence in a map<>; the rightmost left endpoint is the K-th last of these occurrences and can be found by iterating over the map<>. Adding a point only leads to changing 1 entry of the map<>, so this takes O(M) time per point (the whole algorithm now takes O(NM) time). Afterwards, we can easily check if some interval contains at least K distinct integers.

This leads to a more efficient algorithm. Notice that any point can only be the right endpoint in O(M) different subsets — if you decrease the left endpoint, you can only add integers to the subset, which can be done just O(M) times. The above mentioned algorithm gives us an easy way to compute these subsets, even. Similarly, any point can only be the left endpoint in O(M) different subsets, which can be found by running that same algorithm in the opposite direction.

Let's not try subsets separately, but keep 2^M unordered_map<>s, in which we'll store the hashes of vector<>s.

We need a way that allows us to modify hashes quickly — for example, the i-th element of the vector<> is the number of occurrences of integer i among the first j points if it isn't in the subset; otherwise, it's the difference between the number of occurrences of i and of the smallest integer in the subset. Note that when adding integers into the subset, these numbers change, but only all in the subset by the same number c, which makes polynomial hashes rather easy to modify — decreasing all numbers in the subset S by c means decreasing the hash by c·Σ_{i ∈ S} p^{i-1}, where these sums for all subsets can be pre-computed.

Try all subsets which have j as the right endpoint and contain at least K elements, in the order in which they go when we move the left endpoint to the left; for each of them, find out if there was some vector<> before that could occur in that subset (was placed in the hash-map of that subset) and was identical to the one we have now — such identical vectors correspond to balanced intervals. Then, add our current vector<> to the hashmaps of all subsets that can have the j + 1-st (think why not j-th) point as their left endpoint.

The resulting complexity is O(2^M + NM). There are, of course, simpler algorithms where you don't use hashes or clever updates of them, at the cost of larger powers of M.

Problem 2

There's a laser placed at point (0, 0) in a plane, shining in the direction of the positive y-axis, and N ≤ 10^5 mirrors (reflecting on both sides) placed at integer points and tilted at 45° to the coordinate axes, that is, as '/' and '\'. The laser beam reflects from these mirrors. You should add one such mirror so that the laser beam passes through a given point (Bx, By). There are also additional restrictions:

  • the laser beam can't pass through the point (0, 0) again (this condition is satisfied in the initial configuration),

  • the new mirror can't be placed at the same point as one of the original ones.

It's also guaranteed that the laser beam doesn't pass through (Bx, By) in the initial configuration. Print the number of points at which the new mirror can be placed. Coordinates can be large, up to 10^9 in absolute value.

Solution

It's obvious that the laser beam will always be parallel to one coordinate axis.

Store the mirrors in every row (equal y-coordinate) in sorted order (by x-coordinate), and in every column (equal x-coordinate) also in sorted order (by y-coordinate). Trace the beam's path by remembering from which point in which direction it went and finding the first mirror in its path, until you find out that it started to loop or went out of bounds.

Trace back the beam's path that could lead to it hitting the point (Bx, By) from each of the 4 possible directions. This time, stop when it starts to loop (including hitting the point (Bx, By) again), goes out of bounds or hits point (0, 0).

We've obtained line segments of 2 types — "real" and "desired" — and 2 orientations — vertical and horizontal. The new mirror should and can be placed at the intersection of any two segments of different types and different orientations, because we can pick the direction in which it is placed so that it tilts the "real" beam by 90° and makes it go in the direction of the "desired" one into the point (Bx, By).

The problem now reduces to a classic "compute the number of intersections between vertical and horizontal segments". This can be solved by compressing y-coordinates, sorting and sweeplining the segments by x-coordinate. Then, iterate over the vertical segments while maintaining how many horizontal segments currently cover each y-coordinate; the number of horizontal segments that intersect a vertical one can be calculated using a Fenwick tree. You can keep track of the horizontal segments to add using 2 pointers and of the ones to remove using a map<> of their left endpoints, or just replace each horizontal segment by 2 half-segments which add 1 and -1 to the Fenwick tree.

Time complexity: O(N log N).
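
A sketch of the intersection counting, assuming y-coordinates are already compressed to 0..Y-1:

```cpp
// Sweep over x with a Fenwick tree over y: horizontal segments are added
// and removed as point updates, vertical segments are range-sum queries.
#include <bits/stdc++.h>
using namespace std;

struct BIT {
    vector<int> t;
    BIT(int n) : t(n + 1) {}
    void add(int i, int v) { for (++i; i < (int)t.size(); i += i & -i) t[i] += v; }
    int sum(int i) { int r = 0; for (++i; i > 0; i -= i & -i) r += t[i]; return r; }
    int sum(int l, int r) { return sum(r) - (l ? sum(l - 1) : 0); }
};

// horizontal: (y, x1, x2); vertical: (x, y1, y2)
long long intersections(int Y, const vector<array<int,3>>& hor,
                               const vector<array<int,3>>& ver) {
    vector<tuple<int,int,int,int>> ev;       // (x, phase, a, b)
    for (auto& h : hor) {
        ev.emplace_back(h[1], 0, h[0], +1);  // starts covering y = h[0]
        ev.emplace_back(h[2], 2, h[0], -1);  // stops covering after its end
    }
    for (auto& v : ver) ev.emplace_back(v[0], 1, v[1], v[2]);
    sort(ev.begin(), ev.end());              // at equal x: add, query, remove
    BIT bit(Y);
    long long ans = 0;
    for (auto& [x, phase, a, b] : ev) {
        if (phase == 1) ans += bit.sum(a, b);  // query one vertical segment
        else bit.add(a, b);                    // add/remove one horizontal one
    }
    return ans;
}
```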

Problem 3

You're given a rooted tree of N ≤ 2·10^4 vertices. Each vertex can contain a digit from 0 to 9, inclusive. You're also given M ≤ 5·10^4 rules as pairs (vertex, 5-digit string); string s at vertex x says that if you read the first 5 letters from x (inclusive) to the root, you have to get a string different from s.

The tree is rooted at vertex 1 and vertex i has parent p(i) < i.

Print the number of ways in which you can't assign digits to vertices of the tree (i.e. such that at least 1 of the M rules is broken), modulo 1234567.

Solution

Instead of computing the number of ways in which we can't assign the digits, we'll compute the number of ways in which we can; the answer is then 10^N minus that (there are 10^N ways to assign digits, in total).

This can be solved by dynamic programming. Let DP[i][j] be the answer for the subtree rooted at vertex i, in case the 4 digits above that vertex are read as j, but this time from top to bottom. There are at most 10 digits we could put in vertex i; for each digit d that we can put there, the number of ways to fill the subtree is the product of DP[x][(10j + d)%10000] over all sons x, because we can fill the subtrees of sons independently and the last 4 digits read on the path from the root to each son will be (10j + d)%10000. Then, DP[i][j] is the sum of these results over all d. We can then find the final answer in DP[1][j] (we can pick any j, it's like creating a path of 4 vertices above the root and choosing the digits in them; there are obviously no rules starting at vertices with depth < 4 below the root, so these digits won't affect anything).

This approach has complexity O(10^5·N) — every edge of the tree is used O(10^5) times.

We can improve this by noticing that in case all digits are allowed, the first digit of j (taken as a 4-digit number with leading zeroes) doesn't really matter. In fact, there are just M possible states of the DP where it can matter, because each of the M rules forbids one digit d for one state of the DP. So if we calculated a DP A[i][j'] over 3-digit numbers j', then if some state of the original DP has all digits allowed, we can immediately say that DP[i][j] = A[i][j%1000]. If a state doesn't have all digits allowed, we can just compute it the same way we did before.

This approach gives a complexity of O(10^4·N + 100·M), which I dare say could work in a second or two, because it's just DP, with really just that many operations.

It'd be even better if we could always replace taking the modulo by subtraction if necessary, but we can't, because we need to compute products sometimes, not just sums. The large modulus (around 10^6) also forces us to use 64-bit variables. BTW, this got me thinking about how the Chinese Remainder Theorem can sometimes be useful — if the modulus is composite, we can decompose it into prime powers, compute partial answers modulo each of them and reconstruct the final answer from the partial ones by CRT. In this case, 1234567 = 127·9721, which would allow us to use ints, but I doubt it'd be actually helpful.

I don't know USACO's time/memory limits, but I suppose this would still need memory optimization. If the tree was thin enough (few vertices at any depth), the DP could be done by decreasing depth, where we'd remember just the answers for states of vertices in the last 2 depths, but I'm not sure what to do in the general case.

UPD: The results are up. Optics and Code have really bad results...


By Xellos, 11 years ago, in English

This Thursday (27.3.2014), the first (theoretical) day of the national round of Slovak Olympiad in Informatics took place (and I appeared there, much to everyone's surprise :D). So, I bring you the problems and solutions. The second, practical day, can be found here.

UPD: The original version of the problems and solutions is up. In case you were curious, final results. In Slovak, of course :D

You can have fun trying to find CF handles of some contestants :D

UPD2: The contest is in Gym now (both days merged into 1 contest, one problem omitted due to being too theoretical). It only took a year!

Problems: 1 2 3

Solutions: 1 2 3


By Xellos, 11 years ago, in English

Continuing the recent streak of competitions, TopCoder SRM 612 takes place at this time (in Europe, this night). Still, that's no reason not to participate!


By Xellos, 11 years ago, in English

Note that this weekend (Friday 7.3. to Monday 10.3. inclusive, give or take time zones), this year's last round of USACO (before US Open) takes place.

You'll be able to access the contest from the USACO page. Registration is required.

Feel free to discuss the problems here after the contest ends (which is, btw, not very clear, because there are no official start/end times posted — around Tuesday afternoon in UTC). I'll probably also post my solutions here.

Also notice how similar this is to my blog post about COCI :D. This is one of the usual Crazy Weekends — there's COCI, USACO and start of the Codechef Long Challenge. It's like the contest organizers do this on purpose...

UPD: The results are up now!

UPD: Negative votes... trolls growing strong here!

UPD^2: Trolls are gone now. Lol.

UPD: Solutions (gold)

The Lazy Cow (code)

We're given N points (x_i, y_i) in a plane, each with a weight g_i, and are supposed to find a point (x, y) which maximizes the sum of g_i of points located at a distance of at most K from (x, y). The distances are Manhattan ones — the distance of point (x_i, y_i) from our (x, y) is |x_i - x| + |y_i - y|. Constraints: N ≤ 10^5, coordinates 0 ≤ x_i, y_i ≤ 10^6, 1 ≤ K ≤ 2·10^6.

Let's say that we picked a point (x, y). We can transform coordinates (x, y) to (x', y') = (x + y, x - y) and similarly for all points i. In these new coordinates, the distance of our point from any given one is (evaluating all possibilities for what the abs. values can look like) max(|x' - x_i'|, |y' - y_i'|), and the region for which it's ≤ K is a rectangle around (x', y').

Let's sort the given points by x'-coordinate. Now, we can use a sweepline + interval tree, which supports the operations "add value v to all elements of interval I" and "get maximum of all elements". We'll try adding the points one by one, in the order of increasing x'-coordinate; in the interval tree, element y' will remember the sum of g_i of points we could get by picking y' as the other coordinate of our point. That's done by adding g_i to all elements from y_i' - K to y_i' + K inclusive; removing a point is the same, just with subtracting g_i from all those elements.

We need to remove the points whose x_i'-coordinates are < x' - 2K (we're basically forcing the right side of our rectangle to touch some point and store in the interval tree only answers accounting for points that have an x_i'-coordinate ≥ x' - 2K; if the rectangle didn't touch a point, we could move it to the left without changing the result). Since the points are already sorted, we can use two pointers... well, just one additional "pointer", which points to the leftmost point still in the interval tree.

The time complexity of constructing an interval tree is O(maxX); each query can be processed in O(log maxX) time and the sorting takes O(N log N), so the resulting complexity is O(maxX + N log maxX). With y'-coordinate compression, we can get O(N log N) runtime.

Sabotage (code)

We're given an array A of size N ≤ 10^5, and are asked to compute the minimum arithmetic average of all elements of this array after removing a substring (contiguous subsequence) from it. Plus some additional constraints: the elements of this array are non-negative, at most 10^4, and we can't remove the first or last element of A.

This type of problems (searching for some arithmetic average) can usually be solved with bin-search. It's clear that if we can achieve an average of at most x, then we can also achieve an average of at most y for any y > x. So let's fix x and check if we can achieve ≤ x.

Prefix sums are useful. Let S[i] denote the sum of the first i elements (S[0] = 0). Let's say that we erased the subarray A[i..j] (2 ≤ i ≤ j ≤ N - 1) and got an average ≤ x. It means that

(S[i - 1] - x(i - 1)) + (S[N] - S[j] - x(N - j)) ≤ 0.

Let's say that we fixed j. Now, the best choice (minimizing the left side of this inequality) is obviously picking the smallest S[i - 1] - x(i - 1). This gives us a simple way of checking if this inequality ever holds: iterate over j from 2 to N - 1, remember and update the minimum of all S[i - 1] - x(i - 1) (always just taking the smaller of the minimum found so far and S[j - 1] - x(j - 1)) and check if the smallest value for the given j is ≤ 0. If it never is for the given x, there's no way to get an average ≤ x.

This check works in O(N) time; together with the binsearch, it's O(N log (maxA / ε)) time, where ε is the required precision.
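
A sketch of check(x), with the prefix sums S[] precomputed:

```cpp
// Can we remove some A[i..j] (2 <= i <= j <= N-1) so that the average of
// the remaining elements is <= x?
#include <bits/stdc++.h>
using namespace std;

bool check(const vector<long long>& S, int N, double x) {
    // S[k] = A[1] + ... + A[k], S[0] = 0
    double best = 1e18;                       // min over i of S[i-1] - x*(i-1)
    for (int j = 2; j <= N - 1; ++j) {
        best = min(best, S[j - 1] - x * (j - 1));
        if (best + (S[N] - S[j]) - x * (N - j) <= 0) return true;
    }
    return false;
}
```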

Counting friends (code)

We're given an array of N + 1 elements (N ≤ 500) and are supposed to find, and list in sorted order, all possible elements of this array such that after removing said element, the remaining ones can be degrees of a simple undirected graph.

Why not try to remove each element and check if the remaining ones are degrees of a graph? We have a simple way of checking whether they are: the Erdős–Gallai theorem, which provides a necessary and sufficient condition.

Let's sort the numbers beforehand. Now, we can get a sorted array after removing an element in O(N) time without much difficulty. For each removed element, we can just compute the sums on both sides of the inequality from the Erdős–Gallai theorem for all k (possibly with some optimizations), which gives us an O(N^3) time complexity.
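
A sketch of the Erdős–Gallai check for one candidate degree array (sorted in non-increasing order); the outer removal loop is straightforward:

```cpp
// d[0..n-1] is graphical iff the degree sum is even and, for every k,
// sum of the k largest degrees <= k(k-1) + sum_{i>k} min(d[i], k).
#include <bits/stdc++.h>
using namespace std;

bool graphical(const vector<int>& d) {       // sorted in non-increasing order
    int n = d.size();
    long long total = accumulate(d.begin(), d.end(), 0LL);
    if (total % 2 != 0) return false;
    long long left = 0;
    for (int k = 1; k <= n; ++k) {
        left += d[k - 1];                    // sum of the k largest degrees
        long long right = (long long)k * (k - 1);
        for (int i = k; i < n; ++i) right += min(d[i], k);
        if (left > right) return false;
    }
    return true;
}
```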


By Xellos, 11 years ago, in English

Note that this Saturday, this year's last (before the Olympiad's Open) round of COCI takes place.

Registration and entry into the contest: here. Watch out — don't register for HONI (local version), but COCI.

Feel free to discuss problems here after the contest ends. I'll probably post my solutions here, too.

The contest is over. The results are accessible in your account on evaluator.hsin.hr under the Results tab. The top 5 are:

  1. bmerry
  2. andreihh
  3. eduardische
  4. svanidz1
  5. Topi Talvitie

Solutions

(all full, except the last problem; you can find the solution of that problem in the comments)

VJEKO

(code)

Do I even have to write anything? For every word on the input, check if the part of the pattern before the asterisk is its prefix and the part after it is its suffix. If they both are, the answer's yes. Can be implemented more comfortably using the STL substr() function. Any bruteforce works.
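
A sketch of the check using substr():

```cpp
// pat is the pattern split around its single '*'; word matches iff the part
// before the asterisk is its prefix and the part after it is its suffix.
#include <bits/stdc++.h>
using namespace std;

bool matches(const string& word, const string& pat) {
    size_t star = pat.find('*');
    string pre = pat.substr(0, star), suf = pat.substr(star + 1);
    return word.size() >= pre.size() + suf.size()
        && word.substr(0, pre.size()) == pre
        && word.substr(word.size() - suf.size()) == suf;
}
```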

FONT

(code)

Assign a bitmask of 26 bits to each word — the i-th bit is 1 iff the i-th letter of the alphabet is present in the string. It's obvious that we're asked to compute sets of words whose bitwise OR is equal to 2^26 - 1.

We face 2 difficulties here. First, the time limit is too low for an O(N·2^N) solution. We need an O(2^N) one, using something like Gray code. But we can't do a classic Gray code memoization, because we don't have enough memory. We need to find some kind of trick to get a full score. I iterated over all numbers with N - 1 bits (corresponding to subsets of the last N - 1 words) and used the lastone() function from Fenwick trees to always get the last non-zero bit of the number; I remembered the bitmasks of the words corresponding to all bits (in 2 arrays, to save memory; it just adds an if-else condition) and used bitwise magic to make my program faster. When I know the OR value of bitmasks of all words in that subset, I just check separately whether that value's good and whether its OR with the last word's bitmask is good. Time complexity: O(2^N); memory: O(2^{N/2}).

The TEST option helps a lot in estimating what's fast enough.

KOCKICE

(code)

We can only choose the middle column's height. Let's say that this height is H; then, the cost for changing the i-th column of array R is |R[i] - H|, and similarly for array S. So then, let's define a new array A with 2N elements — R and S concatenated — and sort it. If we subtracted H from all elements of A (the sorted order of elements remains the same), the result would be the sum of absolute values of all elements of A.

If, after subtracting H > 0 from all elements of A, we got A[N + 1] < 0, then subtracting H - 1 would give us a smaller answer. That's because all absolute values of A[1..N + 1] would decrease and at most N - 1 remaining ones could increase — the answer would decrease by at least 1. The same applies for A[N] > 0 (so A[N + 1] ≥ A[N] > 0) and subtracting H + 1 instead of H ≥ 0.

That means: if A[N + 1] ≤ 0, then H = 0 (H < 0 is digging a hole in the ground...); if A[N + 1] > 0, we want H = A[N + 1]. Pick the right H, subtract and calculate the answer. Time complexity: O(N log N) due to sorting.
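
The whole computation as a short sketch (0-indexed, so the text's A[N + 1] is A[N] here):

```cpp
// Concatenate R and S, sort, pick H = max(A[N], 0), sum the deviations.
#include <bits/stdc++.h>
using namespace std;

long long minCost(const vector<long long>& R, const vector<long long>& S) {
    vector<long long> A(R);
    A.insert(A.end(), S.begin(), S.end());
    sort(A.begin(), A.end());
    long long H = max(A[R.size()], 0LL);   // chosen height of the middle column
    long long cost = 0;
    for (long long a : A) cost += llabs(a - H);
    return cost;
}
```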

KRUZNICE

(code)

This problem would be called the same in Slovak :D

Observe that we don't need whole circles, the intervals that are formed by their intersection (as full circles) with the x-axis are sufficient. That's because the circles can touch only at the x-axis (proof to the reader). Each circle cuts out one new region from the whole plane or a circle that it's inside of; for each circle c, one more region's added if there are two or more smaller circles that form a chain from one to another end of c. In this case, a chain of circles (or intervals) is a sequence of circles that touch (or intervals with a common endpoint). It's obvious that if such a chain exists, the inside of circle c is cut into 2 symmetric halves; otherwise, those 2 halves are connected, because no 2 circles can intersect, and no other regions can be cut out (once again, proof to the reader; during a contest in this case, an insight into this is sufficient and there's no need for solid proofs).

This sounds like a graph with endpoints of intervals being vertices and intervals connecting the endpoints as edges — participants of Codechef Feb Long Challenge would agree :D. A chain in this case is a cycle formed when adding circles (intervals, edges) by increasing radius.

This kind of adding edges sounds a lot like DSU. In fact, you can apply the DSU algorithm (the answer is then N + 1, i.e. one region for each circle plus the region that remains of the initial plane, plus the number of cyclic edges found during DSU), or just realize that the number of cyclic edges is given just by the number of connected components and vertices of the graph.

Time complexity: O(N log N), since we want to compress the x-coordinates or store them in a map<>.
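
A DSU sketch of that counting, assuming the endpoints' x-coordinates are already compressed to 0..V-1:

```cpp
// Each circle is an edge between its two x-axis endpoints; every edge that
// closes a cycle (a "chain") cuts out one extra region.
#include <bits/stdc++.h>
using namespace std;

struct DSU {
    vector<int> p;
    DSU(int n) : p(n) { iota(p.begin(), p.end(), 0); }
    int find(int x) { return p[x] == x ? x : p[x] = find(p[x]); }
    bool unite(int a, int b) {               // false if already connected
        a = find(a); b = find(b);
        if (a == b) return false;
        p[a] = b; return true;
    }
};

long long regions(int V, const vector<pair<int,int>>& circles) {
    DSU d(V);
    long long ans = circles.size() + 1;      // each circle cuts out one region
    for (auto& c : circles)
        if (!d.unite(c.first, c.second)) ++ans;  // cyclic edge: one more region
    return ans;
}
```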

HASH

(code)

This problem was another pain in the ass when optimizing time and memory. Seriously, was it so hard to use N ≤ 8 or a time limit of 3 seconds?

My solution is a bruteforce using the meet-in-the-middle trick. The point is that we can calculate hashes of the first half of the word (N/2 letters, rounded up) for all ways of picking those letters, and also what those hashes would have to be in order to give K after hashing the remaining letters.

The first bruteforce can be done by remembering hashes in a 2D array: H[i] stores all hashes that we can get with i letters (26^i elements), and we can get the elements of H[i + 1] by iterating over all elements of H[i] and trying each of the 26 possibilities for the (i + 1)-st letter. It's clear that the hash of every word with ≤ i letters will be computed exactly once and in O(1) time, and the number of those words, and so the time complexity of this part, is O(26^{N/2}).

What about the second bruteforce? Xor is its own inverse and 33 is coprime to the modulus, so it has a modular inverse; its value mod 2^M is I = 33^{φ(2^M) - 1} = 33^{2^{M-1} - 1}. So if the last letter had an ordinal value of x, then the hash of our word without that letter is ((K xor x)·I) mod 2^M (the modulo trashes all but the last M bits, so the xor doesn't affect it, and multiplying by the modular inverse is an inverse operation to multiplying by 33 mod 2^M, so it gives us the hash we're looking for, by definition). This is almost identical to the first bruteforce, we just start with K instead of 0 and do the operations (xor, multiplication) in reverse order. The time complexity is the same.

How to merge the 2 results? One more array with some 2^25 ≈ 3.3·10^7 elements can still fit into the memory. This array, P, will store the number of occurrences of each hash among the first bruteforce's results (the hashes of the first halves). Traverse all those hashes H, for each of them increment P[H], then traverse the results of the second bruteforce (the required hashes) and for hash H add P[H] to the answer. Once again, every word with hash K will be counted in the answer exactly once.

Time and space complexity: O(26^{N/2} + 2^M). It actually fits into the limit, if you don't do something stupid like sorting the bruteforces' resulting hashes and using 2 pointers to compute the answer (quicksort is fast, but not fast enough, even if it cuts off the 2^M factor from the complexities).

GRASKRIZA

(code)

You can look for an optimal solution in the comments. But let me tell you that a stupid bruteforce (add traffic lights to all grid points of the smallest rectangle bounding the input traffic lights, that are above any already existing traffic light) gives 96 points :D

I tell you, I don't miss the times (first COCI contest of 2013/14) when a solution that was significantly faster than a bruteforce got 30% of the points just like a bruteforce.


By Xellos, 11 years ago, in English

This is a repost of the blog post from before Black Day. I'm lazy, though, so it's just printscreened from Google Cache :D. It got less sharp, but no less readable. Now in an improved version! (still a picture :D)

I really liked this problem (394E - Lightbulb for Minister), because it involves taking derivatives. You don't see this much in programming contests :D. Therefore, I'd like to post my solution in detail.


By Xellos, 11 years ago, in English

I'm surprised there's no blog post to this... just so nobody misses it:

Round 3 of Croatian Open Competition in Informatics takes place at this time today, and lasts 3 hours.

You can log in to the contest site at evaluator.hsin.hr, or register an account there if you don't have one already. Make sure to select the right contest from the list when logging in — it's not HONI, but COCI.

Everything related to the contest should be posted here in a few weeks' time (except the solutions, it seems).

Feel free to discuss the problems here after the contest is over.


By Xellos, 11 years ago, in English

Welcome to The Editorial!

Keep the upvotes piling up! muhehe

IZ.COMPLETE.

A. Rasheda And The Zeriba


By Xellos, 11 years ago, in English

A. The Power of the Dark Side

(difficulty: easy)

Let's sort the parameters of every Jedi: a ≥ b ≥ c. The "converted" Jedi obviously wants to use his strongest 2 parameters (a_k, b_k) against the opponent's weakest 2 (b_i, c_i) to get 2 victories as assuredly as possible; besides, the optimal order is a_k against b_i and b_k against c_i, because if he can win when using them in the opposite order, then he'll win after swapping them, too. So we want to find all Jedis that have a_k > b_i and b_k > c_i for all i.

That's simple, because those are the same conditions as a_k > B = max(b_i) and b_k > C = max(c_i). When processing the input, we can count B, C and then we just check for every Jedi if he satisfies these 2 conditions, in linear time.

I decided during the contest to code a solution which is a bit slower, but more powerful — it allows us to answer stronger versions of this problem, like "which Jedis can we choose if we're satisfied with them only defeating K other Jedis". The trick I use here is that every Jedi creates 2 pairs: (b_i, c_i) and (a_i, b_i); then, for every pair of the 2nd type (and the corresponding Jedi), we want to count how many pairs of the 1st type are smaller in both properties. For that, we sort the pairs by the first property and process them in that order; now, we only need to count already processed pairs of the 1st type smaller in the 2nd property, which can be done easily using a Fenwick tree, in O(N log N) time.
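
The simple linear check from the first paragraph as a sketch (parameters of each Jedi assumed already sorted so that a ≥ b ≥ c):

```cpp
// Jedi k defeats everyone iff a_k > max(b_i) and b_k > max(c_i);
// returns 1-based indices of such Jedis.
#include <bits/stdc++.h>
using namespace std;

vector<int> winners(const vector<array<long long,3>>& J) {  // (a, b, c)
    long long B = LLONG_MIN, C = LLONG_MIN;
    for (auto& j : J) { B = max(B, j[1]); C = max(C, j[2]); }
    vector<int> res;
    for (int k = 0; k < (int)J.size(); ++k)
        if (J[k][0] > B && J[k][1] > C) res.push_back(k + 1);
    return res;
}
```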

B. Similar Strings

(difficulty: easy-medium)

Let's call an "index list" of string s and character c a sequence A of all indices i such that s[i] = c (e.g. s[A[j]] = c for all j and if s[i] = c, then A[j] = i for some j).

The required mapping exists between strings s, t when the index lists of all characters of s form a permutation of those of t. That's because a bijection between characters is just a permutation of index lists. So we could generate all index lists (in an array B) for every string, sort them, and check if 2 strings are similar simply by checking if B[s] = B[t].

The lists can get nasty large, though — just inserting them into a map will TLE for sure. We could at least use the fact that there are not many distinct characters in a string, so B will have reasonably few index lists.

My approach was hashing the values in every index list polynomially. That is, let's denote a_i = (Σ_j A[j]·p^j) mod M, where p is a prime (I use 999983; it's a good idea to have the prime larger than all possible A[i]) and M is a really large prime (I use 10^9+9); then, we replace the index list B[i] by a_i.

Chances are high that no two hashes for distinct index lists would be identical, so comparing two a_i is the same as comparing their corresponding index lists. And since integer comparison works really fast compared to the vector one, it's enough to pass the tests when we note, in an STL map, how many strings have this B; for every B shared by x strings, it means those x strings are similar, so the answer is a sum of x(x - 1)/2 over all B.

Time complexity: O(L + Nσ log N) (σ is the alphabet size, at most 26 in this case, and the maximum size of any B, and L is the size of the input), for reading the input, producing B for every string, and mapping them; inserting a vector of size S into a map takes O(S log N), because comparing vectors takes up to O(S) time.

C. Victor's Research

(diff: easy-medium)

Let's denote the prefix sums B[j] = A[1] + ... + A[j], with B[0] = 0. Then, we're looking for all pairs (i, j) (0 < i ≤ j ≤ N) such that B[j] - B[i - 1] = S. In other words, for every B[j], we're looking for the number of indices i ≤ j such that B[i - 1] = B[j] - S. And since there are just N + 1 prefix sums (including B[0]), we can store them in an STL map.

Just count the current sum (s_a) as we process the input, then count how many times we mapped the prefix sums to s_a - S, add it to the answer and add s_a to the map. Time: O(N log N).

Note: with an STL unordered_map (hashmap), map operations take expected O(1) time, so we can solve the problem in optimal O(N) time.
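
A minimal C++ sketch (assuming the input is N and S followed by the N numbers):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n; long long S;
    cin >> n >> S;                          // assumed input format
    unordered_map<long long, long long> cnt;
    cnt[0] = 1;                             // the empty prefix: B[0] = 0
    long long sa = 0, answer = 0;
    for (int i = 0; i < n; i++) {
        long long a; cin >> a;
        sa += a;                            // sa = B[i]
        auto it = cnt.find(sa - S);         // earlier prefixes with B[j] = B[i] - S
        if (it != cnt.end()) answer += it->second;
        cnt[sa]++;
    }
    cout << answer << '\n';
}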

D. Hamming Distance

(diff: easy)

It's clear that characters at different indices don't affect the answer, so we can choose for every position in the resulting string the character that minimizes this position's contribution to Hamming distance.

When we choose a character that's entirely different from the 3 we have, the contribution is 3. If we choose one of those 3, it's at most 2, so that's better already. If there are 2 identical characters, choosing that character makes the distance at most 1, while choosing a different one makes it at least 2, so we want the repeated character. If all 3 chars are different, the contribution to the Hamming distance is 2 regardless of which one we choose.

Following those rules, we can choose the best char in O(1) time, so O(N) for the whole input.
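
As a sketch (assuming three equal-length strings on input; whether the task wants the distance, the string, or both is an assumption here):

#include <bits/stdc++.h>
using namespace std;

int main() {
    string a, b, c; cin >> a >> b >> c;     // assumed: three strings of equal length
    long long dist = 0;
    string best(a.size(), '?');
    for (size_t i = 0; i < a.size(); i++) {
        if (a[i] == b[i] || a[i] == c[i]) best[i] = a[i];   // a repeated (or all equal)
        else if (b[i] == c[i]) best[i] = b[i];              // b repeated
        else best[i] = a[i];                                // all distinct: any choice gives 2
        dist += (best[i] != a[i]) + (best[i] != b[i]) + (best[i] != c[i]);
    }
    cout << dist << '\n' << best << '\n';   // output format assumed
}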

E. Of Groups and Rights

(diff: medium)

Firstly, realize that the groups form a rooted tree, and we're given the ordered pairs (parent, son). Every node of the tree has a state: "allowed", "denied", or "not set" (i.e. inherited from its parent).

The first thing we should do is complete the list of states. If the root (0) has no state set, it gets "denied". Then DFS the tree; when moving into a son whose state is not set, set it to the parent's state (which is always set, because the root's state is set and we only move into a son after setting its state).

Now, we know that if a group contains a user with access denied, the group can't have access allowed, because that would allow access to that user. The groups a user is contained in correspond, in graph-theory terms, to the vertices on the path from a leaf (representing the user) to the root. So do a BFS from all leaves with denied access; always move to the parent of the current vertex, and mark it as "can't have access allowed in the resulting structure".

The nodes that haven't been marked in the previous step can always have access allowed without violating the rules, because all users in their subtrees have access allowed. But we need to mark as few vertices as possible with "allowed". And it turns out that we need to mark exactly those vertices that haven't been marked with "can't have access allowed" and whose parents have been marked so (this 2nd condition is always true for the root — imagine it as the root being the son of a superroot that can't have access allowed).

That's because if we set "access allowed" on some node whose parent can also have it set, it's at least as good to allow the parent instead — the number of nodes with "allowed" doesn't increase, and the sum of depths decreases. This setting also gives every user with access allowed in the original structure allowed access in the new one — tracing the path from such a leaf towards the root, we eventually reach either a vertex that can't have access allowed or the root; the vertex on the path just before that point (there's at least one vertex on the path that can be allowed — the leaf itself) is exactly one we set to "allowed". And we can't manage with fewer vertices set to "allowed", because then we'd have to remove one of those chosen so far, and that would leave the leaves under it (at least one) that want access allowed without it.

We only do some DFS and BFS (I did 2xBFS, though), so the time required is O(N).
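
A C++ sketch of the whole procedure; the input format (parent of each non-root node, plus a state per node: 1 = allowed, -1 = denied, 0 = not set) and the output are assumptions made for illustration:

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n; cin >> n;
    vector<int> par(n, -1), state(n);
    vector<vector<int>> sons(n);
    for (int v = 0; v < n; v++) {
        if (v > 0) { cin >> par[v]; sons[par[v]].push_back(v); }
        cin >> state[v];                          // 1 allowed, -1 denied, 0 not set
    }
    if (state[0] == 0) state[0] = -1;             // an unset root means "denied"
    // 1) complete the states top-down (BFS; parents are processed before sons)
    vector<int> order = {0};
    for (int q = 0; q < (int)order.size(); q++)
        for (int s : sons[order[q]]) {
            if (state[s] == 0) state[s] = state[order[q]];
            order.push_back(s);
        }
    // 2) mark "can't be allowed" bottom-up, starting from denied leaves (users)
    vector<bool> cant(n, false);
    for (int q = n - 1; q >= 0; q--) {
        int v = order[q];
        if (sons[v].empty() && state[v] == -1) cant[v] = true;
        if (cant[v] && v > 0) cant[par[v]] = true;
    }
    // 3) allow exactly the unmarked nodes whose parent is marked (the root counts
    //    as having a marked "superroot" above it)
    for (int v = 0; v < n; v++)
        if (!cant[v] && (v == 0 || cant[par[v]]))
            cout << v << '\n';                    // nodes to set "allowed"
}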

F. Battle Fury

(diff: medium)

The ideal approach to this problem is binary search, because if some number of hits is sufficient, any larger number is also sufficient.

Let's say we need to check if h hits are sufficient. We can split each hit into 2 parts: q to every monster and p - q to the monster we actually hit. So first of all, we can decrease the HP of every monster by hq. Then we need at most h of the "p - q" parts to bring the HP of all still-living monsters to zero or below. But we don't have much of a choice there; if a monster still has x > 0 HP, then we need ⌈x / (p - q)⌉ hits. Watch out for the case p = q — here, we need all monsters to be already dead after the h "-q" hits. If the sum of hits we need to finish all monsters off is at most h, this value is sufficient; otherwise, it isn't.

We only need O(log H) such checks (binary search over h, where H is an upper bound on the answer, e.g. the largest HP for q ≥ 1), and one check can be done in linear time, so the total time is O(N log H).
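
A sketch of the search and the check in C++ (assuming p ≥ q ≥ 1, that the input is n, p, q followed by the monsters' HP, and that the minimal sufficient h is printed):

#include <bits/stdc++.h>
using namespace std;

int main() {
    long long n, p, q; cin >> n >> p >> q;       // assumed format; p >= q >= 1
    vector<long long> hp(n);
    for (auto &x : hp) cin >> x;
    auto sufficient = [&](long long h) {         // are h hits enough?
        long long need = 0;
        for (long long x : hp) {
            long long left = x - h * q;          // after the "q to everyone" part
            if (left <= 0) continue;
            if (p == q) return false;            // no extra damage to finish it off
            need += (left + (p - q) - 1) / (p - q);   // ceil(left / (p - q)) hits
            if (need > h) return false;
        }
        return need <= h;
    };
    long long lo = 0, hi = *max_element(hp.begin(), hp.end()) + 1;  // surely enough
    while (lo < hi) {                            // smallest sufficient h
        long long mid = (lo + hi) / 2;
        if (sufficient(mid)) hi = mid; else lo = mid + 1;
    }
    cout << lo << '\n';
}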

G. City Square

(diff: medium-hard)

It appears that the necessary and sufficient condition is for N to be a sum of two squares: N = a^2 + b^2, with a, b ≥ 0. I don't know why that works, but it works. I'll write it up if I find out, or you can write it in the comments if you know :D

How to check if it's a sum of two squares? It's clear that a ≤ √N, so we can try all a-s and for each of them, check if N - a^2 is a perfect square. Thanks to the sqrt() function, we can do that in O(1) time, so the total time complexity is O(√N).
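
A sketch of the check in C++; floating-point sqrt() can be off by one for large inputs, so the neighbours of the rounded root are checked too:

#include <bits/stdc++.h>
using namespace std;

// is N = a^2 + b^2 solvable with integers a, b >= 0?
bool sumOfTwoSquares(long long N) {
    for (long long a = 0; a * a <= N; a++) {
        long long rest = N - a * a;
        long long r = llroundl(sqrtl(rest));
        for (long long b = max(0LL, r - 1); b <= r + 1; b++)
            if (b * b == rest) return true;      // guard against rounding errors
    }
    return false;
}

int main() {
    long long N; cin >> N;
    cout << (sumOfTwoSquares(N) ? "YES" : "NO") << '\n';   // output format assumed
}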

H. Secret Information

(diff: easy)

Let's make another string s, with s[i] = 0 if the i-th numbers in the input strings are equal (both 0 or both 1), and s[i] = 1 otherwise. XOR, basically.

The answer is the number of groups of consecutive 1-s in s (we always mean maximal groups, i.e. bounded by 0-s or the borders of the string). Proof: if we swap the numbers in some interval that contains k such groups, it contains at least k - 1 groups of consecutive zeroes (between those groups of ones), which will turn into groups of ones bounded by zeroes on their sides; therefore, any operation can decrease the number of groups of consecutive 1-s by at most one. So if there are K such groups in the whole string, we need at least K operations. That's also sufficient: just choose one group of consecutive ones in every operation.

Counting those groups is easy, for example by using the fact that every such group is determined uniquely by its first 1 — so K is the number of pairs s[i] = 0, s[i + 1] = 1, plus 1 if s[0] = 1 (there's no 0 before that one). Time: O(N).
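
The counting itself, as a C++ sketch (assuming two equal-length 0/1 strings on input):

#include <bits/stdc++.h>
using namespace std;

int main() {
    string a, b; cin >> a >> b;              // assumed: the two binary strings
    int K = 0;
    for (size_t i = 0; i < a.size(); i++) {
        bool one = (a[i] != b[i]);           // s[i] of the XOR string
        bool prevOne = (i > 0 && a[i-1] != b[i-1]);
        if (one && !prevOne) K++;            // the first 1 of a group
    }
    cout << K << '\n';
}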

I. Meteor Flow

(diff: medium)

The condition from the problem can be formulated as follows: for every i (with the subset of not-shot meteors with tj ≤ ti denoted as S), the total damage of the meteors in S must not exceed what the shield can withstand up to time ti.

The trick here is that we don't have to decide immediately whether to shoot down a meteor to prevent breaching the shield later; we can decide to do it only when facing that risk. And when that happens, the best meteor to shoot down is the one we haven't shot yet (but have already processed, i.e. tj ≤ ti) that causes the largest damage.

So, we process the meteors in increasing order of time and store the ones we haven't shot (including the one currently being processed) in a heap; we also remember the total damage they do. As long as that damage is too large, we shoot the most damaging meteor and update the total. Time: O(N log N), because we add/remove every meteor to/from the heap at most once.

This really gives the optimal solution: shooting down meteors never worsens the situation after any earlier meteor, and choosing the most damaging meteor doesn't worsen any later situation either (it gives us the minimum damage to the shield in all later situations). And as long as the shield is safe, we don't have to shoot any more yet, so we don't.
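
The greedy in C++; the exact shield condition depends on the statement, so here it's simplified to a fixed capacity cap that the accumulated damage may never exceed (an assumption for illustration):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n; long long cap;                    // cap: assumed fixed shield capacity
    cin >> n >> cap;                         // meteors assumed sorted by time
    priority_queue<long long> pq;            // damages of not-yet-shot meteors
    long long total = 0, shots = 0;
    for (int i = 0; i < n; i++) {
        long long t, d; cin >> t >> d;       // time and damage of the i-th meteor
        pq.push(d); total += d;
        while (total > cap) {                // the shield would break now:
            total -= pq.top(); pq.pop();     // shoot the most damaging meteor seen
            shots++;
        }
    }
    cout << shots << '\n';
}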

J. The Best Statement

(diff: easy-medium)

The statements that can be chosen form a subsequence strictly increasing in ai, because if k is given, all statements that have ai < k won't be chosen, and the first one with ai ≥ k will — so if ai ≤ aj and i > j, then any k sufficient for i to be chosen is also sufficient for j, and statement j, coming first, would be chosen instead; statement i can never win.

This sequence can be produced directly when reading the input: we remember the largest ai yet seen, and add the currently processed ai to the sequence if it's strictly larger. And it's possible to choose k matching the choice of any of those statements, by simply setting it equal to that ai.

We're not interested in the a-s, though. We want to choose the largest b. Since we can choose any statement in our sequence, we take the one with the largest b. Done.

Time complexity is clearly O(N).
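
The whole solution fits in a few lines of C++ (input format assumed: N, then the pairs ai, bi in order):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n; cin >> n;                           // assumed input format
    long long bestA = LLONG_MIN, bestB = LLONG_MIN;
    for (int i = 0; i < n; i++) {
        long long a, b; cin >> a >> b;
        if (a > bestA) {                       // joins the strictly increasing sequence
            bestA = a;
            bestB = max(bestB, b);             // every member of the sequence is choosable
        }
    }
    cout << bestB << '\n';
}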

K. Three Contests

(diff: medium-hard)

A pair (i, j) isn't counted if ai < aj, bi < bj, ci < cj (or vice versa); otherwise, it is, because there are 2 contests (w.l.o.g. a, b) such that ai < aj and bi > bj, so team i considers itself stronger based on contest b, and so does j based on contest a. Let's say there are x such uncounted pairs; there are N(N - 1)/2 pairs in total, so the answer is N(N - 1)/2 - x.

Let's sort and process the teams in increasing order of ai. Then, we need to count the number of teams that have both bj and cj smaller than bi and ci, respectively. This reminds me a lot of problem Game from IOI 2013, just that it's an offline problem, the coordinates (if we imagine the teams as points in a coordinate plane) are already small, and we're only looking for a sum, which has an inverse operation, unlike GCD. We could still solve it with a compressed 2D interval tree in O(N log^2 N) time, but that's way too ugly to implement.

It's much simpler to just think about an O(N√N) solution. Firstly, let's think about queries "count the number of points in the rectangle with corner points (0, 0) and (x, y)"; the update "add point (x, y)" is then quite easy. We can use the fact that all points have both coordinates distinct and small ( ≤ N), and note for every x the y corresponding to it (if there's no such y yet, just set it larger than N so it won't be counted) as A[x], and similarly for every y the corresponding x as B[y]. Let's also choose a constant K (around √N; I chose 500 for this case) and make a 2D Fenwick tree F, in which F[i][j] is the number of points with ⌊x/K⌋ = i and ⌊y/K⌋ = j (integer division); its dimensions are then just 400x400.

How do we count points now? Firstly, we can just check the already added points with x-coordinates from x - x%K to x, and count how many of them have the other coordinate  < y. Then, we can say that x = x - x%K, and do the same with the y-coordinate. We're left with a rectangle made entirely of full KxK blocks, which is the rectangle with corner points (0, 0) and (x/K - 1, y/K - 1) in our BIT (Fenwick tree). And that's just 1 query. The updates are even simpler: set A[x] (the original x) to y, B[y] to x, and add 1 to F[⌊x/K⌋][⌊y/K⌋].

This approach has time complexity O(NK + N log^2(N/K)) (the first term is for checking all coordinates modulo K, the second one for operations on a 2D BIT), which passes comfortably.
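
Here's how the structure can look in C++ (a sketch; K = 500, and coordinates are assumed to be distinct ranks in 1..N, as in the text):

#include <bits/stdc++.h>
using namespace std;

// query(x, y) = number of added points with x' < x and y' < y
struct SqrtCounter {
    int K, blocks, n;
    vector<int> A, B;                  // A[x] = y of the point in column x (n+1 = none)
    vector<vector<int>> F;             // 2D Fenwick tree over the K x K blocks
    SqrtCounter(int n, int K) : K(K), n(n) {
        blocks = n / K + 2;
        A.assign(n + 1, n + 1); B.assign(n + 1, n + 1);
        F.assign(blocks + 1, vector<int>(blocks + 1, 0));
    }
    void addF(int i, int j) {
        for (int x = i + 1; x <= blocks; x += x & -x)
            for (int y = j + 1; y <= blocks; y += y & -y) F[x][y]++;
    }
    int sumF(int i, int j) {           // #points in blocks [0..i] x [0..j]
        int s = 0;
        for (int x = i + 1; x > 0; x -= x & -x)
            for (int y = j + 1; y > 0; y -= y & -y) s += F[x][y];
        return s;
    }
    void insert(int x, int y) { A[x] = y; B[y] = x; addF(x / K, y / K); }
    int query(int x, int y) {
        int res = 0, x2 = x - x % K, y2 = y - y % K;
        for (int xx = x2; xx < x; xx++) res += (A[xx] < y);   // partial column strip
        for (int yy = y2; yy < y; yy++) res += (B[yy] < x2);  // partial row strip
        if (x2 >= K && y2 >= K) res += sumF(x2 / K - 1, y2 / K - 1);  // full blocks
        return res;
    }
};

int main() {
    int n; cin >> n;                           // assumed: n teams, then triples (a, b, c)
    vector<array<int,3>> t(n);                 // ranks, assumed distinct and in 1..n
    for (auto &x : t) cin >> x[0] >> x[1] >> x[2];
    sort(t.begin(), t.end());                  // increasing a
    SqrtCounter sc(n, 500);
    long long dominated = 0;
    for (auto &x : t) {                        // every earlier team has a smaller a
        dominated += sc.query(x[1], x[2]);     // and smaller b, c? then it's dominated
        sc.insert(x[1], x[2]);
    }
    cout << (long long)n * (n - 1) / 2 - dominated << '\n';
}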

L. For the Honest Election

(diff: easy-medium)

What's the minimum number of people in the first option? If we have a group of size N, and K out of them are P's followers, the worst case is if the remaining N - K are his opponents; in that case, all K vote for one of the followers and all N - K for one of the opponents. For P to certainly win, his supporter must get strictly more votes, so K > N - K, i.e. K = N/2 + 1 (division is classical integer division here) of them must be supporters.

Let's now denote the answer (considering both options) for N as A[N].

In option 2, we have M groups of size N/M; M is a divisor of N. The winners of those groups form one group of size M. We need P's supporter to win that final group, and for that, we need M/2 + 1 of the groups to be won. There's no point in sending any supporters of P into the other groups, so the answer for this M is (M/2 + 1)·A[N/M].

However, P's friend can choose the option and M (in case of option 2) that minimize the number of people. So if we want to find A[N], we find A[d] for all divisors d of N in the same way using dynamic programming, and choose the minimum of N/2 + 1 and all the values (M/2 + 1)·A[N/M] for A[N].

DP on all numbers smaller than N is obviously too slow, but we can use the fact that all numbers i for which we'd need to compute A[i] this way are also divisors of N, so we can just do DP on the divisors, and remember the values for example in an STL map or the previously mentioned hashmap (so inserting/finding A[i] for some i takes expected O(1)).

N has at most around 1500 divisors (http://www.javascripter.net/math/calculators/ramanujanhighlycompositenumberslist2.jpg), so even if we try divisibility by all numbers up to √d to generate all divisors of every one of those divisors d, it's fast enough to pass the tests. Asymptotically, if we assume N to have d divisors and had to try divisibility by numbers up to √N for each of them, it'd be O(d√N) ⊆ O(N) time, but in reality, the constant is small, because most divisors are far smaller than N — it's something like O(0.001N) for the largest inputs.
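
A compact C++ sketch of the divisor DP (memoised recursion; both members of each divisor pair are tried, so trial division only goes up to √N on each level):

#include <bits/stdc++.h>
using namespace std;

map<long long, long long> memo;          // A[N], computed only on divisors of the input

long long A(long long N) {
    if (N == 1) return 1;                // a single person must be the supporter
    auto it = memo.find(N);
    if (it != memo.end()) return it->second;
    long long best = N / 2 + 1;          // option 1: win the direct vote
    for (long long M = 2; M * M <= N; M++)
        if (N % M == 0) {                // option 2: M groups of size N/M (and vice versa)
            best = min(best, (M / 2 + 1) * A(N / M));
            best = min(best, (N / M / 2 + 1) * A(M));
        }
    return memo[N] = best;
}

int main() {
    long long N; cin >> N;
    cout << A(N) << '\n';
}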

Codes: A, B, C, D, E, F, G, H, I, J, K, L


Author Xellos, 11 years ago, In English

A. Arrangement of RGB Balls

(difficulty: easy)

If, among 3 consecutive balls, there are no two of the same color, then there's exactly one ball of every color among them. Therefore, the sequence is determined by the order of the first 3 balls. Imagine these are RGB; then, the sequence continues as RGBRGBRGB...

There are only 3! = 6 possible initial triples, so we can try all the sequences defined by them, and for every one of them, check if it can be constructed.


Author Xellos, 11 years ago, In English

My first blog post, so exciting ^_^

Ehm, the Internet Problem Solving Contest will take place on Saturday 8.6.2013. The practice session lasts 24 hours and ends at 8:00 AM (UTC), while the contest itself starts at 10:00 AM and lasts 5 hours.

The tasks have a similar format to CodeJam ones — there is an easy and a hard subtask for every problem, you're given the input data for each subtask and have to submit the output in the required format. Except there's no time limit between downloading the input and submitting, and you aren't required to submit your code (in fact, not even to write any :D). But there are limits on the number of submissions for each problem!

The problemset consists of standard algorithmic tasks of varying difficulty, as well as some unusual tasks (like a task without a problem statement), tasks touching what you might experience in real life, or interactive ones, where you get feedback for your output and base your next output on this feedback.

The contest uses ACM-ICPC scoring rules, and every subtask is worth 1 point.

There are 2 divisions: open and secondary-school. There is also a separate scoreboard for single-person teams in each division.

I'm sure the contest will be great. Don't miss it!

EDIT: If possible, send a postcard to the organizers. Apart from earning -60 penalty minutes (in case it arrives before the contest), you're enlarging their collection :D
