paladin8's blog

By paladin8, 13 years ago, In English

How can I write math in my blog now? The old functionality (using dollar signs) no longer seems to work, e.g.

a = b + c

Update: It's fixed :)


By paladin8, 13 years ago, In English

This post is motivated by a problem I recently saw, Problem G of NCPC 2007. This is a standard problem that I'm sure many of you have seen before, but the general topic of partially ordered sets is not too well known.

Definitions

Let S be a set of elements and  ≤  be a partial ordering on the set. That is, for some elements x and y in S we may have x ≤ y. The only properties that  ≤  must satisfy are reflexivity (x ≤ x), antisymmetry (if x ≤ y and y ≤ x, then x = y), and transitivity (if x ≤ y and y ≤ z, then x ≤ z). Note that because it is a partial ordering, not all x and y are comparable.

An example of a partially ordered set (poset) is the set of points (x, y) in the Cartesian plane with the operator (x1, y1) ≤ (x2, y2) iff x1 ≤ x2 and y1 ≤ y2 (e.g. NCPC 2007 problem).

We define a chain C to be a subset of S whose elements can be labelled x1, x2, ..., xn so that x1 ≤ x2 ≤ ... ≤ xn. A partition P is a set of chains in which each element of S occurs in exactly one chain.

We define an antichain A to be a subset of S such that for any x and y in A we have neither x ≤ y nor y ≤ x. That is, no two elements of an antichain are comparable.

Dilworth's Theorem

The width of a poset can be defined in two equivalent ways, and their equivalence is precisely the content of Dilworth's Theorem. One definition is the size of the maximum antichain; the other is the number of chains in the minimum partition. It is easy to see that any partition must have at least as many chains as the size of the maximum antichain, because every element of the antichain must lie in a different chain. Dilworth's Theorem tells us that a partition of exactly that size always exists; it can be proved inductively (see Wikipedia for the proof).

Calculating the Width

So how does one calculate the width of a poset? To solve this problem in general, we can use maximum matching on a bipartite graph. Duplicate each element x in S as u_x and v_x, and if x ≤ y (for x ≠ y) then add the edge (u_x, v_y). Computing a maximum matching on this graph is equivalent to partitioning the elements into chains, where choosing the edge (u_x, v_y) means x and y are adjacent in the same chain. In particular, if the size of the matching is m and |S| = n, then the corresponding partition has n - m chains. Notice that this gives a bijection between partitions and matchings, so the maximum matching gives the minimum partition, which we know is equal to the width by Dilworth's Theorem.
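Below is a minimal sketch of this reduction using Kuhn's augmenting-path algorithm for the matching; filling adj[x] (all y with x ≤ y and x ≠ y) from the poset's order relation is left as a placeholder, and the function names are mine.

#include <bits/stdc++.h>
using namespace std;

int n;                       // n = |S|
vector<vector<int>> adj;     // adj[x] = all y with x <= y and x != y
vector<int> matchR;          // matchR[y] = element matched to v_y, or -1
vector<bool> used;

bool augment(int x) {        // try to match u_x along an augmenting path
  for (int y : adj[x]) {
    if (used[y]) continue;
    used[y] = true;
    if (matchR[y] == -1 || augment(matchR[y])) {
      matchR[y] = x;
      return true;
    }
  }
  return false;
}

int width() {                // minimum chain partition = maximum antichain
  matchR.assign(n, -1);
  int m = 0;
  for (int x = 0; x < n; x++) {
    used.assign(n, false);
    if (augment(x)) m++;     // m = size of the maximum matching
  }
  return n - m;              // n - m chains in the minimum partition
}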

Problem G of NCPC 2007

So now that we can calculate the width of a poset, we can just apply that to our problem, right? Not quite.

The bounds on our problem are up to 20,000, so we can't use maximum matching. Luckily, points in the Cartesian plane are more structured than general posets, and we can use the other definition of width (maximum antichain) to solve this problem more efficiently. Consider iterating over the points sorted in order by x. We maintain a set of pairs (a, b), each indicating that there is an antichain of size b ending at y-value a (among all points that have already been processed). Then for a future point (x, y) and a pair (a, b) with y < a, we can insert (y, b + 1) into the set.

Notice, however, that if we have two points in the set (a, b) and (c, d) such that c ≤ a and d ≤ b then the latter is redundant (it is always worse) and we can remove it. In this way we keep the set small and only need to check a single element whenever we insert, which is (a, b) with minimal a such that a > y. All of this can be done with a C++ set, for example. At the end, the largest antichain we recorded is indeed the maximum one, and we are done.
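Here is a sketch of that sweep, under the domination rule just described. One detail is my own handling: points sharing the same x-coordinate are mutually comparable, so each batch of equal-x points is queried against the set before any of them is inserted.

#include <bits/stdc++.h>
using namespace std;

int maxAntichain(vector<pair<int,int>> pts) {
  sort(pts.begin(), pts.end());                // by x, then y
  set<pair<int,int>> s;                        // pairs (a, b); b decreases as a increases
  int best = 0;
  for (size_t i = 0; i < pts.size(); ) {
    size_t j = i;
    vector<pair<int,int>> batch;               // (y, antichain size ending at y)
    while (j < pts.size() && pts[j].first == pts[i].first) {
      int y = pts[j].second, b = 1;
      auto it = s.upper_bound({y, INT_MAX});   // (a, b) with minimal a > y
      if (it != s.end()) b = it->second + 1;
      batch.push_back({y, b});
      j++;
    }
    for (auto& [y, b] : batch) {
      best = max(best, b);
      auto it = s.lower_bound({y, INT_MIN});   // first pair with a >= y
      if (it != s.end() && it->second >= b) continue;  // new pair is dominated
      it = s.insert({y, b}).first;
      while (it != s.begin() && prev(it)->second <= b) // prune dominated pairs
        s.erase(prev(it));
    }
    i = j;
  }
  return best;
}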

Exercise

A recent Google Code Jam problem uses these ideas.


By paladin8, 13 years ago, translation, originally in Russian

(This is a translation of the original text; please try to use English in the comments.)

Hi everyone.

While there are no rounds on Codeforces, I have prepared a training contest in the new Codeforces::Тренировки (Trainings) project. It will use problems from a contest held at Stanford at the end of October, which I helped run; for us, that contest served as a qualifier for the ACM-ICPC semifinals. The problems vary widely in difficulty, so everyone should find something interesting.

The contest starts on Saturday, January 28, 2012, at 08:00 AM Moscow time and lasts 4 hours. I hope you enjoy the problems!


By paladin8, 13 years ago, In English

It's been some time since my last blog post, so this is the first one of the new year. Although I didn't participate in Round 102, I worked through several of the problems, and this post is motivated by Problem D. It is about a variation on our favorite algorithmic game, Nim. There is already an editorial for the round, but I intend to use this post as future reference in case I come upon such problems again. If you don't see the reduction from the problem to the Nim variation, please read the editorial.

Nim

The normal many-pile version of Nim is played as follows. There are n piles of stones, and two players take turns removing stones. On any turn, a player may remove as many stones as desired from a single pile. The player who cannot make a move loses.

A very nice and classical result is that a position a1, a2, ..., an is a losing position if and only if a1 XOR a2 XOR ... XOR an = 0, where XOR is the bitwise exclusive-or operator. To prove this, we simply realize that from any losing position, if we change the value of one pile, the XOR will no longer be zero (thus a winning position). And from any winning position, we can look at the highest order bit which does not XOR to zero, pick a pile in which that bit is 1, change it to 0, and adjust the rest of the bits in that pile to make the XOR zero.
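For concreteness, a trivial sketch of this criterion (the function name is mine):

#include <bits/stdc++.h>
using namespace std;

bool isLosing(const vector<long long>& piles) {  // player to move loses?
  long long x = 0;
  for (long long a : piles) x ^= a;
  return x == 0;                                 // XOR of all piles is zero
}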

Nim Variation

The variation in Problem D lies in the fact that a player may remove stones from up to k different piles on each turn, rather than just a single one. As such, the winning and losing positions change. We begin by defining the operation b_i(x), which equals the i-th bit of x (always 0 or 1). Then define the p-XOR of the values a1, a2, ..., an to be the sequence of digits

(b_i(a1) + b_i(a2) + ... + b_i(an)) mod p, one for each bit position i.

That is, for each digit in the binary expansion, we add up all the values and take the result modulo p. If we choose p = 2, for example, we get the sequence of bits resulting from the normal XOR of the values.

It turns out that in this variation of Nim, a position a1, a2, ..., an is a losing position if and only if the (k + 1)-XOR of a1, a2, ..., an is all zeros. The proof of this result is a little less nice, so I will just outline the idea. From any losing position, consider the highest order bit affected among any of the piles. This bit always goes from 1 to 0 (or else that pile is getting bigger). Since at most k piles are affected, the (k + 1)-XOR of that bit can no longer be 0, leaving us in a winning position.

From any winning position, we again look at the highest order bit which does not (k + 1)-XOR to 0. Let's say those bits add up to m modulo k + 1. Choose m piles which have that bit set to 1 and change them to 0, as well as zeroing out all of the lower order bits in those piles. Then move on to the next bit of lower order. If we can fix the (k + 1)-XOR of that bit using just the piles we have selected so far (by changing some 0's to 1's), then do so and move on. Otherwise, choose as many additional piles as necessary to change, noting that this will be at most k - m. If we continue this process we can set the (k + 1)-XOR of all bits to be 0 while choosing at most k piles.
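A sketch of the corresponding check, straight from the definition (the 63-bit range is an assumption about pile sizes):

#include <bits/stdc++.h>
using namespace std;

bool isLosingVariant(const vector<long long>& piles, int k) {
  for (int i = 0; i < 63; i++) {
    long long cnt = 0;
    for (long long a : piles) cnt += (a >> i) & 1;  // sum of b_i(a_j)
    if (cnt % (k + 1) != 0) return false;           // some digit nonzero mod k+1
  }
  return true;                                      // (k+1)-XOR is all zeros
}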

Conclusion

It is useful to understand Nim and its variations very well because game problems often reduce to them. If you know what you're trying to reduce to, it can often be a lot easier to find the reduction. Then knowing how to solve a simple algorithmic game is all that's left!


By paladin8, 13 years ago, In English
This blog post is motivated by an interesting problem I solved today: Timus 1846. I knew about segment trees previously for solving range-minimum query, but was not fully aware of the obvious generalizations of this nice data structure. Here I will describe the basics of a segment tree and some applications.

Problem

The problem that a segment tree can solve is the following. We are given an array of values a[0], a[1], ..., a[N - 1]. Assume without loss of generality that N = 2^n; we can generally pad the computations accordingly. Also, consider some associative binary function f. Examples of f include sum, min, max, or gcd (as in the Timus problem). Segment trees allow each of the following two operations in O(log N) time:
  • compute f(a[i], a[i + 1], ..., a[j]) for i ≤ j; and
  • update a[x] = v.

Description

So how does a segment tree work? It's quite simple, in fact. We build a single two-dimensional array t[0...N - 1][0...n] where t[x][y] = f(a[x], a[x + 1], ..., a[x + 2^y - 1]) for x divisible by 2^y (i.e. most of the entries are ignored, but it is easiest to represent it this way). We can initialize the entries by the following procedure:
  • t[x][0] = a[x]; and
  • t[x][y] = f(t[x][y - 1], t[x + 2^(y - 1)][y - 1]) for y = 1, ..., n,
where the second step uses the associative property of f. The logic is that t[x][y - 1] computes f over the range [x, x + 2^(y - 1)) and t[x + 2^(y - 1)][y - 1] computes f over the range [x + 2^(y - 1), x + 2^y), so when we combine them we get the corresponding computation of f for t[x][y]. Now we can describe how each of the two operations is implemented (at a high level).

We'll start with computing f(a[i], a[i + 1], ..., a[j]). It's pretty easy to see that this amounts to representing the interval [i, j] as disjoint intervals [i1, j1]; [i2, j2]; ...; [ik, jk], where each of these is of the form [x, x + 2^y - 1] with x divisible by 2^y (i.e. it shows up in the table t somewhere). Then we can just combine them with f (again using the associative property) to get the result. It is easy to show that k = O(log N) as well.

Now for updating a[x] = v, notice that there are only a few terms in the segment tree which are affected by this change. Obviously, t[x][0] = v. Also, for y = 1, ..., n, only the entry t[x - (x & (2^y - 1))][y] changes. This is because x - (x & (2^y - 1)) is the only value divisible by 2^y (notice the last y bits are zero) whose interval at level y covers x. Then it suffices to use the same formula as in the initialization to recompute this entry in the tree.

Code

Here's a somewhat crude implementation of a segment tree. Note that the value of IDENTITY should be such that f(IDENTITY, x) = x, e.g. 0 for sum, +∞ for min, -∞ for max, and 0 for gcd.

// assumed globals: N (a power of two), n = log2(N), arrays a[N] and t[N][n+1],
// the associative function f, and its identity element IDENTITY
void init() {
  for (int x = 0; x < N; x++)
    t[x][0] = a[x];
  for (int y = 1; y <= n; y++)
    for (int x = 0; x < N; x+=(1<<y))
      t[x][y] = f(t[x][y-1], t[x+(1<<(y-1))][y-1]);
}

void set(int x, int v) {
  t[x][0] = a[x] = v;
  for (int y = 1; y <= n; y++) {
    int xx = x-(x&((1<<y)-1));
    t[xx][y] = f(t[xx][y-1], t[xx+(1<<(y-1))][y-1]);
  }
}

int get(int i, int j) {
  int res = IDENTITY, h = 0; j++;  // work with the half-open interval [i, j)
  // ascending phase: take the largest aligned block starting at i that fits
  while (i+(1<<h) <= j) {
    while ((i&((1<<(h+1))-1)) == 0 && i+(1<<(h+1)) <= j) h++;
    res = f(res, t[i][h]);
    i += (1<<h);
  }
  // descending phase: cover the tail with progressively smaller blocks
  while (i < j) {
    while (i+(1<<h) > j) h--;
    res = f(res, t[i][h]);
    i += (1<<h);
  }
  return res;
}


Application

One possible application is letting f just be the sum of all arguments. This leads to a data structure that allows you to compute sums of ranges and do updates, but it is slightly less efficient than a Binary Indexed Tree. The most well-known application seems to be the range-minimum query problem, where f is the minimum of all arguments. This also allows you to compute the lowest common ancestor of nodes in a tree.

Lastly, in Timus 1846, we can use it with f as the GCD function. Every time we remove a number we should update the corresponding entry to zero (effectively removing it from the GCD computation). Then the answer at each step is f(a[0], a[1], ..., a[k]) which can be computed directly from the segment tree.
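To make the wiring concrete, here is a sketch of how the pieces above might fit together for the Timus problem; the slot bookkeeping is my own framing, not taken from the statement.

int f(int x, int y) { return y ? f(y, x % y) : x; }  // gcd; note f(0, x) = x
const int IDENTITY = 0;

// insert value v into slot p:          set(p, v);
// remove the value stored in slot p:   set(p, 0);
// current gcd of all present values:   get(0, N-1);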

By paladin8, 13 years ago, In English

This was a pretty good SRM for me, tantalizingly close to red now. I'll talk about the easy (decent) and the medium (really cool); I didn't have time to take a look at the hard.

Easy (250)

We observe immediately that if n is composite, it takes only 1 move, so all of those cases are done. Then when n is prime, we can just use 1 and n - 1 (which is even), so it takes 2 moves and we're done, right? Almost! For n = 2, the answer is indeed 2 because 1 is not prime, but for n = 3, the answer is 3 because 2 is prime. Those are the only two interesting cases, so now we're really done.
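The whole case analysis fits in a few lines. A sketch follows; minMoves and the trial-division primality test are my own framing, not the actual TopCoder signature.

#include <bits/stdc++.h>
using namespace std;

bool isPrime(long long n) {
  if (n < 2) return false;
  for (long long d = 2; d * d <= n; d++)
    if (n % d == 0) return false;
  return true;
}

int minMoves(long long n) {
  if (!isPrime(n)) return 1;  // composite: one move (n = 1 not discussed in the post)
  if (n == 2) return 2;       // 1 + 1 works, since 1 is not prime
  if (n == 3) return 3;       // any split into two parts contains the prime 2
  return 2;                   // odd prime: use 1 and n - 1 (even, hence composite)
}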

Medium (500)

This is a very nice problem in my opinion because it's not obvious where to start, but once you figure it out it's quite simple. First, let's consider the values in C. If they are all positive or all negative, the answer is -1. Otherwise, the sequence has bounded length: for values +a and -b, no valid sequence of length ab can exist, since grouping it into b windows of length a forces its total sum to be positive, while grouping it into a windows of length b forces the same sum to be negative.

Now, we want to find the longest sequence a1, a2, ..., an satisfying the constraints of the problem. First, we'll note that if N is the maximum value of n that works, then n ≤ N all work and n > N all do not work, so we can binary search on the answer. I don't have a tight bound on the maximum but in the contest I just used 1000000 which is definitely high enough due to the argument above (I'm curious as to a proof of a better one -- it definitely exists because the algorithm does not run in time if the answer is too high).

The way the constraints are specified makes them tricky to work with, so instead we'll consider the prefix sums p[i] = a1 + a2 + ... + ai for i = 0, 1, ..., n, in which case the constraints can be written as p[i + C[j]] - p[i] > 0 (note this works for both positive and negative C[j]). This induces a directed graph of constraints on n + 1 nodes where an edge from u to v means p[u] < p[v], which reduces the problem to verifying whether this system of constraints can be satisfied. This is equivalent to checking for cycles, as in the sketch below. We can do this in O(n * |C|) time by running a topological sort; alternatively, notice that if there is a cycle then there must be one through 0, so we can just do a BFS from 0. Thus the total running time is O(N * log N * |C|) where N is the maximum possible answer.
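A sketch of the feasibility test that sits inside the binary search, using Kahn's algorithm for cycle detection (feasible is my own name):

#include <bits/stdc++.h>
using namespace std;

bool feasible(int n, const vector<int>& C) {
  vector<vector<int>> adj(n + 1);            // edge u -> v means p[u] < p[v]
  vector<int> indeg(n + 1, 0);
  for (int i = 0; i <= n; i++)
    for (int c : C) {
      int v = i + c;
      if (v >= 0 && v <= n) { adj[i].push_back(v); indeg[v]++; }
    }
  queue<int> q;
  for (int i = 0; i <= n; i++) if (indeg[i] == 0) q.push(i);
  int seen = 0;
  while (!q.empty()) {                       // peel off sources one by one
    int u = q.front(); q.pop(); seen++;
    for (int v : adj[u]) if (--indeg[v] == 0) q.push(v);
  }
  return seen == n + 1;                      // all removed => acyclic => satisfiable
}

The answer is then the largest n for which feasible(n, C) holds, found by binary search as described above.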


By paladin8, 13 years ago, In English
Today's contest had some pretty interesting problems; I particularly liked Problem C and Problem D. I'm going to describe my solution to D which I think is quite nice (and results in simple code). I don't doubt that there are even easier solutions to the problem, so please enlighten me if you have one :).

Impossibility

Let us denote the sequence by a1, ..., an and assume it has been sorted already. We can note immediately that there are some easy cases we can rule out. If n is odd, by a parity argument it is impossible. Moreover, let m = an - a1, i.e. the range of the values. If m = 0 it is impossible since n ≥ 3, and if m > n / 2 it is impossible since we cannot get from a1 to an and back going around the circle. Now that these cases are out of the way we move on to the actual algorithm.

Transformation

It's natural to transform the input we're given into a form that's easier to work with, i.e. a histogram. As such, we define ci to be the number of values aj with aj - a1 = i for i = 0, 1, ..., m. Now we replace each ai with ai - a1 so that they are all in the range [0, m].

Observation

A nice observation we can make about any valid sequence is the following. Consider the positions of any 0 and any m in the circle. There are two paths from 0 to m. We observe that if any valid sequence exists, then there exists one for which, along each of these paths, the value never decreases twice in a row (that is, we can think of each path as "increasing" with some oscillation). Suppose this does happen in our sequence, i.e. we have something like

0, 1, ..., i - 1, i, i - 1, ..., j + 1, j, j + 1, ..., m - 1, m

where j < i - 1, so i, i - 1, ..., j + 1, j is the decreasing sequence. Then we note that j + 1 appeared somewhere between 0 and i, so we can take the values j, j + 1 and move them directly after the j + 1 that was between 0 and i (it is easy to see that the sequence is still valid). Repeating this whenever we have a decrease of two or more gives us paths that never decrease twice in a row.

Validation

Now with our transformation and observation, we can come up with the method of checking whether or not the histogram can produce a valid sequence. To do so, first place a 0 on the circle. Then we have L = c0 - 1 zeros left to place. Note, however, that each zero must be adjacent to a one. So in fact there must be at least L + 2 ones to "hide" all of the zeros from the rest of the numbers (one on each side and one for each additional zero). If there are fewer than that many, it is impossible. After placing the remaining zeros and ones arbitrarily (but alternating) on each side of the initial 0, we see that there are L = c1 - (L + 2) ones remaining. By the same argument, we know how many twos there must be (again, L + 2). We can continue like this until we get to m. If there are L remaining values of m - 1, we need cm = L + 1 to complete the circle (one to be the "end" and one for each additional m - 1). If this is true, then it is possible and otherwise it is not.

Note that the above placement strategy works because of the observation we made (we know we can use the small numbers greedily) and the fact that it doesn't matter how many of each number we put in each path as long as they end in the same value.

Code


#include <bits/stdc++.h>
using namespace std;

int a[100100], c[100100];

int main() {
  int n; cin >> n;
  for (int i = 0; i < n; i++) cin >> a[i];
  sort(a, a+n);
  int m = a[n-1]-a[0];
  if (n % 2 == 1 || m > n/2 || m == 0) { cout << "NO" << endl; return 0; }

  memset(c, 0, sizeof(c));
  for (int i = 0; i < n; i++) c[a[i]-a[0]]++;

  bool poss = true;
  int L = c[0]-1;
  for (int i = 1; i < m; i++) {
    if (c[i] < L+2) { poss = false; break; }
    c[i] -= (L+2); L = c[i];
  }
  poss = poss && (c[m] == L+1);
  cout << (poss ? "YES" : "NO") << endl;
  return 0;
}

Editorial of Codeforces Beta Round 94 (Div. 2 Only)

By paladin8, 13 years ago, In English

In my quest to regain my red color on Topcoder this morning, I failed once again in SRM 523. I'll present a quick summary of the solutions here; the problem set was actually quite easy, but with some tricks and edge cases.

Easy (250)

We can compute the number of values from the arithmetic series as (upperBound - a) / b + 1 if a ≤ upperBound and zero otherwise. Then we simply check all values in the geometric series (only a logarithmic number) to see if they are in the arithmetic series. If not, count them in the result. There is some trickiness with d = 1 and when a, c > upperBound.
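A sketch of that counting; countUnion and the parameter handling are my own framing (the real TopCoder signature and edge cases should be checked against the statement), and values are assumed to fit comfortably in 64 bits.

#include <bits/stdc++.h>
using namespace std;

// count integers <= upperBound in the arithmetic series a, a+b, a+2b, ...
// or the geometric series c, c*d, c*d^2, ...  (assumes b >= 1, d >= 1)
long long countUnion(long long a, long long b, long long c, long long d,
                     long long upperBound) {
  long long res = (a <= upperBound) ? (upperBound - a) / b + 1 : 0;
  auto inArith = [&](long long x) { return x >= a && (x - a) % b == 0; };
  for (long long g = c; g <= upperBound; g *= d) {
    if (!inArith(g)) res++;   // geometric term not already counted
    if (d == 1) break;        // constant series: only one distinct term
  }
  return res;
}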

Medium (500)

This is a pretty straightforward dynamic programming problem. The approach is to compute dp[i][j] for 0 ≤ i ≤ h and 0 ≤ j ≤ w which is the number of structures of height i and width j. We will also compute the values ways[i] which is the number of ways to get a 1 × i structure with no gaps. The latter is a simple DP, and we can compute dp[i][j] as follows (we will think of this as filling in the bottom row and then using the previously computed values to determine how many structures can exist on top of it):
  • dp[i][0] = 1, and for j ≥ 1,
  • dp[i][j] = dp[i][j - 1] (position j is empty)
  • dp[i][j] = dp[i][j] + ways[j] * dp[i - 1][j]
  • dp[i][j] = dp[i][j] + dp[i][j - L - 1] * ways[L] * dp[i - 1][L] for 1 ≤ L ≤ j - 1
The last two equations are obtained by considering the last contiguous sequence of blocks in the bottom row. If it is j, i.e. the entire row, then we just multiply the number of ways to build a 1 × j structure with no gaps (ways[j]) with the number of ways to build the structure of height i - 1 above it (dp[i - 1][j]). Otherwise, we consider all lengths L with 1 ≤ L ≤ j - 1. We need to leave a gap of one (or else the contiguous sequence might be larger), so there are dp[i][j - L - 1] ways to build everything before the last sequence and ways[L] * dp[i - 1][L] ways to build the 1 × L structure with no gaps and everything above it. Computing all these values gives dp[h][w] as the desired answer.
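Here is a sketch of that recurrence. ways[] is taken as an input, since its own (simple) DP depends on the problem's brick rules; the modulus and the height-0 base case dp[0][j] = 1 (the empty structure) are my assumptions.

#include <bits/stdc++.h>
using namespace std;
const long long MOD = 1000000007;  // assumed modulus; check the statement

long long countStructures(int h, int w, const vector<long long>& ways) {
  // dp[i][j]: structures of height at most i on a width-j strip
  vector<vector<long long>> dp(h + 1, vector<long long>(w + 1, 0));
  for (int i = 0; i <= h; i++) dp[i][0] = 1;
  for (int j = 1; j <= w; j++) dp[0][j] = 1;            // assumed base case
  for (int i = 1; i <= h; i++)
    for (int j = 1; j <= w; j++) {
      long long v = dp[i][j - 1];                       // position j empty
      v = (v + ways[j] * dp[i - 1][j]) % MOD;           // bottom row completely full
      for (int L = 1; L <= j - 1; L++)                  // last run has length L
        v = (v + dp[i][j - L - 1] * ways[L] % MOD * dp[i - 1][L]) % MOD;
      dp[i][j] = v;
    }
  return dp[h][w];
}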

Hard (900)

We'll start with a simple idea: for each starting location, i.e. grid squares with a letter, just do a DFS to depth 20, keeping track of all letters we have seen so far (as a bitmask). We never visit a cell with a letter we've already seen (so we never visit cells multiple times). This solution has something like n * m * 3^20 running time, which is not fast enough. We can use a clever trick to reduce this to something manageable.

Consider two paths starting from the same cell which see sets of letters A and B, respectively. We'll ignore paths that visit the same letter multiple times and paths which visit the letter c which is in the starting cell (no such paths can be part of a Latin alphabet path). Then we claim that we have a Latin alphabet path if and only if each Latin letter is in exactly one of A, B, and {c}. This is pretty easy to see, as we just start at the end of the first path and go to the end of the second, and for any Latin alphabet path, there is a unique midpoint letter c which splits the path into the corresponding A and B. So instead of computing all paths of depth 20, we just consider all paths of length 10 and for each subset S keep track of count[S], the number of such paths which visit the letters in S exactly. Finally, multiply the results for complementary sets together, i.e. count[A] * count[B] for the condition above, adding up these values for all positions in the grid. Note that each set A will have exactly one complementary set B which is L - A - {c} where L is the set of all Latin letters. This reduces the complexity to something like n * m * 3^10 which runs in time.


By paladin8, 13 years ago, In English
This is the first post in my Codeforces blog! I plan to use this as a place where I can write down new algorithms I learned, problems that I found interesting, or stupid mistakes I made during contests.

Today, the topic will be the Z Algorithm, which I learned as a result of failing to solve Problem B of Beta Round 93 (http://codeforces.net/contest/126/problem/B). There are some other solutions like binary search + hashing, but this one is quite nice. Anyway, first, a description of the algorithm and why it works; it is simple and makes a lot of sense (as all good algorithms are).

Algorithm

Given a string S of length n, the Z Algorithm produces an array Z where Z[i] is the length of the longest substring starting from S[i] which is also a prefix of S, i.e. the maximum k such that S[j] = S[i + j] for all 0 ≤ j < k. Note that Z[i] = 0 means that S[0] ≠ S[i]. For easier terminology, we will refer to substrings which are also a prefix as prefix-substrings.

The algorithm relies on a single, crucial invariant. As we iterate over the letters in the string (index i from 1 to n - 1), we maintain an interval [L, R] which is the interval with maximum R such that 1 ≤ L ≤ i ≤ R and S[L...R] is a prefix-substring (if no such interval exists, just let L = R = -1). For i = 1, we can simply compute L and R by comparing S[0...] to S[1...]. Moreover, we also get Z[1] during this.

Now suppose we have the correct interval [L, R] for i - 1 and all of the Z values up to i - 1. We will compute Z[i] and the new [L, R] by the following steps:
  • If i > R, then there does not exist a prefix-substring of S that starts before i and ends at or after i. If such a substring existed, [L, R] would have been the interval for that substring rather than its current value. Thus we "reset" and compute a new [L, R] by comparing S[0...] to S[i...] and get Z[i] at the same time (Z[i] = R - L + 1).
  • Otherwise, i ≤ R, so the current [L, R] extends at least to i. Let k = i - L. We know that Z[i] ≥ min(Z[k], R - i + 1) because S[i...] matches S[k...] for at least R - i + 1 characters (they are in the [L, R] interval which we know to be a prefix-substring). Now we have a few more cases to consider.
  • If Z[k] < R - i + 1, then there is no longer prefix-substring starting at S[i] (or else Z[k] would be larger), meaning Z[i] = Z[k] and [L, R] stays the same. The latter is true because [L, R] only changes if there is a prefix-substring starting at S[i] that extends beyond R, which we know is not the case here.
  • If Z[k] ≥ R - i + 1, then it is possible for S[i...] to match S[0...] for more than R - i + 1 characters (i.e. past position R). Thus we need to update [L, R] by setting L = i and matching from S[R + 1] forward to obtain the new R. Again, we get Z[i] during this.
The process computes all of the Z values in a single pass over the string, so we're done. Correctness is inherent in the algorithm and is pretty intuitively clear.

Analysis

We claim that the algorithm runs in O(n) time, and the argument is straightforward. We never compare characters at positions less than R, and every time we match a character R increases by one, so there are at most n comparisons there. Lastly, we can only mismatch once for each i (it causes R to stop increasing), so that's another at most n comparisons, giving O(n) total.

Code

Simple and short. Note that the optimization L = R = i is used when S[0] ≠ S[i] (it doesn't affect the algorithm since at the next iteration i > R regardless).

int L = 0, R = 0;
for (int i = 1; i < n; i++) {
  if (i > R) {
    // outside the current interval: match from scratch
    L = R = i;
    while (R < n && s[R-L] == s[R]) R++;
    z[i] = R-L; R--;
  } else {
    int k = i-L;
    if (z[k] < R-i+1) z[i] = z[k];  // answer lies strictly inside [L, R]
    else {
      // may extend past R: keep matching from position R+1
      L = i;
      while (R < n && s[R-L] == s[R]) R++;
      z[i] = R-L; R--;
    }
  }
}

Application

One application of the Z Algorithm is for the standard string matching problem of finding matches for a pattern T of length m in a string S of length n. We can do this in O(n + m) time by using the Z Algorithm on the string T Φ S (that is, concatenating T, Φ, and S) where Φ is a character that matches nothing. The indices i with Z[i] = m correspond to matches of T in S.
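A self-contained sketch of this matching procedure, using a compact variant of the same algorithm; '#' stands in for Φ and is assumed to occur in neither string.

#include <bits/stdc++.h>
using namespace std;

vector<int> matches(const string& T, const string& S) {
  string W = T + "#" + S;            // concatenate T, separator, S
  int n = W.size(), m = T.size();
  vector<int> z(n, 0), res;
  int L = 0, R = 0;
  for (int i = 1; i < n; i++) {
    if (i <= R) z[i] = min(z[i - L], R - i + 1);
    while (i + z[i] < n && W[z[i]] == W[i + z[i]]) z[i]++;
    if (i + z[i] - 1 > R) { L = i; R = i + z[i] - 1; }
  }
  for (int i = m + 1; i < n; i++)
    if (z[i] == m) res.push_back(i - m - 1);  // match starts here in S
  return res;
}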

Lastly, to solve Problem B of Beta Round 93, we simply compute Z for the given string S, then iterate i from 1 to n - 1. If Z[i] = n - i, then we know the suffix starting at S[i] is a prefix, and if the largest Z value we've seen so far is at least n - i, then we know some substring strictly inside S also matches that prefix. That gives the result.

int maxz = 0, res = 0;
for (int i = 1; i < n; i++) {
  if (z[i] == n-i && maxz >= n-i) { res = n-i; break; }
  maxz = max(maxz, z[i]);
}
