SummerSky's blog

By SummerSky, 8 years ago, In English

A. Alphabet Cake

I solved this problem by first filling the array row by row, and then column by column.

For each row, I first enumerate from the first element to the last one until I find an element that is not '?', and then break the loop. If no such element exists, the row consists only of '?'. Such a row is skipped for now, but do not worry: it will be handled later by the column pass. Now suppose that we have found an element which is not '?' and store it in a variable x (char x;). Then, I enumerate from the first element again, and

1) when a '?' is met, just replace it with x;

2) when x is met (note that x must be the first letter that is met, since I immediately break the loop when the first element that is not '?' is found), do nothing (or replace it with x, which achieves the same result);

3) when a letter different from x is met, just update x with this letter, and go to step 1).

After the above operations, for instance, ???A??B??CD? will be changed into AAAAAABBBCDD. However, this cannot cover the case where some row consists of only '?', and thus we repeat the above operations column by column, i.e., the three steps are exactly the same but the enumeration runs down a column.

For instance,

???A??B??CD?

????????????

E????F???GHI

will be first changed into

AAAAAABBBCDD

????????????

EEEEEFFFFGHI

then

AAAAAABBBCDD

AAAAAABBBCDD

EEEEEFFFFGHI
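A minimal C++ sketch of these two passes (the input format, the grid dimensions followed by the rows, is my assumption; only the filling logic matters here):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m;
    cin >> n >> m;                          // assumed input: dimensions, then the grid rows
    vector<string> g(n);
    for (auto &row : g) cin >> row;

    // Pass 1: fill each row from its first non-'?' letter.
    for (int i = 0; i < n; i++) {
        char x = '?';
        for (char c : g[i]) if (c != '?') { x = c; break; }
        if (x == '?') continue;             // row of only '?', handled by the column pass
        for (int j = 0; j < m; j++) {
            if (g[i][j] == '?') g[i][j] = x;
            else x = g[i][j];               // a new letter appears: use it from here on
        }
    }
    // Pass 2: exactly the same three steps, but down each column.
    for (int j = 0; j < m; j++) {
        char x = '?';
        for (int i = 0; i < n; i++) if (g[i][j] != '?') { x = g[i][j]; break; }
        if (x == '?') continue;
        for (int i = 0; i < n; i++) {
            if (g[i][j] == '?') g[i][j] = x;
            else x = g[i][j];
        }
    }
    for (auto &row : g) cout << row << "\n";
}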

B. Ratatouille

Well, this is a tough problem for me. At first, I could not figure out how to solve large input so I wrote a special version for small input. It was about 15 minutes before the end that I finally realized how to deal with N>2...

At first, we should consider under what conditions two packages corresponding to different ingredients can be used to constitute a kit (or can be paired). For two given packages, we denote the quantities they contain as Q1 and Q2, and the required amounts of the corresponding ingredients as R1 and R2. If we want to use Q1, we must find some integer K1 that satisfies R1*0.9*K1<=Q1<=R1*1.1*K1, and similarly R2*0.9*K2<=Q2<=R2*1.1*K2 if we want to use Q2. Therefore, if both Q1 and Q2 can be used in one kit, there must be at least one common integer K that satisfies K1=K=K2. Moreover, it is not necessary to find such a precise K: we can compute the intervals that K1 and K2 fall into, respectively, and then check whether the two intervals intersect. If they intersect, at least one common integer K=K1=K2 can be found. According to R1*0.9*K1<=Q1<=R1*1.1*K1, we have Q1/(1.1*R1)<=K1<=Q1/(0.9*R1). Note that floating-point division should be avoided, since otherwise you might run into precision problems. We can multiply both numerator and denominator by 10, which gives (10*Q1)/(11*R1)<=K1<=(10*Q1)/(9*R1). Then, as K1 is an integer, we can use integer division to obtain the lower bound (inclusive) as (10*Q1)/(11*R1)+((10*Q1)%(11*R1)!=0), and the upper bound (inclusive) as (10*Q1)/(9*R1), where '/' denotes integer division. Therefore, we can check whether the intervals [(10*Q1)/(11*R1)+((10*Q1)%(11*R1)!=0), (10*Q1)/(9*R1)] and [(10*Q2)/(11*R2)+((10*Q2)%(11*R2)!=0), (10*Q2)/(9*R2)] intersect with each other or not.
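As a small illustration, the inclusive interval of K for one package can be computed with integer arithmetic only (a sketch; the function names are mine, and 64-bit integers are used to stay on the safe side):

#include <algorithm>
#include <cstdint>
#include <utility>

// Inclusive range [lo, hi] of kit sizes K with R*0.9*K <= Q <= R*1.1*K,
// rewritten as (10*Q)/(11*R) <= K <= (10*Q)/(9*R) using integer division only.
std::pair<int64_t, int64_t> kitRange(int64_t Q, int64_t R) {
    int64_t lo = (10 * Q) / (11 * R) + ((10 * Q) % (11 * R) != 0); // ceiling
    int64_t hi = (10 * Q) / (9 * R);                               // floor
    return {lo, hi};                        // the interval is empty whenever lo > hi
}

// Two packages of different ingredients can be paired iff their intervals intersect.
bool canPair(int64_t Q1, int64_t R1, int64_t Q2, int64_t R2) {
    auto a = kitRange(Q1, R1), b = kitRange(Q2, R2);
    return std::max(a.first, b.first) <= std::min(a.second, b.second);
}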

Then, suppose that there are only two ingredients, and consider how the corresponding packages can be "paired" to achieve the maximum number of pairs (or kits). We adopt a greedy idea and an array Q[N][P] (specifically N=2), where Q[i][j] denotes the quantity of the j-th package corresponding to the i-th ingredient. We first sort the packages in increasing order of their quantities for each ingredient, i.e., Q[i][0]<=Q[i][1]<=...<=Q[i][P-1]. Then, we enumerate from the first package to the last one for the 0-th ingredient (a simple example will be shown later). For each enumerated package Q[0][j1], we enumerate from the first package to the last one for the 1-st ingredient, and find the first one Q[1][j2] that can be paired with Q[0][j1]. Then, we break the inner loop and go back to the loop over Q[0][j1]. However, this time, when we deal with Q[0][j1+1], we should start from Q[1][j2+1] rather than Q[1][0]. The correctness of this greedy idea can be proved, but the proof is omitted here. Here is a simple example. Suppose that we have

Q[0][0], Q[0][1], Q[0][2], Q[0][3]

Q[1][0], Q[1][1], Q[1][2], Q[1][3]

We first deal with Q[0][0], and try from Q[1][0] to Q[1][3] one by one, and assume that Q[1][1] is the first one which can be "paired" with Q[0][0]. Then, we deal with Q[0][1], and try from Q[1][2] to Q[1][3], and assume that Q[1][3] is "paired" with Q[0][1]. Then, for Q[0][2] and Q[0][3], they cannot be "paired".

Finally, I introduced a "flag" array to denote whether a package has been selected to be "paired" with some other package or not, i.e., flag[N][P] where flag[i][j]=0 means that the j-th package corresponding to the i-th ingredient is not "paired" while flag[i][j]=1 means that it is "paired" with some package corresponding to the (i+1)-th ingredient. With such a "flag", I can deal with N>3 case. For Q[N][P], I enumerate from N-2 to 0, and for each Q[i][j1], I use the above greedy idea to find out which Q[i+1][j2] can be "paired" with Q[i][j1]. The 'can be "paired"' condition is: Q[i+1][j2] intersects with Q[i][j1] (recall the intervals mentioned above) and flag[i+1][j2]==1. Specifically, the row of flag[N-1][ ] is initialized with all "1" values. When Q[i][j1] is "paired", I update flag[i][j1]=1. Here is a simple example:

Q[0][0], Q[0][1], Q[0][2], Q[0][3]

Q[1][0], Q[1][1], Q[1][2], Q[1][3]

Q[2][0], Q[2][1], Q[2][2], Q[2][3]

with flag as

0, 0, 0, 0

0, 0, 0, 0

1, 1, 1, 1

We first deal with the 1-th row and 2-th row. For Q[1][0], we try from Q[2][0] to Q[2][3] and find Q[2][1] "paired" with Q[1][0], then flag is changed into

0, 0, 0, 0

1, 0, 0, 0

1, 1, 1, 1

For Q[1][1], we try from Q[2][2] to Q[2][3] and find no Q[2][] that can be "paired" with Q[1][1], so nothing is done.

For Q[1][2], we try from Q[2][2] to Q[2][3] and find Q[2][3] "paired" with Q[1][2], then the flag is changed into

0, 0, 0, 0

1, 0, 1, 0

1, 1, 1, 1

For Q[1][3], it cannot be "paired". Then, we deal with Q[0][0], and try from Q[1][0] to Q[1][3] but note that Q[1][1] and Q[1][3] must be skipped since flag[1][1]=flag[1][3]=0. Assume that Q[0][0] is not "paired", and then we move on to Q[0][1], and try from Q[1][0] to Q[1][3] (Q[1][1] and Q[1][3] are skipped), and suppose that Q[1][2] is "paired", then the flag is changed into

0, 1, 0, 0

1, 0, 1, 0

1, 1, 1, 1

For Q[0][2] to Q[0][3], they cannot be "paired".

The answer is just the number of "1" in the 0-th row of flag, i.e., 0+1+0+0=1. I think this is similar to the idea of dynamic programming. To constitute as many kits as possible, the packages corresponding to the last row (ingredient) Q[N-1][ ] must be "paired" with some other ones. Thus, we start from Q[N-2][ ] and Q[N-1][ ], and try to find out as many "pairs" as possible. Then, we deal with Q[N-3][ ] and Q[N-2][ ], but the "pairs" must be made based on the "pairs" that already exist in Q[N-2][ ], which is indicated by flag[N-2][ ]. The last row flag[N-1][ ] is initialized with all "1" since for Q[N-2][ ], any Q[N-1][ ] can be selected without any restriction. This implies that the number of "1" in flag[0][ ] is just the maximum number of kits (pairs). Finally, I deal with N=1 as a special case.
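Putting the pieces together, here is a rough sketch of my reading of the whole procedure (the input format, N, P, the N required amounts, then N rows of P quantities, is an assumption, and the N=1 branch is reduced to a simple non-empty-interval check):

#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

pair<ll, ll> kitRange(ll q, ll r) {         // inclusive K-interval of one package
    return { (10*q)/(11*r) + ((10*q)%(11*r) != 0), (10*q)/(9*r) };
}
bool canPair(ll q1, ll r1, ll q2, ll r2) {  // do the two intervals intersect?
    auto a = kitRange(q1, r1), b = kitRange(q2, r2);
    return max(a.first, b.first) <= min(a.second, b.second);
}

int main() {
    int n, p;                               // ingredients, packages per ingredient
    cin >> n >> p;
    vector<ll> R(n);
    for (auto &x : R) cin >> x;
    vector<vector<ll>> Q(n, vector<ll>(p));
    for (auto &row : Q) { for (auto &x : row) cin >> x; sort(row.begin(), row.end()); }

    if (n == 1) {                           // special case: count packages with a non-empty interval
        int ans = 0;
        for (ll q : Q[0]) { auto r = kitRange(q, R[0]); ans += (r.first <= r.second); }
        cout << ans << "\n";
        return 0;
    }
    vector<vector<int>> flag(n, vector<int>(p, 0));
    fill(flag[n-1].begin(), flag[n-1].end(), 1);   // last row: no restriction
    for (int i = n - 2; i >= 0; i--) {
        int start = 0;                      // greedy pointer into row i+1
        for (int j1 = 0; j1 < p; j1++)
            for (int j2 = start; j2 < p; j2++)
                if (flag[i+1][j2] && canPair(Q[i][j1], R[i], Q[i+1][j2], R[i+1])) {
                    flag[i][j1] = 1;
                    start = j2 + 1;         // the next j1 resumes after the matched package
                    break;
                }
    }
    cout << count(flag[0].begin(), flag[0].end(), 1) << "\n";
}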


By SummerSky, 8 years ago, In English

A. Shell Game

The solution is straightforward by swapping the integers as required by the problem.

B. Warehouse

The warehouse can be viewed as a two-dimensional array. If '+1' is met, we just enumerate the positions from the given one to the final one in the order required by the problem, and put the drink into the first empty box if any exists. If '-1' is met, we enumerate from the first position to the final one, and if a box with the same ID is found, we output the corresponding position; otherwise we output "-1 -1".

C. Fire Again

Suppose that the position at which a fire begins is denoted as (f1,f2). Then, for any position (r,c), the fire gets to it after time |r-f1|+|c-f2|, which is known as the Manhattan distance. Therefore, we can first enumerate the positions where a fire starts, and for each such position, we enumerate every feasible position and calculate the time when the fire arrives at it. As all the fires start at the same time, we should only record the shortest time at which some fire gets to each position during the above enumerating process. Finally, we enumerate all the positions again and find out which is the last one that the fires reach. The total complexity is O(NMK).

Later, I noticed that some people used another, simpler method. The formula |r-f1|+|c-f2| in fact sheds some light on this idea. It can be observed that the first term |r-f1| and the second one |c-f2| are independent of each other. Therefore, it is not necessary to enumerate positions "r" and "c" at the same time; they can be handled separately. In other words, we can first enumerate "r" and record the time when the fire gets to it, and then repeat this for "c". Finally, we find the "r" and "c" with the largest arrival times and combine them into a position, which serves as the answer.

D. Animals

As provided by the problem, c[1],c[2],...,c[n] denote the amount of food that an animal eats every day, where c[i] corresponds to an animal arriving at the farm on the i-th day. We can further calculate the total amount of food that an animal eats if it joins the farm, written as f[1],f[2],...,f[n], where f[i]=c[i]*(n-i+1). As the problem asks to find the maximum number of animals that can join the farm, it is equivalent to maximizing the number of terms in the sum f[i1]+f[i2]+...+f[im] under the condition that f[i1]+f[i2]+...+f[im]<=X. Therefore, we can first sort f[1],f[2],...,f[n] in increasing order of their values. Then, we enumerate from the first one and count the terms until the sum would exceed X. The count just serves as the answer.
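A short sketch of this greedy (64-bit integers are used just in case; the exact input limits are my assumption):

#include <bits/stdc++.h>
using namespace std;

int main() {
    long long n, X;
    cin >> n >> X;
    vector<long long> f(n);
    for (long long i = 0; i < n; i++) {
        long long c;
        cin >> c;
        f[i] = c * (n - i);                 // 0-indexed form of f[i] = c[i]*(n-i+1)
    }
    sort(f.begin(), f.end());               // cheapest animals first
    long long used = 0;
    int cnt = 0;
    for (long long v : f) {
        if (used + v > X) break;            // adding this animal would exceed the food supply
        used += v;
        cnt++;
    }
    cout << cnt << "\n";
}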


By SummerSky, 8 years ago, In English

A. Reconnaissance 2

This problem asks to find out two neighbouring elements with the minimum difference. Specifically, the difference between the first element and the last one should be taken into consideration as well.

B. Sale

Bob can only earn money by "buying" those TVs with negative price. Therefore, we can sort all the TVs in an increasing order of their price, and denote the number of TVs with negative price as M. Then, we add the prices of the first min{M,m} TVs together to obtain the final answer.

C. Page Numbers

This problem is conceptually straightforward but a little fiddly. As the largest page number does not exceed 1000, we can adopt a "hash table" (a presence array) to record which pages appear in the sequence. At first, we parse the exact page numbers from the given sequence. Then, we set the entry of the table corresponding to each parsed page number to 1 to denote that this page must be printed. Finally, we enumerate the table from the first index to the last one. Whenever we meet a "1", we check whether the next one is "1" or not. If it is, we keep moving forward until we meet a "0" and output the run in the "i-j" manner; otherwise we directly output the current page number. Take care of the positions at which a comma ',' should be output.
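A sketch of the parsing and the run-compressed output (the only assumption is that the input is a single comma-separated token of page numbers up to 1000):

#include <bits/stdc++.h>
using namespace std;

int main() {
    string s;
    cin >> s;
    for (char &c : s) if (c == ',') c = ' ';   // turn commas into spaces for easy parsing
    istringstream in(s);
    vector<bool> need(1001, false);            // the "hash table" of required pages
    int x;
    while (in >> x) need[x] = true;

    bool first = true;
    for (int i = 1; i <= 1000; i++) {
        if (!need[i]) continue;
        int j = i;
        while (j + 1 <= 1000 && need[j + 1]) j++;   // extend the run of consecutive pages
        if (!first) cout << ",";
        if (i == j) cout << i;
        else cout << i << "-" << j;
        first = false;
        i = j;                                  // skip past the whole run
    }
    cout << "\n";
}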

D. Road Map

This problem is in fact equivalent to reversing a singly linked list.

The map can be viewed as a tree, and thus the representation of the roads provides the parent node for each node. Suppose that we write the chain of parent links from r2 to the original root node r1 as r2->A->B->...->r1. When the root is changed from r1 to r2, it can be observed that only the parents of r2,A,B,...,r1 need to be altered. More specifically, the links should become r2<-A<-B<-...<-r1. Therefore, it is sufficient to reverse such a singly linked list, with complexity O(n).


By SummerSky, 8 years ago, In English

A. What is for dinner?

The problem in fact asks to find out the minimum number in each row. As the number of rows is not quite large, less than 1000, we can adopt an array to record the minimum integer in each row, i.e., whenever a new integer is added to some row, we compare it with the previously stored integer and update the entry with the smaller one. Finally, we add up the minimum integer in each row together, and obtain the possibly "maximum amount of crucians". By comparing it with k, we select the smaller one as the output.

B. String Problem

Well, I got stuck on this problem for quite a long time, until I realized that we can not only change letter A into B or B into A, but also change both A and B into some other letter C, which may achieve a smaller sum of money...

This problem can be solved based on graph, where a letter is viewed as a node. The solution can be summarized as the following several steps:

1) Whenever a possible changing from letter A to B is provided, we can build a directed edge from A to B with the money as the cost; as more than one directed edge from A to B might be provided, we should only store the one with minimum cost to avoid multiple edges between any two nodes;

2) Implement the famous "Floyd-Algorithm" to calculate the shortest path between any two nodes; here is a simple example; suppose that letter 'a' can be changed into 'b' with cost 2 while 'b' can be changed into 'c' with cost 1; then 'a' can be changed into 'c' with cost 3, which can be updated by floyd-algorithm;

3) Compare each letter in string s1 with the letter at the same position in string s2; whenever they differ, enumerate every feasible target letter from 'a' to 'z', and select the one with the minimum total cost as the common letter into which both letters should be changed (a sketch of all three steps is given below).
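Here is a sketch of these three steps (the INF sentinel, the input order s1, s2, n followed by the n possible changes, and the output format of total cost plus resulting string are my assumptions; if the two strings have different lengths, -1 is printed):

#include <bits/stdc++.h>
using namespace std;
const long long INF = 1e15;

int main() {
    string s1, s2;
    int n;
    cin >> s1 >> s2 >> n;
    vector<vector<long long>> d(26, vector<long long>(26, INF));
    for (int i = 0; i < 26; i++) d[i][i] = 0;
    for (int q = 0; q < n; q++) {              // step 1: keep only the cheapest edge a -> b
        char a, b;
        long long w;
        cin >> a >> b >> w;
        d[a-'a'][b-'a'] = min(d[a-'a'][b-'a'], w);
    }
    for (int k = 0; k < 26; k++)               // step 2: Floyd algorithm on the 26 letters
        for (int i = 0; i < 26; i++)
            for (int j = 0; j < 26; j++)
                d[i][j] = min(d[i][j], d[i][k] + d[k][j]);

    if (s1.size() != s2.size()) { cout << -1 << "\n"; return 0; }
    long long total = 0;
    string res = s1;
    for (size_t i = 0; i < s1.size(); i++) {
        if (s1[i] == s2[i]) continue;
        long long best = INF;
        char bestLetter = 0;
        for (int c = 0; c < 26; c++) {         // step 3: try every common target letter
            long long cost = d[s1[i]-'a'][c] + d[s2[i]-'a'][c];
            if (cost < best) { best = cost; bestLetter = 'a' + c; }
        }
        if (best >= INF) { cout << -1 << "\n"; return 0; }
        total += best;
        res[i] = bestLetter;
    }
    cout << total << "\n" << res << "\n";
}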

C. Wonderful Randomized Sum

At first, we consider what form the optimal choice has. In fact, we can prove that the optimal prefix and suffix need not intersect with each other: if they do, the integers belonging to the intersection are multiplied by -1 twice, which just gives their original values, and this is equivalent to choosing a new prefix and suffix that do not intersect at all. Therefore, we may assume that the optimal prefix and suffix do not intersect.

Inspired by the above arguments, we can immediately come up with a straightforward method that enumerates the ending position of the prefix and the starting position of the suffix, and finds the optimal pair. However, this takes O(n^2) complexity, and it will almost surely lead to a Time Limit Exceeded verdict. To avoid this, we can try to reduce the amount of enumeration, which works out by enumerating only the starting position of the suffix. For each starting position of the suffix, the original problem is equivalent to finding the optimal prefix, which can be precomputed. The optimal prefix is in fact the one which has the minimum sum, since after multiplying by -1 it contributes the maximum amount. However, this holds only if the minimum sum is less than 0, i.e., if no prefix sum is negative, the optimal prefix is just empty.

Suppose that we use prefix[n] to denote the sum a[0]+a[1]+...+a[n], which can be computed with complexity O(n) by using prefix[n]=prefix[n-1]+a[n]. Then, we use min_prefix[n] to denote the minimum sum of some prefix with ending position before n or at n. The initialization can be implemented by min_prefix[0]=min{a[0], 0}, and generally we have min_prefix[n]=min{prefix[n], min_prefix[n-1]}. Moreover, we use suffix[j] to denote the sum a[j]+a[j+1]+...+a[n], i.e., the sum of the suffix starting at position j, which can be calculated with complexity O(n) in a similar manner as prefix[n].

Now, we are ready to solve the problem. For each starting position j of the suffix, the current maximum sum is just (prefix[n]-suffix[j]-min_prefix[j-1]) + (-min_prefix[j-1]) + (-suffix[j]), where an empty prefix or suffix contributes 0. The maximum sum obtained during the enumeration of j serves as the final answer, which takes O(n) complexity.
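A sketch of the O(n) computation with the notation above (min_prefix[j] below stores the minimum over all prefixes ending strictly before position j, including the empty one):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<long long> a(n);
    for (auto &x : a) cin >> x;

    long long total = accumulate(a.begin(), a.end(), 0LL);
    // min_prefix[j]: minimum sum of a prefix ending strictly before position j (0 for the empty prefix)
    vector<long long> min_prefix(n + 1, 0);
    long long pref = 0;
    for (int j = 0; j < n; j++) {
        pref += a[j];
        min_prefix[j + 1] = min(min_prefix[j], pref);
    }
    long long best = LLONG_MIN;
    long long suf = 0;                        // sum of a[j..n-1]; j == n is the empty suffix
    for (int j = n; j >= 0; j--) {
        // (middle part) + (negated prefix) + (negated suffix)
        best = max(best, (total - suf - min_prefix[j]) - min_prefix[j] - suf);
        if (j > 0) suf += a[j - 1];
    }
    cout << best << "\n";
}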

D. Knights

This problem turns out to be not quite difficult once we realize that it in fact asks to compute the number of circles that contain exactly one of the two given points, i.e., the circles that separate them.

At first, we sort the circles in increasing order of their radii with complexity O(mlogm). Then, for each point, we calculate the number of circles that encircle it with complexity O(nm). Finally, we count the number of circles that encircle exactly one of the two requested points with complexity O(km). Thus, the total complexity is O(mlogm+nm+km). (I have heard of an offline Tarjan's algorithm which can be used to solve this kind of problem, also known as the Lowest-Common-Ancestor problem.)


By SummerSky, 8 years ago, In English

A. Reconnaissance

The solution to this problem is straightforward. As it is stressed that (i,j) and (j,i) are different, we can adopt a nested 2-loop to enumerate all the feasible combinations of i and j, and check whether the difference of heights satisfies the condition or not.

B. Borze

The problem is simplified since it is guaranteed that the given sequence is valid Borze code. We can enumerate the letters from the first one, and implement the decoding according to the following rules:

Rule 1): whenever we meet a '.', decode it as "0" and move to the next letter;

Rule 2): whenever we meet a '-' and the next one is '-' as well, decode it as "2" and skip the next letter and move directly to the one following it;

Rule 3): whenever we meet a '-' and the next one is '.', decode it as "1", skip the next letter and move directly to the one following it;

C. Flea

Note that at first the flea is at the center of a cell, and no matter how it jumps later, it always stays at the center of some cell. Therefore, the board can in fact be viewed as a grid of n*m points, and the flea can only jump from one point to another. Suppose that the flea starts from point (1,1); then it can reach point ( 1+(m-1)/s*s, 1 ) along the horizontal axis and point ( 1, 1+(n-1)/s*s ) along the vertical axis (note that only "integer division" is used here, i.e., 5/2=2 and not 2.5). Therefore, we have found ( (m-1)/s+1 )*( (n-1)/s+1 ) points which are reachable from each other, i.e., completely equivalent. Note that (m-1)/s*s<(m-1) might hold since we adopt "integer division", and the difference is just (m-1)%s. This implies that if the flea starts from points (2,1), (3,1),...,( 1+(m-1)%s, 1 ), it can also cover ( (m-1)/s+1 )*( (n-1)/s+1 ) points. Similarly, it leads to the same result if it starts from points (1,2), (1,3), ..., ( 1, 1+(n-1)%s ). Therefore, there are in total ( (m-1)/s+1 )*( (n-1)/s+1 ) * ( 1+(n-1)%s )*( 1+(m-1)%s ) starting positions that achieve this maximum number of reachable cells, and this count is just the answer.
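Since the whole answer is a single closed-form product, the implementation is tiny (64-bit arithmetic is used just in case the factors get large):

#include <bits/stdc++.h>
using namespace std;

int main() {
    long long n, m, s;
    cin >> n >> m >> s;
    // ((n-1)%s+1)*((m-1)%s+1) residue classes achieve the maximum coverage,
    // and each of them contains ((n-1)/s+1)*((m-1)/s+1) starting cells.
    long long ans = ((n - 1) / s + 1) * ((m - 1) / s + 1)
                  * ((n - 1) % s + 1) * ((m - 1) % s + 1);
    cout << ans << "\n";
}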

D. Constellation

At first sight, it seems that we should find out all the reasonable constellations and then sort them as the problem asks, taking out the k-th one as the answer. However, we soon realize that the number might be too large to store. In fact, we can just enumerate the constellations one by one in the order requested by the problem, by adopting a nested 3-loop. At first, we enumerate every feasible x, which will not exceed max(n,m) (in fact it might be half of max(n,m), but later you will find that this constant 1/2 is not important, as usually happens in the analysis of computational complexity), then for the second loop, we enumerate indices i from 1 to n (or 0 to n-1), and the third loop deals with j from 1 to m. Whenever we find a reasonable constellation, we increase the "counter" by 1, and when the k-th constellation is reached, we store the positions of the five stars and give the final answer, while outputting "-1" if the counter is still less than k after the last reasonable constellation is counted.


By SummerSky, 8 years ago, In English

A. Worms Evolution

As n is at most as large as 100, we can adopt a nested-3-loop to find out the answer, i.e., we enumerate all the possible positions of i, j and k in turn, and check whether the condition a[i]=a[j]+a[k] can be satisfied or not. One more thing to notice is that i, j and k should be three distinct integers.

B. Sysadmin Bob

This problem is not quite difficult but one should be very careful.

At first, notice that if '@' appears at the first position or the last position, the answer will be "No solution". Then, if there exist any two successive positions i and i+1 with s[i]=s[i+1]='@', the answer should be "No solution" as well. Except for this, if one can find out two positions i and i+2 with s[i]=s[i+2]='@', it also results in "No solution". Finally, if no '@' can be found, the answer is still "No solution".

After excluding the above cases, all the other cases should have reasonable answers. Without loss of generality, once we have reached the first letter following '@', we can immediately add ',' to obtain a reasonable solution. However, there is one exception, i.e., no ',' should be added after the last '@'.

C. Schedule

At first, we can sort the groups in an increasing order of the starting time of their lessons. Then, we enumerate the groups from the first one to the last one, and for each group, we check whether it is possible to achieve the state where no two time periods of lessons intersect by deleting this group. The test can be simply implemented by comparing the i-th group and (i+1)-th group, for every feasible i. In other words, we check whether the finishing time of group i is later than the starting time of group i+1, and if all the pairs lead to answers "NO", then it means that the remaining groups can achieve the state at which no two time periods of lessons intersect.

D. Chocolate

I solved this problem by using a divide-and-conquer technique. I also found that many people solved this problem based on DFS, but I do not quite understand the principle behind that idea. It would be very nice if anyone could shed some light on it...

I adopt an array perpendicular[x1][y1][y2] to denote that there is a line with two ends (x1,y1) and (x2=x1,y2). Similarly, I use another array parallel[y1][x1][x2] to denote a line with ends (x1,y1) and (x2,y2=y1). Then, we can start with the whole plane, i.e., (x1=0,y1=0,x2=W,y2=H), and try to calculate the areas, which consists of the following three phases:

phase 1): enumerate x from x1+1 to x2-1 (note that x1 and x2 are not included since the problem guarantees that the plane is cut into two pieces which are not empty) and check whether perpendicular[x][y1][y2]=1 or not. If perpendicular[x][y1][y2]=1, it implies that the current considered piece is cut into at least another two pieces since there is a line with two ends (x,y1) and (x,y2). Thus, we can divide it into two subproblems (x1,x,y1,y2) and (x,x2,y1,y2);

phase 2): enumerate y from y1+1 to y2-1 (similar reasons as above), and check whether parallel[y][x1][x2]=1 or not. If parallel[y][x1][x2]=1, it means that the current piece is cut into at least another two pieces by a line with ends (x1,y) and (x2,y). Thus, it can be divided into two subproblems (x1,x2,y1,y) and (x1,x2,y,y2);

phase 3): no lines cut the current piece and it stays as it is. We can directly calculate the area by (x2-x1)*(y2-y1), which is referred to as "conquer".

Finally, we sort the areas as the problem requests and output them.

E. TV Game

This problem can be solved by adopting a dynamic programming technique. We use F[x][y] to denote that for the first x+y digits, x of them are taken by person H while y of them are taken by person M, and the value of F[x][y] is the maximum sum that can be achieved. Suppose that by giving the (x+y)-th digit to H will increase the sum by number(x+y,H) while giving to M leads to an increment number(x+y,M). Then, F[x][y] is only determined by F[x-1][y] and F[x][y-1]. In detail, F[x][y]=MAX(F[x-1][y]+number(x+y,H), F[x][y-1]+number(x+y,M)). F[][] is initialized by setting F[0][1]=number(1,M) and F[1][0]=number(1,H). When updating F[x][y], we should implement this process by increasing x+y from 1 to 2n, i.e., first update all F[x][y] with x+y=2, then x+y=3, and finally x+y=2n. To record the digits taken by H and M, we can further introduce another array R[x][y], which is initialized by setting R[0][1]='M' and R[1][0]='H'. R[x][y] is updated as F[x][y] is calculated, i.e., if F[x-1][y]+number(x+y,H)>F[x][y-1]+number(x+y,M), then we have R[x][y]='H'; otherwise R[x][y]='M'. Finally, we start with F[n][n], also R[n][n], and backtrack until F[0][0] is reached. This can be implemented by comparing F[x-1][y]+number(x+y,H) and F[x][y-1]+number(x+y,M), and then we know which one to backtrack.
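A sketch of this DP: instead of the separate R[][] array I recover the assignment by backtracking over F, which is equivalent; the place value of a digit is 10^(n-x-1) when it becomes the (x+1)-th digit of a player's number, and the output is assumed to be the H/M assignment string.

#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

int main() {
    int n;
    string s;                                  // the 2n digits
    cin >> n >> s;
    vector<ll> pw(n + 1, 1);
    for (int i = 1; i <= n; i++) pw[i] = pw[i - 1] * 10;

    const ll NEG = LLONG_MIN / 4;
    // F[x][y]: maximum sum when H holds x digits and M holds y digits of the first x+y.
    vector<vector<ll>> F(n + 1, vector<ll>(n + 1, NEG));
    F[0][0] = 0;
    for (int x = 0; x <= n; x++)
        for (int y = 0; y <= n; y++) {
            if (F[x][y] == NEG || x + y == 2 * n) continue;
            ll d = s[x + y] - '0';             // the next digit to hand out
            if (x < n)                         // give it to H as his (x+1)-th digit
                F[x + 1][y] = max(F[x + 1][y], F[x][y] + d * pw[n - x - 1]);
            if (y < n)                         // give it to M as her (y+1)-th digit
                F[x][y + 1] = max(F[x][y + 1], F[x][y] + d * pw[n - y - 1]);
        }

    // Backtrack from F[n][n] to recover who received each digit.
    string who(2 * n, '?');
    int x = n, y = n;
    for (int pos = 2 * n - 1; pos >= 0; pos--) {
        ll d = s[pos] - '0';
        if (x > 0 && F[x - 1][y] != NEG && F[x - 1][y] + d * pw[n - x] == F[x][y]) {
            who[pos] = 'H'; x--;
        } else {
            who[pos] = 'M'; y--;
        }
    }
    cout << who << "\n";
}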


By SummerSky, 8 years ago, In English

A. Accounting

This problem asks to find out a feasible solution to the equation A*X^n=B. This can be solved based on the following cases:

1) A=0, B!=0: this results in no solution;

2) A=0, B=0: any integer can serve as a feasible solution;

3) A!=0, B%A!=0: this leads to no solution;

4) A!=0, B%A=0, A*B<0, n%2=0: this means that X^n should be a negative integer, which is impossible since X^n with n%2=0 is always non-negative;

5) A!=0, B%A=0, A*B<0, n%2=1: this is equivalent to finding a solution to A*X^n=(-B) and negating it, which turns out to be case 6);

6) A!=0, B%A=0, A*B>0: as the values of A and B are limited to at most 1000 in absolute value, we can enumerate X from 1 upwards until A*X^n>=B is satisfied. Then, for this X, by checking whether A*X^n=B holds or not, we obtain the final answer (a sketch covering all the cases is given after the list).
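A sketch that combines the cases (the exact problem limits and the "No solution" output text are my assumptions; the enumeration relies on |B/A| being small):

#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Try to solve A * x^n = B in integers; returns true and fills x on success.
bool solve(ll A, ll B, ll n, ll &x) {
    if (A == 0) {                     // cases 1) and 2)
        if (B != 0) return false;
        x = 1;                        // any integer works; pick one
        return true;
    }
    if (B % A != 0) return false;     // case 3)
    ll C = B / A;                     // we now need x^n == C
    ll sign = 1;
    if (C < 0) {
        if (n % 2 == 0) return false; // case 4): an even power cannot be negative
        sign = -1; C = -C;            // case 5): solve for -B and negate the answer
    }
    for (ll cand = 0; cand <= C || cand <= 1; cand++) {   // case 6): |C| is small
        ll p = 1;
        bool over = false;
        for (ll i = 0; i < n && !over; i++) { p *= cand; if (p > C) over = true; }
        if (!over && p == C) { x = sign * cand; return true; }
        if (over) return false;       // cand^n already exceeds C, larger cand only grows
    }
    return false;
}

int main() {
    ll A, B, n;
    cin >> A >> B >> n;
    ll x;
    if (solve(A, B, n, x)) cout << x << "\n";
    else cout << "No solution\n";
}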

B. Codeforces World Finals

This problem is a little complicated to deal with. To obtain the final answer, we have to check all the feasible 3*2*1=6 dates when Bob "can be born". As the number of permutation is rather small, we can previously generate it with an array

int permutation[6][3]={ {0,1,2}, {0,2,1}, {1,0,2}, {1,2,0}, {2,0,1}, {2,1,0} };

to simplify the following operations. Besides, we can store the number of days in each month by using two arrays as well, one for leap year and one for nonleap year:

int nonleap[12]={31,28,31,30,31,30,31,31,30,31,30,31};

int leap[12]={31,29,31,30,31,30,31,31,30,31,30,31};

For each feasible permutation, we generate the birthday of Bob, and first check whether the month is between 1 and 12, and then check whether the day is between 1 and the last date of this month, to guarantee that this is a reasonable birthday. Then, we compare it with the date of the final. The comparison can be implemented in the order of year, month and day. Take care that all the above operations should be implemented for leap year and nonleap year, respectively.

C. Shooting Gallery

At first, we should sort all the targets in an increasing order of the time of their appearance. From now on, we assume that the sorting has been completed and the following arguments are all based on the sorted targets.

Then, we can solve the problem based on dynamic programming. It asks to compute the maximum expected number of targets hit, and this is in fact equivalent to finding the maximum sum of pi we can obtain. We use f[n] to denote the maximum sum of pi we can obtain when the n-th target is exactly hit. Then, we consider what might happen before the n-th target is hit. We can aim at the n-th target directly from the very start while skipping the previous n-1 targets. Besides, we might have hit the i-th target, where i<n, and immediately moved to the n-th target. Therefore, f[n] can be calculated in the following manner. For each f[i], i<n, we calculate the time it takes to move from target i to target n, and if this time is no longer than the gap between their appearance times, we compare f[i]+p[n] with the current value of f[n], and select the larger one to update f[n]. Do not forget that we can aim at the n-th target from the very start. Thus, f[n] cannot be less than p[n], and p[n] can just be used to initialize the value of f[n].

Finally, we find out the maximum one of f[1],f[2],...,f[n], and it just serves as the answer.
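A sketch of this O(n^2) DP (the input format, n lines with x, y, t, p, is how I remember it, so treat it as an assumption):

#include <bits/stdc++.h>
using namespace std;

struct Target { double x, y, t, p; };

int main() {
    int n;
    cin >> n;
    vector<Target> a(n);
    for (auto &tg : a) cin >> tg.x >> tg.y >> tg.t >> tg.p;
    sort(a.begin(), a.end(),
         [](const Target &u, const Target &v) { return u.t < v.t; });   // by appearance time

    vector<double> f(n);
    double best = 0;
    for (int i = 0; i < n; i++) {
        f[i] = a[i].p;                        // aim at target i from the very start
        for (int j = 0; j < i; j++) {
            double dx = a[i].x - a[j].x, dy = a[i].y - a[j].y;
            if (sqrt(dx * dx + dy * dy) <= a[i].t - a[j].t)   // enough time to move from j to i
                f[i] = max(f[i], f[j] + a[i].p);
        }
        best = max(best, f[i]);
    }
    cout << fixed << setprecision(10) << best << "\n";
}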


By SummerSky, 8 years ago, In English

A. Spit Problem

For a camel with given parameters x[i] and d[i], he (or it) can hit at position x[i]+d[i] according to the problem. As the number of camels is at most up to 100, it is feasible to check for any two camels whether they can hit each other or not.

B. Traffic Lights

When the car arrives at the traffic light, if it is green, then the car can move on as if there were no traffic light; if it is red, the car has to wait until the light turns green, and then it can reach the destination. Therefore, the key point is in fact to determine whether the car has to wait or not, and if yes, how much time it should wait.

We can take the total time that the green light and the red light last as an entirety (or a cycle), i.e., t=g+r, and then we should find out during which cycle the car reaches the traffic light. At first, we compute k=d/((g+r)*v), where "integer division" is adopted. The value of k means that k full cycles have elapsed when the car reaches the traffic light. Then, the problem can be solved based on the following two cases:

1) (g+r)*v*k<=d && d<(g+r)*v*k+v*g: this means that when the car arrives at the traffic light, it is green, and thus the car can directly pass the light and the total time is l/v (mathematical division, or 1.0*l/v);

2) d>=(g+r)*v*k+v*g: this means that when the car reaches the traffic light, it is red, and the car has to wait for (g+r)*(k+1)-1.0*d/v, and therefore, the total time is (g+r)*(k+1)-1.0*d/v+1.0*l/v (a small sketch follows).
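A small sketch of this case analysis (the inputs are assumed to be the integers l, d, v, g, r, so k can be computed with integer division exactly as described):

#include <bits/stdc++.h>
using namespace std;

int main() {
    long long l, d, v, g, r;
    cin >> l >> d >> v >> g >> r;
    long long k = d / ((g + r) * v);          // full cycles elapsed when the car reaches the light
    double ans;
    if (d < (g + r) * v * k + v * g)          // case 1): the light is green on arrival
        ans = (double)l / v;
    else                                      // case 2): wait for the start of the next green phase
        ans = (double)((g + r) * (k + 1)) + (double)(l - d) / v;
    cout << fixed << setprecision(8) << ans << "\n";
}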

C. Mail Stamps

The indices of cities are integers from 1 to 10^9, which makes them difficult to deal with directly since they are too large. Thus, we can adopt a 'map<int, int>' (Standard Template Library in C++) to reduce the indices to a smaller range, i.e., from 1 to 10^5, since n is up to 10^5. Whenever a city is processed, we can use the function ".find(key)" to check whether this index (or key) has occurred before, and if not, we map this index to a smaller one. As ".find(key)" is implemented as a binary search (the map is typically a balanced binary search tree), the complexity O(logN) is fast enough. Moreover, we should maintain another array reverse[n] to indicate the real index corresponding to the one after mapping, i.e., reverse[value]=key, in order to output the correct route.

After the above process, the remaining problem is relatively simple. We can establish a graph, which is in fact a chain, and implement a traversal from one of its end cities to obtain the route. The starting city can be found by checking the degree of each city after the graph is established. As the problem guarantees, there should be exactly two cities with degree one, and either of them can serve as the starting city.

D. Ant on the Tree

This problem can be solved by directly visiting the leaf nodes in the requested order. At first, we can establish an adjacency matrix to indicate the connection relationship. Then, the nodes with degree one are leaf nodes (note that the root node may have degree one as well; take care of this case). Then, we can implement a BFS (Breadth First Search) to generate another array parent_node[] which stores the parent node of every node. Specifically, the parent node of the root node can be set to -1.

Next, we go to the first leaf node. This route just starts from the root node and ends at the leaf node, which is not difficult to find out. Then, we move from one leaf node leaf[i] to the next one leaf[i+1]. With the help of parent_node[], we can find the nearest common parent node of leaf[i] and leaf[i+1]. Then, the route first starts from leaf[i] and ends at this common parent node, and then moves on to leaf[i+1]. For the last leaf node leaf[k-1], the route starts from leaf[k-1] and ends at the root node. During the above process, we can count how many times one edge has been visited. If some edge has been visited for more than two times, then the answer is "-1"; while on the other hand, the requested route exists, and we can directly output the route that has been stored before.


By SummerSky, 8 years ago, In English

Well, this is a tough round...

A. Bender Problem

The description of this problem might seem a little complicated; however, it turns out not to be as difficult as it looks. In mathematical language, at first we have n points, and they can be connected only in the order in which they are given. Then, we have m rods, and each of them can be folded at any point to form an angle of 90 degrees. The problem asks whether it is feasible to connect all the given points using the provided m rods while every fold lands exactly on one of the given points.

Suppose that we connect the n points (0,1,2,...,n-1) in the order in which they are given, and denote the resulting sides as a[0],a[1],...,a[n-1]. Now, we consider the condition under which the given n points can be connected precisely by using only the provided rods. As each rod is folded at exactly one position, it covers exactly two consecutive sides among a[0],a[1],...,a[n-1]. Thus, we have the following two cases.

1) (a[0],a[1]), (a[2],a[3]),...,(a[n-2],a[n-1]) are paired, which implies that we should have rods whose lengths are a[0]+a[1], a[2]+a[3],....,a[n-2]+a[n-1]. Therefore, we can check whether the provided m rods contain all the requested lengths, and if it is "YES", the folded positions are thus 1,3,5,...,n-1;

2) (a[1],a[2]), (a[3],a[4]),...,(a[n-3],a[n-2]), (a[n-1],a[0]) are paired, which means that if the provided m rods contain the lengths a[1]+a[2], a[3]+a[4], ..., a[n-1]+a[0], the answer is "YES", and the folded positions are 2,4,6,...,n-2,0;

As the maximum length is no longer than 200 000, we can adopt the hash idea to store the number of rods of each given length, and check whether one of the above two cases can be satisfied.

B. pSort

At first, we provide some claims which might help to understand the problem. We use A->B->C to denote that A can directly interchange with B and B with C, while A cannot directly interchange with C.

Claim 1: Suppose that a chain A->X[1]->X[2]->....->X[m]->B exists for some integer m. Then, we can interchange A with B without changing the positions of the other elements.

Proof: As A->X[1]->X[2]->....->X[m]->B, by first interchanging A with X[1], we obtain X[1],A,X[2],....,X[m],B. By repeating the above operations for m+1 times, we have X[1],X[2],....,X[m],B,A. Then, we interchange B with X[m] to get X[1],X[2],....,B,X[m],A, and by repeating this operation for m times, we arrive at the state B,X[1],X[2],...,A.

According to the problem, the i-th cell can interchange with cells i+di and i-di, while taking care that these two positions may not be valid (out of range). Therefore, we can form one or several independent chains as in Claim 1. The given permutation in fact asks to interchange some A with B while not affecting the other elements. According to Claim 1, if both A and B belong to the same chain, they can successfully interchange with each other. Therefore, the problem is equivalent to finding out whether each pair of elements that are requested to interchange with each other belongs to the same chain or not.

We can adopt the union-find idea to solve the problem. At first, the root of each element is itself, i.e., each element belongs to its own chain. When we deal with i and di, we may find that two elements can interchange with each other and thus should belong to the same chain. To achieve this, we find the roots of the two elements, and connect one root to the other (which one serves as the root does not matter). Finally, when we deal with the permutation sequence, we just check the roots of any two elements that should interchange with each other, and if they are the same, the answer is "YES"; otherwise it is "NO".
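A sketch of this union-find solution (path compression only, which is more than enough here; the input order, n, the target permutation, then d[], follows the statement as I remember it):

#include <bits/stdc++.h>
using namespace std;

vector<int> parent;
int findRoot(int x) { return parent[x] == x ? x : parent[x] = findRoot(parent[x]); }
void unite(int a, int b) { parent[findRoot(a)] = findRoot(b); }

int main() {
    int n;
    cin >> n;
    vector<int> p(n), d(n);
    for (auto &x : p) cin >> x;               // target permutation, 1-based values
    for (auto &x : d) cin >> x;
    parent.resize(n);
    iota(parent.begin(), parent.end(), 0);
    for (int i = 0; i < n; i++) {             // cell i may swap with cells i - d[i] and i + d[i]
        if (i - d[i] >= 0) unite(i, i - d[i]);
        if (i + d[i] < n)  unite(i, i + d[i]);
    }
    for (int i = 0; i < n; i++)               // value p[i] must travel from cell p[i]-1 to cell i
        if (findRoot(i) != findRoot(p[i] - 1)) { cout << "NO\n"; return 0; }
    cout << "YES\n";
}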


By SummerSky, 8 years ago, In English

A. Next Test

As the range of input size is only up to 3000, we can maintain an array to indicate which number has appeared in the given test. Then, we enumerate this array from index 1 to n, and the first number that cannot be found is the answer.

B. Tournament

It seems that there exist some other, simpler methods than mine... The key idea of my solution is to build a directed graph. The players can be viewed as nodes in a directed graph, and node A has an edge directed to node B if we can find a match in which A wins against B. Then, we can maintain an array to count the number of matches that each player has participated in. It is straightforward that we will find exactly two players who have only participated in n-2 matches while all the other ones have participated in n-1 matches. After finding these two players, without loss of generality, we denote them as A and B, and then we implement "graph traversal" twice, with the starting points A and B, respectively. If we can reach node B from node A, it means that A wins against B, and if we can reach node A from node B, it implies that B wins against A. However, we might arrive at the third case where we can reach neither B from A nor A from B. In this case, we cannot tell who wins against whom based on the results of the provided matches, and we can select either result as the answer.

C. Unordered Subsequence

It is easy to see that if an unordered subsequence exists, the shortest length is always 3. Assume that the shortest length were some integer larger than 3, and without loss of generality write the subsequence as x[1],x[2],...,x[m], where m>3. Now, let us focus on the first three elements. Their relationship can only be x[1]>x[2] && x[2]<x[3], or x[1]<x[2] && x[2]>x[3], since otherwise we could achieve a shorter length by deleting x[1]. However, this already shows that the shortest length is 3, since it is enough to keep x[1], x[2] and x[3] only while deleting all the following elements. For the following arguments, we denote the three indices as 'left', 'middle' and 'right', which are initialized as '-1'.

Now, we can first enumerate the sequence and when we find that the next number is larger or smaller than the current one, we stop and adopt a 'flag' to indicate which case it belongs to. For instance, if a[i+1]>a[i], we set flag=1, while if a[i+1]<a[i], we set flag=-1. Next, we enumerate the sequence from the first element again, and we have the following cases.

1) a[i+1]>a[i], if flag==1, then we can set left=i and middle=i+1; otherwise, we set middle=i and right=i+1 and stop the enumeration;

2) a[i+1]<a[i], if flag==1, then we set middle=i and right=i+1 and cease the enumeration; otherwise we set left=i and middle=i+1.

Finally, we check whether all 'left', 'middle' and 'right' have been given reasonable values, if it is, output them as the answer; otherwise we can conclude that no such unordered subsequence exists.

D. Ring Road 2

I think this is well known as the "2-sat" problem. I hope that the following arguments can serve as a general method to solve such problems.

At first, for two given roads, we need to judge whether they are mutually exclusive, i.e., if they have to be built as 'io' or 'oi', they are mutually exclusive. For instance, roads [4,6] and [1,5] are mutually exclusive, and we cannot build both of them inside or both of them outside the ring. Without loss of generality, we denote any two roads as [a1,b1] and [a2,b2]. It is obvious that if they have common ends (a1==a2 || a1==b2 || b1==a2 || b1==b2), they are not mutually exclusive; otherwise, we can enumerate from a2 to b2 and if we meet exactly one single end belonging to road [a1,b1], then they are mutually exclusive. Be careful that this enumeration should be done by implementing a 'modulo operation' based on the number of cities 'n', so that it does not matter even if a2>b2. It is important to notice that when we obtain '0' during the 'modulo operation', it in fact means that we have reached position 'n', which should be handled specially.

Then, for every road, we find out its mutually exclusive roads, which can be done with complexity O(m^2). We can build an adjacency matrix E[m][m] to store the results and thus establish a graph; for instance, if road A is mutually exclusive with road B, then E[A][B]=E[B][A]=1. Next, we implement breadth-first search (BFS) based on this graph, and when we meet a new node, set it as 'o' if its parent node is 'i' and vice versa. The root node is free to choose 'i' or 'o' (it can be proved that this will not affect the final answer). After all the nodes (or roads) in the graph have been visited, they are exactly divided into two groups with types 'i' and 'o'. The last thing to do is to check again whether there are any two roads that are mutually exclusive in one single group.

The above arguments can be summarized as the following steps (a code sketch is given after the list):

  1. find out a method to judge whether two nodes are mutually exclusive;

  2. build a graph, where an edge means a mutually exclusive relationship between two nodes;

  3. implement BFS and guarantee that all the nodes have been visited, while dividing the nodes into two groups;

  4. for each group, check whether there are mutually exclusive nodes.
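Here is a sketch following these steps; the only difference is that a conflict is detected during the BFS itself instead of in a separate final check, which is equivalent. The exact output format ('i'/'o' string versus something else) is my assumption.

#include <bits/stdc++.h>
using namespace std;

int n, m;

// Two roads exclude each other iff, walking around the ring strictly between a2 and b2,
// we meet exactly one end of the road [a1,b1] (roads sharing an end never conflict).
bool exclusiveRoads(int a1, int b1, int a2, int b2) {
    if (a1 == a2 || a1 == b2 || b1 == a2 || b1 == b2) return false;
    int cnt = 0;
    for (int v = a2 % n + 1; v != b2; v = v % n + 1)   // "% n + 1" walks the ring 1..n safely
        if (v == a1 || v == b1) cnt++;
    return cnt == 1;
}

int main() {
    cin >> n >> m;
    vector<int> A(m), B(m);
    for (int i = 0; i < m; i++) cin >> A[i] >> B[i];

    vector<vector<int>> adj(m);                        // edge = mutual exclusion
    for (int i = 0; i < m; i++)
        for (int j = i + 1; j < m; j++)
            if (exclusiveRoads(A[i], B[i], A[j], B[j])) {
                adj[i].push_back(j);
                adj[j].push_back(i);
            }

    vector<int> side(m, -1);                           // 0 = 'i' (inside), 1 = 'o' (outside)
    for (int s = 0; s < m; s++) {
        if (side[s] != -1) continue;
        side[s] = 0;                                   // the root of each component is free to choose
        queue<int> q;
        q.push(s);
        while (!q.empty()) {
            int u = q.front(); q.pop();
            for (int v : adj[u]) {
                if (side[v] == -1) { side[v] = side[u] ^ 1; q.push(v); }
                else if (side[v] == side[u]) { cout << "Impossible\n"; return 0; }
            }
        }
    }
    for (int i = 0; i < m; i++) cout << (side[i] ? 'o' : 'i');
    cout << "\n";
}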

E. Number With The Given Amount Of Divisors

A little difficult problem for me...

At first, we should figure out how to compute the number of divisors for some given integer N. According to the Fundamental Theorem of Arithmetic, N can be uniquely written as N=(p1^n1)*(p2^n2)*..., where pi is a prime number. For any divisor of N, it must be some combination of p1, p2,.... Therefore, the total number of divisors can be calculated as n=(n1+1)*(n2+1)*...

The problem asks to find the minimum number which has exactly n divisors, where n takes values in [1,1000]. As ni=0 means that N does not have the prime divisor pi, we only consider those prime divisors which have ni>=1. Thus, ni+1>=2, and it can be observed that (n1+1)*(n2+1)*...*(n10+1)>=1024>1000. This implies that it is sufficient to consider only the first ten primes, i.e., {2,3,5,7,11,13,17,19,23,29}.

With the above arguments, we can adopt Dynamic Programming to solve the problem. Let us use dp[i][j] to denote the minimum number which has exactly i divisors and exactly the first j prime divisors, where i>=1 and j>=1 to simplify the description. Note that "has exactly the first j prime divisors" means that n1, n2,...,nj should all take values larger than zero. Now we try to figure out how to calculate dp[i][j]. In fact, dp[i][j] can be uniquely determined based on dp[1][j-1], dp[2][j-1],...,dp[1000][j-1]. The reason can be clearly seen through the following simple example. Suppose that N=(2^4)*(3^2)*(7^2) were the minimum number which has (4+1)*(2+1)*(2+1) divisors; this is impossible, since we can immediately find a smaller N'=(2^4)*(3^2)*(5^2). This example implies that the prime divisors cannot "jump" but must "show up" exactly one by one.

Therefore, once dp[1][j-1], dp[2][j-1],...,dp[1000][j-1] are determined, dp[i][j] is determined as well. Thus, we can calculate dp[i][j] from every one of dp[1][j-1], dp[2][j-1],...,dp[1000][j-1], and take the minimum one. For dp[r][j-1], if r is a divisor of i and i/r-1>0, then we can obtain a reasonable dp[i][j] from dp[r][j-1], i.e., dp[i][j]=dp[r][j-1]*(p[j])^(i/r-1), where p[1]=2, p[2]=3, p[3]=5,...,p[10]=29. By enumerating all the feasible dp[r][j-1], we can find the minimum one and update dp[i][j] with this minimum number. However, the term (p[j])^(i/r-1) is likely to take a very large value. As the problem ensures that the answer will not exceed 10^18, we can implement the multiplication step by step, i.e., whenever we find an appropriate dp[r][j-1], we start from dp[r][j-1]*(p[j])^(1), then dp[r][j-1]*(p[j])^(2), then dp[r][j-1]*(p[j])^(3), and so on, and if some dp[r][j-1]*(p[j])^(k)>10^18, we can immediately terminate the calculation since this is definitely not the answer. Finally, the initialization can be done by setting dp[r][1]=2^(r-1) (of course, the 10^18 limitation should be considered). Note that some dp[i][j] may not have a reasonable value, for instance dp[2][2], since such a number is at least 2^1*3^1 and thus has at least 4 divisors. For simplicity, we can assign -1 to such positions.

Now, the problem is equivalent to finding the minimum number among dp[n][1], dp[n][2],...dp[n][10], which serves as the final answer.
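A sketch of this DP (0 is used in place of the -1 marker for "no value"; the multiplication is done step by step against the 10^18 bound exactly as described):

#include <bits/stdc++.h>
using namespace std;
typedef unsigned long long ull;
const ull LIMIT = 1000000000000000000ULL;      // the answer is guaranteed not to exceed 10^18

int main() {
    int n;
    cin >> n;
    const int p[10] = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29};
    // dp[i][j]: minimum number with exactly i divisors built from the first j primes (0 = none)
    vector<vector<ull>> dp(1001, vector<ull>(11, 0));
    for (int i = 1; i <= 1000; i++) {          // dp[i][1] = 2^(i-1), as long as it fits
        ull v = 1;
        bool ok = true;
        for (int e = 1; e < i && ok; e++) { if (v > LIMIT / 2) ok = false; else v *= 2; }
        if (ok) dp[i][1] = v;
    }
    for (int j = 2; j <= 10; j++)
        for (int i = 1; i <= 1000; i++)
            for (int r = 1; r < i; r++) {
                if (i % r != 0 || dp[r][j - 1] == 0) continue;
                int e = i / r - 1;             // exponent of the j-th prime p[j-1]
                ull v = dp[r][j - 1];
                bool ok = true;
                for (int t = 0; t < e && ok; t++) {
                    if (v > LIMIT / p[j - 1]) ok = false; else v *= p[j - 1];
                }
                if (ok && (dp[i][j] == 0 || v < dp[i][j])) dp[i][j] = v;
            }
    ull best = 0;
    for (int j = 1; j <= 10; j++)
        if (dp[n][j] != 0 && (best == 0 || dp[n][j] < best)) best = dp[n][j];
    cout << best << "\n";
}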


By SummerSky, 8 years ago, In English

A. Almost Prime

This problem can be viewed as a special case of a more general problem, which asks to find the number of prime divisors of some given positive integer N. In general, N can be written as N=(p1^n1)*(p2^n2)*(p3^n3)*..., which is well known as the "Fundamental Theorem of Arithmetic", where pi is a prime divisor. Thus, we can test from 2 to N-1, and whenever a prime divisor pi is found, we keep dividing N by pi, updating N as N=N/pi, until the updated N is no longer a multiple of pi. Meanwhile, we count the number of different pi, and if the result is two, it means that N is almost prime. Note that when we enumerate from 2 to N-1, it is guaranteed that only prime divisors of N will be counted. To prove this, suppose that an integer M which is a divisor of N but not a prime one is found. Then, M can be written as M=(p1^m1)*(p2^m2)*... As we enumerate the divisors from smaller ones to larger ones, each pi must have been found before M is reached, so that by the time we reach M, all of its prime factors have already been divided out and M no longer divides the remaining value, which contradicts the assumption.

B. Suppose that we enumerate the bracket sequence from left to right. A pair of regular brackets will be found whenever we meet a ')', provided that at least one unpaired '(' has been met before. As this ')' should be paired with the nearest '(', that '(' cannot be used to pair with any following ')'. The above process can be implemented simply by keeping a variable 'B' that stores the number of currently unpaired '(': when a '(' is met we increase B by one, and when a ')' is met we decrease it by one. However, one should be careful that when B=0, no decrement should be performed, since in this situation no '(' is available to be paired with the current ')'. The final answer is twice the number of decrements performed on B.
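The whole solution fits in a few lines:

#include <bits/stdc++.h>
using namespace std;

int main() {
    string s;
    cin >> s;
    int open = 0, pairs = 0;
    for (char c : s) {
        if (c == '(') open++;                       // one more unpaired '('
        else if (open > 0) { open--; pairs++; }     // match ')' with the nearest unpaired '('
    }
    cout << 2 * pairs << "\n";                      // length of the longest regular subsequence
}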

C. Parquet

At first, we try to figure out whether it is possible or impossible, and we can consider the following four cases.

1) Both m and n are odd numbers. It is straightforward that the answer is impossible, since every kind of plank covers 2 or 4 squares, and thus it is impossible to perfectly cover a floor whose area is an odd number;

2) m is an odd number while n is an even number. Let us focus on one single row, and suppose that we only use planks of sizes 1*2 and 2*2. Note that such a plank always contributes 2 cells to the current row whenever it is used to cover it. Therefore, we have to use at least one plank of size 2*1 if we want to cover the current row perfectly. As a 2*1 plank covers two rows, we need at least n/2 planks of size 2*1. Therefore, if the number of planks of size 2*1 is less than n/2, the answer will be impossible; otherwise it might be possible (but not surely). Furthermore, if it is possible, we can first put n/2 planks of size 2*1 in the last column, without loss of generality. Then, m becomes m-1, which is an even number, and it turns out to be the same as case 4);

3) m is an even number but n is an odd number. As in case 2), we apply similar arguments and will find that if the number of planks with size 1*2 is less than m/2, it is impossible to cover the whole floor perfectly; otherwise, it might be possible. Furthermore, we can put m/2 planks with size 1*2 first, and turn it into case 4) again;

4) Both m and n are even numbers. We can solve the problem in such a manner:

step 1: we put planks with size 2*2 from the upper left corner of the floor first to the right bottom corner; if the number of such planks is sufficient to cover the whole floor, then we are done; otherwise, we go to step 2;

step 2: we combine two planks with size 1*2 to form a virtual plank with size 2*2, and continue as step 1 indicates; if the whole floor is covered, then we are done; otherwise go to step 3;

step 3: we combine two planks with size 2*1 to form a virtual plank with size 2*2, and continue as step 1 indicates; if the whole floor is covered, then we are done; otherwise it means that the answer is impossible.

I think that the above steps work efficiently and correctly, but I cannot figure out how to prove it... Besides this, I find it not quite easy to explicitly output a pattern (as the problem asks) which covers the floor perfectly. I adopt a rather complicated method. I start from the upper left corner and implement the above steps, and whenever I put down a plank, I check the "indicators" (lower-case letters such as 'a', 'b', etc.) of the adjacent cells, and as there are at most 8 adjacent cells, it is always feasible to select a letter that can be used to mark the current plank. However, this is somewhat complicated... If anyone has a better solution, it would be very nice if you could share your ideas.


By SummerSky, 8 years ago, In English

A. IQ test

For a given array consisting of positive integers, the problem asks to find the unique number whose value modulo 2 differs from that of all the other ones. We can count the number of odd integers and even integers as num_o and num_e, respectively, while storing one position at which an odd number appears and one at which an even number appears. As the "unique" number appears only once, exactly one of num_o==1 and num_e==1 holds, and we output the corresponding stored position as the answer.

B. Phone numbers

This problem can be solved through a classification based on the value of n modulo 3, which contains the following cases:

1) n%3==0; the phone number can be separated into groups of 3 digits;

2) n%3==1; we can separate the first 4 digits into 2 groups of 2 digits, while the following digits can be separated into groups of 3 digits;

3) n%3==2; the first 2 digits of the phone number are output as one group, and the following digits can be separated into groups of 3 digits.

C. Roads in Berland

The solution is similar to the famous Floyd algorithm. For two given cities S and T, let us consider how their minimum distance can be altered if a new road between A and B is built. We can imagine that there is a map which describes the roads between each pair of cities (if they exist), although this map is not explicitly known or given. For any given route from S to T, if this route does not use the newly built road between A and B, the minimum distance between S and T stays the same, since nothing changes. Thus, the minimum distance might be updated if and only if the newly built road between A and B is used. In that case, the route from S to T can be divided into three independent parts: S to A, A to B and B to T. Note that according to the minimum-distance table, the best distance of such a route is d_m[S,A]+d_new[A,B]+d_m[B,T], where d_new[A,B] denotes the length of the newly built road between A and B. Be careful that there is another feasible route which uses the newly built road in the opposite direction: S to B, B to A and A to T, which contributes a distance of d_m[S,B]+d_new[B,A]+d_m[A,T].

Finally, we should also notice that the newly built road may be "longer" than the old path. Therefore, the minimum distance between S and T should be d_m[S,T]=MIN{d_m[S,T], d_m[S,A]+d_new[A,B]+d_m[B,T], d_m[S,B]+d_new[B,A]+d_m[A,T]}. We have to implement the above operations for every pair of cities, which gives complexity O(n^2). As K roads are newly built, the total complexity is O(n^2*k). A last question to consider is the range of the final answer. Let us construct a worst case. Suppose that the cities form a linked list, where each node has exactly one single father node (except for the head node) and one child node (except for the tail node). Then, the largest answer can be

1000*(n*(n-1)/2+(n-1)*(n-2)/2+...+2*1/2)=1000/2*( (n-1)^2+(n-1)+(n-2)^2+(n-2)+...+1^2+1 )=1000/2*n*(n-1)*(n+1)/3>2^32 (choosing n=1000). Therefore, we should adopt the "long long int" type to compute the final answer.
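A sketch of the update step (the output after each new road is assumed to be the sum of the shortest distances over all unordered pairs, which is how I remember the statement):

#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

int main() {
    int n;
    cin >> n;
    vector<vector<ll>> d(n, vector<ll>(n));
    for (auto &row : d) for (auto &x : row) cin >> x;   // current all-pairs shortest distances

    int k;
    cin >> k;
    while (k--) {
        int a, b;
        ll c;
        cin >> a >> b >> c;
        a--; b--;
        ll sum = 0;
        for (int s = 0; s < n; s++)
            for (int t = 0; t < n; t++) {
                // a route can only improve by crossing the new road in one of the two directions
                ll via = min(d[s][a] + c + d[b][t], d[s][b] + c + d[a][t]);
                d[s][t] = min(d[s][t], via);
                sum += d[s][t];
            }
        cout << sum / 2 << (k ? " " : "\n");            // every unordered pair was counted twice
    }
}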

D. Roads not only in Berland

I think I may have used a somewhat complicated method. One can refer to Union-Find (I am not sure of the exact terminology) for a simpler solution.

I adopt DFS (depth first search) for this problem. During DFS, when we meet a node that has been visited, it means that this edge (which might be referred to as a backward edge) is not necessary to keep the current component connected, and thus this edge can be removed. However, there are several details that should be considered (all the following arguments are based on DFS).

1) Suppose that node A is the parent node of node B, and node B is the currently processed node. As the edge is not directed, node A will be visited again, but we cannot say that this edge is not necessary. The edge is not necessary if and only if the following two conditions are satisfied: a) a node that has been visited is met; b) this node is not the parent node of the currently processed node;

2) Any backward edge may be visited twice. Thus, they should be stored only once when it is visited for the first time;

3) As the graph may consist of several connected components, the DFS is likely to be implemented several times. Whenever a new DFS is started, we can select any single node belonging to this component as the "delegate";

4) Store all the backward edges and all the "delegates". The final answer is: remove all the backward edges and build new edges to connect every connected-component.


By SummerSky, 8 years ago, In English

A. Ring road

This problem gives us a ring with n nodes and n directed edges, while asking to reverse several edges to build a strongly connected ring (graph) at the minimum cost. We use Sr to denote the total cost of the edges we reverse during the following pass. Without loss of generality, we can start with node 1 and implement a DFS. During this process, we may meet a situation where no further node can be visited, because the current edge is not directed towards the next node along the ring. Whenever this occurs, we just reverse the edge so that we can move on to the next node, while adding the cost of this edge to Sr.

The above DFS in fact only provides one feasible way to build a strongly connected graph, with a total cost of Sr. If we instead keep those edges in their original directions and reverse all the other edges, we obtain the other strongly connected orientation (recall that this is a ring), with a total cost of St-Sr, where St denotes the sum of the costs of all the directed edges. Therefore, the final step is to select the minimum of {Sr, St-Sr} as the answer.

B. F1 Champions

This problem is not quite difficult but a little bit complicated. One may adopt a "struct" (C/C++) to denote each player, which consists of an integer storing the scores, an array storing the number of each rank, and a string denoting the player's name. In each race, whenever we deal with a player, we first check whether the name of this player has been recorded in the "struct". If he or she is a new one, we store him or her in a new "struct" and update the corresponding results; otherwise, we find the index of this player and update the results. Finally, output the answer as the problem asks.

C. Sequence of points

For simplicity, we first focus on the one-dimensional case. It is straightforward to find that M[1]=2A[0]-M[0], M[2]=2A[1]-M[1], ..., M[n]=2A[n-1]-M[n-1]. By some careful calculation (note that the problem guarantees that n is odd, which is why the alternating sum below ends with +2A[0]), we have M[n]=2A[n-1]-2A[n-2]+2A[n-3]-...-2A[1]+2A[0]-M[0], and we further write A=2A[n-1]-2A[n-2]+2A[n-3]-...-2A[1]+2A[0]. Then, we have M[n+1]=2A[n mod n]-M[n]=2A[0]-M[n], M[n+2]=2A[(n+1) mod n]-M[n+1]=2A[1]-M[n+1], ..., M[2n]=2A[n-1]-M[2n-1]. By some careful computation, we can obtain M[2n]=2A[n-1]-2A[n-2]+2A[n-3]-...-2A[1]+2A[0]-M[n]=A-M[n]=A-(A-M[0])=M[0]. Therefore, M[2n]=M[0], i.e., M[j]=M[j mod 2n]. Thus, we can directly calculate M[j mod 2n] instead of M[j]. As n is of order 10^5, it is sufficient to calculate from M[1] to M[j mod 2n] one by one. For the two-dimensional case, the above arguments still hold coordinate-wise.
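A minimal C++ sketch of this computation might look as follows; I assume the input order is n and j, then M[0], then the n points A[i], which should be double-checked against the statement.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    long long n, j;
    if (scanf("%lld %lld", &n, &j) != 2) return 0;
    long long mx, my; scanf("%lld %lld", &mx, &my);        // M[0]
    static long long ax[200005], ay[200005];
    for (long long i = 0; i < n; ++i) scanf("%lld %lld", &ax[i], &ay[i]);
    long long steps = j % (2 * n);                         // M[j] = M[j mod 2n]
    for (long long i = 1; i <= steps; ++i) {
        // M[i] is the reflection of M[i-1] through A[(i-1) mod n]
        mx = 2 * ax[(i - 1) % n] - mx;
        my = 2 * ay[(i - 1) % n] - my;
    }
    printf("%lld %lld\n", mx, my);
    return 0;
}
```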

E. Berland collider

I learned a lot from this problem!

I think we should implement some preprocessing first. We enumerate the particles from left to right and denote the position of the first particle that moves to the right as PoR. Similarly, we enumerate from right to left and find the position of the first particle moving to the left as PoL. Obviously, if PoR>PoL, no particles will ever collide. On the other hand, PoR<PoL means that we only need to take the particles between PoR and PoL into consideration.

The key idea of the solution is to implement a binary search in terms of time. Suppose that the first collision occurs before some time T (an upper bound). It is then sufficient to run a binary search on the time interval [0, T]. An appropriate value of T can be found by computing the time when the particles at positions PoR and PoL would collide. For each time interval [TL, TR], we test whether a collision has occurred by time Tm=(TL+TR)/2. This can be done by enumerating the particles from left to right one by one. When we meet a particle moving to the right, we update the maximum position Pmax that can be reached at time Tm. On the other hand, whenever a particle moving to the left is met, we denote the position it can reach at time Tm as Pleft, and compare it with Pmax. If Pleft>Pmax, it means that there is still no collision, and we move on to deal with the next particle; otherwise, a collision has occurred. Then, if a collision has occurred by Tm, the time interval for the next binary search step should be [TL, Tm]; otherwise we should set it to [Tm, TR].

Finally, it is necessary to consider the precision problem, due to the floating-point calculation involved in the binary search, since otherwise we might get stuck in an infinite loop. A general idea is to limit the number of binary search iterations to some value M. This value M in fact determines the time resolution: the given interval [0, T] is discretized into about 2^M sub-intervals of (approximately) equal length, so the resolution is T/2^M. To achieve the requested precision 10^(-9), we need T/2^M<10^(-9). Note that 36 iterations (from 1/16^9=1/2^36<10^(-9)) would only be enough if T were of order 1; for a larger upper bound T, a correspondingly larger fixed number of iterations (say, around 100) is the safe choice.
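Here is a minimal C++ sketch of the check and the binary search; the Particle fields and the fixed iteration count are my own choices, the particles are assumed to be kept in left-to-right order, and the preprocessing that keeps only the particles between PoR and PoL (and decides whether any collision is possible at all) is omitted.

```cpp
#include <bits/stdc++.h>
using namespace std;

struct Particle { double x, v; char dir; };   // position, speed, 'R' or 'L'

// true if some collision has already happened by time t
// (p must be ordered by increasing position x)
bool collided(const vector<Particle>& p, double t) {
    double pmax = -1e30;                      // rightmost point reached by an 'R' particle so far
    for (const auto& q : p) {
        if (q.dir == 'R') pmax = max(pmax, q.x + q.v * t);
        else if (q.x - q.v * t <= pmax) return true;   // an 'L' particle falls behind Pmax
    }
    return false;
}

double earliestCollision(const vector<Particle>& p, double T) {
    double lo = 0, hi = T;                    // a collision is assumed to happen by time T
    for (int it = 0; it < 100; ++it) {        // fixed iteration count avoids precision traps
        double mid = (lo + hi) / 2;
        if (collided(p, mid)) hi = mid; else lo = mid;
    }
    return hi;
}
```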

By SummerSky, history, 8 years ago, In English

A. You're Given a String...

The problem asks to find such a substring that has the longest length and occurs at least two times, which is in fact equivalent to looking for two substrings (they might overlap with each other) of the longest length that are exactly the same. The solution can be simplified once one notices that the string consists of lower-case Latin letters only, i.e., a, b, ..., z. The idea behind the simplification is that if two substrings are exactly the same, then their first letters must be the same as well. Thus, we can first use 26 arrays to store the positions at which each letter appears. For instance, we can assign one vector (in C++ STL) to each letter, and "push_back" the corresponding indices. This can be done by traversing the string a single time, with complexity O(N) (we use N to denote the length of the string).

For each vector, we enumerate all feasible combinations of two indices (or positions), start two substrings at these two positions, and extend both of them one letter at a time. As the two substrings must stay exactly the same, we stop as soon as the next letters differ, and store the current length. For instance, suppose that letter "a" appears at positions {1,5,7}. Then, we should check {1,5}, {1,7} and {5,7}. Take {1,7} as an example: we check whether the two substrings {s[1],s[2]} and {s[7],s[8]} are the same or not. If they are the same, then we move on to check {s[1],s[2],s[3]} and {s[7],s[8],s[9]}; otherwise, we stop. Finally, we output the maximum length as the answer.

Now, we calculate the complexity of the above solution. We denote the length of the 26 vectors as x1,x2,...,x26. For each vector, we should check xi*(xi-1)/2 combinations. For each combination, we will check at most 2N letters. Thus, the total complexity is {x1*(x1-1)/2+x2*(x2-1)/2+...+x26*(x26-1)/2}*2*N=O(N^3).
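For concreteness, a short C++ sketch of this brute force follows; the buffer size and I/O format are my own assumptions, and the string in the original problem is short enough for O(N^3) to pass.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    static char s[5005];
    if (scanf("%s", s) != 1) return 0;
    int n = strlen(s);
    vector<int> pos[26];                           // positions of each letter
    for (int i = 0; i < n; ++i) pos[s[i] - 'a'].push_back(i);
    int best = 0;
    for (int c = 0; c < 26; ++c)
        for (size_t a = 0; a < pos[c].size(); ++a)
            for (size_t b = a + 1; b < pos[c].size(); ++b) {
                int i = pos[c][a], j = pos[c][b], len = 0;
                // extend both substrings while they match
                while (j + len < n && s[i + len] == s[j + len]) ++len;
                best = max(best, len);
            }
    printf("%d\n", best);
    return 0;
}
```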

B. Party

Well, this problem is somewhat marvellous...

We first prove that the answer cannot be n. Note that each person can only have friends with number of 0,1,...,n-1. Therefore, when we count from 0 to n-1, some of them must leave.

Next, we prove that the answer cannot be n-1. Suppose that exactly one person leaves, when we count at x. This means that for the n-1 people who stay, the number of friends is reduced by one for x of them, while nothing changes for the other n-1-x people. For reasons similar to the first case, the n-1-x "unchanged" people cannot all stay at the party, which contradicts our initial assumption. The only escape is n-1-x=0, which gives x=n-1, i.e., the single person leaves when we count at n-1. This implies that the other n-1 people all have n-1 friends, since none of them leaves when we count from 0 to n-2. Nevertheless, this also means that when we count at n-1, all the n people will leave, which contradicts the initial assumption again.

Thirdly, we show that n-2 is achievable. Actually, the above two cases already give a hint. When some people leave at count x, the number of friends is reduced for some of the people who stay, and it is exactly these people that have a chance to survive until the end; the people whose friend count is unchanged cannot all survive, for the same reason as in the first case. The point is that the "changed" people may end up with no more than x friends after the others leave, and as we count from x+1 to n-1, they will never leave. In a word, we should find the minimum number of people that are "sacrificed" so that the others survive till the end.

Here is one feasible construction. We let two persons have n-2 friends each (they are friends with everyone except each other), while the other n-2 people have n-1 friends each. Then, when we count from 0 to n-3, all people stay at the party; when counting at n-2, the two persons leave, and the n-2 people who stay now have n-1-2=n-3 friends each. When we count at n-1, nobody has n-1 friends anymore, so all of them survive.

Finally, my question is: how could one come up with such a solution in the first place? I think it is much harder to analyze this problem without any prior knowledge than to reconstruct the reasoning once told that the answer is n-2. The above solution is essentially a process of elimination, but how would one realize that the cases to try are n, n-1, n-2 rather than 0, 1, ...?

C. Oranges and Apples

I think this is a delicately designed problem! At first we introduce some preliminaries. Suppose that there is a sequence x[1],x[2],...,x[2*N-1], which is sorted in decreasing order, i.e., x[1]>x[2]>x[3]>...>x[2*N-2]>x[2*N-1] (the following derivation still holds if any ">" is replaced with ">="). Then, we have the following two results:

1) First rewrite x[1]>x[2]>x[3]>...x[2*N-2]>x[2*N-1] as x[1]>x[2],x[3]>x[4],x[5]>x[6],...,x[2*N-3]>x[2*N-2],x[2*N-1]>0. Then, add the terms on the left hand side and right hand side, respectively, which gives x[1]+x[3]+x[5]+...+x[2*N-1]>x[2]+x[4]+x[6]+...+x[2*N-2]. Suppose that the total sum is S=x[1]+x[2]+x[3]+...+x[2*N-2]+x[2*N-1]. It can then be seen that x[1]+x[3]+x[5]+...+x[2*N-1]>S-(x[1]+x[3]+x[5]+...+x[2*N-1]), which is equivalent to x[1]+x[3]+x[5]+...+x[2*N-1]>S/2.

2) Rewrite x[1]>x[2]>x[3]>...x[2*N-2]>x[2*N-1] as x[1]>0,x[2]>x[3],x[4]>x[5],...,x[2*N-2]>x[2*N-1]. Then, add the terms on the left hand side and right hand side, respectively, and we obtain x[1]+x[2]+x[4]+...+x[2*N-2]>x[3]+x[5]+x[7]+...+x[2*N-1]. Recall that S=x[1]+x[2]+x[3]+...+x[2*N-2]+x[2*N-1], which implies that x[1]+x[2]+x[4]+...+x[2*N-2]>S-(x[1]+x[2]+x[4]+...+x[2*N-2]), i.e., x[1]+x[2]+x[4]+...+x[2*N-2]>S/2.

It can also be observed that: for case 1), we have (2*N-1+1)/2=N terms, i.e., x[1],x[3],x[5],...,x[2*N-1], of which the sum is not less than half of the total sum; for case 2), we also have 1+(2*N-2)/2=N terms, i.e., x[1],x[2],x[4],...,x[2*N-2], satisfying the condition that their sum is not less than half of the total sum.

Based on the above results, we can sort the boxes in a decreasing order of the number of apples (or oranges), and then calculate two sums: S1=o[1]+o[3]+o[5]+...+o[2*N-1] (if the boxes are sorted based on oranges, then we compute S1=a[1]+a[3]+a[5]+...+a[2*N-1] instead) and S2=o[1]+o[2]+o[4]+...+o[2*N-2] (if the boxes are sorted based on oranges, then we compute S2=a[1]+a[2]+a[4]+...+a[2*N-2] instead). Now, if S1>=S2, then we select those boxes with indices 1,3,5,...,2*N-1; otherwise select those ones with indices 1,2,4,6,...,2*N-2. The reason is that: for S1>=S2, the number of oranges satisfy the request since S1>=S-S1, i.e., S1>=S/2, while the number of apples can meet the request as well according to case 1); for S1<S2, it can be seen that both the number of oranges and apples is not less than half of each total sum, according to S2>S/2 and case 2), respectively.

The above arguments in fact demonstrate that the answer will always be YES.
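A short C++ sketch of this selection rule follows; the input layout (N, then 2N-1 lines "apples oranges") and the 1-based output indices are assumptions to be checked against the statement, and multiple test cases, if any, would simply wrap around this.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    long long N;
    if (scanf("%lld", &N) != 1) return 0;
    int m = 2 * N - 1;
    vector<array<long long,3>> box(m);              // {apples, oranges, original index}
    for (int i = 0; i < m; ++i) {
        scanf("%lld %lld", &box[i][0], &box[i][1]);
        box[i][2] = i + 1;
    }
    // sort by the number of apples in decreasing order
    sort(box.begin(), box.end(), [](const auto& a, const auto& b){ return a[0] > b[0]; });
    long long S1 = 0, S2 = box[0][1];
    for (int i = 0; i < m; i += 2) S1 += box[i][1]; // oranges at sorted positions 1,3,5,...
    for (int i = 1; i < m; i += 2) S2 += box[i][1]; // oranges at sorted positions 1,2,4,...
    vector<long long> chosen;
    if (S1 >= S2) {
        for (int i = 0; i < m; i += 2) chosen.push_back(box[i][2]);
    } else {
        chosen.push_back(box[0][2]);
        for (int i = 1; i < m; i += 2) chosen.push_back(box[i][2]);
    }
    puts("YES");
    for (size_t i = 0; i < chosen.size(); ++i)
        printf("%lld%c", chosen[i], i + 1 == chosen.size() ? '\n' : ' ');
    return 0;
}
```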

By SummerSky, 8 years ago, In English

A. Second Order Statistics

This problem asks to find out the smallest element which is strictly larger than the minimum one. One simple way to solve this is to sort the array a[n] which stores the input data in an increasing order, and then find the first element that is different from a[0]. If such an element exists, then output this as the answer; otherwise output "NO" since all elements are exactly the same.
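A minimal C++ sketch of this approach follows (I/O format assumed: n, then n integers).

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    if (scanf("%d", &n) != 1) return 0;
    vector<int> a(n);
    for (int& x : a) scanf("%d", &x);
    sort(a.begin(), a.end());                  // a[0] is now the minimum
    for (int i = 1; i < n; ++i)
        if (a[i] != a[0]) {                    // first element strictly larger than the minimum
            printf("%d\n", a[i]);
            return 0;
        }
    puts("NO");                                // all elements are equal
    return 0;
}
```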

B. Bargaining Table

This is equivalent to finding a rectangle which contains no '1's and has the largest perimeter. A trivial method is to check each feasible rectangle, i.e., enumerate the upper left point, which takes O(mn) complexity, and for each fixed upper left point, enumerate the bottom right point, which takes O(mn) complexity as well, and finally check whether this rectangle consists of only '0's, which takes O(mn) complexity. If it contains no '1's, we calculate the current perimeter and update the largest value found so far. Therefore, the total complexity is O(m^3n^3), which seems too slow...

The above solution can be improved by adopting a DP technique. We use an array a[x1][y1][x2][y2], where a[x1][y1][x2][y2]=0 denotes that the rectangle with upper left point (x1,y1) and bottom right point (x2,y2) contains no '1's. Thus, a[x1][y1][x2][y2]=0 if a[x1][y1][x2-1][y2]=0, a[x1][y1][x2][y2-1]=0 and room[x2][y2]=0 all hold, where room[x2][y2] (just the input data) denotes the value '0' or '1' at point (x2,y2). Therefore, we can first compute a[x1][y1][x2][y2] for all rectangles, which takes O(m^2n^2) complexity. This can be done by starting with a[x1][y1][x1][y1] (just a single point) and calculating the other values column by column, then row by row, i.e., a[x1][y1][x1][y1+1], a[x1][y1][x1][y1+2], ..., a[x1][y1][x1+1][y1], a[x1][y1][x1+1][y1+1], ... Note that when updating a[x1][y1][x2][y2], either a[x1][y1][x2-1][y2] or a[x1][y1][x2][y2-1] may not exist because x2-1 or y2-1 is out of range, so take care of this boundary case. After obtaining a[x1][y1][x2][y2], we enumerate the upper left and bottom right points again, and whenever a[x1][y1][x2][y2]=0, we calculate the current perimeter, which is equal to (x2-x1+1+y2-y1+1)*2, and compare it with the stored largest perimeter. The total complexity is O(m^2n^2).
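A C++ sketch of this DP follows; I use a boolean array ok instead of the 0/1 convention above, and the grid is assumed small enough (a few dozen cells per side) for a four-dimensional array.

```cpp
#include <bits/stdc++.h>
using namespace std;

int n, m;
char room[30][30];
// ok[x1][y1][x2][y2] == true  <=>  the rectangle (x1,y1)-(x2,y2) contains only '0'
bool ok[26][26][26][26];

int main() {
    scanf("%d %d", &n, &m);
    for (int i = 1; i <= n; ++i) scanf("%s", room[i] + 1);
    int best = 0;
    for (int x1 = 1; x1 <= n; ++x1)
        for (int y1 = 1; y1 <= m; ++y1)
            for (int x2 = x1; x2 <= n; ++x2)
                for (int y2 = y1; y2 <= m; ++y2) {
                    // boundary cases: a one-row or one-column rectangle has no "smaller" neighbor
                    bool up   = (x2 == x1) || ok[x1][y1][x2 - 1][y2];
                    bool left = (y2 == y1) || ok[x1][y1][x2][y2 - 1];
                    ok[x1][y1][x2][y2] = up && left && room[x2][y2] == '0';
                    if (ok[x1][y1][x2][y2])
                        best = max(best, 2 * (x2 - x1 + 1 + y2 - y1 + 1));
                }
    printf("%d\n", best);
    return 0;
}
```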

C. System Administrator

First note that if m<n-1, then no connected graph can be constructed at all, so m>=n-1 must hold. The problem requires that if node v is deleted, then the previously connected graph is no longer connected. Suppose that we first set node v aside, and then we separate the remaining n-1 nodes into two parts, so that any two nodes belonging to the same part may be directly connected to each other and to node v, while no two nodes belonging to different parts may be connected directly. Denote the number of nodes in each part as x and y; obviously x+y=n-1. Furthermore, this division can "consume" at most x*(x-1)/2+x+y*(y-1)/2+y edges. With x+y fixed, the formula x*(x-1)/2+x+y*(y-1)/2+y achieves its maximum value at x=1 and y=n-2; denote this maximum value by M. Now, we can say that the answer is possible if m>=n-1 && m<=M, and impossible otherwise. We first prove this conclusion. Suppose that m>M. This implies that even if we set x=1 and y=n-2, where the y=n-2 nodes are fully connected, there still exist some edges that have not been used. Thus, we have to "consume" the extra edges by connecting the single node to some nodes belonging to the y=n-2 part. As the y=n-2 part is fully connected, it means that even if node v is deleted, the x part and the y=n-2 part remain connected.

Therefore, the answer "possible" or "impossible" can be obtained by checking whether m>=n-1 && m<=M holds. If it holds, we first connect the remaining n-1 nodes (all nodes except node v) to node v, which consumes n-1 edges (this can always be done since m>=n-1). Next, we connect arbitrary pairs of nodes belonging to the y=n-2 part until the total number of consumed edges equals m (this can always be done since m<=M). All of this takes O(m) complexity. For instance, we can use a vector to store the indices of the y=n-2 nodes, and use two nested "for" loops to enumerate the pairs that will be connected.
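A C++ sketch of this construction follows; the input format (n, m, v) and printing -1 for the impossible case are assumptions about the statement.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    long long n, m, v;
    if (scanf("%lld %lld %lld", &n, &m, &v) != 3) return 0;
    long long maxEdges = (n - 2) * (n - 3) / 2 + (n - 1);    // fully connected y-part plus the star to v
    if (m < n - 1 || m > maxEdges) { puts("-1"); return 0; }
    long long u = (v == 1 ? 2 : 1);                          // the single node of the x-part
    vector<pair<long long,long long>> edges;
    for (long long i = 1; i <= n; ++i)
        if (i != v) edges.push_back({v, i});                 // star centered at v: n-1 edges
    // nodes of the y-part: everything except v and u
    vector<long long> y;
    for (long long i = 1; i <= n; ++i) if (i != v && i != u) y.push_back(i);
    // add edges inside the y-part until exactly m edges are used
    for (size_t i = 0; i < y.size() && (long long)edges.size() < m; ++i)
        for (size_t j = i + 1; j < y.size() && (long long)edges.size() < m; ++j)
            edges.push_back({y[i], y[j]});
    for (auto& e : edges) printf("%lld %lld\n", e.first, e.second);
    return 0;
}
```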

D. Segments

Suppose that all the segments have been drawn explicitly along the X-axis. Now we start from point -10000 and walk along the X-axis towards point 10000. At some moment we meet the first segment; denote it as [x1 y1], where x1 and y1 stand for its left end and right end (inclusive), respectively. It is obvious that at least one nail must be driven within the interval I=[x1 y1], since otherwise this segment can never be covered. We continue and might meet the second segment, denoted as [x2 y2]. If y2<y1, then one can see that a nail driven anywhere within I=[x2 y2] covers both segments [x1 y1] and [x2 y2] simultaneously. If y2>y1, then a nail driven within I=[x2 y1] covers both segments as well.

One may have noticed that we should keep an interval I=[xi yi], and whenever we meet a new segment, we update the interval by I=[x(i+1) min(yi, y(i+1))]. The key idea behind this is that we always want to keep a feasible interval within which a single nail covers all the segments met so far. Note that as we go along the X-axis, we may reach a point where I=[xi yi] with xi<=yi but I=[x(i+1) min(yi, y(i+1))]=[x(i+1) yi] with x(i+1)>yi. If this occurs, it implies that I=[xi yi] was the last feasible interval in which we can drive a nail to cover all the segments that have been met.

Thus, the solution is as follows: we keep an interval I=[xi yi] and update it whenever we meet a new segment; if I=[xi yi] with xi<=yi turns into I=[x(i+1) min(yi, y(i+1))]=[x(i+1) yi] with x(i+1)>yi, then we drive a nail at point yi and reset the interval to I=[x(i+1) y(i+1)]. Specifically, the phrase "whenever we meet a new segment" is realized like this: we sort all the segments in increasing order of xi, breaking ties in increasing order of yi; then we enumerate the segments one by one, just as "we start from point -10000 and walk along the X-axis towards point 10000" describes, update the interval I=[xi yi], store the location yi where we drive a nail, and count the number of used nails at the same time.
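A C++ sketch of this greedy follows; the swap that normalizes each segment's endpoints is my own precaution in case the two ends are given in arbitrary order.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    if (scanf("%d", &n) != 1) return 0;
    vector<pair<int,int>> seg(n);
    for (auto& s : seg) {
        scanf("%d %d", &s.first, &s.second);
        if (s.first > s.second) swap(s.first, s.second);   // normalize the ends
    }
    sort(seg.begin(), seg.end());            // by left end, ties by right end
    vector<int> nails;
    int hi = seg[0].second;                  // right end of the current feasible interval
    for (int i = 1; i < n; ++i) {
        if (seg[i].first > hi) {             // current interval became empty: drive a nail
            nails.push_back(hi);
            hi = seg[i].second;
        } else hi = min(hi, seg[i].second);  // shrink the feasible interval
    }
    nails.push_back(hi);                     // a nail for the last open interval
    printf("%d\n", (int)nails.size());
    for (size_t i = 0; i < nails.size(); ++i)
        printf("%d%c", nails[i], i + 1 == nails.size() ? '\n' : ' ');
    return 0;
}
```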

In fact, the above solution is a greedy algorithm, and we can prove that it always achieves the optimal result (the smallest number of nails). The idea behind the proof is quite general when proving the correctness of similar greedy algorithms: we assume that an optimal result different from the one found by the greedy algorithm exists, and then we replace this "optimal" result by what our greedy algorithm finds "little by little", while showing that the result never degrades during this process. The proof is then completed by deriving a contradiction, since we assumed that the optimal result differs from what our greedy algorithm finds. Now, we give the proof for this specific problem.

Proof: Suppose that the result found by our greedy algorithm is [g1,g2,...,gn], and that an optimal result [b1,b2,...,bm] with m<n exists. Then, we walk along the X-axis from -10000 towards 10000 again, keeping the interval I updated. Suppose that I=[xi yi] is the first interval for which I=[x(i+1) yi] with x(i+1)>yi holds. Then, as indicated by our algorithm, all the segments met so far can be covered by driving a nail at point yi, which gives g1=yi. Note that b1<=yi must hold, since otherwise at least one segment with right end yi can never be covered. However, b1 can never be "better" than g1, since g1=yi covers all the segments met so far, and it is thus impossible for b1 to cover more segments. Therefore, if we replace b1 with g1, the modified result [g1,b2,...,bm] can only improve, not degrade! Next, we deal with b2. If b2<g1=yi, then b2 makes no sense, since g1 already covers all of the previous segments and b2 can be deleted; this yields a result better than the optimal one, a contradiction! If b2>g1=yi, then we repeat what we did with b1 and end up showing that b2 can also be replaced with g2 without degradation. Finally, as m<n is assumed, it turns out that the modified optimal result cannot cover all the segments unless m>=n. Therefore, [g1,g2,...,gn] is an optimal result.

E. Scheme

This one is quite intricate. The problem actually asks to build a strongly connected graph, and thus we solve it by constructing a directed graph. First note that the out-degree of each node is exactly 1, while the in-degree can be any number. Then, if we start from a node with zero in-degree and follow the outgoing edges (a walk along a linked list, as a special case of DFS), we will surely end up in a cycle. The proof is simple: if no cycle were met, we could go along the directed edges and meet new nodes forever! However, there are only finitely many nodes, so we will eventually meet a node that has been visited before, which closes a cycle.

A cycle is in fact a strongly connected component, and any node belonging to this component is equivalent for the purpose of constructing a strongly connected graph. Thus, all the nodes belonging to a strongly connected component can be represented by any one of them. We implement this idea by the following algorithm: we start from a node s1 with zero in-degree and follow the edges until some node t1 is visited for the second time; then we store (s1,t1) as a pair. This (s1,t1) stands for a connected component traversed in a single direction, which is in fact a linked list, where s1 and t1 denote its starting node and ending node, respectively. For each node with zero in-degree, we obtain its own (si,ti). Now, the solution becomes clear if we write the pairs as (s1,t1),(s2,t2),...,(sn,tn): to build a strongly connected graph, we add an edge from ti to s(i+1) (and from tn back to s1), which is equivalent to connecting several linked lists head to tail into one big cycle.

One more thing to notice is that there might exist a strongly connected component (a cycle) that cannot be found if we only start from nodes with zero in-degree. Therefore, after we have tried all the nodes with zero in-degree, we should check whether there still exist nodes that have never been visited; these nodes form one or several independent cycles. We can use a special pair (si,ti) with ti=si to denote such a component (actually a special linked list with only one node), and we must not forget to add such pairs.
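Here is a C++ sketch of the whole procedure; the input format (n, then f[1..n], where f[v] is the single node that v calls) is an assumption, and note the caveat in the comments about the case where the graph is already strongly connected, which the description above does not treat explicitly.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Caveat: if the whole graph is already one big cycle, the true answer is 0,
// while this sketch would still print one extra edge; that case needs a special check.
int main() {
    int n;
    if (scanf("%d", &n) != 1) return 0;
    vector<int> f(n + 1), indeg(n + 1, 0), vis(n + 1, 0);
    for (int v = 1; v <= n; ++v) { scanf("%d", &f[v]); indeg[f[v]]++; }
    vector<pair<int,int>> chains;                       // the (s_i, t_i) pairs
    auto walk = [&](int s) {                            // follow f from s over unvisited nodes
        int cur = s;
        vis[cur] = 1;
        while (!vis[f[cur]]) { cur = f[cur]; vis[cur] = 1; }
        chains.push_back({s, f[cur]});                  // t_i: the node met for the second time
    };
    for (int v = 1; v <= n; ++v) if (indeg[v] == 0) walk(v);   // linked-list heads
    for (int v = 1; v <= n; ++v) if (!vis[v]) walk(v);         // untouched cycles: here t_i = s_i
    int k = chains.size();
    printf("%d\n", k);
    for (int i = 0; i < k; ++i)                         // connect the chains head to tail into one big cycle
        printf("%d %d\n", chains[i].second, chains[(i + 1) % k].first);
    return 0;
}
```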
