Блог пользователя iscsi

By iscsi, history, 7 years ago, In English

Hello CodeForces Community!

Chef is back again to treat you with his next delicious offering: The November Long Challenge! You are invited to showcase your coding talent and bag exciting cash prizes and goodies.

Joining me on the problem setting panel, we have:

I hope you will enjoy solving the problems. Please give your feedback on the problem set in the comments below after the contest.

Contest Details:

Time: 3rd November 2017 (1500 hrs) to 13th November 2017 (1500 hrs), Indian Standard Time (GMT +5:30). Check your timezone.

Details: https://www.codechef.com/NOV17

Registration: You just need a CodeChef handle to participate. Those who are interested and do not yet have a CodeChef handle are requested to register in order to participate.

Prizes: For Indians: 1st prize: INR 12000/-, 2nd prize: INR 8000/-. For the rest of the world: 1st prize: $400, 2nd prize: $300. The top 10 global and top 20 Indian winners get 300 Laddus each, with which they can claim cool CodeChef goodies. Know more here: https://www.codechef.com/laddu. (If you have not yet received a previous prize, please send an email to [email protected].)

Good Luck! Hope to see you participating!


»
7 years ago, # |

I'm stupid

»
7 years ago, # |

How do you solve POLY? I wrote the convex hull trick for 30 points, but obviously, if the functions are not linear, that will not work.

  • »
    »
    7 years ago, # ^ |

    Contest is still going... (40 min left)

  • »
    »
    7 years ago, # ^ |

    This is what I did.

    Precomputation: for every multiple of 1000 less than 10,000, and for every multiple of 2500 less than 100,000, store the 10 minimal Yi.

    Query: for each query, if X < 100, brute force. Otherwise find the highest x1 < X and the lowest x2 > X at which you stored the 10 minimal Yi, and check only those 20 (or fewer) Yi(x).

    I know it sounds silly for a 9th problem, but maybe it is tough to construct a case in which this fails. Keen to know the intended solution.

    • »
      »
      »
      7 years ago, # ^ |

      Did you get 100 points with that solution?

      • »
        »
        »
        »
        7 years ago, # ^ |

        Yes, he did. I don't know how this got AC...

        • »
          »
          »
          »
          »
          7 years ago, # ^ |

          It is a very funny and brave solution, good job, sinus_070. I don't get why his answer was downvoted; he shared an AC solution with everyone. Yes, it was not very rigorous, but that is a problem for the authors and test-creators, not the participant.

    • »
      »
      »
      7 years ago, # ^ |

      Too much proness there!

  • »
    »
    7 years ago, # ^ |

    I generalized the convex hull trick to higher powers.

    Let's define diff(x, A, B) = (a0 - b0) * x^0 + (a1 - b1) * x^1 + (a2 - b2) * x^2 + (a3 - b3) * x^3.
    Obviously diff has no more than 3 roots, i.e. points where diff(root, A, B) = 0.
    Let's say the roots are R1, R2, R3.
    For X < R1, diff(x, A, B) < 0, so A is better.
    For R1 < X < R2, diff(x, A, B) > 0, so B is better, and so on.

    So here is what we do: find the best function for the leftmost X = MIN_X; call it BEST.

    For each possible X we keep a list of queries and a list of 'interesting' functions.
    A function is interesting at XR if it could become the new best function at X = XR.

    At the start, for every function F we find the roots of diff(x, BEST, F): R1, R2, R3 (Ri >= MIN_X). At R1 we add F as interesting, at R2 BEST, at R3 F again.

    Now we process all queries offline from MIN_X to MAX_X.
    At every Xi we try to update our best function with the 'interesting' functions for this Xi.
    After a possible update we find the roots of diff(x, NEW_BEST, NOT_BEST) (Ri >= Xi) and, as in the initial step, add interesting functions at the roots.

    I found the roots of a line with binary search,
    of a quadratic function with the formula,
    and of a cubic by finding the roots of the derivative and doing three binary searches (R < left, left < R < right, right < R).

    To avoid problems with non-integer calculations, I checked not only the roots but all Xs in the segment [Root - 3, Root + 3] (+-2 was not enough).
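The root-finding step can be sketched as follows (an illustrative helper of my own, not the commenter's code): split the domain at the real roots of the derivative, then binary search each monotonic piece for a sign flip. As noted above, floored critical points can be off by a little, which is why the commenter also checked [Root - 3, Root + 3].

```python
import math

def cubic_sign_changes(d, lo, hi):
    """For f(x) = d[0] + d[1]*x + d[2]*x^2 + d[3]*x^3, return the integer
    points x in (lo, hi] where the sign of f flips (0 counted as
    non-positive).  [lo, hi] is split at the real roots of f' into
    monotonic pieces, and each piece is binary-searched."""
    f = lambda x: d[0] + d[1] * x + d[2] * x * x + d[3] * x ** 3
    bounds = [lo, hi]
    a, b, c = 3 * d[3], 2 * d[2], d[1]           # f'(x) = a*x^2 + b*x + c
    if a != 0:
        disc = b * b - 4 * a * c
        if disc >= 0:
            for r in ((-b - math.sqrt(disc)) / (2 * a),
                      (-b + math.sqrt(disc)) / (2 * a)):
                if lo < int(math.floor(r)) < hi:
                    bounds.append(int(math.floor(r)))
    elif b != 0:                                  # f quadratic, f' linear
        x = int(math.floor(-c / b))
        if lo < x < hi:
            bounds.append(x)
    bounds = sorted(set(bounds))
    res = []
    for left, right in zip(bounds, bounds[1:]):
        pos = f(right) > 0
        if (f(left) > 0) == pos:
            continue                              # no sign change on this piece
        while right - left > 1:                   # first x whose sign matches f(right)
            mid = (left + right) // 2
            if (f(mid) > 0) == pos:
                right = mid
            else:
                left = mid
        res.append(right)
    return res
```

For example, for f(x) = (x-10)(x-50)(x-90) on [0, 100] this reports the first integer past each root where the sign has flipped.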

  • »
    »
    7 years ago, # ^ |

    I created a comparator function for polynomials, which compares them by a3, then a2, and so on.

    First, I precompute the answers up to 320 by brute force (the constant is because 320^2 > 10^5).

    After that I sort the queries in ascending order. If for some query t (t > 320) we find polynomials p and q such that p compares less than q and p(t) < q(t) holds, then we can remove q from the set, because p(x) < q(x) will also hold for every x > t. This can be proved indirectly, using the fact that t > 320.

    I'm unsure about the number of comparisons per polynomial, but it isn't too many, as I got AC in 0.26 s.

  • »
    »
    7 years ago, # ^ |

    This is what I did.

    https://www.codechef.com/viewsolution/16266475

    Sort the given functions by a3, a2, and so on. Remove the useless functions: e.g. among functions sharing the same a3, a2, a1, only one of them can ever give the optimal output, so remove the rest; that makes sure no two functions have the same a3, a2, a1 coefficients.

    Taking linear functions as an example: for a given x, we don't need to evaluate all functions to find the minimum value, since some of them can never be optimal no matter what. E.g. for functions ax+b and cx+d, if (c-a)*x > 10^5 then lines whose coefficient of x is >= c will never give a better value than the line ax+b. This way we only check a limited number of lines for each 0 <= x <= 10^5, and the total number of lines checked is about 10^5 * log(10^5). A similar argument eliminates unnecessary higher-order polynomials; then brute force among the remaining functions for every x.

    Works pretty fast, but I know it's far from the intended solution; curious to know the author's.
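For the linear case, the pruning argument above can be sketched like this (a toy model with made-up names, assuming all intercepts lie in [0, max_b]): scan lines in slope order and stop once no later line can beat the first one.

```python
def min_with_pruning(lines, x, max_b):
    """lines: (slope, intercept) pairs sorted by slope, with
    0 <= intercept <= max_b.  Scan in slope order; once
    (a - a0) * x > max_b, line (a, b) cannot beat the first line:
    a*x + b >= a*x > a0*x + max_b >= a0*x + b0.  So we stop there."""
    a0 = lines[0][0]
    best = None
    for a, b in lines:
        if (a - a0) * x > max_b:
            break
        v = a * x + b
        if best is None or v < best:
            best = v
    return best
```

For cubics, the same break test would be applied to the leading coefficients first, in the spirit of the sort order described above.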

»
7 years ago, # |

What was the logic behind Chef and Subarray Queries and Product on the Segment?

Problem Links

https://www.codechef.com/NOV17/problems/SEGPROD

https://www.codechef.com/NOV17/problems/CSUBQ/

  • »
    »
    7 years ago, # ^ |

    SEGPROD: Chinese Remainder Theorem. CSUBQ: segment tree.

  • »
    »
    7 years ago, # ^ |

    You can find the number of subarrays with maximum element less than or equal to R, and the number of subarrays with maximum element less than L, then take the difference of the two to answer the query. To count the subarrays, use a segment tree: https://www.codechef.com/viewsolution/16147584
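The counting identity can be illustrated with a plain O(n) scan (a static sketch of my own; the linked solution maintains the same quantity in a segment tree so that point updates are supported):

```python
def count_max_le(arr, k):
    """Subarrays whose maximum is <= k: every element <= k extends the
    current run of 'small' elements, and a run reaching length L
    contributes L new subarrays ending at its last element."""
    total = run = 0
    for v in arr:
        run = run + 1 if v <= k else 0
        total += run
    return total

def count_max_in(arr, lo, hi):
    """Subarrays whose maximum lies in [lo, hi], by the difference
    described above: count(max <= hi) - count(max <= lo - 1)."""
    return count_max_le(arr, hi) - count_max_le(arr, lo - 1)
```

E.g. for [1, 3, 2] and range [3, 3], the four subarrays containing the 3 are counted.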

    • »
      »
      »
      7 years ago, # ^ |

      Damn!! I had the exact same idea but couldn't implement it...

    • »
      »
      »
      7 years ago, # ^ |

      I read your code but didn't understand quite a few things.

      What are your update functions doing?

      And what is the use of each field in the struct?

  • »
    »
    7 years ago, # ^ |

    SEGPROD:

    Firstly, split each ai into two factors, ai = bi * ci, such that gcd(bi, P) = 1. Then in each query you can compute the answer for the bi factors using prefix products and modular inverses. Factorize each ci over the pi, the prime factors of P, so ci = Π pi^ei. Now, for each query, the answer for the ci factors is Π pi^(el + el+1 + ... + er), which you can handle with prefix sums over the exponents of the pi. This is O(N log N + Q * k), where k is the number of distinct primes of P; k is at most 9, since the product of the first 10 primes, 2 * 3 * 5 * 7 * 11 * 13 * 17 * 19 * 23 * 29, exceeds 10^9.
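A minimal Python sketch of this split (identifier names are mine, and it assumes Python 3.8+ for pow(x, -1, p); the actual submissions would be C++):

```python
def prime_factors(p):
    """Distinct prime factors of p, by trial division."""
    fs, d = [], 2
    while d * d <= p:
        if p % d == 0:
            fs.append(d)
            while p % d == 0:
                p //= d
        d += 1
    if p > 1:
        fs.append(p)
    return fs

class RangeProdModP:
    """Range products mod a possibly composite P, via the split above:
    a[i] = b[i] * c[i] with gcd(b[i], P) = 1 and c[i] built only from
    primes dividing P.  Prefix products of the b[i] are invertible
    mod P; the primes' exponents are prefix-summed."""
    def __init__(self, a, p):
        self.p = p
        self.primes = prime_factors(p)
        n = len(a)
        self.bpref = [1] * (n + 1)                       # products of coprime parts
        self.epref = [[0] * (n + 1) for _ in self.primes]
        for i, v in enumerate(a):
            for j, q in enumerate(self.primes):
                e = 0
                while v % q == 0:                        # strip out primes of P
                    v //= q
                    e += 1
                self.epref[j][i + 1] = self.epref[j][i] + e
            self.bpref[i + 1] = self.bpref[i] * v % p    # v is now coprime to P
    def query(self, l, r):
        """Product a[l..r] mod p, inclusive."""
        res = self.bpref[r + 1] * pow(self.bpref[l], -1, self.p) % self.p
        for j, q in enumerate(self.primes):
            res = res * pow(q, self.epref[j][r + 1] - self.epref[j][l], self.p) % self.p
        return res
```

As the reply below notes, this per-query cost was intended to TLE; it is shown here only to make the decomposition concrete.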

    • »
      »
      »
      7 years ago, # ^ |

      This solution was intended to TLE; our intended solution is divide and conquer with O(n log n) pre-processing and exactly O(1) per query. Unfortunately, some people managed to pass with asymptotically slower solutions using non-asymptotic optimizations, even though we made the TL a bit strict to prevent that.

      • »
        »
        »
        »
        7 years ago, # ^ |

        And I'm in this 'some people' group :P

        Waiting for editorial :)

      • »
        »
        »
        »
        7 years ago, # ^ |

        During the competition I came up with the same solution as the one you mentioned. The basic idea of the DS: let M be the midpoint of the array. Half of all possible queries are such that their left endpoint is before M and their right endpoint is after M, so it is efficient to keep prefix products from M to all other indices (even the ones before M). Any query whose left endpoint is left of M and whose right endpoint is right of M can then be answered in O(1) using these prefixes. Now recursively do the same for the left and right halves separately; that's why preprocessing takes O(N log N) time and memory.

        This DS can substitute for anything a static segment tree can do, but in O(1) query time. However, it takes more memory and doesn't support updates.

        My code: https://www.codechef.com/viewsolution/16141888
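This structure is often called a disjoint sparse table; here is a toy sketch of my own for range products mod p (not the linked submission), storing at each level the products from every index toward the midpoint of its block:

```python
class DisjointSparseTable:
    """O(n log n) preprocessing, O(1) range-product queries mod p,
    following the divide-and-conquer idea above.  No updates."""
    def __init__(self, a, p):
        self.p = p
        n = max(2, 1 << (len(a) - 1).bit_length())   # pad to a power of two
        self.a = a = list(a) + [1] * (n - len(a))
        levels = n.bit_length() - 1
        self.t = [[1] * n for _ in range(levels)]
        for lev in range(levels):
            half = 1 << lev
            row = self.t[lev]
            for mid in range(half, n, 2 * half):
                acc = 1
                for i in range(mid - 1, mid - half - 1, -1):
                    acc = acc * a[i] % p             # suffix products up to mid
                    row[i] = acc
                acc = 1
                for i in range(mid, mid + half):
                    acc = acc * a[i] % p             # prefix products from mid
                    row[i] = acc
    def query(self, l, r):
        """Product a[l..r] mod p, inclusive.  The highest differing bit
        of l and r identifies the level whose block midpoint separates
        them, so the answer is one multiplication."""
        if l == r:
            return self.a[l] % self.p
        lev = (l ^ r).bit_length() - 1
        return self.t[lev][l] * self.t[lev][r] % self.p
```

The query is literally two table lookups and one multiplication, which is where the "exactly O(1) per query" claim comes from.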

      • »
        »
        »
        »
        7 years ago, # ^ |

        Umm, I actually did really stupid things to optimize, like fast I/O, changing i % 64 to i & 63, and x % y to "if x >= y: x %= y". I removed functions and rewrote some of them recursively instead of iteratively. Too much effort. Ughh.

    • »
      »
      »
      7 years ago, # ^ |

      That's exactly my solution, but I didn't manage to squeeze it under the TL (got TLE on only one test). Congratulations anyway!

    • »
      »
      »
      7 years ago, # ^ |

      Or you could use PyPy and write an iterative segment tree.

      https://www.codechef.com/viewsolution/16212017

  • »
    »
    7 years ago, # ^ |

    CSUBQ:

    You can build a segment tree where each node keeps the answer, the length of the left child's run, the length of the right child's run, and the node indices those runs come from. In the merge function, the parent's answer is left_ans + right_ans + left_len * right_len. Update the lengths of the children using the node indices you kept.
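A common way to implement such a merge stores, per node, the count plus the lengths of the qualifying prefix and suffix runs. Here is a sketch of mine for the fixed-threshold case, counting subarrays whose elements are all <= a fixed limit, with point updates (the actual problem applies this to thresholds R and L-1 per query, as discussed above):

```python
class GoodRunTree:
    """Segment tree counting subarrays made entirely of 'good' elements
    (value <= limit).  Each node keeps (count, prefix-run, suffix-run,
    length); a parent's count is left.count + right.count +
    left.suffix * right.prefix, as in the merge described above."""
    def __init__(self, a, limit):
        self.n = len(a)
        self.limit = limit
        self.nodes = [None] * (4 * self.n)
        self._build(1, 0, self.n - 1, a)
    def _leaf(self, v):
        g = 1 if v <= self.limit else 0
        return (g, g, g, 1)
    def _merge(self, L, R):
        cnt = L[0] + R[0] + L[2] * R[1]
        pref = L[1] + R[1] if L[1] == L[3] else L[1]   # left all good: run spills over
        suf = R[2] + L[2] if R[2] == R[3] else R[2]
        return (cnt, pref, suf, L[3] + R[3])
    def _build(self, i, lo, hi, a):
        if lo == hi:
            self.nodes[i] = self._leaf(a[lo])
            return
        mid = (lo + hi) // 2
        self._build(2 * i, lo, mid, a)
        self._build(2 * i + 1, mid + 1, hi, a)
        self.nodes[i] = self._merge(self.nodes[2 * i], self.nodes[2 * i + 1])
    def update(self, pos, val, i=1, lo=0, hi=None):
        """Point-assign a[pos] = val, then re-merge up the tree."""
        if hi is None:
            hi = self.n - 1
        if lo == hi:
            self.nodes[i] = self._leaf(val)
            return
        mid = (lo + hi) // 2
        if pos <= mid:
            self.update(pos, val, 2 * i, lo, mid)
        else:
            self.update(pos, val, 2 * i + 1, mid + 1, hi)
        self.nodes[i] = self._merge(self.nodes[2 * i], self.nodes[2 * i + 1])
    def count(self):
        return self.nodes[1][0]
```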

»
7 years ago, # |

Solution for Day Schedule?

»
7 years ago, # |

How to solve Lovers Gift?

  • »
    »
    7 years ago, # ^ |

    This is what I did:

    Find the centroid of the tree and root it at the centroid. Update the answers for all queries of type 2 with the best two answers you can get under the condition that the path passes through the root; this can be done in O(N+Q). Then remove the centroid, which splits the tree into parts. At this point you can also split the queries based on which part contains the node of each query. Doing this recursively gives O(log N) levels, and at each level updating the answers for the queries takes O(N+Q). Overall complexity is O((N+Q) * log N).
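The first step, finding a centroid, might look like this (an illustrative helper of my own, not the commenter's code); the recursion then reroots each remaining part at its own centroid:

```python
def find_centroid(adj):
    """Centroid of a tree given as adjacency lists over vertices
    0..n-1: the vertex whose removal leaves no component with more
    than n // 2 vertices.  Iterative DFS, so large trees do not hit
    the recursion limit."""
    n = len(adj)
    parent = [None] * n
    order = []
    seen = [False] * n
    seen[0] = True
    stack = [0]
    while stack:
        v = stack.pop()
        order.append(v)
        for u in adj[v]:
            if not seen[u]:
                seen[u] = True
                parent[u] = v
                stack.append(u)
    size = [1] * n
    for v in reversed(order):                      # subtree sizes, leaves first
        if parent[v] is not None:
            size[parent[v]] += size[v]
    best, best_max = 0, n
    for v in range(n):
        # largest component left if v is removed: biggest child subtree
        # or everything "above" v
        heaviest = max((size[u] for u in adj[v] if u != parent[v]), default=0)
        heaviest = max(heaviest, n - size[v])
        if heaviest < best_max:
            best, best_max = v, heaviest
    return best
```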

  • »
    »
    7 years ago, # ^ |

    Look at it this way: adding a bank is detaching a vertex from the tree, and a query asks for the second highest number not present in the current connected component. If we process the queries backwards, the add-a-bank queries turn into DSU unions.

    Let's have a segment tree, which keeps two top values. If a vertex is a root of some set in DSU, the leaf of segment tree corresponds to the whole set. If a vertex is not a root, the leaf is empty. The segment tree is updated on union operation.

    A query "max not in a set" is equivalent to max("max to the left", "max to the right") which can be answered using the segment tree.

  • »
    »
    7 years ago, # ^ |

    My Approach:

    Imagine that marking a node as a bank removes it from the tree, so we now have a forest. For a node v, the only nodes we can reach without passing through a bank are exactly the nodes in the same tree as v. We have to find the second largest number <= n that is not in that tree.

    Reverse the queries. Now we are basically merging trees, maintaining for each tree a std::map of the numbers present in it plus the largest number not present. For merging, just merge the smaller tree into the larger one at each step. For a query on a node, iterate down from the stored largest absent number, return the second largest number not present, and update the stored largest absent number. I didn't prove why this runs within the given TL, but it just did.

    Proof by AC, I guess
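The small-to-large merge and the scan for absent numbers can be sketched as follows (hypothetical helpers in Python, standing in for the std::map-based C++ described above):

```python
def merge_small_to_large(a, b):
    """Union two components' number sets, always inserting the smaller
    set into the larger one; over all unions each element moves at
    most O(log n) times, the standard small-to-large argument."""
    if len(a) < len(b):
        a, b = b, a
    a |= b
    return a

def two_largest_absent(present, n):
    """The two largest numbers in 1..n not contained in `present`,
    scanning down from n.  (The solution above amortises this by
    caching, per component, a pointer to the largest absent number.)"""
    out, x = [], n
    while x >= 1 and len(out) < 2:
        if x not in present:
            out.append(x)
        x -= 1
    return out
```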