anup.kalbalia's blog

By anup.kalbalia, 11 years ago, In English

CodeChef invites you to participate in the January Cook-off 2014 at http://www.codechef.com/COOK42.

Time: 19th January 2014 (2130 hrs) to 20th January 2014 (0000 hrs), Indian Standard Time (GMT +5:30). Check your timezone.
Details: http://www.codechef.com/COOK42/

Registration: You just need a CodeChef user id to participate.

New users please register here.

It promises an interesting set of algorithmic problems with something for everyone.
The contest is open to all; anyone interested just needs a CodeChef user id in order to participate.


» 11 years ago, # |

I enjoyed this contest a lot, especially the Subtraction Game problem. I also liked the problem Orders, and I'm quite curious how to solve it.

    » 11 years ago, # ^ |

    The problem basically asks us to find the Sprague–Grundy number of the pair (I, J), on which we can perform the subtraction operation (I, J) → (I - K * J, J) (with I > J and I - K * J > 0).

    If I = K * J, then the SG numbers of the pairs (K * J, J), ((K - 1) * J, J), ..., (J, J) are K - 1, K - 2, ..., 0 respectively (since from each earlier position we can move to any later one).

    Now, if I = K * J + R with 0 < R < J, we have the chain (K * J + R, J), ((K - 1) * J + R, J), ..., (J + R, J), (J, R), where from each earlier position we can move to any later one. If we know SG(J, R) = S, then, going up the chain from (J + R, J), the remaining positions take the SG values 0, 1, ..., S - 1, S + 1, ... (skipping S). So, by comparing SG(J, R) with I / J (the length of the chain), we can obtain SG(K * J + R, J), reducing the problem in the manner of the GCD algorithm.

    Code is pretty self-explanatory: http://pastie.org/8648883
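    In case the pastie link no longer resolves, here is a minimal Python sketch of the reduction described above (my own code, not the author's; the state is a pair (i, j) with moves (i, j) → (i - k*j, j)):

```python
def grundy(i, j):
    """Sprague-Grundy number of position (i, j) in the subtraction game
    with moves (i, j) -> (i - k*j, j), i > j, i - k*j > 0.
    Recurses like the Euclidean GCD algorithm."""
    if i < j:
        i, j = j, i
    k, r = divmod(i, j)
    if r == 0:
        # Chain (k*j, j) -> ((k-1)*j, j) -> ... -> (j, j): SG values
        # k-1, k-2, ..., 0, since each position reaches all later ones.
        return k - 1
    s = grundy(j, r)  # recurse exactly as in the GCD algorithm
    # Going up the chain from (j + r, j), the SG values are 0, 1, ...,
    # skipping s (the value of the extra reachable position (j, r)).
    return k - 1 if k - 1 < s else k
```

    A brute-force mex computation over small states agrees with this formula, which is a quick sanity check before trusting the reduction.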

      » 11 years ago, # ^ |

      Thanks a lot :) However, I asked for the solution of the Orders problem, since all the other editorials are already published on CodeChef's website :)

        » 11 years ago, # ^ |

        Oh, I thought you liked the problem order, meaning the problems were sorted from the easiest to the most difficult :)

          » 11 years ago, # ^ |
          Rev. 2

          There is a pretty straightforward idea using a min-cost flow algorithm; however, it got TLE :( After the contest I took a look at all the accepted solutions; all of them used some flow algorithm, so I think the main challenge here is a very fast min-cost flow implementation.

            » 11 years ago, # ^ |

            Yes, the time limit seemed very tight to me, too. The idea I used during the contest (which I still had to optimize further to get AC) was to consider the orders in decreasing order of penalty (i.e. largest penalty first). For each order I used a max-flow algorithm to schedule as many items of that order as possible within its time interval, without dropping items of orders with higher penalties. Although I started with a max-cost flow algorithm (maximizing the cost of the non-penalized items), I ended up with just a simple sequence of max-flow computations. However, even this was too slow by itself, and I had to optimize the implementation to avoid TLE. By the way, in case it's not obvious, the bipartite graph on which the max-flows are computed contains the N orders on one side and the intervals between consecutive time moments on the other side (there can be up to 2*N - 1 such intervals).

              » 11 years ago, # ^ |

              You can solve that max-flow part greedily.

              Assume a bipartite graph with the following property: for each vertex on the left side, all of its adjacent vertices on the right side are contiguous. There is an easy greedy solution for that problem; it is in fact a one-dimensional version of this problem: http://main.edu.pl/en/archive/oi/3/wie

              Now consider that each vertex i on the left side needs to be matched X[i] times; in flow terms, X[i] is the capacity of each left vertex. You can still solve it with the same greedy approach, whose running time depends on the number of vertices on the right.

              But the number of right-side vertices can be as large as 10^8. Instead of single vertices, though, you can treat them as ranges, and for each range apply the previous greedy approach to do bulk matching.

              A very simple implementation of such a greedy takes O(n^2). Finally, combining this with your decreasing order of penalty (largest penalty first), the overall algorithm takes O(n^3).
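              The single-slot version of this greedy (each left vertex needs one match, each right slot has capacity 1) can be sketched as follows; the capacity and bulk-range extensions follow the same idea (code and names are mine, for illustration only):

```python
def greedy_match(ranges):
    """Maximum matching sketch for a bipartite graph in which each left
    vertex is adjacent to a contiguous range [l, r] of right-side slots.
    Simplified: one match per left vertex, capacity 1 per slot."""
    taken = set()
    matched = 0
    # Process vertices by increasing right endpoint: the sooner a range
    # ends, the sooner it must grab a slot.
    for l, r in sorted(ranges, key=lambda lr: lr[1]):
        for slot in range(l, r + 1):  # leftmost free admissible slot
            if slot not in taken:
                taken.add(slot)
                matched += 1
                break
    return matched
```

              The bulk version replaces single slots with whole ranges (and unit demands with capacities X[i]), which is where the O(n^2) bound comes from.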

                » 11 years ago, # ^ |

                Also note that we can use segment-tree updates for the bulk matching to improve the greedy solution to O(n log^2 n), but I thought the problem with the current bounds was safe enough to differentiate it from the flow-based solutions.

                  » 11 years ago, # ^ |

                  I thought about the greedy algorithm after reading the editorial (for the case when we assume that all the orders have penalty 1). Can't we simply solve this in O(N * log(N)) time? While traversing the ranges, we add to the current range items from "active" orders whose deadline is smallest, until either the range is full or there are no more "active" orders. If this works, then a simple heap with "active" orders sorted by deadline would be enough.
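                  Under one plausible reading of the model (each order given as (start, deadline, count), one item per unit of time; these names and the capacity assumption are mine, not the actual problem statement), the heap idea could look like this:

```python
import heapq

def max_scheduled(orders):
    """Greedy sketch: orders = [(start, deadline, count)], one item per
    unit of time.  Sweep the ranges between consecutive event points,
    always serving the active order with the smallest deadline."""
    events = sorted({t for s, d, c in orders for t in (s, d)})
    pending = sorted(orders)   # by start time
    idx = done = 0
    active = []                # heap of (deadline, remaining count)
    for a, b in zip(events, events[1:]):
        # activate orders whose window opens at or before this range
        while idx < len(pending) and pending[idx][0] <= a:
            _, d, c = pending[idx]
            heapq.heappush(active, (d, c))
            idx += 1
        capacity = b - a       # items that fit into the range [a, b)
        while capacity and active:
            d, c = heapq.heappop(active)
            if d <= a:         # deadline already passed: drop the rest
                continue
            # Every deadline is an event point, so a surviving order
            # has d >= b and may use the whole range.
            take = min(c, capacity)
            done += take
            capacity -= take
            if c > take:
                heapq.heappush(active, (d, c - take))
    return done
```

                  Whether this greedy handles the full problem (with penalties) is exactly the open question in the thread; the sketch only illustrates the heap mechanics.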

                  » 11 years ago, # ^ |

                  Ah, nice. I was thinking of doing the matching from the left side with binary search and segment updates. But your solution seems simpler/nicer.

                  » 11 years ago, # ^ |

                  Except that I'm not entirely sure it is correct :) I will implement it when I have time and write another comment then.

                  » 11 years ago, # ^ |

                  @ballon, mugurelionut: Do you guys reuse a previous implementation of algorithms like max-flow, or do you implement them from scratch during the contest?

                  » 11 years ago, # ^ |

                  Ideally, I should have a class or set of functions and data types/structures so that I wouldn't have to implement maxflow (or min-cost maxflow) algorithms from scratch every time. However, I have been too lazy so far, so I always end up implementing them from scratch in every contest (when I realize that they are needed, of course).