codeonwort's blog

By codeonwort, 8 years ago, In English

This post doesn't include any special or creative solutions; it's about how to implement the editorial's solutions. Such a post would be unnecessary for C++ or Java, but this is Haskell, and no one completed the round in Haskell ;) I can't promise my code is purely functional or very efficient, but at least it was accepted. I hope it will be helpful.

734A - Anton and Danik

Take the outcome as a String and count the number of 'A' and 'D' characters in it. It's a very simple problem, so I just traversed the string twice (two calls of length . filter). If you want to do it twice as fast, you can use partition from Data.List.
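A minimal sketch of the partition version (not my exact submission; the output strings are the ones from the problem statement):

import Data.List (partition)

main :: IO ()
main = do
    _ <- getLine                           -- n is not actually needed
    s <- getLine
    let (as, ds) = partition (== 'A') s    -- one pass instead of two filters
    putStrLn $ case compare (length as) (length ds) of
        GT -> "Anton"
        LT -> "Danik"
        EQ -> "Friendship"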

code: 22282938

734B - Anton and Digits

Another simple problem. Here is the line that reads the input:

[k2, k3, k5, k6] <- (map read . words) `fmap` getLine :: IO [Integer]

If the input is small like this, there's no problem. When you have to read a large number of integers, there are two considerations: speed and memory. They'll be examined in later problems.
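For completeness, here is a minimal sketch of the well-known greedy (make as many 256s as possible, then 32s from the leftover 2s); it is not my exact submission:

main :: IO ()
main = do
    [k2, k3, k5, k6] <- (map read . words) `fmap` getLine :: IO [Integer]
    let n256 = minimum [k2, k5, k6]   -- each 256 uses one 2, one 5 and one 6
        n32  = min k3 (k2 - n256)     -- each 32 uses one 3 and a leftover 2
    print (256 * n256 + 32 * n32)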

code: 22282969

734C - Anton and Making Potions

In this problem I'll discuss two things — input and binary search.

First, we need to read a huge number of integers, possibly up to 8 * 10^5 numbers. When you try to read a line with 10^5 integers in this way:

xs <- (map read . words) `fmap` getLine :: IO [Int64]

It will give you TLE. Believe me, parsing with String and read is really, really slow. Here is my favorite approach:

import Data.Maybe (fromJust)
import qualified Data.ByteString as BS
import qualified Data.ByteString.Char8 as BS8

getInts = (map (fst . fromJust . BS8.readInt) . BS8.words) `fmap` BS.getLine :: IO [Int]
getInt64s = (map (fromIntegral . fst . fromJust . BS8.readInteger) . BS8.words) `fmap` BS.getLine :: IO [Int64]

It performs in reasonable time, but what about space? If you look at my submission you will notice that the memory usage is 64 MB. In C++, holding 8 * 10^5 64-bit integers needs only (8 * 10^5 * 8) bytes ≈ 6.1 MB. I'll discuss this in problem F.

Second, we need binary search. But I couldn't find anything like C++'s std::upper_bound() in Haskell's base package, so I implemented it as follows:

import Data.Array
import Data.Int (Int64)

-- Searches indices 0..(n-1) of a sorted array; returns the index of the
-- first element greater than x, or n if every element is <= x.
upperBound :: Array Int64 Int64 -> Int64 -> Int64 -> Int64
upperBound ary n x = go 0 n where
    go lb ub
        | lb >= ub = ub
        | ary ! mid <= x = go (mid+1) ub
        | ary ! mid > x = go lb mid
        where mid = (lb + ub) `div` 2

It assumes a few things: ary holds elements at indices 0 through (n-1), and the result is an integer from 0 to n, where n means that no element of ary is greater than x. This is not an idiomatic Haskell style; it could be generalized as follows:

upperBound :: Ix i
            => Array i e    -- array to binary search
            -> (i,i)        -- left bound, right bound of search
            -> e            -- element to find the upper bound
            -> i            -- desired upper bound
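For reference, here is one way to fill in a body. It's just a sketch (I kept the Int64 version in my submission): it needs an extra Ord e constraint for the comparison, and I narrow the index type to Integral so a midpoint can be computed:

import Data.Array

-- Searches indices in [lo, hi]; returns the index of the first element
-- greater than x, or hi + 1 if there is none.
upperBoundG :: (Ix i, Integral i, Ord e) => Array i e -> (i, i) -> e -> i
upperBoundG ary (lo, hi) x = go lo (hi + 1) where
    go lb ub
        | lb >= ub       = ub
        | ary ! mid <= x = go (mid + 1) ub
        | otherwise      = go lb mid
        where mid = (lb + ub) `div` 2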

But I'll keep going with the first version, as it works well on this problem...

    let ub = upperBound ds k s
    let freePotion = cs ! (ub - 1)
    let noFirst = if ub > 0 then (max (n - freePotion) 0) * x else n * x
    let bestUsingFirst = minimum $ map (make as bs cs ds n k s) [0..m-1]
    print $ min noFirst bestUsingFirst

This is the code that finds the answer, and here lazy evaluation comes in handy. ub can be 0, but freePotion doesn't mind: it is only evaluated (inside noFirst) when ub > 0, so cs ! (-1) is never actually computed.

One last thing: the parade of parameters passed to make. If I had defined make with a let alongside the other lets, as bs cs ds n k s would not have to be passed around:

    let ub = upperBound ds k s
    let freePotion = cs ! (ub - 1)
    let noFirst = if ub > 0 then (max (n - freePotion) 0) * x else n * x
    let make i      -- define make here
        | bs ! i > s = maxBound :: Int64
        | otherwise = (max 0 remain) * (as ! i)
        where
            remain = n - (if ub == 0 then 0 else (cs ! (ub - 1)))
            ub = upperBound ds k (s - bs ! i)
    let bestUsingFirst = minimum $ map make [0..m-1]
    print $ min noFirst bestUsingFirst

On Codeforces this causes a parse error, while on my desktop there is no problem. I use GHC 8.x and Codeforces' GHC is 7.x; I guess that's the reason.

code: 22285536

734D - Anton and Chess

Input is not just a list of integers, but a list of (char, int, int). Again, if you try to parse it with String and read:

main = do
    ...
    blacks <- replicateM n $ do
        [post, x, y] <- words `fmap` getLine
        return (post, read x :: Int, read y :: Int)
    ...

You will receive TLE. ByteString will save us again.

import Data.Maybe
import qualified Data.ByteString as BS
import qualified Data.ByteString.Char8 as BS8

parseInt = fst . fromJust . BS8.readInt

main = do
    ...
    blacks <- replicateM n $ do
        [post, x, y] <- BS8.words `fmap` BS.getLine
        return (BS8.unpack post, parseInt x, parseInt y)
    ...

The remaining parts are done just as the editorial says. But as you can see, a lot of memory is used: 167 MB! My C++ code used only 15 MB. It's the same situation as in problem C; the culprits are the list and the boxed tuples, whose three fields would have to be unpacked. My previous post discusses the same problem, and as I said, I'll return to this in problem F.
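If I wanted to shave the memory here, the same trick as in problem F would apply: keep the three fields in separate unboxed arrays rather than in a list of boxed tuples. A rough sketch of only the reading part (not my submission; the actual attack checks are omitted):

import Control.Monad (replicateM)
import Data.Maybe (fromJust)
import qualified Data.ByteString as BS
import qualified Data.ByteString.Char8 as BS8
import Data.Array.Unboxed (UArray, listArray, (!))

parseInt :: BS.ByteString -> Int
parseInt = fst . fromJust . BS8.readInt

main :: IO ()
main = do
    n        <- parseInt `fmap` BS.getLine
    [x0, y0] <- (map parseInt . BS8.words) `fmap` BS.getLine   -- white king
    rows     <- replicateM n $ BS8.words `fmap` BS.getLine
    -- Three flat unboxed arrays instead of one [(String, Int, Int)];
    -- the transient row list can be collected once the arrays are filled.
    let kinds = listArray (1, n) [BS8.head p | [p, _, _] <- rows] :: UArray Int Char
        xs    = listArray (1, n) [parseInt x | [_, x, _] <- rows] :: UArray Int Int
        ys    = listArray (1, n) [parseInt y | [_, _, y] <- rows] :: UArray Int Int
    -- the editorial's "closest piece in each direction" checks go here
    print (x0, y0, kinds ! 1, xs ! 1, ys ! 1)   -- placeholder so the sketch compiles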

code: 22286544

734E - Anton and Tree

Although the input is a tree, I'll treat it as an undirected graph. In C++ I would write it as follows:

vector<int> G[MAX];

for(int i=0; i<n; i++){
  int u, v;
  cin >> u >> v;
  G[u].push_back(v);
  G[v].push_back(u);
}

How do we do the same thing in Haskell? There are several ways to model the sample tree from the problem.

List of List
adj = [[], [2,3], [4,5], [1,8,9], [2], [2,6,7], [5], [5], [3], [3,11], [3], [9]]

The i-th element of adj is the list of neighbors of vertex i. This is useless here because we need random access to the vertices, and random access on a linked list... well, good luck.

Adjacency matrix using 2D array

n is up to 200000, so at least 1525 GB of memory would be needed :(

Adjacency list using Data.Map.Strict
import qualified Data.Map.Strict as Map
type Graph = Map.Map Int [Int]
type Edge = (Int,Int)

buildG :: [Edge] -> Graph
buildG es = foldl addEdge Map.empty es

addEdge :: Graph -> Edge -> Graph
addEdge g (u,v) = Map.insertWith (++) u [v] (Map.insertWith (++) v [u] g)

This is a workable approach, but we can do better, given that the vertices are numbered compactly from 1 to n.

Adjacency list using Data.Array
import Data.Array
import Data.Tuple (swap)

type Graph = Array Int [Int]

buildG :: [Edge] -> Int -> Graph
buildG es n = accumArray (flip (:)) [] (1,n) es

main = do
    ...
    edges <- ...
    let edges' = edges ++ (map swap edges)
    let g = buildG edges' n
    ...

The graph representation is settled. To calculate the diameter we need two vertices at maximum distance from each other, and finding them is simple:

  1. Start DFS or BFS on random vertex (say, 1). Find a vertex with maximum level. Let's call it u.
  2. Start DFS or BFS on vertex u. Find a vertex with maximum level. Let's call it v.
  3. The diameter is the level of v.

It's a piece of cake to implement the above in C++, but it's not that simple in Haskell.

I separated the traversal into two steps: generating a list that represents the DFS order, and building an array of (vertex, level) tuples. Here is my first attempt at generating the DFS order list:

preorder :: Graph -> Int -> [(Int,Int)]
preorder g start = dfs start start where
    dfs parent v = (v,parent) : concatMap (dfs v) (filter (/= parent) (g ! v))

In this problem we need to compare a vertex's color with its parent's color, so I paired each vertex with its parent.

At first it looked simple, elegant, and efficient... until I got TLE. This version of preorder becomes incredibly slow as n (the number of vertices) grows. For example,

n = 200000
colors = 1 0 1 0 1 0 1 0 ...
edges = 1 2
        2 3
        3 4
        ...
        199999 200000

For this simple chain, preorder took 50 seconds to terminate. I guessed the concatMap part was the bottleneck, but couldn't be sure. If someone knows the exact reason, please let me know.

UPD: Following ko_osaga's suggestion, I expanded the evaluation of dfs by hand. In the case above every adjacency list contains just one neighbor, so the expression expands roughly like this:

(1,1) : (((2,1) : (((3,2) : ( ... )) ++ [])) ++ [])

with one more (... ++ []) layer wrapped around each deeper vertex, down to (n,n-1) : []. Conceptually the traversal is O(n). But concatMap is defined with foldr, which builds this result lazily: dfs constructs one huge expression with up to 200000 pending appends before anything is consumed, and every (... ++ []) layer stays wrapped around the remaining tail, so producing the k-th element has to peel through about k append thunks. That makes forcing the whole list quadratic, which I think is where the execution time goes.

My next try was generating a BFS order instead. This time I used Data.Sequence to merge the visit orders.

import qualified Data.Sequence as Seq
import Data.Foldable (toList)

preorder :: Graph -> Int -> [(Int,Int)]
preorder g start = toList $ bfs Seq.empty [(start,start)] where
    bfs nodes [] = nodes
    bfs nodes queue = bfs (nodes Seq.>< Seq.fromList queue) (concatMap nexts queue)
    nexts (v,parent) = map (\u -> (u,v)) $ filter (/= parent) (g ! v)

Seq is a deque-like data structure, and merging two Seqs with (><) takes O(log(min(n,m))) time, where n and m are the lengths of the two sequences. This version of preorder was reasonably fast, which made me conclude that concatMap really was the problem.

Now we know the order in which the vertices should be visited. Traverse the graph and compute each vertex's level from the level of its parent. (Caution: we didn't compress the graph, that is, we didn't merge adjacent vertices of the same color. So if a vertex v has the same color as its parent, v gets the same level as its parent.)
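Here is a sketch of that last step (the names levelsOf and colors are mine, not from the submission). It relies on the classic lazy-array trick: because the BFS order lists a parent before its children, levels ! p is already defined when a child looks it up.

import Data.Array

-- colors maps each vertex to its color; order is the (vertex, parent) list
-- produced by preorder above, with the starting vertex paired with itself.
levelsOf :: Array Int Int -> Int -> [(Int, Int)] -> Array Int Int
levelsOf colors n order = levels where
    levels = array (1, n) [(v, levelOf v p) | (v, p) <- order]
    levelOf v p
        | v == p                   = 0               -- the starting vertex
        | colors ! v == colors ! p = levels ! p      -- same color: same level
        | otherwise                = levels ! p + 1  -- color change: one level deeper

Running this from vertex 1, taking the vertex u with the maximum level, and running it again from u gives the diameter as in the three steps above.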

code: 22324674

734F - Anton and School

The arrays b and c can have up to 200000 elements each. If we read b and c and compute d and a from the editorial as follows:

main = do
    n <- getInt
    bs <- getInts                -- defined as problem C
    cs <- getInts
    let ds = zipWith (+) bs cs
    let as = ...                 -- get the original array

There will be four [Int] lists, each holding 200000 integers. A list of n Ints consumes about 5n words, so these four lists alone take 4,000,000 words = 4,000,000 * 8 bytes ≈ 30 MB, compared to the 1.6 MB a 200000-element unboxed Int array needs. Moreover, we need more arrays than just a, b, c, and d.

Clearly the problem is the list. Fortunately, in this problem we can use unboxed arrays, because every input element is needed throughout the whole calculation and they are all integers.

So first of all, keeping the input around as a [Int] should be avoided:

import Data.Array.IArray
import Data.Array.Unboxed

getIntArray n = listArray (1,n) `fmap` getInts :: IO (UArray Int Int)

main = do
    n <- getInt
    bs <- getIntArray n
    cs <- getIntArray n

What? It constructs an unboxed array but still uses getInts to build it! But this time the [Int] is garbage collected as its elements are written into the array, so a 200000-element list never has to live in memory all at once. This is a fun part of Haskell ;)
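The derived arrays can be built the same way, through a transient list that is collected right away. For example, d from the editorial (the element-wise sum of b and c) could be built like this; a sketch, not the exact submitted line:

    let ds = listArray (1, n) (zipWith (+) (elems bs) (elems cs)) :: UArray Int Int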

Several lines pass elems as (where as is a UArray) to map and forM_. My GHC 8.x is fine with traversing the array directly, but Codeforces' GHC raises a compile error; I guess arrays are not Foldable/Traversable in GHC 7.x. Lists certainly are, and elems returns the elements of an array as a list.

As a result, the total memory usage is 81 MB, which is comparable to my C++ code (54 MB).

code: 22329902


By codeonwort, 8 years ago, In English

Educational Round 15 — Problem 702E (Analysis of Pathes on Functional Graph): http://codeforces.net/problemset/problem/702/E
The official editorial: http://codeforces.net/blog/entry/46324?locale=en

This article is not about another solution, but about how to implement the solution from the editorial efficiently in Haskell. After reading the editorial I tried to solve it in Haskell, but got memory limit exceeded on test 28: http://codeforces.net/contest/702/submission/19550505

First I wondered if my code was wrong, so I ported the Haskell code to C++, and it was accepted (0.7 s, 138 MB): http://codeforces.net/contest/702/submission/20682955

So the problem is the nature of Haskell: a strong type system and rich expressivity, at the cost of heavy memory usage and lower performance. Clearly the heavy memory usage came from all those state nodes; each state is an array of nodes. There can be up to 10^5 nodes and 34 states (k <= 10^10, and 10^10 takes 34 binary digits).

Each node consists of (next node, total weight, minimum weight), with types Int, Int64, and Int64 respectively. In C++ this is (4 + 8 + 8 bytes) * 10^5 * 34 = 64.85 MB. How much memory does a node need in Haskell?

type Node = (Int, Int64, Int64)

A Node has a bunch of pointers and wrappers, so one Node consumes 10 words. (You can see Haskell types' memory footprints here: https://www.youtube.com/watch?v=_pDUq0nNjhI) A word is 8 bytes on a 64-bit machine, so one Node takes 80 bytes. Holding all the states then needs 259 MB, and that is just for the states; other operations need their own memory, so the 512 MB memory limit is exceeded.

To save memory I unpacked all fields of Node:

{-# LANGUAGE BangPatterns #-}
data Node = Node {-# UNPACK #-} !Int {-# UNPACK #-} !Int64 {-# UNPACK #-} !Int64

It removes all the pointers and wrappers inside a Node. Counting the constructor header, a Node now takes 4 words = 32 bytes. The states now consume 103 MB in total and the program's overall memory usage is about 210 MB. But it's not over: TLE on test 35 (n = 10^5, k = 10^10) :(

Again, what matters is one of Haskell's defining features: lazy evaluation. Our states are memory-heavy and need a lot of numeric calculation. There is no unnecessary work to skip; every evaluated result gets used. This is not a place for lazy evaluation but for strict evaluation.

To force evaluation of states, I tried evaluate in the Control.Exception module: http://codeforces.net/contest/702/submission/20793048

evaluate :: a -> IO a

But there was no improvement. Stuck here, I was going mad until I realized that evaluate only evaluates to weak head normal form. To fully evaluate something we need force from Control.DeepSeq.

To force a Node, Node should be an instance of NFData.

import Control.DeepSeq
instance NFData Node where
    rnf (Node a b c) = rnf a `seq` rnf b `seq` rnf c
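Applying it is then just a matter of wrapping a state in evaluate (force ...). A toy sketch showing the difference between evaluate alone (WHNF) and evaluate . force (full evaluation); the state built here is a dummy, and the Node definition is repeated so the snippet compiles on its own:

import Control.DeepSeq (NFData (..), force)
import Control.Exception (evaluate)
import Data.Array (Array, listArray)
import Data.Int (Int64)

data Node = Node {-# UNPACK #-} !Int {-# UNPACK #-} !Int64 {-# UNPACK #-} !Int64
instance NFData Node where
    rnf (Node a b c) = rnf a `seq` rnf b `seq` rnf c

type State = Array Int Node   -- one state = an array of nodes, as in the solution

main :: IO ()
main = do
    let st = listArray (1, 5) [Node i 0 0 | i <- [1 .. 5]] :: State
    _ <- evaluate st           -- WHNF only: stops at the array constructor
    _ <- evaluate (force st)   -- force: every Node inside is evaluated as well
    putStrLn "state fully evaluated"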

Finally it was accepted (1.84s, 242MB): http://codeforces.net/contest/702/submission/20805671

With all my efforts, my Haskell solution is still 2.6 times slower than the C++ one, which I wrote quite easily. The problem's time limit is 2 seconds and the Haskell version took 1.84 s. So tough!

This problem made me give up on using Haskell in contests. Maybe I should use Haskell only for practice :(


By codeonwort, 8 years ago, In English

Educational Round 15, the Haskell version. I couldn't solve E and F. :P

702A - Maximum Increase

I think it's trivial, so just see: 19483133

One thing to mention: you don't need to hold all the integers in memory. You could process them one by one and reduce the space complexity to O(1), but here the problem size is small (N <= 100000), so I reused my previous code that reads the whole string at once.
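A minimal sketch of this (not the linked submission; plain read is used for brevity): the answer is the length of the longest strictly increasing contiguous run, which we can read off the pairwise comparisons.

import Data.List (group)

-- Longest strictly increasing contiguous segment.
maxIncrease :: [Int] -> Int
maxIncrease [] = 0
maxIncrease xs = 1 + maximum (0 : map length (filter head (group ups)))
  where
    ups = zipWith (<) xs (tail xs)   -- True where a[i] < a[i+1]

main :: IO ()
main = do
    _  <- getLine
    xs <- (map read . words) `fmap` getLine :: IO [Int]
    print (maxIncrease xs)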

Time complexity: O(N)

702B - Powers of Two

All n integers satisfy 1 <= a[i] <= 10^9, so the sums that can be powers of 2 lie in the range 2 <= 2^x <= 2^30.

justification:

  • 1 + 1 = 2
  • 10^9 = (10^3)^3 < (2^10)^3 = 2^30

Therefore for each a[i] and 2^x (1 <= x <= 30), we count the number of a[j] such that a[i] + a[j] = 2^x.

First, traverse the input and map each integer to the number of times it appears. Let's call this mapping book. I used Data.Map.Strict.

Second, for each a[i] and 2^x, count the combinations.

  • If a[i] >= 2^x, there is no solution.
  • If 2^x - a[i] is not in the book, there is no solution.
  • If 2^x - a[i] == a[i], there are book[2^x - a[i]] - 1 solutions.
  • Otherwise, there are book[2^x - a[i]] solutions.

Sum up all the solutions and divide by 2 (we didn't enforce the i < j constraint, so every pair has been counted twice), then print it: 19513379
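Putting the four cases together, a sketch (not the linked submission; plain read again for brevity):

import qualified Data.Map.Strict as Map

-- For every a[i] and every power of two 2^x, count the partners a[j] with
-- a[i] + a[j] = 2^x, following the cases above, then halve the total.
countPairs :: [Integer] -> Integer
countPairs as = total `div` 2
  where
    book  = Map.fromListWith (+) [(a, 1) | a <- as]
    total = sum [partners a (2 ^ x) | a <- as, x <- [1 .. 30 :: Int]]
    partners a p
        | a >= p    = 0
        | otherwise = case Map.lookup (p - a) book of
            Nothing -> 0
            Just c  -> if p - a == a then c - 1 else c

main :: IO ()
main = do
    _  <- getLine
    as <- (map read . words) `fmap` getLine :: IO [Integer]
    print (countPairs as)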

Because the answer can be pretty big, I used Integer rather than Int. I always forget this. On my machine Int is 64-bit, but on Codeforces Int seems to be 32-bit. So always use Data.Int.Int64 or Integer for big answers.

Time complexity: O(NlogN)

702C - Cellular Network

Let's view the line as a 1D Voronoi diagram. That is, we assign to each tower the region in which every city belongs to that tower. Because it's a 1D situation and the towers are given in sorted order, this is very easy: the boundaries are simply the midpoints of consecutive towers.

Given the sorted boundaries, for each city we binary search for the tower it belongs to, compute the distance between them, and update the maximum distance so far: 19530243

It's annoying that Haskell's immutable arrays don't allow O(1) destructive updates, but in this case we only need to read the array. Happy array time!
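A sketch of the per-city search (not the linked submission; the names are mine). It is a slight variant of the midpoint idea: binary search directly in the tower array and compare against the neighbouring tower on each side, which avoids worrying about how to round the boundaries.

import Data.Array (Array, listArray, (!), bounds)

-- Distance from one city to its nearest tower: find the first tower >= the
-- city by binary search, then also check the tower just before it.
nearest :: Array Int Integer -> Integer -> Integer
nearest towers city = minimum candidates
  where
    (lo, hi) = bounds towers
    idx = go lo (hi + 1)          -- first tower >= city, or hi + 1 if none
    go lb ub
        | lb >= ub            = ub
        | towers ! mid < city = go (mid + 1) ub
        | otherwise           = go lb mid
        where mid = (lb + ub) `div` 2
    candidates = [ towers ! idx - city       | idx <= hi ]
              ++ [ city - towers ! (idx - 1) | idx > lo  ]

main :: IO ()
main = do
    [_n, _m] <- (map read . words) `fmap` getLine :: IO [Int]
    cities   <- (map read . words) `fmap` getLine :: IO [Integer]
    towers   <- (map read . words) `fmap` getLine :: IO [Integer]
    let arr = listArray (1, length towers) towers :: Array Int Integer
    print (maximum (map (nearest arr) cities))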

Time complexity: O(N + MlogM)

702D - Road to Post Office

This is a math problem, nothing specific to Haskell. See the official editorial: http://codeforces.net/blog/entry/46324?locale=en
