bihariforces's blog

By bihariforces, history, 5 months ago, In English

The mode of an array is its most frequent element, and the mode frequency is the frequency of that most frequent element.

For example, $$$[1,3,5,5]$$$ has mode $$$5$$$ and its frequency is $$$2$$$, so the mode frequency of this array is $$$2$$$.

For an array of length $$$N$$$, we need to find the sum of the mode frequencies of all its subarrays.

Is there any way to do this better than $$$O(N^2)$$$?
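For reference, here is a minimal $$$O(N^2)$$$ baseline (names are just illustrative): for each left endpoint, extend the right endpoint while maintaining a frequency map and the current mode frequency. It returns $$$13$$$ for the example $$$[1,3,5,5]$$$ above.

```python
from collections import defaultdict

def sum_of_mode_frequencies(A):
    """O(N^2) baseline: for each left endpoint, extend the right endpoint
    while maintaining a frequency map and the current mode frequency."""
    total = 0
    n = len(A)
    for i in range(n):
        freq = defaultdict(int)
        mode_freq = 0
        for j in range(i, n):
            freq[A[j]] += 1
            mode_freq = max(mode_freq, freq[A[j]])
            total += mode_freq
    return total

# sum_of_mode_frequencies([1, 3, 5, 5]) == 13
```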


By bihariforces, history, 6 months ago, In English

Greetings Codeforces Community!

I invite you to the Bihari Coding Contest, a 2-hour DSA-style coding contest, today (June 4th) at 19:00 IST (7 PM).

The contest will consist of 7 tasks of varying difficulty. There won't be any prizes, but your participation will be greatly appreciated. This is the first of a series that I am planning to host regularly, and active participation and feedback will help me improve my contest-setting skills.

You can register for the contest here: https://www.hackerrank.com/bihari-coding-contest. Hope you find the tasks enjoyable.

Congratulations to the top 5 winners:

  1. tyr0Whiz

  2. Egor

  3. Homie_Lander

  4. javieri_oliva_s1

  5. MeetBrahmbhatt

First Solves:

  1. 18o3

  2. tyr0Whiz

  3. tyr0Whiz

  4. 18o3

  5. tyr0Whiz

  6. tyr0Whiz

  7. Egor


By bihariforces, history, 10 months ago, In English

Suppose we're given a graph with $$$N$$$ vertices and $$$M \space (\le \frac{N \times (N - 1)}{2})$$$ undirected edges; vertices are labelled from $$$0$$$ to $$$N - 1$$$.

We have to add $$$K \space (\le \frac{N \times (N - 1)}{2} - M)$$$ random undirected edges to this graph such that no edge is repeated (including the ones we added).

The goal is to deterministically minimize the number of calls made to the $$$rand()$$$ function (ideally only $$$K$$$ calls), the choice of edges should be provably random, and the rest of the algorithm should stay reasonably efficient, for $$$N, M, K \le 10^6$$$.

I feel this problem can be called "pick $$$K$$$ random elements without repetition from a complement set", where $$$\{ (i, j) \mid 0 \le i < j < N \}$$$ is the universal set and we are given a set of $$$M$$$ pairs initially.

I am unable to think of any provably random and efficient way to accomplish this. For example, I could pick a random vertex, then pick $$$rand(0, N - degree)$$$ and do some mapping for the second vertex, which is probably $$$O(degree)$$$ and can become inefficient.
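To make that mapping step concrete, here is a sketch of it (hypothetical helper, assuming the adjacency lists are available). Note that picking the first vertex uniformly does not by itself make the resulting edge uniform over all missing edges.

```python
import random

def random_non_neighbor(u, adj, n):
    """Sketch of the mapping step: pick a uniformly random vertex v != u
    with no existing edge (u, v), using a single call to the RNG.
    adj[u] is the collection of current neighbours of u."""
    blocked = sorted(set(adj[u]) | {u})   # indices that must be skipped
    candidates = n - len(blocked)         # vertices other than u and its neighbours
    v = random.randrange(candidates)      # the only RNG call
    for b in blocked:                     # O(degree) remapping past blocked indices
        if b <= v:
            v += 1
    return v
```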

I would like to hear any ideas to accomplish this, thanks.


By bihariforces, history, 11 months ago, In English

This is a made-up problem, but I was wondering if there is a faster way to solve it.

Given an array $$$A$$$ of length up to $$$10^5$$$, count the number of non-empty subarrays of $$$A$$$ whose length is divisible by the number of distinct elements present in them, i.e. $$$distinctCount(subarray) \mid length(subarray)$$$.
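To pin the statement down, a minimal $$$O(N^2)$$$ baseline (illustrative only) looks like this:

```python
def count_valid_subarrays(A):
    """O(N^2) baseline: count subarrays whose length is divisible
    by their number of distinct elements."""
    n = len(A)
    total = 0
    for i in range(n):
        seen = set()
        for j in range(i, n):
            seen.add(A[j])
            if (j - i + 1) % len(seen) == 0:
                total += 1
    return total
```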

I can't think of any useful optimizations; trying to approach it with lazy segment trees gets really complex, so my only hope is either square-root decomposition or combining two algorithms into an $$$O(N\sqrt{N})$$$ solution.

I was thinking in terms of writing it as $$$length = k \cdot distinctCount$$$ for some integer $$$k$$$, then combining two algorithms for the cases $$$k \le \sqrt{N}$$$ and $$$k > \sqrt{N}$$$, but haven't made much progress.

I am curious, so any help is appreciated.


By bihariforces, history, 14 months ago, In English

If you have not seen it already, DQUERY is a problem on SPOJ which asks us to find the number of distinct elements present in a given range.

There are ways to solve it online, which might also be possible with what I'll be discussing today, but for the sake of explanation we'll solve it offline.

The offline way to solve DQUERY involves the use of Fenwick trees, which is already a well-known technique, but we'll try to formulate the solution ourselves in a much more intuitive way.

Let us forget about queries and assume we have a hypothetical $$$N \times N$$$ grid $$$distinct$$$, where $$$distinct[i][j]$$$ is the number of distinct elements present in the subarray $$$A[i..j]$$$.

Now we'll use the technique of individual contribution: rather than taking a range and counting the distinct elements in it, we take an element and find all ranges to which this element contributes. To do that we need a way to resolve duplicates, i.e. if there are duplicates, only the first occurrence should be counted; this makes it very easy to avoid overcounting.

An element $$$A[i]$$$ contributes to all subarrays where it is the first occurrence of its value, i.e. any subarray which starts after the previous occurrence of $$$A[i]$$$ and contains index $$$i$$$.

If we try to increment the contribution of $$$A[i]$$$ in our hypothetical $$$distinct$$$ grid, we see that every cell $$$distinct[x][y]$$$ with $$$prev[A[i]] + 1 \le x \le i$$$ and $$$i \le y \le N - 1$$$ gets incremented.

This increment is a 2D range update on a rectangle, and the rectangle's right edge always lies on the grid's right edge.

Assuming we somehow updated our grid for each $$$A[i]$$$, the answer to every possible query would have been precomputed, but unfortunately that isn't feasible.

The problem statement is now: given an $$$N \times N$$$ 2D grid and $$$N$$$ 2D range increments, find the values of $$$Q$$$ given cells after performing all the increments.

And this is easy to solve offline. First, store each increment as an $$$add$$$ operation on its beginning row and an $$$end$$$ operation on its ending row. Let us use a Fenwick tree to maintain only the current active row of the grid. Then, while moving down the grid row by row, the start of a rectangle is merely a range increment on the Fenwick tree and its end is a range decrement, and we only need to query points: for each query in the current row, we query its column in the current active row. Range updates and point queries, the ideal use case for a Fenwick tree.
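To make the sweep concrete, here is a minimal offline sketch of the idea described above (a reference implementation, not the linked submission; it assumes 0-based inclusive queries):

```python
class Fenwick:
    """Fenwick tree supporting range add + point query (prefix sums of deltas)."""
    def __init__(self, n):
        self.n = n
        self.t = [0] * (n + 1)

    def _add(self, i, d):               # internal point add at 0-based index i
        i += 1
        while i <= self.n:
            self.t[i] += d
            i += i & -i

    def range_add(self, l, r, d):       # add d to columns l..r (inclusive)
        self._add(l, d)
        if r + 1 < self.n:
            self._add(r + 1, -d)

    def point_query(self, i):           # current value in column i
        i += 1
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & -i
        return s


def solve_dquery(A, queries):
    """queries: list of (l, r), 0-based inclusive; returns answers in input order."""
    n = len(A)
    add_at = [[] for _ in range(n)]     # rectangles whose first row is x
    end_at = [[] for _ in range(n)]     # rectangles whose last row is x
    prev = {}
    for i, v in enumerate(A):
        add_at[prev.get(v, -1) + 1].append(i)   # rows prev+1..i, columns i..N-1
        end_at[i].append(i)
        prev[v] = i

    ask_at = [[] for _ in range(n)]
    for qi, (l, r) in enumerate(queries):
        ask_at[l].append((r, qi))               # cell (l, r) is read while row l is active

    fen = Fenwick(n)
    ans = [0] * len(queries)
    for x in range(n):                          # sweep rows top to bottom
        for i in add_at[x]:
            fen.range_add(i, n - 1, 1)          # rectangle becomes active on its columns
        for r, qi in ask_at[x]:
            ans[qi] = fen.point_query(r)
        for i in end_at[x]:
            fen.range_add(i, n - 1, -1)         # rectangle ends after row i
    return ans

# solve_dquery([1, 1, 2, 1], [(0, 3), (1, 2)]) == [2, 2]
```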

Thus we were able to perform 2D range updates and point queries on an $$$N \times N$$$ grid without actually storing the entire grid, and looking at the DQUERY problem this way makes it very intuitive to solve offline using a Fenwick tree.

Here's the accepted solution implemented using this idea in Python.


By bihariforces, history, 15 months ago, In English

Given a sorted array where every element is distinct, we need to evaluate the product of all pairwise differences, modulo $$$10^9 + 7$$$:

$$$ \prod_{i < j} (arr[j] - arr[i]) \bmod (10^9 + 7) $$$
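For reference, the direct $$$O(N^2)$$$ evaluation of this product is straightforward (illustrative sketch only):

```python
MOD = 10**9 + 7

def product_of_differences_brute(arr):
    """O(N^2) reference: product of arr[j] - arr[i] over all i < j, mod 1e9+7."""
    n = len(arr)
    res = 1
    for i in range(n):
        for j in range(i + 1, n):
            res = res * ((arr[j] - arr[i]) % MOD) % MOD
    return res
```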

The best approach I can think of is finding the frequency of each prime in the product by grouping elements with the same modular residue, which allows roughly $$$O(N \cdot M / \log M)$$$ if $$$M$$$ is the maximum number in the array.

I have considered mathematical approaches via polynomial multiplication and divide and conquer, which seem most promising, but I can't work out a complete solution.

Is there any way to compute this for arrays of length up to $$$10^5$$$ and numbers up to $$$10^9$$$?


By bihariforces, history, 15 months ago, In English

Given an array $$$arr$$$ of length $$$N$$$, find the sum, over every subset of $$$arr$$$, of the count of unique values in that subset, modulo $$$10^9 + 7$$$.

I can see that the result depends only on the frequencies of the distinct values, and it seems like some combinatorial trick; it's a made-up question but seems standard. What is the most optimal solution for a length-$$$N$$$ array?
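Just to pin down the statement, here is an exponential reference implementation (only usable for tiny arrays; names are illustrative):

```python
def sum_of_distinct_counts_brute(arr, MOD=10**9 + 7):
    """Exponential reference: sum over all non-empty subsets of the
    number of distinct values in the subset (for sanity checks only)."""
    n = len(arr)
    total = 0
    for mask in range(1, 1 << n):
        chosen = {arr[i] for i in range(n) if mask >> i & 1}
        total += len(chosen)
    return total % MOD
```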


By bihariforces, history, 15 months ago, In English

We are given an array of length $$$N$$$ and $$$Q$$$ queries (offline), where each query is a value $$$K$$$; for each query we need to count the number of pairs in the array with XOR equal to $$$K$$$.

If $$$N$$$ and $$$Q$$$ can both be up to $$$10^5$$$, is it possible to answer these queries faster than traversing the entire array for each query?
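To fix notation, this is the per-query baseline I mean (one pass over the value frequencies per query; names are illustrative):

```python
from collections import Counter

def count_pairs_with_xor(A, queries):
    """Baseline: count unordered pairs (i < j) with A[i] ^ A[j] == K,
    doing one pass over the value frequencies per query."""
    freq = Counter(A)
    ans = []
    for K in queries:
        if K == 0:
            ans.append(sum(c * (c - 1) // 2 for c in freq.values()))
        else:
            total = 0
            for v, c in freq.items():
                w = v ^ K
                if w in freq:
                    total += c * freq[w]
            ans.append(total // 2)   # each unordered pair was counted twice
    return ans
```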

An analogous version that counts pairs with a given sum uses FFT to precompute the frequency of every possible sum, which seems very difficult to do for the XOR operation.

We can assume the values in the array are up to $$$10^5$$$ too, but if a solution exists that works for larger numbers as well, that's preferable.


By bihariforces, 17 months ago, In English

This is a made-up question, not taken from somewhere, so I am not sure if a good solution exists, but here it is anyway.

Given an array $$$A$$$ of $$$N$$$ non-negative integers, find the longest non-decreasing subsequence of this array such that every adjacent pair in the subsequence is coprime, i.e. $$$\gcd(a_{i}, a_{i + 1}) = 1 \ \forall \ 0 \le i < K - 1$$$, where $$$K$$$ is the length of the subsequence.

Would it be possible to have a solution better than $$$O(N^2)$$$ if the numbers in this array are up to $$$10^5$$$?
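For reference, the $$$O(N^2)$$$ DP I have in mind is the standard LIS-style one with the extra coprimality check (sketch; names are illustrative):

```python
from math import gcd

def longest_nondecreasing_coprime_subsequence(A):
    """O(N^2) baseline DP: dp[i] = length of the longest valid subsequence
    ending at index i (non-decreasing, adjacent elements coprime)."""
    n = len(A)
    dp = [1] * n
    for i in range(n):
        for j in range(i):
            if A[j] <= A[i] and gcd(A[j], A[i]) == 1:
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp, default=0)
```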

I was exploring solutions using segment trees, but it's very tricky compared to a different version where adjacent pairs should be non-coprime, which would be trivial.

If there is some way to make it more efficient I would like to know.


By bihariforces, history, 20 months ago, In English

https://atcoder.jp/contests/abc290/tasks/abc290_e

We have a variation of this: find the sum of $$$F(x)$$$ over every subsequence of a string, ideally better than $$$O(N^2)$$$.

I can only think of an $$$O(N^2)$$$ approach which involves finding the individual contribution of every unequal pair of characters; can we optimize this?


By bihariforces, history, 20 months ago, In English

Let's stick to only odd-length subsets for simplicity: given an array of integers, find the sum of the medians of all subsets modulo $$$10^9 + 7$$$, where the median is the middle element of an odd-length subset when sorted.

The first step would obviously be sorting, but then, to count the number of subsets whose median is $$$arr[i]$$$, we need to find the number of ways to pick equal numbers of elements from its left and its right; if the left has $$$x$$$ elements and the right has $$$y$$$ elements, then

$$$ count(subsets) = \sum_{i=0}^{\min(x,y)} \binom{x}{i} \binom{y}{i} $$$

Finding this for each element would make it $$$O(N^2)$$$. Is there a faster method to find the sum of binomial coefficients shown above, possibly with some precomputation? Or some entirely different approach to the original problem?
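For reference, a direct implementation of the counting above (sketch; assumes distinct values for simplicity, names are illustrative):

```python
MOD = 10**9 + 7

def sum_of_medians_brute(arr):
    """Direct O(N^2) implementation of the counting described above
    (odd-length subsets only; assumes distinct values)."""
    arr = sorted(arr)
    n = len(arr)
    # precompute factorials for binomial coefficients mod p
    fact = [1] * (n + 1)
    for i in range(1, n + 1):
        fact[i] = fact[i - 1] * i % MOD
    inv_fact = [1] * (n + 1)
    inv_fact[n] = pow(fact[n], MOD - 2, MOD)
    for i in range(n, 0, -1):
        inv_fact[i - 1] = inv_fact[i] * i % MOD

    def C(a, b):
        if b < 0 or b > a:
            return 0
        return fact[a] * inv_fact[b] % MOD * inv_fact[a - b] % MOD

    total = 0
    for idx, val in enumerate(arr):
        x, y = idx, n - 1 - idx            # elements to the left / right
        ways = sum(C(x, i) * C(y, i) for i in range(min(x, y) + 1)) % MOD
        total = (total + (val % MOD) * ways) % MOD
    return total
```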


By bihariforces, history, 22 months ago, In English

How would we implement a multiset data structure that allows insertion, deletion, and querying the GCD of the values currently present in the multiset?

insert(5)
GCD() -> 5
insert(10)
GCD() -> 5
remove(5)
GCD() -> 10

This question is not from an existing problem; I was just wondering whether it is possible to have such a data structure when deletion has to work under an operation like GCD. I believe this would be helpful: https://cp-algorithms.com/data_structures/deleting_in_log_n.html, but I can't understand how we would apply it, or whether there is some other method that would work for large numbers up to $$$10^{18}$$$.
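This isn't necessarily the technique from the linked article, but here is one sketch of such a structure (assuming an upper bound on the total number of insertions is known): keep a segment tree over insertion slots and use the fact that $$$\gcd(0, x) = x$$$, so clearing a slot behaves like a deletion.

```python
from math import gcd

class GcdMultiset:
    """Sketch: segment tree over insertion slots, assuming at most `cap` insertions.
    Since gcd(0, x) = x, resetting a slot to 0 acts as removing that element."""
    def __init__(self, cap):
        self.cap = cap
        self.tree = [0] * (2 * cap)
        self.slots = {}                # value -> list of occupied slot indices
        self.next_slot = 0

    def _set(self, pos, val):          # point update, recompute ancestors
        i = pos + self.cap
        self.tree[i] = val
        i //= 2
        while i:
            self.tree[i] = gcd(self.tree[2 * i], self.tree[2 * i + 1])
            i //= 2

    def insert(self, x):
        self.slots.setdefault(x, []).append(self.next_slot)
        self._set(self.next_slot, x)
        self.next_slot += 1

    def remove(self, x):               # removes one occurrence of x
        pos = self.slots[x].pop()
        self._set(pos, 0)

    def GCD(self):                     # GCD of all values present (0 if empty)
        return self.tree[1]

# ms = GcdMultiset(cap=3)
# ms.insert(5);  ms.GCD()  -> 5
# ms.insert(10); ms.GCD()  -> 5
# ms.remove(5);  ms.GCD()  -> 10
```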


By bihariforces, history, 22 months ago, In English

Suppose we're given an array of length $$$N$$$ up to $$$10^6$$$ and we apply a prefix sum to the array in place, i.e. $$$[a, b, c]$$$ becomes $$$[a, a + b, a + b + c]$$$; we repeat this $$$K$$$ times, where $$$K$$$ can be up to $$$10^6$$$.

Can we find the resulting array after $$$K$$$ operations faster than $$$O(N \cdot K)$$$?
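For reference, the straightforward $$$O(N \cdot K)$$$ process described above (sketch; name is illustrative). For example, it turns $$$[1, 2, 3]$$$ into $$$[1, 4, 10]$$$ after $$$K = 2$$$ passes.

```python
def repeated_prefix_sums(A, K):
    """Apply the in-place prefix sum pass K times; O(N*K) baseline."""
    A = list(A)
    for _ in range(K):
        for i in range(1, len(A)):
            A[i] += A[i - 1]           # one in-place prefix sum pass
    return A

# repeated_prefix_sums([1, 2, 3], 2) == [1, 4, 10]
```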

I have observed that each preceding element has some contribution to each element of the resulting array, but the coefficients don't seem intuitive to derive; can someone help derive how to compute the resulting array?

Finding a particular element of the resulting array, for example just the last element, faster than $$$O(N \cdot K)$$$ would also suffice; hope someone can help.
