FHVirus's blog

By FHVirus, history, 10 months ago, In English

Hello Codeforces! Recently, I was learning Edmonds' blossom algorithm for general graph matching, and I've seen implementations like this or this. I failed to prove that they run in $$$O(VE)$$$ or $$$O(V^3)$$$, and I suspect that they might just be $$$O(V^2 E)$$$ or $$$O(V^4)$$$ with very low constants.

The main reason is that when they call blossom(v, w, a), v might move through other already-contracted blossoms, and some edges may be used $$$O(V)$$$ times. However, I failed to construct a case where the number of such edges is $$$\omega(V)$$$, which would disprove the $$$O(V^3)$$$ upper bound. I believe that the implementation here does not suffer from this problem, since v jumps over the already-contracted blossoms.

Resorting to the problems on Library Checker and UOJ has failed, too, as the two implementations both passed the test cases with similar runtimes. Since most implementations don't jump over the already-contracted blossoms, it might be that most people are using a bad implementation, and this worries me. Does anyone know whether the codes mentioned earlier are correct or not?

Wishing you all a favorable weather and thanks in advance!

Note: the test cases of the problem on Library Checker are mostly randomly generated.


By FHVirus, history, 13 months ago, In English

Hello Codeforces! Today I feel thirsty for contribution, so I'll share a trick to get an (almost) unlimited amount of (almost) square grid paper during contests, and I hope that I will get some contribution.

If you use Vim, open an empty file and type the following from normal mode:

i+<esc>25A---+<esc>
o|<esc>25A   |<esc>
ggVGyG35pGdd

Then you will get this: Da Grid

The Vim commands are basically just doing the copy-and-paste part for you. If you don't use Vim, you can manually produce the pattern instead.
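
If you'd rather generate the file with a program than with an editor macro, here is a tiny sketch (mine, not part of the original trick) that prints the same pattern:

```cpp
// Prints the same 25 x 35 grid of (almost) square cells described below.
#include <cstdio>

int main() {
    const int W = 25, H = 35;                          // cells wide, cells high
    for (int row = 0; row < H; ++row) {
        for (int c = 0; c < W; ++c) std::printf("+---"); // border line: +---+---+...
        std::printf("+\n");
        for (int c = 0; c < W; ++c) std::printf("|   "); // cell line:   |   |   |...
        std::printf("|\n");
    }
    for (int c = 0; c < W; ++c) std::printf("+---");     // closing border
    std::printf("+\n");
}
```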

The grid is 101 characters wide * 71 characters high, or 25 cells wide * 35 cells high. 101 * 71 characters roughly matches DOMjudge's upper limit for printing, but please test it in the testing session. This trick was tested at the 2023 Taoyuan regional, and the staff approved it.

Since the grid cells are (almost) square, you can use them in DP and Geometry problems. Happy printing!

P.S. Please don't print more than you need. Some people got warnings from the jury for printing 10+ pages of grid paper during the testing session.


By FHVirus, history, 3 years ago, In English

Prerequisite: Minkowski sum.

When I was solving the problem yesterday, I came up with a simple solution. I hadn't seen it published by anyone else, so I decided to write a blog myself.

Given two convex polygons $$$A$$$ and $$$B$$$, their minimum distance is equal to the minimum distance from the origin $$$(0, 0)$$$ to their Minkowski sum $$$C = A + (-B)$$$, where $$$-B$$$ means negating all the points of $$$B$$$.

Heuristic proof: We want the minimum distance over all pairs of points $$$(a, b)$$$ with $$$a \in A, b \in B$$$, that is, $$$\min |a - b|$$$ over all such pairs. And that's the distance from the origin to the Minkowski sum of $$$A$$$ and $$$-B$$$.
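
Spelled out as a chain of equalities (my notation, with the origin written as $$$(0, 0)$$$):

$$$ \min_{a \in A,\, b \in B} |a - b| = \min_{a \in A,\, b \in B} |(a - b) - (0, 0)| = \min_{c \in A + (-B)} |c - (0, 0)|, $$$

which is exactly the distance from $$$(0, 0)$$$ to $$$C = A + (-B)$$$.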

Note: This should also work for non-convex polygons, but I don't know how to compute the Minkowski sum (with good time complexity) for them.

The basic idea is similar to the GJK algorithm (which is often used to detect object collisions), but it is way easier to implement and returns the actual distance rather than a bool.

There will be no implementation here, because:

  1. Left as an exercise for the reader.
  2. It's very easy, and templates for the Minkowski sum can easily be found.
  3. I'm lazy.
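
That said, if you want a concrete starting point anyway, here is a rough, untested sketch of the idea (entirely mine, not from any template; it assumes strictly convex polygons given in counterclockwise order with integer coordinates, and uses the standard merge-by-angle Minkowski sum):

```cpp
// Minimum distance between two convex polygons via the Minkowski sum C = A + (-B).
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

struct Pt { ll x, y; };
Pt operator+(Pt a, Pt b) { return {a.x + b.x, a.y + b.y}; }
Pt operator-(Pt a, Pt b) { return {a.x - b.x, a.y - b.y}; }
ll cross(Pt a, Pt b) { return a.x * b.y - a.y * b.x; }
ll dot(Pt a, Pt b) { return a.x * b.x + a.y * b.y; }

// Rotate the vertex list so it starts at the lowest (then leftmost) vertex.
void normalize(vector<Pt>& v) {
    rotate(v.begin(), min_element(v.begin(), v.end(), [](Pt a, Pt b) {
        return tie(a.y, a.x) < tie(b.y, b.x);
    }), v.end());
}

// Minkowski sum of two convex CCW polygons by merging edges by polar angle.
vector<Pt> minkowski(vector<Pt> A, vector<Pt> B) {
    normalize(A); normalize(B);
    A.push_back(A[0]); A.push_back(A[1]);
    B.push_back(B[0]); B.push_back(B[1]);
    vector<Pt> C;
    size_t i = 0, j = 0;
    while (i < A.size() - 2 || j < B.size() - 2) {
        C.push_back(A[i] + B[j]);
        ll c = cross(A[i + 1] - A[i], B[j + 1] - B[j]);
        if (c >= 0 && i < A.size() - 2) ++i;
        if (c <= 0 && j < B.size() - 2) ++j;
    }
    return C;
}

// Distance from the origin to segment ab.
double distOriginToSeg(Pt a, Pt b) {
    Pt d = b - a;
    if (dot(d, d) == 0) return hypot((double)a.x, (double)a.y);
    double t = -(double)dot(a, d) / (double)dot(d, d);
    t = max(0.0, min(1.0, t));
    return hypot(a.x + t * d.x, a.y + t * d.y);
}

double minDist(vector<Pt> A, vector<Pt> B) {
    for (auto& p : B) p = Pt{-p.x, -p.y};   // -B; a 180-degree rotation stays CCW
    vector<Pt> C = minkowski(A, B);
    // If the origin lies inside C, the polygons intersect and the distance is 0.
    bool inside = true;
    for (size_t i = 0; i < C.size(); ++i) {
        Pt a = C[i], b = C[(i + 1) % C.size()];
        if (cross(b - a, Pt{-a.x, -a.y}) < 0) inside = false;
    }
    if (inside) return 0.0;
    double ans = 1e18;
    for (size_t i = 0; i < C.size(); ++i)
        ans = min(ans, distOriginToSeg(C[i], C[(i + 1) % C.size()]));
    return ans;
}

int main() {
    int n, m;
    cin >> n >> m;
    vector<Pt> A(n), B(m);
    for (auto& p : A) cin >> p.x >> p.y;
    for (auto& p : B) cin >> p.x >> p.y;
    printf("%.9f\n", minDist(A, B));
}
```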

Some random murmur

While searching about the GJK algorithm, I found that many people refer to $$$A + (-B)$$$ as the "Minkowski difference." This isn't correct (at least to my knowledge), because there isn't really a point in making a new name for something that is just adding the negative. Instead, the Minkowski difference of $$$A$$$ and $$$B$$$ should be the $$$C$$$ such that $$$C + B = A$$$ (the $$$+$$$ denotes the Minkowski sum), in other words, the inverse operation of the Minkowski sum. I don't know why so many people say that the GJK algorithm uses the Minkowski difference while the original paper doesn't seem to mention anything Minkowski-related.


I can't find a practice problem for this (perhaps TIOJ 1041, but it's only in Traditional Chinese and has bad test data). If you happen to know one, please leave it in the comments.

Last but not least, I want upvotes. If you learned something, please give me one. Don't hesitate, just poke that button!


By FHVirus, history, 3 years ago, In English

Hello Codeforces! Since my contribution is currently negative (-4) due to a stupid comment I made, I would like to write a little tutorial to earn some contribution stonks.

Anyways, here's the outline of this tutorial:

  1. We're going to solve a simple problem and a not-so-simple problem, then
  2. use the formula derived from the problems to solve the "Historic Sum" problem and maintain it on data structures like segment trees.
  3. Use the same technique to solve the "Range Add Range Sum" problem with a BIT. Yes, it's possible!

Disclaimer: I did not come up with the method myself. A tutorial on the historic sum problem can be found here in jiry_2's blog, and the range add range sum BIT can also be found on the Internet. I forgot the link to the original post, but the same technique can be found in several different Chinese blogs.

This blog is an exercise in the Feynman learning technique, and I hope that it makes the concepts and reasoning behind this cool method clearer for myself and anyone reading it.

It will be a bit long (I want to make everything clear even for a beginner), so you may want to do some binary jumping and skip over certain parts of the explanation while reading.

Note:

  1. I'm using 1-based indexing for arrays and closed bounds for intervals.
  2. Sections 2 and 3 are independent.
  3. The markdown and math blocks might have some problems, so here's a HackMD version.

1. The essence.

1.1 The simple problem

The problem looks like this:

Given an integer array $$$A$$$ of length $$$n$$$.
We define array $$$B$$$ where $$$B_i = \sum_{j = 1}^{i}{A_j}$$$.
Then we define another array $$$C$$$ where $$$C_i = \sum_{j = 1}^{i}{B_j}$$$.
Then there are $$$q$$$ queries, each of one of two types:
1. Print $$$C_n$$$.
2. Add $$$x$$$ to $$$A_k, k \in [1, n]$$$.

If there were only queries of type 1, computing the prefix sums twice would do the job. But what about modifications?

The key observation here is to play with the summations.

$$$ C_i = \sum_{j = 1}^{i}{B_j} = \sum_{j = 1}^{i}{\sum_{k = 1}^{j}{A_k}} \\ = \begin{matrix} (A_1) & + \\ (A_1 & + & A_2) & + \\ (A_1 & + & A_2 & + & A_3) & + \\ \vdots & & \vdots & & \vdots \\ (A_1 & + & A_2 & + & A_3 & + & \cdots & + & A_i) \end{matrix} $$$

When we switch our view from horizontal to vertical, we can see that there's no need to do the summation (prefix sum) twice. With a bit of math, we can count the number of appearances of each $$$A_j$$$ in the expansion (which is also the height of column $$$j$$$) and thus get a simpler formula:

$$$ C_i = \sum_{j = 1}^{i}{B_j} = \sum_{j = 1}^{i}{\sum_{k = 1}^{j}{A_k}} = \sum_{j = 1}^{i}{(i - j + 1) \cdot A_j} $$$

And for some reason, let's call the $$$(i - j + 1)$$$ term the "contribution multiplier" of $$$A_j$$$ to $$$C_i$$$.

There are two important properties of this formula:

  1. It can be computed with a single for loop.
  2. Every element $$$A_j$$$ contributes independently. That is, a query of type 2 becomes a single-point update.

And the solution to the problem looks like this:

  1. Calculate $$$C_n$$$ once and store the value in a single int or long long variable sum; a type 1 query simply prints it.
  2. For a type 2 query, do sum += x * (n - k + 1);

Complexity: $$$O(n + q)$$$ time, $$$O(1)$$$ space.
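
Here is a minimal sketch of that $$$O(n + q)$$$ solution (my own code; the exact input format is an assumption, not from the problem):

```cpp
// Maintain C_n in a single variable; type 2 queries only touch it once.
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, q;
    cin >> n >> q;
    long long sum = 0;                          // holds C_n
    for (int i = 1; i <= n; ++i) {
        long long a; cin >> a;
        sum += (long long)(n - i + 1) * a;      // contribution multiplier of A_i to C_n
    }
    while (q--) {
        int type; cin >> type;
        if (type == 1) {                        // print C_n
            cout << sum << '\n';
        } else {                                // add x to A_k
            long long k, x; cin >> k >> x;
            sum += x * (n - k + 1);
        }
    }
}
```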

1.2 The not-so-simple problem

The problem is pretty much the same, but for type 1 queries, we need to answer $$$C_i$$$ for an arbitrary index $$$i$$$ instead of the fixed index $$$n$$$. This means that the contribution multiplier of an element of $$$A$$$ is no longer fixed.

And the key is to play with the summations again. But this time, since the difficulty stems from the index $$$i$$$ of the queried $$$C_i$$$, let's try to separate it from the index $$$j$$$ of $$$A_j$$$:

$$$ C_i = \sum_{j = 1}^{i}{(i - j + 1) \cdot A_j} = (i + 1) \cdot \sum_{j = 1}^{i}{A_j} - \sum_{j = 1}^{i}{j \cdot A_j} $$$

Putting the modification aside, all we need to do is maintain the prefix sum $$$B_i = \sum_{j = 1}^{i}{A_j}$$$ and another prefix sum $$$B'_i = \sum_{j = 1}^{i}{j \cdot A_j}$$$, and then $$$C_i$$$ will be $$$(i + 1) \cdot B_i - B'_i$$$.

What about modifications? We simply maintain $$$B$$$ and $$$B'$$$ on two Binary Indexed Trees (or a single combined one, which is probably faster), and a modification becomes two point-add operations.

Complexity: $$$O(n + q \log n)$$$ time, $$$O(n)$$$ space.

Note: It's possible to initialize a BIT in linear time. Read this blog for more information.

The code is very simple, so I would like to be lazy here and omit it.
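
For completeness, here is a sketch of what it could look like (my own code, untested; the input format is an assumption). Two Fenwick trees hold $$$B$$$ and $$$B'$$$, and a query prints $$$(i + 1) \cdot B_i - B'_i$$$ as derived above:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

struct BIT {
    int n; vector<ll> t;
    BIT(int n) : n(n), t(n + 1, 0) {}
    void add(int i, ll x) { for (; i <= n; i += i & -i) t[i] += x; }
    ll sum(int i) { ll s = 0; for (; i > 0; i -= i & -i) s += t[i]; return s; }
};

int main() {
    int n, q;
    cin >> n >> q;
    BIT b(n), bp(n);                       // b maintains B, bp maintains B'
    for (int i = 1; i <= n; ++i) {
        ll a; cin >> a;
        b.add(i, a);
        bp.add(i, (ll)i * a);
    }
    while (q--) {
        int type; cin >> type;
        if (type == 1) {                   // print C_i for a given i
            int i; cin >> i;
            cout << (i + 1) * b.sum(i) - bp.sum(i) << '\n';
        } else {                           // add x to A_k
            int k; ll x; cin >> k >> x;
            b.add(k, x);
            bp.add(k, (ll)k * x);
        }
    }
}
```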


2. Historic sums & maintaining them on data structures.

The problem goes like this:

Given two identical arrays $$$D$$$ and $$$E$$$ of length $$$n$$$. There are two types of queries:
1. Add $$$x$$$ to $$$D[l, r]$$$. After that, add $$$D_i$$$ to $$$E_i$$$ for all $$$i \in [1, n]$$$.
2. Print the sum of $$$E[l, r]$$$.

It's a bit hard, so let's solve an easier version where $$$n = 1$$$ first. Hint: It's almost the same as the not-so-simple problem.

Solution

There's another important property here: if we modify $$$A_p$$$, we will never ask for $$$C_q$$$ where $$$p > q$$$. So all the information we need can be maintained in a single long long.

Now let's move on to the original problem with larger $$$n$$$. We just need to use a segment tree to maintain $$$B_i$$$ and $$$B'_i$$$ for every cell of $$$E$$$, and each modification becomes two ordinary range-add updates.
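
Here is a rough, untested sketch of that approach (my own code, with some assumptions not spelled out above: modifications come as "1 l r x", sum queries as "2 l r", and the initial array is folded into $$$B$$$ so that each cell satisfies $$$E_i = (T + 1) \cdot B_i - B'_i$$$, where $$$T$$$ counts the type-1 queries so far and $$$B_i$$$, $$$B'_i$$$ accumulate $$$x$$$ and $$$t \cdot x$$$ over the range adds covering $$$i$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Lazy segment tree maintaining range sums of B and B' simultaneously.
struct SegTree {
    int n;
    vector<ll> s, sp, lzs, lzsp;   // segment sums of B and B', plus pending add tags
    SegTree(int n) : n(n), s(4 * n), sp(4 * n), lzs(4 * n), lzsp(4 * n) {}
    void apply(int v, int len, ll xs, ll xsp) {
        s[v] += xs * len; sp[v] += xsp * len;
        lzs[v] += xs; lzsp[v] += xsp;
    }
    void push(int v, int lenL, int lenR) {
        if (lzs[v] || lzsp[v]) {
            apply(2 * v, lenL, lzs[v], lzsp[v]);
            apply(2 * v + 1, lenR, lzs[v], lzsp[v]);
            lzs[v] = lzsp[v] = 0;
        }
    }
    void update(int v, int l, int r, int ql, int qr, ll xs, ll xsp) {
        if (qr < l || r < ql) return;
        if (ql <= l && r <= qr) { apply(v, r - l + 1, xs, xsp); return; }
        int m = (l + r) / 2;
        push(v, m - l + 1, r - m);
        update(2 * v, l, m, ql, qr, xs, xsp);
        update(2 * v + 1, m + 1, r, ql, qr, xs, xsp);
        s[v] = s[2 * v] + s[2 * v + 1];
        sp[v] = sp[2 * v] + sp[2 * v + 1];
    }
    pair<ll, ll> query(int v, int l, int r, int ql, int qr) {  // (sum B, sum B')
        if (qr < l || r < ql) return {0, 0};
        if (ql <= l && r <= qr) return {s[v], sp[v]};
        int m = (l + r) / 2;
        push(v, m - l + 1, r - m);
        auto a = query(2 * v, l, m, ql, qr);
        auto b = query(2 * v + 1, m + 1, r, ql, qr);
        return {a.first + b.first, a.second + b.second};
    }
};

int main() {
    int n, q;
    cin >> n >> q;
    SegTree st(n);
    for (int i = 1; i <= n; ++i) {          // D and E start identical; seed B with D
        ll d; cin >> d;
        st.update(1, 1, n, i, i, d, 0);
    }
    ll T = 0;                               // number of type-1 queries so far
    while (q--) {
        int type, l, r;
        cin >> type >> l >> r;
        if (type == 1) {
            ll x; cin >> x;
            ++T;
            st.update(1, 1, n, l, r, x, T * x);
        } else {
            auto [S, Sp] = st.query(1, 1, n, l, r);
            cout << (T + 1) * S - Sp << '\n';   // sum of E[l, r]
        }
    }
}
```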

Another method of solving this problem is to maintain a tag of "how many modifications have happened since the addition tag was last pushed down." It's more complicated but can also be used in more situations. I'm not going to explain it here because it's irrelevant to the main idea of this blog; if you want to learn about it, please read smax's blog.

For more sophisticated problems about historic sum, refer to jiry_2's blog.


3. Range Add Range Sum BIT

First, let's convert the problem into its suffix-add, prefix-sum form:

Given an array $$$F$$$ of length $$$n$$$:
1. Add(p, x): Add $$$x$$$ to $$$F_i$$$ for all $$$i \in [p, n]$$$.
2. Sum(p): Print $$$\sum_{i = 1}^{p}{F_i}$$$.

Define an array $$$G$$$ where $$$G_i = \sum_{j = 1}^{i}{F_j}$$$. Each query of type 1 contributes $$$(i - p + 1) \cdot x$$$ to $$$G_i$$$ for every $$$i \ge p$$$. Doesn't it look similar to the not-so-simple problem?

And voilà! We can do range add range sum with a BIT! There's not much to explain, so here's the code.

Code
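
The original code block isn't reproduced here, so here is a rough sketch of how it might look (my own, untested; the query format "1 l r x" / "2 l r" is an assumption). An add at position $$$p$$$ puts $$$x$$$ into one BIT and $$$p \cdot x$$$ into another, and a prefix sum at $$$i$$$ returns $$$(i + 1) \cdot \sum x - \sum p \cdot x$$$ over the adds with $$$p \le i$$$:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

struct BIT {
    int n; vector<ll> t;
    BIT(int n) : n(n), t(n + 2, 0) {}
    void add(int i, ll x) { for (; i <= n; i += i & -i) t[i] += x; }
    ll sum(int i) { ll s = 0; for (; i > 0; i -= i & -i) s += t[i]; return s; }
};

struct RangeBIT {
    int n; BIT b1, b2;                    // b1 accumulates x, b2 accumulates p * x
    RangeBIT(int n) : n(n), b1(n), b2(n) {}
    void suffixAdd(int p, ll x) {         // F_i += x for all i >= p
        b1.add(p, x);
        b2.add(p, (ll)p * x);
    }
    ll prefixSum(int i) {                 // G_i = F_1 + ... + F_i
        return (ll)(i + 1) * b1.sum(i) - b2.sum(i);
    }
    void rangeAdd(int l, int r, ll x) {
        suffixAdd(l, x);
        if (r + 1 <= n) suffixAdd(r + 1, -x);   // cancel the add past r
    }
    ll rangeSum(int l, int r) { return prefixSum(r) - prefixSum(l - 1); }
};

int main() {
    int n, q;
    cin >> n >> q;
    RangeBIT f(n);
    while (q--) {
        int type, l, r;
        cin >> type >> l >> r;
        if (type == 1) { ll x; cin >> x; f.rangeAdd(l, r, x); }
        else cout << f.rangeSum(l, r) << '\n';
    }
}
```

A range add is just two suffix adds and a range sum is a difference of two prefix sums, so both operations run in $$$O(\log n)$$$.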

And that's it for this tutorial!

Special thanks to 8e7 for proofreading.
