Hello, Codeforces!
We're going to host a new contest at csacademy.com. Round #56 will take place on Wednesday, 08/November/2017 15:05 (UTC). This contest will be a combined Div1 + Div2 round, with 7 tasks of varying difficulty to be solved in 2 hours.
We are glad to have I_Love_Tina as a problem author.
Contest format:
- You will have to solve 7 tasks in 2 hours.
- There will be full feedback throughout the entire contest.
- Tasks will not have partial scoring, so you need to pass all test cases for a solution to count (ACM-ICPC-style).
- Tasks will have dynamic scores. Depending on the number of users who solve a problem, its score will vary between 100 and 1000.
- Besides the score, each user will also get a penalty that will be used as a tiebreaker.
Prizes
We're going to award the following prizes:
- First place: $100
- Second place: $50
About the penalty system:
- Computed using the following formula: the minute of the last accepted solution plus a penalty for each solved task. The penalty for a solved task is equal to log2(no_of_submissions) * 5 (see the worked example after this list).
- Solutions that don't compile or don't pass the example test cases are ignored.
- Once you solve a task you can still resubmit, but all subsequent solutions will be ignored for both the score and the penalty.
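For illustration, a hypothetical computation under this formula (the numbers are made up, not from any actual contest): a contestant whose last accepted solution came at minute 90 and who solved two tasks, one in 1 submission and one in 4 submissions, would get a penalty of 90 + log2(1) * 5 + log2(4) * 5 = 90 + 0 + 10 = 100.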
If you find any bugs please email us at [email protected]
Don't forget to like us on Facebook, VK and follow us on Twitter.
Later Edit:
Congratulations to the winners:
Also, the editorial has been published. Hope you enjoyed the contest!
Just a reminder, the contest starts in 3 hours and 30 minutes.
I_Love_Tina is a problemsetter; I wonder whether we'll get some problems involving Rudy1444 :D
I'm sorry but I have to disappoint you :( .
In the Or Problem, I think the condition dp[i][j−1] ≤ dp[i][j] is not enough.
For readability, let's define f(j) = dp[i][j].
What we should prove to apply this technique is f(j + 1) - f(j) ≤ f(j) - f(j - 1).
For example, suppose that f(1) = 1, f(2) = 2, f(3) = 4. Then there is no penalty C for which j = 2 becomes optimal.
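To spell this out (assuming the standard penalized objective f(j) − C·j used in the parametric search, where C is the per-piece penalty): for j = 2 to be optimal we would need both

f(2) − 2C ≥ f(1) − C, i.e. 2 − 2C ≥ 1 − C, i.e. C ≤ 1,
f(2) − 2C ≥ f(3) − 3C, i.e. 2 − 2C ≥ 4 − 3C, i.e. C ≥ 2,

and no C satisfies both, so the search can never make j = 2 the optimum.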
+1, we need convexity instead of monotonicity. Otherwise this technique would be applicable to almost every dp problem, which is not the case.
This is correct. I'm sorry, I was reviewing this editorial and for some reason it seemed completely fine to me, even though I am aware that the stronger statement is necessary.
While it seems intuitively correct, I fear that formal proof may be beyond my ability. I don't know if I_Love_Tina has it or not.
There is a slightly harder version of problem E (Disjoint Tree Path) here:
http://coj.uci.cu/24h/problem.xhtml?pid=3922
OK, regarding parametric search, I think I can settle this unproven mess once and for all.
Let's consider the following problem: we have a DAG with an edge of cost f(i, j) from vertex i to vertex j (note that in most applications f will not be defined by listing every value separately; it can be the sum of the numbers from an interval, their bitwise OR, or anything else). What is the cheapest path from 1 to n consisting of exactly k edges?
Let's denote by g(k) that cheapest cost.
Powerful Lemma: If f fulfills the condition that for every a ≤ b ≤ c ≤ d we have f(a, d) + f(b, c) ≥ f(a, c) + f(b, d), then g is convex (and, analogously, if the inequality is reversed, then g is concave).
Such a property of f will be called the "convex Monge property". This condition is very easily checked for many problems involving parametric search / Lagrangian optimization. If f(i, j) is the OR of a[i], ..., a[j - 1], then f clearly has the concave Monge property (every bit set in f(a, d) is already set in f(a, c) or in f(b, d), while every bit set in f(b, c) is set in both), hence it follows that parametric search can be used. If I remember well, this condition is also very easily checked in the infamous IOI "Aliens" problem.
The proof of that lemma can be found here: http://www.cs.ust.hk/mjg_lib/bibs/DPSu/DPSu.Files/sdarticle_204.pdf The result we are interested in is Proposition 7. If all we want is the proof of the Powerful Lemma, then it is enough to read up to Proposition 7 inclusive. Thanks to gawry for providing me with this article.
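Since this keeps coming up, here is a minimal, self-contained sketch of how the parametric search would look on the OR instance, assuming the Or Problem asks to split an array into exactly k contiguous segments maximizing the sum of the segments' ORs (my reading of the thread; the quadratic inner dp and all names are illustrative, not the model solution):

```cpp
#include <bits/stdc++.h>
using namespace std;

// For a fixed penalty lam, maximize (sum of segment ORs) - lam * (#segments)
// over all ways to cut a into contiguous segments, tie-breaking toward MORE
// segments. Returns {best penalized value, segment count of that optimum}.
// The inner loop is the plain O(n^2) dp; it is enough to illustrate the idea.
pair<long long, int> best(const vector<long long>& a, long long lam) {
    int n = a.size();
    vector<long long> dp(n + 1, LLONG_MIN);
    vector<int> cnt(n + 1, 0);
    dp[0] = 0;
    for (int i = 1; i <= n; i++) {
        long long orv = 0;
        for (int j = i; j >= 1; j--) {            // last segment is a[j-1..i-1]
            orv |= a[j - 1];
            long long cand = dp[j - 1] + orv - lam;
            if (cand > dp[i] || (cand == dp[i] && cnt[j - 1] + 1 > cnt[i])) {
                dp[i] = cand;
                cnt[i] = cnt[j - 1] + 1;
            }
        }
    }
    return {dp[n], cnt[n]};
}

// g(k) is concave by the lemma, so the optimal segment count is
// non-increasing in lam. Find the largest lam whose optimum still uses
// at least k segments; at that lam, using exactly k segments is also
// optimal, and we recover g(k) by adding the penalty back.
long long solve(const vector<long long>& a, int k) {
    long long lo = 0, hi = 1LL << 40;     // hi must exceed every slope g(j+1)-g(j)
    while (lo < hi) {
        long long mid = lo + (hi - lo + 1) / 2;
        if (best(a, mid).second >= k) lo = mid;
        else hi = mid - 1;
    }
    return best(a, lo).first + lo * k;
}

int main() {
    vector<long long> a = {5, 1, 4, 2, 3};
    cout << solve(a, 2) << "\n";          // 12: [5] | [1,4,2,3] -> 5 + 7
}
```

The tie-break toward more segments is what makes plateaus of g harmless: at the chosen lam, k lies inside the interval of optimal counts, so dp[n] + lam * k equals g(k) exactly.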
Amazing, good job finding this article! When applying the Divide & Conquer optimization to minimization tasks, we always prove something along the lines of (d[a] + f(a, c) ≥ d[b] + f(b, c)) => (d[a] + f(a, d) ≥ d[b] + f(b, d)), but for this optimization there never seemed to be any easy way to prove applicability. Now there is.
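If it helps anyone reading along, here is a generic sketch of that Divide & Conquer optimization for minimization layers; f below is a made-up placeholder cost (nothing here is specific to this round's problems), and the recursion is valid exactly when the implication above holds, i.e. when the optimal split point is monotone:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Placeholder segment cost; in a real task this would be the cost of
// grouping elements (j, i] together, and it must satisfy the implication
// above (which the quadrangle inequality guarantees).
long long f(int j, int i) { return (long long)(i - j) * (i - j); }

// Fill ndp[l..r] from the previous layer dp, using that the optimal split
// point opt(i) is non-decreasing in i. Each layer runs in O(n log n).
void rec(int l, int r, int optl, int optr,
         const vector<long long>& dp, vector<long long>& ndp) {
    if (l > r) return;
    int mid = (l + r) / 2;
    pair<long long, int> best = {LLONG_MAX, -1};
    for (int j = optl; j <= min(mid - 1, optr); j++)
        best = min(best, {dp[j] + f(j, mid), j});
    ndp[mid] = best.first;
    rec(l, mid - 1, optl, best.second, dp, ndp);   // opt can only be smaller on the left
    rec(mid + 1, r, best.second, optr, dp, ndp);   // and only larger on the right
}

int main() {
    int n = 8, k = 3;                              // toy sizes
    const long long INF = LLONG_MAX / 4;           // "unreachable", safe against overflow
    vector<long long> dp(n + 1, INF), ndp(n + 1, INF);
    dp[0] = 0;                                     // 0 elements, 0 groups
    for (int layer = 0; layer < k; layer++) {
        fill(ndp.begin(), ndp.end(), INF);
        rec(1, n, 0, n - 1, dp, ndp);
        swap(dp, ndp);
    }
    cout << dp[n] << "\n";                         // 22 = 3^2 + 3^2 + 2^2
}
```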
Wouldn't expect any less from Swistakk :P
So now we see that if the cost function satisfies the quadrangle inequality, then we can do a parametric search. But the quadrangle inequality is also required for Divide and Conquer. So most problems which have a Divide and Conquer solution can be solved using parametric search at a better complexity. Am I right?
Can somebody help me understand the Find Edge List editorial? I am unable to understand how the input edges were converted into graph form for topological sorting.