Is it allowed now to discuss the APIO? According to the schedule, the official competition is done by now... but also according to the schedule, the open contest is in about a week.
Edit: It seems like discussions should be kept until after the open contest ends. So, don't scroll down if you want to avoid any kind of spoiler to the open contest (and yes, let's stop discussing until then).
Edit2: Should be good now.
I was told by a person responsible for the APIO 2019 Tokyo venue that "we should not discuss openly about results and problems of APIO 2019, until 9:00 AM (UTC+9) of May 20th."
So I think we can discuss (at least about the results), but I'm not confident.
The Seoul site result is almost a meme. The best score is 253 (GyojunYoun), and 4th–13th are all tied at 203 points. I expect 0 gold / 13 silver / 0 bronze.
Seems like 203 is a pretty common result. Here we had two people with 203, and I've also heard of another 8 people with 203.
Yeah, it's common, very very common. :)
Congratulations on the perfect score, by the way!
How did you know?
I can see unofficial results :)
So... you know the unofficial results, and you said that both the 253-point scorer and the 203-point scorers in Seoul are expected to get silver medals. So that means I got two pieces of information, right?
Maybe it's even wrong, since it's unofficial anyway and I didn't check it rigorously. I hope Korea can get a gold.
What is the distribution of 203 points, by the way?
Result of Vietnam: user202729_ scored 243, minhtung04042001 scored 233, and 5 others got 203.
I guess all people with 203 will get silver medals. (Actually no medals, but certificates saying that they are silver medalists.)
Okay, so here is the known result of Japan:
1st rank: Masataka Yoneda (E869120), 100+100+60=260 points
2nd rank: Hirotaka Yoneda (square1001), 100+27+100=227 points
3rd rank: Tomohito Hosii, 100+43+80=223 points
4th-8th (or probably 9th or 10th or more): 100+43+60=203 points
I know one candidate who did not share his score, and it is not unlikely that it exceeds 203 points; but even counting this person, only four people exceed 203 points, and I personally expect that this person also got 203 points.
My first guess at the medal borderline was 243/203/something, but since there could be more medalists than usual because of the ties, it is possible that even I may get a gold medal :)
P.S. I solved two problems (A and C) in 66 minutes :)
I found this solution for problem C but I couldn't implement it until the end of the contest.
Let's assume we are given a common $$$[L,R]$$$ for all queries of the second kind. We have to find the time ranges during which the subarray $$$[L,R]$$$ consists entirely of ones. Let $$$T_1$$$ be the first time at which $$$[L,R]$$$ consists of ones, and $$$E_1$$$ the first time after $$$T_1$$$ at which $$$[L,R]$$$ no longer consists of just ones. Define $$$(T_2, T_3, \ldots)$$$ and $$$(E_2, E_3, \ldots)$$$ similarly. Then the answer is $$$(E_1 - T_1) + (E_2 - T_2) + (E_3 - T_3) + \ldots$$$
Now we have to find a more general solution.
When some position inside the range changes from 1 to 0, it means this time should be the time $$$E$$$ for some ranges. You can easily find which ranges, and update the answer for those ranges (this can be done with a treap inside a segment tree).
Similarly, when some position changes from 0 to 1, it means this time should be the time $$$T$$$ for some ranges, and you can find and update those ranges the same way.
My solution was like that. Is there anything wrong that I couldn't see?
Looks correct. I got 100 with it.
Oh, thanks! At least my idea seems right. It made me happy :D
This thread will spoil the contest for participants of the APIO Open; it's better not to discuss, or to hide solutions and warn future participants not to read them.
Sure, thanks for the information.
Could you please update your post with this, to make sure everyone sees the answer and avoids discussions?
998kover 300
bthero 243
amanbol 203
Auto comment: topic has been updated by Noam527 (previous revision, new revision, compare).
The problems were pretty standard, almost trivial idea-wise. Quite sad, because APIO used to have high problem quality. (I wanted to submit something to the call for tasks, but was too busy :( )
Can anyone solve B in $$$O(m^{1+\epsilon})$$$ time for some small $$$\epsilon > 0$$$, possibly using very complicated data structures?
As far as I know, zero problems were received from the call for tasks :)
I'm not a problem setter, so this information may be outdated, totally wrong, or something else.
Afaik only B was from call for tasks.
The ideas were not difficult to figure out, and I also think the solutions relied heavily on constant factors... overall, I didn't really like the problems.
I also think the problems are pretty standard and boring. Maybe somehow they only received data structure problems.
I got all the problems accepted in less than 2 hours. I believe I spent <10 min on thinking, and ~20 min were spent on constant optimization. The time limits are pretty tight indeed.
Wow, I admire your skill in constant time optimizations!
Actually, I solved problems A and C in 66 minutes total, and I spent only another 9 minutes to come up with a solution to problem B. Then I rested for 25 minutes. For another 100 minutes, I thought about whether a solution faster than $$$O(n^{1.5} \times \alpha(n))$$$ exists, because I thought that approach was risky to implement, but I failed. After that, implementing and debugging my code took 40 minutes, and I used the remaining 60 minutes entirely on constant optimization.
However, I only got 27 points on problem B. I'm really bad at constant optimization...
Is there anyone who spent an hour squeezing a segment-tree-based approach through on A? Funny that I did, and succeeded XD
Why did you need a segment tree on A? I think the common solution is finding period and sweepline.
I plotted (x mod T, y) in a plane and calculated the area of union of rectangles. After contest I found out I was stupid, as usual.
I tried, but after several hours I only got to 30 points :/
I ran into the same problem: solving A and C, spending loads of time on the O(n sqrt(n) alpha(n)) algorithm for B, and getting only 27 points eventually.
By the way, will the contestants' source codes be made public?
My score is 100 60 100...
I've implemented sqrt decomposition in O(Qsqrt(Q)alpha), but got 60 with TL :(
Maybe someone can help me find the exact reason why my code runs slowly? ^^
Here it is: https://pastebin.com/risLgsfs
I think std::set is a bad idea... I know iterating it is linear time, but still. I implemented everything on plain arrays, keeping them sorted by merging the original edge set with the updated edge set ($$$O(B\log B + M)$$$). I didn't see anyone who skipped this optimization and passed.
Also for me, optimal bucket size was 800.
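A minimal sketch of that merging trick, assuming the untouched edges are already kept sorted by decreasing weight (struct and function names are my own, not the commenter's code):

```cpp
#include <bits/stdc++.h>
using namespace std;

// "Arrays instead of std::set": the M untouched edges stay sorted by
// decreasing weight across the whole bucket, so we only sort the B touched
// edges and std::merge the two lists -> O(B log B + M) per bucket.
struct Edge { int u, v, w; };

vector<Edge> mergedOrder(const vector<Edge>& untouched, vector<Edge> touched) {
    auto byW = [](const Edge& x, const Edge& y) { return x.w > y.w; };
    sort(touched.begin(), touched.end(), byW);            // O(B log B)
    vector<Edge> out(untouched.size() + touched.size());
    // both inputs are sorted by decreasing weight, so the output is too: O(M + B)
    merge(untouched.begin(), untouched.end(),
          touched.begin(), touched.end(), out.begin(), byW);
    return out;
}
```

The point is that the expensive sort touches only the $$$B$$$ modified edges; everything else is a cache-friendly linear pass, which is exactly where std::set loses.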
Thank you, that made it really better! OK, now it works faster, but only 73...
My code. It's quite ugly, sorry about that.
I think you can change your 'microdsu' to a dfs.
DFS and BFS don't work well because of the vectors that I need to use to build the graph.
You can use linked list to store edges. That worked fine for me.
Also, you can just allocate one array of size n*sqrt(n) and use it in place of vectors to store the edges.
Thank you, now it works faster, but I can't get 100 :( (check my reply to the koo_saga comment)
Where can I see the standings?
https://apio.innopolis.university/results/apio/2019/org-apio-2019-main.html
I'm too old to take part in APIO now but I'm really curious as to what's the solution for B... no one I know managed to solve it. Can someone who has solved it give me a rough outline of the solution? Thanks :)
It is a standard square-root decomposition problem.
Brief outline: divide the queries into buckets of $$$\sqrt{Q}$$$. Note that within each bucket, at most $$$\sqrt{Q}$$$ edges have their weights changed. Sort the queries in the bucket by decreasing weight, and maintain a DSU while adding the edges that are not modified within the bucket one by one, also in decreasing weight order. For the edges whose weight is modified in the bucket, we iterate through all of them and add them to the DSU if they are usable for the current query, then remove them after we are done with the query (so $$$\sqrt{Q}$$$ additions and removals per query). This can be implemented with a DSU with rollbacks.
"DSU with rollbacks" — How is this implemented, with still $$$O(\alpha(n))$$$? (Providing some vague idea is fine)
Use dfs instead of dsu here.
Create a stack recording what you merged; then for every "undo" operation you can just look at the top of the stack, undo the changes (by reassigning values in the DSU arrays), and pop from the stack. A find is at most $$$O(\log n)$$$ since you still merge small to large (but don't do path compression).
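A minimal sketch of such a rollback DSU, under the assumptions above (union by size, no path compression, one history entry per unite); the interface names are mine:

```cpp
#include <bits/stdc++.h>
using namespace std;

// DSU with rollback: find is O(log n) because we attach the smaller tree
// under the larger and never path-compress, so every parent change is
// recorded once and can be undone in O(1).
struct RollbackDSU {
    vector<int> par, sz;
    vector<pair<int,int>> hist;          // (child root, old size of parent root)
    RollbackDSU(int n) : par(n), sz(n, 1) { iota(par.begin(), par.end(), 0); }
    int find(int x) { while (par[x] != x) x = par[x]; return x; }
    bool unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) { hist.push_back({-1, -1}); return false; }  // record a no-op too
        if (sz[a] < sz[b]) swap(a, b);   // attach smaller tree under larger
        hist.push_back({b, sz[a]});
        par[b] = a; sz[a] += sz[b];
        return true;
    }
    void rollback() {                    // undo the most recent unite
        auto [b, oldSz] = hist.back(); hist.pop_back();
        if (b == -1) return;             // that unite merged nothing
        sz[par[b]] = oldSz; par[b] = b;
    }
};
```

Processing a query in the outline above then means: unite the up-to-$$$\sqrt{Q}$$$ qualifying modified edges, read off the answer, and call `rollback()` the same number of times.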
I understand — I guess this would not fit in the time constraint for this specific problem though.
My solution using this method passed, though with some slight optimizations (I kept getting 71 at the beginning).
You don't need "DSU with rollback". In each block, you can add the edges that cover the whole block from big to small using a DSU. Then, to answer each query, simply connect the extra edges relevant to this query and run a DFS. (When connecting these edges, you can regard each connected component as a single vertex with a weight.)
Using "DSU with rollback" brings in an extra log factor.
The same as TLE.
Ya I didn't realize this before TLE pointed it out, since that was the first thing that came to mind. Thanks.
When will they be available for practice?
How many points were needed for a gold medal?
There is a discussion regarding the (unofficial) result/cutoff in this thread.
Thanks!
https://apio2019.ru/results/official-contest/ shows 104 medalists out of 201 official contestants. Either I am not good at math, or the APIO 2019 committee has clearly violated the regulations they provided themselves.
I believe this is how the team leaders selected the cutoff:
Build temporary standings using only the top 6 people from each country, compute the medal cutoffs from those standings, then include all tied participants and award medals according to the cutoffs computed before. The final standings will therefore contain more than 6 medalists for some countries.
Hi! Any news about the test data? niyaznigmatul PavelKunyavskiy
Did you mean pashka? I'm not a judge here :)
Thanks for the correction :)
Sorry for delay
https://apio2019.ru/competition/olympiad-materials/
In Gym: 2019 Asia-Pacific Informatics Olympiad (APIO 19)
My code works on oj.uz here, but not on CF 146879237. Which of these is to be believed (i.e. which is closer to APIO runtime)?
You can solve all problems here: https://oj.uz/problems/source/389
In problem B, both the setter's and the checker's solutions got TLE on oj.uz (I downloaded them here):
setter: weight_restriction_nb.cpp
checker: sol_hanh.cpp
Also, their description files are quite funny.
I think 2nd solution's author is prof.PVH
For problem A, in this solution it says:
Let f(x) = {(x + x / b) % a, x % b} (with integer division). It's possible to prove that f(x) is periodic with smallest period ab / gcd(a, b + 1).
How do I prove this? Please note that I have no experience in finding periods of periodic functions; if you can direct me to some resources, that'll be great too. Thanks.
Can someone help us, please?
Regarding what?
For problem A, in this solution it says:
Let f(x) = {(x + x / b) % a, x % b}. It's possible to prove that f(x) is periodic with smallest period ab / gcd(a, b + 1).
How do I prove this?
I'll try to answer your question, if you haven't answered it by now.
Suppose we are examining times $$$t_0$$$ and $$$t_1$$$. We know that our tuples are:
$$$\left( \left( t_0 + \Bigl\lfloor \frac{t_0}{B} \Bigr\rfloor \right) \pmod{A}, t_0 \pmod{B} \right)$$$
$$$\left( \left( t_1 + \Bigl\lfloor \frac{t_1}{B} \Bigr\rfloor \right) \pmod{A}, t_1 \pmod{B} \right)$$$
We want to know when the two tuples are equal to each other. From the second components, $$$t_0 \equiv t_1 \pmod{B} \Longrightarrow t_0 = t_1 + KB$$$ for some $$$K \in \mathbb{Z}$$$. So now we want to know when
$$$t_0 + \Bigl\lfloor \frac{t_0}{B} \Bigr\rfloor \equiv t_1 + \Bigl\lfloor \frac{t_1}{B} \Bigr\rfloor \pmod{A}.$$$
Replacing $$$t_0 = t_1 + KB$$$, we have that:
$$$t_1 + KB + \Bigl\lfloor \frac{t_1 + KB}{B} \Bigr\rfloor \equiv t_1 + \Bigl\lfloor \frac{t_1}{B} \Bigr\rfloor \pmod{A}.$$$
A nice property of the floor function is that $$$\Bigl\lfloor \frac{x + KB}{B} \Bigr\rfloor = \Bigl\lfloor \frac{x}{B} \Bigr\rfloor + K$$$ for $$$K \in \mathbb{Z}$$$. Using this, we see that:
$$$t_1 + KB + \Bigl\lfloor \frac{t_1}{B} \Bigr\rfloor + K \equiv t_1 + \Bigl\lfloor \frac{t_1}{B} \Bigr\rfloor \pmod{A}.$$$
Using some grade-school arithmetic, we have that:
$$$KB + K \equiv 0 \pmod{A}.$$$
From here we see that $$$K$$$ has period $$$\frac{A}{\text{gcd} (A, B + 1)}$$$. So $$$KB$$$ has period $$$\frac{A \cdot B}{\text{gcd} (A, B + 1)}$$$.
How to solve problem C?