A — ok.
B — easy. ok.
C — most interesting
D — O(N^2) — precision issue, AC with biginteger / Slower solution — TL issue
E — I don't like this
G — straightforward voronoi
I — input format..
FGHI — TL issue (especially I)
J — see model solution. really?
K — easy. ok.
The ICPC WF should keep the level of the problems high. But this year seems like a big fail...
Too many problems have issues not related to the core solution.
I think D would have been a much better problem if they had asked for the answer modulo some prime, with higher constraints on N. It can even be sped up using FFT.
Seriously, why don't setters do that? Storing nCr in "long double" precision sounds like, uh...
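For reference, a minimal sketch of the modular alternative being suggested — computing nCr modulo a prime instead of storing it in a long double. The modulus 10^9+7 and the factorial-table approach are my illustrative choices, not anything from the actual problem:

```cpp
#include <cstdint>
#include <vector>

const int64_t MOD = 1000000007;  // illustrative prime modulus

// Fast modular exponentiation: b^e mod MOD.
int64_t mpow(int64_t b, int64_t e) {
    int64_t r = 1;
    b %= MOD;
    for (; e; e >>= 1, b = b * b % MOD)
        if (e & 1) r = r * b % MOD;
    return r;
}

// C(n, r) mod MOD via Fermat's little theorem (MOD must be prime).
// Rebuilds the factorial table per call; precompute it once in real code.
int64_t nCr(int n, int r) {
    if (r < 0 || r > n) return 0;
    std::vector<int64_t> fact(n + 1, 1);
    for (int i = 1; i <= n; ++i) fact[i] = fact[i - 1] * i % MOD;
    return fact[n] * mpow(fact[r] * fact[n - r] % MOD, MOD - 2) % MOD;
}
```

Exact integer arithmetic throughout, so there is no precision to lose — which is the whole point of the suggestion.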
A general comment about problem setting.
In my view, a problem loses quite a bit of its motivation when the required answer is meaningless in the problem's setting. And calculating a fraction of the form P·Q^{-1} mod M is just that: a meaningless number, unless Q is 1.
Sure, there are problems expressed in mathematical terms from the start. For them, any number crunching fits the setting. But when a problem has a kind of real-world legend, the legend part is ruined when the answer is meaningless in its terms.
If Alice and Bob can't use the fraction you just calculated, that means the problem setter either doesn't need them in the legend at all, or has to find a better presentation for the answer. Otherwise, the problem just logically falls apart.
I recognize that floating-point representation, or any other presentation of the answer, may well have its issues, but that is beside my point here.
What is the difference between a real-valued P/Q and P·Q^{-1} mod M in terms of "meaningless"-ness?
Both kind of ruin the legend, if there is one.
Still, P mod M is better for a contestant, since when the answer is less than M, it is actually visible as-is.
Why so obsessed with legends? Yes, it's cool when the story makes sense, but I think it shouldn't really diminish the problem-solving part.
True, if the legend gets in the way of the problem, either the legend can be dropped or the problem can be altered. It does not make sense to keep a legend if it contradicts the problem.
Generally, a problem legend is just a tool for the problemsetter.
A fitting legend can actually make the problem statement shorter, by exploiting intuitive properties of well-known objects which the legend is about.
A legend can be used to help the reader connect the solution or its core ideas to past and future problems.
A legend can be used to provide a real-world motivating example where the solution is useful, but of course only few problems can boast having such an example.
A legend can be dropped if not useful in a particular problem.
A legend can also be misused when added just for the sake of it.
I agree that it's often weird (legend-wise) to ask for the answer modulo M, but I do not agree that it's a reason not to ask for the answer modulo M (or a reason not to use legends).
A legend may still perform the functions you mentioned even if the answer is asked for in a weird form (e.g. as P·Q^{-1}).
PS: I don't know anything about WF problems
CP is basically number crunching. Statements don't give motivation or any meaning to it. Just admit it.
Fictional statements are why some people start doing CP. They attract newcomers. My personal favourite is this contest.
There are contestants and problem authors which regard statements as just an obstacle. They think the world would be better if all problems were just given formally, as it is easier and faster to comprehend, and does not leave any room for misinterpretation. An example is Sereja's problem statements.
There are also contestants and problem authors which like problem statements with a legend. They think it gives a richer experience in general if the problem has a legend. They find additional motivation in it, and like to remember the funny or otherwise amazing problem statements they saw. Some even recognize that a good legend may actually help understand the problem faster. An example is the neighboring comment.
What you say just tells us you belong to the former group, and does not make the latter one non-existent.
So even G had a TL issue? Then I wouldn't have been able to get a medal anyway :/
I don't understand the HUGE input on all problems, which often seems very unnecessary. Problem I was a terrible one — what was the point of N = 6000? The grid is way too big (it was practically N = 10000), and it was quite nasty to get the constant factor reasonable. Why wasn't it N = 2000 or N = 3000? Were they worried that an O(n^2.5) sqrt-decomposition might pass?
Many teams got TL on problem I just because of the huge input. By the time I realized the reason for the TL was the speed of input, the contest was nearly over. :(
What input method did you use in I so that you got TL? Simple getline with sync_with_stdio(false) worked just fine for us.
P.S. I completely agree with you on uselessly large constraints. The authors of the WF need fresh blood.
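For reference, a minimal sketch of the getline + sync_with_stdio(false) pattern mentioned above. The row-counting payload here is just a hypothetical stand-in, not problem I's actual format:

```cpp
#include <iostream>
#include <sstream>
#include <string>
#include <utility>

// Hypothetical payload: count rows and total characters of a grid read
// line by line.  The fast-input part is the setup from the comment:
//
//   std::ios::sync_with_stdio(false);  // detach C++ streams from C stdio
//   std::cin.tie(nullptr);             // don't flush cout before each read
//   ... then call count_grid(std::cin) ...
//
std::pair<long long, long long> count_grid(std::istream& in) {
    std::string line;
    long long rows = 0, chars = 0;
    while (std::getline(in, line)) {   // one grid row per call
        ++rows;
        chars += static_cast<long long>(line.size());
    }
    return {rows, chars};
}
```

With the sync disabled, getline on cin is typically within a small constant factor of fgets, which is enough for even 100+ MB inputs.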
I hardly ever use cin, so I don't know how to write sync_with_stdio(false). And it's amazing that there is no gets during the contest.
gets is deprecated and is no longer even part of the C language standard. Use fgets (which is the same, except much better in terms of security).
No, I'm not talking about the input — I used fgets, which should be fast enough.
I'm just complaining that the author gave 6s to a problem which may need up to 36M–72M segment tree calls. Reducing the number of calls, or replacing it with a Fenwick tree, is definitely boring. (Indeed, I used Fenwick trees and got TL.)
I don't think this is "unfortunate slow code" either — just a plain wrong time limit. There are up to 9M "x"s, we have to look at both the upper and lower triangles, and we need some messy precomputation DPs too, right? And they gave me 6s for that?
I didn't know about the problem with many queries to a segment tree/Fenwick. From that point of view the problem gets worse, of course.
See the code for problem I — it is even a zkw segtree instead of a normal segtree.
Funny thing. At the beginning of the contest I came up with a solution, and I thought it was the intended one. But we were getting a lot of TLs, which were actually caused by my stupid bugs and not by the slowness of the solution idea. In that situation I did not come up with a better solution, and I kept trying to optimize my code. In the end, I fixed the bug and got OK (+16), but that's not the point. It's interesting (to me) that after reading your comment, I came up with a solution in just a few minutes — just from knowing that such a solution exists, and from the absence of a stressful situation.
And yet, if the authors tried to cut off some slow solutions, they did not do it very well =)
When I read problem I, the first thing I thought was that they set those constraints to disallow solutions without a lot of squeezing. Of course 10^7 segment/Fenwick tree queries will get TL. And getting rid of the log factor is not hard at all. If you used a suboptimal solution and got TL, it is your fault and not the jury's.
Oh, so you mean there is an O(n^2) solution? I'll be glad to hear that, as I was pretty sure that would be really hard. I never saw any team with an O(n^2) solution, including the one from SnapDragon.
And yes, if such a solution exists and it's the model solution, then it's my fault.
Basically, the problem can be reduced to the following: we need to maintain a set of values dj such that all dj ≤ i, and support these queries: find the number of elements in the set; delete all dj = i and decrease i by one; delete all elements in the set. All queries need to run in O(1). To do that, you store an array of length i where the k-th element stores the number of dj = k. The second query is just a pop_back(). To support the third query, you also store all dj explicitly. Here's the code.
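In case the linked code is unavailable, here is a rough sketch of the structure just described. The struct name and the explicit insert operation are my own additions, and this is not the linked code:

```cpp
#include <vector>

// Values d_j, all bounded by the current ceiling i = cnt.size() - 1.
struct Bucket {
    std::vector<int> cnt;   // cnt[k] = how many stored values equal k
    std::vector<int> vals;  // every stored value, kept for fast clearing
    long long total = 0;    // current number of elements in the set

    explicit Bucket(int ceiling) : cnt(ceiling + 1, 0) {}

    void insert(int d) {            // requires d <= current ceiling
        ++cnt[d];
        vals.push_back(d);
        ++total;
    }
    long long size() const { return total; }    // query 1: O(1)
    void drop_ceiling() {                       // query 2: delete all
        total -= cnt.back();                    // d_j == i, then i -= 1;
        cnt.pop_back();                         // just a pop_back()
    }
    void clear() {                  // query 3: amortized O(1) per element,
        for (int d : vals)          // since each inserted value is wiped once
            if (d < static_cast<int>(cnt.size())) cnt[d] = 0;
        vals.clear();
        total = 0;
    }
};
```

Every operation is O(1) amortized: the clear walks only over values that were actually inserted, and each inserted value is wiped at most once.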
Ok, so it sounds easy, except for everything in that "Basically" phrase. I don't really get it yet, but it seems quite legit. I'll try to think about it myself. Thanks!
Our team had a really hard time getting AC on D and E. Actually, we failed on D. It's a different thing from finding the solution itself.
In E, we used ternary search to get the answer to a physics subproblem. This led to TLE or WA, depending on the number of iterations. We had to work out more of the physics by hand to get AC. Even though some physics can add interesting flavor to a task, this went way too far. The task tests your physics ability a lot more than your solving/implementing skills.
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
My sad story
D was a real pain for me in the contest. I found a wonderful observation for this problem (at least, wonderful to me) after 20–30 minutes of thinking, and I was very hyped about it. It was an O(N^2) solution, so I grabbed the keyboard and implemented it. But once implemented, I found it had lots of precision issues. I spent 2 hours trying to fix it, and failed. After the contest, I heard that almost all teams got AC using the BigInteger library in Java. That was quite painful to hear.
My sad story ends
Actually, I think this is a little better than E, because a precision issue is at least a computer-engineering issue (as opposed to physics). But still, I think it was not a good idea to mix that wonderful observation problem with floating-point precision issues.
Do you really think problem E is about physics? In any case, you need a way to get your height as a function of the horizontal distance travelled (to check if you collide with buildings, and even for ternary search). That's probably not too hard physics — it takes a minute to derive even if you don't know it.
About ternary search: well, if you plug the height you need into the formula you already have, you get a quadratic equation in the tangent of the throw angle. Is that physics?
I don't think anyone would complain if he got TL after intersecting lines via binary search over one point's flight time. What is the difference? You have an extra log (with a bad constant) in your solution, you get TL — seems quite OK to me.
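For concreteness, here is the standard projectile-trajectory formula being referred to (textbook kinematics, not quoted from the problem statement). Substituting a required height h and writing t = tan θ does indeed give a quadratic in t:

```latex
y(x) = x\tan\theta \;-\; \frac{g x^2}{2 v^2}\left(1 + \tan^2\theta\right),
\qquad
y(x) = h
\;\Longrightarrow\;
\frac{g x^2}{2 v^2}\, t^2 \;-\; x\, t \;+\; \left(h + \frac{g x^2}{2 v^2}\right) = 0,
\quad t = \tan\theta .
```

Here v is the launch speed, g the gravitational acceleration, and x the horizontal distance; the identity 1/cos²θ = 1 + tan²θ is what turns the trajectory equation into a quadratic in tan θ, so the best angle falls out of the quadratic formula with no ternary search.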
This is not the best problem of my life, but a quite OK (and, as we can see, non-standard for most teams) geometry problem (with a quite useless BFS after it, but why not).
Well, at least it was a harder physics problem than what I got on my college physics midterm about gravity/dynamics. Can we call it geometry? I think this problem is in the area of dynamics rather than geometry. If we have to derive the whole formula by hand and put it into code, what's the difference between a midterm physics problem and this task?
Seems a bit strange to me. In the 9th grade (out of 11 total in Russia), the problem "how long will an object thrown with this speed and this angle fly until it reaches the same height it was thrown from" is one of the first problems you solve about gravity. And this one has one extra parameter, which doesn't change anything.
Anyway, I agree this problem is not about computer science. But to me it is not about it in exactly the same way as a lot of geometry problems. I really don't see any difference between this and circle intersection, for example. Both are solved by writing something on paper, solving a quadratic equation, and putting the formulas into the solution. And I don't understand why the latter is an OK subproblem and this one is not.
This is 9-th grade example?!
Parabolic motion is in the "Advanced Physics" (물리 II) course of the standard Korean high-school curriculum, which is an optional science subject. AFAIK that course is known to be very hard, so not many students take it.
(Now I'm at the point in my life where I'm explaining "물리 II" to foreign competitive programmers. Cool...)
I don't know a lot about E, and I knew about parabolic motion beforehand, so I won't talk long — but wouldn't it have been better if the authors had at least described parabolic motion in detail, if it's not standard knowledge?
The problem statement says "your horizontal speed is constant, your vertical speed decreases linearly, and the sum of the squares of their initial values should equal this". After that information, all that's assumed as standard knowledge is how to integrate a linear function and what sin/cos/tan are, which seems really standard.
I handled problem E on my team, and the physics in the problem was very hard for me :(
If every team had a machine that solves this subproblem — "return the best throwing angle for a given v0 (initial speed), Dx and Dy (target position)" — then it could have been solved by a lot more than 10 teams, couldn't it?
IMO, that is the reason why E is not a good problem.
Well, for me it was 10th grade.
Russian Physics textbook, beware of many ads
Same as IOI 17: good teams got a bad result.
Is ICPC a computer science competition, or a computer engineering competition? I want to know the purpose of ICPC.
Let's run ACM ICPC in the following form: you don't write code at all, you just describe the idea of the solution, like "Here we need to use centroid decomposition with some DP". Then the jury just says "Yeah, that's correct, AC!"
Is that the science you want?
Actually, I do that on our team. In competition I only come up with solutions and talk to my teammates, and they implement them. (And the teammates say "Yeah, that's correct, AC!") This is our team strategy, it works well, and I like it.
I want mathematical or algorithmic beauty in ICPC, not architectural beauty. Time constants and double-precision errors are hard to control.
EDIT: My opinion was same as dotorya's comments below. If there was misunderstanding, I'm sorry for ambiguous comments.
I would like such a competition, actually. It's a bit disappointing that they're almost absent, except for some fairly local discrete-maths olympiads and tests.
It's a programming contest, as it says in the name. I think it's fair to say "programming" includes both computer science and implementation.
Edit: If you look at WF problems from the '90s or early '00s, then I think your complaints are very valid. Many of them were extremely heavy on implementation and brute-force. But the problem quality has improved greatly since then, and I think the current level of implementation challenges is compatible with ICPC's name, especially considering ICPC problem tradition.
Congratulations for your team's great result, by the way!
Actually, I think IOI'17 was not a bad contest. It consisted of several interesting problems (I really liked 'simurgh', and 'books' and 'train' were pretty good; 'wiring' is not a good problem for a subtask-based contest, but the idea was interesting). On the other hand, in this WF, apart from the 530-line-model-solution problem J, only problem C has an interesting and hard-to-come-up-with idea. So it is not good for a top-tier programming contest like the WF.
Well, I'm his team member, and there seem to be some misunderstandings.
By "computer engineering" he was not talking about implementation. He was talking about cache hits, pipelining, inline assembly, AVX, floating-point-precision kinds of things. More broadly, physics, computational geometry (done on paper), and astronomy (using geometry — we saw some such problems at the Grand Prix) might also be included.
My solution for problem D is O(n^3) and only uses double. I don't know why it gets accepted.
Wow, that's great. We tried to figure out a method like that, but we failed :(
We used only doubles, too. The main idea is to compute correct probabilities and expectations in the dynamic programming, so that for every prefix it calculates the correct answers as if that prefix were all we have. Then there are no big values in the DP calculation. The DP transitions were done in logarithms plus exp; we had to speed up the solution to fit into the TL because of the exp operations.
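The "logarithms plus exp" transitions presumably rely on the standard log-sum-exp trick for adding probabilities without underflow; a minimal sketch (my illustration, not the team's actual code):

```cpp
#include <cmath>
#include <limits>
#include <utility>

// Log-domain addition: given la = log(a) and lb = log(b) for a, b >= 0,
// return log(a + b) without ever forming a or b directly, so tiny
// probabilities never underflow to zero.
double log_add(double la, double lb) {
    if (la < lb) std::swap(la, lb);               // ensure la >= lb
    if (lb == -std::numeric_limits<double>::infinity())
        return la;                                // adding zero probability
    return la + std::log1p(std::exp(lb - la));    // exp argument <= 0, safe
}
```

Multiplying probabilities becomes adding logs (cheap); it is these log_add calls, one exp and one log1p each, that dominate the running time, which matches the comment about needing to speed up the exp operations.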
I directly calculate the total answer over all situations by DP, and finally output the result. I don't know why double works, but it gets accepted.
Are problem C and problem M from NEERC 2016 almost the same thing?
Yup, if it were a binary tree... But I think the significant part of problem C is reducing the O(n^2) algorithm's complexity, which NEERC16 M doesn't require.
However, the reduction is not hard. Simple centroid decomposition will do.
Wow.... I'm a fan of you...
Now I am a fan of you!
Actually, I cannot solve it using centroid decomposition right now... and now I agree that these two problems are not the same.
I personally didn't like this year's finals problemset, because most of the problems didn't require great ideas and were mostly about coding. But giving a lot of computational geometry is 'Finals style', whether you like it or not.
About some complaints in the comments:
D: We also thought that the (now classical) 'find the answer as P / Q and then print P·Q^{-1} modulo some prime' would be better, but then we figured they deliberately made the problem harder. We had some O(n^4) DP with breaks which made ~1.5e7 transitions on the max test, and there were no subtractions, therefore no precision issues.
E: I was very surprised that for some people this is a high-level topic. Maybe the organizers should do some research on school programs in different countries. I just did all the computation on paper -> no ternary search -> easy AC.
G: Maybe it is straightforward Voronoi, but our team didn't come up with a solution :) Anyway, I think this problem is good, and for a straightforward Voronoi you have your TRD.
J: For people who say you can't give a problem with a 550-line solution: first, WAT. Second, if you delete all comments and printing (there are A LOT of both), fewer than 350 lines remain. For people who say you can't give a problem of the kind 'well, there will be a period, so we solve for small limits and do precalc': it is kind of obvious (yeah, I can't prove it, but it is easy to believe), and the problem is still about how to solve the exact same problem with smaller limits.
About all the TL issues — I don't know; we were solving on Kattis, maybe at the WF it was worse. All our AC solutions fit in TL/2, and our AC solution for E runs in 0.09s (O(n^5), without ternary search).
D: I'm actually not sure why the "O(n^4) DP with breaks" works, in both your solution and the judges' official one. Is there some way to be sure it gives the correct answer within the 10^-6 precision range? If the judges' idea was "compare answers between a slow solution in Python/Java and the fast solution with breaks — and it worked", I can't consider that a proper fast solution, because they'd have spent a lot of time just verifying whether the answer is correct.
Furthermore, I'm also not a fan of the "deliberately made the problem harder" part. The problemset was hard enough, and that is not an interesting way to increase difficulty, in my opinion.
G: I'm sorry, but what does TRD refer to? I can't figure out what it means :(
About the TL issues: if the tight time limit for E was intended to block ternary-search solutions, I think it's not so bad. (Still, I have a complaint about the problem itself, but that's another story.) Problem I was really bad: it had a 100+ MB input and a very tight time limit. Even the official solution is O(N^2 log N) for N = 6000.
The main trouble with a TL issue is that contestants can't figure out whether there is a solution with better time complexity, or it's just about a huge constant factor. That's quite inevitable for a log N factor, but still, the TL for I was too evil.
Why would there be more precision issues than usual? When we multiply or divide two numbers, their relative errors add up; when we add two numbers of the same sign, the relative error stays bounded by the larger of the two. The problem arises when we subtract two numbers, but we never do that (compare this to the inclusion-exclusion solution, which operates on numbers much bigger than the actual answer).
I know that add/multiply/divide don't pose an issue here. My question is why the break in the DP does not pose a precision issue. Even though the model solution breaks only when the probability of a case is <= 1e-10, can't the removed cases add up to a large portion?
There are ~1.5e7 reachable transitions. If we change this break to 'if (dp == 0) continue', it still runs in nearly the same time.
I believe he was referring to the Team Reference Document for G. I solved it by building the Voronoi diagram in O(n log n) with my Delaunay triangulation hackpack. Probably a bit overboard for the bounds of the problem, but it definitely worked (after I fixed the other bug in my code).
I agree with your issue with E. The problem is trivial once you have the correct physics equation. I would have liked to see more twists on that problem that kept it from being just a simple physics formula.
Well, nobody said you can't give a problem with 350–550 lines of precomputation. I also think it's the setter's freedom. But it's quite far from interesting, and it's not a good idea to spend contest time on that kind of problem either (is it?).
For me, the whole point of this thread is not "this is wrong, make it semi-rated", but the overall disappointment with the set compared to WF 2017, or even this year's GPs. I don't think the WF ever had the best problemsets in general (CERC / NEERC mostly had better problems), and this year it was on the "bad" side of past WF sets — maybe a bit better than WF 2014. And I'm more disappointed because it seems like the authors "killed" some good parts of the set (I would really appreciate D with 10^9 + 7, E with the appropriate formula given, and a 2x time limit on I).
Maybe the real-number issue or the TL issue on D/I was really serious, but I'm not sure. I dealt with problem I quite well somehow, and was not really struck by the "bad problems" in this set :p
I agree that the time limit should have been increased on I. But for E, giving the formula would make the problem way too easy. (That's why it was my most disliked problem of the set.) E was made significantly easier for anyone who happens to have some fresh physics knowledge. Nobody on my team has taken physics since high school. The formulas can always be derived, but a problem which is literally just "can you derive the right formula and put it in your code?" is really stupid.
Ultimately, E and the TLE issue on I are the main problems I had with the set. I liked most of the other problems (except J), but would have liked to see something with more variety replace E.
There is one more attribute of problem quality — how clear the problems are for spectators. The World Finals are watched by many people, and it is very interesting to see how the problems are presented and described, and how easily one can follow a described solution. I enjoyed the problems as a spectator: especially the ones about inserting commas into sentences, jumping over buildings, cutting wires, and the knight on a long chessboard.
You came to the wrong party.
EDIT : Sorry! it seems like this comment offended some people. I'll refrain from posting such comments in future.
What are you even saying!
The problems should above all be interesting to the participants, and clearly most of the contestants seem dissatisfied with this problem set. Spectators having fun with the statements is of infinitesimal importance compared with that.
Hah. Next time someone asks whether programming competitions will ever become a wider recognized sport, show them this comment, along with its voting total (currently at -10).
Actually, I don't precisely understand the technical implementation issues in the hardest 4 problems, because I didn't read them, but I gather it's the same as usual: most problems are not very hard to come up with a solution for, but the implementation and the obstacles you face along the way are a pain (of course, maybe the ideas of the hardest problems are very cool). I am talking about the easiest 6-7 problems, some of which were very dirty. Anyway, here's what I want to ask (I hope somebody has an answer).
There are a lot of amazing problemsetters (mostly Russian, Japanese and Polish programmers) who set problems for very famous competitions/camps (VK Cup, Petrozavodsk, Moscow Workshop, IOI, POI...).
The question is why these people aren't invited to contribute to the problemsets (maybe some were, and refused). The finals should contain some problems with really amazing and creative ideas that most people will enjoy solving — not problems that most good teams figure out easily and then spend a lot of time debugging, squeezing into the time limit, and muttering "fuuuck", "Сука, блять" and "他媽的" (Russian and Chinese expletives) the whole contest. Of course, any contest may contain a few such problems, but at ICPC there are too many.
I hope that in the near future the problem style will change (we should admit, of course, that problem quality has improved over recent years, but the main theme has barely changed). There are a lot of legendary problemsetters who have retired; I think they should take part in this.
It's really upsetting for some people to feel that all their training was for nothing because of some Voronoi stuff, or very strict time limits and bugs in 300 lines of code :D
I should pay tribute to the legendary comment of Swistakk
For the 3rd paragraph: wow, that was exactly what I was doing in the last hour of the contest :(
For the 4th paragraph: I think the tasks were not so bad in 2016/2017. Actually, they were quite interesting to solve. But this year's tasks were a pain for me.
For the 5th paragraph: that's the main reason I was really disappointed. This was my final team contest as a college student, and I never expected that kind of real-number-precision, no-fun, no-idea stuff at the World Finals. I hope next year will be a better contest.
I am not saying that the problems were bad in general. But I don't think they are a match for the cool contests at Petrozavodsk, Moscow Workshop, the Polish contests (I am really a fan of those :D) and some other events.
There are regional contests which are far better (like NEERC, CERC... I don't know about the Japanese/Korean ones, but I bet they are really cool). Again, maybe the super hard problems are very cool, but I am talking about the easiest 70% of the problem set. (Actually, I bet that the super hard problems at ICPC are not really cool compared to the super hard problems in those contests :D.)
As a fun fact, I participated in NEERC this year. I remember some teams which performed really great there (something like top 10), but at the ICPC finals their situation was far worse than they expected (they finished with 4 problems or fewer); of course, the difference in style was the problem. I remember the problems at NEERC being really nice (you barely need any intermediate knowledge, just a lot of creativity, and some of those non-fancy-knowledge problems were like a div1D).
I understand what you are saying, but isn't it the job of the camps and workshops to teach/train according to the composition of World Finals sets, and not the other way around? The World Finals should not simply mimic other popular contests just because people like them or find them not so bad. Computational geometry has been a recurring theme in World Finals sets for many years now.
Both geometry and implementation problems nearly always help decide the World Finals. That is a huge part of competitive programming. As competitors, we need to come up with solutions in time, but also code them in time. I agree that J just seems too crazy for the WF, but as for the rest of the set (maybe C excluded), I imagine many of the great teams could solve it relatively cleanly without issues.
Ultimately, I didn't find this set to be great, but I don't think it's fair to call it bad simply because it required heavy implementation skills or had potential for issues such as precision / TLE. I don't think it is good for us, as contestants, to assume that we should never have to worry about precision or TLE at the World Finals.
Both of those are issues my team faces all the time and has practiced dealing with. (We code in Java, where many regionals don't have appropriate time limits to allow Java on certain problems. Also, we don't get long doubles.) So we were well prepared to apply the constant-factor (but significant) speed-ups needed for problem I.
This discussion won't end, I think, and a lot of people have different points of view. I will state some points which I believe are reasonable facts rather than personal opinions.
I was kind of exaggerating in my previous comment. But I think everyone is demanding that the problems be more balanced (they clearly weren't; most often the expected implementation time was much more than the expected thinking time).
The other competitions I mentioned don't provide problem sets from Mars, or anything that different. I think they are just more balanced in some way.
Setting problems where implementation >= 1.5x idea is much easier than setting some div1E with barely any fancy algorithm and a really cool idea. So they could, without much effort, write some variant of a Voronoi problem or a problem around some fancy algorithm. I won't guess what they think about this fact. I am sure they know that the WF is different.
Problems like this year's G are not fair, I suppose. I don't think anybody would write Voronoi from scratch (at the very least, he'd be taking computer time away from another problem). So it's basically about who had the best code in his reference document.
I believe competitive programming has changed a lot in recent years. If most competitions are moving in one direction, and the World Finals is moving in a somewhat different one, that doesn't make the World Finals correct and the others pointless.
To be honest, I don't really know; I don't think I am veteran enough to judge, and I was considering not writing any of these comments. At least I believe that World Finals fans are not that many :D
G allowed an O(n^2 log n) Voronoi diagram code if you wanted to code just that one. I had the O(n log n) Delaunay in my hackpack, so I just used that. However, I was very surprised only 5 teams got it. At the Hello Barcelona camp in Fall 2017, there were 2 problems on one set which required Voronoi / Delaunay. That is what inspired me to include the code in my reference document.
I do agree with you on many points, but I think that having occasional problems (such as G) that require complicated algorithms is a good thing. Otherwise, many of the algorithms we use regularly today would be excluded, because they would never have been introduced, out of fear of requiring new hackpack code. It is a delicate balance, however. Having a geometry specialist is definitely a growing requirement for the World Finals, and I believe it is part of what allowed our team (University of Central Florida) to earn a Bronze medal this year.
Sick advertisement :D
Well, it is true, but...
I think that one of the main reasons why WF problemsets since 2013 no longer look like 'write 4 geometries, 5 bruteforces and 1 DP/flow which no one will solve because, come on, people are not that smart' is: 'Those fucking Russians/Poles can come up with hundreds of interesting tasks — maybe we can do it too?'
And maybe the cause was even the Finals in St. Petersburg and Ekaterinburg in 2013 and 2014.
I was a participant at WF 2013/2014. Though my team had a hard time during the contest, I like the 2014 problems the most among the many WF problemsets.
Why can't we have more?
There is a special Call for Problems that is sent to a short list of people. I got the first email in 2013, and I tried to contribute problems for 3 or 4 years without success (the problems were not so bad — they were later used in other contests: NEERC, RCC, etc.). If some good problemsetter asks me, I will be happy to forward the letter, if I don't forget (it is sent at the start of July).
Personally, I do not like this year's World Finals problemset either.
As a supplement: problem I is almost the same as an old problem from China, except for the input format. When I saw the problem at the beginning of the contest, I was excited and wanted to try for first blood. But when I saw the input format, I immediately turned to reading the other problems :(
I was a participant and solved problem D during the contest.
My solution with long double passed. It ran on the largest case in about 2s locally, but that running time is enough to make one hesitate before coding it. The constraint should have been smaller, for example N ≤ 200, for the above reason, I think.
Requiring the answer as a real number rather than modulo a prime can be justified as a way to rule out inclusion-exclusion solutions. I came up with an O(N^3) inclusion-exclusion approach during the contest, and I think it is not a crime to make that solution fail.