The 22nd edition of Week of Code https://www.hackerrank.com/w22 starts soon: 8 August, 07:00 UTC (note that the start time is unusual).
From Monday to Sunday, one challenge goes live each day, from easiest to hardest.
The maximum score decreases by 10% at the end of every 24 hours. Your submissions are run on the hidden test cases at the end of every 24 hours.
Only the last submission time counts, so you can start late.
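For example, if the deduction is 10% of the original maximum per elapsed day (as in previous Weeks of Code), a 100-point challenge is worth 100 points on day 1, 90 on day 2, 80 on day 3, and 40 on the final day.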
The contest was prepared by Arterm and tested by wanbo. Thanks also to adamant for help with the preparation.
The contest is rated, and the top 10 get T-shirts.
GL & HF
This doesn't seem to be true for me. I just opened the first problem and solved it in like 2-5 minutes. However, the time shown on the leaderboard is 49:17, which is counted from the beginning of the contest.
Same here; I'm pretty sure I didn't enter the contest any earlier, let alone open the problem before.
A bad joke; I will try to make a better one next time :)
We're using the latest submission time as the tie-breaker, which is why it's counted from the beginning of the contest. It is the time of your last submission across all 7 challenges, so you can still start late: if you finish the first six problems early, effectively only your final-day submission time matters for the tie-break.
I think most people don't need to worry about the tie-breaker (time penalty): scoring is partial, so the tie-breaker only affects users with a full score.
What is the reason for such a change in the rules?
Was it decided in order to reduce the number of cheaters, or is there some other reason? The previous version was more comfortable for participants :)
Yeah, life was hell: explaining < 30 sec submissions, banning them, and then proving them wrong.
Assuming you solve the first 6 problems correctly, you only have to be there at 07:00 UTC on the final day for its problem. Far more comfortable than sum(submit_time - challenge_begin_time).
In general, I feel the way contest details are communicated can be improved. I know of at least four different places where information can appear, sometimes contradicting one another.
First, the /contests page has a small writeup for each contest. Clicking the sign up button leads to a different page with another small writeup, which is usually the same as the one from the previous page, e.g. contests/101hack40/challenges. From there, a details link goes to the actual contest page, where I can read all the contest details and rules, e.g. /101hack40.
Overall, I think the workflow could be much better. I don't know about the contest organizers, but maintaining descriptions in so many places feels like overhead for both participants and organizers. This is my third attempt to provide this feedback, with zero response or changes so far. I would like to hear what the organizers think.
So, will the points for a problem decrease by 10% every 24 hours or not?
That's right, points will decrease by 10% every 24h.
What happened with the "Matching Sets" problem? Its statement was changed from i < j to i != j, wasn't it?
Yep, it changed. During the language review the statement was "corrected" in the wrong way and we missed it. Sorry for the inconvenience. You can resubmit without penalty until the next task opens.
P.S. In my personal opinion, review by a native speaker is an important thing. I really don't like awkward statements written by those whose English is not so good (me, for example).
I wasted so much time on the "Matching Sets" problem thinking it was i < j, and now it's changed to i != j... I hope the problem setter and tester are more careful in the future.
Sorry for the lost time.
We hope so too :)
I don't like writing comments like this one, and in general this is not only about this contest. But I don't care anymore when someone doesn't respect my time (and this week I was solving the tasks only in my free time during the day).
I understand there are mistakes in test cases. Nobody can anticipate every interesting optimization with bad complexity and kill all of them. That is a problem with these kinds of contests, but it is not so bad: it improves the skills of many coders, and for me it makes the contest interesting for everyone.
BUT I cannot believe that you allowed a totally brute-force solution to pass system testing on the fifth problem. That is not right! Someone spends a good amount of time thinking and solving, and then someone else solves it in 5-10 minutes by typing out exactly what is written in the statement. I especially cannot believe it happened with a coder whose rating is probably much, much higher than mine will ever be!
Of course, don't take this as criticism of this contest only; it has happened in each of the last 4-5 rounds!
Now I will stop going on about this, but you should think about whether you deserve so many coders and ever more participants each time (which I am happy about)!
In the end, at least for me, the previous scoring scheme looks more interesting and better. It would also be great to keep the same starting time for the easier tasks and allow one more day for the last ones.
Am I right that the official position of the organizers is "we know about this issue, we have a general idea of how to make most of these wrong solutions fail, but we aren't going to do anything about it because we find it unfair to take AC back"? If somebody had been watching over the contest, it would have been possible to fix the test data yesterday ;)
Out of pure curiosity: is it possible to rerun the contest submissions on stronger tests once the problems are moved to practice, without changing the standings, simply to get stats on how many people would pass?
allllekssssa, at least the problem itself is rather nice. I figured out around 4-5 different solutions yesterday (not knowing that the tests are so weak that there was no need for it), and I believe it was a good exercise.
Yeah, the task is interesting for me too. Of course I didn't spend enough time solving it; it is not really expected that I could invent a solution in the 30-40 minutes of proper thinking time I had yesterday.
I know Arterm is a good setter, and that is the biggest reason why I entered the contest this week. For me all the other tasks are very interesting, except the third, which I have already seen somewhere (actually I am not sure whether I once invented exactly the same task myself or solved it in a real contest :D).
It is no secret that lately I have supported and liked HR rounds the most, but I must also mention some things that keep repeating and are not good.
I remember a situation when I was organizing a 101 Hack contest and the first problem had weak tests :) I simply couldn't believe someone could invent such a strange greedy solution :D
In general I thought it was a good problem set, though the statements and sample test case explanations could have been better.
Two things about problem 5:
1. I no longer feel good about having solved it. I had doubts about my runtime, so I wrote my own test case: 10^5 updates followed by 10^5 queries, where the updates affect pretty much all of the queries. In my opinion this is the worst case for my solution (a sketch of such a generator is below). My code runs in time on that test, but it would be interesting to know whether it would pass genuinely strong test cases.
2. I feel good about my ability to judge problem complexity. When I saw close to 300 full-score submissions, I wasn't sure whether I had understood the problem's complexity properly, and I wondered whether there was a much simpler approach I had missed. Now I know that some (or most) of those solutions do not deserve full credit.
I would also love to see how many solutions pass on strong test data.
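Here's a minimal sketch of the kind of generator I mean. The real input format of problem 5 isn't quoted anywhere in this thread, so the `N N` header and the `U l r` / `Q l r` line formats below are purely hypothetical placeholders:

```cpp
#include <cstdio>
#include <cstdlib>

// Emits 10^5 full-range updates followed by 10^5 suffix queries, so that
// every query is affected by (pretty much) every update.
int main() {
    const int N = 100000;
    std::srand(12345);                // fixed seed => reproducible test
    std::printf("%d %d\n", N, N);     // hypothetical header: #updates #queries
    for (int i = 0; i < N; ++i)
        std::printf("U 1 %d\n", N);   // update covering the whole range
    for (int i = 0; i < N; ++i)
        std::printf("Q %d %d\n", std::rand() % N + 1, N);
    return 0;
}
```

Timing a solution on output like this should expose whether the "all updates hit all queries" pattern is really handled in time.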
Yes, HR should improve their system test cases; it is not fair that a naive solution can pass all the test cases on the 5th problem.
The setters have done a great job with the problems; however, the statement writers and testers have undermined their hard work. The problem statements are written in such a way that it takes hours to deduce what the actual problem is, which eats up the main part of the day until the statement is corrected. And the test cases are so weak that brute force passes on the 5th problem, even though the problem itself is very nuanced. This is also the 4th or 5th consecutive month that something like this has happened in a HackerRank contest.
TL;DR: HackerRank, please provide better tests and non-misleading problem statements. Setters, please keep making good problems as you have been.
No full score, lol. Seems like everyone got trolled by this contest :D
I TLE'd on two cases of Sequential Prefix Function (my solution was something akin to computing backlinks in the tree from the Aho-Corasick algorithm), but when I tested them on my recently quite broken laptop, each took at most 4 seconds; AFAIK HR servers are much faster. Then I added a trivial condition, "if the last letter is currently unique, the backlink goes to the root, i.e. the answer is 0", and it passed in < 0.1 seconds on each of those two tests, while locally it took around 0.5 seconds on them. Weird.
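For the curious, that shortcut is easy to graft onto the classic incremental prefix-function (failure-link) computation. This is only a sketch of the idea (assuming lowercase input; the `IncrementalPi` name is mine), not my actual contest code:

```cpp
#include <string>
#include <vector>

// Incremental prefix function with the "unique last letter" shortcut:
// if the appended character has never occurred before, no border can
// end with it, so its failure link goes to the root and the value is 0.
struct IncrementalPi {
    std::string s;
    std::vector<int> pi;
    int cnt[26] = {0};  // occurrences of each lowercase letter so far

    // Append c and return the prefix-function value at the new position.
    int push(char c) {
        int i = s.size();
        s.push_back(c);
        if (++cnt[c - 'a'] == 1) {  // first occurrence of c: answer is 0
            pi.push_back(0);
            return 0;
        }
        int k = (i > 0) ? pi[i - 1] : 0;   // otherwise walk failure links
        while (k > 0 && s[k] != c) k = pi[k - 1];
        if (s[k] == c) ++k;
        pi.push_back(k);
        return k;
    }
};
```

The point of the shortcut is that a never-seen-before letter is answered in O(1) instead of walking the whole link chain.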
Maybe exactly one element of the set {you, HackerRank} used an optimization flag and it helped only one version of the code? I don't see any other possible explanation.
I'm using -O2 all the time, recently with C++11. HR doesn't list the compiler flags AFAIK (just libraries), but this suggests -O2 as well.
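(For reference, that would correspond locally to something like g++ -O2 -std=c++11 sol.cpp -o sol; the exact flags HR uses aren't published, so treat this as a guess.)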