socho's blog

By socho, history, 4 years ago, In English

Hello!

I'm part of a team organising a programming competition in my area. We feel the tasks we have might be suitable for hosting a mirror contest on Codeforces. Who is the right person to contact about this? I've tried contacting Mike himself, but I presume he's too busy to respond right now, which is very understandable. Any suggestions?

Thanks for your time, and stay safe!

By socho, history, 5 years ago, In English

Hey all!

We would like to invite you all to an online replay contest for UWCOI 2020. UWCOI 2020 is an OI-style contest hosted by UWCSEA Dover, used to select this year's UWCSEA Dover team for the Singapore National Olympiad in Informatics. The online replay will be held on CodeChef as a round rated for both divisions.

Here are some details:

  • Time: Tuesday, February 25, 2020, 21:00 hrs IST (Indian Standard Time). Please check your local timezone here.
  • Contest Format: 7 OI-style tasks (all tasks have subtasks) in 3 hours
  • There is no time penalty for non-accepted verdicts; however, time will be used as the tiebreak.
  • The contest is rated for both divisions (all ratings).
  • The writers are astoria, kimbj0709 and socho.
  • The technical committee also includes smjleo.
  • The contest is hosted on CodeChef, at this link.
  • For all questions, please email us at [email protected]

We hope you enjoy the contest!

Update: Just a gentle reminder that the contest begins in about 3 hours! We hope to see you there!

Update 2:

Thank you all for participating! We hope you enjoyed the contest. Here are the problems and editorials:

Please let us know if you have any comments / feedback!

Update 3: All editorials are now available!

Thanks,
The UWCOI Committee

By socho, 5 years ago, In English

Hey all!

We're the leaders of our school's competitive programming club, and we're hosting a small OI-style contest at our school soon (and for some other schools too!). If we're able to, we might also host an online mirror contest! We're just wondering: would anybody here be interested in testing our contest and giving us feedback?

Just a few details:

  • There are 6 tasks, which we estimate are between Division 3 and Division 2 in difficulty.
  • One task is interactive.
  • The contest is OI-style, so all 6 tasks have subtasks.
  • The writers are astoria, kimbj0709 and socho.
  • We're looking for any feedback about the quality of tasks, the gradient of difficulty in the contest, and the variety of topics in the contest.

Please message me if you're interested in helping us test this! Any help would be greatly appreciated!

Thank you for your time!

Update:

We now have enough testers. Thanks to all of Codeforces for being such a great community! A lot of people have reached out offering to test; thank you to everyone involved! We really appreciate your help, and we hope to be able to return the favour to this wonderful community sometime soon!

By socho, history, 5 years ago, In English

Hey all!

I'm setting up an OI-style interactive problem in Polygon for a school contest, where the score should depend on the number of queries the solution uses in the worst case. For the first few subtasks, the solution must use at most X queries (X depends on the subtask), and the scoring for those subtasks works. However, for the final subtask, I want partial scoring: the score for the subtask should depend on the maximum number of queries used in any testcase of the subtask (specifically, the score should be 30 minus that maximum).

Here is an example, in case it helps:

Let's say the final subtask has 4 testcases. In testcases #1 and #2 the solution used 5 queries, in testcase #3 it used 9 queries, and in testcase #4 it used 15 queries. Since 15 was the maximum number of queries used in any testcase, the score for the subtask should be 30 - 15 = 15. Equivalently, the subtask score is the minimum over its testcases of (30 minus the number of queries used on that testcase).

So far, in the checker, I've tried quitf(_pc(score)) and quitp(_pc(score)) (as well as score - 16 in both), but every time I test a solution that should receive partial credit, it still gets the full 30 points for the subtask.
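To make it concrete, here is a stripped-down sketch of the kind of checker I have in mind (not my exact code; the convention that the interactor writes a correctness flag and the query count to tout, and names like correct/queries/points, are just placeholders):

    #include "testlib.h"

    int main(int argc, char* argv[]) {
        registerTestlibCmd(argc, argv);

        // Assumed convention: the interactor wrote "<correct> <queries>" to tout,
        // which this checker reads back as the contestant's output stream (ouf).
        int correct = ouf.readInt();   // 1 if the final answer was right, 0 otherwise
        int queries = ouf.readInt();   // number of queries the solution made

        // A wrong answer must stay Wrong Answer, regardless of the other tests.
        if (correct != 1)
            quitf(_wa, "wrong answer after %d queries", queries);

        int points = 30 - queries;
        if (points < 0) points = 0;

        // quitp reports explicit points for this test; how Polygon should combine
        // these per-test points into the subtask score (I want the minimum over
        // the group) is exactly the part I can't figure out.
        quitp(points, "correct, used %d queries, awarding %d points", queries, points);
    }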

Is there any way to do this? If so, please let me know how! I'm sorry if I've missed this in a previous post.

Thank you so much for your time!

Update: Yes, groups and test points are both enabled (that's how the other subtasks work). The reason I can't just assign a smaller score to each testcase in the last subtask and set the policy to EACH_TEST is that, if a solution is incorrect on any testcase, I need it to show Wrong Answer regardless of its performance on the other testcases. Here is a similar problem, where the last subtask also awards a score based on the maximum number of queries used in any testcase. I'm trying to make something similar in Polygon.

Thanks again!
