dario2994's blog

By dario2994, 8 months ago, In English

The European Championship 2024 will take place on the 24th of March in Prague. The top teams from the European ICPC regionals (CERC, NWERC, SEERC, and SWERC) will compete for the title of European champions. This is the first edition of this ICPC super-regional.

The mirror contest European Championship 2024 - Online Mirror (Unrated, ICPC Rules, Teams Preferred) will be held on Codeforces at Mar/24/2024 13:00 (Moscow time) and will last 5 hours.

The mirror contest will contain the same problems as the official competition.

I am the chief judge for the competition and I want to thank:

  • The amazing set of judges who proposed and prepared the problems: antontrygubO_o, bicsi, Giove, Martin Kacer, MZuenni, Petr.
  • Our beloved tester ksun48 who showed us that our perception of the difficulties was not exactly spot on...
  • Our beloved proofreader Philae for proofreading the statements.
  • Everyone involved in the organization of EUC. In particular our director Boba Mannová, and Fernando Silva, Václav Herman, Ondřej Votava, Jan Kubr, Jan Baier.
  • The developers of DOMjudge, the contest system used in the official contest.
  • MikeMirzayanov for Polygon (that we used to prepare the problems) and for letting us host the mirror on Codeforces.

I invite you to participate in the contest and I hope that you will like the problems.

On the difficulty
The contest features problems with difficulties from div1A to div1E. It should be enjoyable for many, and challenging even for the strongest teams in the world.

Rules

  1. The contest is unrated, so your Codeforces rating will not be affected.
  2. The scoring is ICPC-style: teams are first sorted by the number of problems solved, then the time penalty is used as a tie-break. An incorrect submission gives a 20-minute penalty.
  3. We encourage participation as a team.
  4. If you are participating in a team, we encourage you to use only one computer for coding the solutions (as in an ICPC contest). Regarding using templates, googling, and copy-pasting code: feel free to do it.
Rationale of rule 4.
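As a concrete illustration of rule 2, here is a minimal sketch (in Python, with made-up team data) of ICPC-style ranking: teams are ordered by problems solved, ties are broken by total penalty, and each incorrect submission on a solved problem adds 20 minutes.

```python
def penalty(solved_times, wrong_tries, penalty_per_wrong=20):
    """solved_times[i]: minute of the accepted submission for solved problem i;
    wrong_tries[i]: incorrect submissions on that problem before the AC."""
    return sum(t + penalty_per_wrong * w for t, w in zip(solved_times, wrong_tries))

# Hypothetical teams: (name, acceptance minutes, wrong tries per solved problem).
teams = [
    ("A", [12, 45, 170], [0, 2, 1]),   # 3 solved, penalty 12 + 85 + 190 = 287
    ("B", [30, 60, 200], [0, 0, 0]),   # 3 solved, penalty 290
    ("C", [5, 20], [0, 0]),            # 2 solved, penalty 25
]

# More problems solved first; among equals, lower penalty first.
ranked = sorted(teams, key=lambda t: (-len(t[1]), penalty(t[1], t[2])))
print([name for name, *_ in ranked])  # → ['A', 'B', 'C']
```

Note that wrong submissions to problems a team never solves cost nothing, which is why late desperate attempts are free under these rules.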

UPDATE: We hope you liked the problems!

Congratulations to the winners, and especially to the first two teams, who AKed the contest:

  1. dw my perception of the difficulties was not exactly spot on: tourist, ecnerwala.
  2. xinyoudui: PubabaOnO, orzdevinwang, jqdai0815
  3. MIPT: Yolki-palki: Tikhon228, Pechalka, Kapt
  4. Captain take me!: crazy_sea, A_zjzj, 275307894a
  5. Beyond Three Wolves: Kevin114514, CrTsIr, Atomic-Jellyfish
  6. HSE: FFTilted: Kirill22, Ormlis

We uploaded the editorial of the contest.

Tune in to ICPCLive to see the closing ceremony and find out how the onsite teams did at 18:00 CET!

UPDATE 2:

Congratulations to the medalists of the onsite contest:

  1. Warsaw Eagles 2024 — University of Warsaw (the only team with 9 problems)
  2. Zagreb 1 — University of Zagreb
  3. KNU_0_GB_RAM — Taras Shevchenko National University of Kyiv
  4. ELTE 1 — Eötvös Loránd University
  5. UWr 1 — University of Wroclaw
  6. ENS Ulm 1 — École Normale Supérieure de Paris


By dario2994, 18 months ago, In English

TL;DR: I am advertising pol2dom: a tool to convert a set of problems prepared in Polygon into a DOMjudge contest.

The whole process is automated: downloading the Polygon package, converting it into a DOMjudge package, and uploading the DOMjudge package to a DOMjudge server.

If you are organizing a contest that uses DOMjudge, I invite you to give pol2dom a try.
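Conceptually, the pipeline is three stages chained into one command. The sketch below uses placeholder function names I made up purely for illustration (this is NOT pol2dom's actual API; see the project README for the real commands):

```python
# Illustrative placeholders only -- not pol2dom's real functions or CLI.

def download_polygon_package(problem_id):
    """Placeholder: in reality this would fetch the package via the Polygon API."""
    return f"{problem_id}.zip"

def convert_to_domjudge(polygon_zip):
    """Placeholder: rearrange statement/tests/checker into the DOMjudge layout."""
    return polygon_zip.replace(".zip", "-domjudge.zip")

def upload_to_domjudge(domjudge_zip, server_url):
    """Placeholder: in reality this would POST the archive to the DOMjudge API."""
    return f"uploaded {domjudge_zip} to {server_url}"

# One run handles every problem in the contest; hypothetical problem ids below.
for pid in ["problem-a", "problem-b"]:
    pkg = download_polygon_package(pid)
    dj = convert_to_domjudge(pkg)
    print(upload_to_domjudge(dj, "https://judge.example.org"))
```

The value of automating this is that the per-problem click-work described below is multiplied by the number of problems in an ICPC contest.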

Long story

Polygon is the platform used to prepare problems for Codeforces rounds (and, at least one year ago, for Codechef rounds). It has many issues but, as far as I know, it is the best platform for preparing competitive programming problems. It has a large number of checks that catch common mistakes, it is hosted online, it allows a coordinator to quickly review the preparation, and so on. Like everyone, I would like to see Polygon improve (for example, with a decent UI), but there is nothing better, so we can complain but we will keep using it.

DOMjudge is an automated system to run ICPC-style programming competitions. It is currently used in various ICPC regional contests (NWERC, SWERC, NAC, SPPC, and others), at the world finals, and in the new Universal Cup. It is rather popular, actively developed, and I really like it.

Sadly, the package of a problem created with Polygon is not compatible with DOMjudge, and even if it were, a lot of annoying clicks would be required to download it from Polygon, save it somewhere, and upload it to a DOMjudge instance; multiply everything by the number of problems in an ICPC contest.

Here comes pol2dom, a Python tool that downloads problems from Polygon, converts them into the DOMjudge format, and uploads the result to an instance of DOMjudge. All of that in a single command! If you want to know more about this tool (for example, that it supports testlib, that it supports interactive problems, that it supports balloon colors and puts a balloon of the right color in the statement...), I suggest you read the README.

It was used for SWERC 2022 and SWERC 2023. It has decent documentation that should allow anyone to use it without going crazy. Without false modesty, it is much better than other tools with similar goals and similar names: polygon2domjudge, poly-to-dj, another polygon2domjudge. One may argue that there are alternatives to Polygon that are compatible with the DOMjudge format (e.g., BAPCtools or problemtools), but these alternatives are less powerful and far less used than Polygon.

I will keep this blog in the recent actions for a couple of days with the usual tricks if it receives no comments. Up


By dario2994, history, 21 months ago, In English

The Southwestern Europe Regional Contest will take place on the 19th of February in Milan. It is the ICPC regional contest (i.e., the winning teams will advance to the ICPC World Finals) for teams from France, Israel, Italy, Portugal, Spain, and Switzerland.

The mirror contest SWERC 2022-2023 - Online Mirror (Unrated, ICPC Rules, Teams Preferred) will be held on Codeforces at Feb/19/2023 14:05 (Moscow time) and will last 5 hours.

The mirror contest will contain the problems from the official competition plus some additional problems.

I am the chief judge for the competition and I want to thank:

I invite you to participate in the contest and I hope that you will like the problems.

On the difficulty
The contest features problems with difficulties from div2A to div1F; so anyone can find something at their level.

Many teams with little experience participate in SWERC, so the problem set should be enjoyable also for div2 contestants. On the other hand, solving all the problems should be challenging even for the strongest teams in the world: the MIT team did not AK in 5 hours.

Rules

  1. The contest is unrated, so your Codeforces rating will not be affected.
  2. The scoring is ICPC-style: teams are first sorted by the number of problems solved, then the time penalty is used as a tie-break. An incorrect submission gives a 20-minute penalty.
  3. We encourage participation as a team.
  4. If you are participating in a team, we encourage you to use only one computer for coding the solutions (as in an ICPC contest). Regarding using templates, googling, and copy-pasting code: feel free to do it.
Rationale of rule 4.

UPDATE: We hope you liked the problems!

We uploaded the editorial of the contest (you can find it at https://codeforces.net/contest/1776, among the contest materials).

The solution to problem N submitted in the mirror contest by the team Let it rot is a quadratic solution which they managed to squeeze into the time limit. This was not the expected solution, so, morally, this problem is still unsolved! In the coming days I might decrease the time limit. We have a solution which gets AC in less than one second, but we wanted to be generous with the time limit. It turns out we were too generous.

Congratulations to all the participants of the onsite contest and in particular to the three teams solving 11 problems:

  1. ENS Ulm 1 -- École Normale Supérieure de Paris
  2. gETHyped -- ETH Zürich
  3. P+P+P -- Harbour.Space University


By dario2994, history, 2 years ago, In English

I am the coordinator of Round #810. I will be brief and clear.

The div1E problem was copied by one of the two authors; it was not a coincidence.

For this reason, the div1 part of the round is unrated. Div2 stays rated.

Sometimes known problems appear in a contest; it has happened to me as an author and to every problemsetter who has organized many contests. This is never a reason to make a round unrated. Today something different happened: the problem was deliberately copied, and the author did not tell anything to anybody (not even to the other author). Codeforces (and I personally) condemn this behavior.

A final personal remark: I feel sad. Please future authors: do not copy problems. It's a waste of everyone's time.


By dario2994, 3 years ago, In English

The Southwestern Europe Regional Contest will take place on the 23rd of April. It is the ICPC regional contest (i.e., the winning teams will advance to the ICPC World Finals) for teams from France, Israel, Italy, Portugal, Spain, and Switzerland.

The mirror contest SWERC 2021-2022 - Online Mirror (Unrated, ICPC Rules, Teams Preferred) will be held on Codeforces at Apr/24/2022 14:05 (Moscow time) and will last 5 hours.

The mirror contest will contain all the problems from the official competition plus some additional problems.

I am the chief judge for the competition and I want to thank:

I invite you to participate in the contest and I hope that you will like the problems.

On the difficulty
The contest features problems with difficulties from div2A to div1F; so anyone can find something at their level.

Many teams without much experience participate in SWERC, so the problem set should be enjoyable also for div2 contestants. On the other hand, solving all the problems should be challenging even for the strongest teams in the world: the MIT team did not AK in 5 hours.

On the beauty

Rules

  1. The contest is unrated, so your Codeforces rating will not be affected.
  2. The scoring is ICPC-style: teams are first sorted by the number of problems solved, then the time penalty is used as a tie-break. An incorrect submission gives a 20-minute penalty.
  3. We encourage participation as a team.
  4. If you are participating in a team, we encourage you to use only one computer for coding the solutions (as in an ICPC contest). Regarding using templates and copy-pasting code: feel free to do it.
Rationale of rule 4.

UPDATE: We hope you liked the problems, here is the editorial for all the problems in the mirror: https://codeforces.net/blog/entry/102042 .

UPDATE: Congratulations to all the participants of the onsite contest and in particular to the two gold-medal-winning teams, both solving 10 problems (and with very similar penalty times):

  1. Raw Pots -- Harbour.Space University
  2. TAU++ -- Tel Aviv University

And congratulations also to the four teams who managed to solve all the problems in the mirror:

  1. tourist, ksun48
  2. jiangly
  3. Merkurev, KAN, Um_nik
  4. djq_cpp, hehezhou, jqdai0815


By dario2994, history, 3 years ago, In English

For the sixth time, the Italian national contest (valid for the selection of the Italian IOI team) will be mirrored into an online contest. Everyone is welcome to participate! There are both easy subtasks (div2A) and very hard ones (div1D+), so it can be enjoyable both for newcomers and for very high rated contestants.

  1. The problem statements will be available in both English and Italian.
  2. Tasks will be IOI-like (with graders and subtasks) and you will have 5 hours to solve them.
  3. The only language allowed is C++.
  4. The time window for the practice contest (featuring original problems) will start on 2021 November 13th, 08:00 CET and will end on 2021 November 15th, 23:59 CET.
  5. The time window for the main contest will start on 2021 November 16th, 15:00 CET and will end on 2021 November 17th, 15:00 CET.

The contests' timing will be USACO-like: you can decide when to start your 5-hour time window (after logging in), but the contest will end at the given time regardless of your window.
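The window arithmetic above can be sketched in a few lines (a toy illustration; the minute values here are hypothetical, not the real contest schedule):

```python
WINDOW = 5 * 60          # personal window length: 5 hours, in minutes
CONTEST_END = 24 * 60    # contest closes 24 hours after it opens

def effective_window(start_minute):
    """Minutes of solving time if you press start at `start_minute`:
    the personal window, truncated by the fixed end of the contest."""
    return max(0, min(WINDOW, CONTEST_END - start_minute))

print(effective_window(0))        # start right away → full 300 minutes
print(effective_window(22 * 60))  # start 2 hours before the end → only 120 minutes
```

In other words, starting later than 5 hours before the contest closes silently shortens your window.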

If you want to participate, you must:

  1. Visit the contest website: https://mirror.oii.olinfo.it
  2. Click the "register" link, fill out the form, click the register button, and then "back to login"
  3. You can now log in with the same username and password you used to sign up
  4. If the login is successful you will be ready to participate, just wait for the contest to start! (And maybe save the page in your bookmarks, so that you can quickly get back to it when the contest begins)
  5. When the contest starts, you will see a red button. Click it when you want to start your 5-hour time window!
  6. Good luck and have fun!

Ranking: The ranking of the online contest will be available at https://mirror.oii.olinfo.it/ranking when the contest starts.

Upsolving: After the end of the contest, the tasks will be uploaded to the Italian training website (also localised in English), in the "task & quiz archive" section, where they will be available for online evaluation (after registering on the website).


By dario2994, 3 years ago, In English

Hi!

On Jul/25/2021 17:35 (Moscow time) we will host Codeforces Global Round 15.

This is the third round of the 2021 series of Codeforces Global Rounds. The rounds are open and rated for everybody.

The prizes for this round are as follows:

  • 30 best participants get a t-shirt.
  • 20 t-shirts are randomly distributed among those with ranks between 31 and 500, inclusive.

The prizes for the 6-round series in 2021:

  • In each round top-100 participants get points according to the table.
  • The final result for each participant is equal to the sum of the points they get in the four rounds where they placed highest.
  • The best 20 participants over all series get sweatshirts and place certificates.

Thanks to XTX, which in 2021 supported the global rounds initiative!

Problems for this round are set by cip999 and me. Thanks a lot to the coordinator antontrygubO_o, to the testers gamegame, ajit, golions, skydogli, McDic, rabaiBomkarBittalBang, nweeks, Marckess, HenriqueBrito, gratus907, bWayne, zscoder, TheOneYouWant, and to MikeMirzayanov for the Codeforces and Polygon platforms.

You will be given 9 problems and 2 hours 45 minutes to solve them.

The scoring distribution is 250-500-1000-1000-1500-1500-2500-2750-3750.

If you are tempted to make one more comment on the scoring distribution, read this.

Good luck and see you in the standings!

UPDATE 1: Thank you very much for participating, we hope you liked the problems (and sorry to top contestants for giving a not-so-fresh problem I).

Here you can find the editorial (with a bit of behind-the-scenes, some obscenely wrong predictions, and hints for all the problems).

UPDATE 2: Congratulations to the winners!

  1. tourist
  2. ksun48
  3. Benq
  4. Petr
  5. VivaciousAubergine
  6. TadijaSebez
  7. sunset
  8. Um_nik
  9. ko_osaga
  10. dreamoon_love_AA


By dario2994, 4 years ago, In English

Since very little information is available about the preparation of a competitive programming contest for AtCoder/Codeforces, I decided to collect my experience here. I will try both to describe my experiences and to give some general advice to wannabe problemsetters.

I hope that this will be useful to future problemsetters who are "out of the loop". Moreover, participants might be curious to know what happens behind the scenes (and maybe the platforms may consider this as a form of constructive feedback).

Acronyms:

  • AGC = AtCoder Grand Contest
  • GR = Codeforces Global Round

Why do I know something about problemsetting?

I have been in the competitive programming world for ~8 years: I have participated in IOI/ICPC/GCJ and a number of contests on AtCoder/Codeforces (and lately Codechef). I am not a top participant but, having been in this world for so long, I know, more or less, all the standard tricks.

Recently, I was the author of the flagship contests of AtCoder and Codeforces: AtCoder Grand Contest 44 and Codeforces Global Round 11. Both rounds were well-received (or, at least, this is my feeling). In the past I was also the author of a number of tasks for the Italian Olympiads in Informatics. Even though there are problemsetters much more experienced than me, I think I have enough experience and material to talk about being the author of a contest.

The timeline of the AtCoder Grand Contest

  • 2018-April 2020: I kept a file with a list of problems I created in my spare time. Some were pretty bad, some were very good (and one of the best appeared in a contest).
  • Beginning of April: I decided that I wanted to host an "important" online round. I was in doubt between an AGC and a CF Global Round.
  • 20-21 April: I decided to go for an AGC (knowing that I might receive a rejection, since the previous authors were mostly either Japanese or Belarusian...). I sent a message to maroonrk. He answered that I should write to rng_58. I wrote to rng_58 and he agreed on considering my problems.
  • 21 April-4 May: Inventing problems and discussing them with rng_58. On the 4th of May the problemset was finalized (more or less).
  • 4 May-20 May: Preparation of the contest and testing. During this period I prepared the problems (and the editorial) and at the same time the testers solved them and gave me feedback.
  • 22 May-23 May: rng_58 and I, and partially also the testers, thoroughly checked everything (statements, checker, validator, solutions).
  • 23 May: Contest day! Everything went smoothly. Feedback was positive.

The timeline of the Codeforces Global Round

  • 24 May: I enjoyed organizing the AGC, so why not do it again? Some time before, I had received a message from MikeMirzayanov asking me to propose rounds (it was sent to a number of high-rated users). Thus, I decided that my next contest would be a Global Round.
  • 25 May-30 June: Inventing problems.
  • 1 July: I sent a message to antontrygubO_o (who I had chosen as coordinator because I really liked Goodbye 2019) asking whether I could be the author of a Global Round and asking some clarifications on the process.
  • 3 July: He answered saying that I could be an author and that, if I wanted, he could be the coordinator. He also reassured me about the duration of the process (I was scared by the idea of a long queue). I said that he was perfect as coordinator and that I would send him some problems after my holidays.
  • 18 July-30 July: Discussing problems with antontrygubO_o.
  • 30 July: The problemset is (more or less) finalised and I start the preparation of the problems.
  • 1 August-27 August: Preparing the problems.
  • 27 August-9 October: Testing and writing editorials.
  • 10 October: Contest day! Everything went smoothly. Feedback was positive.

Why did I decide to be the author of some contests?

There are many possible motivations for problemsetting. For me, the motivation is a mix of two factors:

  1. I like to create problems.
  2. I am proud and happy if the strongest contestants in the world try to solve my problems.

There are many other good reasons to become the author of a contest, but there is also a very bad reason: compensation. It is a huge waste of time to be the author of a contest if you do it (mainly) for the compensation. In some countries it might actually be convenient "money-wise" to organize a contest, but this is not the case in "rich" countries (Western Europe, Japan, USA). I am not saying that problemsetters should be paid more, just that the current compensation does not justify the amount of dedication/time/effort/ability necessary to prepare a contest. On the other hand, it is always nice to be paid to do something you like!

The various phases

Creating problems

The creation of the problems is, by far, the hardest part of the preparation of a contest. It requires time, creativity and experience. Nonetheless, it is the best part of the whole process.

In my case, when I proposed the contests I had already created some problems (for the AGC, only one ended up in the contest, for the GR I think 3). The remaining problems were created after my first interaction with the coordinator.

My strategy to create as many problems as possible is to devote as much time as possible. I was spending the vast majority of that time discarding bad ideas, thinking about the cosmic void, feeling like I was wasting my time, drawing on a blank piece of paper, reading others' problems to find inspirations... until a decent idea pops up. It's like waiting for the bus: if you wait long enough at the bus stop, the bus will come eventually.

The criterion I follow when accepting/rejecting my own problems is: would I like to solve this during the contest?

I am not a huge fan of some of the problems that appeared in AGC44 or GR11 (for example, AGC-D and GR11-D) but I still consider them decent. On the other hand, I really love some of my problems. The perfect contest is yet to be held and you (and I) should not aim for it. Putting imperfect problems in a contest is fine, and what you don't like might be very cool for someone else. On the other hand, I strongly suggest not inserting bad problems in a problemset just to finish it (I guarantee you would regret it).

The ability to create nice and original problems correlates strongly with rating and experience. If you are new or relatively weak, I would suggest staying away from problemsetting for div1. On the other hand, if you are experienced and somewhat strong (let's say GM), I would suggest trying problemsetting... it's immensely fulfilling.

Let me conclude with a few pieces of pragmatic advice:

  • Write down all the problems you come up with, because otherwise you will forget them.
  • Google your own problems. If they are well-known or easily googleable, discard them. It can happen that a known problem ends up in a contest (see AGC44-D); this is not a huge issue, but try to avoid it.
  • Be honest with yourself about the beauty of your own problems (it is easy both to underestimate and overestimate their beauty).

Proposing to a coordinator

Both on AtCoder and Codeforces I was lucky enough to skip the "waiting in queue" part of proposing a contest, and I started interacting with the coordinator from day 1. I want to sincerely thank the coordinators for giving me this opportunity. The main reason I could skip that pain is that I was proposing "important contests" and/or I was an "experienced" problemsetter (and/or I was just lucky). Do not expect this treatment.

Both for the AGC and the GR, I talked (either via Google Chat or Telegram) directly with the coordinator and kept sending problems until there were enough to create a problemset. The process was like this: I sent a problem (with statement, constraints, and a sketch of the solution) and within a couple of days (or a bit more) the coordinator would tell me whether he accepted or rejected it.

I think that both rng_58 and antontrygubO_o (who were my coordinators) are very good at selecting problems. Even though my taste is not exactly the same as theirs, I can understand all the rejections I received from them. In this phase they were both very responsive.

The rejection rate was very similar on the two platforms: I proposed 11 problems for the AGC (which contains 6 problems) and 15 problems for the GR (which contains 8 problems).

The main role of the coordinator is to discard bad problems and decide when there are enough to create a contest. If a coordinator rejects a problem, do not insist: the decision belongs to him, not to you. Rejections are never pleasant, but you should always remember that the coordinator's goal and yours are the same: creating a nice contest.

In general, I strongly advise anyone to propose only decent problems to the coordinator (see what I said in the "Creating problems" section). It is an unnecessary waste of time (both yours and the coordinator's) to propose bad problems.

Contest preparation

Preparing a problem consists of: writing the statement, writing the solution, writing the generator for the tests, writing the validator for the tests, and, when necessary, some additional helping programs (checker, interactor...).

The preparation phase was rather different on the two platforms.

The AtCoder contest was prepared in a Dropbox folder. To upload the problems into the contest interface (which is exactly the one you see when competing on AtCoder) I had to run a PHP script. Preparing the interactive problem (AGC44-D) was an immense pain due to the complete lack of any documentation (even rng_58 had no idea how to do it). I consider this whole preparation environment pretty bad.

The Codeforces contest was prepared on Polygon. Polygon does not have a very modern UI and is not perfect, but I consider it a very good environment for preparing problems. It is vastly superior to the AtCoder way of doing it. I like that Polygon makes it very hard to make stupid mistakes (for example, validators are automatically run on all inputs, it checks that tests cover extreme values of the variables, etc...).

For both contests, during the preparation phase the problemset was perturbed slightly (a problem was replaced in the AGC and a problem was modified in the GR) and I kept interacting with the coordinators. rng_58 was always helpful and responsive: he helped me make decisions and guided me through the process (anticipating what would come next and answering my doubts). antontrygubO_o had a very long response time (something like 10 days to answer yes/no to a question that was important for proceeding with the preparation) and I was quite frustrated by his behavior. At the time, I expressed my concerns to him, but almost nothing changed. Since I think this kind of ghosting is rather common in this phase, I suggest that all problemsetters be as independent as possible here and interact with the coordinator only for major changes to the problems.

In this phase it is very common to "change opinion" about a problem. Something that seemed cool before preparing might seem more standard after the preparation, and something that looked easy to implement might reveal itself to be a torture. Do not pay too much attention to these feelings; your judgement is too biased to be useful.

Preparing a problem is a dull job, preparing a whole contest is even worse. But keep in mind that the sum of "suffering during preparation" + "regret after the contest" is a universal constant. If you get something wrong during this phase, or you neglect something, it will affect the contest.

I made a number of preparation mistakes but, in hindsight, I consider all of them minor. After AGC44 I was very hard on myself for having given a known problem (problem D) in the contest. Now, I am confident enough to say that it can happen and was an acceptable mistake. I suggest you, future authors, to be hard on yourself before the contest but to be somewhat lenient after. It is normal to make some minor mistakes, it is part of being human.

Let me conclude this section with the order I follow when preparing a problem:

  1. Carefully write the statement with all the constraints. Changing a constraint later is annoying and requires a lot of additional work, so it is better to get it right the first time.
  2. Write a checker (if necessary), a validator, and a stupid generator.
  3. Write the optimal solution and (if appropriate) a naive solution.
  4. Check that everything works together, that the time-limit and the constraints are reasonable, that the solution is correct.
  5. Write a generator for hard testcases (this is the worst part of the whole creation of a contest).
  6. Choose the parameters of the generator for pretests and tests.
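To make steps 2-3 concrete, here is a toy illustration for a hypothetical problem I made up ("given n integers in [1, 10^9], print their sum"): a stupid random generator, a validator enforcing the constraints, and a model solution. Real contests would use testlib (in C++) on Polygon; this only shows the idea.

```python
import random

def generate(n, seed, max_v=10**9):
    """Stupid generator: n uniform random values; a fixed seed keeps tests reproducible."""
    rnd = random.Random(seed)
    values = [rnd.randint(1, max_v) for _ in range(n)]
    return f"{n}\n{' '.join(map(str, values))}\n"

def validate(test, max_n=10**5, max_v=10**9):
    """Validator: reject any test that violates the statement's constraints."""
    lines = test.split("\n")
    n = int(lines[0])
    assert 1 <= n <= max_n, "n out of bounds"
    values = list(map(int, lines[1].split()))
    assert len(values) == n, "wrong number of values"
    assert all(1 <= v <= max_v for v in values), "value out of bounds"

def solve(test):
    """Model solution for the toy problem: sum of the values."""
    return sum(map(int, test.split("\n")[1].split()))

test = generate(5, seed=42)
validate(test)        # step 2: every generated test must pass validation
print(solve(test))    # step 3: run the model solution on the test
```

The point of the validator is exactly step 4: it is an independent check that the generator (including the hand-crafted hard cases of step 5) never produces a test outside the constraints promised in the statement.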

Testing phase

Until this moment only you (and your coauthors) and the coordinator have had access to the problems. Now the coordinator (and you, if you like) will invite some testers to test the problems. To interact with the testers, we used a Slack workspace for the AGC and a Discord server for the GR (in both cases the environment was created by the coordinator).

Ideally a tester will:

  1. Virtually simulate the contest as if he were really competing, taking notes of what he is doing and of his impressions during the contest.
  2. Upsolve the problems he did not manage to solve during the simulation.
  3. Share detailed feedback with you about the problems.

I have said above that the perfect contest is yet to come... on the other hand the perfect tester exists and is dacin21. Let me thank him here as, for both my contests, the quality of his testing was astonishing.

The testing, both with AtCoder and with Codeforces, was below my expectations (for different reasons).

For the AGC there were two official testers (which would normally be enough), but neither of them virtually simulated the contest (which makes it hard to estimate the problems' difficulty appropriately). To make up for this issue I asked two strong contestants to virtually simulate it (Rafbill and dacin21). Their feedback played a big role in deciding the problem scores (which, a posteriori, were quite appropriate). On the other hand, the official testers were very scrupulous in testing the problems and I am pretty sure that any mistake in the preparation would have been noticed.

For the Global Round there was a large number of testers. The issue here was that some of the testers were not mature enough to provide useful feedback. Someone who has been into competitive programming for three months will never provide useful feedback; someone who cannot articulate his thoughts beyond "cool" or "orz" or "shit" will never provide useful feedback; someone who considers the problems he can solve obvious and those he cannot solve too hard will never provide useful feedback. Moreover, even though the testers were an army, there were exactly 0 solutions from Codeforces-provided testers for problems G and H (here dacin21 and Giada saved the day). On the other hand, the large number of virtual participations made it easy to gauge the difficulty of the problems.

I recommend that any author take testers' opinions and comments with a grain of salt. Remember that the responsibility is on you (the author) and you have the right to reject the testers' suggestions.

On the other hand, testers' feedback is of utmost importance for gauging the difficulty of the round and polishing the statements. Judging the difficulty of your own problems is very hard (at least for me) and it is much better to simply estimate the difficulties from testers' results. For the GR, I think that testers' feedback greatly improved the quality of problems A, B, and C (which had some serious flaws before the testing phase).

Scheduling the round

At a certain point, the coordinator will schedule your round. The process is simple: choose a day that is OK for the author and check whether there is an overlap with some other contest.

For AGC44, the round was scheduled "as soon as possible", while GR11 could have been scheduled much earlier (and was delayed mostly to hold a Global Round in October). Waiting was not a problem at all for me (actually it gave me more time to polish the statements/editorials).

Writing editorials

Once the problems are prepared and everything seems to be working (i.e., you expect no more changes to the statements), you should start writing the editorials.

For me, writing the editorials carefully is the only way to be sure about the correctness of the solutions of hard problems.

In my (not so) humble opinion, the quality of the editorials for AGC44 and GR11 is above average, and thus I feel entitled to give some more precise advice on how to write editorials.

The editorial should be a pleasure to read, give insight into the solution, and provide a detailed proof of the solution along with a careful description of the algorithm. Even better if it contains some additional comments on the problem. Just writing a sketch of the solution is much easier than writing a proper editorial, but it gives the reader much less.

If you are not used to academic writing, producing a decent editorial might be nontrivial. There are two very common mistakes to avoid: skipping hard steps and writing in terrible English.

  • Skipping hard steps is the most natural thing when writing proofs. We perceive as boring and involved (and so we skip) what we do not fully understand. You should learn to detect this feeling and do exactly the opposite. The errors (even in math papers) very often lie in an "obvious", "easy", or "straightforward" argument that is skipped.
  • A large part of the competitive programming community (including myself) does not speak English as their first language, and nobody expects you to write perfectly. Nonetheless, you should strive for correctness and clarity. Use simple, short sentences and double-check for typos. If possible, ask someone to proofread the editorials.

In my case (in particular for GR11) I had plenty of time to write the editorials and I decided to write them carefully. I also decided to write hints, which were much appreciated. Writing hints takes a negligible amount of time, hence I suggest that every author provide them.

Final checks before the contest

On both platforms, before the contest the coordinator focuses on your contest in order to check everything and/or translate the statements.

For AGC44, the final check lasted an immense number of hours (I cannot check since my Slack account expired, but something like 12 hours). During this time, rng_58 and I double-checked every single detail of the contest. It was painful, but in the end I felt safe about everything.

For GR11, the final check was more shallow but still accurate (which makes sense, since Polygon automatically checks a lot of things). We both checked the statements (and antontrygubO_o translated them into Russian) and all the validators and checkers. antontrygubO_o made some minor changes to the statements to improve readability. In this phase there was some friction between me and antontrygubO_o, as some of his changes did not respect my right as the author to choose the style of the statements (he changed a title and inserted a joke; both changes were reverted).

As my experiences show, any problemsetter should expect to be very busy in the few (or not so few) hours just before the contest.

One last thing: it is absolutely normal to be a bit stressed just before the round. Don't worry, it will go smoothly!

During the contest

On both platforms, during the contest I had to answer questions (together with the testers and/or the coordinator). In both cases I was expecting a large number of questions, but the real amount was rather small (18 for AGC44, 69 for GR11). Hence, during the round I chilled watching the standings.

I enjoyed watching GR11 much more, for three reasons.

  1. The number of participants was much larger, which is cool.
  2. I was not stressed by the fact that one of the problems was known (AGC44 D).
  3. I had fun interacting with antontrygubO_o during the contest. Indeed, the chat was friendlier with him than with rng_58 (this is absolutely not a criticism of rng_58 and was not an issue).

After the contest

Finally the contest is finished and you can relax... not yet! It is now time to post the editorial, answer the post-contest comments, and list the winners in the announcement. Then you can chill and watch Errichto's stream on the contest!

Let me spend a couple of words on the feedback one receives after the contest. It is very likely that someone will not like your contest. It is also very likely that someone will hate your contest. It is also very likely that someone will insult you or your problems. Haters gonna hate; ignore them.

On the other hand, take seriously all concerns that are properly expressed and raised by experienced participants. It is very hard to get honest feedback on the problems from contestants (even harder if the feedback is negative), hence don't waste the feedback provided by negative comments.

In order to receive some more feedback, I personally asked a couple of LGMs after my contests.

Thanks for reading

Thank you for reading, I hope you enjoyed it.

It goes without saying that if you have any questions about problemsetting, you should ask them in the comments. I will answer all the interesting questions. If you have some problemsetting experience, feel free to share in the comments whether your experience is similar to what I have described.

Since I really care about this blog (mostly because I would have loved to read something like this in the past), I will try to keep it among the recent blogs for some days even if it does not generate many comments (which, sadly, is the only way to keep blogs visible on Codeforces).


By dario2994, 4 years ago, In English

For the fifth time, the Italian national contest (valid for the selection of the Italian IOI team) will be mirrored into an online contest. The contest is primarily intended for high school contestants, but everyone is welcome to participate!

1. The problem statements will be available in both English and Italian.

2. Tasks will be IOI-like (with graders) and you will have 5 hours to solve them.

3. The languages allowed are C and C++.

4. The time window for the practice contest (featuring original problems of the same level as the main contest) will start on 2020 November 22nd, 00:01 CET and will end on 2020 November 24th, 23:59 CET.

5. The time window for the main contest will start on 2020 November 25th, 18:00 CET and will end on 2020 November 26th, 18:00 CET.

The contest timing will be USACO-like: you can decide when to start your 5-hour time window (after logging in), but the contest will end at the given time regardless of your time window.

If you want to participate, you must:

  1. Visit the contest website: https://mirror.oii.olinfo.it
  2. Click "register", fill out the form, click the register button, and then "back to login"
  3. You can now log in with the same username and password you used to sign up
  4. If the login is successful you will be ready to participate, just wait for the contest to start! (And maybe bookmark the page, so that you can quickly get back to it when the contest begins)
  5. When the contest starts, you will see a red button. Click it when you want to start your 5-hour time window!
  6. Good luck and have fun!

Ranking

The ranking of the online contest will be available at https://mirror.oii.olinfo.it/ranking when the contest starts.

Training after the contest

After the end of the contest, the tasks will be uploaded to the Italian training website (also localized in English), in the "task & quiz archive" section, where they will be available for online evaluation (after registering on the website).


By dario2994, 4 years ago, In English

General comments

Broadly speaking, problems A-B-C-D were "div2 problems", while F-G-H were "strong grandmaster problems" (with E in between). I did not expect anyone to solve all the problems and thus I decided to set the scoring so that F+G=H (so that maybe someone would solve H).

Many of the problems (A, C, D, E, G) admit multiple solutions. Sometimes the core of the solution is the same (C, D) and sometimes the solutions are truly different (A, E, G).

If you are an experienced participant, I would like to hear your opinion on the problems. Feel free to comment on this post or send me a private message.

Overview of the problemset

Hints

A

B

C

D

E

F

G

H

Solutions

A

B

C

D

E

F

G

H

Editorial of Codeforces Global Round 11

By dario2994, 4 years ago, In English

Hi!

On Oct/10/2020 17:50 (Moscow time) we will host Codeforces Global Round 11.

This is the fifth round of the 2020 series of Codeforces Global Rounds. The rounds are open and rated for everybody.

The prizes for this round are as follows:

  • 30 best participants get a t-shirt.
  • 20 t-shirts are randomly distributed among those with ranks between 31 and 500, inclusive.

The prizes for the 6-round series in 2020:

  • In each round top-100 participants get points according to the table.
  • The final result of each participant is equal to the sum of the points they get in their four best rounds.
  • The best 20 participants over the whole series get sweatshirts and placement certificates.

Thanks to XTX, which supported the Global Rounds initiative in 2020!

Problems for this round are set by me. Thanks a lot to the coordinator antontrygubO_o, to the testers dacin21, Giada, H4ckOm, DimmyT, Retired_cherry, oolimry, nkamzabek, Prakash11, Tlatoani, coderz189, nvmdava, stack_overflows, dorijanlendvaj, and to MikeMirzayanov for the Codeforces and Polygon platforms.

The round will have 8 problems and will last 180 minutes.

The (unusual) scoring distribution is: 500-750-1000-1000-1500-2250-2250-4500.

Why such a scoring distribution?

I hope you will have fun solving the problems!

UPD: The round is postponed by 15 minutes because just before the round there will be a 10-minute unrated testing round. Considering the recent Codeforces downtime, this is a measure to make sure that there are no technical issues during the real round.

UPD2: There were no technical issues during the testing round, hence the real round will happen. Good luck and see you in the scoreboard!

UPD3: I hope you liked the problems, here is the editorial.

UPD4: Congratulations to the winners!

  1. Benq
  2. yosupo
  3. ksun48
  4. Um_nik
  5. ecnerwala
  6. sunset
  7. maroonrk
  8. zscoder
  9. SirShokoladina
  10. gamegame

UPD5: And congratulations to Petr, who upsolved H before the editorial was posted! You made me happy!


By dario2994, 5 years ago, In English

We will hold AtCoder Grand Contest 044. This contest counts for GP30 scores.

The point values will be 400-700-1000-1100-1300-2400.

We are looking forward to your participation!

Edit: Thank you very much for your participation, I hope that you liked the problems!

First of all, congratulations to the winners:

  1. tourist
  2. jqdai0815
  3. mnbvmar
  4. FizzyDavid
  5. taeyeon_ss

Then, I want to say thank you to the testers dacin21, Rafbill, reew2n, tempura0224 and obviously to the coordinator rng_58 who let me organize my first online contest!

Here is the editorial.


By dario2994, 5 years ago, In English

While trying to squeeze my solution of 1336E2 - Chiori and Doll Picking (hard version) into the time limit, I encountered an unexpectedly large difference in execution times among the various C++ compilers offered by Codeforces. Since I think it is worth knowing, I am sharing this discovery.

Consider the following minimal working example (it generates $$$2N=30$$$ random $$$56$$$-bit numbers and computes the sum of the bits of the xors of all the subsets of the $$$30$$$ numbers).

Code

The important lines are the following, where __builtin_popcountll and the xor operation are executed $$$2^{30}$$$ times.

 ULL res = 0;
 for (int i = 0; i < (1<<N); i++) for (int j = 0; j < (1<<N); j++) {
     res += __builtin_popcountll(c[i]^d[j]);
 }

Executing the above program in Codeforces custom invocation yields these execution times:

 Compiler                         Execution Time
 GNU GCC C11 5.1.0                4040 ms
 GNU G++11 5.1.0                  4102 ms
 GNU G++14 6.4.0                  1123 ms
 GNU G++17 7.3.0                  1107 ms
 GNU G++17 9.2.0 (64bit, msys 2)  374 ms

Notice that the 64-bit native compiler produces a much faster executable (and notice also that there is quite a difference among the other compilers). Hence, next time you have to optimize a solution with a lot of bit operations on 64-bit integers (and, in fact, this situation is not so uncommon), consider using the compiler GNU G++17 9.2.0 (64bit, msys 2).

It might be that the differences in execution times are due to the way I wrote the program (maybe the wrong pragmas? Maybe a certain idiom preventing some compilers from optimizing? Maybe something else?). If this is the case, please enlighten me!


By dario2994, 6 years ago, In English


For the third time, the Italian national contest (valid for the selection of the Italian IOI team) will be mirrored into an online contest. The contest is primarily intended for high school contestants, but everyone is welcome to participate!

The contest timing will be USACO-like: it will be open for 24 hours but you will be able to compete for up to 5 hours (you can decide when to "start" your time window, after logging in). Participation will be available upon registration on the Italian training website (also localized in English).

1. The problem statements will be available in both English and Italian.

2. The time window will start on 2018 September 14th, 10:00 CEST and will end on 2018 September 15th, 10:00 CEST.

3. Tasks will be IOI-like and you will have 5 hours to solve them.

4. The languages allowed are C and C++.

Note: you can decide when to start your 5-hour time window, but remember that the contest will end at 10:00 CEST regardless of your time window!

If you want to participate, you must:

  1. Visit the training website: https://training.olinfo.it
  2. Click "Sign up" (if you haven't already done it last year!)
  3. Fill out the form and then confirm
  4. Visit the contest list: https://training.olinfo.it/contests
  5. Click on the OII 2018 National contest list entry
  6. Log in with the same username and password you used to sign up
  7. If the login is successful you will be ready to participate, just wait for the contest to start! (And maybe bookmark the page, so that you can quickly get back to it when the contest begins)
  8. When the contest starts, you will see a red button. Click it when you want to start your 5-hour time window!
  9. Good luck and have fun!

Ranking

The final ranking of the online contest is available here.


By dario2994, 6 years ago, In English

I have a bunch of questions about the Code Jam finals that are not addressed in the terms and conditions.

  1. During the contest, do contestants have access to their own code library?
  2. During the contest, do contestants have access to the internet? (I guess the answer is almost surely no)
  3. The terms say "All events related to the final round are open only to contestants, except that you may bring a guest to the reception." What does this mean? Can guests be with contestants at all times apart from during the contest?
  4. For how many days are there events with required attendance? (This likely varies each year; just give an estimate.)
  5. Is there a better place to ask all the previous questions?

For completeness, let me specify that I am asking 1 and 2 because I don't even know how to write a max-flow without my beloved library. Of course I am asking 3 and 4 because I would like to bring a guest who might not enjoy being alone for a whole week.

Полный текст и комментарии »

  • Проголосовать: нравится
  • +53
  • Проголосовать: не нравится