Hola Codeforces!
I am happy to invite you to Codeforces Round 774 (Div. 2), which will take place on 04.03.2022 18:35 (Moscow time). Notice the unusual time! The round will be rated for participants with rating lower than 2100. You will have 2 hours to solve 6 problems. The problems were authored and prepared by me.
I would like to thank the following people who made the round possible:
- darkkcyan for coordination and help in problem preparation.
- KAN for translating the statements.
- antontrygubO_o, errorgorn, mjhun, wxhtzdy, generic_placeholder_name, Supermagzzz, vinfat, fishy15, phattd, MarcosK, TeaTime, Stepavly, SebaMarin, CarusoX, snowysecret, ak2006, DeMen100ns and helcsnewsxd for testing and providing valuable feedback.
- MikeMirzayanov for great Codeforces and Polygon platforms.
See you in the standings!
UPD:
Scoring distribution: $$$750$$$ — $$$1000$$$ — $$$1250$$$ — $$$2000$$$ — $$$2500$$$ — $$$3000$$$
Thanks to ak2006, video editorials for some of the problems will be available on his channel after the round.
UPD: Editorial
UPD: Congratulations to the winners!
Official winners:
Unofficial winners:
Good luck and high rating.
Good luck to all Ukrainian contestants
Why doesn't Ukrainian contestants boycott this Russian website?
use grammarly
how did you reach candidate master without common sense?
down! Charon here
Note The Unusual Timings (1 Hour Late)
GL and wishing you all +ve delta
gl ^ hf
good luck ^ have fun = 0
:)
This was such a hint and everybody ignored it smh
best of luck for all♥️
Good luck and positive rating!!!
why are you disliking him?
Mike Top. I wanted to make a joke
This will probably be my first contest in over a month...I'm awaiting -100 delta :P
same... I'm awaiting losing my green after my first contest :P gl
that wasn't true gonna get +20 :yayy:
let's go! i'm having my series of defeat too
i overslept so i didn't participate
Good luck everyone. Maybe this round will help improve people's mood at codeforces.
hello gagan, plz guide me as i have been stuck in the newbie-pupil phase for a long, long time and i don't see improvement. also provide tips for dp if any. thnx in advance.
The key to improvement is indeed just good old practice. Now, everyone should choose a practice regime which suits them well. I, just like you, was shuffling between pupil and newbie for a year. This year I started practicing more with a focus on solving questions rather than learning new algorithms. You only need a handful of algorithms like dfs/bfs, Dijkstra, dsu and segment tree to reach expert, but dp practice is very important. I practiced dp from the problemset with the tag filter, started solving from 1000-rated questions and gradually increased the rating. When I felt I could do 1600-rated dp questions pretty confidently, I started randomly doing questions of gradually increasing difficulty from 1000 to 1600.
Make sure you actually solve the questions and don't look at the editorial until you've solved them. If a problem seems impossible, try to read other people's code before checking out editorials. Do cp with your friends, discuss approaches and try to outrank your peers :p. This is what has worked for me till now, hope this helps you. Good luck!
I think DP is the most important thing. segment tree is not too important at pupil stage.
I was talking about the concepts which can get you to expert level.
To be fair, expert level doesn't need segment tree: binary search, greedy, simple DP, DFS/BFS and a little knowledge of graphs.
Maybe we have the opportunity to just focus on this round itself!
Look at the quality of the questions. My hands are too slow. Orz
Before the usual time was 16:35, then 15:35, then 11:35 ... What is the actual usual time ?
9:05 PM IST, really good for Indian contestants, we don't have to shift our dinner this time.
Tatakae
TATAKAE! TATAKAE!!
15:35 winter time, 16:35 summer time
ooo thanks
Are we going to fight against "Robot" in this round?
May C problem be solved by less than 2000 people. I want it to be an interesting one.
Bruh, this is every cyan's dream: that it has fewer than 2000 solves and we are among them. But sadly cheaters will never let it happen ;_;
I would say every pupil's dream too — in the last 3 contests I was in the top 1000 after A and B (top 100 last round), then somehow got stuck on C, and -∆, -∆, -∆
First argentinian contest! Congratulations MateoCV
Contest after a long time. Eager to participate.
Good luck everyone! Have a good contest!
You too my friend !
Argentinian round? Aguante el Diego, Talleres y FaMaF!
Good luck!
Unfriendly time for Chinese contestants.
you can't make everyone participate at a good time, right?
Of course. Good luck to all contestants around the world.
Dear contest, How do you feel today? Yet another person commented on your announcement page rn (me). Since I said hi to you, please give +ve this time to me. yk the usual. thanks in adv
Regards, that random user asking for +ve in rating
Very cool scoring distribution: 1000 — 750 = 1250 — 1000, 3000 — 2500 = 2500 — 2000
Damn bro, did you just invent arithmetic progression?
There is no "as a tester" comment.
SpeeeeedForcesssss
why? the number of people who solved C isn't high or too low
antonforces
D seemed near yet was so far.
Same with E :)
I haven't coded recursive code in 2 years lol good to remember it with problem C
I did it without recursion
How? Plz share your approach..
Store the factorial numbers <= 1e12 in an array (I think there are 13 or 14 of them).
Just iterate over all their subsets (by iterating a bitmask from 0 to 2^14) and subtract the chosen factorials from n (call the result temp). For each subset, count the number of set bits in temp; this is the number of powers of 2 required. Do this for all subsets and keep track of the minimum so far. Submission https://codeforces.net/contest/1646/submission/148374185
you mean factorials?
Solved it using bitmask dp on factorials <= 1e12 and pop count
Submission: https://codeforces.net/contest/1646/submission/148339954
I don't think there is any DP in your solution :p.
sorry used bitmasks to generate all possible subsets of size 15 :)
Why did the setters confuse us with -1 when the answer is always possible?
Just to check that we are smart enough to realize that. It's quite obvious that every number can be represented in binary form, so there is no chance for the answer to be -1. But I saw some people do handle that case ;_;
I think it's still the same complexity; the recursion can of course be done with a different approach.
I did with a mask-compressed DP...
It took me 2 WAs and 50 minutes to figure out that for long long integers __builtin_popcount doesn't work and we need to use __builtin_popcountll... :-/ I realized it just before clicking the submit button...
I am just happy that at least I figured it out before the contest ended :)
You should keep such things as a #define (macro), easier to remember.
I suggest using std::popcount(). The only issue with this function in terms of competitive programming is that it requires x to be of unsigned type, so we have to cast explicitly if x is long long. Of course, a simple macro solves this problem:
#define popcnt(x) popcount(uintmax_t(x))
5th test case of problem D...
it was so fun to write 15 nested for loops in problem C (p.s. I know a recursive solution would also work)
that means you don't know this
I mentioned fun, actually you don't know what I know so please read comments carefully next time
Imagine wasting 20 mins debugging E because you "simplified" $$$\frac{lcm(x_1, x_2, \ldots, x_k)}{x_1}$$$ to $$$lcm(x_2, \ldots, x_k)$$$ without thinking properly T_T.
Also, I can't exactly describe it, but CDE feel somehow standard yet fairly interesting at the same time. Maybe it's because they have extremely simple and natural formulations which don't feel like they've been forced to match an idea?
A nice contest for me!! I loved problem C. I would love to share my experience while solving it.
Initially after reading the problem, I immediately thought of solving it using subset generation through bitmasks, because I thought the number of factorials + powers of 2 would be around 30, but it turned out to be ~52. So I drifted a bit and thought of solving it using DP (the coin change approach), but that is not feasible either.
Solution :
Some more thinking: Hmm, the number of factorials up to 1e12 is ~15. Just generate subsets over them using bitmasks. The remaining number will be (N — (sum of factorials in the subset)). We use the fact that any number can be represented as a sum of powers of 2 (its set bits), so the number of set bits in the remaining number is the number of powers of 2 we need. So we just try every subset and find the minimum number of powerful numbers required.
you only need to bitmask factorials, then greedily choose powers of 2 for the rest.
Yup, that's what I did. I don't think you even need to greedily choose powers of 2 for the rest, because the set bits of the remaining number's binary representation already give the least number of powers of 2 required to represent it.
Logic for C?
Note that n can always be constructed from its set bits, since these are powers of 2.
So construct all possible sums of factorials (at most 2^14 of them, since there are only 14 factorials in range), subtract each one from n and count the bits of the difference.
The smallest number of set bits in the difference, plus the number of factorials in the sum, is the answer.
How to do F?
What the hell is test case 5 of D -_-
Either a large graph or something that kills the incorrect bipartite idea I guess.
Try the following graph if you don't get why bipartite is incorrect:
Thanks sir. got my mistake <3
There are cases where it is optimal to not take two neighboring nodes. Splitting the tree to a bipartite graph and then taking one of the two parts is not always optimal. The idea is similar to the well-known DP problem "Maximum sum over array such that no two elements are adjacent" but implemented on a tree.
2,3-dimethylbutane
My Idea for D :
I got WA on pretest 6. Not sure if the idea is wrong or my implementation My submission : 148381092 Any thoughts?
I tried the same thing (but failed pretest 5) :(
It is true that a good vertex (blue) must be surrounded by bad vertices (black), but it is not necessary that bad vertices be adjacent to good vertices. So if you two-color the tree, you wouldn't take into consideration cases where two adjacent nodes are bad.
Pretty sure it's finding the max independent set.
I did it in sort of dp style. Rooted the tree at 1.
dp[i][0] = <C,S> means ith node is not good and fill its children however you want such that max good vertices below my subtree is C with sum of edges as S.
dp[i][1] = <C,S> means ith node is good and fill its children however you want such that max good vertices below my subtree is C with sum of edges as S.
For transitions dp[i][1] depends on dp[c][0] where c is children of i since you are good, children can't be good.
dp[i][0] picks dp[c][0] or dp[c][1] according to which is best.
did it work ? i tried the same thing but kept getting WA in pretest 5 or 6
You also need to keep track of the cost in case dp[i][0] == dp[i][1].
Yeah, the reason for storing S in the dp is to handle the case of dp[c][0] == dp[c][1].
How does the problem change after removing all the leaves?
If there exists some tree for which bipartite doesn't work, I can just create a new tree by attaching a single node to each leaf and your soln won't work for it.
And try the following graph if you don't get why bipartite is incorrect:
Your intuition of weights being degree or 1 is correct, but your actual selection of nodes isn't optimal.
We want to select a maximum set of nodes, such that no two nodes are adjacent.
Lets root the tree arbitrarily and for a given subtree let $$$dp_{x, 0 / 1}$$$ mean the best answer we can achieve for this subtree if the node x is selected in the set or not.
Then $$$dp_{x, 1} = \sum_{v \in children(x)} {dp_{v, 0}} + 1$$$ since we can't have two adjacent nodes selected.
Whereas if we're not taking the current node, it doesn't matter if children are taken or not, so $$$dp_{x, 0} = \sum_{v \in children(x)} {\max(dp_{v, 0}, dp_{v, 1})}$$$.
And then the optimal answer is just $$${\max(dp_{root, 0}, dp_{root, 1})}$$$
But we also have to minimize the sum of degrees in our set, how do we do that?
We can notice, that the only choice we make is in $$$dp_{x, 0} = \sum_{v \in children(x)} {\max(dp_{v, 0}, dp_{v, 1})}$$$. Here we clearly want to take the one with the larger set of nodes, but what if both are equal? Then taking the one with the smallest adjacency size is optimal.
How you do this is just implementation details, my way of doing it is to store $$$dp_{x, 0 / 1} = {\text{max number of nodes, -(min adjacency size)}}$$$, then we can just continue to perform max in a similar manner to before.
Implementation — 148346500
I defined dp[i][0] = max good vertices if vertex i is good, and dp[i][1] for bad, with dp[i][0] = 1 + sum of dp[ne][1] over neighbours and dp[i][1] = sum of max(dp[ne][0], dp[ne][1]) over neighbours. Is it correct? I kept getting WA on either pretest 5 or 6.
I guess in dp[i][1] case if dp[ne][0] and dp[ne][1] are equal then how are you ensuring which one to pick to get minimum sum.
Yup, this is likely the issue.
One way to solve this is to store the dp value as a pair $$$<\text{max number of nodes}, -(\text{min sum of degree})>$$$, then taking max works as expected.
Oh I see. I get it. Thanks!
Wait, in your explanation should it be $$$dp_{x, 0} = max(dp_{v,0} , dp_{v,1})$$$ ?
Could someone share their approach for B ?
I tried solving it by sorting the array (minimize the Blue sum) and using prefix sum, but failed. I think this is because my approach colored all the numbers.
Use sum of two smallest numbers as small sum, and biggest number as big sum.
If both sums are ok ans=YES else add next smallest number to small sum, next biggest number to big sum. Repeat.
Well, I also sorted it first, and then kept on calculating Blue Sum from the start and Red Sum from the end of the array. Also, to maintain the count(Blue) > count(Red), I initialised the starting index to be 1(0-based indexing) while initialising Blue Sum to be equal to the first array element.
submitted C in the last minute
took more than 1 hour to realize brute-forcing all subsets of factorial values is feasible
but brute-forcing all subsets of 2-power values is infeasible which gave me a wrong intuition that the factorial counterpart is also infeasible
I hope in the future I can be more familiar with the thresholds of feasibility.
Why can't we use one factorial two times?
because we need distinct numbers
Got it now. I read the question again. It sucks when you miss an important point.
That would make it way harder so be glad about it
the problem description said distinct.
Even if it was feasible to brute force all powers of 2, how would you find the needed amount of factorials?
I don't know how to find the needed factorials except for brute-force, so I should have taken that approach initially. Unfortunately the infeasibility of brute-forcing 2-powers gave me the wrong intuition that brute-forcing factorials is also infeasible.
No Idea. That is why I was not able to solve C. Was expecting there is some other trick to solve the problem which I was not able to think.
There are not many factorials in range: only 14 numbers. So you can create only 2^14 sums from these numbers. Just try all of them.
Went through the same trajectory, except I took even more time to realise this and couldn't submit my code in time :( It also somehow escaped my mind that combinations of powers of 2 literally define every single number in exactly one possible way.
Thanks for the contest. I don't know why I did so badly on A and B, but after all I really liked the problems.
In C, why is it not feasible to find all the powerful numbers in a set, then repeatedly take the largest number that is at most n and subtract it from n?
For 184 the correct answer is 120 + 64 but your approach would yield 128 + 32 (edited from 56) + 24.
How did you get 56 here....
184 — 128 = 56. The point is that you can't pick the max number <= n.
But 56 won't be in set, so it would look like 184 → 128 + 32 + 24 (wrong approach)
Ah sorry, I meant 128 + 32. Thanks :)
what is pretest 6 of D? ಥ_ಥ
If you were doing a BFS solution from the leaves, it would fail. The optimal solution is reached by dynamic programming.
Can you please explain why bfs from leaves doesn't work?? Any test case???
I think D is just maximum independent set with least total degree, which can be solved by dp. I realised this, but could not reconstruct the solution from the dp before the contest ended.
How to solve E? I was looking for numbers with the same prime factors
Take 2, 4, 8, ... as an example. In the row starting with 2 you get 2^1, 2^2, ..., 2^M. In the row starting with 4 you get 2^2, 2^4, ..., 2^{2M}. So the exponents of 2 appearing in the cells are 1..M, then 2*1..2*M, then 3*1..3*M, ..., with as many rows as there are powers of 2 less than or equal to N. What you need to do is calculate how many distinct numbers are in this combined list. Sum up the lengths of such lists over every number that is not a power of another number. Don't forget 1.
Any counterexample on D solving with 2-coloring?
6
1 2
1 3
1 4
4 5
4 6
The output should be:
4 6
1 1 1 1 1 1
The idea is that although there can be no 2 adjacent good nodes, this does not mean that there can be no 2 adjacent bad nodes. Unfortunately I realized that only a short time before the contest ended.
Take all the leaves.
Hey,
In the first problem, why does this work:
cout << s/(n*n) << '\n';
but using
cout << (int)(s/pow(n,2)) << '\n';
gives wrong answer on test 2?
Because pow() returns a double, which may cause precision issues.
It turns out s/(n*n) is bounded by INT_MAX because the bounds of s depend on n. The problem was caused by the precision issue of pow(), as Aging1986 pointed out.
Tried dp in D but it didn't work... so I retried bipartite and failed again.
Dp does work in D, as long as you remember the secondary condition of min sum of degree.
Submission — 148346500
Yes, that’s why I failed.
Is it possible to output -1 in C??
no, any number can be represented as sums of powers of two (its binary representation).
Getting MLE on test 39 on problem D just because I used longs and a relatively high memory constant is the most unfair thing ever...
Auto comment: topic has been updated by MateoCV (previous revision, new revision, compare).
Auto comment: topic has been updated by MateoCV (previous revision, new revision, compare).
Nice contest!
Ratings updated preliminarily. We will remove cheaters and update the ratings again soon!
Would this idea work for D?
Have some structure that keeps pairs {rank of vertex, vertex} sorted, e.g. a set. 'Rank of vertex' is the number of the vertex's neighbors that have no weight assigned yet.
Now while there are vertices with no weight assigned do:
take vertex with smallest rank
for every neighbor of this vertex that has no weight — assign weight 1 to them
assign weight to our vertex — weight will be number of neighbors
for each neighboring vertex reduce their rank by one and update our structure accordingly
So this algorithm would work something like Kruskal's. I think the only difference is this weird updating of ranks.
It won't work
Edit:
Actually I don't know.
Edit2:
I've coded it — it doesn't find the minimal value correctly.