I found the following derivation using Lagrange multipliers for the solution of the [Core Training](https://code.google.com/codejam/contest/3274486/dashboard#s=p2&a=2) problem in Code Jam this year. I liked this approach particularly because I don't get to use Lagrange multipliers very often in programming competitions.↵
↵
Let $ \mathbf{p^0} = (p^0_1, p^0_2, \ldots, p^0_n) $ denote the initial vector of success probabilities that we are given, and let $ \mathbf{p} = (p_1, p_2, \ldots, p_n) $ be an arbitrary vector of success probabilities. Let $ S(i, \mathbf{p}) $ denote the probability of exactly $ i $ cores succeeding given the success probabilities in $ \mathbf{p} $. Then we are trying to maximise
$$ f(\mathbf{p}) = \sum_{i \geq K} S(i, \mathbf{p}) $$↵
↵
subject to the budget constraint
$$ g(\mathbf{p}) = \sum_{i} p_i = u + \sum_{i} p^0_i $$
where $ u $ is the total amount of probability the problem allows us to add.
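
Before getting to the Lagrangian, it may help to see how $ S(i, \mathbf{p}) $ and $ f(\mathbf{p}) $ are evaluated in practice. Here is a minimal sketch in Python (my choice of language, not necessarily what the original solution used) of the standard $ O(n^2) $ DP over the cores:

```python
def success_distribution(p):
    """Return S where S[i] is the probability that exactly i cores succeed,
    given the vector of success probabilities p (the Poisson binomial DP)."""
    S = [1.0]  # with zero cores, exactly 0 succeed with probability 1
    for pi in p:
        nxt = [0.0] * (len(S) + 1)
        for i, s in enumerate(S):
            nxt[i] += s * (1 - pi)   # core fails: success count stays at i
            nxt[i + 1] += s * pi     # core succeeds: count goes to i + 1
        S = nxt
    return S

def f(p, K):
    """Probability that at least K cores succeed."""
    return sum(success_distribution(p)[K:])
```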
↵
By using the Lagrangian method, we get that an optimal solution must satisfy↵
$$ \nabla_{\mathbf{p}} f(\mathbf{p}) = \lambda \cdot \nabla_{\mathbf{p}} \left(\sum_{i} p_i\right) = \lambda \cdot (1, 1, \ldots, 1) $$↵
or it must be on the boundary (but we'll get to that later).↵
↵
#### Lemma↵
$$ \frac{\partial f}{\partial p_i} = S(K - 1, \mathbf{p}_{-i}) $$↵
where $ \mathbf{p}_{-i} $ is the vector of probabilities with $ p_i $ removed, i.e. $ (p_1, p_2, \ldots, p_{i - 1}, p_{i + 1}, \ldots, p_{n}) $. That is, the partial derivative of $ f $ with respect to $ p_i $ is the probability that exactly $ K - 1 $ of the cores succeed, not including core $ i $ itself.↵
↵
#### Proof of Lemma↵
This is a somewhat well-known derivation, but I have included it for completeness. Conditioning on whether core $ i $ itself succeeds, we can write
$$ \frac{\partial f}{\partial p_i} = \frac{\partial}{\partial p_i} \left[ p_i \left( \sum_{j \geq K - 1} S(j, \mathbf{p}_{-i}) \right) + (1 - p_i) \left( \sum_{j \geq K} S(j, \mathbf{p}_{-i}) \right) \right] $$
which simplifies to↵
$$ \frac{\partial f}{\partial p_i} = \left( \sum_{j \geq K - 1} S(j, \mathbf{p}_{-i}) \right) - \left( \sum_{j \geq K} S(j, \mathbf{p}_{-i}) \right) = S(K - 1, \mathbf{p}_{-i}) $$
(End proof.)↵
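
As a sanity check, the lemma is easy to verify numerically by comparing a finite difference against the DP above. (The probabilities, $ K $, and the index $ i $ below are arbitrary choices for illustration, and `f` and `success_distribution` are the helpers sketched earlier.)

```python
p = [0.3, 0.5, 0.8, 0.9]
K = 2
i = 1                                   # differentiate with respect to p_i
eps = 1e-6

p_plus = p[:]
p_plus[i] += eps
numeric = (f(p_plus, K) - f(p, K)) / eps   # forward difference of f

p_minus_i = p[:i] + p[i + 1:]              # the vector p with p_i removed
analytic = success_distribution(p_minus_i)[K - 1]   # S(K - 1, p_{-i})

print(numeric, analytic)                # the two values agree to ~1e-5
```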
↵
To satisfy $ \nabla_{\mathbf{p}} f(\mathbf{p}) = \lambda \cdot (1, 1, \ldots, 1) $, the most obvious thing to do is to set all the $ p_i $ equal to each other. However, this is not always possible, since we cannot reduce any $ p_i $ below its initial value $ p^0_i $. Taking the boundary conditions into account gives us three choices for each core $ i $:
↵
1. We leave $ p_i $ at its initial value $ p^0_i $.↵
2. We increase $ p_i $ to $ 1 $.↵
3. We try to equalise $ p_i $ with some of the other $ p_j $ that are not on the boundary.
↵
This observation is sufficient to restrict the solution space to the point where we can search over it. (Well, we still need some efficiency arguments, like "only increase $ p_i $ up to 1 if it is already big" and "only try to equalise $ p_i $'s that are close to each other", but you get the idea.) A brute-force sketch of the search follows below.
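
Here is a hedged sketch of that search, reusing the `f` helper from above. It assumes (as the analysis suggests, though I have not argued it rigorously here) that after sorting, the cores pushed to 1 form a suffix and the equalised cores form a prefix, and it treats the budget $ u $ as an upper bound, which is safe because $ f $ is nondecreasing in every $ p_i $:

```python
def solve(p0, K, u):
    """Enumerate the candidate optima implied by the three choices above:
    push the top t cores to 1, equalise the lowest m of the rest at a
    common level, and leave everything in between at its initial value.
    ASSUMPTION: the suffix/prefix structure described in the lead-in."""
    p0 = sorted(p0)
    n = len(p0)
    best = 0.0
    for t in range(n + 1):
        cost = sum(1.0 - q for q in p0[n - t:])   # budget to push top t to 1
        if cost > u + 1e-9:
            break
        rest, budget = p0[:n - t], u - cost
        best = max(best, f(rest + [1.0] * t, K))  # candidate: spend nothing further
        for m in range(1, len(rest) + 1):
            level = (sum(rest[:m]) + budget) / m  # common equalised value
            if level > 1.0 or level < rest[m - 1]:
                continue                          # equalisation infeasible here
            if m < len(rest) and level > rest[m]:
                continue                          # core m should have joined the group
            best = max(best, f([level] * m + rest[m:] + [1.0] * t, K))
    return best
```

For $ n $ cores this evaluates $ O(n^2) $ candidates at $ O(n^2) $ each, which should be fast enough for modest $ n $; it is meant to illustrate the structure of the search, not to be a tuned solution.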
↵