Orlin's algorithm has a complexity of $$$O(nm)$$$, which makes it the asymptotically fastest maximum-flow algorithm. Why do most programmers use Dinic's algorithm or the Edmonds-Karp algorithm instead of Orlin's algorithm?
That said, I don't know anything about Orlin's algorithm.
I think toshinaritong can help you. He is a master in this topic.
rpeng has an answer here
Thank you. I had searched for this but hadn't come across that answer.
Well, if you don't use any heuristics, then I would say it is very much possible to kill the Edmonds-Karp algorithm by exploiting the fact that its BFS expands nodes level by level. Dinic's algorithm, however, is faster than Edmonds-Karp in most situations. Despite its "bad" worst-case time complexity, it is very hard to construct a bad test case of reasonable size for Dinic's algorithm. You can try; it is really hard to do.
So you could say that using Dinic's algorithm in programming competitions is a good trade-off between implementation difficulty and practical usefulness.
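For concreteness, here is a minimal sketch of Dinic's algorithm (my own illustrative version, not code from this thread; node counts and capacities are placeholders). The BFS builds the level graph and the DFS pushes a blocking flow along it, which is exactly why the level-exploiting attacks that kill Edmonds-Karp don't work here:

```
// Minimal Dinic's max-flow sketch (illustrative, not from this thread).
#include <bits/stdc++.h>
using namespace std;

struct Dinic {
    struct Edge { int to; long long cap; };
    vector<Edge> edges;       // edges[id] and edges[id ^ 1] are a forward/residual pair
    vector<vector<int>> adj;  // adjacency list of edge indices
    vector<int> level, it;    // BFS levels and the current-arc pointers

    Dinic(int n) : adj(n), level(n), it(n) {}

    void add_edge(int u, int v, long long cap) {
        adj[u].push_back((int)edges.size()); edges.push_back({v, cap});
        adj[v].push_back((int)edges.size()); edges.push_back({u, 0}); // residual edge
    }

    // BFS over residual edges: assigns levels, returns whether t is reachable.
    bool bfs(int s, int t) {
        fill(level.begin(), level.end(), -1);
        queue<int> q; q.push(s); level[s] = 0;
        while (!q.empty()) {
            int u = q.front(); q.pop();
            for (int id : adj[u])
                if (edges[id].cap > 0 && level[edges[id].to] == -1) {
                    level[edges[id].to] = level[u] + 1;
                    q.push(edges[id].to);
                }
        }
        return level[t] != -1;
    }

    // DFS pushes flow only along level-increasing edges (a blocking flow).
    long long dfs(int u, int t, long long pushed) {
        if (u == t || pushed == 0) return pushed;
        for (int &i = it[u]; i < (int)adj[u].size(); i++) {
            int id = adj[u][i], v = edges[id].to;
            if (level[v] != level[u] + 1 || edges[id].cap == 0) continue;
            long long got = dfs(v, t, min(pushed, edges[id].cap));
            if (got > 0) {
                edges[id].cap -= got;
                edges[id ^ 1].cap += got;
                return got;
            }
        }
        return 0;
    }

    long long max_flow(int s, int t) {
        long long flow = 0;
        while (bfs(s, t)) {
            fill(it.begin(), it.end(), 0);
            while (long long pushed = dfs(s, t, LLONG_MAX)) flow += pushed;
        }
        return flow;
    }
};
```

To use it: construct `Dinic d(n)`, add each directed edge with `d.add_edge(u, v, cap)`, then call `d.max_flow(s, t)`.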
Thank you. But I'm curious: why don't people use Orlin's algorithm? Is it too difficult, or is it something else? I couldn't find any easy tutorial for it, only some paper references.
The complexity of the algorithm is $$$O(nm + m^{\frac{31}{16}} \log{n})$$$. This is (in my opinion) a bogus complexity, one that can be appreciated only by computer science researchers and is probably not useful in any practical approach, mainly because of the general unwritten rule that complicated complexities hide big constant factors (as an exercise, try to come up with an algorithm of that complexity that does something, anything you want).
Take matrix multiplication as an example: why do we implement the "naive" $$$O(n^3)$$$ algorithm when so much research has shown that we could do it in roughly $$$O(n^{2.37})$$$?
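To make the constant-factor point concrete, here is the naive multiplication (a generic illustrative sketch, not code from this discussion). Its inner loop is a single multiply-add over contiguous memory, which is precisely the kind of tiny constant factor that asymptotically faster schemes give up:

```
#include <vector>

using Mat = std::vector<std::vector<long long>>;

// Naive O(n^3) multiplication of an (n x k) matrix by a (k x m) matrix.
// The i-l-j loop order walks each row of B contiguously, which is cache-friendly.
Mat multiply(const Mat& A, const Mat& B) {
    int n = (int)A.size(), k = (int)B.size(), m = (int)B[0].size();
    Mat C(n, std::vector<long long>(m, 0));
    for (int i = 0; i < n; i++)
        for (int l = 0; l < k; l++)
            for (int j = 0; j < m; j++)
                C[i][j] += A[i][l] * B[l][j];
    return C;
}
```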
Disclaimer (of sorts): keep in mind that I haven't read the paper or the algorithm, and I have virtually zero knowledge about it; I have only read its complexity. Maybe I'm just ignorant here, but I'm fairly sure what I said above is correct, for empirical reasons.
Thank you for your insight.