Recently, the number of algorithms and data structures we use in competitive programming has been growing rapidly. That's a nice thing: with more algorithms, the variety of possible problems gets wider, and we can enjoy more problems.
On the other hand, before reaching the ad-hoc, thinking-oriented part of this competition, we have to spend more and more time learning algorithms. Sometimes a problem asks for matching on general graphs; you have to find a paper describing it, read it, and implement its really complicated algorithm. Or sometimes you have to spend time tuning your library for a constant factor. Or sometimes you use multiple pieces of pre-written code together, the variable names collide, and you get annoyed.
Until now, I basically rejected all problems that require pre-written code for complicated algorithms, because I don't like these things. For example, we have never used segment trees with lazy propagation in our contests. However, this way we lose otherwise interesting problems, and it may limit the variety of problems.
What can we do about this situation? There is a great example that already solved this issue: C++ STL's set. It actually hides a monstrous data structure inside, but thanks to set's interface we can use it as an oracle, and it can be used in many tasks.
For these reasons, we decided to prepare oracles of various algorithms on our side and enable contestants to focus on the interesting part of problems.
All the code was implemented by yosupo, and testing was done by rng_58, maroonrk, and DEGwer. We wanted to make sure that you can use the libraries as oracles, so we prepared a detailed document that describes their usage.
As an example, here is the code that computes the convolution of two given arrays:
#include <atcoder/convolution>
#include <cstdio>
#include <vector>

using namespace std;
using namespace atcoder;

int main() {
    int n, m;
    scanf("%d %d", &n, &m);
    vector<long long> a(n), b(m);
    for (int i = 0; i < n; i++) {
        scanf("%lld", &a[i]);
    }
    for (int i = 0; i < m; i++) {
        scanf("%lld", &b[i]);
    }
    vector<long long> c = convolution(a, b);
    for (int i = 0; i < n + m - 1; i++) {
        printf("%lld ", c[i]);
    }
    printf("\n");
    return 0;
}
As you can see, it makes the code much cleaner. Of course, this library is installed on the AtCoder server, and you can use it in our contests.
Don't misunderstand us — we are not trying to promote librarish problems. It's the other way around. We are trying to make librarish problems less librarish.
In AGCs/ARCs, until now, we chose problems by weighing the amount of thinking against the amount of implementation, including the library part. From now on, we will measure the amount of implementation excluding the library part, but that's the only change. For example, we won't use "paste a segtree, then do more implementation after that" kinds of problems. We may use "think a lot, paste a segtree, add a small amount of code, and that's all" kinds of problems. Of course, I expect the majority of our problems will still be non-library problems (except for the thematic contests announced below). We will keep the "easy implementation" rule in the future; the next admin, maroonrk, even says that, to guarantee this, he is planning to write his solutions to all problems in Python.
Links to various places:
- Download the codes and documents.
- Solve practice problems. (We were having some judge trouble; it's fixed now!)
- On the 20th and 26th, we'll hold ACL Contest 1 and ACL Contest 2.
The two ACL contests will be ARC-rated (rated for 1200-2799, 150 minutes, though we may change the lower bound / duration later). Even though the intended solutions of the majority of problems in these contests use ACL, the main part will be the thinking part. These contests may contain some dummy tasks that are irrelevant to the library, so don't try to reverse-engineer problems like "ok, maybe this task requires that library, so the solution should be...".
Now we have two questions for the community:
- The first version of ACL only contains relatively basic algorithms that many top people already have. Should we include more advanced algorithms (like matchings on general graphs)? Our main concern is the unfairness against Java users.
- What to do with geometry? Currently we are not sure what's the best way to handle precision issues.
You guys are amazing! Thank you for this library and everything you do!
This is a really great effort. I can't appreciate it enough. I think it will also help a lot of beginners/intermediates get better at CP.
For #1, I think it would be a great idea to include advanced algorithms. I just looked at ACL's code and find it highly readable. I'm sure an experienced Java user wouldn't have a problem with writing their own templates/algorithms. But if this is still a concern, I'm sure there will be quite a few Java users from our community who might be willing to help with a Java version of the ACL library.
Also, it might be a good idea to upload the library and documentation on a public Github repository. This would also allow you guys to constantly make updates/additions to it, and the users can always sync to the latest branch easily.
There is only one thing to say.
rng_58 orz
Suppose you give a problem on suffix arrays. Will you set limits such that only an $$$O(n)$$$ suffix array passes?
I'm not considering doing so. In any case, I'll write Python solutions to all problems, so the time limits can't be that strict.
This won't be a surprise, but to me it doesn't sound like a good idea. Practicing before the competition and preparing your library are both things that you should do, and it isn't surprising that better-prepared people take better positions.
"Sometimes a problem asks matching on general graphs; you have to find a paper describing it, read it, and implement its really complicated algorithm. Or sometimes you have to spend time tuning your library by a constant factor. Or sometimes you use multiple pre-written codes together, the variable names collide, and get annoyed." — this doesn't sound serious to me. Really, colliding variable names? If you want to win a competition which involves general matching, you should have it implemented anyway.
Also, does this mean that we'll finally see some suffix automatons? No: usually you need to know how to use these algorithms, and if you do, then you most likely already have them implemented, so you still won't set such problems, because they still require knowledge. Let's be serious: how many people who regularly solve even simple FFT problems don't have FFT implemented (or at least don't know where to find it fast)? The trick with common algorithms isn't about having prepared code, but about knowing how to use them, which usually implies having prepared code and definitely implies general preparation.
What about problems that modify these algorithms, like these: 575C - Party, 1383F - Special Edges? Rollback on augmenting paths? Wow, this requires pre-written code (written correctly, i.e. in a modifiable way), knowledge of how to use/modify it, and some amount of thinking. Can't be; insta-rejected.
So what you really don't like is requiring some specific knowledge, because the argument about having something badly implemented isn't very good, imo. Having some knowledge doesn't differ that much from knowing some common tricks, which of course turn out to be useful in your problems as well as in everyone else's, and both of them require preparation.
What about lazy propagation? I'm not sure I can imagine a template for it. It can be modified in so many interesting ways. Of course, exposing the code for modification would help, but I can't imagine passing some lambdas working in every situation.
I also can't wait for more problems where FFT is involved, your implementation doesn't pass while somebody else's does, and then you make a surprised Pikachu face. Let's reverse the situation: will you make the time limits tighter because you think your implementation is the best (hint: it isn't)? Then the game changes to knowing your library instead of knowing the algorithms.
There is also the fun part. Collecting the library was always a really funny thing, like collecting stuff. You browse the internet (of course, not during the contest), you learn, your library grows and grows, you are proud of it, and one day you are rewarded during a contest for this preparation. And you want to kill that.
Edit: I've just downloaded your code. FFT and flows make sense, but do you really consider a Fenwick tree, SCC, or DSU something that needs to be pre-written? Also, you of course want it implemented nicely, with assertions and so on, but afaik those make the code a bit slower (thinking about DSU now). So with your library somebody wouldn't be able to squeeze in an extra logarithm in the complexity, while with a normal library he could, and you'll probably have to make the time limits higher because of the slower library, implicitly allowing worse complexities than yours. I'm not sure that's the right way.
As the implementer, I'll object only to your last paragraph. (Actually, I personally strongly agree with "Collecting the library was always a really funny thing" and some other parts.)
Do assertions really affect performance seriously? I don't think so (and as a tip, you can remove all asserts with #define NDEBUG). I also worked on the constant factors of the algorithms, and I strongly believe that this library is fast, at least better than the average of others' libraries (if not, I'll be happy if you tell me).
Well, ok, tiny differences in constant factor aren't really common and don't matter that much, but adding flags to the library that you can't see while submitting doesn't look cool. On the other hand, I think the opposite about the Library Checker: if you can browse the code and prepare it before the contest, then it's cool; it's one of many resources, so we can just treat it as an additional, educational one.
I think the idea that everyone should prepare their own library to the point where they have all the implementations they know/need is idealistic and is in practice false. Even today, a lot of people just use implementations from KACTL, so I'm pretty sure it's not true that everyone builds their own library. On the other hand, having these centralized community-built libraries allows us to collaboratively build higher-quality and higher-performance code, which I think is a good thing for everyone. (Along those lines, I would love it if the AtCoder library lived in some public git repository.) It's also a fact that many people know algorithms which they don't have prewritten implementations for, and I think it's a shame to disadvantage those people. I would rather that everyone has access to black-box implementations than that some people copy black boxes while others spend time in-contest re-implementing well-known algorithms.
I think problems modifying common algorithms are exactly the right counterbalance; they let you test if people actually know the algorithms used by their library, and it's how you encourage people to learn standard algorithms/how to implement them themselves. (Please set more of those problems!)
Re lazy propagation/augmented seg-trees: it's true that it's hard to design an interface that allows all of the flexibility of lazy propagation, but I think it's possible. That said, this is a great example where you have to understand the internals/implementation of the data structure to solve harder DS problems, which is also a good thing. I think either case is pretty good.
Re perf: I think we already have this problem today, except that when testing today, we only have the testers' libraries as benchmarks, which seems like an even worse sample. I would also expect community-built libraries to get closer-to-optimal performance as more people contribute.
I agree that building a library is fun, but I think it's even more fun to build something high-quality enough that others want to use it. I think if people like writing library code, hacking on the community library would be a great thing to do.
I'm so tired of people saying "everyone competing at a serious level already has a library". I was top-1 on CF (for a week, obviously) two years before I started using any prewritten code. I'm not talking about the other parts of your comment; just this one thing bothers me very much.
Oh yeah, FFT is not "library-only" algorithm by any means. FFT is shorter than segtree.
Here's one way to implement a template for lazy propagation
IMO, there are much stronger reasons in favor of developing such a library:
This is amazing, but I also agree with Radewoosh that there are some issues. E.g., now AtCoder can't use a problem that requires convolution on real values?
Only use problems that are solvable with integers, like convex hulls of points or linear functions.
I agree with both of you from the perspective of improving AtCoder in the future, but your
is imo irrelevant since, as stated in the blog,
and therefore it wasn't allowed before (at least, they were trying to follow those rules).
What I mean is that with this library included, the number of tasks allowed in AtCoder contests has strictly increased.
They tried to avoid library algorithms, but not to completely discard them. I required FFT in one problem in AGC 47.
Yes, a tutorial problem on primitive roots and FFT is OK, and segment trees with lazy propagation are not, because they "require pre-written codes of complicated algorithms". Honestly, I'm happy with this project per se. But the article just killed me.
Segment trees with lazy propagation require pre-written code? Come on, do you count "you need some experience implementing it beforehand" as "requires pre-written code"? I wouldn't even call it complicated, as it's been around for years and is more common and much more accessible to a broad audience than the aforementioned FFT.
Upd: sorry, it seems that I didn't get the sarcasm, right? :(
Yeah, this is exactly what I was concerned about as well. I was trying to compare ideal worlds, but I guess you can't avoid exceptions.
If you say including matching on general graphs would be unfair to Java users, then it is already unfair to Java users (and of course, Python users, D users, ...).
It would be amazing to have Python bindings for this library.
CPython libraries have always been pretty good on AtCoder. You have networkx, numba, numpy, scipy, and sklearn. For example, networkx alone adds every graph algorithm you can think of. Numpy has FFT. And I have already managed to use scipy.optimize.minimize to cheese problems on other platforms before.
I am playing with the AtCoder Library Practice Contest problems, and it turns out the networkx algorithms have terrible constant factors. Some of them improve if you can run them in PyPy, but the libraries aren't officially installed there. For pure-Python libraries like networkx you can hack in the PYTHONPATH, though:
PyPy AC: https://atcoder.jp/contests/practice2/submissions/16587305
CPython TLE: https://atcoder.jp/contests/practice2/submissions/16587319
(But of course it would be nicer if rng_58 or other AtCoder admins installed these packages officially, so we don't need the PYTHONPATH hack, which doesn't work for other libraries like numpy.)
Pypy is not always faster and somehow manages to be slower for networkx.bipartite.maximum_matching:
https://atcoder.jp/contests/practice2/submissions/16587795
https://atcoder.jp/contests/practice2/submissions/16587790
Maybe they can invite uwi to help with Java solutions.
Where were you when green coders were struggling to understand Chinese Remainder Theorem
Isn't it going to be confusing? Codeforces doesn't have this feature, but now suddenly AtCoder does.
I think it's fine; there's an expander/inliner included in the library, so you can probably use it on non-AtCoder judges (though some judges have rules against code you didn't write yourself).
On a more meta level, I'm really happy that AtCoder is doing a bunch of experimentation with libraries/platforms. It's nice to have a place to try these new/modern changes.
It's not going to be confusing as of now. But it will definitely be confusing if codeforces / other OJs decide to build their own library as well. Having a common library which a lot of people use is wonderful, as it's going to make reading others' solutions much easier.
Any instructions for using this on my machine? What does expander.py do exactly? Maybe just create a README file.
EDIT: thanks, reading index.html and running expander.py --help are indeed useful.
Here you have a useful blog.
Could you open the document_en/index.html file? There is a document about how to install it, how to use expander.py, and so on.
The documentation contains some instructions for including it (document_en/index.html).
You can run expander.py --help. It looks like you just use expander.py source.cpp --lib <parent_of_atcoder> (or you can omit --lib and it tries to guess from environment variables). It'll output to combined.cpp. You can also use -c or --console to output to stdout, like expander.py source.cpp --lib <parent_of_atcoder> -c > my_combined.cpp.
I guess it's related to the topic, so: you can find tourist's library here.
We published the public GitHub repository: https://github.com/atcoder/ac-library
If you find any bugs, please reply to this comment. (Of course, a GitHub issue is also okay.)
Minor mistake in maxflow.html of document_en: function prototype of change_edge should be
instead of
In min_cost_slope of document_en/mincostflow.html: just before Constraints, it should be flow_limit instead of flowlimit
In apply of document_en/lazysegtree.html, the operation should be mapping instead of op_st.
In the constraints of min_left and max_right in document_en/lazysegtree.html, it should be f(e()) = true instead of f(e_s()) = true.
In apply of document_en/lazysegtree.html, it currently says that it applies a[p] = f(a[p]), but this line of code does d[p] = mapping(f, d[p]). It should be a[p] = mapping(f, a[p]) instead of a[p] = f(a[p]) in the documentation.
When I include the fenwicktree library, I get the following error: [Error] 'enable_if_t' in namespace 'std' does not name a template type.
Some AtCoder users are implementing unofficial ports of ac-library to other languages. Here is the list that I know of. If you have more information, please let me know. (I'm afraid most of the projects currently use Japanese for discussion, but I think creating issues or PRs in English is always welcome!)
Someone created a spreadsheet
I don't think that a simple conversion of the Java library will be good enough for Kotlin. Kotlin allows much more than Java does. For example, you can use inner functions to avoid copying the same arguments over and over again. Look at this update function on a segment tree:
function on a segment tree:I agree that some expressions can be written simpler in Kotlin than in Java. However the users' first purpose is to make library available for their languages, and it does not matter at first how they are written. Of course I think it's better to be replaced with more idiomatic expressions afterwards. Anyway you can submit an issue to the repository.
I have a problem. I submitted the same code you gave for problem A in the practice contest, but it gives CE.
./Main.cpp:1:10: fatal error: atcoder/dsu: No such file or directory
compilation terminated.
My submission: https://atcoder.jp/contests/practice2/submissions/16901478
We have two languages, C++ (GCC 9.2.1) and C++ (GCC 9.2.1 with AC Library v1.1). Could you try the other one? (Note: we will perhaps merge those two and enable ACL by default.)
Kudos, yosupo! Very clean implementations.
IM(Humble)O, I believe that CP would be much more applicable outside of the community if we could transform it into library-based programming. Be it in my research or my work as a programmer, I find myself wishing skills transferred better from the encapsulated CP environment to them.
I think the right direction is to start with a super simple library, containing only highly useful data structures and algorithms, and to introduce new features only after it is widely used.
Hope this succeeds :)!
I was trying to use the segment tree template to solve the KQUERY problem on SPOJ, but ended up getting TLE even after debugging for hours. Can anyone help me with the approach to solving this problem with the ACL library? https://www.spoj.com/submit/KQUERY
Update: I was able to do it, but only after modifying the prod function of the ACL library. However, I think that's not a clean way to do it. Can someone tell me if it's possible without tweaking the ACL library?
Wow, that sounds very interesting :)
Hi! Do you think you could give it an update?
It's a great library and it gives me a better chance to challenge harder problems!
Does anyone know how to use the atcoder library with Visual Studio code on ubuntu?
Is the AtCoder library available on Codeforces? I tried submitting with C++20 and C++17, but I get a compilation error. Submission link: https://codeforces.net/contest/1389/submission/230066839
What should I do?
The AtCoder library has the expander.py script, which can substitute the include directives with the necessary chunks of code. I did this preprocessing and submitted your solution: https://codeforces.net/contest/1389/submission/236894387
What command line arguments should I pass?