Hi!
Probably everyone remembers a case from practice when, after implementing a correct algorithm, you receive Wrong Answer. In such a situation, one thing that sometimes works is to submit the solution under another compiler.
Gerald and Nerevar have found a severe bug, which gives one more argument for the rule above.
Look closely: you are about to see a miracle!
#include <cstdlib>
#include <iostream>
#include <cstdio>
using namespace std;

int main() {
    int n;
    cin >> n;
    for (int x = 0; x <= n; x++) {
        if (n == x + (x + 1) + (x + 2)) {
            cout << x + 2 << " " << x + 1 << " " << x << endl;
            exit(0);
        }
    }
    cout << -1 << endl;
    return 0;
}
Compile it with -O2: g++ -O2 -o a a.cpp
Run it and enter 9: you will see -1.
You can try without -O2 or with another compiler; you will get the correct answer, 4 3 2.
Of course, I have submitted the bug to the GNU GCC tracker. The affected g++ versions are 4.7.2 through 4.8.2 (at least).
I have three points about it:
1. We can select ACCEPTED solutions (judged with g++), rejudge them without -O2 and under valgrind to make sure they are really OK. That would also make a great regression test: after each new release of g++ we could rejudge ~10000 such solutions and expect ACCEPTED. If a solution is no longer ACCEPTED, we can file a bug.
2. The Codeforces community has repeatedly found bugs in g++. There is a common opinion that the Microsoft compiler is not as good as g++, but do you remember bugs like this in Microsoft C++?
3. The described bug affects Codeforces. Be careful!
It works correctly for me, both with and without -O2. My g++ is old, though.
That's not really surprising if it's a regression in 4.8 or 4.9.
Both work for me too.
And yeah, I'm getting this same message when I check the version.
I remember there is a bug with std::reverse() under -O2. I rarely use it.
Nice!
Why is the result right when x starts at -1? ...
Why is the compile command not "g++ -o a a.cpp -O2"?
That's a typo.
I am an active Stack Overflow user, and every now and then I see code that demonstrates strange behavior in g++. I remember very clearly a case where compiling with optimizations caused abs(-1) to evaluate to -1. You can imagine what this could do to some programs!
I regret so much that I could not find that thread to link here.
I compiled the code with -O2 -S and viewed the assembly. I found that the generated code simply checks whether n equals 3: if yes, it outputs 2 1 0, otherwise it always outputs -1. The assembly code is here: https://gist.github.com/klogk/9994674. If the conditional expression is changed to 3*x+3, the code works fine. I guess the reason is a wrong optimization when evaluating such boolean expressions. I wonder why it wasn't found before.
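For reference, that inspection workflow is just the following (the output file name is whatever you pass to -o):

g++ -O2 -S -o a.s a.cpp

and then read a.s in any editor; the comparison against the constant 3 is visible right there in the generated code.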
x+x+x doesn't work either.
PS: The idea of testing g++ with Codeforces is awesome. CF is a big storage of complicated combinations of simple, syntax-valid instructions that run in ~1 second and come with 40-80 test cases each. I've read somewhere that compiler developers usually check the results and consider a compiler working after it passes 95% of them. I think it would be brilliant to run separate testing. If I were a student of SGU, it would be interesting to study such a testing system (or try to contribute) and write several bug reports by simplifying participants' solutions.
You may also use -fdump-tree-optimized, as I mentioned in one of the Russian comments. It prints the code after optimization, but before translation to assembly.
Great! Yours is much clearer at a glance. New skill acquired!
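A sketch of that workflow (the numeric pass prefix in the dump file name varies between GCC versions, so the exact name here is illustrative):

g++ -O2 -fdump-tree-optimized -c a.cpp
cat a.cpp.170t.optimized

The dump contains GIMPLE-like code after optimization but before code generation, which is usually much easier to read than assembly.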
Very useful for me! It's very clear!
Nice.
This situation often happens on POJ: G++ gets WA, but C++ gets AC. Now I finally know the reason.
Well, at least I knew that when outputting a double variable people always use "%lf", and it gets AC with C++ but WA with G++. However, if you use "%f" to output a double or float variable, both return AC. But scanf is different: you should use "%lf" to read a double and "%f" to read a float, in both G++ and C++.
This has nothing to do with GCC. The C/C++ standards say that %f, and not %lf, shall be used to print a double with printf().
Yes, you're right. That's my point actually, but I didn't explain it clearly enough :)
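To summarize the rules under discussion, a minimal sketch (variable names are mine):

#include <cstdio>

int main() {
    double d;
    float f;
    scanf("%lf", &d);  // scanf needs %lf for a double...
    scanf("%f", &f);   // ...and %f for a float
    printf("%f\n", d); // printf takes %f for a double; %lf is undefined in
                       // C89 but harmless (equivalent to %f) since C99
    printf("%f\n", f); // a float is promoted to double in varargs, so %f again
    return 0;
}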
In C89, %lf with printf() is undefined. But since C99 it is defined as equivalent to %f (more exactly, an l on a following f is defined to have no effect). So I wonder why you got both AC and WA.
I used gdb to debug the program, and it always breaks out of the for loop once the first n == x + (x + 1) + (x + 2) check is false. It seems that if I change the expression to n <= x + (x + 1) + (x + 2), it works fine. The optimizer must have concluded that once n != x + (x + 1) + (x + 2), they will never be equal. It is not because n will always be greater than the right-hand side; rather, it looks like a number-theoretic derivation, and that derivation makes a mistake somewhere.
The optimized code is here:
The bug is also found in a shortened version, gcc 4.7.1.
If the expression is changed to x * 3 + 3, it is correct.
My gcc version is 4.8.2.
I can get the correct answer when I use g++ -O2 -fno-tree-ch, so I think the -ftree-ch option enabled by -O may lead to some wrong optimization.
Good job... compiling with -O2 -fno-tree-ch seems to be safe for now.
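So the workaround for now is simply (file names as in the post):

g++ -O2 -fno-tree-ch -o a a.cpp

-ftree-ch performs loop header copying, which is the optimization this bug appears to implicate.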
(Maybe) another bug was found at last year's ICPC Asia Nanjing Regional Contest. I coded a snippet like this in order to test whether the "-O2" in our compiler command line worked correctly.
To my surprise, the code ran for more than 10 seconds without outputting anything. Objdump showed that the whole snippet had been optimized into a "while(true);". But without -O2 the code finished in about 5 seconds.
We tried all the g++ versions we had at the time: 3.4.2 (bundled with Dev-Cpp 4.9.9.2), 4.4.7, 4.6.3, 4.7.2, 4.8.1 and even a 4.9 prerelease. Among them, only 3.4.2 worked. But obviously we couldn't use one that old on the judge workstations.
This fact forced us to double-check many suspicious TLE solutions during the contest.
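Judging from the explanation below (a signed sum whose overflow is predictable at compile time), the snippet was presumably something of this shape; this is an illustrative reconstruction, not the original code:

#include <cstdio>

int main() {
    int sum = 0;
    // Long enough to serve as a timing test. sum exceeds INT_MAX partway
    // through, and signed overflow is undefined behavior, which is
    // provable at compile time here.
    for (int i = 0; i < 1000000000; i++)
        sum += i;
    printf("%d\n", sum);
    return 0;
}

Without optimization the overflow simply wraps on typical hardware and the loop finishes; once the compiler starts exploiting the provable undefined behavior, it is allowed to emit anything for the loop, including the reported "while(true);".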
And one bug affects only Codeforces. (Among all the compilers I tested, only GCC 4.7.2 MinGW was affected.)
4054614 vs 4054535
Minimized test: http://paste.ubuntu.com/7215859/
The C standard says two things: signed integer overflow is undefined behavior, and a program that triggers undefined behavior may be compiled into absolutely anything. The variable sum is signed and overflows, and this can be predicted at compile time. So the compiler can do absolutely anything with it.
Solutions are:
-fwrapv: "This option instructs the compiler to assume that signed arithmetic overflow of addition, subtraction and multiplication wraps around using twos-complement representation. This flag enables some optimizations and disables others."
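In practice this just means adding the flag to the compile line; a sketch, using the hypothetical snippet above:

g++ -O2 -fwrapv -o timing timing.cpp

With -fwrapv the optimizer must treat sum as wrapping, so it can no longer reason from "overflow never happens", and the undefined-behavior-based transformation is off the table.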
Thanks!
But since sum and x are not related to the for-loop condition, I assumed that "do anything" at least does not include "convert the whole loop into a while(true);". Now I know I'm wrong :)
I still wonder about the logic behind the scenes: how can the compiler determine that this for-loop is infinite?
I agree that it is an awful way to "do anything".
Compilers use UB for optimizations under the assumption that the programmer never causes UB. I see such a chain to get "while(true)": the trip count and the additions are compile-time constants, so the compiler can prove that sum overflows at some specific iteration; overflow is UB and is therefore assumed unreachable; yet the loop provably reaches that iteration, so the behavior of the whole loop is undefined and the compiler is free to emit anything for it, an infinite loop included.
The idea to use Codeforces submissions as regression tests is great and would help the open source community a lot! However, it's always possible that programs exhibit undefined behaviour without valgrind noticing, so it would probably still be a lot of work to manually extract the failing part of a program and verify whether there is indeed a regression or the code is just wrong.
Hey, I am getting a similar kind of problem. My code compiled with g++ on my computer gives the right answer; however, the binary compiled on the Codeforces server gives a different, hence wrong, answer.
http://codeforces.net/contest/496/submission/9178052
Care to have a look? Is there any solution to it?
Thanks.
Take a look at the value of count[len - 1]. It contains unexpected garbage because you declare the array count inside main without initializing it. If count were global, it would be zero, for example. Anyway, this is a bug on your side, as far as I can understand your code.
As zurg pointed out, you are accessing an uninitialized piece of memory.
Here's some advice: install valgrind and use it to check your memory usage. As an example, here is what valgrind said when running your code on test #4:
You can see that valgrind has detected every access to uninitialized memory, and it gives you the exact line where it happened.
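For anyone reproducing this locally, the usual workflow looks like this (file names illustrative):

g++ -g -O0 -o sol sol.cpp
valgrind --track-origins=yes ./sol < test4.txt

The -g flag lets valgrind map errors to source lines, and --track-origins=yes additionally reports where each uninitialized value came from.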
This code causes the bug even without -O2 (tested on GCC 4.8.1):
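A minimal reconstruction of the pattern under discussion, assuming (per the replies below) two non-negativity checks followed by an a*b < 0 overflow test; the names are mine, not the original code:

#include <cstdio>

int main() {
    int a, b;
    if (scanf("%d %d", &a, &b) != 2) return 1;
    if (a < 0) return 1;
    if (b < 0) return 1;
    // Intended as an overflow check. But if a * b overflows int, that is
    // already undefined behavior, so the compiler may assume the product
    // of two non-negative ints is never negative and delete this branch.
    if (a * b < 0) {
        puts("overflow");
        return 1;
    }
    printf("%d\n", a * b);
    return 0;
}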
Isn't signed integer overflow undefined behavior?
http://stackoverflow.com/questions/16188263/is-signed-integer-overflow-still-undefined-behavior-in-c
In short: if something like this happens, the compiler may do anything. It may generate correct code (that is, code doing exactly what you want), generate incorrect code, or... format all your drives (because why not).
Signed integer overflow is undefined behavior. I guess the optimizer removes the (a*b < 0) condition because in a program with well-defined behavior it is always false (since at that point in the program a >= 0 and b >= 0).
You are right! I was going to say gcc can't assume a >= 0 and b >= 0, but it actually can, because of the two previous ifs. I spent so much time trying to understand why gcc was treating this as UB, and I didn't even notice that. How stupid >.<...
It makes sense now.
Thanks!
Sorry to post here after 3 months, but I found a very annoying bug somewhere in g++.
My AC code for problem 474E didn't work at home. You can find it easily; it's my last submission.
In fact:
It does random crap at times: some indices go to values like 2123421 rather than 2 without any reason...
I think there is a problem with an optimization that wrongly guesses the value of a variable. It seemed to be deterministic (the garbage values were always the same) but totally illogical.
I know that some of you here are g++ experts. Personally, I simply use these options because they're nice for debugging, but I don't know whether they're unstable. In fact, I never really tried to find out :D
I think the same problem can happen to some of you, because my code is a pretty classical segment tree code.
If you find out what the fuck happened, please let me know. Thanks! :-)
Looks like you already did a lot of investigation. The obvious next steps are: 1. Get rid of all warnings. 2. If the problem remains, binary search for the offending option.
I found it!
The problem is due to the combination of -O2 and -fwhole-program.
When I use them separately, it works, but with both of them I get the error.
So, to reproduce the problem, you simply need to use "-O2 -fwhole-program".
That sounds logical, according to https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html:
-fwhole-program: Assume that the current compilation unit represents the whole program being compiled. All public functions and variables with the exception of main and those merged by attribute externally_visible become static functions and in effect are optimized more aggressively by interprocedural optimizers.
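In other words, under that flag the compiler may treat every function except main as if it were static. A minimal sketch of the effect (file and function names are mine):

// a.cpp, compiled as: g++ -O2 -fwhole-program a.cpp
// Under -fwhole-program, helper() is treated as if it were static: the
// optimizer may inline it everywhere and drop the standalone symbol, so
// anything outside this translation unit that expected to link against
// helper() would break.
int helper(int x) { return 2 * x; }

int main() { return helper(21) == 42 ? 0 : 1; }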
The mystery seems to be solved :D