Can anyone help me: which is more efficient to use in a for loop?

for (u = 2; u <= sqrt(n); u++)

or

for (u = 2; u * u <= n; u++)

Here are both of my submissions:

Using the first method — 119098252
Using the second method — 119098208

With the first method I got TLE, but the same code with the second method got Accepted. I am assuming that method one calculates sqrt(n) again and again and the other one calculates u*u again and again, so why did one get Accepted and the other not? Please tell me the reason behind this.
sqrt may be $$$O(1)$$$, but it has a high constant factor. Computing it once and storing it in a variable lim passes. If the variable ($$$n$$$) might be reduced during the for-loop, then 1LL * u * u <= n is somewhat better. Also, on rare occasions the precision errors of sqrt(n) can be hard to detect/debug.

Indeed, the sqrt function is the culprit: it is a rather slow mathematical function, and calculating it many times can easily lead to TLE.

Usually, try not to use the sqrt function unless u*u would be larger than LLONG_MAX (if you use long long int) / INT_MAX (if you use int).
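A minimal sketch of the two patterns mentioned above, using a plain trial-division primality check as the surrounding loop; the name lim and the function names are just illustrative:

```cpp
#include <cmath>

// Variant 1: compute the bound once and store it in a variable (here: lim).
// For huge n the estimate may need a +-1 correction, see the discussion below.
bool is_prime_sqrt_once(long long n) {
    if (n < 2) return false;
    long long lim = (long long)sqrtl((long double)n);  // sqrt evaluated a single time
    for (long long u = 2; u <= lim; u++)
        if (n % u == 0) return false;
    return true;
}

// Variant 2: no sqrt at all; 1LL * u * u promotes the product to 64 bits, so an
// int counter cannot overflow (fine while n is below roughly 4 * 10^18).
bool is_prime_square_check(long long n) {
    if (n < 2) return false;
    for (int u = 2; 1LL * u * u <= n; u++)
        if (n % u == 0) return false;
    return true;
}
```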
sqrt(n) will not get calculated repeatedly, as n is not changing, unlike u, which changes. I don't know much detail about it, but I've heard that if code executes repeatedly, the corresponding arithmetic operations aren't redone if the values haven't changed.

The compiler is only allowed to remove repeated calls to functions with the const or pure attribute. But sqrt(n) isn't really pure, because it has side effects (it may modify the global errno variable and/or raise FE_INVALID floating point exceptions).
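A small sketch of those side effects; the errno value check assumes glibc behaviour:

```cpp
#include <cstdio>
#include <cmath>
#include <cerrno>
#include <cfenv>

int main() {
    errno = 0;
    std::feclearexcept(FE_ALL_EXCEPT);

    volatile double x = -1.0;     // volatile stops the compiler from folding this away
    double r = std::sqrt(x);      // domain error: square root of a negative number

    // These observable effects are why the compiler cannot treat sqrt() as pure
    // and hoist it out of a loop condition on its own.
    std::printf("result = %f\n", r);                               // nan
    std::printf("errno == EDOM: %d\n", errno == EDOM);             // 1 on glibc
    std::printf("FE_INVALID raised: %d\n",
                std::fetestexcept(FE_INVALID) != 0);               // 1
    return 0;
}
```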
Don't use sqrt(n), it can cause precision errors (and yeah, it's slow too).

Well, it can, but if we are talking about efficiency, then computing sqrt(n) once is obviously faster than computing u * u on every iteration. GCC (since some version) guarantees to make an optimisation in the case of for (int i = 0; i < f(n); i++), so that f(n) is computed only once (obviously, only when it succeeds in proving that f(n) does not change during the loop). Clearly, it is not possible to make a similar optimization in the case of u * u.

Btw, u * u is also not fail-safe at all, because you can simply overflow. Moreover, it overflows even earlier than sqrt(n) starts causing precision errors. So I think the safest and fastest way is to handle the case of a perfect square and then use floor(sqrt(n)) (or sqrtl instead of sqrt).

But honestly, I also prefer u * u, because it is simply easier to debug when you don't use floats, and the difference in speed is usually not important.

You can verify this yourself if you go to https://codeforces.net/problemset/customtest and try to experiment with the following code:
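Something along these lines (a sketch of the kind of experiment meant here, not necessarily the exact original snippet; sqrt_n is the precomputed value referred to just below, and entering a negative n such as -1 makes the effect visible):

```cpp
#include <cstdio>
#include <cmath>
#include <cerrno>

int main() {
    double n;
    if (std::scanf("%lf", &n) != 1) return 1;   // try n = -1 in the custom test

    errno = 0;
    double sqrt_n = std::sqrt(n);               // first call, result saved in sqrt_n
    std::printf("first : %f, errno = %d\n", sqrt_n, errno);

    errno = 0;                                  // reset errno between the two calls
    double again = std::sqrt(n);                // the compiler must emit this call again:
                                                // reusing sqrt_n here would leave errno
                                                // at 0 and change the line printed below
    std::printf("second: %f, errno = %d\n", again, errno);
    return 0;
}
```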
The compiler is not allowed to get rid of the second call to the "sqrt(n)" function and replace it with the already pre-calculated value "sqrt_n". This optimization can't be applied safely, because it would change the user-visible program behaviour and the printed errno value would be different.

But there is a special -ffast-math compiler option (which doesn't seem to be used by the Codeforces platform by default), and it may help: https://stackoverflow.com/a/22135559/4882445

The code for (int i = 0; i < sqrt(n); i++) surely does call the sqrt function again and again on each loop iteration, unless the -ffast-math option is used. This all is very fragile, and such code stinks.

So if I calculate sqrt(n) beforehand and then check against it in the loop, is that fine or not?

It's fine for small numbers, but for bigger numbers you can run into precision errors. There are two solutions.
The first: increment or decrement the answer a few times, because you're guaranteed to be close even if there are precision errors. Something like this:
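A rough sketch of that correction, assuming $$$n \ge 0$$$ and well below LLONG_MAX so the squares don't overflow (the function name is just illustrative):

```cpp
#include <cmath>

// Exact integer square root: start from the floating point estimate,
// then nudge it until r * r <= n < (r + 1) * (r + 1).
long long exact_sqrt(long long n) {
    long long r = (long long)std::sqrt((long double)n);  // close, but possibly off by one
    while (r > 0 && r * r > n) r--;                       // too big?   step down
    while ((r + 1) * (r + 1) <= n) r++;                   // too small? step up
    return r;
}
```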
The second: just use u*u <= x. This saves a bit of implementation time, and your sanity. But if you need to explicitly calculate the square root, then use the first way.

It's worth mentioning that it won't necessarily be fine for small numbers either, as this does not depend on their magnitude.
"The crux of the problem is that numbers are represented in this format as a whole number times a power of two; rational numbers (such as 0.1, which is 1/10) whose denominator is not a power of two cannot be exactly represented." ref
True, I forgot about that. So something like $$$\sqrt{81}$$$ might come out as something like $$$8.99999999999993435$$$. So you could probably add an epsilon value, like $$$10^{-9}$$$, but at that point, just do the incrementing thing.
No, this can't happen. Everything will be perfectly fine and accurate as long as the numbers are all integers and smaller than $$$2^{53}$$$.

But we may run into trouble if we are dealing with rational numbers. Some of them have infinitely repeating digits after the decimal point, which makes fractions like $$$1/3 = 0.333...$$$ rather inconvenient. And $$$1/5$$$ has infinitely repeating digits in its binary representation too. Floating point numbers are stored internally in a binary format, so the nicely looking constant 0.2 in the source code isn't exactly equal to $$$1/5$$$, and arithmetic with such constants doesn't always give the exact decimal result you might expect ($$$0.1 + 0.2$$$, for example, doesn't compare equal to $$$0.3$$$). Maybe you can try this:
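A small sketch (printing with extra precision shows the values that are actually stored; the exact digits assume standard IEEE 754 doubles):

```cpp
#include <cstdio>
#include <cmath>

int main() {
    double fifth = 0.2;     // the closest double to 1/5, not exactly 1/5
    double tenth = 0.1;     // likewise not exactly 1/10

    std::printf("0.2       = %.20f\n", fifth);           // 0.20000000000000001110...
    std::printf("0.1 + 0.2 = %.20f\n", tenth + fifth);   // 0.30000000000000004441...
    std::printf("0.3       = %.20f\n", 0.3);             // 0.29999999999999998890...
    std::printf("0.1 + 0.2 == 0.3 ? %d\n", tenth + fifth == 0.3);   // prints 0

    // Integers below 2^53 are a different story: 81.0 and 9.0 are represented
    // exactly, so sqrt(81.0) is exactly 9 and no epsilon is needed there.
    std::printf("sqrt(81)  = %.20f\n", std::sqrt(81.0)); // 9.00000000000000000000
    return 0;
}
```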