Suppose we are working in a setting where we only have integers, and the only operations we perform on them are multiplication, addition, subtraction, and comparison.
In such a scenario, is it safe to replace all integers with doubles? (Safe in the sense of precision issues.)
I think it is safe, since I've never had any problems. Only comparisons might be a problem because of decimal places, but you'll never have those decimal places anyway, since you're working only with integers.
There are so-called safe integers: the integers that are exactly representable in a `double`. The name comes from ECMAScript, which does not even have a separate integer type and always uses floating-point numbers (barring some corner cases). All integers from $$$-2^{53}$$$ to $$$2^{53}$$$ are safe, which means you can treat `double` just like an integral type as long as you stay within these bounds. For the 80-bit IEEE 754 extended type, i.e. `long double` with GCC on *nix, the range is $$$-2^{64}$$$ to $$$2^{64}$$$.
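For instance, here is a minimal check (a sketch; it assumes x86-64 GCC, where `double` is the 64-bit IEEE 754 type and `long double` is the 80-bit extended type):

```cpp
#include <cstdio>

int main() {
    // 2^53 is the last power of two before doubles start skipping odd integers:
    // 2^53 + 1 is not representable and rounds back down to 2^53.
    double d = 9007199254740992.0;        // 2^53
    printf("%d\n", d + 1 == d);           // prints 1: the +1 is silently lost

    // The 80-bit long double has a 64-bit significand, so the same value survives.
    long double ld = 9007199254740992.0L; // 2^53
    printf("%d\n", ld + 1 == ld);         // prints 0: still exact well past 2^53
    return 0;
}
```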
Hmm, it probably depends on what kind of integers and what kind of doubles we're talking about, right? A double has gaps between very large integers, and for sufficiently large integers we could probably exploit that.
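To make the gaps concrete (a small sketch, assuming a standard 64-bit IEEE 754 `double`): around $$$10^{18}$$$, adjacent doubles are already $$$128$$$ apart, so most integers of that size cannot be stored exactly.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // The spacing between adjacent doubles ("ulp") grows with the magnitude.
    double x = 1e18;  // exactly representable, since 10^18 = 5^18 * 2^18 and 5^18 < 2^53
    printf("%.0f\n", std::nextafter(x, INFINITY) - x);      // prints 128

    // Hence an integer such as 10^18 + 1 collapses onto 10^18 when converted.
    printf("%d\n", (double)1000000000000000001LL == 1e18);  // prints 1
    return 0;
}
```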
Yeah I missed the point that "doubles" have gaps between very large integers. Thanks!
Why would you want to do this, though?
I saw a question that had to deal with 100-bit integers. I solved it using a custom `BigINT` class, but some other solution that just used `double` passed. Hence the doubt.

Nice blog!