forthright48's blog

By forthright48, 10 years ago, In English

I was trying to solve a problem on HackerRank, Devu and a Journey on Metro.

I wrote a solution using BFS and top-down memoization. Here is my submission: Submission.

When I compared my output with the judge's output, I found that I was getting Wrong Answer due to a precision error: the 6th decimal place differs on some cases. So I looked into the analysis to see if they had some trick to reduce the precision error. Well, they did the same thing, just bottom-up.

Bottom-up is faster than top-down, and in some cases it can even save memory. But I have never heard of bottom-up being more precise.

Later I changed all the double variables to long double, which got more test cases correct, but not all. Submission.

Any idea what is going on? Why is this happening?

UPD1: Got my answer from misof's comment.


»
10 years ago, # |

Thank you for the site, I was not aware of this site.

»
10 years ago, # |

I've also solved this problem with a top-down approach. My code might be useful to you.

  • »
    »
    10 years ago, # ^ |

    Thanks for the code. After looking at it, I changed a single line in mine.

    From:

        p = 1 / SZ(adj[pos]);
        res *= p;

    To:

        res /= SZ(adj[pos]);

    This got me even more accepted test cases (Submission).
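
    For illustration, here is a tiny standalone demo (not taken from either submission) of why the two forms can differ: res *= 1.0 / n rounds twice (once for the reciprocal, once for the product), while res /= n rounds only once.

        #include <cstdio>

        int main() {
            // Minimal sketch: multiplying by a rounded reciprocal rounds twice,
            // dividing rounds once, so the results can differ in the last bit
            // of the mantissa.
            double res = 10.0;
            int n = 3;

            double via_reciprocal = res * (1.0 / n); // 1.0/n is rounded, then the product is rounded again
            double via_division   = res / n;         // a single rounding

            printf("%.17g\n%.17g\n", via_reciprocal, via_division);
            // Typical IEEE 754 output: 3.333333333333333 vs 3.3333333333333335
            return 0;
        }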

    I have a question though. At line 235 of your code, what are you doing exactly?

        if (res == res)
            return res;
    

    Also, our solutions differ in one aspect: you run the DP N times, finding the probability of reaching X from 1 and then multiplying by the cost, while I do it all at once.

    Does calculating them separately give more accurate results? Shouldn't they be the same?

    • »
      »
      »
      10 years ago, # ^ |

      That is a trick to check the number for NaN.

    • »
      »
      »
      10 years ago, # ^ |

      As Egor pointed out, it's a trick to check whether the current index of the memoization array has been initialized. At the beginning, it sets every bit of that array to 1 (memset(memo, -1, sizeof memo)), which corresponds to NaN for doubles (I don't know if it's in the standard, but it works that way in gcc). And according to the IEEE standard, comparisons involving NaN values are always false, which is why the expression res == res is true only when res is not NaN. Here is a related answer on Stackoverflow.
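
      A minimal self-contained sketch of the trick (array size and function are made up, not from the linked code):

          #include <cmath>
          #include <cstdio>
          #include <cstring>

          const int MAXN = 1005;           // made-up size, just for illustration
          double memo[MAXN];

          double solve(int pos) {
              double &res = memo[pos];
              if (res == res) return res;  // false only for NaN, i.e. an uncomputed entry
              // ... compute the actual value here ...
              return res = 0.0;            // placeholder
          }

          int main() {
              // Setting every byte to 0xFF produces a NaN bit pattern for IEEE 754
              // doubles on typical platforms; that is what makes res == res work.
              memset(memo, -1, sizeof memo);
              printf("%d\n", std::isnan(memo[0]) ? 1 : 0); // prints 1 on such platforms
              return 0;
          }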

»
10 years ago, # |

I had come across the same situation before: solving a probability problem with DP, my solution got WA due to a precision error, but implementing the same DP approach in a different way got AC!

I am interested to hear an explanation for this situation from experienced programmers, and how to tell which implementation of a particular solution is better.

  • »
    »
    10 years ago, # ^ |

    I am wondering if this is caused by the extra "assignment" operations. In recursive functions, we store the variables on the stack. So could it be that pushing to and popping from the stack causes precision loss?

    I can't think of any other explanation. Compiler optimizations during recursion?

    • »
      »
      »
      10 years ago, # ^ |

      It is machine-dependent: some processors (including the majority of modern ones) do calculations internally (in FPU registers) with 80 bits, which get rounded to 64 bits (double) when written to memory.

      It means that the result can differ between:

      • going through the whole calculation in the FPU only and then storing the final result;
      • doing the operations one by one and saving intermediate results to memory.

      At the C++ level you have very limited control over what the compiler does with the calculation, so you can't rely on it. Most likely, if you do the whole calculation in one place it will be optimized and run in the FPU only, while if you spread it out, saving results to memory, you'll get less precision in the end, unless the compiler notices that and optimizes it away.

      You can use "long double", which is compiler/machine specific, but even if you use plain "double", the FPU calculations will internally still be done with 80-bit precision.

      Long story short — treat floating point arithmetic with respect :)
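
      A rough illustration of the effect (contrived values; whether the two results actually differ depends on the compiler, the flags, and whether it targets the x87 FPU or SSE):

          #include <cstdio>

          int main() {
              double a = 1e16, b = 1.0, c = -1e16;

              // May be evaluated entirely in 80-bit FPU registers on an x87 build.
              double direct = a + b + c;

              // volatile forces the intermediate sum through a 64-bit double in memory.
              volatile double partial = a + b;
              double stored = partial + c;

              // On a 32-bit x87 build 'direct' can print 1 while 'stored' prints 0;
              // with SSE2 doubles (typical x86-64 builds) both print 0.
              printf("%g %g\n", direct, stored);
              return 0;
          }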

»
10 years ago, # |

UPD: This comment is not applicable to this specific situation, as misof rightly mentions below: last-digit errors are clearly below 1e-6 relative error and should be AC. I hope it still makes some sense for the general case.


Going bottom-up you're generally summing small numbers with small numbers, and they grow as you go up, so in the end you add big numbers to big numbers. With probabilities (when you add up the probabilities of a "branchy" tree of events) this is even more so.

When you go top to bottom you end up adding big numbers from adjacent branches to small numbers in the current branch, so you might run into rounding errors.

When I face a situation like this in a contest (needing to sum up lots of small and not-so-small floating-point numbers) I get very nervous. I always try to go from the small ones to the big ones, so in a situation like this I wouldn't try top to bottom, just as a precaution.

Once in a contest I even remember sorting an array of positive doubles before summing them up, just to reduce rounding errors. It proved to be unnecessary in the end, but I didn't want to fail a test case because of a 1e-6 rounding error.
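
A small demo of the order-of-summation effect described above (the numbers are arbitrary, just picked so the difference is visible at the 6th decimal place):

    #include <cstdio>

    int main() {
        const int N = 10000000;           // arbitrary count
        const double big = 1e9, tiny = 1e-8;

        // Big-to-small: each tiny addend is below half an ulp of the running
        // sum and gets rounded away completely.
        double a = big;
        for (int i = 0; i < N; ++i) a += tiny;

        // Small-to-big: accumulate the tiny addends first, then add them once.
        double s = 0.0;
        for (int i = 0; i < N; ++i) s += tiny;
        double b = s + big;

        printf("%.6f\n%.6f\n", a, b);     // roughly 1000000000.000000 vs 1000000000.100000
        return 0;
    }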

»
10 years ago, # |

TL;DR: Your code good. Their task bad.

As far as I can tell, your code works. Well, except for the part where you use %lld to read into an int instead of a long long. (The warnings are there for a reason, you know.) But even after fixing that your code does not get accepted.

Why? Because the problem author (or, alternatively, the person who wired the task into the grader) apparently doesn't know what "absolute or relative error" means. (And apparently, neither do you.)

The output of your code sometimes differs from their output in the 6th decimal digit... of a number that has 9 digits before the decimal point! Your calculations are precise enough. In this situation, the 6th decimal place is around the precision limit for doubles. The value you are printing most likely differs from the one they printed only in the least significant bit of the mantissa. That's as precise as it gets.

And as I was saying, the person responsible for the task is a moron and the task is currently broken. The grader only checks for 1e-6 absolute error and these tiny rounding errors still trigger it to give a WA. If it correctly checked for relative error 1e-6 (roughly meaning "if it accepted anything that matches the correct answer in the 6 most significant places"), your outputs would be AC comfortably.

There is no inherent precision error in the top-down approach.
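
For reference, a checker along the lines misof describes (a sketch, not the actual HackerRank grader) would accept anything within 1e-6 absolute or relative error:

    #include <cmath>
    #include <cstdio>

    // Sketch of an "absolute or relative error <= 1e-6" check, not the real grader.
    bool accept(double contestant, double expected) {
        double diff = std::fabs(contestant - expected);
        return diff <= 1e-6 || diff <= 1e-6 * std::fabs(expected);
    }

    int main() {
        // An answer around 1e8-1e9 that differs only in the 6th decimal place is
        // well within 1e-6 relative error, so it should be accepted.
        printf("%d\n", accept(123456789.000001, 123456789.000002) ? 1 : 0); // prints 1
        return 0;
    }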

  • »
    »
    10 years ago, # ^ |

    My knowledge about "double" is kind of hazy. I knew that the mantissa can hold around 15 digits, but I didn't know that this includes the digits before the decimal point too. Or perhaps I am still spewing nonsense. It's high time I looked into double properly. As my coach said: "Digging a little deeper into the non-algorithmic aspects of contests never hurts."

    Thanks :D
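
    For the record, a quick way to see this limit (a standalone check, unrelated to the submissions):

        #include <cfloat>
        #include <cmath>
        #include <cstdio>

        int main() {
            // A double carries DBL_MANT_DIG = 53 significant bits, roughly 15-16
            // significant decimal digits in total, counted across both sides of
            // the decimal point.
            printf("bits: %d, guaranteed decimal digits: %d\n", DBL_MANT_DIG, DBL_DIG);

            // For a value around 1e9 the spacing between adjacent doubles is about
            // 1.2e-7, so only ~7 decimal places after the point are meaningful.
            printf("ulp near 1e9: %g\n", std::nextafter(1e9, 2e9) - 1e9);
            return 0;
        }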