vlecomte's blog

By vlecomte, 7 years ago, in English

Hi,

I've updated my geometry book over at https://vlecomte.github.io/cp-geo.pdf. I hope you like the new content, and I would really appreciate any feedback: suggestions, comments, typos you found, etc. :)

The changes I made:

  • Finished the chapter on "basics" and made some changes based on your feedback from last time.
  • Added a whole chapter on precision issues and how to deal with them. Some of the problems and ideas in this chapter are not obvious to everyone, and I think they can be useful for people who don't feel very confident in geometry.

The code is mostly untested for now, so take it with a grain of salt (and tell me about any mistakes you find).

In a few weeks, I will hold a Gym contest showcasing geometry techniques/concepts that are not very common at the moment but could help make interesting problems and add some variation to geometry in ICPC-style or online contests.

There will be 5 problems for 3-4 hours. They will be hard but (I hope) interesting. After the contest, along with the editorial, my book will be updated to include all the topics that are linked to the contest. I will post an announcement as soon as the time is fixed. I hope many of you will participate. :)

UPD: I added a chapter on 3D geometry. As usual, feedback of any kind is very appreciated. It's probably full of errors. :)


» 7 years ago, # |

"select to reveal" not working for me in Chromium using the default PDF Viewer plugin. It works when opened with Evince.

  • » » 7 years ago, # ^ |

    That was only a temporary solution (it's not ideal when printed ^^). It should be better now thanks to hyperref magic. Thanks for the feedback! :)

» 7 years ago, # |

Projective geometry would help :)

  • » » 7 years ago, # ^ |

    I don't know much about it but I'm intrigued. Do you have an example of how it can be useful for CP, or a problem in which it's used?

    • » » » 7 years ago, # ^ |

      645G - Armistice Area Apportionment (authored by me) is one where the right projective transformation gives a clean solution :)

      (And by the way, this project looks awesome! Computational geometry from a competitive programming perspective is indeed a topic that could do well with such an exposition.)

    • » » » 7 years ago, # ^ |

      Mainly duality and Voronoi diagrams.

» 7 years ago, # |

Congrats on your work! Do we need any basic knowledge to start?

  • » » 6 years ago, # ^ |

    In terms of geometry, not much. But I'm working with the basic assumption that readers have a good command of high school math. In the future, I might include an appendix chapter with some specific prerequisites if there is enough demand.

    • » » » 6 years ago, # ^ |

      I am in middle school, so I don't know high school math (but I am learning it). What knowledge do I need to read the book?

      • » » » » 6 years ago, # ^ |

        I would advise you to just try reading the book a little and see if you get stuck on anything. I think your math level should be enough to understand most of the content.

        Actually I would be really interested in knowing which parts prove the trickiest for a math novice. So if you do read it, it would be really nice if you could make a list of the things you didn't understand and send it to me, so that I can see if I can solve those problems. :)

» 6 years ago, # |

The issues you have with relative error on pages 9 and 10 make no sense.

"Relative error e=10^{-6}" essentially means that the six most significant decimal digits of the answer have to be correct. Or, if you prefer, the leftmost 20 bits. This is almost always what you want when doing floating-point calculations, simply because you cannot ask for anything less than this. If you cannot even guarantee this, the algorithm you are using is so imprecise that it's essentially useless.

Formally, if the reference answer is x, all values between x*(1-e) and x*(1+e) will be considered close enough.

The only situation when relative error may misbehave (i.e., when the formula above doesn't do what we want semantically) is when the result itself is close to zero. Here you can encounter some pathological cases such as the exact answer being zero, the reference solution computing 1e-15 and another solution computing -1e-15. To cover these cases, we add the clause about absolute error, essentially saying that "if the reference value is close to zero, you have to be close to zero as well".

Formally, we also allow answers between x-e and x+e. This interval is usually much smaller than the previous one, and only matters when abs(x) is less than 1.

Your suggestion to only set tolerance in terms of absolute error is actually just a poor man's way of properly using the relative error. The best you can do is to set it according to the largest possible instance, and in that case it will do the same thing as the currently used test for instances of that size, and it will be too loose for all small instances.

Imagine a problem in which the input are some grid points with integer coordinates up to 10k. (You need such inputs, and possibly even larger ones, e.g. if you want to have a convex polygon with many vertices.) If you set a reasonable absolute precision according to the largest possible test case, then it's almost certain that some valid small test cases won't be judged correctly, allowing blatantly incorrect answers to pass.
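
For concreteness, here is a minimal sketch of the combined check described above (the function name and the constant are mine, not any judge's actual checker):

#include <cmath>
#include <algorithm>

// Accept `got` if it is within relative error e of the reference x,
// OR within absolute error e. Equivalent to |got - x| <= e * max(1, |x|).
bool closeEnough(double got, double x, double e = 1e-6) {
    return std::fabs(got - x) <= e * std::max(1.0, std::fabs(x));
}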

  • » » 6 years ago, # ^ |

    So if I understand correctly, your main complaint is that setting precision bounds with absolute error will allow blatantly incorrect answers to pass? Do you have an example where those kinds of blatantly incorrect solutions can't be made to fail by using tests with a larger magnitude?

    Maybe I wasn't clear enough about that in the book, but the main reason why I advise against using relative tolerance (and thus having a smaller absolute tolerance for smaller answers) is that smaller answers can sometimes come even from instances with large magnitudes and thus large absolute errors. "Keeping the Dogs Apart" is a good example of that, and so is the computation of areas of polygons whose points have large magnitudes but whose inside area is very small (what I was trying to convey in the figure). If a problem setter is not careful about this, it might have the adverse effect of making correct solutions fail on such tests.

    • » » » 6 years ago, # ^ |

      "Do you have an example where those kinds of blatantly incorrect solutions can't be made to fail by using tests with a larger magnitude?"

      Sure. Challenges/hacks.

      If the correct answer for a small instance is 3.0, you cannot accept 0.0 as correct just because you want your system to behave nicely for large instances.

    • » » » 6 years ago, # ^ |

      There is no golden rule when it comes to floating-point calculations. One example that sometimes crops up in programming contests is that Heron's formula for triangle area can lead to errors that are too big for any kind of tolerance when used on three collinear points. Sometimes there's nothing you can do at all; and sometimes one approach is numerically stable while another isn't. Still, I believe that the current way of testing the precision of the computed answer (with both relative and absolute error) is essentially the best we can get, because as long as there are numerically stable solutions, they should all give results within such a tolerance, and this is true for all sizes and types of instances, not just for the largest ones.

      Your examples aren't good examples for the point you are trying to make. E.g., the actual problem in your first sample code for Dogs is that it contains catastrophic cancellation: you are subtracting two large values with a small difference, and this gives you a very imprecise value that you then use in subsequent calculations. This has absolutely nothing to do with the way the judge tests whether the final value is precise enough; this is just the programmer's mistake. Sure, the problem itself is nasty in that such a solution can come up quite naturally, but this issue won't be solved by any other way of testing whether the answer is precise enough.
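
      To make the Heron example concrete, here is a sketch (mine, not from the book) contrasting the naive formula with Kahan's well-known stable rearrangement:

      #include <cmath>
      #include <algorithm>

      // Naive Heron: for needle-like triangles, s - a suffers catastrophic
      // cancellation, and rounding can even make the radicand negative.
      double heronNaive(double a, double b, double c) {
          double s = (a + b + c) / 2;
          return std::sqrt(s * (s - a) * (s - b) * (s - c));
      }

      // Kahan's rearrangement: numerically stable provided a >= b >= c and
      // the parentheses are kept exactly as written.
      double heronStable(double a, double b, double c) {
          if (a < b) std::swap(a, b);
          if (a < c) std::swap(a, c);
          if (b < c) std::swap(b, c);  // now a >= b >= c
          return std::sqrt((a + (b + c)) * (c - (a - b)) *
                           (c + (a - b)) * (a + (b - c))) / 4;
      }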

      • » » » » 6 years ago, # ^ |

        I agree with the idea that there is no golden rule and we have no choice but to make some numerically unstable solutions get WA.

        I do believe "Keeping the Dogs Apart" is a good example, though I didn't mean that my intentionally naive solution is; I agree with you on that (I'll have to clarify it). Even with reasonable code, the absolute error on the positions remains high, because there are a lot of segments to go through and the magnitudes are relatively high. Still, even on large cases, the answer can be < 1 (it is in some of the test data). So what I mean is that the magnitude of the answer shouldn't be what we base the tolerance on, because it is not always proportional to the magnitude of the input.

        Same with the area example. Imagine a dummy problem where you have a bunch of points that are described by line intersections (or any other process that gives floating-point values) and you have to find the area of the polygon they form. Say the points are guaranteed to have magnitude <= 10^5. It's easy to make tests where the absolute errors on the points are large because of their magnitude, which translates directly into a big absolute error on the area (proportional to magnitude^2), even though the actual result is < 1.

        In those kinds of cases, you are forced to have an absolute tolerance on the answer that is as big as the absolute error you would tolerate for the largest answers.

        I do realize that this may make things a bit harder when creating tests or hacks (though I don't see why it would be impossible), but I think it's dangerous to just recommend using absolute/relative errors because of those nontrivial cases.

        Maybe a better approach would be to specify something like: "on inputs where the magnitude of the input is <= M, the tolerance is M * 10^-9, and in any case an answer is accepted if it is within an absolute error of 10^-6", or something along those lines (with the exponent of M depending on the dimensionality of the output). This would be the best of both worlds, though it would be a bit more complicated to state. I'll think about it.
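
        A hypothetical sketch of what that rule could look like in a checker (the names and constants are illustrative only):

        #include <cmath>
        #include <algorithm>

        // Tolerance scales with the input magnitude M rather than with the
        // answer, with an absolute floor; `dim` is the dimensionality of the
        // output (e.g. 2 for an area, so the scale becomes M^2).
        bool acceptAnswer(double got, double expected, double M, int dim = 1) {
            double tol = std::max(1e-6, std::pow(M, dim) * 1e-9);
            return std::fabs(got - expected) <= tol;
        }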

        • » » » » » 6 years ago, # ^ |

          Well, your last paragraph is essentially what we have now, with just one tiny difference. What you proposed there is equivalent to saying "Answers with a relative error of at most 10^-9 or an absolute error of at most 10^-6 will be accepted."

          I agree that this may sometimes (for pathological problems such as the ones you mention) be slightly better than having the same exponent for absolute and relative error.

          • » » » » » » 6 years ago, # ^ |

            No, it's not equivalent; you are misunderstanding what I'm saying. The specific values I chose are irrelevant (they could both have been 10^-9). What matters in my formulation is that the tolerance is based on the magnitude of the input, not the magnitude of the answer. Your formulation is problematic in the cases I just mentioned, because the magnitude of the answer is small even though the magnitude of the input is large.

            • » » » » » » » 6 years ago, # ^ |

              Ok, I really misread that and I get you now. Still:

              1. I believe the whole issue you are raising here is just a borderline case that does not come up often, and the merits of the current way of checking for precision significantly outweigh it.

              2. If I were to set such a problem, I would still use the current way and only increase the allowed absolute error. That should handle all the issues with large inputs that have small outputs, while at the same time not doing weird things for, say, large inputs with intermediately large outputs.

              In the end, the main thing we want is to say "compute the answer precisely enough", and relative precision is the way of formalizing "precisely enough" for all non-pathological cases.

              • » » » » » » » » 6 years ago, # ^ |

                And as a second big-picture point: problem setting has many faces. To me, understanding the intricate details of floating-point precision is not one of the things we want to focus on in algorithmic contests; it's just a necessary evil. The best way to handle this evil is to have one standard formulation that: 1. almost always does what people intuitively expect, and 2. is reasonably easy to understand.

                Sure, for specific problems with specific precision issues there may be a better way of checking the precision of the calculations, but it's always worth asking whether the price you would be paying isn't too large. Sometimes it's better to just drop a problem and leave it unused if it contains a weird kind of precision issue.

» 6 years ago, # |

I have marked all the typos in the 3rd chapter (3D geometry) in "message" boxes, i.e. click on a message box to see the typo. (They are on pages 76, 83, 87, 88, 90, 91, 93, 95, 102, 105, 107, 110, 113.) Link: https://drive.google.com/file/d/1EEiiwYFnspJt-NkUGEffYaNflePW-F2L/view?usp=sharing

Also, I really appreciate your work, it's a marvel! Can you tell when you think you'll be able to complete it?

  • » » 6 years ago, # ^ |

    Those should all be fixed now, thank you so much for the detailed feedback! :)

    To be honest, I have no idea when I will be able to complete it. I'll continue working on it for now. I will soon put the source code on GitHub and specify the CC license it is under, so that if at some point I run out of time and someone wants to take over the work, they will be free to do so.

» 6 years ago, # |

vlecomte, thank you for your contribution to the community and for your nice book. But I wonder why there is no chapter on the convex hull and its applications in your book?

  • » » 6 years ago, # ^ |

    Because I haven't written it yet. :)

    I'm still planning to continue this book (and that's probably where I'll start) but I can't promise any time frame unfortunately.

» 6 years ago, # |

The given link isn't working. Can someone help me download this book?

» 6 years ago, # |

In general, one question I've had when using the implementations in this book is how much attention has been paid to floating-point error. Clearly, most of the first chapter is dedicated to the subject, but what about the rest of the book?

For example, the sgn function is implemented like this:


template <typename T> int sgn(T x) { return (T(0) < x) - (x < T(0)); }

However, I know that it's fairly common to define it with an epsilon tolerance, in a fashion similar to this:


template <typename T> int sgn(T x) { return x > -EPS && x < EPS ? 0 : (x < 0 ? -1 : 1); }

Is there any particular reason it isn't done that way in the book, and how much care in general was put into floating point robustness?

  • » » 6 years ago, # ^ |

    The reason I didn't put such epsilon checks in the main chapters of the book is that there is no good generic solution to precision issues (which I hope the chapter on precision somewhat showed). They wouldn't necessarily make the implementation more correct, and they might give the implementer a false sense of safety. In my opinion, it's the implementer's job to determine exactly the places where they need to actively handle precision issues (if there are any). And it's the problem setter's job to make sure those places are few.

    That said, I've taken special care that the code handles all possible inputs and gives outputs that are at least sensible. This isn't the case for all intuitive ways of implementing things, as illustrated by my remark about circle-circle intersection in the precision chapter. Sometimes there will be no simple way to guarantee this (e.g. for convex hulls), in which case I will address it specifically and indicate how to handle it.

» 6 years ago, # |

When I read problem D today, the first thing I thought was "fuck, I shoulda read that last chapter".

Turns out that's the difference between Worlds and 3rd.

  • » » 6 years ago, # ^ |

    (We're talking about GNYR '18.)

    Actually I don't think my book is useful for that problem. It's mostly a math problem that you have to work out on paper, no fancy computational geometry skills needed.

    • » » » 6 years ago, # ^ |

      Perhaps not. But I've never even tried a 3D geometry problem before, and that led to me not looking at the problem during the contest. I'm not sure about your solution, but the Princeton team just approximated the cylinders as prisms. My teammate who looked at the problem didn't have that idea (he was trying to find a closed-form integral), but if I'd been more confident in 3D geometry I might have looked at the problem and come up with it.

      Basically, it wasn't so much an actual knowledge issue. I just didn't have any confidence in being able to solve a 3d geo problem, and hence, we didn't solve it.

      • » » » » 6 years ago, # ^ |

        That makes sense. I also used the prism approximation (generating a bunch of points on the curve and pretending it's a polyline), based on the intuition that a closed form would be impossible to get and that this approximation would converge fast. Having worked on 3D geometry problems before certainly helps.

        I don't know if you are still eligible for next year (I'm not) but if so, I wish you the best of luck. :)

        • » » » » » 6 years ago, # ^ |

          Luckily, I'm still a junior, so I'll be eligible for a while more. Unluckily, the rest of my team won't be eligible :(

          With the new format, some pretty strong teammates next year, and you out of my region (haha), I think I'll have a good shot. Thanks!

» 5 years ago, # |

Minor typo on page 21, line 1: "single root x = 0" should be "single root x = 1".

» 5 years ago, # |

Seems like the holy grail of computational geometry. Really enjoying the book. Thank you very much!

» 5 years ago, # |

Appreciable work, brother.