### Polynomials

Let's start with a slightly more general problem: we're interested in finding all the real roots of these polynomials in **x**:

k₀ + k₁**x** = 0

k₀ + k₁**x** + k₂**x**² = 0

k₀ + k₁**x** + k₂**x**² + k₃**x**³ = 0

k₀ + k₁**x** + k₂**x**² + k₃**x**³ + k₄**x**⁴ = 0

k₀ + k₁**x** + k₂**x**² + k₃**x**³ + k₄**x**⁴ + k₅**x**⁵ = 0
We know from the Fundamental Theorem of Algebra that, for example, a quintic will have at most 5 real roots, so let's go ahead and write out the function prototypes:

(We also know from Galois Theory that quintics might require a numerical approximation, but that's a post for another day!)
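The prototypes were originally shown as a screenshot. Here's a sketch reconstructed from the appendix listing; the first two signatures match the code below, while the cubic, quartic, and quintic signatures are extrapolations and hypothetical:

```cpp
// Each solver writes up to N real roots into rootDest and returns how many it found.
int SolveLinear(double rootDest[1], double kConstant, double kLinear);
int SolveQuadratic(double rootDest[2], double kConstant, double kLinear, double kQuadratic);
int SolveCubic(double rootDest[3], double kConstant, double kLinear, double kQuadratic, double kCubic);
int SolveQuartic(double rootDest[4], double kConstant, double kLinear, double kQuadratic, double kCubic, double kQuartic);
int SolveQuintic(double rootDest[5], double kConstant, double kLinear, double kQuadratic, double kCubic, double kQuartic, double kQuintic);
```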


### The Quadratic Formula, Revisited

discriminant = kLinear² - 4*kQuadratic*kConstant

x = ( -kLinear ± √discriminant ) / 2 / kQuadratic

The first case that I hardly ever see handled is when kQuadratic vanishes, leading to a divide-by-zero. Actually, this case is very easy to handle if we have a family of functions:
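The code here was originally a screenshot. A minimal runnable sketch of the fallthrough, matching the appendix listing (the name `SolveQuadraticGuarded` is mine, to keep this fragment self-contained):

```cpp
#include <math.h>

#define EPSILON (1e-10)

int SolveLinear(double rootDest[1], double kConstant, double kLinear) {
    if (-EPSILON < kLinear && kLinear < EPSILON) {
        return 0;  // Degenerate: no linear term, no root to report.
    }
    rootDest[0] = -kConstant / kLinear;
    return 1;
}

// When kQuadratic vanishes, fall through to the linear solver
// instead of dividing by (2 * kQuadratic).
int SolveQuadraticGuarded(double rootDest[2], double kConstant,
                          double kLinear, double kQuadratic) {
    if (-EPSILON < kQuadratic && kQuadratic < EPSILON) {
        return SolveLinear(rootDest, kConstant, kLinear);
    }
    double discriminant = kLinear * kLinear - 4 * kQuadratic * kConstant;
    if (discriminant < 0) {
        return 0;  // No real roots.
    }
    double adder = sqrt(discriminant);
    rootDest[0] = (-kLinear - adder) / (2 * kQuadratic);
    rootDest[1] = (-kLinear + adder) / (2 * kQuadratic);
    return 2;
}
```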

Another important case is when kConstant is zero. We can see graphically that the polynomial goes through the origin:

kConstant being zero means that 0.0 is a root.

Here's the code:

Note the sneaky 1+ and +1.
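The code here was also a screenshot; this sketch reproduces the corresponding branch from the appendix listing (the name `SolveQuadraticZeroConstant` is mine). Writing 0.0 into rootDest[0], then handing rootDest + 1 to SolveLinear, reuses the linear solver on the factored-out polynomial kLinear + kQuadratic*x:

```cpp
#include <math.h>

#define EPSILON (1e-10)

int SolveLinear(double rootDest[1], double kConstant, double kLinear) {
    if (-EPSILON < kLinear && kLinear < EPSILON) {
        return 0;
    }
    rootDest[0] = -kConstant / kLinear;
    return 1;
}

// kConstant == 0 means x * (kLinear + kQuadratic*x) == 0:
// record the root at 0.0, then solve the remaining linear factor
// into the rest of the array -- the "sneaky 1+ and +1".
int SolveQuadraticZeroConstant(double rootDest[2],
                               double kLinear, double kQuadratic) {
    rootDest[0] = 0.0;
    return 1 + SolveLinear(rootDest + 1, kLinear, kQuadratic);
}
```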

### Catastrophic Cancellation

The last problem we need to fix is when the square root of the discriminant is close in magnitude to kLinear. Graphically:

This happens surprisingly often in physics calculations.

To fix this, consider the product of the roots:

rootProduct = root[0] * root[1]

rootProduct = ( -kLinear + √discriminant )/2/kQuadratic * ( -kLinear - √discriminant )/2/kQuadratic

rootProduct = ( -kLinear + √discriminant ) * ( -kLinear - √discriminant ) / 4 / kQuadratic²

Now something cool happens! Remember that ( **x** + **y** ) * ( **x** - **y** ) = **x**² - **y**²

rootProduct = ( (-kLinear)² - (√discriminant)² ) / 4 / kQuadratic²

rootProduct = ( kLinear² - discriminant ) / 4 / kQuadratic²

Expanding:

rootProduct = ( kLinear² - kLinear² + 4*kQuadratic*kConstant ) / 4 / kQuadratic²

rootProduct = ( kQuadratic*kConstant ) / kQuadratic²

rootProduct = kConstant / kQuadratic

What this means is that we can compute the larger root safely, then recover the smaller root by dividing their product by the larger root, and keep all the accuracy.
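A runnable illustration of the difference. The coefficients here are ones I've chosen to make the cancellation obvious: for x² - 10⁸x + 1 = 0 the true roots are approximately 10⁸ and 10⁻⁸. Subtracting √discriminant from -kLinear destroys the small root, while dividing the product kConstant/kQuadratic by the large root keeps it accurate:

```cpp
#include <math.h>

// Naive small root: subtracts two nearly equal numbers,
// so most of the significant digits cancel away.
static double NaiveSmallRoot(double kConstant, double kLinear, double kQuadratic) {
    double discriminant = kLinear * kLinear - 4 * kQuadratic * kConstant;
    return (-kLinear - sqrt(discriminant)) / (2 * kQuadratic);
}

// Stable small root: compute the large root first (no cancellation,
// because adder is given the same sign as -kLinear), then use
// rootProduct == kConstant / kQuadratic to recover the small root.
static double StableSmallRoot(double kConstant, double kLinear, double kQuadratic) {
    double discriminant = kLinear * kLinear - 4 * kQuadratic * kConstant;
    double adder = sqrt(discriminant);
    if (adder * kLinear > 0) {
        adder = -adder;  // Pick the sign that avoids cancellation.
    }
    double rootLarge = (-kLinear + adder) / (2 * kQuadratic);
    return kConstant / (kQuadratic * rootLarge);
}
```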

### Appendix - Quadratic formula in C++

If you're just here for the code, the following two functions are hereby licensed under CC0.

```cpp
#include <math.h>

#define EPSILON (1e-10)

// From http://missingbytes.blogspot.com/2012/04/quadratic-formula.html
int SolveLinear(double rootDest[1], double kConstant, double kLinear) {
    if (-EPSILON < kLinear && kLinear < EPSILON) {
        return 0;
    }
    rootDest[0] = -kConstant / kLinear;
    return 1;
}

// From http://missingbytes.blogspot.com/2012/04/quadratic-formula.html
int SolveQuadratic(double rootDest[2], double kConstant, double kLinear, double kQuadratic) {
    if (-EPSILON < kQuadratic && kQuadratic < EPSILON) {
        return SolveLinear(rootDest, kConstant, kLinear);
    }
    if (-EPSILON < kConstant && kConstant < EPSILON) {
        rootDest[0] = 0.0;
        return 1 + SolveLinear(rootDest + 1, kLinear, kQuadratic);
    }
    double discriminant = kLinear * kLinear - 4 * kQuadratic * kConstant;
    if (discriminant < 0) {
        return 0;
    }
    if (discriminant == 0) {  // no EPSILON check!!
        rootDest[0] = -kLinear / (2 * kQuadratic);
        return 1;
    }
    double adder = sqrt(discriminant);
    if (adder * kLinear > 0) {
        adder = -adder;
    }
    double rootLarge = (-kLinear + adder) / (2 * kQuadratic);
    rootDest[0] = rootLarge;
    rootDest[1] = kConstant / (kQuadratic * rootLarge);
    //std::sort(rootDest, rootDest + 2);
    return 2;
}
```

### Even More Math

They tell me that each mathematical formula in your blog post exactly halves your potential readership. *How does that make you feel?*

Why not talk about your feelings in the comments below?

Nice post Chris! When are you going to share your graphing software with us?

Sweet! Actually it's a work-in-progress research project I cooked up in response to the 2001 SIGGRAPH paper, "Reliable two-dimensional graphing methods for mathematical formulae with two free variables."

I added my own secret sauce for anti-aliasing, inequalities, and threading, and so the resultant code is a little hairy and not really fit for public eyes.

Mine also doesn't handle branch cuts like in the original. e.g. sin(y)=x is fine, but it chokes on y=asin(x)

If you're still keen, I could maybe send you a binary to play with? In any case, I totally recommend Jeff Tupper's paper, it's got the good stuff...

Your code screen shots remind me of Yate - do you still use it? The replacement of && with ∩ is cute; reminds me of pretty-mode for Emacs (http://www.emacswiki.org/emacs/pretty-mode.el).

Yup, it's the same code from YATE, with a few improvements and ports along the way.

I didn't know about pretty-mode in Emacs, that's pretty sweet!

I do a number of replacements, mostly to aid visual pattern matching:

-> → →

&& → ∩

|| → ∪

!= → ≠

== → ≈ // !!! not correct at all !!!

<= → ≤

>= → ≥

<< → «

>> → »

++ → <sup>++</sup>

I also used to match "<=", and the ligatures "fi" and "fl". It took a while for me to figure out that matching non-coding sequences doesn't help you write code that is more correct, i.e. it doesn't improve productivity.

It also helped me feel better about remapping "==" to a symbol which means approximately. Its wrongness is outweighed by its usefulness.

And a quick shout out to any budding language designers reading this, please consider using ":=" for the assignment operator!

That's right, Rust, I'm talking about you...

http://www.rust-lang.org