Lagrange Interpolation

Lagrange interpolation is a way of crafting a function from a set of data points.

In the past I’ve seen reference to Lagrange interpolation in relation to audio programming, such as helping make a soft knee for a limiter, but it can be used wherever you need to make a function from some data points.

What’s It Do?

Lagrange interpolation is a way of crafting a y=f(x) function from a set of (x,y) data pairs. The resulting function passes through all the data points you give it (like a Catmull-Rom spline does), so it can be used to find a function that interpolates between data sets.

You can’t give two value pairs that have the same x value, but the data points don’t have to be evenly spaced.

Also, if you give N data points, you’ll get out a function that is an N-1 degree polynomial. So, if you interpolate two data points, you’ll get a degree 1 polynomial (a line). If you interpolate three data points, you’ll get a degree 2 polynomial (a quadratic).

The function will be quite messy, but you can use algebra, or wolframalpha.com (or the like), to simplify it for you.

Lagrange interpolation is subject to Runge’s Phenomenon, so the more data points you have, the more the interpolation tends to get “squirrelly” near the edges and shoot off up high or down low, instead of smoothly interpolating between data values.

How’s It Do It?

Well, to make any kind of curve from data points, if we want the curve to pass through those data points, one way would be to come up with a set of functions to multiply each data point by.

Each function must evaluate to 1 when the curve is at that control point, and to zero when the curve is at any other control point. Between control points, the function can take any value, but if you make it continuous and smooth, the curve will be continuous and smooth, so that’s usually what is desired.

When we have those functions, to get a point on the curve we just multiply each control point by its corresponding function (called a basis function), and we sum up the results.

The pseudocode below is how this works and is the basic functionality of most common curve types:

// The basic way to evaluate most any type of curve
float PointOnCurve (float t, float *controlPoints, int numControlPoints)
{
    float value = 0.0f;

    for (int i = 0; i < numControlPoints; ++i)
        value += controlPoints[i] * ControlPointFunction(i, t);

    return value;
}

float ControlPointFunction (int i, float t)
{
  // return the ith control point function evaluated at time t.
  // aka return f(t) for the ith control point.
}

What makes Lagrange interpolation different than other curve types is the basis functions it uses.

The Math

If you aren’t used to seeing a capital pi, or a laplacian style cursive l in equations, it’s about to get a bit mathy!

If you feel like skipping to the next section, I don’t blame you, but if you are feeling brave, you should try and follow along, because I’m going to slowly walk through each and every symbol to help explain what’s going on and why.

Let’s say that you want to be able to interpolate between k+1 data points:

(x_0, y_0)\ldots(x_k, y_k)

The formula for calculating a Lagrange interpolated value is this:

L(x) := \sum_{j=0}^{k} y_j \ell_j(x)

The capital sigma (\sum_{j=0}^{k}) just means that we are going to loop a variable j from 0 to k (including k), and we are going to sum up the total of everything on the right for all values of j. When you see a capital sigma, think sum (note they both start with an s).

The next thing after the sigma is y_j. That is just the y value from our jth control point. That is essentially controlPoints[j].y.

After that comes the last part \ell_j(x). That is just the function for the jth control point that we multiply the control point by (aka the basis function), evaluated for the specific value x.

Since there is no operator between this function and the control point, that means we multiply them together. So yeah… that crazy math just says “multiply each control point by its basis function, and sum up the results”, just like our pseudo code above does!

The second equation we need to look at is the definition of the basis functions for each control point. Here is the formula that describes the jth basis function, for the jth control point:

\ell_j(x) := \prod_{\begin{smallmatrix}0\le m\le k\\ m\neq j\end{smallmatrix}} \frac{x-x_m}{x_j-x_m}

First is the capital pi \prod_{\begin{smallmatrix}0\le m\le k\\ m\neq j\end{smallmatrix}}. This means that we are going to do a loop, but instead of adding the results of the loop, we are going to multiply them together. Where a capital sigma means sum, capital pi means product.

The notation for product is a bit different here than in the sigma though, which may be a bit tricky to read at first. Instead of explicitly saying that m should go from 0 to k, the notation 0\le m\le k says that implicitly. That same notation can be used with sigma, or the more explicit style notation could be used with pi.

The pi also has this notation next to it m\neq j. That means that the case where m equals j should be skipped.

Finally, on to the second part: \frac{x-x_m}{x_j-x_m}. This part is pretty easy to read. x is the parameter to the function of course, x_m is just controlPoints[m].x where m is the index variable of our product loop (\prod), and x_j is just controlPoints[j].x where j is the index variable of our summation loop (\sum).

Let’s say that k was 2 because we had 3 data pairs. Our three basis functions would be:

\ell_0(x) := \frac{x-x_1}{x_0-x_1} * \frac{x-x_2}{x_0-x_2}
\ell_1(x) := \frac{x-x_0}{x_1-x_0} * \frac{x-x_2}{x_1-x_2}
\ell_2(x) := \frac{x-x_0}{x_2-x_0} * \frac{x-x_1}{x_2-x_1}

Which means that our final Lagrange interpolation function would be:

L(x) := y_0 * \frac{x-x_1}{x_0-x_1} * \frac{x-x_2}{x_0-x_2} + y_1 * \frac{x-x_0}{x_1-x_0} * \frac{x-x_2}{x_1-x_2} + y_2 * \frac{x-x_0}{x_2-x_0} * \frac{x-x_1}{x_2-x_1}

That is quite a mouthful, but hopefully you understand how we came up with it!

x_i is just controlPoints[i].x and y_i is just controlPoints[i].y.
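
To make this concrete, here’s a quick worked example using three made-up data pairs: (0,1), (1,3) and (2,2). Plugging them into the formulas gives:

L(x) := 1 * \frac{(x-1)(x-2)}{(0-1)(0-2)} + 3 * \frac{(x-0)(x-2)}{(1-0)(1-2)} + 2 * \frac{(x-0)(x-1)}{(2-0)(2-1)}

Multiplying that out and collecting terms simplifies it down to:

L(x) := -1.5x^2 + 3.5x + 1

You can check that L(0)=1, L(1)=3 and L(2)=2, so the curve really does pass through all three points.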

Math Intuition

The intuition here is that we need to come up with a set of functions to multiply each control point by, such that when the function’s x value is at that control point’s x value, the function evaluates to 1, and when it is at a different control point’s x value, the function evaluates to 0. The rest of the time, the function can evaluate to whatever it wants, although again, keeping it smooth makes for a nicer curve.

So the first problem is, how do we make a function evaluate to 0 when x is at a different control point?

The easy way would be to multiply a bunch of terms together of the form (x - x_i), but make sure not to include the x of the actual control point that we are multiplying against.

That is exactly what the numerator in the product notation of the basis function does.

\ell_j(x) := \prod_{\begin{smallmatrix}0\le m\le k\\ m\neq j\end{smallmatrix}} \frac{x-x_m}{x_j-x_m}

Note that j is the index of the current control point that we are calculating the basis function for. Any x that isn’t the x value of one of the other control points will evaluate to non-zero.

The denominator value is there so that when x is the value of the control point that we care about, that the function will evaluate to 1.

It does this by figuring out what the value of the numerator will be when x is at the control point, and dividing by that value, so that the function evaluates to 1 at that x value.
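
For instance, with control point x values of 0, 1 and 2, the basis function for the first control point is:

\ell_0(x) := \frac{(x-1)(x-2)}{(0-1)(0-2)} = \frac{(x-1)(x-2)}{2}

Plug in x=0 and you get 2/2 = 1. Plug in x=1 or x=2 and one of the numerator terms is zero, so the whole thing is 0. Exactly the behavior we wanted.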

Not too much to it. Pretty simple stuff, but powerful as well!

Extending to 2D and Beyond

Lagrange interpolation is a one dimensional interpolation scheme, meaning that if you have data points of the form (x,y), it can give you an interpolated y value based on an x value you give it. The interpolation it does can never give two different y values for the same x.

If you want to extend this technique to interpolating a curve through two dimensional data points, or even higher, you need to do interpolation independently for each axis and use a “parametric” value for that axis.

For instance, if you needed to interpolate a curve through 3 dimensional points, you would have data points like this:

X Points = (t_{x,0}, x_0)\ldots(t_{x,k}, x_k)
Y Points = (t_{y,0}, y_0)\ldots(t_{y,k}, y_k)
Z Points = (t_{z,0}, z_0)\ldots(t_{z,k}, z_k)

And then you would interpolate on each axis by the t value to get your X, Y and Z axis values. This should look familiar, because this is how higher dimensional Bezier curves work; you evaluate them per axis based on a parametric value per axis (s,t,u,etc).

You could use the same t values on each axis, or they could be completely independent. You don’t even need to have the same number of points for each axis!

You might wonder how this differs from the standard interpolation in the 2D case. Check the demos in the link section below to really get a grasp of the difference, but in essence, with standard (1D) interpolation, you can never have two x values that evaluate to 2 different y values. Extending it like the above into two dimensions by parameterizing each axis lets you get around that limitation and you can make true 2d shapes.

Lastly, it is possible to make Lagrange interpolated surfaces! I won’t go into the details (perhaps a future post!), but if you know how to make a bezier rectangle by doing a tensor product (basically having X axis Bezier curves, multiplied by Y axis Bezier curves), you can accomplish a Lagrange surface in a really similar way.

Sample Code

This sample code is written for readability, but could easily be optimized for faster execution. Also, from what I hear, the second form of Barycentric Lagrange Interpolation is touted as the fastest form of Lagrange interpolation, since many values can be pre-calculated and re-used for different values of x.

#include <stdio.h>
#include <vector>

struct SPoint
{
    float x;
    float y;
};

typedef std::vector<SPoint> TPointList;

void WaitForEnter ()
{
    printf("Press Enter to quit");
    fflush(stdin);
    getchar();
}

// calculates the lagrange basis function y value for control point "controlPointIndex" at x value "x"
float LagrangeBasis (const TPointList& pointList, size_t controlPointIndex, float x)
{
    // this is the pi "inner loop" multiplication work
    float value = 1.0f;
    for (size_t i = 0, c = pointList.size(); i < c; ++i) {
        if (i != controlPointIndex)
            value *= (x - pointList[i].x) / (pointList[controlPointIndex].x - pointList[i].x);
    }
    return value;
}

// returns a value at x, using lagrange interpolation over the specified list of (x,y) pairs
float LagrangeInterpolate (const TPointList& pointList, float x)
{
    // this is the sigma "outer loop" summation work
    float sum = 0.0f;
    for (size_t controlPointIndex = 0, c = pointList.size(); controlPointIndex < c; ++controlPointIndex)
        sum += pointList[controlPointIndex].y * LagrangeBasis(pointList, controlPointIndex, x);
    return sum;
}

int main (int argc, char **argv)
{
    // show some 1d interpolated values
    // note that the points don't need to be sorted on X, but it makes for easier to read examples
    {
        // (x,y) pairs
        const TPointList points =
        {
            { 0.0f, 1.1f },
            { 1.6f, 8.3f },
            { 2.3f, 6.5f },
            { 3.5f, 4.7f },
            { 4.3f, 3.1f },
            { 5.9f, 7.5f },
            { 6.8f, 0.0f }
        };

        // show values interpolated from x = 0, to x = max x
        printf("1d interpolated values.  y = L(t)n");
        const float c_numPoints = 10;
        for (int i = 0; i < c_numPoints; ++i)
        {
            float percent = ((float)i) / (float(c_numPoints - 1));
            float x = points.back().x * percent;
            float y = LagrangeInterpolate(points, x);
            printf("  (%0.2f, %0.2f)n", x, y);
        }
        printf("n");
    }

    // show some 2d interpolated values
    // also note that x and y don't have to have matching t values!
    {
        // (t, x) pairs
        const TPointList pointsX =
        {
            { 0.0, 0.0f},
            { 1.0, 1.6f},
            { 2.0, 2.3f},
            { 3.0, 3.5f},
            { 4.0, 4.3f},
            { 5.0, 5.9f},
            { 6.0, 6.8f}
        };
        // (t, y) pairs
        const TPointList pointsY =
        {
            { 0.0f, 1.1f },
            { 1.0f, 8.3f },
            { 2.0f, 6.5f },
            { 3.0f, 4.7f },
            { 4.0f, 3.1f },
            { 5.0f, 7.5f },
            { 6.0f, 0.0f }
        };

        // show values interpolated from t = 0, to t = max t, on each axis
        printf("2d interpolated values.  x = L(t_x), y = L(t_y)n");
        const float c_numPoints = 10;
        for (int i = 0; i < c_numPoints; ++i)
        {
            float percent = ((float)i) / (float(c_numPoints - 1));

            // calculate x
            float tx = pointsX.back().x * percent;
            float x = LagrangeInterpolate(pointsX, tx);

            // calculate y
            float ty = pointsY.back().x * percent;
            float y = LagrangeInterpolate(pointsY, ty);

            printf("  (%0.2f, %0.2f)n", x, y);
        }
        printf("n");
    }

    WaitForEnter();
    return 0;
}

And here’s the program’s output:

Final Notes

Now that you know how to do all this stuff I wanted to share a couple more pieces of info.

Firstly, it’s kind of weird to call this “Lagrange Interpolation”. A better term is to call this the “Lagrange Form of Polynomial Interpolation”. The reason for that is that if you have some number of data points, there exists only one unique minimal order polynomial (lowest degree of x possible) that fits those points. That is due to the “unisolvence theorem” that you can read more about here: Wikipedia: Polynomial interpolation.

What that means is that if you were to use a different type of polynomial interpolation – such as newton interpolation – the result you get out is algebraically equivalent to the one you’d get from this Lagrange form. There are pros and cons to using different forms of polynomials, but that’s out of the scope of this post so go read about them if you are interested!

Speaking of that, even though this sample code is focused on interpolation using the Lagrange form, this technique is really great at being able to just come up with a simpler f(x) function that passes through specific data points. In this way, you can kind of “bake out” a custom f(x) function to do interpolation for specific values, that doesn’t need all the moving parts of the Lagrange form. For example, if you make the formula for Lagrange interpolation of 3 specific value pairs and then simplify, you will get out a simple quadratic function in the form of y=Ax^2+Bx+C!
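
For example (using made-up data pairs), interpolating (0,0), (1,1) and (2,4) gives (the y=0 point contributes nothing, so only two terms survive):

L(x) := 1 * \frac{x(x-2)}{(1)(-1)} + 4 * \frac{x(x-1)}{(2)(1)} = -x^2+2x+2x^2-2x = x^2

So instead of shipping all the Lagrange form machinery, you could just ship y=x^2.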

Links

Here are some interactive demos I made to let you play with Lagrange interpolation to get a feel for how it works, and its strengths and weaknesses:
One Dimensional Lagrange Interpolation
Two Dimensional Lagrange Interpolation

I also found these links really helpful in finally understanding this topic:
Lagrange Interpolation
Lagrange’s Interpolation Formula

Want to follow the rabbit hole a little deeper? Check out how sinc interpolation relates to the Lagrange form!
The ryg blog: sinc and Polynomial interpolation

The De Casteljau Algorithm for Evaluating Bezier Curves

Over the past year or so I’ve been digging fairly deeply into curves, mostly into Bezier curves specifically.

While digging around, I’ve found many mentions of the De Casteljau algorithm for evaluating Bezier curves, but never much in the way of a formal definition of what the algorithm actually is, or practical examples of it working.

Now that I understand the De Casteljau algorithm, I want to share it with you folks, and help there be more useful google search results for it.

The De Casteljau algorithm is more numerically stable than evaluating Bernstein polynomials, but it is slower. Which method of evaluating Bezier curves is more appropriate is based on your specific usage case, so it’s important to know both.

If you are looking for the mathematical equation of a Bezier curve (the Bernstein form which uses Bernstein basis functions), you have come to the right place, but the wrong page! You can find that information here: Easy Binomial Expansion & Bezier Curve Formulas

Onto the algorithm!

The De Casteljau Algorithm

The De Casteljau algorithm is actually pretty simple. If you know how to do a linear interpolation between two values, you have basically everything you need to be able to do this thing.

In short, the algorithm to evaluate a Bezier curve of degree N is to just linearly interpolate between two curves of degree N-1. Below are some examples to help show some details.

The simplest version of a Bezier curve is a linear curve, which has a degree of 1. It is just a linear interpolation between two points A and B at time t, where t is a value from 0 to 1. When t has a value of 0, you will get point A. When t has a value of 1, you will get point B. For values of t between 0 and 1, you will get points along the line between A and B.

The equation for this is super simple and something you’ve probably seen before: P(t) = A*(1-t) + B*t.

The next simplest version of a Bezier curve is a quadratic curve, which has a degree of 2 and control points A,B,C. A quadratic curve is just a linear interpolation between two curves of degree 1 (aka linear curves). Specifically, you take a linear interpolation between A,B, and a linear interpolation between B,C, and then take a linear interpolation between those two results. That will give you your quadratic curve.

The next version is a cubic curve which has a degree of 3 and control points A,B,C,D. A cubic curve is just a linear interpolation between two quadratic curves. Specifically, the first quadratic curve is defined by control points A,B,C and the second quadratic curve is defined by control points B,C,D.

The next version is a quartic curve, which has a degree of 4 and control points A,B,C,D,E. A quartic curve is just a linear interpolation between two cubic curves. The first cubic curve is defined by control points A,B,C,D and the second cubic curve is defined by control points B,C,D,E.

So yeah, a degree N Bezier curve is made by linearly interpolating between two Bezier curves of degree N-1.
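
Here’s that idea as a minimal C++ sketch for 1d curves with any number of control points (the function name is mine; the sample code section below has the recursion unrolled for specific degrees):

#include <vector>

// evaluate a 1d Bezier curve with any number of control points, by repeatedly
// lerping adjacent points in place until only one point remains.
// assumes at least one control point.
float BezierDeCasteljau (std::vector<float> points, float t)
{
    for (size_t degree = points.size() - 1; degree > 0; --degree)
        for (size_t i = 0; i < degree; ++i)
            points[i] = points[i] * (1.0f - t) + points[i + 1] * t;
    return points[0];
}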

Redundancies

While simple, the De Casteljau algorithm has some redundancies in it, which is the reason that it is usually slower to calculate than the Bernstein form. The diagram below shows how a quartic curve with control points A,B,C,D,E is calculated via the De Casteljau algorithm.

Compare that to the Bernstein form (where s is just (1-t))

P(t) = A*s^4 + B*4s^3t + C*6s^2t^2 + D*4st^3 + E*t^4

The Bernstein form removes the redundancies and gives you the values you want with the least amount of moving parts, but it comes at the cost of math operations that can give you less precision in practice, versus the tree of lerps (linear interpolations).

Sample Code

Pretty animations and intuitive explanations are all well and good, but here’s some C++ code to help really drive home how simple this is.

#include <stdio.h>

void WaitForEnter()
{
    printf("Press Enter to quit");
    fflush(stdin);
    getchar();
}

float mix(float a, float b, float t)
{
    // degree 1
    return a * (1.0f - t) + b*t;
}

float BezierQuadratic(float A, float B, float C, float t)
{
    // degree 2
    float AB = mix(A, B, t);
    float BC = mix(B, C, t);
    return mix(AB, BC, t);
}

float BezierCubic(float A, float B, float C, float D, float t)
{
    // degree 3
    float ABC = BezierQuadratic(A, B, C, t);
    float BCD = BezierQuadratic(B, C, D, t);
    return mix(ABC, BCD, t);
}

float BezierQuartic(float A, float B, float C, float D, float E, float t)
{
    // degree 4
    float ABCD = BezierCubic(A, B, C, D, t);
    float BCDE = BezierCubic(B, C, D, E, t);
    return mix(ABCD, BCDE, t);
}

float BezierQuintic(float A, float B, float C, float D, float E, float F, float t)
{
    // degree 5
    float ABCDE = BezierQuartic(A, B, C, D, E, t);
    float BCDEF = BezierQuartic(B, C, D, E, F, t);
    return mix(ABCDE, BCDEF, t);
}

float BezierSextic(float A, float B, float C, float D, float E, float F, float G, float t)
{
    // degree 6
    float ABCDEF = BezierQuintic(A, B, C, D, E, F, t);
    float BCDEFG = BezierQuintic(B, C, D, E, F, G, t);
    return mix(ABCDEF, BCDEFG, t);
}

int main(int argc, char **argv)
{
    struct SPoint
    {
        float x;
        float y;
    };

    SPoint controlPoints[7] =
    {
        { 0.0f, 1.1f },
        { 2.0f, 8.3f },
        { 0.5f, 6.5f },
        { 5.1f, 4.7f },
        { 3.3f, 3.1f },
        { 1.4f, 7.5f },
        { 2.1f, 0.0f },
    };

    //calculate some points on a sextic curve!
    const float c_numPoints = 10;
    for (int i = 0; i < c_numPoints; ++i)
    {
        float t = ((float)i) / (float(c_numPoints - 1));
        SPoint p;
        p.x = BezierSextic(controlPoints[0].x, controlPoints[1].x, controlPoints[2].x, controlPoints[3].x, controlPoints[4].x, controlPoints[5].x, controlPoints[6].x, t);
        p.y = BezierSextic(controlPoints[0].y, controlPoints[1].y, controlPoints[2].y, controlPoints[3].y, controlPoints[4].y, controlPoints[5].y, controlPoints[6].y, t);
        printf("point at time %0.2f = (%0.2f, %0.2f)n", t, p.x, p.y);
    }

    WaitForEnter();
}

Here’s the output of the program:

Thanks to wikipedia for the awesome Bezier animations! Wikipedia: Bézier curve

What is Pre-multiplied Alpha and Why Does it Matter?

The usual equation for blending two pixels with alpha is to use a source factor of “source alpha” and a destination factor of “one minus source alpha”.

That results in this equation:

\bf{Out.RGB} = In.RGB * In.A + Out.RGB * (1.0 - In.A)

In other words, it’s just a linear interpolation between In and Out, using the alpha of the pixel you are writing to determine the weighting for the lerp.

Just to be clear, Out is the pixel in your frame buffer (ie the final result that shows up on your monitor) and In is the new pixel you are trying to write or combine with the output image.

If you use pre-multiplied alpha, all that means is that In.RGB is already multiplied by In.A, which results in slightly less math:

\bf{Out.RGB} = In.RGB + Out.RGB * (1.0 - In.A)

Less math for the same results is always a good thing due to increased efficiency, but it also gives higher quality results in the case of using mips. For more info on that check out this link: NVIDIA GameWorks – Alpha Blending: To Pre or Not To Pre

To achieve this in OpenGL or DirectX with pre-multiplied alpha textures, you just use a source factor of “one” and leave the destination factor at “one minus source alpha”.
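
For example, in OpenGL that looks like the snippet below (just a fragment, assuming you already have a context and blending-relevant state set up; the DirectX equivalents are the D3D11_BLEND_ONE / D3D11_BLEND_INV_SRC_ALPHA blend factors):

// pre-multiplied alpha blending: source factor "one",
// destination factor "one minus source alpha"
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

// for comparison, regular (post-multiplied) alpha blending would be:
// glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);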

If you are using alpha to write to a render target that also has an alpha channel, the math improvement is even better. Here is the equation with regular (post-multiplied) alpha in that scenario for combining the RGB portion of the pixels:

\bf{Out.RGB} = \frac{In.RGB * In.A + Out.RGB * Out.A * (1.0 - In.A)}{In.A + (1.0 - In.A) * Out.A}

When working with pre-multiplied alpha it just goes back to the lerp:

\bf{Out.RGB} = In.RGB + Out.RGB * (1.0 - In.A)

When blending to a target with alpha, the equation to combine alpha is the old familiar lerp:

\bf{Out.A} = In.A + Out.A * (1.0 - In.A)

That makes the lerp form of color combining even nicer since it means the same math for all color channels of the pixel:

\bf{Out} = In + Out * (1.0 - In.A)
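
In plain C++, that combined blend could look like this minimal sketch (the SColor struct and function name are just mine, for illustration):

struct SColor
{
    float r, g, b, a;
};

// blend a pre-multiplied alpha source pixel "in" over a destination pixel "out".
// note that it's the same math for every channel, alpha included.
SColor BlendPremultiplied (const SColor& in, const SColor& out)
{
    const float oneMinusInA = 1.0f - in.a;
    SColor result;
    result.r = in.r + out.r * oneMinusInA;
    result.g = in.g + out.g * oneMinusInA;
    result.b = in.b + out.b * oneMinusInA;
    result.a = in.a + out.a * oneMinusInA;
    return result;
}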

Confused about why alpha combining using lerp makes sense? It makes most sense to me when thinking about it like this… if you had something that was half opaque (Out.A = 0.5), then you looked at that through something that was 3/4 opaque (In.A = 0.75), only 25% of the light would get past the top layer (0.25) to reach the bottom layer. Only 50% of the light that reached the bottom layer gets through, so we cut that 25% in half to get 12.5% (0.125). Since alpha really means “opaqueness” (1.0 means no transparency), that means that the combined alpha is 1 - 0.125, or 0.875.

If you plug the numbers into the above equation you get the same result:

\bf{Out.A} = 0.75 + (1.0 - 0.75) * 0.5
\bf{Out.A} = 0.75 + 0.25 * 0.5
\bf{Out.A} = 0.75 + 0.125
\bf{Out.A} = 0.875

This computer graphics stack exchange question has some more great info on what premultiplied alpha can give you. It’s much more than I wrote here:
Does premultiplied alpha give order independent transparency?

Here’s a fun question… does alpha represent transparency, or does it represent how much of a pixel is covered by an opaque object? To find out the answer give this a read!
Journal of Computer Graphics Techniques: Interpreting Alpha

A Fifth Way to Calculate Sine Without Trig

In a previous post, I showed Four Ways to Calculate Sine Without Trig. While reading up on rational bezier curves, I came across a 5th way!

A rational bezier curve post will be coming in the (hopefully near) future, so I won’t go into too many details of that, but the important thing to know is that with a rational 2d quadratic bezier curve, you can EXACTLY represent any conic section, including circle arcs. With a regular (integral) bezier curve, you can’t.

You can in fact see that in action in an interactive HTML5 demo I made here: Rational Quadratic Bezier Curve. Note that on my main page, I have links to other 1d/2d bezier/trig rational/integral interactive curve demos. Go check them out if you are interested in seeing more: http://demofox.org.

After I made the 2d quadratic bezier demo I realized something: the x and y positions of that curve are calculated independently of each other, and if you wanted to draw a circle in a program, you’d also calculate the x and y positions independently, using sine for the y axis value and cosine for the x axis value. That means that if rational curves can, as people say, perfectly represent conic sections, then the two 1d curves that control each axis of that circle arc curve must be exact representations of sine and cosine, at least for the 90 degrees of that arc.

It turns out that this is true enough. I am seeing some variation, but have an open question on math stack exchange to try to get to the bottom of that.

Let’s do some math to come up with our curve based sine equation!

Math Derivation – Skip If You Want To

A rational Bezier equation is defined like this, which is basically just one Bezier curve divided by another:

\bf{B}(t) = \frac{\sum\limits_{i=0}^n\binom {n} {i}(1-t)^{n-i}t^iW_iP_i}{\sum\limits_{i=0}^n\binom {n} {i}(1-t)^{n-i}t^iW_i}

So, here is the equation for a rational quadratic bezier curve (n = 2):

\bf{B}(t) = \frac{(A*W_1*(1-t)^2 + B*W_2*2t(1-t) + C*W_3*t^2)}{(W_1*(1-t)^2 + W_2*2t(1-t) + W_3*t^2)}

A,B,C are the control points and W_1,W_2,W_3 are the weightings associated with those control points.

For the first 90 degrees of sine, A=0, B=1, C = 1, W_1 = 1, W_2 = cos(arcAngle/2), W_3 = 1.

arcAngle is 90 degrees in our case, so W_2=cos(45) or W_2=1/sqrt(2) or W_2=0.70710678118.

If we plug those values in and simplify, and treat it as an explicit (1d) equation of y = f(x), instead of P=f(t), we come up with the equation in the next section.

Final Equation

y = \frac{(0.70710678118*2x(1-x) + x^2)}{((1-x)^2 + 0.70710678118*2x(1-x) + x^2)}

or, so you can copy paste it:
y=(0.70710678118*2x(1-x) + x^2) / ((1-x)^2 + 0.70710678118*2x(1-x) + x^2)

That gives us the first quadrant (first 90 degrees) of sine. To get the second quadrant, we just horizontally flip the first quadrant. To get the third quadrant, we vertically flip the first quadrant. To get the fourth quadrant, we vertically and horizontally flip the first quadrant.
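
Here’s a C++ sketch of that, using the quadrant flipping described above (the function name is mine, and the angle is in radians):

#include <math.h>

// approximate sine with the rational quadratic curve for the first quadrant,
// flipping it around to get the other three quadrants.
float CurveSin (float angle)
{
    const float c_halfPi = 3.14159265f * 0.5f;

    // wrap the angle into [0, 2pi)
    angle = fmodf(angle, 4.0f * c_halfPi);
    if (angle < 0.0f)
        angle += 4.0f * c_halfPi;

    // which quadrant are we in, and how far through it (0 to 1) are we?
    int quadrant = (int)(angle / c_halfPi);
    float x = (angle - (float)quadrant * c_halfPi) / c_halfPi;

    // the second and fourth quadrants are horizontally flipped
    if (quadrant == 1 || quadrant == 3)
        x = 1.0f - x;

    // evaluate the first quadrant curve
    const float w = 0.70710678118f;
    float numerator = w * 2.0f * x * (1.0f - x) + x * x;
    float denominator = (1.0f - x) * (1.0f - x) + w * 2.0f * x * (1.0f - x) + x * x;
    float y = numerator / denominator;

    // the third and fourth quadrants are vertically flipped
    return (quadrant >= 2) ? -y : y;
}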

Equation In Action

Here’s a screenshot from a shadertoy where I implemented this. red = true sine value, green = the value made with the curve, yellow = where they overlap and are equal. Any place you see red or green peeking out means they aren’t quite equal. I sort of expected it to be exactly dead on correct, but like I said, I have an open math stack exchange question to find out why it isn’t and will update this with my findings if they are relevant or interesting. It still is pretty darn good though.

Also, this formula may not be the most efficient way to calculate sine without trig that I’ve shown, but it is interesting, and that’s why I wanted to show it (:

You can see this in action at Shadertoy: Sin Without Trig IV

Easy Binomial Expansion & Bezier Curve Formulas

In this post we are going to explore two things:

  1. Learn how to easily be able to expand (x+y)^N for ANY value N using the binomial theorem (well, N has to be a positive integer…)
  2. Use that for coming up with the equation for a Bezier curve of any degree (any number of control points)

Binomial Theorem

The binomial theorem is a way of easily expanding (x+y)^N. As you probably know, if N is 2, you can use FOIL (first, outer, inner, last). If N gets above 2, things start to get hairier though and more tedious.

Let’s check out how the values look when N is 1, 2 and 3:

N=1: x+y
N=2: x^2+2xy+y^2
N=3: x^3+3x^2y+3xy^2+y^3

To help make the patterns more visible, let’s make things a bit more explicit:

N=1: 1x^1y^0 + 1x^0y^1
N=2: 1x^2y^0 + 2x^1y^1 + 1x^0y^2
N=3: 1x^3y^0 + 3x^2y^1 + 3x^1y^2 + 1x^0y^3

The easiest pattern to see is that there are N+1 terms.

The next pattern that might jump out at you looking at the above table, is that in the first term, y starts out at power 0, and the next term has a power of 1, and the power of y keeps increasing by 1 for each term left to right until we run out of terms. Similarly, x starts out at a power of 0 on the last term, has a power of 1 on the second last term, and counts up going from right to left, until we run out of terms.

Those patterns explain how many terms there are and the powers of x and y for each term. There is one piece left of the puzzle though, which is those constants that each term is multiplied by.

Those constants are called the “binomial coefficients” and they also appear as rows on pascal’s triangle. In short, each number in pascal’s triangle is a sum of the numbers directly above it. Below is an image to show what I’m talking about. Check the links section at the end for more detailed info.

So if you notice, the 2nd row of pascal’s triangle is “1,1”. Those are the constants multiplied by each term for the N=1 case. The next row down is “1,2,1”, which is the constants multiplied by each term for the N=2 case. Lastly, the next row down (fourth row) is “1,3,3,1”, which is the constants multiplied by each term for the N=3 case. Essentially, you just use the (N+1)th row of pascal’s triangle to come up with the constants to multiply each term by.

There are algorithms for calculating the N+1th row of the pascal’s triangle. I have one such algorithm in the example code below, but also check the links section for more info on that.

TADA! That is the binomial theorem.

Bezier Curve Equation Generalized (Again)

You may remember that I previously showed you a generalized way to get the equation for a bezier curve of any order in Bezier Curves Part 2 (and Bezier Surfaces).

It wasn’t too difficult, but it DID require you to manually expand (x+y)^N. Now that we know how to do that more easily, thanks to the section above, let’s revise how we come up with bezier curves of any order.

What you do is evaluate P=(s+t)^N, and then multiply each term by a unique control point (A,B,C,etc). After you have your equation, you can optionally replace all s‘s with (1-t), or you can just remember that when you evaluate the equation, since the first form is less messy.

Boom, you are done, that’s all!

Here’s the quadratic (N=2) version to see the end result:
P = As^2 + 2Bst + Ct^2

Formalized Mathematical Description

The above makes a lot of sense and is easy to understand, wouldn’t it be neat if math could describe it that way?

Well… it turns out it can, and does. Here is the formal “explicit definition” of bezier curves:
\sum\limits_{i=0}^n\binom {n} {i}(1-t)^{n-i}t^iP_i

If you remember from above, s is the same as (1-t) so you could also write it like this:
\sum\limits_{i=0}^n\binom {n} {i}s^{n-i}t^iP_i

The \sum\limits_{i=0}^n (Sigma, going from 0 to n) means that you are going to sum (add) together n+1 terms (it includes both 0 and n), using i as the loop variable in your summation. It’s basically a for loop, adding together each iteration of the for loop. Everything that comes after is what happens during each iteration of the for loop that gets summed together.

The \binom {n} {i} part means to take the ith number from the (n+1)th row of Pascal’s triangle. More formally, this specifies to use specific binomial coefficients.

The next part s^{n-i}t^i means that you multiply the binomial coefficient by s to the (n-i)th power, and t to the ith power. This is the same as saying s starts at power n and counts down left to right, while t starts at 0 and counts up left to right. This fits the pattern we saw above in binomial expansion.

Lastly comes P_i which means to use the ith P. So, P is basically an array with n+1 elements. This array is the control points, so each P is a different control point.

So, to sum it all up, it’s saying to make a for loop from 0 to n where you are going to add up the results of each loop iteration. For each iteration, where i is the index variable of the loop, you are going to:

  1. Start with the ith item from the (n+1)th row of pascals triangle
  2. Multiply that by s^(n-i)t^i
  3. Multiply that by a unique control point

There ya go, formalized math descriptions with crazy symbols can actually mean something useful. Who would have thought?!

Here is the quadratic bezier curve again for you to look at (quadratic means n = 2), in a form that will help you when thinking about the steps above:
P = 1s^2t^0A + 2s^1t^1B + 1s^0t^2C

And when it’s cleaned up, it looks more familiar:
P = As^2 + 2Bst + Ct^2

Example Code

#include <stdio.h>
#include <vector>

//=====================================================================================
void WaitForEnter ()
{
    printf("nPress Enter to quit");
    fflush(stdin);
    getchar();
}

//=====================================================================================
std::vector<unsigned int> PascalsTriangleRow(int row)
{
	std::vector<unsigned int> ret;
	ret.push_back(1);
    for (int i = 0; i < row; ++i)
        ret.push_back(ret[i] * (row - i) / (i + 1));
	return ret;
}

//=====================================================================================
int main(void)
{
	printf("Expand (x+y)^N and give Bezier curve order N\n\nPlease enter N:\n");
	unsigned int N;
	// keeping it limited to a sane value. also protects against -1 being huge.
	// Also, so we don't run out of letters for control points!
	if (scanf("%u",&N)==1 && N < 26)
	{
		auto row = PascalsTriangleRow(N);

		// binomial expansion
		printf("nBinomial Expansion:n");
		if (N == 0)
		{
			printf("1");
		}
		else
		{
			for (unsigned int i = 0, c = row.size(); i < c; ++i)
			{
				if (i > 0)
					printf(" + ");

				if (row[i] != 1)
					printf("%u", row[i]);

				unsigned int xPow = N - i;
				if (xPow > 0)
				{
					printf("x");
					if (xPow > 1)
						printf("^%i", xPow);
				}

				unsigned int yPow = i;
				if (yPow > 0)
				{
					printf("y");
					if (yPow > 1)
						printf("^%i", yPow);
				}
			}
		}

		// bezier curves
		printf("nnBezier Curve Order %u:nP = ", N);
		if (N == 0)
		{
			printf("A");
		}
		else
		{
			for (unsigned int i = 0, c = row.size(); i < c; ++i)
			{
				if (i > 0)
					printf(" + ");

				// control point name
				printf("%c*",'A'+i);

				if (row[i] != 1)
					printf("%u", row[i]);

				unsigned int sPow = N - i;
				if (sPow > 0)
				{
					printf("s");
					if (sPow > 1)
						printf("^%i", sPow);
				}

				unsigned int tPow = i;
				if (tPow > 0)
				{
					printf("t");
					if (tPow > 1)
						printf("^%i", tPow);
				}
			}
		}

		// bezier curves
		printf("nnOr:nP = ", N);
		if (N == 0)
		{
			printf("A");
		}
		else
		{
			for (unsigned int i = 0, c = row.size(); i < c; ++i)
			{
				if (i > 0)
					printf(" + ");

				// control point name
				printf("%c*",'A'+i);

				if (row[i] != 1)
					printf("%u", row[i]);

				unsigned int sPow = N - i;
				if (sPow > 0)
				{
					printf("(1-t)");
					if (sPow > 1)
						printf("^%i", sPow);
				}

				unsigned int tPow = i;
				if (tPow > 0)
				{
					printf("t");
					if (tPow > 1)
						printf("^%i", tPow);
				}
			}
		}

		printf("n");
	}
	else
	{
		printf("Invalid value for Nn");
	}
	WaitForEnter();
}

Example Output

Here are some runs of the program:





Links

Note that even though we talked about binomials, and bezier curves, these techniques can be expanded to trinomials and bezier triangles – and beyond! (hint: there is such thing as Pascal’s pyramid!)

Here are some links to more info about some of the topics talked about above:
Wikipedia: Binomial Theorem
Wikipedia: Pascal’s Triangle
Wikipedia: Binomial Coefficient
StackOverflow: How to efficiently calculate a row in pascal’s triangle?

Bilinear Filtering & Bilinear Interpolation

Take a look at the picture below. Can you calculate the values at points A,B and C?

There’s a technique for doing this called bilinear interpolation which extends the idea of linear interpolation into two dimensions.

To calculate bilinear interpolation, you just do linear interpolation on one axis, and then the other.

Check out these examples:

Point A
Point A has a coordinate of (0.2,0.8). Let’s start with the X axis.

Firstly, we’ll do interpolation across the top of the square on the X axis. That means we need to go 0.2 (20%) of the way from 7 to 0. That 0.2 is our x axis coordinate. To calculate that we do this:
7 + (0-7) * 0.2 = 5.6

Next, we’ll do interpolation across the bottom of the square on the X axis. So similar to above, we’ll go 0.2 (20%) of the way from 3 to 5. To calculate that we do this:
3 + (5-3) * 0.2 = 3.4

Now that we have interpolated across our X axis we need to interpolate across our Y axis. We are now going to go 0.8 (80%) of the way from our bottom value (the value of Y = 0), to our top value (the value of Y = 1). The bottom value is the bottom interpolation we did (3.4) and the top value is the top interpolation we did (5.6).

So, we need to go 0.8 (80%) of the way from 3.4 to 5.6. To calculate that we do this:
3.4 + (5.6 - 3.4) * 0.8 = 5.16

So, the value at point A is 5.16. The image below should help explain visually the process we went through.

Point B

Point B has a coordinate of (1.0,0.5).

Let’s start with the X axis interpolation again (although you could easily start with Y first if you wanted to! You’d end up with the same result).

Doing the X axis interpolation across the top, we need to go 1.0 (100%) of the way from 7 to 0. I’m sure you can guess what the answer is but here it is calculated:
7 + (0-7) * 1.0 = 0

Similarly, let’s do the X axis interpolation across the bottom. We need to go 1.0 (100%) of the way from 3 to 5.
3 + (5-3) * 1.0 = 5

Next up comes the Y axis interpolation. We need to go 0.5 (50%) of the way from 5 to 0.
5 + (0-5) * 0.5 = 2.5

There we go, the value at point B is 2.5. That should make sense too by looking at the diagram. It’s basically 1d linear interpolation between 5 and 0, and is half way between them, so intuitively, the answer 2.5 should make sense.

Point C

Point C has a coordinate of (0.8,0.2).

Once again, let’s do the X axis interpolation across the top of the box. We need to go 0.8 (80%) of the way from 7 to 0.
7 + (0-7) * 0.8 = 1.4

Then, we need to do the x axis interpolation across the bottom of the box. We need to go 0.8 (80%) of the way from 3 to 5.
3 + (5-3) * 0.8 = 4.6

Lastly, the y axis interpolation. We need to go 0.2 (20%) from 4.6 to 1.4.
4.6 + (1.4-4.6) * 0.2 = 3.96

The value of point C is 3.96.
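
Here’s a little C++ sketch that reproduces those three results (the corner values 3, 5, 7 and 0 come from the diagram, and the function names are mine):

#include <stdio.h>

// linear interpolation from a to b by percentage t
float Mix (float a, float b, float t)
{
    return a + (b - a) * t;
}

// bilinear interpolation in the unit square: lerp across x on the top and
// bottom edges, then lerp those two results across y
float Bilerp (float bottomLeft, float bottomRight, float topLeft, float topRight, float x, float y)
{
    float top = Mix(topLeft, topRight, x);
    float bottom = Mix(bottomLeft, bottomRight, x);
    return Mix(bottom, top, y);
}

int main ()
{
    printf("A = %0.2f\n", Bilerp(3.0f, 5.0f, 7.0f, 0.0f, 0.2f, 0.8f)); // 5.16
    printf("B = %0.2f\n", Bilerp(3.0f, 5.0f, 7.0f, 0.0f, 1.0f, 0.5f)); // 2.50
    printf("C = %0.2f\n", Bilerp(3.0f, 5.0f, 7.0f, 0.0f, 0.8f, 0.2f)); // 3.96
    return 0;
}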

Bilinear Filtering

While bilinear interpolation is useful if you have a grid of data that you want to interpolate values within (such as a height field that represents terrain?!), where this really shines in game development is in bilinear texture filtering.

Basically, when you have a texture coordinate that doesn’t perfectly line up with the center of a pixel in the texture (because there’s a remainder), it uses the fractional part of the coordinate within that pixel to do bilinear interpolation between the 4 pixels involved to come up with the final pixel color. It does bilinear interpolation on each of the channels: Red, Green, Blue and Alpha.
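
As a sketch of what that looks like, here is bilinear sampling of a single channel float texture in plain C++ (the clamped addressing, the half texel offset convention, and the names are assumptions for illustration):

#include <math.h>

// sample a single channel texture bilinearly.  u and v are in [0,1].
float SampleBilinear (const float* texels, int width, int height, float u, float v)
{
    // to texel space, minus half a texel so we interpolate between texel centers
    float x = u * (float)width - 0.5f;
    float y = v * (float)height - 0.5f;

    // integer texel coordinates and the fractional position within the texel
    int x0 = (int)floorf(x);
    int y0 = (int)floorf(y);
    float fracX = x - (float)x0;
    float fracY = y - (float)y0;

    // clamp the four texel coordinates to the texture edges
    int x1 = x0 + 1;
    int y1 = y0 + 1;
    x0 = x0 < 0 ? 0 : (x0 > width - 1 ? width - 1 : x0);
    x1 = x1 < 0 ? 0 : (x1 > width - 1 ? width - 1 : x1);
    y0 = y0 < 0 ? 0 : (y0 > height - 1 ? height - 1 : y0);
    y1 = y1 < 0 ? 0 : (y1 > height - 1 ? height - 1 : y1);

    // lerp across x on the two rows, then lerp those results across y
    float bottom = texels[y0 * width + x0] + (texels[y0 * width + x1] - texels[y0 * width + x0]) * fracX;
    float top    = texels[y1 * width + x0] + (texels[y1 * width + x1] - texels[y1 * width + x0]) * fracX;
    return bottom + (top - bottom) * fracY;
}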

The end result is pretty good! Check out the image below to see it in action on a texture that we are zoomed in too far on. The left side uses bilinear filtering, while the right side does not, and just shows you the nearest pixel.

The tradeoff is that bilinear texture filtering has to do 4 pixel reads to be able to interpolate between them, instead of a single pixel read like when doing nearest pixel sampling. More pixel reads = more overhead, and not as cheap (computationally) to use.

Links

For some folks who are reading this, this info is pretty basic and you might be wondering why I bothered writing about it. Look to the next post for something kind of bizarre regarding bilinear filtering. This was needed as a stepping stone to the next thing I want to show you guys 😛

Wikipedia: Linear Interpolation

Wikipedia: Bilinear Interpolation

Wikipedia: Bilinear Filtering

The Dark Side of Bilinear Filtering

Check out these links for some deeper info about some shortcomings (and some workarounds) of hardware based bilinear sampling:
iq: hardware interpolation
iq: improved texture interpolation

FlipQuad & FlipTri Antialiasing

Here are the last two SSAA algorithms/sampling patterns that I wanted to share – FlipQuad and FlipTri.

FlipQuad

The last post talked about 4-Rook antialiasing which took 4 samples per pixel to do antialiasing.

Before that we talked about quincunx which was able to get 5 samples per pixel, while only rendering 2 samples per pixel. It was able to do that by sharing “corner samples” among 4 pixels.

If you combine the ideas from the last two posts, you could start with the 4-rook sampling points and push them to the edge of the pixel so that the edge samples can be shared between the pixels that share the edge.

If you do that though, you’ll actually have two samples on each edge. To solve this and keep the same number of sample points per pixel, you could flip every other pixel’s sample points.

Ta-Da! That is exactly what FlipQuad is. Check out the sampling pattern below, which shows the samples for a 2×2 grid of pixels. You’d just repeat this pattern for as many pixels as you had.

Here’s an image that shows FlipQuad in action – anti aliased on the left, regular image on the right. You can click on the image to see the shadertoy demo of it.

It works pretty well, but it is pretty blurry, especially the textures! Kind of makes sense because quincunx was blurry due to shared samples. We have ONLY shared samples in this pattern, so it’s understandable that it’s kind of blurry. There’s less unique information per pixel than in the other SSAA techniques shown so far.

This essentially is 2 samples rendered per pixel to get 4x SSAA.

ShaderToy: FlipQuad AntiAliasing

FlipTri

Flipquad worked decently, but instead of using a quad, could we use a triangle? Yes we can, check out the 2×2 pixel sampling pattern below:

Here’s an image that shows FlipTri in action – anti aliased on the left, regular image on the right. You can click on the image to see the shadertoy demo of it.

This essentially is 1.25 samples rendered per pixel to get 3x SSAA! It really doesn’t look that different to my eyes than the flipquad method, but it uses quite a bit fewer samples!

ShaderToy: FlipTri AntiAliasing

Conclusion

So basically, there’s a bunch of different ways to do SSAA style anti aliasing, even beyond the ones I showed, but in the end you are basically just taking more samples and/or sharing samples across multiple pixels to get a better resulting image.

SSAA is also a common technique in raytracing, where multiple rays are shot per pixel and combined together in this way. For things like movies, they will sometimes cast hundreds of rays per pixel! You could also just render multiple times and do one of the methods we’ve talked about in the last couple posts, instead of explicitly shooting multiple rays per pixel.

At SIGGRAPH 2014, someone mentioned in the “advancements in realtime rendering” talk that they used the flipquad pattern along with temporal supersampling. That made it so the 2 samples rendered per pixel was amortized across two frames and thus became one sample per pixel, which is pretty nifty. I feel like you could extend that to flip tri’s and perhaps render less than 1 sample per pixel each frame.

A big issue with implementing these in modern graphics situations is that it’s difficult to render a sample once and share it across multiple pixels. I have a way in mind involving rendering the scene a couple times with different sized buffers and offsets, but not sure yet if it’s practical. Temporal supersampling algorithms definitely seem like they could benefit from these exotic patterns more easily.

Up next I think I’m going to try and figure out MSAA, which has similar results as SSAA, but from the sound of things, performs a bit better.

Links

FlipQuad Research Paper: FLIPQUAD: Low-Cost Multisampling Rasterization

FlipTri Research Paper: An Extremely Inexpensive Multisampling Scheme

A Weighted Error Metric and Optimization Method for Antialiasing Patterns

CGT 511: (Anti)aliasing

Bart Wronski: Temporal supersampling and antialiasing

A Family of Inexpensive Sampling Schemes

4-Rook Antialiasing (RGSS)

The last post was about quincunx antialiasing which used 5 samples per pixel, but allowed you to share 4 of those samples with 4 other pixels each, making it so you only had to render 2 samples per pixel to get those 5 samples per pixel.

I also mentioned that shadertoy didn’t allow you to render to texture, so in my shadertoy demo, I had to actually just do 5 samples per pixel to be able to show you the quincunx effect.

If you ever find yourself in that sort of a situation where you can’t (or don’t want to) render your scene twice, 4-Rook Antialiasing may be more what you are looking for.

4-Rook anti aliasing takes 4 samples per pixel, and does so in the pattern below with the specified blend weights. This pattern is also sometimes called rotated grid supersampling (RGSS) and is also a subset of “N-Rook” supersampling where your sample points within a pixel don’t share a vertical or horizontal line with any other sample point. The N-Rook sample patterns are good at breaking up horizontal or vertical aliasing.

Interestingly, 4-Rook AA looks less blurry than quincunx, so it is higher quality. Intuitively, that makes sense: it is more expensive to do (you render 4 samples per pixel instead of 2!), and while quincunx technically has 5 samples per pixel and 4-Rook only has 4, those 4 samples are all NEW information used only once, while 4 of the samples in quincunx are “old news”, just the same information repeated again.

I think of it like this… the difference between a blur and real SSAA is that in a blur, you try to improve the image with no added information, while in SSAA you try to improve the image WITH added information. Quincunx is on the spectrum between those two since it re-uses 4 samples (minimal “new information”), while 4-rook’s samples are all new information.

Note that you could implement this anti aliasing by rendering your scene 4 times, each time offset by one of the 4 offsets. You would then do a final full screen pass to average each pixel across all 4 renders. In other words: Output[x][y] = (A[X][Y] + B[X][Y] + C[X][Y] + D[X][Y])/4.
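
That final pass could look like this minimal C++ sketch (single channel float buffers and the names are assumptions for brevity):

// average the same pixel across the four offset renders
void Resolve4Rook (const float* a, const float* b, const float* c, const float* d,
    float* output, int pixelCount)
{
    for (int i = 0; i < pixelCount; ++i)
        output[i] = (a[i] + b[i] + c[i] + d[i]) * 0.25f;
}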

Click the image below to be taken to the shadertoy demo of 4-rook anti aliasing.

Here is the quincunx image again for reference. Note how it looks blurrier in comparison to the 4-Rook image above (check out the red/blue rectangles to see that best), that the spiral squares don’t quite look as good (they look more aliased?), and that the background grid is ever so slightly darker than the 4-rook version above (or the aliased version of the grid in either picture)!

Shadertoy: 4-Rook Antialiasing (RGSS)

Quincunx Antialiasing

If you are looking for a quick and easy antialiasing implementation, quincunx could be what you are looking for.

Quincunx is part of the family of anti aliasing called supersampling antialiasing (SSAA), which all work by taking multiple samples per pixel, each sample being offset by some amount, and then adding the samples back together with various weightings to get the final antialiased pixel color.

Quincunx antialiasing works by mixing five samples per pixel, in the pattern of the diagram below with the specified weights. The real interesting and useful thing about it though is that the 4 corner samples are each re-used by 4 other pixels, which makes it so that even though you take five samples per pixel, you are in effect really only rendering TWO samples per pixel to get results similar to 5x supersampling antialiasing!

Basically, the performance cost is about the same as 2x SSAA, but gives you results similar to 4x-5x SSAA.

Trivial Note: The word quincunx describes the position of those 5 dots, which is the same configuration you see for the number five on a six sided die.

Since the corners are a half pixel offset from the center pixel, the algorithm for doing quincunx sampling is as follows:

  1. Render your scene to a full size buffer (same size as your output render target), offsetting your camera in screen space by (0.5,0.5) pixels
  2. Next, render your scene to the usual render target as normal
  3. Run a full screen pixel shader post effect on your normal “output render target”. For each pixel:
    1. Multiply the pixel color channels by 0.5
    2. Add in these four offsets from the current pixel location, in the “offset” buffer (the first one rendered), multiplying the color channels by 0.125: (-1,-1), (-1,0), (0,-1), (0,0).
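
Here’s a sketch of that combine pass (step 3) in plain C++, over single channel float buffers for brevity. The buffer names and the edge clamping are assumptions for illustration: “center” is the normal render, and “offset” is the render done with the (0.5, 0.5) pixel camera offset.

float QuincunxPixel (const float* center, const float* offset,
    int width, int height, int x, int y)
{
    // the center sample gets a weight of 0.5
    float result = center[y * width + x] * 0.5f;

    // the four corner samples each get a weight of 0.125
    const int offsets[4][2] = { {-1, -1}, {-1, 0}, {0, -1}, {0, 0} };
    for (int i = 0; i < 4; ++i)
    {
        // clamp the sample location to the buffer edges
        int sx = x + offsets[i][0];
        int sy = y + offsets[i][1];
        sx = sx < 0 ? 0 : (sx > width - 1 ? width - 1 : sx);
        sy = sy < 0 ? 0 : (sy > height - 1 ? height - 1 : sy);
        result += offset[sy * width + sx] * 0.125f;
    }
    return result;
}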

That’s all there is to it! Click the image below to be taken to an animated shadertoy I made which does quincunx antialiasing in a WebGL pixel shader. Shadertoy unfortunately doesn’t let you render to texture, so I have to do all 5 samples per pixel, but you can still see the effect compared to no anti aliasing at all, and the results are pretty nice!

The left half of the image is quincunx antialiased while the right side is not antialiased at all.

A few more creative supersampling AA algorithms coming next (:

Shadertoy: Quincunx Anti Aliasing

Bresenham’s Drawing Algorithms

If you were asked to name a line drawing algorithm, chances are you would say Bresenham. I recently needed to write my own software line drawing algorithm (CPU and regular ram, not GPU and VRAM) and Bresenham was the first to come to mind for me as well.

Believe it or not, Jack Bresenham actually came up with 2 famous line drawing algorithms. One is a run length algorithm, and the other is a run slice algorithm. People are most often familiar with the run length algorithm, which I mainly talk about in this post, but there’s some information about the run slice algorithm and links to other Bresenham primitive rendering algorithms below as well!

An interesting thing about both line algorithms is that they involve only integer mathematics, so there is no round off error, epsilons, numerical instability or anything like that. You use only integers and simple math (no divisions even!), and get the exact right answer out. I think that is pretty cool.

Bresenham’s Run Length Line Algorithm Summarized

To help understand the code, I want to give a brief summarization of how the algorithm works at a high level.

The first step of the Bresenham line algorithm is to see if the line is longer on the X axis or Y axis. Whichever one it is longer on is the major axis, and the shorter one is the minor axis.

Then, starting at the starting point of the line, you loop across all pixels of the major axis, doing some math to decide for each pixel whether you need to move the pixel on the minor axis yet or not. Basically you keep track of the error (the distance the pixel is away from the true position of the line on the minor axis). If the error is greater than or equal to 1 pixel, you move the pixel on the minor axis and subtract one pixel from the error. If the error is smaller than 1 pixel, you keep the minor axis value the same as it was for the last pixel, and keep the error value as well, so that it can accumulate some more error for the next pixel and test again.


The left line is longer on the X axis, so the major axis is the X axis. The right line is longer on the Y axis, so the major axis is the Y axis. Notice that there is one pixel for each value along the major axis of each line, but repeated pixel values along the minor axis of each line.

If you want the deeper details about the algorithm or the math behind it, check out this link:
Wikipedia: Bresenham’s line algorithm

One Octant Code

Wikipedia says that many implementations implement the algorithm for a single octant (half of a quadrant) and then use coordinate conversions (flipping the x and y’s and/or negating them) to make the same logic work for any quadrant.

In that vein, here is a simple implementation for the first octant – where the major axis is X, and both X and Y are increasing from the starting point to the ending point.

	void DrawLine (int x1, int y1, int x2, int y2, unsigned int color)
	{
		const int dx = x2 - x1;
		const int dy = y2 - y1;

		int Error = 2 * dy - dx;
		int y = y1;
		
		DrawPixel(x1, y1, color);
		for (int x = x1+1; x < x2; ++x)
		{
			if (Error > 0)
			{
				y++;
				Error += 2 * dy - 2 * dx;
			}
			else
			{
				Error += 2 * dy;
			}
			DrawPixel(x,y,color);
		}
	}

With the point of Bresenham being that it is efficient, let’s sacrifice readability a bit and get rid of those DrawPixel calls and perhaps over-micro-optimize a bit and make some consts for frequently used values.

	void DrawLine (unsigned int* pixels, int width, int x1, int y1, int x2, int y2, unsigned int color)
	{
		const int dx = x2 - x1;
		const int dy = y2 - y1;
		const int dx2 = dx * 2;
		const int dy2 = dy * 2;
		const int dy2Mindx2 = dy2 - dx2;

		int Error = dy2 - dx;

		unsigned int* pixel = &amp;pixels[y1*width+x1];
		*pixel = color;
		for (int x = x2 - x1 - 1; x > 0; --x)
		{
			if (Error > 0)
			{
				pixel += width + 1;
				Error += dy2Mindx2;
			}
			else
			{
				pixel++;
				Error += dy2;
			}
			*pixel = color;
		}
	}

Cool, now let’s make it work for any line.

Final Code

	// Generic Line Drawing
	// X and Y are flipped for Y major axis lines, but the pixel writes are handled correctly due to
	// minor and major axis pixel movement
	void DrawLineMajorAxis(
		unsigned int* pixel,
		int majorAxisPixelMovement,
		int minorAxisPixelMovement,
		int dx,
		int dy,
		unsigned int color
	)
	{
		// calculate some constants
		const int dx2 = dx * 2;
		const int dy2 = dy * 2;
		const int dy2Mindx2 = dy2 - dx2;

		// calculate the starting error value
		int Error = dy2 - dx;

		// draw the first pixel
		*pixel = color;

		// loop across the major axis
		while (dx--)
		{
			// move on major axis and minor axis
			if (Error > 0)
			{
				pixel += majorAxisPixelMovement + minorAxisPixelMovement;
				Error += dy2Mindx2;
			}
			// move on major axis only
			else
			{
				pixel += majorAxisPixelMovement;
				Error += dy2;
			}

			// draw the next pixel
			*pixel = color;
		}
	}

	// Specialized Line Drawing optimized for horizontal or vertical lines
	// X and Y are flipped for Y major axis lines, but the pixel writes are handled correctly due to
	// minor and major axis pixel movement
	void DrawLineSingleAxis(unsigned int* pixel, int majorAxisPixelMovement, int dx, unsigned int color)
	{
		// draw the first pixel
		*pixel = color;

		// loop across the major axis and draw the rest of the pixels
		while (dx--)
		{
			pixel += majorAxisPixelMovement;
			*pixel = color;
		};
	}

	// Draw an arbitrary line.  Assumes start and end point are within valid range
	// pixels is a pointer to where the pixels you want to draw to start aka (0,0)
	// pixelStride is the number of unsigned ints to get from one row of pixels to the next.
	// Usually, that is the same as the width of the image you are drawing to, but sometimes is not.
	void DrawLine(unsigned int* pixels, int pixelStride, int x1, int y1, int x2, int y2, unsigned int color)
	{
		// calculate our deltas
		int dx = x2 - x1;
		int dy = y2 - y1;

		// if the X axis is the major axis
		if (abs(dx) >= abs(dy))
		{
			// if x2 < x1, flip the points to have fewer special cases
			if (dx < 0)
			{
				dx *= -1;
				dy *= -1;
				swap(x1, x2);
				swap(y1, y2);
			}

			// get the address of the pixel at (x1,y1)
			unsigned int* startPixel = &amp;pixels[y1 * pixelStride + x1];

			// determine special cases
			if (dy > 0)
				DrawLineMajorAxis(startPixel, 1, pixelStride, dx, dy, color);
			else if (dy < 0)
				DrawLineMajorAxis(startPixel, 1, -pixelStride, dx, -dy, color);
			else
				DrawLineSingleAxis(startPixel, 1, dx, color);
		}
		// else the Y axis is the major axis
		else
		{
			// if y2 < y1, flip the points to have fewer special cases
			if (dy < 0)
			{
				dx *= -1;
				dy *= -1;
				swap(x1, x2);
				swap(y1, y2);
			}

			// get the address of the pixel at (x1,y1)
			unsigned int* startPixel = &amp;pixels[y1 * pixelStride + x1];

			// determine special cases
			if (dx > 0)
				DrawLineMajorAxis(startPixel, pixelStride, 1, dy, dx, color);
			else if (dx < 0)
				DrawLineMajorAxis(startPixel, pixelStride, -1, dy, -dx, color);
			else
				DrawLineSingleAxis(startPixel, pixelStride, dy, color);
		}
	}

Using the above functions, I wrote a little test to draw lines from the center of an image towards the edges at 100 different angles, and also a horizontal and vertical line. Here’s the results:

Run Slice Algorithm

So, like I said above, there is another Bresenham line algorithm that is a run slice algorithm.

What that means is that instead of looping across the major axis figuring out if each pixel needs to move on the minor axis as well, instead it loops across the minor axis, figuring out how many pixels it needs to write for that row or column of pixels.

One benefit of that algorithm is that it is less loop iterations since it’s looping across the minor axis pixels instead of the major axis.
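
Here is a small C++ sketch of the run slice idea for the first octant. To be clear, this is not Abrash’s exact integer formulation, and the names are mine: the division per row is just for clarity, where the real algorithm produces the same run lengths with additions only. It also prints the runs instead of writing pixels, to keep it self contained.

#include <stdio.h>

// the first x of the run for row i is ceil(((2i - 1) * dx) / (2 * dy)),
// computed with integer math.
int RunStart (int i, int dx, int dy)
{
    if (i <= 0)
        return 0;
    if (i > dy)
        return dx + 1;  // one past the end of the line
    int num = (2 * i - 1) * dx;
    int den = 2 * dy;
    return (num + den - 1) / den;  // integer ceiling
}

// assumes x2 >= x1, y2 >= y1 and dx >= dy (first octant)
void DrawLineRunSlice (int x1, int y1, int x2, int y2)
{
    int dx = x2 - x1;
    int dy = y2 - y1;

    // one horizontal run per row of the minor (y) axis
    for (int i = 0; i <= dy; ++i)
    {
        int runStart = RunStart(i, dx, dy);
        int runEnd = RunStart(i + 1, dx, dy) - 1;
        printf("row %d: x = %d to %d\n", y1 + i, x1 + runStart, x1 + runEnd);
    }
}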

Here’s an interesting read about Michael Abrash and his encounters with and the run slice algorithm:

Michael Abrash’s Graphics Programming Black Book Special Edition: The Good, the Bad, and the Run-Sliced

Other Bresenham Algorithms

This was originally just going to be about the line algorithms, and I was going to make up another post about the circle algorithm, but I changed my mind.

While doing some research to see if there was a Bresenham bezier curve algorithm I stumbled on the page below, which shows that yes, there is! There are quite a few other algorithms as well:

Many Bresenham algorithms with very short c++ implementations