# Easy Binomial Expansion & Bezier Curve Formulas

In this post we are going to explore two things:

1. Learn how to easily expand $(x+y)^N$ for ANY value of $N$ using the binomial theorem (well, $N$ has to be a positive integer…)
2. Use that for coming up with the equation for a Bezier curve of any degree (any number of control points)

## Binomial Theorem

The binomial theorem is a way of easily expanding $(x+y)^N$. As you probably know, if $N$ is 2, you can use FOIL (first, outer, inner, last). If $N$ gets above 2, things start to get hairier and more tedious.

Let’s check out how the values look when $N$ is 1, 2 and 3:

| $N$ | $(x+y)^N$ Expanded |
|-----|--------------------|
| 1 | $x+y$ |
| 2 | $x^2+2xy+y^2$ |
| 3 | $x^3+3x^2y+3xy^2+y^3$ |

To help make the patterns more visible, let’s make things a bit more explicit:

| $N$ | $(x+y)^N$ Expanded |
|-----|--------------------|
| 1 | $1x^1y^0+1x^0y^1$ |
| 2 | $1x^2y^0+2x^1y^1+1x^0y^2$ |
| 3 | $1x^3y^0+3x^2y^1+3x^1y^2+1x^0y^3$ |

The easiest pattern to see is that there are N+1 terms.

The next pattern that might jump out at you from the table above is that in the first term, $y$ starts out at power 0, has a power of 1 in the next term, and the power of $y$ keeps increasing by 1 for each term going left to right until we run out of terms. Similarly, $x$ has a power of 0 in the last term, a power of 1 in the second-to-last term, and counts up going right to left until we run out of terms.

Those patterns explain how many terms there are and the powers of x and y for each term. There is one piece left of the puzzle though, which is those constants that each term is multiplied by.

Those constants are called the “binomial coefficients”, and they also appear as rows of Pascal’s triangle. In short, each number in Pascal’s triangle is the sum of the numbers directly above it. Check the links section at the end for more detailed info. If you notice, the second row of Pascal’s triangle is “1, 1”. Those are the constants multiplied by each term in the $N=1$ case. The next row down is “1, 2, 1”, which is the constants multiplied by each term in the $N=2$ case. Lastly, the next row down (the fourth row) is “1, 3, 3, 1”, which is the constants multiplied by each term in the $N=3$ case. Essentially, you just use the $(N+1)$th row of Pascal’s triangle to come up with the constants to multiply each term by.

There are algorithms for calculating the $(N+1)$th row of Pascal’s triangle. I have one such algorithm in the example code below, but also check the links section for more info on that.
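One handy identity, in case you want to compute a row directly instead of building the whole triangle: each coefficient in a row can be computed from the one before it, since $\binom {n} {i+1} = \binom {n} {i} \cdot \frac{n-i}{i+1}$, starting from $\binom {n} {0} = 1$. That running-product approach is what the example code below uses.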

TADA! That is the binomial theorem.

## Bezier Curve Equation Generalized (Again)

You may remember that I previously showed you a generalized way to get the equation for a bezier curve of any order in Bezier Curves Part 2 (and Bezier Surfaces).

It wasn’t too difficult, but it DID require you to manually expand $(x+y)^N$. Now that we know how to do that more easily, thanks to the section above, let’s revise how we come up with bezier curves of any order.

What you do is evaluate $P=(s+t)^N$ and then multiply each term by a unique control point (A, B, C, etc). After you have your equation, you can optionally replace all the $s$’s with $(1-t)$, or you can just remember that substitution when you evaluate the equation, since the first form is less messy.
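For example, with four control points (a cubic, $N=3$): expand $(s+t)^3 = s^3 + 3s^2t + 3st^2 + t^3$, then attach a control point to each term to get $P = As^3 + 3Bs^2t + 3Cst^2 + Dt^3$.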

Boom, you are done, that’s all!

Here’s the quadratic (N=2) version to see the end result: $P = As^2 + 2Bst + Ct^2$

## Formalized Mathematical Description

The above makes a lot of sense and is easy to understand. Wouldn’t it be neat if math could describe it that way?

Well… it turns out it can, and does. Here is the formal “explicit definition” of bezier curves: $\sum\limits_{i=0}^n\binom {n} {i}(1-t)^{n-i}t^iP_i$

If you remember from above, s is the same as (1-t) so you could also write it like this: $\sum\limits_{i=0}^n\binom {n} {i}s^{n-i}t^iP_i$

The $\sum\limits_{i=0}^n$ (Sigma, going from 0 to n) means that you are going to sum (add) together n+1 terms (it includes both 0 and n), using i as the loop variable in your summation. It’s basically a for loop, adding together each iteration of the for loop. Everything that comes after is what happens during each iteration of the for loop that gets summed together.

The $\binom {n} {i}$ part means to take the ith number (counting from zero) from the $(n+1)$th row of Pascal’s triangle. More formally, it specifies which binomial coefficient to use.

The next part $s^{n-i}t^i$ means that you multiply the binomial coefficient by s to the (n-i)th power, and t to the ith power. This is the same as saying s starts at power n and counts down left to right, while t starts at 0 and counts up left to right. This fits the pattern we saw above in binomial expansion.

Lastly comes $P_i$ which means to use the ith P. So, P is basically an array with n+1 elements. This array is the control points, so each P is a different control point.

So, to sum it all up, it’s saying to make a for loop from 0 to n, where you add up the results of each iteration. For each iteration, where i is the index variable of the loop, you are going to:

1. Start with the ith item (counting from zero) of the $(n+1)$th row of Pascal’s triangle
2. Multiply that by $s^{n-i}t^i$
3. Multiply that by a unique control point

There ya go, formalized math descriptions with crazy symbols can actually mean something useful. Who would have thought?!

Here is the quadratic bezier curve again for you to look at (quadratic means n = 2), in a form that will help you when thinking about the steps above: $P = 1s^2t^0A + 2s^1t^1B + 1s^0t^2C$

And when it’s cleaned up, it looks more familiar: $P = As^2 + 2Bst + Ct^2$

## Example Code

```cpp
#include <stdio.h>
#include <vector>

//=====================================================================================
void WaitForEnter ()
{
    printf("\nPress Enter to quit");
    // eat anything left on the current input line (fflush(stdin) is undefined behavior)
    int c;
    while ((c = getchar()) != '\n' && c != EOF) { }
    getchar();
}

//=====================================================================================
std::vector<unsigned int> PascalsTriangleRow (unsigned int row)
{
    std::vector<unsigned int> ret;
    ret.push_back(1);
    for (unsigned int i = 0; i < row; ++i)
        ret.push_back(ret[i] * (row - i) / (i + 1));
    return ret;
}

//=====================================================================================
int main (void)
{
    printf("Expand (x+y)^N and give Bezier curve order N\n\nPlease enter N:\n");
    unsigned int N;
    // keeping it limited to a sane value. also protects against -1 being huge.
    // Also, so we don't run out of letters for control points!
    if (scanf("%u", &N) == 1 && N < 26)
    {
        auto row = PascalsTriangleRow(N);

        // binomial expansion
        printf("\nBinomial Expansion:\n");
        if (N == 0)
        {
            printf("1");
        }
        else
        {
            for (unsigned int i = 0, c = (unsigned int)row.size(); i < c; ++i)
            {
                if (i > 0)
                    printf(" + ");

                if (row[i] != 1)
                    printf("%u", row[i]);

                unsigned int xPow = N - i;
                if (xPow > 0)
                {
                    printf("x");
                    if (xPow > 1)
                        printf("^%u", xPow);
                }

                unsigned int yPow = i;
                if (yPow > 0)
                {
                    printf("y");
                    if (yPow > 1)
                        printf("^%u", yPow);
                }
            }
        }

        // bezier curves
        printf("\n\nBezier Curve Order %u:\nP = ", N);
        if (N == 0)
        {
            printf("A");
        }
        else
        {
            for (unsigned int i = 0, c = (unsigned int)row.size(); i < c; ++i)
            {
                if (i > 0)
                    printf(" + ");

                // control point name
                printf("%c*", 'A' + i);

                if (row[i] != 1)
                    printf("%u", row[i]);

                unsigned int sPow = N - i;
                if (sPow > 0)
                {
                    printf("s");
                    if (sPow > 1)
                        printf("^%u", sPow);
                }

                unsigned int tPow = i;
                if (tPow > 0)
                {
                    printf("t");
                    if (tPow > 1)
                        printf("^%u", tPow);
                }
            }
        }

        // bezier curves, with s replaced by (1-t)
        printf("\n\nOr:\nP = ");
        if (N == 0)
        {
            printf("A");
        }
        else
        {
            for (unsigned int i = 0, c = (unsigned int)row.size(); i < c; ++i)
            {
                if (i > 0)
                    printf(" + ");

                // control point name
                printf("%c*", 'A' + i);

                if (row[i] != 1)
                    printf("%u", row[i]);

                unsigned int sPow = N - i;
                if (sPow > 0)
                {
                    printf("(1-t)");
                    if (sPow > 1)
                        printf("^%u", sPow);
                }

                unsigned int tPow = i;
                if (tPow > 0)
                {
                    printf("t");
                    if (tPow > 1)
                        printf("^%u", tPow);
                }
            }
        }

        printf("\n");
    }
    else
    {
        printf("Invalid value for N\n");
    }
    WaitForEnter();
    return 0;
}
```


## Example Output

Here are some runs of the program. Note that even though we talked about binomials and Bezier curves, these techniques can be extended to trinomials and Bezier triangles – and beyond! (hint: there is such a thing as Pascal’s pyramid!)

## Links

* Wikipedia: Binomial Theorem
* Wikipedia: Pascal’s Triangle
* Wikipedia: Binomial Coefficient
* StackOverflow: How to efficiently calculate a row in pascal’s triangle?

# No Bad Code, Creeping Normality and Social Structure Code Organization

In the last post I said that another post was coming regarding using bilinear texture sampling for something interesting. Hopefully without sounding too pretentious, that interesting thing I wanted to write up has borne quite a bit of fruit, so I’m going to try to write it up and see about getting it into The Journal of Computer Graphics Techniques. We’ll see how that pans out, and I’ll make sure to share once it’s published (or if I can’t get it published hehe), as well as a post on the process of trying to get something published, in case other people out there are interested in pursuing that sort of thing.

Today’s post is a bit different than usual. There are some interesting concepts that I’ve been exposed to that I think are worth sharing, and that I’m hoping you will also find interesting.

One idea presented to me recently was the concept that there is no bad code. The idea being that people make decisions in code based on the situations at the time, and that as situations change, code that once was totally reasonable, and most engineers would likely have made the same choices, no longer is seen as a good choice in the new sets of circumstances.

I’m going to be a bit cynical here and mention that I do believe there is bad code, and that it comes about due to bugs, as well as lack of knowledge, lack of experience, and lack of testing before deployment. However, the idea that code now seen as bad was once perfectly reasonable does make sense to me, and is kind of interesting.

I know we’ve all hit code that we or others wrote that we despise, that causes us grief, and that gets in the way of what we ought to be working on. This concept definitely does absolve SOME such code I’ve personally had to deal with. But there is definitely plenty of code left that it doesn’t. In my personal examples, this mostly comes from shoddy third party middleware, and also just a lack of knowledge or experience on the part of the implementer. I’m guilty of the second point, but I think we all are, and that is a sign we are learning and growing.

It’s probably good to keep in mind that lousy code of the present may have been perfectly reasonable code of the past, and that before deciding code is lousy, or that a specific engineer is terrible, you have to judge the code in the light of the circumstances it was written in.

… Then refactor it.

## Creeping Normality

Creeping normality is an interesting concept. Basically, just like that old tale of a frog sitting in progressively hotter water until it boils (which, by the way, has been debunked by Snopes), it’s easy for individuals or companies to find themselves in situations that would NEVER be seen as ideal, but were arrived at by incremental mis-steps – or also just a changing landscape.

This relates to the last point a bit, because it can show how the needs of a piece of code can change over time, such that looking at it as a static snapshot in time, you might wonder why it’s so complex and trying to do so many different things needlessly.

This can also happen to a company on a more global scale, and can explain some really odd, less than ideal behaviors a company might be doing, where you are pretty sure the people involved know better.

How do you fight creeping normality? Good question… but when you are able to identify some oddness or problems that come up due to it, hopefully it’s as early as possible, and hopefully the people with the power to make things right listen to you (:

## Social Structure Code Organization

The last topic is kind of bizarre, but totally makes sense once you think about it. The idea is that code will match the communication structure of the teams making the code.

This is Conway’s law which says: “organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations”. Another good quote from the wikipedia page for Conway’s law states that (paraphrased) “if you have four teams working on a compiler, you’ll end up with a four pass compiler”.

This can be seen a lot in situations where you have a game engineering team and an engine engineering team. There will be a distinct line between the code bases, based on the responsibilities of the teams. If instead the engine and game teams are one and the same, there will be no such line. If you then try to split that team into a game team and an engine team, it’ll take some time to draw the line between responsibilities, separate the code along those lines, and possibly get the code into separate projects and repositories – all of which seems like a logical thing to do when you have separate teams.

I think this also relates back to the first point about “No Bad Code”, because were someone to come into a team where the organization had recently changed, they are going to wonder why the code doesn’t have nice abstracted API layers at the boundaries, like most folks would consider logical and good practice. Perhaps too, this would be a case of creeping normality, or at least, if the project organization changed, it could be the result of a reaction AGAINST creeping normality.

In short, there are a lot of ways in which perfectly good code can go bad, and it’s probably a good idea to think about that a bit before condemning code as inherently rotten.

HOWEVER, rotten code does exist. I’d name some names, but it would probably be classified as slander, so i’ll bite my tongue. I’m sure you have plenty of examples of your own (;

Lastly, if you are responsible for crimes against code-manity in days past, either due to any of the reasons above, or because you didn’t know things you know now, or even if you were just lazy or misguidedly purposefully malicious (?!), remember this: You are not your mistakes!

OK, enough of that, next post will be on some cool programming technique, I promise 😛