# Animating Noise For Integration Over Time

You can use noise textures (like the ones from the last post) to do dithering.

For instance, you can use the process below to make a 1-bit (black and white) dithered image from a grayscale source image and a grayscale noise texture. This would be useful if you had a 1-bit display that you were trying to display an image on.

1. For each pixel in the source image…
2. If the source image pixel is brighter than the corresponding noise texture pixel, output a white pixel.
3. Else output a black pixel.

(info on converting images to grey scale here: Converting RGB to Grayscale)

The quality of the result depends on the type of noise you use.

If you use pure random numbers (white noise) it looks like this:

You could also use something called “Interleaved Gradient Noise” which would look like this:

Or you could use blue noise which would look like this:

As you can see, white noise looks the worst, interleaved gradient noise is in the middle, and blue noise looks the best.

White noise is very cheap to generate and can be done in real time on either the CPU or GPU – you just use random numbers.

Blue noise is more expensive to generate and usually must be done in advance, but gives high quality results.

Interleaved gradient noise, which gives middling results, is, believe it or not, about as cheap to generate as white noise, so it can also be done in real time on either the CPU or GPU.

If you have X and Y pixel coordinates (not uv coordinates), you can generate the noise value for the pixel by using this formula:

float noise = std::fmodf(52.9829189f * std::fmodf(0.06711056f*float(x) + 0.00583715f*float(y), 1.0f), 1.0f);


For more information, check out these links:

Next Generation Post Processing in Call Of Duty: Advanced Warfare

Dithering part three – real world 2D quantization dithering (Bart Wronski)

Dithering still images is fun, but in the context of video games, we are more interested in animated images, so let’s look at things in motion.

## Animated Noise

Let’s start by just animating those three noise types over 8 frames.

For white noise, we’ll generate a new white noise texture every frame.

For interleaved gradient noise, we’ll add a random offset (0 to 1000) to the pixel coordinates each frame, so we get 8 different interleaved gradient noise textures.

For blue noise, we’ll just have 8 different blue noise textures that we generate in advance.

These play at 8 fps, so they loop once per second.

White Noise:

IG Noise:

Blue Noise:

Once again we can see that white noise is worst, blue noise is best, and interleaved gradient noise is in the middle.

When you think about it though, despite these animations all using different types of noise over SPACE, they all use white noise over time. What I mean is that if you isolate any individual pixel in any of the images and look at it over the 8 frames, that single pixel will look like white noise.

Let’s see if we can improve that.

## Golden Ratio Animated Noise

In a conversation on Twitter, @R4_Unit told me that in the past he’d had good success adding the golden ratio to blue noise textures to make the noise more blue over time.

The background here is that repeatedly adding the golden ratio to any number will make a low discrepancy sequence (details: When Random Numbers Are Too Random: Low Discrepancy Sequences)

The golden ratio is $\frac{1+\sqrt{5}}{2}$ or approximately 1.61803398875, and interestingly is THE MOST irrational number that there is. Weird right?

For each of the noise types, we’ll generate a single texture for frame 0, and each subsequent frame we will add the golden ratio to each pixel. The pixel values are in the 0 to 1 space when adding the golden ratio (not 0 to 255) and we use modulus to wrap it around.

The DFT magnitude is shown on the left to show how adding the golden ratio affects frequency components.

White Noise:

IG Noise:

Blue Noise:

When I look at these side by side with the previous animations, it’s hard for me to see much of a difference. That is interesting for the case of blue noise, where it’s difficult to generate multiple blue noise textures. It means that you can get a fairly decent “blue noise” texture by adding multiples of the golden ratio to an existing blue noise texture (aka recycling!).

It’s interesting that the white noise and interleaved gradient noise don’t change their frequency spectrum much over time. On the other hand, it’s a bit sad to see that the blue noise texture gains some low frequency content so the blue noise becomes lower quality. You aren’t just getting more blue noise textures for free by adding the golden ratio, even though they are blue-ish.

Another important aspect to look at is the histogram of colors used in these images when adding the golden ratio. Ideally, the starting images use roughly the same count of every color, and adding the golden ratio each frame preserves that property. Luckily, that turns out to be the case.

The white noise histogram has a minimum count of 213, a maximum count of 303, an average count of 256 (the image is 256×256), and a standard deviation of 15.64. Those values are the same for each frame of the animation.

For interleaved gradient noise, it has a minimum count of 245, a maximum count of 266, an average count of 256 and a standard deviation of 2.87. Those values are the same for the entire animation.

Lastly, for blue noise, it has a minimum, maximum, and average count of 256, and a standard deviation of 0. This also remains true for the entire animation.

## Integration Over Time

A big reason we might want animated noise in graphics is because we are taking multiple samples and want to numerically integrate them.

Let’s analyze how these two types of animations (regular and golden ratio) compare for integration.

These animations are the same as before, but on frame 1 we show the average of frames 0 and 1. On frame 2 we show the average of frames 0, 1 and 2, and so on up to frame 7, which is the average of all 8 frames. This integrates the black and white sample points we are taking, where the correct value of the integration is the grayscale image we started with.

Here is white noise, IG noise and blue noise animated (new noise each frame), integrated over those 8 frames, playing at 8 frames a second:

Here is the same using the golden ratio to animate the noise instead:

Since it can be a little difficult to compare these things while they are in motion, here are the final frames of each method and some graphs showing the average error and standard deviation of the error, compared to the ground truth source image.

White Noise vs White Noise Golden Ratio:

IG Noise vs IG Noise Golden Ratio:

Blue Noise vs Blue Noise Golden Ratio:

Interestingly, the golden ratio average error and standard deviation (from the ground truth) are pretty even for all types of noise by frame 7, even though the blue noise is perceptually superior. This also happens for the non-golden-ratio integrations of blue noise and white noise. That’s part of the value of blue noise: even if it has the same amount of error as, say, white noise, it still looks better.

Another interesting observation is that interleaved gradient noise performs better at integration (at least numerically) than white or blue noise, when not using the golden ratio. The only way I can explain this is that when picking random pixel offsets to generate each frame of interleaved gradient noise, it’s somehow more blue over time than the other two methods. It’s a strange but pretty useful property.

Despite IG noise’s success when looking at the numbers, it has very visible directional patterns, which are not so nice. The fact that it is as cheap as white noise to generate, but gives results much closer to blue noise perceptually, is pretty awesome though.

Something else important to note is that white noise beats blue noise in the long run (higher sample counts). It’s only at these lower sample counts that blue noise is the clear winner.

Lastly, it seems like the ideal setup for integrating some values over time with a lower sample count would be to have N blue noise textures to use over N frames, but *somehow* have a constraint on those textures generated such that each individual pixel over time has blue noise distributed values.

I’m not sure how to generate that, or if it’s even possible to do so, but doing that seems like it would be pretty near the ideal for doing integration like the above.

Taking a guess at how the DFT’s would look, each individual slice seems like it should look like a totally normal blue noise texture where it’s black in the middle (low frequencies) and noisy elsewhere (high frequencies). If you had N slices of these it would look like a black cylinder surrounded by noise when taking the 3D DFT. I’m not sure though how having the constraint on individual pixels would modify the DFT, or if it even would.

This “ideal” I’m describing is different than vanilla 3d blue noise. The 3d DFT of 3d blue noise is a black sphere surrounded by noise. What I’m describing is a cylinder instead of a sphere.

3d blue noise turns out not to be great for these needs. You can read about that here:

The problem with 3D blue noise

That author also has an interesting post on blue noise, and a zip file full of blue noise textures that you can take and use.

Free Blue Noise Textures

I have some thoughts on generating this blue noise cylinder that if they work out may very well be the next blog post.

## Code

Here is the code used to generate the images in this post. It’s also on github, which also contains the source images used.

Atrix256: RandomCode/AnimatedNoise

#define _CRT_SECURE_NO_WARNINGS

#include <windows.h>  // for bitmap headers.  Sorry non windows people!
#include <stdint.h>
#include <vector>
#include <random>
#include <atomic>
#include <thread>
#include <complex>
#include <array>

typedef uint8_t uint8;

const float c_pi = 3.14159265359f;

// settings
const bool c_doDFT = true;

// globals
FILE* g_logFile = nullptr;

//======================================================================================
inline float Lerp (float A, float B, float t)
{
return A * (1.0f - t) + B * t;
}

//======================================================================================
struct SImageData
{
SImageData ()
: m_width(0)
, m_height(0)
{ }

size_t m_width;
size_t m_height;
size_t m_pitch;
std::vector<uint8> m_pixels;
};

//======================================================================================
struct SColor
{
SColor (uint8 _R = 0, uint8 _G = 0, uint8 _B = 0)
: R(_R), G(_G), B(_B)
{ }

inline void Set (uint8 _R, uint8 _G, uint8 _B)
{
R = _R;
G = _G;
B = _B;
}

uint8 B, G, R;
};

//======================================================================================
struct SImageDataComplex
{
SImageDataComplex ()
: m_width(0)
, m_height(0)
{ }

size_t m_width;
size_t m_height;
std::vector<std::complex<float>> m_pixels;
};

//======================================================================================
std::complex<float> DFTPixel (const SImageData &srcImage, size_t K, size_t L)
{
std::complex<float> ret(0.0f, 0.0f);

for (size_t x = 0; x < srcImage.m_width; ++x)
{
for (size_t y = 0; y < srcImage.m_height; ++y)
{
// Get the pixel value (assuming greyscale) and convert it to [0,1] space
const uint8 *src = &srcImage.m_pixels[(y * srcImage.m_pitch) + x * 3];
float grey = float(src[0]) / 255.0f;

// Add to the sum of the return value
float v = float(K * x) / float(srcImage.m_width);
v += float(L * y) / float(srcImage.m_height);
ret += std::complex<float>(grey, 0.0f) * std::polar<float>(1.0f, -2.0f * c_pi * v);
}
}

return ret;
}

//======================================================================================
void ImageDFT (const SImageData &srcImage, SImageDataComplex &destImage)
{
// NOTE: this function assumes srcImage is greyscale, so works on only the red component of srcImage.
// ImageToGrey() will convert an image to greyscale.

// size the output dft data
destImage.m_width = srcImage.m_width;
destImage.m_height = srcImage.m_height;
destImage.m_pixels.resize(destImage.m_width*destImage.m_height);

// calculate the 2d dft (brute force, not using fast fourier transform) on a worker thread
std::atomic<size_t> nextRow(0);
std::thread t(
[&] ()
{
size_t row = nextRow.fetch_add(1);
bool reportProgress = (row == 0);
int lastPercent = -1;

while (row < srcImage.m_height)
{
// calculate the DFT for every pixel / frequency in this row
for (size_t x = 0; x < srcImage.m_width; ++x)
{
destImage.m_pixels[row * destImage.m_width + x] = DFTPixel(srcImage, x, row);
}

// report progress if we should
if (reportProgress)
{
int percent = int(100.0f * float(row) / float(srcImage.m_height));
if (lastPercent != percent)
{
lastPercent = percent;
printf("            \rDFT: %i%%", lastPercent);
}
}

// go to the next row
row = nextRow.fetch_add(1);
}
}
);

t.join();

printf("\n");
}

//======================================================================================
void GetMagnitudeData (const SImageDataComplex& srcImage, SImageData& destImage)
{
// size the output image
destImage.m_width = srcImage.m_width;
destImage.m_height = srcImage.m_height;
destImage.m_pitch = 4 * ((srcImage.m_width * 24 + 31) / 32);
destImage.m_pixels.resize(destImage.m_pitch*destImage.m_height);

// get floating point magnitude data
std::vector<float> magArray;
magArray.resize(srcImage.m_width*srcImage.m_height);
float maxmag = 0.0f;
for (size_t x = 0; x < srcImage.m_width; ++x)
{
for (size_t y = 0; y < srcImage.m_height; ++y)
{
// Offset the information by half width & height in the positive direction.
// This makes frequency 0 (DC) be at the image origin, like most diagrams show it.
int k = (x + (int)srcImage.m_width / 2) % (int)srcImage.m_width;
int l = (y + (int)srcImage.m_height / 2) % (int)srcImage.m_height;
const std::complex<float> &src = srcImage.m_pixels[l*srcImage.m_width + k];

float mag = std::abs(src);
if (mag > maxmag)
maxmag = mag;

magArray[y*srcImage.m_width + x] = mag;
}
}
if (maxmag == 0.0f)
maxmag = 1.0f;

const float c = 255.0f / log(1.0f+maxmag);

// normalize the magnitude data and send it back in [0, 255]
for (size_t x = 0; x < srcImage.m_width; ++x)
{
for (size_t y = 0; y < srcImage.m_height; ++y)
{
float src = c * log(1.0f + magArray[y*srcImage.m_width + x]);

uint8 magu8 = uint8(src);

uint8* dest = &destImage.m_pixels[y*destImage.m_pitch + x * 3];
dest[0] = magu8;
dest[1] = magu8;
dest[2] = magu8;
}
}
}

//======================================================================================
bool ImageSave (const SImageData &image, const char *fileName)
{
// open the file if we can
FILE *file;
file = fopen(fileName, "wb");
if (!file) {
printf("Could not save %s\n", fileName);
return false;
}

// make the BMP headers
BITMAPFILEHEADER header = {};
BITMAPINFOHEADER infoHeader = {};
header.bfType = 0x4D42;
header.bfOffBits = 54;
infoHeader.biSize = 40;
infoHeader.biWidth = (LONG)image.m_width;
infoHeader.biHeight = (LONG)image.m_height;
infoHeader.biPlanes = 1;
infoHeader.biBitCount = 24;
infoHeader.biSizeImage = (DWORD)image.m_pixels.size();
header.bfSize = infoHeader.biSizeImage + header.bfOffBits;

// write the headers and pixel data, then close the file
fwrite(&header, sizeof(header), 1, file);
fwrite(&infoHeader, sizeof(infoHeader), 1, file);
fwrite(&image.m_pixels[0], infoHeader.biSizeImage, 1, file);
fclose(file);

return true;
}

//======================================================================================
bool ImageLoad (const char *fileName, SImageData& imageData)
{
// open the file if we can
FILE *file;
file = fopen(fileName, "rb");
if (!file)
return false;

// read in the headers and validate them
BITMAPFILEHEADER header;
BITMAPINFOHEADER infoHeader;
if (fread(&header, sizeof(header), 1, file) != 1 ||
    fread(&infoHeader, sizeof(infoHeader), 1, file) != 1 ||
    header.bfType != 0x4D42 || infoHeader.biBitCount != 24)
{
fclose(file);
return false;
}

// size the image data. Note that it's in BGR order, and each row is padded to a multiple of 4 bytes
imageData.m_width = infoHeader.biWidth;
imageData.m_height = infoHeader.biHeight;
imageData.m_pitch = 4 * ((imageData.m_width * 24 + 31) / 32);
imageData.m_pixels.resize(imageData.m_pitch * imageData.m_height);

// read in our pixel data if we can
fseek(file, header.bfOffBits, SEEK_SET);
if (fread(&imageData.m_pixels[0], imageData.m_pixels.size(), 1, file) != 1)
{
fclose(file);
return false;
}

fclose(file);
return true;
}

//======================================================================================
void ImageInit (SImageData& image, size_t width, size_t height)
{
image.m_width = width;
image.m_height = height;
image.m_pitch = 4 * ((width * 24 + 31) / 32);
image.m_pixels.resize(image.m_pitch * image.m_height);
std::fill(image.m_pixels.begin(), image.m_pixels.end(), 0);
}

//======================================================================================
template <typename LAMBDA>
void ImageForEachPixel (SImageData& image, const LAMBDA& lambda)
{
size_t pixelIndex = 0;
for (size_t y = 0; y < image.m_height; ++y)
{
SColor* pixel = (SColor*)&image.m_pixels[y * image.m_pitch];
for (size_t x = 0; x < image.m_width; ++x)
{
lambda(*pixel, pixelIndex);
++pixel;
++pixelIndex;
}
}
}

//======================================================================================
template <typename LAMBDA>
void ImageForEachPixel (const SImageData& image, const LAMBDA& lambda)
{
size_t pixelIndex = 0;
for (size_t y = 0; y < image.m_height; ++y)
{
SColor* pixel = (SColor*)&image.m_pixels[y * image.m_pitch];
for (size_t x = 0; x < image.m_width; ++x)
{
lambda(*pixel, pixelIndex);
++pixel;
++pixelIndex;
}
}
}

//======================================================================================
void ImageConvertToLuma (SImageData& image)
{
ImageForEachPixel(
image,
[] (SColor& pixel, size_t pixelIndex)
{
float luma = float(pixel.R) * 0.3f + float(pixel.G) * 0.59f + float(pixel.B) * 0.11f;
uint8 lumau8 = uint8(luma + 0.5f);
pixel.R = lumau8;
pixel.G = lumau8;
pixel.B = lumau8;
}
);
}

//======================================================================================
void ImageCombine2 (const SImageData& imageA, const SImageData& imageB, SImageData& result)
{
// put the images side by side. A on left, B on right
ImageInit(result, imageA.m_width + imageB.m_width, max(imageA.m_height, imageB.m_height));
std::fill(result.m_pixels.begin(), result.m_pixels.end(), 0);

// image A on left
for (size_t y = 0; y < imageA.m_height; ++y)
{
SColor* destPixel = (SColor*)&result.m_pixels[y * result.m_pitch];
SColor* srcPixel = (SColor*)&imageA.m_pixels[y * imageA.m_pitch];
for (size_t x = 0; x < imageA.m_width; ++x)
{
destPixel[0] = srcPixel[0];
++destPixel;
++srcPixel;
}
}

// image B on right
for (size_t y = 0; y < imageB.m_height; ++y)
{
SColor* destPixel = (SColor*)&result.m_pixels[y * result.m_pitch + imageA.m_width * 3];
SColor* srcPixel = (SColor*)&imageB.m_pixels[y * imageB.m_pitch];
for (size_t x = 0; x < imageB.m_width; ++x)
{
destPixel[0] = srcPixel[0];
++destPixel;
++srcPixel;
}
}
}

//======================================================================================
void ImageCombine3 (const SImageData& imageA, const SImageData& imageB, const SImageData& imageC, SImageData& result)
{
// put the images side by side. A on left, B in middle, C on right
ImageInit(result, imageA.m_width + imageB.m_width + imageC.m_width, max(max(imageA.m_height, imageB.m_height), imageC.m_height));
std::fill(result.m_pixels.begin(), result.m_pixels.end(), 0);

// image A on left
for (size_t y = 0; y < imageA.m_height; ++y)
{
SColor* destPixel = (SColor*)&result.m_pixels[y * result.m_pitch];
SColor* srcPixel = (SColor*)&imageA.m_pixels[y * imageA.m_pitch];
for (size_t x = 0; x < imageA.m_width; ++x)
{
destPixel[0] = srcPixel[0];
++destPixel;
++srcPixel;
}
}

// image B in middle
for (size_t y = 0; y < imageB.m_height; ++y)
{
SColor* destPixel = (SColor*)&result.m_pixels[y * result.m_pitch + imageA.m_width * 3];
SColor* srcPixel = (SColor*)&imageB.m_pixels[y * imageB.m_pitch];
for (size_t x = 0; x < imageB.m_width; ++x)
{
destPixel[0] = srcPixel[0];
++destPixel;
++srcPixel;
}
}

// image C on right
for (size_t y = 0; y < imageC.m_height; ++y)
{
SColor* destPixel = (SColor*)&result.m_pixels[y * result.m_pitch + imageA.m_width * 3 + imageB.m_width * 3];
SColor* srcPixel = (SColor*)&imageC.m_pixels[y * imageC.m_pitch];
for (size_t x = 0; x < imageC.m_width; ++x)
{
destPixel[0] = srcPixel[0];
++destPixel;
++srcPixel;
}
}
}

//======================================================================================
float GoldenRatioMultiple (size_t multiple)
{
return float(multiple) * (1.0f + std::sqrtf(5.0f)) / 2.0f;
}

//======================================================================================
void IntegrationTest (const SImageData& dither, const SImageData& groundTruth, size_t frameIndex, const char* label)
{
// calculate min, max, total and average error
size_t minError = 0;
size_t maxError = 0;
size_t totalError = 0;
size_t pixelCount = 0;
for (size_t y = 0; y < dither.m_height; ++y)
{
SColor* ditherPixel = (SColor*)&dither.m_pixels[y * dither.m_pitch];
SColor* truthPixel = (SColor*)&groundTruth.m_pixels[y * groundTruth.m_pitch];
for (size_t x = 0; x < dither.m_width; ++x)
{
size_t error = 0;
if (ditherPixel->R > truthPixel->R)
error = ditherPixel->R - truthPixel->R;
else
error = truthPixel->R - ditherPixel->R;

totalError += error;

if ((x == 0 && y == 0) || error < minError)
minError = error;

if ((x == 0 && y == 0) || error > maxError)
maxError = error;

++ditherPixel;
++truthPixel;
++pixelCount;
}
}
float averageError = float(totalError) / float(pixelCount);

// calculate standard deviation
float sumSquaredDiff = 0.0f;
for (size_t y = 0; y < dither.m_height; ++y)
{
SColor* ditherPixel = (SColor*)&dither.m_pixels[y * dither.m_pitch];
SColor* truthPixel = (SColor*)&groundTruth.m_pixels[y * groundTruth.m_pitch];
for (size_t x = 0; x < dither.m_width; ++x)
{
size_t error = 0;
if (ditherPixel->R > truthPixel->R)
error = ditherPixel->R - truthPixel->R;
else
error = truthPixel->R - ditherPixel->R;

float diff = float(error) - averageError;

sumSquaredDiff += diff*diff;
}
}
float stdDev = std::sqrtf(sumSquaredDiff / float(pixelCount - 1));

// report results
fprintf(g_logFile, "%s %zu error\n", label, frameIndex);
fprintf(g_logFile, "  min error: %zu\n", minError);
fprintf(g_logFile, "  max error: %zu\n", maxError);
fprintf(g_logFile, "  avg error: %0.2f\n", averageError);
fprintf(g_logFile, "  stddev: %0.2f\n", stdDev);
fprintf(g_logFile, "\n");
}

//======================================================================================
void HistogramTest (const SImageData& noise, size_t frameIndex, const char* label)
{
std::array<size_t, 256> counts;
std::fill(counts.begin(), counts.end(), 0);

ImageForEachPixel(
noise,
[&] (const SColor& pixel, size_t pixelIndex)
{
counts[pixel.R]++;
}
);

// calculate min, max, total and average
size_t minCount = 0;
size_t maxCount = 0;
size_t totalCount = 0;
for (size_t i = 0; i < 256; ++i)
{
if (i == 0 || counts[i] < minCount)
minCount = counts[i];

if (i == 0 || counts[i] > maxCount)
maxCount = counts[i];

totalCount += counts[i];
}
float averageCount = float(totalCount) / float(256.0f);

// calculate standard deviation
float sumSquaredDiff = 0.0f;
for (size_t i = 0; i < 256; ++i)
{
float diff = float(counts[i]) - averageCount;
sumSquaredDiff += diff*diff;
}
float stdDev = std::sqrtf(sumSquaredDiff / 255.0f);

// report results
fprintf(g_logFile, "%s %zu histogram\n", label, frameIndex);
fprintf(g_logFile, "  min count: %zu\n", minCount);
fprintf(g_logFile, "  max count: %zu\n", maxCount);
fprintf(g_logFile, "  avg count: %0.2f\n", averageCount);
fprintf(g_logFile, "  stddev: %0.2f\n", stdDev);
fprintf(g_logFile, "  counts: ");
for (size_t i = 0; i < 256; ++i)
{
if (i > 0)
fprintf(g_logFile, ", ");
fprintf(g_logFile, "%zu", counts[i]);
}

fprintf(g_logFile, "\n\n");
}

//======================================================================================
void GenerateWhiteNoise (SImageData& image, size_t width, size_t height)
{
ImageInit(image, width, height);

std::random_device rd;
std::mt19937 rng(rd());
std::uniform_int_distribution<unsigned int> dist(0, 255);

ImageForEachPixel(
image,
[&] (SColor& pixel, size_t pixelIndex)
{
uint8 value = dist(rng);
pixel.R = value;
pixel.G = value;
pixel.B = value;
}
);
}

//======================================================================================
void GenerateInterleavedGradientNoise (SImageData& image, size_t width, size_t height, float offsetX, float offsetY)
{
ImageInit(image, width, height);

for (size_t y = 0; y < height; ++y)
{
SColor* pixel = (SColor*)&image.m_pixels[y * image.m_pitch];
for (size_t x = 0; x < width; ++x)
{
float valueFloat = std::fmodf(52.9829189f * std::fmodf(0.06711056f*float(x + offsetX) + 0.00583715f*float(y + offsetY), 1.0f), 1.0f);
size_t valueBig = size_t(valueFloat * 256.0f);
uint8 value = uint8(valueBig % 256);
pixel->R = value;
pixel->G = value;
pixel->B = value;
++pixel;
}
}
}

//======================================================================================
void DitherWithTexture (const SImageData& ditherImage, const SImageData& noiseImage, SImageData& result)
{
// init the result image
ImageInit(result, ditherImage.m_width, ditherImage.m_height);

// make the result image
for (size_t y = 0; y < ditherImage.m_height; ++y)
{
SColor* srcDitherPixel = (SColor*)&ditherImage.m_pixels[y * ditherImage.m_pitch];
SColor* destDitherPixel = (SColor*)&result.m_pixels[y * result.m_pitch];

for (size_t x = 0; x < ditherImage.m_width; ++x)
{
// tile the noise in case it isn't the same size as the image we are dithering
size_t noiseX = x % noiseImage.m_width;
size_t noiseY = y % noiseImage.m_height;
SColor* noisePixel = (SColor*)&noiseImage.m_pixels[noiseY * noiseImage.m_pitch + noiseX * 3];

uint8 value = 0;
if (noisePixel->R < srcDitherPixel->R)
value = 255;

destDitherPixel->R = value;
destDitherPixel->G = value;
destDitherPixel->B = value;

++srcDitherPixel;
++destDitherPixel;
}
}
}

//======================================================================================
void DitherWhiteNoise (const SImageData& ditherImage)
{
printf("\n%s\n", __FUNCTION__);

// make noise
SImageData noise;
GenerateWhiteNoise(noise, ditherImage.m_width, ditherImage.m_height);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// save the results
SImageData combined;
ImageCombine3(ditherImage, noise, dither, combined);
ImageSave(combined, "out/still_whitenoise.bmp");
}

//======================================================================================
void DitherInterleavedGradientNoise (const SImageData& ditherImage)
{
printf("\n%s\n", __FUNCTION__);

// make noise
SImageData noise;
GenerateInterleavedGradientNoise(noise, ditherImage.m_width, ditherImage.m_height, 0.0f, 0.0f);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// save the results
SImageData combined;
ImageCombine3(ditherImage, noise, dither, combined);
ImageSave(combined, "out/still_ignoise.bmp");
}

//======================================================================================
void DitherBlueNoise (const SImageData& ditherImage, const SImageData& blueNoise)
{
printf("\n%s\n", __FUNCTION__);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, blueNoise, dither);

// save the results
SImageData combined;
ImageCombine3(ditherImage, blueNoise, dither, combined);
ImageSave(combined, "out/still_bluenoise.bmp");
}

//======================================================================================
void DitherWhiteNoiseAnimated (const SImageData& ditherImage)
{
printf("\n%s\n", __FUNCTION__);

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/anim_whitenoise%zu.bmp", i);

// make noise
SImageData noise;
GenerateWhiteNoise(noise, ditherImage.m_width, ditherImage.m_height);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// save the results
SImageData combined;
ImageCombine2(noise, dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherInterleavedGradientNoiseAnimated (const SImageData& ditherImage)
{
printf("\n%s\n", __FUNCTION__);

std::random_device rd;
std::mt19937 rng(rd());
std::uniform_real_distribution<float> dist(0.0f, 1000.0f);

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/anim_ignoise%zu.bmp", i);

// make noise
SImageData noise;
GenerateInterleavedGradientNoise(noise, ditherImage.m_width, ditherImage.m_height, dist(rng), dist(rng));

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// save the results
SImageData combined;
ImageCombine2(noise, dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherBlueNoiseAnimated (const SImageData& ditherImage, const SImageData blueNoise[8])
{
printf("\n%s\n", __FUNCTION__);

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/anim_bluenoise%zu.bmp", i);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, blueNoise[i], dither);

// save the results
SImageData combined;
ImageCombine2(blueNoise[i], dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherWhiteNoiseAnimatedIntegrated (const SImageData& ditherImage)
{
printf("\n%s\n", __FUNCTION__);

std::vector<float> integration;
integration.resize(ditherImage.m_width * ditherImage.m_height);
std::fill(integration.begin(), integration.end(), 0.0f);

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/animint_whitenoise%zu.bmp", i);

// make noise
SImageData noise;
GenerateWhiteNoise(noise, ditherImage.m_width, ditherImage.m_height);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// integrate and put the current integration results into the dither image
ImageForEachPixel(
dither,
[&] (SColor& pixel, size_t pixelIndex)
{
float pixelValueFloat = float(pixel.R) / 255.0f;
integration[pixelIndex] = Lerp(integration[pixelIndex], pixelValueFloat, 1.0f / float(i+1));

uint8 integratedPixelValue = uint8(integration[pixelIndex] * 255.0f);
pixel.R = integratedPixelValue;
pixel.G = integratedPixelValue;
pixel.B = integratedPixelValue;
}
);

// do an integration test
IntegrationTest(dither, ditherImage, i, __FUNCTION__);

// save the results
SImageData combined;
ImageCombine2(noise, dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherInterleavedGradientNoiseAnimatedIntegrated (const SImageData& ditherImage)
{
printf("\n%s\n", __FUNCTION__);

std::vector<float> integration;
integration.resize(ditherImage.m_width * ditherImage.m_height);
std::fill(integration.begin(), integration.end(), 0.0f);

std::random_device rd;
std::mt19937 rng(rd());
std::uniform_real_distribution<float> dist(0.0f, 1000.0f);

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/animint_ignoise%zu.bmp", i);

// make noise
SImageData noise;
GenerateInterleavedGradientNoise(noise, ditherImage.m_width, ditherImage.m_height, dist(rng), dist(rng));

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// integrate and put the current integration results into the dither image
ImageForEachPixel(
dither,
[&](SColor& pixel, size_t pixelIndex)
{
float pixelValueFloat = float(pixel.R) / 255.0f;
integration[pixelIndex] = Lerp(integration[pixelIndex], pixelValueFloat, 1.0f / float(i + 1));

uint8 integratedPixelValue = uint8(integration[pixelIndex] * 255.0f);
pixel.R = integratedPixelValue;
pixel.G = integratedPixelValue;
pixel.B = integratedPixelValue;
}
);

// do an integration test
IntegrationTest(dither, ditherImage, i, __FUNCTION__);

// save the results
SImageData combined;
ImageCombine2(noise, dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherBlueNoiseAnimatedIntegrated (const SImageData& ditherImage, const SImageData blueNoise[8])
{
printf("\n%s\n", __FUNCTION__);

std::vector<float> integration;
integration.resize(ditherImage.m_width * ditherImage.m_height);
std::fill(integration.begin(), integration.end(), 0.0f);

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/animint_bluenoise%zu.bmp", i);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, blueNoise[i], dither);

// integrate and put the current integration results into the dither image
ImageForEachPixel(
dither,
[&] (SColor& pixel, size_t pixelIndex)
{
float pixelValueFloat = float(pixel.R) / 255.0f;
integration[pixelIndex] = Lerp(integration[pixelIndex], pixelValueFloat, 1.0f / float(i+1));

uint8 integratedPixelValue = uint8(integration[pixelIndex] * 255.0f);
pixel.R = integratedPixelValue;
pixel.G = integratedPixelValue;
pixel.B = integratedPixelValue;
}
);

// do an integration test
IntegrationTest(dither, ditherImage, i, __FUNCTION__);

// save the results
SImageData combined;
ImageCombine2(blueNoise[i], dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherWhiteNoiseAnimatedGoldenRatio (const SImageData& ditherImage)
{
printf("\n%s\n", __FUNCTION__);

// make noise
SImageData noiseSrc;
GenerateWhiteNoise(noiseSrc, ditherImage.m_width, ditherImage.m_height);

SImageData noise;
ImageInit(noise, noiseSrc.m_width, noiseSrc.m_height);

SImageDataComplex noiseDFT;
SImageData noiseDFTMag;

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/animgr_whitenoise%zu.bmp", i);

// add golden ratio to the noise after each frame
noise.m_pixels = noiseSrc.m_pixels;
const float c_goldenRatioConjugate = 0.61803398875f;
float add = c_goldenRatioConjugate * float(i);
ImageForEachPixel(
noise,
[&] (SColor& pixel, size_t pixelIndex)
{
float valueFloat = (float(pixel.R) / 255.0f) + add;
size_t valueBig = size_t(valueFloat * 255.0f);
uint8 value = uint8(valueBig % 256);
pixel.R = value;
pixel.G = value;
pixel.B = value;
}
);

// DFT the noise
if (c_doDFT)
{
ImageDFT(noise, noiseDFT);
GetMagnitudeData(noiseDFT, noiseDFTMag);
}
else
{
ImageInit(noiseDFTMag, noise.m_width, noise.m_height);
std::fill(noiseDFTMag.m_pixels.begin(), noiseDFTMag.m_pixels.end(), 0);
}

// Histogram test the noise
HistogramTest(noise, i, __FUNCTION__);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// save the results
SImageData combined;
ImageCombine3(noiseDFTMag, noise, dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherInterleavedGradientNoiseAnimatedGoldenRatio (const SImageData& ditherImage)
{
printf("\n%s\n", __FUNCTION__);

// make noise
SImageData noiseSrc;
GenerateInterleavedGradientNoise(noiseSrc, ditherImage.m_width, ditherImage.m_height, 0.0f, 0.0f);

SImageData noise;
ImageInit(noise, noiseSrc.m_width, noiseSrc.m_height);

SImageDataComplex noiseDFT;
SImageData noiseDFTMag;

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/animgr_ignoise%zu.bmp", i);

// add golden ratio to the noise after each frame
noise.m_pixels = noiseSrc.m_pixels;
const float c_goldenRatioConjugate = 0.61803398875f;
float add = c_goldenRatioConjugate * float(i);
ImageForEachPixel(
noise,
[&] (SColor& pixel, size_t pixelIndex)
{
float valueFloat = (float(pixel.R) / 255.0f) + add;
size_t valueBig = size_t(valueFloat * 255.0f);
uint8 value = uint8(valueBig % 256);
pixel.R = value;
pixel.G = value;
pixel.B = value;
}
);

// DFT the noise
if (c_doDFT)
{
ImageDFT(noise, noiseDFT);
GetMagnitudeData(noiseDFT, noiseDFTMag);
}
else
{
ImageInit(noiseDFTMag, noise.m_width, noise.m_height);
std::fill(noiseDFTMag.m_pixels.begin(), noiseDFTMag.m_pixels.end(), 0);
}

// Histogram test the noise
HistogramTest(noise, i, __FUNCTION__);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// save the results
SImageData combined;
ImageCombine3(noiseDFTMag, noise, dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherBlueNoiseAnimatedGoldenRatio (const SImageData& ditherImage, const SImageData& noiseSrc)
{
printf("\n%s\n", __FUNCTION__);

SImageData noise;
ImageInit(noise, noiseSrc.m_width, noiseSrc.m_height);

SImageDataComplex noiseDFT;
SImageData noiseDFTMag;

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/animgr_bluenoise%zu.bmp", i);

// add golden ratio to the noise after each frame
noise.m_pixels = noiseSrc.m_pixels;
const float c_goldenRatioConjugate = 0.61803398875f;
float add = c_goldenRatioConjugate * float(i);
ImageForEachPixel(
noise,
[&] (SColor& pixel, size_t pixelIndex)
{
float valueFloat = (float(pixel.R) / 255.0f) + add;
size_t valueBig = size_t(valueFloat * 255.0f);
uint8 value = uint8(valueBig % 256);
pixel.R = value;
pixel.G = value;
pixel.B = value;
}
);

// DFT the noise
if (c_doDFT)
{
ImageDFT(noise, noiseDFT);
GetMagnitudeData(noiseDFT, noiseDFTMag);
}
else
{
ImageInit(noiseDFTMag, noise.m_width, noise.m_height);
std::fill(noiseDFTMag.m_pixels.begin(), noiseDFTMag.m_pixels.end(), 0);
}

// Histogram test the noise
HistogramTest(noise, i, __FUNCTION__);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// save the results
SImageData combined;
ImageCombine3(noiseDFTMag, noise, dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherWhiteNoiseAnimatedGoldenRatioIntegrated (const SImageData& ditherImage)
{
printf("\n%s\n", __FUNCTION__);

std::vector<float> integration;
integration.resize(ditherImage.m_width * ditherImage.m_height);
std::fill(integration.begin(), integration.end(), 0.0f);

// make noise
SImageData noiseSrc;
GenerateWhiteNoise(noiseSrc, ditherImage.m_width, ditherImage.m_height);

SImageData noise;
ImageInit(noise, noiseSrc.m_width, noiseSrc.m_height);

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/animgrint_whitenoise%zu.bmp", i);

// add golden ratio to the noise after each frame
noise.m_pixels = noiseSrc.m_pixels;
const float c_goldenRatioConjugate = 0.61803398875f;
float add = c_goldenRatioConjugate * float(i);
ImageForEachPixel(
noise,
[&] (SColor& pixel, size_t pixelIndex)
{
float valueFloat = (float(pixel.R) / 255.0f) + add;
size_t valueBig = size_t(valueFloat * 255.0f);
uint8 value = uint8(valueBig % 256);
pixel.R = value;
pixel.G = value;
pixel.B = value;
}
);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// integrate and put the current integration results into the dither image
ImageForEachPixel(
dither,
[&] (SColor& pixel, size_t pixelIndex)
{
float pixelValueFloat = float(pixel.R) / 255.0f;
integration[pixelIndex] = Lerp(integration[pixelIndex], pixelValueFloat, 1.0f / float(i+1));

uint8 integratedPixelValue = uint8(integration[pixelIndex] * 255.0f);
pixel.R = integratedPixelValue;
pixel.G = integratedPixelValue;
pixel.B = integratedPixelValue;
}
);

// do an integration test
IntegrationTest(dither, ditherImage, i, __FUNCTION__);

// save the results
SImageData combined;
ImageCombine2(noise, dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherInterleavedGradientNoiseAnimatedGoldenRatioIntegrated (const SImageData& ditherImage)
{
printf("\n%s\n", __FUNCTION__);

std::vector<float> integration;
integration.resize(ditherImage.m_width * ditherImage.m_height);
std::fill(integration.begin(), integration.end(), 0.0f);

// make noise
SImageData noiseSrc;
GenerateInterleavedGradientNoise(noiseSrc, ditherImage.m_width, ditherImage.m_height, 0.0f, 0.0f);

SImageData noise;
ImageInit(noise, noiseSrc.m_width, noiseSrc.m_height);

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/animgrint_ignoise%zu.bmp", i);

// add golden ratio to the noise after each frame
noise.m_pixels = noiseSrc.m_pixels;
const float c_goldenRatioConjugate = 0.61803398875f;
float add = c_goldenRatioConjugate * float(i);
ImageForEachPixel(
noise,
[&] (SColor& pixel, size_t pixelIndex)
{
float valueFloat = (float(pixel.R) / 255.0f) + add;
size_t valueBig = size_t(valueFloat * 255.0f);
uint8 value = uint8(valueBig % 256);
pixel.R = value;
pixel.G = value;
pixel.B = value;
}
);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// integrate and put the current integration results into the dither image
ImageForEachPixel(
dither,
[&] (SColor& pixel, size_t pixelIndex)
{
float pixelValueFloat = float(pixel.R) / 255.0f;
integration[pixelIndex] = Lerp(integration[pixelIndex], pixelValueFloat, 1.0f / float(i+1));

uint8 integratedPixelValue = uint8(integration[pixelIndex] * 255.0f);
pixel.R = integratedPixelValue;
pixel.G = integratedPixelValue;
pixel.B = integratedPixelValue;
}
);

// do an integration test
IntegrationTest(dither, ditherImage, i, __FUNCTION__);

// save the results
SImageData combined;
ImageCombine2(noise, dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherBlueNoiseAnimatedGoldenRatioIntegrated (const SImageData& ditherImage, const SImageData& noiseSrc)
{
printf("\n%s\n", __FUNCTION__);

std::vector<float> integration;
integration.resize(ditherImage.m_width * ditherImage.m_height);
std::fill(integration.begin(), integration.end(), 0.0f);

SImageData noise;
ImageInit(noise, noiseSrc.m_width, noiseSrc.m_height);

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/animgrint_bluenoise%zu.bmp", i);

// add golden ratio to the noise after each frame
noise.m_pixels = noiseSrc.m_pixels;
const float c_goldenRatioConjugate = 0.61803398875f;
float add = c_goldenRatioConjugate * float(i);
ImageForEachPixel(
noise,
[&] (SColor& pixel, size_t pixelIndex)
{
float valueFloat = (float(pixel.R) / 255.0f) + add;
size_t valueBig = size_t(valueFloat * 255.0f);
uint8 value = uint8(valueBig % 256);
pixel.R = value;
pixel.G = value;
pixel.B = value;
}
);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// integrate and put the current integration results into the dither image
ImageForEachPixel(
dither,
[&] (SColor& pixel, size_t pixelIndex)
{
float pixelValueFloat = float(pixel.R) / 255.0f;
integration[pixelIndex] = Lerp(integration[pixelIndex], pixelValueFloat, 1.0f / float(i+1));

uint8 integratedPixelValue = uint8(integration[pixelIndex] * 255.0f);
pixel.R = integratedPixelValue;
pixel.G = integratedPixelValue;
pixel.B = integratedPixelValue;
}
);

// do an integration test
IntegrationTest(dither, ditherImage, i, __FUNCTION__);

// save the results
SImageData combined;
ImageCombine2(noise, dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
int main (int argc, char** argv)
{
// load the dither image and convert it to greyscale (luma)
SImageData ditherImage;
if (!ImageLoad("src/ditherimage.bmp", ditherImage))
{
printf("Could not load src/ditherimage.bmp");
return 0;
}
ImageConvertToLuma(ditherImage);

// load the blue noise images.
SImageData blueNoise[8];
for (size_t i = 0; i < 8; ++i)
{
char buffer[256];
sprintf(buffer, "src/BN%zu.bmp", i);
if (!ImageLoad(buffer, blueNoise[i]))
{
printf("Could not load %s", buffer);
return 0;
}

// They have different values in R, G, B so make R be the value for all channels
ImageForEachPixel(
blueNoise[i],
[] (SColor& pixel, size_t pixelIndex)
{
pixel.G = pixel.R;
pixel.B = pixel.R;
}
);
}

g_logFile = fopen("log.txt", "w+t");

// still image dither tests
DitherWhiteNoise(ditherImage);
DitherBlueNoise(ditherImage, blueNoise[0]);

// Animated dither tests
DitherWhiteNoiseAnimated(ditherImage);
DitherBlueNoiseAnimated(ditherImage, blueNoise);

// Golden ratio animated dither tests
DitherWhiteNoiseAnimatedGoldenRatio(ditherImage);
DitherInterleavedGradientNoiseAnimatedGoldenRatio(ditherImage);
DitherBlueNoiseAnimatedGoldenRatio(ditherImage, blueNoise[0]);

// Animated dither integration tests
DitherWhiteNoiseAnimatedIntegrated(ditherImage);
DitherInterleavedGradientNoiseAnimatedIntegrated(ditherImage);
DitherBlueNoiseAnimatedIntegrated(ditherImage, blueNoise);

// Golden ratio animated dither integration tests
DitherWhiteNoiseAnimatedGoldenRatioIntegrated(ditherImage);
DitherInterleavedGradientNoiseAnimatedGoldenRatioIntegrated(ditherImage);
DitherBlueNoiseAnimatedGoldenRatioIntegrated(ditherImage, blueNoise[0]);

fclose(g_logFile);

return 0;
}


# Transmuting White Noise To Blue, Red, Green, Purple

There are many algorithms for generating blue noise, and there are a lot of people working on new ways to do it.

It made me wonder: why don’t people just use the inverse discrete Fourier transform to make noise that has the desired frequency spectrum?

I knew there had to be a reason, since that is a pretty obvious thing to try, but I wasn’t sure if it was due to poor quality results, slower execution times, or some other reason.

After trying it myself and not getting great results I asked on twitter and Bart Wronski (@BartWronsk) clued me in.

It turns out that you can set up your frequency magnitudes so that there are only high frequencies, give them random amplitudes and random phases, and then do the inverse DFT, but the result isn't guaranteed to use all possible color values (0-255), and even if it does, it may not use them evenly.

He pointed me at something that Timothy Lottes wrote up (@TimothyLottes), which talked about using some basic DSP operations to transform white noise into blue noise.

This post uses that technique to do some “Noise Alchemy” and turn white noise into a couple other types of noise. Simple single file standalone C++ source code included at bottom of the post!

## Red Noise

We’ll start with red noise because it’s the simplest. Here’s how you do it:

1. Start with white noise
2. Low pass filter the white noise
3. Re-normalize the histogram
4. Repeat from step 2 as many times as desired

That’s all there is to it.

If you are wondering how you low pass filter an image, that’s another way of saying “blur”. Blurring makes the high frequency details go away, leaving the low frequency smooth shapes.

There are multiple ways to do a blur, including: box blur (averaging pixels), Gaussian blur, and sinc filtering. In this post I use a Gaussian blur and get decent results, but a box blur would be cheaper/faster, and sinc filtering would give the most correct results.

An important detail about doing the blur is that your blur needs to “wrap around”. If you are blurring a pixel on the edge of the image, it should smear over to the other side of the image.

You might be wondering how you would normalize the histogram. Normalizing the histogram just means that we want to make sure that the image uses the full range of greyscale values evenly. We don’t want the noise to only use bright colors or only use dark colors, or even just MOSTLY use dark colors, for instance. If we count each color used in the image (which is the histogram I’m referring to), the counts for each color should be roughly equal.

To fix the histogram, Timothy Lottes suggests making an array that contains each pixel's location and brightness. You first shuffle the array, then sort it by brightness (Timothy packs the pixel information into a 64 bit int, so he uses a radix sort, which is more efficient for fixed size keys). Next, set the brightness of each item in the array to its index divided by the number of items in the list, which puts the values in the 0 to 1 range. Lastly, write the brightness values back out to the image, using the pixel locations you stored off.

This process makes sure that the full range of greyscale values is used, and used evenly. It also preserves the ordering of pixel brightness: if pixel A was darker than pixel B before this process, it will still be darker afterwards.

You may wonder why the shuffle is done before the sort. That's done so that when there are ties between values, it's random which one ends up darker. This is important because if it weren't random, there could be obvious (ugly/incorrect) patterns in the results.
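The shuffle-sort-remap idea can be sketched compactly (hypothetical helper name; the full source below does the same thing in NormalizeHistogram, operating on an SImageDataFloat):

```cpp
#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

// Remap values so the histogram is flat over [0, 1]: shuffle first so ties
// are broken randomly, then sort by value, then replace each value with its
// rank divided by (count - 1). The ordering of brightness is preserved.
void NormalizeHistogram01 (std::vector<float>& values, std::mt19937& rng)
{
    std::vector<size_t> order(values.size());
    std::iota(order.begin(), order.end(), 0);
    std::shuffle(order.begin(), order.end(), rng);
    std::stable_sort(order.begin(), order.end(),
        [&] (size_t a, size_t b) { return values[a] < values[b]; });
    for (size_t rank = 0; rank < order.size(); ++rank)
        values[order[rank]] = float(rank) / float(order.size() - 1);
}
```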

When normalizing the histogram, it affects the frequency composition of the image, but if doing this process multiple times, it seems to do an OK job of converging to decent results.

Red noise has low frequency content, which means it doesn't have sharp details / fast data changes. An interesting property of 2d red noise is that if you take a random walk on the texture, the values you hit form a random walk of 1d values. Likewise, if you draw a random straight line on the texture, the pixels it passes through form a random walk. That is, you get random numbers, but each number is pretty close to the previous one.

The formal definition of red noise has a more strict definition about frequency content than what we are going for in this post. (Wikipedia: red noise)

Here’s red noise (top) and the frequency magnitudes (bottom) using 5 iterations of the described process, and a blur sigma (strength of blur) of 1.0:

Using different blur strengths controls what frequencies get attenuated. Weaker blurs leave higher frequency components.

Here is red noise generated the same way but using a blur sigma of 0.5:

And here is red noise generated using a blur sigma of 2.0

Here are some animated gifs showing the evolution of the noise as well as the frequencies over time:

Sigma 0.5:

Sigma 1.0:

Sigma 2.0:

## Blue Noise

To make blue noise, you use the exact same method but instead of using a low pass filter you use a high pass filter.

An easy way to high pass filter an image is to do a low pass filter to get the low frequency content, and subtract that from the original image so that you are left with the high frequency content.

Blue noise has high frequency content which means it is only made up of sharp details / fast data changes. An interesting property of 2d blue noise is that if you take a random walk (or a straight line walk) on it in any direction, you’ll get a low discrepancy sequence. That is, you’ll get random numbers, but each number will be very different from the previous one.

The formal definition of blue noise has a more strict definition about frequency content than what we are going for in this post. (Wikipedia: blue noise)

Here is blue noise using 5 iterations and a blur sigma of 1.0:

Just like with red noise, changing the strength of the blur controls what frequencies get attenuated.

Here is a sigma of 0.5:

Here is a sigma of 2.0:

Animations of sigma 0.5:

Animations of sigma 1.0:

Animations of sigma 2.0:

## Green Noise

Green noise is noise that doesn’t have either low or high frequency components, only mid frequency components.

To make green noise, you use a “band pass” filter, which is a filter that gets rid of both the high and low frequency components, leaving only the middle.

Here’s how to make a band pass filter:

1. Make a weak blur of the image – this is the image without the highest frequencies.
2. Make a strong blur of the image – this is the image with just the lowest frequencies.
3. Subtract the strong blur from the weak blur – this is the image with just the middle frequencies.

Here is 5 iterations using a sigma of 0.5 and 2.0:

Here is the animation of it evolving:

Nathan Reed (@ReedBeta) mentioned that the green noise looked a lot like Perlin noise, which makes sense because Perlin noise is designed to be band limited, which is what makes it easy to control its look by summing multiple octaves. This makes sense to me because you can basically control which frequencies you put noise into by scaling the frequency ring.

Fabien Giesen (@rygorous) said this also helps with mipmapping. This makes sense to me because there can’t be (as much) aliasing with the higher frequencies missing from the noise.

## Purple Noise

I’ve never heard of this noise so it may have another name, but what I’m calling purple noise is just noise which has high and low frequency content, but no middle frequency content. It’s basically red noise plus blue noise.

You could literally make red noise and add it to blue noise to make purple noise, but how I made it for this post is to use a “band stop” filter.

A band stop filter is a filter that gets rid of middle frequencies and leaves high and low frequencies alone.

To band stop filter an image, you do a band pass filter to get the middle frequencies (as described in the last section!), and then subtract that from the original image to get only the low and high frequencies.

Here is 5 iterations using a sigma of 0.5 and 2.0:

Here is the animation:

This technique might be useful if you ever need to generate specific types of noise quickly, but if you are just generating noise textures offline to use later in performance critical situations, there are better algorithms. When generating textures offline in advance you have “all the time in the world”, so the simplicity of this algorithm probably isn't worth the trade off of lower quality noise.

Dithering part two – golden ratio sequence, blue noise and highpass-and-remap (Bart Wronski)

VDR Follow Up – Fine Art of Film Grain (Timothy Lottes)

Gaussian Blur (Me)

Image DFT / IDFT (me)

Blue-noise Dithered Sampling (Solid Angle) – a better way to generate colored noises

Apparently there is a relation between blue noise, turing patterns / reaction diffusion and these filtering techniques. (Thanks @R4_Unit!)
Turing Patterns in Photoshop

Point Sampling with General Noise Spectrum

Here is an interesting shadertoy which uses the mip map of a noise texture to get the low frequency content to do a high pass filter: (found by @paniq, who unfortunately got nerd sniped by this noise generation stuff hehe)
pseudo blue noise 2

## Source Code

The source code to generate the images is below, and is also on github at Atrix256 – NoiseShaping

#define _CRT_SECURE_NO_WARNINGS

#include <windows.h>  // for bitmap headers.  Sorry non windows people!
#include <stdint.h>
#include <vector>
#include <random>
#include <array>
#include <complex>
#include <atomic>
#include <thread>
#include <algorithm>

typedef uint8_t uint8;
typedef int64_t int64;

const float c_pi = 3.14159265359f;

// settings
const size_t    c_imageSize = 256;
const bool      c_doDFT = true;
const float     c_blurThresholdPercent = 0.005f; // lower numbers give higher quality results, but take longer. This is 0.5%
const size_t    c_numBlurs = 5;

//======================================================================================
struct SImageData
{
SImageData ()
: m_width(0)
, m_height(0)
{ }

size_t m_width;
size_t m_height;
size_t m_pitch;
std::vector<uint8> m_pixels;
};

//======================================================================================
struct SColor
{
SColor (uint8 _R = 0, uint8 _G = 0, uint8 _B = 0)
: R(_R), G(_G), B(_B)
{ }

inline void Set (uint8 _R, uint8 _G, uint8 _B)
{
R = _R;
G = _G;
B = _B;
}

uint8 B, G, R;
};

//======================================================================================
struct SImageDataFloat
{
SImageDataFloat()
: m_width(0)
, m_height(0)
{ }

size_t m_width;
size_t m_height;
std::vector<float> m_pixels;
};

//======================================================================================
struct SImageDataComplex
{
SImageDataComplex ()
: m_width(0)
, m_height(0)
{ }

size_t m_width;
size_t m_height;
std::vector<std::complex<float>> m_pixels;
};

//======================================================================================
std::complex<float> DFTPixel (const SImageData &srcImage, size_t K, size_t L)
{
std::complex<float> ret(0.0f, 0.0f);

for (size_t x = 0; x < srcImage.m_width; ++x)
{
for (size_t y = 0; y < srcImage.m_height; ++y)
{
// Get the pixel value (assuming greyscale) and convert it to [0,1] space
const uint8 *src = &srcImage.m_pixels[(y * srcImage.m_pitch) + x * 3];
float grey = float(src[0]) / 255.0f;

// Add to the sum of the return value
float v = float(K * x) / float(srcImage.m_width);
v += float(L * y) / float(srcImage.m_height);
ret += std::complex<float>(grey, 0.0f) * std::polar<float>(1.0f, -2.0f * c_pi * v);
}
}

return ret;
}

//======================================================================================
void ImageDFT (const SImageData &srcImage, SImageDataComplex &destImage)
{
// NOTE: this function assumes srcImage is greyscale, so works on only the red component of srcImage.
// ImageToGrey() will convert an image to greyscale.

// size the output dft data
destImage.m_width = srcImage.m_width;
destImage.m_height = srcImage.m_height;
destImage.m_pixels.resize(destImage.m_width*destImage.m_height);

// calculate the 2d dft (brute force, not using a fast fourier transform)
// a worker thread pulls rows from an atomic counter until every row is done
std::atomic<size_t> nextRow(0);
std::thread t(
[&] ()
{
size_t row = nextRow.fetch_add(1);
bool reportProgress = (row == 0);
int lastPercent = -1;

while (row < srcImage.m_height)
{
// calculate the DFT for every pixel / frequency in this row
for (size_t x = 0; x < srcImage.m_width; ++x)
{
destImage.m_pixels[row * destImage.m_width + x] = DFTPixel(srcImage, x, row);
}

// report progress if we should
if (reportProgress)
{
int percent = int(100.0f * float(row) / float(srcImage.m_height));
if (lastPercent != percent)
{
lastPercent = percent;
printf("            \rDFT: %i%%", lastPercent);
}
}

// go to the next row
row = nextRow.fetch_add(1);
}
}
);

t.join();

printf("\n");
}

//======================================================================================
void GetMagnitudeData (const SImageDataComplex& srcImage, SImageData& destImage)
{
// size the output image
destImage.m_width = srcImage.m_width;
destImage.m_height = srcImage.m_height;
destImage.m_pitch = 4 * ((srcImage.m_width * 24 + 31) / 32);
destImage.m_pixels.resize(destImage.m_pitch*destImage.m_height);

// get floating point magnitude data
std::vector<float> magArray;
magArray.resize(srcImage.m_width*srcImage.m_height);
float maxmag = 0.0f;
for (size_t x = 0; x < srcImage.m_width; ++x)
{
for (size_t y = 0; y < srcImage.m_height; ++y)
{
// Offset the information by half width & height in the positive direction.
// This makes frequency 0 (DC) be at the image origin, like most diagrams show it.
int k = (x + (int)srcImage.m_width / 2) % (int)srcImage.m_width;
int l = (y + (int)srcImage.m_height / 2) % (int)srcImage.m_height;
const std::complex<float> &src = srcImage.m_pixels[l*srcImage.m_width + k];

float mag = std::abs(src);
if (mag > maxmag)
maxmag = mag;

magArray[y*srcImage.m_width + x] = mag;
}
}
if (maxmag == 0.0f)
maxmag = 1.0f;

const float c = 255.0f / log(1.0f+maxmag);

// normalize the magnitude data and send it back in [0, 255]
for (size_t x = 0; x < srcImage.m_width; ++x)
{
for (size_t y = 0; y < srcImage.m_height; ++y)
{
float src = c * log(1.0f + magArray[y*srcImage.m_width + x]);

uint8 magu8 = uint8(src);

uint8* dest = &destImage.m_pixels[y*destImage.m_pitch + x * 3];
dest[0] = magu8;
dest[1] = magu8;
dest[2] = magu8;
}
}
}

//======================================================================================
bool ImageSave (const SImageData &image, const char *fileName)
{
// open the file if we can
FILE *file;
file = fopen(fileName, "wb");
if (!file) {
printf("Could not save %s\n", fileName);
return false;
}

// make the BMP header info (24 bits per pixel, BGR, rows padded to multiples of 4 bytes)
BITMAPFILEHEADER header = {};
header.bfType = 0x4D42;
header.bfOffBits = 54;

BITMAPINFOHEADER infoHeader = {};
infoHeader.biSize = 40;
infoHeader.biWidth = (LONG)image.m_width;
infoHeader.biHeight = (LONG)image.m_height;
infoHeader.biPlanes = 1;
infoHeader.biBitCount = 24;
infoHeader.biSizeImage = (DWORD)image.m_pixels.size();

header.bfSize = infoHeader.biSizeImage + header.bfOffBits;

// write the data and close the file
fwrite(&header, sizeof(header), 1, file);
fwrite(&infoHeader, sizeof(infoHeader), 1, file);
fwrite(&image.m_pixels[0], infoHeader.biSizeImage, 1, file);
fclose(file);

return true;
}

//======================================================================================
bool ImageLoad (const char *fileName, SImageData& imageData)
{
// open the file if we can
FILE *file;
file = fopen(fileName, "rb");
if (!file)
return false;

// read in the headers if we can, and make sure it's a 24 bit uncompressed bmp
BITMAPFILEHEADER header;
BITMAPINFOHEADER infoHeader;
if (fread(&header, sizeof(header), 1, file) != 1 ||
    fread(&infoHeader, sizeof(infoHeader), 1, file) != 1 ||
    header.bfType != 0x4D42 || infoHeader.biBitCount != 24)
{
fclose(file);
return false;
}

// read in our pixel data if we can. Note that it's in BGR order, and width is padded to the next multiple of 4
imageData.m_width = infoHeader.biWidth;
imageData.m_height = infoHeader.biHeight;
imageData.m_pitch = 4 * ((imageData.m_width * 24 + 31) / 32);
imageData.m_pixels.resize(imageData.m_pitch * imageData.m_height);
fseek(file, header.bfOffBits, SEEK_SET);
if (fread(&imageData.m_pixels[0], imageData.m_pixels.size(), 1, file) != 1)
{
fclose(file);
return false;
}

fclose(file);
return true;
}

//======================================================================================
void ImageInit (SImageData& image, size_t width, size_t height)
{
image.m_width = width;
image.m_height = height;
image.m_pitch = 4 * ((width * 24 + 31) / 32);
image.m_pixels.resize(image.m_pitch * image.m_height);
std::fill(image.m_pixels.begin(), image.m_pixels.end(), 0);
}

//======================================================================================
void ImageFloatInit (SImageDataFloat& image, size_t width, size_t height)
{
image.m_width = width;
image.m_height = height;
image.m_pixels.resize(image.m_width * image.m_height);
std::fill(image.m_pixels.begin(), image.m_pixels.end(), 0.0f);
}

//======================================================================================
int PixelsNeededForSigma (float sigma)
{
// returns the number of pixels needed to represent a gaussian kernel that has values
// down to the threshold amount.  A gaussian function technically has nonzero values
// everywhere, but the threshold lets us cut it off where the remaining pixels would
// contribute only small amounts that aren't noticeable.
return int(floor(1.0f + 2.0f * sqrtf(-2.0f * sigma * sigma * log(c_blurThresholdPercent)))) + 1;
}

//======================================================================================
float Gaussian (float sigma, float x)
{
return expf(-(x*x) / (2.0f * sigma*sigma));
}

//======================================================================================
float GaussianSimpsonIntegration (float sigma, float a, float b)
{
return
((b - a) / 6.0f) *
(Gaussian(sigma, a) + 4.0f * Gaussian(sigma, (a + b) / 2.0f) + Gaussian(sigma, b));
}

//======================================================================================
std::vector<float> GaussianKernelIntegrals (float sigma, int taps)
{
std::vector<float> ret;
float total = 0.0f;
for (int i = 0; i < taps; ++i)
{
float x = float(i) - float(taps / 2);
float value = GaussianSimpsonIntegration(sigma, x - 0.5f, x + 0.5f);
ret.push_back(value);
total += value;
}
// normalize it
for (unsigned int i = 0; i < ret.size(); ++i)
{
ret[i] /= total;
}
return ret;
}

//======================================================================================
const float* GetPixelWrapAround (const SImageDataFloat& image, int x, int y)
{
if (x >= (int)image.m_width)
{
x = x % (int)image.m_width;
}
else
{
while (x < 0)
x += (int)image.m_width;
}

if (y >= (int)image.m_height)
{
y = y % (int)image.m_height;
}
else
{
while (y < 0)
y += (int)image.m_height;
}

return &image.m_pixels[(y * image.m_width) + x];
}

//======================================================================================
void ImageGaussianBlur (const SImageDataFloat& srcImage, SImageDataFloat &destImage, float xblursigma, float yblursigma, unsigned int xblursize, unsigned int yblursize)
{
// allocate space for copying the image for destImage and tmpImage
ImageFloatInit(destImage, srcImage.m_width, srcImage.m_height);

SImageDataFloat tmpImage;
ImageFloatInit(tmpImage, srcImage.m_width, srcImage.m_height);

// horizontal blur from srcImage into tmpImage
{
auto row = GaussianKernelIntegrals(xblursigma, xblursize);

int startOffset = -1 * int(row.size() / 2);

for (int y = 0; y < tmpImage.m_height; ++y)
{
for (int x = 0; x < tmpImage.m_width; ++x)
{
float blurredPixel = 0.0f;
for (unsigned int i = 0; i < row.size(); ++i)
{
const float *pixel = GetPixelWrapAround(srcImage, x + startOffset + i, y);
blurredPixel += pixel[0] * row[i];
}

float *destPixel = &tmpImage.m_pixels[y * tmpImage.m_width + x];
destPixel[0] = blurredPixel;
}
}
}

// vertical blur from tmpImage into destImage
{
auto row = GaussianKernelIntegrals(yblursigma, yblursize);

int startOffset = -1 * int(row.size() / 2);

for (int y = 0; y < destImage.m_height; ++y)
{
for (int x = 0; x < destImage.m_width; ++x)
{
float blurredPixel = 0.0f;
for (unsigned int i = 0; i < row.size(); ++i)
{
const float *pixel = GetPixelWrapAround(tmpImage, x, y + startOffset + i);
blurredPixel += pixel[0] * row[i];
}

float *destPixel = &destImage.m_pixels[y * destImage.m_width + x];
destPixel[0] = blurredPixel;
}
}
}
}

//======================================================================================
void SaveImageFloatAsBMP (const SImageDataFloat& imageFloat, const char* fileName)
{
printf("\n%s\n", fileName);

// init the image
SImageData image;
ImageInit(image, imageFloat.m_width, imageFloat.m_height);

// write the data to the image
const float* srcData = &imageFloat.m_pixels[0];
for (size_t y = 0; y < image.m_height; ++y)
{
SColor* pixel = (SColor*)&image.m_pixels[y*image.m_pitch];

for (size_t x = 0; x < image.m_width; ++x)
{
uint8 value = uint8(255.0f * srcData[0]);

pixel->Set(value, value, value);

++pixel;
++srcData;
}
}

// save the image
ImageSave(image, fileName);

// also save a DFT of the image
if (c_doDFT)
{
SImageDataComplex dftData;
ImageDFT(image, dftData);

SImageData DFTMagImage;
GetMagnitudeData(dftData, DFTMagImage);

char buffer[256];
sprintf(buffer, "%s.mag.bmp", fileName);

ImageSave(DFTMagImage, buffer);
}
}

//======================================================================================
void NormalizeHistogram (SImageDataFloat& image)
{
struct SHistogramHelper
{
float value;
size_t pixelIndex;
};
static std::vector<SHistogramHelper> pixels;
pixels.resize(image.m_width * image.m_height);

// put all the pixels into the array
for (size_t i = 0, c = image.m_width * image.m_height; i < c; ++i)
{
pixels[i].value = image.m_pixels[i];
pixels[i].pixelIndex = i;
}

// shuffle the pixels to randomly order ties. not as big a deal with floating point pixel values though
static std::random_device rd;
static std::mt19937 rng(rd());
std::shuffle(pixels.begin(), pixels.end(), rng);

// sort the pixels by value
std::sort(
pixels.begin(),
pixels.end(),
[] (const SHistogramHelper& a, const SHistogramHelper& b)
{
return a.value < b.value;
}
);

// use the pixel's place in the array as the new value, and write it back to the image
for (size_t i = 0, c = image.m_width * image.m_height; i < c; ++i)
{
float value = float(i) / float(c - 1);
image.m_pixels[pixels[i].pixelIndex] = value;
}
}

//======================================================================================
void BlueNoiseTest (float blurSigma)
{
// calculate the blur size from our sigma
int blurSize = PixelsNeededForSigma(blurSigma) | 1;

// setup the random number generator
std::random_device rd;
std::mt19937 rng(rd());
std::uniform_real_distribution<float> dist(0.0f, 1.0f);

// generate some white noise
SImageDataFloat noise;
ImageFloatInit(noise, c_imageSize, c_imageSize);
for (float& v : noise.m_pixels)
{
v = dist(rng);
}

// save off the starting white noise
const char* baseFileName = "bluenoise_%i_%zu.bmp";
char fileName[256];

sprintf(fileName, baseFileName, int(blurSigma * 100.0f), size_t(0));
SaveImageFloatAsBMP(noise, fileName);

// iteratively high pass filter and rescale histogram to the 0 to 1 range
SImageDataFloat blurredImage;
for (size_t blurIndex = 0; blurIndex < c_numBlurs; ++blurIndex)
{
// get a low passed version of the current image
ImageGaussianBlur(noise, blurredImage, blurSigma, blurSigma, blurSize, blurSize);

// subtract the low passed version to get the high passed version
for (size_t pixelIndex = 0; pixelIndex < c_imageSize * c_imageSize; ++pixelIndex)
noise.m_pixels[pixelIndex] -= blurredImage.m_pixels[pixelIndex];

// put all pixels between 0.0 and 1.0 again
NormalizeHistogram(noise);

// save this image
sprintf(fileName, baseFileName, int(blurSigma * 100.0f), blurIndex + 1);
SaveImageFloatAsBMP(noise, fileName);
}
}

//======================================================================================
void RedNoiseTest (float blurSigma)
{
// calculate the blur size from our sigma
int blurSize = PixelsNeededForSigma(blurSigma) | 1;

// setup the random number generator
std::random_device rd;
std::mt19937 rng(rd());
std::uniform_real_distribution<float> dist(0.0f, 1.0f);

// generate some white noise
SImageDataFloat noise;
ImageFloatInit(noise, c_imageSize, c_imageSize);
for (float& v : noise.m_pixels)
{
v = dist(rng);
}

// save off the starting white noise
const char* baseFileName = "rednoise_%i_%zu.bmp";
char fileName[256];

sprintf(fileName, baseFileName, int(blurSigma * 100.0f), size_t(0));
SaveImageFloatAsBMP(noise, fileName);

// iteratively high pass filter and rescale histogram to the 0 to 1 range
SImageDataFloat blurredImage;
for (size_t blurIndex = 0; blurIndex < c_numBlurs; ++blurIndex)
{
// get a low passed version of the current image
ImageGaussianBlur(noise, blurredImage, blurSigma, blurSigma, blurSize, blurSize);

// set noise image to the low passed version
noise.m_pixels = blurredImage.m_pixels;

// put all pixels between 0.0 and 1.0 again
NormalizeHistogram(noise);

// save this image
sprintf(fileName, baseFileName, int(blurSigma * 100.0f), blurIndex + 1);
SaveImageFloatAsBMP(noise, fileName);
}
}

//======================================================================================
void BandPassTest (float blurSigma1, float blurSigma2)
{
// calculate the blur size from our sigma
int blurSize1 = PixelsNeededForSigma(blurSigma1) | 1;
int blurSize2 = PixelsNeededForSigma(blurSigma2) | 1;

// setup the random number generator
std::random_device rd;
std::mt19937 rng(rd());
std::uniform_real_distribution<float> dist(0.0f, 1.0f);

// generate some white noise
SImageDataFloat noise;
ImageFloatInit(noise, c_imageSize, c_imageSize);
for (float& v : noise.m_pixels)
{
v = dist(rng);
}

// save off the starting white noise
const char* baseFileName = "bandpass_%i_%i_%zu.bmp";
char fileName[256];

sprintf(fileName, baseFileName, int(blurSigma1 * 100.0f), int(blurSigma2 * 100.0f), size_t(0));
SaveImageFloatAsBMP(noise, fileName);

// iteratively high pass filter and rescale histogram to the 0 to 1 range
SImageDataFloat blurredImage1;
SImageDataFloat blurredImage2;
for (size_t blurIndex = 0; blurIndex < c_numBlurs; ++blurIndex)
{
// get two low passed versions of the current image
ImageGaussianBlur(noise, blurredImage1, blurSigma1, blurSigma1, blurSize1, blurSize1);
ImageGaussianBlur(noise, blurredImage2, blurSigma2, blurSigma2, blurSize2, blurSize2);

// subtract one low passed version from the other
for (size_t pixelIndex = 0; pixelIndex < c_imageSize * c_imageSize; ++pixelIndex)
noise.m_pixels[pixelIndex] = blurredImage1.m_pixels[pixelIndex] - blurredImage2.m_pixels[pixelIndex];

// put all pixels between 0.0 and 1.0 again
NormalizeHistogram(noise);

// save this image
sprintf(fileName, baseFileName, int(blurSigma1 * 100.0f), int(blurSigma2 * 100.0f), blurIndex + 1);
SaveImageFloatAsBMP(noise, fileName);
}
}

//======================================================================================
void BandStopTest (float blurSigma1, float blurSigma2)
{
// calculate the blur size from our sigma
int blurSize1 = PixelsNeededForSigma(blurSigma1) | 1;
int blurSize2 = PixelsNeededForSigma(blurSigma2) | 1;

// setup the random number generator
std::random_device rd;
std::mt19937 rng(rd());
std::uniform_real_distribution<float> dist(0.0f, 1.0f);

// generate some white noise
SImageDataFloat noise;
ImageFloatInit(noise, c_imageSize, c_imageSize);
for (float& v : noise.m_pixels)
{
v = dist(rng);
}

// save off the starting white noise
const char* baseFileName = "bandstop_%i_%i_%zu.bmp";
char fileName[256];

sprintf(fileName, baseFileName, int(blurSigma1 * 100.0f), int(blurSigma2 * 100.0f), size_t(0));
SaveImageFloatAsBMP(noise, fileName);

// iteratively high pass filter and rescale histogram to the 0 to 1 range
SImageDataFloat blurredImage1;
SImageDataFloat blurredImage2;
for (size_t blurIndex = 0; blurIndex < c_numBlurs; ++blurIndex)
{
// get two low passed versions of the current image
ImageGaussianBlur(noise, blurredImage1, blurSigma1, blurSigma1, blurSize1, blurSize1);
ImageGaussianBlur(noise, blurredImage2, blurSigma2, blurSigma2, blurSize2, blurSize2);

// subtract one low passed version from the other to get the bandpass noise, and subtract that from the original noise to get the band stop noise
for (size_t pixelIndex = 0; pixelIndex < c_imageSize * c_imageSize; ++pixelIndex)
noise.m_pixels[pixelIndex] -= (blurredImage1.m_pixels[pixelIndex] - blurredImage2.m_pixels[pixelIndex]);

// put all pixels between 0.0 and 1.0 again
NormalizeHistogram(noise);

// save this image
sprintf(fileName, baseFileName, int(blurSigma1 * 100.0f), int(blurSigma2 * 100.0f), blurIndex + 1);
SaveImageFloatAsBMP(noise, fileName);
}
}

//======================================================================================
int main (int argc, char ** argv)
{
BlueNoiseTest(0.5f);
BlueNoiseTest(1.0f);
BlueNoiseTest(2.0f);

RedNoiseTest(0.5f);
RedNoiseTest(1.0f);
RedNoiseTest(2.0f);

BandPassTest(0.5f, 2.0f);

BandStopTest(0.5f, 2.0f);

return 0;
}


# Generating Blue Noise Sample Points With Mitchell’s Best Candidate Algorithm

Lately I’ve been eyeball deep in noise, ordered dithering and related topics, and have been learning some really interesting things.

As the information coalesces it’ll become apparent whether there is going to be a huge mega post coming, or if there will be several smaller ones.

In the meantime, I wanted to share this bite sized chunk of information.

Three sampling patterns that are most often used when sampling (say, when numerically integrating a lighting function for graphics/rendering purposes) are: regular samples, white noise samples, and blue noise samples.

## Regular Sampling

Regular sampling just means evenly spacing the samples. This sampling strategy can suffer from aliasing, but gives good coverage of the sample space.

Here are 256, 1024 and 4096 samples:

Here are those samples taken from a source image:

Here is the DFT (frequency amplitude) of those samples:

## White Noise Sampling

White noise sampling just chooses random numbers for where to place the samples.
The random numbers are uniformly distributed, meaning they are just plain vanilla random numbers where each number is equally likely to come up.

White noise sampling can make for noisy results, and suffers from the fact that white noise sample points can clump together and leave empty space. Clumped sample points give redundant information while empty space is information that you are lacking in the sample space. In general, noise is often desired over aliasing though, so white noise samples are generally preferred over regular sampling. Monte Carlo integration also requires random samples.

White noise is called white noise because it contains all frequencies approximately evenly, like how white light is made up of all frequencies of light.

Here are 256, 1024 and 4096 samples:

Here are those samples taken from a source image:

Here is the DFT (frequency amplitude) of those samples:

## Blue Noise Sampling

Lastly is blue noise sampling which is somewhere between regular sampling and white noise sampling. Blue noise sampling has randomly placed points like white noise does, but the randomly placed points are approximately evenly spaced, which is more like regular sampling.

Things like low discrepancy sequences, stratified sampling, and jittered regular sampling mimic blue noise, and are often a cheaper alternative when an approximation is acceptable. More info on low discrepancy sequences is available on my post here: When Random Numbers Are Too Random: Low Discrepancy Sequences

Blue noise is called blue noise because it contains higher amounts of higher frequencies and lower amounts of lower frequencies. This is the same as blue light, which contains higher frequency (bluer) light.

Here are 256, 1024 and 4096 samples:

Here are those samples taken from a source image:

Here is the DFT (frequency amplitude) of those samples:

## Comparison

Imagine you were a robot with 4096 light/color sensors. Which of the arrangements below do you think would give you the best information about the world around you with that number of sensors?

To me, the regular grid and the blue noise are a bit of a coin toss, while the white noise version is awful.

The regular grid does seem to show me straight lined things better (the road, sidewalk, etc), but it makes non straight lined things – like the trees – look blocky. The blue noise pattern does the reverse: it makes straight things look wavy, while making it easier to see the true shape of non straight things.

Mathematically, blue noise is superior sampling, so maybe this example isn’t the best way to show the value of blue noise.

Here is the real image:

Apparently the photo-receptors in our eyes are arranged in a blue noise pattern. Some people say this is why blue noise is more agreeable with our perception, but since blue noise also helps numerical integration converge faster at lower sample counts (compared to white noise – which wins out at larger sample counts, by the way!), it seems like there is a more fundamental reason which would cause an evolutionary advantage for them to be arranged that way in the first place.

## Generating Blue Noise Sample Points

The obvious question is: I know how to make uniform and random sample points. How do I make blue noise sample points?

There are multiple ways to do it, but a method that I find very easy to understand and to implement is “Mitchell’s Best Candidate Algorithm”.

The algorithm is as follows:

1. Place a random sample point as the first sample point.
2. Generate some number of random dots as candidates to be the next sample point.
3. Whichever of these dots is farthest away from the closest existing sample point is the winner. Place that dot as the new sample point.
4. GOTO 2 and repeat until you have as many sample points as you want.

The algorithm is pretty simple, but there are two other important details that are needed to give you good results:

• When calculating distance between dots, you need to consider wrap around. More info on how to do that here: Calculating the Distance Between Points in “Wrap Around” (Toroidal) Space.
• The number of candidates you generate should scale up with the number of existing sample points. As the original paper describing this technique says, doing that helps ensure that the statistics of your sample points stay constant as the number of sample points changes.

When I first tried to get this algorithm working, I was generating a fixed number of candidates each time. That gave me these pretty terrible results:

However, when I scaled the number of candidates with the number of existing sample points – generating NumSamples*m+1 candidates each time, for some constant “m” – I got much better results, even when m was 1! (Note: m=0 is the same as white noise in this image.)
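Putting the algorithm and both of those details together, a compact sketch might look like the code below. The names `MitchellBestCandidate` and `ToroidalDistanceSq`, and the choice to work in the unit square instead of pixel coordinates, are just for this illustration – the full source at the bottom of the post is what actually generated the images.

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <random>
#include <vector>

struct Point { float x, y; };

// toroidal squared distance on the unit square [0,1) x [0,1)
float ToroidalDistanceSq (const Point& a, const Point& b)
{
    float dx = std::abs(a.x - b.x);
    float dy = std::abs(a.y - b.y);
    if (dx > 0.5f) dx = 1.0f - dx;  // wrap around horizontally
    if (dy > 0.5f) dy = 1.0f - dy;  // wrap around vertically
    return dx * dx + dy * dy;
}

// Mitchell's best candidate: each new sample tries (numExisting * m + 1)
// candidates and keeps the one farthest from its closest existing sample.
std::vector<Point> MitchellBestCandidate (size_t numSamples, size_t m, std::mt19937& rng)
{
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);
    std::vector<Point> samples;
    while (samples.size() < numSamples)
    {
        size_t numCandidates = samples.size() * m + 1;
        Point bestCandidate = {};
        float bestDistance = -1.0f;
        for (size_t i = 0; i < numCandidates; ++i)
        {
            Point candidate = { dist(rng), dist(rng) };

            // squared distance from this candidate to its closest existing sample
            float minDist = std::numeric_limits<float>::max();
            for (const Point& s : samples)
                minDist = std::min(minDist, ToroidalDistanceSq(candidate, s));

            // the candidate farthest from its closest neighbor wins
            if (minDist > bestDistance)
            {
                bestDistance = minDist;
                bestCandidate = candidate;
            }
        }
        samples.push_back(bestCandidate);
    }
    return samples;
}
```

Note that it compares squared distances only, so no square roots are ever taken.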

Related Computer Graphics Stack Exchange Question: Mitchell’s Best Candidate Algorithm

If you store existing sample points in a grid, you can speed up the algorithm since it will be faster to find the closest point to a candidate. In the implementation on this post I didn’t do that.

You may be able to multithread this algorithm but I haven’t tried it. The idea would be that if you needed to make and test N candidates, you could split that work among M threads, so long as N was large enough to make it worthwhile. I didn’t do that in this post.

Lastly, instead of working with distance, you can work with SQUARED distance to avoid many unnecessary square root calculations. The example code here does that optimization.

The 1991 paper that described this technique:
Spectrally optimal sampling for distribution ray tracing

Another interesting link on this algorithm:
Mitchell’s Best-Candidate

This algorithm isn’t that great for making dense sample points, or for use in dithering / stippling. Look for a future blog post about those usage cases, but for now, this is a great resource:
Free Blue Noise Textures (and info / examples on blue noise texture usage)

A physics based approach to blue noise distributed samples:
Electrostatic Half Toning

A neat read on the “void and cluster” method for generating blue noise, and also a good read on what ordered dithering is all about:
The void and cluster method for dither array generation

## Source Code

Here is some simple standalone C++ source code which can generate blue noise sample points, and which also generates the images used in this post.

It’s also on github (along with the source image) at https://github.com/Atrix256/RandomCode/tree/master/Mitchell

#define _CRT_SECURE_NO_WARNINGS

#include <windows.h>  // for bitmap headers.  Sorry non windows people!
#include <stdint.h>
#include <stdio.h>
#include <float.h>
#include <cmath>
#include <vector>
#include <complex>
#include <atomic>
#include <thread>
#include <random>
#include <array>

typedef uint8_t uint8;
typedef int64_t int64;

const float c_pi = 3.14159265359f;

//======================================================================================
struct SImageData
{
SImageData ()
: m_width(0)
, m_height(0)
{ }

size_t m_width;
size_t m_height;
size_t m_pitch;
std::vector<uint8> m_pixels;
};

SImageData s_stippleImage;

//======================================================================================
struct SColor
{
SColor (uint8 _R = 0, uint8 _G = 0, uint8 _B = 0)
: R(_R), G(_G), B(_B)
{ }

inline void Set (uint8 _R, uint8 _G, uint8 _B)
{
R = _R;
G = _G;
B = _B;
}

uint8 B, G, R;
};

//======================================================================================
struct SImageDataComplex
{
SImageDataComplex ()
: m_width(0)
, m_height(0)
{ }

size_t m_width;
size_t m_height;
std::vector<std::complex<float>> m_pixels;
};

//======================================================================================
std::complex<float> DFTPixel (const SImageData &srcImage, size_t K, size_t L)
{
std::complex<float> ret(0.0f, 0.0f);

for (size_t x = 0; x < srcImage.m_width; ++x)
{
for (size_t y = 0; y < srcImage.m_height; ++y)
{
// Get the pixel value (assuming greyscale) and convert it to [0,1] space
const uint8 *src = &srcImage.m_pixels[(y * srcImage.m_pitch) + x * 3];
float grey = float(src[0]) / 255.0f;

// Add to the sum of the return value
float v = float(K * x) / float(srcImage.m_width);
v += float(L * y) / float(srcImage.m_height);
ret += std::complex<float>(grey, 0.0f) * std::polar<float>(1.0f, -2.0f * c_pi * v);
}
}

return ret;
}

//======================================================================================
void DFTImage (const SImageData &srcImage, SImageDataComplex &destImage)
{
// NOTE: this function assumes srcImage is greyscale, so works on only the red component of srcImage.
// ImageToGrey() will convert an image to greyscale.

// size the output dft data
destImage.m_width = srcImage.m_width;
destImage.m_height = srcImage.m_height;
destImage.m_pixels.resize(destImage.m_width*destImage.m_height);

// calculate 2d dft (brute force, not using fast fourier transform) multithreadedly
std::atomic<size_t> nextRow(0);
size_t numThreads = std::thread::hardware_concurrency();
std::vector<std::thread> threads;
threads.resize(numThreads);
for (std::thread& t : threads)
{
t = std::thread(
[&] ()
{
size_t row = nextRow.fetch_add(1);
bool reportProgress = (row == 0);
int lastPercent = -1;

while (row < srcImage.m_height)
{
// calculate the DFT for every pixel / frequency in this row
for (size_t x = 0; x < srcImage.m_width; ++x)
{
destImage.m_pixels[row * destImage.m_width + x] = DFTPixel(srcImage, x, row);
}

// report progress if we should
if (reportProgress)
{
int percent = int(100.0f * float(row) / float(srcImage.m_height));
if (lastPercent != percent)
{
lastPercent = percent;
printf("            \rDFT: %i%%", lastPercent);
}
}

// go to the next row
row = nextRow.fetch_add(1);
}
}
);
}

for (std::thread& t : threads)
t.join();

printf("\n");
}

//======================================================================================
void GetMagnitudeData (const SImageDataComplex& srcImage, SImageData& destImage)
{
// size the output image
destImage.m_width = srcImage.m_width;
destImage.m_height = srcImage.m_height;
destImage.m_pitch = 4 * ((srcImage.m_width * 24 + 31) / 32);
destImage.m_pixels.resize(destImage.m_pitch*destImage.m_height);

// get floating point magnitude data
std::vector<float> magArray;
magArray.resize(srcImage.m_width*srcImage.m_height);
float maxmag = 0.0f;
for (size_t x = 0; x < srcImage.m_width; ++x)
{
for (size_t y = 0; y < srcImage.m_height; ++y)
{
// Offset the information by half width & height in the positive direction.
// This makes frequency 0 (DC) be at the image origin, like most diagrams show it.
int k = (x + (int)srcImage.m_width / 2) % (int)srcImage.m_width;
int l = (y + (int)srcImage.m_height / 2) % (int)srcImage.m_height;
const std::complex<float> &src = srcImage.m_pixels[l*srcImage.m_width + k];

float mag = std::abs(src);
if (mag > maxmag)
maxmag = mag;

magArray[y*srcImage.m_width + x] = mag;
}
}
if (maxmag == 0.0f)
maxmag = 1.0f;

const float c = 255.0f / log(1.0f+maxmag);

// normalize the magnitude data and send it back in [0, 255]
for (size_t x = 0; x < srcImage.m_width; ++x)
{
for (size_t y = 0; y < srcImage.m_height; ++y)
{
float src = c * log(1.0f + magArray[y*srcImage.m_width + x]);

uint8 magu8 = uint8(src);

uint8* dest = &destImage.m_pixels[y*destImage.m_pitch + x * 3];
dest[0] = magu8;
dest[1] = magu8;
dest[2] = magu8;
}
}
}

//======================================================================================
void GetPhaseData (const SImageDataComplex& srcImage, SImageData& destImage)
{
// size the output image
destImage.m_width = srcImage.m_width;
destImage.m_height = srcImage.m_height;
destImage.m_pitch = 4 * ((srcImage.m_width * 24 + 31) / 32);
destImage.m_pixels.resize(destImage.m_pitch*destImage.m_height);

// get floating point phase data, and encode it in [0,255]
for (size_t x = 0; x < srcImage.m_width; ++x)
{
for (size_t y = 0; y < srcImage.m_height; ++y)
{
// Offset the information by half width & height in the positive direction.
// This makes frequency 0 (DC) be at the image origin, like most diagrams show it.
int k = (x + (int)srcImage.m_width / 2) % (int)srcImage.m_width;
int l = (y + (int)srcImage.m_height / 2) % (int)srcImage.m_height;
const std::complex<float> &src = srcImage.m_pixels[l*srcImage.m_width + k];

// get phase, and change it from [-pi,+pi] to [0,255]
float phase = (0.5f + 0.5f * std::atan2(src.imag(), src.real()) / c_pi);
if (phase < 0.0f)
phase = 0.0f;
if (phase > 1.0f)
phase = 1.0f;
uint8 phase255 = uint8(phase * 255);

// write the phase as grey scale color
uint8* dest = &destImage.m_pixels[y*destImage.m_pitch + x * 3];
dest[0] = phase255;
dest[1] = phase255;
dest[2] = phase255;
}
}
}

//======================================================================================
bool ImageSave (const SImageData &image, const char *fileName)
{
// open the file if we can
FILE *file;
file = fopen(fileName, "wb");
if (!file) {
printf("Could not save %s\n", fileName);
return false;
}

// make the header info
BITMAPFILEHEADER header;
BITMAPINFOHEADER infoHeader;

header.bfType = 0x4D42;
header.bfReserved1 = 0;
header.bfReserved2 = 0;
header.bfOffBits = 54;

infoHeader.biSize = 40;
infoHeader.biWidth = (LONG)image.m_width;
infoHeader.biHeight = (LONG)image.m_height;
infoHeader.biPlanes = 1;
infoHeader.biBitCount = 24;
infoHeader.biCompression = 0;
infoHeader.biSizeImage = (DWORD)image.m_pixels.size();
infoHeader.biXPelsPerMeter = 0;
infoHeader.biYPelsPerMeter = 0;
infoHeader.biClrUsed = 0;
infoHeader.biClrImportant = 0;

header.bfSize = infoHeader.biSizeImage + header.bfOffBits;

// write the data and close the file
fwrite(&header, sizeof(header), 1, file);
fwrite(&infoHeader, sizeof(infoHeader), 1, file);
fwrite(&image.m_pixels[0], infoHeader.biSizeImage, 1, file);
fclose(file);

return true;
}

//======================================================================================
bool ImageLoad (const char *fileName, SImageData& imageData)
{
// open the file if we can
FILE *file;
file = fopen(fileName, "rb");
if (!file)
return false;

// read the headers if we can
BITMAPFILEHEADER header;
BITMAPINFOHEADER infoHeader;
if (fread(&header, sizeof(header), 1, file) != 1 ||
fread(&infoHeader, sizeof(infoHeader), 1, file) != 1 ||
header.bfType != 0x4D42 || infoHeader.biBitCount != 24)
{
fclose(file);
return false;
}

// size the image data. Note that pixels are in BGR order, and each row is padded to the next multiple of 4 bytes
imageData.m_width = infoHeader.biWidth;
imageData.m_height = infoHeader.biHeight;
imageData.m_pitch = 4 * ((imageData.m_width * 24 + 31) / 32);
imageData.m_pixels.resize(imageData.m_pitch * imageData.m_height);

// read in our pixel data if we can
if (fread(&imageData.m_pixels[0], imageData.m_pixels.size(), 1, file) != 1)
{
fclose(file);
return false;
}

fclose(file);
return true;
}

//======================================================================================
void ImageInit (SImageData& image, size_t width, size_t height)
{
image.m_width = width;
image.m_height = height;
image.m_pitch = 4 * ((width * 24 + 31) / 32);
image.m_pixels.resize(image.m_pitch * image.m_height);
std::fill(image.m_pixels.begin(), image.m_pixels.end(), 0);
}

//======================================================================================
void SampleTest (const SImageData& image, const SImageData& samples, const char* fileName)
{
SImageData outImage;
ImageInit(outImage, image.m_width, image.m_height);

for (size_t y = 0; y < image.m_height; ++y)
{
size_t sampleY = y % samples.m_height;
for (size_t x = 0; x < image.m_width; ++x)
{
size_t sampleX = x % samples.m_width;

const SColor* samplePixel = (SColor*)&samples.m_pixels[sampleY*samples.m_pitch + sampleX * 3];
const SColor* imagePixel = (SColor*)&image.m_pixels[y*image.m_pitch + x * 3];

SColor* outPixel = (SColor*)&outImage.m_pixels[y*outImage.m_pitch + x * 3];

if (samplePixel->R == 255)
*outPixel = *imagePixel;
}
}

ImageSave(outImage, fileName);
}

//======================================================================================
inline float Distance (size_t x1, size_t y1, size_t x2, size_t y2, int imageWidth)
{
// this returns the squared toroidal distance between the points
// aka the interval [0, width) wraps around
float dx = std::abs(float(x2) - float(x1));
float dy = std::abs(float(y2) - float(y1));

if (dx > float(imageWidth / 2))
dx = float(imageWidth) - dx;

if (dy > float(imageWidth / 2))
dy = float(imageWidth) - dy;

// return squared distance instead of distance, to avoid the square root. comparisons give the same ordering either way.
return dx*dx + dy*dy;
}

//======================================================================================
int main (int argc, char** argv)
{
const size_t c_imageSize = 256;
const bool c_doDFT = true;

const size_t c_blueNoiseSampleMultiplier = 1;

const size_t samples1 = 256;   // 16x16
const size_t samples2 = 1024;  // 32x32
const size_t samples3 = 4096;  // 64x64

SImageData image;

// init random number generator
std::random_device rd;
std::mt19937 rng(rd());
std::uniform_int_distribution<int> dist(0, c_imageSize - 1);

// white noise
{
SImageData samples;
ImageInit(samples, c_imageSize, c_imageSize);

for (size_t i = 1; i <= samples3; ++i)
{
size_t x = dist(rng);
size_t y = dist(rng);

SColor* pixel = (SColor*)&samples.m_pixels[y*samples.m_pitch + x * 3];
pixel->R = pixel->G = pixel->B = 255;

if (i == samples1 || i == samples2 || i == samples3)
{
printf("White Noise %zu samples\n", i);

char fileName[256];
sprintf(fileName, "WhiteNoise_%zu.bmp", i);
ImageSave(samples, fileName);

sprintf(fileName, "WhiteNoise_%zu_samples.bmp", i);
SampleTest(image, samples, fileName);

if (c_doDFT)
{
SImageDataComplex frequencyData;
DFTImage(samples, frequencyData);

SImageData magnitudeData;
GetMagnitudeData(frequencyData, magnitudeData);

sprintf(fileName, "WhiteNoise_%zu_mag.bmp", i);
ImageSave(magnitudeData, fileName);
}
}
}
}

// regular samples
{

auto GridTest = [&] (size_t sampleCount) {
SImageData samples;
ImageInit(samples, c_imageSize, c_imageSize);

size_t side = size_t(std::sqrt(float(sampleCount)));

size_t pixels = c_imageSize / side;

for (size_t y = 0; y < side; ++y)
{
size_t pixelY = y * pixels;
for (size_t x = 0; x < side; ++x)
{
size_t pixelX = x * pixels;

SColor* pixel = (SColor*)&samples.m_pixels[pixelY*samples.m_pitch + pixelX * 3];
pixel->R = pixel->G = pixel->B = 255;
}
}

printf("Regular %zu samples\n", sampleCount);

char fileName[256];
sprintf(fileName, "Regular_%zu.bmp", sampleCount);
ImageSave(samples, fileName);

sprintf(fileName, "Regular_%zu_samples.bmp", sampleCount);
SampleTest(image, samples, fileName);

if (c_doDFT)
{
SImageDataComplex frequencyData;
DFTImage(samples, frequencyData);

SImageData magnitudeData;
GetMagnitudeData(frequencyData, magnitudeData);

sprintf(fileName, "Regular_%zu_mag.bmp", sampleCount);
ImageSave(magnitudeData, fileName);
}
};

GridTest(samples1);
GridTest(samples2);
GridTest(samples3);
}

// blue noise
{
SImageData samples;
ImageInit(samples, c_imageSize, c_imageSize);

std::vector<std::array<size_t, 2>> samplesPos;

size_t percent = (size_t)-1;

for (size_t i = 1; i <= samples3; ++i)
{
size_t newPercent;
if (i <= samples1)
newPercent = size_t(100.0f * float(i) / float(samples1));
else if (i <= samples2)
newPercent = size_t(100.0f * float(i - samples1) / float(samples2 - samples1));
else
newPercent = size_t(100.0f * float(i - samples2) / float(samples3 - samples2));
if (percent != newPercent)
{
percent = newPercent;
printf("\rGenerating Blue Noise Samples: %zu%%", percent);
}

// keep the candidate that is farthest from its closest point
size_t numCandidates = samplesPos.size() * c_blueNoiseSampleMultiplier + 1;
float bestDistance = 0.0f;
size_t bestCandidateX = 0;
size_t bestCandidateY = 0;
for (size_t candidate = 0; candidate < numCandidates; ++candidate)
{
size_t x = dist(rng);
size_t y = dist(rng);

// calculate the closest distance from this point to an existing sample
float minDist = FLT_MAX;
for (const std::array<size_t, 2>& samplePos : samplesPos)
{
float dist = Distance(x, y, samplePos[0], samplePos[1], c_imageSize);
if (dist < minDist)
minDist = dist;
}

if (minDist > bestDistance)
{
bestDistance = minDist;
bestCandidateX = x;
bestCandidateY = y;
}
}
samplesPos.push_back({ bestCandidateX, bestCandidateY });

SColor* pixel = (SColor*)&samples.m_pixels[bestCandidateY*samples.m_pitch + bestCandidateX * 3];
pixel->R = pixel->G = pixel->B = 255;

if (i == samples1 || i == samples2 || i == samples3)
{
printf("\nBlue Noise %zu samples\n", i);

char fileName[256];
sprintf(fileName, "BlueNoise_%zu.bmp", i);
ImageSave(samples, fileName);

sprintf(fileName, "BlueNoise_%zu_samples.bmp", i);
SampleTest(image, samples, fileName);

if (c_doDFT)
{
SImageDataComplex frequencyData;
DFTImage(samples, frequencyData);

SImageData magnitudeData;
GetMagnitudeData(frequencyData, magnitudeData);

sprintf(fileName, "BlueNoise_%zu_mag.bmp", i);
ImageSave(magnitudeData, fileName);
}
}
}
}

return 0;
}


# Generating Random Numbers From a Specific Distribution With Rejection Sampling

The last post showed how to transform uniformly generated random numbers into any random number distribution you desired.

It did so by turning the PDF (probability density function) into a CDF (cumulative distribution function) and then inverting it – either analytically (making a function) or numerically (making a look up table).

This post will show you how to generate numbers from a PDF as well, but will do so using rejection sampling.

# Dice

Let’s say you wanted to simulate a fair five sided die but that you only had a six sided die.

You can use rejection sampling for this by rolling a six sided die and ignoring the roll any time a six came up. Doing that, you do in fact get a fair five sided die roll!
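In code, that rejection loop is tiny. Here’s a minimal sketch (the `RollD5` name is just for this illustration):

```cpp
#include <random>

// simulate a fair five sided die using a six sided die:
// roll the d6, and reject (re-roll) any time a six comes up.
int RollD5 (std::mt19937& rng)
{
    std::uniform_int_distribution<int> d6(1, 6);
    int roll;
    do
    {
        roll = d6(rng);
    }
    while (roll == 6);  // a six is a rejected sample, so roll again
    return roll;
}
```

Each accepted value is uniform over 1 through 5, since all five remaining outcomes are equally likely on any given roll.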

This shows doing that to get 10,000 five sided die rolls:

One disadvantage to this method is that you are throwing away die rolls which can be a source of inefficiency. In this setup it takes 1.2 six sided die rolls on average to get a valid five sided die roll since a roll will be thrown away 1/6 of the time.

Another disadvantage is that each time you need a new value, there are an unknown number of die rolls needed to get it. On average it’s true that you only need 1.2 die rolls, but in reality, it’s possible you may roll 10 sixes in a row. Heck it’s even technically possible (but very unlikely) that you could be rolling dice until the end of time and keep getting sixes. (Using PRNG’s in computers, this won’t happen, but it does take a variable number of rolls).

This is just to say: there is uneven and unpredictable execution time of this algorithm, and it needs an unknown (but somewhat predictable) amount of random numbers to work. This is true of the other forms of sampling methods I talk about lower down as well.

Instead of using a six sided die you could use a die of any size that is greater than (or equal to…) five. Here shows a twenty sided die simulating a five sided die:

It looks basically the same as using a six sided die, which makes sense (that shows that it works), but in this case, it actually took 4 rolls on average to make a valid five sided die roll, since the roll fails 15/20 times (3 out of 4 rolls will fail).

Quick Asides:

• If straying from rejection sampling ideas for a minute, in the case of the twenty sided die, you could use modulus to get a fair five sided die roll each time: $((roll - 1) \% 5) + 1$. This works because there is no remainder for 20 % 5. If there was a remainder it would bias the rolls towards the numbers <= the remainder, making them more likely to come up than the other numbers.
• You could also get a four sided die roll at the same time if you didn’t want to waste any of this precious random information: $((roll - 1) / 5) + 1$
• Another algorithm to check out for discrete (integer) weighted random numbers is Vose’s method: Vose’s Method.
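Those first two asides can be sanity checked exhaustively, since a d20 has only 20 possible rolls. A quick sketch (the function names are just for this illustration):

```cpp
// get a fair d5 roll from a single d20 roll with no rejection,
// which works because 20 divides evenly by 5 (no remainder to bias things)
int D5FromD20 (int d20Roll)  // d20Roll in [1, 20]
{
    return ((d20Roll - 1) % 5) + 1;
}

// the same d20 roll also yields an independent fair d4 roll,
// so none of the precious random information is wasted
int D4FromD20 (int d20Roll)  // d20Roll in [1, 20]
{
    return ((d20Roll - 1) / 5) + 1;
}
```

Walking the roll through all 20 values shows each d5 result comes up exactly 4 times and each d4 result exactly 5 times.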

# Box Around PDF

Moving back into the world of continuous valued random numbers and PDF’s, a simple version of how rejection sampling can be used is like this:

1. Draw a box around the PDF
2. Generate a (uniform) random 2d point in that box
3. If the point is under the curve of the PDF, use the x axis value as your random number, else throw it out and go to 2

That’s all there is to it!

This works because the x axis value of your 2d point is the random number you might be choosing. The y axis value of your 2d point is a probability of choosing that point. Since the PDF graph is higher in places that are more probable, those places are more likely to accept your 2d point than places that have lower PDF values.

Furthermore, the ratio of rejected samples to accepted samples depends on the area under the PDF compared to the area of the box.

The average number of candidate points needed to get one accepted sample is the area of the box divided by the area under the PDF.

Since PDF’s by definition have to integrate to 1, that means you are dividing by 1. So, to simplify: the average number of candidates needed per accepted sample is the same as the area of the box!

If it’s hard to come up with the exact size of the box for the PDF, the box doesn’t have to fit exactly, but of course the tighter you can fit the box around the PDF, the fewer rejected samples you’ll have.

You don’t actually need to graph the PDF and draw a box to do this though. Just generate a 2d random number (a random x and a random y) and reject the point if $PDF(x) < y$.
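As a minimal sketch of that test, here is the $y=2x$ example coming up next, with a box from (0,0) to (1,2). The `SampleLinearPDF` name and the attempt counter are just for this illustration:

```cpp
#include <random>

// sample from pdf(x) = 2x on [0,1) by rejection:
// generate a uniform random (x, y) in the box [0,1) x [0,2),
// and accept x only when the point lands under the curve.
float SampleLinearPDF (std::mt19937& rng, int& attempts)
{
    std::uniform_real_distribution<float> distX(0.0f, 1.0f);
    std::uniform_real_distribution<float> distY(0.0f, 2.0f);
    while (true)
    {
        ++attempts;
        float x = distX(rng);
        float y = distY(rng);
        if (y < 2.0f * x)  // under the curve of the PDF? accept.
            return x;
    }
}
```

Since the box has area 2 and the PDF has area 1, about half the candidate points get rejected, so the attempt counter should average out near 2 per accepted sample.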

Here I'm using this technique with the PDF $y=2x$ where x is in [0,1) and I'm using a box that goes from (0,0) to (1,2) to get 100,000 samples.

As expected, it took on average 2 points to get a single valid point, since the area of the box is 2. Here is the number of failed tests each histogram bucket had. Unsurprisingly, lower values of the PDF have more failed tests!

Moving to a more complex PDF, let’s look at $y=\frac{x^3-10x^2+5x+11}{10.417}$

Here are 10 million samples (lots of samples to minimize the noise), using a box height of 1.2, which unsurprisingly takes 1.2 samples on average to get a valid sample:

Here is the graph of the failure counts:

Here the box has a height of 2.8. It still works, but uses 2.8 samples on average which is less efficient:

Here’s the graph of failure counts:

Something interesting about this technique is that technically, the distribution you are sampling from doesn’t even have to be a PDF! If you have negative parts of the graph, they will be treated as zero, assuming your box has a minimum y of 0. Also, the fact that your function may not integrate to (have an area of) 1 doesn’t matter at all.

Here we take the PDF from the last examples, and take off the division by a constant, so that it doesn’t integrate to 1: $y=x^3-10x^2+5x+11$

The interesting thing is that we get as output a normalized PDF (the red line), even though the distribution we were using to sample was not normalized (the blue line, which is mostly hidden behind the yellow line).

Here are the rejection counts:

## Generating One PDF from Another PDF

In the last section we showed how to enclose a PDF in a box, make uniformly random 2d points, and use them to generate points from the PDF.

By enclosing it in a box, all we were really doing was putting it under a uniform distribution that was scaled up to be larger than the PDF at all points.

Now here’s the interesting thing: We aren’t limited to using the uniform distribution!

To generalize this technique, if you are trying to sample from a PDF $f(x)$, you can use any PDF $g(x)$ to do so, so long as you multiply $g(x)$ by a scalar value $M$ so that $M \cdot g(x) \ge f(x)$ for all values of x. In other words: scale up g so that it’s always bigger than f.

Using this more generalized technique has one or two more steps than the other way, but allows for a tighter fit of a generating function, resulting in fewer samples thrown away.

Here’s how to do it:

1. Generate a random number from the distribution g, and call it x.
2. Calculate the percentage chance of x being chosen by getting a ratio of how likely that number is to be chosen in each PDF: $\frac{f(x)}{M*g(x)}$
3. Generate a uniform random number from 0 to 1. If it’s less than the value you just calculated, accept x as the random number, else reject it and go back to 1.
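The three steps above can be sketched as a generic helper (names are illustrative; the specific PDFs, inverse CDF, and M are passed in):

```cpp
#include <random>

// Sketch of the three steps above: sample from f by drawing from g.
// InverseCDF_G turns a uniform random number into a sample from g, and
// M must scale g above f everywhere.
template <typename TPDF_F, typename TPDF_G, typename TInverseCDF_G>
float SampleFUsingG(std::mt19937& rng, const TPDF_F& PDF_F, const TPDF_G& PDF_G,
                    const TInverseCDF_G& InverseCDF_G, float M)
{
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);
    while (true)
    {
        float x = InverseCDF_G(dist(rng));              // 1. draw x from g
        float acceptChance = PDF_F(x) / (M * PDF_G(x)); // 2. acceptance ratio
        if (dist(rng) < acceptChance)                   // 3. accept, or go back to 1
            return x;
    }
}
```

Note that using a uniform g with a big enough M reduces this to the box technique from the last section.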

Let’s see this in action!

We’ll generate numbers in a Gaussian distribution with a mean of 15 and a standard deviation of 5. We’ll truncate it to +/- 3 standard deviations, so we’ll be generating random numbers in [0,30).

To generate these numbers, we’ll draw random numbers from the PDF $y=x*0.002222$. We’ll use an $M$ value of 3 to scale up this PDF to always be greater than the Gaussian one.

Here is how it looks doing this with 20,000 samples:

We generate random numbers along the red line, multiply them by 3 to make them be the yellow line. Then, at whatever point we are at on the x axis, we divide the blue line value by the yellow line value and use that as an acceptance probability. Doing this and counting numbers in a histogram gives us our result – the green line. Since the end goal is the blue line, you can see it is indeed working! With a larger number of samples, the green line would more closely match the blue line.

Here’s the graph of the failed tests:

We have to take on average 3 samples before we get a valid random number. That shouldn’t be too surprising because both PDF’s start with area of 1, but we are multiplying one of them by 3 to make it always be larger than the other.

Something else interesting you might notice is that we have a lot fewer failed tests where the two PDF functions are more similar.

That is the power of this technique: If you can cheaply and easily generate samples that are “pretty close” to a harder distribution to sample from, you can use this technique to more cheaply sample from it.

Something to note is that just like in the last section, the target PDF doesn’t necessarily need to be a real PDF with only positive values and integrating to 1. It would work just the same with a non PDF function, just so long as the PDF generating the random numbers you start with is always above the function.

# Some Other Notes

There is a family of techniques called “adaptive rejection sampling” that will change the PDF they are drawing from whenever there is a failed test.

Basically, if you imagine the PDF you are drawing from as being a bunch of line segments connected together, you could imagine that whenever you failed a test, you moved a line segment down to be closer to the curve, so that when you sampled from that area again, the chances would be lower that you’d fail the test again.

Taking this to the limit, your sampling PDF will eventually become the PDF you are trying to sample from, and then using this PDF will be a no-op.

These techniques are a continued area of research.

Something else to note is that rejection sampling can be used to find random points within shapes.

For instance, a random point on a triangle, ellipse or circle could be done by putting a (tight) bounding box around the shape, generating points randomly in that box, and only accepting ones within the inner shape.
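For example, here’s a sketch of getting a uniformly random point inside the unit circle this way (function name is illustrative):

```cpp
#include <random>

// Sketch: generate a uniformly random point inside the unit circle by
// generating points in the bounding square [-1,1]x[-1,1] and rejecting
// any that fall outside the circle. About pi/4 (~78.5%) of the points
// are accepted, since that's the ratio of the circle area to the box area.
void RandomPointInUnitCircle(std::mt19937& rng, float& outX, float& outY)
{
    std::uniform_real_distribution<float> dist(-1.0f, 1.0f);
    do
    {
        outX = dist(rng);
        outY = dist(rng);
    } while (outX * outX + outY * outY > 1.0f);
}
```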

This can be extended to 3d shapes as well.

Some shapes have better ways to generate points within them that don’t involve iteration and rejected samples, but if all else fails, rejection sampling does indeed work!

At some point in the future I’d like to look into “Markov Chain Monte Carlo” more deeply. It seems like a very interesting technique to approach this same problem, but I have no idea if it’s used often in graphics, especially real time graphics.

# Code

Here is the code that generated all the data from this post. The data was visualized with Open Office.

#define _CRT_SECURE_NO_WARNINGS

#include <stdio.h>
#include <random>
#include <array>
#include <algorithm>
#include <cmath>

template <size_t NUM_TEST_SAMPLES, size_t SIMULATED_DICE_SIDES, size_t ACTUAL_DICE_SIDES>
void TestDice (const char* fileName)
{
// seed the random number generator
std::random_device rd;
std::mt19937 rng(rd());
std::uniform_int_distribution<size_t> dist(0, ACTUAL_DICE_SIDES-1);

// generate the histogram
std::array<size_t, SIMULATED_DICE_SIDES> histogram = { 0 };
size_t rejectedSamples = 0;
for (size_t i = 0; i < NUM_TEST_SAMPLES; ++i)
{
size_t roll = dist(rng);
while (roll >= SIMULATED_DICE_SIDES)
{
++rejectedSamples;
roll = dist(rng);
}
histogram[roll]++;
}

// write the histogram and rejected sample count to a csv
// an extra 0 data point forces the graph to include 0 in the scale. hack to make the data not look noisier than it really is.
FILE *file = fopen(fileName, "w+t");
fprintf(file, "Actual Count, Expected Count, , %0.2f samples needed per roll on average.\n", (float(NUM_TEST_SAMPLES) + float(rejectedSamples)) / float(NUM_TEST_SAMPLES));
for (size_t value : histogram)
fprintf(file, "%zu,%zu,0\n", value, (size_t)(float(NUM_TEST_SAMPLES) / float(SIMULATED_DICE_SIDES)));
fclose(file);
}

template <size_t NUM_TEST_SAMPLES, size_t NUM_HISTOGRAM_BUCKETS, typename PDF_LAMBDA>
void Test (const char* fileName, float maxPDFValue, const PDF_LAMBDA& PDF)
{
// seed the random number generator
std::random_device rd;
std::mt19937 rng(rd());
std::uniform_real_distribution<float> dist(0.0f, 1.0f);

// generate the histogram
std::array<size_t, NUM_HISTOGRAM_BUCKETS> histogram = { 0 };
std::array<size_t, NUM_HISTOGRAM_BUCKETS> failedTestCounts = { 0 };
size_t rejectedSamples = 0;
for (size_t i = 0; i < NUM_TEST_SAMPLES; ++i)
{
// Generate a sample from the PDF by generating a random 2d point.
// If the y axis of the value is <= the value returned by PDF(x), accept it, else reject it.
// NOTE: this takes an unknown number of iterations, and technically may NEVER finish.
float pointX = 0.0f;
float pointY = 0.0f;
bool validPoint = false;
while (!validPoint)
{
pointX = dist(rng);
pointY = dist(rng) * maxPDFValue;
float pdfValue = PDF(pointX);
validPoint = (pointY <= pdfValue);

// track number of failed tests per histogram bucket
if (!validPoint)
{
size_t bin = (size_t)std::floor(pointX * float(NUM_HISTOGRAM_BUCKETS));
failedTestCounts[std::min(bin, NUM_HISTOGRAM_BUCKETS - 1)]++;
++rejectedSamples;
}
}

// increment the correct bin in the histogram
size_t bin = (size_t)std::floor(pointX * float(NUM_HISTOGRAM_BUCKETS));
histogram[std::min(bin, NUM_HISTOGRAM_BUCKETS -1)]++;
}

// write the histogram and pdf sample to a csv
FILE *file = fopen(fileName, "w+t");
fprintf(file, "PDF, Simulated PDF, Generating Function, Failed Tests, %0.2f samples needed per value on average.\n", (float(NUM_TEST_SAMPLES) + float(rejectedSamples)) / float(NUM_TEST_SAMPLES));
for (size_t i = 0; i < NUM_HISTOGRAM_BUCKETS; ++i)
{
float x = (float(i) + 0.5f) / float(NUM_HISTOGRAM_BUCKETS);
float pdfSample = PDF(x);
fprintf(file, "%f,%f,%f,%f\n",
pdfSample,
NUM_HISTOGRAM_BUCKETS * float(histogram[i]) / float(NUM_TEST_SAMPLES),
maxPDFValue,
float(failedTestCounts[i])
);
}
fclose(file);
}

template <size_t NUM_TEST_SAMPLES, size_t NUM_HISTOGRAM_BUCKETS, typename PDF_LAMBDA>
void TestNotPDF (const char* fileName, float maxPDFValue, float normalizationConstant, const PDF_LAMBDA& PDF)
{
// seed the random number generator
std::random_device rd;
std::mt19937 rng(rd());
std::uniform_real_distribution<float> dist(0.0f, 1.0f);

// generate the histogram
std::array<size_t, NUM_HISTOGRAM_BUCKETS> histogram = { 0 };
std::array<size_t, NUM_HISTOGRAM_BUCKETS> failedTestCounts = { 0 };
size_t rejectedSamples = 0;
for (size_t i = 0; i < NUM_TEST_SAMPLES; ++i)
{
// Generate a sample from the PDF by generating a random 2d point.
// If the y axis of the value is <= the value returned by PDF(x), accept it, else reject it.
// NOTE: this takes an unknown number of iterations, and technically may NEVER finish.
float pointX = 0.0f;
float pointY = 0.0f;
bool validPoint = false;
while (!validPoint)
{
pointX = dist(rng);
pointY = dist(rng) * maxPDFValue;
float pdfValue = PDF(pointX);
validPoint = (pointY <= pdfValue);

// track number of failed tests per histogram bucket
if (!validPoint)
{
size_t bin = (size_t)std::floor(pointX * float(NUM_HISTOGRAM_BUCKETS));
failedTestCounts[std::min(bin, NUM_HISTOGRAM_BUCKETS - 1)]++;
++rejectedSamples;
}
}

// increment the correct bin in the histogram
size_t bin = (size_t)std::floor(pointX * float(NUM_HISTOGRAM_BUCKETS));
histogram[std::min(bin, NUM_HISTOGRAM_BUCKETS -1)]++;
}

// write the histogram and pdf sample to a csv
FILE *file = fopen(fileName, "w+t");
fprintf(file, "Function, Simulated PDF, Scaled Simulated PDF, Generating Function, Failed Tests, %0.2f samples needed per value on average.\n", (float(NUM_TEST_SAMPLES) + float(rejectedSamples)) / float(NUM_TEST_SAMPLES));
for (size_t i = 0; i < NUM_HISTOGRAM_BUCKETS; ++i)
{
float x = (float(i) + 0.5f) / float(NUM_HISTOGRAM_BUCKETS);
float pdfSample = PDF(x);
fprintf(file, "%f,%f,%f,%f,%f\n",
pdfSample,
NUM_HISTOGRAM_BUCKETS * float(histogram[i]) / float(NUM_TEST_SAMPLES),
NUM_HISTOGRAM_BUCKETS * float(histogram[i]) / float(NUM_TEST_SAMPLES) * normalizationConstant,
maxPDFValue,
float(failedTestCounts[i])
);
}
fclose(file);
}

template <size_t NUM_TEST_SAMPLES, size_t NUM_HISTOGRAM_BUCKETS, typename PDF_F_LAMBDA, typename PDF_G_LAMBDA, typename INVERSE_CDF_G_LAMBDA>
void TestPDFToPDF (const char* fileName, const PDF_F_LAMBDA& PDF_F, const PDF_G_LAMBDA& PDF_G, float M, const INVERSE_CDF_G_LAMBDA& Inverse_CDF_G, float rngRange)
{
// We generate a sample from PDF F by generating a sample from PDF G, and accepting it with probability PDF_F(x)/(M*PDF_G(x))

// seed the random number generator
std::random_device rd;
std::mt19937 rng(rd());
std::uniform_real_distribution<float> dist(0.0f, 1.0f);

// generate the histogram
std::array<size_t, NUM_HISTOGRAM_BUCKETS> histogram = { 0 };
std::array<size_t, NUM_HISTOGRAM_BUCKETS> failedTestCounts = { 0 };
size_t rejectedSamples = 0;
for (size_t i = 0; i < NUM_TEST_SAMPLES; ++i)
{
// generate random points until we have one that's accepted
// NOTE: this takes an unknown number of iterations, and technically may NEVER finish.
float sampleG = 0.0f;
bool validPoint = false;
while (!validPoint)
{
// Generate a sample from the soure PDF G
sampleG = Inverse_CDF_G(dist(rng));

// calculate the ratio of how likely we are to accept this sample
float acceptChance = PDF_F(sampleG) / (M * PDF_G(sampleG));

// see if we should accept it
validPoint = dist(rng) <= acceptChance;

// track number of failed tests per histogram bucket
if (!validPoint)
{
size_t bin = (size_t)std::floor(sampleG * float(NUM_HISTOGRAM_BUCKETS) / rngRange);
failedTestCounts[std::min(bin, NUM_HISTOGRAM_BUCKETS - 1)]++;
++rejectedSamples;
}
}

// increment the correct bin in the histogram
size_t bin = (size_t)std::floor(sampleG * float(NUM_HISTOGRAM_BUCKETS) / rngRange);
histogram[std::min(bin, NUM_HISTOGRAM_BUCKETS - 1)]++;
}

// write the histogram and pdf sample to a csv
FILE *file = fopen(fileName, "w+t");
fprintf(file, "PDF F,PDF G,Scaled PDF G,Simulated PDF,Failed Tests,%0.2f samples needed per value on average.\n", (float(NUM_TEST_SAMPLES) + float(rejectedSamples)) / float(NUM_TEST_SAMPLES));
for (size_t i = 0; i < NUM_HISTOGRAM_BUCKETS; ++i)
{
float x = (float(i) + 0.5f) * rngRange / float(NUM_HISTOGRAM_BUCKETS);

fprintf(file, "%f,%f,%f,%f,%f\n",
PDF_F(x),
PDF_G(x),
PDF_G(x)*M,
NUM_HISTOGRAM_BUCKETS * float(histogram[i]) / (float(NUM_TEST_SAMPLES)*rngRange),
float(failedTestCounts[i])
);
}
fclose(file);
}

int main(int argc, char **argv)
{
// Dice
{
// Simulate a 5 sided dice with a 6 sided dice
TestDice<10000, 5, 6>("test1_5_6.csv");

// Simulate a 5 sided dice with a 20 sided dice
TestDice<10000, 5, 20>("test1_5_20.csv");
}

// PDF y=2x, simulated with a uniform distribution
{
auto PDF = [](float x) { return 2.0f * x; };

Test<1000, 100>("test2_1k.csv", 2.0f, PDF);
Test<100000, 100>("test2_100k.csv", 2.0f, PDF);
Test<1000000, 100>("test2_1m.csv", 2.0f, PDF);
}

// PDF y=(x^3-10x^2+5x+11)/10.417, simulated with a uniform distribution
{
auto PDF = [](float x) {return (x*x*x - 10.0f*x*x + 5.0f*x + 11.0f) / (10.417f); };
Test<10000000, 100>("test3_10m_1_15.csv", 1.15f, PDF);
Test<10000000, 100>("test3_10m_1_5.csv", 1.5f, PDF);
Test<10000000, 100>("test3_10m_2_8.csv", 2.8f, PDF);
}

// function (not PDF, Doesn't integrate to 1!) y=(x^3-10x^2+5x+11), simulated with a scaled up uniform distribution
{
auto PDF = [](float x) {return (x*x*x - 10.0f*x*x + 5.0f*x + 11.0f); };
TestNotPDF<10000000, 100>("test4_10m_12_5.csv", 12.5f, 10.417f, PDF);
}

// Generate samples from PDF F using samples from PDF G.  random numbers are from 0 to 30.
// F PDF = gaussian distribution, mean 15, std dev of 5.  Truncated to +/- 3 stddeviations.
// G PDF = x*0.002222
// G CDF = 0.001111 * x^2
// G inverted CDF = (1000 * sqrt(x)) / sqrt(1111)
// M = 3
{
// gaussian PDF F
const float mean = 15.0f;
const float stddev = 5.0f;
auto PDF_F = [=] (float x) -> float
{
const float pi = 3.14159265f; // std::_Pi is MSVC specific, so use an explicit constant
return (1.0f / (stddev * std::sqrt(2.0f * pi))) * std::exp(-0.5f * std::pow((x - mean) / stddev, 2.0f));
};

// PDF G
auto PDF_G = [](float x) -> float
{
return x * 0.002222f;
};

// Inverse CDF of G
auto Inverse_CDF_G = [] (float x) -> float
{
return 1000.0f * std::sqrt(x) / std::sqrt(1111.0f);
};

TestPDFToPDF<20000, 100>("test5.csv", PDF_F, PDF_G, 3.0f, Inverse_CDF_G, 30.0f);
}

return 0;
}


# Generating Random Numbers From a Specific Distribution By Inverting the CDF

The last post talked about the normal distribution and showed how to generate random numbers from that distribution by generating regular (uniform) random numbers and then counting the bits.

What would you do if you wanted to generate random numbers from a different, arbitrary distribution though? Let’s even say the distribution is defined by an arbitrary function.

It turns out that in general this is a hard problem, but in practice there are a few ways to approach it. The below are the most common techniques for achieving this that I’ve seen.

• Inverting the CDF (analytically or numerically)
• Rejection Sampling
• Markov Chain Monte Carlo
• Ziggurat algorithm

This post talks about the first one listed: Inverting the CDF.

# What Is A CDF?

The last post briefly explained that a PDF is a probability density function and that it describes the relative probability of numbers being chosen at random. A requirement of a PDF is that it has non negative value everywhere and also that the area under the curve is 1.

It needs to be non negative everywhere because a negative probability doesn’t make any sense. It needs to have an area under the curve of 1 because that means it represents the full 100% probability of all possible outcomes.

CDF stands for “Cumulative distribution function” and is related to the PDF.

A PDF is a function y=f(x) where y is the relative probability of the number x being chosen at random from the distribution.

A CDF is a function y=f(x) where y is the probability of the number x, or any lower number, being chosen at random from that distribution.

You get a CDF from a PDF by integrating the PDF. From there you make sure that the CDF has a starting y value of 0, and an ending value of 1. You might have to do a bias (addition or subtraction) and/or scale (multiplication or division) to make that happen.

# Why Invert the CDF? (And Not the PDF?)

With both a PDF and a CDF, you plug in a number, and you get information about probabilities relating to that number.

To get a random number from a specific distribution, we want to do the opposite. We want to plug in a probability and get out the number corresponding to that probability.

Basically, we want to flip x and y in the equation and solve for y, so that we have a function that does this. That is what we have to do to invert the CDF.

Why invert the CDF though and not the PDF? Check out the images below from Wikipedia. The first is some Gaussian PDF’s and the second is the same distributions as CDF’s:

The issue is that if we flip x and y in a PDF, there would be multiple y values corresponding to the same x. This isn’t true in a CDF.

Let’s work through sampling some PDFs by inverting the CDF.

# Example 0: y=1

This is the easiest case and represents uniform random numbers, where every number is evenly likely to be chosen.

Our PDF equation is: $y=1$ where $x \in [0,1]$. The graph looks like this:

If we integrate the pdf to get the cdf, we get $y=x$ where $x \in [0,1]$ which looks like this:

Now, to invert the cdf, we flip x and y, and then solve for y again. It’s trivially easy…

$y=x \Leftarrow \text{CDF}\\ x=y \Leftarrow \text{Flip x and y}\\ y=x \Leftarrow \text{Solve for y again}$

Now that we have our inverted CDF, which is $y=x$, we can generate uniform random numbers, plug them into that equation as x and get y which is the actual value drawn from our PDF.

You can see that since we are plugging in numbers from an even distribution and not doing anything to them at all, the result is going to be an even distribution as well. So, we are in fact generating uniformly distributed random numbers using this inverted CDF, just like our PDF asked for.

This is so trivially simple it might be confusing. If so, don’t sweat it. Move onto the next example and you can come back to this later if you want to understand what I’m talking about here.

Note: The rest of the examples are going to have x in [0,1] as well but we are going to stop explicitly saying so. This process still works when x is in a different range of values, but for simplicity we’ll just have x be in [0,1] for the rest of the post.

# Example 1: y=2x

The next easiest case for a PDF is $y=2x$ which looks like this:

You might wonder why it’s $y=2x$ instead of $y=x$. This is because the area under the curve $y=x$ is 0.5. PDF’s need to have an area of 1, so I multiplied by 2 to make it have an area of 1.

What this PDF means is that small numbers are less likely to be picked than large numbers.

If we integrate the PDF $y=2x$ to get the CDF, we get $y=x^2$ which looks like this:

Now let’s flip x and y and solve for y again.

$y=x^2 \Leftarrow \text{CDF}\\ x=y^2 \Leftarrow \text{Flip x and y}\\ y=\sqrt{x} \Leftarrow \text{Solve for y again}$

We now have our inverted CDF which is $y=\sqrt{x}$ and looks like this:

Now, if we plug uniformly random numbers into that formula as x, we should get as output samples that follow the probability of our PDF.

We can use a histogram to see if this is really true. We can generate some random numbers, square root them, and count how many are in each range of values.
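A compact version of that check (function name is mine): since the CDF of $y=2x$ is $x^2$, about 25% of the square rooted samples should land below 0.5.

```cpp
#include <random>
#include <cmath>

// Sketch of the histogram test: push uniform random numbers through the
// inverted CDF y=sqrt(x) and measure what fraction land below 0.5. Since
// the CDF of the PDF y=2x is x^2, that fraction should be about 0.25.
float FractionBelowHalf(std::mt19937& rng, int numSamples)
{
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);
    int count = 0;
    for (int i = 0; i < numSamples; ++i)
    {
        float y = std::sqrt(dist(rng)); // a sample from the PDF y=2x
        if (y < 0.5f)
            count++;
    }
    return float(count) / float(numSamples);
}
```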

Here is a histogram where I took 1,000 random numbers, square rooted them, and put their counts into 100 buckets. Bucket 1 counted how many numbers were in [0, 0.01), bucket 2 counted how many numbers were in [0.01, 0.02) and so on until bucket 100 which counted how many numbers were in [0.99, 1.0).

Increasing the number of samples to 100,000 it gets closer:

At 1,000,000 samples you can barely see a difference:

The reason it doesn’t match up at lower sample counts is just due to the nature of random numbers being random. It does match up, but you’ll have some variation with lower sample counts.

# Example 2: y=3x^2

Let’s check out the PDF $y=3x^2$. The area under that curve where x is in [0,1) is 1.0 and it’s non negative everywhere in that range too, so it’s a valid PDF.

Integrating that, we get $y=x^3$ for the CDF. Then we invert the CDF:

$y=x^3 \Leftarrow \text{CDF}\\ x=y^3 \Leftarrow \text{Flip x and y}\\ y=\sqrt[3]{x} \Leftarrow \text{Solve for y again}$

And here is a 100,000 sample histogram vs the PDF to verify that we got the right answer:

# Example 3: Numeric Solution

So far we’ve been able to invert the CDF to get a nice easy function to transform uniform distribution random numbers into numbers from the distribution described by the PDF.

Sometimes though, inverting a CDF isn’t possible, or gives a complex equation that is costly to evaluate. In these cases, you can actually invert the CDF numerically via a lookup table.

A lookup table may also be desired in cases where eg you have a pixel shader that is drawing numbers from a PDF, and instead of making N shaders for N different PDFs, you want to unify them all into a single shader. Passing a lookup table via a constant buffer, or perhaps even via a texture can be a decent solution here. (Note: if storing in a texture you may be interested in fitting the data with curves and using this technique to store it and recall it from the texture: GPU Texture Sampler Bezier Curve Evaluation)

Let’s invert a PDF numerically using a look up table to see how that would work.

Our PDF will be:

$y=\frac{x^3-10x^2+5x+11}{10.417}$

And looks like this:

It’s non negative in the range we care about and it integrates to 1.0 – or it integrates closely enough… the division by 10.417 is there for that reason, and using more digits would get it closer to 1.0.

What we are going to do is evaluate that PDF at N points to get a probability for those samples of numbers. That will give us a lookup table for our PDF.

We are then going to make each point be the sum of all the PDF samples to the left of it to make a lookup table for a CDF. We’ll also have to normalize the CDF table since it’s likely that our PDF samples don’t all add up (integrate) to 1.0. We do this by dividing every item in the CDF by the last entry in the CDF. If you look at the table after that, it will fully cover everything from 0% to 100% probability.

Below are some histogram comparisons of the lookup table technique vs the actual PDF.

Here is 100 million samples (to make it easier to see the data without very much random noise), in 100 histogram buckets, and a lookup table size of 3 which is pretty low quality:

Increasing it to a lookup table of size 5 gives you this:

Here’s 10:

25:

And here’s 100:

So, not surprisingly, the size of the lookup table affects the quality of the results!

## Code

Here is the code I used to generate the data in this post, which I visualized with Open Office. I visualized the function graphs using Wolfram Alpha.

#define _CRT_SECURE_NO_WARNINGS

#include <stdio.h>
#include <random>
#include <array>
#include <algorithm>
#include <cmath>

template <size_t NUM_TEST_SAMPLES, size_t NUM_HISTOGRAM_BUCKETS, typename PDF_LAMBDA, typename INVERSE_CDF_LAMBDA>
void Test (const char* fileName, const PDF_LAMBDA& PDF, const INVERSE_CDF_LAMBDA& inverseCDF)
{
// seed the random number generator
std::random_device rd;
std::mt19937 rng(rd());
std::uniform_real_distribution<float> dist(0.0f, 1.0f);

// generate the histogram
std::array<size_t, NUM_HISTOGRAM_BUCKETS> histogram = { 0 };
for (size_t i = 0; i < NUM_TEST_SAMPLES; ++i)
{
// put a uniform random number into the inverted CDF to sample the PDF
float x = dist(rng);
float y = inverseCDF(x);

// increment the correct bin on the histogram
size_t bin = (size_t)std::floor(y * float(NUM_HISTOGRAM_BUCKETS));
histogram[std::min(bin, NUM_HISTOGRAM_BUCKETS -1)]++;
}

// write the histogram and pdf sample to a csv
FILE *file = fopen(fileName, "w+t");
fprintf(file, "PDF, Inverted CDF\n");
for (size_t i = 0; i < NUM_HISTOGRAM_BUCKETS; ++i)
{
float x = (float(i) + 0.5f) / float(NUM_HISTOGRAM_BUCKETS);
float pdfSample = PDF(x);
fprintf(file, "%f,%f\n",
pdfSample,
NUM_HISTOGRAM_BUCKETS * float(histogram[i]) / float(NUM_TEST_SAMPLES)
);
}
fclose(file);
}

template <size_t NUM_TEST_SAMPLES, size_t NUM_HISTOGRAM_BUCKETS, size_t LOOKUP_TABLE_SIZE, typename PDF_LAMBDA>
void TestPDFOnly (const char* fileName, const PDF_LAMBDA& PDF)
{
// make the CDF lookup table by sampling the PDF
// NOTE: we could integrate the buckets by averaging multiple samples instead of just the 1. This bucket integration is pretty low tech and low quality.
std::array<float, LOOKUP_TABLE_SIZE> CDFLookupTable;
float value = 0.0f;
for (size_t i = 0; i < LOOKUP_TABLE_SIZE; ++i)
{
float x = float(i) / float(LOOKUP_TABLE_SIZE - 1); // The -1 is so we cover the full range from 0% to 100%
value += PDF(x);
CDFLookupTable[i] = value;
}

// normalize the CDF - make sure we span the probability range 0 to 1.
for (float& f : CDFLookupTable)
f /= value;

// make our LUT based inverse CDF
// We will binary search over the y's (which are sorted smallest to largest) looking for the x, which is implied by the index.
// I'm sure there's a better & more clever lookup table setup for this situation but this should give you an idea of the technique
auto inverseCDF = [&CDFLookupTable] (float y) {

// there is an implicit entry of "0%" at index -1
if (y < CDFLookupTable[0])
{
float t = y / CDFLookupTable[0];
return t / float(LOOKUP_TABLE_SIZE);
}

// get the lower bound in the lut using a binary search
auto it = std::lower_bound(CDFLookupTable.begin(), CDFLookupTable.end(), y);

// figure out where we are at in the table
size_t index = it - CDFLookupTable.begin();

// Linearly interpolate between the values
// NOTE: could do other interpolation methods, like perhaps cubic (https://blog.demofox.org/2015/08/08/cubic-hermite-interpolation/)
float t = (y - CDFLookupTable[index - 1]) / (CDFLookupTable[index] - CDFLookupTable[index - 1]);
float fractionalIndex = float(index) + t;
return fractionalIndex / float(LOOKUP_TABLE_SIZE);
};

// call the usual function to do the testing
Test<NUM_TEST_SAMPLES, NUM_HISTOGRAM_BUCKETS>(fileName, PDF, inverseCDF);
}

int main (int argc, char **argv)
{
// PDF: y=2x
// inverse CDF: y=sqrt(x)
{
auto PDF = [] (float x) { return 2.0f * x; };
auto inverseCDF = [] (float x) { return std::sqrt(x); };

Test<1000, 100>("test1_1k.csv", PDF, inverseCDF);
Test<100000, 100>("test1_100k.csv", PDF, inverseCDF);
Test<1000000, 100>("test1_1m.csv", PDF, inverseCDF);
}

// PDF: y=3x^2
// inverse CDF: y=cuberoot(x) aka y = pow(x, 1/3)
{
auto PDF = [] (float x) { return 3.0f * x * x; };
auto inverseCDF = [](float x) { return std::pow(x, 1.0f / 3.0f); };

Test<100000, 100>("test2_100k.csv", PDF, inverseCDF);
}

// PDF: y=(x^3-10x^2+5x+11)/10.417
// Inverse CDF Numerically via a lookup table
{
auto PDF = [] (float x) {return (x*x*x - 10.0f*x*x + 5.0f*x + 11.0f) / (10.417f); };
TestPDFOnly<100000000, 100, 3>("test3_100m_3.csv", PDF);
TestPDFOnly<100000000, 100, 5>("test3_100m_5.csv", PDF);
TestPDFOnly<100000000, 100, 10>("test3_100m_10.csv", PDF);
TestPDFOnly<100000000, 100, 25>("test3_100m_25.csv", PDF);
TestPDFOnly<100000000, 100, 100>("test3_100m_100.csv", PDF);
}

return 0;
}


# Counting Bits & The Normal Distribution

I recently saw some interesting posts on twitter about the normal distribution:

I’m not really a statistics kind of guy, but knowing that probability distributions come up in graphics (like in PBR & path tracing), it seemed like a good time to upgrade my knowledge in this area while sharing an interesting technique for generating normal distribution random numbers.

## Basics

Below is an image showing a few normal (aka Gaussian) distributions (from wikipedia).

Normal distributions are defined by these parameters:

• $\mu$ – “mu” is the mean. This is the average value of the distribution. This is where the center (peak) of the curve is on the x axis.
• $\sigma^2$ – “sigma squared” is the variance, and is just the standard deviation squared. I find standard deviation more intuitive to think about.
• $\sigma$ – “sigma” is the standard deviation, which (surprise surprise!) is the square root of the variance. This controls the “width” of the graph. The area under the curve is 1.0, so as you increase standard deviation and make the graph wider, it also gets shorter.

Here’s a diagram of standard deviations to help understand them (also from wikipedia):

I find the standard deviation intuitive because 68.2% of the data is within one standard deviation from the mean (on the plus and minus side of the mean). 95.4% of the data is within two standard deviations of the mean.

Standard deviation is given in the same units as the data itself, so if a bell curve described scores on a test, with a mean of 80 and a standard deviation of 5, it means that 68.2% of the students got between 75 and 85 points on the test, and that 95.4% of the students got between 70 and 90 points on the test.
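You can verify those percentages empirically with C++’s built in std::normal_distribution, using the test score example (the helper name is mine):

```cpp
#include <random>
#include <cmath>

// Sketch: empirically check the 68.2% / 95.4% rule by sampling a normal
// distribution and counting how many samples land within numStdDevs
// standard deviations of the mean.
float FractionWithinStdDevs(float mean, float stddev, float numStdDevs,
                            int numSamples, unsigned int seed)
{
    std::mt19937 rng(seed);
    std::normal_distribution<float> dist(mean, stddev);
    int count = 0;
    for (int i = 0; i < numSamples; ++i)
    {
        if (std::fabs(dist(rng) - mean) <= numStdDevs * stddev)
            count++;
    }
    return float(count) / float(numSamples);
}
```

With mean 80 and standard deviation 5, one standard deviation gives a fraction close to 0.682 and two standard deviations gives close to 0.954.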

The normal distribution is what’s called a “probability density function” or PDF, which means that the y axis of the graph describes the likelihood of the number on the x axis being chosen at random.

This means that if you have a normal distribution that has a specific mean and variance (standard deviation), that numbers closer to the mean are more likely to be chosen randomly, while numbers farther away are less likely. The variance controls how the probability drops off as you get farther away from the mean.

Thinking about standard deviation again, 68.2% of the random numbers generated will be within 1 standard deviation of the mean (+1 std dev or -1 std dev). 95.4% will be within 2 standard deviations.

## Generating Normal Distribution Random Numbers – Coin Flips

Generating uniform random numbers, where every number is as likely as every other number, is pretty simple. In the physical world, you can roll some dice or flip some coins. In the software world, you can use PRNGs.

How would you generate random numbers that follow a normal distribution though?

In C++, there is std::normal_distribution that can do this for you. There is also something called the Box-Muller transform that can turn uniformly distributed random numbers into normal distribution random numbers (info here: Generating Gaussian Random Numbers).

I want to talk about something else though and hopefully build some better intuition.

First let’s look at coin flips.

If you flip a fair coin a million times and keep a count of how many heads and tails you saw, you might get 500014 heads and 499986 tails (I got this with a PRNG – std::mt19937). That is a pretty uniform distribution of values in the range of [0,1]. (breadcrumb: pascal’s triangle row 2 is 1,1)

Let’s flip two coins at a time though and add our values together (say that heads is 0 and tails is 1). Here’s what that graph looks like:

Out of 1 million flips, 250639 had no tails, 500308 had one tail, and 249053 had two tails. It might seem weird that they aren’t all even, but it makes more sense when you look at the outcome of flipping two coins: we can get heads/heads (00), heads/tails (01), tails/heads (10) or tails/tails (11). Two of the four possibilities have a single tails, so it makes sense that flipping two coins and getting one coin being a tail would be twice as likely as getting no tails or two tails. (breadcrumb: pascal’s triangle row 3 is 1,2,1)

What happens when we sum 3 coins? With a million flips I got 125113 0’s, 375763 1’s, 373905 2’s and 125219 3’s.

If you work out the possible combinations, there is 1 way to get 0, 3 ways to get 1, 3 ways to get 2 and 1 way to get 3. Those numbers almost exactly follow that 1, 3, 3, 1 probability. (breadcrumb: pascal’s triangle row 4 is 1,3,3,1)

If we flip 100 coins and sum them, we get this:

That looks a bit like the normal distribution graphs at the beginning of this post doesn’t it?

Flipping and summing coins will get you something called the “Binomial Distribution”, and the interesting thing there is that the binomial distribution approaches the normal distribution the more coins you are summing together. At an infinite number of coins, it is the normal distribution.
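The coin flip experiment above is easy to reproduce. Here's a minimal sketch (the function name and bucket layout are my own, not from the full source at the bottom of the post):

```cpp
#include <array>
#include <cstddef>
#include <random>

// Flip three fair coins per sample (heads = 0, tails = 1), sum the flips,
// and bucket the sums 0..3. The bucket counts approach the 1,3,3,1 ratios
// from Pascal's triangle.
std::array<size_t, 4> SumThreeCoinFlips (size_t numSamples)
{
    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<size_t> coin(0, 1);

    std::array<size_t, 4> buckets = {};
    for (size_t i = 0; i < numSamples; ++i)
        buckets[coin(rng) + coin(rng) + coin(rng)]++;
    return buckets;
}
```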

## Generating Normal Distribution Random Numbers – Dice Rolls

What if instead of flipping coins, we roll dice?

Well, rolling a 4 sided die a million times, you get each number roughly the same percentage of the time as you’d expect; roughly 25% each. 250125 0’s, 250103 1’s, 249700 2’s, 250072 3’s.

If we sum two 4 sided dice rolls we get this:

If we sum three 4 sided dice rolls we get this:

And if we sum one hundred we get this, which sure looks like a normal distribution:

This isn’t limited to four sided dice though, here’s one hundred 6 sided dice being summed:

With dice, instead of being a “binomial distribution”, it’s called a “multinomial distribution”, but as the number of dice goes to infinity, it also approaches the normal distribution.

This means you can get a normal distribution with not only coins, but any sided dice in general.

An even stronger statement than that is the Central Limit Theorem which says that if you have random numbers from ANY distribution, if you add enough of em together, you’ll often approach a normal distribution.

Strange huh?

## Generating Normal Distribution Random Numbers – Counting Bits

Now comes a fun way of generating random numbers which follow a normal distribution. Are you ready for it?

Simply generate an N bit random number and return how many 1 bits are set.

That gives you a random number that follows a normal distribution!

One problem with this is that you have very low “resolution” random numbers. Counting the bits of a 64 bit random number for instance, you can only return 0 through 64 so there are only 65 possible random numbers.

That is a pretty big limitation, but if you need normal distribution numbers calculated quickly and don’t mind if they are low resolution (like in a pixel shader?), this technique could work well for you.
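As a sketch of the idea (the function name is mine; the post's full source below does the same thing and also remaps the result to a desired mean and variance):

```cpp
#include <cstddef>
#include <cstdint>
#include <random>

// Count the set bits of a 64 bit random number. The result is in [0, 64]
// and approximately follows a normal distribution with mean 32, stddev 4.
size_t NormalishSample (std::mt19937_64& rng)
{
    uint64_t number = rng();
    size_t count = 0;
    while (number)
    {
        count += number & 1;
        number >>= 1;
    }
    return count;
}
```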

Another problem though is that you don’t have control over the variance or the mean of the distribution.

That isn’t a super huge deal though because you can easily convert numbers from one normal distribution into another normal distribution.

To do so, you get your normal distribution random number. First you subtract the mean of the distribution to make it centered on 0 (have a mean of 0). You then divide it by the standard deviation to make it be part of a distribution which has a standard deviation of 1.

At this point you have a random number from a normal distribution which has a mean of 0 and a standard deviation of 1.

Next, you multiply the number by the standard deviation of the distribution you want, and lastly you add the mean of the distribution you want.

That’s pretty simple (and is implemented in the source code at the bottom of this post), but to do this you need to know what standard deviation (variance) and mean you are starting with.

If you have some way to generate random numbers in [0, N) and you are summing M of those numbers together, the mean is $M*(N-1)/2$. Note that if you instead are generating random numbers in [1,N], the mean instead is $M*(N+1)/2$.

The variance in either case is $M*(N^2-1)/12$. The standard deviation is the square root of that.

Using that information you have everything you need to generate normal distribution random numbers of a specified mean and variance.
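As a sanity check of those formulas, here's a sketch that estimates the mean and variance of dice roll sums empirically. For example, summing one hundred rolls in [0,4) should give a mean of $100*3/2 = 150$ and a variance of $100*(16-1)/12 = 125$. The function name and fixed seed are my own choices:

```cpp
#include <cstddef>
#include <random>

// Estimate the mean and variance of summing `diceCount` rolls in
// [0, diceRange), to compare against the closed form formulas.
void EstimateMoments (size_t diceRange, size_t diceCount, size_t numSamples,
                      double& outMean, double& outVariance)
{
    std::mt19937 rng(1234);
    std::uniform_int_distribution<size_t> dist(0, diceRange - 1);

    double sum = 0.0;
    double sumSquared = 0.0;
    for (size_t i = 0; i < numSamples; ++i)
    {
        double s = 0.0;
        for (size_t j = 0; j < diceCount; ++j)
            s += double(dist(rng));
        sum += s;
        sumSquared += s * s;
    }
    outMean = sum / double(numSamples);
    outVariance = sumSquared / double(numSamples) - outMean * outMean;
}
```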

Thanks to @fahickman for the help on calculating mean and variance of dice roll sums.

## Code

Here is the source code I used to generate the data behind the graphs in this post. There is also an implementation of the bit counting algorithm I mentioned, which converts to the desired mean and variance.

#define _CRT_SECURE_NO_WARNINGS

#include <array>
#include <random>
#include <stdint.h>
#include <stdio.h>
#include <limits>
#include <cmath>

const size_t c_maxNumSamples = 1000000;
const char* c_fileName = "results.csv";

template <size_t DiceRange, size_t DiceCount, size_t NumBuckets>
void DumpBucketCountsAddRandomNumbers (size_t numSamples, const std::array<size_t, NumBuckets>& bucketCounts)
{
// open file for append if we can
FILE* file = fopen(c_fileName, "a+t");
if (!file)
return;

// write the info
float mean = float(DiceCount) * float(DiceRange - 1) / 2.0f;
float variance = float(DiceCount) * float(DiceRange * DiceRange - 1) / 12.0f;
if (numSamples == 1)
{
fprintf(file, "\"%zu random numbers [0,%zu) added together (sum %zud%zu). %zu buckets.  Mean = %0.2f.  Variance = %0.2f.  StdDev = %0.2f.\"\n", DiceCount, DiceRange, DiceCount, DiceRange, NumBuckets, mean, variance, std::sqrt(variance));
fprintf(file, "\"\"");
for (size_t i = 0; i < NumBuckets; ++i)
fprintf(file, ",\"%zu\"", i);
fprintf(file, "\n");
}
fprintf(file, "\"%zu samples\",", numSamples);

// report the samples
for (size_t count : bucketCounts)
fprintf(file, "\"%zu\",", count);

fprintf(file, "\"\"\n");
if (numSamples == c_maxNumSamples)
fprintf(file, "\n");

// close file
fclose(file);
}

template <size_t DiceSides, size_t DiceCount>
void AddRandomNumbersTest ()
{
std::mt19937 rng;
rng.seed(std::random_device()());
std::uniform_int_distribution<size_t> dist(size_t(0), DiceSides - 1);

std::array<size_t, (DiceSides - 1) * DiceCount + 1> bucketCounts = { 0 };

size_t nextDump = 1;
for (size_t i = 0; i < c_maxNumSamples; ++i)
{
size_t sum = 0;
for (size_t j = 0; j < DiceCount; ++j)
sum += dist(rng);

bucketCounts[sum]++;

if (i + 1 == nextDump)
{
DumpBucketCountsAddRandomNumbers<DiceSides, DiceCount>(nextDump, bucketCounts);
nextDump *= 10;
}
}
}

template <size_t NumBuckets>
void DumpBucketCountsCountBits (size_t numSamples, const std::array<size_t, NumBuckets>& bucketCounts)
{
// open file for append if we can
FILE* file = fopen(c_fileName, "a+t");
if (!file)
return;

// write the info
float mean = float(NumBuckets-1) * 1.0f / 2.0f;
float variance = float(NumBuckets-1) * 3.0f / 12.0f;
if (numSamples == 1)
{
fprintf(file, "\"%zu random bits (coin flips) added together. %zu buckets.  Mean = %0.2f.  Variance = %0.2f.  StdDev = %0.2f.\"\n", NumBuckets - 1, NumBuckets, mean, variance, std::sqrt(variance));
fprintf(file, "\"\"");
for (size_t i = 0; i < NumBuckets; ++i)
fprintf(file, ",\"%zu\"", i);
fprintf(file, "\n");
}
fprintf(file, "\"%zu samples\",", numSamples);

// report the samples
for (size_t count : bucketCounts)
fprintf(file, "\"%zu\",", count);

fprintf(file, "\"\"\n");
if (numSamples == c_maxNumSamples)
fprintf(file, "\n");

// close file
fclose(file);
}

template <size_t NumBits> // aka NumCoinFlips!
void CountBitsTest ()
{

// build the largest value representable in NumBits bits
size_t maxValue = 0;
for (size_t i = 0; i < NumBits; ++i)
maxValue = (maxValue << 1) | 1;

std::mt19937 rng;
rng.seed(std::random_device()());
std::uniform_int_distribution<size_t> dist(0, maxValue);

std::array<size_t, NumBits + 1> bucketCounts = { 0 };

size_t nextDump = 1;
for (size_t i = 0; i < c_maxNumSamples; ++i)
{
size_t sum = 0;
size_t number = dist(rng);
while (number)
{
if (number & 1)
++sum;
number = number >> 1;
}

bucketCounts[sum]++;

if (i + 1 == nextDump)
{
DumpBucketCountsCountBits(nextDump, bucketCounts);
nextDump *= 10;
}
}
}

float GenerateNormalRandomNumber (float mean, float variance)
{
static std::mt19937 rng;
static std::uniform_int_distribution<uint64_t> dist(0, (uint64_t)-1);

static bool seeded = false;
if (!seeded)
{
seeded = true;
rng.seed(std::random_device()());
}

// generate our normally distributed random number in [0, 64]
// by counting the bits set in a random 64 bit number.
float sum = 0.0f;
uint64_t number = dist(rng);
while (number)
{
if (number & 1)
sum += 1.0f;
number = number >> 1;
}

// convert from: mean 32, variance 16, stddev 4
// to: mean 0, variance 1, stddev 1
float ret = sum;
ret -= 32.0f;
ret /= 4.0f;

// convert to the specified mean and variance
ret *= std::sqrt(variance);
ret += mean;
return ret;
}

void VerifyGenerateNormalRandomNumber (float mean, float variance)
{
// open file for append if we can
FILE* file = fopen(c_fileName, "a+t");
if (!file)
return;

// write info
fprintf(file, "\"Normal Distributed Random Numbers. mean = %0.2f.  variance = %0.2f.  stddev = %0.2f\"\n", mean, variance, std::sqrt(variance));

// write some random numbers
fprintf(file, "\"100 numbers\"");
for (size_t i = 0; i < 100; ++i)
fprintf(file, ",\"%f\"", GenerateNormalRandomNumber(mean, variance));
fprintf(file, "\n\n");

// close file
fclose(file);
}

int main (int argc, char **argv)
{
// clear out the file
FILE* file = fopen(c_fileName, "w+t");
if (file)
fclose(file);

// coin flips
{
// flip a fair coin
AddRandomNumbersTest<2, 1>();

// flip two coins and sum them
AddRandomNumbersTest<2, 2>();

// sum 3 coin flips
AddRandomNumbersTest<2, 3>();

// sum 100 coin flips
AddRandomNumbersTest<2, 100>();
}

// dice rolls
{
// roll a 4 sided die
AddRandomNumbersTest<4, 1>();

// sum two 4 sided dice
AddRandomNumbersTest<4, 2>();

// sum three 4 sided dice
AddRandomNumbersTest<4, 3>();

// sum one hundred 4 sided dice
AddRandomNumbersTest<4, 100>();

// sum one hundred 6 sided dice
AddRandomNumbersTest<6, 100>();
}

CountBitsTest<8>();
CountBitsTest<16>();
CountBitsTest<32>();
CountBitsTest<64>();

VerifyGenerateNormalRandomNumber(0.0f, 20.0f);

VerifyGenerateNormalRandomNumber(0.0f, 10.0f);

VerifyGenerateNormalRandomNumber(5.0f, 10.0f);

return 0;
}


# WebGL PBR Implementation

Just want to see the demo? Click the link below. Warning: it loads quite a few images, some of which are ~10MB, so may take some time to load (it does report loading progress though):

http://demofox.org/WebGLPBR/

There is a great PBR (Physically Based Rendering) tutorial at: https://learnopengl.com/#!PBR/Theory

I followed that tutorial, making a WebGL PBR implementation as I went, but also making some C++ for pre-integrating diffuse and specular IBL (Image Based Lighting) and making the splitsum texture.

Pre-integrating the diffuse and specular (and using the splitsum texture) allows you to use an object’s surroundings as light sources, which is more in line with how real life works; we don’t just have point lights and directional lights in the real world, we have objects that glow because they are illuminated by light sources, and we have light sources which are in odd shapes.

It’s possible that there are one or more math errors or bugs in the C++ as well as my WebGL PBR implementation. At some point in the future I’ll dig deeper into the math of PBR and try to write up some simple blog posts about it, at which point I’ll have more confidence in its correctness than “well, it looks right…”.

The source code for the C++ pre-integrations are on github:
IBL Diffuse Cube Map Integration
IBL Specular Cube Map Integration + Split Sum Texture

The WebGL PBR implementation is also on github:
WebGLPBR

Here are some screenshots:

Learn WebGL2:

https://webgl2fundamentals.org/

Free PBR Materials:

http://freepbr.com/materials/rusted-iron-pbr-metal-material-alt/

https://learnopengl.com/#!PBR/Theory

https://disney-animation.s3.amazonaws.com/library/s2012_pbs_disney_brdf_notes_v2.pdf

# SIMD / GPU Friendly Branchless Binary Search

The other day I was thinking about how you might do a binary search branchlessly. I came up with a way, and I’m pretty sure I’m not the first to come up with it, but it was fun to think about and I wanted to share my solution.

Here it is searching a list of 8 items in 3 steps:

size_t BinarySearch8 (size_t needle, const size_t haystack[8])
{
size_t ret = (haystack[4] <= needle) ? 4 : 0;
ret += (haystack[ret + 2] <= needle) ? 2 : 0;
ret += (haystack[ret + 1] <= needle) ? 1 : 0;
return ret;
}


The three steps it does are:

1. The list has 8 items in it. We test index 4 to see if we need to continue searching index 0 to 3, or index 4 to 7. The returned index becomes either 0xx or 1xx.
2. The list has 4 items in it now. We test index 2 to see if we need to continue searching index 0 to 1, or index 2 to 3. The returned index becomes one of the following: 00x, 01x, 10x or 11x.
3. The list has 2 items in it. We test index 1 to see if we need to take the left item or the right item. The returned index becomes: 000, 001, 010, 011, 100, 101, 110, or 111.

## But Big O Complexity is Worse!

Usually a binary search can take up to $O(\log N)$ steps, where $N$ is the number of items in the list. In this post’s solution, it always takes exactly $\log_2 N$ steps.

It probably seems odd that this branchless version could be considered an improvement when its big O complexity is always the worst case of a regular binary search. That is strange, but in the world of SIMD and shader programs, going branchless can be a big win that is not captured by looking at big O complexity. (Note that cache coherency and thread contention are two other things not captured by looking at big O complexity.)

Also, when working in something like video games or other interactive simulations, an even frame rate is more important than a high frame rate for making a game look and feel smooth. Because of this, if you have algorithms that have very fast common cases but much slower worst cases, you may actually prefer to use an algorithm that is slower in the common case but faster in the worst case just to keep performance more consistent. Using an algorithm such as this, which has a constant amount of work regardless of input can be a good trade off there.

Lastly, in cryptographic applications, attackers can gather secret information by seeing how long certain operations take. For instance, if you use a shorter password than other people, an attacker may be able to detect that by seeing that it consistently takes you a little bit less time to login than other people. They now have an idea of the length of your password, and maybe will brute force you, knowing that you are low hanging fruit!

These timing based attacks can be thwarted by algorithms which run at a constant time regardless of input. This algorithm is one of those algorithms.

As an example of another algorithm that runs in constant time regardless of input, check out CORDIC math. I really want to write up a post on that someday, it’s pretty cool stuff.

You might have noticed that if the item you are searching for isn’t in the list, the function doesn’t return anything indicating that, and you might think that’s strange.

This function actually just returns the largest index that isn’t greater than the value you are searching for. If all the numbers are greater than the value you are searching for, it returns zero.

This might seem odd but this can actually come in handy if the list you are searching represents something like animation data, where there are keyframes sorted by time, and you want to find which two keyframes you are between so that you can interpolate.

To actually test if your value was in the list, you could do an extra check:

size_t searchValue = 3;
size_t index = BinarySearch8(searchValue, list);
bool found = (list[index] == searchValue);


If you need that extra check, it’s easy enough to add, and if you don’t need that extra check, it’s nice to not have it.
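To illustrate the keyframe use case mentioned above, here's a hypothetical sketch. BinarySearch8 is repeated from earlier; Lerp, SampleAnimation, and the clamping behavior at the ends are my own assumptions, not from any particular animation system:

```cpp
#include <cstddef>

// BinarySearch8 repeated from above.
size_t BinarySearch8 (size_t needle, const size_t haystack[8])
{
    size_t ret = (haystack[4] <= needle) ? 4 : 0;
    ret += (haystack[ret + 2] <= needle) ? 2 : 0;
    ret += (haystack[ret + 1] <= needle) ? 1 : 0;
    return ret;
}

float Lerp (float a, float b, float t) { return a + (b - a) * t; }

// Hypothetical: 8 keyframes sorted by time. The search returns the last
// keyframe at or before `time`, so index and index + 1 bracket the time.
float SampleAnimation (size_t time, const size_t keyTimes[8], const float keyValues[8])
{
    if (time <= keyTimes[0]) // before the first keyframe: clamp
        return keyValues[0];
    size_t index = BinarySearch8(time, keyTimes);
    if (index == 7) // at or past the last keyframe: clamp
        return keyValues[7];
    float t = float(time - keyTimes[index]) / float(keyTimes[index + 1] - keyTimes[index]);
    return Lerp(keyValues[index], keyValues[index + 1], t);
}
```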

## Without Ternary Operator

If in your setup you don’t have a ternary operator, or if the ternary operator isn’t branchless for you, you get the same results using multiplication:

size_t BinarySearch8 (size_t needle, const size_t haystack[8])
{
size_t ret = (haystack[4] <= needle) * 4;
ret += (haystack[ret + 2] <= needle) * 2;
ret += (haystack[ret + 1] <= needle) * 1;
return ret;
}


Note that on some platforms, the less than or equal test will be a branch! None of the platforms or compilers I tested had that issue but if you find yourself hitting that issue, you can do a branchless test via subtraction or similar.

Here is a godbolt link that lets you view the assembly for various compilers. When you open the link you’ll see clang doing this work branchlessly.
View Assembly

@adamjmiles from twitter also verified that GCN does it branchlessly, which you can see at the link below. Thanks for that!
View GCN Assembly

Something to keep in mind for the non GPU case though is that if you were doing this in SIMD, you’d be using SIMD intrinsics.

## Larger Lists

It’s trivial to search larger numbers of values. Here it is searching 16 items in 4 steps:

size_t BinarySearch16 (size_t needle, const size_t haystack[16])
{
size_t ret = (haystack[8] <= needle) ? 8 : 0;
ret += (haystack[ret + 4] <= needle) ? 4 : 0;
ret += (haystack[ret + 2] <= needle) ? 2 : 0;
ret += (haystack[ret + 1] <= needle) ? 1 : 0;
return ret;
}


And here it is searching 32 items in 5 steps:

size_t BinarySearch32 (size_t needle, const size_t haystack[32])
{
size_t ret = (haystack[16] <= needle) ? 16 : 0;
ret += (haystack[ret + 8] <= needle) ? 8 : 0;
ret += (haystack[ret + 4] <= needle) ? 4 : 0;
ret += (haystack[ret + 2] <= needle) ? 2 : 0;
ret += (haystack[ret + 1] <= needle) ? 1 : 0;
return ret;
}


## Non Power of 2 Lists

Let’s say that your list is not a perfect power of two in length. GASP!

You can still use the technique, but you treat it as if it has the next power of 2 up items, and then make sure your indices stay in range. The nice part here is that you don’t have to do extra work on the index at each step of the way, only in the places where it’s possible for the index to go out of range.

Here it is searching an array of size 7 in 3 steps:

size_t BinarySearch7 (size_t needle, const size_t haystack[7])
{
size_t ret = 0;
size_t testIndex = 0;

// test index is at most 4, so is within range.
testIndex = ret + 4;
ret = (haystack[testIndex] <= needle) ? testIndex : ret;

// test index is at most 6, so is within range.
testIndex = ret + 2;
ret = (haystack[testIndex] <= needle) ? testIndex : ret;

// test index is at most 7, so could be out of range.
// use min() to make sure the index stays in range.
testIndex = std::min<size_t>(ret + 1, 6);
ret = (haystack[testIndex] <= needle) ? testIndex : ret;

return ret;
}


There are some other techniques for dealing with non power of 2 sized lists that you can find in the links at the bottom, but there was one particularly interesting that my friend and ex boss James came up with.

Basically, you start out with something like this if you were searching a list of 7 items:

// 7 because the list has 7 items in it.
// 4 because it's half of the next power of 2 that is >= 7.
ret = (haystack[4] <= needle) * (7-4);


The result is that instead of having ret go to either 0 or 4, it goes to 0 or 3.

From there, in both cases you have 4 items in your sublist remaining, so you don’t need to worry about the index going out of bounds from that point on.
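Putting that together, a sketch of the full 7 item search using this trick might look like the below (my reconstruction of the idea, not James' actual code). The first test lands on 0 or 3, so both remaining sublists have exactly 4 items and no clamping is needed:

```cpp
#include <cstddef>

// Branchless search of 7 items with no index clamping: the first step maps
// to 0 or 3 instead of 0 or 4, so both sublists span exactly 4 items.
size_t BinarySearch7b (size_t needle, const size_t haystack[7])
{
    // 7 items; next power of 2 is 8, half of that is 4.
    size_t ret = (haystack[4] <= needle) * (7 - 4);
    ret += (haystack[ret + 2] <= needle) * 2;
    ret += (haystack[ret + 1] <= needle) * 1;
    return ret;
}
```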

## Code

Here’s some working code demonstrating the ideas above, as well as its output.

#include <algorithm>
#include <stdio.h>
#include <stdlib.h>

size_t BinarySearch8 (size_t needle, const size_t haystack[8])
{
// using ternary operator
size_t ret = (haystack[4] <= needle) ? 4 : 0;
ret += (haystack[ret + 2] <= needle) ? 2 : 0;
ret += (haystack[ret + 1] <= needle) ? 1 : 0;
return ret;
}

size_t BinarySearch8b (size_t needle, const size_t haystack[8])
{
// using multiplication
size_t ret = (haystack[4] <= needle) * 4;
ret += (haystack[ret + 2] <= needle) * 2;
ret += (haystack[ret + 1] <= needle) * 1;
return ret;
}

size_t BinarySearch7 (size_t needle, const size_t haystack[7])
{
// non perfect power of 2.  use min() to keep it from going out of bounds.
size_t ret = 0;
size_t testIndex = 0;

// test index is 4, so is within range.
testIndex = ret + 4;
ret = (haystack[testIndex] <= needle) ? testIndex : ret;

// test index is at most 6, so is within range.
testIndex = ret + 2;
ret = (haystack[testIndex] <= needle) ? testIndex : ret;

// test index is at most 7, so could be out of range.
// use min() to make sure the index stays in range.
testIndex = std::min<size_t>(ret + 1, 6);
ret = (haystack[testIndex] <= needle) ? testIndex : ret;

return ret;
}

int main (int argc, char **argv)
{
// search a list of size 8
{
// show the data
printf("Searching through a list with 8 items:\n");
size_t data[8] = { 1, 3, 5, 6, 9, 11, 15, 21 };
printf("data = [");
for (size_t i = 0; i < sizeof(data)/sizeof(data[0]); ++i)
{
if (i > 0)
printf(", ");
printf("%zu", data[i]);
}
printf("]\n");

// do some searches on it using ternary operation based function
printf("\nTernary based searches:\n");
#define FIND(needle) printf("Find " #needle ": index = %zu, value = %zu, found = %s\n", BinarySearch8(needle, data), data[BinarySearch8(needle, data)], data[BinarySearch8(needle, data)] == needle ? "true" : "false");
FIND(2);
FIND(3);
FIND(0);
FIND(22);
FIND(16);
FIND(15);
FIND(21);
#undef FIND

// do some searches on it using multiplication based function
printf("\nMultiplication based searches:\n");
#define FIND(needle) printf("Find " #needle ": index = %zu, value = %zu, found = %s\n", BinarySearch8b(needle, data), data[BinarySearch8b(needle, data)], data[BinarySearch8b(needle, data)] == needle ? "true" : "false");
FIND(2);
FIND(3);
FIND(0);
FIND(22);
FIND(16);
FIND(15);
FIND(21);
#undef FIND

printf("\n\n\n\n");
}

// search a list of size 7
{
// show the data
printf("Searching through a list with 7 items:\n");
size_t data[7] = { 1, 3, 5, 6, 9, 11, 15};
printf("data = [");
for (size_t i = 0; i < sizeof(data)/sizeof(data[0]); ++i)
{
if (i > 0)
printf(", ");
printf("%zu", data[i]);
}
printf("]\n");

// do some searches on it using ternary operation based function
printf("\nTernary based searches:\n");
#define FIND(needle) printf("Find " #needle ": index = %zu, value = %zu, found = %s\n", BinarySearch7(needle, data), data[BinarySearch7(needle, data)], data[BinarySearch7(needle, data)] == needle ? "true" : "false");
FIND(2);
FIND(3);
FIND(0);
FIND(22);
FIND(16);
FIND(15);
FIND(21);
#undef FIND

printf("\n\n\n\n");
}

system("pause");
return 0;
}


## Closing

Another facet of binary searching is that it isn’t the most cache friendly algorithm out there. There might be some value in combining the above with the information in the link below.

Cache-friendly binary search

If you like this sort of thing, here is an interesting paper from this year (2017):
Array Layouts For Comparison-Based Searching

And further down the rabbit hole a bit, this talks about re-ordering the search array to fit things into a cache line better:
https://people.mpi-inf.mpg.de/~rgemulla/publications/schlegel09search.pdf

Taking the next step is Intel and Oracle’s FAST paper:
http://www.timkaldewey.de/pubs/FAST__TODS11.pdf

Florian Gross from twitch made me aware of the last two links and also mentioned his master’s thesis in this area (thank you Florian!):

@rygorous mentioned on twitter some improvements such as ternary and quaternary search, as well as a way to handle the case of non power of 2 sized lists without extra index checks:

Thanks to everyone who gave feedback. It’s a very interesting topic, of which this post only seems to scratch the surface!

Hopefully you found this interesting. Questions, comments, corrections, let me know!

# When Random Numbers Are Too Random: Low Discrepancy Sequences

Random numbers can be useful in graphics and game development, but they have a pesky and sometimes undesirable habit of clumping together.

This is a problem in path tracing and monte carlo integration when you take N samples, but the samples aren’t well spread across the sampling range.

This can also be a problem for situations like when you are randomly placing objects in the world or generating treasure for a treasure chest. You don’t want your randomly placed trees to only be in one part of the forest, and you don’t want a player to get only trash items or only godly items when they open a treasure chest. Ideally you want to have some randomness, but you don’t want the random number generator to give you all of the same or similar random numbers.

The problem is that random numbers can be TOO random, like in the below where you can see clumps and large gaps between the 100 samples.

For cases like that, when you want random numbers that are a little bit more well distributed, you might find some use in low discrepancy sequences.

The standalone C++ code (one source file, standard headers, no libraries to link to) I used to generate the data and images are at the bottom of this post, as well as some links to more resources.

# What Is Discrepancy?

In this context, discrepancy is a measurement of the highest or lowest density of points in a sequence. High discrepancy means that there is either a large area of empty space, or that there is an area that has a high density of points. Low discrepancy means that there are neither, and that your points are more or less pretty evenly distributed.

The lowest discrepancy possible has no randomness at all, and in the 1 dimensional case means that the points are evenly distributed on a grid. For monte carlo integration and the game dev usage cases I mentioned, we do want some randomness, we just want the random points to be spread out a little more evenly.

If more formal math notation is your thing, discrepancy is defined as:
$D_N(P) = \sup_{B \in J} \left| \frac{A(B;P)}{N} - \lambda_s(B) \right|$

Equidistributed sequence

For monte carlo integration specifically, this is the behavior each thing gives you:

• High Discrepancy: Random Numbers / White Noise aka Uniform Distribution – At lower sample counts, convergence is slower (and has higher variance) due to the possibility of not getting good coverage over the area you are integrating. At higher sample counts, this problem disappears. (Hint: real time graphics and preview renderings use a smaller number of samples)
• Lowest Discrepancy: Regular Grid – This will cause aliasing, unlike the other “random” based sampling, which trades aliasing for noise. Noise is preferred over aliasing.
• Low Discrepancy: Low Discrepancy Sequences – At lower sample counts, this will converge faster by having better coverage of the sampling space, but will use randomness to get rid of aliasing by introducing noise.

Also interesting to note, Quasi Monte Carlo has provably better asymptotic convergence than regular monte carlo integration.

# 1 Dimensional Sequences

We’ll first look at 1 dimensional sequences.

## Grid

Here are 100 samples evenly spaced:

## Random Numbers (White Noise)

This is actually a high discrepancy sequence. To generate this, you just use a standard random number generator to pick 100 points between 0 and 1. I used std::mt19937 with a std::uniform_real_distribution from 0 to 1:

## Subrandom Numbers

Subrandom numbers are ways to decrease the discrepancy of white noise.

One way to do this is to break the sampling space in half. You then generate even numbered samples in the first half of the space, and odd numbered samples in the second half of the space.

There’s no reason you can’t generalize this into more divisions of space though.
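A sketch of that generalization (function name mine): sample i is placed uniformly within region i modulo the region count, so the samples cycle through the regions.

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Subrandom numbers: divide [0, 1) into numRegions equal regions and place
// each sample uniformly within its region, cycling through the regions.
std::vector<float> SubrandomSamples (size_t numSamples, size_t numRegions)
{
    std::mt19937 rng(std::random_device{}());
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);

    std::vector<float> samples(numSamples);
    float regionSize = 1.0f / float(numRegions);
    for (size_t i = 0; i < numSamples; ++i)
        samples[i] = (float(i % numRegions) + dist(rng)) * regionSize;
    return samples;
}
```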

This splits the space into 4 regions:

8 regions:

16 regions:

32 regions:

There are other ways to generate subrandom numbers though. One way is to generate random numbers between 0 and 0.5, and add them to the last sample, plus 0.5. This gives you a random walk type setup.

Here is that:

## Uniform Sampling + Jitter

If you take the first subrandom idea to the logical maximum, you break your sample space up into N sections and place one point within those N sections to make a low discrepancy sequence made up of N points.

Another way to look at this is that you do uniform sampling, but add some random jitter to the samples, between +/- half a uniform sample size, to keep the samples in their own areas.

This is that:

Interestingly, I have heard that Pixar invented this technique.
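A sketch of the jittering approach (function name mine): each sample sits at a grid cell center, offset by up to plus or minus half a cell.

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Uniform sampling plus jitter: sample i stays within its own 1/numSamples
// sized cell, so the sequence is stratified but still random.
std::vector<float> JitteredSamples (size_t numSamples)
{
    std::mt19937 rng(std::random_device{}());
    std::uniform_real_distribution<float> jitter(-0.5f, 0.5f);

    std::vector<float> samples(numSamples);
    for (size_t i = 0; i < numSamples; ++i)
        samples[i] = (float(i) + 0.5f + jitter(rng)) / float(numSamples);
    return samples;
}
```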

## Irrational Numbers

Rational numbers are numbers which can be described as fractions, such as 0.75 which can be expressed as 3/4. Irrational numbers are numbers which CANNOT be described as fractions, such as pi, or the golden ratio, or the square root of a prime number.

Interestingly you can use irrational numbers to generate low discrepancy sequences. You start with some value (could be 0, or could be a random number), add the irrational number, and modulus against 1.0. To get the next sample you add the irrational value again, and modulus against 1.0 again. Rinse and repeat until you get as many samples as you want.

Some values work better than others though, and apparently the golden ratio is provably the best choice (1.61803398875…), says Wikipedia.
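Here's a sketch of that additive recurrence (the constant is the fractional part of the golden ratio; the function name is mine):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Low discrepancy sequence from an irrational number: repeatedly add the
// fractional part of the golden ratio and take the result modulo 1.
std::vector<float> GoldenRatioSequence (size_t numSamples, float start)
{
    const float c_goldenRatioFract = 0.61803398875f; // golden ratio mod 1
    std::vector<float> samples(numSamples);
    float value = start;
    for (size_t i = 0; i < numSamples; ++i)
    {
        value = std::fmod(value + c_goldenRatioFract, 1.0f);
        samples[i] = value;
    }
    return samples;
}
```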

Here is the golden ratio, using 4 different random (white noise) starting values:

Here I’ve used the square root of 2, with 4 different starting random numbers again:

Lastly, here is pi, with 4 random starting values:

## Van der Corput Sequence

The Van der Corput sequence is the 1d equivalent of the Halton sequence, which we’ll talk about later.

How you generate values in the Van der Corput sequence is you convert the index of your sample into some base.

For instance if it was base 2, you would convert your index to binary. If it was base 16, you would convert your index to hexadecimal.

Now, instead of treating the digits as if they are $B^0$, $B^1$, $B^2$, etc (where B is the base), you instead treat them as $B^{-1}$, $B^{-2}$, $B^{-3}$ and so on. In other words, you multiply each digit by a fraction and add up the results.

To show a couple quick examples, let’s say we wanted sample 6 in the sequence of base 2.

First we convert 6 to binary which is 110. From right to left, we have 3 digits: a 0 in the 1’s place, a 1 in the 2’s place, and a 1 in the 4’s place. $0*1 + 1*2 + 1*4 = 6$, so we can see that 110 is in fact 6 in binary.

To get the Van der Corput value for this, instead of treating it as the 1’s, 2’s and 4’s digit, we treat it as the 1/2, 1/4 and 1/8’s digit.

$0 * 1/2 + 1 * 1/4 + 1 * 1/8 = 3/8$.

So, sample 6 in the Van der Corput sequence using base 2 is 3/8.

Let’s try sample 21 in base 3.

First we convert 21 to base 3 which is 210. We can verify this is right by seeing that $0 * 1 + 1 * 3 + 2 * 9 = 21$.

Instead of a 1’s, 3’s and 9’s digit, we are going to treat it like a 1/3, 1/9 and 1/27 digit.

$0 * 1/3 + 1 * 1/9 + 2 * 1/27 = 5/27$

So, sample 21 in the Van der Corput sequence using base 3 is 5/27.
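The digit reflection walked through above can be written as a short function (my sketch):

```cpp
#include <cstddef>

// Reflect the digits of `index`, written in `base`, around the radix point:
// the 1's digit becomes the 1/base digit, the base's digit becomes the
// 1/base^2 digit, and so on.
float VanDerCorput (size_t index, size_t base)
{
    float result = 0.0f;
    float denominator = float(base);
    while (index > 0)
    {
        result += float(index % base) / denominator;
        index /= base;
        denominator *= float(base);
    }
    return result;
}
```

VanDerCorput(6, 2) gives 3/8 and VanDerCorput(21, 3) gives 5/27, matching the worked examples.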

Here is the Van der Corput sequence for base 2:

Here it is for base 3:

Base 4:

Base 5:

## Sobol

One dimensional Sobol is actually just the Van der Corput sequence base 2 re-arranged a little bit, but it’s generated differently.

You start with 0 (either using it as sample 0 or sample -1, doesn’t matter which), and for each sample you do this:

1. Calculate the Ruler function value for the current sample’s index (more info in a second)
2. Make the direction vector by shifting 1 left (in binary) 31 – ruler times.
3. XOR the last sample by the direction vector to get the new sample
4. To interpret the sample as a floating point number you divide it by $2^{32}$

That might sound completely different than the Van der Corput sequence but it actually is the same thing – just re-ordered.

In the final step when dividing by $2^{32}$, we are really just interpreting the binary number as a fraction just like before, but it’s the LEFT most digit that is the 1/2 spot, not the RIGHT most digit.

The Ruler Function goes like: 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, …

It’s pretty easy to calculate too. Calculating the ruler function for an index (starting at 1) is just the zero based index of the right most 1’s digit after converting the number to binary.

1 in binary is 001 so Ruler(1) is 0.
2 in binary is 010 so Ruler(2) is 1.
3 in binary is 011 so Ruler(3) is 0.
4 in binary is 100 so Ruler(4) is 2.
5 in binary is 101 so Ruler(5) is 0.
and so on.
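Here's a sketch of those four steps plus the ruler function (names are mine):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Ruler(index): zero based position of the lowest set bit of index.
// For index = 1, 2, 3, 4, ... this gives 0, 1, 0, 2, ...
uint32_t Ruler (uint32_t index)
{
    uint32_t result = 0;
    while ((index & 1) == 0)
    {
        ++result;
        index >>= 1;
    }
    return result;
}

std::vector<float> Sobol1D (size_t numSamples)
{
    std::vector<float> samples(numSamples);
    uint32_t sample = 0;
    for (size_t i = 0; i < numSamples; ++i)
    {
        // steps 1 & 2: make the direction vector from the ruler function
        uint32_t direction = uint32_t(1) << (31 - Ruler(uint32_t(i + 1)));
        // step 3: xor the last sample by the direction vector
        sample ^= direction;
        // step 4: interpret the sample as a fraction by dividing by 2^32
        samples[i] = float(double(sample) / 4294967296.0);
    }
    return samples;
}
```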

Here is 1D Sobol:

## Hammersley

In one dimension, the Hammersley sequence is the same as the base 2 Van der Corput sequence, and in the same order. If it sounds strange that they’re the same, remember that Hammersley is really a 2d sequence, which I broke down into a 1d sequence for comparison. The one thing Hammersley has that makes it unique in the 1d case is that you can truncate bits.

It doesn’t seem that useful to truncate bits for 1d Hammersley, but it’s worth knowing about. Look at the 2d version of Hammersley to get a fairer look at it, because it’s meant to be a 2d sequence.
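For reference, here is the bit reading described above as a function (a sketch; the name and the `truncateBits` parameter mirror what the code at the bottom of the post does):

```cpp
#include <cstddef>

// 1d Hammersley: read the index's bits from least significant to most,
// giving them place values 1/2, 1/4, 1/8, ...
float Hammersley1D(size_t n, size_t truncateBits)
{
    n >>= truncateBits; // optionally throw away low bits
    float result = 0.0f;
    float placeValue = 0.5f;
    while (n)
    {
        if (n & 1)
            result += placeValue;
        n >>= 1;
        placeValue *= 0.5f;
    }
    return result;
}
```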

Here is Hammersley:

With 1 bit truncated:

With 2 bits truncated:

# Poisson Disc

Poisson disc points are points which are densely packed, but have a minimum distance from each other.

Computer scientists are still working out good algorithms to generate these points efficiently.

I use “Mitchell’s Best-Candidate” which means that when you want to generate a new point in the sequence, you generate N new points, and choose whichever point is farthest away from the other points you’ve generated so far.

Here it is where N is 100:
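In 1d, Mitchell’s Best-Candidate boils down to something like this (a sketch using `rand()` for brevity; the full listing at the bottom of the post uses `std::mt19937` instead):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <vector>

// Mitchell's Best-Candidate in 1d: to place each new point, generate
// `candidates` random points and keep whichever is farthest from the
// points placed so far (largest minimum distance).
std::vector<float> BestCandidate1D(size_t numPoints, size_t candidates)
{
    std::vector<float> points;
    for (size_t p = 0; p < numPoints; ++p)
    {
        float bestX = 0.0f;
        float bestScore = -1.0f;
        for (size_t c = 0; c < candidates; ++c)
        {
            float x = float(rand()) / float(RAND_MAX);
            // score this candidate by its distance to the nearest existing point
            float score = 1.0f;
            for (float q : points)
                score = std::min(score, std::fabs(q - x));
            if (score > bestScore)
            {
                bestScore = score;
                bestX = x;
            }
        }
        points.push_back(bestX);
    }
    return points;
}
```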

# 2 Dimensional Sequences

Next up, let’s look at some 2 dimensional sequences.

## Grid

Below are 2d uniform samples on a grid.

Note that uniform grid is not particularly low discrepancy for the 2d case! More info here: Is it expected that uniform points would have non zero discrepancy?

## Random

Here are 100 random points:

## Uniform Grid + Jitter

Here is a uniform grid that has random jitter applied to the points. Jittered grid is a pretty commonly used low discrepancy sampling technique that has good success.

## Subrandom

Just like in 1 dimension, you can apply the subrandom ideas to 2 dimensions where you divide the X and Y axis into so many sections, and randomly choose points in the sections.

If you divide X and Y into the same number of sections though, you are going to have a problem because some areas are not going to have any points in them.

@Reedbeta pointed out that instead of using i%x and i%y, that you could use i%x and (i/x)%y to make it pick points in all regions.
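As a sketch, that region indexing looks like this (deterministic jitter values are passed in so it’s easy to see which cell a sample lands in; the names are mine, not from the post’s code):

```cpp
#include <cstddef>

// subrandom 2d: sample i goes into cell (i % rx, (i / rx) % ry), so all
// rx*ry cells get visited, then jx/jy in [0,1) jitter it within the cell.
void Subrandom2D(size_t i, size_t rx, size_t ry, float jx, float jy, float& x, float& y)
{
    x = (float(i % rx) + jx) / float(rx);
    y = (float((i / rx) % ry) + jy) / float(ry);
}
```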

Picking different numbers for X and Y can be another way to give good results. Here’s dividing X and Y into 2 and 3 sections respectively:

If you choose co-prime numbers for divisions for each axis you can get maximal period of repeats. 2 and 3 are coprime so the last example is a good example of that, but here is 3 and 11:

Here is 3 and 97. 97 is large enough that with only doing 100 samples, we are almost doing jittered grid on the y axis.

Here is the other subrandom number generator from 1d, where we start with a random value for X and Y, and then for each new sample add 0.5 plus a random number between 0 and 0.5 to each axis (mod 1), to make a “random walk” type setup again:

## Halton

The Halton sequence is just the Van der Corput sequence, but using a different base on each axis.

Here is the Halton sequence where X and Y use bases 2 and 3:

Here it is using bases 5 and 7:

Here are bases 13 and 9:

# Irrational Numbers

The irrational numbers technique can be used for 2d as well but I wasn’t able to find out how to make it give decent looking output that didn’t have an obvious diagonal pattern in them. Bart Wronski shared a neat paper that explains how to use the golden ratio in 2d with great success: Golden Ratio Sequences For Low-Discrepancy Sampling

This uses the golden ratio for the X axis and the square root of 2 for the Y axis. Below that is the same, with a random starting point, to make it give a different sequence.

Here X axis uses square root of 2 and Y axis uses square root of 3. Below that is a random starting point, which gives the same discrepancy.

## Hammersley

In 2 dimensions, the Hammersley sequence uses the 1d Hammersley sequence for the X axis: instead of treating the binary digits of the index as a whole number, you treat them as fractions (1/2, 1/4, 1/8, …) like you do for Van der Corput, and sum up the fractions.

For the Y axis, you just reverse the bits and then do the same!
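The bit reversal for the Y axis can be sketched like this (`ReverseBits` is a hypothetical helper name; the full listing approaches it with a bit mask instead):

```cpp
#include <cstddef>

// reverse the lowest numBits bits of n, e.g. 011 (3 bits) becomes 110
size_t ReverseBits(size_t n, size_t numBits)
{
    size_t result = 0;
    for (size_t bit = 0; bit < numBits; ++bit)
    {
        result = (result << 1) | (n & 1);
        n >>= 1;
    }
    return result;
}
```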

Here is the Hammersley sequence. Note we would have to take 128 samples (not just the 100 we did) if we wanted it to fill the entire square with samples.

Truncating bits in 2d is a bit useful. Here is 1 bit truncated:

2 bits truncated:

## Poisson Disc

Using the same method we did for 1d, we can generate points in 2d space:

## N Rooks

There is a sampling pattern called N-Rooks where you put N rooks onto a chess board and arrange them such that no two are in the same row or column.

A way to generate these samples is to realize that there will be only one rook per row, and that none of them will ever be in the same column. So, you make an array that has numbers 0 to N-1, and then shuffle the array. The index into the array is the row, and the value in the array is the column.
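That shuffle idea is only a few lines (a sketch; `NRooks` is a made up name, and the seed parameter is just there to make it repeatable):

```cpp
#include <algorithm>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

// N-Rooks: shuffle the column indices; row i pairs with shuffled column i,
// so no two samples share a row or a column.
std::vector<std::pair<float, float>> NRooks(size_t N, unsigned int seed)
{
    std::vector<size_t> columns(N);
    std::iota(columns.begin(), columns.end(), 0); // 0, 1, ..., N-1
    std::mt19937 rng(seed);
    std::shuffle(columns.begin(), columns.end(), rng);

    std::vector<std::pair<float, float>> points(N);
    for (size_t i = 0; i < N; ++i)
        points[i] = { float(columns[i]) / float(N - 1), float(i) / float(N - 1) };
    return points;
}
```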

Here are 100 rooks:

## Sobol

Sobol in two dimensions is more complex to explain so I’ll link you to the source I used: Sobol Sequences Made Simple.

The 1D sobol already covered is used for the X axis, and then something more complex was used for the Y axis:

Bart Wronski has a really great series on a related topic: Dithering in Games

Wikipedia: Low Discrepancy Sequence

Wikipedia: Halton Sequence

Wikipedia: Van der Corput Sequence

Using Fibonacci Sequence To Generate Colors

Deeper info and usage cases for low discrepancy sequences

Poisson-Disc Sampling

Low discrepancy sequences are related to blue noise. Where white noise contains all frequencies evenly, blue noise has more high frequencies and fewer low frequencies. Blue noise is essentially the ultimate in low discrepancy, but can be expensive to compute. Here are some pages on blue noise:

Free Blue Noise Textures

The problem with 3D blue noise

Stippling and Blue Noise

Vegetation placement in “The Witness”

Here are some links from @marc_b_reynolds:
Sobol (low-discrepancy) sequence in 1-3D, stratified in 2-4D.
Classic binary-reflected gray code.
Sobol.h

Weyl Sequence

## Code

#define _CRT_SECURE_NO_WARNINGS

#include <windows.h>  // for bitmap headers and performance counter.  Sorry non windows people!
#include <vector>
#include <stdint.h>
#include <random>
#include <array>
#include <algorithm>
#include <stdlib.h>
#include <set>

typedef uint8_t uint8;

#define NUM_SAMPLES 100  // to simplify some 2d code, this must be a square
#define NUM_SAMPLES_FOR_COLORING 100

// Turning this on will slow things down significantly because it's an O(N^5) operation for 2d!
#define CALCULATE_DISCREPANCY 0

#define IMAGE1D_WIDTH 600
#define IMAGE1D_HEIGHT 50
#define IMAGE2D_WIDTH 300
#define IMAGE2D_HEIGHT 300
#define IMAGE_PAD 30

#define IMAGE1D_CENTERX ((IMAGE1D_WIDTH+IMAGE_PAD*2)/2)
#define IMAGE1D_CENTERY ((IMAGE1D_HEIGHT+IMAGE_PAD*2)/2)
#define IMAGE2D_CENTERX ((IMAGE2D_WIDTH+IMAGE_PAD*2)/2)
#define IMAGE2D_CENTERY ((IMAGE2D_HEIGHT+IMAGE_PAD*2)/2)

#define AXIS_HEIGHT 40
#define DATA_HEIGHT 20
#define DATA_WIDTH 2

#define COLOR_FILL SColor(255,255,255)
#define COLOR_AXIS SColor(0, 0, 0)

//======================================================================================
struct SImageData
{
SImageData ()
: m_width(0)
, m_height(0)
{ }

size_t m_width;
size_t m_height;
size_t m_pitch;
std::vector<uint8> m_pixels;
};

struct SColor
{
SColor (uint8 _R = 0, uint8 _G = 0, uint8 _B = 0)
: R(_R), G(_G), B(_B)
{ }

uint8 B, G, R;
};

//======================================================================================
bool SaveImage (const char *fileName, const SImageData &image)
{
// open the file if we can
FILE *file;
file = fopen(fileName, "wb");
if (!file) {
printf("Could not save %s\n", fileName);
return false;
}

// make the headers, write the data, and close the file
BITMAPFILEHEADER header = {};
header.bfType = 0x4D42;
header.bfOffBits = 54;
header.bfSize = header.bfOffBits + (DWORD)image.m_pixels.size();
BITMAPINFOHEADER infoHeader = {};
infoHeader.biSize = 40;
infoHeader.biWidth = (LONG)image.m_width;
infoHeader.biHeight = (LONG)image.m_height;
infoHeader.biPlanes = 1;
infoHeader.biBitCount = 24;
infoHeader.biSizeImage = (DWORD)image.m_pixels.size();
fwrite(&header, sizeof(header), 1, file);
fwrite(&infoHeader, sizeof(infoHeader), 1, file);
fwrite(&image.m_pixels[0], image.m_pixels.size(), 1, file);
fclose(file);

return true;
}

//======================================================================================
void ImageInit (SImageData& image, size_t width, size_t height)
{
image.m_width = width;
image.m_height = height;
image.m_pitch = 4 * ((width * 24 + 31) / 32);
image.m_pixels.resize(image.m_pitch * image.m_height);
std::fill(image.m_pixels.begin(), image.m_pixels.end(), 0);
}

//======================================================================================
void ImageClear (SImageData& image, const SColor& color)
{
uint8* row = &image.m_pixels[0];
for (size_t rowIndex = 0; rowIndex < image.m_height; ++rowIndex)
{
SColor* pixels = (SColor*)row;
std::fill(pixels, pixels + image.m_width, color);

row += image.m_pitch;
}
}

//======================================================================================
void ImageBox (SImageData& image, size_t x1, size_t x2, size_t y1, size_t y2, const SColor& color)
{
for (size_t y = y1; y < y2; ++y)
{
uint8* row = &image.m_pixels[y * image.m_pitch];
SColor* start = &((SColor*)row)[x1];
std::fill(start, start + x2 - x1, color);
}
}

//======================================================================================
float Distance (float x1, float y1, float x2, float y2)
{
float dx = (x2 - x1);
float dy = (y2 - y1);

return std::sqrtf(dx*dx + dy*dy);
}

//======================================================================================
SColor DataPointColor (size_t sampleIndex)
{
SColor ret;
float percent = (float(sampleIndex) / (float(NUM_SAMPLES_FOR_COLORING) - 1.0f));

ret.R = uint8((1.0f - percent) * 255.0f);
ret.G = 0;
ret.B = uint8(percent * 255.0f);

float mag = (float)sqrt(ret.R*ret.R + ret.G*ret.G + ret.B*ret.B);
ret.R = uint8((float(ret.R) / mag)*255.0f);
ret.G = uint8((float(ret.G) / mag)*255.0f);
ret.B = uint8((float(ret.B) / mag)*255.0f);

return ret;
}

//======================================================================================
float RandomFloat (float min, float max)
{
static std::random_device rd;
static std::mt19937 mt(rd());
std::uniform_real_distribution<float> dist(min, max);
return dist(mt);
}

//======================================================================================
size_t Ruler (size_t n)
{
size_t ret = 0;
while (n != 0 && (n & 1) == 0)
{
n /= 2;
++ret;
}
return ret;
}

//======================================================================================
float CalculateDiscrepancy1D (const std::array<float, NUM_SAMPLES>& samples)
{
// some info about calculating discrepancy
// https://math.stackexchange.com/questions/1681562/how-to-calculate-discrepancy-of-a-sequence

// Calculates the discrepancy of this data.
// Assumes the data is [0,1) for valid sample range
std::array<float, NUM_SAMPLES> sortedSamples = samples;
std::sort(sortedSamples.begin(), sortedSamples.end());

float maxDifference = 0.0f;
for (size_t startIndex = 0; startIndex <= NUM_SAMPLES; ++startIndex)
{
// startIndex 0 = 0.0f.  startIndex 1 = sortedSamples[0]. etc

float startValue = 0.0f;
if (startIndex > 0)
startValue = sortedSamples[startIndex - 1];

for (size_t stopIndex = startIndex; stopIndex <= NUM_SAMPLES; ++stopIndex)
{
// stopIndex 0 = sortedSamples[0].  startIndex[N] = 1.0f. etc

float stopValue = 1.0f;
if (stopIndex < NUM_SAMPLES)
stopValue = sortedSamples[stopIndex];

float length = stopValue - startValue;

// open interval (startValue, stopValue)
size_t countInside = 0;
for (float sample : samples)
{
if (sample > startValue &&
sample < stopValue)
{
++countInside;
}
}
float density = float(countInside) / float(NUM_SAMPLES);
float difference = std::abs(density - length);
if (difference > maxDifference)
maxDifference = difference;

// closed interval [startValue, stopValue]
countInside = 0;
for (float sample : samples)
{
if (sample >= startValue &&
sample <= stopValue)
{
++countInside;
}
}
density = float(countInside) / float(NUM_SAMPLES);
difference = std::abs(density - length);
if (difference > maxDifference)
maxDifference = difference;
}
}
return maxDifference;
}

//======================================================================================
float CalculateDiscrepancy2D (const std::array<std::array<float, 2>, NUM_SAMPLES>& samples)
{
// some info about calculating discrepancy
// https://math.stackexchange.com/questions/1681562/how-to-calculate-discrepancy-of-a-sequence

// Calculates the discrepancy of this data.
// Assumes the data is [0,1) for valid sample range.

// Get the sorted list of unique values on each axis
std::set<float> setSamplesX;
std::set<float> setSamplesY;
for (const std::array<float, 2>& sample : samples)
{
setSamplesX.insert(sample[0]);
setSamplesY.insert(sample[1]);
}
std::vector<float> sortedXSamples;
std::vector<float> sortedYSamples;
sortedXSamples.reserve(setSamplesX.size());
sortedYSamples.reserve(setSamplesY.size());
for (float f : setSamplesX)
sortedXSamples.push_back(f);
for (float f : setSamplesY)
sortedYSamples.push_back(f);

// Get the sorted list of samples on the X axis, for faster interval testing
std::array<std::array<float, 2>, NUM_SAMPLES> sortedSamplesX = samples;
std::sort(sortedSamplesX.begin(), sortedSamplesX.end(),
[] (const std::array<float, 2>& itemA, const std::array<float, 2>& itemB)
{
return itemA[0] < itemB[0];
}
);

// calculate discrepancy
float maxDifference = 0.0f;
for (size_t startIndexY = 0; startIndexY <= sortedYSamples.size(); ++startIndexY)
{
float startValueY = 0.0f;
if (startIndexY > 0)
startValueY = *(sortedYSamples.begin() + startIndexY - 1);

for (size_t startIndexX = 0; startIndexX <= sortedXSamples.size(); ++startIndexX)
{
float startValueX = 0.0f;
if (startIndexX > 0)
startValueX = *(sortedXSamples.begin() + startIndexX - 1);

for (size_t stopIndexY = startIndexY; stopIndexY <= sortedYSamples.size(); ++stopIndexY)
{
float stopValueY = 1.0f;
if (stopIndexY < sortedYSamples.size())
stopValueY = sortedYSamples[stopIndexY];

for (size_t stopIndexX = startIndexX; stopIndexX <= sortedXSamples.size(); ++stopIndexX)
{
float stopValueX = 1.0f;
if (stopIndexX < sortedXSamples.size())
stopValueX = sortedXSamples[stopIndexX];

// calculate area
float length = stopValueX - startValueX;
float height = stopValueY - startValueY;
float area = length * height;

// open interval (startValue, stopValue)
size_t countInside = 0;
for (const std::array<float, 2>& sample : samples)
{
if (sample[0] > startValueX &&
sample[1] > startValueY &&
sample[0] < stopValueX &&
sample[1] < stopValueY)
{
++countInside;
}
}
float density = float(countInside) / float(NUM_SAMPLES);
float difference = std::abs(density - area);
if (difference > maxDifference)
maxDifference = difference;

// closed interval [startValue, stopValue]
countInside = 0;
for (const std::array<float, 2>& sample : samples)
{
if (sample[0] >= startValueX &&
sample[1] >= startValueY &&
sample[0] <= stopValueX &&
sample[1] <= stopValueY)
{
++countInside;
}
}
density = float(countInside) / float(NUM_SAMPLES);
difference = std::abs(density - area);
if (difference > maxDifference)
maxDifference = difference;
}
}
}
}

return maxDifference;
}

//======================================================================================
void Test1D (const char* fileName, const std::array<float, NUM_SAMPLES>& samples)
{
// create and clear the image
SImageData image;
ImageInit(image, IMAGE1D_WIDTH + IMAGE_PAD * 2, IMAGE1D_HEIGHT + IMAGE_PAD * 2);

// setup the canvas
ImageClear(image, COLOR_FILL);

// calculate the discrepancy
#if CALCULATE_DISCREPANCY
float discrepancy = CalculateDiscrepancy1D(samples);
printf("%s Discrepancy = %0.2f%%\n", fileName, discrepancy*100.0f);
#endif

// draw the sample points
size_t i = 0;
for (float f: samples)
{
size_t pos = size_t(f * float(IMAGE1D_WIDTH)) + IMAGE_PAD;
ImageBox(image, pos, pos + 1, IMAGE1D_CENTERY - DATA_HEIGHT / 2, IMAGE1D_CENTERY + DATA_HEIGHT / 2, DataPointColor(i));
++i;
}

// draw the axes lines. horizontal first then the two vertical
ImageBox(image, IMAGE_PAD, IMAGE1D_WIDTH + IMAGE_PAD, IMAGE1D_CENTERY, IMAGE1D_CENTERY + 1, COLOR_AXIS);
ImageBox(image, IMAGE_PAD, IMAGE_PAD + 1, IMAGE1D_CENTERY - AXIS_HEIGHT / 2, IMAGE1D_CENTERY + AXIS_HEIGHT / 2, COLOR_AXIS);
ImageBox(image, IMAGE1D_WIDTH + IMAGE_PAD, IMAGE1D_WIDTH + IMAGE_PAD + 1, IMAGE1D_CENTERY - AXIS_HEIGHT / 2, IMAGE1D_CENTERY + AXIS_HEIGHT / 2, COLOR_AXIS);

// save the image
SaveImage(fileName, image);
}

//======================================================================================
void Test2D (const char* fileName, const std::array<std::array<float,2>, NUM_SAMPLES>& samples)
{
// create and clear the image
SImageData image;
ImageInit(image, IMAGE2D_WIDTH + IMAGE_PAD * 2, IMAGE2D_HEIGHT + IMAGE_PAD * 2);

// setup the canvas
ImageClear(image, COLOR_FILL);

// calculate the discrepancy
#if CALCULATE_DISCREPANCY
float discrepancy = CalculateDiscrepancy2D(samples);
printf("%s Discrepancy = %0.2f%%\n", fileName, discrepancy*100.0f);
#endif

// draw the sample points
size_t i = 0;
for (const std::array<float, 2>& sample : samples)
{
size_t posx = size_t(sample[0] * float(IMAGE2D_WIDTH)) + IMAGE_PAD;
size_t posy = size_t(sample[1] * float(IMAGE2D_HEIGHT)) + IMAGE_PAD;
ImageBox(image, posx - 1, posx + 1, posy - 1, posy + 1, DataPointColor(i));
++i;
}

// horizontal lines
ImageBox(image, IMAGE_PAD - 1, IMAGE2D_WIDTH + IMAGE_PAD + 1, IMAGE_PAD - 1, IMAGE_PAD, COLOR_AXIS);
ImageBox(image, IMAGE_PAD - 1, IMAGE2D_WIDTH + IMAGE_PAD + 1, IMAGE2D_HEIGHT + IMAGE_PAD, IMAGE2D_HEIGHT + IMAGE_PAD + 1, COLOR_AXIS);

// vertical lines
ImageBox(image, IMAGE_PAD - 1, IMAGE_PAD, IMAGE_PAD - 1, IMAGE2D_HEIGHT + IMAGE_PAD + 1, COLOR_AXIS);
ImageBox(image, IMAGE2D_WIDTH + IMAGE_PAD, IMAGE2D_WIDTH + IMAGE_PAD + 1, IMAGE_PAD - 1, IMAGE2D_HEIGHT + IMAGE_PAD + 1, COLOR_AXIS);

// save the image
SaveImage(fileName, image);
}

//======================================================================================
void TestUniform1D (bool jitter)
{
// calculate the sample points
const float c_cellSize = 1.0f / float(NUM_SAMPLES+1);
std::array<float, NUM_SAMPLES> samples;
for (size_t i = 0; i < NUM_SAMPLES; ++i)
{
samples[i] = float(i+1) / float(NUM_SAMPLES+1);
if (jitter)
samples[i] += RandomFloat(-c_cellSize*0.5f, c_cellSize*0.5f);
}

// save bitmap etc
if (jitter)
Test1D("1DUniformJitter.bmp", samples);
else
Test1D("1DUniform.bmp", samples);
}

//======================================================================================
void TestUniformRandom1D ()
{
// calculate the sample points
const float c_halfJitter = 1.0f / float((NUM_SAMPLES + 1) * 2);
std::array<float, NUM_SAMPLES> samples;
for (size_t i = 0; i < NUM_SAMPLES; ++i)
samples[i] = RandomFloat(0.0f, 1.0f);

// save bitmap etc
Test1D("1DUniformRandom.bmp", samples);
}

//======================================================================================
void TestSubRandomA1D (size_t numRegions)
{
const float c_randomRange = 1.0f / float(numRegions);

// calculate the sample points
const float c_halfJitter = 1.0f / float((NUM_SAMPLES + 1) * 2);
std::array<float, NUM_SAMPLES> samples;
for (size_t i = 0; i < NUM_SAMPLES; ++i)
{
samples[i] = RandomFloat(0.0f, c_randomRange);
samples[i] += float(i % numRegions) / float(numRegions);
}

// save bitmap etc
char fileName[256];
sprintf(fileName, "1DSubRandomA_%zu.bmp", numRegions);
Test1D(fileName, samples);
}

//======================================================================================
void TestSubRandomB1D ()
{
// calculate the sample points
std::array<float, NUM_SAMPLES> samples;
float sample = RandomFloat(0.0f, 0.5f);
for (size_t i = 0; i < NUM_SAMPLES; ++i)
{
sample = std::fmodf(sample + 0.5f + RandomFloat(0.0f, 0.5f), 1.0f);
samples[i] = sample;
}

// save bitmap etc
Test1D("1DSubRandomB.bmp", samples);
}

//======================================================================================
void TestVanDerCorput (size_t base)
{
// calculate the sample points
std::array<float, NUM_SAMPLES> samples;
for (size_t i = 0; i < NUM_SAMPLES; ++i)
{
samples[i] = 0.0f;
float denominator = float(base);
size_t n = i;
while (n > 0)
{
size_t multiplier = n % base;
samples[i] += float(multiplier) / denominator;
n = n / base;
denominator *= base;
}
}

// save bitmap etc
char fileName[256];
sprintf(fileName, "1DVanDerCorput_%zu.bmp", base);
Test1D(fileName, samples);
}

//======================================================================================
void TestIrrational1D (float irrational, float seed)
{
// calculate the sample points
std::array<float, NUM_SAMPLES> samples;
float sample = seed;
for (size_t i = 0; i < NUM_SAMPLES; ++i)
{
sample = std::fmodf(sample + irrational, 1.0f);
samples[i] = sample;
}

// save bitmap etc
char irrationalStr[256];
sprintf(irrationalStr, "%f", irrational);
char seedStr[256];
sprintf(seedStr, "%f", seed);
char fileName[256];
sprintf(fileName, "1DIrrational_%s_%s.bmp", &irrationalStr[2], &seedStr[2]);
Test1D(fileName, samples);
}

//======================================================================================
void TestSobol1D ()
{
// calculate the sample points
std::array<float, NUM_SAMPLES> samples;
size_t sampleInt = 0;
for (size_t i = 0; i < NUM_SAMPLES; ++i)
{
size_t ruler = Ruler(i + 1);
size_t direction = size_t(size_t(1) << size_t(31 - ruler));
sampleInt = sampleInt ^ direction;
samples[i] = float(sampleInt) / std::pow(2.0f, 32.0f);
}

// save bitmap etc
Test1D("1DSobol.bmp", samples);
}

//======================================================================================
void TestHammersley1D (size_t truncateBits)
{
// calculate the sample points
std::array<float, NUM_SAMPLES> samples;
size_t sampleInt = 0;
for (size_t i = 0; i < NUM_SAMPLES; ++i)
{
size_t n = i >> truncateBits;
float base = 1.0f / 2.0f;
samples[i] = 0.0f;
while (n)
{
if (n & 1)
samples[i] += base;
n /= 2;
base /= 2.0f;
}
}

// save bitmap etc
char fileName[256];
sprintf(fileName, "1DHammersley_%zu.bmp", truncateBits);
Test1D(fileName, samples);
}

//======================================================================================
float MinimumDistance1D (const std::array<float, NUM_SAMPLES>& samples, size_t numSamples, float x)
{
// Used by poisson.
// This returns the minimum distance that point (x) is away from the sample points, from [0, numSamples).
float minimumDistance = 0.0f;
for (size_t i = 0; i < numSamples; ++i)
{
float distance = std::abs(samples[i] - x);
if (i == 0 || distance < minimumDistance)
minimumDistance = distance;
}
return minimumDistance;
}

//======================================================================================
void TestPoisson1D ()
{
// every time we want to place a point, we generate this many points and choose the one farthest away from all the other points (largest minimum distance)
const size_t c_bestOfAttempts = 100;

// calculate the sample points
std::array<float, NUM_SAMPLES> samples;
for (size_t sampleIndex = 0; sampleIndex < NUM_SAMPLES; ++sampleIndex)
{
// generate some random points and keep the one that has the largest minimum distance from any of the existing points
float bestX = 0.0f;
float bestMinDistance = 0.0f;
for (size_t attempt = 0; attempt < c_bestOfAttempts; ++attempt)
{
float attemptX = RandomFloat(0.0f, 1.0f);
float minDistance = MinimumDistance1D(samples, sampleIndex, attemptX);

if (minDistance > bestMinDistance)
{
bestX = attemptX;
bestMinDistance = minDistance;
}
}
samples[sampleIndex] = bestX;
}

// save bitmap etc
Test1D("1DPoisson.bmp", samples);
}

//======================================================================================
void TestUniform2D (bool jitter)
{
// calculate the sample points
std::array<std::array<float, 2>, NUM_SAMPLES> samples;
const size_t c_oneSide = size_t(std::sqrt(NUM_SAMPLES));
const float c_cellSize = 1.0f / float(c_oneSide+1);
for (size_t iy = 0; iy < c_oneSide; ++iy)
{
for (size_t ix = 0; ix < c_oneSide; ++ix)
{
size_t sampleIndex = iy * c_oneSide + ix;

samples[sampleIndex][0] = float(ix + 1) / (float(c_oneSide + 1));
if (jitter)
samples[sampleIndex][0] += RandomFloat(-c_cellSize*0.5f, c_cellSize*0.5f);

samples[sampleIndex][1] = float(iy + 1) / (float(c_oneSide) + 1.0f);
if (jitter)
samples[sampleIndex][1] += RandomFloat(-c_cellSize*0.5f, c_cellSize*0.5f);
}
}

// save bitmap etc
if (jitter)
Test2D("2DUniformJitter.bmp", samples);
else
Test2D("2DUniform.bmp", samples);
}

//======================================================================================
void TestUniformRandom2D ()
{
// calculate the sample points
std::array<std::array<float, 2>, NUM_SAMPLES> samples;
const size_t c_oneSide = size_t(std::sqrt(NUM_SAMPLES));
const float c_halfJitter = 1.0f / float((c_oneSide + 1) * 2);
for (size_t i = 0; i < NUM_SAMPLES; ++i)
{
samples[i][0] = RandomFloat(0.0f, 1.0f);
samples[i][1] = RandomFloat(0.0f, 1.0f);
}

// save bitmap etc
Test2D("2DUniformRandom.bmp", samples);
}

//======================================================================================
void TestSubRandomA2D (size_t regionsX, size_t regionsY)
{
const float c_randomRangeX = 1.0f / float(regionsX);
const float c_randomRangeY = 1.0f / float(regionsY);

// calculate the sample points
std::array<std::array<float, 2>, NUM_SAMPLES> samples;
for (size_t i = 0; i < NUM_SAMPLES; ++i)
{
samples[i][0] = RandomFloat(0.0f, c_randomRangeX);
samples[i][0] += float(i % regionsX) / float(regionsX);

samples[i][1] = RandomFloat(0.0f, c_randomRangeY);
samples[i][1] += float(i % regionsY) / float(regionsY);
}

// save bitmap etc
char fileName[256];
sprintf(fileName, "2DSubRandomA_%zu_%zu.bmp", regionsX, regionsY);
Test2D(fileName, samples);
}

//======================================================================================
void TestSubRandomB2D ()
{
// calculate the sample points
float samplex = RandomFloat(0.0f, 0.5f);
float sampley = RandomFloat(0.0f, 0.5f);
std::array<std::array<float, 2>, NUM_SAMPLES> samples;
for (size_t i = 0; i < NUM_SAMPLES; ++i)
{
samplex = std::fmodf(samplex + 0.5f + RandomFloat(0.0f, 0.5f), 1.0f);
sampley = std::fmodf(sampley + 0.5f + RandomFloat(0.0f, 0.5f), 1.0f);
samples[i][0] = samplex;
samples[i][1] = sampley;
}

// save bitmap etc
Test2D("2DSubRandomB.bmp", samples);
}

//======================================================================================
void TestHalton (size_t basex, size_t basey)
{
// calculate the sample points
std::array<std::array<float, 2>, NUM_SAMPLES> samples;
const size_t c_oneSide = size_t(std::sqrt(NUM_SAMPLES));
const float c_halfJitter = 1.0f / float((c_oneSide + 1) * 2);
for (size_t i = 0; i < NUM_SAMPLES; ++i)
{
// x axis
samples[i][0] = 0.0f;
{
float denominator = float(basex);
size_t n = i;
while (n > 0)
{
size_t multiplier = n % basex;
samples[i][0] += float(multiplier) / denominator;
n = n / basex;
denominator *= basex;
}
}

// y axis
samples[i][1] = 0.0f;
{
float denominator = float(basey);
size_t n = i;
while (n > 0)
{
size_t multiplier = n % basey;
samples[i][1] += float(multiplier) / denominator;
n = n / basey;
denominator *= basey;
}
}
}

// save bitmap etc
char fileName[256];
sprintf(fileName, "2DHalton_%zu_%zu.bmp", basex, basey);
Test2D(fileName, samples);
}

//======================================================================================
void TestSobol2D ()
{
// calculate the sample points

// x axis
std::array<std::array<float, 2>, NUM_SAMPLES> samples;
size_t sampleInt = 0;
for (size_t i = 0; i < NUM_SAMPLES; ++i)
{
size_t ruler = Ruler(i + 1);
size_t direction = size_t(size_t(1) << size_t(31 - ruler));
sampleInt = sampleInt ^ direction;
samples[i][0] = float(sampleInt) / std::pow(2.0f, 32.0f);
}

// y axis
// uses numbers: new-joe-kuo-6.21201

// Direction numbers
std::vector<size_t> V;
V.resize((size_t)ceil(log((double)NUM_SAMPLES) / log(2.0)));
V[0] = size_t(1) << size_t(31);
for (size_t i = 1; i < V.size(); ++i)
V[i] = V[i - 1] ^ (V[i - 1] >> 1);

// Samples
sampleInt = 0;
for (size_t i = 0; i < NUM_SAMPLES; ++i) {
size_t ruler = Ruler(i + 1);
sampleInt = sampleInt ^ V[ruler];
samples[i][1] = float(sampleInt) / std::pow(2.0f, 32.0f);
}

// save bitmap etc
Test2D("2DSobol.bmp", samples);
}

//======================================================================================
void TestHammersley2D (size_t truncateBits)
{
// figure out how many bits we are working in.
size_t value = 1;
size_t numBits = 0;
while (value < NUM_SAMPLES)
{
value *= 2;
++numBits;
}

// calculate the sample points
std::array<std::array<float, 2>, NUM_SAMPLES> samples;
size_t sampleInt = 0;
for (size_t i = 0; i < NUM_SAMPLES; ++i)
{
// x axis
samples[i][0] = 0.0f;
{
size_t n = i >> truncateBits;
float base = 1.0f / 2.0f;
while (n)
{
if (n & 1)
samples[i][0] += base;
n /= 2;
base /= 2.0f;
}
}

// y axis
samples[i][1] = 0.0f;
{
size_t n = i >> truncateBits;
size_t mask = size_t(1) << (numBits - 1 - truncateBits);

float base = 1.0f / 2.0f;
while (mask)
{
if (n & mask)
samples[i][1] += base;
mask /= 2;
base /= 2.0f;
}
}
}

// save bitmap etc
char fileName[256];
sprintf(fileName, "2DHammersley_%zu.bmp", truncateBits);
Test2D(fileName, samples);
}

//======================================================================================
void TestRooks2D ()
{
// make and shuffle rook positions
std::random_device rd;
std::mt19937 mt(rd());
std::array<size_t, NUM_SAMPLES> rookPositions;
for (size_t i = 0; i < NUM_SAMPLES; ++i)
rookPositions[i] = i;
std::shuffle(rookPositions.begin(), rookPositions.end(), mt);

// calculate the sample points
std::array<std::array<float, 2>, NUM_SAMPLES> samples;
for (size_t i = 0; i < NUM_SAMPLES; ++i)
{
// x axis
samples[i][0] = float(rookPositions[i]) / float(NUM_SAMPLES-1);

// y axis
samples[i][1] = float(i) / float(NUM_SAMPLES - 1);
}

// save bitmap etc
Test2D("2DRooks.bmp", samples);
}

//======================================================================================
void TestIrrational2D (float irrationalx, float irrationaly, float seedx, float seedy)
{
// calculate the sample points
std::array<std::array<float, 2>, NUM_SAMPLES> samples;
float samplex = seedx;
float sampley = seedy;
for (size_t i = 0; i < NUM_SAMPLES; ++i)
{
samplex = std::fmodf(samplex + irrationalx, 1.0f);
sampley = std::fmodf(sampley + irrationaly, 1.0f);

samples[i][0] = samplex;
samples[i][1] = sampley;
}

// save bitmap etc
char irrationalxStr[256];
sprintf(irrationalxStr, "%f", irrationalx);
char irrationalyStr[256];
sprintf(irrationalyStr, "%f", irrationaly);
char seedxStr[256];
sprintf(seedxStr, "%f", seedx);
char seedyStr[256];
sprintf(seedyStr, "%f", seedy);
char fileName[256];
sprintf(fileName, "2DIrrational_%s_%s_%s_%s.bmp", &irrationalxStr[2], &irrationalyStr[2], &seedxStr[2], &seedyStr[2]);
Test2D(fileName, samples);
}

//======================================================================================
float MinimumDistance2D (const std::array<std::array<float, 2>, NUM_SAMPLES>& samples, size_t numSamples, float x, float y)
{
// Used by poisson.
// This returns the minimum distance that point (x,y) is away from the sample points, from [0, numSamples).
float minimumDistance = 0.0f;
for (size_t i = 0; i < numSamples; ++i)
{
float distance = Distance(samples[i][0], samples[i][1], x, y);
if (i == 0 || distance < minimumDistance)
minimumDistance = distance;
}
return minimumDistance;
}

//======================================================================================
void TestPoisson2D ()
{
// every time we want to place a point, we generate this many points and choose the one farthest away from all the other points (largest minimum distance)
const size_t c_bestOfAttempts = 100;

// calculate the sample points
std::array<std::array<float, 2>, NUM_SAMPLES> samples;
for (size_t sampleIndex = 0; sampleIndex < NUM_SAMPLES; ++sampleIndex)
{
// generate some random points and keep the one that has the largest minimum distance from any of the existing points
float bestX = 0.0f;
float bestY = 0.0f;
float bestMinDistance = 0.0f;
for (size_t attempt = 0; attempt < c_bestOfAttempts; ++attempt)
{
float attemptX = RandomFloat(0.0f, 1.0f);
float attemptY = RandomFloat(0.0f, 1.0f);
float minDistance = MinimumDistance2D(samples, sampleIndex, attemptX, attemptY);

if (minDistance > bestMinDistance)
{
bestX = attemptX;
bestY = attemptY;
bestMinDistance = minDistance;
}
}
samples[sampleIndex][0] = bestX;
samples[sampleIndex][1] = bestY;
}

// save bitmap etc
Test2D("2DPoisson.bmp", samples);
}

//======================================================================================
int main (int argc, char **argv)
{
    // 1D tests
    {
        TestUniform1D(false);
        TestUniform1D(true);

        TestUniformRandom1D();

        TestSubRandomA1D(2);
        TestSubRandomA1D(4);
        TestSubRandomA1D(8);
        TestSubRandomA1D(16);
        TestSubRandomA1D(32);

        TestSubRandomB1D();

        TestVanDerCorput(2);
        TestVanDerCorput(3);
        TestVanDerCorput(4);
        TestVanDerCorput(5);

        // golden ratio mod 1 aka (sqrt(5) - 1)/2
        TestIrrational1D(0.618034f, 0.0f);
        TestIrrational1D(0.618034f, 0.385180f);
        TestIrrational1D(0.618034f, 0.775719f);
        TestIrrational1D(0.618034f, 0.287194f);

        // sqrt(2) - 1
        TestIrrational1D(0.414214f, 0.0f);
        TestIrrational1D(0.414214f, 0.385180f);
        TestIrrational1D(0.414214f, 0.775719f);
        TestIrrational1D(0.414214f, 0.287194f);

        // PI mod 1
        TestIrrational1D(0.141593f, 0.0f);
        TestIrrational1D(0.141593f, 0.385180f);
        TestIrrational1D(0.141593f, 0.775719f);
        TestIrrational1D(0.141593f, 0.287194f);

        TestSobol1D();

        TestHammersley1D(0);
        TestHammersley1D(1);
        TestHammersley1D(2);

        TestPoisson1D();
    }

    // 2D tests
    {
        TestUniform2D(false);
        TestUniform2D(true);

        TestUniformRandom2D();

        TestSubRandomA2D(2, 2);
        TestSubRandomA2D(2, 3);
        TestSubRandomA2D(3, 11);
        TestSubRandomA2D(3, 97);

        TestSubRandomB2D();

        TestHalton(2, 3);
        TestHalton(5, 7);
        TestHalton(13, 9);

        TestSobol2D();

        TestHammersley2D(0);
        TestHammersley2D(1);
        TestHammersley2D(2);

        TestRooks2D();

        // X axis = golden ratio mod 1 aka (sqrt(5)-1)/2
        // Y axis = sqrt(2) mod 1
        TestIrrational2D(0.618034f, 0.414214f, 0.0f, 0.0f);
        TestIrrational2D(0.618034f, 0.414214f, 0.775719f, 0.264045f);

        // X axis = sqrt(2) mod 1
        // Y axis = sqrt(3) mod 1
        TestIrrational2D(std::fmodf((float)std::sqrt(2.0f), 1.0f), std::fmodf((float)std::sqrt(3.0f), 1.0f), 0.0f, 0.0f);
        TestIrrational2D(std::fmodf((float)std::sqrt(2.0f), 1.0f), std::fmodf((float)std::sqrt(3.0f), 1.0f), 0.775719f, 0.264045f);

        TestPoisson2D();
    }

    #if CALCULATE_DISCREPANCY
    printf("\n");
    system("pause");
    #endif
}


# Solving N equations and N unknowns: The Fine Print (Gauss Jordan Elimination)

In basic algebra we were taught that if we have three unknowns (variables), it takes three equations to solve for them.

There’s some fine print though that isn’t talked about until quite a bit later.

Let’s have a look at three unknowns in two equations:

$A + B + C = 2 \\ B = 5$

If we just need a third equation to solve this, why not just modify the second equation to make a third?

$-B = -5$

That obviously doesn’t work, because it doesn’t add any new information! If you try it out, you’ll find that adding that equation doesn’t get you any closer to solving for the variables.

So, it takes three equations to solve for three unknowns, but the three equations have to provide unique, meaningful information. That is the fine print.

How can we know if an equation provides unique, meaningful information though?

It turns out that linear algebra gives us a neat technique for simplifying a system of equations. It actually solves for individual variables if it’s able to, and also gets rid of redundant equations that don’t add any new information.

The simplest form is called the Reduced Row Echelon Form (Wikipedia), which you may also see abbreviated as “rref” (perhaps a bit of a confusing term for programmers). To get it, you put the equations into a matrix and then perform an algorithm, such as Gauss–Jordan elimination (Wikipedia).

# Equations as a Matrix

Putting a set of equations into a matrix is really straightforward.

Each row of a matrix is a separate equation, and each column represents the coefficient of a variable.

Let’s see how with this set of equations:

$3x + y = 5\\ 2y = 7\\ y + z = 14$

Not every equation has every variable in it, so let’s fix that by putting in zero terms for the missing variables, and let’s make the one terms explicit as well:

$3x + 1y + 0z = 5\\ 0x + 2y + 0z = 7\\ 0x + 1y + 1z = 14$

Putting those equations into a matrix looks like this:

$\left[\begin{array}{rrr} 3 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 1 & 1 \end{array}\right]$

If you also include the constants on the right side of the equation, you get what is called an augmented matrix, which looks like this:

$\left[\begin{array}{rrr|r} 3 & 1 & 0 & 5 \\ 0 & 2 & 0 & 7 \\ 0 & 1 & 1 & 14 \end{array}\right]$

# Reduced Row Echelon Form

Wikipedia explains the reduced row echelon form this way:

• all nonzero rows (rows with at least one nonzero element) are above any rows of all zeroes (all zero rows, if any, belong at the bottom of the matrix), and
• the leading coefficient (the first nonzero number from the left, also called the pivot) of a nonzero row is always strictly to the right of the leading coefficient of the row above it.
• Every leading coefficient is 1 and is the only nonzero entry in its column.

This is an example of a 3×5 matrix in reduced row echelon form:
$\left[\begin{array}{rrrrr} 1 & 0 & a_1 & 0 & b_1 \\ 0 & 1 & a_2 & 0 & b_2 \\ 0 & 0 & 0 & 1 & b_3 \end{array}\right]$

Basically, everything below the diagonal needs to be zero, and the leading number in each nonzero row needs to be one.

Looking back at the augmented matrix we made:

$\left[\begin{array}{rrr|r} 3 & 1 & 0 & 5 \\ 0 & 2 & 0 & 7 \\ 0 & 1 & 1 & 14 \end{array}\right]$

If we put it into reduced row echelon form, we get this:

$\left[\begin{array}{rrr|r} 1 & 0 & 0 & 0.5 \\ 0 & 1 & 0 & 3.5 \\ 0 & 0 & 1 & 10.5 \end{array}\right]$

There’s something really neat about the reduced row echelon form. If we take the above augmented matrix and turn it back into equations, look what we get:

$1x + 0y + 0z = 0.5\\ 0x + 1y + 0z = 3.5\\ 0x + 0y + 1z = 10.5$

Or if we simplify that:

$x = 0.5\\ y = 3.5\\ z = 10.5$

Putting it into reduced row echelon form simplified our set of equations so much that it actually solved for our variables. Neat!

How do we put a matrix into rref? We can use Gauss–Jordan elimination.

# Gauss–Jordan Elimination

Gauss–Jordan elimination is a way of performing operations on the rows of a matrix to manipulate it into the desired form.

It’s often explained that there are three row operations you can do:

• Type 1: Swap the positions of two rows.
• Type 2: Multiply a row by a nonzero scalar.
• Type 3: Add to one row a scalar multiple of another.

You might notice that the first rule can technically be achieved through combinations of the other two. I find that easier to remember, maybe you will too.

The algorithm for getting the rref is actually pretty simple.

1. Starting with the first column of the matrix, find a row which has a non zero in that column, and make that row be the first row by swapping it with the first row.
2. Multiply the first row by a value so that the first column has a 1 in it.
3. Subtract a multiple of the first row from every other row in the matrix so that they have a zero in the first column.

You’ve now handled one column (one variable) so move onto the next.

1. Continuing on, we consider the second column. Find a row which has a non zero in that column and make that row be the second row by swapping it with the second row.
2. Multiply the second row by a value so that the second column has a 1 in it.
3. Subtract a multiple of the second row from every other row in the matrix so that they have a zero in the second column.

You repeat this process until you either run out of rows or columns, at which point you are done.

Note that if you ever find a column that has only zeros in it (from the current row down), you just skip that column and move on to the next one, without advancing the row.

Let’s work through the example augmented matrix to see how we got it into rref. Starting with this:

$\left[\begin{array}{rrr|r} 3 & 1 & 0 & 5 \\ 0 & 2 & 0 & 7 \\ 0 & 1 & 1 & 14 \end{array}\right]$

We already have a non zero in the first column, so we multiply the top row by 1/3 to get this:

$\left[\begin{array}{rrr|r} 1 & 0.3333 & 0 & 1.6666 \\ 0 & 2 & 0 & 7 \\ 0 & 1 & 1 & 14 \end{array}\right]$

All the other rows have a zero in the first column so we move to the second row and the second column. The second row already has a non zero in the second column, so we multiply the second row by 1/2 to get this:

$\left[\begin{array}{rrr|r} 1 & 0.3333 & 0 & 1.6666 \\ 0 & 1 & 0 & 3.5 \\ 0 & 1 & 1 & 14 \end{array}\right]$

To make sure the second row is the only row that has a non zero in the second column, we subtract the second row times 1/3 from the first row. We also subtract the second row from the third row. That gives us this:

$\left[\begin{array}{rrr|r} 1 & 0 & 0 & 0.5 \\ 0 & 1 & 0 & 3.5 \\ 0 & 0 & 1 & 10.5 \end{array}\right]$

Since the third row has a 1 in the third column, and all other rows have a 0 in that column we are done.

That’s all there is to it! We put the matrix into rref, and we also solved the set of equations. Neat huh?

You may notice that the ultimate rref of a square matrix is just the identity matrix. This is true unless the equations can't be fully solved.

# Overdetermined, Underdetermined & Inconsistent Equations

Systems of equations are overdetermined when they have more equations than unknowns, like the below which has three equations and two unknowns:

$x + y = 3 \\ x = 1 \\ y = 2 \\$

Putting that into (augmented) matrix form gives you this:

$\left[\begin{array}{rr|r} 1 & 1 & 3 \\ 1 & 0 & 1 \\ 0 & 1 & 2 \end{array}\right]$

If you put that into rref, you end up with this:

$\left[\begin{array}{rr|r} 1 & 0 & 1 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{array}\right]$

The last row became zeroes, which shows us that there was redundant info in the system of equations that disappeared. We can easily see that x = 1 and y = 2, and that satisfies all three equations.

Just like we talked about in the opening of this post, if you have equations that don’t add useful information beyond what the other equations already give, it will disappear when you put it into rref. That made our over-determined system become just a determined system.

What happens though if we change the third row in the overdetermined system to be something else? For instance, we can say y=10 instead of y=2:

$x + y = 3 \\ x = 1 \\ y = 10 \\$

The augmented matrix for that is this:

$\left[\begin{array}{rr|r} 1 & 1 & 3 \\ 1 & 0 & 1 \\ 0 & 1 & 10 \end{array}\right]$

If we put that in rref, we get the identity matrix out which seems like everything is ok:

$\left[\begin{array}{rr|r} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right]$

However, if we turn it back into a set of equations, we can see that we have a problem:

$x = 0 \\ y = 0 \\ 0 = 1 \\$

The result says that 0 = 1, which is not true. Having a row of “0 = 1” in rref is how you detect that a system of equations is inconsistent, or in other words, that the equations give contradictory information.

A system of equations can also be underdetermined, meaning there isn't enough information to solve the equations. Let's use the example from the beginning of the post:

$A + B + C = 2 \\ B = 5 \\$

In an augmented matrix, that looks like this:

$\left[\begin{array}{rrr|r} 1 & 1 & 1 & 2 \\ 0 & 1 & 0 & 5 \\ \end{array}\right]$

Putting that in rref we get this:

$\left[\begin{array}{rrr|r} 1 & 0 & 1 & -3 \\ 0 & 1 & 0 & 5 \\ \end{array}\right]$

Converting the matrix back into equations we get this:

$A + C = -3 \\ B = 5 \\$

This says there isn’t enough information to fully solve the equations, and shows how A and C are related, even though B is completely determined.

Note that another way of looking at this is that “A” and “C” are “free variables”. That means that if your equations specify constraints, you are free to choose a value for either A or C. If you choose a value for one, the other becomes defined. B is not a free variable because its value is determined.

Let’s finish the example from the beginning of the post, showing what happens when we “make up” an equation by transforming one of the equations we already have:

$A + B + C = 2 \\ B = 5\\ -B = -5$

The augmented matrix looks like this:

$\left[\begin{array}{rrr|r} 1 & 1 & 1 & 2 \\ 0 & 1 & 0 & 5 \\ 0 & -1 & 0 & -5 \\ \end{array}\right]$

Putting it in rref, we get this:

$\left[\begin{array}{rrr|r} 1 & 0 & 1 & -3 \\ 0 & 1 & 0 & 5 \\ 0 & 0 & 0 & 0 \\ \end{array}\right]$

As you can see, the rref matrix is the same as it was without the extra “made up” equation, apart from the extra row of zeros in the result.

The number of nonzero rows in a matrix in rref is known as the rank of the matrix. In these last two examples, the rank was two in both cases. That means you can tell whether adding an equation to a system of equations adds any new, meaningful information by seeing if it changes the rank of the matrix for the set of equations. If the rank is the same before and after adding the new equation, it doesn't add anything new. If the rank does change, it does add new information.

This concept of “adding new, meaningful information” actually has a formalized term: linear independence. If a new equation is linearly independent from the other equations in the system, it will change the rank of the rref matrix, else it won’t.

The rank of a matrix for a system of equations just tells you the number of linearly independent equations there actually are, and actually gives you what those equations are in their simplest form.

Lastly I wanted to mention that the idea of a system of equations being inconsistent is completely separate from the idea of a system of equations being under determined or over determined. They can be both over determined and inconsistent, under determined and inconsistent, over determined and consistent or under determined and consistent . The two ideas are completely separate, unrelated things.

# Inverting a Matrix

Interestingly, Gauss-Jordan elimination is also a common way for efficiently inverting a matrix!

How you do that is make an augmented matrix where on the left side you have the matrix you want to invert, and on the right side you have the identity matrix.

Let’s invert a matrix I made up pretty much at random:

$\left[\begin{array}{rrr|rrr} 1 & 0 & 1 & 1 & 0 & 0 \\ 0 & 3 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1\\ \end{array}\right]$

Putting that matrix in rref, we get this:

$\left[\begin{array}{rrr|rrr} 1 & 0 & 0 & 1 & 0 & -1 \\ 0 & 1 & 0 & 0 & 0.3333 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1\\ \end{array}\right]$

The matrix on the right is the inverse of the original matrix we had on the left!

You can double check by using an online matrix inverse calculator if you want: Inverse Matrix Calculator

Note that not all matrices are invertible though! If the left side of the rref doesn't come out to the identity matrix, the original matrix wasn't invertible.

# Solving Mx = b

Let’s say that you have two vectors x and b, and a matrix M. Let’s say that we know the matrix M and the vector b, and that we are trying to solve for the vector x.

This comes up more often than you might suspect, including when doing “least squares fitting” of an equation to a set of data points (more info on that: Incremental Least Squares Curve Fitting).

One way to solve this equation would be to calculate the inverse matrix of M and multiply that by vector b to get vector x:

$Mx = b\\ x = M^{-1}b$

However, Gauss-Jordan elimination can help us here too.

If we make an augmented matrix where on the left we have M, and on the right we have b, we can put the matrix into rref, which will essentially multiply vector b by the inverse of M, leaving us with the vector x.

For instance, on the left is our matrix M that scales x,y,z by 2. On the right is our vector b, which is the matrix M times our unknown vector x:

$\left[\begin{array}{rrr|r} 2 & 0 & 0 & 2 \\ 0 & 2 & 0 & 4 \\ 0 & 0 & 2 & 8 \\ \end{array}\right]$

Putting that into rref form we get this:

$\left[\begin{array}{rrr|r} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & 1 & 4 \\ \end{array}\right]$

From this, we know that the value of vector x is the right side of the augmented matrix: (1,2,4)

This only works when the matrix is invertible (aka when the rref goes to an identity matrix).

# Source Code

Here is some C++ source code which does Gauss-Jordan elimination. It’s written mainly to be readable, not performant!

#include <stdio.h>
#include <array>
#include <vector>
#include <assert.h>
#include <initializer_list>
#include <utility>

// Define a vector as an array of floats
template<size_t N>
using TVector = std::array<float, N>;

// Define a matrix as an array of vectors
template<size_t M, size_t N>
using TMatrix = std::array<TVector<N>, M>;

// Helper function to fill out a matrix
template <size_t M, size_t N>
TMatrix<M, N> MakeMatrix (std::initializer_list<std::initializer_list<float>> matrixData)
{
    TMatrix<M, N> matrix;

    size_t m = 0;
    assert(matrixData.size() == M);
    for (const std::initializer_list<float>& rowData : matrixData)
    {
        assert(rowData.size() == N);
        size_t n = 0;
        for (float value : rowData)
        {
            matrix[m][n] = value;
            ++n;
        }
        ++m;
    }

    return matrix;
}

// Make a specific row have a 1 in the colIndex, and make all other rows have 0 there
template <size_t M, size_t N>
bool MakeRowClaimVariable (TMatrix<M, N>& matrix, size_t rowIndex, size_t colIndex)
{
    // Find a row that has a non zero value in this column and swap it with this row
    {
        // Find a row that has a non zero value
        size_t nonZeroRowIndex = rowIndex;
        while (nonZeroRowIndex < M && matrix[nonZeroRowIndex][colIndex] == 0.0f)
            ++nonZeroRowIndex;

        // If there isn't one, nothing to do
        if (nonZeroRowIndex == M)
            return false;

        // Otherwise, swap the row
        if (rowIndex != nonZeroRowIndex)
            std::swap(matrix[rowIndex], matrix[nonZeroRowIndex]);
    }

    // Scale this row so that it has a leading one
    float scale = 1.0f / matrix[rowIndex][colIndex];
    for (size_t normalizeColIndex = colIndex; normalizeColIndex < N; ++normalizeColIndex)
        matrix[rowIndex][normalizeColIndex] *= scale;

    // Make sure all rows except this one have a zero in this column.
    // Do this by subtracting this row from other rows, multiplied by a multiple that makes the column disappear.
    for (size_t eliminateRowIndex = 0; eliminateRowIndex < M; ++eliminateRowIndex)
    {
        if (eliminateRowIndex == rowIndex)
            continue;

        float scale = matrix[eliminateRowIndex][colIndex];
        for (size_t eliminateColIndex = 0; eliminateColIndex < N; ++eliminateColIndex)
            matrix[eliminateRowIndex][eliminateColIndex] -= matrix[rowIndex][eliminateColIndex] * scale;
    }

    return true;
}

// make matrix into reduced row echelon form
template <size_t M, size_t N>
void GaussJordanElimination (TMatrix<M, N>& matrix)
{
    size_t rowIndex = 0;
    for (size_t colIndex = 0; colIndex < N; ++colIndex)
    {
        if (MakeRowClaimVariable(matrix, rowIndex, colIndex))
        {
            ++rowIndex;
            if (rowIndex == M)
                return;
        }
    }
}

int main (int argc, char **argv)
{
    auto matrix = MakeMatrix<3, 4>(
    {
        { 2.0f, 0.0f, 0.0f, 2.0f },
        { 0.0f, 2.0f, 0.0f, 4.0f },
        { 0.0f, 0.0f, 2.0f, 8.0f },
    });

    GaussJordanElimination(matrix);

    return 0;
}


I hope you enjoyed this post and/or learned something from it. This is a precursor to an interesting (but maybe obscure) topic for my next blog post, which involves a graphics / gamedev thing.

Any comments, questions or corrections, let me know in the comments below or on twitter at @Atrix256