# Generating Blue Noise Sample Points With Mitchell’s Best Candidate Algorithm

Lately I’ve been eyeball deep in noise, ordered dithering and related topics, and have been learning some really interesting things.

As the information coalesces it’ll become apparent whether there is going to be a huge mega post coming, or if there will be several smaller ones.

In the meantime, I wanted to share this bite sized chunk of information.

Three sampling patterns that are often used (say, when numerically integrating a lighting function for graphics/rendering purposes) are: regular samples, white noise samples, and blue noise samples.

## Regular Sampling

Regular sampling just means evenly spacing the samples. This sampling strategy can suffer from aliasing, but gives good coverage of the sample space.
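Regular sampling is simple enough to sketch in a few lines. Here is a minimal illustration (the function and parameter names are mine, and it assumes the sample count is a perfect square, like the 256 = 16x16 counts used in this post):

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Place sampleCount points on an evenly spaced grid covering a size x size image.
// Assumes sampleCount is a perfect square (256 = 16x16, 1024 = 32x32, ...).
std::vector<std::pair<int, int>> RegularPoints(int sampleCount, int size)
{
    int side = int(std::sqrt(float(sampleCount)));
    int spacing = size / side;
    std::vector<std::pair<int, int>> points;
    points.reserve(size_t(sampleCount));
    for (int y = 0; y < side; ++y)
        for (int x = 0; x < side; ++x)
            points.push_back({ x * spacing, y * spacing });
    return points;
}
```

The even spacing is what gives the good coverage, and also what causes the aliasing: every sample lands on the same repeating lattice.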

Here are 256, 1024 and 4096 samples, those samples taken from a source image, and the DFT (frequency amplitude) of those samples.

## White Noise Sampling

White noise sampling just chooses random numbers for where to place the samples.
The random numbers are uniformly distributed, meaning they are just plain vanilla random numbers where each number is equally likely to come up.

White noise sampling can make for noisy results, and suffers from the fact that white noise sample points can clump together and leave empty space. Clumped sample points give redundant information while empty space is information that you are lacking in the sample space. In general, noise is often desired over aliasing though, so white noise samples are generally preferred over regular sampling. Monte Carlo integration also requires random samples.

White noise is called white noise because it contains all frequencies approximately evenly, like how white light is made up of all frequencies of light.
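A white noise point set is nothing more than independent uniform draws. A minimal sketch (names are mine, not from the full source at the bottom of the post):

```cpp
#include <random>
#include <utility>
#include <vector>

// Generate 'count' white noise sample points in [0, size) x [0, size).
// Each point is an independent uniform draw, so nothing prevents points
// from clumping together or leaving empty gaps.
std::vector<std::pair<int, int>> WhiteNoisePoints(int count, int size, unsigned int seed)
{
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> dist(0, size - 1);
    std::vector<std::pair<int, int>> points;
    points.reserve(size_t(count));
    for (int i = 0; i < count; ++i)
        points.push_back({ dist(rng), dist(rng) });
    return points;
}
```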

Here are 256, 1024 and 4096 samples, those samples taken from a source image, and the DFT (frequency amplitude) of those samples.

## Blue Noise Sampling

Lastly is blue noise sampling which is somewhere between regular sampling and white noise sampling. Blue noise sampling has randomly placed points like white noise does, but the randomly placed points are approximately evenly spaced, which is more like regular sampling.

Things like low discrepancy sequences, stratified sampling, and jittered regular sampling mimic blue noise, and are often a cheaper alternative when an approximation is acceptable. More info on low discrepancy sequences is available on my post here: When Random Numbers Are Too Random: Low Discrepancy Sequences

Blue noise is called blue noise because it contains higher amounts of higher frequencies and lower amounts of lower frequencies. This is like blue light, which is made up of higher frequency (bluer) light.

Here are 256, 1024 and 4096 samples, those samples taken from a source image, and the DFT (frequency amplitude) of those samples.

## Comparison

Imagine you were a robot with 4096 light/color sensors. Which of the arrangements below do you think would give you the best information about the world around you with that number of sensors?

To me, the regular grid and the blue noise are a bit of a coin toss, while the white noise version is awful.

The regular grid does seem to show straight lined things (the road, sidewalk, etc.) better, but it makes non straight lined things – like the trees – look blocky. The blue noise grid does the reverse: it makes straight things look wavy, while making it easier to see the true shape of non straight things.

Mathematically, blue noise is superior sampling, so maybe this example isn’t the best way to show the value of blue noise.

Here is the real image.

Apparently the photo-receptors in our eyes are arranged in a blue noise pattern. Some people say this is why blue noise is more agreeable with our perception. But since blue noise also helps numerical integration converge faster at lower sample counts (compared to white noise – which wins out at larger sample counts, by the way!), it seems like there is a more fundamental reason which would create an evolutionary advantage for them to be arranged that way in the first place.

## Generating Blue Noise Sample Points

The obvious question is: I know how to make uniform and random sample points. How do I make blue noise sample points?

There are multiple ways to do it, but a method that I find very easy to understand and to implement is “Mitchell’s Best Candidate Algorithm”.

The algorithm is as follows:

1. Place a random sample point as the first sample point.
2. Generate some number of random dots as candidates to be the next sample point.
3. Whichever of these dots is farthest away from the closest existing sample point is the winner. Place that dot as the new sample point.
4. GOTO 2 and Repeat until you have as many sample points as you want.

The algorithm is pretty simple, but there are two other important details that are needed to give you good results:

• When calculating distance between dots, you need to consider wrap around. More info on how to do that here: Calculating the Distance Between Points in “Wrap Around” (Toroidal) Space.
• The number of candidates you generate should scale up with the number of existing sample points. As the original paper describing this technique says, doing that helps ensure that the statistics of your sample points stay constant as the number of sample points changes.
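Putting the steps and both details together, the core of the algorithm can be sketched like this. It's a minimal version on the unit square using toroidal squared distance; the names are mine rather than taken from the full source at the bottom of the post:

```cpp
#include <algorithm>
#include <cfloat>
#include <cmath>
#include <random>
#include <vector>

struct Point { float x, y; };

// Toroidal (wrap around) SQUARED distance on the unit square.
float ToroidalDistSq(Point a, Point b)
{
    float dx = std::fabs(a.x - b.x);
    float dy = std::fabs(a.y - b.y);
    if (dx > 0.5f) dx = 1.0f - dx;  // wrapping around is shorter
    if (dy > 0.5f) dy = 1.0f - dy;
    return dx * dx + dy * dy;
}

// Mitchell's best candidate: each new point is the candidate farthest from
// its closest existing sample. The candidate count scales with the number
// of existing samples via the "m" multiplier described in the post.
std::vector<Point> BestCandidate(size_t numSamples, size_t m, unsigned int seed)
{
    std::mt19937 rng(seed);
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);
    std::vector<Point> samples;
    samples.reserve(numSamples);
    while (samples.size() < numSamples)
    {
        size_t numCandidates = samples.size() * m + 1;
        Point best = {};
        float bestDist = -1.0f;
        for (size_t c = 0; c < numCandidates; ++c)
        {
            Point candidate = { dist(rng), dist(rng) };

            // distance from this candidate to its closest existing sample
            float minDist = FLT_MAX;
            for (const Point& s : samples)
                minDist = std::min(minDist, ToroidalDistSq(candidate, s));

            // the candidate with the LARGEST such distance wins
            if (minDist > bestDist)
            {
                bestDist = minDist;
                best = candidate;
            }
        }
        samples.push_back(best);
    }
    return samples;
}
```

Note that the very first iteration has zero existing samples and one candidate, so step 1 (place a random first point) falls out of the loop for free.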

When I first tried to get this algorithm working, I was generating a fixed number of candidates each time, which gave me pretty terrible results. However, when I instead generated NumSamples*m+1 candidates each time – the number of existing sample points times a constant “m”, plus one – I got much better results, even when m was 1! (Note: m=0 is the same as white noise in this image.)

Related Computer Graphics Stack Exchange question: Mitchell’s Best Candidate Algorithm

If you store existing sample points in a grid, you can speed up the algorithm since it will be faster to find the closest point to a candidate. In the implementation on this post I didn’t do that.
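Here is one way that grid could look – a sketch of my own (it is not used in the source code below), bucketing points on the unit square and searching outward in rings of cells until no unvisited ring can possibly hold a closer point:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <vector>

struct Point { float x, y; };

// A coarse uniform grid over [0,1)^2 that accelerates "distance to the
// closest stored point" queries, with toroidal wrap around.
struct SampleGrid
{
    int cells;                                 // grid is cells x cells
    std::vector<std::vector<Point>> buckets;

    explicit SampleGrid(int c) : cells(c), buckets(size_t(c) * size_t(c)) {}

    int CellIndex(float v) const { return std::min(cells - 1, int(v * float(cells))); }

    void Insert(Point p) { buckets[size_t(CellIndex(p.y)) * cells + CellIndex(p.x)].push_back(p); }

    // Toroidal SQUARED distance on the unit square.
    static float DistSq(Point a, Point b)
    {
        float dx = std::fabs(a.x - b.x); if (dx > 0.5f) dx = 1.0f - dx;
        float dy = std::fabs(a.y - b.y); if (dy > 0.5f) dy = 1.0f - dy;
        return dx * dx + dy * dy;
    }

    // Squared distance from p to its nearest stored sample. Searches ring 0
    // (p's own cell), then ring 1, etc. A point in ring r is at least (r-1)
    // cell widths away, so once the best hit beats that bound we can stop.
    float NearestDistSq(Point p) const
    {
        float best = 1e30f;
        int cx = CellIndex(p.x), cy = CellIndex(p.y);
        float cellSize = 1.0f / float(cells);
        for (int ring = 0; ring <= cells / 2; ++ring)
        {
            float minPossible = float(ring - 1) * cellSize;
            if (ring > 1 && best < minPossible * minPossible)
                break;
            for (int dy = -ring; dy <= ring; ++dy)
            for (int dx = -ring; dx <= ring; ++dx)
            {
                if (std::max(std::abs(dx), std::abs(dy)) != ring)
                    continue;  // only the ring's border cells are new
                int ix = ((cx + dx) % cells + cells) % cells;  // wrap around
                int iy = ((cy + dy) % cells + cells) % cells;
                for (const Point& s : buckets[size_t(iy) * cells + ix])
                    best = std::min(best, DistSq(p, s));
            }
        }
        return best;
    }
};
```

This turns the per-candidate cost from "scan every existing sample" into "scan a handful of nearby cells", which matters as the sample count grows.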

You may be able to multithread this algorithm, but I haven’t tried it. The idea is that if you needed to make and test N candidates, you could split that work among M threads, so long as N was large enough to make it worthwhile. I didn’t do that in this post.

Lastly, instead of working with distance, you can work with SQUARED distance to avoid many unnecessary square root calculations. The example code here does that optimization.

The 1991 paper that described this technique:
Spectrally optimal sampling for distribution ray tracing

Another interesting link on this algorithm:
Mitchell’s Best-Candidate

This algorithm isn’t that great for making dense sample points, or for use in dithering / stippling. Look for a future blog post about those use cases, but for now, this is a great resource:
Free Blue Noise Textures (and info / examples on blue noise texture usage)

A physics based approach to blue noise distributed samples:
Electrostatic Half Toning

A neat read on the “void and cluster” method for generating blue noise, and also a good read on what ordered dithering is all about:
The void and cluster method for dither array generation

## Source Code

Here is some simple standalone C++ source code which can generate blue noise sample points, and which generated the images used in this post.

It’s also on github (along with the source image) at https://github.com/Atrix256/RandomCode/tree/master/Mitchell

```cpp
#define _CRT_SECURE_NO_WARNINGS

#include <windows.h>  // for bitmap headers.  Sorry non windows people!
#include <stdint.h>
#include <stdio.h>
#include <vector>
#include <complex>
#include <atomic>
#include <thread>
#include <random>
#include <array>
#include <algorithm>
#include <cfloat>
#include <cmath>

typedef uint8_t uint8;
typedef int64_t int64;

const float c_pi = 3.14159265359f;

//======================================================================================
struct SImageData
{
SImageData ()
: m_width(0)
, m_height(0)
{ }

size_t m_width;
size_t m_height;
size_t m_pitch;
std::vector<uint8> m_pixels;
};

SImageData s_stippleImage;

//======================================================================================
struct SColor
{
SColor (uint8 _R = 0, uint8 _G = 0, uint8 _B = 0)
: R(_R), G(_G), B(_B)
{ }

inline void Set (uint8 _R, uint8 _G, uint8 _B)
{
R = _R;
G = _G;
B = _B;
}

uint8 B, G, R;
};

//======================================================================================
struct SImageDataComplex
{
SImageDataComplex ()
: m_width(0)
, m_height(0)
{ }

size_t m_width;
size_t m_height;
std::vector<std::complex<float>> m_pixels;
};

//======================================================================================
std::complex<float> DFTPixel (const SImageData &srcImage, size_t K, size_t L)
{
std::complex<float> ret(0.0f, 0.0f);

for (size_t x = 0; x < srcImage.m_width; ++x)
{
for (size_t y = 0; y < srcImage.m_height; ++y)
{
// Get the pixel value (assuming greyscale) and convert it to [0,1] space
const uint8 *src = &srcImage.m_pixels[(y * srcImage.m_pitch) + x * 3];
float grey = float(*src) / 255.0f;

// Add to the sum of the return value
float v = float(K * x) / float(srcImage.m_width);
v += float(L * y) / float(srcImage.m_height);
ret += std::complex<float>(grey, 0.0f) * std::polar<float>(1.0f, -2.0f * c_pi * v);
}
}

return ret;
}

//======================================================================================
void DFTImage (const SImageData &srcImage, SImageDataComplex &destImage)
{
// NOTE: this function assumes srcImage is greyscale, so works on only the red component of srcImage.
// ImageToGrey() will convert an image to greyscale.

// size the output dft data
destImage.m_width = srcImage.m_width;
destImage.m_height = srcImage.m_height;
destImage.m_pixels.resize(destImage.m_width*destImage.m_height);

// calculate 2d dft (brute force, not using fast fourier transform) multithreadedly
std::atomic<size_t> nextRow(0);
size_t numThreads = std::thread::hardware_concurrency();
std::vector<std::thread> threads;
threads.resize(numThreads);
for (std::thread& thread : threads)
{
thread = std::thread(
[&] ()
{
size_t row = nextRow.fetch_add(1);
bool reportProgress = (row == 0);
int lastPercent = -1;

while (row < srcImage.m_height)
{
// calculate the DFT for every pixel / frequency in this row
for (size_t x = 0; x < srcImage.m_width; ++x)
{
destImage.m_pixels[row * destImage.m_width + x] = DFTPixel(srcImage, x, row);
}

// report progress if we should
if (reportProgress)
{
int percent = int(100.0f * float(row) / float(srcImage.m_height));
if (lastPercent != percent)
{
lastPercent = percent;
printf("            \rDFT: %i%%", lastPercent);
}
}

// go to the next row
row = nextRow.fetch_add(1);
}
}
);
}

for (std::thread& thread : threads)
thread.join();

printf("\n");
}

//======================================================================================
void GetMagnitudeData (const SImageDataComplex& srcImage, SImageData& destImage)
{
// size the output image
destImage.m_width = srcImage.m_width;
destImage.m_height = srcImage.m_height;
destImage.m_pitch = 4 * ((srcImage.m_width * 24 + 31) / 32);
destImage.m_pixels.resize(destImage.m_pitch*destImage.m_height);

// get floating point magnitude data
std::vector<float> magArray;
magArray.resize(srcImage.m_width*srcImage.m_height);
float maxmag = 0.0f;
for (size_t x = 0; x < srcImage.m_width; ++x)
{
for (size_t y = 0; y < srcImage.m_height; ++y)
{
// Offset the information by half width & height in the positive direction.
// This makes frequency 0 (DC) be at the image origin, like most diagrams show it.
int k = (x + (int)srcImage.m_width / 2) % (int)srcImage.m_width;
int l = (y + (int)srcImage.m_height / 2) % (int)srcImage.m_height;
const std::complex<float> &src = srcImage.m_pixels[l*srcImage.m_width + k];

float mag = std::abs(src);
if (mag > maxmag)
maxmag = mag;

magArray[y*srcImage.m_width + x] = mag;
}
}
if (maxmag == 0.0f)
maxmag = 1.0f;

const float c = 255.0f / log(1.0f+maxmag);

// normalize the magnitude data and send it back in [0, 255]
for (size_t x = 0; x < srcImage.m_width; ++x)
{
for (size_t y = 0; y < srcImage.m_height; ++y)
{
float src = c * log(1.0f + magArray[y*srcImage.m_width + x]);

uint8 magu8 = uint8(src);

uint8* dest = &destImage.m_pixels[y*destImage.m_pitch + x * 3];
dest[0] = magu8;
dest[1] = magu8;
dest[2] = magu8;
}
}
}

//======================================================================================
void GetPhaseData (const SImageDataComplex& srcImage, SImageData& destImage)
{
// size the output image
destImage.m_width = srcImage.m_width;
destImage.m_height = srcImage.m_height;
destImage.m_pitch = 4 * ((srcImage.m_width * 24 + 31) / 32);
destImage.m_pixels.resize(destImage.m_pitch*destImage.m_height);

// get floating point phase data, and encode it in [0,255]
for (size_t x = 0; x < srcImage.m_width; ++x)
{
for (size_t y = 0; y < srcImage.m_height; ++y)
{
// Offset the information by half width & height in the positive direction.
// This makes frequency 0 (DC) be at the image origin, like most diagrams show it.
int k = (x + (int)srcImage.m_width / 2) % (int)srcImage.m_width;
int l = (y + (int)srcImage.m_height / 2) % (int)srcImage.m_height;
const std::complex<float> &src = srcImage.m_pixels[l*srcImage.m_width + k];

// get phase, and change it from [-pi,+pi] to [0,255]
float phase = (0.5f + 0.5f * std::atan2(src.imag(), src.real()) / c_pi);
if (phase < 0.0f)
phase = 0.0f;
if (phase > 1.0f)
phase = 1.0f;
uint8 phase255 = uint8(phase * 255);

// write the phase as grey scale color
uint8* dest = &destImage.m_pixels[y*destImage.m_pitch + x * 3];
dest[0] = phase255;
dest[1] = phase255;
dest[2] = phase255;
}
}
}

//======================================================================================
bool ImageSave (const SImageData &image, const char *fileName)
{
// open the file if we can
FILE *file;
file = fopen(fileName, "wb");
if (!file) {
printf("Could not save %s\n", fileName);
return false;
}

// make the header info
BITMAPFILEHEADER header;
BITMAPINFOHEADER infoHeader;

header.bfType = 0x4D42;
header.bfReserved1 = 0;
header.bfReserved2 = 0;
header.bfOffBits = 54;

infoHeader.biSize = 40;
infoHeader.biWidth = (LONG)image.m_width;
infoHeader.biHeight = (LONG)image.m_height;
infoHeader.biPlanes = 1;
infoHeader.biBitCount = 24;
infoHeader.biCompression = 0;
infoHeader.biSizeImage = (DWORD)image.m_pixels.size();
infoHeader.biXPelsPerMeter = 0;
infoHeader.biYPelsPerMeter = 0;
infoHeader.biClrUsed = 0;
infoHeader.biClrImportant = 0;

header.bfSize = infoHeader.biSizeImage + header.bfOffBits;

// write the data and close the file
fwrite(&header, sizeof(header), 1, file);
fwrite(&infoHeader, sizeof(infoHeader), 1, file);
fwrite(&image.m_pixels[0], infoHeader.biSizeImage, 1, file);
fclose(file);

return true;
}

//======================================================================================
bool ImageLoad (const char *fileName, SImageData& imageData)
{
// open the file if we can
FILE *file;
file = fopen(fileName, "rb");
if (!file)
return false;

// read the headers if we can
BITMAPFILEHEADER header;
BITMAPINFOHEADER infoHeader;
if (fread(&header, sizeof(header), 1, file) != 1 ||
fread(&infoHeader, sizeof(infoHeader), 1, file) != 1 ||
header.bfType != 0x4D42 || infoHeader.biBitCount != 24)
{
fclose(file);
return false;
}

// read in our pixel data if we can. Note that it's in BGR order, and the row pitch is padded to the next multiple of 4 bytes
imageData.m_width = infoHeader.biWidth;
imageData.m_height = infoHeader.biHeight;
imageData.m_pitch = 4 * ((imageData.m_width * 24 + 31) / 32);
imageData.m_pixels.resize(imageData.m_pitch * imageData.m_height);
if (fread(&imageData.m_pixels[0], imageData.m_pixels.size(), 1, file) != 1)
{
fclose(file);
return false;
}

fclose(file);
return true;
}

//======================================================================================
void ImageInit (SImageData& image, size_t width, size_t height)
{
image.m_width = width;
image.m_height = height;
image.m_pitch = 4 * ((width * 24 + 31) / 32);
image.m_pixels.resize(image.m_pitch * image.m_height);
std::fill(image.m_pixels.begin(), image.m_pixels.end(), 0);
}

//======================================================================================
void SampleTest (const SImageData& image, const SImageData& samples, const char* fileName)
{
SImageData outImage;
ImageInit(outImage, image.m_width, image.m_height);

for (size_t y = 0; y < image.m_height; ++y)
{
size_t sampleY = y % samples.m_height;
for (size_t x = 0; x < image.m_width; ++x)
{
size_t sampleX = x % samples.m_width;

const SColor* samplePixel = (SColor*)&samples.m_pixels[sampleY*samples.m_pitch + sampleX * 3];
const SColor* imagePixel = (SColor*)&image.m_pixels[y*image.m_pitch + x * 3];

SColor* outPixel = (SColor*)&outImage.m_pixels[y*outImage.m_pitch + x * 3];

if (samplePixel->R == 255)
*outPixel = *imagePixel;
}
}

ImageSave(outImage, fileName);
}

//======================================================================================
inline float Distance (size_t x1, size_t y1, size_t x2, size_t y2, int imageWidth)
{
// this returns the toroidal distance between the points
// aka the interval [0, width) wraps around
float dx = std::abs(float(x2) - float(x1));
float dy = std::abs(float(y2) - float(y1));

if (dx > float(imageWidth / 2))
dx = float(imageWidth) - dx;

if (dy > float(imageWidth / 2))
dy = float(imageWidth) - dy;

// returning squared distance cause why not
return dx*dx + dy*dy;
}

//======================================================================================
int main (int argc, char** argv)
{
const size_t c_imageSize = 256;
const bool c_doDFT = true;

const size_t c_blueNoiseSampleMultiplier = 1;

const size_t samples1 = 256;   // 16x16
const size_t samples2 = 1024;  // 32x32
const size_t samples3 = 4096;  // 64x64

SImageData image;
if (!ImageLoad("Image.bmp", image))  // the source image (file name assumed; see the github repo)
{
printf("Could not load Image.bmp!\n");
return 1;
}

// init random number generator
std::random_device rd;
std::mt19937 rng(rd());
std::uniform_int_distribution<int> dist(0, c_imageSize - 1);

// white noise
{
SImageData samples;
ImageInit(samples, c_imageSize, c_imageSize);

for (size_t i = 1; i <= samples3; ++i)
{
size_t x = dist(rng);
size_t y = dist(rng);

SColor* pixel = (SColor*)&samples.m_pixels[y*samples.m_pitch + x * 3];
pixel->R = pixel->G = pixel->B = 255;

if (i == samples1 || i == samples2 || i == samples3)
{
printf("White Noise %zu samples\n", i);

char fileName[256];
sprintf(fileName, "WhiteNoise_%zu.bmp", i);
ImageSave(samples, fileName);

sprintf(fileName, "WhiteNoise_%zu_samples.bmp", i);
SampleTest(image, samples, fileName);

if (c_doDFT)
{
SImageDataComplex frequencyData;
DFTImage(samples, frequencyData);

SImageData magnitudeData;
GetMagnitudeData(frequencyData, magnitudeData);

sprintf(fileName, "WhiteNoise_%zu_mag.bmp", i);
ImageSave(magnitudeData, fileName);
}
}
}
}

// regular samples
{

auto GridTest = [&] (size_t sampleCount) {
SImageData samples;
ImageInit(samples, c_imageSize, c_imageSize);

size_t side = size_t(std::sqrt(float(sampleCount)));

size_t pixels = c_imageSize / side;

for (size_t y = 0; y < side; ++y)
{
size_t pixelY = y * pixels;
for (size_t x = 0; x < side; ++x)
{
size_t pixelX = x * pixels;

SColor* pixel = (SColor*)&samples.m_pixels[pixelY*samples.m_pitch + pixelX * 3];
pixel->R = pixel->G = pixel->B = 255;
}
}

printf("Regular %zu samples\n", sampleCount);

char fileName[256];
sprintf(fileName, "Regular_%zu.bmp", sampleCount);
ImageSave(samples, fileName);

sprintf(fileName, "Regular_%zu_samples.bmp", sampleCount);
SampleTest(image, samples, fileName);

if (c_doDFT)
{
SImageDataComplex frequencyData;
DFTImage(samples, frequencyData);

SImageData magnitudeData;
GetMagnitudeData(frequencyData, magnitudeData);

sprintf(fileName, "Regular_%zu_mag.bmp", sampleCount);
ImageSave(magnitudeData, fileName);
}
};

GridTest(samples1);
GridTest(samples2);
GridTest(samples3);
}

// blue noise
{
SImageData samples;
ImageInit(samples, c_imageSize, c_imageSize);

std::vector<std::array<size_t, 2>> samplesPos;

size_t percent = (size_t)-1;

for (size_t i = 1; i <= samples3; ++i)
{
size_t newPercent;
if (i <= samples1)
newPercent = size_t(100.0f * float(i) / float(samples1));
else if (i <= samples2)
newPercent = size_t(100.0f * float(i - samples1) / float(samples2 - samples1));
else
newPercent = size_t(100.0f * float(i - samples2) / float(samples3 - samples2));
if (percent != newPercent)
{
percent = newPercent;
printf("\rGenerating Blue Noise Samples: %zu%%", percent);
}

// keep the candidate that is farthest from its closest existing sample point
size_t numCandidates = samplesPos.size() * c_blueNoiseSampleMultiplier + 1;
float bestDistance = 0.0f;
size_t bestCandidateX = 0;
size_t bestCandidateY = 0;
for (size_t candidate = 0; candidate < numCandidates; ++candidate)
{
size_t x = dist(rng);
size_t y = dist(rng);

// calculate the closest distance from this point to an existing sample
float minDist = FLT_MAX;
for (const std::array<size_t, 2>& samplePos : samplesPos)
{
float dist = Distance(x, y, samplePos[0], samplePos[1], c_imageSize);
if (dist < minDist)
minDist = dist;
}

if (minDist > bestDistance)
{
bestDistance = minDist;
bestCandidateX = x;
bestCandidateY = y;
}
}
samplesPos.push_back({ bestCandidateX, bestCandidateY });

SColor* pixel = (SColor*)&samples.m_pixels[bestCandidateY*samples.m_pitch + bestCandidateX * 3];
pixel->R = pixel->G = pixel->B = 255;

if (i == samples1 || i == samples2 || i == samples3)
{
printf("\nBlue Noise %zu samples\n", i);

char fileName[256];
sprintf(fileName, "BlueNoise_%zu.bmp", i);
ImageSave(samples, fileName);

sprintf(fileName, "BlueNoise_%zu_samples.bmp", i);
SampleTest(image, samples, fileName);

if (c_doDFT)
{
SImageDataComplex frequencyData;
DFTImage(samples, frequencyData);

SImageData magnitudeData;
GetMagnitudeData(frequencyData, magnitudeData);

sprintf(fileName, "BlueNoise_%zu_mag.bmp", i);
ImageSave(magnitudeData, fileName);
}
}
}
}

return 0;
}
```

## Comments

**Wojtek says:**

Any idea for blue noise generated on a disk instead of a quad :)?

**demofox2 says:**

Does it need any special properties besides being the same except in a circle instead of a rectangle? One way could be to use “rejection sampling” – throw out any generated point that is outside the circle. Or, do it in a rectangle and then afterwards throw out everything that isn’t in the circle. You could do better by considering wrap around “toroidal” distance around a circle instead of a rectangle, but I’m not sure off the top of my head how you’d do that. I’d bet there’s a way though 🙂

That help any?

**demofox2 says:**

Another idea is to generate in a square, then transform to circle where you do…
Angle = blueX * 2pi