Transmuting White Noise To Blue, Red, Green, Purple

There are many algorithms for generating blue noise, and there are a lot of people working on new ways to do it.

It made me wonder: why don’t people just use the inverse discrete Fourier transform to make noise that has the desired frequency spectrum?

I knew there had to be a reason, since that is a pretty obvious thing to try, but I wasn’t sure if it was due to poor quality results, slower execution times, or some other reason.

After trying it myself and not getting great results I asked on twitter and Bart Wronski (@BartWronsk) clued me in.

It turns out that you can set up your frequency magnitudes so that there are only high frequencies, give them random amplitudes and random phases, and do the inverse DFT, but the result isn’t guaranteed to use all possible color values (0-255), and even if it does, it may not use them evenly.

He pointed me at a write-up by Timothy Lottes (@TimothyLottes), which talked about using some basic DSP operations to transform white noise into blue noise.

This post uses that technique to do some “Noise Alchemy” and turn white noise into a couple of other types of noise. Simple, single file, standalone C++ source code is included at the bottom of the post!

Red Noise

We’ll start with red noise because it’s the simplest. Here’s how you do it:

1. Start with white noise
2. Low pass filter the white noise
3. Re-normalize the histogram
4. Repeat from step 2 as many times as desired

That’s all there is to it.
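Those four steps can be sketched compactly in 1D (a minimal sketch: a wrap-around box blur stands in for the Gaussian blur used later, the tie-breaking shuffle is omitted, and the names are hypothetical; the full 2D version is in the source code at the bottom of the post):

```
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <numeric>
#include <random>
#include <vector>

// Wrap-around (toroidal) box blur of a 1D signal: a cheap low pass filter.
std::vector<float> BoxBlurWrap(const std::vector<float>& src, int radius)
{
    const int n = int(src.size());
    std::vector<float> dest(n);
    for (int i = 0; i < n; ++i)
    {
        float sum = 0.0f;
        for (int o = -radius; o <= radius; ++o)
            sum += src[((i + o) % n + n) % n];
        dest[i] = sum / float(2 * radius + 1);
    }
    return dest;
}

// Remap values so they evenly cover [0, 1], preserving relative order.
void NormalizeHistogram(std::vector<float>& values)
{
    std::vector<size_t> order(values.size());
    std::iota(order.begin(), order.end(), 0);
    std::sort(order.begin(), order.end(),
        [&](size_t a, size_t b) { return values[a] < values[b]; });
    for (size_t rank = 0; rank < order.size(); ++rank)
        values[order[rank]] = float(rank) / float(order.size() - 1);
}

// Steps 1-4: white noise -> (low pass, renormalize) x numIterations.
std::vector<float> MakeRedNoise1D(size_t count, int radius, int numIterations)
{
    std::mt19937 rng(1337);
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);
    std::vector<float> noise(count);
    for (float& v : noise)
        v = dist(rng);
    for (int i = 0; i < numIterations; ++i)
    {
        noise = BoxBlurWrap(noise, radius);
        NormalizeHistogram(noise);
    }
    return noise;
}
```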

If you are wondering how you low pass filter an image, that’s another way of saying “blur”. Blurring makes the high frequency details go away, leaving the low frequency smooth shapes.

There are multiple ways to do a blur, including box blur (averaging pixels), Gaussian blur, and sinc filtering. In this post I use a Gaussian blur and get decent results; a box blur would be cheaper/faster, and sinc filtering would give the most correct results.

An important detail about doing the blur is that your blur needs to “wrap around”. If you are blurring a pixel on the edge of the image, it should smear over to the other side of the image.
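The wrap-around lookup boils down to mapping any coordinate back onto the image, which could be sketched like this (a hypothetical helper; the full source below does the equivalent in GetPixelWrapAround):

```
#include <cassert>

// Map a possibly out-of-range coordinate onto [0, size) so a blur wraps
// around the image edges (treating the image as a torus), instead of
// clamping or mirroring at the borders.
int WrapCoord(int x, int size)
{
    int r = x % size;            // in C++, % can return a negative remainder
    return (r < 0) ? r + size : r;
}
```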

You might be wondering how you would normalize the histogram. Normalizing the histogram just means that we want to make sure that the image uses the full range of greyscale values evenly. We don’t want the noise to only use bright colors or only use dark colors, or even just MOSTLY use dark colors, for instance. If we count each color used in the image (which is the histogram I’m referring to), the counts for each color should be roughly equal.

To fix the histogram, Timothy Lottes suggests making an array that contains each pixel’s location and brightness. You first shuffle the array, then sort it by brightness (Timothy packs the pixel information into a 64 bit int and uses a radix sort, which is more efficient for fixed size keys). Next, set the brightness of each item in the array to its index divided by the number of items in the list, to put the values in the 0 to 1 range. Lastly, write the brightness values back out to the image, using the pixel locations you stored off.

What this process does is make sure that the full range of greyscale values is used, and used evenly. It also preserves the relative order of pixel brightness: if pixel A was darker than pixel B before this process, it will still be darker afterwards.

You may wonder why the shuffle is done before the sort. It’s done so that if there are any ties between values, it’s random which one ends up darker after this process. This is important because if it weren’t random, there could be obvious (ugly/incorrect) patterns in the results.
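That shuffle-then-sort remap could be sketched like this (a sketch using std::stable_sort instead of the radix sort Timothy Lottes uses; stable_sort keeps ties in their shuffled order):

```
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <random>
#include <vector>

struct PixelRecord
{
    float  value;       // pixel brightness
    size_t pixelIndex;  // where the pixel lives in the image
};

// Shuffle, sort by brightness, then use each record's rank as its new
// brightness, so the full [0, 1] range is used evenly.
void NormalizeHistogram(std::vector<float>& pixels, std::mt19937& rng)
{
    std::vector<PixelRecord> records(pixels.size());
    for (size_t i = 0; i < pixels.size(); ++i)
        records[i] = { pixels[i], i };

    // shuffle first so equal values end up in a random order after the sort
    std::shuffle(records.begin(), records.end(), rng);
    std::stable_sort(records.begin(), records.end(),
        [](const PixelRecord& a, const PixelRecord& b) { return a.value < b.value; });

    // rank / (count - 1) spaces the values evenly in [0, 1], order preserved
    for (size_t rank = 0; rank < records.size(); ++rank)
        pixels[records[rank].pixelIndex] = float(rank) / float(records.size() - 1);
}
```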

Normalizing the histogram affects the frequency composition of the image, but when you repeat the filter/normalize process multiple times, it seems to do an OK job of converging to decent results.

Red noise has low frequency content, which means it doesn’t have sharp details / fast data changes. An interesting property of 2d red noise is that if you take a random walk on the texture, the values you hit form a random walk of 1d values. The same is true if you draw a random straight line on the texture: the pixels it passes through will be a random walk. That is, you’ll get random numbers, but each number will be pretty close to the previous one.

The formal definition of red noise has a more strict definition about frequency content than what we are going for in this post. (Wikipedia: red noise)

Here’s red noise (top) and the frequency magnitudes (bottom) using 5 iterations of the described process, and a blur sigma (strength of blur) of 1.0:

Using different blur strengths controls what frequencies get attenuated. Weaker blurs leave higher frequency components.

Here is red noise generated the same way but using a blur sigma of 0.5:

And here is red noise generated using a blur sigma of 2.0:

Here are some animated gifs showing the evolution of the noise as well as the frequencies over time:

Sigma 0.5:

Sigma 1.0:

Sigma 2.0:

Blue Noise

To make blue noise, you use the exact same method but instead of using a low pass filter you use a high pass filter.

An easy way to high pass filter an image is to do a low pass filter to get the low frequency content, and subtract that from the original image so that you are left with the high frequency content.
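As a sketch, that high pass is just a per-pixel subtraction (hypothetical helper; `lowPassed` is assumed to come from whatever blur you are using, like the Gaussian blur in the source code below):

```
#include <cassert>
#include <cstddef>
#include <vector>

// High pass by subtracting the low passed (blurred) signal from the
// original, leaving only the high frequency content.
std::vector<float> HighPass(const std::vector<float>& original,
                            const std::vector<float>& lowPassed)
{
    std::vector<float> result(original.size());
    for (size_t i = 0; i < original.size(); ++i)
        result[i] = original[i] - lowPassed[i];
    return result;
}
```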

Blue noise has high frequency content which means it is only made up of sharp details / fast data changes. An interesting property of 2d blue noise is that if you take a random walk (or a straight line walk) on it in any direction, you’ll get a low discrepancy sequence. That is, you’ll get random numbers, but each number will be very different from the previous one.

The formal definition of blue noise has a more strict definition about frequency content than what we are going for in this post. (Wikipedia: blue noise)

Here is blue noise using 5 iterations and a blur sigma of 1.0:

Just like with red noise, changing the strength of the blur controls what frequencies get attenuated.

Here is a sigma of 0.5:

Here is a sigma of 2.0:

Animations of sigma 0.5:

Animations of sigma 1.0:

Animations of sigma 2.0:

Green Noise

Green noise is noise that doesn’t have either low or high frequency components, only mid frequency components.

To make green noise, you use a “band pass” filter, which is a filter that gets rid of both high and low frequency components, leaving only the middle.

Here’s how to make a band pass filter:

1. Make a weak blur of the image – this is the image without the highest frequencies.
2. Make a strong blur of the image – this is the image with just the lowest frequencies.
3. Subtract the strong blur from the weak blur – this is the image with just the middle frequencies.
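Those three steps are a difference of Gaussians, which could be sketched in 1D like this (a minimal sketch with hypothetical names; the full 2D version with a separable blur is in the source code below):

```
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Wrap-around Gaussian blur of a 1D signal, with a normalized kernel.
std::vector<float> GaussianBlurWrap(const std::vector<float>& src, float sigma)
{
    const int n = int(src.size());
    const int radius = int(std::ceil(3.0f * sigma));
    std::vector<float> kernel(2 * radius + 1);
    float total = 0.0f;
    for (int o = -radius; o <= radius; ++o)
    {
        kernel[o + radius] = std::exp(-float(o * o) / (2.0f * sigma * sigma));
        total += kernel[o + radius];
    }
    for (float& k : kernel)
        k /= total;

    std::vector<float> dest(n, 0.0f);
    for (int i = 0; i < n; ++i)
        for (int o = -radius; o <= radius; ++o)
            dest[i] += src[((i + o) % n + n) % n] * kernel[o + radius];
    return dest;
}

// Steps 1-3: weak blur minus strong blur leaves the middle frequencies.
std::vector<float> BandPass(const std::vector<float>& src,
                            float weakSigma, float strongSigma)
{
    std::vector<float> weak   = GaussianBlurWrap(src, weakSigma);
    std::vector<float> strong = GaussianBlurWrap(src, strongSigma);
    std::vector<float> result(src.size());
    for (size_t i = 0; i < src.size(); ++i)
        result[i] = weak[i] - strong[i];
    return result;
}
```

A quick sanity check on the design: a constant (pure DC) signal survives both blurs unchanged, so the band pass of a constant should come out as (nearly) zero everywhere.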

Here is the result after 5 iterations, using sigmas of 0.5 and 2.0:

Here is the animation of it evolving:

Nathan Reed (@ReedBeta) mentioned that the green noise looked a lot like Perlin noise, which makes sense because Perlin noise is designed to be band limited; that is what makes it easy to control the look of Perlin noise by summing multiple octaves. This makes sense to me because you can basically control which frequencies you put noise into by scaling the frequency ring.

Fabien Giesen (@rygorous) said this also helps with mipmapping. This makes sense to me because there can’t be (as much) aliasing with the higher frequencies missing from the noise.

Purple Noise

I’ve never heard of this noise before, so it may have another name, but what I’m calling purple noise is just noise which has high and low frequency content but no middle frequency content. It’s basically red noise plus blue noise.

You could literally make red noise and add it to blue noise to make purple noise, but how I made it for this post is to use a “band stop” filter.

A band stop filter is a filter that gets rid of middle frequencies and leaves high and low frequencies alone.

To band stop filter an image, you do a band pass filter to get the middle frequencies (as described in the last section!), and then subtract that from the original image to get only the low and high frequencies.
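As a sketch (hypothetical helper; `weakBlur` and `strongBlur` are assumed to be the outputs of a weak and a strong low pass filter of `original`, as in the last section):

```
#include <cassert>
#include <cstddef>
#include <vector>

// Band stop: subtract the band pass (middle frequencies, i.e. weak blur
// minus strong blur) from the original, leaving the lows and highs.
std::vector<float> BandStop(const std::vector<float>& original,
                            const std::vector<float>& weakBlur,
                            const std::vector<float>& strongBlur)
{
    std::vector<float> result(original.size());
    for (size_t i = 0; i < original.size(); ++i)
        result[i] = original[i] - (weakBlur[i] - strongBlur[i]);
    return result;
}
```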

Here is the result after 5 iterations, using sigmas of 0.5 and 2.0:

Here is the animation:

Links

This technique might be useful if you ever need to generate specific types of noise quickly, but if you are just generating noise textures to use later in performance critical situations, there are better algorithms to use. When generating textures offline in advance, you have “all the time in the world”, so the simplicity of this algorithm probably isn’t worth the trade-off of lower quality noise results.

Dithering part two – golden ratio sequence, blue noise and highpass-and-remap (Bart Wronski)

VDR Follow Up – Fine Art of Film Grain (Timothy Lottes)

Gaussian Blur (Me)

Image DFT / IDFT (me)

Blue-noise Dithered Sampling (Solid Angle) – a better way to generate colored noises

Apparently there is a relationship between blue noise, Turing patterns / reaction diffusion, and these filtering techniques. (Thanks @R4_Unit!)
Turing Patterns in Photoshop

Here’s a link about generating point samples in specific color distributions (Thanks @nostalgiadriven!)
Point Sampling with General Noise Spectrum

Here is an interesting shadertoy which uses the mip map of a noise texture to get the low frequency content to do a high pass filter: (found by @paniq, who unfortunately got nerd sniped by this noise generation stuff hehe)
pseudo blue noise 2

Source Code

The source code to generate the images is below, but is also on github at Atrix256 – NoiseShaping

```
#define _CRT_SECURE_NO_WARNINGS

#include <windows.h>  // for bitmap headers.  Sorry non windows people!
#include <stdint.h>
#include <algorithm>  // for std::sort, std::shuffle, std::fill
#include <vector>
#include <random>
#include <array>
#include <thread>
#include <complex>
#include <atomic>

typedef uint8_t uint8;
typedef int64_t int64;

const float c_pi = 3.14159265359f;

// settings
const size_t    c_imageSize = 256;
const bool      c_doDFT = true;
const float     c_blurThresholdPercent = 0.005f; // lower numbers give higher quality results, but take longer. This is 0.5%
const size_t    c_numBlurs = 5;

//======================================================================================
struct SImageData
{
SImageData ()
: m_width(0)
, m_height(0)
{ }

size_t m_width;
size_t m_height;
size_t m_pitch;
std::vector<uint8> m_pixels;
};

//======================================================================================
struct SColor
{
SColor (uint8 _R = 0, uint8 _G = 0, uint8 _B = 0)
: R(_R), G(_G), B(_B)
{ }

inline void Set (uint8 _R, uint8 _G, uint8 _B)
{
R = _R;
G = _G;
B = _B;
}

uint8 B, G, R;
};

//======================================================================================
struct SImageDataFloat
{
SImageDataFloat()
: m_width(0)
, m_height(0)
{ }

size_t m_width;
size_t m_height;
std::vector<float> m_pixels;
};

//======================================================================================
struct SImageDataComplex
{
SImageDataComplex ()
: m_width(0)
, m_height(0)
{ }

size_t m_width;
size_t m_height;
std::vector<std::complex<float>> m_pixels;
};

//======================================================================================
std::complex<float> DFTPixel (const SImageData &srcImage, size_t K, size_t L)
{
std::complex<float> ret(0.0f, 0.0f);

for (size_t x = 0; x < srcImage.m_width; ++x)
{
for (size_t y = 0; y < srcImage.m_height; ++y)
{
// Get the pixel value (assuming greyscale) and convert it to [0,1] space
const uint8 *src = &srcImage.m_pixels[(y * srcImage.m_pitch) + x * 3];
float grey = float(src[0]) / 255.0f;

// Add to the sum of the return value
float v = float(K * x) / float(srcImage.m_width);
v += float(L * y) / float(srcImage.m_height);
ret += std::complex<float>(grey, 0.0f) * std::polar<float>(1.0f, -2.0f * c_pi * v);
}
}

return ret;
}

//======================================================================================
void ImageDFT (const SImageData &srcImage, SImageDataComplex &destImage)
{
// NOTE: this function assumes srcImage is greyscale, so works on only the red component of srcImage.
// ImageToGrey() will convert an image to greyscale.

// size the output dft data
destImage.m_width = srcImage.m_width;
destImage.m_height = srcImage.m_height;
destImage.m_pixels.resize(destImage.m_width*destImage.m_height);

size_t numThreads = std::thread::hardware_concurrency();
//if (numThreads > 0)
//numThreads = numThreads - 1;

std::vector<std::thread> threads;
threads.resize(numThreads);

printf("Doing DFT with %zu threads...\n", numThreads);

// calculate 2d dft (brute force, not using fast fourier transform) multithreadedly
std::atomic<size_t> nextRow(0);
for (std::thread& t : threads)
{
t = std::thread(
[&] ()
{
size_t row = nextRow.fetch_add(1);
bool reportProgress = (row == 0);
int lastPercent = -1;

while (row < srcImage.m_height)
{
// calculate the DFT for every pixel / frequency in this row
for (size_t x = 0; x < srcImage.m_width; ++x)
{
destImage.m_pixels[row * destImage.m_width + x] = DFTPixel(srcImage, x, row);
}

// report progress if we should
if (reportProgress)
{
int percent = int(100.0f * float(row) / float(srcImage.m_height));
if (lastPercent != percent)
{
lastPercent = percent;
printf("            \rDFT: %i%%", lastPercent);
}
}

// go to the next row
row = nextRow.fetch_add(1);
}
}
);
}

for (std::thread& t : threads)
t.join();

printf("\n");
}

//======================================================================================
void GetMagnitudeData (const SImageDataComplex& srcImage, SImageData& destImage)
{
// size the output image
destImage.m_width = srcImage.m_width;
destImage.m_height = srcImage.m_height;
destImage.m_pitch = 4 * ((srcImage.m_width * 24 + 31) / 32);
destImage.m_pixels.resize(destImage.m_pitch*destImage.m_height);

// get floating point magnitude data
std::vector<float> magArray;
magArray.resize(srcImage.m_width*srcImage.m_height);
float maxmag = 0.0f;
for (size_t x = 0; x < srcImage.m_width; ++x)
{
for (size_t y = 0; y < srcImage.m_height; ++y)
{
// Offset the information by half width & height in the positive direction.
// This makes frequency 0 (DC) be at the image origin, like most diagrams show it.
int k = (x + (int)srcImage.m_width / 2) % (int)srcImage.m_width;
int l = (y + (int)srcImage.m_height / 2) % (int)srcImage.m_height;
const std::complex<float> &src = srcImage.m_pixels[l*srcImage.m_width + k];

float mag = std::abs(src);
if (mag > maxmag)
maxmag = mag;

magArray[y*srcImage.m_width + x] = mag;
}
}
if (maxmag == 0.0f)
maxmag = 1.0f;

const float c = 255.0f / log(1.0f+maxmag);

// normalize the magnitude data and send it back in [0, 255]
for (size_t x = 0; x < srcImage.m_width; ++x)
{
for (size_t y = 0; y < srcImage.m_height; ++y)
{
float src = c * log(1.0f + magArray[y*srcImage.m_width + x]);

uint8 magu8 = uint8(src);

uint8* dest = &destImage.m_pixels[y*destImage.m_pitch + x * 3];
dest[0] = magu8;
dest[1] = magu8;
dest[2] = magu8;
}
}
}

//======================================================================================
bool ImageSave (const SImageData &image, const char *fileName)
{
// open the file if we can
FILE *file;
file = fopen(fileName, "wb");
if (!file) {
printf("Could not save %s\n", fileName);
return false;
}

// make the header info
BITMAPFILEHEADER header;
BITMAPINFOHEADER infoHeader;

header.bfType = 0x4D42;
header.bfReserved1 = 0;
header.bfReserved2 = 0;
header.bfOffBits = 54;

infoHeader.biSize = 40;
infoHeader.biWidth = (LONG)image.m_width;
infoHeader.biHeight = (LONG)image.m_height;
infoHeader.biPlanes = 1;
infoHeader.biBitCount = 24;
infoHeader.biCompression = 0;
infoHeader.biSizeImage = (DWORD) image.m_pixels.size();
infoHeader.biXPelsPerMeter = 0;
infoHeader.biYPelsPerMeter = 0;
infoHeader.biClrUsed = 0;
infoHeader.biClrImportant = 0;

header.bfSize = infoHeader.biSizeImage + header.bfOffBits;

// write the data and close the file
fwrite(&header, sizeof(header), 1, file);
fwrite(&infoHeader, sizeof(infoHeader), 1, file);
fwrite(&image.m_pixels[0], infoHeader.biSizeImage, 1, file);
fclose(file);

return true;
}

//======================================================================================
bool ImageLoad (const char *fileName, SImageData& imageData)
{
// open the file if we can
FILE *file;
file = fopen(fileName, "rb");
if (!file)
return false;

// read the headers if we can
BITMAPFILEHEADER header;
BITMAPINFOHEADER infoHeader;
if (fread(&header, sizeof(header), 1, file) != 1 ||
fread(&infoHeader, sizeof(infoHeader), 1, file) != 1 ||
header.bfType != 0x4D42 || infoHeader.biBitCount != 24)
{
fclose(file);
return false;
}

// read in our pixel data if we can. Note that it's in BGR order, and width is padded to the next power of 4
imageData.m_pixels.resize(infoHeader.biSizeImage);
fseek(file, header.bfOffBits, SEEK_SET);
if (fread(&imageData.m_pixels[0], imageData.m_pixels.size(), 1, file) != 1)
{
fclose(file);
return false;
}

imageData.m_width = infoHeader.biWidth;
imageData.m_height = infoHeader.biHeight;
imageData.m_pitch = 4 * ((imageData.m_width * 24 + 31) / 32);

fclose(file);
return true;
}

//======================================================================================
void ImageInit (SImageData& image, size_t width, size_t height)
{
image.m_width = width;
image.m_height = height;
image.m_pitch = 4 * ((width * 24 + 31) / 32);
image.m_pixels.resize(image.m_pitch * image.m_height);
std::fill(image.m_pixels.begin(), image.m_pixels.end(), 0);
}

//======================================================================================
void ImageFloatInit (SImageDataFloat& image, size_t width, size_t height)
{
image.m_width = width;
image.m_height = height;
image.m_pixels.resize(image.m_width * image.m_height);
std::fill(image.m_pixels.begin(), image.m_pixels.end(), 0.0f);
}

//======================================================================================
int PixelsNeededForSigma (float sigma)
{
// returns the number of pixels needed to represent a gaussian kernal that has values
// down to the threshold amount.  A gaussian function technically has values everywhere
// on the image, but the threshold lets us cut it off where the pixels contribute to
// only small amounts that aren't as noticeable.
return int(floor(1.0f + 2.0f * sqrtf(-2.0f * sigma * sigma * log(c_blurThresholdPercent)))) + 1;
}

//======================================================================================
float Gaussian (float sigma, float x)
{
return expf(-(x*x) / (2.0f * sigma*sigma));
}

//======================================================================================
float GaussianSimpsonIntegration (float sigma, float a, float b)
{
return
((b - a) / 6.0f) *
(Gaussian(sigma, a) + 4.0f * Gaussian(sigma, (a + b) / 2.0f) + Gaussian(sigma, b));
}

//======================================================================================
std::vector<float> GaussianKernelIntegrals (float sigma, int taps)
{
std::vector<float> ret;
float total = 0.0f;
for (int i = 0; i < taps; ++i)
{
float x = float(i) - float(taps / 2);
float value = GaussianSimpsonIntegration(sigma, x - 0.5f, x + 0.5f);
ret.push_back(value);
total += value;
}
// normalize it
for (unsigned int i = 0; i < ret.size(); ++i)
{
ret[i] /= total;
}
return ret;
}

//======================================================================================
const float* GetPixelWrapAround (const SImageDataFloat& image, int x, int y)
{
if (x >= (int)image.m_width)
{
x = x % (int)image.m_width;
}
else
{
while (x < 0)
x += (int)image.m_width;
}

if (y >= (int)image.m_height)
{
y = y % (int)image.m_height;
}
else
{
while (y < 0)
y += (int)image.m_height;
}

return &image.m_pixels[(y * image.m_width) + x];
}

//======================================================================================
void ImageGaussianBlur (const SImageDataFloat& srcImage, SImageDataFloat &destImage, float xblursigma, float yblursigma, unsigned int xblursize, unsigned int yblursize)
{
// allocate space for copying the image for destImage and tmpImage
ImageFloatInit(destImage, srcImage.m_width, srcImage.m_height);

SImageDataFloat tmpImage;
ImageFloatInit(tmpImage, srcImage.m_width, srcImage.m_height);

// horizontal blur from srcImage into tmpImage
{
auto row = GaussianKernelIntegrals(xblursigma, xblursize);

int startOffset = -1 * int(row.size() / 2);

for (int y = 0; y < tmpImage.m_height; ++y)
{
for (int x = 0; x < tmpImage.m_width; ++x)
{
float blurredPixel = 0.0f;
for (unsigned int i = 0; i < row.size(); ++i)
{
const float *pixel = GetPixelWrapAround(srcImage, x + startOffset + i, y);
blurredPixel += pixel[0] * row[i];
}

float *destPixel = &tmpImage.m_pixels[y * tmpImage.m_width + x];
destPixel[0] = blurredPixel;
}
}
}

// vertical blur from tmpImage into destImage
{
auto row = GaussianKernelIntegrals(yblursigma, yblursize);

int startOffset = -1 * int(row.size() / 2);

for (int y = 0; y < destImage.m_height; ++y)
{
for (int x = 0; x < destImage.m_width; ++x)
{
float blurredPixel = 0.0f;
for (unsigned int i = 0; i < row.size(); ++i)
{
const float *pixel = GetPixelWrapAround(tmpImage, x, y + startOffset + i);
blurredPixel += pixel[0] * row[i];
}

float *destPixel = &destImage.m_pixels[y * destImage.m_width + x];
destPixel[0] = blurredPixel;
}
}
}
}

//======================================================================================
void SaveImageFloatAsBMP (const SImageDataFloat& imageFloat, const char* fileName)
{
printf("\n%s\n", fileName);

// init the image
SImageData image;
ImageInit(image, imageFloat.m_width, imageFloat.m_height);

// write the data to the image
const float* srcData = &imageFloat.m_pixels[0];
for (size_t y = 0; y < image.m_height; ++y)
{
SColor* pixel = (SColor*)&image.m_pixels[y*image.m_pitch];

for (size_t x = 0; x < image.m_width; ++x)
{
uint8 value = uint8(255.0f * srcData[0]);

pixel->Set(value, value, value);

++pixel;
++srcData;
}
}

// save the image
ImageSave(image, fileName);

// also save a DFT of the image
if (c_doDFT)
{
SImageDataComplex dftData;
ImageDFT(image, dftData);

SImageData DFTMagImage;
GetMagnitudeData(dftData, DFTMagImage);

char buffer[256];
sprintf(buffer, "%s.mag.bmp", fileName);

ImageSave(DFTMagImage, buffer);
}
}

//======================================================================================
void NormalizeHistogram (SImageDataFloat& image)
{
struct SHistogramHelper
{
float value;
size_t pixelIndex;
};
static std::vector<SHistogramHelper> pixels;
pixels.resize(image.m_width * image.m_height);

// put all the pixels into the array
for (size_t i = 0, c = image.m_width * image.m_height; i < c; ++i)
{
pixels[i].value = image.m_pixels[i];
pixels[i].pixelIndex = i;
}

// shuffle the pixels to randomly order ties. not as big a deal with floating point pixel values though
static std::random_device rd;
static std::mt19937 rng(rd());
std::shuffle(pixels.begin(), pixels.end(), rng);

// sort the pixels by value
std::sort(
pixels.begin(),
pixels.end(),
[] (const SHistogramHelper& a, const SHistogramHelper& b)
{
return a.value < b.value;
}
);

// use the pixel's place in the array as the new value, and write it back to the image
for (size_t i = 0, c = image.m_width * image.m_height; i < c; ++i)
{
float value = float(i) / float(c - 1);
image.m_pixels[pixels[i].pixelIndex] = value;
}
}

//======================================================================================
void BlueNoiseTest (float blurSigma)
{
// calculate the blur size from our sigma
int blurSize = PixelsNeededForSigma(blurSigma) | 1;

// set up the random number generator
std::random_device rd;
std::mt19937 rng(rd());
std::uniform_real_distribution<float> dist(0.0f, 1.0f);

// generate some white noise
SImageDataFloat noise;
ImageFloatInit(noise, c_imageSize, c_imageSize);
for (float& v : noise.m_pixels)
{
v = dist(rng);
}

// save off the starting white noise
const char* baseFileName = "bluenoise_%i_%zu.bmp";
char fileName[256];

sprintf(fileName, baseFileName, int(blurSigma * 100.0f), 0);
SaveImageFloatAsBMP(noise, fileName);

// iteratively high pass filter and rescale histogram to the 0 to 1 range
SImageDataFloat blurredImage;
for (size_t blurIndex = 0; blurIndex < c_numBlurs; ++blurIndex)
{
// get a low passed version of the current image
ImageGaussianBlur(noise, blurredImage, blurSigma, blurSigma, blurSize, blurSize);

// subtract the low passed version to get the high passed version
for (size_t pixelIndex = 0; pixelIndex < c_imageSize * c_imageSize; ++pixelIndex)
noise.m_pixels[pixelIndex] -= blurredImage.m_pixels[pixelIndex];

// put all pixels between 0.0 and 1.0 again
NormalizeHistogram(noise);

// save this image
sprintf(fileName, baseFileName, int(blurSigma * 100.0f), blurIndex + 1);
SaveImageFloatAsBMP(noise, fileName);
}
}

//======================================================================================
void RedNoiseTest (float blurSigma)
{
// calculate the blur size from our sigma
int blurSize = PixelsNeededForSigma(blurSigma) | 1;

// set up the random number generator
std::random_device rd;
std::mt19937 rng(rd());
std::uniform_real_distribution<float> dist(0.0f, 1.0f);

// generate some white noise
SImageDataFloat noise;
ImageFloatInit(noise, c_imageSize, c_imageSize);
for (float& v : noise.m_pixels)
{
v = dist(rng);
}

// save off the starting white noise
const char* baseFileName = "rednoise_%i_%zu.bmp";
char fileName[256];

sprintf(fileName, baseFileName, int(blurSigma * 100.0f), 0);
SaveImageFloatAsBMP(noise, fileName);

// iteratively high pass filter and rescale histogram to the 0 to 1 range
SImageDataFloat blurredImage;
for (size_t blurIndex = 0; blurIndex < c_numBlurs; ++blurIndex)
{
// get a low passed version of the current image
ImageGaussianBlur(noise, blurredImage, blurSigma, blurSigma, blurSize, blurSize);

// set noise image to the low passed version
noise.m_pixels = blurredImage.m_pixels;

// put all pixels between 0.0 and 1.0 again
NormalizeHistogram(noise);

// save this image
sprintf(fileName, baseFileName, int(blurSigma * 100.0f), blurIndex + 1);
SaveImageFloatAsBMP(noise, fileName);
}
}

//======================================================================================
void BandPassTest (float blurSigma1, float blurSigma2)
{
// calculate the blur size from our sigma
int blurSize1 = PixelsNeededForSigma(blurSigma1) | 1;
int blurSize2 = PixelsNeededForSigma(blurSigma2) | 1;

// set up the random number generator
std::random_device rd;
std::mt19937 rng(rd());
std::uniform_real_distribution<float> dist(0.0f, 1.0f);

// generate some white noise
SImageDataFloat noise;
ImageFloatInit(noise, c_imageSize, c_imageSize);
for (float& v : noise.m_pixels)
{
v = dist(rng);
}

// save off the starting white noise
const char* baseFileName = "bandpass_%i_%i_%zu.bmp";
char fileName[256];

sprintf(fileName, baseFileName, int(blurSigma1 * 100.0f), int(blurSigma2 * 100.0f), 0);
SaveImageFloatAsBMP(noise, fileName);

// iteratively high pass filter and rescale histogram to the 0 to 1 range
SImageDataFloat blurredImage1;
SImageDataFloat blurredImage2;
for (size_t blurIndex = 0; blurIndex < c_numBlurs; ++blurIndex)
{
// get two low passed versions of the current image
ImageGaussianBlur(noise, blurredImage1, blurSigma1, blurSigma1, blurSize1, blurSize1);
ImageGaussianBlur(noise, blurredImage2, blurSigma2, blurSigma2, blurSize2, blurSize2);

// subtract one low passed version from the other
for (size_t pixelIndex = 0; pixelIndex < c_imageSize * c_imageSize; ++pixelIndex)
noise.m_pixels[pixelIndex] = blurredImage1.m_pixels[pixelIndex] - blurredImage2.m_pixels[pixelIndex];

// put all pixels between 0.0 and 1.0 again
NormalizeHistogram(noise);

// save this image
sprintf(fileName, baseFileName, int(blurSigma1 * 100.0f), int(blurSigma2 * 100.0f), blurIndex + 1);
SaveImageFloatAsBMP(noise, fileName);
}
}

//======================================================================================
void BandStopTest (float blurSigma1, float blurSigma2)
{
// calculate the blur size from our sigma
int blurSize1 = PixelsNeededForSigma(blurSigma1) | 1;
int blurSize2 = PixelsNeededForSigma(blurSigma2) | 1;

// set up the random number generator
std::random_device rd;
std::mt19937 rng(rd());
std::uniform_real_distribution<float> dist(0.0f, 1.0f);

// generate some white noise
SImageDataFloat noise;
ImageFloatInit(noise, c_imageSize, c_imageSize);
for (float& v : noise.m_pixels)
{
v = dist(rng);
}

// save off the starting white noise
const char* baseFileName = "bandstop_%i_%i_%zu.bmp";
char fileName[256];

sprintf(fileName, baseFileName, int(blurSigma1 * 100.0f), int(blurSigma2 * 100.0f), 0);
SaveImageFloatAsBMP(noise, fileName);

// iteratively high pass filter and rescale histogram to the 0 to 1 range
SImageDataFloat blurredImage1;
SImageDataFloat blurredImage2;
for (size_t blurIndex = 0; blurIndex < c_numBlurs; ++blurIndex)
{
// get two low passed versions of the current image
ImageGaussianBlur(noise, blurredImage1, blurSigma1, blurSigma1, blurSize1, blurSize1);
ImageGaussianBlur(noise, blurredImage2, blurSigma2, blurSigma2, blurSize2, blurSize2);

// subtract one low passed version from the other to get the band pass noise, and subtract that from the original noise to get the band stop noise
for (size_t pixelIndex = 0; pixelIndex < c_imageSize * c_imageSize; ++pixelIndex)
noise.m_pixels[pixelIndex] -= (blurredImage1.m_pixels[pixelIndex] - blurredImage2.m_pixels[pixelIndex]);

// put all pixels between 0.0 and 1.0 again
NormalizeHistogram(noise);

// save this image
sprintf(fileName, baseFileName, int(blurSigma1 * 100.0f), int(blurSigma2 * 100.0f), blurIndex + 1);
SaveImageFloatAsBMP(noise, fileName);
}
}

//======================================================================================
int main (int argc, char ** argv)
{
BlueNoiseTest(0.5f);
BlueNoiseTest(1.0f);
BlueNoiseTest(2.0f);

RedNoiseTest(0.5f);
RedNoiseTest(1.0f);
RedNoiseTest(2.0f);

BandPassTest(0.5f, 2.0f);

BandStopTest(0.5f, 2.0f);

return 0;
}
```

6 comments

1. Chris says:

What about pink noise?


• I haven’t tried it, but you should be able to make red noise and alpha blend it with white noise. As ridiculous as that sounds, looking at the spectrum, I think that should work.


• Chris says:

Is the red noise or white noise on top? (Or does it make a difference? I’m not super familiar with alpha blending.)


• If you look at the spectrum for pink noise, it’s the spectrum for red noise, but it never reaches zero amplitude for any frequency, so the idea is to scale down the red noise, and add in a scaled down white noise, and that should give you the same result.

Basically, if you have a 256×256 image of white noise, and a 256×256 image of red noise, for each pixel you’d lerp (linear interpolation) from one image to the other, or maybe just average the two pixel values, for each pixel. Alpha blending is just a lerp, and averaging is a lerp of 0.5 from one to the other. It’s all the same stuff 🙂
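That per-pixel lerp could be sketched like this (a hypothetical helper; the inputs would be the red and white noise images generated above, stored as float pixels):

```
#include <cassert>
#include <cstddef>
#include <vector>

// Alpha blend (lerp) a red noise image toward a white noise image, per
// pixel. t = 0 gives pure red noise, t = 1 pure white, t = 0.5 an average.
std::vector<float> BlendNoise(const std::vector<float>& redNoise,
                              const std::vector<float>& whiteNoise, float t)
{
    std::vector<float> result(redNoise.size());
    for (size_t i = 0; i < redNoise.size(); ++i)
        result[i] = redNoise[i] * (1.0f - t) + whiteNoise[i] * t;
    return result;
}
```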
