Generating Blue Noise Sample Points With Mitchell’s Best Candidate Algorithm

Lately I’ve been eyeball deep in noise, ordered dithering and related topics, and have been learning some really interesting things.

As the information coalesces it’ll become apparent whether there is going to be a huge mega post coming, or if there will be several smaller ones.

In the meantime, I wanted to share this bite sized chunk of information.

Three sampling patterns are most often used (say, when numerically integrating a lighting function for graphics/rendering purposes): regular samples, white noise samples, and blue noise samples.

Regular Sampling

Regular sampling just means evenly spacing the samples. This sampling strategy can suffer from aliasing, but gives good coverage of the sample space.
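To make that concrete, here is a minimal sketch of regular grid sampling on the unit square. This is illustration code with my own helper names, separate from the full program later in this post:

#include <vector>

struct Point { float x, y; };

// Evenly spaced samples: a side x side grid of points on the unit square,
// with each sample placed at the center of its grid cell.
std::vector<Point> RegularGridSamples (size_t side)
{
    std::vector<Point> samples;
    samples.reserve(side * side);
    for (size_t y = 0; y < side; ++y)
        for (size_t x = 0; x < side; ++x)
            samples.push_back({ (float(x) + 0.5f) / float(side),
                                (float(y) + 0.5f) / float(side) });
    return samples;
}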

Here are 256, 1024 and 4096 samples:


Here are those samples taken from a source image:


Here is the DFT (frequency amplitude) of those samples:


White Noise Sampling

White noise sampling just chooses random numbers for where to place the samples.
The random numbers are uniformly distributed, meaning they are just plain vanilla random numbers where each number is equally likely to come up.
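For example, here is a minimal sketch of white noise sampling on the unit square using the C++ standard library (again illustration code with my own helper names, not part of the full program below):

#include <random>
#include <vector>

struct Point { float x, y; };

// White noise: every location is equally likely, so just draw uniform
// random coordinates for each sample independently.
std::vector<Point> WhiteNoiseSamples (size_t count, std::mt19937& rng)
{
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);
    std::vector<Point> samples(count);
    for (Point& p : samples)
        p = { dist(rng), dist(rng) };
    return samples;
}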

White noise sampling can make for noisy results, and suffers from the fact that white noise sample points can clump together and leave empty space. Clumped sample points give redundant information, while empty space is a region of the sample space you have no information about. In general though, noise is often preferred over aliasing, so white noise samples are generally preferred over regular sampling. Monte Carlo integration also requires random samples.
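As a reminder of what that looks like, here is a sketch of basic Monte Carlo integration over [0,1), which just averages the function at uniform random sample points (an illustration under my own naming, not code from this post):

#include <random>

// Estimate the integral of f over [0,1) by averaging f at random points.
// The estimate converges to the true integral as sampleCount grows.
template <typename TFunction>
float MonteCarloIntegrate (const TFunction& f, size_t sampleCount, std::mt19937& rng)
{
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);
    float sum = 0.0f;
    for (size_t i = 0; i < sampleCount; ++i)
        sum += f(dist(rng));
    return sum / float(sampleCount);
}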

White noise is called white noise because it contains all frequencies approximately evenly, like how white light is made up of all frequencies of light.

Here are 256, 1024 and 4096 samples:


Here are those samples taken from a source image:


Here is the DFT (frequency amplitude) of those samples:


Blue Noise Sampling

Lastly there is blue noise sampling, which is somewhere between regular sampling and white noise sampling. Blue noise sampling has randomly placed points like white noise does, but the points are approximately evenly spaced, which is more like regular sampling.

Things like low discrepancy sequences, stratified sampling, and jittered regular sampling mimic blue noise, and are often a cheaper alternative when an approximation is acceptable. More info on low discrepancy sequences is available on my post here: When Random Numbers Are Too Random: Low Discrepancy Sequences
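To illustrate the cheap end of that spectrum, here is a sketch of jittered regular sampling: one random point per cell of a regular grid, which keeps points random but roughly evenly spaced (helper names are mine):

#include <random>
#include <vector>

struct Point { float x, y; };

// Jittered grid: one sample per cell of a side x side grid, placed at a
// uniformly random position within its cell.
std::vector<Point> JitteredGridSamples (size_t side, std::mt19937& rng)
{
    std::uniform_real_distribution<float> jitter(0.0f, 1.0f);
    std::vector<Point> samples;
    samples.reserve(side * side);
    for (size_t y = 0; y < side; ++y)
        for (size_t x = 0; x < side; ++x)
            samples.push_back({ (float(x) + jitter(rng)) / float(side),
                                (float(y) + jitter(rng)) / float(side) });
    return samples;
}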

Blue noise is called blue noise because it contains higher amounts of higher frequencies and lower amounts of lower frequencies. This is analogous to blue light, which is the higher frequency (bluer) end of visible light.

Here are 256, 1024 and 4096 samples:


Here are those samples taken from a source image:


Here is the DFT (frequency amplitude) of those samples:


Comparison

Imagine you were a robot with 4096 light/color sensors. Which of the arrangements below do you think would give you the best information about the world around you with that number of sensors?



To me, the regular grid and the blue noise are a bit of a coin toss, while the white noise version is awful.

The regular grid does seem to show straight-lined things better (the road, sidewalk, etc), but it makes things that aren’t straight – like the trees – look blocky. The blue noise grid does the reverse: it makes straight things look wavy, while making it easier to see the true shape of things that aren’t straight.

Mathematically, blue noise has better sampling properties (good coverage of the sample space, with noise instead of aliasing), so maybe this example isn’t the best way to show the value of blue noise.

Here is the real image:

Apparently the photo-receptors in our eyes are arranged in a blue noise pattern. Some people say this is why blue noise is more agreeable with our perception, but since it also helps numerical integration converge faster for lower sample counts (compared to white noise – which wins out with larger sample counts BTW!), it seems like there is a more fundamental reason which would cause an evolutionary advantage for them to be arranged that way in the first place.

Generating Blue Noise Sample Points

The obvious question is: I know how to make uniform and random sample points. How do I make blue noise sample points?

There are multiple ways to do it, but a method that I find very easy to understand and to implement is “Mitchell’s Best Candidate Algorithm”.

The algorithm is as follows:

  1. Place a random sample point as the first sample point.
  2. Generate some number of random dots as candidates to be the next sample point.
  3. Whichever of these dots is farthest away from the closest existing sample point is the winner. Place that dot as the new sample point.
  4. Go to 2 and repeat until you have as many sample points as you want.

The algorithm is pretty simple, but there are two other important details that are needed to give you good results. A minimal sketch that incorporates both follows this list:

  • When calculating distance between dots, you need to consider wrap around. More info on how to do that here: Calculating the Distance Between Points in “Wrap Around” (Toroidal) Space.
  • The number of candidates you generate should scale up with the number of existing sample points. As the original paper describing this technique says, doing that helps ensure that the statistics of your sample points stay constant as the number of sample points changes.
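Here is that sketch, working on the unit square with toroidal squared distance, and scaling the candidate count with the number of existing samples. The helper names are mine and this is illustration code, not the program that made this post's images (that full program is in the Source Code section below):

#include <cfloat>
#include <cmath>
#include <random>
#include <vector>

struct Point { float x, y; };

// Toroidal (wrap around) squared distance on the unit square.
float ToroidalDistanceSquared (Point a, Point b)
{
    float dx = std::abs(a.x - b.x);
    float dy = std::abs(a.y - b.y);
    if (dx > 0.5f) dx = 1.0f - dx;
    if (dy > 0.5f) dy = 1.0f - dy;
    return dx*dx + dy*dy;
}

// Mitchell's Best Candidate: each new sample is the best of
// (numExistingSamples * m + 1) random candidates, where "best" means
// farthest from its closest existing sample point.
std::vector<Point> MitchellBestCandidate (size_t sampleCount, size_t m, std::mt19937& rng)
{
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);
    std::vector<Point> samples;
    samples.reserve(sampleCount);

    while (samples.size() < sampleCount)
    {
        size_t numCandidates = samples.size() * m + 1;
        Point bestCandidate = { 0.0f, 0.0f };
        float bestDistance = -1.0f;

        for (size_t i = 0; i < numCandidates; ++i)
        {
            Point candidate = { dist(rng), dist(rng) };

            // find the distance from this candidate to its closest existing sample
            float minDist = FLT_MAX;
            for (const Point& s : samples)
            {
                float d = ToroidalDistanceSquared(candidate, s);
                if (d < minDist)
                    minDist = d;
            }

            // the candidate whose closest sample is farthest away wins
            if (minDist > bestDistance)
            {
                bestDistance = minDist;
                bestCandidate = candidate;
            }
        }
        samples.push_back(bestCandidate);
    }
    return samples;
}

With m = 1, this matches the c_blueNoiseSampleMultiplier setting used by the full program below.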

When I first tried to get this algorithm working, I was generating a fixed number of candidates each time. That gave me these pretty terrible results:

However, when I generated a number of candidates equal to the number of existing sample points multiplied by a constant “m”, I got much better results, even when m was 1! (Note: m=0 is the same as white noise in this image. I generated NumSamples*m+1 candidates each time.)

Related Computer Graphics Stack Exchange Question: Mitchell’s Best Candidate Algorithm

If you store existing sample points in a grid, you can speed up the algorithm, since it will be faster to find the closest existing point to a candidate. I didn’t do that in the implementation in this post.
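For example (a sketch of the idea, reusing the Point and ToroidalDistanceSquared helpers from the sketch above – again, not something the code in this post does), you could bucket samples into a coarse uniform grid and search cells in expanding rings outward from a candidate:

#include <algorithm>
#include <cfloat>
#include <cstdlib>
#include <vector>

// Buckets sample points (assumed to be in [0,1) x [0,1)) into a G x G grid
// so closest-point queries only visit nearby cells instead of every sample.
struct PointGrid
{
    static const int G = 16;
    std::vector<Point> cells[G][G];

    void Insert (Point p)
    {
        cells[int(p.y * float(G))][int(p.x * float(G))].push_back(p);
    }

    // Squared toroidal distance from p to the closest stored point.
    // Visits rings of cells outward from p's cell, and stops when no
    // unvisited ring could possibly hold a closer point.
    float ClosestDistanceSquared (Point p) const
    {
        int cx = int(p.x * float(G));
        int cy = int(p.y * float(G));
        float best = FLT_MAX;
        for (int r = 0; r <= G / 2; ++r)
        {
            for (int dy = -r; dy <= r; ++dy)
            {
                for (int dx = -r; dx <= r; ++dx)
                {
                    // only visit the outer ring; inner cells were already done
                    if (std::max(std::abs(dx), std::abs(dy)) != r)
                        continue;
                    int ix = (cx + dx + G) % G;  // modulus handles wrap around
                    int iy = (cy + dy + G) % G;
                    for (const Point& s : cells[iy][ix])
                        best = std::min(best, ToroidalDistanceSquared(p, s));
                }
            }
            // every point in ring r+1 or beyond is at least r/G away, so if
            // the best squared distance already beats that, we can stop.
            if (best <= (float(r) / float(G)) * (float(r) / float(G)))
                break;
        }
        return best;
    }
};

The cell count is a tradeoff: more cells mean fewer points tested per cell, but more empty cells to step over.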

You may be able to multithread this algorithm, but I haven’t tried it. The idea would be that if you needed to make and test N candidates, you could split that among M threads, so long as N was large enough to make it worthwhile. I didn’t do that in this post.
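A sketch of that idea (again hypothetical, building on the Point and ToroidalDistanceSquared helpers from the sketch above): each thread evaluates its share of the candidates against the existing samples and keeps its own local best, then the main thread reduces the per-thread winners down to one.

#include <cfloat>
#include <random>
#include <thread>
#include <vector>

// Evaluate numCandidates candidates across numThreads threads and return
// the one that is farthest from its closest existing sample. The samples
// vector is only read, and each thread writes only its own slot in bests,
// so no synchronization is needed.
Point BestCandidateParallel (const std::vector<Point>& samples,
    size_t numCandidates, size_t numThreads)
{
    struct SBest { Point point; float distSq; };
    std::vector<SBest> bests(numThreads, SBest{ { 0.0f, 0.0f }, -1.0f });

    std::vector<std::thread> threads;
    for (size_t t = 0; t < numThreads; ++t)
    {
        threads.emplace_back(
            [&, t] ()
            {
                // per thread rng, so threads don't share mutable state
                std::mt19937 rng((unsigned int)(std::random_device{}() + t));
                std::uniform_real_distribution<float> dist(0.0f, 1.0f);

                // this thread's share of the candidates
                size_t count = numCandidates / numThreads
                    + ((t < numCandidates % numThreads) ? 1 : 0);

                for (size_t i = 0; i < count; ++i)
                {
                    Point candidate = { dist(rng), dist(rng) };
                    float minDist = FLT_MAX;
                    for (const Point& s : samples)
                    {
                        float d = ToroidalDistanceSquared(candidate, s);
                        if (d < minDist)
                            minDist = d;
                    }
                    if (minDist > bests[t].distSq)
                        bests[t] = SBest{ candidate, minDist };
                }
            }
        );
    }
    for (std::thread& t : threads)
        t.join();

    // reduce the per-thread winners down to the overall winner
    SBest winner = bests[0];
    for (const SBest& b : bests)
        if (b.distSq > winner.distSq)
            winner = b;
    return winner.point;
}

Note that each thread needs its own random number generator; sharing one std::mt19937 across threads would be a data race.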

Lastly, instead of working with distance, you can work with SQUARED distance to avoid many unnecessary square root calculations. Comparing squared distances gives the same ordering as comparing distances, so the results are identical. The example code here does that optimization.

Links

The 1991 paper that described this technique:
Spectrally optimal sampling for distribution ray tracing

Another interesting link on this algorithm:
Mitchell’s Best-Candidate

This algorithm isn’t that great for making dense sample points, or for use in dithering / stippling. Look for a future blog post about those use cases, but for now, this is a great resource:
Free Blue Noise Textures (and info / examples on blue noise texture usage)

A physics based approach to blue noise distributed samples:
Electrostatic Halftoning

A neat read on the “void and cluster” method for generating blue noise, and also a good read on what ordered dithering is all about:
The void and cluster method for dither array generation

Source Code

Here is some simple standalone C++ source code which can generate blue noise sample points, and which was also used to generate the images in this post.

It’s also on github (along with the source image) at https://github.com/Atrix256/RandomCode/tree/master/Mitchell

#define _CRT_SECURE_NO_WARNINGS

#include <windows.h>  // for bitmap headers.  Sorry non windows people!
#include <stdint.h>
#include <stdio.h>     // for printf / sprintf / fopen
#include <float.h>     // for FLT_MAX
#include <cmath>       // for std::sqrt / log
#include <algorithm>   // for std::fill
#include <vector>
#include <complex>
#include <thread>
#include <atomic>
#include <random>
#include <array>

typedef uint8_t uint8;
typedef int64_t int64;

const float c_pi = 3.14159265359f;

//======================================================================================
struct SImageData
{
    SImageData ()
        : m_width(0)
        , m_height(0)
    { }
   
    size_t m_width;
    size_t m_height;
    size_t m_pitch;
    std::vector<uint8> m_pixels;
};

//======================================================================================
struct SColor
{
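    // note: members are stored in B, G, R order, matching the byte layout of 24 bit BMP pixels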
    SColor (uint8 _R = 0, uint8 _G = 0, uint8 _B = 0)
        : R(_R), G(_G), B(_B)
    { }

    inline void Set (uint8 _R, uint8 _G, uint8 _B)
    {
        R = _R;
        G = _G;
        B = _B;
    }
 
    uint8 B, G, R;
};

//======================================================================================
struct SImageDataComplex
{
    SImageDataComplex ()
        : m_width(0)
        , m_height(0)
    { }
 
    size_t m_width;
    size_t m_height;
    std::vector<std::complex<float>> m_pixels;
};

//======================================================================================
std::complex<float> DFTPixel (const SImageData &srcImage, size_t K, size_t L)
{
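    // Computes a single output frequency (K, L) of the brute force 2D DFT:
    // F(K, L) = sum over all pixels (x, y) of f(x, y) * e^(-2*pi*i*(K*x/width + L*y/height))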
    std::complex<float> ret(0.0f, 0.0f);
 
    for (size_t x = 0; x < srcImage.m_width; ++x)
    {
        for (size_t y = 0; y < srcImage.m_height; ++y)
        {
            // Get the pixel value (assuming greyscale) and convert it to [0,1] space
            const uint8 *src = &srcImage.m_pixels[(y * srcImage.m_pitch) + x * 3];
            float grey = float(src[0]) / 255.0f;
 
            // Add to the sum of the return value
            float v = float(K * x) / float(srcImage.m_width);
            v += float(L * y) / float(srcImage.m_height);
            ret += std::complex<float>(grey, 0.0f) * std::polar<float>(1.0f, -2.0f * c_pi * v);
        }
    }
 
    return ret;
}
 
//======================================================================================
void DFTImage (const SImageData &srcImage, SImageDataComplex &destImage)
{
    // NOTE: this function assumes srcImage is greyscale, so it works on only the red component of srcImage.

    // size the output dft data
    destImage.m_width = srcImage.m_width;
    destImage.m_height = srcImage.m_height;
    destImage.m_pixels.resize(destImage.m_width*destImage.m_height);

    size_t numThreads = std::thread::hardware_concurrency();
    //if (numThreads > 0)
        //numThreads = numThreads - 1;

    std::vector<std::thread> threads;
    threads.resize(numThreads);

    printf("Doing DFT with %zu threads...\n", numThreads);

    // calculate 2d dft (brute force, not using fast fourier transform) multithreadedly
    std::atomic<size_t> nextRow(0);
    for (std::thread& t : threads)
    {
        t = std::thread(
            [&] ()
            {
                size_t row = nextRow.fetch_add(1);
                bool reportProgress = (row == 0);
                int lastPercent = -1;

                while (row < srcImage.m_height)
                {
                    // calculate the DFT for every pixel / frequency in this row
                    for (size_t x = 0; x < srcImage.m_width; ++x)
                    {
                        destImage.m_pixels[row * destImage.m_width + x] = DFTPixel(srcImage, x, row);
                    }

                    // report progress if we should
                    if (reportProgress)
                    {
                        int percent = int(100.0f * float(row) / float(srcImage.m_height));
                        if (lastPercent != percent)
                        {
                            lastPercent = percent;
                            printf("            \rDFT: %i%%", lastPercent);
                        }
                    }

                    // go to the next row
                    row = nextRow.fetch_add(1);
                }
            }
        );
    }

    for (std::thread& t : threads)
        t.join();

    printf("\n");
}

//======================================================================================
void GetMagnitudeData (const SImageDataComplex& srcImage, SImageData& destImage)
{
    // size the output image
    destImage.m_width = srcImage.m_width;
    destImage.m_height = srcImage.m_height;
    destImage.m_pitch = 4 * ((srcImage.m_width * 24 + 31) / 32);
    destImage.m_pixels.resize(destImage.m_pitch*destImage.m_height);
 
    // get floating point magnitude data
    std::vector<float> magArray;
    magArray.resize(srcImage.m_width*srcImage.m_height);
    float maxmag = 0.0f;
    for (size_t x = 0; x < srcImage.m_width; ++x)
    {
        for (size_t y = 0; y < srcImage.m_height; ++y)
        {
            // Offset the information by half width & height in the positive direction.
            // This makes frequency 0 (DC) be at the image origin, like most diagrams show it.
            int k = (x + (int)srcImage.m_width / 2) % (int)srcImage.m_width;
            int l = (y + (int)srcImage.m_height / 2) % (int)srcImage.m_height;
            const std::complex<float> &src = srcImage.m_pixels[l*srcImage.m_width + k];
 
            float mag = std::abs(src);
            if (mag > maxmag)
                maxmag = mag;
 
            magArray[y*srcImage.m_width + x] = mag;
        }
    }
    if (maxmag == 0.0f)
        maxmag = 1.0f;
 
    const float c = 255.0f / log(1.0f+maxmag);
 
    // normalize the magnitude data and send it back in [0, 255]
    for (size_t x = 0; x < srcImage.m_width; ++x)
    {
        for (size_t y = 0; y < srcImage.m_height; ++y)
        {
            float src = c * log(1.0f + magArray[y*srcImage.m_width + x]);
 
            uint8 magu8 = uint8(src);
 
            uint8* dest = &destImage.m_pixels[y*destImage.m_pitch + x * 3];
            dest[0] = magu8;
            dest[1] = magu8;
            dest[2] = magu8;
        }
    }
}
 
//======================================================================================
void GetPhaseData (const SImageDataComplex& srcImage, SImageData& destImage)
{
    // size the output image
    destImage.m_width = srcImage.m_width;
    destImage.m_height = srcImage.m_height;
    destImage.m_pitch = 4 * ((srcImage.m_width * 24 + 31) / 32);
    destImage.m_pixels.resize(destImage.m_pitch*destImage.m_height);
 
    // get floating point phase data, and encode it in [0,255]
    for (size_t x = 0; x < srcImage.m_width; ++x)
    {
        for (size_t y = 0; y < srcImage.m_height; ++y)
        {
            // Offset the information by half width & height in the positive direction.
            // This makes frequency 0 (DC) be at the image origin, like most diagrams show it.
            int k = (x + (int)srcImage.m_width / 2) % (int)srcImage.m_width;
            int l = (y + (int)srcImage.m_height / 2) % (int)srcImage.m_height;
            const std::complex<float> &src = srcImage.m_pixels[l*srcImage.m_width + k];
 
            // get phase, and change it from [-pi,+pi] to [0,255]
            // note: atan2 takes (y, x), so the imaginary part comes first
            float phase = (0.5f + 0.5f * std::atan2(src.imag(), src.real()) / c_pi);
            if (phase < 0.0f)
                phase = 0.0f;
            if (phase > 1.0f)
                phase = 1.0f;
            uint8 phase255 = uint8(phase * 255);
 
            // write the phase as grey scale color
            uint8* dest = &destImage.m_pixels[y*destImage.m_pitch + x * 3];
            dest[0] = phase255;
            dest[1] = phase255;
            dest[2] = phase255;
        }
    }
}

//======================================================================================
bool ImageSave (const SImageData &image, const char *fileName)
{
    // open the file if we can
    FILE *file;
    file = fopen(fileName, "wb");
    if (!file) {
        printf("Could not save %s\n", fileName);
        return false;
    }
   
    // make the header info
    BITMAPFILEHEADER header;
    BITMAPINFOHEADER infoHeader;
   
    header.bfType = 0x4D42;
    header.bfReserved1 = 0;
    header.bfReserved2 = 0;
    header.bfOffBits = 54;
   
    infoHeader.biSize = 40;
    infoHeader.biWidth = (LONG)image.m_width;
    infoHeader.biHeight = (LONG)image.m_height;
    infoHeader.biPlanes = 1;
    infoHeader.biBitCount = 24;
    infoHeader.biCompression = 0;
    infoHeader.biSizeImage = (DWORD) image.m_pixels.size();
    infoHeader.biXPelsPerMeter = 0;
    infoHeader.biYPelsPerMeter = 0;
    infoHeader.biClrUsed = 0;
    infoHeader.biClrImportant = 0;
   
    header.bfSize = infoHeader.biSizeImage + header.bfOffBits;
   
    // write the data and close the file
    fwrite(&header, sizeof(header), 1, file);
    fwrite(&infoHeader, sizeof(infoHeader), 1, file);
    fwrite(&image.m_pixels[0], infoHeader.biSizeImage, 1, file);
    fclose(file);
  
    return true;
}

//======================================================================================
bool ImageLoad (const char *fileName, SImageData& imageData)
{
    // open the file if we can
    FILE *file;
    file = fopen(fileName, "rb");
    if (!file)
        return false;
 
    // read the headers if we can
    BITMAPFILEHEADER header;
    BITMAPINFOHEADER infoHeader;
    if (fread(&header, sizeof(header), 1, file) != 1 ||
        fread(&infoHeader, sizeof(infoHeader), 1, file) != 1 ||
        header.bfType != 0x4D42 || infoHeader.biBitCount != 24)
    {
        fclose(file);
        return false;
    }
 
    // read in our pixel data if we can. Note that it's in BGR order, and each row is padded to the next multiple of 4 bytes
    imageData.m_pixels.resize(infoHeader.biSizeImage);
    fseek(file, header.bfOffBits, SEEK_SET);
    if (fread(&imageData.m_pixels[0], imageData.m_pixels.size(), 1, file) != 1)
    {
        fclose(file);
        return false;
    }
 
    imageData.m_width = infoHeader.biWidth;
    imageData.m_height = infoHeader.biHeight;
    imageData.m_pitch = 4 * ((imageData.m_width * 24 + 31) / 32);
 
    fclose(file);
    return true;
}

//======================================================================================
void ImageInit (SImageData& image, size_t width, size_t height)
{
    image.m_width = width;
    image.m_height = height;
    image.m_pitch = 4 * ((width * 24 + 31) / 32);
    image.m_pixels.resize(image.m_pitch * image.m_height);
    std::fill(image.m_pixels.begin(), image.m_pixels.end(), 0);
}

//======================================================================================
void SampleTest (const SImageData& image, const SImageData& samples, const char* fileName)
{
    SImageData outImage;
    ImageInit(outImage, image.m_width, image.m_height);

    for (size_t y = 0; y < image.m_height; ++y)
    {
        size_t sampleY = y % samples.m_height;
        for (size_t x = 0; x < image.m_width; ++x)
        {
            size_t sampleX = x % samples.m_width;

            const SColor* samplePixel = (SColor*)&samples.m_pixels[sampleY*samples.m_pitch + sampleX * 3];
            const SColor* imagePixel = (SColor*)&image.m_pixels[y*image.m_pitch + x * 3];

            SColor* outPixel = (SColor*)&outImage.m_pixels[y*outImage.m_pitch + x * 3];

            if (samplePixel->R == 255)
                *outPixel = *imagePixel;
        }
    }

    ImageSave(outImage, fileName);
}

//======================================================================================
inline float Distance (size_t x1, size_t y1, size_t x2, size_t y2, int imageWidth)
{
    // this returns the SQUARED toroidal distance between the points
    // aka the interval [0, width) wraps around.
    // note: it uses imageWidth for both axes, so it assumes a square image.
    float dx = std::abs(float(x2) - float(x1));
    float dy = std::abs(float(y2) - float(y1));

    if (dx > float(imageWidth / 2))
        dx = float(imageWidth) - dx;

    if (dy > float(imageWidth / 2))
        dy = float(imageWidth) - dy;

    // return squared distance to avoid an unnecessary square root.
    // comparisons give the same result either way.
    return dx*dx + dy*dy;
}

//======================================================================================
int main (int argc, char** argv)
{
    const size_t c_imageSize = 256;
    const bool c_doDFT = true;

    const size_t c_blueNoiseSampleMultiplier = 1;

    const size_t samples1 = 256;   // 16x16
    const size_t samples2 = 1024;  // 32x32
    const size_t samples3 = 4096;  // 64x64

    // load the source image
    SImageData image;
    ImageLoad("Image.bmp", image);

    // init random number generator
    std::random_device rd;
    std::mt19937 rng(rd());
    std::uniform_int_distribution<int> dist(0, c_imageSize - 1);

    // white noise
    {
        SImageData samples;
        ImageInit(samples, c_imageSize, c_imageSize);

        for (size_t i = 1; i <= samples3; ++i)
        {
            size_t x = dist(rng);
            size_t y = dist(rng);

            SColor* pixel = (SColor*)&samples.m_pixels[y*samples.m_pitch + x * 3];
            pixel->R = pixel->G = pixel->B = 255;

            if (i == samples1 || i == samples2 || i == samples3)
            {
                printf("White Noise %zu samples\n", i);

                char fileName[256];
                sprintf(fileName, "WhiteNoise_%zu.bmp", i);
                ImageSave(samples, fileName);

                sprintf(fileName, "WhiteNoise_%zu_samples.bmp", i);
                SampleTest(image, samples, fileName);

                if (c_doDFT)
                {
                    SImageDataComplex frequencyData;
                    DFTImage(samples, frequencyData);

                    SImageData magnitudeData;
                    GetMagnitudeData(frequencyData, magnitudeData);

                    sprintf(fileName, "WhiteNoise_%zu_mag.bmp", i);
                    ImageSave(magnitudeData, fileName);
                }
            }
        }
    }

    // regular samples
    {

        auto GridTest = [&] (size_t sampleCount) {
            SImageData samples;
            ImageInit(samples, c_imageSize, c_imageSize);

            size_t side = size_t(std::sqrt(float(sampleCount)));

            size_t pixels = c_imageSize / side;

            for (size_t y = 0; y < side; ++y)
            {
                size_t pixelY = y * pixels;
                for (size_t x = 0; x < side; ++x)
                {
                    size_t pixelX = x * pixels;

                    SColor* pixel = (SColor*)&samples.m_pixels[pixelY*samples.m_pitch + pixelX * 3];
                    pixel->R = pixel->G = pixel->B = 255;
                }
            }

            printf("Regular %zu samples\n", sampleCount);

            char fileName[256];
            sprintf(fileName, "Regular_%zu.bmp", sampleCount);
            ImageSave(samples, fileName);

            sprintf(fileName, "Regular_%zu_samples.bmp", sampleCount);
            SampleTest(image, samples, fileName);

            if (c_doDFT)
            {
                SImageDataComplex frequencyData;
                DFTImage(samples, frequencyData);

                SImageData magnitudeData;
                GetMagnitudeData(frequencyData, magnitudeData);

                sprintf(fileName, "Regular_%zu_mag.bmp", sampleCount);
                ImageSave(magnitudeData, fileName);
            }
        };

        GridTest(samples1);
        GridTest(samples2);
        GridTest(samples3);
    }

    // blue noise
    {
        SImageData samples;
        ImageInit(samples, c_imageSize, c_imageSize);

        std::vector<std::array<size_t, 2>> samplesPos;

        size_t percent = (size_t)-1;

        for (size_t i = 1; i <= samples3; ++i)
        {
            size_t newPercent;
            if (i <= samples1)
                newPercent = size_t(100.0f * float(i) / float(samples1));
            else if (i <= samples2)
                newPercent = size_t(100.0f * float(i - samples1) / float(samples2 - samples1));
            else
                newPercent = size_t(100.0f * float(i - samples2) / float(samples3 - samples2));
            if (percent != newPercent)
            {
                percent = newPercent;
                printf("\rGenerating Blue Noise Samples: %zu%%", percent);
            }

            // keep the candidate that is farthest from its closest existing sample point
            size_t numCandidates = samplesPos.size() * c_blueNoiseSampleMultiplier + 1;
            float bestDistance = 0.0f;
            size_t bestCandidateX = 0;
            size_t bestCandidateY = 0;
            for (size_t candidate = 0; candidate < numCandidates; ++candidate)
            {
                size_t x = dist(rng);
                size_t y = dist(rng);

                // calculate the closest distance from this point to an existing sample
                float minDist = FLT_MAX;
                for (const std::array<size_t, 2>& samplePos : samplesPos)
                {
                    float distSq = Distance(x, y, samplePos[0], samplePos[1], c_imageSize);
                    if (distSq < minDist)
                        minDist = distSq;
                }

                if (minDist > bestDistance)
                {
                    bestDistance = minDist;
                    bestCandidateX = x;
                    bestCandidateY = y;
                }
            }
            samplesPos.push_back({ bestCandidateX, bestCandidateY });

            SColor* pixel = (SColor*)&samples.m_pixels[bestCandidateY*samples.m_pitch + bestCandidateX * 3];
            pixel->R = pixel->G = pixel->B = 255;

            if (i == samples1 || i == samples2 || i == samples3)
            {
                printf("\nBlue Noise %zu samples\n", i);

                char fileName[256];
                sprintf(fileName, "BlueNoise_%zu.bmp", i);
                ImageSave(samples, fileName);

                sprintf(fileName, "BlueNoise_%zu_samples.bmp", i);
                SampleTest(image, samples, fileName);

                if (c_doDFT)
                {
                    SImageDataComplex frequencyData;
                    DFTImage(samples, frequencyData);

                    SImageData magnitudeData;
                    GetMagnitudeData(frequencyData, magnitudeData);

                    sprintf(fileName, "BlueNoise_%zu_mag.bmp", i);
                    ImageSave(magnitudeData, fileName);
                }
            }
        }
    }

    return 0;
}

Comments

  1. Does it need any special properties besides being the same except in a circle instead of a rectangle? One way could be to use “rejection sampling” – throw out any generated point that is outside the circle. Or, do it in a rectangle and then afterwards throw out everything that isn’t in the circle. You could do better by considering wrap around “toroidal” distance around a circle instead of a rectangle, but I’m not sure off the top of my head how you’d do that. I’d bet there’s a way though 🙂

      That help any?


  2. Hi, Allan! I was wondering if you could help me.
    I have a rough understanding of the Fourier Transform: that it is used in this instance to translate from the spatial domain to the frequency domain. And I understand that the DFT images you show have pixels that represent frequency values, but I’m not sure exactly how to interpret them. It seems the answer is in DFTPixel(), but I can’t quite determine what it represents in words. Could you help point me in the right direction?

    Thanks! And the blog post was fun regardless of my lack of full understanding! 🙂


    • Yeah sure. So in those DFT images, the middle is the frequency 0hz aka “DC”. As you go out from the center, you start going to higher frequencies. The brightness of the pixel tells you the magnitude of the frequency at that pixel.
      The brightness itself was put through a log function and the image was normalized so you can’t take anything DIRECTLY from it, like you can’t inspect the pixel and say the amplitude of some frequency is 3, but you can see from the image the relative amplitudes of things. Blue noise shows a big dark circle in the middle, showing that it doesn’t have much low frequency content.
      In 2d, the frequencies are made up of a vertical and horizontal frequency. From the center, if you go right, you get to higher frequencies on the horizontal axis, and going left, you go into negative frequencies. Going up and down is similar. So the frequency at each pixel is made up of a vertical and horizontal component.
      Besides magnitude, there is also a “phase” part of DFT, which tells you if that frequency started at 0 degrees, or some other degrees (but same frequency). These images only show magnitude though, not phase.
      For this kind of image analysis, phase isn’t as useful as far as I know.


      • Awesome, that’s really helpful!
        If I’m not mistaken, there seems to be a symmetry in the images that feels like potentially redundant data. If you cut the images in half through the x-axis, the bottom half is equal to the top half mirrored along the x-axis and then flipped along the y-axis. So that means that the magnitude of the frequency at (x, y) is always equal to the magnitude of the frequency at (-x, -y)? If that’s true at all, is there a reason why the images aren’t typically half the size? Is it just for completeness?
        Also, I know it’s not the topic of your article. Sorry for the tangent questions.


      • Great question. A harder core DSP person would know this better, but I think it’s that if you only have “real valued signals” (no imaginary numbers in your image lol), then yes… the positive and negative magnitudes will match.
        There must be some different info in the phase data or something though, otherwise we could cut the data in half and get good lossless compression for free. And then do it again, and again, and again.
        Not sure where the important data is though.


  3. The output of the source code is not working for other images; it’s only working for the specific image given in the blog. Is there anything we need to change in the source code? Can anyone help, please?


      • // load the source image
        SImageData image;
        ImageLoad(“Image.bmp”, image);

        In this part of the code I loaded a different image in bmp format. It compiled successfully and the output images are generated in the folder. There are three files for a given size: 1) BlueNoise_256.bmp 2) BlueNoise_256_mag.bmp 3) BlueNoise_256_samples.bmp.
        The problem here is that the 3rd one (BlueNoise_256_samples.bmp) is not opening for the input image which I loaded, while it opens fine for the source image which you gave. Any information regarding this? Or do we need to load the source image anywhere else?


  4. I tried using stb_image.h, and even then, same issue: the samples (3rd format files) are not opening for the specific image.


    • Unfortunately not. The fastest I’ve seen this algorithm run is to precompute the gaussian kernel for 99.5% of the energy from the gaussian, and then when you turn on a point, additively blend that onto the energy field. So basically, it’s just rasterization at that point and is pretty fast.
      However, the algorithm then is a loop from 0 to the number of pixels, where you turn on a pixel, rasterize a quad to the energy field, then scan for the smallest energy value. There’s a data dependency that makes it impossible or very difficult to meaningfully parallelize.


