Box Blur

If you have ever heard the terms “Box Blur”, “Boxcar Function”, “Box Filter”, “Boxcar Integrator” or various other combinations of those words, you may have thought it was some advanced concept that is hard to understand and hard to implement. If that’s what you thought, prepare to be surprised!

A box filter is nothing more than taking N samples of data (or NxN samples of data, or NxNxN, etc.) and averaging them! Yes, that is all there is to it 😛

In this post, we are going to implement a box blur by averaging pixels.

1D Case

For the case of a 1d box filter, let’s say we wanted every data point to be the result of averaging it with its two neighbors. It’d be easy enough to program that directly, but let’s look at it a different way. What weight would we need to multiply each of the three values by (the value and its two neighbors) to make it come out to the average?

Yep, you guessed it! For every data value, you multiply it and its neighbors by 1/3 to come up with the average value. We could easily increase the size of the filter to 5 pixels and multiply each pixel by 1/5 instead. We could continue the pattern as high as we wanted.

One thing you might notice is that if we want a buffer with all the results, we can’t just alter the source data as we go, because each average needs the unaltered source values to multiply those weights with in order to get the correct results. Because of that, we need a second buffer to put the results of the filtering into.
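Here’s a minimal sketch of a 3 tap 1d box filter in C++ to make that concrete (out of range neighbors are treated as zero, the same policy the image code later in this post uses):

#include <vector>

// Apply a 3 tap box filter (weights of 1/3) to src, writing into a
// separate buffer so that every average sees unaltered source values.
// Out of range neighbors are treated as zero.
std::vector<float> BoxFilter1D (const std::vector<float>& src)
{
    std::vector<float> dest(src.size(), 0.0f);
    for (int i = 0; i < int(src.size()); ++i)
    {
        float sum = 0.0f;
        for (int offset = -1; offset <= 1; ++offset)
        {
            int index = i + offset;
            if (index >= 0 && index < int(src.size()))
                sum += src[index];
        }
        dest[i] = sum / 3.0f;
    }
    return dest;
}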

Believe it or not, that diagram above is a convolution kernel, and how we talked about applying it is how you do convolution in 1d! It just so happens that this convolution kernel averages three pixels into one, which also happens to provide a low pass filter type effect.

Low pass filtering is what is done before down sampling audio data, to prevent aliasing. Frequencies too high for the new sample rate to represent (anything above the Nyquist frequency, which is half the sample rate) come back as artifacts that make the audio sound bad.

Surprise… blurring can also be seen as low pass filtering, which is something you can do before scaling an image down in size, to prevent aliasing.

2D Case

The 2d case isn’t much more difficult to understand than the 1d case. Instead of only averaging on one axis, we average on two instead:

Something interesting to note is that you can either use this 3×3 2d convolution kernel, or, you could apply the 1d convolution kernel described above on the X axis and then the Y axis. The methods are mathematically equivalent.

Using the 2d convolution kernel would result in 9 multiplications per pixel, but if going with the separated X and then Y 1d kernels, you’d only end up doing 6 multiplications per pixel (3 multiplications per axis). In general, if you have a separable 2d convolution kernel (meaning that you can break it into a per axis 1d convolution), you will end up doing N^2 multiplications per pixel when using the 2d kernel, versus 2N multiplications per pixel when using the 1d kernels. You can see that this adds up quickly in favor of using 1d kernels, but unfortunately not all kernels are separable.

Doing two passes does come at a cost though. Since each pass needs its own buffer to write into, you end up creating two buffers (a temporary plus the destination) instead of just one.

You can build 2d kernels from 1d kernels by multiplying them as a row vector by a column vector. For instance, you can see how multiplying the (1/3,1/3,1/3) kernel by itself as a column vector would create the 2d kernel described above: 3×3 with 1/9 in every spot.

The resulting 3×3 matrix is called an outer product, or a tensor product. Something interesting to note is that you don’t have to do the same operation on each axis!
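Here’s a quick C++ sketch of that: building the 3×3 kernel as the outer product of the (1/3,1/3,1/3) kernel with itself and printing it out. Every entry comes out to 1/9, roughly 0.1111.

#include <stdio.h>

// Build the 3x3 box kernel as the outer product of the 1d kernel
// (1/3, 1/3, 1/3) with itself.
int main ()
{
    const float kernel1D[3] = { 1.0f / 3.0f, 1.0f / 3.0f, 1.0f / 3.0f };
    float kernel2D[3][3];
    for (int y = 0; y < 3; ++y)
    {
        for (int x = 0; x < 3; ++x)
        {
            kernel2D[y][x] = kernel1D[y] * kernel1D[x];
            printf("%0.4f ", kernel2D[y][x]);
        }
        printf("\n");
    }
    return 0;
}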

Examples

Here are some examples of box blurring with different values, using the sample code provided below.

The source image:

Now blurred by a 10×10 box car convolution kernel:

Now blurred by a 100×10 box car convolution kernel:

Shadertoy

You can find a shadertoy implementation of box blurring here: Shadertoy:DF Box Blur

Code

Here’s the code I used to blur the example images above:

#define _CRT_SECURE_NO_WARNINGS
  
#include <stdio.h>
#include <stdint.h>
#include <array>
#include <vector>
#include <functional>
#include <windows.h>  // for bitmap headers.  Sorry non windows people!
  
typedef uint8_t uint8;
 
const float c_pi = 3.14159265359f;
 
struct SImageData
{
    SImageData()
        : m_width(0)
        , m_height(0)
    { }
  
    long m_width;
    long m_height;
    long m_pitch;
    std::vector<uint8> m_pixels;
};
  
void WaitForEnter ()
{
    printf("Press Enter to quit");
    fflush(stdin);
    getchar();
}
  
bool LoadImage (const char *fileName, SImageData& imageData)
{
    // open the file if we can
    FILE *file;
    file = fopen(fileName, "rb");
    if (!file)
        return false;
  
    // read the headers if we can
    BITMAPFILEHEADER header;
    BITMAPINFOHEADER infoHeader;
    if (fread(&header, sizeof(header), 1, file) != 1 ||
        fread(&infoHeader, sizeof(infoHeader), 1, file) != 1 ||
        header.bfType != 0x4D42 || infoHeader.biBitCount != 24)
    {
        fclose(file);
        return false;
    }
  
    // read in our pixel data if we can. Note that it's in BGR order, and each row is padded to the next multiple of 4 bytes
    imageData.m_pixels.resize(infoHeader.biSizeImage);
    fseek(file, header.bfOffBits, SEEK_SET);
    if (fread(&imageData.m_pixels[0], imageData.m_pixels.size(), 1, file) != 1)
    {
        fclose(file);
        return false;
    }
  
    imageData.m_width = infoHeader.biWidth;
    imageData.m_height = infoHeader.biHeight;
  
    imageData.m_pitch = imageData.m_width*3;
    if (imageData.m_pitch & 3)
    {
        imageData.m_pitch &= ~3;
        imageData.m_pitch += 4;
    }
  
    fclose(file);
    return true;
}
  
bool SaveImage (const char *fileName, const SImageData &image)
{
    // open the file if we can
    FILE *file;
    file = fopen(fileName, "wb");
    if (!file)
        return false;
  
    // make the header info
    BITMAPFILEHEADER header;
    BITMAPINFOHEADER infoHeader;
  
    header.bfType = 0x4D42;
    header.bfReserved1 = 0;
    header.bfReserved2 = 0;
    header.bfOffBits = 54;
  
    infoHeader.biSize = 40;
    infoHeader.biWidth = image.m_width;
    infoHeader.biHeight = image.m_height;
    infoHeader.biPlanes = 1;
    infoHeader.biBitCount = 24;
    infoHeader.biCompression = 0;
    infoHeader.biSizeImage = image.m_pixels.size();
    infoHeader.biXPelsPerMeter = 0;
    infoHeader.biYPelsPerMeter = 0;
    infoHeader.biClrUsed = 0;
    infoHeader.biClrImportant = 0;
  
    header.bfSize = infoHeader.biSizeImage + header.bfOffBits;
  
    // write the data and close the file
    fwrite(&header, sizeof(header), 1, file);
    fwrite(&infoHeader, sizeof(infoHeader), 1, file);
    fwrite(&image.m_pixels[0], infoHeader.biSizeImage, 1, file);
    fclose(file);
    return true;
}

const uint8* GetPixelOrBlack (const SImageData& image, int x, int y)
{
    static const uint8 black[3] = { 0, 0, 0 };
    if (x < 0 || x >= image.m_width ||
        y < 0 || y >= image.m_height)
    {
        return black;
    }
 
    return &image.m_pixels[(y * image.m_pitch) + x * 3];
}
 
void BlurImage (const SImageData& srcImage, SImageData &destImage, unsigned int xblur, unsigned int yblur)
{
    // allocate space for copying the image for destImage and tmpImage
    destImage.m_width = srcImage.m_width;
    destImage.m_height = srcImage.m_height;
    destImage.m_pitch = srcImage.m_pitch;
    destImage.m_pixels.resize(destImage.m_height * destImage.m_pitch);
 
    SImageData tmpImage;
    tmpImage.m_width = srcImage.m_width;
    tmpImage.m_height = srcImage.m_height;
    tmpImage.m_pitch = srcImage.m_pitch;
    tmpImage.m_pixels.resize(tmpImage.m_height * tmpImage.m_pitch);
 
    // horizontal blur from srcImage into tmpImage
    {
        float weight = 1.0f / float(xblur);
        int half = xblur / 2;
        for (int y = 0; y < tmpImage.m_height; ++y)
        {
            for (int x = 0; x < tmpImage.m_width; ++x)
            {
                std::array<float, 3> blurredPixel = { 0.0f, 0.0f, 0.0f };
                for (int i = -half; i <= half; ++i)
                {
                    const uint8 *pixel = GetPixelOrBlack(srcImage, x + i, y);
                    blurredPixel[0] += float(pixel[0]) * weight;
                    blurredPixel[1] += float(pixel[1]) * weight;
                    blurredPixel[2] += float(pixel[2]) * weight;
                }
                 
                uint8 *destPixel = &tmpImage.m_pixels[y * tmpImage.m_pitch + x * 3];
 
                destPixel[0] = uint8(blurredPixel[0]);
                destPixel[1] = uint8(blurredPixel[1]);
                destPixel[2] = uint8(blurredPixel[2]);
            }
        }
    }
 
    // vertical blur from tmpImage into destImage
    {
        float weight = 1.0f / float(yblur);
        int half = yblur / 2;
 
        for (int y = 0; y < destImage.m_height; ++y)
        {
            for (int x = 0; x < destImage.m_width; ++x)
            {
                std::array<float, 3> blurredPixel = { 0.0f, 0.0f, 0.0f };
                for (int i = -half; i <= half; ++i)
                {
                    const uint8 *pixel = GetPixelOrBlack(tmpImage, x, y + i);
                    blurredPixel[0] += float(pixel[0]) * weight;
                    blurredPixel[1] += float(pixel[1]) * weight;
                    blurredPixel[2] += float(pixel[2]) * weight;
                }
 
                uint8 *destPixel = &destImage.m_pixels[y * destImage.m_pitch + x * 3];
 
                destPixel[0] = uint8(blurredPixel[0]);
                destPixel[1] = uint8(blurredPixel[1]);
                destPixel[2] = uint8(blurredPixel[2]);
            }
        }
    }
}
 
int main (int argc, char **argv)
{
    int xblur, yblur;
  
    bool showUsage = argc < 5 ||
        (sscanf(argv[3], "%i", &xblur) != 1) ||
        (sscanf(argv[4], "%i", &yblur) != 1);
  
    char *srcFileName = argv[1];
    char *destFileName = argv[2];
  
    if (showUsage)
    {
        printf("Usage: <source> <dest> <xblur> <yblur>nn");
        WaitForEnter();
        return 1;
    }
     
    // make sure blur size is odd
    xblur = xblur | 1;
    yblur = yblur | 1;
 
    printf("Attempting to blur a 24 bit image.n");
    printf("  Source=%sn  Dest=%sn  blur=[%d,%d]nn", srcFileName, destFileName, xblur, yblur);
  
    SImageData srcImage;
    if (LoadImage(srcFileName, srcImage))
    {
        printf("%s loadedn", srcFileName);
        SImageData destImage;
        BlurImage(srcImage, destImage, xblur, yblur);
        if (SaveImage(destFileName, destImage))
            printf("Blurred image saved as %sn", destFileName);
        else
        {
            printf("Could not save blurred image as %sn", destFileName);
            WaitForEnter();
            return 1;
        }
    }
    else
    {
        printf("could not read 24 bit bmp file %snn", srcFileName);
        WaitForEnter();
        return 1;
    }
    return 0;
}

Next Up

Next up will be a Gaussian blur, and I’m nearly done w/ that post but wanted to make this one first as an introductory step!

Before we get there, I wanted to mention that if you do multiple box blurs in a row, it will start to approach Gaussian blurring. I’ve heard that three blurs in a row will make it basically indistinguishable from a Gaussian blur.
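Here’s a minimal sketch of that idea, reusing the BlurImage function from the sample code above (the helper name and the pass count parameter are mine, just for illustration):

// Approximate a Gaussian blur by box blurring several times in a row.
// Three passes is the commonly cited number for a close approximation.
void GaussianApproxBlur (const SImageData& srcImage, SImageData& destImage, unsigned int blurSize, int numPasses)
{
    blurSize |= 1;  // BlurImage expects an odd blur size, like main() enforces
    SImageData temp = srcImage;
    for (int pass = 0; pass < numPasses; ++pass)
    {
        BlurImage(temp, destImage, blurSize, blurSize);
        temp = destImage;
    }
}

You’d call it like GaussianApproxBlur(srcImage, destImage, 11, 3);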

Resizing Images With Bicubic Interpolation

In the last post we saw how to do cubic interpolation on a grid of data.

Strangely enough, when that grid is a grid of pixel data, bicubic interpolation is a common method for resizing images!

Bicubic interpolation can also be used in realtime rendering to make scaled textures look nicer than standard bilinear texture interpolation does.

This technique works when making images larger as well as smaller, but when making images smaller, you can still have problems with aliasing. There are better algorithms to use when making an image smaller. Check the links section at the bottom for more details!

Example

Here’s the old man from The Legend of Zelda who gives you the sword.

Here he is scaled up 4x with nearest neighbor, bilinear interpolation and bicubic interpolation.


Here he is scaled up 16x with nearest neighbor, bilinear interpolation and bicubic interpolation.


Shadertoy

I made a shadertoy to show you how to do this in a GLSL pixel shader as well. Shadertoy: Bicubic Texture Filtering

In the screenshot below, going from left to right it uses: Nearest Neighbor, Bilinear, Lagrange Bicubic interpolation (only interpolates values, not slopes), Hermite Bicubic interpolation.

Sample Code

Here’s the code that I used to resize the images in the examples above.

#define _CRT_SECURE_NO_WARNINGS

#include <stdio.h>
#include <stdint.h>
#include <math.h>     // for floor
#include <array>
#include <vector>
#include <windows.h>  // for bitmap headers.  Sorry non windows people!

#define CLAMP(v, min, max) if (v < min) { v = min; } else if (v > max) { v = max; } 

typedef uint8_t uint8;

struct SImageData
{
    SImageData()
        : m_width(0)
        , m_height(0)
    { }

    long m_width;
    long m_height;
    long m_pitch;
    std::vector<uint8> m_pixels;
};

void WaitForEnter ()
{
    printf("Press Enter to quit");
    fflush(stdin);
    getchar();
}

bool LoadImage (const char *fileName, SImageData& imageData)
{
    // open the file if we can
    FILE *file;
    file = fopen(fileName, "rb");
    if (!file)
        return false;

    // read the headers if we can
    BITMAPFILEHEADER header;
    BITMAPINFOHEADER infoHeader;
    if (fread(&header, sizeof(header), 1, file) != 1 ||
        fread(&infoHeader, sizeof(infoHeader), 1, file) != 1 ||
        header.bfType != 0x4D42 || infoHeader.biBitCount != 24)
    {
        fclose(file);
        return false;
    }

    // read in our pixel data if we can. Note that it's in BGR order, and each row is padded to the next multiple of 4 bytes
    imageData.m_pixels.resize(infoHeader.biSizeImage);
    fseek(file, header.bfOffBits, SEEK_SET);
    if (fread(&imageData.m_pixels[0], imageData.m_pixels.size(), 1, file) != 1)
    {
        fclose(file);
        return false;
    }

    imageData.m_width = infoHeader.biWidth;
    imageData.m_height = infoHeader.biHeight;

    imageData.m_pitch = imageData.m_width*3;
    if (imageData.m_pitch & 3)
    {
        imageData.m_pitch &= ~3;
        imageData.m_pitch += 4;
    }

    fclose(file);
    return true;
}

bool SaveImage (const char *fileName, const SImageData &image)
{
    // open the file if we can
    FILE *file;
    file = fopen(fileName, "wb");
    if (!file)
        return false;

    // make the header info
    BITMAPFILEHEADER header;
    BITMAPINFOHEADER infoHeader;

    header.bfType = 0x4D42;
    header.bfReserved1 = 0;
    header.bfReserved2 = 0;
    header.bfOffBits = 54;

    infoHeader.biSize = 40;
    infoHeader.biWidth = image.m_width;
    infoHeader.biHeight = image.m_height;
    infoHeader.biPlanes = 1;
    infoHeader.biBitCount = 24;
    infoHeader.biCompression = 0;
    infoHeader.biSizeImage = image.m_pixels.size();
    infoHeader.biXPelsPerMeter = 0;
    infoHeader.biYPelsPerMeter = 0;
    infoHeader.biClrUsed = 0;
    infoHeader.biClrImportant = 0;

    header.bfSize = infoHeader.biSizeImage + header.bfOffBits;

    // write the data and close the file
    fwrite(&header, sizeof(header), 1, file);
    fwrite(&infoHeader, sizeof(infoHeader), 1, file);
    fwrite(&image.m_pixels[0], infoHeader.biSizeImage, 1, file);
    fclose(file);
    return true;
}

// t is a value that goes from 0 to 1 to interpolate in a C1 continuous way across uniformly sampled data points.
// when t is 0, this will return B.  When t is 1, this will return C.  Inbetween values will return an interpolation
// between B and C.  A and D are used to calculate slopes at the edges.
float CubicHermite (float A, float B, float C, float D, float t)
{
    float a = -A / 2.0f + (3.0f*B) / 2.0f - (3.0f*C) / 2.0f + D / 2.0f;
    float b = A - (5.0f*B) / 2.0f + 2.0f*C - D / 2.0f;
    float c = -A / 2.0f + C / 2.0f;
    float d = B;

    return a*t*t*t + b*t*t + c*t + d;
}

float Lerp (float A, float B, float t)
{
    return A * (1.0f - t) + B * t;
}

const uint8* GetPixelClamped (const SImageData& image, int x, int y)
{
    CLAMP(x, 0, image.m_width - 1);
    CLAMP(y, 0, image.m_height - 1);    
    return &image.m_pixels[(y * image.m_pitch) + x * 3];
}

std::array<uint8, 3> SampleNearest (const SImageData& image, float u, float v)
{
    // calculate coordinates
    int xint = int(u * image.m_width);
    int yint = int(v * image.m_height);

    // return pixel
    auto pixel = GetPixelClamped(image, xint, yint);
    std::array<uint8, 3> ret;
    ret[0] = pixel[0];
    ret[1] = pixel[1];
    ret[2] = pixel[2];
    return ret;
}

std::array<uint8, 3> SampleLinear (const SImageData& image, float u, float v)
{
    // calculate coordinates -> also need to offset by half a pixel to keep image from shifting down and left half a pixel
    float x = (u * image.m_width) - 0.5f;
    int xint = int(x);
    float xfract = x - floor(x);

    float y = (v * image.m_height) - 0.5f;
    int yint = int(y);
    float yfract = y - floor(y);

    // get pixels
    auto p00 = GetPixelClamped(image, xint + 0, yint + 0);
    auto p10 = GetPixelClamped(image, xint + 1, yint + 0);
    auto p01 = GetPixelClamped(image, xint + 0, yint + 1);
    auto p11 = GetPixelClamped(image, xint + 1, yint + 1);

    // interpolate bi-linearly!
    std::array<uint8, 3> ret;
    for (int i = 0; i < 3; ++i)
    {
        float col0 = Lerp(p00[i], p10[i], xfract);
        float col1 = Lerp(p01[i], p11[i], xfract);
        float value = Lerp(col0, col1, yfract);
        CLAMP(value, 0.0f, 255.0f);
        ret[i] = uint8(value);
    }
    return ret;
}

std::array<uint8, 3> SampleBicubic (const SImageData& image, float u, float v)
{
    // calculate coordinates -> also need to offset by half a pixel to keep image from shifting down and left half a pixel
    float x = (u * image.m_width) - 0.5f;
    int xint = int(x);
    float xfract = x - floor(x);

    float y = (v * image.m_height) - 0.5f;
    int yint = int(y);
    float yfract = y - floor(y);

    // 1st row
    auto p00 = GetPixelClamped(image, xint - 1, yint - 1);
    auto p10 = GetPixelClamped(image, xint + 0, yint - 1);
    auto p20 = GetPixelClamped(image, xint + 1, yint - 1);
    auto p30 = GetPixelClamped(image, xint + 2, yint - 1);

    // 2nd row
    auto p01 = GetPixelClamped(image, xint - 1, yint + 0);
    auto p11 = GetPixelClamped(image, xint + 0, yint + 0);
    auto p21 = GetPixelClamped(image, xint + 1, yint + 0);
    auto p31 = GetPixelClamped(image, xint + 2, yint + 0);

    // 3rd row
    auto p02 = GetPixelClamped(image, xint - 1, yint + 1);
    auto p12 = GetPixelClamped(image, xint + 0, yint + 1);
    auto p22 = GetPixelClamped(image, xint + 1, yint + 1);
    auto p32 = GetPixelClamped(image, xint + 2, yint + 1);

    // 4th row
    auto p03 = GetPixelClamped(image, xint - 1, yint + 2);
    auto p13 = GetPixelClamped(image, xint + 0, yint + 2);
    auto p23 = GetPixelClamped(image, xint + 1, yint + 2);
    auto p33 = GetPixelClamped(image, xint + 2, yint + 2);

    // interpolate bi-cubically!
    // Clamp the values since the curve can put the value below 0 or above 255
    std::array<uint8, 3> ret;
    for (int i = 0; i < 3; ++i)
    {
        float col0 = CubicHermite(p00[i], p10[i], p20[i], p30[i], xfract);
        float col1 = CubicHermite(p01[i], p11[i], p21[i], p31[i], xfract);
        float col2 = CubicHermite(p02[i], p12[i], p22[i], p32[i], xfract);
        float col3 = CubicHermite(p03[i], p13[i], p23[i], p33[i], xfract);
        float value = CubicHermite(col0, col1, col2, col3, yfract);
        CLAMP(value, 0.0f, 255.0f);
        ret[i] = uint8(value);
    }
    return ret;
}

void ResizeImage (const SImageData &srcImage, SImageData &destImage, float scale, int degree)
{
    destImage.m_width = long(float(srcImage.m_width)*scale);
    destImage.m_height = long(float(srcImage.m_height)*scale);
    destImage.m_pitch = destImage.m_width * 3;
    if (destImage.m_pitch & 3)
    {
        destImage.m_pitch &= ~3;
        destImage.m_pitch += 4;
    }
    destImage.m_pixels.resize(destImage.m_pitch*destImage.m_height);

    uint8 *row = &destImage.m_pixels[0];
    for (int y = 0; y < destImage.m_height; ++y)
    {
        uint8 *destPixel = row;
        float v = float(y) / float(destImage.m_height - 1);
        for (int x = 0; x < destImage.m_width; ++x)
        {
            float u = float(x) / float(destImage.m_width - 1);
            std::array<uint8, 3> sample;

            if (degree == 0)
                sample = SampleNearest(srcImage, u, v);
            else if (degree == 1)
                sample = SampleLinear(srcImage, u, v);
            else if (degree == 2)
                sample = SampleBicubic(srcImage, u, v);

            destPixel[0] = sample[0];
            destPixel[1] = sample[1];
            destPixel[2] = sample[2];
            destPixel += 3;
        }
        row += destImage.m_pitch;
    }
}

int main (int argc, char **argv)
{
    float scale = 1.0f;
    int degree = 0;

    bool showUsage = argc < 5 ||
        (sscanf(argv[3], "%f", &scale) != 1) ||
        (sscanf(argv[4], "%i", &degree) != 1);

    char *srcFileName = argv[1];
    char *destFileName = argv[2];

    if (showUsage)
    {
        printf("Usage: <source> <dest> <scale> <degree>ndegree 0 = nearest, 1 = bilinear, 2 = bicubic.nn");
        WaitForEnter();
        return 1;
    }

    printf("Attempting to resize a 24 bit image.n");
    printf("  Source = %sn  Dest = %sn  Scale = %0.2fnn", srcFileName, destFileName, scale);

    SImageData srcImage;
    if (LoadImage(srcFileName, srcImage))
    {
        printf("%s loadedn", srcFileName);
        SImageData destImage;
        ResizeImage(srcImage, destImage, scale, degree);
        if (SaveImage(destFileName, destImage))
            printf("Resized image saved as %sn", destFileName);
        else
            printf("Could not save resized image as %sn", destFileName);
    }
    else
        printf("could not read 24 bit bmp file %snn", srcFileName);
    return 0;
}

Links

A small tutorial about how to load a bitmap file
The BMP Format
Reconstruction Filters in Computer Graphics

The link below talks about how to do cubic texture sampling on the GPU without having to do 16 texture reads!
GPU Gems 2 Chapter 20. Fast Third-Order Texture Filtering

This link is from Inigo Quilez, where he transforms a texture coordinate before passing it to the bilinear filtering, to get higher quality texture sampling without having to do extra texture reads. That is pretty cool.
IQ: improved texture interpolation

Cubic Hermite Interpolation

It’s a big wide world of curves out there and I have to say that most of the time, I consider myself a Bezier man.

Well let me tell you… cubic Hermite splines are technically representable in Bezier form, but they have some really awesome properties that I never fully appreciated until recently.

Usefulness For Interpolation

If you have a set of data points on some fixed interval (like for audio data, but could be anything), you can use a cubic Hermite spline to interpolate between any two data points. It interpolates the value between those points (as in, it passes through both end points), but it also interpolates a derivative that is consistent if you approach the point from the left or the right.

In short, this means you can use cubic Hermite splines to interpolate data such that the result has C1 continuity everywhere!

Usefulness As Curves

If you have any number N of control points on a fixed interval, you can treat them as a bunch of piecewise cubic Hermite splines and evaluate the curve that way.

The end result is that you have a curve that is C1 continuous everywhere, it has local control (moving any control point only affects the two curve sections to the left and the two curve sections to the right), and best of all, the computational complexity doesn’t rise as you increase the number of control points!

The image below was taken as a screenshot from one of the HTML5 demos I made for you to play with. You can find links to them at the end of this post.

Cubic Hermite Splines

Cubic Hermite splines have four control points but how it uses the control points is a bit different than you’d expect.

The curve itself passes only through the middle two control points, and the end control points are there to help calculate the tangent at the middle control points.

Let’s say you have control points P_{-1}, P_0, P_1, P_2. The curve at time 0 will be at point P_0 and the slope will be the same slope as a line would have if going from P_{-1} to P_1. The curve at time 1 will be at point P_1 and the slope will be the same slope as a line would have if going from P_0 to P_2.

Check out the picture below to see what I mean visually.

That sounds like a strange set of properties, but they are actually super useful.

What this means is that you can treat any group of 4 control points / data points as a separate cubic hermite spline, but when you put it all together, it is a single smooth curve.

Note that you can either interpolate 1d data, or you can interpolate 2d data points by doing this interpolation on each axis. You could also use this to make a surface, which will likely be the next blog post!

The Math

I won’t go into how the formula is derived, but if you are interested you should check out Signal Processing: Bicubic Interpolation.

The formula is:

a*t^3+b*t^2+c*t+d

Where…

a = \frac{-P_{-1} + 3*P_0 - 3*P_1 + P_2}{2}
b = P_{-1} - \frac{5*P_0}{2} + 2*P_1 - \frac{P_2}{2}
c = \frac{-P_{-1} + P_1}{2}
d = P_0

Note that t is a value that goes from 0 to 1. When t is 0, your curve will be at P_0 and when t is 1, your curve will be at P_1. P_{-1} and P_2 are used to be able to make this interpolation C1 continuous.

Here it is in some simple C++:

// t is a value that goes from 0 to 1 to interpolate in a C1 continuous way across uniformly sampled data points.
// when t is 0, this will return B.  When t is 1, this will return C.
static float CubicHermite (float A, float B, float C, float D, float t)
{
    float a = -A/2.0f + (3.0f*B)/2.0f - (3.0f*C)/2.0f + D/2.0f;
    float b = A - (5.0f*B)/2.0f + 2.0f*C - D / 2.0f;
    float c = -A/2.0f + C/2.0f;
    float d = B;

    return a*t*t*t + b*t*t + c*t + d;
}

Code

Here is an example C++ program that interpolates both 1D and 2D data.

#include <stdio.h>
#include <math.h>   // for floor
#include <vector>
#include <array>
 
typedef std::vector<float> TPointList1D;
typedef std::vector<std::array<float,2>> TPointList2D;
 
void WaitForEnter ()
{
    printf("Press Enter to quit");
    fflush(stdin);
    getchar();
}

// t is a value that goes from 0 to 1 to interpolate in a C1 continuous way across uniformly sampled data points.
// when t is 0, this will return B.  When t is 1, this will return C.
float CubicHermite (float A, float B, float C, float D, float t)
{
    float a = -A/2.0f + (3.0f*B)/2.0f - (3.0f*C)/2.0f + D/2.0f;
    float b = A - (5.0f*B)/2.0f + 2.0f*C - D / 2.0f;
    float c = -A/2.0f + C/2.0f;
    float d = B;
 
    return a*t*t*t + b*t*t + c*t + d;
}

template <typename T>
inline T GetIndexClamped(const std::vector<T>& points, int index)
{
    if (index < 0)
        return points[0];
    else if (index >= int(points.size()))
        return points.back();
    else
        return points[index];
}

int main (int argc, char **argv)
{
    const int c_numSamples = 13;

    // show some 1d interpolated values
    {
        const TPointList1D points =
        {
            0.0f,
            1.6f,
            2.3f,
            3.5f,
            4.3f,
            5.9f,
            6.8f
        };

        printf("1d interpolated values.  y = f(t)n");
        for (int i = 0; i < c_numSamples; ++i)
        {
            float percent = ((float)i) / (float(c_numSamples - 1));
            float x = (points.size()-1) * percent;

            int index = int(x);
            float t = x - floor(x);
            float A = GetIndexClamped(points, index - 1);
            float B = GetIndexClamped(points, index + 0);
            float C = GetIndexClamped(points, index + 1);
            float D = GetIndexClamped(points, index + 2);

            float y = CubicHermite(A, B, C, D, t);
            printf("  Value at %0.2f = %0.2fn", x, y);
        }
        printf("n");
    }

    // show some 2d interpolated values
    {
        const TPointList2D points =
        {
            { 0.0f, 1.1f },
            { 1.6f, 8.3f },
            { 2.3f, 6.5f },
            { 3.5f, 4.7f },
            { 4.3f, 3.1f },
            { 5.9f, 7.5f },
            { 6.8f, 0.0f }
        };

        printf("2d interpolated values.  x = f(t), y = f(t)n");
        for (int i = 0; i < c_numSamples; ++i)
        {
            float percent = ((float)i) / (float(c_numSamples - 1));
            float x = 0.0f;
            float y = 0.0f;

            float tx = (points.size() -1) * percent;
            int index = int(tx);
            float t = tx - floor(tx);

            std::array<float, 2> A = GetIndexClamped(points, index - 1);
            std::array<float, 2> B = GetIndexClamped(points, index + 0);
            std::array<float, 2> C = GetIndexClamped(points, index + 1);
            std::array<float, 2> D = GetIndexClamped(points, index + 2);
            x = CubicHermite(A[0], B[0], C[0], D[0], t);
            y = CubicHermite(A[1], B[1], C[1], D[1], t);

            printf("  Value at %0.2f = (%0.2f, %0.2f)n", tx, x, y);
        }
        printf("n");
    }
 
    WaitForEnter();
    return 0;
}

The output of the program is below:

Links

Here are some interactive HTML5 demos I made:
1D cubic hermite interpolation
2D cubic hermite interpolation

More info here:
Wikipedia: Cubic Hermite Spline

Closely related to cubic Hermite splines, Catmull-Rom splines allow you to specify a “tension” parameter to make the result more or less curvy:
Catmull-Rom spline

Rectangular Bezier Patches

Rectangular Bezier Patches are one way to bring Bezier curves into the 3rd dimension as a Bezier surface. Below is a rendered image of a quadratic Bezier rectangle (degree of (2,2)) and a cubic Bezier rectangle (degree of (3,3)) taken as screenshots from a shadertoy demo I created that renders these in real time. Links at bottom of post!


Intuition

Imagine that you had a Bezier curve with some number of control points. Now, imagine that you wanted to animate those control points over time instead of having a static curve.

One way to do this would be to just have multiple sets of control points as key frames, and just linearly interpolate between the key frames over time. You’d get something that might look like the image below (lighter red = farther back in time).

That is a simple and intuitive way to animate a Bezier curve, and is probably what you thought of immediately. Interestingly though, since linear interpolation is really a degree 1 Bezier curve, this method is actually using a degree 1 Bezier curve to control each control point!

What if we tried a higher order curve to animate each control point? Well… we could have three sets of control points, so that each control point was controlled over time by a quadratic curve. We could also try having four sets of control points, so that each control point was controlled over time by a cubic curve.

We could have any number of sets of control points, to be able to animate the control points over time using any degree curve.

Now, instead of animating the curve over TIME, what if we controlled it over DISTANCE (like, say, the z-axis, or “depth”). Look at the image above and think of it like you are looking at a surface from the side. If you took a bunch of the time interpolations as slices and set them next to each other so that there were no gaps between them, you’d end up with a smooth surface. TA-DA! This is how a Rectangular Bezier Patch is made.

Note that the degree of the curve on one axis doesn’t have to match the degree of the curve on the other axis. You could have a cubic curve where each control point is controlled by a linear interpolation, or you could have a degree 5 curve where each control point is controlled by degree 7 curves. Since there are two degrees involved in a Bezier rectangle, you describe its degree with two numbers. The first example is degree (3,1) and the second example is degree (5,7).

Higher Dimensions

While you are thinking about this, I wanted to mention that you could animate a Bezier rectangle over time, using Bezier curves to control those control points. If you then laid that out over distance instead of time, you’d end up with a rectangular box Bezier solid. If you are having trouble visualizing that, don’t feel dumb, it’s actually four dimensional!

You can think of it like a box that has a value stored at every (x,y,z) location, and those values are controlled by Bezier formulas so are smooth and are based on control points. It’s kind of a strange concept but is useful in some situations.

Say you made a 3d hot air balloon game and wanted to model the temperature of the air at different locations to simulate thermals. One way you could do this would be to store a bunch of temperatures in a 3d grid. Another way might be to use a grid of rectangular box Bezier solids. One benefit of the Bezier solid representation is that the data is much smoother than a grid would be, and another is that you could make the grid much less dense.

Now, let’s say that you wanted to animate the thermals over time. You could use a fifth dimensional Bezier hypercube solid. Let’s move on, my brain hurts 😛

Math

The equation for a Bezier Rectangle is:

\mathbf{p}(u, v) = \sum_{i=0}^n \sum_{j=0}^m B_i^n(u) \; B_j^m(v) \; \mathbf{k}_{i,j}

\mathbf{p}(u, v) is the point on the surface that you get after you plug in the parameters. u and v are the parameters to the surface and should be within the range 0 to 1. These are the same thing as the t you see in Bezier curves, but there are two of them since there are two axes.

There are two Sigmas (summations) which mean that it’s a double for loop.

One of the for loops makes i go from 0 to n and the other makes j go from 0 to m. n and m are the degrees of the two axes.

B_i^n(u) and B_j^m(v) are Bernstein polynomials (aka binomial expansion terms) just as you see in Bezier curves – there is just one per axis.

Lastly come the control points \mathbf{k}_{i,j}. The number of control points on one axis, multiplied by the number of control points on the other axis, gives the total number of control points.

A biquadratic Bezier patch has a degree of (2,2) and has 3 control points on one axis, and 3 control points on the other. That means that it has 9 control points total.

A bicubic Bezier patch has a degree of (3,3) with 4 control points on each axis, for a total of 16 control points.

If you had a patch of degree (7,1), it would have 8 control points on one axis and 2 control points on the other axis, and so would also have 16 control points total, but they would be laid out differently than a bicubic Bezier patch.

As far as actually calculating points goes, the above only calculates the value of a single component of the final point on the surface. If you have three dimensional control points (X,Y,Z), you have to do the above math for each component to get the final result. This is the same as how it works when evaluating Bezier curves.

Code

#include <stdio.h>
#include <array>

typedef std::array<float, 3> TFloat3;
typedef std::array<TFloat3, 3> TFloat3x3;

const TFloat3x3 c_ControlPointsX =
{
    {
        { 0.7f, 0.8f, 0.9f },
        { 0.2f, 0.5f, 0.4f },
        { 0.6f, 0.3f, 0.1f },
    }
};

const TFloat3x3 c_ControlPointsY =
{
    {
        { 0.2f, 0.8f, 0.5f },
        { 0.6f, 0.9f, 0.3f },
        { 0.7f, 0.1f, 0.4f },
    }
};

const TFloat3x3 c_ControlPointsZ =
{
    {
        { 0.6f, 0.5f, 0.3f },
        { 0.7f, 0.1f, 0.9f },
        { 0.8f, 0.4f, 0.2f },
    }
};

void WaitForEnter ()
{
    printf("Press Enter to quit");
    fflush(stdin);
    getchar();
}

float QuadraticBezier (const TFloat3& p, float t)
{
    float s = 1.0f - t;
    float s2 = s * s;
    float t2 = t * t;

    return
        p[0] * s2 +
        p[1] * 2.0f * s * t +
        p[2] * t2;
}

float BiquadraticBezierPatch(const TFloat3x3& p, float u, float v)
{
    TFloat3 uValues;
    uValues[0] = QuadraticBezier(p[0], u);
    uValues[1] = QuadraticBezier(p[1], u);
    uValues[2] = QuadraticBezier(p[2], u);
    return QuadraticBezier(uValues, v);
}

int main(int argc, char **argv)
{
    // how many values to display on each axis. Limited by console resolution!
    const int c_numValues = 4;

    printf("Bezier rectangle:n");
    for (int i = 0; i < c_numValues; ++i)
    {
        float iPercent = ((float)i) / ((float)(c_numValues - 1));
        for (int j = 0; j < c_numValues; ++j)
        {
            if (j == 0)
                printf("  ");
            float jPercent = ((float)j) / ((float)(c_numValues - 1));
            float valueX = BiquadraticBezierPatch(c_ControlPointsX, jPercent, iPercent);
            float valueY = BiquadraticBezierPatch(c_ControlPointsY, jPercent, iPercent);
            float valueZ = BiquadraticBezierPatch(c_ControlPointsZ, jPercent, iPercent);
            printf("(%0.2f, %0.2f, %0.2f) ", valueX, valueY, valueZ);
        }
        printf("n");
    }
    printf("n");

    WaitForEnter();
    return 0;
}

And here is the output it gives:

Note that in the program above, I evaluate the surface points by evaluating one axis and then the other. This is basically the same as how I explained it at the top, where I’m effectively animating the control points over distance, then evaluating the curve slice of the surface at that specific distance.

You could also write it another way though, where you literally expand the mathematical formula to get just one expression to evaluate that takes all control points at once. I like the simplicity (of understanding) of the method I used, but the other method works just as well.

The Rendering

It’s easy enough to calculate values on a Bezier Rectangle, but what if you want to draw one?

One way is to tessellate it, or break it up into triangles and then render the triangles. You can think of it like trying to render a grid, where each point of the grid is moved to be where ever the Bezier rectangle function says it should be.

Raytracing against these objects in the general case is very difficult however, because it basically comes down to solving equations of very high degree.

Raymarching against these objects is also difficult unfortunately because while raymarching only needs to know “am i above the shape, or underneath it?”, knowing what u,v to plug into the equation to get the height most relevant to a random point in space is also very difficult. Not as difficult as the raytracing equations, but probably just as much out of reach.

But never fear, as always, you can cheat!

If you read my post about one dimensional (explicit) Bezier curves (One Dimensional Bezier Curves), you may remember that math gets easier if you use one dimensional control points. The same is actually true with Bezier rectangles!

For the ray marching case, you can march a point through space, plug the x,z coordinates of the point into the Bezier rectangle function as u,v values, and treat the number that comes out as a y coordinate (a height).

Now, ray marching a Bezier rectangle is the same as ray marching any old height map (check links section for more info on that).

What I did in my demos: since I knew that the curve was constrained to 0-1 on the x and z axes, and the y axis min and max came from the control point min and max, I raytraced that bounding box to get the minimum and maximum distance at which the ray was inside the box. From there, I raymarched from that min time to the max time along the ray, considering the ray as hitting the surface whenever the distance from the ray to the surface on the y axis (rayPos.y - bezierRectangle.y) changed sign.
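Here’s a rough C++ sketch of that marching loop. float3 is a minimal stand-in vector struct, and RayVsBox / EvaluatePatchHeight are hypothetical helpers: the first returns the times where the ray enters and exits the patch’s bounding box, the second evaluates the explicit Bezier rectangle’s height at a given (u,v).

struct float3 { float x, y, z; };

// hypothetical helpers, described above
bool RayVsBox (const float3& rayPos, const float3& rayDir, float& timeMin, float& timeMax);
float EvaluatePatchHeight (float u, float v);

bool RayMarchPatch (const float3& rayPos, const float3& rayDir, float& hitTime)
{
    // get the span of times the ray spends inside the bounding box
    float timeMin, timeMax;
    if (!RayVsBox(rayPos, rayDir, timeMin, timeMax))
        return false;

    const int c_numSteps = 100;
    float lastDiff = 0.0f;
    for (int i = 0; i <= c_numSteps; ++i)
    {
        float time = timeMin + (timeMax - timeMin) * float(i) / float(c_numSteps);
        float posX = rayPos.x + rayDir.x * time;
        float posY = rayPos.y + rayDir.y * time;
        float posZ = rayPos.z + rayDir.z * time;

        // treat x,z as the u,v of the patch and compare heights
        float diff = posY - EvaluatePatchHeight(posX, posZ);
        if (i > 0 && diff * lastDiff < 0.0f)  // sign change means we crossed the surface
        {
            hitTime = time;
            return true;
        }
        lastDiff = diff;
    }
    return false;
}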

After I had a hit, I got the height of the curve slightly offset on the x axis, then slightly offset on the z axis to get a triangle that I could calculate a surface normal from, to do lighting and shading with.

There is room for improvement in the ray marching though. I evenly divide the space through the box by a specific amount to control the size of the steps. A better way to do this I think would be to get the gradient of the function and use that to get a distance estimate (check links section below for more information). I could use that value to control the distance the ray marches at each step, and should be able to march through the box much quicker.

Also, as the link on terrain marching explains, you can usually take farther steps when the ray is farther from the camera, because the eye notices less detail. I removed that since the Bezier rectangles are pretty close to the camera, but it probably still would be helpful. Also, it would DEFINITELY be helpful in the case of the “Infinite Bezier Rectangles” scene.

I am pretty sure you could directly raytrace an explicit Bezier rectangle (one who has one dimensional control points) – at least for low degrees. I personally don’t know how you would do that, but I think it might boil down to solving a 4th degree function or something else “reasonable” based on a similar question I had about Bezier triangles on the mathematics stack exchange site (link below).

Another Way To Render

There is another way to render Bezier surfaces using ray based methods that I didn’t use but want to mention.

A property of Bezier curves and surfaces is that they are guaranteed to be completely contained by the convex hull created by their control points.

Another property of Bezier curves and surfaces is that you can use the De Casteljeau algorithm to cut them up. For instance you could cut a Bezier curve into two different Bezier curves, and the same holds for Bezier surfaces.

Using these two properties, there is an interesting way to tell whether a ray intersects a Bezier object or not, which is:

  1. If the line misses the convex hull, return a miss
  2. If the convex hull is smaller than a pixel, return a hit
  3. Otherwise, cut the Bezier object into a couple smaller Bezier objects
  4. Recurse for each smaller Bezier object

Yes, believe it or not, that is a real technique! It’s called Bezier Clipping and there is a research paper in the links section below that talks about some of the details of using that rendering technique.
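Here’s a structural C++ sketch of that recursion, just to show the shape of the algorithm. The types and helpers (BezierPatch, Ray, HullMissesRay, HullSmallerThanPixel, SplitPatch) are hypothetical stand-ins for the real geometric tests and the De Casteljau subdivision step.

struct BezierPatch { /* control points would live here */ };
struct Ray { /* origin and direction would live here */ };

// hypothetical helpers standing in for the real geometry
bool HullMissesRay (const BezierPatch& patch, const Ray& ray);
bool HullSmallerThanPixel (const BezierPatch& patch, const Ray& ray);
void SplitPatch (const BezierPatch& patch, BezierPatch& childA, BezierPatch& childB);

bool RayVsBezier (const Ray& ray, const BezierPatch& patch)
{
    if (HullMissesRay(patch, ray))          // 1. miss the convex hull -> miss
        return false;
    if (HullSmallerThanPixel(patch, ray))   // 2. smaller than a pixel -> hit
        return true;
    BezierPatch childA, childB;
    SplitPatch(patch, childA, childB);      // 3. cut into smaller Bezier objects
    return RayVsBezier(ray, childA) ||      // 4. recurse
           RayVsBezier(ray, childB);
}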

Links

Lastly, I wanted to mention that the above is completely about Bezier rectangles, but there is no reason you couldn’t extend these rectangles to use rational Bezier functions, or be based on B-splines or NURBS, or even go a different direction and make hermite surfaces or catmull-rom surfaces, or even make surfaces that used exotic basis functions of your own crafting based on trigonometric functions or whatever else!

Here are the shadertoy demos I made:
Shadertoy: Cubic Bezier Rectangle
Shadertoy: Quadratic Bezier Rectangle
Shadertoy: Infinite Bezier Rectangles

And some other links about this stuff:
IQ – terrain raymarching
IQ – distance estimation (using function gradients)
Math Stack Exchange – Ray intersection with explicit (1 axis) Bezier triangle?
Math Stack Exchange – Intersect Ray (Line) vs Quadratic Bezier Triangle
Bézier Surfaces: de Casteljau’s Algorithm
Ray Tracing Triangular Bézier Patches (including Bezier clipping)
Wikipedia: Bezier Surface
Wikipedia: Bezier Triangle

HyperLogLog: Estimate Unique Value Counts Like The Pros

This is the last of my planned posts on probabilistic algorithms. There may be more in the future, but the initial volley is finished 😛

Thanks again to Ben Deane for exposure to this really interesting area of computer science.

This post is about HyperLogLog, which is used to estimate a count of how many “uniques” there are in a stream of data. It can also do intersections and unions between two HyperLogLog objects, allowing you to see how many items HLL objects have in common. It was invented in 2007 and is currently used in many big data situations, including at Google, Redis, and Amazon.

This algorithm is probabilistic in that you can trade storage space for accuracy. You can calculate how much accuracy to expect for a given amount of storage, or calculate how much storage you’d need for a specific amount of accuracy.

If this sounds a lot like the first probabilistic algorithm post I made (Estimating Counts of Distinct Values with KMV), that means you have been paying attention. HyperLogLog is in the same family of algorithms, but it is way better at most things than KMV is, and seems to be the current standard for DV sketches (distinct value estimators). The one thing KMV seems to be better at is calculating intersections between objects, which I’ll talk about more below.

To give you an idea of the power of HyperLogLog, here’s a quote from the paper it was first described in (link at end of post):

“…the new algorithm makes it possible to estimate cardinalities well beyond 10^9 with a typical accuracy of 2% while using a memory of only 1.5 kilobytes”

By the end of this post you should understand how that is possible, and also be able to start with the sample code and have HyperLogLog capabilities in your own C++ project immediately!

A Usage Case: Upvoting

A usage case for this algorithm would be adding an “upvote” button to a webpage.

A naive solution to this would be to have an upvote button that when clicked would increment a counter on the server. That is problematic though because people can vote as many times as they want. You could hack together a solution to this by having some client side cookies and javascript limiting people from doing that, but all your security is out the window since it’s a client side fix, and you will soon have trolls spamming your vote counters by doing raw HTTP requests to your servers, to ruin your day, just because they can.

A less naive solution would be to have some unique identifier per user – whether that was something like a username, or just an IP address – and store that in a voting table, only allowing the counter to increment if the person wasn’t already in the table, and entering them into the table when doing the increment.

The problem with that solution is that the table of users who have already voted might get extremely huge, causing lots of memory usage to hold it, lots of processing time to see if a user exists in the table already (even with a hash based lookup), and it doesn’t parallelize very well to multiple servers. You also have to implement some protection against race conditions around the “look for user in table, increment vote counter, add user to table” work, which means some potentially costly synchronization logic.

A better solution would be to use HyperLogLog, and here are some reasons why:

  • It has a fixed size you determine in advance. The quote from the paper above indicates that 1.5KB is likely enough for our needs, being able to count over one billion unique values. Bringing it up to 2KB would be enough to count a heck of a lot more, like somewhere up near 2^250 unique items.
  • It automatically disallows the same item being accounted for more than once, so our multiple votes from same voter problem is gone with no need for costly synchronization work.
  • It lends itself well to being parallelized across machines, or just using SIMD / GPGPU if you wanted to squeeze some more performance out of it.
  • You can very quickly do set operations on multiple HLL objects.

The first three items solve the problems in the naive solutions, but the fourth adds some bonus functionality that is pretty cool.

Using set operations, you can do things like figure out if the same people are upvoting a dulce de leche flan dessert as are upvoting a cheesy grits recipe, and you can calculate that VERY QUICKLY.

A possibly naive way to suggest a recipe to a user would be to show them recipes that a lot of people have upvoted.

A more custom tailored suggestion (and thus, hopefully a better suggestion) would be when a person votes on a recipe, you can tell them “hey, a lot of the same people that upvoted the recipe you just upvoted also upvoted this other recipe, why don’t you check it out!”

So now, you don’t just have a voting system, you now also have a CLASSIFICATION SYSTEM.

Yeah ok, so that’s a little hand wavy and making a real suggestion system has more details to solve than just those etc etc, but hopefully you can see how this algorithm can be a fundamental building block to many “big data” problems.
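To make the set operations a little less hand wavy, here’s a C++ sketch, assuming the register array representation explained later in this post. The union of two HLL objects is exact: taking the element-wise max of the registers gives the same state you’d have gotten by feeding both streams into a single object. An intersection count can then be estimated with the inclusion-exclusion principle.

#include <stdint.h>
#include <algorithm>
#include <array>

static const size_t c_numRegisters = 16;
typedef std::array<uint8_t, c_numRegisters> THLLRegisters;

// Union two HLL objects by taking the max of each pair of registers.
THLLRegisters HLLUnion (const THLLRegisters& a, const THLLRegisters& b)
{
    THLLRegisters result;
    for (size_t i = 0; i < c_numRegisters; ++i)
        result[i] = std::max(a[i], b[i]);
    return result;
}

// Intersection counts are then estimated via inclusion-exclusion:
//   |A intersect B| ~= |A| + |B| - |A union B|
// where |X| is the count estimate an HLL object gives.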

Onto the details of how it works!

Basic Premise – Coin Flips

When you flip a coin, what are the odds that it will be heads? There’s an even chance of heads or tails, so the chance of heads is 1 in 2, or 1/2, or 50%.

What are the odds that if you flip a coin twice in a row it will be heads both times? Well, on each flip there is a 50% chance, so you multiply .5 * .5 to get the answer, which is .25 or 25%. There’s a 25% chance, or a 1 in 4 chance, that you will flip heads twice in a row with two coin flips. It’s the same chance that you will flip two tails in a row, or that you will flip heads then tails, or tails then heads. All possible outcomes have the same probability.

Another way to look at this, instead of probability, is to see how many possible outcomes there are, like this:

Sequence Number   Sequence
0                 heads, heads
1                 heads, tails
2                 tails, heads
3                 tails, tails

There’s a 100% chance that two coin flips will be somewhere in those four results, and all results have an even chance of happening, so result 0 “heads, heads” has a 1 in 4 chance of happening. All of the results above have a 1 in 4 chance. Interestingly, looking at the above table you can also see that there is a 2/4 chance that the first flip will be heads, which is also 1/2, 1 in 2 or 50% chance. That agrees with what we said before, that if only doing one coin flip (the first coin flip), there is a 50% chance of heads or tails.

If you switched to 3 coin flips, there would be a 1 in 8 chance of any specific event happening, since there are 8 possible outcomes:

Sequence Number   Sequence
0                 heads, heads, heads
1                 heads, heads, tails
2                 heads, tails, heads
3                 heads, tails, tails
4                 tails, heads, heads
5                 tails, heads, tails
6                 tails, tails, heads
7                 tails, tails, tails

It doesn’t matter which specific sequence you are looking at above, the chance that 3 coin flips will result in any of those specific sequences is 1 in 8.

In fact, for N coin flips, the probability that you will encounter any specific sequence (permutation) of heads and tails is 1 in 2^N, or 1/(2^N).

Now, let’s change it up a bit. What are the odds that you will have some number of heads in a row, and then a tails? In other words, what are the odds that you will have a “run” of heads of a specified length?

Well, a run of 0 would just be that you flip the coin once and get tails. Since you are doing one coin flip and you are looking for a specific one of two possible outcomes to happen, the probability is 1 in 2, or 1/2, or 50%.

A run of 1 would mean you flipped the coin once, got heads, flipped it again and got tails. Since you are doing two coin flips and are looking for a specific one of four possible outcomes, the probability of that happening is 1/2*1/2 = 1/4, or 1 in 4, or 25%.

A run of 2 means heads, heads, tails. 3 coin flips = 1/2*1/2*1/2 = 1/8, or 1 in 8, or 12.5%.

By now you may have noticed that the probability of getting N heads and then a tail is just 1 in the number of possible outcomes, and since a run of N heads and then a tail takes N+1 coin flips, the number of possible outcomes is 2^(N+1).

More formally, the odds of getting N heads and then a tail is 1/(2^(N+1)).
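If you want to convince yourself of that formula, here’s a quick Monte Carlo check in C++ (using rand() for brevity, which is not a great random number generator but is fine for this):

#include <stdio.h>
#include <stdlib.h>

int main ()
{
    const int c_numTrials = 1000000;
    int runCounts[8] = { 0 };
    for (int trial = 0; trial < c_numTrials; ++trial)
    {
        // flip coins until we get a tails, counting the heads (heads = 0)
        int run = 0;
        while ((rand() & 1) == 0)
            ++run;
        if (run < 8)
            runCounts[run]++;
    }
    // the observed frequency of a run of N heads should be near 1/(2^(N+1))
    for (int run = 0; run < 8; ++run)
        printf("run of %i: %0.4f observed, %0.4f expected\n",
            run, float(runCounts[run]) / float(c_numTrials), 1.0f / float(1 << (run + 1)));
    return 0;
}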

Let’s now swap out the idea of heads and tails with the idea of binary digits 0 and 1 in a sequence of random numbers. Think of “heads” as 0, and “tails” as 1.

If you generate a random bit (binary digit), there is a 50% chance it will be a 0, and a 50% chance it will be a 1.

If you generate two random bits, there is a 25% chance for each of the possible outcomes below:

Sequence Number   Sequence
0                 00
1                 01
2                 10
3                 11

Now, what are the odds that in a random binary number, we will have N zeros and then a 1?

Don’t let the binary digits scare you, it’s the same answer as the coin flip question: 1/(2^(N+1))

An interesting property of this is that if you ask for a random 8 bit binary number and get xxxxx100, using the formula above, you know there is a 1 in 8 chance that a random number would end in “100”.

Using this information you can say to yourself “I’ll bet I’ve seen about 8 different random numbers at this point”, and that’s a fairly decent guess, without actually having had to pay attention to any of the numbers that came before.

A better idea though would be to watch all the random numbers as they are generated, and keep track of the longest run of zeros you’ve seen on the right side of any number.

Using this “longest run seen” value, you can guess how many unique random numbers you’ve seen so far. If the longest run you’ve seen is N zeros and then a 1, the guess as to how many random numbers you’ve seen is 2^(N+1).

If you’ve seen a maximum of 4 zeros and a 1 (xxx10000), you’ve probably seen about 32 numbers on average. If you’ve seen at maximum 2 zeros and a 1 (xxxxx100), you’ve probably seen about 8 numbers on average.
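In code, that guess might look like the sketch below (just the core idea, not the full HLL algorithm yet):

#include <stdint.h>

// Count the run of zeros at the low (right) end of a value, returning
// 32 in the special case where the value is all zeros.
unsigned int CountTrailingZeros (uint32_t value)
{
    if (value == 0)
        return 32;
    unsigned int run = 0;
    while ((value & 1) == 0)
    {
        ++run;
        value >>= 1;
    }
    return run;
}

// If the longest run seen is N zeros then a 1, estimate that 2^(N+1)
// unique values have been seen.
uint64_t EstimateCount (unsigned int longestRunSeen)
{
    return uint64_t(1) << (longestRunSeen + 1);
}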

Since by definition, randomness is RANDOM, there will be fluctuation and your guess will not always be so accurate. You might have only seen one random number, but it’s value may have been 10000000 (0x80), which would incorrectly cause you to estimate that 256 items have been seen (2^8), when in fact only a single item has been seen.

To combat this, HyperLogLog uses multiple registers (counters) to keep multiple counts, and then averages the estimates together. More info on that below, but for now, hopefully you can see how that would smooth things out towards a more average case. The more registers you use, the more “average case” your estimation should be, so the more accurate your estimate should be.

There’s an interesting twist here though… you aren’t actually estimating how many random numbers you’ve seen, but instead are estimating how many UNIQUE random numbers you’ve seen. Random numbers can repeat, but this count estimation will only count each unique value once, no matter how many times it appears.

To help visualize that, no matter how many times you see 10110100 – whether it’s only once, or ten thousand times – the longest run will still be 2. After you’ve seen ten thousand of those numbers, as soon as you see the next number 10011000, the longest run will then be 3.

That may sound like a trivial difference, that we are counting uniques and not total values, but as the next section will show, it’s actually a very important difference, and is where this technique derives its power.

Also, if we were counting non uniques, we could just use an integer and increment it for each item we saw (;

Hashing Functions as Pseudorandom Number Generators

An interesting property of good hashing algorithms is that the output you get when you hash an object will be indistinguishable from random numbers. If you hash 3, you’ll get a random number, and if you hash 4, you’ll likely get a completely different random number. You can’t tell from the output what the nature of the input was, or how similar the input objects were.

But, of course, when you put the same input into a hash function, you will always get the same output.

These two properties are what makes hash functions so useful in what we are trying to do in HLL.

Quick Tangent: These same properties also apply to encryption, by the way. The fact that the output looks random is why hashed and encrypted data doesn’t compress very well. There are no patterns in the data that can be exploited to express the data as anything shorter than the full length of the data itself. You should not be able to gain any knowledge about the nature of the input by looking at the output, except perhaps the size of the source data in the case of encryption. Also, whereas hashes are not reversible at all, encryption is only reversible if you have the encryption key (password). HLL and similar algorithms use hashes, not encryption, because they want a fixed size output, and they don’t care about reversing the process.

The output of the hash function is the source of the pseudorandom numbers that we plug into the HyperLogLog algorithm, and is what allows us to count uniques of any type of thing, so long as that thing can be hashed.

So, to do HLL counting, you hash every object you see in a stream keeping track of the longest run of zeros (in binary) you’ve seen in the resulting hashes. You store “longest runs seen” in multiple registers which you can then later use to get an averaged estimate of unique items encountered. That’s all there is to it.

MAGIC!

That’s how things work from a high level, now let’s get into the nitty gritty a bit…

Handling Multiple Registers

Let’s say you have a hash function that spits out a 32 bit hash, which is a pretty common thing for HLL implementations.

We talked about figuring out the length of the run of 0’s in the hash output, but if you had 16 registers to store run lengths in, how do you choose which register to store each run length in?

The common way to solve this is to use some of your hash bits for register selection. If you have 16 registers, you could use the lowest 4 bits of your hash as the register index to store the count in for instance.

There is a problem here though, that hopefully you can see already. The problem is that if we have a run of 3 with a hash that ends in the binary number 1000, we will only ever store that run length in register 8! By using the same bits for register selection as we do for counting runs, we’ve biased our count and introduced inaccuracy (error), because certain numbers will only get accounted for by specific registers. The ideal situation is that every number is equally likely to end up in any register. It should “randomly” choose what register to use, but be deterministic in how it chooses that register.

The bits you use for register selection cannot be reused for counting runs, or else you’ll fall into the trap of only storing certain numbers in specific registers.

You could perhaps hash your hash to get another pseudo random number to use for register selection, but a better option is to just throw out those register selection bits once you use them.

Reducing the number of bits you evaluate for runs of 0’s comes at a cost though. It means that your estimation of unique values seen is capped at a lower number. With 32 bits, you can estimate a count of up to 2^32 (~4 billion), but at 28 bits, after using 4 bits for register selection, you can only estimate a count of up to 2^28 (~268 million).
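As a concrete sketch of that bit splitting, assuming a 32 bit hash and 16 registers (so 4 register selection bits), where HashItem stands in for whatever hash function you are using:

uint32_t hash = HashItem(item);        // hypothetical 32 bit hash of the item
uint32_t registerIndex = hash & 0xF;   // the low 4 bits select one of the 16 registers
uint32_t countingBits = hash >> 4;     // the remaining 28 bits get scanned for the run of zeros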

I believe this is one of the reasons why google invented “HyperLogLog++” which uses a 64 bit hash and has some other improvements. Check the links at the bottom of this post for more information.

It’s a bit overkill, but in the sample code in this post, we create a 128 bit hash, use the top 32 bits for register selection, and the lower 96 bits for looking at runs. I say it’s overkill because at 96 bits, we can estimate counts up to 79 billion billion billion, which is way huger than anyone (even google!) is ever likely to need.

Register Sizes

As I mentioned above, many people use 32 bit hashes, for estimating at most about 4 billion unique objects. Google bumps it up to 64 bits for up to 18 billion billion uniques, and our sample code uses 96 bits for run evaluation, letting us estimate counts up to 79 billion billion billion.

These numbers are beyond huge, but believe it or not, the size of the registers themselves used to track these things are pretty darn tiny.

Since we are looking for runs of zeros, if we have a 32 bit hash, we only need to be able to store the values 0 to 32. 0 to 31 can be stored in 5 bits, and chances are that people aren’t going to bump it up to 6 bits just to get that extra value – especially when in practice you are going to use a few bits of the hash for register selection.

So, for a 32 bit hash, you really only need 5 bits per register to keep track of longest runs seen.

For a 64 bit hash, you need to be able to store the values 0 to 64. Similar to the above, 0-63 can be stored in 6 bits, and we can ditch being able to store 64, so 6 bits per register is plenty.

For our 96 bit hash (since we use 32 bits for register selection), we’d only need to be able to store 0-96, which can fit entirely in 7 bits, since 7 bits can store 0-127.

In our example code, I’m an excessive glutton however, and store our longest run value in 8 bits, wasting an entire bit of memory per register.

Yep, in our excessively gigantic counting 128 bit hash HLL DV Sketch code, I use an ENTIRE BYTE of memory per register. The Horror!!!

With a 32 or 64 bit hash, you could drop that down to 5 or 6 bits per register, and either condense your registers in memory, or perhaps even use those extra bits for something else if you wanted (need some flags?!).

Register Counts

So, our register size itself is fairly tiny, where my gluttonous, wasteful programming uses a single byte per register. How many registers do we need though?

The answer to that depends on how much accuracy you want.

To calculate how much error there is for M registers, the equation is: expectedError = 1.04 / sqrt(M)

To calculate how many registers you need to achieve a specific expected error, the equation is: M = 676 / (625 * expectedError^2)

In those equations, an expectedError of 0.03 would mean 3%. For example, plugging expectedError = 0.03 into the second equation gives about 1202 registers, which you’d round up to 2048, since (as noted below) the register count needs to be a power of 2.

Check out the table below to get an idea of accuracy vs size.

Note that since we use bits from our hash to do register selection, that our number of registers is a power of 2.

Register Bits   Register Count   Expected Error
0               1                104%
1               2                73.5%
2               4                52%
3               8                36.7%
4               16               26%
5               32               18.3%
6               64               13%
7               128              9%
8               256              6.5%
9               512              4.6%
10              1024             3.2%
11              2048             2.2%
12              4096             1.6%

Here is a graph showing how number of bits used affects error. Bits used is the x axis, and expected error is the y axis.

From the table and the graph you can see that adding bits (registers) gives diminishing returns in error reduction. It’s especially diminishing because whenever we add another bit, we double our storage size (double the number of registers we use).

This shows us that this algorithm is great at counting a large number of uniques, since one byte per counter can count up to about 2^256 (2^2^8) uniques, but it isn’t super great at getting a low error rate. If you are ok with about 2% accuracy, the storage space needed is pretty small though!

Remember the claim at the top of this post?

“…the new algorithm makes it possible to estimate cardinalities well beyond 10^9 with a typical accuracy of 2% while using a memory of only 1.5 kilobytes”

Looking at the bits used / error table, you can see that 11 bits, or 2048 registers, gives just a little over 2% expected error.

If you are using 32 bits of hash to look for runs of zeros, you can use 6 bit registers to store the longest run seen, if you want to waste a bit to be able to store 0-32 instead of just 0-31.

So, 2048 registers * 6 bits = 12288 bits of register storage needed. That is 1536 bytes, or exactly 1.5KB.

You could count up to ~4 billion uniques (2^32) with that configuration, but error increases as you get closer to that limit, so I think that’s why they limited their statement to counting ~1 billion uniques (10^9).

Estimating Counts

The math behind the count estimation is a bit complex (check the paper in the links section for more info!) but part of how it works is that it uses the harmonic mean to average the data from the registers together. Since there is randomness involved, and differences in our run lengths mean being off by exponential amounts, the harmonic mean is great at filtering out large outliers caused by random fluctuations. Fluctuations to the “too small” end won’t matter too much, since those values get overwritten anyway (we store the maximum value seen). Fluctuations to the “too large” end are mitigated by the harmonic mean.

Here’s some pseudo code for estimating the count. Note that we are storing the position of the first 1 in the registers, not storing the run length of zeros. That’s an important distinction because it means the number is 1 higher than it would be otherwise, which if you get it wrong makes your estimation half as large as it should be. It also means that you know if you see a zero register, that it is uninitialized and hasn’t seen a value yet.

Alpha = 0.7213 / (1 + 1.079 / NumRegisters)
Sum = 0
for i = 0 to NumRegisters
  Sum = Sum + pow(2, -Register[i])

Estimation = Alpha * NumRegisters^2 / Sum

// do small range correction
if Estimation < 5 / 2 * NumRegisters
{
  if NumEmptyRegisters > 0
    Estimation = NumRegisters * ln(NumRegisters / NumEmptyRegisters)
}
// do large range correction
else if Estimation > 2^32 / 30
{
  Estimation = -2^32 * ln(1 - Estimation/ 2^32);
}

Small range correction is there because when not enough registers have been filled in (they haven’t gotten any data yet), the normal algorithm path has expected error greater than 1.04. Large range correction is there for the same reason, but on the high end side, when the registers are saturated.

Set Operations

You can do set operations on hyperloglog objects so long as they use the same number of registers, same sized registers, and same hashing algorithm.

There’s a link in the links section at the end of this post that shows you how to resize the number of registers so that you can do set operations on HLL objects that have different numbers of registers.

Union

Taking a union of two HLL objects is actually really simple.

If you have two HLL objects A and B, that you want to union to get C, all you do is take the maximum bucket value from A and B and store it in C. Check out this pseudocode to see what I mean:

for i = 0 to NumRegisters
  C.Register[i] = Max(A.Register[i], B.Register[i])

The neat thing about doing this union is that it is LOSSLESS and doesn’t introduce any new error. Doing a union of two HLL objects is just the same as if you had a third HLL object that processed all the same objects that A and B both processed.

Intersection

To do an intersection is a tiny bit trickier, but not by much. We have to use what is called the Inclusion-Exclusion Principle (check links section for more info).

Using that principle, we can estimate the count of how many items are in the intersection, but we can’t get a HLL object representing the intersection of the two objects unfortunately.

The formula is this:

Count(Intersection(A,B)) = Count(A) + Count(B) - Count(Union(A,B))

And here’s some more pseudocode to show you how to do it:

C = Union(A,B)
IntersectionCountEstimate = A.CountEstimate() + B.CountEstimate() - C.CountEstimate()

Pretty simple eh?

At the beginning I mentioned that KMV was actually better at intersections than HyperLogLog. The reason for that is because with KMV, you have a small, random sample range from both objects, and you can do an intersection between those two ranges and get your result.

KMV really starts to shine when you need to do an intersection between more than 2 or 3 lists, because using the inclusion-exclusion principle causes a combinatorial explosion, while the KMV intersection easily extends for N sets.

Jaccard Index

There’s no special trick to calculating the Jaccard Index, as per usual it’s just:

JaccardIndex = Count(Intersection(A,B)) / Count(Union(A,B))

Which will give you a value from 0 to 1 indicating how similar the two sets A and B are, where 1 is totally the same, and 0 is completely different. In the case of HLL, this is an estimated Jaccard Index of course, not the actual value!

Contains Percent

I was noticing in runs of the sample code below that while the union operation had pretty decent error levels, the intersection operation had not-so-great error levels, and thus the Jaccard Index wasn’t very accurate either. This is mostly because the intersection counts were pretty small, so being off by even +1 or -1 was a large percentage of the actual value.

Despite having a reasonable explanation that diminished the actual impact of the “high error levels”, I wanted to see if I could come up with a different similarity metric and see how the error rate was on it.

What I came up with was a “Contains Percent”, which is the percentage of how many items in A are contained in B. You calculate it like this:

ContainsPercent = Count(Intersection(A,B)) / Count(A)

Since it uses the intersection value (that is not so accurate), it didn’t give great results, but I wanted to mention it because it actually is a completely different measurement than Jaccard Index with different meaning. All of the items in A could be in B, which would give it a “ContainsPercent” of 1.0, but B may have 10 times as many items that don’t appear in A, which would make the Jaccard Index very low.

In some cases, you may want to use the information that the Jaccard Index represents to make decisions, and in other cases you may want this “Contains Percent” metric, or maybe even something else.

It’s a bit subtle, but it’s good to think about what it is that you are actually looking for if using these things in actual production code (:

Estimating Set Membership

So, HLL is NOT a bloom filter, but can it still be used like one?

The answer is yes, but I don’t have a whole lot of information about the formal accuracy of that.

Basically how you’d do this is create a temporary HLL object, make it store the single item you want to check the other HLL for set membership of, and then you’d do an estimated intersection count between the two HLL objects.
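Here’s a minimal sketch of that, using the CHyperLogLog class and IntersectionCountEstimate function from the example code below. The 0.5 threshold is just an arbitrary illustrative choice, not something formal:

template <typename TKEY, size_t NUMREGISTERBITS, typename HASHER>
bool ProbablyContains (const CHyperLogLog<TKEY, NUMREGISTERBITS, HASHER>& haystack, const TKEY& item)
{
    // a temporary HLL object that has seen only the item in question
    CHyperLogLog<TKEY, NUMREGISTERBITS, HASHER> single;
    single.AddItem(item);

    // if the estimated intersection count is closer to 1 than to 0, the item was probably seen
    return IntersectionCountEstimate(haystack, single) > 0.5f;
}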

As crazy as it sounds, it looks like redis exposes this functionality and says it is pretty accurate (for however many registers they used anyways), which is pretty neat:
Redis: PFCOUNT

Dot Product

The dot product between two sets (or multi sets – where you have a count associated with each item) can be really useful to help gauge the similarity between the two sets.

You can do a dot product operation between two HLL objects too. If you think about it, getting the dot product between two HLL objects is the same as getting the estimated count of the intersection between those two objects.
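In code that’s nothing new, just a renamed call. A tiny sketch using the functions from the example code below:

template <typename TKEY, size_t NUMREGISTERBITS, typename HASHER>
float DotProductEstimate (const CHyperLogLog<TKEY, NUMREGISTERBITS, HASHER>& A, const CHyperLogLog<TKEY, NUMREGISTERBITS, HASHER>& B)
{
    // sets have a count of 0 or 1 per item, so the dot product is just the size of the intersection
    return IntersectionCountEstimate(A, B);
}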

Example Code

Here is some example code in C++ that allows you to do HyperLogLog. It uses only standard include files (plus a Microsoft intrinsic header for _BitScanForward) and is in a single file for convenience.

I use a hash called MurmurHash3 to generate 128 bits of hash, 32 bits of which are used to generate the register index, and the remaining 96 bits are used for looking at runs of zeros.

Below the code is the output of a run of this program

#include <array>
#include <string>
#include <assert.h>
#include <unordered_set>
#include <stdint.h>
#include <memory>
#include <algorithm>
#include <cmath>
#include <stdio.h>
#include <ctype.h>
 
// microsoft only, for _BitScanForward to quickly find the index of the lowest 1 bit
// On gcc / clang, __builtin_ctz can be used instead
#include <intrin.h>
 
//=====================================================================================================
// MurmurHash3
//=====================================================================================================
 
// from https://code.google.com/p/smhasher/source/browse/trunk/MurmurHash3.cpp
// note that this 128 bit MurmurHash3 is optimized for x86.  There is a version at the above link
// optimized for x64 as well, but it gives different output for the same input.
 
#define ROTL32(x,y)     _rotl(x,y)
 
inline uint32_t getblock32 (const uint32_t * p, int i)
{
    return p[i];
}
 
inline uint32_t fmix32 (uint32_t h)
{
    h ^= h >> 16;
    h *= 0x85ebca6b;
    h ^= h >> 13;
    h *= 0xc2b2ae35;
    h ^= h >> 16;
 
    return h;
}
 
void MurmurHash3_x86_128 (const void * key, const int len,
    uint32_t seed, std::array<uint32_t, 4> & out)
{
    const uint8_t * data = (const uint8_t*)key;
    const int nblocks = len / 16;
 
    uint32_t h1 = seed;
    uint32_t h2 = seed;
    uint32_t h3 = seed;
    uint32_t h4 = seed;
 
    const uint32_t c1 = 0x239b961b;
    const uint32_t c2 = 0xab0e9789;
    const uint32_t c3 = 0x38b34ae5;
    const uint32_t c4 = 0xa1e38b93;
 
    //----------
    // body
 
    const uint32_t * blocks = (const uint32_t *)(data + nblocks * 16);
 
    for (int i = -nblocks; i; i++)
    {
        uint32_t k1 = getblock32(blocks, i * 4 + 0);
        uint32_t k2 = getblock32(blocks, i * 4 + 1);
        uint32_t k3 = getblock32(blocks, i * 4 + 2);
        uint32_t k4 = getblock32(blocks, i * 4 + 3);
 
        k1 *= c1; k1 = ROTL32(k1, 15); k1 *= c2; h1 ^= k1;
 
        h1 = ROTL32(h1, 19); h1 += h2; h1 = h1 * 5 + 0x561ccd1b;
 
        k2 *= c2; k2 = ROTL32(k2, 16); k2 *= c3; h2 ^= k2;
 
        h2 = ROTL32(h2, 17); h2 += h3; h2 = h2 * 5 + 0x0bcaa747;
 
        k3 *= c3; k3 = ROTL32(k3, 17); k3 *= c4; h3 ^= k3;
 
        h3 = ROTL32(h3, 15); h3 += h4; h3 = h3 * 5 + 0x96cd1c35;
 
        k4 *= c4; k4 = ROTL32(k4, 18); k4 *= c1; h4 ^= k4;
 
        h4 = ROTL32(h4, 13); h4 += h1; h4 = h4 * 5 + 0x32ac3b17;
    }
 
    //----------
    // tail
 
    const uint8_t * tail = (const uint8_t*)(data + nblocks * 16);
 
    uint32_t k1 = 0;
    uint32_t k2 = 0;
    uint32_t k3 = 0;
    uint32_t k4 = 0;
 
    switch (len & 15)
    {
    case 15: k4 ^= tail[14] << 16;
    case 14: k4 ^= tail[13] << 8;
    case 13: k4 ^= tail[12] << 0;
        k4 *= c4; k4 = ROTL32(k4, 18); k4 *= c1; h4 ^= k4;
 
    case 12: k3 ^= tail[11] << 24;
    case 11: k3 ^= tail[10] << 16;
    case 10: k3 ^= tail[9] << 8;
    case  9: k3 ^= tail[8] << 0;
        k3 *= c3; k3 = ROTL32(k3, 17); k3 *= c4; h3 ^= k3;
 
    case  8: k2 ^= tail[7] << 24;
    case  7: k2 ^= tail[6] << 16;
    case  6: k2 ^= tail[5] << 8;
    case  5: k2 ^= tail[4] << 0;
        k2 *= c2; k2 = ROTL32(k2, 16); k2 *= c3; h2 ^= k2;
 
    case  4: k1 ^= tail[3] << 24;
    case  3: k1 ^= tail[2] << 16;
    case  2: k1 ^= tail[1] << 8;
    case  1: k1 ^= tail[0] << 0;
        k1 *= c1; k1 = ROTL32(k1, 15); k1 *= c2; h1 ^= k1;
    };
 
    //----------
    // finalization
 
    h1 ^= len; h2 ^= len; h3 ^= len; h4 ^= len;
 
    h1 += h2; h1 += h3; h1 += h4;
    h2 += h1; h3 += h1; h4 += h1;
 
    h1 = fmix32(h1);
    h2 = fmix32(h2);
    h3 = fmix32(h3);
    h4 = fmix32(h4);
 
    h1 += h2; h1 += h3; h1 += h4;
    h2 += h1; h3 += h1; h4 += h1;
 
    out[0] = h1;
    out[1] = h2;
    out[2] = h3;
    out[3] = h4;
}
 
struct SMurmurHash3
{
    std::array<uint32_t, 4> HashBytes (const void *key, size_t len)
    {
        // use a random 32 bit number as the seed (salt) of the hash.
        // Gotten from random.org
        // https://www.random.org/cgi-bin/randbyte?nbytes=4&format=h
        static const uint32_t c_seed = 0x2e715b3d;
 
        // MurmurHash3 doesn't do well with small input sizes, so if the input is too small,
        // make it longer in a way that hopefully doesn't cause likely collisions.
        // "the" hashed before the fix (notice the last 3 are the same)
        //  0x45930d0e
        //  0xfc76ee5b
        //  0xfc76ee5b
        //  0xfc76ee5b
        // and after the fix:
        //  0x70220da0
        //  0xe7d0664a
        //  0xb4e4d832
        //  0x25940640
        std::array<uint32_t, 4> ret;
        static const size_t c_minLen = 16;
        if (len < c_minLen)
        {
            unsigned char buffer[c_minLen];
 
            for (size_t i = 0; i < len; ++i)
                buffer[i] = ((unsigned char*)key)[i];
 
            for (size_t i = len; i < c_minLen; ++i)
                buffer[i] = buffer[i%len] + i;
 
            MurmurHash3_x86_128(buffer, c_minLen, c_seed, ret);
        }
        else
        {
            MurmurHash3_x86_128(key, len, c_seed, ret);
        }
 
        return ret;
    }
 
    template <typename T>
    std::array<uint32_t, 4> operator() (const T &object);
 
    template <>
    std::array<uint32_t, 4> operator() <std::string> (const std::string &object)
    {
        return HashBytes(object.c_str(), object.length());
    }
 
    // NOTE: if you need to hash other object types, just make your own template specialization here
};
 
//=====================================================================================================
// The CHyperLogLog class
//=====================================================================================================
//
// TKEY is the type of objects to keep track of
// NUMREGISTERBITS is how many bits of the hash are used to index into registers.  It also controls
//   how many registers there are since that count is 2^NUMREGISTERBITS
// HASHER controls how the keys are hashed
//
template <typename TKEY, size_t NUMREGISTERBITS, typename HASHER>
class CHyperLogLog
{
public:
    CHyperLogLog()
        : m_counts{} // init counts to zero
    { }
 
    // friends
    template <typename TKEY, size_t NUMREGISTERBITS, typename HASHER>
    friend float UnionCountEstimate (const CHyperLogLog<TKEY, NUMREGISTERBITS, HASHER>& A, const CHyperLogLog<TKEY, NUMREGISTERBITS, HASHER>& B);
 
    // constants
    static const size_t c_numRegisters = 1 << NUMREGISTERBITS;
    static const size_t c_registerMask = (c_numRegisters - 1);
 
    // interface
    void AddItem (const TKEY& item)
    {
        // because 2^32 does not fit in 32 bits
        static_assert(NUMREGISTERBITS < 32, "CHyperLogLog must use fewer than 32 register bits");
 
        // make as many hashed bits as we need
        std::array<uint32_t, 4> hash = HASHER()(item);
 
        // use the highest 32 bits for getting our register index
        unsigned int registerIndex = hash[3] & c_registerMask;
 
        // use the other 96 bits as our "unique number" corresponding to our object.  Note that we
        // don't use the high 32 bits that we've already used for the register index because that
        // would bias our results.  Certain values seen would ONLY get tracked by specific registers.
        // That's ok though that we are only using 96 bits because that is still an astronomically large
        // value.  Most HyperLogLog implementations use only 32 bits, while google uses 64 bits.  We are
        // doing even larger with 96 bits.
        // 2^(bits) = how many unique items you can track.
        //
        // 2^32  = 4 billion <--- what most people use
        // 2^64  = 18 billion billion <--- what google uses
        // 2^96  = 79 billion billion billion <--- what we are using
        // 2^128 = 340 billion billion billion billion <--- way huger than anyone needs. Beyond astronomical.
 
        // Figure out the position of the lowest (first) 1 bit
        unsigned long bitSet = 0;
        if (_BitScanForward(&bitSet, hash[0]))
            bitSet += 1;
        else if (_BitScanForward(&bitSet, hash[1]))
            bitSet += 32 + 1;
        else if (_BitScanForward(&bitSet, hash[2]))
            bitSet += 64 + 1;
 
        // store the highest seen value for that register
        assert(bitSet < 256);
        unsigned char value = (unsigned char)bitSet;
        if (m_counts[registerIndex] < value)
            m_counts[registerIndex] = value;
 
    }
 
    unsigned int EmptyRegisterCount () const
    {
        unsigned int ret = 0;
        std::for_each(m_counts.begin(), m_counts.end(), [&] (unsigned char count) {
            if (count == 0)
                ret++;
        });
        return ret;
    }
 
    float GetCountEstimation () const
    {
        // calculate dv estimate
        const float c_alpha = 0.7213f / (1.0f + 1.079f / c_numRegisters);
        float sum = 0.0f;
        std::for_each(m_counts.begin(), m_counts.end(), [&](unsigned char count) {
            sum += std::pow(2.0f, -(float)count);
        });
 
        float dv_est = c_alpha * ((float)c_numRegisters / sum) * (float)c_numRegisters;
 
        // small range correction
        if (dv_est < 5.0f / 2.0f * (float)c_numRegisters)
        {
            // if no empty registers, use the estimate we already have
            unsigned int emptyRegisters = EmptyRegisterCount();
            if (emptyRegisters == 0)
                return dv_est;
 
            // balls and bins correction
            return (float)c_numRegisters * log((float)c_numRegisters / (float)emptyRegisters);
        }
 
        // large range correction
        if (dv_est > 143165576.533f) // 2^32 / 30
            return -pow(2.0f, 32.0f) * log(1.0f - dv_est / pow(2.0f, 32.0f));
 
        return dv_est;
    }
 
private:
    std::array<unsigned char, c_numRegisters> m_counts;
};
 
// there seem to be numerical problems when using 26 bits or larger worth of registers
typedef CHyperLogLog<std::string, 10, SMurmurHash3> TCounterEstimated;
typedef std::unordered_set<std::string> TCounterActual;
 
//=====================================================================================================
// Set Operations
//=====================================================================================================
 
template <typename TKEY, size_t NUMREGISTERBITS, typename HASHER>
float UnionCountEstimate (
    const CHyperLogLog<TKEY, NUMREGISTERBITS, HASHER>& A,
    const CHyperLogLog<TKEY, NUMREGISTERBITS, HASHER>& B)
{
    // dynamically allocate our hyperloglog object to not bust the stack if you are using a lot of registers.
    std::unique_ptr<CHyperLogLog<TKEY, NUMREGISTERBITS, HASHER>> temp =
        std::make_unique<CHyperLogLog<TKEY, NUMREGISTERBITS, HASHER>>();
 
    // To make a union between two hyperloglog objects that have the same number of registers and use
    // the same hash, just take the maximum of each register between the two objects.  This operation is
    // "lossless" in that you end up with the same registers as if you actually had another object
    // that tracked the items each individual object tracked.
    for (size_t i = 0; i < (*temp).c_numRegisters; ++i)
        (*temp).m_counts[i] = std::max(A.m_counts[i], B.m_counts[i]);
    return (*temp).GetCountEstimation();
}
 
template <typename TKEY, size_t NUMREGISTERBITS, typename HASHER>
float IntersectionCountEstimate (
    const CHyperLogLog<TKEY, NUMREGISTERBITS, HASHER>& A,
    const CHyperLogLog<TKEY, NUMREGISTERBITS, HASHER>& B)
{
    // We have to use the inclusion-exclusion principle to get an intersection estimate
    // count(Intersection(A,B)) = (count(A) + count(B)) - count(Union(A,B))
    // http://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle
 
    return (A.GetCountEstimation() + B.GetCountEstimation()) - UnionCountEstimate(A, B);
}
 
float UnionCountActual (const TCounterActual& A, const TCounterActual& B)
{
    TCounterActual temp;
    std::for_each(A.begin(), A.end(), [&] (const TCounterActual::key_type &s){temp.insert(s);});
    std::for_each(B.begin(), B.end(), [&] (const TCounterActual::key_type &s){temp.insert(s);});
    return (float)temp.size();
}
 
float IntersectionCountActual (const TCounterActual& A, const TCounterActual& B)
{
    float ret = 0;
    std::for_each(A.begin(), A.end(), [&](const TCounterActual::key_type &s)
    {
        if (B.find(s) != B.end())
            ++ret;
    });
    return ret;
}
 
//=====================================================================================================
// Error Calculations
//=====================================================================================================
 
float ExpectedError (size_t numRegisters)
{
    return 1.04f / ((float)(sqrt((float)numRegisters)));
}
 
float IdealRegisterCount (float expectedError)
{
    return 676.0f / (625.0f * expectedError * expectedError);
}
 
//=====================================================================================================
// Driver Program
//=====================================================================================================
 
template <typename L>
void ForEachWord(const std::string &source, L& lambda)
{
    size_t prev = 0;
    size_t next = 0;
 
    while ((next = source.find_first_of(" ,.-\":\n", prev)) != std::string::npos)
    {
        if ((next - prev != 0))
        {
            std::string word = source.substr(prev, next - prev);
            std::transform(word.begin(), word.end(), word.begin(), ::tolower);
            lambda(word);
        }
        prev = next + 1;
    }
 
    if (prev < source.size())
    {
        std::string word = source.substr(prev);
        std::transform(word.begin(), word.end(), word.begin(), ::tolower);
        lambda(word);
    }
}
 
template <typename T>
const char *CalculateError (const T&estimate, const T&actual)
{
    float error = 100.0f * ((float)estimate - (float)actual) / (float)actual;
    if (std::isnan(error) || std::isinf(error))
        return "undef";
 
    // bad practice to return a static local string, dont do this in production code!
    static char ret[256];
    sprintf_s(ret, sizeof(ret), "%0.2f%%", error);
    return ret;
}
 
char *BytesToHumanReadable (size_t bytes)
{
    // bad practice to return a static local string, dont do this in production code!
    static char ret[256];
    if (bytes >= 1024 * 1024 * 1024)
        sprintf_s(ret, sizeof(ret), "%0.2fGB", ((float)bytes) / (1024.0f*1024.0f*1024.0f));
    else if (bytes >= 1024 * 1024)
        sprintf_s(ret, sizeof(ret), "%0.2fMB", ((float)bytes) / (1024.0f*1024.0f));
    else if (bytes >= 1024)
        sprintf_s(ret, sizeof(ret), "%0.2fKB", ((float)bytes) / (1024.0f));
    else
        sprintf_s(ret, sizeof(ret), "%u Bytes", (unsigned int)bytes);
    return ret;
}
 
void WaitForEnter()
{
    printf("\nPress Enter to quit");
    fflush(stdin);
    getchar();
}
 
// These one paragraph stories are from http://birdisabsurd.blogspot.com/p/one-paragraph-stories.html
 
// The Dino Doc : http://birdisabsurd.blogspot.com/2011/11/dino-doc.html (97 words)
const char *g_storyA =
"The Dino Doc:\n"
"Everything had gone according to plan, up 'til this moment. His design team "
"had done their job flawlessly, and the machine, still thrumming behind him, "
"a thing of another age, was settled on a bed of prehistoric moss. They'd "
"done it. But now, beyond the protection of the pod and facing an enormous "
"tyrannosaurus rex with dripping jaws, Professor Cho reflected that, had he "
"known of the dinosaur's presence, he wouldn't have left the Chronoculator - "
"and he certainly wouldn't have chosen \"Stayin' Alive\", by The Beegees, as "
"his dying soundtrack. Curse his MP3 player";
 
// The Robot: http://birdisabsurd.blogspot.com/2011/12/robot.html (121 words)
const char *g_storyB =
"The Robot:\n"
"The engineer watched his robot working, admiring its sense of purpose.It knew "
"what it was, and what it had to do.It was designed to lift crates at one end "
"of the warehouse and take them to the opposite end.It would always do this, "
"never once complaining about its place in the world.It would never have to "
"agonize over its identity, never spend empty nights wondering if it had been "
"justified dropping a promising and soul - fulfilling music career just to "
"collect a bigger paycheck.And, watching his robot, the engineer decided that "
"the next big revolution in the robotics industry would be programming "
"automatons with a capacity for self - doubt.The engineer needed some company.";
 
// The Internet: http://birdisabsurd.blogspot.com/2011/11/internet.html (127 words)
const char *g_storyC =
"The Internet:\n"
"One day, Sandra Krewsky lost her mind.Nobody now knows why, but it happened - "
"and when it did, Sandra decided to look at every page on the Internet, "
"insisting that she wouldn't eat, drink, sleep or even use the washroom until "
"the job was done. Traps set in her house stalled worried family members, and by "
"the time they trounced the alligator guarding her bedroom door - it managed to "
"snap her neighbour's finger clean off before going down - Sandra was already "
"lost... though the look of despair carved in her waxen features, and the cat "
"video running repeat on her flickering computer screen, told them everything "
"they needed to know.She'd seen too much. She'd learned that the Internet "
"played for keeps.";
 
int main (int argc, char **argv)
{
    // show basic info regarding memory usage, precision, etc
    printf("For %u registers at 1 byte per register, the expected error is %0.2f%%.\n",
        (unsigned int)TCounterEstimated::c_numRegisters,
        100.0f * ExpectedError(TCounterEstimated::c_numRegisters)
    );
    printf("Memory Usage = %s per HyperLogLog object.\n", BytesToHumanReadable(TCounterEstimated::c_numRegisters));
    static const float c_expectedError = 0.03f;
    printf("For expected error of %0.2f%%, you should use %0.2f registers.\n\n",
        100.0f*c_expectedError,
        IdealRegisterCount(c_expectedError)
    );
 
    // populate our data structures
    // dynamically allocate estimate so we don't bust the stack if we have a large number of registers
    std::unique_ptr<TCounterEstimated> estimateTotal = std::make_unique<TCounterEstimated>();
    std::unique_ptr<TCounterEstimated> estimateA = std::make_unique<TCounterEstimated>();
    std::unique_ptr<TCounterEstimated> estimateB = std::make_unique<TCounterEstimated>();
    std::unique_ptr<TCounterEstimated> estimateC = std::make_unique<TCounterEstimated>();
    TCounterActual actualTotal;
    TCounterActual actualA;
    TCounterActual actualB;
    TCounterActual actualC;
    {
        auto f = [&](const std::string &word)
        {
            estimateTotal->AddItem(word);
            actualTotal.insert(word);
        };
 
        auto fA = [&](const std::string &word)
        {
            estimateA->AddItem(word);
            actualA.insert(word);
            f(word);
        };
 
        auto fB = [&](const std::string &word)
        {
            estimateB->AddItem(word);
            actualB.insert(word);
            f(word);
        };
 
        auto fC = [&](const std::string &word)
        {
            estimateC->AddItem(word);
            actualC.insert(word);
            f(word);
        };
 
        ForEachWord(g_storyA, fA);
        ForEachWord(g_storyB, fB);
        ForEachWord(g_storyC, fC);
    }
 
    // Show unique word counts for the three combined stories
    {
        float estimateCount = estimateTotal->GetCountEstimation();
        float actualCount = (float)actualTotal.size();
        printf("Unique words in the three stories combined:\n");
        printf("  %0.1f estimated, %.0f actual, Error = %s\n\n",
            estimateCount,
            actualCount,
            CalculateError(estimateCount, actualCount)
        );
    }
     
    // show unique word counts per story
    {
        printf("Unique Word Count Per Story:\n");
        auto g = [](const char *name, const TCounterEstimated &estimate, const TCounterActual &actual)
        {
            float estimateCount = estimate.GetCountEstimation();
            float actualCount = (float)actual.size();
            printf("  %s = %0.1f estimated, %.0f actual, Error = %s\n",
                name,
                estimateCount,
                actualCount,
                CalculateError(estimateCount, actualCount)
            );
        };
         
        g("A", *estimateA, actualA);
        g("B", *estimateB, actualB);
        g("C", *estimateC, actualC);
    }
 
    // Set Operations
    {
        printf("\nSet Operations:\n");
 
        auto f = [] (
            const char *name1,
            const TCounterEstimated &estimate1,
            const TCounterActual &actual1,
            const char *name2,
            const TCounterEstimated &estimate2,
            const TCounterActual &actual2
        )
        {
            printf("  %s vs %s...\n", name1, name2);
 
            // union
            float estimateUnionCount = UnionCountEstimate(estimate1, estimate2);
            float actualUnionCount = UnionCountActual(actual1, actual2);
            printf("    Union: %0.1f estimated, %.0f actual, Error = %s\n",
                estimateUnionCount,
                actualUnionCount,
                CalculateError(estimateUnionCount, actualUnionCount)
            );
 
            // intersection
            float estimateIntersectionCount = IntersectionCountEstimate(estimate1, estimate2);
            float actualIntersectionCount = IntersectionCountActual(actual1, actual2);
            printf("    Intersection: %0.1f estimated, %.0f actual, Error = %s\n",
                estimateIntersectionCount,
                actualIntersectionCount,
                CalculateError(estimateIntersectionCount, actualIntersectionCount)
            );
 
            // jaccard index
            float estimateJaccard = estimateIntersectionCount / estimateUnionCount;
            float actualJaccard = actualIntersectionCount / actualUnionCount;
            printf("    Jaccard Index: %0.4f estimated, %.4f actual, Error = %s\n",
                estimateJaccard,
                actualJaccard,
                CalculateError(estimateJaccard, actualJaccard)
            );
 
            // Contains Percent.
            // What percentage of items in A are also in B?
            float estimateSim = estimateIntersectionCount / estimate1.GetCountEstimation();
            float actualSim = actualIntersectionCount / actual1.size();
            printf("    Contains Percent: %0.2f%% estimated, %0.2f%% actual, Error = %s\n",
                100.0f*estimateSim,
                100.0f*actualSim,
                CalculateError(estimateSim, actualSim)
            );
        };
 
        f("A", *estimateA, actualA, "B", *estimateB, actualB);
        f("A", *estimateA, actualA, "C", *estimateC, actualC);
        f("B", *estimateB, actualB, "C", *estimateC, actualC);
    }
 
    WaitForEnter();
    return 0;
}

Here is the output of the above program:

Want More?

As per usual, the rabbit hole goes far deeper than what I’ve shown. Check out the links below to go deeper!

HyperLogLog Research Paper
Combining HLL and KMV to Get The Best of Both
Doubling the Count of HLL Registers on the Fly
Set Operations on HLLs of Different Sizes
Interactive HyperLogLog Demo
Interactive HyperLogLog Union Demo
HyperLogLog++ Research Paper
HyperLogLog++ Analyzed
Wikipedia: Inclusion Exclusion Principle
Coin Flipping
Wikipedia: MurmurHash3
MurmurHash3 C++ Source Code

Count Min Sketch: A Probabilistic Histogram

Count min sketch is a probabilistic histogram that was invented in 2003 by Graham Cormode and S. Muthukrishnan.

It’s a histogram in that it can store objects (keys) and associated counts.

It’s probabilistic in that it lets you trade space and computation time for accuracy.

The count min sketch is pretty similar to a bloom filter, except that instead of storing a single bit to say whether an object is in the set, the count min sketch allows you to keep a count per object. You can read more about bloom filters here: Estimating Set Membership With a Bloom Filter.

It’s called a “sketch” because it’s a smaller summarization of a larger data set.

Inserting an Item

The count min sketch is just a 2 dimensional array, with size of W x D. The actual data type in the array depends on how much storage space you want. It could be an unsigned char, it could be 4 bits, or it could be a uint64 (or larger!).

Each row (value of D) uses a different hash function to map objects to an index of W.

To insert an object, for each row you hash the object using that row’s hash function to get the W index for that object, then increment the count at that position.

In this blog post, I’m only going to be talking about being able to add items to the count min sketch. There are different rules / probabilities / etc for count min sketches that can have objects removed, but you can check out the links at the bottom of this post for more information about that!

Getting an Item Count

When you want to get the count of how many times an item has been added to the count min sketch, you do a similar operation as when you insert.

For each row, you hash the object being asked about with that row’s hash function to get the W index and then get the value for that row at the W index.

This will give you D values and you just return the smallest one.

The reason for this is that, due to hash collisions, the count in a specific slot may have been incremented extra times by other objects hashing to the same slot. Counts only ever get inflated, never deflated, so by taking the minimum value across all rows you are taking the value with the fewest hash collisions. That value is the most accurate, and is still guaranteed to be greater than or equal to the actual answer, never lower.
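Here’s a minimal sketch of insert and point query in C++. Deriving each row’s bucket index by mixing the row number into a single hash, like below, is just one illustrative stand in for having D independent hash functions:

#include <array>
#include <string>
#include <functional>
#include <stdint.h>

template <size_t W, size_t D>
struct MiniCountMinSketch
{
    std::array<std::array<uint32_t, W>, D> m_counts = {};

    // an illustrative per row bucket index: mix the row number into one base hash
    size_t Bucket (const std::string& item, size_t row) const
    {
        size_t rawHash = std::hash<std::string>()(item);
        return std::hash<size_t>()(rawHash ^ row) % W;
    }

    // increment one counter in every row
    void Insert (const std::string& item)
    {
        for (size_t row = 0; row < D; ++row)
            m_counts[row][Bucket(item, row)]++;
    }

    // return the minimum count across the rows: the one least inflated by collisions
    uint32_t Count (const std::string& item) const
    {
        uint32_t ret = 0;
        for (size_t row = 0; row < D; ++row)
        {
            uint32_t value = m_counts[row][Bucket(item, row)];
            if (row == 0 || value < ret)
                ret = value;
        }
        return ret;
    }
};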

Dot Product (Inner Product)

If you read the last post on using dot product with histograms to gauge similarity, you might be wondering if you can do a dot product between two count min sketch objects.

Luckily yes, you can! They need to have the same W x D dimensions and they need to use the same hash functions per row, but if that’s true, you can calculate a dot product value very easily.

If you have two count min sketch objects A and B that you want to calculate the dot product for, you dot product each row (D index) of the two count min sketch objects. This will leave you with D dot products and you just return the smallest one. This guarantees that the dot product value you calculate will have the fewest hash collisions (so will be most accurate), and will also guarantee that the estimate is greater than or equal to the actual answer, but will never be lower.

To Normalize Or Not To Normalize

There is a caveat here though with doing a dot product between two count min sketch objects. If you do a normalized dot product (normalize the vectors before doing a dot product, or dividing the answer by the length of the two vectors multiplied together), the guarantee that the dot product is greater than or equal to the true answer no longer holds!

The reason for this is that the formula for doing a normalized dot product is like this:
normalized dot product = dot(A,B) / (length(A)*length(B))

In a count min sketch, the dot(A,B) estimate is guaranteed to be greater than or equal to the true value.

The length of a vector is also guaranteed to be greater than or equal to the length of the true vector (the vector made from the actual histogram values).

This means that the numerator and the denominator BOTH have varying levels of overestimation in them. Overestimation in the numerator makes the normalized dot product estimate larger, while overestimation in the denominator makes the normalized dot product estimate smaller.

The result is that a normalized dot product estimate can make no guarantee about being greater than or equal to the true value!

This may or may not be a problem for your situation. Doing a dot product with unnormalized vectors still gives you a value that you can use to compare “similarity values” between histograms, but it has slightly different meaning than a dot product with normalized vectors.

Specifically, if the counts are much larger in one histogram versus another (such as when doing a dot product between multiple large text documents and a small search term string), the “weight” of the larger counts will count for more.

That means if you search for “apple pie”, a 100 page novel that mentions apples 10 times will be a better match than a 1/2 page recipe for apple pie!

When you normalize histograms, it makes it so the counts are “by percentage of the total document length”, which would help our search correctly find that the apple pie recipe is more relevant.

In other situations, you might want to let the higher count weigh stronger even though the occurrences are “less dense” in the document.

It really just depends on what your usage case is.

Calculating W & D

There are two parameters (values) used when calculating the correct W and D dimensions of a count min sketch, for the desired accuracy levels. The parameters are ε (epsilon) and δ (delta).

ε (epsilon) is “how much error is added to our counts with each item we add to the cm sketch”.

δ (delta) is “with what probability do we want to allow the count estimate to be outside of our epsilon error rate”.

To calculate W and D, you use these formulas:

W = ⌈e/ε⌉
D = ⌈ln(1/δ)⌉

Where ln is “natural log” and e is “Euler’s constant” (about 2.718).

For example, if you want at most 1% error added per item (ε = 0.01), with only a 5% chance of exceeding that (δ = 0.05), you’d use W = ⌈2.718/0.01⌉ = 272 buckets and D = ⌈ln(1/0.05)⌉ = 3 hashes.

Accuracy Guarantees

When querying to get a count for a specific object (also called a “point query”) the accuracy guarantees are:

  1. True Count <= Estimated Count
  2. Estimated Count <= True Count + ε * Number Of Items Added
  3. There is a δ chance that #2 is not true

When doing an unnormalized dot product, the accuracy guarantees are:

  1. True Dot Product <= Estimated Dot Product
  2. Estimated Dot Product <= True Dot Product + ε * Number Of Items Added To A * Number Of Items Added To B
  3. There is a δ chance that #2 is not true

Conservative Update

There is an alternate way to implement adding an item to the cm sketch, which results in provably less error. That technique is called a “Conservative Update”.

When doing a conservative update, you first look at the values in each row that you would normally increment and keep track of the smallest value that you’ve seen. You then only increment the counters that have that smallest value.

The reason this works is because we only look at the smallest value across all rows when doing a look up. So long as the smallest value across all rows increases when you insert an object, you’ve satisfied the requirements to make a look up return a value that is greater than or equal to the true value. The reason this conservative update results in less error is because you are writing to fewer values, which means that there are fewer hash collisions happening.
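Here’s that update in pseudocode, where Hash_row means that row’s hash function and GetCount is the point query described above:

NewValue = GetCount(item) + count
for row = 0 to NumRows
  if Counts[row][Hash_row(item)] < NewValue
    Counts[row][Hash_row(item)] = NewValue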

While this increases accuracy, it comes at the cost of extra logic and processing time needed when doing an update, which may or may not be appropriate for your needs.

Example Runs

The example program is a lot like the program from the last post which implemented some search engine type functionality.

This program also shows you some count estimations to show you that functionality as well.

The first run of the program is with normalized vectors, the second run of the program is with unnormalized vectors, and the third run of the program, which is most accurate, is with unnormalized vectors and conservative updates.

First Run: Normalized Vectors, Regular Updates

Second Run: Unnormalized Vectors, Regular Updates

Third Run: Unnormalized Vectors, Conservative Updates

Example Code

#include <array>
#include <string>
#include <set>
#include <unordered_map>
#include <vector>
#include <memory>
#include <algorithm>
#include <functional>
#include <cmath>
#include <stdio.h>
#include <ctype.h>

const float c_eulerConstant = (float)std::exp(1.0);

// The CCountMinSketch class
template <typename TKEY, typename TCOUNTTYPE, unsigned int NUMBUCKETS, unsigned int NUMHASHES, typename HASHER = std::hash<TKEY>>
class CCountMinSketch
{
public:
	typedef CCountMinSketch TType;

	CCountMinSketch ()
		: m_countGrid { } // init counts to zero
		, m_vectorLengthsDirty(true)
	{ }

	static const unsigned int c_numBuckets = NUMBUCKETS;
	static const unsigned int c_numHashes = NUMHASHES;
	typedef TCOUNTTYPE TCountType;

	void AddItem (bool conservativeUpdate, const TKEY& item, const TCOUNTTYPE& count)
	{
		// this count min sketch is only supporting positive counts
		if (count <= 0)
		{
			printf("ERROR: AddItem needs a count > 0!\n");
			return;
		}

		// remember that our vector lengths are inaccurate
		m_vectorLengthsDirty = true;

		// if doing a conservative update, only update the buckets that are necessary
		if (conservativeUpdate)
		{
			// find what the lowest valued bucket is and calculate what our new lowest
			// value should be
			TCOUNTTYPE lowestValue = GetCount(item) + count;

			// make sure every bucket has at least the lowest value it should have
			size_t rawHash = HASHER()(item);
			for (unsigned int i = 0; i < NUMHASHES; ++i)
			{
				size_t hash = std::hash<size_t>()(rawHash ^ std::hash<unsigned int>()(i));
				TCOUNTTYPE value = m_countGrid[i][hash%NUMBUCKETS];
				if (value < lowestValue)
					m_countGrid[i][hash%NUMBUCKETS] = lowestValue;
			}
		}
		// else do a normal update
		else
		{
			// for each hash, find what bucket this item belongs in, and add the count to that bucket
			size_t rawHash = HASHER()(item);
			for (unsigned int i = 0; i < NUMHASHES; ++i)
			{
				size_t hash = std::hash<size_t>()(rawHash ^ std::hash<unsigned int>()(i));
				m_countGrid[i][hash%NUMBUCKETS] += count;
			}
		}
	}

	TCOUNTTYPE GetCount (const TKEY& item)
	{
		// for each hash, get the value for this item, and return the smallest value seen
		TCOUNTTYPE ret = 0;
		size_t rawHash = HASHER()(item);
		for (unsigned int i = 0; i < NUMHASHES; ++i)
		{
			size_t hash = std::hash<size_t>()(rawHash ^ std::hash<unsigned int>()(i));
			if (i == 0 || ret > m_countGrid[i][hash%NUMBUCKETS])
				ret = m_countGrid[i][hash%NUMBUCKETS];
		}
		return ret;
	}

	void CalculateVectorLengths ()
	{
		// if our vector lengths were previously calculated, no need to do anything
		if (!m_vectorLengthsDirty)
			return;

		// calculate vector lengths of each hash
		for (unsigned int hash = 0; hash < NUMHASHES; ++hash)
		{
			m_vectorLengths[hash] = 0.0f;
			for (unsigned int bucket = 0; bucket < NUMBUCKETS; ++bucket)
				m_vectorLengths[hash] += (float)m_countGrid[hash][bucket] * (float)m_countGrid[hash][bucket];
			m_vectorLengths[hash] = sqrt(m_vectorLengths[hash]);
		}

		// remember that our vector lengths have been calculated
		m_vectorLengthsDirty = false;
	}

	friend float HistogramDotProduct (TType& A, TType& B, bool normalize)
	{
		// make sure the vector lengths are accurate. No cost if they were previously calculated
		A.CalculateVectorLengths();
		B.CalculateVectorLengths();

		// whatever hash has the smallest dot product is the most correct
		float ret = 0.0f;
		bool foundValidDP = false;
		for (unsigned int hash = 0; hash < NUMHASHES; ++hash)
		{
			// if either vector length is zero, don't consider this dot product a valid result
			// we cant normalize it, and it will be zero anyways
			if (A.m_vectorLengths[hash] == 0.0f || B.m_vectorLengths[hash] == 0.0f)
				continue;

			// calculate dot product of unnormalized vectors
			float dp = 0.0f;
			for (unsigned int bucket = 0; bucket < NUMBUCKETS; ++bucket)
				dp += (float)A.m_countGrid[hash][bucket] * (float)B.m_countGrid[hash][bucket];

			// normalize the dot product if we are supposed to
			if (normalize)
				dp /= (A.m_vectorLengths[hash] * B.m_vectorLengths[hash]);

			// the smallest dot product seen has the fewest hash collisions, so is the most correct
			if (!foundValidDP || ret > dp)
			{
				ret = dp;
				foundValidDP = true;
			}
		}
		return ret;
	}

private:
	typedef std::array<TCOUNTTYPE, NUMBUCKETS> TBucketList;
	typedef std::array<TBucketList, NUMHASHES> TTable;

	TTable m_countGrid;
	bool m_vectorLengthsDirty;
	std::array<float, NUMHASHES> m_vectorLengths;
};

// Calculate ideal count min sketch parameters for your needs.
unsigned int CMSIdealNumBuckets (float error)
{
	return (unsigned int)std::ceil((float)(c_eulerConstant / error));
}

unsigned int CMSIdealNumHashes (float probability)
{
	return (unsigned int)std::ceil(log(1.0f / probability));
}

typedef std::string TKeyType;
typedef unsigned char TCountType;

// NOTE: the bucket and hash counts below are illustrative stand-ins (the original
// values were lost in formatting).  CMSIdealNumBuckets / CMSIdealNumHashes above
// show how to pick your own.
typedef CCountMinSketch<TKeyType, TCountType, 128, 4> THistogramEstimate;
typedef std::unordered_map<TKeyType, TCountType> THistogramActual;

// These one paragraph stories are from http://birdisabsurd.blogspot.com/p/one-paragraph-stories.html

// The Dino Doc : http://birdisabsurd.blogspot.com/2011/11/dino-doc.html (97 words)
const char *g_storyA =
"The Dino Doc:\n"
"Everything had gone according to plan, up 'til this moment. His design team "
"had done their job flawlessly, and the machine, still thrumming behind him, "
"a thing of another age, was settled on a bed of prehistoric moss. They'd "
"done it. But now, beyond the protection of the pod and facing an enormous "
"tyrannosaurus rex with dripping jaws, Professor Cho reflected that, had he "
"known of the dinosaur's presence, he wouldn't have left the Chronoculator - "
"and he certainly wouldn't have chosen \"Stayin' Alive\", by The Beegees, as "
"his dying soundtrack. Curse his MP3 player";

// The Robot: http://birdisabsurd.blogspot.com/2011/12/robot.html (121 words)
const char *g_storyB =
"The Robot:\n"
"The engineer watched his robot working, admiring its sense of purpose.It knew "
"what it was, and what it had to do.It was designed to lift crates at one end "
"of the warehouse and take them to the opposite end.It would always do this, "
"never once complaining about its place in the world.It would never have to "
"agonize over its identity, never spend empty nights wondering if it had been "
"justified dropping a promising and soul - fulfilling music career just to "
"collect a bigger paycheck.And, watching his robot, the engineer decided that "
"the next big revolution in the robotics industry would be programming "
"automatons with a capacity for self - doubt.The engineer needed some company.";

// The Internet: http://birdisabsurd.blogspot.com/2011/11/internet.html (127 words)
const char *g_storyC =
"The Internet:\n"
"One day, Sandra Krewsky lost her mind.Nobody now knows why, but it happened - "
"and when it did, Sandra decided to look at every page on the Internet, "
"insisting that she wouldn't eat, drink, sleep or even use the washroom until "
"the job was done. Traps set in her house stalled worried family members, and by "
"the time they trounced the alligator guarding her bedroom door - it managed to "
"snap her neighbour's finger clean off before going down - Sandra was already "
"lost... though the look of despair carved in her waxen features, and the cat "
"video running repeat on her flickering computer screen, told them everything "
"they needed to know.She'd seen too much. She'd learned that the Internet "
"played for keeps.";

void WaitForEnter ()
{
	printf("\nPress Enter to quit");
	fflush(stdin);
	getchar();
}

template <typename L>
void ForEachWord (const std::string &source, L& lambda)
{
	size_t prev = 0;
	size_t next = 0;

	while ((next = source.find_first_of(" ,.-\":\n", prev)) != std::string::npos)
	{
		if ((next - prev != 0))
		{
			std::string word = source.substr(prev, next - prev);
			std::transform(word.begin(), word.end(), word.begin(), ::tolower);
			lambda(word);
		}
		prev = next + 1;
	}

	if (prev < source.size())
	{
		std::string word = source.substr(prev);
		std::transform(word.begin(), word.end(), word.begin(), ::tolower);
		lambda(word);
	}
}

void PopulateHistogram (THistogramEstimate &histogram, const char *text, bool conservativeUpdate)
{
	ForEachWord(text, [&](const std::string &word) {
		histogram.AddItem(conservativeUpdate, word, 1);
	});
}

void PopulateHistogram (THistogramActual &histogram, const char *text)
{
	ForEachWord(text, [&histogram](const std::string &word) {
		histogram[word]++;
	});
}

float HistogramDotProduct (THistogramActual &A, THistogramActual &B, bool normalize)
{
	// Get all the unique keys from both histograms
	std::set<TKeyType> keysUnion;
	std::for_each(A.cbegin(), A.cend(), [&keysUnion](const std::pair<const TKeyType, TCountType>& v)
	{
		keysUnion.insert(v.first);
	});
	std::for_each(B.cbegin(), B.cend(), [&keysUnion](const std::pair<const TKeyType, TCountType>& v)
	{
		keysUnion.insert(v.first);
	});

	// calculate and return the normalized dot product!
	float dotProduct = 0.0f;
	float lengthA = 0.0f;
	float lengthB = 0.0f;
	std::for_each(keysUnion.cbegin(), keysUnion.cend(),
		[&A, &B, &dotProduct, &lengthA, &lengthB]
		(const TKeyType& key)
		{
			// if the key isn't found in either histogram, ignore it, since 0 * x is
			// always 0 anyhow.  Make sure and keep track of vector length though!
			auto a = A.find(key);
			auto b = B.find(key);

			if (a != A.end())
				lengthA += (float)(*a).second * (float)(*a).second;

			if (b != B.end())
				lengthB += (float)(*b).second * (float)(*b).second;

			if (a == A.end())
				return;

			if (b == B.end())
				return;

			// calculate dot product
			dotProduct += ((float)(*a).second * (float)(*b).second);
		}
	);

	// if we don't need to normalize, return the unnormalized value we have right now
	if (!normalize)
		return dotProduct;

	// normalize if we can
	if (lengthA * lengthB <= 0.0f)
		return 0.0f;

	lengthA = sqrt(lengthA);
	lengthB = sqrt(lengthB);
	return dotProduct / (lengthA * lengthB);
}

template <typename T>
const char *CalculateError (const T&estimate, const T&actual)
{
	float error = 100.0f * ((float)estimate - (float)actual) / (float)actual;
	if (std::isnan(error) || std::isinf(error))
		return "undef";
	
	// bad practice to return a static local string, dont do this in production code!
	static char ret[256];
	sprintf(ret, "%0.2f%%", error);
	return ret;
}

int main (int argc, char **argv)
{
	// settings
	const bool c_normalizeDotProducts = false;
	const bool c_conservativeUpdate = true;

	// show settings and implication
	printf("Dot Products Normalized? %s\n",
		c_normalizeDotProducts
			? "Yes! estimate could be < or > actual"
			: "No! estimate >= actual");

	printf("Conservative Updates? %s\n\n",
		c_conservativeUpdate
			? "Yes! Reduced error"
			: "No! normal error");	

	// populate our probabilistic histograms.
	// Allocate memory for the objects so that we don't bust the stack for large histogram sizes!
	std::unique_ptr<THistogramEstimate> TheDinoDocEstimate = std::make_unique<THistogramEstimate>();
	std::unique_ptr<THistogramEstimate> TheRobotEstimate = std::make_unique<THistogramEstimate>();
	std::unique_ptr<THistogramEstimate> TheInternetEstimate = std::make_unique<THistogramEstimate>();
	PopulateHistogram(*TheDinoDocEstimate, g_storyA, c_conservativeUpdate);
	PopulateHistogram(*TheRobotEstimate, g_storyB, c_conservativeUpdate);
	PopulateHistogram(*TheInternetEstimate, g_storyC, c_conservativeUpdate);

	// populate our actual count histograms for comparison
	THistogramActual TheDinoDocActual;
	THistogramActual TheRobotActual;
	THistogramActual TheInternetActual;
	PopulateHistogram(TheDinoDocActual, g_storyA);
	PopulateHistogram(TheRobotActual, g_storyB);
	PopulateHistogram(TheInternetActual, g_storyC);

	// report whether B or C is a closer match for A
	float dpABEstimate = HistogramDotProduct(*TheDinoDocEstimate, *TheRobotEstimate, c_normalizeDotProducts);
	float dpACEstimate = HistogramDotProduct(*TheDinoDocEstimate, *TheInternetEstimate, c_normalizeDotProducts);
	float dpABActual = HistogramDotProduct(TheDinoDocActual, TheRobotActual, c_normalizeDotProducts);
	float dpACActual = HistogramDotProduct(TheDinoDocActual, TheInternetActual, c_normalizeDotProducts);
	printf(""The Dino Doc" vs ...n");
	printf("  "The Robot"    %0.4f (actual %0.4f) Error: %sn", dpABEstimate, dpABActual, CalculateError(dpABEstimate, dpABActual));
	printf("  "The Internet" %0.4f (actual %0.4f) Error: %snn", dpACEstimate, dpACActual, CalculateError(dpACEstimate, dpACActual));
	if (dpABEstimate > dpACEstimate)
		printf("Estimate: "The Dino Doc" and "The Robot" are more similarn");
	else
		printf("Estimate: "The Dino Doc" and "The Internet" are more similarn");
	if (dpABActual > dpACActual)
		printf("Actual:   "The Dino Doc" and "The Robot" are more similarn");
	else
		printf("Actual:   "The Dino Doc" and "The Internet" are more similarn");

	// let the user do a search engine style query for our stories!
	char searchString[1024];
	printf("nplease enter a search string:n");
	searchString[0] = 0;
	scanf("%[^n]", searchString);

	struct SSearchResults
	{
		SSearchResults(const std::string& pageName, float rankingEstimated, float rankingActual)
			: m_pageName(pageName)
			, m_rankingEstimated(rankingEstimated)
			, m_rankingActual(rankingActual)
		{ }

		bool operator < (const SSearchResults& other) const
		{
			return m_rankingEstimated > other.m_rankingEstimated;
		}

		std::string m_pageName;
		float       m_rankingEstimated;
		float       m_rankingActual;
	};
	std::vector<SSearchResults> results;

	// perform our search and gather our results!
	std::unique_ptr<THistogramEstimate> searchEstimate = std::make_unique<THistogramEstimate>();
	THistogramActual searchActual;
	PopulateHistogram(*searchEstimate, searchString, c_conservativeUpdate);
	PopulateHistogram(searchActual, searchString);
	results.push_back(
		SSearchResults(
			"The Dino Doc",
			HistogramDotProduct(*TheDinoDocEstimate, *searchEstimate, c_normalizeDotProducts),
			HistogramDotProduct(TheDinoDocActual, searchActual, c_normalizeDotProducts)
		)
	);
	results.push_back(
		SSearchResults(
			"The Robot",
			HistogramDotProduct(*TheRobotEstimate, *searchEstimate, c_normalizeDotProducts),
			HistogramDotProduct(TheRobotActual, searchActual, c_normalizeDotProducts)
		)
	);
	results.push_back(
		SSearchResults(
			"The Internet",
			HistogramDotProduct(*TheInternetEstimate, *searchEstimate, c_normalizeDotProducts),
			HistogramDotProduct(TheInternetActual, searchActual, c_normalizeDotProducts)
		)
	);
	std::sort(results.begin(), results.end());

	// show the search results
	printf("nSearch results sorted by estimated relevance:n");
	std::for_each(results.begin(), results.end(), [](const SSearchResults& result) {
		printf("  "%s" : %0.4f (actual %0.4f) Error: %sn",
			result.m_pageName.c_str(),
			result.m_rankingEstimated,
			result.m_rankingActual,
			CalculateError(result.m_rankingEstimated, result.m_rankingActual)
		);
	});

	// show counts of search terms in each story (estimated and actual)
	printf("nEstimated counts of search terms in each story:n");
	std::for_each(searchActual.cbegin(), searchActual.cend(), [&] (const std::pair& v)
	{
		// show key
		printf(""%s"n", v.first.c_str());

		// the dino doc
		TCountType estimate = TheDinoDocEstimate->GetCount(v.first.c_str());
		TCountType actual = 0;
		auto it = TheDinoDocActual.find(v.first.c_str());
		if (it != TheDinoDocActual.end())
			actual = it->second;
		printf("  "The Dino Doc" %u (actual %u) Error: %sn", estimate, actual, CalculateError(estimate, actual));

		// the robot
		estimate = TheRobotEstimate->GetCount(v.first.c_str());
		actual = 0;
		it = TheRobotActual.find(v.first.c_str());
		if (it != TheRobotActual.end())
			actual = it->second;
		printf("  "The Robot"    %u (actual %u) Error: %sn", estimate, actual, CalculateError(estimate, actual));

		// the internet
		estimate = TheInternetEstimate->GetCount(v.first.c_str());
		actual = 0;
		it = TheInternetActual.find(v.first.c_str());
		if (it != TheInternetActual.end())
			actual = it->second;
		printf("  "The Internet" %u (actual %u) Error: %sn", estimate, actual, CalculateError(estimate, actual));
	});

	// show memory use
	printf("nThe above used %u buckets and %u hashes with %u bytes per countn",
		THistogramEstimate::c_numBuckets, THistogramEstimate::c_numHashes, sizeof(THistogramEstimate::TCountType));
	printf("Totaling %u bytes of storage for each histogramnn",
		THistogramEstimate::c_numBuckets * THistogramEstimate::c_numHashes * sizeof(THistogramEstimate::TCountType));
	
	// show a probabilistic suggestion
	float error = 0.1f;
	float probability = 0.01f;
	printf("You should use %u buckets and %u hashes for...n", CMSIdealNumBuckets(error), CMSIdealNumHashes(probability));
	printf("true count <= estimated count <= true count + %0.2f * Items ProcessednWith probability %0.2f%%n", error, (1.0f - probability)*100.0f);
	
	WaitForEnter();
	return 0;
}

Links

If you use this in production code, you should probably use a better quality hash.

The rabbit hole on this stuff goes deeper, so if you want to know more, check out these links!

Wikipedia: Count Min Sketch
Count Min Sketch Full Paper
Count Min Sketch AT&T Research Paper
Another CMS paper
And another, with some more info like range query details

Next up I’ll be writing about hyperloglog, which does the same thing as KMV (K-Minimum Values) but is better at it!

Writing a Basic Search Engine AKA Calculating Similarity of Histograms with Dot Product

I came across this technique while doing some research for the next post and found it so interesting that it seemed to warrant its own post.

Histograms and Multisets

Firstly, “histogram” is just a fancy word for a list of items, where each item has an associated count.

If I were to make a histogram of the contents of my office, I would end up with something like:

  • Books = 20
  • Phone = 1
  • Headphones = 2
  • Sombrero = 1 (thanks to the roaming tequila cart, but that’s another story…)
  • VideoGames = 15
  • (and so on)

Another term for a histogram is multiset, so if you see that word, just think of it as being the same thing.

Quick Dot Product Refresher

Just to make sure we are on the same page, using dot product to get the angle between two vectors is as follows:

cos(theta) = (A * B) / (||A||*||B||)

Or in coder-eese, if A and B are vectors of any dimension:

cosTheta = dot(A,B) / (length(A)*length(B))

To actually do the “dot” portion, you just multiply the X’s by the X’s, the Y’s by the Y’s, the Z’s by the Z’s etc, and add them all up to get a single scalar value. For a 3d vector it would look like this:

dot(A,B) = A.x*B.x + A.y*B.y + A.z*B.z

The value from the formulas above will tell you how similar the directions of the two vectors are.

If the value is 1, that means they are pointing the exact same way. If the value is 0, that means they are perpendicular. If the value is -1, that means they are pointing the exact opposite way.

Note that you can forgo the division by lengths in that formula, and just look at whether the result is positive, negative, or zero, if that’s enough information for your needs. We’ll be using the full on normalized vector version above in this post today though.
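
If you'd rather see that as code, here's a quick sketch of the 3d case (the function name and layout here are just mine for illustration):

#include <math.h>

// cosine similarity of two 3d vectors, per the formulas above
float CosineSimilarity (const float A[3], const float B[3])
{
    float dotProduct = A[0]*B[0] + A[1]*B[1] + A[2]*B[2];
    float lengthA = sqrtf(A[0]*A[0] + A[1]*A[1] + A[2]*A[2]);
    float lengthB = sqrtf(B[0]*B[0] + B[1]*B[1] + B[2]*B[2]);
    return dotProduct / (lengthA * lengthB);
}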

For a deeper refresher on dot product check out this link:
Wikipedia: Dot Product

Histogram Dot Products

Believe it or not, if you treat the counts in a histogram as an N dimensional vector – where N is the number of categories in the histogram – you can use the dot product to gauge similarity between the contents of two histograms by using it to get the angular difference between the vectors!

In the normal usage case, histograms have counts that are >= 0, which means that two histogram vectors can only be up to 90 degrees apart. As a result, the dot product of these normalized vectors is going to be from 0 to 1, where 0 means they have nothing in common, and 1 means they are completely the same.

This is reminiscent of the Jaccard Index mentioned in previous posts, but it is not the same thing. In fact, this value isn’t even linear (although maybe putting it through acos and dividing by pi/2 may make it suitably linear!), as it represents the cosine of an angle, not a percentage of similarity. It’s still useful though. If you have histogram A and are trying to see whether histogram B or C is a closer match to A, you can calculate this value for the A/B pair and the A/C pair. The one with the higher value is the more similar pairing.
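
If you wanted to try that acos idea, it might look something like the sketch below. This is just the parenthetical above turned into code, not something I've verified is useful:

#include <math.h>

// convert the cosine similarity back into an angle and rescale it so that
// 0 means pointing the same way and 1 means perpendicular
float AngularDistance (float cosineSimilarity)
{
    const float c_halfPi = 3.14159265f / 2.0f;
    return acosf(cosineSimilarity) / c_halfPi;
}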

Another thing to note about this technique is that order doesn’t matter. If you are trying to compare two multisets where order matters, you are going to have to find a different algorithm or try to pull some shenanigans – such as perhaps weighting the items in the multiset based on the order they were in.

Examples

Let’s say we have two bags of fruit and we want to know how similar the bags are. The bags in this case represent histograms / multisets.

In the first bag, we have 3 apples. In the second bag, we have 2 oranges.


If we have a 2 dimensional vector where the first component is apples and the second component is oranges, we can represent our bags of fruit with these vectors:

Bag1 = [3, 0]
Bag2 = [0, 2]

Now, let’s dot product the vectors:

Dot Product = Bag1.Apples * Bag2.Apples + Bag1.Oranges * Bag2.Oranges
= 3*0 + 0*2
= 0

We would normally divide our answer by the length of the Bag1 vector multiplied by the length of the Bag2 vector, but since the dot product is zero, we know that dividing won’t change the value.

From this, we can see that Bag1 and Bag2 have nothing in common!

What if we added an apple to bag 2 though? Let’s do that and try the process again.


Bag1 = [3,0]
Bag2 = [1,2]
Dot Product = Bag1.Apples * Bag2.Apples + Bag1.Oranges * Bag2.Oranges
= 3*1 + 0*2
= 3

Next up, we need to divide the answer by the length of our Bag1 vector, which is 3, multiplied by the length of our Bag2 vector, which is the square root of 5.

Cosine Angle (Similarity Value) = 3 / (3 * sqrt(5))
= ~0.45

Let’s add an orange to bag 1. That ought to make it more similar to bag 2, so it should increase our similarity value for the two bags. Let’s find out if that’s true.


Bag1 = [3,1]
Bag2 = [1,2]
Dot Product = Bag1.Apples * Bag2.Apples + Bag1.Oranges * Bag2.Oranges
= 3*1 + 1*2
= 5

Next, we need to divide that answer by the length of the bag 1 vector which is the square root of 10, multiplied by the length of the bag 2 vector which is the square root of 5.

Cosine Angle (Similarity Value) = 5 / (sqrt(10) * sqrt(5))
= ~0.71

So yep, adding an orange to bag 1 made the two bags more similar!

Example Code

Here’s a piece of code that uses the dot product technique to gauge the similarity of text. It uses this to compare larger blocks of text, and also to let you search those blocks of text like a search engine!

#include <stdio.h>
#include <string>
#include <unordered_map>
#include <set>
#include <vector>
#include <algorithm>
#include <math.h>
 
typedef std::unordered_map<std::string, unsigned int> THistogram;
 
// These one paragraph stories are from http://birdisabsurd.blogspot.com/p/one-paragraph-stories.html
 
// The Dino Doc : http://birdisabsurd.blogspot.com/2011/11/dino-doc.html
const char *g_storyA =
"The Dino Doc:n"
"Everything had gone according to plan, up 'til this moment. His design team "
"had done their job flawlessly, and the machine, still thrumming behind him, "
"a thing of another age, was settled on a bed of prehistoric moss. They'd "
"done it. But now, beyond the protection of the pod and facing an enormous "
"tyrannosaurus rex with dripping jaws, Professor Cho reflected that, had he "
"known of the dinosaur's presence, he wouldn't have left the Chronoculator - "
"and he certainly wouldn't have chosen "Stayin' Alive", by The Beegees, as "
"his dying soundtrack. Curse his MP3 player";
 
// The Robot: http://birdisabsurd.blogspot.com/2011/12/robot.html
const char *g_storyB =
"The Robot:n"
"The engineer watched his robot working, admiring its sense of purpose.It knew "
"what it was, and what it had to do.It was designed to lift crates at one end "
"of the warehouse and take them to the opposite end.It would always do this, "
"never once complaining about its place in the world.It would never have to "
"agonize over its identity, never spend empty nights wondering if it had been "
"justified dropping a promising and soul - fulfilling music career just to "
"collect a bigger paycheck.And, watching his robot, the engineer decided that "
"the next big revolution in the robotics industry would be programming "
"automatons with a capacity for self - doubt.The engineer needed some company.";
 
// The Internet: http://birdisabsurd.blogspot.com/2011/11/internet.html
const char *g_storyC =
"The Internet:n"
"One day, Sandra Krewsky lost her mind.Nobody now knows why, but it happened - "
"and when it did, Sandra decided to look at every page on the Internet, "
"insisting that she wouldn't eat, drink, sleep or even use the washroom until "
"the job was done. Traps set in her house stalled worried family members, and by "
"the time they trounced the alligator guarding her bedroom door - it managed to "
"snap her neighbour's finger clean off before going down - Sandra was already "
"lostโ€ฆ though the look of despair carved in her waxen features, and the cat "
"video running repeat on her flickering computer screen, told them everything "
"they needed to know.She'd seen too much. She'd learned that the Internet "
"played for keeps.";
 
void WaitForEnter ()
{
    printf("nPress Enter to quit");
    fflush(stdin);
    getchar();
}
 
template <typename L>
void ForEachWord (const std::string &source, const L& lambda)
{
    size_t prev = 0;
    size_t next = 0;
 
    while ((next = source.find_first_of(" ,.-\":\n", prev)) != std::string::npos)
    {
        if ((next - prev != 0))
        {
            std::string word = source.substr(prev, next - prev);
            std::transform(word.begin(), word.end(), word.begin(), ::tolower);
            lambda(word);
        }
        prev = next + 1;
    }
 
    if (prev < source.size())
    {
        std::string word = source.substr(prev);
        std::transform(word.begin(), word.end(), word.begin(), ::tolower);
        lambda(word);
    }
}
 
void PopulateHistogram (THistogram &histogram, const char *text)
{
    ForEachWord(text, [&histogram](const std::string &word) {
        histogram[word] ++;
    });
}
 
float HistogramDotProduct (THistogram &A, THistogram &B)
{
    // Get all the unique keys from both histograms
    std::set<std::string> keysUnion;
    std::for_each(A.cbegin(), A.cend(), [&keysUnion](const std::pair<std::string, unsigned int>& v)
    {
        keysUnion.insert(v.first);
    });
    std::for_each(B.cbegin(), B.cend(), [&keysUnion](const std::pair<std::string, unsigned int>& v)
    {
        keysUnion.insert(v.first);
    });
 
    // calculate and return the normalized dot product!
    float dotProduct = 0.0f;
    float lengthA = 0.0f;
    float lengthB = 0.0f;
    std::for_each(keysUnion.cbegin(), keysUnion.cend(),
        [&A, &B, &dotProduct, &lengthA, &lengthB]
        (const std::string& key)
        {
            // if the key isn't found in either histogram, ignore it, since it will be 0 * x which is
            // always 0 anyhow.  Make sure and keep track of vector length though!
            auto a = A.find(key);
            auto b = B.find(key);
 
            if (a != A.end())
                lengthA += (float)(*a).second * (float)(*a).second;
 
            if (b != B.end())
                lengthB += (float)(*b).second * (float)(*b).second;
 
            if (a == A.end())
                return;
 
            if (b == B.end())
                return;
 
            // calculate dot product
            dotProduct += ((float)(*a).second * (float)(*b).second);        
        }
    );
 
    // normalize if we can
    if (lengthA * lengthB <= 0.0f)
        return 0.0f;
 
    lengthA = sqrt(lengthA);
    lengthB = sqrt(lengthB);
    return dotProduct / (lengthA * lengthB);
}
 
int main (int argc, char **argv)
{
    // populate our histograms
    THistogram TheDinoDoc;
    THistogram TheRobot;
    THistogram TheInternet;
    PopulateHistogram(TheDinoDoc, g_storyA);
    PopulateHistogram(TheRobot, g_storyB);
    PopulateHistogram(TheInternet, g_storyC);
 
    // report whether "The Robot" or "The Internet" is a closer match for "The Dino Doc"
    float dpAB = HistogramDotProduct(TheDinoDoc, TheRobot);
    float dpAC = HistogramDotProduct(TheDinoDoc, TheInternet);
    if (dpAB > dpAC)
        printf("\"The Dino Doc\" and \"The Robot\" are more similar\n");
    else
        printf("\"The Dino Doc\" and \"The Internet\" are more similar\n");
 
    // let the user do a search engine style query for our stories!
    char searchString[1024];
    printf("nplease enter a search string:n");
    scanf("%[^n]", searchString);
 
    // preform our search and gather our results!
    THistogram search;
    PopulateHistogram(search, searchString);
 
    struct SSearchResults
    {
        SSearchResults (const std::string& pageName, float ranking, const char* pageContent)
            : m_pageName(pageName)
            , m_ranking(ranking)
            , m_pageContent(pageContent)
        { }
 
        bool operator < (const SSearchResults& other) const
        {
            return m_ranking > other.m_ranking;
        }
 
        std::string m_pageName;
        float       m_ranking;
        const char* m_pageContent;
    };
    std::vector<SSearchResults> results;
 
    results.push_back(SSearchResults("The Dino Doc", HistogramDotProduct(TheDinoDoc, search), g_storyA));
    results.push_back(SSearchResults("The Robot", HistogramDotProduct(TheRobot, search), g_storyB));
    results.push_back(SSearchResults("The Internet", HistogramDotProduct(TheInternet, search), g_storyC));
    std::sort(results.begin(), results.end());
 
    // show the search results
    printf("nSearch results sorted by relevance:n");
    std::for_each(results.begin(), results.end(), [] (const SSearchResults& result) {
        printf("  "%s" : %0.4fn", result.m_pageName.c_str(), result.m_ranking);
    });
 
    // show the most relevant result
    printf("n-----Best Result-----n%sn",results[0].m_pageContent);
 
    WaitForEnter();
    return 0;
}

Here is an example run of this program:

Improvements

Our “search engine” code does work but is pretty limited. It doesn’t know that “cat” and “Kitty” are basically the same, and also doesn’t know that the words “and”, “the” & “it” are not important.

I’m definitely not an expert in these matters, but to solve the first problem you might try making a “thesaurus” lookup table, where maybe whenever it sees “kitten”, “kitty”, “feline”, it translates it to “cat” before putting it into the histogram. That would make it smarter at realizing all those words mean the same thing.

For the second problem, there is a strange technique called tf-idf that seems to work pretty well, although people haven’t really been able to rigorously figure out why it works so well. Check out the link to it at the bottom of this post.

Besides just using this technique for text comparisons, I’ve read mentions that this histogram dot product technique is used in places like machine vision and object classification.

It is a pretty neat technique 😛

Links

Wikipedia: Cosine Similarity – The common name for this technique
Easy Bar Chart Creator – Used to make the example graphs
Wikipedia: tf-idf – Used to automagically figure out which words in a document are important and which aren’t

Estimating Set Membership With a Bloom Filter

Have you ever found yourself in a situation where you needed to keep track of whether or not you’ve seen an item, but the number of items you have to keep track of is either gigantic or completely unbounded?

This comes up a lot in “massive data” situations (like internet search engines), but it can also come up in games sometimes – like maybe having a monster in an MMO game remember which players have tried to attack it before so it can be hostile when it sees them again.

One solution to the game example could be to keep a list of all players the monster had been attacked by, but that will get huge and eat up a lot of memory and take time to search the list for a specific player.

You might instead decide you only want to keep the last 20 players the monster had been attacked by, but that still uses up a good chunk of memory if you have a lot of different monsters tracking this information for themselves, and 20 may be so low, that the monster will forget people who commonly attack it by the time it sees them again.

Enter The Bloom Filter

There is actually an interesting solution to this problem, using a probabilistic data structure called a bloom filter – and before you ask, no, it has nothing to do with graphics. It was invented by a guy named Burton Howard Bloom in 1970.

A bloom filter basically has M number of bits as storage and K number of hashes.

To illustrate an example, let’s say that we have 10 bits and only 1 hash.

When we insert an item, what we want to do is hash that item and then modulus that hash value against 10 to get a pseudo-random (but deterministic) value 0-9. That value is the index of the bit we want to set to true. After we set that bit to true, we can consider that item inserted into the set.

When we want to ask if an item exists in a given bloom filter, what we do is again hash the item and modulus it against 10 to get the 0-9 value again. If that bit is set to false, we can say that item is NOT in the set with certainty, but if it’s true we can only say it MIGHT be in the set.

If the bit is true, we can’t say that for sure it’s part of the set, because something else may have hashed to the same bit and set it, so we have to consider it a maybe, not a definite yes. In fact, with only 10 bits and 1 hash function, with a good hash function, there is a 10% chance that any item we insert will result in the same bit being set.
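
Here's a minimal sketch of that 10 bit, 1 hash case (the names here are mine, just for illustration):

#include <string>
#include <functional>

static bool g_bits[10] = { false }; // our 10 bits of storage, all starting false

void Insert (const std::string& item)
{
    size_t bitIndex = std::hash<std::string>()(item) % 10;
    g_bits[bitIndex] = true;
}

// false means the item is definitely NOT in the set.  true only means MAYBE!
bool MightContain (const std::string& item)
{
    size_t bitIndex = std::hash<std::string>()(item) % 10;
    return g_bits[bitIndex];
}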

To be a little more sure of our “maybe” being a yes, and not a false positive, we can do two things… we can increase the number of bits we use for the set (which will reduce collisions), or we can increase the number of hashes we do (although there comes a point where adding more hashes starts decreasing accuracy again).

If we add a second hash, all that means is that we will do a second hash, get a second 0-9 value, and write a one to that bit AS WELL when we insert an item. When checking if an item exists in a set, we also READ that bit to see if it’s true.

For N hashes, when inserting an item into a bloom filter, you get (up to) N different bits that you need to set to 1.

When checking if an item exists in a bloom filter, you get (up to) N different bits, all of which need to be 1 to consider the item in the set.

There are a few mathematical formulas you can use to figure out how many bits and hashes you would want to use to get a desired reliability that your maybes are in fact yeses, and also there are some formulas for figuring out the probability that your maybe is in fact a yes for any given bloom filter in a runtime state.

Depending on your needs and usage case, you may treat your “maybe” as a “yes” and move on, or you could treat a “maybe” as a time when you need to do a more expensive test (like, a disk read?). In those situations, the “no” is the valuable answer, since it means you can skip the more expensive test.

Estimating Item Count In Bloom Filters

Besides being able to test an item for membership, you can also estimate how many items are in any given bloom filter:

EstimatedCount = -(NumBits * ln(1 - BitsOn / NumBits)) / NumHashes

Where BitsOn is the count of how many bits are set to 1 and ln is natural log.
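
As code, my translation of that formula looks like this:

#include <math.h>

// estimated number of unique items inserted into a bloom filter
float EstimatedCount (float bitsOn, float numBits, float numHashes)
{
    return -(numBits * logf(1.0f - bitsOn / numBits)) / numHashes;
}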

Set Operations On Bloom Filters

If you have two bloom filters, you can do some interesting set operations on them.

Union

One operation you can do with two bloom filters is union them, which means that if you have a bloom filter A and bloom filter B, you end up with a third bloom filter C that contains all the unique items from both A and B.

Besides being able to test this third bloom filter to see if it contains specific items, you can also ask it for an estimated count of objects it contains, which is useful for helping figure out how similar the sets are.

Essentially, if A estimates having 50 items and B estimates having 50 items, and their union, C, estimates having 51 items, that means that the items in A and B were almost all the same (probably).

To union two bloom filters, you just do a bitwise OR on every bit in A and B to get the bits for C. Simple and fast to calculate.

Intersection

Another operation you can do with two bloom filters is to calculate their intersection. The intersection of two sets is just the items that the two sets have in common.

Again, besides being able to test this third bloom filter for membership of items, you can also use it to get an estimated count of objects to help figure out how similar the sets are.

Similarly to the union, you can reason that if the intersection count is small compared to the counts of the individual bloom filters that went into it, the sets were probably not very similar.

There are two ways you can calculate the intersection of two bloom filters.

The first way is to do a bitwise AND on every bit in A and B to get the bits for C. Again, super simple and fast to calculate.

The other way just involves some simple math:

Count(Intersection(A,B)) = (Count(A) + Count(B)) - Count(Union(A,B))

Whichever method is better depends on your needs and your usage case.

Jaccard Index

Just like I talked about in the KMV post two posts ago, you can also calculate an estimated Jaccard Index for two bloom filters, which will give you a value between 0 and 1 that tells you how similar two sets are.

If the Jaccard index is 1, that means the sets are the same and contain all the same items. If the Jaccard index is 0, that means the sets are completely different. If the Jaccard index is 0.5 that means that they have half of their items in common.

To calculate the estimated Jaccard Index, you just use this simple formula:

Jaccard Index = Count(Intersection(A,B)) / Count(Union(A,B))

Estimating False Positive Rate

The more items you insert into a bloom filter, the higher the false positive rate gets, until it reaches 100%, which means all the bits are set, and if you ask whether it contains an item, it will never say no.

To combat this, you may want to calculate your error rate at runtime and maybe spit out a warning if the error rate starts getting too high, so that you know you need to adjust the number of bits or number of hashes, or at very least you can alert the user that the reliability of the answers has degraded.

Here is the formula to calculate the false positive rate of a bloom filter at runtime:

ErrorRate = (1 - e^(-NumHashes * NumItems / NumBits))^NumHashes

NumItems is the number of unique items that have been inserted into the bloom filter. If you know that exact value somehow you can use that real value, but in the more likely case, you won’t know the exact value, so you can use the estimated unique item count formula described above to get an estimated unique count.
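
In code form, that formula might look like this:

#include <math.h>

// expected false positive rate of a bloom filter holding numItems items
float ErrorRate (float numHashes, float numItems, float numBits)
{
    return powf(1.0f - expf(-numHashes * numItems / numBits), numHashes);
}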

Managing the Error Rate of False Positives

As I mentioned above, there are formulas to figure out how many bits and how many hashes you need, to store a certain number of items with a specific maximum error rate.

You can work this out in advance by figuring out about how many items you expect to see.

First, you calculate the ideal bit count:

NumBits = -(NumItems * ln(DesiredFalsePositiveProbability)) / (ln(2)^2)

Where NumItems is the number of items you expect to put into the bloom filter, and DesiredFalsePositiveProbability is the error rate you want when the bloom filter has the expected number of items in it. The error rate will be lower until the item count reaches NumItems.

Next, you calculate the ideal number of hashes:

NumHashes = NumBits / NumItems * ln(2)

Then, you just create your bloom filter, using the specified number of bits and hashes.
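
To make that concrete, here's a quick worked example with numbers I picked myself. Say you expect 100 items and want a 1% false positive rate:

NumBits = -(100 * ln(0.01)) / (ln(2)^2) = 460.52 / 0.48 = ~959 bits (about 120 bytes)
NumHashes = (959 / 100) * ln(2) = ~6.6, so 7 hashes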

Example Code

Here is some example bloom filter c++ code with all the bells and whistles. Instead of using multiple hashes, I just grabbed some random numbers and xor'd them against the hash of each item to make more deterministic but pseudo-random numbers (that's what a hash does too, after all). If you want to use a bloom filter in a more serious usage case, you may want to actually use multiple hashes, and you probably want to use a better hashing algorithm than std::hash.

#include <stdio.h>
#include <stdint.h>
#include <array>
#include <string>
#include <set>
#include <algorithm>
#include <functional>
#include <cmath>
 
// If you get a compile error here, remove the "class" from this enum definition.
// It just means your compiler doesn't support enum classes yet.
enum class EHasItem
{
    e_no,
    e_maybe
};
 
// The CBloomFilter class
template <typename T, unsigned int NUMBYTES, unsigned int NUMHASHES, typename HASHER = std::hash<T>>
class CBloomFilter
{
public:
    // constants
    static const unsigned int c_numHashes = NUMHASHES;
    static const unsigned int c_numBits = NUMBYTES*8;
    static const unsigned int c_numBytes = NUMBYTES;
 
    // friends
    template <typename T2, unsigned int NUMBYTES2, unsigned int NUMHASHES2, typename HASHER2>
    friend float UnionCountEstimate (const CBloomFilter<T2, NUMBYTES2, NUMHASHES2, HASHER2>& left, const CBloomFilter<T2, NUMBYTES2, NUMHASHES2, HASHER2>& right);
    template <typename T2, unsigned int NUMBYTES2, unsigned int NUMHASHES2, typename HASHER2>
    friend float IntersectionCountEstimate (const CBloomFilter<T2, NUMBYTES2, NUMHASHES2, HASHER2>& left, const CBloomFilter<T2, NUMBYTES2, NUMHASHES2, HASHER2>& right);
 
    // constructor
    CBloomFilter (const std::array<size_t, NUMHASHES>& randomSalt)
        : m_storage()              // initialize bits to zero
        , m_randomSalt(randomSalt) // store off the random salt to use for the hashes
    { }
 
    // interface
    void AddItem (const T& item)
    {
        const size_t rawItemHash = HASHER()(item);
        for (unsigned int index = 0; index < c_numHashes; ++index)
        {
            const size_t bitIndex = (rawItemHash ^ m_randomSalt[index])%c_numBits;
            SetBitOn(bitIndex);
        }
    }
 
    EHasItem HasItem (const T& item) const
    {
        const size_t rawItemHash = HASHER()(item);
        for (unsigned int index = 0; index < c_numHashes; ++index)
        {
            const size_t bitIndex = (rawItemHash ^ m_randomSalt[index])%c_numBits;
            if (!IsBitOn(bitIndex))
                return EHasItem::e_no;
        }
        return EHasItem::e_maybe;
    }
 
    // probabilistic interface
    float CountEstimate () const
    {
        // estimates how many items are actually stored in this set, based on how many
        // bits are set to true, how many bits there are total, and how many hashes
        // are done for each item.
        return -((float)c_numBits * std::log(1.0f - ((float)CountBitsOn() / (float)c_numBits))) / (float)c_numHashes;
    }
 
    float FalsePositiveProbability (size_t numItems = -1) const
    {
        // calculates the expected error.  Since this relies on how many items are in
        // the set, you can either pass in the number of items, or use the default
        // argument value, which means to use the estimated item count
        float numItemsf = numItems == -1 ? CountEstimate() : (float)numItems;
        return pow(1.0f - std::exp(-(float)c_numHashes * numItemsf / (float)c_numBits),(float)c_numHashes);
    }
     
private:
 
    inline void SetBitOn (size_t bitIndex)
    {
        const size_t byteIndex = bitIndex / 8;
        const uint8_t byteValue = 1 << (bitIndex%8);
        m_storage[byteIndex] |= byteValue;
    }
 
    inline bool IsBitOn (size_t bitIndex) const
    {
        const size_t byteIndex = bitIndex / 8;
        const uint8_t byteValue = 1 << (bitIndex%8);
        return ((m_storage[byteIndex] & byteValue) != 0);
    }
 
    size_t CountBitsOn () const
    {
        // likely a more efficient way to do this but ::shrug::
        size_t count = 0;
        for (size_t index = 0; index < c_numBits; ++index)
        {
            if (IsBitOn(index))
                ++count;
        }
        return count;
    }
     
    // storage of bits
    std::array<uint8_t, NUMBYTES> m_storage;
 
    // Storage of random salt values
    // It could be neat to use constexpr and __TIME__ to make compile time random numbers.
    // That isn't available til like c++17 or something though sadly.
    const std::array<size_t, NUMHASHES>& m_randomSalt;
};
 
// helper functions
template <typename T, unsigned int NUMBYTES, unsigned int NUMHASHES, typename HASHER>
float UnionCountEstimate (const CBloomFilter<T, NUMBYTES, NUMHASHES, HASHER>& left, const CBloomFilter<T, NUMBYTES, NUMHASHES, HASHER>& right)
{
    // returns an estimated count of the unique items if both lists were combined
    // example: (1,2,3) union (2,3,4) = (1,2,3,4) which has a union count of 4
    CBloomFilter<T, NUMBYTES, NUMHASHES, HASHER> temp(left.m_randomSalt);
    for (unsigned int index = 0; index < left.c_numBytes; ++index)
        temp.m_storage[index] = left.m_storage[index] | right.m_storage[index];
     
    return temp.CountEstimate();
}
 
template <typename T, unsigned int NUMBYTES, unsigned int NUMHASHES, typename HASHER>
float IntersectionCountEstimate (const CBloomFilter<T, NUMBYTES, NUMHASHES, HASHER>& left, const CBloomFilter<T, NUMBYTES, NUMHASHES, HASHER>& right)
{
    // returns an estimated count of the number of unique items that are shared in both sets
    // example: (1,2,3) intersection (2,3,4) = (2,3) which has an intersection count of 2
    CBloomFilter<T, NUMBYTES, NUMHASHES, HASHER> temp(left.m_randomSalt);
    for (unsigned int index = 0; index < left.c_numBytes; ++index)
        temp.m_storage[index] = left.m_storage[index] & right.m_storage[index];
     
    return temp.CountEstimate();
}
 
float IdealBitCount (unsigned int numItemsInserted, float desiredFalsePositiveProbability)
{
    // given how many items you plan to insert, and a target false positive probability at that count, this returns how many bits
    // of flags you should use.
    return (float)-(((float)numItemsInserted*log(desiredFalsePositiveProbability)) / (log(2)*log(2)));
}
 
float IdealHashCount (unsigned int numBits, unsigned int numItemsInserted)
{
    // given how many bits you are using for storage, and how many items you plan to insert, this is the optimal number of hashes to use
    return ((float)numBits / (float)numItemsInserted) * (float)log(2.0f);
}
 
// random numbers from random.org
// https://www.random.org/cgi-bin/randbyte?nbytes=40&format=h
// in 64 bit mode, size_t is 64 bits, not 32.  The random numbers below will be all zero in the upper 32 bits!
static const std::array<size_t, 10> s_randomSalt =
{
    0x6ff3f8ef,
    0x9b565007,
    0xea709ce4,
    0xf7d5cbc7, 
    0xcb7e38e1,
    0xd54b5323,
    0xbf679080,
    0x7fb78dee,
    0x540c9e8a,
    0x89369800
};
 
// data for adding and testing in our list
static const char *s_dataList1[] =
{
    "hello!",
    "blah!",
    "moo",
    nullptr
};
 
static const char *s_dataList2[] =
{
    "moo",
    "hello!",
    "mooz",
    "kitty",
    "here is a longer string just cause",
    nullptr
};
 
static const char *s_askList[] =
{
    "moo",
    "hello!",
    "unf",
    "boom",
    "kitty",
    "mooz",
    "blah!",
    nullptr
};
 
// driver program
void WaitForEnter ()
{
    printf("nPress Enter to quit");
    fflush(stdin);
    getchar();
}
 
int main (void)
{
    CBloomFilter<std::string, 8, 10> set1(s_randomSalt);
    CBloomFilter<std::string, 8, 10> set2(s_randomSalt);
	std::set<std::string> actualSet1;
	std::set<std::string> actualSet2;
 
    printf("Creating 2 bloom filter sets with %u bytes of flags (%u bits), and %u hashes.nn", set1.c_numBytes, set1.c_numBits, set1.c_numHashes);
 
	// create data set 1
    unsigned int index = 0;
    while (s_dataList1[index] != nullptr)
    {
        printf("Adding to set 1: "%s"n", s_dataList1[index]);
        set1.AddItem(s_dataList1[index]);
		actualSet1.insert(s_dataList1[index]);
        index++;
    }
 
	// create data set 2
    printf("n");
    index = 0;
    while (s_dataList2[index] != nullptr)
    {
        printf("Adding to set 2: "%s"n", s_dataList2[index]);
        set2.AddItem(s_dataList2[index]);
		actualSet2.insert(s_dataList2[index]);
        index++;
    }
 
	// query each set to see if they think that they contain various items
    printf("n");
    index = 0;
    while (s_askList[index] != nullptr)
    {
        printf("Exists: "%s"? %s & %s (actually %s & %s)n",
            s_askList[index],
            set1.HasItem(s_askList[index]) == EHasItem::e_maybe ? "maybe" : "no",
            set2.HasItem(s_askList[index]) == EHasItem::e_maybe ? "maybe" : "no",
			actualSet1.find(s_askList[index]) != actualSet1.end() ? "yes" : "no",
			actualSet2.find(s_askList[index]) != actualSet2.end() ? "yes" : "no");
        index++;
    }

	// show false positive rates
    printf ("nFalse postive probability = %0.2f%% & %0.2f%%n", set1.FalsePositiveProbability()*100.0f, set2.FalsePositiveProbability()*100.0f);
    printf ("False postive probability at 10 items = %0.2f%%n", set1.FalsePositiveProbability(10)*100.0f);
    printf ("False postive probability at 25 items = %0.2f%%n", set1.FalsePositiveProbability(25)*100.0f);
    printf ("False postive probability at 50 items = %0.2f%%n", set1.FalsePositiveProbability(50)*100.0f);
    printf ("False postive probability at 100 items = %0.2f%%n", set1.FalsePositiveProbability(100)*100.0f);

	// show ideal bit counts and hashes.
    const unsigned int itemsInserted = 10;
    const float desiredFalsePositiveProbability = 0.05f;
    const float idealBitCount = IdealBitCount(itemsInserted, desiredFalsePositiveProbability);
    const float idealHashCount = IdealHashCount((unsigned int)idealBitCount, itemsInserted);
    printf("nFor %u items inserted and a desired false probability of %0.2f%%nYou should use %0.2f bits of storage and %0.2f hashesn",
        itemsInserted, desiredFalsePositiveProbability*100.0f, idealBitCount, idealHashCount);

	// get the actual union
	std::set actualUnion;
	std::for_each(actualSet1.begin(), actualSet1.end(), [&actualUnion] (const std::string& s) {
		actualUnion.insert(s);
	});
	std::for_each(actualSet2.begin(), actualSet2.end(), [&actualUnion] (const std::string& s) {
		actualUnion.insert(s);
	});

	// get the actual intersection
	std::set actualIntersection;
	std::for_each(actualSet1.begin(), actualSet1.end(), [&actualIntersection,&actualSet2] (const std::string& s) {
		if (actualSet2.find(s) != actualSet2.end())
			actualIntersection.insert(s);
	});

	// calculate actual Jaccard index
	float actualJaccardIndex = (float)actualIntersection.size() / (float)actualUnion.size();

	// show the estimated and actual counts, and error of estimations
	printf("nSet1: %0.2f estimated, %u actual.  Error: %0.2f%%n",
		set1.CountEstimate(),
		actualSet1.size(),
		100.0f * ((float)set1.CountEstimate() - (float)actualSet1.size()) / (float)actualSet1.size()
	);
	printf("Set2: %0.2f estimated, %u actual.  Error: %0.2f%%n",
		set2.CountEstimate(),
		actualSet2.size(),
		100.0f * ((float)set2.CountEstimate() - (float)actualSet2.size()) / (float)actualSet2.size()
	);

	float estimatedUnion = UnionCountEstimate(set1, set2);
	float estimatedIntersection = IntersectionCountEstimate(set1, set2);
	float estimatedJaccardIndex = estimatedIntersection / estimatedUnion;
	printf("Union: %0.2f estimated, %u actual.  Error: %0.2f%%n",
		estimatedUnion,
		actualUnion.size(),
		100.0f * (estimatedUnion - (float)actualUnion.size()) / (float)actualUnion.size()
	);
	printf("Intersection: %0.2f estimated, %u actual.  Error: %0.2f%%n",
		estimatedIntersection,
		actualIntersection.size(),
		100.0f * (estimatedIntersection - (float)actualIntersection.size()) / (float)actualIntersection.size()
	);
	printf("Jaccard Index: %0.2f estimated, %0.2f actual.  Error: %0.2f%%n",
		estimatedJaccardIndex,
		actualJaccardIndex,
		100.0f * (estimatedJaccardIndex - actualJaccardIndex) / actualJaccardIndex
	);
 
    WaitForEnter();
    return 0;
}

And here is the output of that program:

In the output, you can see that all the existence checks were correct. All the no’s were actually no’s like they should be, but also, all the maybe’s were actually present, so there were no false positives.

The estimated counts were a little off but were fairly close. The first list was estimated at 2.4 items, when it actually had 3. The second list was estimated at 4.44 items when it actually had 5 items.

It’s reporting a very low false positive rate, which falls in line with the fact that we didn’t see any false positives. The projected false positive rates at 10, 25, 50 and 100 items show us that the set doesn’t have a whole lot more capacity if we want to keep the error rate low.

The union, intersection and Jaccard index error rates were pretty low, but definitely larger than the false positive rate.

Interestingly, if you look at the part which reports the ideal bit and hash count for 10 items, it says that we should actually use FEWER hashes than we do, and a couple fewer bits. You can experiment by changing the number of hashes to 4 and seeing that the error rate goes down. In the example code we are actually using TOO MANY hashes for the number of items we plan on inserting, and it's hurting our false positive rate.

Interesting Idea

I was chatting with a friend Dave, who said he was using a bloom filter like structure to try and help make sure he didn’t try the same genetic algorithm permutation more than once. An issue with that is that hash collisions could thwart the ability for evolution to happen correctly by incorrectly disallowing a wanted permutation from happening just because it happened to hash to the same value as another permutation already tried. To help this situation, he just biased against permutations found in the data structure, instead of completely blocking them out. Basically, if the permutation was marked as “maybe seen” in the data structure, he’d give it some % chance that it would allow it to happen “again” anyways.
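
A sketch of that biasing idea, using the EHasItem enum from the code above (the chance value and function here are mine, not Dave's actual code):

#include <stdlib.h>

// a "no" always allows the permutation.  a "maybe" allows it some percent of the time,
// so that hash collisions can't permanently block an untried permutation.
bool ShouldAllowPermutation (EHasItem result, float allowAnywayChance)
{
    if (result == EHasItem::e_no)
        return true;
    return ((float)rand() / (float)RAND_MAX) < allowAnywayChance;
}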

Unfortunately, the idea in general turned out to be impractical. He had about 40 bits of genetic information which is about 1 trillion unique items (2^40).

For being able to store only 1 billion items – which is 1000 times smaller – with a 5% false positive rate, it would require about 750MB of storage of bits.

Dropping the requirement to a 25% false positive rate, it still requires 350MB, and at 75% it requires 70MB. Even at a 99% allowed false positive rate, it requires 2.5MB, and we are still 1000 times too small.

So, for 1 trillion permutations, the size of the bloom filter is unfortunately far too large and he ended up going a different route.

The technique of rolling a random number when you get a maybe is pretty neat though, so I wanted to mention it (:

Next Up

We’ve now talked about a probabilistic unique value counter (KMV) that can count the number of unique objects seen.

We then talked about a probabilistic set structure (Bloom Filter) that can keep track of set membership of objects.

How about being able to probabilistically store a list of non unique items?

One way to do this would be to have a count with each item seen instead of only having a bit of whether it’s present or not. When you have a count per item, this is known as a multiset, or the more familiar term: histogram.

If you change a bloom filter to have a count instead of a single bit, that’s called a counting bloom filter and completely works.

There’s a better technique for doing this though called count min sketch. Look for a post soon!

Links

Wikipedia: Bloom Filter
Interactive Bloom Filter Demo

Check this out, it’s a physical implementation of a bloom filter (WAT?!)
Wikipedia: Superimposed code

Estimating Counts of Distinct Values with KMV

A few months ago I saw what I’m about to show you for the first time, and my jaw dropped. I’m hoping to share that experience with you right now, as I share a quick intro to probabilistic algorithms, starting with KMV, otherwise known as K-Minimum Values, which is a Distinct Value (DV) Sketch, or a DV estimator.

Thanks to Ben Deane for exposure to this really interesting area of computer science.

The Setup

Let’s say that you needed a program to count the number of unique somethings that you’ve encountered. The unique somethings could be unique visitors on a web page, unique logins into a server within a given time period, unique credit card numbers, or anything else that has a way for you to get some unique identifier.

One way to do this would be to keep a list of all the things you’ve encountered before, only inserting a new item if it isn’t already in the list, and then count the items in the list when you want a count of unique items. The obvious downside here is that if the items are large and/or there are a lot of them, your list can get pretty large. In fact, it can take an unbounded amount of memory to store this list, which is no good.

A better solution may be instead of storing the entire item in the list, maybe instead you make a 32 bit hash of the item and then add that hash to a list if it isn’t already in the list, knowing that you have at most 2^32 (4,294,967,296) items in the list. It’s a bounded amount of memory, which is an improvement, but it’s still pretty large. The maximum memory requirement there is 2^32 * 4 which is 17,179,869,184 bytes or 16GB.

You may even do one step better and just decide to have 2^32 bits of storage, and when you encounter a specific 32 bit hash, use that as an index and set that specific bit to 1. You could then count the number of bits set and use that as your answer. That only takes 536,870,912 bytes, or 512MB. A pretty big improvement over 16GB but still pretty big. Also, try counting the number of bits set in a 512MB region of memory. That isn’t the fastest operation around 😛

We made some progress by moving to hashes, which put a maximum size amount of memory required, but that came at the cost of us introducing some error. Hashes can have collisions, and when that occurs in our scenarios above, we have no way of knowing that two items hashing to the same value are not the same item.

Sometimes it’s ok though that we only have an approximate answer, because getting an exact answer may require resources we just don’t have – like infinite memory to hold an infinitely large list, and then infinite computing time to count how many items there are. Also notice that the error rate is tuneable if we want to spend more memory. If we used a 64 bit hash instead of a 32 bit hash, our hash collisions would decrease, and our result would be more accurate, at the cost of more memory used.

The Basic Idea

Let’s try something different, let’s hash every item we see, but only store the lowest valued hash we encounter. We can then use that smallest hash value to estimate how many unique items we saw. That’s not immediately intuitive, so here’s the how and the why…

When we put something into a hash function, what we are doing really is getting a deterministic pseudo random number to represent the item. If we put the same item in, we’ll get the same number out every time, and if we put in a different item, we should get a different number out, even if the second item is really similar to the first item. Interestingly the numbers we get out should have no bearing on the nature of the items we put into the hash. Even if the items we put in are similar, the output should be just as different (on average) as if we put in items that were completely different.

This is one of the properties of a good hash function, that its output is (usually) evenly distributed, no matter the nature of the input. What that means is that if we put N items into a hash function, those items are on average going to be evenly spaced in the output of the hash, regardless of how similar or different they were before going into the hash function.

Using this property, the distance from zero to the smallest hash we’ve ever seen can be treated as a representative estimate of the average distance between each item that we hashed. So, to get the number of items in the hash, we convert the smallest hash into a percentage from 0 to 1 of the total hash space (for uint32, convert to float and divide by (float)2^32), and then we can use this formula:

numItems = (1 / percentage) - 1

To understand why we subtract the 1, imagine that our minimum hash value is 0.5. If that is an accurate representation of the space between values, that means that we only have 1 value, right in the middle. But, if we divide 1 by 0.5 we get 2. We have 2 REGIONS, but we have only 1 item in the list, so we need to subtract 1.

As another example imagine that our minimum hash is 0.3333 and that it is an accurate representation of the space between values. If we divide 1 by 0.3333, we get 3. We do have 3 regions if we cut the whole space into 3 parts, but we only have 2 items (we made 2 cuts to make 3 regions).
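
In code, turning the smallest 32 bit hash seen into an estimate might look like this (note that a smallest hash of 0 would divide by zero, which relates to the error problem discussed next):

#include <stdint.h>

float EstimateUniqueCount (uint32_t smallestHashSeen)
{
    // what percentage of the total hash space is below the smallest hash seen?
    float percentage = (float)smallestHashSeen / 4294967296.0f; // 2^32
    return (1.0f / percentage) - 1.0f;
}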

Reducing Error

This technique doesn’t suffer too much from hash collisions (so long as your hash function is a decent hash function), but it does have a different sort of problem.

As you might guess, sometimes the hash function might not play nice, and you could hash only a single item, get a hash of 1 and ruin the count estimation. So, there is possibility for error in this technique.

To help reduce error, you ultimately need information about more ranges so that you can combine the multiple pieces of information together to get a more accurate estimate.

Here are some ways to gather information about more ranges:

  • Keep the lowest and highest hash seen instead of just the lowest. This doubles the range information you have since you’d know the range 0-min and max-1.
  • Keep the lowest N hashes seen instead of only the lowest. This multiplies the amount of range information you have by N.
  • Instead of keeping the lowest value from a single hash, perform N hashes instead and keep the lowest value seen from each hash. Again, this multiplies the amount of range information you have by N.
  • Alternately, just salt the same hash N different ways (with high quality random numbers) instead of using N different hash functions, but still keep the lowest seen from each “value channel”. Multiplies info by N again.
  • Also a possibility, instead of doing N hashes or N salts, do a single hash, and xor that hash against N different high quality random numbers to come up with N deterministic pseudorandom numbers per item. Once again, still keep the lowest hash value seen from each “value channel”. Multiplies info by N again
  • Mix the above however you see fit!

Whatever route you go, the ultimate goal is to just get information about more ranges to be able to combine them together.

In vanilla KMV, the second option is used, which is to keep the N lowest hashes seen, instead of just a single lowest hash seen. Thus the full name for KMV: K-Minimum Values.

Combining Info From Multiple Ranges

When you have the info from multiple ranges and need to combine that info, it turns out that the harmonic mean is the way to go, because it's great at filtering out unusually large values that don't fit with the rest of the data.

Since we are using division to turn the hash value into an estimate (1/percentValue-1), unusually small hashes will result in exponentially larger values, while unusually large hashes will not affect the math as much, but also likely will be thrown out before we ever see them since they will likely not be the minimum hash that we’ve seen.

I don’t have supporting info handy, but from what I’ve been told, the harmonic mean is provably better than both the geometric mean and the regular plain old vanilla average (arithmetic mean) in this situation.

So, to combine information from the multiple ranges you’ve gathered, you turn each range into a distinct value estimate (by calculating 1/percentValue-1) and then putting all those values through the mean equation of your choice (which ought to be harmonic mean, but doesn’t strictly have to be!). The result will be your final answer.
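
A sketch of that combining step, assuming you've already turned each range into a count estimate:

#include <vector>

// combine per-range count estimates with a harmonic mean
float HarmonicMean (const std::vector<float>& estimates)
{
    float sumOfReciprocals = 0.0f;
    for (float estimate : estimates)
        sumOfReciprocals += 1.0f / estimate;
    return (float)estimates.size() / sumOfReciprocals;
}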

Set Operations

Even though KMV is just a distinct value estimator that estimates a count, there are some interesting probabilistic set operations that you can do with it as well. I’ll be talking about using the k min value technique for gathering information from multiple ranges, but if you use some logic you should be able to figure out how to make it work when you use any of the other techniques.

Jaccard Index

Talking about set operations, I want to start with a concept called the Jaccard index (sometimes called the Jaccard similarity coefficient).

If you have 2 sets, the Jaccard index is calculated by:

Jaccard Index = count(intersection(A,B)) / count(union(A,B))

Since the union of A and B is the combined list of all items in those sets, and the intersection of A and B is the items that they have in common, you can see that if the sets have all items in common, the Jaccard index will be 1 and if the sets have no items in common, the Jaccard index will be 0. If you have some items in common it will be somewhere between 0 and 1. So, the Jaccard Index is just a measurement of how similar two sets are.

Union

If you have the information for two KMV objects, you can get an estimate to the number of unique items there if you were to union them together, even though you don’t have much of the info about the items that are in the set.

To do a union, you just combine the minimum value lists, and remove the K largest ones, so that you are left with the K minimum values from both sets.

You then do business as usual to estimate how many items are in that resulting set.

If you think about it, this makes sense, because if you tracked the items from both lists in a third KMV object, you’d end up with the same K minimums as if you took the K smallest values from both sets individually.

Note that if the two KMV objects are of different size, due to K being different sizes, or because either one isn’t completely filled with K minimum values, you should use the smaller value of K as your union set K size.

Intersection

Finding an estimated count of intersections between two sets is a little bit different.

Basically, having the K minimum hashes seen from both lists, you can think of that as a kind of random sample of a range in both lists. You can then calculate the Jaccard index for that range you have for the two sets (by dividing the size of the intersection by the size of the union), and then use that Jaccard index to estimate an intersection count for the entire set based on the union count estimate.

You can do some algebra to the Jaccard index formula above to get this:

count(intersection(A,B)) = Jaccard Index * count(union(A,B))

Just like with union, if the two KMV objects are of different size, due to K being different sizes, or because either one isn’t completely filled with K minimum values, you should use the smaller value of K.

Sample Code

#include <stdio.h>
#include <array>
#include <set>
#include <vector>
#include <string>
#include <algorithm>
#include <functional>
#include <cmath>

// The CKMVCounter class
template <typename T, unsigned int NUMHASHES, typename HASHER = std::hash<T>>
class CKMVUniqueCounter
{
public:
	// constants
	static const unsigned int c_numHashes = NUMHASHES;
	static const size_t c_invalidHash = (size_t)-1;

	// constructor
	CKMVUniqueCounter ()
	{
		// fill our minimum hash values with the maximum possible value
		m_minHashes.fill(c_invalidHash);
		m_largestHashIndex = 0;
	}

	// interface
	void AddItem (const T& item)
	{
		// if the new hash isn't smaller than our current largest hash, do nothing
		const size_t newHash = HASHER()(item);
		if (m_minHashes[m_largestHashIndex] <= newHash)
			return;

		// if the new hash is already in the list, don't add it again
		for (unsigned int index = 0; index < c_numHashes; ++index)
		{
			if (m_minHashes[index] == newHash)
				return;
		}

		// otherwise, replace the largest hash
		m_minHashes[m_largestHashIndex] = newHash;

		// and find the new largest hash
		m_largestHashIndex = 0;
		for (unsigned int index = 1; index < c_numHashes; ++index)
		{
			if (m_minHashes[index] > m_minHashes[m_largestHashIndex])
				m_largestHashIndex = index;
		}
	}

	// probabilistic interface
	void UniqueCountEstimates (float &arithmeticMeanCount, float &geometricMeanCount, float &harmonicMeanCount)
	{
		// calculate the means of the count estimates.  Note that if we didn't get enough items
		// to fill our m_minHashes array, we are just ignoring the unfilled entries.  In production
		// code, you would probably just want to return the number of items that were filled since that
		// is likely to be a much better estimate.
		// Also, we need to sort the hashes before calculating uniques so that we can get the ranges by
		// using [i]-[i-1] instead of having to search for the next largest item to subtract out
		SortHashes();
		arithmeticMeanCount = 0.0f;
		geometricMeanCount = 1.0f;
		harmonicMeanCount = 0.0f;
		int numHashes = 0;
		for (unsigned int index = 0; index < c_numHashes; ++index)
		{
			if (m_minHashes[index] == c_invalidHash)
				continue;
			numHashes++;
			float countEstimate = CountEstimate(index);
			arithmeticMeanCount += countEstimate;
			geometricMeanCount *= countEstimate;
			harmonicMeanCount += 1.0f / countEstimate;
		}
		arithmeticMeanCount = arithmeticMeanCount / (float)numHashes;
		geometricMeanCount = pow(geometricMeanCount, 1.0f / (float)numHashes);
		harmonicMeanCount /= (float)numHashes;
		harmonicMeanCount = 1.0f / harmonicMeanCount;
	}

	// friends
	template <typename T2, unsigned int NUMHASHES2, typename HASHER2>
	friend CKMVUniqueCounter<T2, NUMHASHES2, HASHER2> KMVUnion (
		const CKMVUniqueCounter<T2, NUMHASHES2, HASHER2>& a,
		const CKMVUniqueCounter<T2, NUMHASHES2, HASHER2>& b
	);

	template <typename T2, unsigned int NUMHASHES2, typename HASHER2>
	friend float KMVJaccardIndex (
		const CKMVUniqueCounter<T2, NUMHASHES2, HASHER2>& a,
		const CKMVUniqueCounter<T2, NUMHASHES2, HASHER2>& b
	);

private:

	unsigned int NumHashesSet () const
	{
		unsigned int ret = 0;
		for (unsigned int index = 0; index < c_numHashes; ++index)
		{
			if (m_minHashes[index] != c_invalidHash)
				ret++;
		}
		return ret;
	}

	void SortHashes ()
	{
		std::sort(m_minHashes.begin(), m_minHashes.end());
	}

	float CountEstimate (unsigned int hashIndex) const
	{
		// estimate the count for the range ending at this hash: (1 / percent) - 1, where
		// percent is the size of the range from the previous hash (or 0) to this hash,
		// as a percentage of the total hash space
		const size_t currentHash = m_minHashes[hashIndex];
		const size_t lastHash = hashIndex > 0 ? m_minHashes[hashIndex - 1] : 0;
		const float percent = (float)(currentHash - lastHash) / (float)((size_t)-1);
		return (1.0f / percent) - 1.0f;
	}
	
	// the minimum hash values
	std::array<size_t, NUMHASHES> m_minHashes;
	size_t m_largestHashIndex;
};

// Set interface
template <typename T, unsigned int NUMHASHES, typename HASHER>
CKMVUniqueCounter<T, NUMHASHES, HASHER> KMVUnion (
	const CKMVUniqueCounter<T, NUMHASHES, HASHER>& a,
	const CKMVUniqueCounter<T, NUMHASHES, HASHER>& b
)
{
	// gather the K smallest hashes seen, where K is the smaller, removing duplicates
	std::set<size_t> setMinHashes;
	std::for_each(a.m_minHashes.begin(), a.m_minHashes.end(), [&setMinHashes](size_t v) {setMinHashes.insert(v); });
	std::for_each(b.m_minHashes.begin(), b.m_minHashes.end(), [&setMinHashes](size_t v) {setMinHashes.insert(v); });
	std::vector minHashes(setMinHashes.begin(), setMinHashes.end());
	std::sort(minHashes.begin(),minHashes.end());
	minHashes.resize(std::min(a.NumHashesSet(), b.NumHashesSet()));

	// create and return the new KMV union object
	CKMVUniqueCounter<T, NUMHASHES, HASHER> ret;
	for (unsigned int index = 0; index < minHashes.size(); ++index)
		ret.m_minHashes[index] = minHashes[index];
	ret.m_largestHashIndex = ret.c_numHashes - 1;
	return ret;
}

template <typename T, unsigned int NUMHASHES, typename HASHER>
float KMVJaccardIndex (
	const CKMVUniqueCounter<T, NUMHASHES, HASHER>& a,
	const CKMVUniqueCounter<T, NUMHASHES, HASHER>& b
)
{
	size_t smallerK = std::min(a.NumHashesSet(), b.NumHashesSet());

	size_t matches = 0;
	for (size_t ia = 0; ia < smallerK; ++ia)
	{
		for (size_t ib = 0; ib < smallerK; ++ib)
		{
			if (a.m_minHashes[ia] == b.m_minHashes[ib])
			{
				matches++;
				break;
			}
		}
	}

	return (float)matches / (float)smallerK;
}

// data to add to the lists
const char *s_boyNames[] =
{
	"Loki",
	"Alan",
	"Paul",
	"Stripes",
	"Shelby",
	"Ike",
	"Rafael",
	"Sonny",
	"Luciano",
	"Jason",
	"Brent",
	"Jed",
	"Lesley",
	"Randolph",
	"Isreal",
	"Charley",
	"Valentin",
	"Dewayne",
	"Trent",
	"Abdul",
	"Craig",
	"Andre",
	"Brady",
	"Markus",
	"Randolph",
	"Isreal",
	"Charley",
	"Brenton",
	"Herbert",
	"Rafael",
	"Sonny",
	"Luciano",
	"Joshua",
	"Ramiro",
	"Osvaldo",
	"Monty",
	"Mckinley",
	"Colin",
	"Hyman",
	"Scottie",
	"Tommy",
	"Modesto",
	"Reginald",
	"Lindsay",
	"Alec",
	"Marco",
	"Dee",
	"Randy",
	"Arthur",
	"Hosea",
	"Laverne",
	"Bobbie",
	"Damon",
	"Les",
	"Cleo",
	"Robt",
	"Rick",
	"Alonso",
	"Teodoro",
	"Rodolfo",
	"Ryann",
	"Miki",
	"Astrid",
	"Monty",
	"Mckinley",
	"Colin",
	nullptr
};

const char *s_girlNames[] =
{
	"Chanel",
	"Colleen",
	"Scorch",
	"Grub",
	"Anh",
	"Kenya",
	"Georgeann",
	"Anne",
	"Inge",
	"Georgeann",
	"Anne",
	"Inge",
	"Analisa",
	"Ligia",
	"Chasidy",
	"Marylee",
	"Lashandra",
	"Frida",
	"Katie",
	"Alene",
	"Brunilda",
	"Zoe",
	"Shavon",
	"Anjanette",
	"Daine",
	"Sheron",
	"Hilary",
	"Felicitas",
	"Cristin",
	"Ressie",
	"Tynisha",
	"Annie",
	"Sharilyn",
	"Astrid",
	"Charise",
	"Gregoria",
	"Angelic",
	"Lesley",
	"Mckinley",
	"Lindsay",
	"Shanelle",
	"Karyl",
	"Trudi",
	"Shaniqua",
	"Trinh",
	"Ardell",
	"Doreen",
	"Leanna",
	"Chrystal",
	"Treasa",
	"Dorris",
	"Rosalind",
	"Lenore",
	"Mari",
	"Kasie",
	"Ann",
	"Ryann",
	"Miki",
	"Lasonya",
	"Olimpia",
	"Shelby",
	"Lesley",
	"Mckinley",
	"Lindsay",
	"Dee",
	"Bobbie",
	"Cleo",
	"Leanna",
	"Chrystal",
	"Treasa",
	nullptr
};

// driver program
void WaitForEnter ()
{
	printf("nPress Enter to quit");
	fflush(stdin);
	getchar();
}

int main(void)
{
	// how many min values all these KMV objects keep around
	static const unsigned int c_numMinValues = 15;

	printf("Using %u minimum values\n\n", c_numMinValues);

	// =====================  Boy Names  =====================
	// put our data into the KVM counter
	CKMVUniqueCounter<std::string, c_numMinValues> boyCounter;
	unsigned int index = 0;
	while (s_boyNames[index] != nullptr)
	{
		boyCounter.AddItem(s_boyNames[index]);
		index++;
	}

	// get our count estimates
	float arithmeticMeanCount, geometricMeanCount, harmonicMeanCount;
	boyCounter.UniqueCountEstimates(arithmeticMeanCount, geometricMeanCount, harmonicMeanCount);

	// get our actual unique count
	std::set<std::string> actualBoyUniques;
	index = 0;
	while (s_boyNames[index] != nullptr)
	{
		actualBoyUniques.insert(s_boyNames[index]);
		index++;
	}

	// print the results!
	printf("Boy Names:n%u actual uniquesn", actualBoyUniques.size());
	float actualCount = (float)actualBoyUniques.size();
	printf("Estimated counts and percent error:n  Arithmetic Mean: %0.2ft%0.2f%%n"
		"  Geometric Mean : %0.2ft%0.2f%%n  Harmonic Mean  : %0.2ft%0.2f%%n",
		arithmeticMeanCount, 100.0f * (arithmeticMeanCount - actualCount) / actualCount,
		geometricMeanCount, 100.0f * (geometricMeanCount - actualCount) / actualCount,
		harmonicMeanCount, 100.0f * (harmonicMeanCount - actualCount) / actualCount);

	// =====================  Girl Names  =====================
	// put our data into the KVM counter
	CKMVUniqueCounter<std::string, c_numMinValues> girlCounter;
	index = 0;
	while (s_girlNames[index] != nullptr)
	{
		girlCounter.AddItem(s_girlNames[index]);
		index++;
	}

	// get our count estimates
	girlCounter.UniqueCountEstimates(arithmeticMeanCount, geometricMeanCount, harmonicMeanCount);

	// get our actual unique count
	std::set<std::string> actualGirlUniques;
	index = 0;
	while (s_girlNames[index] != nullptr)
	{
		actualGirlUniques.insert(s_girlNames[index]);
		index++;
	}

	// print the results!
	printf("nGirl Names:n%u actual uniquesn", actualGirlUniques.size());
	actualCount = (float)actualGirlUniques.size();
	printf("Estimated counts and percent error:n  Arithmetic Mean: %0.2ft%0.2f%%n"
		"  Geometric Mean : %0.2ft%0.2f%%n  Harmonic Mean  : %0.2ft%0.2f%%n",
		arithmeticMeanCount, 100.0f * (arithmeticMeanCount - actualCount) / actualCount,
		geometricMeanCount, 100.0f * (geometricMeanCount - actualCount) / actualCount,
		harmonicMeanCount, 100.0f * (harmonicMeanCount - actualCount) / actualCount);

	// =====================  Set Operations  =====================

	// make the KMV union and get our count estimates
	CKMVUniqueCounter<std::string, c_numMinValues> boyGirlUnion = KMVUnion(boyCounter, girlCounter);
	boyGirlUnion.UniqueCountEstimates(arithmeticMeanCount, geometricMeanCount, harmonicMeanCount);

	// make the actual union
	std::set<std::string> actualBoyGirlUnion;
	std::for_each(actualBoyUniques.begin(), actualBoyUniques.end(),
		[&actualBoyGirlUnion](const std::string& s)
		{
			actualBoyGirlUnion.insert(s);
		}
	);
	std::for_each(actualGirlUniques.begin(), actualGirlUniques.end(),
		[&actualBoyGirlUnion](const std::string& s)
		{
			actualBoyGirlUnion.insert(s);
		}
	);

	// print the results!
	printf("nUnion:n%u actual uniques in unionn", actualBoyGirlUnion.size());
	actualCount = (float)actualBoyGirlUnion.size();
	printf("Estimated counts and percent error:n  Arithmetic Mean: %0.2ft%0.2f%%n"
		"  Geometric Mean : %0.2ft%0.2f%%n  Harmonic Mean  : %0.2ft%0.2f%%n",
		arithmeticMeanCount, 100.0f * (arithmeticMeanCount - actualCount) / actualCount,
		geometricMeanCount, 100.0f * (geometricMeanCount - actualCount) / actualCount,
		harmonicMeanCount, 100.0f * (harmonicMeanCount - actualCount) / actualCount);

	// calculate estimated jaccard index
	float estimatedJaccardIndex = KMVJaccardIndex(boyCounter, girlCounter);

	// calculate actual jaccard index and actual intersection
	size_t actualIntersection = 0;
	std::for_each(actualBoyUniques.begin(), actualBoyUniques.end(),
		[&actualGirlUniques, &actualIntersection] (const std::string &s)
		{
			if (actualGirlUniques.find(s) != actualGirlUniques.end())
				actualIntersection++;
		}
	);
	float actualJaccardIndex = (float)actualIntersection / (float)actualBoyGirlUnion.size();

	// calculate estimated intersection
	float estimatedIntersection = estimatedJaccardIndex * (float)actualBoyGirlUnion.size();

	// print the intersection and jaccard index information
	printf("nIntersection:n%0.2f estimated, %u actual.  Error: %0.2f%%n",
		estimatedIntersection,
		actualIntersection,
		100.0f * (estimatedIntersection - (float)actualIntersection) / (float)actualIntersection);

	printf("nJaccard Index:n%0.2f estimated, %0.2f actual.  Error: %0.2f%%n",
		estimatedJaccardIndex,
		actualJaccardIndex,
		100.0f * (estimatedJaccardIndex-actualJaccardIndex) / actualJaccardIndex
	);

	WaitForEnter();
}

Here’s the output of the program:

Upgrading from Set to Multi Set

Interestingly, if you keep a count of how many times you’ve seen each hash in the k minimum hash list, you can upgrade this algorithm from a set algorithm to a multiset algorithm and get some other interesting information out of it.

Where a set is basically a list of unique items, a multiset is a set of unique items that each have a count associated with them. In this way, you can think of a multiset as just a list of items which may appear more than once.

Upgrading KMV to a multiset algorithm lets you do some new and interesting things: instead of getting information only about unique counts, you can get information about non-unique counts too. To reiterate, you still keep the ability to get the unique information as well, so it really is an upgrade if you are interested in multiset information.
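
Here’s a minimal sketch of what that upgrade could look like, assuming the same kind of setup as the sample code above. The names SMinHashCount and AddItemMultiset are made up for this illustration; a real version would fold this into the counter class itself:

#include <algorithm>
#include <array>
#include <functional>
#include <string>

// each of the K minimum slots holds a hash and how many times it was added
struct SMinHashCount
{
	size_t       hash  = (size_t)-1;  // (size_t)-1 means "slot unused"
	unsigned int count = 0;
};

template <size_t K>
void AddItemMultiset (std::array<SMinHashCount, K>& minHashes, const std::string& item)
{
	const size_t newHash = std::hash<std::string>()(item);

	// if the hash is already one of our K minimum values, just bump its count
	for (SMinHashCount& entry : minHashes)
	{
		if (entry.hash == newHash)
		{
			entry.count++;
			return;
		}
	}

	// otherwise, if it's smaller than the largest stored hash, replace that
	// entry and start its count over at 1
	auto largest = std::max_element(minHashes.begin(), minHashes.end(),
		[] (const SMinHashCount& a, const SMinHashCount& b) { return a.hash < b.hash; });
	if (newHash < largest->hash)
	{
		largest->hash = newHash;
		largest->count = 1;
	}
}

The counts kept alongside the surviving minimum hashes act as a random sample of item frequencies, which is what the multiset style estimates are built from.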

Links

Want more info about this technique?

Sketch of the Day: K-Minimum Values

Sketch of the Day: Interactive KMV web demo

K-Minimum Values: Sketching Error, Hash Functions, and You

Wikipedia: MinHash – Jaccard similarity and minimum hash values

All Done

I wanted to mention that even though this algorithm is a great, intuitive introduction to probabilistic algorithms, there are actually much better distinct value estimators in use today, such as one called HyperLogLog, which seems to be the current winner. Look for a post on HyperLogLog soon 😛

KMV is better than other algorithms at a few things though. Specifically, from what I’ve read, the fact that it extends to multisets makes it very useful, and it is also much easier to calculate intersections with than it is with other algorithms.

I also wanted to mention that there are interesting use cases for this type of algorithm in game development, but where these probabilistic algorithms really shine is in massive data situations like Google, Amazon, Netflix, etc. If you go out searching for more info on this stuff, you’ll probably be led down many big data / web dev rabbit holes, because that’s where the best information about this stuff resides.

Lastly, I wanted to mention that I’m using the built-in C++ std::hash function. I haven’t done a lot of research to see how it compares to other hashes, but I’m sure, much like rand(), the built-in functionality leaves much to be desired for “power user” situations. In other words, if you are going to use this in a realistic situation, you are probably better off using a better hashing algorithm. If you need a fast one, you might look into the latest MurmurHash variant!

More probabilistic algorithm posts coming soon, so keep an eye out!

Programmatically Calculating GCD and LCM

I recently came across a really interesting technique for calculating GCD (greatest common divisor) and then found out you can use that to calculate LCM (least common multiple).

Greatest Common Divisor

The greatest common divisor of two numbers is the largest number that divides evenly into those numbers.

For instance the GCD of 12 and 15 is 3, the GCD of 30 and 20 is 10, and the GCD of 7 and 11 is 1.

You could calculate this with brute force – starting with 1 and counting up to the smaller number, keeping track of the largest number that divides evenly into both numbers – but for larger numbers, this technique could take a long time.
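
For illustration, the brute force version might look something like this (the function name is just made up for this sketch):

// brute force GCD: try every value from 1 up to the smaller input, and
// remember the largest one that divides evenly into both numbers
unsigned int CalculateGCDBruteForce (unsigned int A, unsigned int B)
{
	unsigned int gcd = 1;
	const unsigned int smaller = (A < B) ? A : B;
	for (unsigned int candidate = 1; candidate <= smaller; ++candidate)
	{
		if ((A % candidate == 0) && (B % candidate == 0))
			gcd = candidate;
	}
	return gcd;
}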

Luckily, Euclid came up with a better way around 300 BC!

Euclid’s algorithm to find the GCD of numbers A and B:

  1. If A and B are the same number, that number is the GCD
  2. Otherwise, subtract the smaller number from the larger number
  3. Goto 1

Pretty simple right? It’s not immediately intuitive why that works, but as an example let’s say that there’s a number that goes into A five times, and goes into B three times. That same number must go into (A-B) two times. For instance, take A = 15 and B = 9: 3 goes into 15 five times and into 9 three times, and sure enough it goes into 15 - 9 = 6 exactly two times.

Try it out on paper, think about it a bit, and check out the links at the end of this section (:
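
If you want to see the subtraction version in code, here’s a quick sketch (the function name is mine, and it assumes both inputs are greater than 0):

// Euclid's GCD by repeated subtraction: keep subtracting the smaller number
// from the larger until the two numbers are equal; that value is the GCD
unsigned int CalculateGCDSubtraction (unsigned int A, unsigned int B)
{
	while (A != B)
	{
		if (A > B)
			A -= B;
		else
			B -= A;
	}
	return A;
}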

A refinement on that algorithm is to use remainder (modulus) instead of possibly having to do repeated subtraction to get the same result. For instance if you had the numbers 1015 and 2, you are going to have to subtract 2 from 1015 quite a few times before the 2 becomes the larger number.

Here’s the refined algorithm:

  1. If A and B are the same number, that number is the GCD
  2. Otherwise, set the larger number to be the remainder of the larger number divided by the smaller number
  3. Goto 1

And here’s the C++ code:

#include <stdio.h>
#include <algorithm>

unsigned int CalculateGCD (unsigned int smaller, unsigned int larger)
{
	// make sure A <= B before starting
	if (larger < smaller)
		std::swap(smaller, larger);

	// loop
	while (1)
	{
		// if the remainder of larger / smaller is 0, they are the same
		// so return smaller as the GCD
		unsigned int remainder = larger % smaller;
		if (remainder == 0)
			return smaller;

		// otherwise, the new larger number is the old smaller number, and
		// the new smaller number is the remainder
		larger = smaller;
		smaller = remainder;
	}
}

void WaitForEnter ()
{
	printf("nPress Enter to quit");
	fflush(stdin);
	getchar();
}

int main (void)
{
	// Get A
	printf("Greatest Common Devisor Calculator, using Euclid's algorithm!n");
	printf("nFirst number? ");
	unsigned int A = 0;
	if (scanf("%u",&A) == 0 || A == 0) {
		printf("nMust input a positive integer greater than 0!");
		WaitForEnter();
		return;
	}

	// Get B
	printf("Second number? ");
	unsigned int B = 0;
	if (scanf("%u",&B) == 0 || B == 0) {
		printf("nMust input a positive integer greater than 0!");
		WaitForEnter();
		return;
	}
	
	// show the result
	printf("nGCD(%u,%u) = %un", A, B, CalculateGCD(A,B));

	// wait for user to press enter
	WaitForEnter();
}

I found this stuff in Michael Abrash’s Graphics Programming Black Book Special Edition: Patient Coding, Faster Code.

That book is filled with amazing treasures of knowledge and interesting stories to boot. I highly suggest flipping to a couple random chapters and reading it a bit. Very cool stuff in there (:

You might also find these links interesting or useful!
Wikipedia: Greatest Common Divisor
Wikipedia: Euclidean Algorithm

I’m sure there’s a way to extend this algorithm to work for N numbers at a time instead of only 2 numbers. I’ll leave that as a fun exercise for you if you want to play with that 😛

Least Common Multiple

The least common multiple of two numbers is the smallest number that is evenly divisible by those numbers.

Kind of a mouthful, so some examples: The LCM of 3 and 4 is 12, the LCM of 1 and 7 is 7, and the LCM of 20 and 35 is 140. Note that in the first two examples, the LCM is just the two numbers multiplied together, but in the 3rd example it isn’t (also interesting to note: the first 2 examples have a GCD of 1, while the 3rd example has a GCD of 5).

Well interestingly, calculating the LCM is super easy if you already know how to calculate the GCD. You just multiply the numbers together and divide by the GCD.

LCM(A,B) = (A*B) / GCD(A,B)

Interestingly though, GCD(A,B) divides evenly into both A and B, so A divided by GCD(A,B) gives an exact integer result. This means we can do the division first and multiply by B afterwards and get the exact same answer. More importantly, doing it that way helps protect you against integer overflow in the A*B calculation. Using that knowledge, the equation becomes this:

LCM(A,B) = (A / GCD(A,B))*B
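
As a quick sanity check with the earlier numbers: LCM(20,35) = (20 / GCD(20,35)) * 35 = (20 / 5) * 35 = 4 * 35 = 140. Same answer as before, but the largest intermediate value is the answer itself, instead of the 700 you’d get from 20*35.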

Pretty neat! Here’s some C++ code that calculates LCM.

#include <stdio.h>
#include <algorithm>

unsigned int CalculateGCD (unsigned int smaller, unsigned int larger)
{
	// make sure A <= B before starting
	if (larger < smaller)
		std::swap(smaller, larger);

	// loop
	while (1)
	{
		// if the remainder of larger / smaller is 0, they are the same
		// so return smaller as the GCD
		unsigned int remainder = larger % smaller;
		if (remainder == 0)
			return smaller;

		// otherwise, the new larger number is the old smaller number, and
		// the new smaller number is the remainder
		larger = smaller;
		smaller = remainder;
	}
}

unsigned int CalculateLCM (unsigned int A, unsigned int B)
{
	// LCM(A,B) = (A/GCD(A,B))*B
	return (A/CalculateGCD(A,B))*B;
}

void WaitForEnter ()
{
	printf("nPress Enter to quit");
	fflush(stdin);
	getchar();
}

int main (void)
{
	// Get A
	printf("Least Common Multiple Calculator, using Euclid's algorithm for GCD!n");
	printf("nFirst number? ");
	unsigned int A = 0;
	if (scanf("%u",&A) == 0 || A == 0) {
		printf("nMust input a positive integer greater than 0!");
		WaitForEnter();
		return;
	}

	// Get B
	printf("Second number? ");
	unsigned int B = 0;
	if (scanf("%u",&B) == 0 || B == 0) {
		printf("nMust input a positive integer greater than 0!");
		WaitForEnter();
		return;
	}
	
	// show the result
	printf("nLCM(%u,%u) = %un", A, B, CalculateLCM(A,B));

	// wait for user to press enter
	WaitForEnter();
}

Extending this to N numbers could be an interesting thing to try too (:

Here’s a tasty link about LCM: Wikipedia: Least Common Multiple

Compile Time GCD and LCM

I’ve just heard that a compile time GCD and LCM implementation has been recommended for the STL. Check out the link below, kinda neat!

Greatest Common Divisor and Least Common Multiple
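
I haven’t looked at the actual proposal, but just to sketch the general idea, here’s one way to do a compile time GCD and LCM with template recursion (my own illustration, not the proposed interface):

// compile time GCD via template recursion: GCD(A,B) = GCD(B, A % B), with
// GCD(A,0) = A as the base case.  All of this is evaluated by the compiler.
template <unsigned int A, unsigned int B>
struct CompileTimeGCD
{
	static const unsigned int value = CompileTimeGCD<B, A % B>::value;
};

template <unsigned int A>
struct CompileTimeGCD<A, 0>
{
	static const unsigned int value = A;
};

// LCM follows from GCD, same as at runtime
template <unsigned int A, unsigned int B>
struct CompileTimeLCM
{
	static const unsigned int value = (A / CompileTimeGCD<A, B>::value) * B;
};

// usage: these checks happen entirely at compile time
static_assert(CompileTimeGCD<30, 20>::value == 10, "GCD of 30 and 20 should be 10");
static_assert(CompileTimeLCM<20, 35>::value == 140, "LCM of 20 and 35 should be 140");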

TTFN.