Generating Random Numbers From a Specific Distribution With Rejection Sampling

The last post showed how to transform uniformly generated random numbers into any random number distribution you desired.

It did so by turning the PDF (probability density function) into a CDF (cumulative distribution function) and then inverting it – either analytically (making a function) or numerically (making a look up table).

This post will show you how to generate numbers from a PDF as well, but will do so using rejection sampling.

Dice

Let’s say you wanted to simulate a fair five sided die, but you only had a six sided die.

You can use rejection sampling for this by rolling a six sided die and ignoring the roll any time a six comes up. Doing that, you do in fact get a fair five sided die roll!
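
A minimal sketch of that idea (the full test code is at the end of the post; the RollD5 name is just for illustration):

#include <random>

int RollD5 (std::mt19937& rng)
{
    std::uniform_int_distribution<int> d6(1, 6); // a fair six sided die
    int roll = d6(rng);
    while (roll == 6)  // reject and re-roll whenever a six comes up
        roll = d6(rng);
    return roll;       // the surviving rolls are a fair 1 to 5
}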

This shows doing that to get 10,000 five sided die rolls:

One disadvantage to this method is that you are throwing away die rolls which can be a source of inefficiency. In this setup it takes 1.2 six sided die rolls on average to get a valid five sided die roll since a roll will be thrown away 1/6 of the time.

Another disadvantage is that each time you need a new value, there are an unknown number of die rolls needed to get it. On average it’s true that you only need 1.2 die rolls, but in reality, it’s possible you may roll 10 sixes in a row. Heck it’s even technically possible (but very unlikely) that you could be rolling dice until the end of time and keep getting sixes. (Using PRNG’s in computers, this won’t happen, but it does take a variable number of rolls).

This is just to say: there is uneven and unpredictable execution time of this algorithm, and it needs an unknown (but somewhat predictable) amount of random numbers to work. This is true of the other forms of sampling methods I talk about lower down as well.

Instead of using a six sided die you could use a die of any size that is greater than (or equal to…) five. Here is a twenty sided die simulating a five sided die:

It looks basically the same as using a six sided die, which makes sense (that shows that it works), but in this case, it actually took 4 rolls on average to make a valid five sided die roll, since the roll fails 15/20 times (3 out of 4 rolls will fail).

Quick Asides:

  • If straying from rejection sampling ideas for a minute, in the case of the twenty sided die, you could use modulus to get a fair five sided die roll each time: ((roll - 1) % 5) + 1. This works because there is no remainder for 20 % 5. If there were a remainder, it would bias the rolls towards the numbers <= the remainder, making them more likely to come up than the other numbers. (A small code sketch of this and the next trick appears after this list.)
  • You could also get a four sided die roll at the same time if you didn’t want to waste any of this precious random information: ((roll - 1) / 5) + 1
  • Another algorithm to check out for discrete (integer) weighted random numbers is Vose’s method.
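
Here’s a tiny sketch of those two tricks (hypothetical helper names; roll is assumed to be a fair 1 to 20 value):

int D20ToD5 (int roll) { return ((roll - 1) % 5) + 1; } // a fair d5, since 20 divides evenly by 5
int D20ToD4 (int roll) { return ((roll - 1) / 5) + 1; } // an independent fair d4 from the same roll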

Box Around PDF

Moving back into the world of continuous valued random numbers and PDF’s, a simple version of how rejection sampling can be used is like this:

  1. Graph your PDF
  2. Draw a box around the PDF
  3. Generate a (uniform) random point in that box
  4. If the point is under the curve of the PDF, use the x axis value as your random number, else throw it out and go to 3

That’s all there is to it!

This works because the x axis value of your 2d point is the random number you might be choosing, and the y axis value decides whether that number gets accepted. Since the PDF graph is higher in places that are more probable, those places are more likely to accept your 2d point than places that have lower PDF values.

Furthermore, the ratio of rejected to accepted samples depends on the area under the PDF compared to the area of the box.

The number of samples needed on average to get one accepted sample will be the area of the box divided by the area under the PDF.

Since PDF’s by definition have to integrate to 1, that means that you are dividing by 1. So, to simplify: The number of samples on average will be the same as the area of the box!

If it’s hard to come up with the exact size of the box for the PDF, the box doesn’t have to fit exactly, but of course the tighter you can fit the box around the PDF, the fewer rejected samples you’ll have.

You don’t actually need to graph the PDF and draw a box to do this though. Just generate a 2d random number (a random x and a random y) and reject the point if PDF(x) < y.
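
Here’s a minimal sketch of that loop (the SampleFromPDF name is just for illustration; the full test code is at the end of the post). PDF can be any callable defined over [0,1), and maxPDFValue is the height of the box:

#include <random>

template <typename PDF_LAMBDA>
float SampleFromPDF (std::mt19937& rng, float maxPDFValue, const PDF_LAMBDA& PDF)
{
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);
    while (true)
    {
        float x = dist(rng);                // random x within the box
        float y = dist(rng) * maxPDFValue;  // random y within the box
        if (y <= PDF(x))                    // the point is under the curve, so accept x
            return x;
    }
}

For example, SampleFromPDF(rng, 2.0f, [](float x){ return 2.0f * x; }) samples from the PDF y=2x used below.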

Here I'm using this technique with the PDF y=2x where x is in [0,1) and I'm using a box that goes from (0,0) to (1,2) to get 100,000 samples.

As expected, it took on average 2 points to get a single valid point since the area of the box is 2. Here are how many failed tests each histogram bucket had. Unsurprisingly, lower values of the PDF have more failed tests!

Moving to a more complex PDF, let’s look at y=\frac{x^3-10x^2+5x+11}{10.417}

Here are 10 million samples (lots of samples to minimize the noise), using a box height of 1.2, which unsurprisingly takes 1.2 samples on average to get a valid sample:

Here is the graph of the failure counts:

Here the box has a height of 2.8. It still works, but uses 2.8 samples on average which is less efficient:

Here’s the graph of failure counts:

Something interesting about this technique is that technically, the distribution you are sampling from doesn’t even have to be a PDF! If you have negative parts of the graph, they will be treated as zero, assuming your box has a minimum y of 0. Also, the fact that your function may not integrate to (have an area of) 1 doesn’t matter at all.

Here we take the PDF from the last examples, and take off the division by a constant, so that it doesn’t integrate to 1: y=x^3-10x^2+5x+11

The interesting thing is that we get as output a normalized PDF (the red line), even though the distribution we were using to sample was not normalized (the blue line, which is mostly hidden behind the yellow line).

Here are the rejection counts:

Generating One PDF from Another PDF

In the last section we showed how to enclose a PDF in a box, make uniformly random 2d points, and use them to generate points from the PDF.

By enclosing it in a box, all we were really doing was putting it under a uniform distribution that was scaled up to be larger than the PDF at all points.

Now here’s the interesting thing: We aren’t limited to using the uniform distribution!

To generalize this technique, if you are trying to sample from a PDF f(x), you can use any PDF g(x) to do so, so long as you multiply g(x) by a scalar value M so that M*g(x)>= f(x) for all values of x. In other words: scale up g so that it’s always bigger than f.

Using this more generalized technique has one or two more steps than the other way, but allows for a tighter fit of a generating function, resulting in fewer samples thrown away.

Here’s how to do it:

  1. Generate a random number from the distribution g, and call it x.
  2. Calculate the percentage chance of x being chosen by getting a ratio of how likely that number is to be chosen in each PDF: \frac{f(x)}{M*g(x)}
  3. Generate a uniform random number from 0 to 1. If it’s less than the value you just calculated, accept x as the random number, else reject it and go back to 1.
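
Here’s that loop as a minimal sketch (the SampleFfromG name is just for illustration; PDF_F, PDF_G and SampleG stand in for whatever distributions you are actually using, and M is the scale factor described above):

#include <random>

template <typename PDF_F_LAMBDA, typename PDF_G_LAMBDA, typename SAMPLE_G_LAMBDA>
float SampleFfromG (std::mt19937& rng, const PDF_F_LAMBDA& PDF_F, const PDF_G_LAMBDA& PDF_G, const SAMPLE_G_LAMBDA& SampleG, float M)
{
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);
    while (true)
    {
        float x = SampleG(rng);                          // 1. draw a candidate from g
        float acceptChance = PDF_F(x) / (M * PDF_G(x));  // 2. the ratio f(x) / (M*g(x))
        if (dist(rng) <= acceptChance)                   // 3. accept with that probability
            return x;
    }
}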

Let’s see this in action!

We’ll generate numbers in a Gaussian distribution with a mean of 15 and a standard deviation of 5. We’ll truncate it to +/- 3 standard deviations so we want to generate random numbers from [0,30).

To generate these numbers, we’ll draw random numbers from the PDF y=x*0.002222. We’ll use an M value of 3 to scale up this PDF to always be greater than the Gaussian one.
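
Quick aside: to draw those random numbers from g in the first place, we can invert g’s CDF, using the technique from the CDF inversion post. The derivation (it matches the comments in the code at the end of this post) is:

g(x)=0.002222x \Leftarrow \text{PDF g}\\ G(x)=0.001111x^2 \Leftarrow \text{CDF of g (integrate the PDF)}\\ G^{-1}(u)=\sqrt{u/0.001111}=\frac{1000\sqrt{u}}{\sqrt{1111}} \Leftarrow \text{Inverted CDF of g}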

Here is how it looks doing this with 20,000 samples:

We generate random numbers along the red line, multiply them by 3 to make them be the yellow line. Then, at whatever point we are at on the x axis, we divide the blue line value by the yellow line value and use that as an acceptance probability. Doing this and counting numbers in a histogram gives us our result – the green line. Since the end goal is the blue line, you can see it is indeed working! With a larger number of samples, the green line would more closely match the blue line.

Here’s the graph of the failed tests:

We have to take on average 3 samples before we get a valid random number. That shouldn’t be too surprising because both PDF’s have an area of 1, but we are multiplying one of them by 3 to make it always be larger than the other.

Something else interesting you might notice is that we have a lot fewer failed tests where the two PDF functions are more similar.

That is the power of this technique: If you can cheaply and easily generate samples that are “pretty close” to a harder distribution to sample from, you can use this technique to more cheaply sample from it.

Something to note is that just like in the last section, the target function doesn’t necessarily need to be a real PDF with non negative values that integrate to 1. It would work just the same with a non PDF function, so long as the scaled PDF you are drawing random numbers from (M times g) is always above the function.

Some Other Notes

There is a family of techniques called “adaptive rejection sampling” that will change the PDF they are drawing from whenever there is a failed test.

Basically, if you imagine the PDF you are drawing from as being a bunch of line segments connected together, you could imagine that whenever you failed a test, you moved a line segment down to be closer to the curve, so that when you sampled from that area again, the chances would be lower that you’d fail the test again.

Taking this to the limit, your sampling PDF will eventually become the PDF you are trying to sample from, at which point the rejection test essentially becomes a no-op because samples are almost never rejected.

These techniques are a continued area of research.

Something else to note is that rejection sampling can be used to find random points within shapes.

For instance, finding a random point in a triangle, ellipse or circle could be done by putting a (tight) bounding box around the shape, generating points randomly in that box, and only accepting ones within the inner shape.

This can be extended to 3d shapes as well.

Some shapes have better ways to generate points within them that don’t involve iteration and rejected samples, but if all else fails, rejection sampling does indeed work!
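
For example, here’s a minimal sketch of picking a uniform random point inside a circle by rejection sampling from its bounding square (this isn’t from the post’s code listing; the names are just for illustration):

#include <random>

struct Point { float x, y; };

Point RandomPointInUnitCircle (std::mt19937& rng)
{
    std::uniform_real_distribution<float> dist(-1.0f, 1.0f);
    while (true)
    {
        Point p = { dist(rng), dist(rng) }; // uniform random point in the bounding square
        if (p.x * p.x + p.y * p.y <= 1.0f)  // inside the circle? accept it
            return p;
    }
}

On average this accepts pi/4 (about 78.5%) of the points, since that’s the ratio of the circle’s area to the square’s area.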

At some point in the future I’d like to look into “Markov Chain Monte Carlo” more deeply. It seems like a very interesting technique to approach this same problem, but I have no idea if it’s used often in graphics, especially real time graphics.

Code

Here is the code that generated all the data from this post. The data was visualized with open office.

#define _CRT_SECURE_NO_WARNINGS
 
#include <stdio.h>
#include <random>
#include <array>
#include <unordered_map>
#include <algorithm> // for std::min
#include <cmath>     // for std::floor, std::sqrt, std::exp, std::pow

template <size_t NUM_TEST_SAMPLES, size_t SIMULATED_DICE_SIDES, size_t ACTUAL_DICE_SIDES>
void TestDice (const char* fileName)
{
    // seed the random number generator
    std::random_device rd;
    std::mt19937 rng(rd());
    std::uniform_int_distribution<size_t> dist(0, ACTUAL_DICE_SIDES-1);

    // generate the histogram
    std::array<size_t, SIMULATED_DICE_SIDES> histogram = { 0 };
    size_t rejectedSamples = 0;
    for (size_t i = 0; i < NUM_TEST_SAMPLES; ++i)
    {
        size_t roll = dist(rng);
        while (roll >= SIMULATED_DICE_SIDES)
        {
            ++rejectedSamples;
            roll = dist(rng);
        }
        histogram[roll]++;
    }

    // write the histogram and rejected sample count to a csv
    // an extra 0 data point forces the graph to include 0 in the scale. hack to make the data not look noisier than it really is.
    FILE *file = fopen(fileName, "w+t");
    fprintf(file, "Actual Count, Expected Count, , %0.2f samples needed per roll on average.\n", (float(NUM_TEST_SAMPLES) + float(rejectedSamples)) / float(NUM_TEST_SAMPLES));
    for (size_t value : histogram)
        fprintf(file, "%zu,%zu,0\n", value, (size_t)(float(NUM_TEST_SAMPLES) / float(SIMULATED_DICE_SIDES)));
    fclose(file);
}
 
template <size_t NUM_TEST_SAMPLES, size_t NUM_HISTOGRAM_BUCKETS, typename PDF_LAMBDA>
void Test (const char* fileName, float maxPDFValue, const PDF_LAMBDA& PDF)
{
    // seed the random number generator
    std::random_device rd;
    std::mt19937 rng(rd());
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);
 
    // generate the histogram
    std::array<size_t, NUM_HISTOGRAM_BUCKETS> histogram = { 0 };
    std::array<size_t, NUM_HISTOGRAM_BUCKETS> failedTestCounts = { 0 };
    size_t rejectedSamples = 0;
    for (size_t i = 0; i < NUM_TEST_SAMPLES; ++i)
    {
        // Generate a sample from the PDF by generating a random 2d point.
        // If the y axis of the value is <= the value returned by PDF(x), accept it, else reject it.
        // NOTE: this takes an unknown number of iterations, and technically may NEVER finish.
        float pointX = 0.0f;
        float pointY = 0.0f;
        bool validPoint = false;
        while (!validPoint)
        {
            pointX = dist(rng);
            pointY = dist(rng) * maxPDFValue;
            float pdfValue = PDF(pointX);
            validPoint = (pointY <= pdfValue);

            // track number of failed tests per histogram bucket
            if (!validPoint)
            {
                size_t bin = (size_t)std::floor(pointX * float(NUM_HISTOGRAM_BUCKETS));
                failedTestCounts[std::min(bin, NUM_HISTOGRAM_BUCKETS - 1)]++;
                ++rejectedSamples;
            }
        }
 
        // increment the correct bin in the histogram
        size_t bin = (size_t)std::floor(pointX * float(NUM_HISTOGRAM_BUCKETS));
        histogram[std::min(bin, NUM_HISTOGRAM_BUCKETS -1)]++;
    }
 
    // write the histogram and pdf sample to a csv
    FILE *file = fopen(fileName, "w+t");
    fprintf(file, "PDF, Simulated PDF, Generating Function, Failed Tests, %0.2f samples needed per value on average.\n", (float(NUM_TEST_SAMPLES) + float(rejectedSamples)) / float(NUM_TEST_SAMPLES));
    for (size_t i = 0; i < NUM_HISTOGRAM_BUCKETS; ++i)
    {
        float x = (float(i) + 0.5f) / float(NUM_HISTOGRAM_BUCKETS);
        float pdfSample = PDF(x);
        fprintf(file, "%f,%f,%f,%f\n",
            pdfSample,
            NUM_HISTOGRAM_BUCKETS * float(histogram[i]) / float(NUM_TEST_SAMPLES),
            maxPDFValue,
            float(failedTestCounts[i])
        );
    }
    fclose(file);
}

template <size_t NUM_TEST_SAMPLES, size_t NUM_HISTOGRAM_BUCKETS, typename PDF_LAMBDA>
void TestNotPDF (const char* fileName, float maxPDFValue, float normalizationConstant, const PDF_LAMBDA& PDF)
{
    // seed the random number generator
    std::random_device rd;
    std::mt19937 rng(rd());
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);
 
    // generate the histogram
    std::array<size_t, NUM_HISTOGRAM_BUCKETS> histogram = { 0 };
    std::array<size_t, NUM_HISTOGRAM_BUCKETS> failedTestCounts = { 0 };
    size_t rejectedSamples = 0;
    for (size_t i = 0; i < NUM_TEST_SAMPLES; ++i)
    {
        // Generate a sample from the PDF by generating a random 2d point.
        // If the y axis of the value is <= the value returned by PDF(x), accept it, else reject it.
        // NOTE: this takes an unknown number of iterations, and technically may NEVER finish.
        float pointX = 0.0f;
        float pointY = 0.0f;
        bool validPoint = false;
        while (!validPoint)
        {
            pointX = dist(rng);
            pointY = dist(rng) * maxPDFValue;
            float pdfValue = PDF(pointX);
            validPoint = (pointY <= pdfValue);

            // track number of failed tests per histogram bucket
            if (!validPoint)
            {
                size_t bin = (size_t)std::floor(pointX * float(NUM_HISTOGRAM_BUCKETS));
                failedTestCounts[std::min(bin, NUM_HISTOGRAM_BUCKETS - 1)]++;
                ++rejectedSamples;
            }
        }
 
        // increment the correct bin in the histogram
        size_t bin = (size_t)std::floor(pointX * float(NUM_HISTOGRAM_BUCKETS));
        histogram[std::min(bin, NUM_HISTOGRAM_BUCKETS -1)]++;
    }
 
    // write the histogram and pdf sample to a csv
    FILE *file = fopen(fileName, "w+t");
    fprintf(file, "Function, Simulated PDF, Scaled Simulated PDF, Generating Function, Failed Tests, %0.2f samples needed per value on average.\n", (float(NUM_TEST_SAMPLES) + float(rejectedSamples)) / float(NUM_TEST_SAMPLES));
    for (size_t i = 0; i < NUM_HISTOGRAM_BUCKETS; ++i)
    {
        float x = (float(i) + 0.5f) / float(NUM_HISTOGRAM_BUCKETS);
        float pdfSample = PDF(x);
        fprintf(file, "%f,%f,%f,%f,%f\n",
            pdfSample,
            NUM_HISTOGRAM_BUCKETS * float(histogram[i]) / float(NUM_TEST_SAMPLES),
            NUM_HISTOGRAM_BUCKETS * float(histogram[i]) / float(NUM_TEST_SAMPLES) * normalizationConstant,
            maxPDFValue,
            float(failedTestCounts[i])
        );
    }
    fclose(file);
}

template <size_t NUM_TEST_SAMPLES, size_t NUM_HISTOGRAM_BUCKETS, typename PDF_F_LAMBDA, typename PDF_G_LAMBDA, typename INVERSE_CDF_G_LAMBDA>
void TestPDFToPDF (const char* fileName, const PDF_F_LAMBDA& PDF_F, const PDF_G_LAMBDA& PDF_G, float M, const INVERSE_CDF_G_LAMBDA& Inverse_CDF_G, float rngRange)
{
    // We generate a sample from PDF F by generating a sample from PDF G, and accepting it with probability PDF_F(x)/(M*PDF_G(x))

    // seed the random number generator
    std::random_device rd;
    std::mt19937 rng(rd());
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);
 
    // generate the histogram
    std::array<size_t, NUM_HISTOGRAM_BUCKETS> histogram = { 0 };
    std::array<size_t, NUM_HISTOGRAM_BUCKETS> failedTestCounts = { 0 };
    size_t rejectedSamples = 0;
    for (size_t i = 0; i < NUM_TEST_SAMPLES; ++i)
    {
        // generate random points until we have one that's accepted
        // NOTE: this takes an unknown number of iterations, and technically may NEVER finish.
        float sampleG = 0.0f;
        bool validPoint = false;
        while (!validPoint)
        {
            // Generate a sample from the source PDF G
            sampleG = Inverse_CDF_G(dist(rng));

            // calculate the ratio of how likely we are to accept this sample
            float acceptChance = PDF_F(sampleG) / (M * PDF_G(sampleG));

            // see if we should accept it
            validPoint = dist(rng) <= acceptChance;

            // track number of failed tests per histogram bucket
            if (!validPoint)
            {
                size_t bin = (size_t)std::floor(sampleG * float(NUM_HISTOGRAM_BUCKETS) / rngRange);
                failedTestCounts[std::min(bin, NUM_HISTOGRAM_BUCKETS - 1)]++;
                ++rejectedSamples;
            }
        }

        // increment the correct bin in the histogram
        size_t bin = (size_t)std::floor(sampleG * float(NUM_HISTOGRAM_BUCKETS) / rngRange);
        histogram[std::min(bin, NUM_HISTOGRAM_BUCKETS - 1)]++;
    }
 
    // write the histogram and pdf sample to a csv
    FILE *file = fopen(fileName, "w+t");
    fprintf(file, "PDF F,PDF G,Scaled PDF G,Simulated PDF,Failed Tests,%0.2f samples needed per value on average.\n", (float(NUM_TEST_SAMPLES) + float(rejectedSamples)) / float(NUM_TEST_SAMPLES));
    for (size_t i = 0; i < NUM_HISTOGRAM_BUCKETS; ++i)
    {
        float x = (float(i) + 0.5f) * rngRange / float(NUM_HISTOGRAM_BUCKETS);
        
        fprintf(file, "%f,%f,%f,%f,%f\n",
            PDF_F(x),
            PDF_G(x),
            PDF_G(x)*M,
            NUM_HISTOGRAM_BUCKETS * float(histogram[i]) / (float(NUM_TEST_SAMPLES)*rngRange),
            float(failedTestCounts[i])
        );
    }
    fclose(file);
}
 
int main(int argc, char **argv)
{
    // Dice
    {
        // Simulate a 5 sided dice with a 6 sided dice
        TestDice<10000, 5, 6>("test1_5_6.csv");

        // Simulate a 5 sided dice with a 20 sided dice
        TestDice<10000, 5, 20>("test1_5_20.csv");
    }

    // PDF y=2x, simulated with a uniform distribution
    {
        auto PDF = [](float x) { return 2.0f * x; };

        Test<1000, 100>("test2_1k.csv", 2.0f, PDF);
        Test<100000, 100>("test2_100k.csv", 2.0f, PDF);
        Test<1000000, 100>("test2_1m.csv", 2.0f, PDF);
    }

    // PDF y=(x^3-10x^2+5x+11)/10.417, simulated with a uniform distribution
    {
        auto PDF = [](float x) {return (x*x*x - 10.0f*x*x + 5.0f*x + 11.0f) / (10.417f); };
        Test<10000000, 100>("test3_10m_1_15.csv", 1.15f, PDF);
        Test<10000000, 100>("test3_10m_1_5.csv", 1.5f, PDF);
        Test<10000000, 100>("test3_10m_2_8.csv", 2.8f, PDF);
    }

    // function (not PDF, Doesn't integrate to 1!) y=(x^3-10x^2+5x+11), simulated with a scaled up uniform distribution
    {
        auto PDF = [](float x) {return (x*x*x - 10.0f*x*x + 5.0f*x + 11.0f); };
        TestNotPDF<10000000, 100>("test4_10m_12_5.csv", 12.5f, 10.417f, PDF);
    }

    // Generate samples from PDF F using samples from PDF G.  random numbers are from 0 to 30.
    // F PDF = gaussian distribution, mean 15, std dev of 5.  Truncated to +/- 3 stddeviations.
    // G PDF = x*0.002222
    // G CDF = 0.001111 * x^2
    // G inverted CDF = (1000 * sqrt(x)) / sqrt(1111)
    // M = 3
    {
        // gaussian PDF F
        const float mean = 15.0f;
        const float stddev = 5.0f;
        const float pi = 3.14159265359f; // std::_Pi is MSVC specific, so define pi explicitly
        auto PDF_F = [=] (float x) -> float
        {
            return (1.0f / (stddev * std::sqrt(2.0f * pi))) * std::exp(-0.5f * std::pow((x - mean) / stddev, 2.0f));
        };

        // PDF G
        auto PDF_G = [](float x) -> float
        {
            return x * 0.002222f;
        };

        // Inverse CDF of G
        auto Inverse_CDF_G = [] (float x) -> float
        {
            return 1000.0f * std::sqrt(x) / std::sqrt(1111.0f);
        };

        TestPDFToPDF<20000, 100>("test5.csv", PDF_F, PDF_G, 3.0f, Inverse_CDF_G, 30.0f);
    }

    return 0;
}

Generating Random Numbers From a Specific Distribution By Inverting the CDF

The last post talked about the normal distribution and showed how to generate random numbers from that distribution by generating regular (uniform) random numbers and then counting the bits.

What would you do if you wanted to generate random numbers from a different, arbitrary distribution though? Let’s even say the distribution is defined by a function.

It turns out that in general this is a hard problem, but in practice there are a few ways to approach it. Below are the most common techniques I’ve seen for achieving this.

  • Inverting the CDF (analytically or numerically)
  • Rejection Sampling
  • Markov Chain Monte Carlo
  • Ziggurat algorithm

This post talks about the first one listed: Inverting the CDF.

What Is A CDF?

The last post briefly explained that a PDF is a probability density function and that it describes the relative probability of numbers being chosen at random. A requirement of a PDF is that it is non negative everywhere and also that the area under the curve is 1.

It needs to be non negative everywhere because a negative probability doesn’t make any sense. It needs to have an area under the curve of 1 because that means it represents the full 100% probability of all possible outcomes.

CDF stands for “Cumulative distribution function” and is related to the PDF.

A PDF is a function y=f(x) where y is the relative probability of the number x being chosen at random from the distribution.

A CDF is a function y=f(x) where y is the probability of the number x, or any lower number, being chosen at random from that distribution.

You get a CDF from a PDF by integrating the PDF. From there you make sure that the CDF has a starting y value of 0, and an ending value of 1. You might have to do a bias (addition or subtraction) and/or scale (multiplication or division) to make that happen.

Why Invert the CDF? (And Not the PDF?)

With both a PDF and a CDF, you plug in a number, and you get information about probabilities relating to that number.

To get a random number from a specific distribution, we want to do the opposite. We want to plug in a probability and get out the number corresponding to that probability.

Basically, we want to flip x and y in the equation and solve for y, so that we have a function that does this. That is what we have to do to invert the CDF.

Why invert the CDF though and not the PDF? Check out the images below from Wikipedia. The first is some Gaussian PDF’s and the second is the same distributions as CDF’s:


The issue is that if we flip x and y in a PDF, there would be multiple y values corresponding to the same x, since a PDF isn’t a one to one function. This isn’t true of a CDF, which never decreases.

Let’s work through sampling some PDFs by inverting the CDF.

Example 0: y=1

This is the easiest case and represents uniform random numbers, where every number is evenly likely to be chosen.

Our PDF equation is: y=1 where x \in [0,1]. The graph looks like this:

If we integrate the pdf to get the cdf, we get y=x where x \in [0,1] which looks like this:

Now, to invert the cdf, we flip x and y, and then solve for y again. It’s trivially easy…

y=x \Leftarrow \text{CDF}\\ x=y \Leftarrow \text{Flip x and y}\\ y=x \Leftarrow \text{Solve for y again}

Now that we have our inverted CDF, which is y=x, we can generate uniform random numbers, plug them into that equation as x and get y which is the actual value drawn from our PDF.

You can see that since we are plugging in numbers from an even distribution and not doing anything to them at all, the result is going to be an even distribution as well. So, we are in fact generating uniformly distributed random numbers using this inverted CDF, just like our PDF asked for.

This is so trivially simple it might be confusing. If so, don’t sweat it. Move onto the next example and you can come back to this later if you want to understand what I’m talking about here.

Note: The rest of the examples are going to have x in [0,1] as well but we are going to stop explicitly saying so. This process still works when x is in a different range of values, but for simplicity we’ll just have x be in [0,1] for the rest of the post.

Example 1: y=2x

The next easiest case for a PDF is y=2x which looks like this:

You might wonder why it’s y=2x instead of y=x. This is because the area under the curve y=x is 0.5. PDF’s need to have an area of 1, so I multiplied by 2 to make it have an area of 1.

What this PDF means is that small numbers are less likely to be picked than large numbers.

If we integrate the PDF y=2x to get the CDF, we get y=x^2 which looks like this:

Now let’s flip x and y and solve for y again.

y=x^2 \Leftarrow \text{CDF}\\ x=y^2 \Leftarrow \text{Flip x and y}\\ y=\sqrt{x} \Leftarrow \text{Solve for y again}

We now have our inverted CDF which is y=\sqrt{x} and looks like this:

Now, if we plug uniformly random numbers into that formula as x, we should get as output samples that follow the probability of our PDF.

We can use a histogram to see if this is really true. We can generate some random numbers, square root them, and count how many are in each range of values.
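
Here’s a minimal sketch of that test, assuming 100 histogram buckets (the Histogram_Y_Equals_2X name is just for illustration; the full code is at the end of the post):

#include <algorithm>
#include <array>
#include <cmath>
#include <random>

std::array<size_t, 100> Histogram_Y_Equals_2X (size_t numSamples)
{
    std::random_device rd;
    std::mt19937 rng(rd());
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);

    std::array<size_t, 100> histogram = { 0 };
    for (size_t i = 0; i < numSamples; ++i)
    {
        float y = std::sqrt(dist(rng)); // the inverted CDF of the PDF y=2x
        size_t bin = std::min((size_t)std::floor(y * 100.0f), (size_t)99);
        histogram[bin]++;               // count how many samples land in each bucket
    }
    return histogram;
}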

Here is a histogram where I took 1,000 random numbers, square rooted them, and put their counts into 100 buckets. Bucket 1 counted how many numbers were in [0, 0.01), bucket 2 counted how many numbers were in [0.01, 0.02) and so on until bucket 100 which counted how many numbers were in [0.99, 1.0).

Increasing the number of samples to 100,000 it gets closer:

At 1,000,000 samples you can barely see a difference:

The reason it doesn’t match up at lower sample counts is just due to the nature of random numbers being random. It does match up, but you’ll have some variation with lower sample counts.

Example 2: y=3x^2

Let’s check out the PDF y=3x^2. The area under that curve where x is in [0,1) is 1.0 and it’s non negative everywhere in that range too, so it’s a valid PDF.

Integrating that, we get y=x^3 for the CDF. Then we invert the CDF:

y=x^3 \Leftarrow \text{CDF}\\ x=y^3 \Leftarrow \text{Flip x and y}\\ y=\sqrt[3]{x} \Leftarrow \text{Solve for y again}

And here is a 100,000 sample histogram vs the PDF to verify that we got the right answer:

Example 3: Numeric Solution

So far we’ve been able to invert the CDF to get a nice easy function to transform uniform distribution random numbers into numbers from the distribution described by the PDF.

Sometimes though, inverting a CDF isn’t possible, or gives a complex equation that is costly to evaluate. In these cases, you can actually invert the CDF numerically via a lookup table.

A lookup table may also be desired in cases where eg you have a pixel shader that is drawing numbers from a PDF, and instead of making N shaders for N different PDFs, you want to unify them all into a single shader. Passing a lookup table via a constant buffer, or perhaps even via a texture can be a decent solution here. (Note: if storing in a texture you may be interested in fitting the data with curves and using this technique to store it and recall it from the texture: GPU Texture Sampler Bezier Curve Evaluation)

Let’s invert a PDF numerically using a look up table to see how that would work.

Our PDF will be:

y=\frac{x^3-10x^2+5x+11}{10.417}

And looks like this:

It’s non negative in the range we care about and it integrates to 1.0 – or it integrates closely enough… the division by 10.417 is there for that reason, and using more digits would get it closer to 1.0.

What we are going to do is evaluate that PDF at N points to get a probability for those samples of numbers. That will give us a lookup table for our PDF.

We are then going to make each point be the sum of all the PDF samples to the left of it to make a lookup table for a CDF. We’ll also have to normalize the CDF table since it’s likely that our PDF samples don’t all add up (integrate) to 1.0. We do this by dividing every item in the CDF by the last entry in the CDF. If you look at the table after that, it will fully cover everything from 0% to 100% probability.
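
Condensed into a small function, the table construction looks like this (it’s essentially the first half of the TestPDFOnly function in the code at the end of the post):

#include <array>

template <size_t LOOKUP_TABLE_SIZE, typename PDF_LAMBDA>
std::array<float, LOOKUP_TABLE_SIZE> MakeCDFLookupTable (const PDF_LAMBDA& PDF)
{
    std::array<float, LOOKUP_TABLE_SIZE> cdf;
    float sum = 0.0f;
    for (size_t i = 0; i < LOOKUP_TABLE_SIZE; ++i)
    {
        float x = float(i) / float(LOOKUP_TABLE_SIZE - 1); // sample the PDF across [0,1]
        sum += PDF(x);                                     // running sum = unnormalized CDF
        cdf[i] = sum;
    }
    for (float& f : cdf)
        f /= sum;                                          // normalize so the CDF ends at 1.0
    return cdf;
}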

Below are some histogram comparisons of the lookup table technique vs the actual PDF.

Here are 100 million samples (to make it easier to see the data without very much random noise), in 100 histogram buckets, and a lookup table size of 3, which is pretty low quality:

Increasing it to a lookup table of size 5 gives you this:

Here’s 10:

25:

And here’s 100:

So, not surprisingly, the size of the lookup table affects the quality of the results!

Code

Here is the code I used to generate the data in this post, which I visualized with open office. I visualized the function graphs using Wolfram Alpha.

#define _CRT_SECURE_NO_WARNINGS

#include <stdio.h>
#include <random>
#include <array>
#include <algorithm>
#include <cmath>

template <size_t NUM_TEST_SAMPLES, size_t NUM_HISTOGRAM_BUCKETS, typename PDF_LAMBDA, typename INVERSE_CDF_LAMBDA>
void Test (const char* fileName, const PDF_LAMBDA& PDF, const INVERSE_CDF_LAMBDA& inverseCDF)
{
    // seed the random number generator
    std::random_device rd;
    std::mt19937 rng(rd());
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);

    // generate the histogram
    std::array<size_t, NUM_HISTOGRAM_BUCKETS> histogram = { 0 };
    for (size_t i = 0; i < NUM_TEST_SAMPLES; ++i)
    {
        // put a uniform random number into the inverted CDF to sample the PDF
        float x = dist(rng);
        float y = inverseCDF(x);

        // increment the correct bin on the histogram
        size_t bin = (size_t)std::floor(y * float(NUM_HISTOGRAM_BUCKETS));
        histogram[std::min(bin, NUM_HISTOGRAM_BUCKETS -1)]++;
    }

    // write the histogram and pdf sample to a csv
    FILE *file = fopen(fileName, "w+t");
    fprintf(file, "PDF, Inverted CDF\n");
    for (size_t i = 0; i < NUM_HISTOGRAM_BUCKETS; ++i)
    {
        float x = (float(i) + 0.5f) / float(NUM_HISTOGRAM_BUCKETS);
        float pdfSample = PDF(x);
        fprintf(file, "%f,%f\n",
            pdfSample,
            NUM_HISTOGRAM_BUCKETS * float(histogram[i]) / float(NUM_TEST_SAMPLES)
        );
    }
    fclose(file);
}

template <size_t NUM_TEST_SAMPLES, size_t NUM_HISTOGRAM_BUCKETS, size_t LOOKUP_TABLE_SIZE, typename PDF_LAMBDA>
void TestPDFOnly (const char* fileName, const PDF_LAMBDA& PDF)
{
    // make the CDF lookup table by sampling the PDF
    // NOTE: we could integrate the buckets by averaging multiple samples instead of just the 1. This bucket integration is pretty low tech and low quality.
    std::array<float, LOOKUP_TABLE_SIZE> CDFLookupTable;
    float value = 0.0f;
    for (size_t i = 0; i < LOOKUP_TABLE_SIZE; ++i)
    {
        float x = float(i) / float(LOOKUP_TABLE_SIZE - 1); // The -1 is so we cover the full range from 0% to 100%
        value += PDF(x);
        CDFLookupTable[i] = value;
    }

    // normalize the CDF - make sure we span the probability range 0 to 1.
    for (float& f : CDFLookupTable)
        f /= value;

    // make our LUT based inverse CDF
    // We will binary search over the y's (which are sorted smallest to largest) looking for the x, which is implied by the index.
    // I'm sure there's a better & more clever lookup table setup for this situation but this should give you an idea of the technique
    auto inverseCDF = [&CDFLookupTable] (float y) {

        // there is an implicit entry of "0%" at index -1
        if (y <= CDFLookupTable[0]) // <= so that hitting the first entry exactly never reaches the [index - 1] lookup below
        {
            float t = y / CDFLookupTable[0];
            return t / float(LOOKUP_TABLE_SIZE);
        }

        // get the lower bound in the lut using a binary search
        auto it = std::lower_bound(CDFLookupTable.begin(), CDFLookupTable.end(), y);

        // figure out where we are at in the table
        size_t index = it - CDFLookupTable.begin();

        // Linearly interpolate between the values
        // NOTE: could do other interpolation methods, like perhaps cubic (https://blog.demofox.org/2015/08/08/cubic-hermite-interpolation/)
        float t = (y - CDFLookupTable[index - 1]) / (CDFLookupTable[index] - CDFLookupTable[index - 1]);
        float fractionalIndex = float(index) + t;
        return fractionalIndex / float(LOOKUP_TABLE_SIZE);
    };

    // call the usual function to do the testing
    Test<NUM_TEST_SAMPLES, NUM_HISTOGRAM_BUCKETS>(fileName, PDF, inverseCDF);
}

int main (int argc, char **argv)
{
    // PDF: y=2x
    // inverse CDF: y=sqrt(x)
    {
        auto PDF = [] (float x) { return 2.0f * x; };
        auto inverseCDF = [] (float x) { return std::sqrt(x); };

        Test("test1_1k.csv", PDF, inverseCDF);
        Test("test1_100k.csv", PDF, inverseCDF);
        Test("test1_1m.csv", PDF, inverseCDF);
    }

    // PDF: y=3x^2
    // inverse CDF: y=cuberoot(x) aka y = pow(x, 1/3)
    {
        auto PDF = [] (float x) { return 3.0f * x * x; };
        auto inverseCDF = [](float x) { return std::pow(x, 1.0f / 3.0f); };

        Test("test2_100k.csv", PDF, inverseCDF);
    }

    // PDF: y=(x^3-10x^2+5x+11)/10.417
    // Inverse CDF Numerically via a lookup table
    {
        auto PDF = [] (float x) {return (x*x*x - 10.0f*x*x + 5.0f*x + 11.0f) / (10.417f); };
        TestPDFOnly("test3_100m_3.csv", PDF);
        TestPDFOnly("test3_100m_5.csv", PDF);
        TestPDFOnly("test3_100m_10.csv", PDF);
        TestPDFOnly("test3_100m_25.csv", PDF);
        TestPDFOnly("test3_100m_100.csv", PDF);
    }

    return 0;
}