I’ve heard that when sampling in 1D, such as the unit interval [0,1], you should keep your samples away from the edges of the sampling domain (0 and 1). I’ve also heard that when sampling in 2D, such as the unit square, you should keep the samples away from the origin and presumably the edges as well.
That came up in a recent conversation (https://mastodon.gamedev.place/@jkaniarz/110032776950329500), so this post looks at the evidence of whether or not that is true – looking at regular sampling, blue noise sampling, and low discrepancy sequences (LDS). LDS are a bit of a tangent, but it’s good to see how they converge in relation to the other sampling types.
To be honest, my interest in blue noise has long drifted away from blue noise sample points and is more about blue noise textures (and beyond! Look for more on that soon), but we’ll still look at blue noise in this post since that is part of what the linked conversation is about!
I’ve lost interest in blue noise sample points because they aren’t very good for convergence (although projective blue noise is interesting https://resources.mpi-inf.mpg.de/ProjectiveBlueNoise/ProjectiveBlueNoise.pdf), and they aren’t good at making render noise look better. Blue noise textures are better for the second but aren’t the best for convergence (even when extended temporally), so they are mainly useful for low sample counts. This is why they are a perfect match for real-time rendering.
The value remaining in blue noise sample points, in my opinion, is getting them as the result of a binary rendering algorithm, such as stochastic transparency. There the goal isn’t really to generate them, but more that you want your real time algorithm’s noise to roughly follow their pattern. Blue noise sample points have value in these situations for non real time rendering too, such as stippling.
It turns out there is a utility to generate and test sample points that you may be interested in, since you are reading this post: https://github.com/utk-team/utk
1D Numberline Sampling
This group of tests will primarily look at two types of 1D sampling: regular sampling and blue noise sampling. The golden ratio low discrepancy sequence, stratified sampling and white noise are also included.
So first up, let’s talk about the different regular sampling options. The difference between them all is how you calculate the position x of sample i out of N samples, where 0 ≤ i < N.
Regular Ends: x = i/(N-1). 4 samples would be at 0/3, 1/3, 2/3 and 3/3.
Regular Left: x = i/N. 4 samples would be at 0/4, 1/4, 2/4 and 3/4.
Regular Center: x = (i+0.5)/N. 4 samples would be at 1/8, 3/8, 5/8, 7/8.
Regular Center Equal: x = (i+1)/(N+1). 4 samples would be at 1/5, 2/5, 3/5, 4/5.
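In code, a sketch of those four variants for sample i of N looks like this:

float RegularEnds(int i, int N)        { return float(i) / float(N - 1); }
float RegularLeft(int i, int N)        { return float(i) / float(N); }
float RegularCenter(int i, int N)      { return (float(i) + 0.5f) / float(N); }
float RegularCenterEqual(int i, int N) { return float(i + 1) / float(N + 1); }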
Believe it or not, which of those you choose has a big difference for the error of your sampling!
There are also four flavors of blue noise.
Blue Wrap: Blue noise is made on a 1D number line using Mitchell’s Best Candidate algorithm, using wrap-around distance (see the sketch after this list). A point at 1.0 is the same as a point at 0.0.
Blue No Wrap: Same as above, but distance does not wrap around. A point at 1.0 is the farthest a point can be from 0.0.
Blue No Wrap Edge: Same as above, but the algorithm pretends there are points at 0.0 and 1.0 that repel candidates by including the distance to the edge in their score.
Blue No Wrap Half Edge: Same as above, but the distance to the edge is divided by 2, making the edge repulsion half as strong.
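Here is a sketch of the wrap-around distance mentioned in “Blue Wrap”, treating the [0,1] domain as a circle (std::abs needs <cmath>, std::min needs <algorithm>):

float DistanceWrap(float a, float b)
{
    float dist = std::abs(a - b);
    return std::min(dist, 1.0f - dist);
}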
Lastly, there is also:
White Noise: white noise random numbers that go wherever they want, without regard to other points.
Stratified: This is like “Regular Left” but a random number is added in. x = (i + rand01()) / N.
Golden Ratio: This is the 1D golden ratio low discrepancy sequence. x = fract(i * φ).
Here are 20 of each:
Those sequences are tested on both smooth and non-smooth functions:
Smooth: A cubic Bezier curve is randomly generated with each of the four control points between -10 and 10, and each sequence is used to integrate the curve using 200 points. This is done 1000 times to calculate RMSE graphs.
Non Smooth: A random line is generated for each of the four 1/4 sized regions between 0 and 1. The two end points of the line are generated to be between -10 and 10, so the lines are usually not continuous.
First let’s look at regular sampling of smooth and non smooth functions.
In both cases, white noise does absolutely terribly. Friends don’t let friends use white noise!
Also in both cases, “Regular Center” does the best, with stratified sampling coming in second.
It’s important to note that while this type of regular sampling shows better convergence than stratified sampling, regular sampling has problems with aliasing that randomized sampling, like stratified, doesn’t have.
Looking at blue noise sampling next, it doesn’t seem to really matter whether you wrap or not. White noise, stratified, and golden ratio are included to help compare blue noise sampling with the regular sampling types.
2D Square Sampling
The sampling types used here are:
White Noise
Stratified
Regular Ends
Regular Left
Regular Center
Regular Center Equal
Hex Grid: Like regular grid, but each row is half a cell offset from the row above it.
Fibonacci: The x axis is the golden ratio LDS, and the y axis is i/N.
Blue Wrap
Blue No Wrap
Blue No Wrap Edge
Blue No Wrap Half Edge
Here are 200 of each:
For the smooth tests, we generate a random bicubic surface, which has a grid of 4×4 control points, each being between -10 and 10.
For the non smooth tests, we generate a 2×2 grid where each cell is a bilinear surface with 4 control points, so it is not C0 continuous at the edges.
We also have a third test this time, for a function that is non separable. For this, we pick a random point as a grid origin, and a grid scale, and the value at each point in the square is the sine of the distance to the origin.
We once again do 1000 tests, with 200 samples per test.
First is regular sampling. White noise does terribly as always. Regular center is the winner, with hex grid coming in second, and stratified in third. This shows that in 2D, like in 1D, avoiding the edges of the sampling domain by half a sample distance is a good idea.
Next up is low discrepancy sequences. Fibonacci does well pretty consistently, and so does Burley Sobol. R2 doesn’t do very well in these tests, but these types of tests usually have much higher sample counts, and R2 is much more competitive there in my experience.
Lastly is the blue noise. Unlike in 1D, it does seem that blue noise cares about wrapping here in 2D, and that wrapping is best, with the half edge no wrapping being in second place. The half edge doing well also indicates that sampling away from the edges is a good idea. Blue noise sample points still don’t converge very well though.
2D Circle Sampling
This last group of tests is on samples in a circle. Unlike a 1D numberline or a 2D square, there is no clear way to calculate “wrap around” distance. You might consider all points near the edge to be near each other, but that is one of many ways to think about wrap around distance in a circle. You could also consider all points near the edge to be next to the center of the circle perhaps.
The sampling used here is:
White
Regular Grid: “Regular center”, but points outside of the circle are rejected. This is done iteratively with more points until the right number of points fit in the circle. Since it can’t always exactly match the count needed, it gets as close as it can, and then adds white noise points to fill in the rest.
Regular Grid Circle: “Regular center”, but the x axis is multiplied by 2π to make an angle, and the y axis is square rooted to make a radius (see the sketch after this list). This transformation maps a uniform random point in a square to a uniform random point in a circle and will be used by other sampling types too.
Stratified: angle and radius as stratified.
Stratified Circle: A stratified square is mapped to circle.
Hex Grid: With rejection and filling in with white noise.
Hex Grid Circle: Hex grid is mapped to circle
R2: with rejection
R2 Circle: R2 mapped to circle.
Halton23: with rejection
Halton23 Circle: mapped to circle
Sobol: rejection
Sobol Circle: mapped to circle
Burley Sobol: rejection
Burley Sobol Circle: mapped to circle
Fibonacci: mapped to circle
Blue No Wrap
Blue No Wrap Edge
Blue No Wrap Half Edge
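Here is a sketch of the square to circle mapping described in “Regular Grid Circle” above. The centering at (0.5, 0.5) and the 0.5 scale are assumptions to match the radius 0.5 circle used by these tests:

void SquareToCircle(float x, float y, float& outX, float& outY)
{
    const float c_twoPi = 6.28318530718f;
    float angle = x * c_twoPi;          // x becomes an angle
    float radius = std::sqrt(y) * 0.5f; // sqrt keeps the density uniform
    outX = 0.5f + radius * std::cos(angle);
    outY = 0.5f + radius * std::sin(angle);
}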
The tests are the same as 2D square, but the functions are zero outside of a radius 0.5 circle.
Here are 256 samples of each.
Looking at regular sampling first, Fibonacci dominates, but the regular sampling types beat stratified.
Looking at LDS next, Fibonacci does pretty well in a class by itself, except for the non smooth test where Burley Sobol joins it.
Lastly is blue noise, which seems to indicate that wrapping would be good, with “half edge” distance doing the best for the most part. This is also showing that sampling away from the edges is a good thing.
Conclusion
Looking at sampling a 1D number line, a 2D square, and a 2D circle, regular sampling and blue noise have shown that avoiding the edge of the sampling domain by “half a point distance” (repelling at half strength for blue noise) gives the best results.
I don’t have much to say about why Fibonacci looks like a rotated regular grid, but I can show how it’s different from one.
To do this, I grabbed 3 points of Fibonacci in a triangle, to get two vectors to use for lattice axes. I then used that to make a lattice that you can see below. Side by side Fibonacci and the lattice are hard to tell apart, but when they are put on top of each other, you can see how Fibonacci is not a lattice, but deviates from the lattice over space.
When we map it to a circle, the difference is more pronounced. The Fibonacci circle has that familiar look that you see in sunflowers. The Lattice however just looks like spirals. When they are overlaid, you can see they diverge quite a bit.
The last post talked about links between Euler’s number e, and probability, inspired by a really cool youtube video on the subject: https://www.youtube.com/watch?v=AAir4vcxRPU
One of the things illustrated is that if you want to evaluate N candidates and pick the best one, you can evaluate the first N/e candidates (~36.8%), keep track of the highest score, then start evaluating the rest of the candidates, and choose the first one that scores higher.
That made me think of “Mitchell’s Best Candidate” algorithm for generating blue noise sample points. It isn’t the most modern way to make blue noise points, but it is easy to describe, easy to code, and gives decent-quality blue noise points.
If you already have N blue noise points (N can be 0)
Generate N+1 white noise points as candidates.
The score for a candidate is the distance between it and the closest existing blue noise point.
Keep whichever candidate has the largest score.
The above generates one more blue noise point than you have. You repeat that until you have as many points as you want.
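Here is a minimal sketch of the algorithm in 2D, with RandomFloat01() as an assumed helper. Squared distance preserves the ordering of scores, so the square root can be skipped (needs <algorithm>, <limits>, <vector>):

struct Point { float x, y; };

float RandomFloat01(); // assumed helper: uniform white noise in [0, 1)

float DistanceSquared(const Point& a, const Point& b)
{
    float dx = a.x - b.x;
    float dy = a.y - b.y;
    return dx * dx + dy * dy;
}

std::vector<Point> MitchellsBestCandidate(int numPoints)
{
    std::vector<Point> points;
    while ((int)points.size() < numPoints)
    {
        // generate and score N+1 candidates, where N is the current point count
        int numCandidates = (int)points.size() + 1;
        Point bestCandidate = { 0.0f, 0.0f };
        float bestScore = -1.0f;
        for (int i = 0; i < numCandidates; ++i)
        {
            Point candidate = { RandomFloat01(), RandomFloat01() };

            // the score is the distance to the closest existing blue noise point
            float score = std::numeric_limits<float>::max();
            for (const Point& p : points)
                score = std::min(score, DistanceSquared(candidate, p));

            // keep whichever candidate has the largest score
            if (score > bestScore)
            {
                bestScore = score;
                bestCandidate = candidate;
            }
        }
        points.push_back(bestCandidate);
    }
    return points;
}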
So, can we merge these two things? The idea is that instead of generating and evaluating all N+1 candidates, we generate and evaluate (N+1)/e of them, keeping track of the best score.
We then generate more candidates until we find a candidate with a better score, and then exit, or until we run out of candidates, in which case we return the best candidate we found in the first (N+1)/e of them.
This is done through a simple modification in Mitchell’s Best Candidate algorithm:
We calculate a “minimum stopping index” which is ceil((N+1)/e).
When we are evaluating candidates, if we have a new best candidate, and the candidate index is >= the minimum stopping index, we exit the loop and stop generating and evaluating candidates.
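As a sketch, the candidate loop from the MBC code above becomes this (std::ceil needs <cmath>):

int minStoppingIndex = (int)std::ceil(float(numCandidates) / 2.718281828f);
for (int i = 0; i < numCandidates; ++i)
{
    Point candidate = { RandomFloat01(), RandomFloat01() };
    float score = std::numeric_limits<float>::max();
    for (const Point& p : points)
        score = std::min(score, DistanceSquared(candidate, p));
    if (score > bestScore)
    {
        bestScore = score;
        bestCandidate = candidate;
        // a new best candidate at or past the minimum stopping index: stop early
        if (i >= minStoppingIndex)
            break;
    }
}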
Did it work? Here are two blue noise sets side by side. 1000 points each. Can you tell which is which?
Here are the averaged DFTs of 100 tests.
In both sets of images, the left is Mitchell’s Best Candidate, and the right is Euler’s Best Candidate.
The program output shows that EBC took 71% of the time that MBC did. The innermost loop of these best candidate algorithms is calculating the distance between a candidate and an existing point, and EBC only had to do 73.6% as many of those calculations as MBC did, for the same quality result. While MBC does a fixed number of these “hot loop” iterations, EBC is randomized: its count has a standard deviation of 3.5 million iterations out of a maximum of 333 million, or about 1%.
It looks like a win to me! Note that the timing includes writing the noise to disk as pngs.
This could perhaps be a decent way to do randomized organic object placement on a map, like for bushes and trees and such. There are other algorithms that generate several candidates and keep the best one though. It should work just fine in those situations as well. Here is a mastodon thread on the subject https://mastodon.gamedev.place/@demofox/110030932447201035.
The next time you have to evaluate a bunch of random candidates to find the best one, you can remember “find the best out of the first 1/3, then continue processing until you find a better one, or you run out of items”.
For more info, check out this Wikipedia page, which also talks about variations such as what to do if you don’t know how many candidates you are going to need: https://en.wikipedia.org/wiki/Secretary_problem
The video started with a lot of the same old stuff we’ve all seen many times before, but then it started talking about Euler’s number showing up in probability. It instantly had my full attention.
I spend a lot of time thinking about how uses of white noise can be turned into uses of blue noise or low discrepancy sequences, because white noise has nice properties at the limit, but those other things have the same properties over much smaller numbers of samples. This link to probability made me wonder how other types of noise / sequences would behave when put through the same situations where Euler’s number emerged. Would e still emerge, or would it be a different value for different sequence types?
This is what sent me on the rabbit hole for the two prior blog posts, which explore making streaming uniform red noise, blue noise, and other random number generators.
Let’s get into it!
Lottery Winnings
One claim the video makes is that if the odds of winning the lottery are 1 in N, and you buy N lottery tickets, there is a 1/e chance that you will still lose. 1/e is 0.367879, so that is a 36.79% chance of losing.
That may be true for white noise, but how about other sequences?
To test, I generate a random integer in [0,N) as the winning number, and then generate N more random numbers as the lottery tickets, each also in [0, N). The test results in a win (1) or a loss (0). This test is done 1,000,000 times to get an average chance of losing. The standard deviation is calculated from the loss percentage of each group of 1,000 tests.
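As a sketch, a single test for the white noise case looks like this, where RandomInt(N) is an assumed helper returning a uniform integer in [0, N):

bool LotteryLoses(int N)
{
    int winningNumber = RandomInt(N);
    for (int ticket = 0; ticket < N; ++ticket)
    {
        if (RandomInt(N) == winningNumber)
            return false; // won
    }
    return true; // lost with all N tickets, which happens ~1/e of the time
}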
Here are the sequences tested:
White Noise – regular old random numbers
Golden Ratio – the deterministic golden ratio sequence: fract(index * phi)
Stratified – stratify N buckets to make N numbers
Regular Offset – N regularly spaced numbers, but with a random offset and modulus to keep them in range.
Blue Noise – made by subtracting one value from another, converting from triangle to uniform, and rerolling one of the numbers for next time. Each number gets regenerated every other index, and which value gets subtracted changes each index as well. (https://blog.demofox.org/2023/02/20/uniform-1d-red-noise-blue-noise/)
We were able to duplicate the results for white noise, which is neat. Using the golden ratio sequence cuts the loss rate in half. I chalk this up to it being more fairly distributed than white noise. White noise is going to give you the same lottery ticket number several times, whereas golden ratio is going to reduce that redundancy.
Stratified and regular offset both did very well on this test, but that isn’t surprising. If you have a 1 in N chance of picking a winning random number, and you make (almost) every number between 1 and N as a guess, you are going to (almost) always win. The little bit of randomization in each will keep them from winning 100% of the time.
The red noise and blue noise were very close to having a 1/e loss chance but were off in the tenths of a percent. Running it again gives similar results, so I think this could be a real, but small, effect. Colored noise seems to affect this outcome a little bit.
Summing Random Values
Another claim the video makes is that if you add uniform white noise numbers together, it will take e of them (2.718282) on average for the sum to be greater than or equal to 1. I do that test 100,000,000 times with each sequence again.
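As a sketch, a single test for the white noise case looks like this, with RandomFloat01() again an assumed helper:

int CountValuesToReachOne()
{
    float sum = 0.0f;
    int count = 0;
    while (sum < 1.0f)
    {
        sum += RandomFloat01();
        count++;
    }
    return count; // averages e (~2.718282) over many runs for white noise
}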
Let’s see:
We duplicated the results with white noise again, which is neat.
The golden ratio sequence takes slightly fewer numbers to reach 1.0. Subsequent numbers of the golden ratio sequence are pretty different usually, so this makes some sense to me.
For stratified and regular offset, I wanted to include them, but their sequence is in order, which doesn’t make sense for this test. So, I shuffled them. The result looks to be just a little bit lower than e.
The lower quality red and blue noise took quite a few more numbers to reach 1.0, at 3.9 and 3.5 respectively. The better red noise took 3 numbers on average, while the blue noise took the lowest seen at 2.5 numbers. Lastly, the FIR blue noise “better blue noise 2” took just under e.
An interesting observation is that the red noises all have higher standard deviation than the other sequences. Red noise makes similar numbers consecutively, so if the numbers are smaller, it would take more of them, where if they were bigger it would take fewer of them. I think that is why the standard deviation (square root of variance) is higher. It’s just more erratic.
Contrary to that, the blue noise sequences have lower standard deviation. Blue noise makes dis-similar numbers consecutively, so I think it makes sense that it would be more consistent in how many numbers it took to sum to 1.0, and thus have a lower standard deviation and variance.
Finding The Best Scoring Item
The video also had this neat algorithm for fairly reliably finding the best scoring item, without having to look at all items.
Their setup is that you have job applicants to review, and after talking to each, you have to decide right then to hire the person or to continue looking at other candidates. The algorithm is that you interview the first 1/e of them (36.79%), not hiring any of them, but keeping track of how well the best candidate did. You then start interviewing the remaining candidates and hire the first one that does better than that previous best.
Doing this apparently gives you a 1/e (36.79%) chance of picking the actual best candidate, even though you don’t evaluate all the candidates. That is pretty neat.
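As a sketch, assuming scores[] holds each candidate’s interview score in the order they are interviewed (needs <algorithm>, <cmath>, <vector>):

int HireCandidate(const std::vector<float>& scores)
{
    int N = (int)scores.size();
    int lookCount = (int)std::ceil(float(N) / 2.718281828f); // the first 1/e of candidates

    // interview the first 1/e of the candidates without hiring, tracking the best score
    float bestSeen = scores[0];
    for (int i = 1; i < lookCount; ++i)
        bestSeen = std::max(bestSeen, scores[i]);

    // hire the first remaining candidate that does better than that best
    for (int i = lookCount; i < N; ++i)
        if (scores[i] > bestSeen)
            return i;

    // nobody beat the best seen, so we are stuck with the last candidate
    return N - 1;
}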
I did this test 10,000,000 times for each sequence type.
All sequence types looked at 733 or 734 candidates out of 1000 on average. The golden ratio sequence actually ended up looking at 755 on average though, and the better blue noise 2 sequence looked at 731. The standard deviation is ~243 for all of them, except for golden ratio which is 214. So… golden ratio looks at more candidates on average, but it’s also more consistent about the number of candidates it looks at.
Most sequence types had roughly 0.5 candidates on average better than the candidate hired. The golden ratio has 0.04 candidates better on average though. Red noise has slightly more candidates better than that though, at 0.6 and 0.73. The “Better Blue Noise 2” has a full 1.08 candidates better on average. The standard deviation follows these counts… golden ratio is the lowest standard deviation by far, red noise is higher than average, and the better blue noise 2 is much higher than the rest.
Closing
As usual, the lesson is that DSP, stochastic algorithms and sampling are deep topics.
My last blog post was on filtering a stream of uniform white noise to make a stream of red or blue noise, and then using the CDF (inverted inverse CDF) to transform the non uniform red or blue noise back into a uniform distribution while preserving the noise color. It is at https://blog.demofox.org/2023/02/20/uniform-1d-red-noise-blue-noise/
That had me thinking about how you could make a CDF of any distribution, numerically. You could then store it in a LUT, or perhaps use least squares to fit a polynomial to the CDF.
I had never seen that before, only ever seeing that you can subtract a low pass filter from the identity filter to get a high pass filter (https://blog.demofox.org/2022/02/26/image-sharpening-convolution-kernels/). It made me want to check out that kind of a filter to see what the differences could be.
So, I started by generating 10 million values of white noise and then filtering it. I made a histogram of the results and divided the counts by 10 million to normalize it into a PDF. From there I made a CDF by making a table where each CDF bucket value was the sum of all PDF values with the same or lower index. I made a 1024 entry CDF table, as well as a smaller 64 entry one, and I also fit the CDF with piecewise polynomials (https://blog.demofox.org/2022/06/29/piecewise-least-squares-curve-fitting/).
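As a sketch, building the CDF table from that process looks like this, assuming the noise values have already been mapped into [0, 1] (needs <algorithm> and <vector>):

std::vector<float> MakeCDFTable(const std::vector<float>& values, int numBuckets)
{
    // histogram the values, normalized so the counts sum to 1 (a PDF)
    std::vector<float> pdf(numBuckets, 0.0f);
    for (float v : values)
    {
        int bucket = std::min(int(v * float(numBuckets)), numBuckets - 1);
        pdf[bucket] += 1.0f / float(values.size());
    }

    // each CDF bucket is the sum of all PDF values with the same or lower index
    std::vector<float> cdf(numBuckets, 0.0f);
    float sum = 0.0f;
    for (int i = 0; i < numBuckets; ++i)
    {
        sum += pdf[i];
        cdf[i] = sum;
    }
    return cdf;
}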
With any of those 3 CDF approximations, I could filter white noise in the same way I did when generating the CDF, put the colored noise through the CDF function, and get out colored noise that was uniform distribution.
I wasn’t very happy with the results though. They were decent, but I felt like I was leaving some quality on the table. Going from real values to indices and back was a bit fiddly, even when getting the nuances correct. I mentioned that it was surprisingly hard to make a usable CDF from a histogram, and Nikita gave this great comment.
The intuition here is that regions of values that are more probable will take up more indices and so if you randomly select an index, you are more likely to select from those regions.
So, if you have a black box that is giving you random numbers, you can generate random numbers from the same distribution just by storing a bunch of the numbers it gives you, sorting them, and using uniform random numbers to select items from that list. Better yet, select a random position between 0 and 1, multiply by the size of the list, and use linear interpolation between the indices when you get a fractional index.
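As a sketch, the sorted array acts as an inverse CDF table, and a uniform value selects from it (assuming “sorted” is non-empty):

float SampleSortedAsInverseCDF(const std::vector<float>& sorted, float u) // u in [0, 1]
{
    float index = u * float(sorted.size() - 1);
    int i = (int)index;
    float frac = index - float(i);
    if (i + 1 >= (int)sorted.size())
        return sorted.back();
    // linear interpolation between neighboring entries for fractional indices
    return sorted[i] + (sorted[i + 1] - sorted[i]) * frac;
}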
In our case where we are trying to make non uniform numbers be uniform again, we actually want the inverted inverse CDF, or in other words, we want the CDF. We can get a CDF value from the inverse CDF table by taking a value between 0 and 1 and finding where it occurs in the inverse CDF table. We’ll get a fractional index out if we use linear interpolation, and we can divide by the size of the array to get a value between 0 and 1. This is the CDF value for that input value. If we do this lookup for N evenly spaced values between 0 and 1, we can get a CDF look up table out.
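As a sketch, that lookup can be done with a binary search over the table (std::lower_bound is from <algorithm>); interpolating between neighboring entries gives the smoother fractional index described above:

float CDFFromInverseCDFTable(const std::vector<float>& sorted, float value)
{
    auto it = std::lower_bound(sorted.begin(), sorted.end(), value);
    return float(it - sorted.begin()) / float(sorted.size());
}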
I did that with a 1024 sized look up table, a 64 sized look up table, and I also fit a piecewise polynomial to the CDF, with piecewise least squares.
The results were improved! Fewer moving parts and fewer conversions. Nice.
I’ll show the results in a minute but here are the tests I did:
Box 3 Red Noise: convolved the white noise with the kernel [1, 1, 1]
Box 3 Blue Noise: convolved the white noise with the kernel [-1, 1, -1]. This is an unsharp mask because I subtracted the red noise kernel from [0,2,0].
Box 5 Red Noise: [1,1,1,1,1]
Box 5 Blue Noise 1: [-1, -1, 1, -1, -1]
Box 5 Blue Noise 2: [1, -1, 1, -1, 1]
Gauss 1.0 Blue Noise: A sigma 1 gaussian kernel, with alternating signs to make it a high pass filter [0.0002, -0.0060, 0.0606, -0.2417, 0.3829, -0.2417, 0.0606, -0.0060, 0.0002]. Kernel calculated from here http://demofox.org/gauss.html
FIRHPF – a finite impulse response high pass filter. It’s just convolved with the kernel [0.5, -1.0, 0.5]. Kernel calculated from here http://demofox.org/DSPFIR/FIR.html/
IIRHPF – an infinite impulse response high pass filter. It has the same “x coefficients” as the previous filter, but has a y coefficient of 0.9. Filter calculated from here http://demofox.org/DSPIIR/IIR.html.
FIRLPF – the low pass filter version of FIRHPF. kernel [0.25, 0.5, 0.25]
IIRLPF – the low pass filter version of IIRHPF. x coefficients are the same as the previous filter, but has a y coefficient of -0.9.
Void And Cluster – I used the void and cluster algorithm to make 100,000 blue noise values, and I did this 100 times to get 10 million total blue noise values. There is a “seam” in the blue noise every 100,000 values but that shouldn’t be a big deal.
Final BN LUT – I took the size 64 CDF LUT from the FIRHPF test and used it to turn filtered white noise back to uniform.
Final BN Polynomial – I took the 4 piecewise quadratic functions that fit the CDF of FIRHPF and used it to turn filtered white noise back to uniform.
Final RN Polynomial – The filter from FIRLPF is used, along with the polynomial from the previous filter to make it uniform again. Luckily FIRLPF and IIRLPF have the same histogram and CDF, so can use the same polynomials!
CDFs
In the below, the blue line is the 4 piecewise quadratic functions fit to the CDF. The green dotted line is the size 1024 CDF table. The orange line is the size 64 CDF table.
For void and cluster, and the “Final BN” tests, the source values are uniform, so it doesn’t make much sense to make a CDF and convert them to uniform etc, but it wasn’t trivial to disable that testing for them.
Here is the error of the CDF approximations, using the size 1024 CDF LUT as the ground truth to calculate the error.
Frequencies (DFTs)
Here are the DFTs to show the frequencies in the streams of noise. Converting filtered white noise to uniform does change the frequency a bit, but looking at the low frequency magnitudes of void and cluster, that might just be part of the nature of uniform distributed colored noise. I’m not sure.
The blue line is the original values, e.g. from filtering white noise. The other colors are from converting them to uniform.
Histograms
Here are the histograms, to see if they are appropriately uniform. The histograms come in sets of 4: the filtered noise, and then the 3 methods for converting the filtered noise back to uniform. The graphs should be read from top to bottom, then left to right.
Streaming Blue Noise
Looking at the results, the FIRHPF results look pretty decent, with the polynomial fit having a better histogram than the size 64 CDF LUT. Being only a 3 tap filter also means it only needs to remember the last 2 “x values” it has seen, as a “delay line” for the FIR filter. Decent quality results and efficient to obtain. Nice. The results of that are in the diagrams above as “Final BN Polynomial”.
I tried taking only the first half of the fits – either 32 of the 64 LUT entries, or only the first two of the four polynomials. For the second half of the curve, when x > 0.5, you set x to 1-x, and the resulting y value gets flipped as 1-y too. I wasn’t getting great results though, even when adding a least squares constraint that CDF(0.5) = 0.5. I’m saving that battle for a future day. If you solve it though, leave a comment!
Below is the C++ code for streaming scalar valued blue noise – aka a blue noise PRNG. Also the same for red noise. The full source code can be found at https://github.com/Atrix256/ToUniform.
If you have a need of other colors of noise, of other histograms, hopefully, this post can help you achieve that.
#include <algorithm> // for std::min
#include <cmath> // for ldexpf
#include "pcg_basic.h" // assumed header for pcg32_random_t / pcg32_random_r (https://www.pcg-random.org)
class BlueNoiseStreamPolynomial
{
public:
BlueNoiseStreamPolynomial(pcg32_random_t rng)
: m_rng(rng)
{
m_lastValues[0] = RandomFloat01();
m_lastValues[1] = RandomFloat01();
}
float Next()
{
// Filter uniform white noise to remove low frequencies and make it blue.
// A side effect is the noise becomes non uniform.
static const float xCoefficients[3] = {0.5f, -1.0f, 0.5f};
float value = RandomFloat01();
float y =
value * xCoefficients[0] +
m_lastValues[0] * xCoefficients[1] +
m_lastValues[1] * xCoefficients[2];
m_lastValues[1] = m_lastValues[0];
m_lastValues[0] = value;
// the noise is also [-1,1] now, normalize to [0,1]
float x = y * 0.5f + 0.5f;
// Make the noise uniform again by putting it through a piecewise cubic polynomial approximation of the CDF
// Switched to Horner's method polynomials, and a polynomial array to avoid branching, per Marc Reynolds. Thanks!
float polynomialCoefficients[16] = {
5.25964f, 0.039474f, 0.000708779f, 0.0f,
-5.20987f, 7.82905f, -1.93105f, 0.159677f,
-5.22644f, 7.8272f, -1.91677f, 0.15507f,
5.23882f, -15.761f, 15.8054f, -4.28323f
};
int first = std::min(int(x * 4.0f), 3) * 4;
return polynomialCoefficients[first + 3] + x * (polynomialCoefficients[first + 2] + x * (polynomialCoefficients[first + 1] + x * polynomialCoefficients[first + 0]));
}
private:
float RandomFloat01()
{
// return a uniform white noise random float between 0 and 1.
// Can use whatever RNG you want, such as std::mt19937.
return ldexpf((float)pcg32_random_r(&m_rng), -32);
}
pcg32_random_t m_rng;
float m_lastValues[2] = {};
};
class RedNoiseStreamPolynomial
{
public:
RedNoiseStreamPolynomial(pcg32_random_t rng)
: m_rng(rng)
{
m_lastValues[0] = RandomFloat01();
m_lastValues[1] = RandomFloat01();
}
float Next()
{
// Filter uniform white noise to remove high frequencies and make it red.
// A side effect is the noise becomes non uniform.
static const float xCoefficients[3] = { 0.25f, 0.5f, 0.25f };
float value = RandomFloat01();
float y =
value * xCoefficients[0] +
m_lastValues[0] * xCoefficients[1] +
m_lastValues[1] * xCoefficients[2];
m_lastValues[1] = m_lastValues[0];
m_lastValues[0] = value;
float x = y;
// Make the noise uniform again by putting it through a piecewise cubic polynomial approximation of the CDF
// Switched to Horner's method polynomials, and a polynomial array to avoid branching, per Marc Reynolds. Thanks!
float polynomialCoefficients[16] = {
5.25964f, 0.039474f, 0.000708779f, 0.0f,
-5.20987f, 7.82905f, -1.93105f, 0.159677f,
-5.22644f, 7.8272f, -1.91677f, 0.15507f,
5.23882f, -15.761f, 15.8054f, -4.28323f
};
int first = std::min(int(x * 4.0f), 3) * 4;
return polynomialCoefficients[first + 3] + x * (polynomialCoefficients[first + 2] + x * (polynomialCoefficients[first + 1] + x * polynomialCoefficients[first + 0]));
}
private:
float RandomFloat01()
{
// return a uniform white noise random float between 0 and 1.
// Can use whatever RNG you want, such as std::mt19937.
return ldexpf((float)pcg32_random_r(&m_rng), -32);
}
pcg32_random_t m_rng;
float m_lastValues[2] = {};
};
Nick Appleton shared a simpler streaming blue noise generator on mastodon (code below). I coded it up and put it through my analysis so you can compare apples to apples. It looks pretty good!
Histograms
DFTs
Code
// From Nick Appleton:
// https://mastodon.gamedev.place/@nickappleton/110009300197779505
// But I'm using this for the single bit random value needed per number:
// https://blog.demofox.org/2013/07/07/a-super-tiny-random-number-generator/
// Which comes from:
// http://www.woodmann.com/forum/showthread.php?3100-super-tiny-PRNG
class BlueNoiseStreamAppleton
{
public:
BlueNoiseStreamAppleton(unsigned int seed)
: m_seed(seed)
, m_p(0.0f)
{
}
float Next()
{
float ret = (GenerateRandomBit() ? 1.0f : -1.0f) / 2.0f - m_p;
m_p = ret / 2.0f;
return ret;
}
private:
bool GenerateRandomBit()
{
m_seed += (m_seed * m_seed) | 5;
return (m_seed & 0x80000000) != 0;
}
unsigned int m_seed;
float m_p;
};
Inverting the CDF lets you convert from uniform to non uniform distributions, but you can also use the CDF to convert from the non uniform distribution to a uniform distribution.
This means that just like you can square root a uniform random number between 0 and 1 to get a “linear” distributed random number where larger numbers are more likely, you could also take linear distributed random numbers and square them to get uniform random numbers.
#include <cmath> // for std::sqrt
float UniformToLinear(float x)
{
// PDF In: y = 1
// PDF Out: y = 2x
// ICDF: y = sqrt(x)
return std::sqrt(x);
}
float LinearToUniform(float x)
{
// PDF In: y = 2x
// PDF Out: y = 1
// CDF: y = x*x
return x*x;
}
This can be useful when you use a two tap FIR filter which gives a triangle distribution out. You can adapt the LinearToUniform function to convert the first half of a triangle distribution back to uniform, the values between 0 and 0.5. For the second half of the triangle distribution, you can subtract it from 1.0, to get values between 0 and 0.5 again, convert from linear to uniform, and then subtract it from 1.0 again to flip it back around.
float TriangleToUniform(float x)
{
if (x < 0.5f)
{
x = LinearToUniform(x * 2.0f) / 2.0f;
}
else
{
x = 1.0f - x;
x = LinearToUniform(x * 2.0f) / 2.0f;
x = 1.0f - x;
}
return x;
}
Let’s see how this works out in practice. In the first column below, it shows uniform being converted to linear and triangle, and back, to show that the round trip works.
In the second column, it shows red noise (low frequency random numbers) being made by adding two random numbers together, and then being made into uniform noise by converting from triangle to uniform. It then also shows higher order red noise histograms.
The third column shows the same, but with blue noise (high frequency random numbers).
Here are the DFTs of the above, showing that the frequency content stays pretty similar through these various transformations.
So while we have the ability to convert 2 tap filtered noise from triangular distributed noise to uniform distributed noise, what about higher tap counts that give higher order binomial distributions?
It turns out the binomial quantile function (inverse CDF) doesn’t have a closed form that you can evaluate symbolically, which is unfortunate. Maybe two taps is good enough.
Here are 20 values of each noise type to let you see what kind of values they actually have. The numbers are mapped to be between 0 and 1 even though these noise types naturally have other ranges. Order 2 red noise goes between 0 and 2 for example, and order 2 blue noise goes between -1 and 1.
This post is the result of a collaboration between myself and R4 Unit on mastodon (https://mastodon.gamedev.place/@R4_Unit@sigmoid.social/109810895052157457). R4 Unit is actually the one who showed me, some years back, that you could animate blue noise textures by adding the golden ratio to each pixel value every frame 🙂
Fibonacci and the golden ratio are often used interchangeably, which can be confusing, but they are two sides of the same coin. If you divide a Fibonacci number by the previous Fibonacci number, you will get approximately the golden ratio 1.6180339887.
The first few Fibonacci numbers are
1, 1, 2, 3, 5, 8, 13, 21, 34, and 55
Starting at the second 1 and dividing each number by the previous number gives
1, 2, 1.5, 1.666, 1.6, 1.625, 1.615, 1.619, and 1.6176
This converges to the golden ratio, reaching it exactly in the limit.
The golden ratio is written with the Greek letter “phi,” which rhymes with pi and pie. It looks like φ or ϕ (lowercase) or Φ (uppercase).
You’ll see circles talked about a lot in this post and might wonder why. In 1D sampling, we sample a function for x between 0 and 1. Our actual function might want us to sample a different domain (like say between 1 and 4), but for simplicity’s sake, we generate points between 0 and 1 and map them to the different values if needed. When generating low discrepancy sequences, we want to generate values between 0 and 1 through “some process.” When values outside 0 to 1 are generated, modulus brings them back in. It’s as if we rolled our 0 to 1 number line segment into a circle where 1 and 0 connect. A value of 1.5 goes around the circle 1.5 times and lands at the 0.5 mark. Our line segment is basically a circle.
You may wonder about the difference between a low discrepancy sequence and a low discrepancy set. A low discrepancy sequence is a sequence of points with an ordering, and Wikipedia states that any subsequence starting at the first index is also low discrepancy (https://en.wikipedia.org/wiki/Low-discrepancy_sequence). A low discrepancy set is an unordered set of points that is only low discrepancy when all of the points are used.
If you are wondering what low discrepancy means, discrepancy is a measurement of how evenly spaced the points are. Low discrepancy sequences have nearly evenly spaced points, but not quite. This is a valuable property for sampling, with one reason being that perfectly evenly spaced points are subject to aliasing. At the same time, completely random points (uniform white noise) will form clumps and voids, have high discrepancy, and leave parts of the sampling domain unsampled.
We’ll be talking about the golden ratio low discrepancy sequence next, which has an ordering and is low discrepancy at each step.
The Golden Ratio Low Discrepancy Sequence
The golden ratio low discrepancy sequence (LDS) is a 1D sampling sequence with some excellent properties. When viewed on a number line or a circle, the next sample always appears in the largest gap. This is true no matter what index you start at in the sequence, which is a bit of a mind-bender. See the animation below.
Generating the golden ratio sequence is simple and computationally inexpensive:
Multiplying the index by the golden ratio will cause floating point inaccuracies that affect the sequence as the index gets larger, so it’s better to have a function that gives you the next value in the sequence, which you can call repeatedly.
Since we only care about the fractional part, we can subtract one from the golden ratio. This value is called the golden ratio conjugate; it is the value of 1/φ, and it is every bit as irrational as the golden ratio.
For best results, you can supply a random floating point number between 0 and 1 as the first value in the sequence.
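Putting those three points together, a sketch of the “next value” function looks like this:

float GoldenRatioSequenceNext(float last)
{
    float next = last + 0.61803398875f; // the golden ratio conjugate, 1/phi
    return (next >= 1.0f) ? (next - 1.0f) : next; // fract(), staying in [0, 1)
}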
The golden ratio LDS has excellent convergence for 1D integration (* to be explained in the next section), but the samples aren’t in sorted order. Looking at the number line, you can see that the samples come in a somewhat random (progressive) order. If these sampling positions correspond to different places in memory, this is not great for memory caches.
Furthermore, some sampling algorithms require you to visit the sample points in a sorted order – for instance, when ray marching participating media and alpha compositing the samples to make the final result. Maybe it would be nice if they were in a sorted order?
Wanting a sorted version of the golden ratio LDS, I tried to solve it myself at first. I analyzed the gaps between the numbers for different sample counts and found some interesting identities involving different powers of φ, 1/φ, and 1-1/φ. I found a pattern and was able to describe this as a binary sequence, with rules for how to split the binary sequence to get the next sample. Then I got stuck, unsure how to use this knowledge to make sample points in order.
I shared some tidbits, and R4 Unit asked what I was up to. After explaining, getting some great info back, having further conversations, and implementing in parallel, the solution came into focus.
If you place marks on a circle at multiples of some number of degrees, there will only ever be at most three distinct distances between adjacent points on the circle (https://en.wikipedia.org/wiki/Three-gap_theorem).
If using the golden angle, and a Fibonacci number of points, there are only ever two distances between points, and you can calculate them easily. Furthermore, if starting at a point on the circle, the “Infinite Fibonacci Word” will tell you the step size (0 = big, 1 = small) to take to get to the next point in the circle for the “Golden Angle Low Discrepancy Sequence.” See the bullet point here about unit circles and the golden angle sequence https://en.wikipedia.org/wiki/Fibonacci_word#Other_properties. We divide the sequence by 2pi to get 0 to 1 on a number line instead of 0 to 2pi as radians on a circle. The golden angle low discrepancy sequence is generated the same way as the golden ratio low discrepancy sequence but uses 0.38196601125 as the constant (2-φ, or 1/φ^2).
You can start at any point of the circle. It just rotates the circle. That means that when we generate our sequence, we can start at any random point 0 to 1 as a randomizing offset, and the quality of the sequence is unaltered.
static const float c_goldenRatio = 1.61803398875f;
static const float c_goldenRatioConjugate = 0.61803398875f; // 1/goldenRatio or goldenRatio-1
float FibonacciWordSequenceNext(float last, float big, float small, float& run)
{
run += c_goldenRatio;
float shift = std::floor(run);
run -= shift;
return last + ((shift == 1.0f) ? small : big);
}
// big can be set to 1.0f / float(numSamples), but will return >= numSamples points if so.
std::vector<float> Sequence_FibonacciWord(float big)
{
std::vector<float> ret;
float small = big * c_goldenRatioConjugate;
float run = c_goldenRatioConjugate;
float sample = 0.0f;
while (sample < 1.0f)
{
ret.push_back(sample);
sample = FibonacciWordSequenceNext(sample, big, small, run);
}
return ret;
}
Generating the next sample is amazingly simple. If you print out the value of “shift,” you’ll see that it is the Infinite Fibonacci Word, which decides whether to add the small or large gap size to get to the next point.
Other Implementation Details
In practice, there is a challenge with this sampling sequence. If you ask for N samples, it will usually return slightly more samples than that.
You might think it’d be ok to truncate them and just throw out the extra ones. If you do that, your convergence will be bad due to areas not being sampled at the end of your sampling domain.
You may decide you could throw out the extra ones and renormalize the point set to span the 0 to 1 interval by stretching it out, even being careful to preserve the ratio of the gap at the end between the last sample and 1.0. Unfortunately, if you do that, your convergence is still bad.
I tried both things, and the convergence degraded such that it was no longer competitive. It was a no-go.
In the sample code, I tell it to generate N samples. If it comes back with too many, I decrement N and try again. This repeats until the right number of samples is returned, or going any lower would make me not have enough samples.
If there are too many samples at this point, I truncate and divide all samples by the first sample thrown away. This rescales the sequence to fill the full 0 to 1 domain.
This process makes the results used in the next section and is what I found to be best.
This is a very costly operation, speculatively generating sequences of unknown length in a loop until we get the right amount, but it is something that could be done in advance. For a specific number of samples N, you can precompute that you should generate M samples instead, and divide each sample by a constant k, which is the sample that comes right after the last one kept.
This might be a pain and require a couple more shader constants or magic numbers, but consider it the cost of better convergence. The things we do for beauty!
One last item… when you implement this, you need a way to apply an offset to the sequence, using a blue noise texture or whatever else you might want to use. I had good luck multiplying my 0 to 1 random value by the number of indices in my sequence, and using the fractional part of the index as a multiplier of either the big or small sized gap for that index. It felt like that might not work well, because each index is either a large or small gap, and this treats them as the same size, but it didn’t damage the quality in my testing, so that was nice.
Convergence
Now that we can make the sequence, let’s see if it is any good.
I tested the usual choices for 1D integration against three functions, did this 10,000 times, and graphed the RMSE.
Sampling Types:
Uniform – Uniform random white noise.
RegularRandomOffset – Regularly spaced sampling (index/N), with a single global random offset added to all the samples to shift the sequence a random amount around the circle for each test.
Stratified – Regular spaced sampling, but with a random offset added to each sample ((index + rand01()) / N).
GoldenRatioRandomOffset – The golden ratio LDS, with a single global random offset added to all the samples to shift the sequence a random amount around the circle for each test.
FibonacciWordRandomOffset – The sequence described in this post, with a single global random offset added to all the samples to shift the sequence a random amount around the circle for each test.
I also tested VanDerCorput base two and base three. Their results were similar to the golden ratio results, and since they are also unsorted, they only cluttered the graph, so they are omitted.
Sine: y = sin(pi * x)
Step: y = (x < 0.3) ? 1.0 : 0.0
Triangle: y = x
Well bummer… it turns out not to be so great vs stratified sampling or regular spaced sampling with a random global shift.
The power of the 1D golden ratio sequence is that for any number of samples you have, it is pretty good sampling, and you can add more to it. This is useful if you don’t know how many samples you are going to take, or if you want to have a good (progressive) result as you take more samples.
If you know how many samples you want, it looks like it’s better to go with stratified, or regularly spaced samples with a random global shift.
When I was early in this journey, I made a typo in my code that tested only N-1 samples in these convergence graphs instead of the full N samples. In that situation, golden ratio worked well because it works fine for any number of samples. The sorted golden ratio also worked well because it is “well spaced” for a Fibonacci number of samples, and otherwise the samples only bunch up a little bit at the end, so removing the last sample wasn’t very damaging. Due to this unfortunate typo, I believed from early on that a sorted golden ratio sequence would be really valuable (like for ray marching fog volumes), getting better convergence while still being able to take samples in order.
This one didn’t work out, but oh well, onto the next thing 😛
Hey all, I’m currently on vacation in Aruba, having a blast snorkeling, hanging out with the family, and working on some stuff on a kindle scribe I got for Xmas. The scribe is an excellent paper replacement, and I’m digging it. I used it a lot while working through things in this post, but I am also using it for a C++ game dev without graphics book I’m working on. I’ll have to figure out how to export pages to make better diagrams for blog posts in the future (;
I was recently playing battleship with my son and thinking about how choosing where to shoot is a lot like sampling a binary 2D function that either returns 1 for a hit or 0 for a miss. There is some more subtlety to it because there are boats of different lengths, so you can get more information to go on as time goes on, but it made me think about what the ideal firing pattern might be when searching for enemy boats.
I got some neat info from mastodon and found an interesting link.
But then I had a different thought: could binary search generalize from 1D arrays to 2D or higher arrays? If so, they would need to be sorted. What does it even mean to sort a 2D array? Unsurprisingly, I’m not the first to think about this, and I’ll link to some resources. Still, the journey was interesting, and I have some code that implements higher dimensional binary searching:
The code is written for readability/understandability, so it uses recursion when in reality, you wouldn’t do that, especially for the 1D binary search case.
Sorting
Tackling the sorting problem first, you could flatten a 2D array (or higher) into a 1D array like it was laid out in memory and sort that. You would then do a 1D binary search, but that doesn’t give you any higher dimensional benefits.
Taking an idea from heaps, you could partially sort it so that the values to the right and below were greater than or equal to the current value at every position, and that’s what I ended up going for.
I played around trying to devise a sorting algorithm for this for a while before finding out that you can sort all the X-axis rows, then all the Y-axis columns, and you’d end up with this. For 3D and above, you continue to sort each new axis.
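As a sketch, that two pass sort over a row-major std::vector<int> looks like this (needs <algorithm> and <vector>):

void Sort2D(std::vector<int>& data, int width, int height)
{
    // sort each row
    for (int y = 0; y < height; ++y)
        std::sort(data.begin() + y * width, data.begin() + (y + 1) * width);

    // sort each column. sorting columns preserves the sorted rows.
    std::vector<int> column(height);
    for (int x = 0; x < width; ++x)
    {
        for (int y = 0; y < height; ++y)
            column[y] = data[y * width + x];
        std::sort(column.begin(), column.end());
        for (int y = 0; y < height; ++y)
            data[y * width + x] = column[y];
    }
}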
So, that’s pretty cool. Sorting is seemingly solved but is this helpful at all? Does this help you search?
Trying a Tree / Graph Point of View
I first tried looking at this as a tree, where a value had two children: the value to the right and the value below. Doing this, I noticed that the tree was not a tree, but a graph, as almost all the values had two parents. A nice thing is that since the N-dimensional array could be stored in memory as a 1D array (since that is how memory works), this graph could exploit that and not need any pointers, with the links between parents and children being implicit (add 1 to x to get one child, add 1 to y to get the other child).
Not sure if useful yet, but it sounds promising so far!
To search this structure, I tried a depth-first search and found that it took many reads during the search. It was nearly as bad as a linear search. A big part of this was that since children had multiple parents, you would search the same sub-graphs redundantly. I was planning on adding a “search version index” to each value, writing this index when testing a graph node, and only reading/recursing through graph nodes with smaller version numbers, where this version number incremented every time a search was done.
Before I implemented that, I had a more promising-sounding idea, however.
Back to a Flat Array Point of View
Could you binary search the first column to see where the value was larger than what you were searching for and cut off a large part of the array? If you could also cut off a starting point, you would have a min and max region to search for. Unfortunately, the number of rows would be variable, which isn’t great for getting consistently great perf results, but at least you could then binary search each row to look for your value. Maybe you could also do a binary search across columns and make an axis-aligned bounding box to search in.
Despite having some nice properties, that solution wasn’t quite good enough. It was too variable in how much would need to be searched the old-fashioned way, which means variable execution time and not a consistent benefit.
A nicer way of looking at things comes from realizing that at any element in one of these 2d arrays, anything with a higher or equal x and y coordinate value will be greater than or equal to that element in value. Also, anything with a lower or equal x and y coordinate value will be less than or equal to that element. This allows a binary search style divide and conquer algorithm to be crafted.
Comparing the center element of a 2D array with the search value, if they are equal, your search is done.
If the center element is less than the search value, you know that the search value can’t be in the quadrant where the x and y indices are equal or smaller since those all have the same or smaller values.
If the center element is greater than the search value, you know that the search value can’t be in the quadrant where the x and y indices are equal or greater since those all have the same or greater values.
When you remove a quadrant, you are left with two rectangles to recurse into.
You continue this process until you’ve found the search value and return true, or until the area of every remaining search region is zero, meaning it wasn’t found, and you return false.
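Here is a sketch of that divide and conquer search over a row-major array “sorted” as described above, using exclusive max bounds. The initial call is Search2D(data, width, 0, 0, width, height, key):

bool Search2D(const std::vector<int>& data, int width,
    int minx, int miny, int maxx, int maxy, int key)
{
    // a zero area search region means the key wasn't found here
    if (minx >= maxx || miny >= maxy)
        return false;

    int midx = (minx + maxx) / 2;
    int midy = (miny + maxy) / 2;
    int center = data[midy * width + midx];

    if (center == key)
        return true;

    if (center < key)
    {
        // the key can't be in the quadrant with x <= midx and y <= midy,
        // leaving two rectangles to recurse into
        return Search2D(data, width, midx + 1, miny, maxx, maxy, key)
            || Search2D(data, width, minx, midy + 1, midx + 1, maxy, key);
    }
    else
    {
        // the key can't be in the quadrant with x >= midx and y >= midy
        return Search2D(data, width, minx, miny, midx, maxy, key)
            || Search2D(data, width, midx, miny, maxx, midy, key);
    }
}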
3D Arrays and Higher
These ideas generalize to 3D, where you do the three-axis sorting of the 3D array to start.
Just like the 2D array case, you then test the center element against the search key and are either done, remove the low-valued octant that touches (0,0,0) or remove the high-valued octant that touches (width, height, depth).
In the 2D case, removing a quadrant left us with two rectangles to recurse into. In the 3D case, removing an octant leaves us with three cuboids to recurse into.
This generalizes to 4D and above just fine, following the patterns of the lower dimensions.
Performance Properties
In 1D, this algorithm is a binary search where you throw out either the left or right half of an array after comparing the center element vs. the search key. Each step removes 1/2 of the values from consideration.
In 2D, this algorithm throws out either the lower-valued quadrant or the higher-valued quadrant at each step. Each step removes 1/4 of the values from consideration.
3D throws out an octant or 1/8 of the values from consideration at each step.
We are hitting the curse of dimensionality and getting diminishing returns, throwing out 1/(2^D) of the values with each step for a D-dimensional binary search.
This also shows that the algorithm is most powerful in 1D as the binary search, where half of the values are removed from consideration at each step.
Is there any way to help this? Can we sort the arrays to throw out more values at each step? Not sure.
Other Thoughts
This journey started with me wondering if higher dimensional binary search was possible, but I don’t have a specific usage case. Is this even useful? I’m trying to think of a usage case but haven’t been able to so far. Can you think of one? If so, I’d love to hear it.
In 1D, an interpolation search can beat binary search under certain conditions (https://blog.demofox.org/2019/03/22/linear-fit-search/). Can that be generalized to 2D, 3D, or ND? A challenge here is that in 1D, a sorted list is a monotonic function, so there is only one place where the search value could be. In 2D, with one of these “sorted” arrays, you have a 2D surface that is sort of monotonic (stepping down or to the right won’t find a smaller value), but there are multiple places a search key could be found. So is there a way to take that algorithm to higher dimensions? I’m not sure, but I would love to hear your thoughts. https://math.stackexchange.com/questions/321812/generalizing-monotonicity-to-2d
A fun youtube video came out recently showing how people optimized an algorithm from running for an entire month, to eventually running for less than a millisecond: https://www.youtube.com/watch?v=c33AZBnRHks
In that video, algorithm X was mentioned and I was intrigued. Giving it a google, I found this 2018 video by Knuth talking about his “Algorithm X” method to solve exact cover problems, specifically using “Dancing Links” to implement it efficiently: https://www.youtube.com/watch?v=_cR9zDlvP88
So what is an exact cover problem? If you have some tetrominoes (or Tetris pieces) and you want to put them together to form a rectangle, that is an exact cover problem. If you want to put N queens on an NxN chess board such that none can attack each other, that is an exact cover problem. Solving sudoku is also an exact cover problem.
A more formal definition is on Wikipedia at https://en.wikipedia.org/wiki/Exact_cover. Interestingly it says that you can turn any NP problem into an exact cover problem!
Due to its NP-completeness, any problem in NP can be reduced to exact cover problems, which then can be solved with techniques such as Dancing Links. However, for some well known problems, the reduction is particularly direct. For instance, the problem of tiling a board with pentominoes, and solving Sudoku can both be viewed as exact cover problems.
I wanted to learn this algorithm to have something in my tool belt to approach these sorts of problems more intelligently, but I also think this will lead to new algorithms for generating noise textures (aka sampling matrices), and/or new types of situationally optimal noise.
Let’s talk about the details of the algorithms a bit and then look at some examples.
Exact Cover Problems
An exact cover problem is described by a binary matrix of 0s and 1s.
Using Knuth’s terminology, the columns are called items and are the things that are looking to be covered. For example, for placing objects on a chess board, there would be an item (matrix column) per chess board location.
Rows are called options and they contain one or more items that they cover. For example, a tetromino at a specific position on a chess board would cover multiple chess board locations. That option (matrix row) would have a 1 at each of those locations and a 0 at the rest of the locations.
The goal of an exact cover problem is to find a set of options which cover all items exactly once.
An extension to this idea is to have some items be optional. These items don’t have to be covered, but if they are covered, they can only be covered at most once. We’ll make use of this in the N-Queens example.
There are other extensions that I’ll briefly explain at the bottom of this post.
Algorithm X
Knuth’s Algorithm X is a method for solving exact cover problems. At the highest level, it works like this.
1. Choose an item to cover from the list of items (only choose from required items).
2. For each option that covers that item, try it as part of the solution:
   - Remove each item that option covers. All those items can be considered covered.
   - Remove all options that have items that were removed. They are no longer valid choices since they conflict.
   - Recurse, having Algorithm X solve the problem with the reduced items and options.
   - Undo making this option part of the solution and try the next option.
If the item list becomes empty (or only has optional items left), you have a valid solution.
If any item runs out of valid options, you’ve hit a dead end.
Depending on the problem you are trying to solve, you may find 0 solutions, 1 solution, or multiple solutions. If you are only looking for one solution instead of all possible solutions, you can exit out after finding a solution.
Note that even though you iterate through all the options that cover an item in step 2, you don’t need to iterate through the items in step 1. Recursing will choose the other items, and you will reach all possible solutions.
In step 1 you can choose an item to cover however you want. A very good way to do it, though, is to choose the item with the fewest options left, which is also "the hardest item to cover". That is what Knuth recommends, and from observation, it aggressively culls the search tree without changing the solutions that can be reached. In the IGN example, I'll talk about how this was the difference between a roughly 25 minute run of the algorithm and a less than 1 millisecond run of the algorithm for the same exact cover problem!
As far as the order that options are tried in, Knuth recommends either doing them in order, or in a randomized order.
In the code I made for this post, if you only want one solution, it will visit the options in random order, but if you want all solutions, it will visit them in order and save the cost of gathering up and shuffling the options.
Dancing Links
Dancing links is a technique that can be used when implementing Algorithm X to efficiently add and remove items and options.
One observation about Algorithm X is that the matrix is usually pretty sparse, with a lot of 0s. Usually an individual option only covers a few items, and the vast majority of items are left uncovered by that option.
To deal with this, you can make a doubly linked list of the items within an option, only storing the items that are actually covered.
Having a sparse matrix made up of doubly linked lists makes it easy to remove nodes, by just doing this (sketched here with pointers for clarity; the actual implementation below uses indices):
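// unlink the node from its neighbors
node->prev->next = node->next;
node->next->prev = node->prev;
// optionally, also clear the removed node's own pointers
node->prev = nullptr;
node->next = nullptr;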
An observation about doubly linked lists is that you can remove a node from a list but leave the node's own pointers intact: you do the first pair of lines above, but not the bottom pair. If you do that, you can put the node back into the list by just doing this:
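// the node's own pointers still point at its old neighbors, so relink it
node->prev->next = node;
node->next->prev = node;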
This allows you to undo any number of node removals, so long as you re-insert them in the reverse order that you removed them. This reverse order handling comes naturally in a recursive algorithm, such as Algorithm X.
Using doubly linked lists for a sparse matrix, and leaving removed nodes “dirty” to be able to quickly undo their removal are the core concepts of using dancing links to implement Algorithm X. This allows items and options to be removed and restored very efficiently, allowing Algorithm X to be very, very fast.
Implementation Details
Knuth's code implements the linked lists using integer indices into an array instead of pointers, with the reasoning that they can be smaller than a pointer (on x64, a 32 bit int is half the size) and so are more cache friendly. Another nice thing about integer indices is that they don't become invalid if you resize a dynamic array, which I use (Knuth uses fixed size arrays).
I have a list of items, where an item looks like this:
// An item is something to be covered
struct Item
{
    char name[8];
    int leftItemIndex = -1;
    int rightItemIndex = -1;
    int optionCount = -1;
};
The name is a label describing the item and is only useful for user friendliness. If you removed it, the algorithm would still work and it would probably be more cache friendly.
There is also an extra item on the end of the list that serves as the “root node” of the linked list. This is the node you start from when iterating through the items.
An “itemIndex” is the index that an item occurs in the m_items array. The left and right item index are the previous and next items in the available items list.
The "optionCount" is a count of how many options are currently available for an item. This helps the item with the lowest count be found more quickly when deciding which item to cover next.
The other data type is a node:
// An option is a sequential list of nodes.
struct Node
{
    // Spacer nodes use upNodeIndex as the index of the previous spacer node, and downNodeIndex as the next spacer node.
    // Non spacer nodes use these to get to the next option for the current item.
    int upNodeIndex = -1;
    int downNodeIndex = -1;

    // What item a node belongs to.
    // -1 means it is a spacer node.
    // There is a spacer node before and after every option.
    int itemIndex = -1;
};
Each item has a node, and in fact, the itemIndex is also the index into the m_nodes array where the node lives for that item.
The upNodeIndex and downNodeIndex go up and down in the sparse matrix, so starting at the node of an item and iterating through that list is how you find the options available for that item.
Options also have nodes. They have one node per item in the option, and are laid out sequentially in the m_nodes array so that you can add one to a node index to go to the next item in the option, or subtract one from the node index to go to the previous item in the option. The itemIndex of a node is the item that the node belongs to.
There is also a "spacer node" at the beginning of each option, which can be identified by having an itemIndex of -1. In a spacer node, the upNodeIndex is the previous spacer node, and the downNodeIndex is the next spacer node. This is useful when iterating through all items in an option, to be able to go back to the beginning of the option. There is a final spacer node after the last option too, which simplifies code by letting it assume that there is always a spacer node at the beginning and end of every option.
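As a sketch of how the spacer nodes get used, here is a hypothetical helper (not from the actual code) that visits every item of the option containing a given option node:

#include <vector>

// Visit every non spacer node of the option that contains nodeIndex.
// Assumes the Node struct above, and that spacer nodes have itemIndex == -1.
template <typename VISITFN>
void VisitOption(const std::vector<Node>& nodes, int nodeIndex, const VISITFN& visit)
{
    // walk left to the spacer node that begins this option...
    int i = nodeIndex;
    while (nodes[i].itemIndex != -1)
        i--;
    // ...then walk right, visiting nodes until the spacer node that ends it
    for (i = i + 1; nodes[i].itemIndex != -1; ++i)
        visit(nodes[i].itemIndex);
}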
The other thing worth mentioning is that when covering an item, you remove it from the list of items, and then have all current options of that item remove themselves from all other item lists except the list of the item being covered. This is necessary so that when the item is uncovered, you can iterate through the options that it still knows about, and have each one add itself back into the necessary lists (restoring the up and down pointers).
With the implementation details out of the way, let’s see this algorithm in action!
Basic Examples
The code for this section is in BasicExamples() in main.cpp.
The items are 1,2,3,4,5,6,7 and they are all required.
The options are:
147 (named A)
14 (named B)
457 (named C)
356 (named D)
2367 (named E)
27 (named F)
There is only a single solution, which is 14 (B), 356 (D), 27 (F).
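To make that concrete, below is a minimal, self contained Algorithm X sketch that solves this exact problem over a dense 0/1 matrix. It is illustration code only: no dancing links, no optional items, and it is not the implementation from main.cpp.

#include <cstdio>
#include <vector>

using Matrix = std::vector<std::vector<int>>; // rows are options, columns are items

// Algorithm X: cover all items exactly once using the live options in m.
void Solve(const Matrix& m, const std::vector<int>& activeItems,
           std::vector<char>& optionAlive, std::vector<int>& solution, const char* optionNames)
{
    // if no items are left to cover, we have a valid solution
    if (activeItems.empty())
    {
        printf("Solution:");
        for (int o : solution)
            printf(" %c", optionNames[o]);
        printf("\n");
        return;
    }

    // step 1: choose the item with the fewest live options covering it
    int bestItem = -1, bestCount = -1;
    for (int item : activeItems)
    {
        int count = 0;
        for (size_t o = 0; o < m.size(); ++o)
            if (optionAlive[o] && m[o][item])
                count++;
        if (bestItem < 0 || count < bestCount)
        {
            bestItem = item;
            bestCount = count;
        }
    }
    if (bestCount == 0)
        return; // an item has no options left: dead end

    // step 2: try each live option that covers the chosen item
    for (size_t o = 0; o < m.size(); ++o)
    {
        if (!optionAlive[o] || !m[o][bestItem])
            continue;

        // remove every item this option covers, and every option that conflicts with it
        std::vector<int> remainingItems;
        for (int item : activeItems)
            if (!m[o][item])
                remainingItems.push_back(item);
        std::vector<size_t> removedOptions;
        for (size_t o2 = 0; o2 < m.size(); ++o2)
        {
            if (!optionAlive[o2])
                continue;
            for (int item : activeItems)
            {
                if (m[o][item] && m[o2][item])
                {
                    optionAlive[o2] = 0;
                    removedOptions.push_back(o2);
                    break;
                }
            }
        }

        // recurse on the reduced problem, then undo
        solution.push_back((int)o);
        Solve(m, remainingItems, optionAlive, solution, optionNames);
        solution.pop_back();
        for (size_t r : removedOptions)
            optionAlive[r] = 1;
    }
}

int main()
{
    // items 1..7 are columns 0..6, options A..F are the rows
    Matrix m = {
        { 1, 0, 0, 1, 0, 0, 1 }, // A = 147
        { 1, 0, 0, 1, 0, 0, 0 }, // B = 14
        { 0, 0, 0, 1, 1, 0, 1 }, // C = 457
        { 0, 0, 1, 0, 1, 1, 0 }, // D = 356
        { 0, 1, 1, 0, 0, 1, 1 }, // E = 2367
        { 0, 1, 0, 0, 0, 0, 1 }, // F = 27
    };
    std::vector<int> activeItems = { 0, 1, 2, 3, 4, 5, 6 };
    std::vector<char> optionAlive(m.size(), 1);
    std::vector<int> solution;
    Solve(m, activeItems, optionAlive, solution, "ABCDEF");
    return 0;
}

Running it prints the single solution "B D F", matching the answer above.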
Wikipedia then goes on to talk about an "Exact Hitting Set" (https://en.wikipedia.org/wiki/Exact_cover#Exact_hitting_set), which is different from an exact cover problem, but is solved by transposing the exact cover matrix and then solving that. Check the Wikipedia page for a description of what sort of problems these are.
The items to cover are the names of the options: A, B, C, D, E, F.
Since the matrix is transposed from the exact cover problem, each option (row) is the list of options in the previous problem which contain each number 1 through 7:
AB (these are the ones that contained a 1 in the last problem)
EF (ditto for 2)
DE (3)
ABC (4)
CD (5)
DE (6)
ACEF (7)
Solving that gives a single answer again: AB, EF, CD.
These are basically diagnostic tests to help make sure the algorithm is working, and help show how it works.
The program found the solutions amazingly fast (< 1 millisecond), but that isn't too surprising as these are pretty simple problems.
N-Rooks
Now for something a little more tangible… let’s say we had a standard 8×8 chess board and we are trying to place 8 rooks on it such that no rook could attack any other rook.
The code for this section is in NRooks.h.
In this problem, each row must have a rook in it (and only 1), so we have 8 required items named Y0 through Y7. Each column must also have a rook in it (and only 1), so we have 8 more required items named X0 through X7, making for 16 items total.
You then make an option for each location on the chess board, which means 64 options total.
Within each option, you put a 1 for the row and column that are covered by placing a rook there. Putting a 1 in an item for an option just means having that item be in the list for that option.
For instance, placing a rook at location (0,0) on the chess board would put a 1 in X0 and a 1 in Y0. Placing a rook at location (4,5) would put a 1 in X4 and a 1 in Y5.
The program found 40,320 solutions in 66 milliseconds.
This is for a standard 8×8 chess board with 8 rooks on it, but the problem (and supplied code!) generalizes to an NxN chess board with N rooks on it and is called the N-Rooks problem.
N-Queens
The N-Queens problem is the same as the N-Rooks problem, but using queens instead of rooks.
The code for this section is in NQueens.h.
We use the same items for queens as we do for rooks, but we also add the diagonals since queens can attack diagonally. In this case, there are more than N diagonals though so we can’t require that all diagonals are used by queens. We deal with this by making the diagonals be optional items.
There are 2N-1 diagonals going left, and 2N-1 diagonals going right, so when N=8, we add 30 optional items to the 16 required items that rooks have.
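Here's a hypothetical sketch (not the code from NQueens.h) of how those diagonals can be indexed per queen position:

#include <cstdio>

int main()
{
    // a queen at (x,y) covers required items X{x} and Y{y}, plus two optional diagonal items.
    // each diagonal index ranges over 0 .. 2N-2, which is where the 2N-1 count comes from.
    const int N = 8;
    for (int y = 0; y < N; ++y)
    {
        for (int x = 0; x < N; ++x)
        {
            int diagLeft = x + y;            // "/" diagonals
            int diagRight = x - y + (N - 1); // "\" diagonals
            printf("queen (%d,%d): X%d Y%d, optional L%d R%d\n", x, y, x, y, diagLeft, diagRight);
        }
    }
    return 0;
}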
When N=12 it finds all answers in 600 ms. When N=13 it finds them in 2.7 seconds. When N=14 it finds them (365,596 solutions) in 14 seconds. Don't forget that this is a single threaded algorithm running on the CPU!
Sudoku
The code for this section is in Sudoku.h.
Sudoku is a game played on a 9×9 grid where there are some numbers 1-9 already present on the board and you need to fill in the blank spots such that each row has all numbers 1-9 exactly once, so does each column, and so does each 3×3 block of cells (9 total 3×3 blocks of cells, they don’t overlap).
For solving sudoku, the items to cover are all required and they are:
81 for cells: Each of the 81 cells must have a value (it doesn’t matter which value)
81 for columns: Each of the 9 columns must have each possible value 1-9.
81 for rows: Each of the 9 rows must have each possible value 1-9.
81 for blocks: Each of the nine 3×3 blocks must have each possible value 1-9.
1 for initial state: Since there is an initial state for the board, one option will describe the initial state and this item ensures that it is always part of any solution found.
That makes for a total of 325 items to cover.
As far as options go, there would be one option for each value in each cell, which means 81*9 = 729 options. We only make options for the cells that are blank in the initial state though, which cuts down the actual number of options needed. In our example, 30 cells are already filled in, leaving 51 blank. That means we'll only need 51*9 = 459 options.
There will also be one option for the initial state, which is the only one that has the initial state item covered, making for a total of 460 options.
As an example of what an option looks like, let’s consider the option of putting a 4 in (3,5). That option will have the following items:
Cell (3,5) has a value in it
Column 3 has a 4 in it.
Row 5 has a 4 in it.
Block (1,2) has a 4 in it.
Each option has to describe all the things it covers.
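As a hypothetical sketch (using the 1-based indexing of the example above, not the code from Sudoku.h), computing those items for "value v at cell (x,y)" could look like this:

#include <cstdio>

int main()
{
    // the four items covered by "value v at cell (x,y)", 1-based indexing
    int x = 3, y = 5, v = 4;
    int blockX = (x - 1) / 3 + 1;
    int blockY = (y - 1) / 3 + 1;
    printf("cell (%d,%d) has a value\n", x, y);
    printf("column %d has a %d in it\n", x, v);
    printf("row %d has a %d in it\n", y, v);
    printf("block (%d,%d) has a %d in it\n", blockX, blockY, v);
    return 0;
}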
Wikipedia has an example Sudoku problem that I plugged in (https://en.wikipedia.org/wiki/Sudoku) and then ran the program. It was able to find the same solution listed on wikipedia and verify that it only had one solution, in 2 milliseconds.
Next, let's look at a different problem: filling a 5×5 grid with the values 1 through 5 such that every plus shape (a cell and its 4 neighbors, wrapping around at the edges) contains each value exactly once. The blog post where I saw this shows how to find a formula for that grid, but let's see what this algorithm can come up with.
Our items to cover will be:
25 for cells: Each of the entries in the 5×5 matrix needs to have a value (but it doesn’t matter what value is there).
125 for plus shapes: There is a plus shape centered at each entry in the matrix, so 25 plus shapes. Each plus shape needs each value 1 through 5, so 25*5 = 125 items.
That makes for 150 items total.
As far as options, we need an option for each possible value at each possible location, which means 25*5 = 125 options.
As an example, the option for putting a 2 in (3,4) would have these items:
Cell (3,4) has a number
The plus with a center at (3,4) has a 2 in it.
The plus with a center at (2,4) has a 2 in it. (The plus to the left)
The plus with a center at (4,4) has a 2 in it. (The plus to the right)
The plus with a center at (3,3) has a 2 in it. (The plus above)
The plus with a center at (3,0) has a 2 in it. (The plus below. Note the y axis wraps here)
Running that finds 240 solutions in 14 milliseconds! The original blog post found the formula (x+3y)%5, but after looking at the results, I realized that (x+2y)%5 is also a solution. So that takes care of 2 solutions; what about the other 238?
Well, one thing to realize is that if you have a solution, you can swap any 2 numbers and have another valid solution. This means that you essentially have 5 symbols (0,1,2,3,4) that you can arrange in any way to find a different solution. That means 5! = 120 solutions. Since there are two different formulas for generating this 5×5 matrix, and each has 120 solutions, that accounts for the 240 solutions this found.
Something interesting is that I tried this with smaller (3×3 and 4×4) grid sizes and it wasn’t able to find a solution. Trying larger sized grids, I was only able to find solutions when the size was a multiple of 5, and it still would only find 240 solutions. This matrix tiles, so this all makes a decent amount of sense. The larger sizes that are multiples of 5 just found a solution by tiling the 5×5 solution, interestingly.
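As a quick sanity check (hypothetical verification code, not part of the post's source), we can confirm that both formulas satisfy the wrapped plus shape constraint:

#include <cstdio>

int main()
{
    // check that (x + k*y) % 5 puts all 5 values in every plus shape, with wrap around, for k = 2 and k = 3
    for (int k = 2; k <= 3; ++k)
    {
        bool ok = true;
        for (int y = 0; y < 5; ++y)
        {
            for (int x = 0; x < 5; ++x)
            {
                // center, left, right, up, down (wrapping at the edges)
                int xs[5] = { x, (x + 4) % 5, (x + 1) % 5, x, x };
                int ys[5] = { y, y, y, (y + 4) % 5, (y + 1) % 5 };
                bool seen[5] = {};
                for (int i = 0; i < 5; ++i)
                    seen[(xs[i] + k * ys[i]) % 5] = true;
                for (int v = 0; v < 5; ++v)
                    ok = ok && seen[v];
            }
        }
        printf("(x + %dy) %% 5: %s\n", k, ok ? "every plus shape has all 5 values" : "violated!");
    }
    return 0;
}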
Interleaved Gradient Noise (IGN)
The code for this section is in IGN.h.
In a previous blog post I wrote about Interleaved Gradient Noise, a low discrepancy grid created by Jorge Jimenez while he was at Activision (https://blog.demofox.org/2022/01/01/interleaved-gradient-noise-a-different-kind-of-low-discrepancy-sequence/). IGN is a screen space noise designed to leverage the neighborhood sampling that temporal anti aliasing (TAA) uses to decide when to keep or reject history. Using IGN as the source of per pixel random numbers in stochastic rendering under TAA thus gives better renders for the same cost. It works even better than blue noise in this situation, believe it or not.
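For reference, the IGN formula itself is tiny. Here it is as a C++ sketch, as usually quoted from that post:

#include <cmath>

// Interleaved Gradient Noise, returning a value in [0, 1) for a pixel coordinate
float IGN(int pixelX, int pixelY)
{
    float f = 0.06711056f * float(pixelX) + 0.00583715f * float(pixelY);
    f = f - std::floor(f);    // fract
    f = 52.9829189f * f;
    return f - std::floor(f); // fract
}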
This noise is a generalized sudoku where the rows and columns must all have values 1 through 9, but instead of only the 9 main 3×3 blocks needing to have each value 1 through 9, it’s EVERY 3×3 block needing to have that, including overlapping blocks. There is a block with a center at each location of the 9×9 matrix, so there are 81 3×3 blocks.
The items are then…
81 for cells: Each of the 81 cells must have a value (it doesn’t matter which value)
81 for columns: Each of the 9 columns must have each possible value 1-9.
81 for rows: Each of the 9 rows must have each possible value 1-9.
729 for blocks: Each cell is the center of a 3×3 block, and each of them must have each value 1-9. So 81*9 = 729 items.
That makes 81 + 81 + 81 + 729 = 972 items to cover.
Note there is no initial state in this setup, unlike the sudoku section. We want it to find ANY solution it can.
The number of options is still that each cell could have each possible value, so it's 81*9 = 729 options.
It rapidly found that there are no solutions. It took less than a millisecond, and the search space ended up being very small. Having the program print out the steps it was taking (look for SHOW_ALL_ATTEMPTS), I found that it quickly hit a situation that couldn't be satisfied, and that it was right about it.
Fun fact: I previously had it choosing items to cover from left to right, instead of covering the item with the fewest options remaining. When I did that, the program took 25 minutes to find the answer! The "cover the item with the fewest options" heuristic is pretty powerful, and gives some intuition for why the Wave Function Collapse algorithm (https://github.com/mxgmn/WaveFunctionCollapse) has the same heuristic: collapse whichever item has the fewest options.
So this algorithm confirms that there is no exact solution to the constraints that IGN sets up. IGN is an approximate solution though, and has numerical drift across space to distribute the error of not exactly solving the constraints. IGN itself was tuned by hand and eye, so I would bet it is not the optimal formula for approximately satisfying these constraints. I'm not sure how you'd find the optimal formula though (optimal in a least squared error sense, say?).
Anyhow, it made me think… as far as TAA is concerned, it's primarily the 3×3 block constraints that matter to the neighborhood sampling, not the row and column constraints. So, I made an IGNRelaxed() function which doesn't have them. After running for 13 minutes, it had found 49,536 solutions. I don't have an exact way to measure progress, but it didn't seem anywhere near done, so I killed it.
I’m not sure if this sampling pattern would be any good in practice. On one hand, it is optimized towards TAA’s neighborhood sampling, but it doesn’t have anything to fight regular structure or promote diversity of values in small regions. Maybe it has a use, maybe not. If you give it a try, let me know!
UPDATE: Extensions Briefly Explained
Here are some other extensions to this algorithm:
Knuth's DLX2 adds the ability for optional items to have a color, and to be covered by any number of options of the same color. An example use case is making a crossword puzzle: the letters could be the colors, and it's ok if two words claim the same letter in the same cell where the words overlap.
DLX3 allows you to specify a min and max number of coverings per item instead of requiring exactly 1 covering. This could let you set up constraints like putting N queens and 1 king on a chess board and making it so between 2 and 4 of the queens must be attacking the king.
DLX5 lets you give a cost per option, and a tax (discount) per item. This lets you find minimum cost solutions. The cost doesn’t drive the search though, unfortunately. It searches exhaustively and gives you the k best scoring solutions.
I’ll bet you can imagine other extensions as well. This basic algorithm seems very natural to modify towards other specific goals, or constraint types.
Closing & Links
Algorithm X using Dancing Links is super powerful and super fast. I feel like more folks need to know about these techniques.
My goal was to see if this algorithm (plus extensions) could be used as an alternative to void and cluster and simulated annealing for generating noise textures (sampling matrices). After learning more, it looks like it could, but only in narrow and rare use cases. It has the ability to score solutions, but the scoring doesn't drive the search. For making blue noise without any other hard constraints, it would just be a brute force search.
I think this algorithm could be made multithreaded if needed, by splitting the "search tree" up and giving a piece to each thread. It's hard to know how to split this fairly though, so it might need some sort of work stealing setup. I'm not real sure, but it seems doable.
Before we get into umbral calculus I want to talk about discrete calculus.
Discrete calculus lets you do calculus operations on discrete functions: functions f(x) where x has to be an integer. Limits can no longer get arbitrarily close to 0 in discrete calculus, since the smallest difference between integers is 1. One consequence of that is that derivatives (aka deltas, aka Δ) are calculated using finite differences, commonly forward differences. With the smallest valued difference being 1, this falls out naturally from the definition of the derivative by setting h to 1:
Δf(x) = f(x+1) - f(x)
A nice thing is that there are lots of similarities between discrete calculus and regular calculus, such as the derivative of a constant being zero: Δc = c - c = 0.
But there are plenty of differences. The product rule in regular calculus is different than the one in discrete calculus, for example: Δ(f·g) = f·Δg + g·Δf + Δf·Δg. (Note: the extra term Δf·Δg exists because it is multiplied by h in the definition of the derivative, and h goes to 0 in regular calculus but is 1 in discrete calculus.)
Here’s where umbral calculus comes in. It lets you turn a discrete calculus problem into a continuous calculus problem, and it lets you turn a continuous calculus problem into a discrete one.
In this post we are going to leverage umbral calculus to craft functions that sum discrete functions. For example, if you have a discrete quadratic polynomial f, you'll get a cubic polynomial g that gives you the sum of f(n) for n ranging from 0 to x-1.
How To Do It
Firstly, we need a function f that spits out the numbers we want to sum. The function's input needs to be integers, but its output doesn't need to be.
Once we have our function f, the process to get the function g that calculates sums of f is this:
Turn the discrete function into a continuous function.
Integrate the continuous function.
Convert the result from a continuous function back into a discrete function.
If we use Φ (phi) to denote converting a function from continuous to discrete, and Φ^-1 to denote converting a function from discrete to continuous, it looks like this mathematically:
g(x) = Φ( ∫ Φ^-1( f(x) ) dx )
When you plug an integer x into g, you get out the sum of f(n) for all values of n from 0 to x-1.
If you want to get the sum of all values of f(n) for n from a to b, you can calculate that as g(b+1) - g(a). This works because if you wanted the sum of the numbers from 10 to 20, you could sum up all the numbers from 0 to 20 (that's g(21)) and subtract out the sum of the numbers from 0 to 9 (that's g(10)). This is also a discrete calculus definite integral.
That's the overview; let's talk about how to calculate Φ and Φ^-1.
Calculating Φ
Phi is the operator for converting continuous functions to discrete functions. Phi of powers of x is really easy to compute: it turns powers of x into falling factorials of x, which are denoted x^(n) (read as "x to the falling n"). So x^n becomes x^(n) = x(x-1)(x-2)…(x-(n-1)). For example, Φ(x^2) = x(x-1) and Φ(x^3) = x(x-1)(x-2).
The oddity of this mysterious conversion is what gave umbral (shadow) calculus its name. It took a while for people to understand why this works and to formalize it. Until then, they just knew that it worked (though an imperfect understanding led to counter cases where it didn't always work), so it was a bit mysterious.
The above makes it very easy to work with polynomials. A couple of sections from now there is a cheat sheet with a couple other transformations I scrounged up.
Some other rules of the phi operator include that it is linear, and that phi of a constant is just that constant:
Φ( a·f(x) + b·g(x) ) = a·Φ(f(x)) + b·Φ(g(x))
Φ(c) = c
Φ( x·f(x) ) ≠ x·Φ(f(x))
The last one is meant to show that left and right multiplication are not the same thing, so ordering needs to be carefully considered and conserved.
Here are a couple other useful properties that might be helpful, that I've scraped from other sources 🙂
Δ2^x = 2^x (the discrete analog of e^x being its own derivative)
Φ(e^x) = 2^x
The last one implies that in discrete calculus, e is 2. That is strange, isn't it? If you work it out with a few values, you can see that it's true!
This section is far from exhaustive! Check out the links at the bottom of this post for some more information.
Let's apply phi to a polynomial as an example of how it works:
Φ(x^2 + 3x + 5) = Φ(x^2) + 3Φ(x) + 5 = (x^2 - x) + 3x + 5 = x^2 + 2x + 5
Calculating Inverse Φ
Calculating inverse phi is less straight forward – at least as far as I know. There may be a more formulaic way to do it that I don’t know of. I’ll show the inverse phis I know and how they are derived, which should hopefully help if you need others.
x
Firstly, the inverse phi of x is just x: Φ^-1(x) = x. That's an easy one.
x squared
Next up is the inverse phi of x squared. We know what phi of x squared is from the last section, so let's start with that:
Φ(x^2) = x^2 - x
Since x is the same as the phi of x, we can replace the last term with phi of x:
Φ(x^2) = x^2 - Φ(x)
Now let's get the naked x squared term on the left side, and all the phi terms on the right side:
x^2 = Φ(x^2) + Φ(x)
Lastly, we'll left multiply both sides by inverse phi, and we are done!
Φ^-1(x^2) = x^2 + x
x cubed
Let's start with the phi of x cubed:
Φ(x^3) = x(x-1)(x-2) = x^3 - 3x^2 + 2x
Let's turn 2x into 3x - x:
Φ(x^3) = x^3 - 3x^2 + 3x - x
Next we factor -3 out of the middle two terms:
Φ(x^3) = x^3 - 3(x^2 - x) - x
The middle term is 3 times phi of x squared, so we can replace it with that. The -x on the end outside of the parens is also just negative phi of x, so we'll replace that too:
Φ(x^3) = x^3 - 3Φ(x^2) - Φ(x)
Like last time, let's get the phi terms on the right and the non phi term on the left:
x^3 = Φ(x^3) + 3Φ(x^2) + Φ(x)
And again like last time, we can now left multiply by inverse phi to finish!
Φ^-1(x^3) = x^3 + 3x^2 + x
Cheat Sheet
Here is a table of phi and inverse phi transforms for low powers of x:
Φ(x) = x                            Φ^-1(x) = x
Φ(x^2) = x^2 - x                    Φ^-1(x^2) = x^2 + x
Φ(x^3) = x^3 - 3x^2 + 2x            Φ^-1(x^3) = x^3 + 3x^2 + x
Φ(x^4) = x^4 - 6x^3 + 11x^2 - 6x    Φ^-1(x^4) = x^4 + 6x^3 + 7x^2 + x
The coefficients on the polynomials made from powers of x are Stirling numbers. Phi relates to signed Stirling numbers of the first kind, and inverse phi relates to Stirling numbers of the second kind. That fact helps if you need to work with higher powers of x.
Example 1: f(x) = x
Let’s use what we know to come up with a function which sums all integers from 0 to x-1.
Our recipe for making g from f starts out like this:
g(x) = Φ( ∫ Φ^-1( f(x) ) dx )
Our function is just f(x) = x, so we start at:
g(x) = Φ( ∫ Φ^-1(x) dx )
The inverse phi of x is just x:
g(x) = Φ( ∫ x dx )
Integrating x gives us one half x squared. We can move the half outside of the phi:
g(x) = Φ( x^2 / 2 ) = (1/2) Φ(x^2)
We know the phi of x squared, so we can plug it in and be done!
g(x) = (x^2 - x) / 2
If we plug in 1 to get the sum of all integers from 0 to 0, we get 0.
If we plug in 2 to get the sum of all integers from 0 to 1, we get 1. That is 0 + 1.
Plugging in 3 for summing 0 to 2, we get 3. That is 0 + 1 + 2.
Plugging in 4 gives us 6. That is 0 + 1 + 2 + 3.
Plugging in 5 gives us 10. Plugging in 10 gives us 45. 100 gives us 4950. 200 gives us 19900. Plugging in a million gives us 499,999,500,000!
If we wanted to sum the integers between 5 and 9, we can do that too! We subtract g(5) from g(10), which is 45-10 or 35. That is 5+6+7+8+9.
So, we know that summing the integers between 100 and 199 is 14,950, since we have g(100) and g(200) above, which are 4950 and 19900 respectively.
Example 2: f(x) = x^2
Let's find the function to sum squares of integers, so f(x) = x^2:
g(x) = Φ( ∫ Φ^-1(x^2) dx )
Applying inverse phi…
g(x) = Φ( ∫ (x^2 + x) dx )
integrating…
g(x) = Φ( x^3/3 + x^2/2 )
Splitting it into two phi terms, and moving the fractions out of phi…
g(x) = (1/3) Φ(x^3) + (1/2) Φ(x^2)
Applying phi…
g(x) = (x^3 - 3x^2 + 2x)/3 + (x^2 - x)/2 = x^3/3 - x^2/2 + x/6
Plugging in 2 gives 1, which is 0 + 1.
3 gives 5, which is 0 + 1 + 4.
4 gives 14, which is 0 + 1 + 4 + 9.
5 gives 30, which is 0 + 1 + 4 + 9 + 16.
if we want to get the sum of values for x between 2 and 4, we calculate g(5)-g(2), which is 30-1 = 29. That is 4+9+16.
It works!
Example 3: f(x) = x^2 – x/2
This last example is to show that while the function f has to take in integers, it doesn’t have to give out integers.
Let's do the dance…
g(x) = Φ( ∫ Φ^-1( x^2 - x/2 ) dx )
     = Φ( ∫ ( x^2 + x - x/2 ) dx )
     = Φ( ∫ ( x^2 + x/2 ) dx )
     = Φ( x^3/3 + x^2/4 )
     = (1/3) Φ(x^3) + (1/4) Φ(x^2)
     = (x^3 - 3x^2 + 2x)/3 + (x^2 - x)/4
     = x^3/3 - 3x^2/4 + 5x/12
Plugging in 1 gives 0.
2 gives 1/2, which is 1 – 1/2.
3 gives 3.5, which is (1-1/2) + (4-1).
4 gives 11, which is (1-1/2) + (4-1) + (9-1.5).
5 gives 25, which is (1-1/2) + (4-1) + (9-1.5) + (16-2).
6 gives 47.5, which is (1-1/2) + (4-1) + (9-1.5) + (16-2) + (25-2.5).
It works!
Closing and Links
Finding formulas that sum ranges of discrete functions is pretty cool, but is it very useful?
Well, if you divide a sum by the count of values in the sum, that gives you an average.
That average is also equivalent to doing Monte Carlo integration using regular spaced samples. I’m not a fan of regular spaced samples, but for constant time sampling of arbitrary sample counts, I think I might make an exception!
How about converting problems between the continuous and discrete domain, is that very useful for other things? Not real sure.
Can this be extended to multi dimensional integrals? I bet so!
If you have more info about any of the above, I’d love to hear it!
This video is what first turned me onto this topic and it is quite good. I’ve watched it about 10 times, trying to scrape all the info out of it. https://www.youtube.com/watch?v=D0EUFP7-P1M
The title of this post is a mouthful but the topic is pretty useful.
Let’s say that we have two histograms:
1,2,3,1,0,0,0,0,0,0
0,0,0,0,0,0,1,2,3,1
When you want to see how similar two histograms are, some common ways are:
Treat the histograms as high dimensional vectors and dot product them. Higher values mean more similar.
Treat the histograms as high dimensional points and calculate the distance between them, using the regular distance formula (aka the L2 norm or Euclidean distance), taxicab distance (aka the L1 norm), or others. Lower values mean more similar.
Those aren’t bad methods, but they aren’t always the right choice depending on your needs.
If you dot product these histograms as 10 dimensional vectors, you get a value of zero because there’s no overlap. If there are more or fewer zeros between them, you still get zero. This gives you no idea how similar they are other than they don’t overlap.
If you calculate the distance between these 10 dimensional points, you get a non zero value, which is nice, but adding or removing zeros between them doesn't change the value. You can see this in how the length of the vector (1,0,1) is the square root of 2, which is also the length of (1,0,0,1) and (1,0,0,0,1). Distance doesn't really tell you how close the nonzero parts are on the x axis.
In this case, let’s say we want a similarity metric where if we separate these histograms with more zeros, it tells us that they are less similar, and if we remove zeros or make them overlap, it tells us that they are more similar.
This is what the p-Wasserstein distance gives us. When we interpolate along this distance metric from the first histogram to the second, the first histogram slides to the right, and this is actually 1D optimal transport. This is in contrast to us just linearly interpolating the values of these histograms, where the first histogram would shrink as the second histogram grew.
The title of this post says that we will be working with both histograms and PDFs (probability density functions). A PDF is sort of like a continuous version of a histogram: it integrates to 1 and is limited to non negative values. In fact, if a histogram has only non negative values and sums up to 1, it can be regarded as a PMF, or probability mass function, which is the discrete version of a PDF and is used for things like dice rolls, coin flips, and other discrete valued random events.
You can do similar operations with PDFs as you can with histograms to calculate similarity.
Instead of calculating a dot product between vectors, you take the integral of the product of the two functions: ∫ f(x)·g(x) dx. Higher values mean more similar, like with the dot product. This is a kind of continuous dot product.
Instead of calculating a distance between points, you take the integral of the absolute value of the difference of the two functions raised to some power, then take the result to one over that same power: ( ∫ |f(x) - g(x)|^p dx )^(1/p). Lower values mean more similar, like with distance. This is a kind of continuous distance calculation.
Also, any function that doesn’t have negative y values can be normalized to integrate to 1 over a range of x values and become a PDF. So, when we talk about PDFs in this post, we are really talking about any function that doesn’t have negative values over a specific range.
The p-Wasserstein distance between two PDFs f and g is defined like this:
W_p(f, g) = ( ∫ from 0 to 1 of |F^-1(x) - G^-1(x)|^p dx )^(1/p)
That looks a bit like the continuous distance integral we saw in the last section, but there are some important differences.
Let me explain the symbols in this function. First is p, which is where the p in p-Wasserstein distance comes from. It can be 1, 2, 1.5, or any other number. If using the value 2, you'd call it 2-Wasserstein distance, and it would be dealing with the squares of differences between the functions F^-1 and G^-1.
Let's say that we have a PDF called f. We can integrate that to get a CDF, or cumulative distribution function, that we'll call F. We can then switch x and y and solve for y to get the inverse of that function, or F^-1. If we have another PDF called g, we can follow the same process to get G^-1.
When we calculate the integral above, either analytically / symbolically if we are able, or numerically otherwise, we get the p-Wasserstein distance which has the nice properties we described before.
A fun tangent: if you have the inverse CDF of a PDF, you can plug in a uniform random number in [0,1] to the inverse CDF and get a random number drawn from the original PDF. It isn’t always possible or easy to get the inverse CDF of a PDF, but this also works numerically – you can make a table out of the PDF, make a table for the CDF, and make a table for the inverse CDF from that, and do the same thing. More info on all of that at https://blog.demofox.org/2017/08/05/generating-random-numbers-from-a-specific-distribution-by-inverting-the-cdf/.
Calculating 2-Wasserstein Distance of Simple PDFs
Let’s start off by calculating some inverse CDFs of simple PDFs, then we’ll use them in the formula for calculating 2-Wasserstein distances.
Uniform PDF
First is the PDF y = 1, which is for uniform random values. It integrates to 1 for x between 0 and 1 and is positive in that range as well.
If we integrate that to get the CDF, we get y = x.
If we flip x and y, and solve again for y to get the inverse CDF, we also get y = x.
Linear PDF
The next PDF is y = 2x. It integrates to 1 for x between 0 and 1 and is positive in that range as well.
If we integrate that, the CDF is y = x^2.
Flipping x and y and solving for y gives us the inverse CDF y = √x.
Quadratic PDF
The last PDF is y = 3x^2. It integrates to 1 for x between 0 and 1 and is positive in that range as well.
If we integrate that, the CDF is y = x^3.
Flipping x and y and solving for y gives us the inverse CDF y = x^(1/3) (the cube root of x).
2-Wasserstein Distance From Uniform to Linear
The 2-Wasserstein distance formula looks like this:
W_2(f, g) = ( ∫ from 0 to 1 of |F^-1(x) - G^-1(x)|^2 dx )^(1/2)
Let's square the function for now; we'll square root the answer later. Let's also replace F^-1 with the uniform inverse CDF (x), and G^-1 with the linear inverse CDF (√x). We can drop the absolute value since the square does that for us.
∫ from 0 to 1 of (x - √x)^2 dx
Expanding the equation, we get:
∫ from 0 to 1 of (x^2 - 2x^(3/2) + x) dx
Luckily we can break that integral up into 3 integrals:
∫ x^2 dx - 2 ∫ x^(3/2) dx + ∫ x dx (each from 0 to 1)
https://www.wolframalpha.com/ comes to the rescue here where you can ask it “integrate y=x^2 from 0 to 1” etc to get the answer to each integral.
Those integrals evaluate to 1/3 - 4/5 + 1/2 = 1/30. Remembering that we removed the square root from the formula at the beginning, we have to square root this to get our real answer: √(1/30) = 0.18257418583…
2-Wasserstein Distance From Uniform to Quadratic
We can do this again, measuring the distance between uniform and quadratic:
∫ from 0 to 1 of (x - x^(1/3))^2 dx = ∫ (x^2 - 2x^(4/3) + x^(2/3)) dx = 1/3 - 6/7 + 3/5 = 8/105
Square rooting that gives:
√(8/105) = 0.27602622373…
2-Wasserstein Distance From Linear to Quadratic
We can do this again between linear and quadratic:
∫ from 0 to 1 of (√x - x^(1/3))^2 dx = ∫ (x - 2x^(5/6) + x^(2/3)) dx = 1/2 - 12/11 + 3/5 = 1/110
Square rooting that gives:
√(1/110) = 0.09534625892…
These results tell us that uniform and quadratic are most different, and that linear and quadratic are most similar. Uniform and linear are in the middle. That matches what you’d expect when looking at the graphs.
Calculating p-Wasserstein Distance of More Complicated PDFs
If you have a more complex PDF that you can’t make an inverted CDF of, you can still calculate the distance metric numerically.
You start off by making a table of PDF values at evenly spaced intervals, then divide all entries by the sum of all entries to make it a tabular form of the PDF (it sums to 1). You then make a CDF table where each entry is the sum of PDF values at or before that location.
Now to evaluate the inverted CDF for a value x, you can do a binary search of the CDF table to find where that value x is. You also calculate what percentage of the way the value is between the two entries it falls between, so that you have a better approximation of the inverted CDF value. As an alternative to a binary search, you could make an ICDF table instead.
If that sounds confusing, just imagine you have an array where the x axis is the array index and the y axis is the value in the array. This is like a function where if you have some index you can plug in as an x value, it gives you a y axis result. That is what the CDF table is. If you want to evaluate the inverted CDF or ICDF, you have a y value, and you are trying to find what x index it is at. Since the y value may be between two index values, you can give a fractional index answer to make the ICDF calculation more accurate.
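Here is a hypothetical sketch of that evaluation (EvaluateICDF is a made-up helper, not from the post's code), assuming a CDF table that ends at 1.0:

#include <algorithm>
#include <vector>

// Evaluate an inverse CDF from a CDF table.
// probability is in [0,1]; the return value is a fractional position in [0,1].
float EvaluateICDF(const std::vector<float>& cdf, float probability)
{
    // binary search for the first CDF entry >= probability
    auto it = std::lower_bound(cdf.begin(), cdf.end(), probability);
    if (it == cdf.end())
        return 1.0f;
    size_t index = size_t(it - cdf.begin());
    // fraction of the way between the previous entry and this one
    float lastCDF = (index > 0) ? cdf[index - 1] : 0.0f;
    float frac = (*it > lastCDF) ? (probability - lastCDF) / (*it - lastCDF) : 0.0f;
    // turn (index + frac) into a position in [0, 1]
    return (float(index) + frac) / float(cdf.size());
}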
Now that you can make an inverted CDF table of any PDF, we can use Monte Carlo integration to calculate our distance metric.
Let's look at the formula again:
W_p(f, g) = ( ∫ from 0 to 1 of |F^-1(x) - G^-1(x)|^p dx )^(1/p)
To calculate the inner integral, we pick a bunch of x values between 0 and 1, plug them into |F^-1(x) - G^-1(x)|^p, and average the results that come out. After that, we take the answer to the 1/p power to get our final answer. The more samples you take of x, the more accurate the answer is.
Here is some C++ that implements that, with the ICDF functions being template parameters so you can pass any callable thing (lambda, function pointer, etc) in for evaluating the ICDF. The RandomFloat01() and Lerp() helpers it relies on are defined here too, so the snippet is self contained.

#include <cmath>
#include <random>

// Uniform random float in [0, 1)
float RandomFloat01()
{
    static std::mt19937 rng{ std::random_device{}() };
    return std::uniform_real_distribution<float>(0.0f, 1.0f)(rng);
}

// Linear interpolation; used below to keep a running average
double Lerp(double a, double b, double t)
{
    return a + (b - a) * t;
}

template <typename ICDF1, typename ICDF2>
float PWassersteinDistance(float p, const ICDF1& icdf1fn, const ICDF2& icdf2fn, int numSamples = 10000000)
{
    // https://www.imagedatascience.com/transport/OTCrashCourse.pdf page 45
    // Integral from 0 to 1 of abs(ICDF1(x) - ICDF2(x))^p
    // Then take the result to ^(1/p)
    double ret = 0.0;
    for (int i = 0; i < numSamples; ++i)
    {
        float x = RandomFloat01();
        float icdf1 = icdf1fn(x);
        float icdf2 = icdf2fn(x);
        double y = std::pow(std::abs((double)icdf1 - (double)icdf2), p);
        ret = Lerp(ret, y, 1.0 / double(i + 1)); // running average of the samples
    }
    return (float)std::pow(ret, 1.0 / p);
}
If both ICDFs are tables, that means they are both piecewise linear functions. You can get a noise free, deterministic calculation of the integral by integrating the individual sections of the tables, but the above is a generic way that will work for any (and mixed) methods for evaluating ICDFs.
Below is a table of the 2-Wasserstein distances calculated different ways. The analytic truth column is what we calculated in the last section and is the correct answer. The numeric function column is us running the above numerical integration code, using the inverted CDF we made symbolically in the last section. The last column “numeric table” is where we made a tabular inverted CDF by taking 10,000 PDF samples and making a 100 entry CDF, then using binary search to do inverse CDF evaluations.
                       Analytic Truth    Numeric Function    Numeric Table
Uniform To Linear      0.18257418583     0.182568            0.182596
Uniform To Quadratic   0.27602622373     0.276012            0.275963
Linear To Quadratic    0.09534625892     0.095327            0.095353
For kicks, here are p values of 1 and 3 to compare against 2. Calculated using the “numeric table” method.
                       p=1         p=2         p=3
Uniform To Linear      0.166609    0.182596    0.192582
Uniform To Quadratic   0.249933    0.275963    0.292368
Linear To Quadratic    0.083308    0.095353    0.103187
Calculating p-Wasserstein Distance of PMFs
If you have a random variable with discrete random values – like a coin toss, or the roll of a 6 sided die – you start with a PMF (probability mass function) instead of a PDF. A PMF says how likely each discrete event is, and all the probabilities add up to 1.0. The PMF is probably already a table. You can then turn it into a CDF and continue on like we did in the last section. Let's see how to make a CDF from a PMF.
When you evaluate y=CDF(x), the y value you get is the probability of getting the value x or lower. When you turn a PMF into a CDF, since it’s discrete, you end up with a piecewise function.
For instance, here is the PMF 0.1, 0.3, 0.2, 0.4.
The CDF is 0.1, 0.4, 0.6, 1.0.
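Building the CDF table is just a running sum. Here's a minimal sketch (PMFToCDF is a hypothetical helper, not from the post's code):

#include <numeric>
#include <vector>

// turn a PMF table into a CDF table: each entry is the sum of all PMF entries at or before it
std::vector<float> PMFToCDF(const std::vector<float>& pmf)
{
    std::vector<float> cdf(pmf.size());
    std::partial_sum(pmf.begin(), pmf.end(), cdf.begin());
    return cdf;
}

// PMFToCDF({ 0.1f, 0.3f, 0.2f, 0.4f }) gives { 0.1f, 0.4f, 0.6f, 1.0f }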
With a CDF you can now do the same steps as described in the last section.
Interpolating PDFs & Other Functions
Interpolating over p-Wasserstein distance is challenging in the general case. I'm going to dodge that for now, and just show the nice and easy way to interpolate over 1-Wasserstein distance.
First let's see what happens if we interpolate two gaussian PDFs directly, by linearly interpolating PDF(x) for each x to make a new PDF: the gaussian on the left shrinks as the gaussian on the right grows.
If we instead lerp ICDF(x) for each x to make a new ICDF, invert it to make a CDF, then differentiate it to get a PDF, we get a very interesting optimal transport interpolation: the first gaussian slides across the x axis to become the second.
For a second example here is a uniform PDF interpolating into Gaussian, by lerping the PDFs and then the ICDFs.
As a last set of examples, here is uniform to quadratic. First by lerping the PDFs, then by lerping the ICDFs.
It sounds like a lot of work interpolating functions by integrating the PDFs or PMFs to CDFs, inverting them to ICDFs, lerping, then inverting and differentiating, and it really is a lot of work.
However, like I mentioned before, if you plug a uniform random value into an ICDF, it gives you an unbiased random value from the PDF. This means that in practice, if you are generating non uniform random numbers, you often already have an ICDF handy for each distribution, either symbolically or numerically in a table. That makes this whole process a lot simpler: plug your uniform random value x into the two ICDFs, and return the result of lerping the two outputs. That's all there is to generating random numbers from a PDF interpolated over 1-Wasserstein distance, which is extremely simple and very runtime friendly for graphics and game dev folks. Again, more simply: take one uniform random number, turn it into 2 random numbers via the two ICDFs, and interpolate them.
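In code, that boils down to something like this hypothetical sketch, reusing the RandomFloat01() and Lerp() helpers from earlier:

// interpolate between two distributions over 1-Wasserstein distance, at sample generation time
template <typename ICDF1, typename ICDF2>
float SampleInterpolatedPDF(const ICDF1& icdf1fn, const ICDF2& icdf2fn, float t)
{
    float x = RandomFloat01();                     // one uniform random number
    return (float)Lerp(icdf1fn(x), icdf2fn(x), t); // lerp the two ICDF results
}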
If you are looking for some intuition as to why interpolating the ICDF does what it does, here are some bullet points.
PDF(x) or PMF(x) tells you how probable x is.
CDF(x) tells you how probable it is to get the value x or lower.
ICDF(x) tells you the value you should get, for a uniform random number x.
Interpolating PDF1(x) to PDF2(x) makes the values in PDF1 less likely and the values in PDF2 more likely. It’s shrinking PDF1 on the graph, and growing PDF2.
Interpolating ICDF1(x) to ICDF2(x) gets the values that the two PDFs would give you for probability x, and interpolates between them. If a random roll would get you a 3 from the first PDF, and a 5 from the second PDF, this interpolates between the 3 and the 5.
In this section we’ve talked about linear interpolation between two PDFs, but this can actually be generalized.
A linear interpolation is just a degree 1 (linear) Bezier curve, which has 2 control points that are interpolated between. The formula for a linear Bezier curve is just A(1-t)+Bt which looks exactly like linear interpolation, because it is! With a quadratic Bezier curve, you have 3 control points, which means you could have three probability distributions you were interpolating between, but it would only pass through the two end points – the middle PDF would never be fully used, it would just influence the interpolation. You can do cubic and higher as well for more PDFs to interpolate between. Check this out for more info about Bezier curves. https://blog.demofox.org/2015/05/25/easy-binomial-expansion-bezier-curve-formulas/.
A linear interpolation is a Bezier curve on a one dimensional 1-simplex, aka a line, with barycentric coordinates (s,t) where s=(1-t). The formula for turning barycentric coordinates into a value is As+Bt, which is again just linear interpolation. You can take this to higher dimensions though. In two dimensions, you have a 2-simplex, aka a triangle, with barycentric coordinates (u,v,w). The formula for getting a value on a (linear) Bezier triangle is Au+Bv+Cw, where A, B and C would be 3 different PDFs in this case. Higher order Bezier triangles exist too. Have a look here for more information: https://blog.demofox.org/2019/12/07/bezier-triangles/
To see how to linearly interpolate in other p-Wasserstein distance metrics, give this link below a read. I’ll be doing a post on optimal transport in higher dimensions before too long and will revisit it then.