# Taking a Stroll Between The Pixels

This post relates to a paper I wrote which talks about (ab)using linear texture interpolation to calculate points on Bezier curves. Extensions generalize it to Bezier surfaces and (multivariate) polynomials. All that can be found here: https://blog.demofox.org/2016/02/22/gpu-texture-sampler-bezier-curve-evaluation/

The original observation was that if you sample along the diagonal of a 2×2 texture, you get, as output, points on a quadratic Bezier curve whose control points are the values of the pixels, like in the image below. When I say you get a quadratic Bezier curve, I mean it literally, and exactly. One way of looking at what’s going on is that the texture interpolation is literally performing the De Casteljau algorithm. (Note: if the “B” values are not equal in the setup below, the 2nd control point will be the average of these two values, which an extension abuses to fit more curves into a smaller number of pixels.)

An item that’s been on my todo list for a while is to look and see what happens when you sample off of the 45 degree diagonal between the pixel values. I was curious about questions like:

• What if we sampled across a different line?
• What if we sampled along a quadratic curve, like $y=x^2$?
• What if we sampled on a circle or a sine wave?
• How do these changed sampling patterns work in higher dimensions – like trilinear or quadrilinear interpolation?

After accidentally coming across the answer to the first question, it was time to look into the other ones too!

PS – if wondering “what use can any of this possibly have?” the best answer I have there is data compression for data on the GPU. If you can fit your data with piecewise rational polynomials, the ideas of this technique could be useful for storing that data in a concise way (pixels in a texture) that is also quickly and easily decoded by the GPU. The ideas from this post allow for more curve types when fitting and storing your data, beyond piecewise rational polynomials. It’s also possible to store higher order curves and surfaces in smaller amounts of texture data.

# Quick Setup: Bilinear Interpolation Formula

Bilinear interpolation is available on modern GPUs as a way of getting sub-pixel detail. In the olden days, when zooming into a texture, the square pixels just got larger because nearest neighbor filtering was used. In modern times, when looking at the space between pixel values, bilinear interpolation is used to fill in the details better than nearest neighbor does.

You can describe bilinear interpolation as interpolating between two pairs of values across the x axis, then interpolating between those results across the y axis (reversing the order of the axes also works). Mathematically, that can look like this:

$z = (A(1-x) + Bx)(1-y) + (C(1-x)+Dx)y$

Where x and y are values between 0 and 1 describing where the point is between the pixels, and A,B,C,D are the values of the 4 nearest pixels, which form a box around the point we are calculating. A = (0,0), B = (1,0), C = (0,1) and D = (1,1).

With some algebra, you can get that equation into a power series form which is going to be easier to work with in our experiments:

$z = (A-B-C+D)xy + (B-A)x + (C-A)y + A$
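
To make that concrete, here’s a minimal C++ sketch (function names are my own, not from the paper) that evaluates bilinear interpolation both as two nested lerps and in the power series form, and shows that they agree:

```cpp
#include <cstdio>

// bilinear interpolation as two nested lerps:
// lerp across x on the bottom and top rows, then lerp those results across y.
float BilinearLerp(float A, float B, float C, float D, float x, float y)
{
    float bottom = A * (1.0f - x) + B * x;
    float top    = C * (1.0f - x) + D * x;
    return bottom * (1.0f - y) + top * y;
}

// the same thing rearranged into power series form:
// z = (A-B-C+D)xy + (B-A)x + (C-A)y + A
float BilinearPowerSeries(float A, float B, float C, float D, float x, float y)
{
    return (A - B - C + D) * x * y + (B - A) * x + (C - A) * y + A;
}

int main()
{
    const float A = 0.1f, B = 0.7f, C = 0.3f, D = 0.9f;
    for (float y = 0.0f; y <= 1.0f; y += 0.25f)
        for (float x = 0.0f; x <= 1.0f; x += 0.25f)
            printf("(%0.2f, %0.2f): %f vs %f\n", x, y,
                   BilinearLerp(A, B, C, D, x, y),
                   BilinearPowerSeries(A, B, C, D, x, y));
    return 0;
}
```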

Now that we have our formula, we can begin! 🙂

# Sampling Along Other Lines

So, if we sample along the diagonal from A to D, we know that we get a quadratic equation out. What happens if we sample along other lines though?

My guess, before I knew the answer, was that since the 45 degree line gives a quadratic (degree 2), and horizontal and vertical lines give linear results (degree 1), sampling along other lines must give a fractional degree polynomial somewhere between 1 and 2. It turns out that isn’t the answer, but I wonder if there’s a way to interpret the “real answer” as a fractional degree polynomial?

Anyways, wikipedia clued me in: https://en.wikipedia.org/wiki/Bilinear_interpolation#Nonlinear

“The interpolant is linear along lines parallel to either the x or the y direction, equivalently if x or y is set constant. Along any other straight line, the interpolant is quadratic.”

What that means is that if you walk along a horizontal or vertical line, it’s going to be linear. Any other line will be quadratic.

Let’s try it out.

Remembering that the equation for a linear function is $y=mx+b$, let’s literally replace $y$ with $mx+b$ and see what we get out.

$z = (A-B-C+D)xy + (B-A)x + (C-A)y + A$

Which becomes this after substitution:

$z = (A-B-C+D)x(mx+b) + (B-A)x + (C-A)(mx+b) + A$

After some expansion and simplification we get this:

$z = (Am-Bm-Cm+Dm)x^2+(Ab-Bb-Cb+Db+Cm-Am+B-A)x+Cb-Ab+A$

This formula tells us the value we get if we have a bilinear interpolation of values A,B,C,D (aka a bilinear surface defined by those points), and we sample along the x,y line defined by $y=mx+b$.

It’s a very generalized function that’s hard to reason about much, but one thing is clear: it is a quadratic function! Whatever constant values you choose for A,B,C,D,m and b, you will get a quadratic polynomial (or lower degree, but never higher).
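
As a quick sanity check, here’s a small C++ sketch (my own, not from the paper or the shadertoy) that samples the bilinear surface along $y=mx+b$ and compares it against the quadratic we just derived:

```cpp
#include <cstdio>
#include <cmath>

// power series form of bilinear interpolation
float Bilinear(float A, float B, float C, float D, float x, float y)
{
    return (A - B - C + D) * x * y + (B - A) * x + (C - A) * y + A;
}

int main()
{
    // arbitrary control values and an arbitrary line y = mx + b
    const float A = 0.2f, B = 0.8f, C = 0.5f, D = 1.0f;
    const float m = 0.7f, b = 0.1f;

    // coefficients of the quadratic in x derived above
    const float c2 = (A - B - C + D) * m;
    const float c1 = (A - B - C + D) * b + (C - A) * m + (B - A);
    const float c0 = (C - A) * b + A;

    float maxError = 0.0f;
    for (int i = 0; i <= 100; ++i)
    {
        float x = float(i) / 100.0f;
        float sampled   = Bilinear(A, B, C, D, x, m * x + b);
        float quadratic = c2 * x * x + c1 * x + c0;
        maxError = std::fmax(maxError, std::fabs(sampled - quadratic));
    }
    printf("max difference: %g\n", maxError); // ~0, up to float precision
    return 0;
}
```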

Here’s a shadertoy that shows curves generated by random sub pixel line segments on a random (white noise) RGB texture: https://www.shadertoy.com/view/XstBz7

(note that the rough edges of the curve are due to the fact that interpolation happens in X.8 fixed point format, so it has pretty limited precision. Check the paper for more information and ways to address the issue.)

Let’s explore a bit by plugging in some values for $m$ and $b$ and see what happens for different types of lines.

## m=0, b=0

Let’s see what happens when m is 0 and b is 0. In other words, let’s see what happens when we sample along the line $y=0$.

Plugging those values in gives:

$z = (B-A)x + A$

Interestingly, this is just a linear interpolation between A and B, which makes sense when looking at the graph of where we are sampling on the bilinear surface.

This goes along with what wikipedia told us: when one of the axes is constant (it’s a horizontal or vertical line) the result is linear.

## m=1, b=0

Let’s try m = 1 and b = 0. That is the line: $y=x$. This graph shows where that is sampling from on the bilinear surface:

Plugging in the values gives us this equation:

$z = (A-B-C+D)x^2+(C+B-2A)x+A$

We get a quadratic out! This shouldn’t be too surprising. This is the original insight in the technique. This is also the formula for a quadratic Bezier curve with control points $A$, $(B+C)/2$, $D$.
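
You can verify that claim by expanding the Bernstein form of a quadratic Bezier with those control points:

$A(1-x)^2 + 2 \cdot \frac{B+C}{2} \cdot x(1-x) + Dx^2 = (A-B-C+D)x^2 + (B+C-2A)x + A$

which is exactly the polynomial above.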

## m=2, b=1

Let’s try the line $y=2x+1$. Here’s the graph of where we are sampling on the bilinear surface:

Plugging in the values gives us the equation:

$z = (2A-2B-2C+2D)x^2+(C+D-2A)x+C$

Once again we got a quadratic function when sampling along a line.

You might think it’s strange that the equation ends in “+C” instead of “+A”, but if you look at the graph it makes sense. The line literally starts at C when x is zero.

## x=2u, y=3u

In the above examples we are only modifying the y variable, to be some function of x. What if we also want to modify the x variable?

One way to do this is to make a 3rd variable $u$ that goes from 0 to 1. Then we can make $x$ and $y$ be based on that variable.

Let’s see what happens when we use these two equations:

$x=2u$

$y=3u$

That makes us sample this line on the bilinear surface.

Plugging the functions of u in for x and y we get:

$z = (6A-6B-6C+6D)u^2+(2B+3C-5A)u+A$

So we now know that when moving along a straight line on a bilinear surface, you will get a quadratic function as output, except in the case of the line being horizontal or vertical. Note: if the bilinear surface is a plane, all lines on that surface will give linear functions, so that is another way to get a linear result. It could also be degenerate and give you a point result. You will never get a cubic result (or higher) when moving along a straight line though.

What would happen though if instead of sampling along straight lines, we sampled on other shapes, like quadratic curves?

## y=x*x

Let’s start with the function $y=x^2$. The path that is sampled is:

Going back to the power series form of bilinear interpolation, let’s plug $x^2$ in for y and see what we get out.

The starting equation:

$z = (A-B-C+D)xy + (B-A)x + (C-A)y + A$

becomes:

$z = (A-B-C+D)x(x^2) + (B-A)x + (C-A)(x^2) + A$

Which becomes:

$z = (A-B-C+D)x^3 + (C-A)x^2 + (B-A)x + A$

It’s a cubic equation!

Here is a shadertoy which follows this sampling path on random pixels: https://www.shadertoy.com/view/4sdBz7

## x=u*u, y=u*u

Let’s see what happens when we move along both x and y quadratically.

Just like in the linear case, we have our 3rd variable u that goes from 0 to 1 and we have x and y be based on that variable. We will use these equations:

$x=u^2$

$y=u^2$

The sampling path looks like this:

When we plug those in we get this quartic function:

$z = (A-B-C+D)u^4 + (B+C-2A)u^2 + A$

You might be surprised to see what looks like a linear path. That’s just because x is the same value as y at all times, even though they travel down the line non-linearly.

## Higher Order Curves: x=3u^2, y=u^4

Let’s get a little more wild, using these equations:

$x=3u^2$

$y=u^4$

Which makes a sampling path of this:

Plugging in the equations, the bilinear interpolation equation:

$z = (A-B-C+D)xy + (B-A)x + (C-A)y + A$

becomes a degree six (sextic) equation:

$z = (3A-3B-3C+3D)u^6 + (C-A)u^4 + (3B-3A)u^2 + A$

The shadertoy visualizes it on random pixels as per usual, but with u going from 0 to 1, x goes from 0 to 3 (y is still 0 to 1), which makes some obvious discontinuities at the boundaries of pixels. In our pure math formulation, we wouldn’t have any of those, but since we are sampling a real texture, when we leave the safety of our (0,1) box, we enter a new box with different control points. https://www.shadertoy.com/view/4dtfz7

## Trigonometric Function: y = sin(2*pi*x)

Let’s try $y=\sin(2\pi x)$, which takes this path on the bilinear surface:

The bilinear interpolation equation becomes a trigonometric polynomial:

$z = (A-B-C+D)x\sin(2\pi x) + (B-A)x + (C-A)\sin(2\pi x) + A$

That has discontinuities in it again when texture sampling, due to leaving the original pixel region, so here’s a better looking shadertoy, which is for $y=\sin(2\pi x) \cdot 0.5+0.5$. It scales and shifts the y values to be between 0 and 1. https://www.shadertoy.com/view/4stfz7

## Circle

Lastly, here’s sampling on a circle.

$x=\sin(2 \pi u) \cdot 0.5+0.5$

$y=\cos(2 \pi u) \cdot 0.5+0.5$

It follows this path:

Plugging the functions into the power series bilinear equation gives:

$z = (A-B-C+D)(\sin(2 \pi u) \cdot 0.5+0.5)(\cos(2 \pi u) \cdot 0.5+0.5) + (B-A)(\sin(2 \pi u) \cdot 0.5+0.5) + (C-A)(\cos(2 \pi u) \cdot 0.5+0.5) + A$

Something neat about sampling in a circle is that it’s continuous – note how the left side of the curves line up with the right side seamlessly. That seems like a pretty useful property.

## Moving On

We went off into the weeds a bit, but hopefully you can see how there are a ton of possibilities for encoding and decoding data in a very small number of pixels by carefully crafting the path you sample along.

Compared to the simple “sample along the diagonal” technique, there is some added complexity and shader instructions though. Namely, any work you do to modify x or y before passing them to the linear texture interpolator needs to happen in shader code. That means this technique takes more ALU, but can mean it takes even less texture memory than the other method.

The last question from the top of the post is “What does this all mean in higher dimensional interpolation, like trilinear or quadrilinear?”

Well, it works pretty much the same way as bilinear, but there are more dimensions to work with.

We saw that in 2 dimensional bilinear interpolation, when we made x and y be functions (either of each other, or of a 3rd variable u), the resulting polynomial had a degree that was the degree of x plus the degree of y.

In 3 dimensions with trilinear interpolation, the resulting polynomial would have a degree that is the degree of x, plus the degree of y, plus the degree of z.

In 4 dimensions with quadrilinear, add to that the degree of w.

Let’s consider the case when we don’t want a single curve though, but want a surface or (hyper) volume.

As we’ve seen in the extension dealing with surfaces and volumes, if you have a degree N polynomial, you can break it apart into a multivariate polynomial (aka a surface or hyper volume) so long as the sum of the degrees of each axis adds up to N.

It’s basically what we were just talking about but in reverse.

One thing I think would be interesting to explore further would be to see what the limitations are when you take this “too far”.

For instance, a 2×2 texture can give you a quadratic if you sample along any straight line in the uv coordinates. If you first put the u coordinate through a cubic function, and put the v coordinate through a different cubic function, I think you should be able to make a bicubic surface.

The surface will be constrained to a subset of what a general bicubic surface is able to be shaped like, but you will get a bicubic surface. (basically there will be implicit control points that you don’t have control over unless you add more pixels, and do more sampling, or higher dimensional linear interpolation)

I’d like to see what the constraints there are and see if there’s any chance of getting any real use out of something like that.

Anyhow, thanks for reading! Any ideas, corrections, usage cases you have, whatever, hit me up!

@Atrix256

# Prefix Sums and Summed Area Tables

Prefix sums and summed area tables let you sum up regions of arrays or grids in constant time.

If that sounds like it might not have many uses, note that it’s another way of saying they do discrete integration in constant time, and they can also be made to do some kinds of convolution.

These things come up quite a bit in game development and graphics, so they are pretty interesting for things like depth of field, glossy reflections, and maybe image based lighting. Check the links at the end of the post to see these things in action in some pretty interesting ways.

## One Dimension – Prefix Sums

Say that you have 10 numbers:

$\begin{array}{|l|c|c|c|c|c|c|c|c|c|c|} \hline \textbf{index} & \textbf{0} & \textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6} & \textbf{7} & \textbf{8} & \textbf{9} \\ \hline \textbf{value} & 8 & 3 & 7 & 4 & 12 & 6 & 4 & 10 & 1 & 2 \\ \hline \end{array}$

To sum up numbers in a given range you have to manually add up the numbers in that range.

Summing the numbers at index 2 through 5 inclusively takes 3 adds and gives you the answer 29. (index 2 + index 3 + index 4 + index 5)

Summing the numbers at index 0 through index 9 inclusively (the whole table) takes 9 adds to get the answer 57.

Interestingly there is a way to preprocess this data such that summing any range takes only a single subtraction. The technique is called a prefix sum table and you make the table by having the number at each index be the sum from index 0 to that index inclusively.

Here is the prefix sum table for the numbers above:

$\begin{array}{|l|c|c|c|c|c|c|c|c|c|c|} \hline \textbf{index} & \textbf{0} & \textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6} & \textbf{7} & \textbf{8} & \textbf{9} \\ \hline \textbf{value} & 8 & 11 & 18 & 22 & 34 & 40 & 44 & 54 & 55 & 57 \\ \hline \end{array}$

Now, to find the sum of range a to b inclusively, you start with the value at index b, and subtract the value at index (a-1).

So, to sum the numbers at index 2 through 5 like we did before, we’d start with the value at index 5 which is 40, and we subtract the value at index (2-1) aka index 1, which is 11. That gives us a result of 29 like our manual summing did before.

To sum the numbers at index 0 through index 9, we’d start with the value at index 9, which is 57, and subtract the value at index -1. Since we don’t have anything before index 0, the sum for anything before index 0 is 0. That makes our result be 57-0 or 57, which we calculated before.
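
Here’s a minimal C++ sketch of building the prefix sum table and doing the single subtraction range query, using the numbers from the tables above (anything before index 0 counts as summing to zero):

```cpp
#include <cstdio>
#include <vector>

// build the prefix sum table: entry i holds the sum of values[0..i] inclusive
std::vector<int> MakePrefixSumTable(const std::vector<int>& values)
{
    std::vector<int> table(values.size());
    int sum = 0;
    for (size_t i = 0; i < values.size(); ++i)
    {
        sum += values[i];
        table[i] = sum;
    }
    return table;
}

// sum of values[a..b] inclusive, using one subtraction.
// anything "before index 0" is treated as a sum of zero.
int RangeSum(const std::vector<int>& table, int a, int b)
{
    return table[b] - (a > 0 ? table[a - 1] : 0);
}

int main()
{
    std::vector<int> values = { 8, 3, 7, 4, 12, 6, 4, 10, 1, 2 };
    std::vector<int> table = MakePrefixSumTable(values);
    printf("sum [2,5] = %d\n", RangeSum(table, 2, 5)); // 29
    printf("sum [0,9] = %d\n", RangeSum(table, 0, 9)); // 57
    return 0;
}
```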

Let’s move on to 2D!

## Two Dimensions – Making a Summed Area Table

In two dimensions, the same technique is called a summed area table, and things get only a little more complicated.

Say that you have this grid of values:

$i = \begin{array}{|c|c|c|c|} \hline 3 & 2 & 1 & 8 \\ \hline 9 & 11 & 15 & 0 \\ \hline 8 & 4 & 7 & 6 \\ \hline 12 & 7 & 8 & 3 \\ \hline \end{array}$

Then you make a grid of the same size, where the value at a location is the sum of all the values in the rectangle going from (0,0) to (x,y) inclusive. Assuming that (0,0) is in the top left, that would give us this summed area table:

$I = \begin{array}{|c|c|c|c|} \hline 3 & 5 & 6 & 14 \\ \hline 12 & 25 & 41 & 49 \\ \hline 20 & 37 & 60 & 74 \\ \hline 32 & 56 & 87 & 104 \\ \hline \end{array}$

You can literally sum up all the values for each index to make the table if you want to, but you can also use this formula, which lets you iteratively create the table by starting at (0,0) and expanding outwards from there. As before, when reading out of bounds values, just use zero.

$I(x,y)=i(x,y)+I(x,y-1)+I(x-1,y)-I(x-1,y-1)$
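
Here’s a minimal C++ sketch of building the summed area table with that formula, using the 4×4 grid from above (out of bounds reads count as zero):

```cpp
#include <cstdio>

const int W = 4, H = 4;

// I(x,y) = i(x,y) + I(x,y-1) + I(x-1,y) - I(x-1,y-1), out of bounds reads are zero
void MakeSummedAreaTable(const int i[H][W], int I[H][W])
{
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
        {
            int up     = (y > 0)          ? I[y - 1][x]     : 0;
            int left   = (x > 0)          ? I[y][x - 1]     : 0;
            int upLeft = (y > 0 && x > 0) ? I[y - 1][x - 1] : 0;
            I[y][x] = i[y][x] + up + left - upLeft;
        }
}

int main()
{
    int i[H][W] = {
        {  3,  2,  1, 8 },
        {  9, 11, 15, 0 },
        {  8,  4,  7, 6 },
        { 12,  7,  8, 3 }
    };
    int I[H][W];
    MakeSummedAreaTable(i, I);
    for (int y = 0; y < H; ++y)
        printf("%3d %3d %3d %3d\n", I[y][0], I[y][1], I[y][2], I[y][3]);
    return 0;
}
```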

## Two Dimensions – Using a Summed Area Table

So we know that $I(x,y)$ is the sum of all the values in the rectangle from $(0,0)$ to $(x,y)$ inclusively, but what if we want to find the sum of a different rectangle? What if we have 4 points A,B,C,D and we want to know the sum of the numbers within that sub-rectangle?

With some cleverness we can calculate the sum inside this exact region.

First we get the value at point D, which gives us the sum of this rectangle:

Next, we subtract the value at point B, which gives us the sum of this rectangle:

The next step is to subtract the value at point C. The red area is a problem though as it has been subtracted out twice.

This is a problem that’s easily solved by adding the value at point A in, to give us our final result:

So, to summarize, using a summed area table to get the sum of all values in the rectangle defined by the points A,B,C,D is done by reading the values at points A,B,C,D and calculating: A+D-B-C
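
And here’s a minimal C++ sketch of that query, hard coding the summed area table from above. One detail worth calling out: for an inclusive rectangle, the reads for the A, B and C corners sit one row and/or column outside of the rectangle being summed, which is the usual off by one to be careful of:

```cpp
#include <cstdio>

const int W = 4, H = 4;

// the summed area table I from above
const int I[H][W] = {
    {  3,  5,  6,  14 },
    { 12, 25, 41,  49 },
    { 20, 37, 60,  74 },
    { 32, 56, 87, 104 }
};

// out of bounds reads count as zero
int Read(int x, int y)
{
    if (x < 0 || y < 0)
        return 0;
    return I[y][x];
}

// sum of the original values in the inclusive rectangle (x0,y0)..(x1,y1).
// D is read at the bottom right of the rectangle; the A, B and C reads sit
// one row and/or column outside of it.
int RectSum(int x0, int y0, int x1, int y1)
{
    int A = Read(x0 - 1, y0 - 1);
    int B = Read(x1,     y0 - 1);
    int C = Read(x0 - 1, y1    );
    int D = Read(x1,     y1    );
    return A + D - B - C;
}

int main()
{
    // sum of the middle 2x2 block of the original grid: 11 + 15 + 4 + 7 = 37
    printf("%d\n", RectSum(1, 1, 2, 2));
    return 0;
}
```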

## Storage Costs

When you store numbers added together, you are going to need more storage for the sums than you needed for the numbers themselves.

For instance, if you have the table below using 3 bits per value:
$i = \begin{array}{|c|c|} \hline 7 & 7 \\ \hline 7 & 7 \\ \hline \end{array}$

Turning that into a summed area table, you are going to hit overflow problems:
$I = \begin{array}{|c|c|} \hline 7 & 6 \\ \hline 6 & 4 \\ \hline \end{array}$

For summing up N items, you need $\log_2(N)$ more bits of storage, which means we would need 2 more bits of storage in this case for the 2×2 grid (4 samples), making it 5 bits total per value (3 bits of storage + 2 extra bits to hold the sum of 4 values). That would let us store the proper table:

$I = \begin{array}{|c|c|} \hline 7 & 14 \\ \hline 14 & 28 \\ \hline \end{array}$

Using the previously shown 2×2 table of 3 bit 7’s as an example, what this means is that if you are only ever going to ask about 1×1 ranges (which is pointless to use summed area tables for, but makes a nice simple example), you don’t need 2 extra bits. In fact, you don’t need any extra bits in this case, since a 1×1 range is just 1 sample, and $\log_2(1)$ is 0.

Looking back at the summed area table that had roll over problems:
$I = \begin{array}{|c|c|} \hline 7 & 6 \\ \hline 6 & 4 \\ \hline \end{array}$

Let’s ask about the range (1,1) to (1,1). So we start with the value at index (1,1) which is 4. Next we add in the value at index (0,0) which is 7 and get 11. Keeping that in 3 bits (eg mod 8), that gives us a value of 3. Next we subtract the value at index (0,1) aka 6, which keeping it in 3 bits gives us 5. Subtracting index (1,0) from that (6 again) and keeping it in 3 bits gives us 7.

So, the sum of the numbers from (1,1) to (1,1) – aka the VALUE in the original table at (1,1) – is 7. Since we made the table, we know this is correct.

Interestingly, it works!
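
Here’s a tiny C++ sketch of that walkthrough, keeping every intermediate result in 3 bits (mod 8) the whole way:

```cpp
#include <cstdio>

int main()
{
    // the 2x2 summed area table of all 7s, stored in 3 bits (mod 8):
    // true values are 7, 14, 14, 28, which wrap around to 7, 6, 6, 4.
    const int I[2][2] = { { 7, 6 }, { 6, 4 } };

    // query the 1x1 range (1,1)..(1,1): A + D - B - C, keeping every
    // intermediate result in 3 bits. & 7 is mod 8 for non-negative values,
    // so add 8 before subtracting to stay non-negative.
    int value = I[1][1];                // D = 4
    value = (value + I[0][0]) & 7;      // + A = 11 -> 3
    value = (value + 8 - I[0][1]) & 7;  // - B -> 5
    value = (value + 8 - I[1][0]) & 7;  // - C -> 7
    printf("%d\n", value);              // 7, the original value at (1,1)
    return 0;
}
```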

If we did a 2×2 lookup instead, it would fall apart. We’d need those 2 extra bits since we’d be summing 4 samples, and $\log_2(4)$ is 2.

So, just to re-iterate… summed area tables do need increased storage per data item to store the sums. However, while most descriptions base that increased storage on the size of the image being made into a summed area table, it is actually based on the largest range you want to sum from that table, which may be smaller than the total size.

I have an idea I’d like to try (next blog post?) where instead of storing the sum of the rectangle at each position, you store the sum divided by the area. In other words, you store the average value for the rectangle.

Calculating the sum for a specific rectangle then becomes getting the 4 values, multiplying by their area, and then doing the usual math.

Apparently this is similar to an idea of using floating point numbers in SAT, which also sounds interesting! Thread from Bart Wronski (https://twitter.com/BartWronsk):

While my idea is similar to using floating point, a handful of people (especially Tom Forsyth! https://twitter.com/tom_forsyth) have made sure I know that using floating point with large textures (~screen sized and above) is not a good idea.

Tom says:
“The entries in the bottom-right of the table start having very similar magnitudes, so the difference between them is very noisy. This is super obvious with float16s where you only have 10 bits of precision, which is less than most current screen widths.”

## Other Stuff

Bilinear Interpolation
If you are wondering whether you should use bilinear interpolation when using this technique (sample between pixels) or not, the answer is that you should. Bilinear interpolation is compatible with this technique and gives you the correct values for sub pixel sample points.

Higher Dimensions
This technique extends to 3 dimensions and beyond. The table still contains the sum of the numbers for the (hyper)rectangle from the origin to that specific index. The way you calculate the sum of a specific range is different in each dimension, but it’s similar, and you should be able to figure it out using the logic described in the 2d case!

Integrating / Summing Over Other Shapes

I had a thought on this that might not be so bad.

My thought was that if you had some shape you wanted to sum values over (aka integrate values over), that you could sum over the bounding box of the shape, divide by the area of the bounding box to get an average sum per unit for that area, and then multiply by the area of the shape you want to sum over.

This makes the assumption that the bounding box is representative of the data inside of the shape, so that makes this an approximation, but it might be good enough depending on your needs.

You might even try having a couple different summed area tables made from rotated versions of the image. That would allow you to get a tighter fitting bounding box in some situations.

I’m definitely not the first to think about how to do this though, and this is not the only way to do it. There is a link in the next section that talks about a different way to do it “Fast and Exact Convolution With Polygonal Filters” that also references a few other ways to do it.

Uses in Graphics / Other Links

Here is the paper from Franklin Crow in 1984 that introduces summed area tables as a way to get box filtered mipmapping on the fly without having to generate mipmaps in advance:
http://www.florian-oeser.de/wordpress/wp-content/2012/10/crow-1984.pdf

Here is a neat paper that talks about how to generate summed area tables efficiently on the GPU, and some interesting ways to use them for things like depth of field, glossy reflections, and refraction through frosted glass:

Here are some great reads from Fabien Giesen (https://twitter.com/rygorous) on doing fast blurs when the radius is very large. The second post also shows you how to do repeated box blurs to get tent filters, quadratic filters, cubic, etc and how they tend towards Gaussian. I’m sure there is some way to mix this concept with summed area tables to get higher order filters, but I haven’t found or worked out the details yet.
https://fgiesen.wordpress.com/2012/07/30/fast-blurs-1/
https://fgiesen.wordpress.com/2012/08/01/fast-blurs-2/

Here are some blog posts I made up explaining and demonstrating box blurs and Gaussian blurs:
https://blog.demofox.org/2015/08/18/box-blur/
https://blog.demofox.org/2015/08/19/gaussian-blur/

Bart also shared these really interesting links

“Fast and Exact Convolution With Polygonal Filters”
https://www.researchgate.net/publication/269699690_Fast_and_Exact_Convolution_with_Polygonal_Filters

“Fast Filter Spreading and its Applications”
https://www2.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-54.pdf

“Filtering by repeated integration”
https://www.researchgate.net/publication/220721661_Filtering_by_repeated_integration

“Cinematic Depth Of Field: How to make big filters cheap”

# An Idea: Raytracing Lookup Tables

In rasterized rendering, one of the primary tools we have at our disposal is textures.

We use textures to store things like normal maps, roughness maps, pre-integrated lighting, and more.

We can even abuse the texture interpolator to evaluate arbitrary polynomials when the texture contains coefficients from the Bernstein Basis form of the polynomial (https://blog.demofox.org/2016/02/22/gpu-texture-sampler-bezier-curve-evaluation/).

In raytracing, we do still have the ability to use textures, and we will surely use them in fun new ways with the directx ray tracing support that was recently announced, but raytracing also gives us a different kind of tool: queryable geometry that doesn’t necessarily have to have any correlation to what actually shows up on screen.

This can be used for obvious things like soft shadows, reflections, volumetric lights, rendering non triangle based geometry (when doing procedural shapes), but it can be used for off label things too, just as we use textures for things other than putting color directly onto triangles.

## Lookup Tables

One way I mentioned that textures are (ab)used is for making lookup tables for functions (pre-integrating lighting, the famous PBR split sum texture, etc).

A nice thing about using textures is that bilinear texture sampling is not very expensive on modern hardware compared to point sampling. This means that we can store data at whatever resolution we are ok with getting linear interpolation between.

GPUs interpolate in fixed point with 8 bits for fractional pixels, so the interpolation does break down at a point, but it is still really nice to get interpolated data values cheaply.

A not so nice thing about using textures for lookup tables is that texture data is stored in a regular grid, so you need to make the texture high enough resolution for the most demanding (high frequency) part of the data, while wasting higher resolution on the parts of your data that don’t need it.

Imagine that you have some function z=f(x,y) that you are trying to make a lookup table for. Let’s say that this data is nearly linear in almost all the places you care about, but that there is a very important, smaller section that has a curved part, where getting the curve right is very important to your results.

You’d have to use a high resolution texture to make sure the curved section was well represented, but the other parts would have much higher resolution than needed to represent them which is wasteful to memory and loading time.

Raytracing doesn’t have this problem however because you make a mesh of the function. (Or do you make a mess of the function? Only time will tell I guess!)

In your mesh, the z component of every vertex is the value f(x,y), and it’s up to you which (x,y) values to store. This is in direct contrast to a texture, where the (x,y) values are decided for you and are on a fixed grid.

For the specific function we mentioned, you could use only a few vertices in the places that were linear, and use a lot more vertices in the curved section. How many vertices to use is entirely up to you based on your quality, performance, and memory usage desired.

To actually get a value of this function out for a specific (x,y), assuming the function is always positive, you could cast a ray at the mesh from the position (x,y,0) in the direction (0,0,1). The time t of the ray’s intersection with the mesh is the value of f(x,y).

Something nice here is that you still get linear interpolation, like in the texture case, since a ray vs triangle test does a linear interpolation between the points on the triangle, using barycentric coordinates.

Something else nice is that when you get your intersection information from the ray vs triangle test, you will likely have access to the barycentric coordinates of the intersection, as well as per vertex data. This means that you could store other information per vertex and get a linearly interpolated result, including the data from other functions with entirely different shapes.

This is one way to get around the fact that a texture lookup can give you multiple values as a result (RGBA), while a raytraced lookup can only give you one (ray intersection time) with a naive implementation.

This also lets you do a SIMD type thing, where if you have N functions you are always going to look up for the same input values (Think: diffuse and specular term of image based lighting), that you can do one raytrace to get the answer for all queries.

The “single value result” where you get only a time t ought to be more performant than the multiple value result where you (manually) interpolate vertex data, but as vertex data interpolation is the common case for using the raytracing API, I wouldn’t expect it to be unusably slow for a reasonable amount of data.

To make things really clear and explicit, you could literally replace a cubemap texture lookup with a raytrace into a scene instead, using the same direction vector (of course!). The time down the ray that the intersection happens would be the value of your cube map lookup in that direction. Since that’s only a single value, you could encode more values per triangle vertex and use the barycentric triangle interpolation to get the other values as well. This all works exactly like a texture lookup works, except you get to define your data set sparse in some areas, and dense in others. You are suddenly in control of your data sampling across the entire domain of your data!
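
To make the idea a bit more concrete, here’s a rough C++ sketch of the “vertical ray” lookup (my own illustration, not any actual raytracing API). It skips acceleration structures entirely and just loops over triangles: for a query point (x,y) it finds the triangle whose xy projection contains the point, then uses barycentric coordinates to interpolate the per-vertex z values (and could interpolate any other per-vertex data the same way):

```cpp
#include <cstdio>
#include <vector>

struct Vertex { float x, y, z; };
struct Triangle { Vertex a, b, c; };

// Barycentric coordinates of point (px,py) relative to the xy projection of the triangle.
// Returns false if the point is outside the triangle (or the triangle is degenerate in xy).
bool Barycentric(const Triangle& t, float px, float py, float& u, float& v, float& w)
{
    float d = (t.b.y - t.c.y) * (t.a.x - t.c.x) + (t.c.x - t.b.x) * (t.a.y - t.c.y);
    if (d == 0.0f)
        return false;
    u = ((t.b.y - t.c.y) * (px - t.c.x) + (t.c.x - t.b.x) * (py - t.c.y)) / d;
    v = ((t.c.y - t.a.y) * (px - t.c.x) + (t.a.x - t.c.x) * (py - t.c.y)) / d;
    w = 1.0f - u - v;
    return u >= 0.0f && v >= 0.0f && w >= 0.0f;
}

// The "lookup": find the triangle under (x,y) and interpolate z across its vertices.
// A real version would use a raytracing API / acceleration structure instead of a loop.
bool Lookup(const std::vector<Triangle>& mesh, float x, float y, float& z)
{
    for (const Triangle& t : mesh)
    {
        float u, v, w;
        if (Barycentric(t, x, y, u, v, w))
        {
            z = u * t.a.z + v * t.b.z + w * t.c.z;
            return true;
        }
    }
    return false;
}

int main()
{
    // two triangles covering the unit square, with arbitrarily chosen z values
    std::vector<Triangle> mesh = {
        { {0,0, 0.1f}, {1,0, 0.5f}, {1,1, 0.9f} },
        { {0,0, 0.1f}, {1,1, 0.9f}, {0,1, 0.3f} }
    };
    float z = 0.0f;
    if (Lookup(mesh, 0.25f, 0.25f, z))
        printf("f(0.25, 0.25) ~= %f\n", z);
    return 0;
}
```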

## When Should We Actually Do This?

So I don’t actually know how the performance of something like this would be on modern video cards – let alone future ones that are more geared to raytracing.

Experiments should be done to see if it can ever be faster than textures, use less memory than textures, or give higher quality than textures, and by how much under what circumstances.

How I’ve laid this out is just one of many ways to make a ray based lookup table, each with their own pros and cons.

For instance, if you have some hemispherical function z=f(x,y) where x and y are azimuth and altitude, the linear interpolation offered by this setup won’t be that great because the function is laid out like a heightfield, when the data really is hemispherical in nature.

If you instead changed the geometry to literally be a hemisphere that has points pushed in and pulled out, and you convert the angular coordinates to cartesian (a normalized direction vector) before the lookup, the linear interpolation offered by the intersection tests is going to be a lot friendlier to your data set.

I also wonder if there are better ray tracing acceleration structures than a generic solve (BVH with surface area heuristic?), when you intend to use the geo as a lookup table. I feel like knowing that the ray will always be vertical from the z=0 plane is important knowledge that could be used to make a better data structure. A grid based solution sure sounds decent (which ironically is how a texture works).

Anyhow, a total random idea I wanted to share.

If you try this and get any details of perf, quality, mem use, etc please share here or hit me up on twitter at https://twitter.com/Atrix256.

Also, any other crazy raytracing ideas, i’d love to hear them (:

# A Very Quick DirectX Raytracing API Primer

A raytracing API has been announced for DirectX and it seems like real time raytracing may finally be here?

MSDN: Announcing Microsoft DirectX Raytracing!
https://blogs.msdn.microsoft.com/directx/2018/03/19/announcing-microsoft-directx-raytracing/

How & where to get the new (experimental) SDK
http://forums.directxtech.com/index.php?topic=5860.0

There is some nice documentation in the SDK zip file, in the doc folder.

I’ve been lucky enough to be in a position to have played with it for a little while pre-release (about 1-2 weeks of time total) and it is pretty fun.

I’ve been playing with it from a purely triangle mesh perspective (don’t hate me! I know, I know…) and it seems like a hybrid rasterization / raytracing approach is the most realistic way to go there – eg, primary rays are rasterized, and maybe you do some rasterization style post processing. You actually don’t lose a whole lot going this way if you get creative. For instance, you could ray trace primary rays for non triangle based geo and take the minimum between that intersection time and the rasterized one. The only thing I feel like you lose is the ability to have the rays themselves deviate from a typical frustum setup, since you can’t really “distort rays” very easily while rasterizing.

However, I believe when looking at things from a non triangle based approach, things may be very different, especially on the performance side (better perf!). I would love to explore it myself, and know that many folks will also be exploring it. (I’m looking at you folks at the intersection of the twitter and shadertoy communities!)

Here is a very rough overview of some concepts of the Microsoft DirectX ray tracing API to help you form a mental model before trying to parse the verbose DX12 code. (It is DX12 only sadly!)

There are (useful) details missing for sure, but hopefully no misinformation. Please correct me if you see any (:

Raytracing acceleration structures:

• Bottom Level Acceleration Structure – This is a “per object” acceleration structure. It can either be made from a triangle mesh, or you can specify that it’s a procedural shape. If it’s a procedural shape, you provide a bounding box and an intersection shader. The procedural shape will be useful for raymarching and other non triangle based ray-geometry intersection techniques.
• Top Level Acceleration Structure – This is a “scene” acceleration structure. It contains instances of bottom level acceleration structures, each able to have their own instance data (like a transformation matrix).
• Unfortunately the acceleration structures are made at runtime, and cannot be cached off to disk or similar. It’s a loading time cost that currently is seemingly unavoidable.

There are a few different types of shaders used for raytracing:

• Ray Generation – This generates the primary rays. You could think of this like a compute shader that you author and run once per pixel. It’s also possible to do things like invoke it for each 2×2 pixel group in case you wanted to be able to have derivatives like when rasterizing. You call TraceRay() for each ray you want to generate, and you can use the results however you see fit. It’s typical you’d write the results to a texture or uav though. Whenever you call TraceRay(), you can provide payload data which can be read from and written to by the other shaders. This is useful for sending parameters down with the ray, or having other shaders return information like integrated fog density.
• Any hit – Optional. As a ray traverses the acceleration structure, it will test objects in an order that is likely not front to back. You can supply an “any hit” shader that gets called during this process. You can read or write payload data in this shader, and you can also tell it to ignore a hit (useful if you are doing alpha testing from a texture) or you can tell it to accept a hit and stop looking for other hits (useful perhaps for shadow rays). If you omit this shader, the hardware/software can make more assumptions about the ray traversal and possibly run more quickly, so you should only use it when necessary. You have access to barycentric information if intersecting with a triangle, and the triangle (index) itself. I’m unsure what you get in the procedural case.
• Closest Hit – Optional. Called with the information about the closest hit. Called after all “any hit” shaders have been invoked. You have access to barycentric information if intersecting with a triangle, and the triangle (index) itself. I’m unsure what you get in the procedural case. You can call TraceRay() from this shader to spawn secondary rays.
• Miss – Optional. Called if there are no hits for a ray. You could set some fog density to MAX_FLT, or could perhaps use this for shadow rays, assuming that there was a hit unless a miss shader was invoked. You can call TraceRay() from this shader to spawn secondary rays.
• Yes, ray shaders can be recursive! A closest hit shader could spawn 3 rays which each then have a closest hit that spawns one more ray. There is a maximum stack depth, but recursive rays are totally supported.

When calling TraceRay() to shoot a ray out, you can give parameters such as…

• Telling it to accept the first hit and end the search (useful for shadows)
• Telling it to only test “opaque” geometry (anything that doesn’t have an any hit shader)
• Instance Masking – Each geometry instance can have an 8 bit mask. When you shoot a ray out, you can give an 8 bit mask that is ANDed with that instance mask, and will only consider geometry for intersection if the result is non zero. This is a bit like a stencil buffer.
• A minimum and maximum time allowed for collision down the ray. This lets you ignore self intersection of secondary rays by setting the minimum to be greater than zero. The maximum time is useful for things like when shooting shadow rays at point light sources, to make sure you stop searching for occluding geometry at the light source.

There is the concept of a “Hit Group” which contains 0 or 1 of each shader type: intersection, any hit, closest hit.

You can specify a hit group per instance in the top level acceleration structure.

An intersection shader MUST be given for procedural geometry and MUST NOT be given for triangle based geometry.

If a shader is not specified on a hit group, it falls back to a default shader of each type that you specify. This is how you can make it so some objects have different behaviors than others for ray intersection / traversal etc.

You can also specify tables of shaders where shaders are accessible via indexing. This lets you pass numbers around to use in calculations for shader table indexing. In effect, this gives you the ability to have “function pointers” of shaders, and can even be exploited for non raytracing uses wherever having function pointers in shaders would be useful.

Lastly, this is something not super obvious when starting on raytracing, but sampling a mip mapped texture is a bit of a challenge because you no longer have automatic screen space derivatives of uv’s!

I’m sure good solutions to this will spread over time as more people dive into this raytracing API, but I personally think a good place to start is here:

Tracing Ray Differentials
http://graphics.stanford.edu/papers/trd/

That’s all for now! Any small items you think I should add, hit me up here or on twitter at https://twitter.com/Atrix256

Happy Raytracing Folks!! (:

# Don’t Convert sRGB U8 to Linear U8!

In this post I’m going to explain something that I have been doing wrong for a while in my at home graphics programming projects, and show you the noticeable loss in image quality it causes.

The C++ code that generated the data and images for this post is on github. https://github.com/Atrix256/RandomCode/tree/master/sRGBPrecision

# sRGB vs Linear

Every image that is meant to be displayed on your screen is an sRGB image. That’s what it means to be an sRGB image.

When doing things like applying lighting, generating mip maps, or blurring an image, we need to be in linear space (not sRGB space) so that the operations give results that appear correct on the monitor.

This means an sRGB image needs to be converted to linear space, the operations can then be done in linear space, and then the result needs to be converted back to sRGB space to be displayed on a monitor.

If this is news to you, or you are unsure of the details, this is a good read on the topic: Linear-Space Lighting (i.e. Gamma)

A small example of why this matters is really driven home when you try to interpolate between colors. The image below interpolates from green (0, 1, 0) to red (1, 0, 0).

The top row interpolates in sRGB space, meaning it interpolates between those colors and writes out the result without doing any other steps. As you can see, there is a dip in brightness in the middle. That comes from not doing the operation in linear space.

The second row uses gamma 1.8. What is meant by that is that the color components are raised to the power of 1.8 to convert from sRGB to linear, the interpolation happens in linear space, and then they are raised to the power of 1.0/1.8 to convert from linear to sRGB. As you can hopefully see, the result is much better and there is no obvious drop in brightness in the middle.

Getting into and out of linear space isn’t so simple though, as it depends on your display. Most displays use a gamma of 2.2, but some use 1.8. Furthermore, some people do a cheaper approximation of gamma operations using a value of 2.0 which translates into squaring the value to make it linear, and square rooting the value to take it back to sRGB. You can see the difference between those options on the image.

The last row is “sRGB”, which means it uses a standard formula to convert from sRGB to linear, do the interpolation, and then use another standard formula to convert back to sRGB.

You can read more about those formulas here: A close look at the sRGB formula

# The Mistake!

The mistake I was making seemed innocent enough to me…

Whenever loading an image that contained color information (I’m not talking about normals or roughness maps here, just things that are colors), as part of the loading process I’d take the u8 image (aka 8 bits per channel) and convert it from sRGB to linear, giving a result still in u8.

From there, I’d do my rendering as normal, come up with the results, convert back to sRGB and go on my way.

Doing this you can look at your results and think “Wow, doing lighting in linear space sure does make it look better!” and you’d be right. But, confirmation bias bites us a bit here. We are missing the fact that converting to linear and storing the result in 8 bits loses quite a bit of precision in the dark colors.
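
Here’s a small standalone C++ sketch of the round trip in question (using the standard sRGB conversion formulas; the github repo linked above does the full analysis). The damaging step is quantizing the linear value down to u8 in the middle:

```cpp
#include <cstdio>
#include <cmath>

// standard sRGB <-> linear conversion formulas, for values in [0,1]
float SRGBToLinear(float s)
{
    return (s <= 0.04045f) ? s / 12.92f : std::pow((s + 0.055f) / 1.055f, 2.4f);
}

float LinearToSRGB(float l)
{
    return (l <= 0.0031308f) ? l * 12.92f : 1.055f * std::pow(l, 1.0f / 2.4f) - 0.055f;
}

// quantize a [0,1] value to u8 and back, like storing it in an 8 bit texture
float RoundTripU8(float v)
{
    int u8 = (int)(v * 255.0f + 0.5f);
    return float(u8) / 255.0f;
}

int main()
{
    // the mistake: sRGB u8 -> linear u8 -> back to sRGB u8.
    // count how many of the 256 possible inputs don't survive the round trip.
    int errors = 0;
    for (int i = 0; i < 256; ++i)
    {
        float srgb = float(i) / 255.0f;
        float linearU8 = RoundTripU8(SRGBToLinear(srgb));          // the lossy step
        int back = (int)(LinearToSRGB(linearU8) * 255.0f + 0.5f);
        if (back != i)
            ++errors;
    }
    printf("%d of 256 sRGB u8 values do not survive the linear u8 round trip\n", errors);

    // the fix: keep the linear value in float (or u16 / f16) instead of u8,
    // and only quantize after converting back to sRGB.
    return 0;
}
```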

Here are some graphs to show the problem. Blue is the input color, red is the color after converting to linear u8, and then back to sRGB u8. Yellow is the difference between the two. (If wondering why I’m showing round trip instead of just 1 way, think about what you are going to do with the linear u8 image. You are going to use it for something, then convert it back to sRGB u8 for the display!)

Gamma 1.8

Gamma 2.0

Gamma 2.2

sRGB

As you can see, there is quite a bit of error in the lower numbers! This translates to error in the darker colors, or just any color which has a lower numbered color component.

The largest amount of error comes up in gamma 2.2. sRGB has lower initial error, but has more error after that. I would bet that was a motivation for the sRGB formulas, to spread the error out a bit better at the front.

Even though gamma 2.2 and sRGB looked really similar in the green to red color interpolation, this shows a reason you may prefer to use the sRGB formulas instead.

Another way of thinking about these graphs is that there are a quite a few input numbers that get clamped to zero. At gamma 1.8, an input u8 value of 12 (aka 0.047) has to be reached before the output is non zero. At gamma 2.0, that value is 16. At gamma 2.2 it’s 21. At sRGB it’s 13.

Showing graphs and talking about numbers is one thing, but looking at images is another, so let’s check it out!

Below are images put through the round trip process, along with the error shown. I multiplied the error by 8 to make it easier to see.

Gamma 1.8

Gamma 1.8 isn’t the most dramatic of the tests but you should be able to see a difference.

Error*8:

Gamma 2.0

Gamma 2.0 is a bit more noticeable.

Error*8:

Gamma 2.2

Gamma 2.2 is a lot more noticeable, and even has some noticeable sections of the images turning from dark colors into complete blackness.

Error*8:

sRGB

sRGB seems basically as bad as Gamma 2.2 to me, despite what the graphs showed earlier.

Error*8:

Since this dark image was basically a “worst case scenario”, you might wonder how the round trip operation treats a more typical image.

It actually has very little effect, except in the areas of shadows. (these animated gifs do show more color banding than they should, and some other compression artifacts. Check out the source images in github to get a clean view of the differences!)

Gamma 1.8

Error*8:

Gamma 2.0

Error*8:

Gamma 2.2

Error*8:

sRGB

Error*8:

# So What Do We Do?

So, while we must work in linear space, converting our sRGB u8 source images into linear u8 source images causes problems with dark colors. What do we do?

Well there are two solutions, depending on what you are trying to do…

If you are going to be using the image in a realtime rendering context, your API will have texture format types that let you specify that a texture is sRGB and needs to be converted to linear before being used. In directx, you would use DXGI_FORMAT_R8G8B8A8_UNORM_SRGB instead of DXGI_FORMAT_R8G8B8A8_UNORM for instance.

If you are going to be doing a blur or generating mip maps, one solution is that you convert from sRGB u8 to linear f32, do your operation, and then convert from linear f32 back to sRGB u8 and write out the results. In other words, you do your linear operations with floating point numbers so that you never have the precision loss from converting linear values to u8.

You can also do your operations in u16 instead of u8 apparently, and also f16 which is a half float.

The takeaway is that you should “never” (there are always exceptions) store linear color data as uint8 – whether in memory, on disk, or anywhere else.

I’ve heard that u12 is enough for storage though, for what that’s worth.

Thanks @romainguy for suggesting a color interpolation for the opening image of this post. It’s a great, simple example for seeing why sRGB vs linear operations matter.

Here is some more info on sRGB and related things from Bart Wronski (@BartWronsk):

Part 1 – https://bartwronski.com/2016/08/29/localized-tonemapping/

Part 2 – https://bartwronski.com/2016/09/01/dynamic-range-and-evs/

And this great presentation from Timothy Lottes (@TimothyLottes)

Advanced Techniques and Optimization of HDR Color Pipelines

This from Matt Pettineo (@MyNameIsMJP) is also very much on topic for this post:

# What the Heck is Blue Noise?

This is a gentle explanation of blue noise and how it can be useful.

We’ll start with something simple that we can all get behind – not getting eaten by a cheetah!

Let’s talk about our eyes for a minute.

Our eyes have about 126 million photo receptors in them – about 6 million cones, 120 million rods (source). These photo receptors give your brain an image of the world around you. They are a bit like pixels because they are just small points of data that your brain combines into an image.

How those photo receptors are arranged in your eye can make a big difference. Imagine for a second that we only had 10 photo receptors. If they were laid out like these blue dots, we wouldn’t be able to see the cheetah and we’d become a tasty cat snack.

In the image above, white noise random numbers were used to place the points. White noise is what most people are talking about when they talk about random numbers. Using white noise to generate numbers, the numbers can clump up in some spots and leave empty holes in other spots. When using white noise to lay out photo receptors, that makes it so some photo receptors give redundant information when they are too close together, and leave big open spaces in your vision where you are not getting any information at all. Not good!

What if the dots were laid out like this instead?

The points are still randomly placed, but they are roughly evenly spaced. This makes it so we get the most bang for our buck from the photo receptors. We basically have the maximum amount of information we can get for the number of photo receptors we have to work with.

In this case, two of the photo receptors are on the cat, so we have some information about that predator, and we have a better chance at reacting before we become lunch!

Blue noise random numbers were used to place the points on this image, and this example shows exactly why blue noise can be better than white noise – you get maximal information with fewer samples.

Interestingly, our photo receptors (as well as those of other animals) are in fact laid out this way. Here is an image of a primate (macaque) retina (source)

You might also find this an interesting read about chicken eyes which also have blue noise properties:
https://www.princeton.edu/news/2014/02/24/eye-chicken-new-state-matter-comes-view

That’s blue noise in a nut shell, but continue on if you’d like to go just a tad bit deeper.

## A Little More Technical

If maximizing information is the goal, you might wonder why blue noise is better than putting the sample points in a grid, or in a honeycomb structure or some other regular pattern. The short answer is that regular patterns have a problem called “aliasing”. Random numbers in general trade the problem of aliasing for the problem of noise, but blue noise random numbers in particular still get the benefits of “roughly even coverage”, so blue noise is the best of both worlds.

Blue noise is difficult / computationally intensive to generate though, compared to white noise or regular sampling. Generating better blue noise more efficiently is in fact an ongoing area of research!

For a deeper comparison of white noise, blue noise, and regular sampling, and also how to generate blue noise sample points, give this a read: https://blog.demofox.org/2017/10/20/generating-blue-noise-sample-points-with-mitchells-best-candidate-algorithm/
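
If you just want the gist of the algorithm from that link, here’s a rough C++ sketch of Mitchell’s best candidate: each new point is picked from a batch of white noise candidates as the one farthest from every point placed so far (see the linked post for details and better options):

```cpp
#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <vector>

struct Point { float x, y; };

// Mitchell's best candidate: for each new point, generate a batch of white noise
// candidates and keep the one whose distance to the nearest existing point is largest.
std::vector<Point> BlueNoisePoints(int count, int candidateMultiplier)
{
    auto rand01 = []() { return float(std::rand()) / float(RAND_MAX); };
    std::vector<Point> points;
    for (int i = 0; i < count; ++i)
    {
        Point best = { 0.0f, 0.0f };
        float bestScore = -1.0f;
        int numCandidates = (int)points.size() * candidateMultiplier + 1;
        for (int c = 0; c < numCandidates; ++c)
        {
            Point candidate = { rand01(), rand01() };
            // score the candidate by the distance to its closest existing point
            float score = 1e30f;
            for (const Point& p : points)
            {
                float dx = candidate.x - p.x;
                float dy = candidate.y - p.y;
                score = std::fmin(score, std::sqrt(dx * dx + dy * dy));
            }
            if (score > bestScore)
            {
                bestScore = score;
                best = candidate;
            }
        }
        points.push_back(best);
    }
    return points;
}

int main()
{
    for (const Point& p : BlueNoisePoints(10, 1))
        printf("%f, %f\n", p.x, p.y);
    return 0;
}
```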

If you want at least some of the benefits of blue noise, but don’t want to spend the resources to compute it, a nice alternative might be low discrepancy sequences. You can read about them (and how to generate them) here: https://blog.demofox.org/2017/05/29/when-random-numbers-are-too-random-low-discrepancy-sequences/

You often hear about blue noise and low discrepancy sequences in graphics / in numerical integration. For low sample counts, blue noise / LDS’s give you more even spacing for your samples in the sampling domain, but I’ve heard that white noise gives you better results for larger sample counts.

There is a whole rainbow of noises possible, each with their own unique usage cases. If you want to know a way to transmute white noise to other colors of noises, give this a read: https://blog.demofox.org/2017/10/25/transmuting-white-noise-to-blue-red-green-purple/

Lastly, the other day I found out that Tempurpedic beds are the best, because they have some secret formula/process they bought from NASA. This recipe allows them to make memory foam such that the bubbles are all roughly the same size. The foam is not arranged into any regular structure such as a grid or a honeycomb, so in essence, the memory foam is blue noise. More specifically, it’s basically the Voronoi diagram of blue noise distributed sample points in 3d.

So, Tempurpedic is the best because they have blue noise foam.

Weird, right?!

# C++ Differentiable Programming: Searching For An Optimal Dither Pattern

The simple standalone C++ source code that implements this blog post and replicates the results shown is on github at: https://github.com/Atrix256/DitherFindGradientDescent

Neural networks are a hot topic right now. There is a lot of mystery and mystique surrounding them, but at their core, they are just simple programs where parameters are tuned using gradient descent.

(If curious about neural networks, you might find this interesting: How to Train Neural Networks With Backpropagation)

Gradient descent can be used in a lot of other situations though, and in fact, you can even generalize the core functionality of neural networks to work on other types of programs. That is exactly what we are doing in this post.

To be able to use gradient descent to optimize parameters of a program, your program has to be roughly of the form of:

1. It has parameters that specify how it processes some other data
2. There is some way for you to give a score to how well it did

Beyond those two points, much like a shader program or a SIMD program, you want your program to be as branchless as possible. The reason for this is because ideally your entire program should be made up of differentiable operations. Branches (if statements) cause discontinuities and are not differentiable. There are ways to deal with branches, and some branches don’t actually impact the result, but it’s a good guideline to keep in mind. Because of this, you also want to stay away from non differentiable functions – such as a “step” function which you might be tempted to use instead of an if statement.

This post is going to go into detail about using differentiable programming in C++ for a specific goal. Results are shown, and the simple / no external dependency C++ code that generated them are at https://github.com/Atrix256/DitherFindGradientDescent.

First, let’s have a short introduction to gradient descent.

If you have a function of the form $f(x)$, it takes one input so is one dimensional.

You can think of a function like this as having a value for every point on the number line.

You can visualize those values as a height, which gives you a function of the form $y=f(x)$ which we are still going to call one dimensional, despite it now having two dimensions.

Let’s look at a function $y=3x+1$

You might remember that the equation of a line is $y=mx+b$ where m is the slope of the line ($\frac{\text{rise}}{\text{run}}$ or $\frac{\Delta y}{\Delta x}$) and b is where the line crosses the y axis.

In calculus, you learn that the slope m is also the derivative of the function: $\frac{dy}{dx}$

The slope / derivative tells you how much is added to y for every 1 you add to x.

Let’s say that you were on this graph at the point $x=1$ (which puts you at $y=4$), and let’s say that you want to go downhill from where you were at. You could do that by looking at the slope / derivative at that point, which is 3 (it’s 3 for every point on the line). Since the derivative is positive, that means going to the right will make the y value larger (you’ll go up hill) and going to the left will make the y value smaller (you’ll go down hill).

So, if you want to go downhill to a smaller y value, you know that you need to subtract values from x.

A simpler way to think of this is that you need to subtract the derivative from your x value to make your y value smaller.

That is a core fact that will help guide you through things as they get more difficult: subtract the derivative (later, subtract the gradient) to make your value smaller. The value subtracted is often multiplied by some scalar value to make it move faster or slower.

What happens if you have a more complex function, such as $y=(x-2)^2$?

Let’s say that you are on this graph at the point $x=1$, which puts you at $y=1$. Now, which way do you move to go downhill?

The derivative of this function is $\frac{dy}{dx}=2x-4$, which you can plug your x value into to get the slope / derivative at that point: -2.

Remembering that we subtract that derivative to go down hill, that means we need to subtract a negative value from our x; aka we need to ADD a value to our x.

As you can see, adding a value to x and making it move to the right does in fact make us go down hill.

The rule works, hooray!
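
Here’s a tiny C++ example of that rule in action, doing gradient descent on $y=(x-2)^2$ starting at $x=1$, with the derivative scaled by a small step size (often called a learning rate):

```cpp
#include <cstdio>

int main()
{
    // minimize y = (x-2)^2 by gradient descent.
    // derivative: dy/dx = 2x - 4. subtract it (scaled) from x each step.
    float x = 1.0f;                     // starting point
    const float learningRate = 0.1f;
    for (int i = 0; i < 50; ++i)
    {
        float derivative = 2.0f * x - 4.0f;
        x -= learningRate * derivative;
    }
    printf("x = %f, y = %f\n", x, (x - 2.0f) * (x - 2.0f)); // x converges toward 2
    return 0;
}
```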

Things do get a little more complex when there’s more than one dimension, but not really that much more complex, so hang in there!

Let’s look at the function $z=xy$

Let’s say that we are at the (x,y) point (1,1) – in the upper right corner – which puts us at $z=1$, and let’s say that we want to go down hill. Instead of just having one variable to take the derivative of (x), we now have two variables (x and y). How are we going to deal with this?

First up, we are going to pretend that y is a constant value, and not actually a variable. This will give us the partial derivative for x: $\frac{\partial z}{\partial x}$. That tells us how much we would add to z if we added one to x. It’s a slope that is specifically down the x axis.

In this case, the partial derivative of z with respect to x is just: y.

Doing the same thing for the other variable, the partial derivative of z with respect to y is just: x.

Now that we have partial derivatives for each variable, we put them into a vector. This vector is called the gradient, and has some intimidating notation that looks like this:

$\nabla z = \nabla f(x,y) = (\frac{\partial z}{\partial x}, \frac{\partial z}{\partial y})$

For this function, the gradient is:

$\nabla z = \nabla f(x,y) = (y,x)$

That makes the gradient at our specific point:

$\nabla z = \nabla f(1,1) = (1,1)$

In the last section we saw that the derivative / slope pointed to where the function got larger. The same thing is true of gradients, they point in the direction where the function gets larger too!

So, if we want to go downhill, we need to subtract values from our x and our y to go there. In fact, we know that the steepest way down from our current point is when we subtract the same value from both x and y. This is because the gradient doesn’t just point to where it gets larger, it points to where it gets larger the FASTEST. So, the reverse of the gradient also points to where it gets smaller the fastest.

Pretty cool huh?

You can confirm this visually by looking at the graph of the function.

One last thing about slopes, derivatives and gradients before moving on. While they do point in the direction of greatest increase, they are only valid at an infinitely small point on the graph for functions that are non linear. This will be important later when we move in the opposite direction of the gradients, but do so with very small steps to help make sure we find the lowest points on the graph.

Why do we want to use gradient descent? Imagine that we have a function:

$w=f(x,y,z)$

Sure, we can pick some random starting values for x,y and z, and then use gradient descent to find the smallest w, but who cares?

Let’s give some other names to these variables and see if the value becomes a little more apparent:

$DamageTakenMultiplier = CalculateDamageTakenMultiplier(Armor, Dodge, Resist)$

Now, by only changing the names of the variables, we can see that we could use gradient descent to find what amount of Armor, Dodge and Resist would make it so our character takes the least amount of damage. This can now tell you how to distribute stat points to a character to get the best results 😛

Note that if you are ever trying to find the highest number possible, instead of the lowest, you can just multiply your function by -1 and do everything else the same way. You could also do gradient ASCENT, but it’s equivalent to multiplying by -1 and doing gradient descent.

## Problems

Here are a few common problems you can encounter when doing gradient descent.

• Local minima – when you get to the bottom of a bowl, but it isn’t the deepest bowl.
• Flat derivatives – these make it hard to escape a local area because the derivatives are very small, which will make each movement also very small.
• Discontinuities – The problem space (graph) changes abruptly without warning, making gradient descent do the wrong thing

Here’s an example of a local minimum versus a global minimum. You can see that depending on where you start on this graph, you might end up in the deeper bowl or the shallower bowl, if your only rule is “move downhill”.

(Image from wikipedia By KSmrq – http://commons.wikimedia.org/wiki/File:Extrema_example.svg, GFDL 1.2, https://commons.wikimedia.org/w/index.php?curid=6870865)

Here’s an example of a flat derivative. You can imagine that if you were at x=1, that you could see that the derivative would tell you to go to the left to decrease the y value, but it’s a very, very small number. This is a problem because it’s common to multiply the derivative or gradient by a multiplier before subtracting it, so you’d only take a very small step towards the goal.

It’s also possible to hit a perfectly flat derivative, which will be exactly 0. In this case, no matter how big or small of a number you multiply the derivative by, you won’t move AT ALL!

Below is a discontinuous function where if x is less than 0.5, the value is 1, otherwise the value is x. This essentially shows you what happens when you use if statements in differentiable programming. If you start on the right side, it’s going to correctly tell you that you should move left to improve your score. However, it’ll keep telling you to move left, until you get to x being less than 0.5, at which point your score will suddenly get a lot worse and your derivative will become 0. You will now be stuck!

There are ways to deal with these problems, but they are deep topics. If nothing else, you should know these problems exist, so you can know when they are affecting you, and/or why you should avoid them if you have a choice.

## What If I Want to Avoid Calculus?

Let’s say that you don’t get a kick out of calculating all these partial derivatives. Or, more pragmatically, you don’t want to sit down and manually calculate the gradient function of some generic C++ code!

I have some great news for you.

While we do need partial derivatives for our gradients, we aren’t going to have to do all this calculus to get them!

Here are a few other ways to get partial derivatives:

• Finite Differences – Conceptually super simple, but slow to calculate and not always very precise. More info: Finite Differences
• Backpropagation – What neural networks use. Also called backwards mode automatic differentiation. Fast but a bit complex mentally. I linked this already but for more info: How to Train Neural Networks With Backpropagation
• Dual Numbers – Also called forward mode automatic differentiation. Not as fast as backwards mode, but in the same neighborhood for speed. Super, super convenient and awesome for programmers. I love these. More info: Dual Numbers & Automatic Differentiation

Care to guess which one we are going to use? Yep, Dual Numbers!

In a nutshell, if you have code that uses floats, you can change it to use a templated type instead. Then, you put dual numbers through your code instead of floats. The output you get will be the specific value output from your code, but also the GRADIENT of your code at that value. Even better, this isn’t a numerical method (it’s not an approximation), it’s analytical (it’s exact).

That is seriously all there is to it. Dual numbers are amazing!

Since you made the code templated, you can still use it for floats when you don’t want or need the gradient.
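
To make this less abstract, here is a minimal sketch of what a dual number type with N partial derivatives can look like. This is not the code from the repo linked above – the names and details are just for illustration – but it shows how the value and the gradient ride along together through +, - and * (and through atan, which the dithering code later needs).

#include <array>
#include <cmath>
#include <cstddef>

template <size_t N>
struct Dual
{
    float value = 0.0f;               // the regular value
    std::array<float, N> gradient{};  // the partial derivatives

    Dual() = default;
    Dual(float v) : value(v) {}       // a constant: all partial derivatives are zero

    // make a variable: it has a derivative of 1.0 for its own slot in the gradient
    static Dual MakeVariable(float v, size_t index)
    {
        Dual ret(v);
        ret.gradient[index] = 1.0f;
        return ret;
    }
};

// (a+b)' = a' + b'
template <size_t N>
Dual<N> operator + (const Dual<N>& a, const Dual<N>& b)
{
    Dual<N> ret(a.value + b.value);
    for (size_t i = 0; i < N; ++i)
        ret.gradient[i] = a.gradient[i] + b.gradient[i];
    return ret;
}

// (a-b)' = a' - b'
template <size_t N>
Dual<N> operator - (const Dual<N>& a, const Dual<N>& b)
{
    Dual<N> ret(a.value - b.value);
    for (size_t i = 0; i < N; ++i)
        ret.gradient[i] = a.gradient[i] - b.gradient[i];
    return ret;
}

// product rule: (a*b)' = a'b + ab'
template <size_t N>
Dual<N> operator * (const Dual<N>& a, const Dual<N>& b)
{
    Dual<N> ret(a.value * b.value);
    for (size_t i = 0; i < N; ++i)
        ret.gradient[i] = a.gradient[i] * b.value + a.value * b.gradient[i];
    return ret;
}

// chain rule: atan(a)' = a' / (1 + a^2)
template <size_t N>
Dual<N> atan (const Dual<N>& a)
{
    Dual<N> ret(std::atan(a.value));
    for (size_t i = 0; i < N; ++i)
        ret.gradient[i] = a.gradient[i] / (1.0f + a.value * a.value);
    return ret;
}

For the problem below, the templated type would be something like Dual<9> – one partial derivative per pixel of the 3×3 dither pattern.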

## Differentiable Programming / Gradient Descent Skeleton

Here’s the general skeleton we are going to be following for using gradient descent with our differentiable program.

1. Initialize the parameters to random (but valid) values, storing them in dual numbers.
2. Run the code that does our work, taking dual numbers as input for the parameters of how it does the work.
3. Put the result (which is dual numbers) into a scoring function to give us a score. Usually the score is such that smaller numbers are better. If not, just multiply the score by -1 so it is.
4. Since we did the work and calculated the score using dual numbers, we now have a gradient which describes how we need to adjust the parameters to make our score better.
5. Adjust our parameters using the gradient and go back to step 2. Repeating until whatever exit condition we want is hit: maybe when a certain number of iterations happen, or maybe when our score gets below a certain value.

That’s our game plan. Let’s dive into the specific problem we are going to be attacking.

## Searching For an Ideal Dithering Pattern

Here is the problem we want to tackle:

We want to find a 3×3 dithering pattern such that when we use it to dither an image (by repeating the 3×3 pattern over and over across the image), and then blur the result by a specific amount, that it’s as close as possible to the original image being blurred by that same amount.

That sounds a bit challenging right? It’s not actually that bad, don’t worry (:

The steps the code has to do (differentiably) are:

1. Dither the source image
2. Blur the results
3. Blur the source image
4. Calculate a score for how similar they are
5. Use all this with Gradient Descent to optimize the dither pattern

Once again, we need to do this stuff differentiably, using dual numbers, so that we get a gradient for how to modify the dither pattern to better our score.

### Step 1 – Dither Source Image

Dithering an image is a pretty simple process.

We are going to be dithering it such that we take a greyscale image as input and convert it to a black and white image using the dither pattern.

(If you are starting with a color image, this shows how to convert it to greyscale: Converting RGB to Grayscale)

For every pixel (x,y) in the source image, you look at pixel (x%3, y%3) in the dither pattern, and if the dither pattern pixel is less than the source, you write a black pixel out, else you write a white pixel out.

if (ditherPixel(x%3, y%3) < sourcePixel(x,y))
pixelOut(x,y) = 0.0;
else
pixelOut(x,y) = 1.0;


There’s a problem though… this is a branch, which makes a discontinuity, which will make it so we can’t have good derivatives to help us get to the goal.

Another way to write the dithering operation above is to write it like this:

difference = ditherPixel(x%3, y%3) - sourcePixel(x,y);
pixelOut(x,y) = step(difference);


Where “step” is the Heaviside step function, which is 1 if x >= 0, otherwise is 0.

(Image from Wikipedia By Omegatron (Own work) [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0) or GFDL (http://www.gnu.org/copyleft/fdl.html)], via Wikimedia Commons)

That got rid of the branch (if statement), but we still have a discontinuous function.

Luckily we can approximate a step function with other functions. I decided to use the formula $0.5+atan(100*x)/pi$ which looks like this:

Unfortunately, I found that my results weren’t that good, so I switched it to $0.5+atan(10000*x)/pi$ which ended up working better for me:

This function does have the problem of having flat derivatives, but I found that it worked pretty well anyways. The flat derivatives don’t seem to be a big problem in this case luckily.

To put it all together, the differentiable version of dithering a pixel that I use looks like this:

difference = ditherPixel(x%3, y%3) - sourcePixel(x,y);
pixelOut(x,y) = 0.5+atan(10000.0f * difference) / pi;


As input to this dithering process, we take:

• The source image
• a 3×3 dither pattern, where each pixel is a dual number

As output this dithering process gives us:

• A dithered image that is converted to black and white (either a 1.0 or 0.0 value per pixel)
• It’s the same size as the source image
• Each pixel is a dual number with 9 derivatives. There is one derivative per dither pixel.

### Step 2 – Blur the Results

Blurring the results of the dither wasn’t that difficult. I used a Gaussian blur, but other blurs could be used easily.

I had some Gaussian blur code laying around (from this blog post: Gaussian Blur) and I converted it to using a templated type instead of floats/pixels where appropriate, also making sure there were no branches or anything discontinuous.

It turned out there wasn’t a whole lot to fix up here luckily, so it wasn’t too difficult.

This allowed me to take the dithered results which are a dual number per pixel, and do a Gaussian blur on them, preserving and correctly modifying the gradient (derivatives) as it did the Blur.

### Step 3 – Blur the Source Image

Blurring the source image was easy since the last step already produced a generic Gaussian blur function, so I just used that to blur the image. This doesn’t need to be done as dual numbers, so it was regular pixels in and regular pixels out.

You might wonder why this part doesn’t need to be done as dual numbers.

The simple answer is that these values are in no way dependent on the dither pattern (which is what we are tracking with the derivatives).

The more mathematical explanation is that you could in fact consider these dual numbers, which just have a gradient of zero because they are essentially constants that have nothing to do (yet) with the parameters of the function. The gradient would just implicitly be zero, like any other constant value you might introduce to the function.

### Step 4 – Calculating a Similarity Score

Next up I needed to calculate a similarity score between the dithered then blurred results (which is made up of dual numbers), and the source image which was blurred (and is made up of regular pixels).

The similarity score I went with is just MSE or “Mean Squared Error”.

To calculate MSE, for every pixel you do this:

error = ditheredBlurredImage(x,y) - blurredImage(x,y);
errorSquared = error * error;


After you have the squared error for every pixel, you just take the average of them to get the MSE.
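
Here’s a small sketch of what that can look like, templated so it works for plain floats or for a dual number type like the Dual<9> sketched earlier. The function and variable names are mine, just for illustration.

#include <cstddef>
#include <vector>

// mean squared error between the dithered-then-blurred image (TPixel may be a dual
// number carrying the gradient) and the blurred source image (plain floats).
template <typename TPixel>
TPixel CalculateMSE(const std::vector<TPixel>& ditheredBlurred, const std::vector<float>& sourceBlurred)
{
    TPixel sumSquaredError = TPixel(0.0f);
    for (size_t i = 0; i < ditheredBlurred.size(); ++i)
    {
        TPixel error = ditheredBlurred[i] - TPixel(sourceBlurred[i]);
        sumSquaredError = sumSquaredError + error * error;
    }
    // average of the squared errors
    return sumSquaredError * TPixel(1.0f / float(ditheredBlurred.size()));
}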

An interesting thing about MSE is that because errors are squared, it will favor smaller errors much more than larger errors, which is a nice property.

A not so nice property about MSE is that it might decide something is a small difference mathematically even though a human would decide that it was a huge difference perceptually. The reverse is also true. Despite this, I chose it because it is simple and I ended up getting decent results with it.

If you want to go down the rabbit hole, it’s worth searching for “perceptual similarity scores of images”.

After this step, we have an MSE value which says how similar the images are. A lower value means lower average squared error, so lower numbers are indeed better.

What else is nice is that the MSE value is a dual number with a gradient that has the 9 partial derivatives that describe how much the MSE changes as you adjust each parameter.

That gradient tells us how to adjust the parameters (the 3×3 dither pixels!) to lower the MSE!

### Step 5 – Putting it All Together

Now it’s time to put all of this together and use gradient descent to make our dither pattern better.

Here’s how the program operates:

1. Initialize the 3×3 dither pattern to random values, setting the derivatives to 1.0 in the gradient, for the variable that they represent.
2. do 1000 iterations of this loop:
   1. Dither and blur the source image
   2. Calculate MSE of this result compared to the source image blurred
   3. Using the gradient from the MSE value, subtract the respective partial derivative from each of the pixels in the dither pattern, but scaling the partial derivative by a “learning rate”.
3. Output the best result found

The learning rate starts at 3.0 at loop iteration 0, but decays with each iteration, down to 0.1 at iteration 999. It starts above 1 to help escape local minima, and uses a very small rate at the end to try and get deeper into whatever minimum it has found.

After adjusting the dither pattern pixels, I clamp them to be between 0 and 1.
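
In code form, the per-iteration update looks something like the sketch below. The post doesn’t say exactly how the learning rate decays from 3.0 to 0.1, so a linear decay is assumed here, the variable names are made up for illustration, and the dither pattern is assumed to be stored in something like the Dual<9> type sketched earlier.

// "mse" is the dual number score; its gradient has one partial derivative per dither pixel.
float t = float(iteration) / float(c_numIterations - 1); // goes from 0 to 1 over the run
float learningRate = 3.0f * (1.0f - t) + 0.1f * t;       // assumed linear decay from 3.0 to 0.1
for (size_t i = 0; i < 9; ++i)
{
    // subtract the partial derivative, scaled by the learning rate, to lower the MSE
    ditherPattern[i].value -= learningRate * mse.gradient[i];

    // clamp the dither pattern pixel to [0, 1]
    if (ditherPattern[i].value < 0.0f) ditherPattern[i].value = 0.0f;
    if (ditherPattern[i].value > 1.0f) ditherPattern[i].value = 1.0f;
}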

Something else I ought to mention is that while I’m doing the gradient descent, I keep track of the best scoring dither pattern seen.

This way, after the 1000 iterations are up, if we ever saw anything better than where we are at currently, we just use that instead of the final result.

Presumably, if you tune your parameters (learning rate, iterations, etc!) correctly, this won’t come up often, but it’s always a possibility that your final state is not the best state encountered, so this is a nice way to get better results more often.

## Results

Did you notice that I called this post “searching for an ideal dither pattern” instead of “finding an ideal dither pattern”? (:

The results are decent, but I know they could be better. Even so, I think the techniques talked about here are a good start going down the path of differentiable programming, and similar topics.

Here are some results I was able to get with the code. Click to see the full size images. The shrunken down images have aliasing issues.

The images left to right are: The original, the dither pattern used (repeated), the dithered image, the blurred dither image, and lastly the blurred original image. The program aims to make the last two images look as close as possible as it can, using MSE as the metric for how close they are.

Here is the starting state of using a Gaussian blur with a sigma of 10:

Here it is after the 1000 iterations of gradient descent. Notice the black blob at the top is gone compared to where it started.

Here’s the starting state when using a Gaussian blur sigma of 1:

And here it is after 1000 iterations, which is pretty decent results:

Lastly, here it is with no blurring whatsoever:

And after 1000 iterations, I think it actually looks worse!

Using no blur at all makes for some really awful results. The blur gives the algorithm more freedom on how it can succeed, whereas with no blur, there is a lot less wiggle room for finding a solution.

Another benefit of using the blur before MSE calculation is that a blur is a low pass filter. That means that higher frequencies are gone before the MSE calculation. The result of this is that the algorithm will favor results which are closer to blue noise dithering. Pretty neat right?!

## Closing

I hope you enjoyed this journey through differentiable programming and gradient descent, and I hope you were able to follow along.

Here are some potentially interesting things to do beyond what we talked about here:

• Have it learn from a set of images, instead of only this single image. This should help prevent “over fitting” and let it find a dither pattern which works well for all images instead of just this one specific image.
• Use a separate set of images to gauge the accuracy of the result that weren’t used as part of the training, to help prove that it really hasn’t overfit the training data.
• Try applying “small corruption” in the learning to help prevent overfitting or getting stuck in local minima – one idea would be to have some percentage chance per derivative that you don’t apply the change to the dither pattern pixel. This would add some randomness to the gradient descent instead of it only being down the steepest direction all of the time.
• Instead of optimizing the dithering patterns, you could make a formula that generated the dithering patterns, and instead optimize the coefficients / terms of that formula. If you get good results, you’ll end up with a formula you can use for dithering instead of a pattern, which might be nice for the case of avoiding a texture read in a pixel shader to do the dithering.

I’m not a data scientist or machine learning expert by any means, so there are plenty of improvements to be made. There is a lot of overlap with what is being done here and other algorithms – both in the machine learning realm and outside of the machine learning realm.

For one, you could use Newton’s method instead of plain gradient descent. It can find minima faster by using the second derivative in the calculations as well.

However, this algorithm is almost purely “exploitative” in that wherever you start with your parameters, it will try to go from there to the deepest point in whatever valley it’s already in. Some other types of algorithms differ from this in that they are more “explorative” and try to find other valleys, but aren’t always as good at finding the deepest part of the valleys that they do find. More explorative algorithms include simulated annealing, differential evolution, and genetic algorithms.

If you enjoyed this post, check out this book for deeper details on algorithms relating to gradient descent (simulated annealing, genetic algorithms, etc!). It’s a very good book and very easy to read!
Essentials of Metaheuristics

Any corrections to what I’ve said, the code, or suggestions for improvements, please let me know by leaving a comment here, or hitting me up on twitter: https://twitter.com/Atrix256

# Dissecting “Tiny Clouds”

There is an amazing shadertoy called “Tiny Clouds” by stubbe (twitter: @Stubbesaurus) which flies you through nearly photorealistic clouds in only 10 lines of code / 280 characters (2 old sized tweets or 1 new larger sized tweet).

The code is a bit dense, so I wanted to take some time to understand it and share the explanation for anyone else who was interested. Rune (the author) kindly answered a couple questions for me as well. Thanks Rune!

Link: [SH17A] Tiny Clouds (Check out this link, it looks even more amazing in motion)

Here is the code in full. The texture in iChannel0 is just a white noise texture that is bilinearly sampled.

#define T texture(iChannel0,(s*p.zw+ceil(s*p.x))/2e2).y/(s+=s)*4.
void mainImage(out vec4 O,vec2 x){
vec4 p,d=vec4(.8,0,x/iResolution.y-.8),c=vec4(.6,.7,d);
O=c-d.w;
for(float f,s,t=2e2+sin(dot(x,x));--t>0.;p=.05*t*d)
p.xz+=iTime,
s=2.,
f=p.w+1.-T-T-T-T,
f<0.?O+=(O-1.-f*c.zyxw)*f*.4:O;
}


BTW this shadertoy is a shrunken & reinterpreted version of a larger, more feature rich shadertoy by iq: Clouds

Before diving into the details of the code, here is how it works in short:

• Every pixel does a ray march from far to near. It does it backwards to make for simpler alpha blending math.
• At every ray step, it samples FBM data (fractal brownian motion) to figure out if the current position is below the surface of the cloud or above it.
• If below, it alpha blends the pixel color with the cloud color at that point, using the vertical distance into the cloud as the cloud density.

Pretty reasonable and simple – and it would have to be, to look so good in so few characters! Let’s dig into the code.

Line 1 is a define that we’ll come back to and line 2 is just a minimal definition of the mainImage function.

On line 3 several variables are declared:

• p – this is the variable that holds the position of the ray during the ray march. It isn’t initialized here, but that’s ok because the position is calculated each step in the loop. It is interesting to see that the y component of p is never used. p.x is actually depth into the screen, p.z is the screen x axis, and p.w is the screen y axis (aka the up axis). I believe that the axis choices and the fact that the y component is never used is purely to make the code smaller.
• d – this is the direction that the ray for this pixel travels in. It uses the same axis conventions as p, and the y component is also never used (except implicitly for calculating p.y, which is never used). 0.8 is subtracted from d.z and d.w (the screen x and screen y axes). Interestingly that makes the screen x axis 0 nearly centered on the screen. It also points the screen y axis downward a bit, putting the 0 value near the top of the screen to make the camera look more downward at the clouds.
• c – this is the color of the sky, which is a nice sky blue. It’s initialized with constants in x and y, and then d is used for z and w. d.xy goes into c.zw. That gives c the 0.8 value in the z field. I’m sure it was done this way because it’s fewer characters to initialize using “d” compared to “.8,0.” for the same effect. Note that c.w is used to calculate O.w (O.a) but that the alpha channel of the output pixel value is currently ignored by shadertoy, so this is a meaningless by product of the code, not a desired feature.

Line 4 initializes the output pixel color to be the sky color (c), but then subtracts d.w, which is the pixel’s ray march direction on the screen y axis. This has the effect of making a nice sky blue gradient.

To see this in action, here we set O to c:

Here we set O to c-d.w:

It gets darker blue towards the top – where d.w is positive – because a positive number is being subtracted from the sky color. The color values get smaller.

It gets lighter towards the bottom – where d.w is negative – because a negative number is being subtracted from the sky color. The color values get larger.

On line 5, the for loop for ray marching starts. A few things happen here:

• f is declared – f is the signed vertical distance from the current point in space to the cloud. If negative, it means that the point is inside the cloud. If positive, it means that the point is outside (above) the cloud. It isn’t initialized here, but it’s calculated each iteration of the loop so that’s fine.
• s is declared – s is a scale value for use with the FBM data. FBMs work by sampling multiple octaves of data. You scale up the position and scale down the value for each octave. s is that scale value, used for both purposes. This isn’t initialized but is calculated each iteration of the loop so that’s fine.
• t is declared and initialized – t (aka ray march step index) is initialized to 2e2 aka 200. It was done this way because “2e2” is smaller than “200.” by one character. Note that the for loop takes t from 200 to 0. The ray marching happens back to front to simplify alpha blending. The sin(dot(x,x)) part I want to talk about briefly below.
• p is calculated – p (aka the position in the current step of the ray march) is calculated, and this happens every step of the loop. p is t (time) multiplied by the direction of the ray for this pixel, and multiplied by .05 to scale it down.

The reason that sin(dot(x,x)) is added to the “ray time” is because the ray is marching through voxelized data (boxes). Unlike boxes, clouds are supposed to look organic, and not geometric. A way to fight the problem of the data looking boxy is to add a little noise to each ray to break up the geometric pattern. You can either literally add some noise to the result, or do what this shader does, which is add some noise to the starting position of the ray so that neighboring rays will cross the box (voxel) boundaries at different times and will look noisy instead of geometric.

I can’t see a difference when removing this from the shader, and other people have said the same. Rune says in the comments that it’d be on the chopping block for sure if he needed to shave off some more characters. He reached his 280 character goal though, so there is no need to remove it.

For what it’s worth, here is that expression visualized in the blue channel. The -1 to +1 range is mapped to 0 to 1 by multiplying it by a half and adding a half:

Line 6 adds the current time to p.x and p.z. Remember that the x component is the axis pointing into the screen and the z component is the screen space x axis, so this line of code moves the camera forward and to the right over time.

If you are wondering why the lines in the for loop end in a comma instead of a semicolon, the reason is because if a semicolon was used instead, the for loop would require two more characters: “{” and “}” to show where the scope of the loop started and ended. Ending the lines with commas mean it’s one long statement, so the single line version of a for loop can be used. An interesting trick 😛

Line 7 sets / initializes s to 2. Remember that s is used as the octave scale for sample position and resulting value. That will come into play in the next line.

First let’s look at line 1, which is the “T” macro.

That macro samples the texture (which is just white noise) at a position described by the current ray position in the ray march. The s variable is used to scale up the position, and it’s also used to scale down the noise value at that position. The sample position involves p.zw, which are the screen space x and y axes respectively, but also includes p.x, which is the axis pointing into the screen. This maps a 3d coordinate to a 2d texture location. I have tried making the shader sample a 3d white noise texture instead of doing this and got what looks to be the same quality results.

The macro also multiplies s by 2 each sample, so that the next sample will sample the next octave.

An interesting part of this texture coordinate conversion from 3d to 2d though is that the x component (the axis that goes into the screen) is ceil’d. I’m not sure if there is any logic to this other than it’s a way to transform the 3d coordinates into a 2d one for the texture lookup.

Below is what it looks like without the ceil in the macro for s*p.x. It stretches the noise in a weird way.

The uv coordinates sampled are divided by 2e2 (which is 200, but again, fewer characters than “200.”). I believe this value of 200 matches the number of ray march steps intentionally, so that the ray marches across the entire texture (with wrap around) each time.

Line 8 uses this macro. We set f to be p.w, which is the ray’s height. 1 is added to the height which moves the camera up one unit. Lastly, the T macro is used to subtract 4 octaves of noise from f.

The result of this is that f gives us a signed distance to the cloud on the vertical axis. In other words, f tells us how far above or below the surface of the clouds we are. A positive value means the position is above the clouds, and a negative value means the position is below the clouds.

Line 10 is the close of the function, so line 9 is the last meaningful line of code.

This line of code says:

• If f is less than zero (“if the point is inside the cloud”), alpha blend the pixel color towards the cloud color at this position: O+=(O-1.-f*c.zyxw)*f*.4
• Else, “O”. This is a dummy statement with no side effects that is there to satisfy the ternary operator syntax with a minimal number of characters.

I was looking at that formula for a while, trying to figure it out. I was thinking maybe it was something like a cheaper function fitting of some more complex light scattering / absorption function.

I asked Rune and he explained it. All it’s doing is doing an alpha blend (a lerp) from the current pixel color to the color of the cloud at this position. If you do the lerp mathematically, expand the function and combine terms, you get the above. Here’s his explanation from twitter (link to twitter thread):

Alpha blending between accumulated color (O) and incoming cloud color (1+f*c.zyxw). Note density (f) is negative:
O = lerp(O, 1+f*c.zyxw, -f*.4)
O = O * (1+f*.4) + (1+f*c.zyxw)*-f*.4
O = O + O*f*.4 + (1+f*c.zyxw)*-f*.4
O = O + (O-1-f*c.zyxw)*f*.4
O += (O-1-f*c.zyxw)*f*.4

Remember the marching is from far to near which simplifies the calculations quite a bit. If the marching was reversed then you would also need to keep track of an accumulated density.

One obvious question then would be: why is “1+f*c.zyxw” the cloud color of the current sample?

One thing that helps clear that up is that f is negative. If you make “f” mean “density” and flip its sign, the equation becomes: “1-density*c.zyxw”

We can then realize that “1” when interpreted as a vec4 is the color white, and that c is the sky color. We can also throw out the w since we (and shadertoy) don’t care about the alpha channel. We can also replace x,y,z with r,g,b. That makes the equation become: “white-density*skycolor.bgr”

In that equation, when density is 0, all we are left with is white. As density increases, the color gets darker.

The colors are the reversed sky color, because the sky color is (0.6, 0.7, 0.8). If we used the sky color instead of the reversed sky color, you can see that blue would drop away faster than green, which would drop away faster than red. If you do that, the clouds turn a reddish color like you can see here:

I’m not an expert in atmospheric rendering (check links at the bottom for more info on that!), but it looks more natural and correct for it to do the reverse. What we really want is for red to drop off the quickest, then green, then blue. I believe a more correct thing to do would be to subtract sky color from 1.0 and use that color to multiply density by. However, reversing the color channels works fine in this case, so no need to spend the extra characters!

Another obvious question might be: why is the amount of lerp “-f*.4”?

It probably looks strange to see a negative value as a lerp amount, but remember that f is negative when the point is inside a cloud, so -f is a positive value, which is then multiplied by 0.4 to make it smaller. It’s just scaling the density a bit.

## Other Notes

Using bilinear interpolation of the texture makes a big difference. If you switch the texture to using nearest neighbor point sampling you get something like this which looks very boxy. It looks even more boxy when it’s in motion.

One thing I wanted to try when understanding this shader was to try to replace the white noise texture lookup with a white noise function. It does indeed work as you can see below, but it got noticeably slower on my machine doing that. I’m so used to things being texture bound that getting rid of texture reads is usually a win. I didn’t stop to think that in this situation all that was happening was compute and no texture reads. In a more fully featured renderer, you may indeed find yourself texture read bound, and moving it out of a texture read could help speed things up – profile and see! It’s worth noting that to get proper results you need to discretize your noise function into a grid and use bilinear interpolation between the values – mimicking what the texture read does. Check my unpacked version of the shader in the links section for more details!

Something kind of fun is that you can replace the white noise texture with other textures. The results seem to be pretty good usually! Below is where I made the shadertoy use the “Abstract1” image as a source. The clouds got a lot softer.

Thanks for reading. Anything that I got wrong or missed, please let me and the other readers know!

Here is my unpacked version of the shader, which includes the option to use a white noise function instead of a white noise texture: Tiny Clouds: Unpacked & No Tex

Volumetric Atmospheric Scattering

Creating a Volumetric Ray Marcher

# Demystifying Floating Point Precision

Floating point numbers have limited precision. If you are a game programmer, you have likely encountered bugs where things start breaking after too much time has elapsed, or after something has moved too far from the origin.

This post aims to show you how to answer the questions:

1. What precision do I have at a number?
2. When will I hit precision issues?

First, a very quick look at the floating point format.

## Floating Point Format

Floating point numbers (Wikipedia: IEEE 754) have three components:

1. Sign bit – whether the number is positive or negative
2. Exponent bits – the magnitude of the number
3. Mantissa bits – the fractional bits

32 bit floats use 1 bit for sign, 8 bits for exponent and 23 bits for mantissa. Whatever number is encoded in the exponent bits, you subtract 127 to get the actual exponent, meaning the exponent can be from -126 to +127.

64 bit doubles use 1 bit for sign, 11 bits for exponent and 52 bits for mantissa. Whatever number is encoded in the exponent bits, you subtract 1023 to get the actual exponent, meaning the exponent can be from -1022 to +1023.

16 bit half floats use 1 bit for sign, 5 bits for exponent and 10 bits for mantissa. Whatever number is encoded in the exponent bits, you subtract 15 to get the actual exponent, meaning the exponent can be from -14 to +15.

For all of the above, an exponent of all zeros is a special case used for zero and the denormals / subnormals, and an exponent of all ones is a special case meaning infinity (or NaN, if the mantissa is non zero).

The exponent bits tell you which power of two numbers you are between – $[2^{exponent}, 2^{exponent+1})$ – and the mantissa tells you where you are in that range.

## What precision do I have at a number?

Let’s look at the number 3.5.

To figure out the precision we have at that number, we figure out what power of two range it’s between and then subdivide that range using the mantissa bits.

3.5 is between 2 and 4. That means we are dividing the range of numbers 2 to 4 using the mantissa bits. A float has 23 bits of mantissa, so the precision we have at 3.5 is:

$\frac{4-2}{2^{23}} = \frac{2}{8388608} \approx 0.000000238418579$

3.5 itself is actually exactly representable by a float, double or half, but the amount of precision numbers have at that scale is that value. The smallest number you can add or subtract to a value between 2 and 4 is that value. That is the resolution of the values you are working with when working between 2 and 4 using a float.

Using a double instead of a float gives us 52 bits of mantissa, making the precision:

$\frac{4-2}{2^{52}} = \frac{2}{4503599627370496} \approx 0.00000000000000044408921$

Using a half float with 10 bits of mantissa it becomes:

$\frac{4-2}{2^{10}} = \frac{2}{1024} = 0.001953125$
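
If you’d rather let the computer do that arithmetic, here is a small C++ sketch of the same calculation (the function name is just made up for this post):

#include <cmath>
#include <cstdio>

// precision (step size) at "value" for a type with the given number of mantissa bits:
// the width of the [2^exponent, 2^(exponent+1)) range containing the value, divided by 2^mantissaBits.
double PrecisionAt(double value, int mantissaBits)
{
    int exponent = 0;
    std::frexp(value, &exponent);                      // value = m * 2^exponent, with m in [0.5, 1)
    double rangeWidth = std::ldexp(1.0, exponent - 1); // width of the power of two range holding the value
    return rangeWidth / std::ldexp(1.0, mantissaBits);
}

int main()
{
    printf("half:   %.17g\n", PrecisionAt(3.5, 10)); // 0.001953125
    printf("float:  %.17g\n", PrecisionAt(3.5, 23)); // ~0.000000238418579
    printf("double: %.17g\n", PrecisionAt(3.5, 52)); // ~0.00000000000000044408921
    // sanity check against the actual gap between 3.5f and the next float up
    printf("gap:    %.17g\n", double(std::nextafterf(3.5f, 4.0f) - 3.5f));
    return 0;
}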

Here’s a table showing the amount of precision you get with each data type at various exponent values. N/A is used when an exponent is out of range for the specific data type.

$\begin{array}{c|c|c|c|c} exponent & range & half & float & double \\ \hline 0 & [1,2) & 0.0009765625 & 0.00000011920929 & 0.0000000000000002220446 \\ 1 & [2,4) & 0.001953125 & 0.000000238418579 & 0.00000000000000044408921 \\ 2 & [4,8) & 0.00390625 & 0.000000476837158 & 0.00000000000000088817842 \\ 9 & [512, 1024) & 0.5 & 0.00006103515 & 0.00000000000011368684 \\ 10 & [1024,2048) & 1 & 0.00012207031 & 0.00000000000022737368 \\ 11 & [2048,4096) & 2 & 0.00024414062 & 0.00000000000045474735 \\ 12 & [4096,8192) & 4 & 0.00048828125 & 0.0000000000009094947 \\ 15 & [32768, 65536) & 32 & 0.00390625 & 0.0000000000072759576 \\ 16 & [65536, 131072) & N/A & 0.0078125 & 0.000000000014551915 \\ 17 & [131072, 262144) & N/A & 0.015625 & 0.00000000002910383 \\ 18 & [262144, 524288) & N/A & 0.03125 & 0.000000000058207661 \\ 19 & [524288, 1048576) & N/A & 0.0625 & 0.00000000011641532 \\ 23 & [8388608,16777216) & N/A & 1 & 0.00000000186264515 \\ 52 & [4503599627370496, 9007199254740992) & N/A & 536870912 & 1 \\ \end{array}$

A quick note on the maximum number you can store in floating point numbers, by looking at the half float specifically:

A half float has a maximum exponent of 15, which you can see above puts the number range between 32768 and 65536. The precision is 32 which is the smallest step that can be made in a half float at that scale. That range includes the smaller number but not the larger number. That means that the largest number a half float can store is one step away (32) from the high side of that range. So, the largest number that can be stored is 65536 – 32 = 65504.

## How Many Digits Can I Rely On?

Another helpful way of looking at floating point precision is how many digits of precision you can rely on.

A float has 23 bits of mantissa, and 2^23 is 8,388,608. 23 bits let you store all 6 digit numbers or lower, and most of the 7 digit numbers. This means that floating point numbers have between 6 and 7 digits of precision, regardless of exponent.

That means that from 0 to 1, you have quite a few decimal places to work with. If you go into the hundreds or thousands, you’ve lost a few. When you get up into the tens of millions, you’ve run out of digits for anything beyond the decimal place.

You can actually see that this is true in the table in the last section. With floating point numbers, it’s at exponent 23 (8,388,608 to 16,777,216) that the precision is at 1. The smallest value that you can add to a floating point value in that range is in fact 1. It’s at this point that you have lost all precision to the right of the decimal place. Interestingly, you still have perfect precision of the integers though.

Half floats have 10 mantissa bits and 2^10 = 1024, so they just barely have 3 digits of precision.

Doubles have 52 mantissa bits and 2^52 = 4,503,599,627,370,496. That means doubles have between 15 and 16 digits of precision.

This can help you understand how precision will break down for you when using a specific data type for a specific magnitude of numbers.

## When will I hit precision issues?

Besides the loose rules above about how many digits of precision you can count on, you can also solve to see when precision will break down for you.

Let’s say that you are tracking how long your game has been running (in seconds), and you do so by adding your frame delta (in seconds) to a variable every frame.

If you have a 30fps game, your frame delta is going to be 0.0333.

Adding that each frame to a float will eventually cause the float to reach a value where that number is smaller than the smallest difference representable (smaller than the precision), at which point things will start breaking. At first your accuracy will drop and your time will be wrong, but eventually adding your frame delta to the current time won’t even change the value of the current time. Time will effectively stop!

When will this happen though?

We’ll start with the formula we saw earlier and do one step of simple algebra to get us an equation which can give us this answer.

$\frac{range}{mantissa} = precision \\ \\ range = mantissa * precision$

How we use this formula is we put the precision we want into “precision” and we put the size of the mantissa ($2^{MantissaBits}$) into “mantissa”. The result tells us the range that we’ll get the precision at.

Let’s plug in our numbers:

$range = 8388608 * 0.0333 = 279340.6464$

This tells us the range of the floating point numbers where we’ll have our problems, but this isn’t the value that we’ll have our problems at, so we have another step to do. We need to find what exponent has this range.

Looking at the table earlier in the post you might notice that the range at an exponent also happens to be just $2^{exponent}$.

That’s helpful because that just means we take log2 of the answer we got:

$log2(279340.6464) = 18.0916659875$

Looking at the table again, we can see that floating point numbers have a precision of 0.03125 at exponent value 18. So, exponent 18 is close, but its precision is smaller than what we want – aka the precision is still ok.

That means we need to ceil() the number we got from the log2.

Doing that, we see that things break down at exponent 19, which has a precision of 0.0625. The actual value it has this problem at is 524,288 (which is $2^{19}$).

So, our final formula for “where does precision become this value?” becomes:

$value = pow(2, ceil(log2(mantissa * precision)))$

Note that at exponent 18, there is still imprecision happening. When adding 1/30 to 262144, it goes from 262144 to 262144.031 to 262144.063, instead of 262144, 262144.033, 262144.066. There is error, but it’s fairly small.

At exponent 19 though, things fall apart a lot more noticeably. When adding 1/30 to 524288, it goes from 524288 to 524288.063 to 524288.125. Time is actually moving almost twice as fast in this case!

At exponent 20, we start at 1048576.00 and adding 1/30 doesn’t even change the value. Time is now stopped.

It does take 6.1 days (524,288 seconds) to reach exponent 19 though, so that’s quite a long time.

If we use half floats, it falls apart at value 64. That’s right, it only takes 64 seconds for this to fall apart when using 16 bit half floats, compared to 6.1 days when using 32 bit floats!

With doubles, it falls apart at value 281,474,976,710,656. That is 8,925,512 years!

Let’s check out that equation again:

$value = pow(2, ceil(log2(mantissa * precision)))$

A possibly more programmer friendly way to do the above would be to calculate mantissa * precision and then round up to the next power of 2. That’s exactly what the formula is doing above, but in math terms, not programming terms.
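
Here’s that same calculation in C++ form, checking the three numbers from above (again, the function name is just for illustration):

#include <cmath>
#include <cstdio>

// the value at which the precision (step size) first reaches "precision",
// for a type with the given number of mantissa bits.
double PrecisionBreakdownValue(int mantissaBits, double precision)
{
    double mantissa = std::ldexp(1.0, mantissaBits); // 2^mantissaBits
    return std::pow(2.0, std::ceil(std::log2(mantissa * precision)));
}

int main()
{
    printf("half:   %.0f\n", PrecisionBreakdownValue(10, 1.0 / 30.0)); // 64
    printf("float:  %.0f\n", PrecisionBreakdownValue(23, 1.0 / 30.0)); // 524288
    printf("double: %.0f\n", PrecisionBreakdownValue(52, 1.0 / 30.0)); // 281474976710656
    return 0;
}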

## Storing Integers

I recently learned that floating point numbers can store integers surprisingly well. It blows my mind that I never knew this. Maybe you are in the same boat 😛

Here’s the setup:

1. For any exponent, the range of numbers it represents is a power of 2.
2. The mantissa will always divide that range into a power of 2 different values.

It might take some time and/or brain power to soak that up (it did for me!) but what that ends up ultimately meaning is that floating point numbers can exactly represent a large number of integers.

In fact, a floating point number can EXACTLY store all integers from $-2^{MantissaBits+1}$ to $+2^{MantissaBits+1}$.

For half floats that means you can store all integers between (and including) -2048 to +2048. ($\pm 2^{11}$)

For floats, it’s -16,777,216 to +16,777,216. ($\pm 2^{24}$)

For doubles it’s -9,007,199,254,740,992 to +9,007,199,254,740,992. ($\pm 2^{53}$)

Doubles can in fact exactly represent any 32 bit unsigned integer, since 2^32 = 4,294,967,296.

Here are some links you might find interesting!

Floating point visually explained:
http://fabiensanglard.net/floating_point_visually_explained/

What Every Computer Scientist Should Know About Floating-Point Arithmetic:
https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html

A matter of precision:
http://tomforsyth1000.github.io/blog.wiki.html#[[A%20matter%20of%20precision]]

Denormal numbers – aka very small numbers that make computations slow when you use them:
https://en.m.wikipedia.org/wiki/Denormal_number

Catastrophic Cancellation – a problem you can run into when doing floating point math:
https://en.wikipedia.org/wiki/Loss_of_significance

A handy web page that lets you play with the binary representation of a float and what number it comes out as:
https://www.h-schmidt.net/FloatConverter/IEEE754.html

Half precision floating point format:
https://en.wikipedia.org/wiki/Half-precision_floating-point_format

What is the first integer that a float is incapable of representing?
https://stackoverflow.com/questions/3793838/which-is-the-first-integer-that-an-ieee-754-float-is-incapable-of-representing-e

Ready to go deeper? Bruce Dawson has some amazing write ups on deeper floating point issues:
https://randomascii.wordpress.com/category/floating-point/

This talks about how to use floating point precision limits as an activation function in a neural network (?!)
https://blog.openai.com/nonlinear-computation-in-linear-networks/

# Animating Noise For Integration Over Time 2: Uniform Over Time

After I put out the last post, Mikkel Gjoel (@pixelmager), made an interesting observation that you can see summarized in his image below. (tweet / thread here)

BTW Mikkel has an amazing presentation about rendering the beautiful game “Inside” that you should check out. Lots of interesting techniques used, including some enlightening uses of noise.
Low Complexity, High Fidelity: The Rendering of INSIDE

The images left to right are:

• One frame of white noise
• N frames of white noise averaged.
• N frames averaged where the first frame is white noise, and a per frame random number is added to all pixels every frame.
• N frames averaged where the first frame is white noise, and 1/N is added to all pixels every frame.
• N frames averaged where the first frame is white noise, and the golden ratio is added to all pixels every frame.

In the above, the smoother and closer to middle grey that an image is, the better it is – that means it converged to the true result of the integral better.

Surprisingly it looks like adding 1/N outperforms the golden ratio, which means that regular spaced samples are outperforming a low discrepancy sequence!

To compare apples to apples, we’ll do the “golden ratio” tests we did last post, but instead do them with adding this uniform value instead.

To be explicit, there are 8 frames and they are:

• Frame 0: The noise
• Frame 1: The noise + 1/8
• Frame 2: The noise + 2/8
• …
• Frame 7: The noise + 7/8

Modulus is used to keep the values between 0 and 1.
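
As a tiny sketch of that in C++, assuming the noise value is already in [0,1):

#include <cmath>

// offset the noise by frameIndex / numFrames and wrap back into [0, 1)
float AnimatedNoiseValue(float noiseValue, int frameIndex, int numFrames)
{
    return std::fmod(noiseValue + float(frameIndex) / float(numFrames), 1.0f);
}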

Below is how white noise looks animated with golden ratio (top) vs uniform values (bottom). There are 8 frames and it’s played at 8fps so it loops every second.

Interleaved Gradient Noise. Top is golden ratio, bottom is uniform.

Blue Noise. Top is golden ratio, bottom is uniform.

The uniform ones look pretty similar. Maybe a little smoother, but it’s hard to tell by just looking at it. Interestingly, the frequency content of the blue noise seems more stable using these uniform values instead of golden ratio.

The histogram data of the noise was the same for all frames of animation, just like in last post, which is a good thing. The important bit is that adding a uniform value doesn’t modify the histogram shape, other than changing which counts go to which specific buckets. Ideally the histogram would start out perfectly even like the blue noise does, but since this post is about the “adding uniform values” process, and not about the starting noise, this shows that the process does the right thing with the histogram.

• White Noise – min 213, max 306, average 256, std dev 16.51
• Interleaved Gradient Noise – min 245, max 266, average 256, std dev 2.87
• Blue Noise – min, max, average are 256, std dev 0.

Let’s look at the integrated animations.

White noise. Top is golden ratio, bottom is uniform.

Interleaved gradient noise. Top is golden ratio, bottom is uniform.

Blue noise. Top is golden ratio, bottom is uniform.

The differences between these animations are subtle unless you know what you are looking for specifically so let’s check out the final frames and the error graphs.

Each noise comparison below has three images. The first image is the “naive” way to animate the noise. The second uses golden ratio instead. The third one uses 1/N. The first two images (and techniques) are from (and explained in) the last post, and the third image is the technique from this post.

White noise. Naive (top), golden ratio (mid), uniform (bottom).

Interleaved gradient noise. Naive (top), golden ratio (mid), uniform (bottom).

Blue noise. Naive (top), golden ratio (mid), uniform (bottom).

So, what’s interesting is that the uniform sampling over time has lower error and standard deviation (variance) than golden ratio, which has less than the naive method. However, it’s only at the end that the uniform sampling over time has the best results, and it’s actually quite terrible until then.

The reason for this is that uniform has good coverage over the sample space, but it takes until the last frame to get that good coverage because each frame takes a small step over the remaining sample space.

What might work out better would be if our first frame was the normal noise, but then the second frame was the normal noise plus a half, so we get the most information we possibly can from that sample by splitting the sample space in half. We would then want to cut the two halves of the sample space in half, and so the next two frames would be the noise plus 1/4 and the noise plus 3/4. We would then continue with 1/8, 5/8, 3/8 and 7/8 (note we didn’t do these 1/8 steps in order. We did it in the order that gives us the most information the most quickly!). At the end of all this, we would have our 8 uniformly spaced samples over time, but we would have taken the samples in an order that hopefully makes our intermediate frames look better.

Now, interestingly, that number sequence I just described has a name. It’s the base 2 Van Der Corput sequence, which is a type of low discrepancy sequence. It’s also the 1D version of the Halton sequence, and is related to other sequences as well. More info here: When Random Numbers Are Too Random: Low Discrepancy Sequences
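
For reference, here is a small C++ function that generates the base 2 Van Der Corput sequence, by mirroring the bits of the index across the radix point:

#include <cstdint>

float VanDerCorputBase2(uint32_t index)
{
    float result = 0.0f;
    float denominator = 2.0f;
    while (index > 0)
    {
        result += float(index & 1) / denominator; // take the lowest bit...
        index >>= 1;                              // ...then move to the next bit,
        denominator *= 2.0f;                      // which is worth half as much after the mirror
    }
    return result;
}

// indices 0..7 give: 0, 1/2, 1/4, 3/4, 1/8, 5/8, 3/8, 7/8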

Mikkel mentioned he thought this would be helpful, and I was thinking the same thing too. Let’s see how it does!

White noise. Uniform (top), Van Der Corput (bottom).

Interleaved gradient noise. Uniform (top), Van Der Corput (bottom).

Blue noise. Uniform (top), Van Der Corput (bottom).

The final frames look the same as before (and the same as each other), so I won’t show those again but here are the updated graphs.

Interestingly, using the Van Der Corput sequence has put intermediate frames more in line with golden ratio, while of course still being superior at the final frame.

I’ve been trying to understand why uniform sampling over time outperforms the golden ratio, which acts more like blue noise over time. I still don’t grasp why it works as well as it does, but the proof is in the pudding.

Theoretically, this uniform sampling over time should lead to the possibility of aliasing on the time axis, since blue noise / white noise (and other randomness) get rid of the aliasing in exchange for noise.

Noise over the time dimension would mean missing details that were smaller than the sample spacing size. In our case, we are using the time sampled values (noise + uniform value) to threshold a source image to make a sample. It may be that since we are thresholding, aliasing isn’t possible since our sample represents everything below or equal to the value?

I’m not really sure, but will be thinking about it for a while. If you have any insights please let me know!

It would be interesting to try an actual 1d blue noise sequence and see how it compares. If it does better, it sounds like it would be worth while to try jittering the uniform sampled values on the time axis to try and approximate blue noise a bit. Mikkel tried the jittering and said it gave significantly worse results, so that seems like a no go.

Lastly, some other logical experiments from here seem to be…

• See how other forms of noise and ordered dithers do, including perhaps a Bayer Matrix. IG noise seems to naturally do better on the time axis for some reason I don’t fully understand yet. There may be some interesting properties of other noise waiting to be found.
• Do we get any benefits in this context by arranging the interleaved gradient noise in a spiral like Jorge mentions in his presentation? (Next Generation Post Processing In Call Of Duty: Advanced Warfare)
• It would be interesting to see how this works in a more open ended case – such as if you had temporal AA which was averaging a variable number of pixels each frame. Would doing a van Der Corput sequence give good results there? Would you keep track of sample counts per pixel and keep marching the Van Der Corput forward for each pixel individually? Or would you just pick something like an 8 Van Der Corput sequence, adding the current sequence to all pixels and looping that sequence every 8 frames? It really would be interesting to see what is best in that sort of a setup.

I’m sure there are all sorts of other things to try too. This is a deep, interesting and important topic for graphics and beyond (:

## Code

Source code below, but it’s also available on Github, along with the source images used: Atrix256/RandomCode/AnimatedNoise

#define _CRT_SECURE_NO_WARNINGS

#include <windows.h>  // for bitmap headers.  Sorry non windows people!
#include <stdint.h>
#include <vector>
#include <random>
#include <atomic>
#include <thread>
#include <complex>
#include <array>

typedef uint8_t uint8;

const float c_pi = 3.14159265359f;

// settings
const bool c_doDFT = true;

// globals
FILE* g_logFile = nullptr;

//======================================================================================
inline float Lerp (float A, float B, float t)
{
return A * (1.0f - t) + B * t;
}

//======================================================================================
struct SImageData
{
SImageData ()
: m_width(0)
, m_height(0)
{ }

size_t m_width;
size_t m_height;
size_t m_pitch;
std::vector<uint8> m_pixels;
};

//======================================================================================
struct SColor
{
SColor (uint8 _R = 0, uint8 _G = 0, uint8 _B = 0)
: R(_R), G(_G), B(_B)
{ }

inline void Set (uint8 _R, uint8 _G, uint8 _B)
{
R = _R;
G = _G;
B = _B;
}

uint8 B, G, R;
};

//======================================================================================
struct SImageDataComplex
{
SImageDataComplex ()
: m_width(0)
, m_height(0)
{ }

size_t m_width;
size_t m_height;
std::vector<std::complex<float>> m_pixels;
};

//======================================================================================
std::complex<float> DFTPixel (const SImageData &srcImage, size_t K, size_t L)
{
std::complex<float> ret(0.0f, 0.0f);

for (size_t x = 0; x < srcImage.m_width; ++x)
{
for (size_t y = 0; y < srcImage.m_height; ++y)
{
// Get the pixel value (assuming greyscale) and convert it to [0,1] space
const uint8 *src = &srcImage.m_pixels[(y * srcImage.m_pitch) + x * 3];
float grey = float(src[0]) / 255.0f;

// Add to the sum of the return value
float v = float(K * x) / float(srcImage.m_width);
v += float(L * y) / float(srcImage.m_height);
ret += std::complex<float>(grey, 0.0f) * std::polar<float>(1.0f, -2.0f * c_pi * v);
}
}

return ret;
}

//======================================================================================
void ImageDFT (const SImageData &srcImage, SImageDataComplex &destImage)
{
// NOTE: this function assumes srcImage is greyscale, so works on only the red component of srcImage.
// ImageToGrey() will convert an image to greyscale.

// size the output dft data
destImage.m_width = srcImage.m_width;
destImage.m_height = srcImage.m_height;
destImage.m_pixels.resize(destImage.m_width*destImage.m_height);

// calculate 2d dft (brute force, not using fast fourier transform) multithreadedly
std::atomic<size_t> nextRow(0);
auto DFTRows = [&] ()
{
size_t row = nextRow.fetch_add(1);
bool reportProgress = (row == 0);
int lastPercent = -1;

while (row < srcImage.m_height)
{
// calculate the DFT for every pixel / frequency in this row
for (size_t x = 0; x < srcImage.m_width; ++x)
{
destImage.m_pixels[row * destImage.m_width + x] = DFTPixel(srcImage, x, row);
}

// report progress if we should
if (reportProgress)
{
int percent = int(100.0f * float(row) / float(srcImage.m_height));
if (lastPercent != percent)
{
lastPercent = percent;
printf("            \rDFT: %i%%", lastPercent);
}
}

// go to the next row
row = nextRow.fetch_add(1);
}
};

// start a thread per hardware thread and wait for them all to finish
size_t numThreads = std::thread::hardware_concurrency();
if (numThreads == 0)
numThreads = 1;
std::vector<std::thread> threads;
threads.resize(numThreads);
for (std::thread& t : threads)
t = std::thread(DFTRows);
for (std::thread& t : threads)
t.join();

printf("\n");
}

//======================================================================================
void GetMagnitudeData (const SImageDataComplex& srcImage, SImageData& destImage)
{
// size the output image
destImage.m_width = srcImage.m_width;
destImage.m_height = srcImage.m_height;
destImage.m_pitch = 4 * ((srcImage.m_width * 24 + 31) / 32);
destImage.m_pixels.resize(destImage.m_pitch*destImage.m_height);

// get floating point magnitude data
std::vector<float> magArray;
magArray.resize(srcImage.m_width*srcImage.m_height);
float maxmag = 0.0f;
for (size_t x = 0; x < srcImage.m_width; ++x)
{
for (size_t y = 0; y < srcImage.m_height; ++y)
{
// Offset the information by half width & height in the positive direction.
// This makes frequency 0 (DC) be at the image origin, like most diagrams show it.
int k = (x + (int)srcImage.m_width / 2) % (int)srcImage.m_width;
int l = (y + (int)srcImage.m_height / 2) % (int)srcImage.m_height;
const std::complex<float> &src = srcImage.m_pixels[l*srcImage.m_width + k];

float mag = std::abs(src);
if (mag > maxmag)
maxmag = mag;

magArray[y*srcImage.m_width + x] = mag;
}
}
if (maxmag == 0.0f)
maxmag = 1.0f;

const float c = 255.0f / log(1.0f+maxmag);

// normalize the magnitude data and send it back in [0, 255]
for (size_t x = 0; x < srcImage.m_width; ++x)
{
for (size_t y = 0; y < srcImage.m_height; ++y)
{
float src = c * log(1.0f + magArray[y*srcImage.m_width + x]);

uint8 magu8 = uint8(src);

uint8* dest = &destImage.m_pixels[y*destImage.m_pitch + x * 3];
dest[0] = magu8;
dest[1] = magu8;
dest[2] = magu8;
}
}
}

//======================================================================================
bool ImageSave (const SImageData &image, const char *fileName)
{
// open the file if we can
FILE *file;
file = fopen(fileName, "wb");
if (!file) {
printf("Could not save %s\n", fileName);
return false;
}

// make the BMP header info. 24 bits per pixel, bottom up, BGR, rows padded to a multiple of 4 bytes
BITMAPFILEHEADER header = { };
BITMAPINFOHEADER infoHeader = { };
header.bfType = 0x4D42;
header.bfOffBits = sizeof(BITMAPFILEHEADER) + sizeof(BITMAPINFOHEADER);
header.bfSize = header.bfOffBits + (DWORD)image.m_pixels.size();
infoHeader.biSize = sizeof(BITMAPINFOHEADER);
infoHeader.biWidth = (LONG)image.m_width;
infoHeader.biHeight = (LONG)image.m_height;
infoHeader.biPlanes = 1;
infoHeader.biBitCount = 24;
infoHeader.biCompression = BI_RGB;
infoHeader.biSizeImage = (DWORD)image.m_pixels.size();

// write the headers and pixel data, then close the file
fwrite(&header, sizeof(header), 1, file);
fwrite(&infoHeader, sizeof(infoHeader), 1, file);
fwrite(&image.m_pixels[0], image.m_pixels.size(), 1, file);
fclose(file);

return true;
}

//======================================================================================
bool ImageLoad (const char *fileName, SImageData& imageData)
{
// open the file if we can
FILE *file;
file = fopen(fileName, "rb");
if (!file)
return false;

// read the headers if we can, and make sure this is a 24 bit uncompressed bmp
BITMAPFILEHEADER header;
BITMAPINFOHEADER infoHeader;
if (fread(&header, sizeof(header), 1, file) != 1 ||
fread(&infoHeader, sizeof(infoHeader), 1, file) != 1 ||
header.bfType != 0x4D42 || infoHeader.biBitCount != 24)
{
fclose(file);
return false;
}

// size our pixel storage. Note that pixels are in BGR order, and each row is padded to the next multiple of 4 bytes
imageData.m_width = infoHeader.biWidth;
imageData.m_height = infoHeader.biHeight;
imageData.m_pitch = 4 * ((imageData.m_width * 24 + 31) / 32);
imageData.m_pixels.resize(imageData.m_pitch * imageData.m_height);

// read in our pixel data if we can
fseek(file, header.bfOffBits, SEEK_SET);
if (fread(&imageData.m_pixels[0], imageData.m_pixels.size(), 1, file) != 1)
{
fclose(file);
return false;
}

fclose(file);
return true;
}

//======================================================================================
void ImageInit (SImageData& image, size_t width, size_t height)
{
image.m_width = width;
image.m_height = height;
image.m_pitch = 4 * ((width * 24 + 31) / 32);
image.m_pixels.resize(image.m_pitch * image.m_height);
std::fill(image.m_pixels.begin(), image.m_pixels.end(), 0);
}

//======================================================================================
template <typename LAMBDA>
void ImageForEachPixel (SImageData& image, const LAMBDA& lambda)
{
size_t pixelIndex = 0;
for (size_t y = 0; y < image.m_height; ++y)
{
SColor* pixel = (SColor*)&image.m_pixels[y * image.m_pitch];
for (size_t x = 0; x < image.m_width; ++x)
{
lambda(*pixel, pixelIndex);
++pixel;
++pixelIndex;
}
}
}

//======================================================================================
template <typename LAMBDA>
void ImageForEachPixel (const SImageData& image, const LAMBDA& lambda)
{
size_t pixelIndex = 0;
for (size_t y = 0; y < image.m_height; ++y)
{
SColor* pixel = (SColor*)&image.m_pixels[y * image.m_pitch];
for (size_t x = 0; x < image.m_width; ++x)
{
lambda(*pixel, pixelIndex);
++pixel;
++pixelIndex;
}
}
}

//======================================================================================
void ImageConvertToLuma (SImageData& image)
{
ImageForEachPixel(
image,
[] (SColor& pixel, size_t pixelIndex)
{
float luma = float(pixel.R) * 0.3f + float(pixel.G) * 0.59f + float(pixel.B) * 0.11f;
uint8 lumau8 = uint8(luma + 0.5f);
pixel.R = lumau8;
pixel.G = lumau8;
pixel.B = lumau8;
}
);
}

//======================================================================================
void ImageCombine2 (const SImageData& imageA, const SImageData& imageB, SImageData& result)
{
// put the images side by side. A on left, B on right
ImageInit(result, imageA.m_width + imageB.m_width, max(imageA.m_height, imageB.m_height));
std::fill(result.m_pixels.begin(), result.m_pixels.end(), 0);

// image A on left
for (size_t y = 0; y < imageA.m_height; ++y)
{
SColor* destPixel = (SColor*)&result.m_pixels[y * result.m_pitch];
SColor* srcPixel = (SColor*)&imageA.m_pixels[y * imageA.m_pitch];
for (size_t x = 0; x < imageA.m_width; ++x)
{
destPixel[0] = srcPixel[0];
++destPixel;
++srcPixel;
}
}

// image B on right
for (size_t y = 0; y < imageB.m_height; ++y)
{
SColor* destPixel = (SColor*)&result.m_pixels[y * result.m_pitch + imageA.m_width * 3];
SColor* srcPixel = (SColor*)&imageB.m_pixels[y * imageB.m_pitch];
for (size_t x = 0; x < imageB.m_width; ++x)
{
destPixel[0] = srcPixel[0];
++destPixel;
++srcPixel;
}
}
}

//======================================================================================
void ImageCombine3 (const SImageData& imageA, const SImageData& imageB, const SImageData& imageC, SImageData& result)
{
// put the images side by side. A on left, B in middle, C on right
ImageInit(result, imageA.m_width + imageB.m_width + imageC.m_width, max(max(imageA.m_height, imageB.m_height), imageC.m_height));
std::fill(result.m_pixels.begin(), result.m_pixels.end(), 0);

// image A on left
for (size_t y = 0; y < imageA.m_height; ++y)
{
SColor* destPixel = (SColor*)&result.m_pixels[y * result.m_pitch];
SColor* srcPixel = (SColor*)&imageA.m_pixels[y * imageA.m_pitch];
for (size_t x = 0; x < imageA.m_width; ++x)
{
destPixel[0] = srcPixel[0];
++destPixel;
++srcPixel;
}
}

// image B in middle
for (size_t y = 0; y < imageB.m_height; ++y)
{
SColor* destPixel = (SColor*)&result.m_pixels[y * result.m_pitch + imageA.m_width * 3];
SColor* srcPixel = (SColor*)&imageB.m_pixels[y * imageB.m_pitch];
for (size_t x = 0; x < imageB.m_width; ++x)
{
destPixel[0] = srcPixel[0];
++destPixel;
++srcPixel;
}
}

// image C on right
for (size_t y = 0; y < imageC.m_height; ++y)
{
SColor* destPixel = (SColor*)&result.m_pixels[y * result.m_pitch + imageA.m_width * 3 + imageB.m_width * 3];
SColor* srcPixel = (SColor*)&imageC.m_pixels[y * imageC.m_pitch];
for (size_t x = 0; x < imageC.m_width; ++x)
{
destPixel[0] = srcPixel[0];
++destPixel;
++srcPixel;
}
}
}

//======================================================================================
float GoldenRatioMultiple (size_t multiple)
{
return float(multiple) * (1.0f + std::sqrtf(5.0f)) / 2.0f;
}

//======================================================================================
void IntegrationTest (const SImageData& dither, const SImageData& groundTruth, size_t frameIndex, const char* label)
{
// calculate min, max, total and average error
size_t minError = 0;
size_t maxError = 0;
size_t totalError = 0;
size_t pixelCount = 0;
for (size_t y = 0; y < dither.m_height; ++y)
{
SColor* ditherPixel = (SColor*)&dither.m_pixels[y * dither.m_pitch];
SColor* truthPixel = (SColor*)&groundTruth.m_pixels[y * groundTruth.m_pitch];
for (size_t x = 0; x < dither.m_width; ++x)
{
size_t error = 0;
if (ditherPixel->R > truthPixel->R)
error = ditherPixel->R - truthPixel->R;
else
error = truthPixel->R - ditherPixel->R;

totalError += error;

if ((x == 0 && y == 0) || error < minError)
minError = error;

if ((x == 0 && y == 0) || error > maxError)
maxError = error;

++ditherPixel;
++truthPixel;
++pixelCount;
}
}
float averageError = float(totalError) / float(pixelCount);

// calculate standard deviation
float sumSquaredDiff = 0.0f;
for (size_t y = 0; y < dither.m_height; ++y)
{
SColor* ditherPixel = (SColor*)&dither.m_pixels[y * dither.m_pitch];
SColor* truthPixel = (SColor*)&groundTruth.m_pixels[y * groundTruth.m_pitch];
for (size_t x = 0; x < dither.m_width; ++x)
{
size_t error = 0;
if (ditherPixel->R > truthPixel->R)
error = ditherPixel->R - truthPixel->R;
else
error = truthPixel->R - ditherPixel->R;

float diff = float(error) - averageError;

sumSquaredDiff += diff*diff;
}
}
float stdDev = std::sqrtf(sumSquaredDiff / float(pixelCount - 1));

// report results
fprintf(g_logFile, "%s %zu error\n", label, frameIndex);
fprintf(g_logFile, "  min error: %zu\n", minError);
fprintf(g_logFile, "  max error: %zu\n", maxError);
fprintf(g_logFile, "  avg error: %0.2f\n", averageError);
fprintf(g_logFile, "  stddev: %0.2f\n", stdDev);
fprintf(g_logFile, "\n");
}

//======================================================================================
void HistogramTest (const SImageData& noise, size_t frameIndex, const char* label)
{
std::array<size_t, 256> counts;
std::fill(counts.begin(), counts.end(), 0);

ImageForEachPixel(
noise,
[&] (const SColor& pixel, size_t pixelIndex)
{
counts[pixel.R]++;
}
);

// calculate min, max, total and average
size_t minCount = 0;
size_t maxCount = 0;
size_t totalCount = 0;
for (size_t i = 0; i < 256; ++i)
{
if (i == 0 || counts[i] < minCount)
minCount = counts[i];

if (i == 0 || counts[i] > maxCount)
maxCount = counts[i];

totalCount += counts[i];
}
float averageCount = float(totalCount) / float(256.0f);

// calculate standard deviation
float sumSquaredDiff = 0.0f;
for (size_t i = 0; i < 256; ++i)
{
float diff = float(counts[i]) - averageCount;
sumSquaredDiff += diff*diff;
}
float stdDev = std::sqrtf(sumSquaredDiff / 255.0f);

// report results
fprintf(g_logFile, "%s %zu histogram\n", label, frameIndex);
fprintf(g_logFile, "  min count: %zu\n", minCount);
fprintf(g_logFile, "  max count: %zu\n", maxCount);
fprintf(g_logFile, "  avg count: %0.2f\n", averageCount);
fprintf(g_logFile, "  stddev: %0.2f\n", stdDev);
fprintf(g_logFile, "  counts: ");
for (size_t i = 0; i < 256; ++i)
{
if (i > 0)
fprintf(g_logFile, ", ");
fprintf(g_logFile, "%zu", counts[i]);
}

fprintf(g_logFile, "\n\n");
}

//======================================================================================
void GenerateWhiteNoise (SImageData& image, size_t width, size_t height)
{
ImageInit(image, width, height);

std::random_device rd;
std::mt19937 rng(rd());
std::uniform_int_distribution<unsigned int> dist(0, 255);

ImageForEachPixel(
image,
[&] (SColor& pixel, size_t pixelIndex)
{
uint8 value = dist(rng);
pixel.R = value;
pixel.G = value;
pixel.B = value;
}
);
}

//======================================================================================
void GenerateInterleavedGradientNoise (SImageData& image, size_t width, size_t height, float offsetX, float offsetY)
{
ImageInit(image, width, height);

std::random_device rd;
std::mt19937 rng(rd());
std::uniform_int_distribution<unsigned int> dist(0, 255);

for (size_t y = 0; y < height; ++y)
{
SColor* pixel = (SColor*)&image.m_pixels[y * image.m_pitch];
for (size_t x = 0; x < width; ++x)
{
float valueFloat = std::fmodf(52.9829189f * std::fmod(0.06711056f*float(x + offsetX) + 0.00583715f*float(y + offsetY), 1.0f), 1.0f);
size_t valueBig = size_t(valueFloat * 256.0f);
uint8 value = uint8(valueBig % 256);
pixel->R = value;
pixel->G = value;
pixel->B = value;
++pixel;
}
}
}

//======================================================================================
template <size_t NUM_SAMPLES>
void GenerateVanDerCoruptSequence (std::array<float, NUM_SAMPLES>& samples, size_t base)
{
for (size_t i = 0; i < NUM_SAMPLES; ++i)
{
samples[i] = 0.0f;
float denominator = float(base);
size_t n = i;
while (n > 0)
{
size_t multiplier = n % base;
samples[i] += float(multiplier) / denominator;
n = n / base;
denominator *= base;
}
}
}

//======================================================================================
void DitherWithTexture (const SImageData& ditherImage, const SImageData& noiseImage, SImageData& result)
{
// init the result image
ImageInit(result, ditherImage.m_width, ditherImage.m_height);

// make the result image
for (size_t y = 0; y < ditherImage.m_height; ++y)
{
SColor* srcDitherPixel = (SColor*)&ditherImage.m_pixels[y * ditherImage.m_pitch];
SColor* destDitherPixel = (SColor*)&result.m_pixels[y * result.m_pitch];

for (size_t x = 0; x < ditherImage.m_width; ++x)
{
// tile the noise in case it isn't the same size as the image we are dithering
size_t noiseX = x % noiseImage.m_width;
size_t noiseY = y % noiseImage.m_height;
SColor* noisePixel = (SColor*)&noiseImage.m_pixels[noiseY * noiseImage.m_pitch + noiseX * 3];

uint8 value = 0;
if (noisePixel->R < srcDitherPixel->R)
value = 255;

destDitherPixel->R = value;
destDitherPixel->G = value;
destDitherPixel->B = value;

++srcDitherPixel;
++destDitherPixel;
}
}
}

//======================================================================================
void DitherWhiteNoise (const SImageData& ditherImage)
{
printf("\n%s\n", __FUNCTION__);

// make noise
SImageData noise;
GenerateWhiteNoise(noise, ditherImage.m_width, ditherImage.m_height);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// save the results
SImageData combined;
ImageCombine3(ditherImage, noise, dither, combined);
ImageSave(combined, "out/still_whitenoise.bmp");
}

//======================================================================================
void DitherInterleavedGradientNoise (const SImageData& ditherImage)
{
printf("\n%s\n", __FUNCTION__);

// make noise
SImageData noise;
GenerateInterleavedGradientNoise(noise, ditherImage.m_width, ditherImage.m_height, 0.0f, 0.0f);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// save the results
SImageData combined;
ImageCombine3(ditherImage, noise, dither, combined);
ImageSave(combined, "out/still_ignoise.bmp");
}

//======================================================================================
void DitherBlueNoise (const SImageData& ditherImage, const SImageData& blueNoise)
{
printf("\n%s\n", __FUNCTION__);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, blueNoise, dither);

// save the results
SImageData combined;
ImageCombine3(ditherImage, blueNoise, dither, combined);
ImageSave(combined, "out/still_bluenoise.bmp");
}

//======================================================================================
void DitherWhiteNoiseAnimated (const SImageData& ditherImage)
{
printf("\n%s\n", __FUNCTION__);

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/anim_whitenoise%zu.bmp", i);

// make noise
SImageData noise;
GenerateWhiteNoise(noise, ditherImage.m_width, ditherImage.m_height);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// save the results
SImageData combined;
ImageCombine2(noise, dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherInterleavedGradientNoiseAnimated (const SImageData& ditherImage)
{
printf("\n%s\n", __FUNCTION__);

std::random_device rd;
std::mt19937 rng(rd());
std::uniform_real_distribution<float> dist(0.0f, 1000.0f);

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/anim_ignoise%zu.bmp", i);

// make noise
SImageData noise;
GenerateInterleavedGradientNoise(noise, ditherImage.m_width, ditherImage.m_height, dist(rng), dist(rng));

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// save the results
SImageData combined;
ImageCombine2(noise, dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherBlueNoiseAnimated (const SImageData& ditherImage, const SImageData blueNoise[8])
{
printf("\n%s\n", __FUNCTION__);

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/anim_bluenoise%zu.bmp", i);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, blueNoise[i], dither);

// save the results
SImageData combined;
ImageCombine2(blueNoise[i], dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherWhiteNoiseAnimatedIntegrated (const SImageData& ditherImage)
{
printf("\n%s\n", __FUNCTION__);

std::vector<float> integration;
integration.resize(ditherImage.m_width * ditherImage.m_height);
std::fill(integration.begin(), integration.end(), 0.0f);

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/animint_whitenoise%zu.bmp", i);

// make noise
SImageData noise;
GenerateWhiteNoise(noise, ditherImage.m_width, ditherImage.m_height);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// integrate and put the current integration results into the dither image
ImageForEachPixel(
dither,
[&] (SColor& pixel, size_t pixelIndex)
{
float pixelValueFloat = float(pixel.R) / 255.0f;
integration[pixelIndex] = Lerp(integration[pixelIndex], pixelValueFloat, 1.0f / float(i+1));

uint8 integratedPixelValue = uint8(integration[pixelIndex] * 255.0f);
pixel.R = integratedPixelValue;
pixel.G = integratedPixelValue;
pixel.B = integratedPixelValue;
}
);

// do an integration test
IntegrationTest(dither, ditherImage, i, __FUNCTION__);

// save the results
SImageData combined;
ImageCombine2(noise, dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherInterleavedGradientNoiseAnimatedIntegrated (const SImageData& ditherImage)
{
printf("\n%s\n", __FUNCTION__);

std::vector<float> integration;
integration.resize(ditherImage.m_width * ditherImage.m_height);
std::fill(integration.begin(), integration.end(), 0.0f);

std::random_device rd;
std::mt19937 rng(rd());
std::uniform_real_distribution<float> dist(0.0f, 1000.0f);

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/animint_ignoise%zu.bmp", i);

// make noise
SImageData noise;
GenerateInterleavedGradientNoise(noise, ditherImage.m_width, ditherImage.m_height, dist(rng), dist(rng));

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// integrate and put the current integration results into the dither image
ImageForEachPixel(
dither,
[&](SColor& pixel, size_t pixelIndex)
{
float pixelValueFloat = float(pixel.R) / 255.0f;
integration[pixelIndex] = Lerp(integration[pixelIndex], pixelValueFloat, 1.0f / float(i + 1));

uint8 integratedPixelValue = uint8(integration[pixelIndex] * 255.0f);
pixel.R = integratedPixelValue;
pixel.G = integratedPixelValue;
pixel.B = integratedPixelValue;
}
);

// do an integration test
IntegrationTest(dither, ditherImage, i, __FUNCTION__);

// save the results
SImageData combined;
ImageCombine2(noise, dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherBlueNoiseAnimatedIntegrated (const SImageData& ditherImage, const SImageData blueNoise[8])
{
printf("\n%s\n", __FUNCTION__);

std::vector<float> integration;
integration.resize(ditherImage.m_width * ditherImage.m_height);
std::fill(integration.begin(), integration.end(), 0.0f);

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/animint_bluenoise%zu.bmp", i);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, blueNoise[i], dither);

// integrate and put the current integration results into the dither image
ImageForEachPixel(
dither,
[&] (SColor& pixel, size_t pixelIndex)
{
float pixelValueFloat = float(pixel.R) / 255.0f;
integration[pixelIndex] = Lerp(integration[pixelIndex], pixelValueFloat, 1.0f / float(i+1));

uint8 integratedPixelValue = uint8(integration[pixelIndex] * 255.0f);
pixel.R = integratedPixelValue;
pixel.G = integratedPixelValue;
pixel.B = integratedPixelValue;
}
);

// do an integration test
IntegrationTest(dither, ditherImage, i, __FUNCTION__);

// save the results
SImageData combined;
ImageCombine2(blueNoise[i], dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherWhiteNoiseAnimatedGoldenRatio (const SImageData& ditherImage)
{
printf("\n%s\n", __FUNCTION__);

// make noise
SImageData noiseSrc;
GenerateWhiteNoise(noiseSrc, ditherImage.m_width, ditherImage.m_height);

SImageData noise;
ImageInit(noise, noiseSrc.m_width, noiseSrc.m_height);

SImageDataComplex noiseDFT;
SImageData noiseDFTMag;

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/animgr_whitenoise%zu.bmp", i);

// add golden ratio to the noise after each frame
noise.m_pixels = noiseSrc.m_pixels;
float add = GoldenRatioMultiple(i);
ImageForEachPixel(
noise,
[&] (SColor& pixel, size_t pixelIndex)
{
float valueFloat = (float(pixel.R) / 255.0f) + add;
size_t valueBig = size_t(valueFloat * 255.0f);
uint8 value = uint8(valueBig % 256);
pixel.R = value;
pixel.G = value;
pixel.B = value;
}
);

// DFT the noise
if (c_doDFT)
{
ImageDFT(noise, noiseDFT);
GetMagnitudeData(noiseDFT, noiseDFTMag);
}
else
{
ImageInit(noiseDFTMag, noise.m_width, noise.m_height);
std::fill(noiseDFTMag.m_pixels.begin(), noiseDFTMag.m_pixels.end(), 0);
}

// Histogram test the noise
HistogramTest(noise, i, __FUNCTION__);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// save the results
SImageData combined;
ImageCombine3(noiseDFTMag, noise, dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherInterleavedGradientNoiseAnimatedGoldenRatio (const SImageData& ditherImage)
{
printf("\n%s\n", __FUNCTION__);

// make noise
SImageData noiseSrc;
GenerateInterleavedGradientNoise(noiseSrc, ditherImage.m_width, ditherImage.m_height, 0.0f, 0.0f);

SImageData noise;
ImageInit(noise, noiseSrc.m_width, noiseSrc.m_height);

SImageDataComplex noiseDFT;
SImageData noiseDFTMag;

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/animgr_ignoise%zu.bmp", i);

// add golden ratio to the noise after each frame
noise.m_pixels = noiseSrc.m_pixels;
float add = GoldenRatioMultiple(i);
ImageForEachPixel(
noise,
[&] (SColor& pixel, size_t pixelIndex)
{
float valueFloat = (float(pixel.R) / 255.0f) + add;
size_t valueBig = size_t(valueFloat * 255.0f);
uint8 value = uint8(valueBig % 256);
pixel.R = value;
pixel.G = value;
pixel.B = value;
}
);

// DFT the noise
if (c_doDFT)
{
ImageDFT(noise, noiseDFT);
GetMagnitudeData(noiseDFT, noiseDFTMag);
}
else
{
ImageInit(noiseDFTMag, noise.m_width, noise.m_height);
std::fill(noiseDFTMag.m_pixels.begin(), noiseDFTMag.m_pixels.end(), 0);
}

// Histogram test the noise
HistogramTest(noise, i, __FUNCTION__);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// save the results
SImageData combined;
ImageCombine3(noiseDFTMag, noise, dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherBlueNoiseAnimatedGoldenRatio (const SImageData& ditherImage, const SImageData& noiseSrc)
{
printf("\n%s\n", __FUNCTION__);

SImageData noise;
ImageInit(noise, noiseSrc.m_width, noiseSrc.m_height);

SImageDataComplex noiseDFT;
SImageData noiseDFTMag;

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/animgr_bluenoise%zu.bmp", i);

// add golden ratio to the noise after each frame
noise.m_pixels = noiseSrc.m_pixels;
float add = GoldenRatioMultiple(i);
ImageForEachPixel(
noise,
[&] (SColor& pixel, size_t pixelIndex)
{
float valueFloat = (float(pixel.R) / 255.0f) + add;
size_t valueBig = size_t(valueFloat * 255.0f);
uint8 value = uint8(valueBig % 256);
pixel.R = value;
pixel.G = value;
pixel.B = value;
}
);

// DFT the noise
if (c_doDFT)
{
ImageDFT(noise, noiseDFT);
GetMagnitudeData(noiseDFT, noiseDFTMag);
}
else
{
ImageInit(noiseDFTMag, noise.m_width, noise.m_height);
std::fill(noiseDFTMag.m_pixels.begin(), noiseDFTMag.m_pixels.end(), 0);
}

// Histogram test the noise
HistogramTest(noise, i, __FUNCTION__);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// save the results
SImageData combined;
ImageCombine3(noiseDFTMag, noise, dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherWhiteNoiseAnimatedUniform (const SImageData& ditherImage)
{
printf("\n%s\n", __FUNCTION__);

// make noise
SImageData noiseSrc;
GenerateWhiteNoise(noiseSrc, ditherImage.m_width, ditherImage.m_height);

SImageData noise;
ImageInit(noise, noiseSrc.m_width, noiseSrc.m_height);

SImageDataComplex noiseDFT;
SImageData noiseDFTMag;

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/animuni_whitenoise%zu.bmp", i);

// add uniform value to the noise after each frame
noise.m_pixels = noiseSrc.m_pixels;
float add = float(i) / 8.0f;
ImageForEachPixel(
noise,
[&] (SColor& pixel, size_t pixelIndex)
{
float valueFloat = (float(pixel.R) / 255.0f) + add;
size_t valueBig = size_t(valueFloat * 255.0f);
uint8 value = uint8(valueBig % 256);
pixel.R = value;
pixel.G = value;
pixel.B = value;
}
);

// DFT the noise
if (c_doDFT)
{
ImageDFT(noise, noiseDFT);
GetMagnitudeData(noiseDFT, noiseDFTMag);
}
else
{
ImageInit(noiseDFTMag, noise.m_width, noise.m_height);
std::fill(noiseDFTMag.m_pixels.begin(), noiseDFTMag.m_pixels.end(), 0);
}

// Histogram test the noise
HistogramTest(noise, i, __FUNCTION__);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// save the results
SImageData combined;
ImageCombine3(noiseDFTMag, noise, dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherInterleavedGradientNoiseAnimatedUniform (const SImageData& ditherImage)
{
printf("\n%s\n", __FUNCTION__);

// make noise
SImageData noiseSrc;
GenerateInterleavedGradientNoise(noiseSrc, ditherImage.m_width, ditherImage.m_height, 0.0f, 0.0f);

SImageData noise;
ImageInit(noise, noiseSrc.m_width, noiseSrc.m_height);

SImageDataComplex noiseDFT;
SImageData noiseDFTMag;

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/animuni_ignoise%zu.bmp", i);

// add uniform value to the noise after each frame
noise.m_pixels = noiseSrc.m_pixels;
float add = float(i) / 8.0f;
ImageForEachPixel(
noise,
[&] (SColor& pixel, size_t pixelIndex)
{
float valueFloat = (float(pixel.R) / 255.0f) + add;
size_t valueBig = size_t(valueFloat * 255.0f);
uint8 value = uint8(valueBig % 256);
pixel.R = value;
pixel.G = value;
pixel.B = value;
}
);

// DFT the noise
if (c_doDFT)
{
ImageDFT(noise, noiseDFT);
GetMagnitudeData(noiseDFT, noiseDFTMag);
}
else
{
ImageInit(noiseDFTMag, noise.m_width, noise.m_height);
std::fill(noiseDFTMag.m_pixels.begin(), noiseDFTMag.m_pixels.end(), 0);
}

// Histogram test the noise
HistogramTest(noise, i, __FUNCTION__);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// save the results
SImageData combined;
ImageCombine3(noiseDFTMag, noise, dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherBlueNoiseAnimatedUniform (const SImageData& ditherImage, const SImageData& noiseSrc)
{
printf("\n%s\n", __FUNCTION__);

SImageData noise;
ImageInit(noise, noiseSrc.m_width, noiseSrc.m_height);

SImageDataComplex noiseDFT;
SImageData noiseDFTMag;

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/animuni_bluenoise%zu.bmp", i);

// add uniform value to the noise after each frame
noise.m_pixels = noiseSrc.m_pixels;
float add = float(i) / 8.0f;
ImageForEachPixel(
noise,
[&] (SColor& pixel, size_t pixelIndex)
{
float valueFloat = (float(pixel.R) / 255.0f) + add;
size_t valueBig = size_t(valueFloat * 255.0f);
uint8 value = uint8(valueBig % 256);
pixel.R = value;
pixel.G = value;
pixel.B = value;
}
);

// DFT the noise
if (c_doDFT)
{
ImageDFT(noise, noiseDFT);
GetMagnitudeData(noiseDFT, noiseDFTMag);
}
else
{
ImageInit(noiseDFTMag, noise.m_width, noise.m_height);
std::fill(noiseDFTMag.m_pixels.begin(), noiseDFTMag.m_pixels.end(), 0);
}

// Histogram test the noise
HistogramTest(noise, i, __FUNCTION__);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// save the results
SImageData combined;
ImageCombine3(noiseDFTMag, noise, dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherWhiteNoiseAnimatedGoldenRatioIntegrated (const SImageData& ditherImage)
{
printf("\n%s\n", __FUNCTION__);

std::vector<float> integration;
integration.resize(ditherImage.m_width * ditherImage.m_height);
std::fill(integration.begin(), integration.end(), 0.0f);

// make noise
SImageData noiseSrc;
GenerateWhiteNoise(noiseSrc, ditherImage.m_width, ditherImage.m_height);

SImageData noise;
ImageInit(noise, noiseSrc.m_width, noiseSrc.m_height);

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/animgrint_whitenoise%zu.bmp", i);

// add golden ratio to the noise after each frame
noise.m_pixels = noiseSrc.m_pixels;
float add = GoldenRatioMultiple(i);
ImageForEachPixel(
noise,
[&] (SColor& pixel, size_t pixelIndex)
{
float valueFloat = (float(pixel.R) / 255.0f) + add;
size_t valueBig = size_t(valueFloat * 255.0f);
uint8 value = uint8(valueBig % 256);
pixel.R = value;
pixel.G = value;
pixel.B = value;
}
);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// integrate and put the current integration results into the dither image
ImageForEachPixel(
dither,
[&] (SColor& pixel, size_t pixelIndex)
{
float pixelValueFloat = float(pixel.R) / 255.0f;
integration[pixelIndex] = Lerp(integration[pixelIndex], pixelValueFloat, 1.0f / float(i+1));

uint8 integratedPixelValue = uint8(integration[pixelIndex] * 255.0f);
pixel.R = integratedPixelValue;
pixel.G = integratedPixelValue;
pixel.B = integratedPixelValue;
}
);

// do an integration test
IntegrationTest(dither, ditherImage, i, __FUNCTION__);

// save the results
SImageData combined;
ImageCombine2(noise, dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherInterleavedGradientNoiseAnimatedGoldenRatioIntegrated (const SImageData& ditherImage)
{
printf("\n%s\n", __FUNCTION__);

std::vector<float> integration;
integration.resize(ditherImage.m_width * ditherImage.m_height);
std::fill(integration.begin(), integration.end(), 0.0f);

// make noise
SImageData noiseSrc;
GenerateInterleavedGradientNoise(noiseSrc, ditherImage.m_width, ditherImage.m_height, 0.0f, 0.0f);

SImageData noise;
ImageInit(noise, noiseSrc.m_width, noiseSrc.m_height);

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/animgrint_ignoise%zu.bmp", i);

// add golden ratio to the noise after each frame
noise.m_pixels = noiseSrc.m_pixels;
float add = GoldenRatioMultiple(i);
ImageForEachPixel(
noise,
[&] (SColor& pixel, size_t pixelIndex)
{
float valueFloat = (float(pixel.R) / 255.0f) + add;
size_t valueBig = size_t(valueFloat * 255.0f);
uint8 value = uint8(valueBig % 256);
pixel.R = value;
pixel.G = value;
pixel.B = value;
}
);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// integrate and put the current integration results into the dither image
ImageForEachPixel(
dither,
[&] (SColor& pixel, size_t pixelIndex)
{
float pixelValueFloat = float(pixel.R) / 255.0f;
integration[pixelIndex] = Lerp(integration[pixelIndex], pixelValueFloat, 1.0f / float(i+1));

uint8 integratedPixelValue = uint8(integration[pixelIndex] * 255.0f);
pixel.R = integratedPixelValue;
pixel.G = integratedPixelValue;
pixel.B = integratedPixelValue;
}
);

// do an integration test
IntegrationTest(dither, ditherImage, i, __FUNCTION__);

// save the results
SImageData combined;
ImageCombine2(noise, dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherBlueNoiseAnimatedGoldenRatioIntegrated (const SImageData& ditherImage, const SImageData& noiseSrc)
{
printf("\n%s\n", __FUNCTION__);

std::vector<float> integration;
integration.resize(ditherImage.m_width * ditherImage.m_height);
std::fill(integration.begin(), integration.end(), 0.0f);

SImageData noise;
ImageInit(noise, noiseSrc.m_width, noiseSrc.m_height);

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/animgrint_bluenoise%zu.bmp", i);

// add golden ratio to the noise after each frame
noise.m_pixels = noiseSrc.m_pixels;
float add = GoldenRatioMultiple(i);
ImageForEachPixel(
noise,
[&] (SColor& pixel, size_t pixelIndex)
{
float valueFloat = (float(pixel.R) / 255.0f) + add;
size_t valueBig = size_t(valueFloat * 255.0f);
uint8 value = uint8(valueBig % 256);
pixel.R = value;
pixel.G = value;
pixel.B = value;
}
);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// integrate and put the current integration results into the dither image
ImageForEachPixel(
dither,
[&] (SColor& pixel, size_t pixelIndex)
{
float pixelValueFloat = float(pixel.R) / 255.0f;
integration[pixelIndex] = Lerp(integration[pixelIndex], pixelValueFloat, 1.0f / float(i+1));

uint8 integratedPixelValue = uint8(integration[pixelIndex] * 255.0f);
pixel.R = integratedPixelValue;
pixel.G = integratedPixelValue;
pixel.B = integratedPixelValue;
}
);

// do an integration test
IntegrationTest(dither, ditherImage, i, __FUNCTION__);

// save the results
SImageData combined;
ImageCombine2(noise, dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherWhiteNoiseAnimatedUniformIntegrated (const SImageData& ditherImage)
{
printf("\n%s\n", __FUNCTION__);

std::vector<float> integration;
integration.resize(ditherImage.m_width * ditherImage.m_height);
std::fill(integration.begin(), integration.end(), 0.0f);

// make noise
SImageData noiseSrc;
GenerateWhiteNoise(noiseSrc, ditherImage.m_width, ditherImage.m_height);

SImageData noise;
ImageInit(noise, noiseSrc.m_width, noiseSrc.m_height);

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/animuniint_whitenoise%zu.bmp", i);

// add uniform value to the noise after each frame
noise.m_pixels = noiseSrc.m_pixels;
float add = float(i) / 8.0f;
ImageForEachPixel(
noise,
[&] (SColor& pixel, size_t pixelIndex)
{
float valueFloat = (float(pixel.R) / 255.0f) + add;
size_t valueBig = size_t(valueFloat * 255.0f);
uint8 value = uint8(valueBig % 256);
pixel.R = value;
pixel.G = value;
pixel.B = value;
}
);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// integrate and put the current integration results into the dither image
ImageForEachPixel(
dither,
[&] (SColor& pixel, size_t pixelIndex)
{
float pixelValueFloat = float(pixel.R) / 255.0f;
integration[pixelIndex] = Lerp(integration[pixelIndex], pixelValueFloat, 1.0f / float(i+1));

uint8 integratedPixelValue = uint8(integration[pixelIndex] * 255.0f);
pixel.R = integratedPixelValue;
pixel.G = integratedPixelValue;
pixel.B = integratedPixelValue;
}
);

// do an integration test
IntegrationTest(dither, ditherImage, i, __FUNCTION__);

// save the results
SImageData combined;
ImageCombine2(noise, dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherInterleavedGradientNoiseAnimatedUniformIntegrated (const SImageData& ditherImage)
{
printf("\n%s\n", __FUNCTION__);

std::vector<float> integration;
integration.resize(ditherImage.m_width * ditherImage.m_height);
std::fill(integration.begin(), integration.end(), 0.0f);

// make noise
SImageData noiseSrc;
GenerateInterleavedGradientNoise(noiseSrc, ditherImage.m_width, ditherImage.m_height, 0.0f, 0.0f);

SImageData noise;
ImageInit(noise, noiseSrc.m_width, noiseSrc.m_height);

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/animuniint_ignoise%zu.bmp", i);

// add uniform value to the noise after each frame
noise.m_pixels = noiseSrc.m_pixels;
float add = float(i) / 8.0f;
ImageForEachPixel(
noise,
[&] (SColor& pixel, size_t pixelIndex)
{
float valueFloat = (float(pixel.R) / 255.0f) + add;
size_t valueBig = size_t(valueFloat * 255.0f);
uint8 value = uint8(valueBig % 256);
pixel.R = value;
pixel.G = value;
pixel.B = value;
}
);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// integrate and put the current integration results into the dither image
ImageForEachPixel(
dither,
[&] (SColor& pixel, size_t pixelIndex)
{
float pixelValueFloat = float(pixel.R) / 255.0f;
integration[pixelIndex] = Lerp(integration[pixelIndex], pixelValueFloat, 1.0f / float(i+1));

uint8 integratedPixelValue = uint8(integration[pixelIndex] * 255.0f);
pixel.R = integratedPixelValue;
pixel.G = integratedPixelValue;
pixel.B = integratedPixelValue;
}
);

// do an integration test
IntegrationTest(dither, ditherImage, i, __FUNCTION__);

// save the results
SImageData combined;
ImageCombine2(noise, dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherBlueNoiseAnimatedUniformIntegrated (const SImageData& ditherImage, const SImageData& noiseSrc)
{
printf("\n%s\n", __FUNCTION__);

std::vector<float> integration;
integration.resize(ditherImage.m_width * ditherImage.m_height);
std::fill(integration.begin(), integration.end(), 0.0f);

SImageData noise;
ImageInit(noise, noiseSrc.m_width, noiseSrc.m_height);

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/animuniint_bluenoise%zu.bmp", i);

// add uniform value to the noise after each frame
noise.m_pixels = noiseSrc.m_pixels;
float add = float(i) / 8.0f;
ImageForEachPixel(
noise,
[&] (SColor& pixel, size_t pixelIndex)
{
float valueFloat = (float(pixel.R) / 255.0f) + add;
size_t valueBig = size_t(valueFloat * 255.0f);
uint8 value = uint8(valueBig % 256);
pixel.R = value;
pixel.G = value;
pixel.B = value;
}
);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// integrate and put the current integration results into the dither image
ImageForEachPixel(
dither,
[&] (SColor& pixel, size_t pixelIndex)
{
float pixelValueFloat = float(pixel.R) / 255.0f;
integration[pixelIndex] = Lerp(integration[pixelIndex], pixelValueFloat, 1.0f / float(i+1));

uint8 integratedPixelValue = uint8(integration[pixelIndex] * 255.0f);
pixel.R = integratedPixelValue;
pixel.G = integratedPixelValue;
pixel.B = integratedPixelValue;
}
);

// do an integration test
IntegrationTest(dither, ditherImage, i, __FUNCTION__);

// save the results
SImageData combined;
ImageCombine2(noise, dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherWhiteNoiseAnimatedVDCIntegrated (const SImageData& ditherImage)
{
printf("\n%s\n", __FUNCTION__);

std::vector<float> integration;
integration.resize(ditherImage.m_width * ditherImage.m_height);
std::fill(integration.begin(), integration.end(), 0.0f);

// make noise
SImageData noiseSrc;
GenerateWhiteNoise(noiseSrc, ditherImage.m_width, ditherImage.m_height);

SImageData noise;
ImageInit(noise, noiseSrc.m_width, noiseSrc.m_height);

// Make Van Der Corput sequence
std::array<float, 8> VDC;
GenerateVanDerCoruptSequence(VDC, 2);

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/animvdcint_whitenoise%zu.bmp", i);

// add the van der corput sequence value to the noise each frame
noise.m_pixels = noiseSrc.m_pixels;
float add = VDC[i];
ImageForEachPixel(
noise,
[&] (SColor& pixel, size_t pixelIndex)
{
float valueFloat = (float(pixel.R) / 255.0f) + add;
size_t valueBig = size_t(valueFloat * 255.0f);
uint8 value = uint8(valueBig % 256);
pixel.R = value;
pixel.G = value;
pixel.B = value;
}
);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// integrate and put the current integration results into the dither image
ImageForEachPixel(
dither,
[&] (SColor& pixel, size_t pixelIndex)
{
float pixelValueFloat = float(pixel.R) / 255.0f;
integration[pixelIndex] = Lerp(integration[pixelIndex], pixelValueFloat, 1.0f / float(i+1));

uint8 integratedPixelValue = uint8(integration[pixelIndex] * 255.0f);
pixel.R = integratedPixelValue;
pixel.G = integratedPixelValue;
pixel.B = integratedPixelValue;
}
);

// do an integration test
IntegrationTest(dither, ditherImage, i, __FUNCTION__);

// save the results
SImageData combined;
ImageCombine2(noise, dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherInterleavedGradientNoiseAnimatedVDCIntegrated (const SImageData& ditherImage)
{
printf("\n%s\n", __FUNCTION__);

std::vector<float> integration;
integration.resize(ditherImage.m_width * ditherImage.m_height);
std::fill(integration.begin(), integration.end(), 0.0f);

// make noise
SImageData noiseSrc;
GenerateInterleavedGradientNoise(noiseSrc, ditherImage.m_width, ditherImage.m_height, 0.0f, 0.0f);

SImageData noise;
ImageInit(noise, noiseSrc.m_width, noiseSrc.m_height);

// Make Van Der Corput sequence
std::array<float, 8> VDC;
GenerateVanDerCoruptSequence(VDC, 2);

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/animvdcint_ignoise%zu.bmp", i);

// add the van der corput sequence value to the noise each frame
noise.m_pixels = noiseSrc.m_pixels;
float add = VDC[i];
ImageForEachPixel(
noise,
[&] (SColor& pixel, size_t pixelIndex)
{
float valueFloat = (float(pixel.R) / 255.0f) + add;
size_t valueBig = size_t(valueFloat * 255.0f);
uint8 value = uint8(valueBig % 256);
pixel.R = value;
pixel.G = value;
pixel.B = value;
}
);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// integrate and put the current integration results into the dither image
ImageForEachPixel(
dither,
[&] (SColor& pixel, size_t pixelIndex)
{
float pixelValueFloat = float(pixel.R) / 255.0f;
integration[pixelIndex] = Lerp(integration[pixelIndex], pixelValueFloat, 1.0f / float(i+1));

uint8 integratedPixelValue = uint8(integration[pixelIndex] * 255.0f);
pixel.R = integratedPixelValue;
pixel.G = integratedPixelValue;
pixel.B = integratedPixelValue;
}
);

// do an integration test
IntegrationTest(dither, ditherImage, i, __FUNCTION__);

// save the results
SImageData combined;
ImageCombine2(noise, dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
void DitherBlueNoiseAnimatedVDCIntegrated (const SImageData& ditherImage, const SImageData& noiseSrc)
{
printf("\n%s\n", __FUNCTION__);

std::vector<float> integration;
integration.resize(ditherImage.m_width * ditherImage.m_height);
std::fill(integration.begin(), integration.end(), 0.0f);

SImageData noise;
ImageInit(noise, noiseSrc.m_width, noiseSrc.m_height);

// Make Van Der Corput sequence
std::array<float, 8> VDC;
GenerateVanDerCoruptSequence(VDC, 2);

// animate 8 frames
for (size_t i = 0; i < 8; ++i)
{
char fileName[256];
sprintf(fileName, "out/animvdcint_bluenoise%zu.bmp", i);

// add the van der corput sequence value to the noise each frame
noise.m_pixels = noiseSrc.m_pixels;
float add = VDC[i];
ImageForEachPixel(
noise,
[&] (SColor& pixel, size_t pixelIndex)
{
float valueFloat = (float(pixel.R) / 255.0f) + add;
size_t valueBig = size_t(valueFloat * 255.0f);
uint8 value = uint8(valueBig % 256);
pixel.R = value;
pixel.G = value;
pixel.B = value;
}
);

// dither the image
SImageData dither;
DitherWithTexture(ditherImage, noise, dither);

// integrate and put the current integration results into the dither image
ImageForEachPixel(
dither,
[&] (SColor& pixel, size_t pixelIndex)
{
float pixelValueFloat = float(pixel.R) / 255.0f;
integration[pixelIndex] = Lerp(integration[pixelIndex], pixelValueFloat, 1.0f / float(i+1));

uint8 integratedPixelValue = uint8(integration[pixelIndex] * 255.0f);
pixel.R = integratedPixelValue;
pixel.G = integratedPixelValue;
pixel.B = integratedPixelValue;
}
);

// do an integration test
IntegrationTest(dither, ditherImage, i, __FUNCTION__);

// save the results
SImageData combined;
ImageCombine2(noise, dither, combined);
ImageSave(combined, fileName);
}
}

//======================================================================================
int main (int argc, char** argv)
{
// load the dither image and convert it to greyscale (luma)
SImageData ditherImage;
if (!ImageLoad("src/ditherimage.bmp", ditherImage))
{
return 0;
}
ImageConvertToLuma(ditherImage);

// load the blue noise images.
SImageData blueNoise[8];
for (size_t i = 0; i < 8; ++i)
{
char buffer[256];
sprintf(buffer, "src/BN%zu.bmp", i);
if (!ImageLoad(buffer, blueNoise[i]))
{
printf("Could not load %s", buffer);
return 0;
}

// They have different values in R, G, B so make R be the value for all channels
ImageForEachPixel(
blueNoise[i],
[] (SColor& pixel, size_t pixelIndex)
{
pixel.G = pixel.R;
pixel.B = pixel.R;
}
);
}

g_logFile = fopen("log.txt", "w+t");

// still image dither tests
DitherWhiteNoise(ditherImage);
DitherInterleavedGradientNoise(ditherImage);
DitherBlueNoise(ditherImage, blueNoise[0]);

// Animated dither tests
DitherWhiteNoiseAnimated(ditherImage);
DitherInterleavedGradientNoiseAnimated(ditherImage);
DitherBlueNoiseAnimated(ditherImage, blueNoise);

// Golden ratio animated dither tests
DitherWhiteNoiseAnimatedGoldenRatio(ditherImage);
DitherInterleavedGradientNoiseAnimatedGoldenRatio(ditherImage);
DitherBlueNoiseAnimatedGoldenRatio(ditherImage, blueNoise[0]);

// Uniform animated dither tests
DitherWhiteNoiseAnimatedUniform(ditherImage);
DitherInterleavedGradientNoiseAnimatedUniform(ditherImage);
DitherBlueNoiseAnimatedUniform(ditherImage, blueNoise[0]);

// Animated dither integration tests
DitherWhiteNoiseAnimatedIntegrated(ditherImage);
DitherInterleavedGradientNoiseAnimatedIntegrated(ditherImage);
DitherBlueNoiseAnimatedIntegrated(ditherImage, blueNoise);

// Golden ratio animated dither integration tests
DitherWhiteNoiseAnimatedGoldenRatioIntegrated(ditherImage);
DitherInterleavedGradientNoiseAnimatedGoldenRatioIntegrated(ditherImage);
DitherBlueNoiseAnimatedGoldenRatioIntegrated(ditherImage, blueNoise[0]);

// Uniform animated dither integration tests
DitherWhiteNoiseAnimatedUniformIntegrated(ditherImage);
DitherInterleavedGradientNoiseAnimatedUniformIntegrated(ditherImage);
DitherBlueNoiseAnimatedUniformIntegrated(ditherImage, blueNoise[0]);

// Van der corput animated dither integration tests
DitherWhiteNoiseAnimatedVDCIntegrated(ditherImage);
DitherInterleavedGradientNoiseAnimatedVDCIntegrated(ditherImage);
DitherBlueNoiseAnimatedVDCIntegrated(ditherImage, blueNoise[0]);

fclose(g_logFile);

return 0;
}