Casual Shadertoy Path Tracing 1: Basic Camera, Diffuse, Emissive

Posts in this series:

  1. Basic Camera, Diffuse, Emissive
  2. Image Improvement and Glossy Reflections
  3. Fresnel, Rough Refraction & Absorption, Orbit Camera

Below is a screenshot of the shadertoy that goes with this post. Click to view full size. That shadertoy can be found at: https://www.shadertoy.com/view/tsBBWW

When you see a photorealistic image that someone says is “ray traced” what they likely mean is that it is “path traced”.

Path tracing can be pretty heavy on math and require a lot of attention to detail, but it can make for some really great images.

Funny thing though. You can throw out a lot of the formalism and still get superb results.

That is the premise of this series of blog posts. The goal is to get you path tracing and to let you play around, have fun, create art, without having to worry about integrals, PDFs and the existential question of graphics: “To pi or not to pi?”. Unfortunately that means that our images may take longer to render, and may not be exactly “correct”, but you can learn the more complicated techniques later when and if you are interested.

Rendering is an artistic and creative endeavor, so let's ignore some of the details for now. We are not making a ground truth renderer, we are just having fun making cool renderings.

Let’s get to it!

Shadertoy.com

Shadertoy is a website that lets you write small programs in a C-like language called GLSL, which are run for every pixel in an image, on your GPU (not your CPU). Shadertoy uses WebGL but manages everything except the pixel shader(s) for you.

These programs have very limited input including pixel coordinate, frame number, time, and mouse position, and give only a color as output.

Despite the simplicity, people have done amazing things, like making a playable level of Doom 1.

[SH16C] Doom by Paul Malin https://www.shadertoy.com/view/lldGDr

… Or make beautiful photo realistic images and videos which render in real time.

Snail by Inigo Quilez https://www.shadertoy.com/view/ld3Gz2

Shadertoy was founded by demosceners, who are people that hold parties & competitions to make very small executables that generate impressive audio and visuals.

Besides demosceners, shadertoy.com is frequented by professional and hobbyist game developers, graphics researchers, people who work in movies, and all sorts of other people as well.

It’s a great place to play and learn because it lets you dive in without a lot of fuss. I have learned so much there it boggles my mind sometimes… all sorts of math tricks, graphics techniques, dual numbers for automatic differentiation, ray marching solid geometry, and I even did real time ray tracing back in 2013. Real time ray tracing may SEEM new to the world, but demosceners have been doing real time ray tracing on things like the Commodore 64 for years, even decades. The rest of us are just catching up!

In these articles we are going to be doing path tracing in shadertoy because it’s very easy to do so. You should head over there, create an account and look around a bit at some of the shadertoys. Shadertoy works best in Chrome or Firefox.

We’ll get started when you come back 🙂

http://shadertoy.com

New Shader

To get started, after you log in, click on the “new” button in the upper right to create a new shader. You should be greeted by something that looks like this:

Go ahead and give the shader a name – the names must be globally unique! – as well as at least one tag, and a description. All of those are required so that you can click “Submit” and do the initial save for your shader.

After you have submitted your shader, the submit button will turn into a save button, and you will be good to go onto the next step.

Generating Rays

Path tracing is a type of ray tracing, which means we are going to shoot a ray into the world out of every pixel. For each individual ray, we are going to see what it hits and use that information to give a color to our pixel. Doing that for every pixel independently gives us our final image.

So, the first step to doing path tracing is to calculate the rays for each pixel. A ray has a starting point and a direction, so we are going to need to calculate where the ray starts, and what direction it is going in.

The ray is going to start at the camera location (you could think of this as the eye location), and it’s going to shoot at pixels on an imaginary rectangle in front of the camera, go through those pixels, and go out into the scene to see what they hit.

We are going to make this rectangle of pixels be from -1 to +1 on the x and y axis, and for now we’ll make it be 1 unit away on the z axis. We’ll say the camera is at the origin (0,0,0). Our goal is to calculate the 3d point on this rectangle for each pixel, then calculate the direction of the ray from the origin to that 3d point.

We’ll start by calculating each pixel’s percentage on the x and y axis from 0 to 1. We’ll show that in the red and green channel to make sure we are doing it correctly.

Put the code below in the code box on the right and click the play button (circled red in the screenshot) to see the results.

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // calculate pixel coordinates as a percentage of the screen, from 0 to 1 on each axis
    vec2 uv = fragCoord/iResolution.xy;

    // show percentage as red and green
    fragColor = vec4(uv, 0.0f, 1.0f);
}

We can see from this picture that (0,0) is in the lower left, where the color is black. As the pixels go to the right, they get more red, which shows us that the x axis goes to the right. As the pixels go up, they get more green, which shows us that the y axis goes upward.

Visualizing values like this is a great way to debug shadertoy shaders. The usual graphics debuggers and debugging techniques are not available when doing WebGL, or when using shadertoy, so this “printf” style debugging is the best way to see what’s going on if you are having problems.

Now that we have screen percentage from 0 to 1, we want to change it to being from -1 to 1 so that it’s actually just the x and y coordinate on the imaginary pixel rectangle.

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // calculate 2d coordinates of the ray target on the imaginary pixel plane.
    // -1 to +1 on each axis.
    vec2 pixelTarget2D = (fragCoord/iResolution.xy) * 2.0f - 1.0f;

    // show percentage as red and green
    fragColor = vec4(pixelTarget2D, 0.0f, 1.0f);
}

We can see now that it’s black in the middle of the screen, and going to the right makes the pixels more red while going up makes the pixels more green. However, going left and down doesn’t change the color at all and just leaves it black. This is because colors are clamped to be between 0 and 1 before being displayed, making all negative numbers just be zero.

If you wanted to make sure the negative values really were down there, and it wasn’t just zero, you could visualize the absolute value like this.

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // calculate 2d coordinates of the ray target on the imaginary pixel plane.
    // -1 to +1 on each axis.
    vec2 pixelTarget2D = (fragCoord/iResolution.xy) * 2.0f - 1.0f;
    
    // take absolute value of the pixel target so we can verify there are negative values in the lower left.
    pixelTarget2D = abs(pixelTarget2D);

    // show percentage as red and green
    fragColor = vec4(pixelTarget2D, 0.0f, 1.0f);
}

Ok so now we have the x,y location of the ray target on the imaginary pixel rectangle. We can make the z location be 1 since we want it to be 1 unit away, and that gives us a target for our ray. We now want to get a NORMALIZED vector from the camera position (the origin) to this ray target. That gives us our ray direction and we can visualize that ray to verify that it looks somewhat reasonable.

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // The ray starts at the camera position (the origin)
    vec3 rayPosition = vec3(0.0f, 0.0f, 0.0f);
    
    // calculate coordinates of the ray target on the imaginary pixel plane.
    // -1 to +1 on x,y axis. 1 unit away on the z axis
    vec3 rayTarget = vec3((fragCoord/iResolution.xy) * 2.0f - 1.0f, 1.0f);
    
    // calculate a normalized vector for the ray direction.
    // it's pointing from the ray position to the ray target.
    vec3 rayDir = normalize(rayTarget - rayPosition);

    // show the ray direction
    fragColor = vec4(rayDir, 1.0f);
}

It’s a little subtle, but as you go right from the center of the screen, the pixels get more red and become a purplish color. As you go up from the center, the pixels get more green and become a little more teal. This shows us that our ray directions seem sensible. As we move right on the x axis, the normalized ray direction also starts deviating from straight ahead and points more to the right, more positively on the x axis. Similarly for the y axis and the green color.

Going left and down from center, the blue gets darker, but no red or green shows up, because the x and y components of the ray direction are negative there and get clamped to zero for display.

If you play around with the z coordinate of the ray target, you can make the changes in ray direction more or less obvious, because moving the imaginary pixel rectangle effectively zooms the camera in or out, but we’ll talk more about that in a little bit.

The nearly constant blue color on the screen is due to the positive z value of the ray direction. All of the rays are aiming forward, which is what we’d expect.

We have successfully calculated a ray starting position and direction, and are ready to start shooting rays into the world!

Rendering Geometry

Next we want to use these rays to actually render some geometry. To do this we need functions that will tell us whether a ray intersects a shape or not, and how far down the ray the intersection happens.

We are going to use the functions “TestSphereTrace” and “TestQuadTrace” for ray intersection vs sphere, and ray intersection vs rectangle (quad). Apologies for not having them in a copy/pastable format here in the post. WordPress malfunctions when I try to put them into a code block, but they are in the final shadertoy.

There are other ray vs object intersection functions that you can find. I adapted these from Christer Ericson’s book “Real-Time Collision Detection” (https://www.amazon.com/Real-Time-Collision-Detection-Interactive-Technology/dp/1558607323), and there are a bunch on the net if you google for them. Ray marching can also be used as a numerical method for ray vs object intersection, which lets you have much richer shapes, infinitely repeating objects, and more.
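
To give a feel for what those functions do, here is a rough sketch of a ray vs sphere test, adapted from the standard quadratic formulation. It uses the SRayHitInfo struct and c_minimumRayHitTime constant from the listing below, but it is only a sketch; the exact versions (including handling hits from inside the sphere) are in the final shadertoy.

bool TestSphereTrace(in vec3 rayPos, in vec3 rayDir, inout SRayHitInfo info, in vec4 sphere)
{
    // sphere.xyz is the center, sphere.w is the radius
    vec3 m = rayPos - sphere.xyz;
    float b = dot(m, rayDir);
    float c = dot(m, m) - sphere.w * sphere.w;

    // ray origin outside the sphere and pointing away from it: no hit
    if (c > 0.0f && b > 0.0f)
        return false;

    // negative discriminant means the ray misses the sphere
    float discr = b * b - c;
    if (discr < 0.0f)
        return false;

    // take the nearest intersection in front of the ray
    float dist = -b - sqrt(discr);
    if (dist < 0.0f)
        dist = -b + sqrt(discr);

    // only accept the hit if it's in range and closer than anything hit so far
    if (dist > c_minimumRayHitTime && dist < info.dist)
    {
        info.dist = dist;
        info.normal = normalize((rayPos + rayDir * dist) - sphere.xyz);
        return true;
    }
    return false;
}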

Anyhow, instead of using the ray direction for the color for each pixel, we are going to test the ray against a scene made up of multiple objects, and give a different color to each object. We do that in the GetColorForRay function.

// The minimum distance a ray must travel before we consider an intersection.
// This is to prevent a ray from intersecting a surface it just bounced off of.
const float c_minimumRayHitTime = 0.1f;

// the farthest we look for ray hits
const float c_superFar = 10000.0f;
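
// a few more constants that code later in this post relies on. c_pi and c_twopi are just math,
// c_numBounces and c_FOVDegrees match the values mentioned in the post, and the nudge amount
// below is a guess; the final shadertoy has the real definitions.
const float c_pi = 3.14159265359f;
const float c_twopi = 2.0f * c_pi;
const float c_rayPosNormalNudge = 0.01f;
const int c_numBounces = 8;
const float c_FOVDegrees = 90.0f;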

struct SRayHitInfo
{
    float dist;
    vec3 normal;
};

float ScalarTriple(vec3 u, vec3 v, vec3 w)
{
    return dot(cross(u, v), w);
}

bool TestQuadTrace(in vec3 rayPos, in vec3 rayDir, inout SRayHitInfo info, in vec3 a, in vec3 b, in vec3 c, in vec3 d)
{
// ...
}

bool TestSphereTrace(in vec3 rayPos, in vec3 rayDir, inout SRayHitInfo info, in vec4 sphere)
{
// ...
}

vec3 GetColorForRay(in vec3 rayPos, in vec3 rayDir)
{
    SRayHitInfo hitInfo;
    hitInfo.dist = c_superFar;
    
    vec3 ret = vec3(0.0f, 0.0f, 0.0f);
    
	if (TestSphereTrace(rayPos, rayDir, hitInfo, vec4(-10.0f, 0.0f, 20.0f, 1.0f)))
    {
        ret = vec3(1.0f, 0.1f, 0.1f);
    } 
    
	if (TestSphereTrace(rayPos, rayDir, hitInfo, vec4(0.0f, 0.0f, 20.0f, 1.0f)))
    {
        ret = vec3(0.1f, 1.0f, 0.1f);
    }    
    
    {
        vec3 A = vec3(-15.0f, -15.0f, 22.0f);
        vec3 B = vec3( 15.0f, -15.0f, 22.0f);
        vec3 C = vec3( 15.0f,  15.0f, 22.0f);
        vec3 D = vec3(-15.0f,  15.0f, 22.0f);
        if (TestQuadTrace(rayPos, rayDir, hitInfo, A, B, C, D))
        {
            ret = vec3(0.7f, 0.7f, 0.7f);
        }
	}
    
	if (TestSphereTrace(rayPos, rayDir, hitInfo, vec4(10.0f, 0.0f, 20.0f, 1.0f)))
    {
        ret = vec3(0.1f, 0.1f, 1.0f);
    }       
    
    return ret;
}

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // The ray starts at the camera position (the origin)
    vec3 rayPosition = vec3(0.0f, 0.0f, 0.0f);
    
    // calculate coordinates of the ray target on the imaginary pixel plane.
    // -1 to +1 on x,y axis. 1 unit away on the z axis
    vec3 rayTarget = vec3((fragCoord/iResolution.xy) * 2.0f - 1.0f, 1.0f);
    
    // calculate a normalized vector for the ray direction.
    // it's pointing from the ray position to the ray target.
    vec3 rayDir = normalize(rayTarget - rayPosition);
    
    // raytrace for this pixel
    vec3 color = GetColorForRay(rayPosition, rayDir);

    // show the result
    fragColor = vec4(color, 1.0f);
}

We see some things on the screen, which is nice, but we have a ways to go yet before we are path tracing.

If you look at the code in the final shadertoy, TestSphereTrace and TestQuadTrace only return true if the ray intersects with the shape AND the intersection distance is closer than hitInfo.dist. A common mistake to make when first doing any kind of raytracing (path tracing or other types) is to accept the first thing a ray hits, but you actually want to test all objects and keep whichever hit is closest.

If you only kept the first hit found, not the closest hit, the blue ball would disappear, because the ray test for the grey quad comes before the blue ball, even though it’s farther away.

Correcting Aspect Ratio

You probably noticed that the spheres were stretched in the last image. The reason for that is that our imaginary pixel rectangle is a square (it’s from -1 to +1 on the x and y axis), while the image we are rendering is not a square. That makes the image get stretched horizontally.

The way to fix this is to either grow the imaginary pixel rectangle on the x axis, or shrink it on the y axis, so that the ratio of the width to the height is the same on the imaginary pixel rectangle as it is in our rendered image. I’m going to shrink the y axis. Here is the new mainImage function to use. The aspect ratio correction right after the ray target calculation is what was added to fix the problem.

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // The ray starts at the camera position (the origin)
    vec3 rayPosition = vec3(0.0f, 0.0f, 0.0f);
    
    // calculate coordinates of the ray target on the imaginary pixel plane.
    // -1 to +1 on x,y axis. 1 unit away on the z axis
    vec3 rayTarget = vec3((fragCoord/iResolution.xy) * 2.0f - 1.0f, 1.0f);
    
    // correct for aspect ratio
	float aspectRatio = iResolution.x / iResolution.y;
    rayTarget.y /= aspectRatio;
    
    // calculate a normalized vector for the ray direction.
    // it's pointing from the ray position to the ray target.
    vec3 rayDir = normalize(rayTarget - rayPosition);
    
    // raytrace for this pixel
    vec3 color = GetColorForRay(rayPosition, rayDir);

    // show the result
    fragColor = vec4(color, 1.0f);
}

Camera Zoom (FOV)

I said before that the imaginary pixel rectangle would be 1 unit away, but if we make that distance smaller, it’s the same as zooming the camera out or increasing the field of view. If we make that distance larger, it’s the same as zooming the camera in or decreasing the field of view.

It turns out that having the pixel rectangle be 1 unit away gives you a field of view of 90 degrees.
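
You can see why with a little bit of trig. The pixel rectangle is 2 units across (-1 to +1), so its half size is 1, and half of the field of view has to fit between the center of the rectangle and its edge:

tan(fovDegrees / 2) = halfRectangleSize / cameraDistance = 1 / cameraDistance
cameraDistance = 1 / tan(fovDegrees / 2)

With cameraDistance at 1, tan(fov/2) is 1, which means fov/2 is 45 degrees and the full field of view is 90 degrees.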

The formula for calculating the camera distance for a specific field of view is near the top of the new mainImage function below, and the result is used as the z coordinate of the ray target.

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // The ray starts at the camera position (the origin)
    vec3 rayPosition = vec3(0.0f, 0.0f, 0.0f);
    
    // calculate the camera distance
	float cameraDistance = 1.0f / tan(c_FOVDegrees * 0.5f * c_pi / 180.0f);        
    
    // calculate coordinates of the ray target on the imaginary pixel plane.
    // -1 to +1 on x,y axis. 1 unit away on the z axis
    vec3 rayTarget = vec3((fragCoord/iResolution.xy) * 2.0f - 1.0f, cameraDistance);
    
    // correct for aspect ratio
	float aspectRatio = iResolution.x / iResolution.y;
    rayTarget.y /= aspectRatio;
    
    // calculate a normalized vector for the ray direction.
    // it's pointing from the ray position to the ray target.
    vec3 rayDir = normalize(rayTarget - rayPosition);
    
    // raytrace for this pixel
    vec3 color = GetColorForRay(rayPosition, rayDir);

    // show the result
    fragColor = vec4(color, 1.0f);
}

We are going to leave c_FOVDegrees at 90 degrees as we move forward, but here it is at 120 degrees

and here it is at 45 degrees

If you make the FOV too wide, you’ll start getting distortion at the edges. That’s a deep topic that real cameras deal with too, and we won’t go into it here, but I wanted to make sure you knew about it. You can get some really interesting renders by setting the FOV to very large values like 360 or 720 degrees!

Let’s Path Trace!

Now that we can shoot rays into the world, we are finally ready to do some path tracing!

To do shading we’ll need light sources, and our light sources will just be objects that have emissive lighting, aka glowing objects.

For this post we are only going to do diffuse light shading, which means no shiny reflections. We’ll do shiny reflections probably in the very next post in this series, but for now we’ll just do diffuse.

So, objects will have two fields that define their material:

  • a vec3 for emissive, specifying how much they glow in RGB
  • a vec3 for albedo, specifying what color they are under white light

How path tracing works in this setup is pretty simple. We’ll start out a pixel’s color at black, a “throughput” color at white, and we’ll shoot a ray out into the world, obeying the following rules:

  • When a ray hits an object, emissive*throughput is added to the pixel’s color.
  • When a ray hits an object, the throughput is multiplied by the object’s albedo, which affects the color of future emissive lights.
  • When a ray hits an object, a ray will be reflected in a random direction and the ray will continue (more info in a sec)
  • We will terminate when a ray misses all objects, or when N ray bounces have been reached. (N=8 in this shadertoy, but you can tune that value for speed vs correctness)

That is all there is to it!

The concept of “throughput” might seem strange at first but think of these scenarios as they increase in complexity:

  1. A ray hits a white ball, bounces off and hits a white light. The pixel should be white, because the white ball is lit by white light.
  2. A ray hits a red ball, bounces off and hits the same white light. The pixel should be red because the red ball is lit by white light.
  3. A ray hits a white ball, bounces off and hits a red ball, bounces off and hits a white light. The pixel should be red because the white ball is lit by the red ball being lit by the white light.

When a ray bounces off a surface, all future lighting for that ray is multiplied by the color of that surface.
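
Walking through scenario 3 with those rules: throughput starts at (1,1,1). Hitting the white ball multiplies throughput by its albedo of (1,1,1), leaving it at (1,1,1). Hitting the red ball multiplies it by (1,0,0), making the throughput (1,0,0). Hitting the white light then adds emissive * throughput = (1,1,1) * (1,0,0) = (1,0,0) to the pixel color, so the pixel comes out red, just like the scenario says it should.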

These simple rules are all you need to get soft shadows, bounce lighting, ambient occlusion, and all the rest of the cool things you see show up “automagically” in path tracers.

I said that the ray bounces off randomly but there are two different ways to handle this:

  1. Bounce randomly in the positive hemisphere of the normal and then multiply throughput by dot(newRayDir, surfaceNormal). This is the cosine theta term for diffuse lighting.
  2. Or, bounce randomly in a cosine weighted hemisphere direction of the surface normal. This uses importance sampling for the cosine theta term and is the better way to go.

To do #2, all we need to do is get a “random point on sphere” (also known as a random unit vector), add it to the normal, and normalize the result. It’s nice how simple it is.
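
In code, that ends up being a single line, which you’ll also see in the full bounce loop later in this post:

rayDir = normalize(hitInfo.normal + RandomUnitVector(rngState));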

You are probably wondering how to generate random numbers in a shader. There are many ways to handle it, but here’s how we are going to do it.

First, we are going to initialize a random seed for the current pixel, based on pixel position and frame number, so that each pixel gets different random numbers, and also so that each pixel gets different random numbers each frame.

We can just put this at the top of our mainImage function:

    // initialize a random number state based on frag coord and frame
    uint rngState = uint(uint(fragCoord.x) * uint(1973) + uint(fragCoord.y) * uint(9277) + uint(iFrame) * uint(26699)) | uint(1);

Then, we can toss these functions above our mainImage function, which take the rngState in, modify it, and generate a random number.

uint wang_hash(inout uint seed)
{
    seed = uint(seed ^ uint(61)) ^ uint(seed >> uint(16));
    seed *= uint(9);
    seed = seed ^ (seed >> 4);
    seed *= uint(0x27d4eb2d);
    seed = seed ^ (seed >> 15);
    return seed;
}

float RandomFloat01(inout uint state)
{
    return float(wang_hash(state)) / 4294967296.0;
}

vec3 RandomUnitVector(inout uint state)
{
    float z = RandomFloat01(state) * 2.0f - 1.0f;
    float a = RandomFloat01(state) * c_twopi;
    float r = sqrt(1.0f - z * z);
    float x = r * cos(a);
    float y = r * sin(a);
    return vec3(x, y, z);
}

To implement the path tracing rules we previously described, we are going to change some things…

  1. Move the ray vs object tests from GetColorForRay into a new function called TestSceneTrace. This lets us separate the scene testing logic from the iterative bounce raytracing we are going to do.
  2. Add albedo and emissive vec3’s to hit info and set both fields whenever we intersect an object. This is the object material data.
  3. Implement iterative raytracing in GetColorForRay as we described it
  4. Add a light to the scene so it isn’t completely dark

void TestSceneTrace(in vec3 rayPos, in vec3 rayDir, inout SRayHitInfo hitInfo)
{    
    {
        vec3 A = vec3(-15.0f, -15.0f, 22.0f);
        vec3 B = vec3( 15.0f, -15.0f, 22.0f);
        vec3 C = vec3( 15.0f,  15.0f, 22.0f);
        vec3 D = vec3(-15.0f,  15.0f, 22.0f);
        if (TestQuadTrace(rayPos, rayDir, hitInfo, A, B, C, D))
        {
            hitInfo.albedo = vec3(0.7f, 0.7f, 0.7f);
            hitInfo.emissive = vec3(0.0f, 0.0f, 0.0f);
        }
	}    
    
	if (TestSphereTrace(rayPos, rayDir, hitInfo, vec4(-10.0f, 0.0f, 20.0f, 1.0f)))
    {
        hitInfo.albedo = vec3(1.0f, 0.1f, 0.1f);
        hitInfo.emissive = vec3(0.0f, 0.0f, 0.0f);        
    } 
    
	if (TestSphereTrace(rayPos, rayDir, hitInfo, vec4(0.0f, 0.0f, 20.0f, 1.0f)))
    {
        hitInfo.albedo = vec3(0.1f, 1.0f, 0.1f);
        hitInfo.emissive = vec3(0.0f, 0.0f, 0.0f);        
    }    
    
	if (TestSphereTrace(rayPos, rayDir, hitInfo, vec4(10.0f, 0.0f, 20.0f, 1.0f)))
    {
        hitInfo.albedo = vec3(0.1f, 0.1f, 1.0f);
        hitInfo.emissive = vec3(0.0f, 0.0f, 0.0f);
    }           
    
    
	if (TestSphereTrace(rayPos, rayDir, hitInfo, vec4(10.0f, 10.0f, 20.0f, 5.0f)))
    {
        hitInfo.albedo = vec3(0.0f, 0.0f, 0.0f);
        hitInfo.emissive = vec3(1.0f, 0.9f, 0.7f) * 100.0f;
    }         
}

vec3 GetColorForRay(in vec3 startRayPos, in vec3 startRayDir, inout uint rngState)
{
    // initialize
    vec3 ret = vec3(0.0f, 0.0f, 0.0f);
    vec3 throughput = vec3(1.0f, 1.0f, 1.0f);
    vec3 rayPos = startRayPos;
    vec3 rayDir = startRayDir;
    
    for (int bounceIndex = 0; bounceIndex <= c_numBounces; ++bounceIndex)
    {
        // shoot a ray out into the world
        SRayHitInfo hitInfo;
        hitInfo.dist = c_superFar;
        TestSceneTrace(rayPos, rayDir, hitInfo);
        
        // if the ray missed, we are done
        if (hitInfo.dist == c_superFar)
            break;
        
		// update the ray position
        rayPos = (rayPos + rayDir * hitInfo.dist) + hitInfo.normal * c_rayPosNormalNudge;
        
        // calculate new ray direction, in a cosine weighted hemisphere oriented at normal
        rayDir = normalize(hitInfo.normal + RandomUnitVector(rngState));        
        
		// add in emissive lighting
        ret += hitInfo.emissive * throughput;
        
        // update the throughput
        throughput *= hitInfo.albedo;      
    }
 
    // return pixel color
    return ret;
}

When we run the shadertoy we see something like the above, but the specific dots we see move around each frame due to being random over time.

This might not look great but we are really close now. We just need to make pixels show their average value instead of only a single value.

Averaging Pixel Values

To average the pixel values, we are going to need to make our shader into a multipass shader.

The first step is to make another pass. Click the + sign next to the “image” tab and select “Buffer A”

Now you should have a Buf A tab next to the image tab.

Move everything from the image tab to the Buf A tab. In the image tab, select “Buffer A” for texture “iChannel0” and put this code in to read and show pixels from Buffer A.

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec3 color = texture(iChannel0, fragCoord / iResolution.xy).rgb;
    fragColor = vec4(color, 1.0f);
}

The next step is to actually average the pixel values.

In the “Buf A” tab, also select “Buffer A” for texture “iChannel0” so that we can read Buffer A’s pixel from last frame, to average into the current frame’s output. Now we just add the couple of lines near the end of mainImage below, which read last frame’s color and mix it with this frame’s color, to average all the pixels together!

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // initialize a random number state based on frag coord and frame
    uint rngState = uint(uint(fragCoord.x) * uint(1973) + uint(fragCoord.y) * uint(9277) + uint(iFrame) * uint(26699)) | uint(1);
    
    // The ray starts at the camera position (the origin)
    vec3 rayPosition = vec3(0.0f, 0.0f, 0.0f);
    
    // calculate the camera distance
	float cameraDistance = 1.0f / tan(c_FOVDegrees * 0.5f * c_pi / 180.0f);        
    
    // calculate coordinates of the ray target on the imaginary pixel plane.
    // -1 to +1 on x,y axis. 1 unit away on the z axis
    vec3 rayTarget = vec3((fragCoord/iResolution.xy) * 2.0f - 1.0f, cameraDistance);
    
    // correct for aspect ratio
	float aspectRatio = iResolution.x / iResolution.y;
    rayTarget.y /= aspectRatio;
    
    // calculate a normalized vector for the ray direction.
    // it's pointing from the ray position to the ray target.
    vec3 rayDir = normalize(rayTarget - rayPosition);
    
    // raytrace for this pixel
    vec3 color = GetColorForRay(rayPosition, rayDir, rngState);
    
    // average the frames together
    vec3 lastFrameColor = texture(iChannel0, fragCoord / iResolution.xy).rgb;
    color = mix(lastFrameColor, color, 1.0f / float(iFrame+1));

    // show the result
    fragColor = vec4(color, 1.0f);
}
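
The mix() at the end works because lerping the old average toward the new sample by 1/N is just the incremental form of taking an average:

newAverage = oldAverage + (newSample - oldAverage) / sampleCount
           = mix(oldAverage, newSample, 1.0 / sampleCount)

On frame number iFrame (which starts at zero), this frame’s sample is sample number iFrame + 1, which is where the 1.0f / float(iFrame+1) comes from.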

After 5 minutes of rendering, the result looks pretty noisy, pixels are too bright and clip, and it isn’t a great scene, but we are in fact path tracing now! You can even see some neat lighting effects, like how you see some red on the wall next to the red ball, due to light reflecting off of it. The spheres are casting some nice looking shadows too.

We can clean up the scene a little bit, make something like a Cornell box, and get this render after 5 minutes.

When Rays Miss

One last feature before we close up this first post in the series.

Right now when a ray misses the scene and flies out into the void, we stop raytracing, which implicitly is saying that the void outside the scene is black. Instead of having our background be black, let’s have it be an environment.

In the “Buf A” tab choose the forest cube map for texture iChannel1 and then add the cube map lookup in the ray miss case of GetColorForRay() below.

vec3 GetColorForRay(in vec3 startRayPos, in vec3 startRayDir, inout uint rngState)
{
    // initialize
    vec3 ret = vec3(0.0f, 0.0f, 0.0f);
    vec3 throughput = vec3(1.0f, 1.0f, 1.0f);
    vec3 rayPos = startRayPos;
    vec3 rayDir = startRayDir;
    
    for (int bounceIndex = 0; bounceIndex <= c_numBounces; ++bounceIndex)
    {
        // shoot a ray out into the world
        SRayHitInfo hitInfo;
        hitInfo.dist = c_superFar;
        TestSceneTrace(rayPos, rayDir, hitInfo);
        
        // if the ray missed, we are done
        if (hitInfo.dist == c_superFar)
        {
            ret += texture(iChannel1, rayDir).rgb * throughput;
            break;
        }
        
		// update the ray position
        rayPos = (rayPos + rayDir * hitInfo.dist) + hitInfo.normal * c_rayPosNormalNudge;
        
        // calculate new ray direction, in a cosine weighted hemisphere oriented at normal
        rayDir = normalize(hitInfo.normal + RandomUnitVector(rngState));        
        
		// add in emissive lighting
        ret += hitInfo.emissive * throughput;
        
        // update the throughput
        throughput *= hitInfo.albedo;      
    }
 
    // return pixel color
    return ret;
}

With those changes, here is a 30 second render.

Noise

In 30 seconds it looks about as noisy as the previous 5 minute render did. The 5 minute render looks so much worse because the scene has both very dark places and very bright places. The more different the lighting conditions are in different places in a scene, the more noise you will have. Also, smaller brighter lights will cause more noise than larger dimmer lights.

Learning more sophisticated path tracing techniques (like direct light sampling, also known as next event estimation) can make this less true, but we are aiming for understandability in this path tracer, not convergence speed.

But, you might notice that as your scene is rendering, shadertoy is reporting 60fps, meaning that the scene rendering is being limited by vsync. The rendering would go faster than 60fps if it could, and so would converge faster, but it can’t.

There’s a shadertoy plugin for Chrome that you can install to help this, and you can tell it to do up to 64 draws per frame, which really helps it converge more quickly. I’m on a ~5 year old gaming laptop with an NVIDIA 980M GPU in it, and I got this full screen render in a minute, which has converged pretty nicely! Click on it to view it full screen.

You can get the shadertoy plugin here: https://chrome.google.com/webstore/detail/shadertoy-unofficial-plug/ohicbclhdmkhoabobgppffepcopomhgl

An alternative to getting the shadertoy plugin for chrome is to just shoot N rays out for the pixel each frame, and average them, before putting them into the averaged buffer. That is more efficient than doing the multiple shader passes N times a frame since things are in the memory cache etc when iterating inside the shader. That means your image will converge more in the same amount of time.
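
Here’s a minimal sketch of that idea, assuming a c_raysPerFrame constant that isn’t in this post’s shadertoy. In Buffer A’s mainImage, the single GetColorForRay call becomes a small loop:

// shoot several rays for this pixel this frame and average them together
vec3 color = vec3(0.0f, 0.0f, 0.0f);
for (int rayIndex = 0; rayIndex < c_raysPerFrame; ++rayIndex)
    color += GetColorForRay(rayPosition, rayDir, rngState) / float(c_raysPerFrame);

Each call traces a different path for the same pixel because rngState changes with every random number drawn, so the bounce directions differ.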

Closing

The shadertoy that made this image, that contains everything talked about in this post, is at: https://www.shadertoy.com/view/tsBBWW

Try playing around with the path tracer, making a different scene, and modifying the code for other visual effects.

If you make something, I’d love to see it! You can find me on twitter at @Atrix256

There are going to be 3 or 4 more posts in this series that will be coming pretty quickly, on a variety of topics including depth of field and bokeh, shiny reflections, anti aliasing and fog/smoke. We also are not even sRGB correct yet! These things will improve the image significantly so stay tuned.

Some other awesome resources regarding path tracing are:

  1. Peter Shirley’s “Ray Tracing in One Weekend” (now a free PDF) https://www.realtimerendering.com/raytracing/Ray%20Tracing%20in%20a%20Weekend.pdf
  2. Morgan McGuire’s “Graphics Codex” https://graphicscodex.com/
  3. My more formal blog post on diffuse & emissive path tracing https://blog.demofox.org/2016/09/21/path-tracing-getting-started-with-diffuse-and-emissive/

Path tracing doesn’t only have to be used for realistic rendering though. Below is a stylized path tracing shadertoy I made, in vaporwave style.

“Neon Desert” stylized path tracer shadertoy: https://www.shadertoy.com/view/tdXBW8

Thanks for reading, part 2 coming soon!

Using Blue Noise For Raytraced Soft Shadows

Make sure and click the images in this post and view them at full size. Noise (especially blue noise) tends to disappear in smaller images.

There are 3 shadertoys that go with this post where you can see the techniques in action and see the source code of the implementations.

To go along with the blue noise ray marched fog and light shafts of the last post (Ray Marching Fog With Blue Noise), another fun usage case of blue noise is in raytracing soft shadows.

By soft shadows I mean the kinds of shadows you get from area lights, not the hard edged shadows you get from point lights and directional lights.

Soft shadows have a penumbra – the soft edges where it transitions from being fully in shadow to fully lit, like in the image below.

If you want to know more about why shadows work that way, give this post of mine a read: Why Are Some Shadows Soft And Other Shadows Hard?

So how do we use blue noise for this? We’re going to start with spherical directional lights first, then show how to extend it to spherical positional lights, and then to spherical spot lights.

Spherical Directional Light : White Noise

Shadertoy: https://www.shadertoy.com/view/3sfBWs

A directional light is a light that is shining from a specific direction, regardless of where you are at in the world.

These lights simulate light sources that are so far away, that any movement (translation) you do doesn’t measurably change your relative position to the light source because the sizes and distances involved are gigantic.

We are basically talking about the sun here.

Ok, so raytracing a shadow for a directional light like this just involves shooting a ray along the negative light direction (towards the sun) and seeing if it hits any geometry (the world, blocking the sun light). If it hits anything, the origin of the ray is in shadow. If it doesn’t hit anything, the origin of the ray is lit. The answer is binary and so the shadow edge is hard. There is no penumbra.

The problem with this model of lighting is that it pretends that the sun is an infinitely tiny point in the sky, when the sun is actually a circle, at least from our perspective. It is actually a sphere (or close enough to a sphere), but the “solid angle” 2d projection of it is a circle.

So, to have more realistic lighting and shadows, instead of shooting a ray at a single tiny point in space, we need to see if the ray origin can see the circle that is the spherical lighting source, and even more so, we need to know what percentage of the circle it can see.

The mathy explanation is we need to integrate visibility over the circle, but since our scene isn’t likely to be a closed form equation, we are going to need to resort to numerical methods and sample over the circle domain to get the shadow term.

The less mathy explanation is that we want to shoot rays at a few random places on that circle, and see what percentage of those rays were able to reach the circle without being blocked. We multiply the lighting by that percentage, and we automatically get soft shadows.

Ok so now that we know what we need to do, how do we do it?

So first up, we need to generate a “uniform point on a circle”, by which I mean generate a point where every point in the circle is equally likely to be chosen. If we have 2 uniform random numbers from 0 to 1 we can do that with the GLSL code below, assuming rng is a vec2 with two uniform random numbers in it. Any decent shader hash function or shader rng function should work for making those numbers, but I like using the “Hash without sine” functions from this shadertoy https://www.shadertoy.com/view/4djSRW

float pointRadius = c_lightRadius * sqrt(rng.x);
float pointAngle = rng.y * 2.0f * c_pi;
vec2 diskPoint = vec2(pointRadius*cos(pointAngle), pointRadius*sin(pointAngle));

Note c_lightRadius in the above. That value is the perceptual size of the circle in the sky. Like if you saw a photograph of the sky, this would be the radius of the circle of the sun if that photograph was 1 world unit away (the radius is in world space units under those strange viewing conditions). In the shadertoy demo I use a value of 0.1.

So now we have a uniform random point on a circle but what do we do with it? Well, to make it an actual point on the sun, we need a tangent basis so we have an X, Y and Z coordinate system to put this circle into.

Here is some GLSL to do that.

vec3 lightTangent = normalize(cross(c_lightDir, vec3(0.0f, 1.0f, 0.0f)));
vec3 lightBitangent = normalize(cross(lightTangent, c_lightDir));

This code assumes that the light direction is not ever going to point straight up. You could put in code to use a different vector if the light direction was ever too close to straight up if you want to fix that assumption. Also, c_lightDir is assumed to be normalized.

So now that we have a uniform random point on a circle, and we have a tangent basis aka coordinate system for that circle, it’s time to make a world space target for the ray to shoot at, so we can get a ray direction.

This part is pretty simple. We just pretend that the circle is 1 unit away and use the coordinate axes to convert the point on the circle into a world space point. From there, we subtract the ray’s position from the ray target position and normalize that vector, to get the ray direction.

vec3 rayTarget = rayPos + c_lightDir + diskPoint.x * lightTangent + diskPoint.y * lightBitangent;
vec3 shadowRayDir = normalize(rayTarget - rayPos);

Looking at that code, you might notice we are adding in the ray position, just to subtract it out again. We can skip a step and do this instead of the code above:

vec3 shadowRayDir = normalize(c_lightDir + diskPoint.x * lightTangent + diskPoint.y * lightBitangent);

If you do this, you do in fact get a penumbra. This is what made the image labeled “white noise raw” in the image at the top of this section, using 16 shadow rays per pixel.

Quick note about perf: 16 shadow rays per pixel is kind of a lot. One way to help perf would be to say “if the first 4 rays all agree as to whether we are in shadow or not, just return that answer without doing the 12 other rays”. That would make this code cheaper everywhere except when in the penumbra. Another way to help perf would be to do shadows at a lower resolution. Another way would be to use fewer rays, but filter spatially and/or temporally (over time) to hide that fact. I’ve also heard of people making low resolution shadow maps and skipping the raytracing for pixels very clearly nowhere near a penumbra.

Spherical Directional Light : Blue Noise Over Space, Low Discrepancy Over Time

Being the blue noise zealot I am, white noise has a strong stink to me, and I know that if white noise is being used, almost certainly better results can be had by either using blue noise, or low discrepancy sequences instead.

We are actually going to use blue noise in 2 different ways to improve the shadows. The first way is that instead of using white noise (uncorrelated) uniform random numbers on the circle, we are going to use blue noise (negatively correlated) uniform random numbers on the circle.

White noise clumps together, making semi redundant samples, and also leaves big holes between samples, making for larger unknown areas. Blue noise, on the other hand, gives randomized but roughly evenly spaced samples. (More info: What the heck is blue noise?)

I have some blue-noise-in-a-circle points I generated by using Mitchell’s best candidate algorithm, but when generating candidates, I made sure they were all in a circle before doing the rest of the logic (aka I used rejection sampling inside of MBC). I also made it not calculate distances toroidally, since wrap around doesn’t make sense in this context. The points are also generated in [-1,1] instead of the usual [0,1].

For details of Mitchell’s best candidate algorithm check out my blog post: Generating Blue Noise Sample Points With Mitchell’s Best Candidate Algorithm

Below is that list of points I generated. These blue noise points are progressive, meaning that if you use the first N, regardless of what N is, you will have blue noise. So, this supports up to 64 samples, or any lower number of samples.

const vec2 BlueNoiseInDisk[64] = vec2[64](
    vec2(0.478712,0.875764),
    vec2(-0.337956,-0.793959),
    vec2(-0.955259,-0.028164),
    vec2(0.864527,0.325689),
    vec2(0.209342,-0.395657),
    vec2(-0.106779,0.672585),
    vec2(0.156213,0.235113),
    vec2(-0.413644,-0.082856),
    vec2(-0.415667,0.323909),
    vec2(0.141896,-0.939980),
    vec2(0.954932,-0.182516),
    vec2(-0.766184,0.410799),
    vec2(-0.434912,-0.458845),
    vec2(0.415242,-0.078724),
    vec2(0.728335,-0.491777),
    vec2(-0.058086,-0.066401),
    vec2(0.202990,0.686837),
    vec2(-0.808362,-0.556402),
    vec2(0.507386,-0.640839),
    vec2(-0.723494,-0.229240),
    vec2(0.489740,0.317826),
    vec2(-0.622663,0.765301),
    vec2(-0.010640,0.929347),
    vec2(0.663146,0.647618),
    vec2(-0.096674,-0.413835),
    vec2(0.525945,-0.321063),
    vec2(-0.122533,0.366019),
    vec2(0.195235,-0.687983),
    vec2(-0.563203,0.098748),
    vec2(0.418563,0.561335),
    vec2(-0.378595,0.800367),
    vec2(0.826922,0.001024),
    vec2(-0.085372,-0.766651),
    vec2(-0.921920,0.183673),
    vec2(-0.590008,-0.721799),
    vec2(0.167751,-0.164393),
    vec2(0.032961,-0.562530),
    vec2(0.632900,-0.107059),
    vec2(-0.464080,0.569669),
    vec2(-0.173676,-0.958758),
    vec2(-0.242648,-0.234303),
    vec2(-0.275362,0.157163),
    vec2(0.382295,-0.795131),
    vec2(0.562955,0.115562),
    vec2(0.190586,0.470121),
    vec2(0.770764,-0.297576),
    vec2(0.237281,0.931050),
    vec2(-0.666642,-0.455871),
    vec2(-0.905649,-0.298379),
    vec2(0.339520,0.157829),
    vec2(0.701438,-0.704100),
    vec2(-0.062758,0.160346),
    vec2(-0.220674,0.957141),
    vec2(0.642692,0.432706),
    vec2(-0.773390,-0.015272),
    vec2(-0.671467,0.246880),
    vec2(0.158051,0.062859),
    vec2(0.806009,0.527232),
    vec2(-0.057620,-0.247071),
    vec2(0.333436,-0.516710),
    vec2(-0.550658,-0.315773),
    vec2(-0.652078,0.589846),
    vec2(0.008818,0.530556),
    vec2(-0.210004,0.519896) 
);

Just doing that change, we get this strange result:

The problem is that every pixel is using the same blue noise sample sequence, and there are only 16 samples. That means there are 16 overlapping shadows for each sphere basically. Blue noise has done a good job in that those shadows are pretty different from each other, but the shadows aren’t good enough yet.

Now we need to bring in the second blue noise. We are going to tile a blue noise texture in screen space (you can get one here: http://momentsingraphics.de/BlueNoise.html), so that we can get a “blue noise random value” per pixel that is between 0 and 1. We are going to multiply that value by 2 * pi and use that as an angle for how to rotate our 2d blue noise samples.

Each pixel will have a different rotation amount, and each pixel will use a rotation that is very different from the rotation of neighboring pixels. This is due to blue noise textures having that property: pixel values are very different from neighbor pixel values.

A nice optimization here is that since all samples are being rotated by the same amount for a pixel, you can calculate the cosine and sine of the angle outside of the loop for shooting shadow rays, and just re-use them inside that loop for rotation.
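
Putting those pieces together, the per pixel shadow sampling can look something like the sketch below. The blue noise texture channel, the c_shadowRayCount constant, and the TestRayHitsScene function are placeholders for whatever your setup uses, not the exact code from the shadertoy:

// read a 0 to 1 blue noise value for this pixel from a tiled blue noise texture (channel binding assumed)
float blueNoise = texture(iChannel1, fragCoord / vec2(textureSize(iChannel1, 0))).r;
float theta = blueNoise * 2.0f * c_pi;
float cosTheta = cos(theta);  // calculated once per pixel, outside the shadow ray loop
float sinTheta = sin(theta);

float shadowTerm = 0.0f;
for (int i = 0; i < c_shadowRayCount; ++i)
{
    // rotate the precomputed blue noise disk point by this pixel's angle, and scale it by the light radius
    vec2 p = BlueNoiseInDisk[i] * c_lightRadius;
    vec2 diskPoint = vec2(p.x * cosTheta - p.y * sinTheta, p.x * sinTheta + p.y * cosTheta);

    // shoot a shadow ray at that point on the light's disk. TestRayHitsScene is a hypothetical scene query.
    vec3 shadowRayDir = normalize(c_lightDir + diskPoint.x * lightTangent + diskPoint.y * lightBitangent);
    shadowTerm += TestRayHitsScene(rayPos, shadowRayDir) ? 0.0f : 1.0f;
}
shadowTerm /= float(c_shadowRayCount);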

If we do that, we end up with the image on the right labeled “blue noise raw”. Compare it vs “white noise raw” to see how the noise in the penumbra is way less noticeable. Same number of samples, same amount of computational complexity.

We aren’t done yet though… we have to now consider the axis of time, before we can declare victory.

Cutting right to the chase, we are going to add frameNumber * goldenRatio to the blue noise value and fract it to bring it back to the 0 to 1 range. We want to do that before we multiply by 2 * pi to make it an angle.

If we do that, the blue noise value for each pixel becomes a low discrepancy sequence over time. It damages our blue noise over space property a little but it is a net win.
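
In GLSL, that’s just one extra line applied to the blue noise value before it gets multiplied by 2 * pi (the constant name is mine):

// make each pixel's noise value a low discrepancy sequence over time
blueNoise = fract(blueNoise + float(iFrame) * c_goldenRatio);  // c_goldenRatio = 1.61803398875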

For a deeper discussion about this topic, check out these posts:
Animating Noise For Integration Over Time
Animating Noise For Integration Over Time 2: Uniform Over Time

The short answer to why this is better is that the distribution of a pixel’s value over time for shorter time durations is going to be closer to the actual distribution of the pixel. This is in contrast to white noise which does have the right average over larger sample counts, but for shorter sample counts may clump values together, and leave voids of unseen values.

In short, this makes the pixel more temporally stable, which is great for TAA and other temporal filtering/accumulation methods, but also is nicer even when viewing it without temporal filtering. You can turn on and off the ANIMATE_NOISE define in the shader and look at the “blue noise raw” panel to see how it looks when animated vs not animated without filtering.

Here are the final results, and the link to the shadertoy again.

Shadertoy: https://www.shadertoy.com/view/3sfBWs

By the way, noise is way more noticeable when it’s high contrast with the things around it. For instance, here’s the same image as the above again, but the ambient lighting is 10 times brighter (0.5, instead of 0.05). It’s way harder to see the noise, even the white noise!

Spherical Positional Light

With spherical directional lights explained, the hard part is done!

You only need to make 2 modifications to your code to get from spherical directional lights to spherical positional lights.

The first change is that instead of c_lightDir being a constant, wherever you use that vector, you should instead use the normalized vector pointing from the pixel being shaded to the position of the light (the center of the light).

The second change is that c_lightRadius isn’t constant either anymore. Unlike the sun, which doesn’t noticeably change size as you move around the world, spherical positional lights DO noticeably change size. As you get farther away from the light, the 2d projected circle for that light gets smaller. As you get closer, the circle gets larger.

The formula is actually real simple. If c_lightRadius is the radius of the sphere, and dist is the distance from the shaded pixel to the center of the light, the actual radius you should use for the circle when doing the rest of the “spherical directional light” logic is just: c_lightRadius / dist.
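
In GLSL, those two changes might look like this, using c_lightPos for the center of the light and hitPos for the world position of the pixel being shaded (the same names the spot light code below uses):

// per pixel light direction and projected light radius for a spherical positional light
vec3 toLight = c_lightPos - hitPos;
float distToLight = length(toLight);
vec3 lightDir = toLight / distToLight;            // use this wherever c_lightDir was used before
float lightRadius = c_lightRadius / distToLight;  // use this wherever c_lightRadius was used before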

Making those changes, you can get this:

Shadertoy: https://www.shadertoy.com/view/ts2BRh

Spherical Spot Light

Now that you have a spherical positional light, it’s pretty easy to modify it into a spherical spotlight.

The first thing to do is to re-introduce a light direction. This light direction is the direction that the spot light is shining in. We are going to dot product the vector from the light to the pixel by this light direction to know what angle the pixel is from the light, to know if it’s in the light cone of the spot light or not.

The next thing to do is to define a “cosine theta inner” and a “cosine theta outer”. When the dot product in the last paragraph is less than “cosine theta outer”, then there is no light. If it’s greater than “cosine theta inner” then it’s full light. Between those values we want it to fade from lit to unlit, and for that I like to use smoothstep to do a non linear fade. Here’s some GLSL code that does this:

vec3 lightDir = normalize(c_lightPos - hitPos);
float angleAtten = dot(lightDir, -c_lightDir);
angleAtten = smoothstep(c_cosThetaOuter, c_cosThetaInner, angleAtten);
lightColor *= angleAtten;

The rest of the code runs the same as a Spherical Positional Light, and you get a result like this:

Shadertoy: https://www.shadertoy.com/view/tsjfRh

Summary

This post showed how to use blue noise in two different ways for raytracing soft shadows.

  1. Blue noise sample points in a circle were used to sample a circle representing the light. Using blue noise here results in less error than white noise.
  2. A blue noise texture was used to rotate those 2d sample points per each pixel to make the resulting error be screen space blue noise, which looks better, is harder to detect, and is filtered more easily, compared to white noise, even for the same amount of error.

An important note here is that while blue noise converges at the same rate as white noise (but starts with lower error), low discrepancy sequences can converge quite a bit faster than white noise or blue noise.

If we could use a low discrepancy sequence for sampling the circle in step 1 above and converge so that there was no error, we could be done and wouldn’t need to do step 2.

Unfortunately, low discrepancy sequences have aliasing and so have worse looking error than blue noise before they converge. For the ray sample counts we have in budget right now for real time rendering, we definitely can’t afford to converge, even with an LDS, so that is why we reached for blue noise.

This is important to know, because for non real time rendering usage cases, you are likely better off with an LDS, than blue noise. Blue noise is just a neat trick you can do when you are really strapped for samples (aka can’t afford many rays).

One last thing – it turns out that sampling the projected 2d circle for 3d spherical lights is not quite the correct thing to do. It’s a decent approximation but for more information, check out this link (and note, you can apply blue noise the same way with better spherical light sampling):

https://schuttejoe.github.io/post/arealightsampling/

Happy raytracing and feel free to hit me up on shadertoy or on twitter if you have any questions @Atrix256

Ray Marching Fog With Blue Noise

The shadertoy that goes with this post is at: https://www.shadertoy.com/view/WsfBDf

I talk about blue noise a lot more often than I show usage cases of blue noise. Ray marching fog is a great usage case of blue noise that I haven’t shared yet.

Make sure and click the images in this post and view them at their full size to best see the noise. Blue noise tends to melt in thumbnail images, due to the low pass filtering involved in making an image smaller.

Real quick before going into it, here are some other usage cases of blue noise with corresponding shadertoys.

So here’s an algorithm for ray marching fog, which can give some simple scattering effects, like crepuscular lighting aka “god rays”.

  1. Render normally.
  2. Take N steps down the ray for the pixel starting from the camera, going to the depth value for the pixel.
  3. At each step, look in a shadow map to see if that world position is in shadow or not. Calculate the percentage of steps that were in shadow.
  4. Use this percentage to lerp between an “unlit fog color” and a “lit fog color”, use that as the fog color for that pixel.
  5. Use the usual distance fog calculations for that pixel.
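
Here’s a sketch of steps 2 through 4 in GLSL. The constants, parameter names, and the PositionIsInShadow shadow map lookup are placeholders, not code from the shadertoy:

// returns what percentage of the ray march steps were lit (steps 2 through 4 above)
float FogLitPercent(in vec3 rayStart, in vec3 rayEnd, in float startOffset01)
{
    vec3 stepVec = (rayEnd - rayStart) / float(c_numFogSteps);

    // the 0 to 1 noise value pushes the first sample down the ray by a fraction of one step
    vec3 samplePos = rayStart + stepVec * startOffset01;

    float litSteps = 0.0f;
    for (int i = 0; i < c_numFogSteps; ++i)
    {
        litSteps += PositionIsInShadow(samplePos) ? 0.0f : 1.0f;  // hypothetical shadow map lookup
        samplePos += stepVec;
    }
    return litSteps / float(c_numFogSteps);
}

The fog color for the pixel is then mix(c_unlitFogColor, c_litFogColor, litPercent), applied with the usual distance fog falloff.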

Here is an image of doing the algorithm above, doing 256 steps.

If you decrease the step count to 16, the algorithm gets a lot faster, but you get much worse results that look like this:

That looks real bad, but you can break up banding by using noise. When you do the ray marching, you can roll a random number (white noise) for each pixel, and use that 0 to 1 value as a percentage of “one step distance” to push the ray start down the ray. Doing that, you get this result.

That result is better than the banding, but is pretty ugly. Using a tiled screen space blue noise texture as a source of the random numbers instead gives you this result, which is lots better.

As a bonus, here’s a result using Jorge Jimenez’s “Interleaved Gradient Noise” which is designed to work best when used with TAA that uses the “3×3 neighborhood history rejection” method.

Other Notes

There is a shadertoy that shows all this at https://www.shadertoy.com/view/WsfBDf

The step count used in the ray march defines how many “shades” there are between the lit and unlit fog. For instance, 2 steps will only let the lit percentage be: 0%, 50%, 100%. That only gives 3 shades of fog. In the images above where 16 steps were used, it gives 17 different shades (it’s always numsteps+1). If you drop the count lower, the noise will have fewer shades and be more noticeable in all cases.

If you use PCF sampling of the shadow map (if doing this in a raster setup), the fog light shafts get softer edges, which can look real nice, if you want that look.

If this technique looks interesting to you, you should also give this a read, for making better fake fog: https://www.iquilezles.org/www/articles/fog/fog.htm

These still images tell only part of the story. If you animate the noise well over both space AND time, where a single frame has good noise patterns, but each pixel is also blue or low discrepancy over time, you’ll have even better results. The shadertoy has a define for animating the noise ANIMATE_NOISE that you can turn on and off to see the difference. The result is better RAW like the shadertoy shows it, but it’s also better under temporal filtering, because it makes the value temporally converge to something closer to the correct value with less flickering.

The blue noise is animated by adding the golden ratio * the frame number to the blue noise texture value and using fract to bring it back between 0 and 1. Repeatedly adding the golden ratio to any starting value, and fract’ing it, will make a progressive low discrepancy sequence, which is exactly what we want. Doing this process makes each pixel be low discrepancy over time while the blue noise texture makes the pixels be blue noise (randomized low discrepancy) over space. Unfortunately, this damages the blue noise over space a bit, but it is a net win.

Interleaved gradient noise in this shadertoy is animated by scrolling the texture on each axis by 5.588238 pixels each frame. This is a value Jorge Jimenez (the maker of IGN) found through much manual effort, to try and find a scroll amount that made pixels be low discrepancy sequences. He hasn’t published this information but said I was free to share it so long as I gave him credit. Thanks Jorge!

For more info on animating noise, check out these two posts of mine: Animating Noise For Integration Over Time, and Animating Noise For Integration Over Time 2: Uniform Over Time.

The sampling in this setup has four dimensions, but isn’t vanilla four dimensional. It uses blue noise for the 2 dimensions of screen space, it uses low discrepancy sampling for the dimension of time, but it uses regular spaced samples on the dimension of distance down the ray.

I believe there are at least a couple different better ways to sample this setup but I’m not 100% sure what they are, or what the ideal one would be.

The “regular spaced samples over distance” seem like something that would be nice to look at fixing, but it’s also possible that a triangular distributed blue noise could be better for the blue noise over space.

For another rabbit hole to go down, check out this very related presentation slide deck.
“Low Complexity, High Fidelity – INSIDE Rendering”
https://www.gdcvault.com/play/1023002/Low-Complexity-High-Fidelity-INSIDE

Guess a Number Between 1 and 10

When you were a kid and had a gumball to share but 2 of your friends wanted the gumball, what would you do?

When I was a kid, and now that I have two kids of my own, a common way to resolve disputes like this is to say “Pick a number between 1 and 10 and whoever is closer gets the gumball”.

This works with any number of people and with any number of rewards because you can say the N closest people out of M people total get a reward. Ties can be broken by repeating the process.

It SEEMS like a purely random choice too, doesn’t it?

Well, if you want to win, the right move is to always guess 5 or 6, actually. It makes sense if you think about how 5.5 is right in the middle, so 5 and 6 are the numbers least far from all the other numbers.

If you are only playing against one other person, and both the real number and the guess are really random (uniform independent random numbers), guessing 5 will let you win 55% of the time, and tie 14% of the time. That means you will lose only 31% of the time, or less than one out of three times!

There is a way to make this game fair again though, and that is by letting the numbers wrap around between 1 and 10 as if they were on a circle instead of a number line.

So, if the right answer is 1, and you guess 10, without wrap around, the distance would be 9. With wrap around, the distance would be 1.

You calculate wrap around distance by saying “if the difference is greater than 5, then the difference is actually 10 minus that number”.

Playing that way, it puts all guesses on equal footing. Every number acts like a 5, because every number is 5 or less away from every other number.

In that scenario, guessing a 5 will cause you to win 41% of the time, tie 18% of the time, and lose 41% of the time.

The game is fair again, as you can verify experimentally with a simulation.

The code for the simulation that generated these statistics is at:
https://github.com/Atrix256/GuessNumberOneTen

Another way kids have to solve these sorts of situations is through “inka binka” and similar, where there is a song and a deterministic process to pick a winner.

My 5 year old already realized the same position wins every time, which I was pretty proud of, hehe.

This sort of stuff is pretty neat though and makes me think people out there must be studying childhood computational sociology 😛