Inverting a Button Press (Featuring Current Dividers)

This post talks about how to make a circuit where you press a button to turn off a light, and also explains how and why it works.

Here's a circuit lighting up an LED (diagram made in https://www.circuitlab.com/editor/). The circuit is powered by 5 volts, and the LED has a voltage drop of 1.85 volts, leaving 3.15 volts. Those 3.15 volts are put across a 100 ohm resistor, resulting in about 32 milliamps of current through the circuit. That is a bit high for the LED, but my power supply shows 25 milliamps actually going through the circuit, which is more in line with the actual limits of the LED.

(Images: image-1.png, image-2.png)

We can add a switch, so that the light is off until we press the button to turn it on. When the button is pressed, it closes the circuit and allows electricity to flow.

(Images: image-3.png, image-4.png, image-5.png)

What if we want a circuit where the light is on until we press the button, and then the light turns off?

To make that happen, the basic idea is to have a switch that, when pressed, connects the circuit to a lower resistance path to ground. It can't be a zero resistance path to ground though, because that would be a short circuit: it would draw a lot of current, and your components could heat up and catch fire.

Here I make a circuit through the LED with 200 Ohms of resistance, giving about 16mA. When the button is pressed, another path opens up which is only 100 Ohms of resistance to ground (about 50mA).

(Images: image-8.png, image-6.png, image-7.png)

Since there is less resistance when the button is pressed, there is more current and more power being used when the LED is off (!!!). My power supply says 11 milliamps / 55 milliwatts when the button isn't pressed, and 45 milliamps / 225 milliwatts when the button is pressed and the light is off. Interesting that turning off a light uses more power, isn't it? To help lower the current draw when the button is pressed, you could replace the 2nd resistor with a higher value, but that will also lower the current when the button isn't pressed, making the LED dimmer.

You could keep the same overall resistance on the LED path but put more of it in the 2nd resistor and less in the first resistor. At best, that would make pressing the button use only a tiny bit more power than not pressing it, but with this setup, pressing the button will always use more power.

Why Does Electricity Completely Bypass The Resistor and LED?

You might wonder why, when the switch is closed, making the circuit below, the electricity seems to completely bypass R1 and the LED. The circuit is connected, so shouldn't some current go through R1 and the LED too? What prevents that from happening?

(Image: image-10.png)

This gets doubly strange when you realize that while there is no resistor on the path where the switch was, the wire itself does have resistance, so it’s like there is just a very small resistor on that path. Wires have effects on both voltage (voltage drops across sections of them) and current, just like resistors do, so why is this configuration special?

First let’s look at the current in parallel resistors. We’ll start with two 100 Ohm resistors in parallel off of a 5 volt source.

(Image: image-9-3.png)

First we can calculate the equivalent parallel resistance of these two resistors. When resistors are in parallel, the effective resistance actually drops. The formula for equivalent parallel resistance is:

1/R = 1/R_1 + 1/R_2 + ... + 1/R_N

So for these, 1/R = 1/100 + 1/100 = 1/50. So, R = 50, meaning these two 100 Ohm resistors in parallel are equivalent to this:

(Image: image-11.png)

50 Ohms of resistance at 5 volts means you get 5 volts / 50 ohms = 0.1 amps of current through either circuit.

When a circuit splits like the one with two 100 ohm resistors does, the current splits as well, divided (possibly unevenly) across those paths.

Paths with lower resistance get a larger share of the current. The formula for the current through resistor k in a set of parallel resistors is:

I_k = (I * 1/R_k) / (1/R_1 + 1/R_2 + ... + 1/R_N)

Calculating it for R1, we have: (0.1 * 1/100) / (1/100 + 1/100) = 0.001 / (2/100) = 0.05 amps.

It isn't really surprising that it gets half of the current: since both resistors have the same value, they each get half.

What if we change the resistors to have values of 190 Ohms and 10 Ohms respectively?

(Image: image-9-5.png)

First up, we can calculate the equivalent parallel resistance: 1/R = 1/190 + 1/10. R = 9.5 Ohms. At 5 volts, we get 5 volts / 9.5 ohms = 526mA of current.

Let’s now calculate how much of the current goes through the 190 Ohm resistor.

(0.526 * 1/190) / (1/190 + 1/10) = 0.026A or 26mA.

Let’s calculate how much goes through the 10 Ohm resistor.

(0.526 * 1/10) / (1/190 + 1/10) = 0.5A or 500mA.

Most of the current by far is now going through the 10 Ohm resistor, the smaller resistor. As R2 gets smaller and approaches 0 Ohms (which is what a plain wire approaches), it takes closer and closer to all of the current, since one over the resistance controls what share of the current goes down that path, and 1 divided by a very small number is a very large number.
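If you want to sanity check those numbers, here is a tiny C++ sketch (my own illustration, not from any repo) that computes the equivalent resistance and the per branch currents of a set of parallel resistors:

// Sanity check for the current divider math above (illustrative sketch).
#include <cstdio>
#include <vector>

int main()
{
    const float voltage = 5.0f;
    const std::vector<float> resistors = { 190.0f, 10.0f }; // try { 100.0f, 100.0f } too

    // equivalent parallel resistance: 1/R = 1/R_1 + ... + 1/R_N
    float sumInverse = 0.0f;
    for (float r : resistors)
        sumInverse += 1.0f / r;
    float equivalentR = 1.0f / sumInverse;

    // total current through the equivalent circuit
    float totalCurrent = voltage / equivalentR;
    printf("equivalent resistance = %0.1f ohms, total current = %0.0f mA\n", equivalentR, totalCurrent * 1000.0f);

    // current divider: each branch gets a share proportional to 1/R_k
    for (float r : resistors)
        printf("  %0.0f ohm branch carries %0.0f mA\n", r, totalCurrent * (1.0f / r) / sumInverse * 1000.0f);

    return 0;
}

Running it with { 190.0f, 10.0f } reproduces the 26mA / 500mA split, and { 100.0f, 100.0f } gives the 50mA / 50mA split.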

That's the intuition for why, in the circuit with the switch, nearly 100% of the current goes through the bare wire path whenever the closed switch makes it available. When you put resistors in parallel like this, it's actually called a current divider, much like a previous post showed how to use resistors to make voltage dividers.

When the switch is pressed in our setup, a small amount of current does still try to go through the resistor and the LED, but there's another thing at play here: voltage.

Voltage, if you remember, is the difference in electrical potential between two points. The LED requires a difference of at least roughly 1.8 volts to let electricity through and to light up. When the switch is pressed, the voltage is the same on the positive and negative sides of the LED branch, which means the LED has zero volts across it and does not light up or let electricity flow through it. Let's explore why that is…

(Image: image-12.png)

In this circuit, there is 5 volts, a total of 1000 Ohms, and so 5mA of current. The voltage drop across R1 is the full 5 volts.

(Image: image-14.png)

This circuit also has 5 volts, 1000 Ohms total, and 5mA of current. To calculate the voltage drop across each resistor, you use Ohm's law in the form V = IR: you multiply the current of the circuit by the resistor value to get the voltage drop across that resistor. For R1 it's V = 0.005A * 900 Ohms = 4.5 volts. For R2 it's V = 0.005A * 100 Ohms = 0.5 volts.

You can see that the larger resistor got most of the voltage drop, while the smaller resistor got only a small voltage drop.

(Image: image-15.png)

This circuit has the same voltage, total resistance, and current. Skipping ahead to calculating the voltage drop across the resistors, R1 is 0.005 * 999 = 4.995 volts, and R2 is 0.005 * 1 = 0.005 volts. As R2 approaches zero, its voltage drop also approaches zero. This is why in our original circuit (below), when the switch is closed, there is (almost) no voltage drop across the wire on the right, meaning the voltage above R1 and the voltage below the LED are the same. That leaves 0 volts across the path of the circuit that has the LED on it, which is not enough for the LED to conduct, and it goes along with the nearly zero current trying to go down that path as well.

(Image: image-8.png)

If you want to know more about this stuff, give this a read: https://learn.digilentinc.com/Documents/345

There is also a neat technique to do these sorts of calculations called nodal analysis: https://www.youtube.com/watch?v=f-sbANgw4fo

There is another technique called mesh analysis: https://www.youtube.com/watch?v=eQpc2QRFv7Y

Alternatives To This “Off Button” Circuit

The fact that the circuit uses more power when the light is off is pretty bad, so you are probably interested in some alternatives.

The button I am using is called a "single pole, single throw" switch. "Single pole" means there is one electrically connected input, and "single throw" means there is only one output that the input is either connected to or not. Double pole switches can control two different circuits with the same button/switch – you can switch two different parts of a circuit with one press. Double throw switches connect the input to one output when the button is pressed, and to a different output when the button is not pressed. Using a double throw switch in the last circuit, you could hook the LED up to the output for when the button is not pressed, and leave the other output disconnected for when the button is pressed. That would make it use no power when the light is turned off, instead of using more power.

With all this talk of switches, you may be wondering if there's a switch that you can turn on and off with electricity, instead of requiring a human to actually press a button. There are in fact such things! One is the relay, which, when given power, energizes an electromagnet inside itself and uses that magnetism to close a switch. You can actually hear them click as they turn on and off! Much more common for this task are transistors though, which allow small amounts of electricity to control the flow of larger amounts of electricity. That lets them be used as electronic switches, but also lets them work as amplifiers. It would be fun to write a blog post about them at a future point. Transistors can be used to make circuits that invert a button press too, and at that point, we are basically talking about a NOT gate.

Thanks for reading!

Addendum

Someone pointed out to me that LEDs themselves have some internal resistance, so you could move all of the external resistance down into the shared path. The nice thing about this is that when the button is pressed, the circuit only uses 25mA instead of 50mA. I tested it and it does indeed work!

(Image: image-19.png)

Resistance and Voltage Dividers

When I first started working with electronics, I tended to think of my circuits, or even parts of my circuit, in isolation. The horror of it though is that your circuit is plugged into other things – at minimum a battery, but commonly other devices, or your house and the power grid – and those things can affect how your circuit works.

Beyond being physically connected with wires to other things, your circuits also have a connection to the rest of the world through electromagnetic fields.

In this post we are going to talk about voltage dividers, which can be useful when made on purpose, but can also be made by accident and cause strange behavior.

Voltage Dividers

Voltage dividers are a way of giving you a lower voltage. If you have a 9 volt battery and only want 6 volts, a voltage divider can do that for you. There is a downside to voltage dividers that we’ll explore in this post, but they are incredibly simple to make: you only need two resistors.

First let’s look at a single resistor in a circuit. Lets put a 1000 ohm resistor in a circuit with a 9 volt battery. If we connect our multimeter probes to the wire on the same side of the resistor and measure volts we’ll get zero volts (see diagram below). This is because volts is a measurement of electric potential between two points. Our multimeter is measuring the difference in electric potential between two points right next to each other on a wire, and the difference is essentially zero. The red and black arrows on the circuit diagram are where we connect the red (+) and black (-) probes of our multimeter.(Tangent: 9 milliamps is going through this circuit since there is 1000 ohms of resistance and 9 volts. The power supply says 8 but it has limited accuracy, resistors are not exactly their labeled value, wires have resistance, etc. It also reads that there are 9 volts * 8 milliamps = 72 milliwatts of power being used.)

(Circuit diagrams made at https://www.circuitlab.com/editor/)

What if we put our multimeter probes on different sides of the resistor? In that case, we read 9 volts. The resistor makes it more difficult for electricity to cross, and thus there is a difference in electric potential of 9 volts between its two sides.

What would happen if we put two resistors in?

If we measure at the red and black arrows again, we’ll still have 9 volts. If we measure at the red and orange arrows though, we’ll see 4.5 volts. If we read at the orange and black arrows, we’ll also see 4.5 volts. We know that the whole circuit needs to go from 9 volts to 0 volts since that is what is provided by our battery, but it dropped by half on the first resistor, and then dropped the rest of the way on the second resistor. (Tangent: the total resistance here is 2000 ohms, so 4.5 milliamps would flow through the circuit)

Let’s change the value of the resistors and see what happens.

I didn't have a 2000 ohm resistor so I just put two 1000 ohm resistors in series (more on that further down).

If we measure between the red and black, we still have 9 volts. If we measure between the red and orange, we get 3 volts though, and if we measure between the orange and black, we get 6 volts. Weird! (Tangent: The total resistance here is 3000 ohms, so there should be 9 volts / 3000 ohms = 3 milliamps flowing through the circuit but my power supply isn’t showing that correctly.)

Similarly, you can change the second resistor to be half instead of double and get the opposite result.

I didn't have a 500 ohm resistor so I put two 1000 ohm resistors in parallel (more on that further down).

What is going on here is that the 9 volts are dropping off across the resistors based on their relative values. When the resistors are equal in value, they each get half of the voltage. When they are unequal, the voltage across the R2 resistor is calculated like this:

V_{R2} = V \cdot \frac{R_2}{R_1+R_2}
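If you like to double check this sort of thing in code, here is a minimal C++ sketch (mine, with a hypothetical helper name) of that formula, which reproduces the 4.5, 6 and 3 volt readings from above:

// Hypothetical helper: output voltage of an unloaded voltage divider.
#include <cstdio>

// V_R2 = V * R2 / (R1 + R2)
float DividerOutput(float volts, float r1, float r2)
{
    return volts * r2 / (r1 + r2);
}

int main()
{
    printf("%0.1f volts\n", DividerOutput(9.0f, 1000.0f, 1000.0f)); // 4.5
    printf("%0.1f volts\n", DividerOutput(9.0f, 1000.0f, 2000.0f)); // 6.0
    printf("%0.1f volts\n", DividerOutput(9.0f, 1000.0f,  500.0f)); // 3.0
    return 0;
}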

To actually use this as a power source, you would connect new wires as the positive and negative power for a sub circuit.

Note in the above, I’m not saying that is -6V and +6V, which would be 12 volts total, I’m just labeling the positive and negative sides of the 6 volts of power available.

You could use the top part as a 3 volt source if you wanted instead, or in addition to the 6 volts you are using from the bottom part. You could even split the voltage into more than just two levels, but instead could put in N resistors to have N voltage levels.

The famous 555 timer for instance internally uses a voltage divider made of three 5K resistors to create two reference voltages (at 1/3 and 2/3 of the supply voltage), and those three 5K resistors are supposedly why it's called a 555. You can see it at the top of this diagram of a 555 timer, between the ground (pin 1) and the +Vcc supply (pin 8).

555 timer block diagram

(This image is from this 555 timer tutorial: https://www.electronics-tutorials.ws/waveforms/555_timer.html)

Resistors in Series vs in Parallel

When I needed a 2k Ohm resistor in the last section I put two 1k Ohm resistors in series. When you put resistors in series, their values add together, allowing you to additively create whatever resistance you need.

When I needed a 500 Ohm resistor and didn’t have one, I put two 1k Ohm resistors in parallel. This is because putting resistors in parallel gives electricity more than one path to get through, and thus has lower resistance than if there was only one of the resistors. The exact equation for the resistance of resistors in parallel is:

1/R = 1/R_1 + 1/R_2 + ... + 1/R_N

Where R_i is the value of a specific resistor.

This means that if you put two of the same valued resistors in parallel, the resistance will be cut in half. If you put three of them in parallel, the resistance will be cut to a third.

This formula comes up again elsewhere in electronics. For capacitors, when you put them in parallel, their capacitance adds. When you put them in series, their capacitance follows the parallel resistor equation. They are the same formulas, but with parallel and series reversed. Strange huh?

1/C = 1/C_1 + 1/C_2 + ... + 1/C_N

Where C_i is the value of a specific capacitor (in Farads).
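As a small illustration (my own sketch, not from the post), here is the shared reciprocal-sum rule in code, showing two 1k Ohm resistors coming out to 2k in series and 500 in parallel, and the reversed rules for capacitors:

// Illustrative sketch: the reciprocal-sum rule shared by parallel resistors and series capacitors.
#include <cstdio>
#include <vector>

float ReciprocalSum(const std::vector<float>& values)
{
    float sumInverse = 0.0f;
    for (float v : values)
        sumInverse += 1.0f / v;
    return 1.0f / sumInverse;
}

int main()
{
    // two 1k Ohm resistors: add in series, reciprocal-sum in parallel
    printf("resistors in series   = %0.0f ohms\n", 1000.0f + 1000.0f);                   // 2000
    printf("resistors in parallel = %0.0f ohms\n", ReciprocalSum({ 1000.0f, 1000.0f })); // 500

    // two 10 microfarad capacitors: the rules are reversed
    printf("capacitors in parallel = %0.0f uF\n", 10.0f + 10.0f);                        // 20
    printf("capacitors in series   = %0.0f uF\n", ReciprocalSum({ 10.0f, 10.0f }));      // 5
    return 0;
}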

Something else strange is that this is why thicker wire has less resistance too. There are more paths for electricity to travel through the thicker wire, compared to thinner wire, so resistance goes down.

Below are some images of two 1k Ohm resistors in series and in parallel, with the multimeter showing the total resistance value.

One resistor:

Two resistors in series:

Two resistors in parallel:

What Happens When Using a Voltage Divider?

Ok so let’s start with the voltage divider we set up before.

Now let’s say we actually use that 6 volts to power something. That something will have a resistance of 2k Ohms. Maybe it’s some kind of light bulb.

We can simplify this circuit though. The 2k Ohms of our load and the 2k Ohms of the voltage divider are in parallel, so we can use our formula for parallel resistance, or just remember that two resistors of equal value in parallel give half the resistance. So that means we could describe our circuit this way, as far as resistance is concerned:

The problem with that is that our voltage divider has changed. The resistors are equal now, which means that our 6 volts has dropped down to 4.5 volts!

If we decreased the resistance of what we were powering, the voltage would drop even further. Intuitively, imagine you had a short circuit, so there was zero resistance across the load: the electricity would completely bypass the 2k Ohm resistor in the voltage divider as if it weren't there, so there would be zero volts of difference between the top and bottom of that 2k Ohm resistor.

If we increased the resistance of what we were powering, we would raise the combined parallel resistance on the 2nd part of the voltage divider, but that combined resistance can never exceed the divider's own 2k Ohms. For instance, using a 1 mega ohm resistive load, the parallel resistance formula gives us a resistance of 1.996 k Ohms. So, if we had a high resistance load, we'd get nearly our full 6 volts, but never quite the full 6 volts. At the limit, if our load was disconnected, and thus had infinite resistance, we would get the full 6 volts.
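Here is a small sketch (again mine, with made-up function names) that computes the loaded divider output, reproducing the 4.5 volt and roughly 5.99 volt numbers above:

// Illustrative sketch: what a voltage divider actually outputs once a load is attached.
#include <cstdio>

float Parallel(float a, float b)
{
    return (a * b) / (a + b);
}

// R2 and the load are in parallel, then the usual divider formula applies.
float LoadedDividerOutput(float volts, float r1, float r2, float loadOhms)
{
    float bottom = Parallel(r2, loadOhms);
    return volts * bottom / (r1 + bottom);
}

int main()
{
    printf("%0.2f volts\n", LoadedDividerOutput(9.0f, 1000.0f, 2000.0f, 2000.0f));    // 4.50
    printf("%0.2f volts\n", LoadedDividerOutput(9.0f, 1000.0f, 2000.0f, 1000000.0f)); // 5.99
    return 0;
}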

If you know the resistance of the load you are plugging into the voltage divider, you can take it into account and choose a resistor for the voltage divider that gives you the desired parallel resistance amount and thus the right voltage. Some loads have variable resistances though, and then you have a problem and should look at other methods of changing the DC voltage level, such as a buck converter.

Some loads draw essentially no current though (they have a very high resistance), and for those, a voltage divider can come in really handy. Supplying a voltage to a transistor's base, an op amp's input, or an optocoupler's input, for instance, can make great use of these, because those inputs mostly just "read" the voltage signal without putting much extra load on it.

The lesson here is that whenever you plug things together, you might get strange drops in voltage because you've accidentally created a voltage divider. If your resistance is sufficiently higher than the internal resistance of whatever you've plugged into, you can ignore the voltage drop, but a higher resistance also decreases the amperage, so it may not always be desirable.

This effect even comes up in batteries (and other power sources), which can essentially be modeled as an ideal voltage source in series with a small resistance (like 10 ohms). If you put a low valued resistor across a battery, the voltage will drop because you are secretly part of a voltage divider involving the internal resistance of the battery (and in fact, that "internal resistor" can't take much power and will start heating up, which can be dangerous! So don't short circuit batteries!). Since a battery's resistance is so small, your resistance is likely to be much higher when using the battery to power something, so this isn't something you really have to worry about in normal situations.

Of course, all this talk only deals with DC and resistors. Things get more complex when you have capacitors, inductors or AC power.

Maximum Power (Watts)

So we saw that as R2’s resistance gets larger, the voltage across R2 becomes larger, and at infinite resistance, it gets all the voltage available.

We also know that the larger the resistance, the lower the amps in the circuit, so getting that voltage comes at a cost.

The watt is a unit of measurement of power, and power is volts multiplied by amps. It turns out that if you want R2 to receive the maximum power (watts), R2 should equal R1. Wikipedia has more about that here: https://en.wikipedia.org/wiki/Impedance_matching

Here are some graphs showing this: if resistor R1 is 1k Ohms, you get the highest number of watts in R2 when R2 is also 1k Ohms, even though the volts and amps individually just keep rising or falling.
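For the curious, here is the quick math behind that claim (my derivation, not from the post). The power dissipated in R2 is the voltage across it times the current through it:

P_{R2} = V_{R2} \cdot I = V \cdot \frac{R_2}{R_1+R_2} \cdot \frac{V}{R_1+R_2} = \frac{V^2 R_2}{(R_1+R_2)^2}

Taking the derivative with respect to R_2 and setting it to zero gives R_2 = R_1, which is where the power curve peaks.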

Calculating Resistance (and Voltage) of an Unknown Circuit

Since plugging your circuit into other things can make an implicit / unintentional voltage divider, you probably want to know how much resistance some other black box circuitry might have. Luckily you can figure this out using Ohm's law (see last post: Voltage, Amps, Resistance and LEDs (Ohm's Law)) and some simple algebra.

First, connect a resistor to the + and – and measure the amps in the circuit. If you use a resistor with too low a value, or too low a wattage rating, the resistor will get hot and could start glowing or burst into flames (resistors have a rating in watts, and the common ones for small electronics like those seen in this post can handle 1/4 of a watt). So basically be careful if doing this with high voltages – and in fact, if my blog is your primary source of knowledge, please don't mess with high voltage 🙂

So let’s say we connect a 1k Ohm resistor and read a value of 0.01 amps or 10 milliamps.

Ohm's law says:

I = V/R

where I is current, V is volts and R is resistance.

So we now have this formula:

0.01 = V / (1000 + R_1)

We have one equation with two unknowns, so we need a second equation to make it solvable: two equations and two unknowns. Let's say we take another amperage measurement, this time using a 500 Ohm resistor, and get 0.017 amps or 17 milliamps.

That gives us a second equation:

0.017 = V / (500 +  R_1)

We now have two equations with two unknowns!

We can solve the first equation for V and get:

V = 0.01 * (1000 + R_1)

From there we can plug V into the second equation to get:

0.017 = 0.01 * (1000 + R_1) / (500 + R_1)

Solving for R1, we get:

R_1 = (0.01 * 1000 - 0.017 * 500) / (0.017 - 0.01) =  214.28 \Omega

If you do the calculations, you get 214.28 ohms, which means the unknown circuit has that much resistance.

What’s nice is that you can also use this to get the total amount of voltage available to this circuit by plugging this resistance into the first equation that we solved for V:

V = 0.01 * (1000 + 214.28) = 12.14 \text{volts}

This was a toy example I made up, using 12 volts and 200 ohms of resistance, so our answer is pretty close. The inaccuracies came from rounding off the numbers, but you'll get the same kind of problems in real life from imperfect measurements and imperfect electronic components.

For convenience, here are the equations to calculate the resistance of an unknown circuit, without having to do the algebra each time.

R_1 = ( I_A * R_{2A} - I_B * R_{2B}) / (I_B-I_A)

Where R_1 is the resistance of the unknown circuit. R_{2A} is the first resistor value you connected and measured to get I_A amps. R_{2B} is the second resistor value you connected and measured to get I_B amps.

Once you have the R_1 value, you can plug it into this to get the voltage available to the circuit:

V = I_A * (R_{2A} + R_1)
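If you would rather let code do the algebra, here is a small sketch (hypothetical helper names, not from the post) of those two equations, checked against the 12 volt toy example above:

// Illustrative sketch: recover an unknown circuit's resistance and voltage
// from two (resistor, measured current) pairs.
#include <cstdio>

// R_1 = (I_A * R_2A - I_B * R_2B) / (I_B - I_A)
float UnknownResistance(float iA, float r2A, float iB, float r2B)
{
    return (iA * r2A - iB * r2B) / (iB - iA);
}

// V = I_A * (R_2A + R_1)
float UnknownVoltage(float iA, float r2A, float r1)
{
    return iA * (r2A + r1);
}

int main()
{
    float r1 = UnknownResistance(0.01f, 1000.0f, 0.017f, 500.0f);
    printf("unknown resistance = %0.2f ohms\n", r1);                                  // ~214.29
    printf("available voltage  = %0.2f volts\n", UnknownVoltage(0.01f, 1000.0f, r1)); // ~12.14
    return 0;
}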

Let's take these equations for a spin with a battery. I accidentally popped the fuse on my digital multimeter and can't use it to measure amps, so I'll use my analog multimeter.

First I'll measure the amps with a 1k Ohm resistor. The knob is set to the 10 milliamp range, so the bottom row of readings (labeled 0 to 10) is the one to read from. I drew some yellow to show you where to read from. I read 8.6 milliamps.

Next I'll put two 1k Ohm resistors in series to make 2k Ohms of resistance and measure the amps, getting what looks like 4.6 milliamps.

Ok so let’s plug our values into the equations!

R_1 = ( I_A * R_{2A} - I_B * R_{2B}) / (I_B-I_A)

R_1 = ( 0.0086 * 1000 - 0.0046 * 2000) / (0.0046-0.0086) = 150 \Omega

So it looks like this 9V battery has about 150 ohms of internal resistance. I've heard that as a battery is used up, its resistance goes up, so maybe this battery is nearing needing to be replaced, having such a large resistance.

Let’s calculate how many volts it has.

V = I_A * (R_{2A} + R_1)

V = 0.0086 * (1000 + 150) = 9.89 volts

So, the battery has 9.89 volts inside of it. Either they made the battery have higher than 9 volts inside of it, to account for internal resistance dropping the output voltage, or my 5$ analog multimeter is not very accurate and these are just ball park figures.

Closing

Thanks for reading and hopefully you found this interesting or useful.

Have any requests or ideas for other topics to write about? Drop me a message on twitter at @Atrix256.

Voltage, Amps, Resistance and LEDs (Ohm’s Law)

I’ve taken up learning electronics during the pandemic, and have enjoyed it quite a bit. I’ve been programming for 25+ years, so it’s nice to have something different to learn and work on that is still both technical and creative. It’s cool getting a deeper understanding of how the fundamental forces of nature work, as well as being able to MacGyver a hand crank powered flashlight from an old printer if needed (Check out a 40 second video of that here!). It’s also nice having something physical to show at the end of the day, although it does require consumable parts, so there are pros and cons vs making software.

Friends (Hi Wayne!) and YouTube have helped me learn a lot, but I found the subject pretty alien at first and wanted to try my hand at some explanations from a different POV. This post starts that journey by taking the first steps into DC electronics.

Ultra Basics

Electricity flows if there is a path for it to flow in and the flow is made up of electrons.

Electrons are negatively charged so travel from the negative side of a circuit to the positive side.

Conventional current flow is backwards from this though, and says that electricity flows from the positive side to the negative side. In this case, it’s not electrons flowing but “holes” flowing. Holes are a weird concept, but they are just a place that will accept an electron.

Here is an open circuit, which means that there is a gap. Since the circuit is not closed, electricity cannot flow. (made in https://www.circuitlab.com/editor/#)

If you close the circuit, like the below, electricity is able to flow.

The circle on the left is a power source with a + and – terminal. It’s labeled as a 1.5 volt double A battery.

Here is a diagram of a circuit with a switch that can be used to open or close the circuit. Being able to read and make circuit diagrams is really helpful when building things or trying to understand how circuits work.

Of note: the higher the voltage, the farther that electricity can jump across gaps. So, while at low voltage, a circuit may be open, turning up the voltage may make it closed when the electricity arcs across!

Ohm’s Law

(Image: ohms-law-illustrated)

IMAGE CREDIT: Eberhard Sengpiel

The most useful thing you can learn about DC electricity is Ohm’s law which mathematically explains the relationship between voltage, amperage and resistance. Ohm’s law is:

I = V/R

In the equation I stands for Intensity and means current aka amps, V stands for voltage and R stands for resistance.

If electricity was water, voltage would be the water pressure, amperage would be how much water was running through the pipe, and resistance would be a squeezing of the pipe, like in the image above.

Current is measured in amperes (amps) or the letter A. 500mA is 500 milliamps, or half of an amp, and 1.2A is 1.2 amps. Note: electricity is dangerous! It can take only a few hundred milliamps to be fatal, but it takes enough voltage to push that current through your skin.

Voltage is measured in volts or the letter V. If you see 9V on a battery, that means it’s a 9 volt battery, and is capable of providing 9 volts.

Resistance is measured in Ohms or the omega symbol \Omega. So if you see 5\Omega that means 5 Ohms of resistance. If you see 5k\Omega that means 5 kiloohms, which is 1000 times as much resistance. If you see 5M\Omega with a capital M, that means 5 megaohms, which is 1000 times as much resistance again.

Where Ohm’s law comes in handy is when you know two of these three values and you are trying to calculate the third one.

As written, the formula showed how to calculate amps when you know voltage and resistance, but you can use algebra to re-arrange it to a formula for any of the three:

I = V/R

R = V/I

V = I*R

This comes up quite often – if you know how much voltage a battery has, and you know how many amps you need, you can use this to calculate the value of the resistor to get the desired amps.

Diodes, LEDs and Resistors

LED stands for Light Emitting Diode. A diode is something which lets electricity only flow in one direction and it has a couple of common uses:

  • Protecting circuits from electricity flowing in the wrong direction.
  • Turning Alternating Current (AC) into Direct Current (DC) by rectifying it (preventing the negative part of the AC from getting through; really the same idea as the last bullet point).
  • Lots of cool tricks, like stabilizing uneven power levels by letting voltage over a specific value “spill over” out of the circuit.

Here is a pack of various diodes I bought from amazon for 10$. There are quite a few different types of diodes, which are useful for different situations.

Here are some diodes close up. The black one is a rectifier diode, a 1N4001, and the more colorful one is a switching diode, a 1N4148. Those part numbers are actually written on the diodes themselves but are a bit hard to see. You can use these numbers to look up the data sheet for the parts to understand how they work, what their properties are, how much voltage and amps they can handle, and often even see simple circuit diagrams on using them for common tasks. Data sheets are super useful and if doing electronics work, you will be googling quite a few of them! Here is the data sheet for the 1N4148, which I found by googling for "1N4148 data sheet" and clicking the first link. 1N4148 Data sheet.

Here are two circuit diagrams with diodes in them. The black triangle with the line on it is a diode. The arrow shows the direction that it allows conventional flow to travel. The line on the arrow corresponds to the bands on the right of the diodes in the image above, which is the negative side of the diode (cathode). The left circuit is a closed circuit and allows electricity to flow. That diode is forward biased. The circuit on the right has the diode reverse biased which does not allow electricity to flow.

LEDs can do many things regular diodes can do, since they are diodes, but they have the property that when electricity flows through them, they light up. Since they are diodes, and only let electricity flow in one direction, LEDs have a + side and a – side and you have to hook them up correctly in a circuit for them to light up. If you hook them up the wrong way, it doesn’t damage them, but they don’t light up and they don’t close the circuit for electricity to flow. The symbol for an LED is the diode symbol, but with arrows coming out of it.

Here is a pack of LEDs I have that came as part of a larger electronics kit. You can get a couple hundred LEDs in a variety of colors from amazon for about 10$. Some LEDs are in colored plastic cases, some are in clear cases. There are even LEDs that shine in infrared and ultraviolet. LEDs also come in different sizes. This pack has 3mm and 5mm LEDs.

Here is an up close look at a white LED. The longer leg is the positive side, which means you need to plug the positive side of the circuit into it if you want it to light up. The negative side has a shorter leg, and it also has a flat spot on the circular ring at the bottom, which can't really be seen in this picture.

All diodes have a voltage drop, which is a voltage amount consumed by the diode. If you are providing less than that amount of voltage, the diode will act as an open switch, and electricity won't flow through it. The specific voltage drop for diodes can be found in data sheets, but I've found it difficult to find data sheets for LEDs. Luckily I picked up a "Mega328" component tester from amazon for 15$. It lets you plug in a component, press the blue button, and then tells you information about the component. It's super handy! Here you can see the voltage drop of 2 different LEDs. The smaller red LED has a voltage drop of 1.88V while the larger green LED has a voltage drop of 2.5V. If you supply them with less than that amount of voltage, they will not light up!

So what would happen if we tried to connect the LEDs to the batteries below?

The large green LED has a 2.5V voltage drop, while the AAA battery only has 1.5V as you can see on the label. That means the LED doesn’t light up.

The smaller red LED has a 1.88V voltage drop and is connecting to a 9V battery, so it has enough voltage and should light up. Let's use Ohm's law to calculate how much current – in amps – is going through the LED.

I = V/R and in our case V is 9 and R is 0 because we have no resistance.

I = \frac{9}{0} = \infty

Oops we have infinite current! The LED is destroyed pretty quickly after you plug it in.

There isn't actually infinite current, because the metal wires connected to the LED have a very tiny amount of resistance, just like all wire, and the battery has a limit on how many amps it can give. So the current isn't infinite, but it is a very large number, limited by how many amps the 9V battery can actually deliver, and the LED really would be destroyed. You should basically always use a resistor with an LED to limit the current and keep it from being destroyed. Here is an interesting read about how to calculate the internal resistance of a battery, which will then tell you how many amps it can give you: Measuring Internal Resistance of Batteries.

When you have a circuit with this low of resistance, it’s considered a short circuit, and if the LED didn’t get destroyed, the battery would start getting hot and it could become a dangerous situation. This is also why short circuits themselves are bad news. They have a LOT of current running through them which can cause things to heat up, melt and catch fire.

3mm and 5mm LEDs typically want 20 milliamps maximum (20mA or 0.02A) to be at full brightness. If you give them less, they will be less bright but still function.

We can calculate then how much resistance they want to be maximally bright if we know the voltage of the power source we are using and the voltage drop off of the LED we are trying to power.

Let's take the larger green LED with a 2.5V voltage drop, and power it with a 9V battery, aiming to get 20mA.

First we subtract the voltage drop from the supply to see how much voltage we have to work with: 9V – 2.5V = 6.5V.

Next, we know we want 20mA and we have 6.5V, and we are just trying to solve for resistance so we use Ohm’s law: R = V/I.

R = 6.5V / 0.02A = 325\Omega

So, we need 325 ohms of resistance to get 20mA in our LED from a 9V battery. Here is a pack of resistors I got from amazon for 12$.

Resistors have funny colored bands on them which tell you their rating. You can find charts for decoding them all over the place, but again, the “Mega328” will tell you this too.

In fact, a multimeter will tell you as well. Multimeters aren't very expensive. Here's one I got from amazon for 35$ which has tons of features and works really nicely.

I don't have any 325 ohm resistors, but I do have 470 ohm resistors, so I'll just use one of those. That's about 14mA if you do the math, which is a bit lower than 20mA, but it still works just fine despite not being as bright as it could be. You can get different resistances by connecting resistors in parallel or series and doing some math, but this works for now. I used a mini breadboard (the green thing) to hook this circuit up. Every horizontal line of 5 holes is connected together electrically. It's a nice way to play with circuits without having to solder things together. By convention, red is used for the positive terminal and black or blue is used for the negative.
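If you do this calculation often, a tiny helper like the sketch below (my own, with hypothetical names) does the arithmetic. It reproduces the 325 ohm answer, and the roughly 14mA you get when substituting a 470 ohm resistor:

// Illustrative sketch of the LED resistor math from this section.
#include <cstdio>

// R = (supply volts - LED voltage drop) / desired amps
float CurrentLimitingResistor(float supplyVolts, float ledDropVolts, float desiredAmps)
{
    return (supplyVolts - ledDropVolts) / desiredAmps;
}

// I = (supply volts - LED voltage drop) / R
float ActualCurrent(float supplyVolts, float ledDropVolts, float resistorOhms)
{
    return (supplyVolts - ledDropVolts) / resistorOhms;
}

int main()
{
    printf("ideal resistor = %0.0f ohms\n", CurrentLimitingResistor(9.0f, 2.5f, 0.02f)); // 325
    printf("with 470 ohms  = %0.1f mA\n", ActualCurrent(9.0f, 2.5f, 470.0f) * 1000.0f);  // ~13.8
    return 0;
}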

By the way, quick fun fact. A 1.5V AA battery is considered dead when it has dropped down to 1.35V. At this point, it still has energy in it though! If you are clever with electronics, you could make circuitry to use this power from dead batteries to give you 1.5V or higher, and you could drain so called dead batteries even further.

LEDs Turning Light Into Power

Many things in electronics turn out to be reversible. Speakers work as poor microphones, and microphones work as poor speakers. Similarly, LEDs can work as poor solar cells and turn light into energy. Want to see? Here I hook my multimeter up to an LED, and have it set to read volts. It reads 48.7mV. Energy is flowing all around us from radio waves, etc, so it's picking up some of that.

When I put the LED in the beam of the flashlight, it jumps up to 1.644V. Pretty cool huh?

Did You Like This Post?

It’s a little different than what I usually write about, but hopefully you liked it. Careful though, this stuff escalates quickly. Before you know it you’ll be harvesting optocouplers and coils from old printers to make a rail gun.

Perlin Noise Experiments

I talk and write a lot about noise so people will sometimes ask me about Perlin noise and other types of noise used for procedural content generation. I’m not usually much help because the noise I focus on is more about sampling and stochastic rendering techniques.

I was recently ray marching some Perlin noise based fog though, and came across Eevee’s (https://twitter.com/eevee) great write up on Perlin noise here: https://eev.ee/blog/2016/05/29/perlin-noise/

While reading that, it caught my eye that clumping of the random numbers was a problem. "Of course!" I thought to myself, "White noise has clumping problems. I wonder how using blue noise instead would fare?" and decided to write this blog post, thinking also that low discrepancy sequences could be useful. These are the results of those experiments, plus some more basic Perlin noise experiments. TL;DR: nothing groundbreaking was found, but there may still be some things of interest here.

The simple C++ code that generated the images for this post, and the small python script to make DFTs is available at https://github.com/Atrix256/Perlin.

Smoothing

2D Perlin noise uses a grid of random 2D unit vectors that is smaller than the final image resolution. To shade a pixel, it gets the four corners of the grid cell containing the pixel, dot products the vector from each corner to the pixel with the random vector stored at that corner, and bilinearly interpolates those four scalar values to get the color of the pixel.

If you just do that, you get an image that looks like this (Image on left, discrete Fourier transform on right):

That obviously is no good, so just like Inigo Quilez does in his article (https://iquilezles.org/www/articles/texture/texture.htm), the fractional part of the pixel’s position on the grid is put through a smoothing function to round it out a bit. The original paper used smooth step (https://en.wikipedia.org/wiki/Smoothstep) which looks like this:

An improvement in a follow up paper is to use smoother step instead, which is a higher degree interpolating polynomial, which looks like this:
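Putting those pieces together, here is a minimal C++ sketch of single octave 2D Perlin noise with the smootherstep fade curve. This is my own illustration of the algorithm, not the code from the linked repo:

// A minimal 2D Perlin noise sketch (single octave), for illustration only.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

struct PerlinGrid
{
    int size;                  // the grid is size x size points, wrapping at the edges
    std::vector<float> angles; // one random gradient angle per grid point

    PerlinGrid(int size_, unsigned int seed) : size(size_), angles(size_ * size_)
    {
        std::mt19937 rng(seed);
        std::uniform_real_distribution<float> dist(0.0f, 6.28318f);
        for (float& a : angles)
            a = dist(rng);
    }

    // dot product of the gradient at grid point (ix, iy) with the offset from that point to (x, y)
    float Gradient(int ix, int iy, float x, float y) const
    {
        float angle = angles[(iy % size) * size + (ix % size)];
        return cosf(angle) * (x - float(ix)) + sinf(angle) * (y - float(iy));
    }

    // smootherstep, the quintic fade curve from the follow up paper
    static float Fade(float t)
    {
        return t * t * t * (t * (t * 6.0f - 15.0f) + 10.0f);
    }

    static float Lerp(float a, float b, float t) { return a + (b - a) * t; }

    // x and y are in grid cell units
    float Noise(float x, float y) const
    {
        int ix = int(floorf(x)), iy = int(floorf(y));
        float fx = Fade(x - float(ix)), fy = Fade(y - float(iy));
        float top    = Lerp(Gradient(ix, iy,     x, y), Gradient(ix + 1, iy,     x, y), fx);
        float bottom = Lerp(Gradient(ix, iy + 1, x, y), Gradient(ix + 1, iy + 1, x, y), fx);
        return Lerp(top, bottom, fy);
    }
};

int main()
{
    PerlinGrid grid(16, 1337);
    printf("%f\n", grid.Noise(3.25f, 7.5f));
    return 0;
}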

Different Sized Grids

This shows what it looks like to use different sized grids for the perlin noise. The first uses 2×2 grids, then 4×4, then 8×8, then 16×16, then 32×32 and lastly 64×64. It’s interesting that the 2×2 grid Perlin noise looks a bit like blue noise. If you look at the DFT it does a bit as well, but is missing the highest frequencies at the corners, and has quite a bit of low frequency noise.

White Noise

Here we use a cell size of 16×16 on 256×256 images, using 1, 2 and 3 octaves. Each octave uses the same (repeating) white noise vectors.

Here a different set of white noise vectors is used per octave, which doesn’t seem to change the quality much:

Blue Noise

Here a 16×16 blue noise texture is used to generate the angle of the 2D vectors for the grid, on the 256×256 image. A 64×64 blue noise texture and DFT is also shown to see things more clearly. The same blue noise texture is used for each octave. First is the blue noise texture and DFT, then the Perlin noise made with 1, 2 and 3 octaves.

The noise doesn't look that different visually when using blue noise instead of white, but the DFT has a bunch of dark circles repeated in it, which I believe is because the blue noise has a dark circle in the middle of its DFT, and we are seeing some kind of convolutional effect. In any case, the lack of clumping in blue noise doesn't seem to really change anything significantly.

Here we use a “different” blue noise texture for each layer. We actually just use a low discrepancy sequence (R2 http://extremelearning.com.au/unreasonable-effectiveness-of-quasirandom-sequences/) to find an offset to read for each octave. Using an LDS to offset reads into a blue noise texture makes for roughly maximally independent reads, which can act as independent blue noise for some usage cases (not 100% sure if that’s true here since there are different scales of the same texture involved, but meh).

Interleaved Gradient Noise

For the "low discrepancy sequence" route, we need a low discrepancy sequence where you plug in a 2D integer pixel index and get a scalar value out. I don't know that common thinking calls IGN (interleaved gradient noise) a low discrepancy sequence, or that something of this configuration could be considered an LDS, but I think of it as one because it has the property that every 3×3 block of values (even overlapping blocks!) contains roughly all of the values 0/9, 1/9, … 8/9.
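For reference, here is the IGN formula I'm talking about, with the usual constants from Jorge Jimenez's presentation (this little snippet is just for illustration):

#include <cmath>

// Interleaved gradient noise: returns a value in [0, 1) for an integer pixel coordinate.
float IGN(int pixelX, int pixelY)
{
    float x = float(pixelX);
    float y = float(pixelY);
    return fmodf(52.9829189f * fmodf(0.06711056f * x + 0.00583715f * y, 1.0f), 1.0f);
}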

Here is IGN used to get the angle to make the vectors for the perlin noise grid, using the same noise values for each octave.

Here, R2 is used once again to make “independent” noise values per octave.

An interesting looking result, but maybe not really useful. Maybe this just shows that you can plug different styles of noise into Perlin noise to get other looks in the results?

Bigger Renders

Here are larger renders of single octave white noise. First is a 16×16 grid, then a 64×64.

And here’s the same using blue noise – first the 16×16 blue noise texture used for the grid, then the 64×64 blue noise.

Mean Squared Error is Variance

It’s April and this is my first blog post of the year. 2020/2021 has been a hard time for me like it has been for so many other people. After being absolutely destroyed at the end of last year, I discovered I have issues with both anxiety and depression and am talking to a therapist working through the problems, essentially debugging my life and thought patterns to live a better life. The virus and the BS related to the last president pushed me to a breaking point that I just couldn’t brute force muscle through like I normally do. Much improved now though luckily!

So, onto the main topic…

When analyzing randomized things, I often find myself wanting to graph averages to show how well things converge, and also wanting to graph variance or standard deviation to show how much they swing above and below that average. Averages alone can hide that important information. Variance shows up as noise when rendering too, so low variance is a nice thing.

I’ve seen quite a few sampling papers only report variance, not averages, and I never really understood why. The other day someone casually mentioned that mean squared error is variance and it threw me for a loop.

After thinking about it a bit, I was convinced: mean squared error is in fact variance, and root mean squared error is standard deviation. Let me show you…

To calculate the variance of a stream of values, you keep track of:

  1. Average value
  2. Average squared value

Then, variance is just this:

Variance = AverageSquaredValue – AverageValue*AverageValue

And you can square root that to get the standard deviation.

(Which BTW, there is a nice and easy numerically stable way to keep a “running average” that you can read about here: https://blog.demofox.org/2016/08/23/incremental-averaging/)
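As a concrete example (a sketch of the idea, not code from any particular project), here is what tracking those two running averages and turning them into variance and standard deviation looks like:

// Illustrative sketch: variance and std dev from two incrementally updated averages.
#include <cmath>
#include <cstdio>

struct RunningVariance
{
    float averageValue = 0.0f;
    float averageSquaredValue = 0.0f;
    int count = 0;

    void AddSample(float value)
    {
        count++;
        // incremental averaging: move the old average towards the new value by 1/count
        averageValue += (value - averageValue) / float(count);
        averageSquaredValue += (value * value - averageSquaredValue) / float(count);
    }

    float Variance() const { return averageSquaredValue - averageValue * averageValue; }
    float StdDev() const { return sqrtf(Variance()); }
};

int main()
{
    RunningVariance rv;
    const float samples[] = { 1.0f, 2.0f, 3.0f, 4.0f };
    for (float value : samples)
        rv.AddSample(value);
    printf("variance = %f, std dev = %f\n", rv.Variance(), rv.StdDev()); // 1.25, ~1.118
    return 0;
}

If the values you feed it are errors from an unbiased process, averageValue heads towards zero and Variance() becomes just the mean of the squared errors, which is the point of this post.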

When we are talking about error, we know that the average value should be 0 if our process is unbiased, so we can modify the variance equation to be the below:

Variance = AverageSquaredValue

And since the value we are tracking is error, we can write it as:

Variance = AverageSquaredError

MSE is “mean squared error” where the word average above is the mean, so…

Variance = MeanSquaredError

And you can square root that to get the standard deviation of the error, which is also RMSE “Root Mean Squared Error”.

The nice thing about MSE being variance and RMSE being std dev is that if you are ok seeing squared error instead of regular error, you can have a single graph that communicates both error and variance in one.

I also find it interesting that squared error is used because that links it to “least squares” curve fitting (https://blog.demofox.org/2016/12/22/incremental-least-squares-curve-fitting/), which is pretty darn useful, and makes it feel a lot more ok to be looking at squared error instead of regular error. A benefit of using squared error is that it makes outliers a lot larger / more costly. This means that given the choice between one large error, or many little ones that equal the same amount of error, it will choose the many little ones instead. That means less noise in a render, and less variance.

This was a short post, but I have another one in mind I want to write next – and soon – that ought to be pretty interesting, combining my favorite noise for sampling (blue noise) and a commonly used noise for procedural content generation (Perlin noise).

Until then, stay safe!

Multiple Importance Sampling in 1D

This is a follow up to an article I wrote a few years ago on Monte Carlo integration and importance sampling in 1D: https://blog.demofox.org/2018/06/12/monte-carlo-integration-explanation-in-1d/

The simple, well commented code that generated all the data for this post can be found at: https://github.com/Atrix256/mis/

A challenge when doing Monte Carlo integration in rendering is that the function you are trying to integrate is often made up of other functions multiplied together. While you may know how to importance sample some of the parts individually, you ultimately have to choose which thing to importance sample, because you are generating random numbers according to whichever thing you choose.

In rendering, the three things usually being multiplied together are lighting, material and visibility (which makes shadows). Lighting and materials are things you can usually importance sample and are based on the type of light (like a spherical area light) and the material model (Like a PBR microfacet BRDF), while visibility is not usually able to be importance sampled because it is entirely due to the geometry in a scene as to whether a pixel can see a light or not.

If you importance sample based on lighting, you can get poor results when the material ended up being more important to the result. Likewise, if you importance sample based on material, you can get poor results when the lighting ended up being more important to the result.

Multiple importance sampling is a way to make it so that you don’t have to choose, and you can get the benefits of both. More generally, it lets you combine N different importance sampling techniques.

TL;DR

Before going into the explanation, here is how you actually get 1 MIS sample using the balance heuristic, when you have two importance sampling techniques:
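Here is a sketch of that in C++ (my own illustration of the pattern, not the exact code from the repo):

// One MIS sample using the balance heuristic, for two sampling techniques.
// Illustrative sketch, not the exact code from the repo.
float MISSample(float (*F)(float),
                float (*PDF1)(float), float (*InverseCDF1)(float),
                float (*PDF2)(float), float (*InverseCDF2)(float),
                float rng1, float rng2) // independent uniform random numbers in [0, 1)
{
    // draw one x from each technique
    float x1 = InverseCDF1(rng1);
    float x2 = InverseCDF2(rng2);

    // evaluate the function being integrated at each x
    float y1 = F(x1);
    float y2 = F(x2);

    // balance heuristic: each y is divided by the sum of all PDFs evaluated at its own x
    return y1 / (PDF1(x1) + PDF2(x1)) +
           y2 / (PDF1(x2) + PDF2(x2));
}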

F is the function being integrated. PDF1 / InverseCDF1 are for the first importance sampling technique. PDF2 / InverseCDF2 are for the second importance sampling technique. You do this in a loop N times, and take the average of those N estimates, to get your final estimate.

You can generalize to more techniques by just following the pattern. Each sampling technique generates its own x and y. Then, for each of those x values, you evaluate every technique's PDF. The estimate is the sum, over techniques, of each y value divided by the sum of all the PDFs evaluated at the corresponding x value.

Note that if part of the function F is expensive (like raytracing for visibility!) you don’t have to do that for each sample. You could get your estimate of lighting multiplied by material like in the above, and after combining them, you could then do your raytracing to multiply in the visibility term.

MIS Explained

You can get a single sample from a monte carlo estimator by randomly generating an x value and calculating the estimate as the function value at that x, divided by the PDF value of choosing that x.

\text{Estimate} = \frac{f(x)}{\text{PDF}(x)}

You may also remember that as the shape of the pdf (histogram) of the random numbers gets closer to the shape of the function you are trying to integrate, that you can get a closer estimate to the actual answer with fewer samples. This is called importance sampling.

Let’s say though that you want to integrate the function f multiplied by the function g and you are able to generate random numbers in the shape of f, and random numbers in the shape of g, but not random numbers in the shape of f multiplied by g.

You know that you can choose to importance sample based on f or g, but that the choice is better or worse situationally. Sometimes you want f, other times you want g.

The simplest way to combine these would be to just use them both for each sample and average them. You could also switch off so that even numbered samples importance sampled by f and odd numbered samples importance sampled by g. This is the same as giving each technique a weighting of 0.5.

We can do better though!

We can make an x value to importance sample based on f, and another x value to importance sample based on g, and then we can calculate the PDF values of each x for each PDF.

If we have good importance sampling PDFs, higher PDF values mean higher quality samples, while lower PDF values mean lower quality samples. We now have the means to give a weighting to a sample based on its quality, as shown below, where we calculate the weight for sample "A". Sample "B" would be done the same way.

\text{Weight}_A = \frac{\text{PDF}_A(x_A)}{\text{PDF}_A(x_A)+\text{PDF}_B(x_A)}

This is called the “balance heuristic”. There are other heuristics that you can use instead, which you can read about in Veach’s thesis (in the links section) and other MIS papers which have come out since then.

If we have a Monte Carlo estimate sample like this:

\frac{f(x_A)}{\text{PDF}_A(x_A)}

Some interesting cancelation happens if we multiply that by the weight.

\frac{f(x_A)}{\text{PDF}_A(x_A)} * \frac{\text{PDF}_A(x_A)}{\text{PDF}_A(x_A)+\text{PDF}_B(x_A)} = \frac{f(x_A)}{\text{PDF}_A(x_A)+\text{PDF}_B(x_A)}

That form is the same form seen in the code from the last section, where we also had a sample B that we added to it to get the final estimate.

You may be wondering why sample A and sample B are added together… shouldn’t they be averaged?

Well, if you look at the denominator in that last formula, two PDFs are added together. Each PDF integrates to 1, so that sum in the denominator acts like something that integrates to 2. That means the estimate is going to be roughly half as big as it should be. When you add the two samples together, they are going to be as large as they should be. All that has happened is that instead of adding them together and dividing by two to average them, we have divided them by two implicitly in advance, before adding them. We are still averaging the two samples. It isn't exactly averaging, since the PDFs will vary from sample to sample, but on the whole, it's still an unbiased combination of the two techniques, which is why we still get the correct answer.

If three PDFs were involved, the weighted samples would be one third the size they should be, and there would be three to add together.

If four PDFs were involved, the weighted samples would be one fourth the size they should be, and there would be four to add together.

It generalizes to any number of importance sampling techniques involved.

One Sample MIS

If you are a fan of stochastic rendering like me, you may be wondering if you really have to do both (all) of the samples, or if you can use the weighting to choose one stochastically and end up with the correct result for less work.

Yes, you can indeed do this and in Veach’s thesis he calls this the “One-Sample Model” in section 9.2.4.

In this case, what you do is calculate the weight for each sample, and then divide each of those weights by the sum of the weights to get a probability for taking that specific sample.

After you roll a random number and choose the single sample to contribute to the estimate, you need to divide that sample's weighted Monte Carlo estimate by the probability of having chosen it. You are left with something that uses multiple PDFs for importance sampling different parts of the function, but each sample evaluates the function F only once. Useful if F is costly to evaluate.
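Here is a sketch of that (my own illustration; the repo's actual code may differ a bit), with the division by the selection probability made explicit:

// Stochastic "one sample" MIS: only one evaluation of F per sample.
// Illustrative sketch, not the exact code from the repo.
float MISOneSample(float (*F)(float),
                   float (*PDF1)(float), float (*InverseCDF1)(float),
                   float (*PDF2)(float), float (*InverseCDF2)(float),
                   float rng1, float rng2, float rngSelect) // uniform random numbers in [0, 1)
{
    // draw one x from each technique and compute the balance heuristic weights
    float x1 = InverseCDF1(rng1);
    float x2 = InverseCDF2(rng2);
    float weight1 = PDF1(x1) / (PDF1(x1) + PDF2(x1));
    float weight2 = PDF2(x2) / (PDF1(x2) + PDF2(x2));

    // turn the weights into a probability of picking technique 1
    float weight1Chance = weight1 / (weight1 + weight2);

    // pick one technique, and divide by the probability of having picked it
    if (rngSelect < weight1Chance)
        return F(x1) * weight1 / (PDF1(x1) * weight1Chance);
    else
        return F(x2) * weight2 / (PDF2(x2) * (1.0f - weight1Chance));
}

Expanding weight1 and weight1Chance here is where the cancelation mentioned below comes from.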

If you expand out weight1, weight2 and weight1chance, you’ll find that some things cancel out and you are left with the below for actually calculating the estimate. I have to admit I don’t have a good intuitive explanation for why that works, but the algebra says it does, and it checks out experimentally. If you have an explanation, leave a comment!

Piecewise Importance Sampling

Multiple importance sampling is a method for combining any number of importance sampling techniques to sample a specific function.

Something interesting though is that not every PDF involved has to cover the entire function.

What i mean is that you could have a PDF which sampled only from the left half of the domain of a function, and another PDF which sampled only from the right half of the domain of a function.

What would happen is that the inverse CDF for the first technique would only generate x values on the left half of the function being integrated, and its PDF would give zero for any value on the right half of the function.

The second technique would do the opposite.

MIS would not care about this in the least. It would function as normal and let you importance sample a function piecewise, if you could make PDFs that fit the parts of a function well, but weren’t able to make a PDF that fit the entire function well.

Veach’s thesis goes into other things as well, such as being able to give different sample counts to different techniques. It’s definitely worth a read!

Experiment #1 – Importance Sampling & Warm Up

Quick reminder, the code that made the data for these experiments is at: https://github.com/Atrix256/mis/

First up we are going to integrate the function y=\sin(x)*\sin(x) from 0 to pi, doing 10,000 different tests, each test doing 5000 samples, and average the results. We are going to use regular Monte Carlo (mc) as well as importance sampled Monte Carlo (ismc), using the PDF y=\sin(x)/2. Below is the function we want to integrate, and the PDF that we are going to use to importance sample it.
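As an aside, importance sampling with y=\sin(x)/2 needs that PDF's inverse CDF to turn uniform random numbers into x values. A quick derivation (mine; the repo presumably does the equivalent):

\text{CDF}(x) = \int_0^x \frac{\sin(t)}{2} dt = \frac{1 - \cos(x)}{2}

\text{CDF}^{-1}(u) = \arccos(1 - 2u)

So you draw a uniform random u in [0,1] and take acos(1-2u) to get x values distributed like \sin(x)/2 over 0 to pi.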

We could show the absolute value of the error at each step (the error being averaged over all those tests) and get this. (data from out1.abse.csv)

That isn't super easy to read, other than seeing that importance sampling seems to be less erratic and reaches lower error more reliably. We can change it to be on a log/log plot, which helps show decay rates better (especially when things like low discrepancy sequences are involved, which we'll see later).

That’s an improvement, but there is a lot of noise, even after 10,000 tests. Monte Carlo is noisy by definition, so as you can see, sometimes it gets really low error, but then pops right back up in the next few samples. That erratic nature is not good and if you are doing integration per pixel, the variance will make the noise especially bad. In fact, variance is what we really care about. So long as the integration is converging to the right thing (is unbiased / has zero bias), variance will tell us how quickly it is converging on the right answer.

Here is a log/log variance graph. You can more easily see that the importance sampling is a clear win over the non importance sampled Monte Carlo Integration. (data from out1.var.csv)

Now that we see that yes, importance sampling is helpful, and we have our testing conventions worked out, let’s continue on to more interesting topics!

Experiment #2 – Multiple Importance Sampling

Next up, we are going to integrate the function y=\sin(x)*2x from 0 to pi. We are going to use regular Monte Carlo, but also importance sample using y=\sin(x)/2 again, and also y=x*\frac{2}{\pi^2}. We are also going to do multiple importance sampling using both of those PDFs in conjunction, and also do the “single sample method” of MIS. Here are the functions mentioned.
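
Here is a small sketch of how the multiple importance sampled estimate (mismc) can be computed for this function using the balance heuristic (an illustration, not the repo code). The inverse CDFs come from integrating the two PDFs: acos(1-2u) for sin(x)/2, and pi*sqrt(u) for 2x/pi^2.

#include <cmath>
#include <cstdio>
#include <random>

int main()
{
    const double c_pi = 3.14159265358979323846;
    std::mt19937 rng{ std::random_device{}() };
    std::uniform_real_distribution<double> dist(0.0, 1.0);

    auto F    = [=](double x) { return std::sin(x) * 2.0 * x; };
    auto PDF1 = [=](double x) { return std::sin(x) / 2.0; };
    auto PDF2 = [=](double x) { return x * 2.0 / (c_pi * c_pi); };

    const int c_numSamples = 5000;
    double estimate = 0.0;
    for (int i = 0; i < c_numSamples; ++i)
    {
        // one sample from each technique, via their inverse CDFs
        double x1 = std::acos(1.0 - 2.0 * dist(rng)); // drawn from sin(x)/2
        double x2 = c_pi * std::sqrt(dist(rng));      // drawn from 2x/pi^2

        // balance heuristic weights, then the usual estimator per technique
        double w1 = PDF1(x1) / (PDF1(x1) + PDF2(x1));
        double w2 = PDF2(x2) / (PDF1(x2) + PDF2(x2));
        estimate += w1 * F(x1) / PDF1(x1) + w2 * F(x2) / PDF2(x2);
    }
    estimate /= double(c_numSamples);

    // the actual answer is 2*pi
    printf("mismc = %f, actual = %f\n", estimate, 2.0 * c_pi);
}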

Here is the log/log variance graph.

Monte Carlo (mc, blue) is the obvious worst. Multiple importance sampling (mismc, green) is the obvious best. Second worst is importance sampling by the line function (ismc2, yellow). Second best is importance sampling by the sin based PDF (ismc1, red). The one sample method (mismcstoc, purple) seems to be basically the same as the red line. It evaluates the function half as many times as mismc, so it isn’t surprising that it does worse.

It is good to see that multiple importance sampling is worthwhile though, and does significantly better than either of the two importance sampling methods involved does by itself.

Experiment #3 – Piecewise Importance Sampling

Next we are going to do piecewise MIS. We are going to integrate y=\sin(3*x)*\sin(3*x)*\sin(x)*\sin(x) using three PDFs for importance sampling where each is just y=\sin(x)/2 shrunken on the x axis to be 1/3 the size and shifted over so that each PDF is responsible for one third of the function domain. The first PDF for example is y=\sin(3*x)*\frac{3}{2} from 0 to pi/3.
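
As a sketch of what one of those piecewise techniques can look like in code (my own illustration, with the zone index being an assumed parameter): each PDF is only nonzero in its third of 0 to pi, and its inverse CDF only generates x values in that third.

#include <cmath>

const double c_pi = 3.14159265358979323846;

// PDF for zone i (i = 0, 1 or 2): y = sin(3*(x - i*pi/3)) * 3/2 inside the
// zone [i*pi/3, (i+1)*pi/3], and zero everywhere else.
double PiecewisePDF(int i, double x)
{
    double lo = double(i) * c_pi / 3.0;
    double hi = double(i + 1) * c_pi / 3.0;
    if (x < lo || x > hi)
        return 0.0;
    return std::sin(3.0 * (x - lo)) * 3.0 / 2.0;
}

// Inverse CDF for zone i: maps a uniform u in [0,1] to an x inside that zone.
double PiecewiseInverseCDF(int i, double u)
{
    return double(i) * c_pi / 3.0 + std::acos(1.0 - 2.0 * u) / 3.0;
}

MIS then combines the three techniques exactly as before. Since the zones don’t overlap, only one PDF is ever nonzero at a sample’s location, so the balance heuristic weight for the technique that generated that sample works out to 1.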

Here is the function we are integrating, showing the 3 zones the PDFs cover:

Here is the first of the PDFs. The other two look the same but are shifted over on the x axis.

Here is the variance for regular Monte Carlo versus the piecewise importance sampling, showing that it is a significant improvement to do the piecewise IS here.

Experiment #4 – Low Discrepancy Sequences

Unsurprisingly, it turns out that low discrepancy sequences are useful when doing multiple importance sampling. It would be fun to look deeper at using LDS in MIS / IS in a future blog post, especially because things change in higher dimensions, but here are some interesting results in the meantime.

Here is the first experiment, which compared Monte Carlo (mc, blue) to importance sampling (ismc, yellow), now also using low discrepancy sequences for both.

For low discrepancy Monte Carlo (mclds, orange), instead of using white noise independent random numbers 0 to 1 to make my x values, I start the x value at a random number in 0 to 1 for the first sample x value, but then I add the golden ratio to it and use modulus to keep it between 0 and 1 for each subsequent sample. This is the “Golden Ratio Additive Recurrence Low Discrepancy Sequence”. That beats both Monte Carlo, and importance sampled Monte Carlo by a significant amount.
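
Here is a sketch of that sequence generation, assuming nothing beyond what is described above:

#include <cmath>
#include <random>
#include <vector>

// Golden ratio additive recurrence: the first value is a random start in [0,1),
// and each value after that adds the golden ratio, mod 1. (Adding the golden
// ratio conjugate 0.61803... gives the exact same sequence, since only the
// fractional part matters.)
std::vector<double> GoldenRatioLDS(size_t count, std::mt19937& rng)
{
    const double c_goldenRatio = (1.0 + std::sqrt(5.0)) / 2.0;
    std::uniform_real_distribution<double> dist(0.0, 1.0);

    std::vector<double> values(count);
    double value = dist(rng);
    for (size_t i = 0; i < count; ++i)
    {
        values[i] = value;
        value = std::fmod(value + c_goldenRatio, 1.0);
    }
    return values;
}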

For low discrepancy importance sampled Monte Carlo (ismclds, green), I did the same, but put that sequence through the inverse CDF to generate numbers from that PDF, using LDS as input. It’s worked well here in 1D, but mixing LDS and IS can be problematic in higher dimensions, due to the LDS being distorted by the importance sampling warping and losing its low discrepancy properties.

Here is the second experiment, which compared MC to IS to MIS, now including low discrepancy sequences:

Everything improved by using LDS, but interestingly, the order of best to worst changed.

Not using LDS, multiple importance sampling was the winner. Using LDS, MIS is still the winner. Since there are two streams of random numbers needed for the MIS (one for each importance sampling technique), I used a different low discrepancy sequence for each. For the first technique, I used the golden ratio sequence. For the second technique, I did the same setup, but used the square root of two instead of the golden ratio. The golden ratio is the best choice for this kind of thing, because it is the most irrational number, but the square root of two is a pretty good second choice.

Not using LDS, Monte Carlo was the worst performing, but using LDS, Monte Carlo is in the middle, and the first importance sampling technique does the worst. The second importance sampling technique is in the middle whether you use LDS or not though.

Here is the third experiment now with LDS, which compared Monte Carlo to a piecewise importance sampled function.

This MIS here needs 3 streams of random numbers, so for the LDS, I used the golden ratio sequence, the square root of 2 sequence, and a square root of 5 sequence. Once again, LDS helps convergence quite a bit in both cases!

I’m starting to run out of “known good irrational numbers” so I’m glad we are at the end of the LDS experiments. There are other types of low discrepancy sequences that don’t use irrational numbers, but then you start having to consider the LDS quality along with the results and all the permutations. If you want to go into a deep dive about irrational numbers, give this article of mine a read: https://blog.demofox.org/2020/07/26/irrational-numbers/

Before moving on, look at that last graph again. The variance that 5,000 white noise samples reach is matched by the piecewise importance sampling using only 10 low discrepancy samples. Without LDS though, even the MIS strategy took something like 800 samples to reach that level of variance.

In graphics, these samples could easily represent rays shot into the world for something like global illumination, soft shadows, or raytraced reflections.

It would be real easy to try the most naive Monte Carlo algorithm, find out that you need 5000 samples to converge and give up.

Facing this, you may bust out the MIS and try to do better, finding that you could cut the cost to about 1/6 of what it was, at 800 samples needed to converge. That’s still a ton of samples for real time rendering, so is still out of budget. It would be real easy to give up at this point as well.

If you take it one step further and figure out how to get a nice LDS into the MIS instead of white noise random numbers, you could find that you can decrease it even further, down to 1/80th of what MIS gave you, or 1/500th of the cost of the naive Monte Carlo.

10 samples is still quite a few if we are talking about per pixel raytracing, but that is in the realm of real time affordable.

Good sampling matters, and can help you do some pretty amazing things.

Experiment #5 – Blue Noise

Where low discrepancy sequences are deterministic number sequences that give you good coverage over a sampling domain, blue noise is a randomized (non deterministic) number sequence that does the same.

There is some nuance to LDS vs blue noise, and when one or the other should be used. The summary is that regular blue noise converges at the same rate as white noise (there are variants like projective blue noise which do better at convergence), but it starts with a lower error. Blue noise also has better noise perceptually, which is more easily filtered (it is high frequency noise only, instead of full spectrum noise). So, the rule in graphics is basically that if you can converge with LDS, do that, else use blue noise to hide the error. Blue noise also does better at keeping its desirable properties when put through transformation functions, such as importance sampling.

Unfortunately, blue noise is pretty expensive to calculate, especially with the algorithm I’m using for it, so the sample and testing counts are going to be decreased for these tests to 100 tests, using 500 samples each. Blue noise is best for low sample counts anyways, so decreasing the sample count makes for a more appropriate comparison.
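
As an aside, to give an idea of why blue noise costs more to generate than an LDS, here is a sketch of Mitchell’s best candidate algorithm in 1D, which is one common way of making blue noise sample points (not necessarily the algorithm used for these results). Each new sample is the candidate farthest from all existing samples, and the number of candidates grows with the number of samples already placed, so the cost grows quickly with sample count.

#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

// Mitchell's best candidate algorithm in 1D. Distances wrap around at 0 and 1.
std::vector<double> BlueNoise1D(size_t sampleCount, std::mt19937& rng)
{
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    std::vector<double> samples;
    samples.reserve(sampleCount);
    while (samples.size() < sampleCount)
    {
        size_t candidateCount = samples.size() + 1;
        double bestCandidate = 0.0;
        double bestScore = -1.0;
        for (size_t c = 0; c < candidateCount; ++c)
        {
            double candidate = dist(rng);
            // score the candidate by its distance to the closest existing sample
            double score = 1.0;
            for (double s : samples)
            {
                double d = std::abs(candidate - s);
                score = std::min(score, std::min(d, 1.0 - d));
            }
            if (score > bestScore)
            {
                bestScore = score;
                bestCandidate = candidate;
            }
        }
        samples.push_back(bestCandidate);
    }
    return samples;
}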

Here is the first experiment, which compared MC to ISMC. Now it has blue noise results, to go along with the LDS results.

The result shows that blue noise does better than white noise, but not as good as LDS.

Here is the second experiment, comparing MC to MIS, now with blue noise. You can see how again the blue noise quality is between white and LDS as far as variance is concerned.

Here is the third experiment, showing the effectiveness of the piecewise importance sampling, using MIS. Once again, blue noise has variance between white noise and LDS.

Links

Here are some other great links for learning about MIS, from different points of view and with different explanations.

https://www.breakin.se/mc-intro/index.html

https://64.github.io/multiple-importance-sampling/

Veach’s thesis that introduced MIS and goes into quite a few other options for MIS, as well as more rigorous proofs on variance bounds and similar https://graphics.stanford.edu/courses/cs348b-03/papers/veach-chapter9.pdf

Thanks for reading!

Frequency Domain Image Compression and Filtering

Over 4 years ago I wrote a short blog post on images in the frequency domain: https://blog.demofox.org/2016/07/28/fourier-transform-and-inverse-of-images/

It’s time to revisit the topic a bit and add some more things.

If you are curious about how the Fourier transform works, which can transform images or other data into the frequency domain, give this a read: https://blog.demofox.org/2016/08/11/understanding-the-discrete-fourier-transform/

The C++ code that goes with this blog post can be found at https://github.com/Atrix256/FrequencySpaceImages

Image Compression

When you transform an image into the frequency domain, you get a complex number (with a real and imaginary component) per pixel that you can use to get information about the frequencies (literal sine and cosine waves) that go into making the image. One piece of information is the “phase” or starting angle of that wave. You get the phase by using atan2(imaginary, real). The other piece of information is the “amplitude” of that wave, or how large the wave is in the image. The amplitude is the length of the 2d vector (real, imaginary).
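
In code, getting those two pieces of information out of a frequency bin looks something like this (a tiny sketch using std::complex, not the repo code):

#include <cmath>
#include <complex>

// Per frequency bin: amplitude is the length of the (real, imaginary) vector,
// and phase is the angle of that vector.
void AmplitudeAndPhase(const std::complex<double>& bin, double& amplitude, double& phase)
{
    amplitude = std::sqrt(bin.real() * bin.real() + bin.imag() * bin.imag()); // same as std::abs(bin)
    phase = std::atan2(bin.imag(), bin.real());                               // same as std::arg(bin)
}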

A quick and easy way to do image compression then, is to convert an image to frequency space, find the lowest amplitude frequencies and throw them away – literally zero out the complex numbers. If you throw enough of them away, it’ll take less data to describe the frequency content of the image than to describe its pixels, and you’ll have compressed the image.

The more aggressive you are at throwing away frequencies though, the more the image quality will degrade. This is “lossy” compression and is a simplified version of how jpg image compression works. Lossy compression is in contrast to lossless compression like you find in png files, which use something more like a .zip compression algorithm to perfectly encode all the source data.

In the code that goes with this post, the DoTestZeroing() function throws out the lowest 10% amplitude frequencies, then the lowest 20%, then 30% and so on up to 90%. At each stage, it writes all complex frequency values out into a binary file, which can then be compressed using .zip as a method for realizing the image compression. As the data gets more zeros, it gets more compressible.
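
Here is a sketch of the zeroing step on its own, assuming the frequency data is a flat array of std::complex values (an illustration, not the repo’s DoTestZeroing()):

#include <algorithm>
#include <complex>
#include <vector>

// Zero out the lowest "percent" (0 to 1) of the frequencies, by amplitude.
void ZeroLowestAmplitudes(std::vector<std::complex<double>>& freqs, double percent)
{
    if (freqs.empty())
        return;

    // sort a copy of the amplitudes to find the amplitude threshold
    std::vector<double> amplitudes(freqs.size());
    for (size_t i = 0; i < freqs.size(); ++i)
        amplitudes[i] = std::abs(freqs[i]);
    std::sort(amplitudes.begin(), amplitudes.end());

    size_t thresholdIndex = size_t(double(amplitudes.size()) * percent);
    if (thresholdIndex >= amplitudes.size())
        thresholdIndex = amplitudes.size() - 1;
    double threshold = amplitudes[thresholdIndex];

    // zero out every frequency below the threshold
    for (std::complex<double>& f : freqs)
        if (std::abs(f) < threshold)
            f = std::complex<double>(0.0, 0.0);
}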

The top row in the image below shows an original 512×1024 image, the DFT amplitude information, and the DFT phase information. The bottom row shows the same, but for an image which has had its lowest 90% amplitude frequencies thrown away. The DFT data is 8MB for both (uncompressed), and compresses to 7.7MB for the top picture, but only 847KB for the bottom picture. The inverse DFT was used to turn the modified frequency data on the bottom back into an image.

Here is another image which is 512×512 and has DFT which is 4MB uncompressed. The top image’s DFT data compresses to 3.83MB, while the bottom compresses to 438KB.

While fairly effective, this is also a pretty naive way of doing frequency based image compression!

More sophisticated methods use the “Discrete Cosine Transform” or DCT instead of the DFT because it tends to make more of the frequency magnitudes zero, consolidating the data into fewer important frequencies, which means it’s already smaller before you start throwing away frequencies. DCT and DFT also pretend that the images go on forever, instead of just stopping at the edge. DFT acts as if those images repeat in a tiled fashion, while DCT acts as if they are mirrored at each repeat, which can also be a nice property for image quality.

Other methods break an image up into blocks before doing frequency based compression. Also, you can use wavelets to compress images, or principal component analysis or singular value decomposition. You can also fit your image with “whatever” basis functions you want, using L1 norm regularization to promote the coefficients of your fit being zero, which makes the fit data sparser, just like DCT gives you compared to DFT.

Another thing you can do is use compressed sensing to skip a couple of steps: You take some randomized but roughly evenly spaced samples from the image (blue noise or LDS are going to be good options here), and then you can e.g. find Fourier basis coefficients (DFT!) that match the sparse/irregular data samples you took. This is like throwing out low amplitude frequencies, but without having to DFT the whole data set and then throw things out. It starts with sparse data and then fits it.

Bart Wronski has several write ups on his blog in this area, so give them a read if you are interested: https://bartwronski.com/2020/08/30/compressing-pbr-texture-sets-with-sparsity-and-dictionary-learning/

This is a great read showing how to fit data using L1 regularization and all the related information you might be interested in: https://www.analyticsvidhya.com/blog/2017/06/a-comprehensive-guide-for-linear-ridge-and-lasso-regression/

This video is a great overview of the random grab bag of other things I mentioned: https://www.youtube.com/watch?v=aHCyHbRIz44&feature=youtu.be

Image Filtering

In my previous post on this topic I showed how you could throw away frequencies that were farther than a certain distance from the center to low pass filter an image, aka blur it. I also showed how if you threw away frequencies closer than a certain distance, it would high pass filter an image, aka sharpen it.

That throwing away of frequency data based on distance is the same as multiplying the frequency data by a mask which has a 1.0 in some places and a 0.0 in others. You can generalize this to multiply frequencies by any number. In the below I restrict the multiplications to be between 0 and 1, but you could definitely go to larger numbers or even go to negative numbers if you wanted.
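
Here is a sketch of what that multiplication might look like for a low pass filter, assuming the spectrum has already been shifted so the zero frequency (DC) is at the center of the image, and using a made-up smoothstep falloff between an inner and an outer radius:

#include <algorithm>
#include <cmath>
#include <complex>
#include <vector>

// Low pass filter a (DC-in-center) spectrum of size width x height.
// Frequencies closer than radiusStart are kept, frequencies farther than
// radiusEnd are zeroed, with a smooth falloff in between. Assumes radiusEnd > radiusStart.
void LowPassFilter(std::vector<std::complex<double>>& freqs, int width, int height,
                   double radiusStart, double radiusEnd)
{
    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x)
        {
            double dx = double(x) - double(width) / 2.0;
            double dy = double(y) - double(height) / 2.0;
            double dist = std::sqrt(dx * dx + dy * dy);

            // smoothstep from 1 at radiusStart down to 0 at radiusEnd
            double t = std::clamp((dist - radiusStart) / (radiusEnd - radiusStart), 0.0, 1.0);
            double multiplier = 1.0 - (t * t * (3.0 - 2.0 * t));

            freqs[y * width + x] *= multiplier;
        }
    }
}

The complementary high pass filter is just 1.0 minus that multiplier, which matches the top row / bottom row relationship described below.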

The below shows the patterns that the images are multiplied by in this section. Top row left to right is a low pass filter, then a stronger low pass filter (gets rid of more high frequencies than the other) and lastly is a notch filter or “band stop” filter. The bottom row is the complement, such that you get the bottom by subtracting the image from white (1.0). Left to right, the bottom row is a high pass filter, then a weaker high pass filter (lets more low frequencies in) and then a band pass filter which only lets certain frequencies through.

First up is the “Loki and Alan” picture. Frequencies and actual picture values filtered from the pictures on the top are present in the pictures on the bottom and vice versa. In this way, blurring compared to sharpening (and edge detection) are two sides of the same coin. It just matters which part you throw away and which part you keep.

Here is what the frequency magnitudes look like. Note that each image has the magnitudes put through a log function, and also normalized so the max is 1.0. This is why, even though the high pass filters (and band pass) darken the middle, it doesn’t look like it – the renormalization obscures that fact a bit. The middle is brightest (largest amplitudes), which we also saw when throwing out the lowest amplitudes in the last section.

Here are the same filters applied to the scenery image. The top right image has some strange patterns in it if you look closely (click the image to view the full size in another tab).

Image Convolution

In the last section, we made “images” by using a distance function, to make values to multiply the frequencies by to filter out certain frequencies.

In this section, we are going to take two images, put them into frequency space, multiply them together, take them out of frequency space, and see what kind of results come out.

There is something called the “convolution theorem” which tells us that multiplication in the frequency domain is the same as convolution between the images. Convolution is an expensive operation, because you have to loop through all the pixels of one image, and at each pixel, loop through the pixels of the other image, and do some multiplications and additions. Convolution is so slow that it can actually be quicker to take the two images you want convolved into the frequency domain, multiply them together, and then take them out of frequency space to be images again.

Convolution is used in graphics for things like blurs, sharpening, or applying bokeh for depth of field, so speeding it up can be a big help! Convolution is also used in audio for things like reverberation which makes audio sound like it was played inside of a cave or a big cathedral.

Technical note: the “kernel” image needs to be centered at pixel (0,0), not the center of the image. Also, the kernel image should be normalized so that summing up all of its pixels adds up to 1.0. You also need to zero pad (add a black pixel border to) both the source image and kernel image to be the size of source+kernel+1 on the x and y axis before DFT’ing, so they are the same size and to avoid wrapping problems. After you are done multiplying and inverse DFT’ing, you can remove the black border again.
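
Here is a sketch of those steps, where DFT2D and InverseDFT2D are hypothetical stand-ins for whatever FFT you use (they are only declared, not defined), images are flat arrays of grayscale doubles, and the kernel centering and normalization details from the note above are left out for brevity:

#include <complex>
#include <vector>

// Hypothetical helpers standing in for your FFT of choice.
std::vector<std::complex<double>> DFT2D(const std::vector<double>& pixels, int width, int height);
std::vector<double> InverseDFT2D(const std::vector<std::complex<double>>& freqs, int width, int height);

// Convolve a source image with a kernel image by multiplying in frequency space.
// Both are zero padded to the same larger size first, to avoid wrapping problems.
std::vector<double> ConvolveViaDFT(const std::vector<double>& src, int srcW, int srcH,
                                   const std::vector<double>& kernel, int kerW, int kerH,
                                   int& outW, int& outH)
{
    // zero pad both images to the same size
    outW = srcW + kerW + 1;
    outH = srcH + kerH + 1;
    std::vector<double> srcPadded(outW * outH, 0.0);
    std::vector<double> kerPadded(outW * outH, 0.0);
    for (int y = 0; y < srcH; ++y)
        for (int x = 0; x < srcW; ++x)
            srcPadded[y * outW + x] = src[y * srcW + x];
    for (int y = 0; y < kerH; ++y)
        for (int x = 0; x < kerW; ++x)
            kerPadded[y * outW + x] = kernel[y * kerW + x];

    // to frequency space, multiply bin by bin, then back to an image
    std::vector<std::complex<double>> srcFreq = DFT2D(srcPadded, outW, outH);
    std::vector<std::complex<double>> kerFreq = DFT2D(kerPadded, outW, outH);
    for (size_t i = 0; i < srcFreq.size(); ++i)
        srcFreq[i] *= kerFreq[i];
    return InverseDFT2D(srcFreq, outW, outH);
}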

Here are the 4 images we are going to use as kernel images: A star, a plus, a circle, and a blob.

Here are the DFT magnitudes of those images.

Here is the “Loki and Alan” picture convolved with those kernel images.

You can see that the images somehow take on the qualities of the kernel… the star one is very angular, the plus one is very “plus like” and the circular one is very circular. Note how the blob acts a lot like a low pass filter! In frequency space, it does actually look like one, so that makes sense:

Here is the scenery picture convolved by the same shapes.

If you think the above looks weird when doing convolution on images, you should give a listen to convolution being used in audio. When used for reverb it sounds good, and sounds correct, but if you use it to convolve arbitrary audio samples together, you can get some really interesting and bizarre sounds! You can hear that here: https://blog.demofox.org/2015/03/23/diy-synth-convolution-reverb-1d-discrete-convolution-of-audio-samples/

The dark border around the image is an artifact from adding a black border around the images to make them the right size (zero padding). If you instead just make the convolution kernel image as large as the image you are convolving (and that is already a power of 2, since this FFT requires that), you’d get the below, which has part of the image “wrapping” across from the other side.

If you used the DCT (discrete cosine transform) instead, it would MIRROR the texture instead of wrapping it, so you’d get pixels more similar to what should be there most of the time, compared to DFT which wraps. Another way to solve this problem, if you are doing convolution in image space instead of frequency space, is to throw away any samples that go outside of the valid area of the images. In that case you want to sum up the weight of the samples you actually took, and divide the final convolution sum by that weight, to normalize it. That will make pixels near the border have higher weights than they should, but it can be a less jarring artifact than the black border, wrapping, or mirroring artifacts.

Truth be told, many of the operations in this article can be done in a handful of lines of python. I find a lot of value in implementing things myself though, as it helps me internalize the ideas to better understand when and how to use them, and how to avoid problems/mysteries that come up when things are used as black boxes. I feel the tide turning though, after a recent look at the sea of algorithms relating to SVD, PCA and finding eigenvectors. That is some crazy stuff, and way too much for a single person to deal with, while still trying to be competent in other topics 😛

When Life Gives You Lemons, Make Random Numbers

I saw this tweet go by on twitter and wondered – who of us hasn’t been in this situation, with things such as…

  • Novella comments on a blog post
  • stack overflow answers
  • “reviewer #3” response on a research paper
  • Yet another javascript framework
  • Or if it’s just review time and your boss needs to come up with some text to justify review scores that were pre-ordained by corporate politics, budget constraints and the popularity contest farce they try to pass off as meritocracy.

What all these things have in common is that after the fact, you find yourself in possession of completely useless text.

That is, completely useless until today.

John von Neumann gave us the insight we need. If you have a biased source of bits, look at them in pairs. If the two bits don’t match, you output the value of the first one as an unbiased bit; if they do match, you throw both away. We can do this with whatever garbage we find has dropped into our lap, and turn that steaming heap of dung into something actually useful.
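
Here is a tiny sketch of the idea, treating whatever text you have as a stream of bits (just an illustration; the actual implementation is linked below):

#include <string>
#include <vector>

// John von Neumann debiasing: look at bits in pairs. 01 outputs 0, 10 outputs 1,
// and matching pairs (00 or 11) are thrown away.
std::vector<bool> Debias(const std::vector<bool>& biasedBits)
{
    std::vector<bool> result;
    for (size_t i = 0; i + 1 < biasedBits.size(); i += 2)
    {
        if (biasedBits[i] != biasedBits[i + 1])
            result.push_back(biasedBits[i]);
    }
    return result;
}

// Turn text into a bit stream so it can be fed to Debias().
std::vector<bool> TextToBits(const std::string& text)
{
    std::vector<bool> bits;
    for (unsigned char c : text)
        for (int bit = 0; bit < 8; ++bit)
            bits.push_back(((c >> bit) & 1) != 0);
    return bits;
}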

I implemented this and you can find the source code on github at https://github.com/Atrix256/EntropySalvager

More info on the technique: https://en.m.wikipedia.org/wiki/Fair_coin#Fair_results_from_a_biased_coin

Here is some example output, using words from the great orator MacDonald Trump as input.

Remember… When life gives you lemons, make random numbers.

Small print: Yeah this is just a joke and the numbers coming out are only as random as the numbers going in. This technique is a way to turn biased uniform random numbers into unbiased uniform random numbers and works with “random” things like coin flips and dice rolls, but less so with things like text which will make patterns in output. You’d want to use Decorrelation to turn this from a joke into a real thing 🙂 https://en.wikipedia.org/wiki/Decorrelation

Irrational Numbers

An irrational number is a number that can’t be represented as a fraction using integers for the numerator and denominator.

I’m a big fan of irrational numbers, and one of the biggest reasons for that is that they are great at making low discrepancy sequences, which give amazing results when used in stochastic (randomized) algorithms, compared to regular random numbers. Low discrepancy sequences are cousins to blue noise because they both aim to keep samples well spread out, but the usage case is different, so it’s situational whether to use blue noise or an LDS. That means that luckily there is room enough in the world for both LDS and blue noise.

This post is a random grab bag of things relating to irrational numbers.

Pythagorean cultists supposedly murdered someone to keep irrational numbers a secret, so this is technically forbidden knowledge, I guess.

Making Continued Fractions – Pi and Milü

Continued fractions are a way of writing numbers that can be useful for helping analyze irrational numbers, and rational approximations to specific numbers – whether they are rational or irrational.

If a continued fraction is infinitely long, that means it represents an irrational number, and all irrational numbers have continued fractions that are infinitely long. If a continued fraction is not infinitely long, that means it’s a rational number.

Here is the beginning of the continued fraction for pi.

Another way of writing continued fractions is to get rid of all the redundant “1 divided by…” and just write the integers you see on the left. The continued fraction for pi above would look like this, which is a lot more compact:

[3;7,15,1,292,1,1,1,2,1,3,1]

Let’s talk about how you make a continued fraction by walking through how to make it for pi, or at least 3.14159265359 anyways, since pi goes forever.

First up you take the integer part and use that as the first digit. Subtracting that integer part out leaves you with a remainder:

[3]
remainder: 0.14159265359

We take 1/remainder to get 7.06251330592. The integer part of this number is going to be our next number. Subtracting the integer part out gives us our next remainder:

[3;7]
remainder: 0.06251330592

1/remainder is 15.9965944095, which makes our next integer and remainder into:

[3;7,15]
remainder: 0.9965944095

1/remainder is 1.00341722818, so now we are at…

[3;7,15,1]
remainder: 0.00341722818

1/remainder is 292.63483365 so now we are at:

[3;7,15,1,292]
remainder: 0.63483365

1/remainder is 1.57521580653 and we’ll stop at the next step:

[3;7,15,1,292, 1]
remainder: 0.57521580653

Wherever you have a large number in the continued fraction, it’s because you just did one divided by a small number. A larger integer means there was a smaller remainder at that step.

The smaller a remainder is, the better approximated the number is at that step of the continued fraction.

This means that when you see a large number in a continued fraction, that if you truncated the continued fraction right before that large number, you’d have a pretty good approximation of the actual number that the continued fraction represents.

The larger the number, the better the approximation.

Because of this, looking at the continued fraction for pi, below, you can see that it has a pretty large number (292) pretty early in the sequence.

[3;7,15,1,292,1,1,1,2,1,3,1]

This means that the following continued fraction approximates pi “pretty well”.

[3;7,15,1]

We’ll cover how to figure this out further down in the post, but that fraction is actually 355/113 and has a special name “Milü”, found by Chinese mathematician and astronomer, Zǔ Chōngzhī, born 429 AD. It is within 0.000009% of the value of pi.

More info from wikipedia: https://en.wikipedia.org/wiki/Mil%C3%BC

So, while pi is probably the most famous irrational number, it certainly isn’t the most irrational number that there is, as it is so easily approximated by dividing smallish integers.

Quick note if you write a program to convert a floating point number to a continued fraction: If you try to make a continued fraction for a number that a floating point can’t represent exactly – such as 4.1 – you will get a very tiny remainder which then makes a very large integer when you flip it over. In my case, when using doubles, it made such a large number that converting it to an integer overflowed the integer. One way to address this is to just consider any remainder smaller than some threshold as zero. Like maybe any remainder less than 0.00001 can be considered zero.
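
Here is a sketch of that conversion, including the small remainder threshold just mentioned (my own sketch, assumptions and all, not the code used for this post):

#include <cmath>
#include <vector>

// Convert a number to (the first maxTerms of) its continued fraction.
// Remainders below the threshold are treated as zero, to avoid the floating
// point problem described above.
std::vector<int> ToContinuedFraction(double value, int maxTerms, double threshold = 0.00001)
{
    std::vector<int> terms;
    for (int i = 0; i < maxTerms; ++i)
    {
        double integerPart = std::floor(value);
        terms.push_back(int(integerPart));
        double remainder = value - integerPart;
        if (remainder < threshold)
            break;
        value = 1.0 / remainder;
    }
    return terms;
}

// ToContinuedFraction(3.14159265359, 6) gives [3, 7, 15, 1, 292, 1]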

The Continued Fraction of Phi aka The Golden Ratio

So even though pi is not the most irrational number, phi (pronounced “fee”, aka the golden ratio) is. Yes, it is the most irrational number!

We’ll use the value 1.61803398875 and make a continued fraction.

[1]
remainder: 0.61803398875
1/remainder: 1.61803398875

[1;1]
remainder: 0.61803398875
1/remainder: 1.61803398875

[1;1,1]
remainder: 0.61803398875
1/remainder: 1.61803398875

Wait a second… do you see a pattern? The remainder is always the same value, and when we do 1/remainder we always get the golden ratio again. That means this continued fraction is 1’s all the way into infinity.

In the last section we saw how having a large number in the continued fraction meant that a number was well approximated at that point in the continued fraction.

The golden ratio has a 1 for every number in the continued fraction, which is the smallest value you can have. That means it’s as poorly approximated as a number can be, at every stage in the continued fraction.

This is what is meant by the golden ratio being the most irrational number. It’s the least well approximated by rational numbers (dividing one integer by another).

When I said I liked irrational numbers I was mainly talking about the golden ratio, but other highly irrational numbers, like the square root of 2, are also nice. Highly irrational numbers have the properties that I like – the properties that make them useful to use as low discrepancy sequences.

An interesting thing you may have noticed above is that when you divide 1 by the golden ratio, you get the golden ratio minus 1. That is:

1 / 1.61803398875 = 0.61803398875

If you replace the golden ratio with “x” you get this equation:

1/x = x-1

If you solve that equation, you will get the golden ratio out. It is the only number that has this property! Well actually, -0.61803398875 does too, and relates to the fact that there is a + and – root solution to that quadratic equation, but maybe that’s not that surprising. 1/-0.61803398875 is -1.61803398875. The behavior is the same, it’s just happening on the other side of zero.

Something else interesting about the golden ratio is that if you square it, it’s the same as adding 1.

1.61803398875*1.61803398875=2.61803398875

That gives you a formula:

x*x = x+1

If you solve that, you get the golden ratio again as the only positive solution (well, and -0.61803398875 again, just like before). Probably not too surprising if you look at the formulas though, as you can use simple algebra to change one formula into the other.

From Continued Fractions To Regular Fractions

If you had a continued fraction and wanted to turn it into a regular fraction (or real number), you could run the process in reverse.

However, doing that means you start at the right most digit of the continued fraction and work left til you get to the beginning. How do you do this with an infinitely long continued fraction, like an irrational number would have?

Luckily there is a way to start at the left and work right, so that you can get approximations to infinitely long continued fractions (irrational numbers).

Let’s do this with pi, with this much of the continued fraction:

[3;7,15,1,292,1,1]

What you do is make a table which has a row for the numerator and denominator. The first numerator is 1, and the first denominator is 0. The second numerator is the first number in the continued fraction sequence, and the second denominator is 1.

The formula for the rest of the numerators and denominators is…

  • numerator[index] = CF[index] * numerator[index-1] + numerator[index-2]
  • denominator[index] = CF[index] * denominator[index-1] + denominator[index-2]
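
In code, that left to right table calculation is just a few lines. Here is a sketch (feeding it the [3;7,15,1] prefix of pi from earlier gives Milü, 355/113):

#include <cstdio>
#include <vector>

// Turn a continued fraction into a regular fraction, working left to right
// using the recurrence above. Assumes cf has at least one term.
void ContinuedFractionToFraction(const std::vector<int>& cf, long long& numerator, long long& denominator)
{
    long long numPrev = 1, denPrev = 0;  // the "first" column:  1 / 0
    numerator = cf[0];                   // the "second" column: cf[0] / 1
    denominator = 1;
    for (size_t i = 1; i < cf.size(); ++i)
    {
        long long num = cf[i] * numerator + numPrev;
        long long den = cf[i] * denominator + denPrev;
        numPrev = numerator;
        denPrev = denominator;
        numerator = num;
        denominator = den;
    }
}

int main()
{
    long long num, den;
    ContinuedFractionToFraction({ 3, 7, 15, 1 }, num, den);
    printf("%lld / %lld\n", num, den); // prints 355 / 113
}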

Doing that from left to right, you get this:

Below we do that with pi, the golden ratio, sqrt(2) and an irrational number I came up with that is not very irrational, and is well approximated by 1.01.

A quick fun tangent is that you might notice that for golden ratio, both the numerators and denominators are the Fibonacci numbers. This is the link between the golden ratio and Fibonacci numbers.

Below shows the percentage error of each number, as more terms in the continued fraction are added.

Golden ratio error decreases the slowest which shows that the golden ratio is least well approximated by fractions. The made up irrational number that is approximately 1.01 has error decreasing the fastest because it is the least irrational of these numbers. Pi shows itself as having fairly low irrationality, and the square root of 2 is fairly irrational.

Here is the error on a log y axis to better show the difference in error.

It’s worth noting that 1/goldenRatio aka 0.61803398875 is every bit as irrational as the golden ratio itself is. The reason is that taking the reciprocal just flips the numerator and denominator of a fraction that doesn’t exist – flipping it over doesn’t make it any more representable as a fraction.

When I’m using the golden ratio for low discrepancy sequences, I usually use that 1/goldenRatio value, also known as “the golden ratio conjugate”, because the smaller value means it’s going to have fewer numerical issues and more precision when working with small values (like between 0 and 1).

Also, there are an infinite amount of numbers just as irrational as the golden ratio. You can calculate them by calculating:

a' = (p*a+q) / (r*a+s)

where p,q,r,s are integers and the absolute value of p*s-q*r is 1.

That is the Moebius transform.

If you are wondering what the least irrational number is, it looks like there are multiple (infinitely many), and they are just numbers that are very, very, very well approximated by rationals. They are called Liouville numbers and are transcendental, meaning they aren’t the root of any polynomial with integer coefficients. Proving the existence of these numbers also proved the existence of transcendental numbers themselves. (https://en.wikipedia.org/wiki/Liouville_number)

There is also something called an irrationality measurement, but I haven’t found it that useful. It doesn’t seem to be something you can calculate with a program and compare two numbers to see which is more irrational. https://mathworld.wolfram.com/IrrationalityMeasure.html

Metallic Means

If all 1’s in a continued fraction make the golden ratio, what would all 2s or all 3s make? That makes the silver and bronze ratio respectively and all three of these are the first three of the “Metallic Means”. You can see the details of them below, along with pi. Wikipedia talks more about them too at https://en.wikipedia.org/wiki/Metallic_mean

You might also be interested in reading about the “Plastic Number” which is a different way of trying to approach the goodness of the golden ratio to get another decently irrational number. https://en.wikipedia.org/wiki/Plastic_number

Where the golden ratio has a formula x^2 = x+1, the plastic number has a formula x^3 = x+1.

Coprime Numbers (Setup For Irrational LDS)

Coprime numbers are 2 or more integers that share only 1 as a factor.

For instance, 7 and 11 are coprime numbers. They also happen to be prime numbers, but coprime numbers don’t have to be prime. 8 and 15 are also coprime numbers although neither number is prime. You can even lump all four together and say that 7, 8, 11, 15 are coprime numbers. Combining lists of coprime numbers doesn’t usually result in a list of numbers that are still coprime. It just worked out here because the primes we added (7 and 11) don’t divide any of the other numbers in the list; adding a prime keeps a list coprime only as long as that prime isn’t a factor of any number already in the list.

You might wonder why you should care about coprimes.

My favorite use of coprimes is when I need cheap shuffles of numbers.

For instance, let’s say you had 8 numbers and you wanted them to be shuffled into a somewhat randomized order, but it didn’t need to be a very high quality shuffle.

What you do is first pick any number that is coprime to 8… we’ll say 5. You then count an index from 1 to 8 and do this…

Out = (index * 5) % 8

Taking index from 1 to 8 gives you the following output:

5, 2, 7, 4, 1, 6, 3, 0

If you look at it a bit, you can probably find some patterns, but all the numbers are there and it is fairly mixed up wouldn’t you say? A casual observer probably isn’t going to notice any patterns if you used this in a game setting for dropping treasure or something.

Not all coprime choices give the same quality results though. Here is what happens if we use 7 as the coprime number instead of 5:

Out = (index * 7) % 8

7, 6, 5, 4, 3, 2, 1, 0

It just reversed the list! That isn’t very mixed up.

Also, let’s look at what happens if you don’t use coprimes. We’ll use 2 instead of 7.

Out = (index * 2) % 8

2, 4, 6, 0, 2, 4, 6, 0

If the numbers aren’t coprime, it won’t generate all the numbers in the list. Another way of looking at this is that the cycle length of the number sequence is maximally long when the numbers are coprime, but is shorter (and the sequence repeats) when the numbers are not coprime.

Note that you don’t have to use index values 1 through 8. You could use 0 through 7 instead or any other contiguous 8 values – like 127 through 134. The numbers in this case are all mod 8, so 127 through 134 is equivalent to 7 through 14.
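
Here is the whole cheap shuffle as a tiny program, using the example numbers from above:

#include <cstdio>

int main()
{
    const int count = 8;    // how many items we are shuffling
    const int coprime = 5;  // any number coprime to count

    // prints 5, 2, 7, 4, 1, 6, 3, 0
    for (int index = 1; index <= count; ++index)
        printf("%d ", (index * coprime) % count);
    printf("\n");
}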

Irrational LDS

An additive recurrence sequence is basically the same concept as the last section, but taken to the continuous world instead of sticking to discrete integers. You use the formula below, where A is any real number:

Out = (index * A) % 1

Just like before with coprime numbers, the value you multiply index by can make you have better or worse results.

If you pick 1/2 (0.5) for A, the sequence you’ll get out is this:
0.5, 0.0, 0.5, 0.0, 0.5, …

If you pick 1/4 (0.25) for A, you’ll get this:
0.25, 0.5, 0.75, 0.0, 0.25, 0.5, 0.75, 0.0, …

1/3 will get you this:
0.333, 0.666, 0.0, 0.333, 0.666, 0.0, …

2/3 will get you this:
0.666, 0.333, 0.0, 0.666, 0.333, 0.0, …

You can also do something more complex, like 5/8:
0.625, 0.25, 0.875, 0.5, 0.125, 0.75, 0.375, 0.0, …
(it then repeats)

If we picked 2/4, we’d end up with 0.5, just like we did when we used 1/2. When thinking about fractions to use in this setup, you should reduce the fraction, because unreduced fractions give the same value as reduced fractions. Ready for the kicker? The numerator and denominator in a reduced fraction will always be coprime integers – by definition of being a reduced fraction! Also, the denominator of the reduced fraction will tell you the length of the sequence it generates.

I snuck a little trick in the examples, did you catch it?

Let’s look at that sequence using 5/8 again, but let’s write the results as fractions.

5/8, 2/8, 7/8, 4/8, 1/8, 6/8, 3/8, 0/8, …

Now let’s look back at the sequence we made in the last section, where we did (index*5)%8

5, 2, 7, 4, 1, 6, 3, 0

The numerator in the fraction results is the same as the integer one using coprimes. You could multiply the fractional results by 8 and get the exact same numbers. In fact take the formula we are using in this section:

Out = (index * 5/8) % 1

And multiply it by 8 to get the formula used in the last section!

Out = (index * 5) % 8

So, while we seemed to have moved away from integers into a continuous domain, we haven’t really… we are still in the land of coprime number sequences. We will always be in the land of coprime number sequences as long as we are using rational numbers, because there will always be a coprime numerator and denominator pair in the simplified fraction that defines the length of the sequence that comes out when we feed index numbers in.

So what if we ditch rational numbers and go with an irrational number? Well, using an irrational number means that the sequence will never repeat. While that in itself sounds awesome, let’s imagine we are using an irrational number that is well approximated by 1/4. That means that the sequence would never repeat, but it would NEARLY repeat every 4 values.

Let’s show the sequence using the golden ratio, where index goes from 1 to 8.
0.618034, 0.236068, 0.854102, 0.472136, 0.090170, 0.708204, 0.326238, 0.944272

Here is the same for a not very irrational number that we are going to approximate with 0.2490001.
0.249000, 0.498000, 0.747000, 0.996000, 0.245000, 0.494001, 0.743001, 0.992001

You can see that the sequence with the ~0.25 irrational isn’t EXACTLY repeating, but it sure is close to repeating. The golden ratio is making much more distinct numbers in comparison.

So basically, if you use an irrational number that can be closely approximated by a rational number, it’ll behave a lot like that rational number and not be very good.

This is what is great about the golden ratio and other highly irrational numbers. There is no rational number that well approximates them, so not only do they not have any actual repeats, they also don’t have any NEAR repeats.

Using highly irrational numbers in this additive recurrence formula, you get a low discrepancy sequence.

Irrational LDS Offset

Every time you use an irrational LDS, you are going to get the same sequence, which can sometimes be a problem.

You can get around this by starting at a different index every time, or by adding a different starting value to the sequence.

Option #1: Randomized Start Index:

Out = ((index+StartIndex) * Irrational) % 1

Option #2: Randomized Starting Value:

Out = (Index * Irrational + StartValue) % 1

How you get this starting index or starting value is up to you. Honestly, I use white noise random numbers and it works fine, so I’ll keep doing that until I analyze it and find out how much better using an LDS is.

Anyways, if you recall from earlier in the post, this LDS won’t repeat so long as you use an irrational number. That means that the infinite number of integer index values going in means you get an infinite number of unique real numbers coming out. That means that the LDS can output any number possible just by changing the index you are at. So, offsetting by index is equivalent to offsetting by a start value, since there is some index that should give the same output as any specific starting value.

I thought that was true, but it turns out it isn’t, because integers are countably infinite while irrational numbers are a larger infinity.

It turns out that this is nearly true though, and that there always exists an index that will give you a value arbitrarily close to any specific starting value.

Although, this is only mathematically true. Computers have finite storage for integers and floats.

Anyways, it’s complicated, but either offsetting index or starting value should work just fine for getting a different sequence. I go the starting value way myself.

More info here regarding that mathematical topic: https://twitter.com/Atrix256/status/1285646721561899010?s=20

BTW even though I show the formula for irrational LDS as the index multiplying the irrational number, that will have numerical precision problems as the index gets larger. You are way better off just adding the irrational and doing modulus when you want the value for the next index. The difference is huge, surprisingly!
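
Here is what both versions look like; they compute the same sequence mathematically, but the second one holds up much better numerically as the index grows:

#include <cmath>

const double c_goldenRatioConjugate = 0.61803398875;

// Version 1: multiply by the index. As the index grows, (index * irrational)
// becomes a large number and the fractional part loses precision.
double LDS_Multiply(int index, double irrational, double startValue)
{
    return std::fmod(startValue + double(index) * irrational, 1.0);
}

// Version 2: keep a running value and just add the irrational each step.
// The value stays in [0,1) so precision doesn't degrade the same way.
struct LDS_Additive
{
    double value = 0.0;
    double irrational = c_goldenRatioConjugate;

    double Next()
    {
        value = std::fmod(value + irrational, 1.0);
        return value;
    }
};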

Irrational LDS & Arbitrary Start Progressive Sequences

There is the concept of a progressive low discrepancy sequence, which means that any subset of the sequence, starting at index 0, will have the same properties as the entire sequence.

For instance, if you had a progressive blue noise sequence with 100 samples in it, using only the first 10 samples would be blue noise, or using the first 47 samples, or the first 73, or all 100.

If you had a non progressive blue noise sequence, the sample sequence wouldn’t be blue noise until all 100 samples have been used (or as you got close to that number).

Irrational low discrepancy sequences have a nice property that they are progressive, and they are progressive starting at ANY index, not just from index 0. This is because offsets in index are roughly equivalent to offsets in starting value, so starting in the middle of the sequence is the same as if you just started with that sample’s value as the starting value.

The golden ratio in particular is great for this. Each new sample from the golden ratio LDS falls in the biggest hole left by samples so far, which is why it’s so great for numerical integration (and similar). It has great coverage over the sampling domain. The crazy thing though, is that you can start at any index in the LDS and this is still true from that starting index, while it’s still also true if starting from index 0.

Check out the gif below to see what I’m talking about. There are 16 samples total. All 16 are shown on the left, but only the last 8 are shown on the right. I show the samples on a number line since they are scalar values between 0 and 1, but I also show them on a circle to see that what I said about falling in the largest gap is true even with wrap around.

Here is the same setup for an animation using the square root of 2 instead.

Lastly, here is the same with pi. Pi is well approximated by 22/7 and since we are walking around a circle, the integer parts are irrelevant. Since 22/7 is 3 1/7, that makes pi act nearly like 1/7 and you can verify that it is indeed nearly repeating every 7 samples, and acting very much like 1/7. You can see how pi has samples that clump and it doesn’t give very good coverage over the sample domain for the number of samples it uses (compare the coverage to golden ratio!). This shows how a less irrational number isn’t as good at being a low discrepancy sequence.

If you have heard that plants grow leaves in a golden ratio pattern, now is probably a good time to explain why. Here is the last frame of the golden ratio gif, when there are 16 samples.

One of a plant’s goals in life is to get as much sun as it possibly can. Imagine that the sun is directly overhead and you are looking down at the plant from above. Toward this goal of maximizing sunlight, whenever light is allowed to hit the ground in the radius of the circle of the plant’s shadow, that is a lost opportunity. Sunlight is being wasted by hitting the dirt.

There may also be temporary or semi-permanent shadows that exist or come into existence at any point in the plant’s life, and if it put all its leaves such that they clump in one direction, all those leaves could end up in shadow and the plant could go hungry.

A solution to these problems is to grow leaves equally distributed around the plant. This way there are no overlaps when viewing from above, and the risk of shadowing is minimized by being well spread out over all directions.

This would be the end of it if a plant was born with all of its leaves and it knew how many leaves it was going to need to grow – and if no leaves were going to get eaten.

Plants are dynamic life forms though, in a dynamic environment, so the leaves need to be as evenly spaced radially as possible when it’s a seedling with just a couple leaves, but also when it is much older with many leaves, possibly with some of them eaten by animals, and all that jazz.

A good way to do this would be to use the golden ratio to figure out where to grow the next leaf.

Doing this, it will have good coverage over the entire circle of possible growth directions; it’s ok if old leaves die off and go away, since the remaining ones are still well distributed; and leaves can grow larger because overlap is minimized.

Check out the gif again and think of each line as a leaf growing around a central stem, as the plant gets taller. The circle on the right shows what happens as the first 8 leaves get old, die and fall off, being replaced by 8 new leaves.

If this talk about plants was interesting, give a read here about “Phyllotaxis”
https://en.wikipedia.org/wiki/Phyllotaxis

A Connection Between Primes and Irrationals

So when we have this function where Out, A, B and index are integers:

Out = (index * A) % B

We will get a non repeating sequence of numbers B long from 0 to B-1, if A and B are coprime. Since prime numbers are coprime with all other numbers, we could also say that if A is prime, the sequence will have those properties. It isn’t required to be prime (just coprime) but let’s make the statement for primes right now.

If we move that to the real number world, where index is an integer, but A and out are real numbers:

Out = (index * A) % 1

If A is irrational, we will get a non repeating sequence of numbers infinitely long. In that way, irrational numbers are like primes in that when used, they make a non repeating sequence that is as long as possible.

More so, as A is more highly irrational, you get fewer “near repeats” as well, until getting to the golden ratio where you get a provably minimum amount of near repeats.

So, primes and irrationals (especially the golden ratio), are linked in that they make maximal length non repeating sequences when used over a field.

Co-Irrational Numbers

We talked about coprime numbers and how they would make a sequence that “went the full length”, that is, they made a sequence that didn’t repeat over the whole sequence. If you multiplied index by 5 and did a modulus by 8, you would get a sequence that was 8 items long and then would repeat.

We also talked about irrational numbers and how they would make sequences that NEVER repeated, and talked about how highly irrational numbers could make sequences that also didn’t have NEAR repeats.

This is great if you want one dimensional sequences, but what if you want two dimensional sequences or higher?

One way to do higher dimensional low discrepancy sequences is to just use a different irrational number per axis.

That is great but now you find yourself having to come up with a bunch of high quality irrational numbers since you need one for each axis. Sadly, there is only a single “golden ratio” really (Moebius transform and conjugate don’t give different behavior). If you used the golden ratio on each axis, you’d have correlation between the axes and it wouldn’t be great. Basically, each axis would have the same pattern, even if you start at a different index, or starting value.

There’s a concept I’m sure someone has thought of before, maybe going by a different name, that I call “co-irrationality”.

Coprime numbers have no shared factors other than 1, and thus “no shared cycle lengths” which make their output sequences be maximal lengths.

Similarly, co-irrational numbers should minimize how close they get to each other with their rational approximations. Where a sequence made with non coprime numbers will repeat, a sequence made with numbers that aren’t very co-irrational will nearly repeat, which is nearly just as bad.

Just like how a coprime number doesn’t have to be prime, a co-irrational number doesn’t have to be very good as an irrational number in itself, and in fact, a rational number and irrational number could together be “co-irrational” to each other (like the golden ratio and 2). Also, you could have more than 2 numbers in a set that are co-irrational to each other.

The question of whether two or more numbers are coprime results in a black and white answer: a yes or a no.

The question of whether two or more numbers are co-irrational results in a grey answer. Numbers can range from not being co-irrational at all (if one is a multiple of the other), to being various levels of co-irrational, up to maximally co-irrational (again, the example of the golden ratio and 2).

So how can you tell if two numbers are co-irrational, or rather, how co-irrational they are?

Basically, divide one number by the other (it doesn’t matter which is the numerator and which is the denominator) and then look at the irrationality of the resulting number. If the result is a rational number, the numbers are not co-irrational. If the result is an irrational number that is well approximated by dividing relatively small integers, the numbers are not very co-irrational. If the result is the golden ratio, the numbers are maximally co-irrational.
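
Here is one possible way to turn that test into code, as a rough heuristic rather than anything rigorous. It reuses the ToContinuedFraction() sketch from the continued fraction section, and scores a pair of numbers by the largest term in the continued fraction of their ratio (smaller means more co-irrational).

#include <algorithm>
#include <climits>
#include <vector>

// Declared in the continued fraction sketch earlier in the post.
std::vector<int> ToContinuedFraction(double value, int maxTerms, double threshold);

// A rough co-irrationality score for two numbers: take the continued fraction
// of their ratio and find its largest term, ignoring the integer part. Large
// terms mean the ratio is well approximated by a fraction, so smaller is more
// co-irrational. A golden ratio result scores 1, the best possible.
int CoIrrationalityScore(double a, double b, int termsToCheck = 10)
{
    std::vector<int> cf = ToContinuedFraction(a / b, termsToCheck, 0.00001);
    if (cf.size() < size_t(termsToCheck))
        return INT_MAX; // the ratio was (nearly) rational: not co-irrational at all
    int maxTerm = 0;
    for (size_t i = 1; i < cf.size(); ++i)
        maxTerm = std::max(maxTerm, cf[i]);
    return maxTerm;
}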

If you wanted to see how co-irrational 3 or more numbers were, one way could be to look at them pair-wise. You’d have to figure out how to combine the results of the multiple tests since you are testing each possible pair, but it should give you an idea of how co-irrational they were.

This might also give some thoughts on how you could come up with a set of N co-irrational numbers. Maybe gradient descent could be used to find N numbers that when looked at pairwise, gave the results closest to the golden ratio, and made the error be evenly split up between the pairs.

This section is light on evidence / experimentation / etc, but the post is already getting pretty long. It could be fun to look more deeply at co-irrationals in a separate post.

Links

This is a really great read by Martin Roberts (https://twitter.com/TechSparx) talking about generalizing the golden ratio and using that generalization for multi dimensional low discrepancy sequences, that is competitive with Sobol.
http://extremelearning.com.au/unreasonable-effectiveness-of-quasirandom-sequences/

This talks about whether adding irrational numbers together or multiplying them together results in an irrational number:
https://mathbitsnotebook.com/Algebra1/RatIrratNumbers/RNRationalSumProduct.html?s=09

This talks about how we know that e+pi is irrational or e*pi is irrational, or both, but that’s all we can say about it:
http://mathforum.org/library/drmath/view/51617.html

This shows a link between primes, Pascal’s triangle, and constructable polygons:
https://en.wikipedia.org/wiki/Constructible_polygon

Here is an alternate take on how to measure co-irrationality:
https://twitter.com/R4_Unit/status/1284588140473155585?s=20

Legend has it that a man named Hippasus was killed by Pythagorean cultists for proving the existence of irrational numbers, which destroyed their world view that all numbers were rational.

Here’s a video about that:
https://www.youtube.com/watch?v=sbGjr_awePE

Here’s an article:
https://nrich.maths.org/2671

Here’s another article:
http://kiwihellenist.blogspot.com/2015/11/were-greeks-scared-of-irrational-numbers.html

And another:
https://io9.gizmodo.com/did-pythagoras-really-murder-a-guy-1460668208

Numberphile has a nice video on the irrationality of phi too: https://www.youtube.com/watch?v=sj8Sg8qnjOg

Using Low Discrepancy Sequences With Rejection Sampling

The code that generated the data for this post & implements the things talked about can be found at: https://github.com/Atrix256/LDSRejectionSampling

Rejection sampling lets you convert numbers from one probability distribution into numbers from a different probability distribution. It does that by throwing numbers away.

Imagine I gave you one hundred binary digits where 50% were zeros and 50% were ones. If you wanted them to be 75% zeros and 25% ones you could throw away 33 of the ones. That would leave you with 67 numbers where 50 were zeros and 17 were ones. Now, 74.6% of the numbers are zeros, and 25.4% of the numbers are ones. The transformation worked.

That change in distribution came at a cost though, the sequence got smaller.

I previously wrote up a post about rejection sampling here:
Generating Random Numbers From a Specific Distribution With Rejection Sampling

I also wrote about inverting a CDF here, which is a more complex method where you don’t throw numbers away.
Generating Random Numbers From a Specific Distribution By Inverting the CDF

When doing rejection sampling, a random number is compared against the probability for a number to survive, and the number is thrown away if it fails that test.

This post is going to look at what happens if we use low discrepancy sequences instead of random numbers (white noise) when working with rejection sampling. We are going to try substituting LDS for the random number generation to see if we should throw away a sample, and also we are going to use an LDS to generate the sequence we use as the source of rejection sampling.

Why LDS Here?

You might be asking what the motivation is for trying low discrepancy sequences here.

As a general rule, whenever I see white noise (regular random numbers) being used, it’s usually an indication that money is being left on the table and that the situation could be improved by using something else.

My heuristic there is roughly that if it’s for graphics, with sample counts that aren’t going to converge, that blue noise is going to be a good choice to make the remaining error be least noticeable, otherwise use low discrepancy sequences.

I’ve found this to be true for almost everything I’ve tried so far.

The only two exceptions I can think of at the moment are high dimensional Monte Carlo, where white noise seems to reign supreme (not my area though so not sure. I think Sobol can go pretty high dimension??), and also in random walks.

Random walks have problems with blue noise and LDS because they are so well distributed over the sampling domain, that the random walks never really leave the origin, which isn’t useful. I believe that random walks could possibly be helped by red noise and/or high discrepancy sequences (they are a thing that exist!).

I have two other, better motivated reasons for using LDS with rejection sampling though.

Firstly, if you plot a histogram of uniform white noise random numbers, it will be a flat line matching the flat uniform PDF, but only in the limit of an infinite number of samples. At smaller sample counts, white noise is quite lumpy. Low discrepancy sequences, on the other hand, make the histogram look a lot flatter at lower sample counts (arguably blue noise does an even better job at very low sample counts). An LDS will match the shape of the PDF better at low sample counts, and will be a better match at higher sample counts too. So, when working with probabilities, getting better statistical properties from smaller numbers of samples seems like a no brainer.

With rejection sampling specifically, if the area of your acceptance region is A, and the area you are generating random numbers in is B, the probability of accepting a sample is A/B. With white noise, the average acceptance rate will be correct after a large number of samples, but at lower sample counts there may be too many or too few rejections, which manifests as error. Using a low discrepancy sequence instead, you should always be closer to the correct acceptance rate than with white noise, which also means lower error.

Going back to our situation of having 50 zeros and 50 ones and throwing away 33 of the ones… a low discrepancy sequence will mean that the ones thrown away are roughly evenly spaced in the sequence. If you threw out the first 33 ones, the averages would be right for the whole sequence, but all of the surviving ones would be at the end, with none at the beginning, which is weird. White noise can cause similar things to happen, but a low discrepancy sequence will do better at making sure the thrown away ones are evenly spaced across the whole sequence.

The second better motivated reason is this. Imagine that whenever you accepted a sample, your function emitted a 1, and when you rejected a sample, your function emitted a 0. Let’s also say that the area you are generating random numbers in is 1, which isn’t a stretch since it’s common to generate random numbers from 0 to 1 on each axis, which defines a (hyper)cube with area 1.

If you integrate this function, the result will be A/B… the acceptance probability.

If your goal was to integrate this function using white noise, we know we’d get the usual white noise integration situation (slow convergence, high variance, etc.). In that situation you’d know to use an LDS to get lower error for the same sample count compared to white noise.

Rejection sampling goes through the same motions as Monte Carlo integration; it just uses the output for a different purpose.
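Here is a small sketch of that equivalence (again my own illustration, not from the repo): the accept/reject indicator is averaged to estimate the acceptance probability A/B, once using white noise and once using additive low discrepancy sequences. The probability function is the (2x+3)/5 one from the next section, whose integral from 0 to 1 is 0.8.

#include <cmath>
#include <cstdio>
#include <random>

int main()
{
    // Probability function from the next section: (2x+3)/5. Its integral over
    // [0,1] is 0.8, so the true acceptance rate A/B is 0.8.
    auto probability = [](double x) { return (2.0 * x + 3.0) / 5.0; };

    const int N = 1000;
    std::mt19937 rng{ std::random_device{}() };
    std::uniform_real_distribution<double> dist(0.0, 1.0);

    // White noise for both the candidate and the survival test.
    int acceptedWhite = 0;
    for (int i = 0; i < N; ++i)
    {
        double x = dist(rng);
        if (dist(rng) < probability(x))
            acceptedWhite++;
    }

    // Additive low discrepancy sequences: square root of 2 for the candidate,
    // golden ratio for the survival test, each with a random starting value.
    const double sqrt2 = std::sqrt(2.0);
    const double goldenRatio = (1.0 + std::sqrt(5.0)) / 2.0;
    double candidate = dist(rng);
    double test = dist(rng);
    int acceptedLDS = 0;
    for (int i = 0; i < N; ++i)
    {
        candidate = std::fmod(candidate + sqrt2, 1.0);
        test = std::fmod(test + goldenRatio, 1.0);
        if (test < probability(candidate))
            acceptedLDS++;
    }

    printf("true acceptance rate : 0.800\n");
    printf("white noise estimate : %0.3f\n", double(acceptedWhite) / N);
    printf("LDS estimate         : %0.3f\n", double(acceptedLDS) / N);
    return 0;
}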

Explanation of motivation out of the way, let’s move on to the experiments!

Uniform To Linear

The first test is to use rejection sampling to convert from a uniform probability distribution to a linear probability distribution.

The linear PDF (Probability Density Function) is y=(2x+3)/4 for random numbers x between 0 and 1. Being a PDF, that function integrates to 1 over the domain of x from 0 to 1. For the purposes of rejection sampling, we want the function’s maximum value to be 1 instead of its integral being 1, so we are going to use the probability function y=(2x+3)/5, which is 1 at x=1.

We want it to be at most 1 because we are essentially wrapping the function in a box that is 1×1, rolling a 2d random number to get a point in that box, and only keeping the sample if it’s underneath our function. So, the probability function needs to fit within our box by having all values be less than or equal to 1, but we also don’t want to waste space because it would cause more numbers to be thrown away than needed, so we need to make it as large as possible by making the largest value on the function be 1.

For our test, we are going to generate a number of samples in the linear distribution by using rejection sampling on uniform distribution inputs until we have enough samples. From there we are going to break the range 0 to 1 into 10 sections and count how many numbers are in each section. That gives us a histogram. We then subtract out the “expected” histogram value, computed from the real PDF, to show the error. We do this test 1000 times and show the average error and standard deviation.
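For reference, here is a sketch of the measurement half of that test, using white noise for the sampling (the repo linked above is the authoritative version). The expected value for each of the 10 sections comes from integrating the PDF y=(2x+3)/4 over that section.

#include <cstdio>
#include <random>
#include <vector>

// Bucket samples in [0,1) into 10 bins and report each bin's actual share
// minus the share expected from the linear PDF y=(2x+3)/4.
void ReportHistogramError(const std::vector<double>& samples)
{
    const int numBins = 10;
    std::vector<int> counts(numBins, 0);
    for (double x : samples)
    {
        int bin = int(x * numBins);
        if (bin >= numBins) bin = numBins - 1; // guard against x exactly 1.0
        counts[bin]++;
    }

    for (int bin = 0; bin < numBins; ++bin)
    {
        double a = double(bin) / numBins;
        double b = double(bin + 1) / numBins;
        // Integral of (2x+3)/4 from a to b is the expected share of samples.
        double expected = (b * b - a * a + 3.0 * (b - a)) / 4.0;
        double actual = double(counts[bin]) / double(samples.size());
        printf("bin %d: actual %0.4f, expected %0.4f, error %+0.4f\n",
            bin, actual, expected, actual - expected);
    }
}

int main()
{
    // Generate 10000 linear-distributed samples by rejection sampling with
    // white noise, then report the histogram error.
    std::mt19937 rng{ std::random_device{}() };
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    std::vector<double> samples;
    while (samples.size() < 10000)
    {
        double x = dist(rng);
        if (dist(rng) < (2.0 * x + 3.0) / 5.0)
            samples.push_back(x);
    }
    ReportHistogramError(samples);
    return 0;
}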

We are going to do this for the following scenarios:

  • white/white – white noise used as the input stream, white noise used to get a random number for testing against the probability of keeping the sample.
  • white/LDS – white noise used as the input stream, but an LDS (golden ratio additive sequence) used to generate a “random number” for the probability test.
  • LDS/white – an LDS used as the input stream (square root of 2 additive sequence), white noise used for the probability test.
  • LDS/LDS – square root of 2 LDS used for the input stream, golden ratio LDS used for the probability test.

It’s also important to tell you that I’m using a (white noise) random number for the starting value of each LDS in every test. Without doing that, it would give the same results every time.
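In code, each of those low discrepancy sequences is just an additive recurrence with a randomized starting value. Here is a minimal sketch of how such a stream could look (my own illustration, not the repo’s implementation):

#include <cmath>
#include <cstdio>
#include <random>

// An additive low discrepancy sequence: start at a random value in [0,1)
// and repeatedly add an irrational number, keeping only the fractional part.
struct AdditiveSequence
{
    AdditiveSequence(double irrational, std::mt19937& rng)
        : m_add(irrational)
    {
        std::uniform_real_distribution<double> dist(0.0, 1.0);
        m_state = dist(rng);  // white noise starting value, so each test run differs
    }

    double Next()
    {
        m_state = std::fmod(m_state + m_add, 1.0);
        return m_state;
    }

    double m_state;
    double m_add;
};

int main()
{
    std::mt19937 rng{ std::random_device{}() };

    // The two sequences used in the scenarios above: square root of 2 for the
    // input stream, golden ratio for the probability test. In the LDS/LDS
    // scenario both replace calls to a white noise random number generator.
    AdditiveSequence inputStream(std::sqrt(2.0), rng);
    AdditiveSequence probabilityTest((1.0 + std::sqrt(5.0)) / 2.0, rng);

    for (int i = 0; i < 5; ++i)
        printf("%f %f\n", inputStream.Next(), probabilityTest.Next());
    return 0;
}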

Here is the test for 100 samples generated.

Here is the test for 1000 samples generated.

Here is the test for 10000 samples generated.

Looking at the results, it shows that the clear winner is to use a low discrepancy sequence as input to your rejection sampling, while also using a low discrepancy sequence for the probability test.

Second to that is to use a low discrepancy sequence as input, while using white noise for the probability test.

Beyond that, with white noise input, it doesn’t seem to matter much if you are using a low discrepancy sequence for the probability test or not.

That was pretty surprising when I first saw it. I was sure an LDS for the probability test would be useful, and it turns out it is, but we’ll see how in a little bit.

Uniform To Linear To Cubic

Let’s look at what happens if we convert a non-uniform PDF to another PDF. To do that we need to generate the non-uniform distribution first. An inversion method could have been used, but I used rejection sampling. The type of sequence used to generate the linear distribution is the same one used to convert it to the cubic distribution.
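Here is a sketch of the textbook way to rejection sample from one non-uniform PDF to another, with a made-up target PDF of y=4x^3 standing in for the cubic one (the repo linked above is the authoritative version of what actually generated the results below). The idea is to accept a sample x drawn from the source PDF q with probability p(x)/(c*q(x)), where c is the maximum of p(x)/q(x). This sketch uses white noise throughout; the four scenarios above would swap in an LDS for either random number.

#include <cstdio>
#include <random>
#include <vector>

int main()
{
    std::mt19937 rng{ std::random_device{}() };
    std::uniform_real_distribution<double> dist(0.0, 1.0);

    // Source PDF q(x) = (2x+3)/4, the linear PDF from earlier.
    auto linearPDF = [](double x) { return (2.0 * x + 3.0) / 4.0; };

    // Stand-in target PDF p(x) = 4x^3 (the actual cubic PDF used by the
    // post's code may differ).
    auto cubicPDF = [](double x) { return 4.0 * x * x * x; };

    // c is the maximum of p(x)/q(x) over [0,1], which is at x=1: 4 / (5/4) = 3.2.
    const double c = 3.2;

    // Step 1: rejection sample uniform inputs into a linear distribution,
    // using the scaled probability function (2x+3)/5 from earlier.
    std::vector<double> linearSamples;
    while (linearSamples.size() < 10000)
    {
        double x = dist(rng);
        if (dist(rng) < (2.0 * x + 3.0) / 5.0)
            linearSamples.push_back(x);
    }

    // Step 2: rejection sample the linear samples into the cubic distribution
    // by accepting x with probability p(x) / (c * q(x)).
    std::vector<double> cubicSamples;
    for (double x : linearSamples)
    {
        if (dist(rng) < cubicPDF(x) / (c * linearPDF(x)))
            cubicSamples.push_back(x);
    }

    printf("%zu linear samples became %zu cubic samples\n",
        linearSamples.size(), cubicSamples.size());
    return 0;
}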

The same tests are done as before. Here is 100 samples.

Here is 1000 samples.

Here is 10000 samples.

We got the same results as last time basically. Low discrepancy sequence as input is better than white noise as input, and if you are using LDS as input, it’s also better to use LDS for the probability test. If you are using white noise as input, it doesn’t seem to matter if you are using white noise or LDS for the probability test.

It is interesting to see though that the standard deviation (square root of variance) of LDS/LDS is noticeably higher for the higher numbers, when it was flat going from uniform to linear in the previous section. I think that area of the curve might be sensitive to problems because it’s a really likely section in the linear PDF but less likely in the cubic PDF.

As another test, here are 10000 samples of going from uniform straight to cubic to show the difference. Now there is a noticeable spike in LDS/LDS std dev in the lower numbers, which is a bit of a mystery.

As one final test, here are 10000 samples again of going from uniform to cubic, but I swapped the roles of the golden ratio and the square root of 2 LDSs. We seemingly get better results, which isn’t too surprising if you consider that the golden ratio is a better (more irrational) number than the square root of 2 is, and that from our previous tests, the quality of the input sequence seems to matter more than the sequence used for the probability test.

Survival Rate

We looked at the quality of the histogram coming out of the rejection sampling, but we didn’t look at how it rejects samples.

Here is the information about attempts vs samples generated from uniform to linear.

As you probably expected, LDS/LDS did best in this metric too. The main thing to pay attention to is the variance graph. Not only is the LDS/LDS variance graph lowest, it’s also flattest. This is a good thing because as we get farther to the right on the x axis, the variance of those samples sort of accumulates the variance of the samples before them, since we are looking at totals. That it’s flatter means that variance in later samples cancels out variance in earlier samples.

Something else interesting is that in the last test where we looked at the histogram, the quality of the input mattered most to the metric we were looking at. Better quality inputs made better output histograms.

Here, the reverse is true. Better quality “random numbers” for the probability test make for better (lower) variance of sample survival.

LDS/LDS is best in both cases, but 2nd place switches from LDS/white to white/LDS. 3rd and 4th place are basically tied: plain white noise, and the combination that only uses an LDS in the role that doesn’t matter for the metric.

To give further proof & information, here is linear to cubic.

Here is uniform to cubic.

Lastly, here is uniform to cubic, but swapping the roles of golden ratio and sqrt 2 LDSs.

The ripples in LDS/LDS are pretty interesting. I wonder what is causing them?

Bonus: Other 2D LDS

Lefteris Stamatogiannakis (@estama2) mentioned on Twitter that it might be neat to see real 2D LDSs used for this.

The golden ratio / sqrt(2) LDSs seem to be pretty well suited for 2D use, but they aren’t a “designed for 2D” LDS like Sobol or R2 is. (R2 is by Martin Roberts, from here: http://extremelearning.com.au/unreasonable-effectiveness-of-quasirandom-sequences/)
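For reference, here is a minimal sketch of generating R2 points, following the recipe in Martin Roberts’ article: the plastic constant (the real solution of g^3 = g + 1) plays the role the golden ratio plays in 1D, with one additive constant per dimension. A fixed 0.5 starting offset is used here; the tests in this post randomize starting values instead.

#include <cmath>
#include <cstdio>

int main()
{
    // The R2 sequence is built from the "plastic constant" g, the real
    // solution of g^3 = g + 1, the same way golden ratio sequences are
    // built from the golden ratio in 1D.
    const double g = 1.32471795724474602596;
    const double a1 = 1.0 / g;        // ~0.7548776662
    const double a2 = 1.0 / (g * g);  // ~0.5698402910

    // Point n is (frac(0.5 + n*a1), frac(0.5 + n*a2)).
    for (int n = 0; n < 10; ++n)
    {
        double x = std::fmod(0.5 + n * a1, 1.0);
        double y = std::fmod(0.5 + n * a2, 1.0);
        printf("%f %f\n", x, y);
    }
    return 0;
}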

So, I had a look! The results are mixed so I’ll let you make your own conclusions.

First, let’s look at the histogram average error / std deviation for the uniform to linear test.



Next, here is the uniform to linear to cubic test:



Here’s the uniform to cubic test. The graphs are mislabeled as uniform to linear to cubic; it’s really uniform straight to cubic. The previous test goes to linear first; this one doesn’t.



Next up are the survival graphs.

First is the uniform to linear survival data:

Next is uniform to linear to cubic:

Last is uniform to cubic: