A place to keep track of blog posts I’d like to do:

**Chebyshev curve fitting / interpolation** – with simple working C++ code. Possibly rational Chebyshev too. Interpolating at the Chebyshev nodes is near-optimal for polynomial interpolation: it nearly minimizes the worst-case error and avoids Runge's phenomenon.

https://www.embeddedrelated.com/showarticle/152.php
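
A rough sketch to start from (function and variable names are my own), fitting by sampling at the n Chebyshev nodes and evaluating via T_j(x) = cos(j·acos(x)):

```cpp
#include <cmath>
#include <functional>
#include <vector>

// Fit an (n-1)-degree Chebyshev approximation to f on [-1,1] by sampling f at
// the n Chebyshev nodes x_k = cos(pi*(k+0.5)/n).
std::vector<double> chebyshevFit(const std::function<double(double)>& f, int n)
{
    const double pi = 3.14159265358979323846;
    std::vector<double> samples(n), coeffs(n);
    for (int k = 0; k < n; ++k)
        samples[k] = f(std::cos(pi * (k + 0.5) / n));
    for (int j = 0; j < n; ++j)
    {
        double sum = 0.0;
        for (int k = 0; k < n; ++k)
            sum += samples[k] * std::cos(pi * j * (k + 0.5) / n);
        coeffs[j] = 2.0 * sum / n;
    }
    coeffs[0] *= 0.5; // the constant term gets half weight
    return coeffs;
}

// Evaluate the fit at x in [-1,1] using T_j(x) = cos(j*acos(x)).
double chebyshevEval(const std::vector<double>& coeffs, double x)
{
    double theta = std::acos(x), result = 0.0;
    for (size_t j = 0; j < coeffs.size(); ++j)
        result += coeffs[j] * std::cos(j * theta);
    return result;
}
```

A real post would probably switch evaluation to Clenshaw recurrence and handle arbitrary intervals by remapping to [-1,1].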

**Optimistic concurrency (in databases)**. Select the data and a version id per row. Update with `SET version = version + 1` and a `WHERE` clause that includes the version id you originally read. If rows affected is 0, something else beat you to the punch and you can deal with it however you like (retry, report a conflict, etc.).
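
A toy in-memory stand-in for the pattern (the `Row` struct and function names are made up; the real thing is the SQL in the comment):

```cpp
#include <string>

// Hypothetical stand-in for a database row with a version column.
struct Row { std::string data; int version = 0; };

// Mirrors: UPDATE t SET data = ?, version = version + 1
//          WHERE id = ? AND version = ?
// Returns true if the update applied (1 row affected), false if someone else
// bumped the version first (0 rows affected).
bool tryUpdate(Row& row, const std::string& newData, int versionWeRead)
{
    if (row.version != versionWeRead)
        return false; // lost the race: re-read and retry, or report a conflict
    row.data = newData;
    row.version++;
    return true;
}
```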

**CORDIC math**. Every iteration of the loop gives you another bit of precision, since it’s basically a binary search on the angle.
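
A sketch of rotation-mode CORDIC computing sin/cos. This uses doubles and calls `atan`/`sqrt` for clarity; a real fixed-point implementation would use shifts, adds, a precomputed arctangent table, and a precomputed gain constant:

```cpp
#include <cmath>

// Rotation-mode CORDIC: rotate (1, 0) toward `angle` (radians, in
// [-pi/2, pi/2]). Each iteration decides a rotation direction and halves the
// rotation size, adding roughly one bit of precision.
void cordicSinCos(double angle, int iterations, double& s, double& c)
{
    double x = 1.0, y = 0.0, z = angle;
    double powerOfTwo = 1.0; // 2^-i
    double gain = 1.0;       // accumulated CORDIC gain
    for (int i = 0; i < iterations; ++i)
    {
        double d = (z >= 0.0) ? 1.0 : -1.0;
        double xNew = x - d * y * powerOfTwo;
        double yNew = y + d * x * powerOfTwo;
        x = xNew; y = yNew;
        z -= d * std::atan(powerOfTwo); // table lookup in a real version
        gain *= std::sqrt(1.0 + powerOfTwo * powerOfTwo);
        powerOfTwo *= 0.5;
    }
    c = x / gain; // divide out the accumulated gain
    s = y / gain;
}
```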

**2D SDFs for vector graphics** – using modulus for “free” repetition. Anti-aliasing. Use your Shadertoy verlet physics game as an example?
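
The repetition trick could be demoed like this (C++ rather than GLSL, so the `mod` that GLSL gives you for free is built from `fmod`; names are my own):

```cpp
#include <cmath>

struct Vec2 { double x, y; };

// Signed distance to a circle of radius r centered at the origin:
// negative inside, positive outside.
double sdCircle(Vec2 p, double r)
{
    return std::sqrt(p.x * p.x + p.y * p.y) - r;
}

// "Free" repetition: fold space into a cell of size `cell` centered on the
// origin, so one SDF evaluation renders an infinite grid of circles.
// The double fmod emulates GLSL's mod() for negative inputs.
double sdRepeatedCircle(Vec2 p, double cell, double r)
{
    Vec2 q;
    q.x = std::fmod(std::fmod(p.x, cell) + 1.5 * cell, cell) - 0.5 * cell;
    q.y = std::fmod(std::fmod(p.y, cell) + 1.5 * cell, cell) - 0.5 * cell;
    return sdCircle(q, r);
}
```

For anti-aliasing, the distance value itself gives the coverage ramp: something like `smoothstep(pixelSize, 0, dist)`.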

**Verlet Physics** Keep the last and current position to get implicit velocity. Simulate. Iteratively solve constraints. Things “just work” pretty well.
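
The two core pieces, sketched in 1D for brevity (names are my own; a real sim would be 2D/3D and loop the constraint solve a few times per frame):

```cpp
#include <cmath>

// Position Verlet: velocity is implicit in (current - previous) position.
struct Particle { double pos, prevPos; };

// One integration step under a constant acceleration (e.g. gravity).
void verletStep(Particle& p, double accel, double dt)
{
    double next = 2.0 * p.pos - p.prevPos + accel * dt * dt;
    p.prevPos = p.pos;
    p.pos = next;
}

// One iteration of a distance constraint between two particles: split the
// error and move each endpoint toward the rest length. Repeat over all
// constraints a few times and things settle.
void satisfyDistance(Particle& a, Particle& b, double restLen)
{
    double delta = b.pos - a.pos;
    double dist = std::fabs(delta);
    if (dist < 1e-12) return;
    double correction = 0.5 * (dist - restLen) * (delta / dist);
    a.pos += correction;
    b.pos -= correction;
}
```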

**Minkowski Portal Refinement** A nice & simple algorithm for collision detection. Maybe talk about the algorithm to get penetration depth. Mention GJK, possibly cover it in a follow-up.

**Deterministic Simulations** Using a deterministic sim to e.g. decrease network traffic.

**Quick Math: phi / golden ratio** show how the golden ratio and its reciprocal have the same digits past the decimal point, and show that it’s the only positive number that can do that. The main point being “I remember this fact, but don’t remember the number” – the fact lets you calculate the number.
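
A sketch of "recover the number from the fact": 1/φ = φ − 1 rearranges to φ = 1 + 1/φ, which you can just iterate to a fixed point (function name is my own):

```cpp
#include <cmath>

// The golden ratio is the positive solution of x = 1 + 1/x (equivalently
// x^2 = x + 1). Iterating that relation converges, so remembering only the
// "same digits past the decimal point" fact is enough to compute phi.
double goldenRatio()
{
    double x = 1.0;
    for (int i = 0; i < 64; ++i)
        x = 1.0 + 1.0 / x;
    return x;
}
```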

**Quick math: Euler’s number e** show how e^x being its own derivative (and integral) can only work for e. The main point being “I remember this fact, but don’t remember the number” – the fact lets you calculate the number.
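
Same game as with phi: if d/dx e^x = e^x, the Taylor series around 0 has to be Σ x^k/k!, so e itself is Σ 1/k! (function name is my own):

```cpp
#include <cmath>

// e = sum of 1/k! -- a direct consequence of e^x being its own derivative,
// so remembering only that fact recovers the number.
double eulersNumber()
{
    double sum = 1.0, term = 1.0;
    for (int k = 1; k < 20; ++k)
    {
        term /= k;  // term is now 1/k!
        sum += term;
    }
    return sum;
}
```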

**Ear clipping** – for turning polygons into triangles. Extendable to 3D with tetrahedron clipping, and to higher dimensions as well.

**Storageless Shuffle With Weights** – this is like: if you have 3 red marbles and 5 blue marbles, how would you use format-preserving encryption (FPE) to shuffle them without storing the shuffled order?

**Recurrent neural networks (etc) for “time series” learning** – https://twitter.com/Peter_shirley/status/1066832031043149824?s=03

**Markov Chain Monte Carlo – e.g. for decryption** Maybe try 2nd order or higher chains.

Maybe also try it with rendering / numerical integration: http://statweb.stanford.edu/~cgates/PERSI/papers/MCMCRev.pdf
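
For the integration angle, the core of it is just Metropolis sampling. A minimal sketch (my own naming) that samples an unnormalized density and estimates an expectation under it:

```cpp
#include <cmath>
#include <random>

// Metropolis sampler for an unnormalized density -- here exp(-x*x/2), i.e. a
// standard normal -- useful when you can evaluate a density but not sample it
// directly. Estimates E[x^2], which should approach 1.
double estimateSecondMoment(int samples, unsigned seed)
{
    std::mt19937 rng(seed);
    std::normal_distribution<double> proposal(0.0, 1.0); // symmetric proposal
    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    auto density = [](double x) { return std::exp(-0.5 * x * x); };
    double x = 0.0, sum = 0.0;
    for (int i = 0; i < samples; ++i)
    {
        double candidate = x + proposal(rng);
        if (uniform(rng) < density(candidate) / density(x))
            x = candidate; // accept; otherwise keep (and re-count) x
        sum += x * x;
    }
    return sum / samples;
}
```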

**Blue Noise AO** – It’s common to use white noise both for sampling and for per-pixel sample rotation. Start from there and show how to use blue noise for both!

https://learnopengl.com/Advanced-Lighting/SSAO

http://john-chapman-graphics.blogspot.com/2013/01/ssao-tutorial.html

**Other blue noise usage cases** – specific usage cases with easy to follow implementations

* fog shafts

* shadows (pcf)

* reflections

* dithering

**Data Cache** When doing coding experiments, there are often pieces of data that take time to calculate and are based on parameters that don’t often change from run to run. Making a data cache can help. Semi-compelling usage cases: 1) the next prime greater than N, 2) integrating a bilinear function. Compare / contrast with content-addressable storage: CAS keys by the hash of the contents, this keys by the hash of the params that produce the contents. Code: https://github.com/Atrix256/ProgressiveProjectiveBlueNoise/blob/master/cache.h
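
An in-memory sketch of the idea using the next-prime example (the linked cache.h is the real thing; this one only keys by a single int parameter and doesn’t persist to disk):

```cpp
#include <unordered_map>

// Stand-in for an expensive, parameter-determined computation:
// the smallest prime greater than n (trial division for simplicity).
int nextPrimeUncached(int n)
{
    for (int candidate = n + 1; ; ++candidate)
    {
        bool prime = candidate >= 2;
        for (int d = 2; d * d <= candidate; ++d)
            if (candidate % d == 0) { prime = false; break; }
        if (prime) return candidate;
    }
}

// A data cache keyed by the parameters that *produce* the data (contrast with
// content-addressable storage, which keys by a hash of the data itself).
// A real version would hash all the params together and persist between runs.
int nextPrime(int n)
{
    static std::unordered_map<int, int> cache;
    auto it = cache.find(n);
    if (it != cache.end()) return it->second; // cache hit: skip the work
    int result = nextPrimeUncached(n);
    cache[n] = result;
    return result;
}
```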

# Audio Stuff

**Biquad** – a better frequency filter
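
A sketch of a biquad lowpass, with coefficients per the well-known RBJ Audio EQ Cookbook formulas (struct and names are my own):

```cpp
#include <cmath>

// A biquad filter (direct form 1), configured here as a lowpass using the
// RBJ Audio EQ Cookbook coefficient formulas.
struct Biquad
{
    double b0, b1, b2, a1, a2;             // coefficients, normalized so a0 == 1
    double x1 = 0, x2 = 0, y1 = 0, y2 = 0; // input / output history

    static Biquad lowpass(double cutoffHz, double sampleRateHz, double q)
    {
        double w = 2.0 * 3.14159265358979323846 * cutoffHz / sampleRateHz;
        double alpha = std::sin(w) / (2.0 * q);
        double cosw = std::cos(w);
        double a0 = 1.0 + alpha;
        Biquad f;
        f.b0 = (1.0 - cosw) / (2.0 * a0);
        f.b1 = (1.0 - cosw) / a0;
        f.b2 = (1.0 - cosw) / (2.0 * a0);
        f.a1 = (-2.0 * cosw) / a0;
        f.a2 = (1.0 - alpha) / a0;
        return f;
    }

    double process(double x)
    {
        double y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
        x2 = x1; x1 = x;
        y2 = y1; y1 = y;
        return y;
    }
};
```

A nice sanity check: a lowpass should pass DC unchanged, so feeding it a constant signal should settle at that constant.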

**Compressor & Limiter** – automatic volume adjustment to e.g. avoid clipping. Include “side chain” stuff.
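
The gain computer is the heart of it, sketched here in dB (my own naming; a real compressor adds attack/release smoothing and envelope detection). Side chaining falls out naturally: the control level just comes from a different signal than the one being attenuated:

```cpp
#include <cmath>

// Feed-forward compressor gain computer: when the control level is above
// threshold, reduce the overshoot by `ratio`. As ratio -> infinity this
// becomes a limiter. Returns gain in dB (0 = unity, negative = attenuation).
double compressorGainDb(double levelDb, double thresholdDb, double ratio)
{
    if (levelDb <= thresholdDb) return 0.0; // below threshold: leave it alone
    double overshoot = levelDb - thresholdDb;
    return (overshoot / ratio) - overshoot;
}

// Apply to a sample. The control level can come from a different ("side
// chain") signal, e.g. ducking music under a voiceover.
double compress(double sample, double sideChainLevelDb,
                double thresholdDb, double ratio)
{
    double gainDb = compressorGainDb(sideChainLevelDb, thresholdDb, ratio);
    return sample * std::pow(10.0, gainDb / 20.0);
}
```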