# DIY Synth: Convolution Reverb & 1D Discrete Convolution of Audio Samples

This is a part of the DIY Synthesizer series of posts where each post is roughly built upon the knowledge of the previous posts. If you are lost, check the earlier posts!

If there is a physical location that has the reverb you are looking for (a church, a cave, your bathroom, the subway, whatever), you can go there and record a short, loud sound (a clap, a starter pistol, whatever). Once you have that recording, you can use a mathematical technique called “discrete convolution” (not as difficult as it sounds) to apply that location's reverb to other sounds.

Not only that, you can use convolution to apply various properties of one sound to another sound. Like you could convolve a saw wave and a recording of a voice and make it sound robotic.

This is pretty cool right?

## Convolution

There are two types of convolution out there… continuous convolution and discrete convolution.

Continuous convolution is a method where you can take two formulas and perform some operations (algebra, calculus, etc) to get a third formula.

In the cases that we programmers usually work with, however, you usually only have data, and not a mathematical formula describing that data.

When you have specific data points instead of an equation, you can use something called discrete convolution instead.

Discrete convolution is a method where you take two streams of data (i.e. two arrays of data!) and perform some operations (only looping, addition and multiplication!) to get a third stream of data.

Interestingly, this third stream of data will be the size of the first stream of data, plus the size of the second stream of data, minus one.

This link explains discrete convolution much better than I could, and even has an interactive demo, so check it out: Discrete Convolution

Here is some pseudo code that shows how discrete convolution works.

```
// We are calculating bufferA * bufferB (discrete convolution)
// initialize out buffer to empty, and make sure it's the right size
outBuffer[sizeof(bufferA)+sizeof(bufferB)-1] = 0

// reverse one of the buffers in preparation of the convolution
reverse(bufferB);

// Convolve!
for (int outBufferIndex = 0; outBufferIndex < sizeof(outBuffer); ++outBufferIndex)
{
    bufferAIndex = 0;
    bufferBIndex = 0;
    if (outBufferIndex < sizeof(bufferB))
        bufferBIndex = sizeof(bufferB) - outBufferIndex - 1;
    else
        bufferAIndex = outBufferIndex - sizeof(bufferB) + 1;

    for (; bufferAIndex < sizeof(bufferA) && bufferBIndex < sizeof(bufferB); ++bufferAIndex, ++bufferBIndex)
    {
        outBuffer[outBufferIndex] += bufferA[bufferAIndex] * bufferB[bufferBIndex];
    }
}
```

That's all there is to it!
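To make the pseudo code concrete, here is a small self-contained C++ sketch of direct 1D discrete convolution. The function and variable names here are my own, not from the synth sample code further down, and it indexes `b[n - i]` directly instead of reversing the buffer up front:

```cpp
#include <vector>
#include <cstddef>

// Direct 1D discrete convolution: out[n] = sum over i of a[i] * b[n - i].
// The output length is a.size() + b.size() - 1, as described above.
std::vector<float> Convolve(const std::vector<float>& a, const std::vector<float>& b)
{
    std::vector<float> out(a.size() + b.size() - 1, 0.0f);
    for (size_t n = 0; n < out.size(); ++n)
    {
        // clamp i so that both a[i] and b[n - i] stay in range
        size_t iStart = (n >= b.size()) ? (n - b.size() + 1) : 0;
        size_t iEnd = (n < a.size()) ? n : a.size() - 1;
        for (size_t i = iStart; i <= iEnd; ++i)
            out[n] += a[i] * b[n - i];
    }
    return out;
}
```

Convolving {1, 2, 3} with {0, 1} gives {0, 1, 2, 3}: four samples, which is 3 + 2 - 1.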

## Convolution Reverb

Like I mentioned above, to do convolution reverb you just convolve the audio file you want to add reverb to with the recorded sound from the place that has the reverb you want. That second sound is called the Impulse Response, or IR for short.

Besides letting you choose a reverb model (like “cavernous” or “small hallway” or “bright tiled room”), many reverbs will have a “wetness” parameter that controls how much reverb you hear versus how much of the original sound you hear. A wetness of 1.0 means all you hear is the reverb (the convolution), while a wetness of 0.0 means all you hear is the original sound (the dry sound). It’s just a percentage and literally is just used to lerp between the two signals.

Here is an example of how changes in wetness can sound, which also gives you a first glimpse of just how well this technique works!

Dry Sound: SoundBite
Impulse Response: ReverbLarge

Convolved with 100% wetness: c_RL_SBite.wav
Convolved with 66% wetness: c_RL_SBite_66.wav
Convolved with 33% wetness: c_RL_SBite_33.wav

## Sample Sound Files

Let’s convolve some sounds together and see what results we get!

Here are 11 sounds convolved against each other (and themselves) at 100% wetness so all you hear is the convolution. Convolution is commutative (A*B == B*A) so that cuts down on the number of combinations (66 instead of 121!)

There are 3 impulse responses, 4 short basic wave forms, 2 voice samples, a cymbal drum pattern, and a jaw harp twang.

Some of the results are pretty interesting. For instance, check out SoundBite vs SoundCymbal. The beat of the cymbal pattern remains, but the actual cymbal sound itself is replaced by the words!

## Sample Code

Here is some sample code which will read in “in.wav” and “reverb.wav”, do convolution on them and save the fully wet result (convolution result only, no original in.wav mixed into the output) as “out.wav”.

Note that this code was used to process the sample sound files above. Usual caveat: The sound loading code isn’t bulletproof. I’ve found it works best with 16 bit mono sound files. If you need better sound loading, check out libsndfile!

```
#define _CRT_SECURE_NO_WARNINGS

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <memory.h>
#include <vector>
#include <algorithm>
#include <chrono>
#include <limits>

#define _USE_MATH_DEFINES
#include <math.h>

//=====================================================================================
// SNumeric - uses phantom types to enforce type safety
//=====================================================================================
template <typename T, typename PHANTOM_TYPE>
struct SNumeric
{
public:
explicit SNumeric(const T &value) : m_value(value) { }
SNumeric() : m_value() { }
inline T& Value() { return m_value; }
inline const T& Value() const { return m_value; }

typedef SNumeric<T, PHANTOM_TYPE> TType;
typedef T TInnerType;

// Math Operations
TType operator+ (const TType &b) const
{
return TType(this->Value() + b.Value());
}

TType operator- (const TType &b) const
{
return TType(this->Value() - b.Value());
}

TType operator* (const TType &b) const
{
return TType(this->Value() * b.Value());
}

TType operator/ (const TType &b) const
{
return TType(this->Value() / b.Value());
}

TType& operator+= (const TType &b)
{
Value() += b.Value();
return *this;
}

TType& operator-= (const TType &b)
{
Value() -= b.Value();
return *this;
}

TType& operator*= (const TType &b)
{
Value() *= b.Value();
return *this;
}

TType& operator/= (const TType &b)
{
Value() /= b.Value();
return *this;
}

TType& operator++ ()
{
Value()++;
return *this;
}

TType& operator-- ()
{
Value()--;
return *this;
}

// Extended Math Operations
template <typename NEWTYPE>
NEWTYPE Divide(const TType &b)
{
return ((NEWTYPE)this->Value()) / ((NEWTYPE)b.Value());
}

// Logic Operations
bool operator< (const TType &b) const {
return this->Value() < b.Value();
}
bool operator<= (const TType &b) const {
return this->Value() <= b.Value();
}
bool operator> (const TType &b) const {
return this->Value() > b.Value();
}
bool operator>= (const TType &b) const {
return this->Value() >= b.Value();
}
bool operator== (const TType &b) const {
return this->Value() == b.Value();
}
bool operator!= (const TType &b) const {
return this->Value() != b.Value();
}

private:
T m_value;
};

//=====================================================================================
// Typedefs
//=====================================================================================

typedef uint8_t uint8;
typedef uint16_t uint16;
typedef uint32_t uint32;
typedef int16_t int16;
typedef int32_t int32;

// type safe types!
typedef SNumeric<float, struct S__Frequency>    TFrequency;
typedef SNumeric<uint32, struct S__TimeMs>      TTimeMs;
typedef SNumeric<uint32, struct S__Samples>     TSamples;
typedef SNumeric<float, struct S__FractSamples> TFractionalSamples;
typedef SNumeric<float, struct S__Decibels>     TDecibels;
typedef SNumeric<float, struct S__Amplitude>    TAmplitude;
typedef SNumeric<float, struct S__Phase>        TPhase;

//=====================================================================================
// Constants
//=====================================================================================

static const float c_pi = (float)M_PI;
static const float c_twoPi = c_pi * 2.0f;

//=====================================================================================
// Structs
//=====================================================================================

struct SSoundSettings
{
TSamples        m_sampleRate;
};

//=====================================================================================
// Conversion Functions
//=====================================================================================
inline TDecibels AmplitudeToDB(TAmplitude volume)
{
return TDecibels(20.0f * log10(volume.Value()));
}

inline TAmplitude DBToAmplitude(TDecibels dB)
{
return TAmplitude(pow(10.0f, dB.Value() / 20.0f));
}

TSamples SecondsToSamples(const SSoundSettings &s, float seconds)
{
return TSamples((int)(seconds * (float)s.m_sampleRate.Value()));
}

TSamples MilliSecondsToSamples(const SSoundSettings &s, float milliseconds)
{
return SecondsToSamples(s, milliseconds / 1000.0f);
}

TTimeMs SecondsToMilliseconds(float seconds)
{
return TTimeMs((uint32)(seconds * 1000.0f));
}

TFrequency Frequency(float octave, float note)
{
/* frequency = 440×(2^(n/12))
Notes:
0  = A
1  = A#
2  = B
3  = C
4  = C#
5  = D
6  = D#
7  = E
8  = F
9  = F#
10 = G
11 = G# */
return TFrequency((float)(440 * pow(2.0, ((double)((octave - 4) * 12 + note)) / 12.0)));
}

template <typename T>
T AmplitudeToAudioSample(const TAmplitude& in)
{
const T c_min = std::numeric_limits<T>::min();
const T c_max = std::numeric_limits<T>::max();
const float c_minFloat = (float)c_min;
const float c_maxFloat = (float)c_max;

float ret = in.Value() * c_maxFloat;

if (ret < c_minFloat)
return c_min;

if (ret > c_maxFloat)
return c_max;

return (T)ret;
}

TAmplitude GetLerpedAudioSample(const std::vector<TAmplitude>& samples, const TFractionalSamples& index)
{
// get the index of each sample and the fractional blend amount
uint32 a = (uint32)floor(index.Value());
uint32 b = a + 1;
float fract = index.Value() - floor(index.Value());

// get our two amplitudes
float ampA = 0.0f;
if (a < samples.size())
ampA = samples[a].Value();

float ampB = 0.0f;
if (b < samples.size())
ampB = samples[b].Value();

// return the lerped result
return TAmplitude(fract * ampB + (1.0f - fract) * ampA);
}

void NormalizeSamples(std::vector<TAmplitude>& samples, TAmplitude maxAmplitude, TSamples envelopeTimeFrontBack)
{
// nothing to do if no samples
if (samples.size() == 0)
return;

// 1) find the largest absolute value in the samples.
TAmplitude largestAbsVal = TAmplitude(abs(samples.front().Value()));
std::for_each(samples.begin() + 1, samples.end(), [&largestAbsVal](const TAmplitude &a)
{
TAmplitude absVal = TAmplitude(abs(a.Value()));
if (absVal > largestAbsVal)
largestAbsVal = absVal;
}
);

// 2) adjust largestAbsVal so that when we divide all samples, none will be bigger than maxAmplitude
// if the value we are going to divide by is <= 0, bail out
largestAbsVal /= maxAmplitude;
if (largestAbsVal <= TAmplitude(0.0f))
return;

// 3) divide all numbers by the largest absolute value seen so all samples are [-maxAmplitude,+maxAmplitude]
// also apply front and back envelope
const TSamples c_frontEnvelopeEnd(envelopeTimeFrontBack);
const TSamples c_backEnvelopeStart(samples.size() - envelopeTimeFrontBack.Value());

for (TSamples index(0), numSamples(samples.size()); index < numSamples; ++index)
{
// calculate envelope
TAmplitude envelope(1.0f);
if (index < c_frontEnvelopeEnd)
envelope = TAmplitude(index.Divide<float>(envelopeTimeFrontBack));
else if (index > c_backEnvelopeStart)
envelope = TAmplitude(1.0f) - TAmplitude((index - c_backEnvelopeStart).Divide<float>(envelopeTimeFrontBack));

// apply envelope while normalizing
samples[index.Value()] = samples[index.Value()] * envelope / largestAbsVal;

}
}

void ResampleData(std::vector<TAmplitude>& samples, int srcSampleRate, int destSampleRate)
{
//if the requested sample rate is the sample rate it already is, bail out and do nothing
if (srcSampleRate == destSampleRate)
return;

//calculate the ratio of the old sample rate to the new
float fResampleRatio = (float)destSampleRate / (float)srcSampleRate;

//calculate how many samples the new data will have and allocate the new sample data
int nNewDataNumSamples = (int)((float)samples.size() * fResampleRatio);

std::vector<TAmplitude> newSamples;
newSamples.resize(nNewDataNumSamples);

//get each lerped output sample.  There are higher quality ways to resample
for (int nIndex = 0; nIndex < nNewDataNumSamples; ++nIndex)
newSamples[nIndex] = GetLerpedAudioSample(samples, TFractionalSamples((float)nIndex / fResampleRatio));

//free the old data and set the new data
std::swap(samples, newSamples);
}

void ChangeNumChannels(std::vector<TAmplitude>& samples, int nSrcChannels, int nDestChannels)
{
//if the number of channels requested is the number of channels already there, or either number of channels is not mono or stereo, return
if (nSrcChannels == nDestChannels ||
nSrcChannels < 1 || nSrcChannels > 2 ||
nDestChannels < 1 || nDestChannels > 2)
{
return;
}

//if converting from mono to stereo, duplicate the mono channel to make stereo
if (nDestChannels == 2)
{
std::vector<TAmplitude> newSamples;
newSamples.resize(samples.size() * 2);
for (size_t index = 0; index < samples.size(); ++index)
{
newSamples[index * 2] = samples[index];
newSamples[index * 2 + 1] = samples[index];
}

std::swap(samples, newSamples);
}
//else converting from stereo to mono, mix the stereo channels together to make mono
else
{
std::vector<TAmplitude> newSamples;
newSamples.resize(samples.size() / 2);
for (size_t index = 0; index < samples.size() / 2; ++index)
newSamples[index] = samples[index * 2] + samples[index * 2 + 1];

std::swap(samples, newSamples);
}
}

float PCMToFloat(unsigned char *pPCMData, int nNumBytes)
{
switch (nNumBytes)
{
case 1:
{
uint8 data = pPCMData[0];
return (float)data / 255.0f;
}
case 2:
{
int16 data = pPCMData[1] << 8 | pPCMData[0];
return ((float)data) / ((float)0x00007fff);
}
case 3:
{
int32 data = pPCMData[2] << 16 | pPCMData[1] << 8 | pPCMData[0];
return ((float)data) / ((float)0x007fffff);
}
case 4:
{
int32 data = pPCMData[3] << 24 | pPCMData[2] << 16 | pPCMData[1] << 8 | pPCMData[0];
return ((float)data) / ((float)0x7fffffff);
}
default:
{
return 0.0f;
}
}
}

//=====================================================================================
// Wave File Writing Code
//=====================================================================================
struct SWaveFileHeader
{
//the main chunk
unsigned char m_szChunkID[4];      //0
uint32        m_nChunkSize;        //4
unsigned char m_szFormat[4];       //8

//sub chunk 1 "fmt "
unsigned char m_szSubChunk1ID[4];  //12
uint32        m_nSubChunk1Size;    //16
uint16        m_nAudioFormat;      //20
uint16        m_nNumChannels;      //22
uint32        m_nSampleRate;       //24
uint32        m_nByteRate;         //28
uint16        m_nBlockAlign;       //32
uint16        m_nBitsPerSample;    //34

//sub chunk 2 "data"
unsigned char m_szSubChunk2ID[4];  //36
uint32        m_nSubChunk2Size;    //40

//then comes the data!
};

//this writes a wave file
template <typename T>
bool WriteWaveFile(const char *fileName, const std::vector<TAmplitude> &samples, const SSoundSettings &sound)
{
//open the file if we can
FILE *file = fopen(fileName, "w+b");
if (!file)
return false;

//calculate bits per sample and the data size
const int32 bitsPerSample = sizeof(T) * 8;
const int dataSize = samples.size() * sizeof(T);

//fill out the main chunk
SWaveFileHeader waveHeader;
memcpy(waveHeader.m_szChunkID, "RIFF", 4);
waveHeader.m_nChunkSize = dataSize + 36;
memcpy(waveHeader.m_szFormat, "WAVE", 4);

//fill out sub chunk 1 "fmt "
memcpy(waveHeader.m_szSubChunk1ID, "fmt ", 4);
waveHeader.m_nSubChunk1Size = 16;
waveHeader.m_nAudioFormat = 1;
waveHeader.m_nNumChannels = 1;
waveHeader.m_nSampleRate = sound.m_sampleRate.Value();
waveHeader.m_nByteRate = sound.m_sampleRate.Value() * 1 * bitsPerSample / 8;
waveHeader.m_nBlockAlign = 1 * bitsPerSample / 8;
waveHeader.m_nBitsPerSample = bitsPerSample;

//fill out sub chunk 2 "data"
memcpy(waveHeader.m_szSubChunk2ID, "data", 4);
waveHeader.m_nSubChunk2Size = dataSize;

//write the header
fwrite(&waveHeader, sizeof(SWaveFileHeader), 1, file);

//write the wave data itself, converting it from float to the type specified
std::vector<T> outSamples;
outSamples.resize(samples.size());
for (size_t index = 0; index < samples.size(); ++index)
outSamples[index] = AmplitudeToAudioSample<T>(samples[index]);
fwrite(&outSamples[0], dataSize, 1, file);

//close the file and return success
fclose(file);
return true;
}

//loads a wave file in.  Converts from source format into the specified format
// TOTAL HONESTY: some wave files seem to have problems being loaded through this function and I don't have
// time to investigate why.  It seems to work best with 16 bit mono wave files.
// If you need more robust file loading, check out libsndfile at http://www.mega-nerd.com/libsndfile/
bool ReadWaveFile(const char *fileName, std::vector<TAmplitude>& samples, int32 sampleRate)
{
//open the file if we can
FILE *File = fopen(fileName, "rb");
if (!File)
{
return false;
}

//read the main chunk ID and make sure it's "RIFF"
char buffer[5];
memset(buffer, 0, 5);
if (fread(buffer, 4, 1, File) != 1 || strcmp(buffer, "RIFF"))
{
fclose(File);
return false;
}

uint32 nChunkSize;
if (fread(&nChunkSize, 4, 1, File) != 1)
{
fclose(File);
return false;
}

//read the format and make sure it's "WAVE"
if (fread(buffer, 4, 1, File) != 1 || strcmp(buffer, "WAVE"))
{
fclose(File);
return false;
}

long chunkPosFmt = -1;
long chunkPosData = -1;

while (chunkPosFmt == -1 || chunkPosData == -1)
{
//read a sub chunk id and a chunk size if we can
if (fread(buffer, 4, 1, File) != 1 || fread(&nChunkSize, 4, 1, File) != 1)
{
fclose(File);
return false;
}

//if we hit a fmt
if (!strcmp(buffer, "fmt "))
{
chunkPosFmt = ftell(File) - 8;
}
//else if we hit a data
else if (!strcmp(buffer, "data"))
{
chunkPosData = ftell(File) - 8;
}

fseek(File, nChunkSize, SEEK_CUR);
}

//we'll use this handy struct to load in
SWaveFileHeader waveData;
//load the fmt part if we can
fseek(File, chunkPosFmt, SEEK_SET);
if (fread(&waveData.m_szSubChunk1ID, 24, 1, File) != 1)
{
fclose(File);
return false;
}

//load the data part if we can
fseek(File, chunkPosData, SEEK_SET);
if (fread(&waveData.m_szSubChunk2ID, 8, 1, File) != 1)
{
fclose(File);
return false;
}

//verify a couple things about the file data
if (waveData.m_nAudioFormat != 1 ||       //only pcm data
(waveData.m_nNumChannels != 1 &&
waveData.m_nNumChannels != 2) ||      //must be mono or stereo
waveData.m_nBitsPerSample > 32 ||     //32 bits per sample max
waveData.m_nBitsPerSample % 8 != 0 || //must be a multiple of 8 bits
waveData.m_nBlockAlign > 8)           //blocks must be 8 bytes or lower
{
fclose(File);
return false;
}

//figure out how many samples and blocks there are total in the source data
int nBytesPerBlock = waveData.m_nBlockAlign;
int nNumBlocks = waveData.m_nSubChunk2Size / nBytesPerBlock;
int nNumSourceSamples = nNumBlocks * waveData.m_nNumChannels;

//allocate space for the source samples
samples.resize(nNumSourceSamples);

//maximum size of a block is 8 bytes:  4 bytes per sample, 2 channels
unsigned char pBlockData[8];
memset(pBlockData, 0, 8);

//read in the source samples at whatever sample rate / number of channels it might be in
int nBytesPerSample = nBytesPerBlock / waveData.m_nNumChannels;
for (int nIndex = 0; nIndex < nNumSourceSamples; nIndex += waveData.m_nNumChannels)
{
//read the next block
if (fread(pBlockData, nBytesPerBlock, 1, File) != 1)
{
fclose(File);
return false;
}

//convert each channel's sample to a float amplitude
for (int nChannel = 0; nChannel < waveData.m_nNumChannels; ++nChannel)
samples[nIndex + nChannel] = TAmplitude(PCMToFloat(&pBlockData[nChannel * nBytesPerSample], nBytesPerSample));
}

//convert the data to mono if it isn't already
ChangeNumChannels(samples, waveData.m_nNumChannels, 1);

//resample if needed
ResampleData(samples, waveData.m_nSampleRate, sampleRate);

//close the file and return success
fclose(File);
return true;
}

//=====================================================================================
// Oscillators
//=====================================================================================
TAmplitude AdvanceOscilator_Sine(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
phase += TPhase(frequency.Value() / (float)sampleRate.Value());
while (phase >= TPhase(1.0f))
phase -= TPhase(1.0f);
while (phase < TPhase(0.0f))
phase += TPhase(1.0f);
return TAmplitude(sin(phase.Value()*c_twoPi));
}

TAmplitude AdvanceOscilator_Saw(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
phase += TPhase(frequency.Value() / (float)sampleRate.Value());
while (phase >= TPhase(1.0f))
phase -= TPhase(1.0f);
while (phase < TPhase(0.0f))
phase += TPhase(1.0f);
return TAmplitude((phase.Value() * 2.0f) - 1.0f);
}

TAmplitude AdvanceOscilator_Square(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
phase += TPhase(frequency.Value() / (float)sampleRate.Value());
while (phase >= TPhase(1.0f))
phase -= TPhase(1.0f);
while (phase < TPhase(0.0f))
phase += TPhase(1.0f);
return TAmplitude(phase.Value() > 0.5f ? 1.0f : -1.0f);
}

TAmplitude AdvanceOscilator_Triangle(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
phase += TPhase(frequency.Value() / (float)sampleRate.Value());
while (phase >= TPhase(1.0f))
phase -= TPhase(1.0f);
while (phase < TPhase(0.0f))
phase += TPhase(1.0f);
if (phase > TPhase(0.5f))
return TAmplitude((((1.0f - phase.Value()) * 2.0f) * 2.0f) - 1.0f);
else
return TAmplitude(((phase.Value() * 2.0f) * 2.0f) - 1.0f);
}

TAmplitude AdvanceOscilator_Saw_BandLimited(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
phase += TPhase(frequency.Value() / (float)sampleRate.Value());
while (phase >= TPhase(1.0f))
phase -= TPhase(1.0f);
while (phase < TPhase(0.0f))
phase += TPhase(1.0f);
// sum the harmonics
TAmplitude ret(0.0f);
for (int harmonicIndex = 1; harmonicIndex <= 4; ++harmonicIndex)
{
TPhase harmonicPhase = phase * TPhase((float)harmonicIndex);
ret += TAmplitude(sin(harmonicPhase.Value()*c_twoPi) / (float)harmonicIndex);
}

ret *= TAmplitude(2.0f / c_pi);

return ret;
}

TAmplitude AdvanceOscilator_Square_BandLimited(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
phase += TPhase(frequency.Value() / (float)sampleRate.Value());
while (phase >= TPhase(1.0f))
phase -= TPhase(1.0f);
while (phase < TPhase(0.0f))
phase += TPhase(1.0f);
// sum the harmonics
TAmplitude ret(0.0f);
for (int harmonicIndex = 1; harmonicIndex <= 4; ++harmonicIndex)
{
float harmonicFactor = (float)harmonicIndex * 2.0f - 1.0f;
TPhase harmonicPhase = phase * TPhase(harmonicFactor);
ret += TAmplitude(sin(harmonicPhase.Value()*c_twoPi) / harmonicFactor);
}

ret *= TAmplitude(4.0f / c_pi);

return ret;
}

TAmplitude AdvanceOscilator_Triangle_BandLimited(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
phase += TPhase(frequency.Value() / (float)sampleRate.Value());
while (phase >= TPhase(1.0f))
phase -= TPhase(1.0f);
while (phase < TPhase(0.0f))
phase += TPhase(1.0f);
// sum the harmonics
TAmplitude ret(0.0f);
TAmplitude signFlip(1.0f);
for (int harmonicIndex = 1; harmonicIndex <= 4; ++harmonicIndex)
{
float harmonicFactor = (float)harmonicIndex * 2.0f - 1.0f;
TPhase harmonicPhase = phase * TPhase(harmonicFactor);
ret += TAmplitude(sin(harmonicPhase.Value()*c_twoPi) / (harmonicFactor*harmonicFactor)) * signFlip;
signFlip *= TAmplitude(-1.0f);
}

ret *= TAmplitude(8.0f / (c_pi*c_pi));

return ret;
}

//=====================================================================================
// Main
//=====================================================================================
int main(int argc, char **argv)
{
// wetness parameter of reverb.  It's a percentage from 0 to 1.
TAmplitude c_reverbWetness(1.0f);

// keep track of when the process started so we can report how long it took
auto start = std::chrono::system_clock::now().time_since_epoch();

//our desired sound parameters
SSoundSettings sound;
sound.m_sampleRate = TSamples(44100);

// load the input wave file if we can and normalize it
std::vector<TAmplitude> inputData;
if (!ReadWaveFile("in.wav", inputData, sound.m_sampleRate.Value()))
{
printf("could not load input wave file!\n");
return 0;
}
NormalizeSamples(inputData, TAmplitude(1.0f), TSamples(0));

// load the reverb wave file if we can and normalize it
std::vector<TAmplitude> reverbData;
if (!ReadWaveFile("reverb.wav", reverbData, sound.m_sampleRate.Value()))
{
printf("could not load reverb wave file!\n");
return 0;
}
NormalizeSamples(reverbData, TAmplitude(1.0f), TSamples(0));

// allocate space for the output file - which will be the number of samples in the two input files minus 1
// initialize it to silence!
std::vector<TAmplitude> samples;
samples.resize(inputData.size() + reverbData.size() - 1);
std::fill(samples.begin(), samples.end(), TAmplitude(0.0f));

// reverse the reverb data in preparation of convolution
std::reverse(reverbData.begin(), reverbData.end());

// report the number of samples of each file
printf("Input samples = %u\n", (uint32)inputData.size());
printf("Reverb samples = %u\n", (uint32)reverbData.size());

// apply the convolution for each output sample
float lastPercentageReported = 0.0f;
char percentageText[512];
percentageText[0] = 0;
printf("Progress: ");
for (TSamples sampleIndex(0), numSamples(samples.size()); sampleIndex < numSamples; ++sampleIndex)
{
// print some periodic output since this can take a while!
float percentage = sampleIndex.Divide<float>(numSamples);
if (percentage >= lastPercentageReported + 0.01f)
{
// erase the last progress text
for (int i = 0, c = strlen(percentageText); i < c; ++i)
printf("%c %c", 8, 8);

// calculate and show progress text
lastPercentageReported = percentage;
auto duration = std::chrono::system_clock::now().time_since_epoch() - start;
auto millis = std::chrono::duration_cast<std::chrono::milliseconds>(duration).count();
float expectedMillis = millis / percentage;
sprintf(percentageText, "%i%%  %0.2f / %0.2f seconds (%0.2f remaining)", int(percentage*100.0f), ((float)millis) / 1000.0f, expectedMillis / 1000.0f, (expectedMillis - millis) / 1000.0f);
printf("%s", percentageText);
}

// get a reference to our output sample
TAmplitude &outputSample = samples[sampleIndex.Value()];

// figure out the first reverb index and input index we should process.
TSamples startReverbIndex(0);
TSamples startInputIndex(0);
if (sampleIndex < TSamples(reverbData.size()))
startReverbIndex = TSamples(reverbData.size()) - sampleIndex - TSamples(1);
else
startInputIndex = sampleIndex - TSamples(reverbData.size()) + TSamples(1);

// for each reverb sample
for (TSamples reverbIndex(startReverbIndex), numReverbSamples(reverbData.size()), inputIndex(startInputIndex), numInputSamples(inputData.size());
reverbIndex < numReverbSamples && inputIndex < numInputSamples;
++reverbIndex, ++inputIndex)
{
const TAmplitude &inputSample = inputData[inputIndex.Value()];
const TAmplitude &reverbSample = reverbData[reverbIndex.Value()];
outputSample += inputSample * reverbSample;
}
}

// normalize the reverb to the wetness volume level
NormalizeSamples(samples, c_reverbWetness, TSamples(0));

// apply the dry sound on top, per the wetness settings
for (TSamples inputIndex = TSamples(0), numInputSamples(inputData.size()); inputIndex < numInputSamples; ++inputIndex)
{
const TAmplitude &inputSample = inputData[inputIndex.Value()];
TAmplitude &outputSample = samples[inputIndex.Value()];
outputSample += inputSample * (TAmplitude(1.0f) - c_reverbWetness);
}

// normalize the amplitude of the samples to make sure they are as loud as possible without clipping.
// give 3db of headroom and also put an envelope on the front and back that is 50ms
NormalizeSamples(samples, DBToAmplitude(TDecibels(-3.0f)), MilliSecondsToSamples(sound, 50.0f));

// save as a wave file
WriteWaveFile<int16>("out.wav", samples, sound);
return 0;
}
```

## Intuition

I had to think about it for a bit, but the intuition for why this works is actually pretty straightforward!

The impulse response represents all the echos that occur over time if you make a single sample of 1.0 amplitude. This is your guide for how all samples in an impulse source (IS) should be transformed, where IS is the sound you want to add reverb to.

So, starting out, you put the entire IR into your output buffer starting at output[0] (the first sample in the output buffer), multiplied by IS[0] (the first sample of the sound you are reverbing).

This leaves you with the proper reverb of the first sample in your IS, and your total output buffer length is the same length as your IR.

We only have one sample reverbed though, so we move to the next…

Next, you put the entire IR into the output buffer again, but this time start at output[1] (the second sample in the output buffer), and those IR values you are writing need to be multiplied by IS[1] (the second sample of the sound you are reverbing).

You now have the proper reverb for the first two samples in your IS, and your total output buffer length is the length of your IR + 1.

Rinse and repeat for all samples in your IS, and at the end, the reverb of all samples in your IS will be accounted for in your output buffer, and the size of your output buffer will be the length of your IR plus the length of your IS, minus one.

Pretty simple and intuitive right?

You are essentially playing the IR in the output buffer length(IS) times, where IS[outputBufferIndex] determines the volume (amplitude) that you need to play the IR at.

Convolution does exactly this, which turns out to be pretty slow because it is a lot of math to perform.

If you have a source file (IS) you are reverbing that is 4 seconds long, running at 44100 samples per second (176,400 samples total), and you have a reverb impulse response (IR) that is 2 seconds long at 44100 samples per second (88,200 samples total), that means that you are going to essentially be mixing the IR into an output buffer 176,400 times.

Each time you play the IR you need to do a multiplication per IR sample (to scale it to the IS[index] amplitude) and an addition to add it to the resulting output buffer.

At 88,200 IR samples with a multiply and add for each sample, and doing that 176,400 times, that means at the end you will need to do 15,558,480,000 (15.5 billion) multiplies and adds.
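As a sanity check on that number (the function name here is mine, just for illustration):

```cpp
#include <cstdint>

// One multiply and one add for every (input sample, IR sample) pair.
uint64_t ConvolutionMultiplyAdds(uint64_t inputSamples, uint64_t irSamples)
{
    return inputSamples * irSamples;
}
```

`ConvolutionMultiplyAdds(4 * 44100, 2 * 44100)` comes out to 15,558,480,000, matching the figure above.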

And remember… that is only for a 4 second sound that is receiving a 2 second reverb… those are pretty small sound files involved! And, that is only a single channel. That would double in stereo, and be even worse in 5.1 or 7.1 surround sound!

So, this method works, but unfortunately it’s pretty darn slow (it took like 5 minutes for me to convolve the legend quote with the large reverb). It could be made faster by introducing threading and SSE instructions, but there is a better way that gives us an even bigger speed up.

Having audio samples means you are working in the “time domain”. If you take a Fourier transform of the audio samples, you’ll have the information in the “frequency domain”. As it turns out, a multiplication in the frequency domain is equivalent to a convolution in the time domain. Using the fast Fourier transform on your input sound, multiplying that by the pre-transformed FFT of the impulse response, and then doing an inverse FFT on the result to get back into the time domain (aka audio samples) is MUCH FASTER than doing actual convolution.
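Here is a minimal sketch of that idea, using a textbook recursive radix-2 Cooley-Tukey FFT. All names are mine, and a real implementation would use an optimized FFT library such as FFTW or KissFFT instead of this recursive version:

```cpp
#include <complex>
#include <vector>
#include <cstddef>

// Minimal recursive radix-2 FFT (size must be a power of two).
void FFT(std::vector<std::complex<double>>& a, bool inverse)
{
    const size_t n = a.size();
    if (n <= 1)
        return;
    std::vector<std::complex<double>> even(n / 2), odd(n / 2);
    for (size_t i = 0; i < n / 2; ++i)
    {
        even[i] = a[2 * i];
        odd[i] = a[2 * i + 1];
    }
    FFT(even, inverse);
    FFT(odd, inverse);
    const double pi = 3.14159265358979323846;
    const double sign = inverse ? 1.0 : -1.0;
    for (size_t k = 0; k < n / 2; ++k)
    {
        std::complex<double> t = std::polar(1.0, sign * 2.0 * pi * (double)k / (double)n) * odd[k];
        a[k] = even[k] + t;
        a[k + n / 2] = even[k] - t;
    }
}

// Convolution via the convolution theorem: zero pad both signals to a power
// of two >= lenA + lenB - 1, FFT both, multiply pointwise, then inverse FFT.
std::vector<double> ConvolveFFT(const std::vector<double>& a, const std::vector<double>& b)
{
    const size_t outLen = a.size() + b.size() - 1;
    size_t n = 1;
    while (n < outLen)
        n *= 2;
    std::vector<std::complex<double>> fa(n), fb(n);
    for (size_t i = 0; i < a.size(); ++i)
        fa[i] = a[i];
    for (size_t i = 0; i < b.size(); ++i)
        fb[i] = b[i];
    FFT(fa, false);
    FFT(fb, false);
    for (size_t i = 0; i < n; ++i)
        fa[i] *= fb[i];
    FFT(fa, true);
    std::vector<double> out(outLen);
    for (size_t i = 0; i < outLen; ++i)
        out[i] = fa[i].real() / (double)n; // the inverse transform needs a 1/n scale
    return out;
}
```

This produces the same numbers as direct convolution, but in O(n log n) time instead of O(n²), which is where the big speedup comes from.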

Another problem with convolution reverb is that you need the full sound you are convolving, which makes it basically impossible to use in a live setup. The solution to this is that people convolve windows of data at a time, instead of the whole sound. I think a common window size is 256 samples.

I’ll make a post in the future that addresses both those issues to allow for fast, real time convolution reverb via windowed FFT multiplication.

Also, I said in the last post that this technique is what the pros do. Well, I should add that this is what they do SOMETIMES. Other times they use DSP algorithms involving comb filters and multi tap delays (and other things) to make fast reverb that sounds pretty decent without needing to do convolution or FFT/IFFT. Check the links section for a link to the details of one such algorithm.

Regarding IR samples (impulse response recordings), if you record your own, it should work as is. However, a common thing people do is remove the loud source sound from the recording (via deconvolution, I think?) to get ONLY the impulse response. It basically makes it so the IR really is just the IR of that room, as if you had a single sample of a “1.0” echoing in the area. Without this step, your reverb convolution will still work, but you’ll get “smearing” of the sound, due to it not being a true 1.0 sample (or in other words, not a true dirac delta).

Luckily there are IR’s available for download online as well. Some are free, some are pay. Check the links section for a few links I found for that (:

Lastly, this post basically teaches you how to do 1 dimensional convolution, showing you an application for it. You can do convolution in higher dimensions as well though, and in fact if you have used photoshop and know about things like gaussian blur, sharpen mask, etc, those guys work by 2d convolution. Bokeh is even apparently convolution, as is a “Minkowski Sum” if you have done work in game physics.

I'll make a graphics related 2d convolution post in the future as well to delve into that stuff a bit deeper.

As one final sound sample, check out this CROSS CORRELATION (not convolution) of SoundBite.wav and ReverbLarge. Cross correlation is the same as convolution except you don’t reverse one of the samples. So, in effect, it’s like I convolved SoundBite.wav with ReverbLarge.wav played backwards.

c_ReverseRL_SBite.wav

More info on cross correlation coming soon. It has some audible uses, but more often it’s used for finding the time delay of one sound relative to another, or for recognizing patterns in sound data even in the presence of noise or distortion.

# DIY Synth: Multitap Reverb

This is a part of the DIY Synthesizer series of posts where each post is roughly built upon the knowledge of the previous posts. If you are lost, check the earlier posts!

Reverb is similar to delay in that it adds echoes to a sound. It’s different, though, in that whereas delay adds one echo (even if that echo repeats itself, quieter each time), reverb adds several echoes to the sound. Basically, reverb is what makes things sound like they are played in a church, or a deep cavern, or a small bathroom. Delay just makes things sound a bit echoey.

Multitap reverb is the most ghetto of reverb techniques. It’s kind of hacky, it takes a lot of manual tweaking, and in the end, usually doesn’t sound very good. It is less computationally expensive compared to other reverb techniques though, so if that’s a concern of yours, or you don’t want to be bothered with the more complex and sophisticated reverb techniques, multitap reverb may be a good solution for you!

Here are some audio samples for you to hear the effect in action:

|                 | Legend Quote   | Cymbals        |
|-----------------|----------------|----------------|
| Raw             | legend.wav     | cymbal.wav     |
| Multitap Reverb | legend_mtr.wav | cymbal_mtr.wav |

## Technique

The technique is pretty straightforward. You have a circular buffer holding the last N output samples, and when you process an incoming sample, you read multiple samples (taps) from that buffer, each multiplied by a different volume (amplitude), and add them to the incoming sample to get the outgoing sample. You then also write that outgoing sample into the reverb buffer at the current index of the circular buffer.

Here is some pseudocode about how it might be implemented:

```// The size of the delay buffer is controlled by the maximum time parameter of all taps
// make sure the buffer is initialized with silence
reverbBuffer[reverbSamples] = 0;

// start the circular buffer index at 0
reverbIndex = 0;

// process each input sample
for (i = 0; i < numSamples; ++i)
{
// calculate the output sample, which is the input sample plus all the taps from the reverb buffer
outSample[i] = inSample[i];
for (j = 0; j < numTaps; ++j)
{
// wrap the tap index around zero when the tap time is greater than the current reverbIndex
tapIndex = reverbIndex - taps[j].tapTime;
if (tapIndex < 0)
tapIndex += reverbSamples;
outSample[i] += reverbBuffer[tapIndex] * taps[j].feedbackMultiplier;
}

// also store this output sample in the reverb buffer
reverbBuffer[reverbIndex] = outSample[i];

// advance the circular buffer index
reverbIndex++;
if (reverbIndex >= reverbSamples)
reverbIndex = 0;
}
```

Here are the times and amplitudes of the taps used in the sample code below and in the reverb processed samples above. Each amplitude is given both in dB and as a raw amplitude value, so you can use whichever you are more comfortable with.

| Time (ms) | dB | Amplitude |
|---|---|---|
| 79 | -25 | 0.0562 |
| 130 | -23 | 0.0707 |
| 230 | -15 | 0.1778 |
| 340 | -23 | 0.0707 |
| 470 | -17 | 0.1412 |
| 532 | -21 | 0.0891 |
| 662 | -13 | 0.2238 |

With some more effort, you could likely come up with some better tap values to make the reverb sound better.

Also, I was going for a cavernous feel, but you could come up with specific taps to make things feel smaller instead.

You have to be careful when setting up your taps so that the overall volume diminishes over time instead of grows. If you put too much acoustic energy back into the reverb buffer, the reverbed sound will get louder and louder over time instead of things dying out giving you that nice diminishing echo sound.

## Sample Code

Here’s sample code that loads in.wav, processes it with the taps mentioned above, and writes it out as out.wav. As per usual, the wave loading code has some issues with certain wave formats. If you need better sound file loading, check out libsndfile!

```#define _CRT_SECURE_NO_WARNINGS

#include <array>
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <cmath>
#include <vector>
#include <algorithm>
#include <limits>

#define _USE_MATH_DEFINES
#include <math.h>

//=====================================================================================
// SNumeric - uses phantom types to enforce type safety
//=====================================================================================
template <typename T, typename PHANTOM_TYPE>
struct SNumeric
{
public:
explicit SNumeric(const T &value) : m_value(value) { }
SNumeric() : m_value() { }
inline T& Value() { return m_value; }
inline const T& Value() const { return m_value; }

typedef SNumeric<T, PHANTOM_TYPE> TType;
typedef T TInnerType;

// Math Operations
TType operator+ (const TType &b) const
{
return TType(this->Value() + b.Value());
}

TType operator- (const TType &b) const
{
return TType(this->Value() - b.Value());
}

TType operator* (const TType &b) const
{
return TType(this->Value() * b.Value());
}

TType operator/ (const TType &b) const
{
return TType(this->Value() / b.Value());
}

TType& operator+= (const TType &b)
{
Value() += b.Value();
return *this;
}

TType& operator-= (const TType &b)
{
Value() -= b.Value();
return *this;
}

TType& operator*= (const TType &b)
{
Value() *= b.Value();
return *this;
}

TType& operator/= (const TType &b)
{
Value() /= b.Value();
return *this;
}

TType& operator++ ()
{
Value()++;
return *this;
}

TType& operator-- ()
{
Value()--;
return *this;
}

// Extended Math Operations
template <typename R>
R Divide(const TType &b)
{
return ((R)this->Value()) / ((R)b.Value());
}

// Logic Operations
bool operator< (const TType &b) const {
return this->Value() < b.Value();
}
bool operator<= (const TType &b) const {
return this->Value() <= b.Value();
}
bool operator> (const TType &b) const {
return this->Value() > b.Value();
}
bool operator>= (const TType &b) const {
return this->Value() >= b.Value();
}
bool operator== (const TType &b) const {
return this->Value() == b.Value();
}
bool operator!= (const TType &b) const {
return this->Value() != b.Value();
}

private:
T m_value;
};

//=====================================================================================
// Typedefs
//=====================================================================================

typedef uint8_t uint8;
typedef uint16_t uint16;
typedef uint32_t uint32;
typedef int16_t int16;
typedef int32_t int32;

// type safe types!
typedef SNumeric<float, struct S__Frequency>      TFrequency;
typedef SNumeric<uint32, struct S__TimeMs>        TTimeMs;
typedef SNumeric<uint32, struct S__Samples>       TSamples;
typedef SNumeric<float, struct S__FractSamples>   TFractionalSamples;
typedef SNumeric<float, struct S__Decibels>       TDecibels;
typedef SNumeric<float, struct S__Amplitude>      TAmplitude;
typedef SNumeric<float, struct S__Phase>          TPhase;

//=====================================================================================
// Constants
//=====================================================================================

static const float c_pi = (float)M_PI;
static const float c_twoPi = c_pi * 2.0f;

//=====================================================================================
// Structs
//=====================================================================================

struct SSoundSettings
{
TSamples        m_sampleRate;
TTimeMs         m_lengthMs;
TSamples        m_currentSample;
};

//=====================================================================================
// CMultiTapReverb -> the multi tap reverb object
//=====================================================================================

struct SReverbTap
{
TSamples    m_timeOffset;
TAmplitude  m_feedback;
};

class CMultitapReverb
{
public:
CMultitapReverb(const std::vector<SReverbTap>& taps)
: m_sampleIndex(0)
{
// copy the taps table
m_taps = taps;

// find out the largest tap time offset so we know how big to make the buffer
TSamples largestTimeOffset(0);
std::for_each(m_taps.begin(), m_taps.end(),
[&largestTimeOffset](const SReverbTap& tap)
{
if (tap.m_timeOffset > largestTimeOffset)
largestTimeOffset = tap.m_timeOffset;
}
);

// if it's 0, bail out, we are done
if (largestTimeOffset.Value() == 0)
return;

// else resize our internal buffer and fill it with silence
m_samples.resize(largestTimeOffset.Value()+1);
std::fill(m_samples.begin(), m_samples.end(), TAmplitude(0.0f));
}

TAmplitude ProcessSample(TAmplitude sample)
{
// if no taps, or none with any time value, bail out!
if (m_samples.size() == 0)
return sample;

// take our taps from the delay buffer
TAmplitude outSample = sample;
std::for_each(m_taps.begin(), m_taps.end(),
[&outSample, this](const SReverbTap& tap)
{
size_t tapSampleIndex;
if (tap.m_timeOffset.Value() > m_sampleIndex)
tapSampleIndex = m_samples.size() - 1 - (tap.m_timeOffset.Value() - m_sampleIndex);
else
tapSampleIndex = m_sampleIndex - tap.m_timeOffset.Value();

outSample += m_samples[tapSampleIndex] * tap.m_feedback;
}
);

// put the output sample into the buffer
m_samples[m_sampleIndex] = outSample;

// advance the circular buffer index
m_sampleIndex++;
if (m_sampleIndex >= m_samples.size())
m_sampleIndex = 0;

// return the reverbed sample
return outSample;
}

private:
std::vector<SReverbTap> m_taps;
std::vector<TAmplitude> m_samples;
size_t                  m_sampleIndex;
};

//=====================================================================================
// Conversion Functions
//=====================================================================================
inline TDecibels AmplitudeToDB(TAmplitude volume)
{
return TDecibels(20.0f * log10(volume.Value()));
}

inline TAmplitude DBToAmplitude(TDecibels dB)
{
return TAmplitude(pow(10.0f, dB.Value() / 20.0f));
}

TSamples SecondsToSamples(const SSoundSettings &s, float seconds)
{
return TSamples((int)(seconds * (float)s.m_sampleRate.Value()));
}

TSamples MilliSecondsToSamples(const SSoundSettings &s, float milliseconds)
{
return SecondsToSamples(s, milliseconds / 1000.0f);
}

TTimeMs SecondsToMilliseconds(float seconds)
{
return TTimeMs((uint32)(seconds * 1000.0f));
}

TFrequency Frequency(float octave, float note)
{
/* frequency = 440×(2^(n/12))
Notes:
0  = A
1  = A#
2  = B
3  = C
4  = C#
5  = D
6  = D#
7  = E
8  = F
9  = F#
10 = G
11 = G# */
return TFrequency((float)(440 * pow(2.0, ((double)((octave - 4) * 12 + note)) / 12.0)));
}

template <typename T>
T AmplitudeToAudioSample(const TAmplitude& in)
{
const T c_min = std::numeric_limits<T>::min();
const T c_max = std::numeric_limits<T>::max();
const float c_minFloat = (float)c_min;
const float c_maxFloat = (float)c_max;

float ret = in.Value() * c_maxFloat;

if (ret < c_minFloat)
return c_min;

if (ret > c_maxFloat)
return c_max;

return (T)ret;
}

TAmplitude GetLerpedAudioSample(const std::vector<TAmplitude>& samples, TFractionalSamples& index)
{
// get the index of each sample and the fractional blend amount
uint32 a = (uint32)floor(index.Value());
uint32 b = a + 1;
float fract = index.Value() - floor(index.Value());

// get our two amplitudes
float ampA = 0.0f;
if (a >= 0 && a < samples.size())
ampA = samples[a].Value();

float ampB = 0.0f;
if (b >= 0 && b < samples.size())
ampB = samples[b].Value();

// return the lerped result
return TAmplitude(fract * ampB + (1.0f - fract) * ampA);
}

void NormalizeSamples(std::vector<TAmplitude>& samples, TAmplitude maxAmplitude)
{
// nothing to do if no samples
if (samples.size() == 0)
return;

// 1) find the largest absolute value in the samples.
TAmplitude largestAbsVal = TAmplitude(abs(samples.front().Value()));
std::for_each(samples.begin() + 1, samples.end(), [&largestAbsVal](const TAmplitude &a)
{
TAmplitude absVal = TAmplitude(abs(a.Value()));
if (absVal > largestAbsVal)
largestAbsVal = absVal;
}
);

// 2) adjust largestAbsVal so that when we divide all samples, none will be bigger than maxAmplitude
// if the value we are going to divide by is <= 0, bail out
largestAbsVal /= maxAmplitude;
if (largestAbsVal <= TAmplitude(0.0f))
return;

// 3) divide all numbers by the largest absolute value seen so all samples are [-maxAmplitude,+maxAmplitude]
std::for_each(samples.begin(), samples.end(), [&largestAbsVal](TAmplitude &a)
{
a /= largestAbsVal;

}
);
}

void ResampleData(std::vector<TAmplitude>& samples, int srcSampleRate, int destSampleRate)
{
//if the requested sample rate is the sample rate it already is, bail out and do nothing
if (srcSampleRate == destSampleRate)
return;

//calculate the ratio of the old sample rate to the new
float fResampleRatio = (float)destSampleRate / (float)srcSampleRate;

//calculate how many samples the new data will have and allocate the new sample data
int nNewDataNumSamples = (int)((float)samples.size() * fResampleRatio);

std::vector<TAmplitude> newSamples;
newSamples.resize(nNewDataNumSamples);

//get each lerped output sample.  There are higher quality ways to resample
for (int nIndex = 0; nIndex < nNewDataNumSamples; ++nIndex)
newSamples[nIndex] = GetLerpedAudioSample(samples, TFractionalSamples((float)nIndex / fResampleRatio));

//free the old data and set the new data
std::swap(samples, newSamples);
}

void ChangeNumChannels(std::vector<TAmplitude>& samples, int nSrcChannels, int nDestChannels)
{
//if the number of channels requested is the number of channels already there, or either number of channels is not mono or stereo, return
if (nSrcChannels == nDestChannels ||
nSrcChannels < 1 || nSrcChannels > 2 ||
nDestChannels < 1 || nDestChannels > 2)
{
return;
}

//if converting from mono to stereo, duplicate the mono channel to make stereo
if (nDestChannels == 2)
{
std::vector<TAmplitude> newSamples;
newSamples.resize(samples.size() * 2);
for (size_t index = 0; index < samples.size(); ++index)
{
newSamples[index * 2] = samples[index];
newSamples[index * 2 + 1] = samples[index];
}

std::swap(samples, newSamples);
}
//else converting from stereo to mono, mix the stereo channels together to make mono
else
{
std::vector<TAmplitude> newSamples;
newSamples.resize(samples.size() / 2);
for (size_t index = 0; index < samples.size() / 2; ++index)
newSamples[index] = samples[index * 2] + samples[index * 2 + 1];

std::swap(samples, newSamples);
}
}

float PCMToFloat(unsigned char *pPCMData, int nNumBytes)
{
switch (nNumBytes)
{
case 1:
{
uint8 data = pPCMData[0];
return (float)data / 255.0f;
}
case 2:
{
int16 data = (int16)(pPCMData[1] << 8 | pPCMData[0]);
return ((float)data) / ((float)0x00007fff);
}
case 3:
{
int32 data = ((int32)pPCMData[2]) << 16 | ((int32)pPCMData[1]) << 8 | ((int32)pPCMData[0]);
return ((float)data) / ((float)0x007fffff);
}
case 4:
{
int32 data = ((int32)pPCMData[3]) << 24 | ((int32)pPCMData[2]) << 16 | ((int32)pPCMData[1]) << 8 | ((int32)pPCMData[0]);
return ((float)data) / ((float)0x7fffffff);
}
default:
{
return 0.0f;
}
}
}

//=====================================================================================
// Wave File Writing Code
//=====================================================================================
struct SMinimalWaveFileHeader
{
//the main chunk
unsigned char m_szChunkID[4];   //0
uint32        m_nChunkSize;        //4
unsigned char m_szFormat[4];    //8

//sub chunk 1 "fmt "
unsigned char m_szSubChunk1ID[4]; //12
uint32        m_nSubChunk1Size;    //16
uint16        m_nAudioFormat;      //18
uint16        m_nNumChannels;      //20
uint32        m_nSampleRate;       //24
uint32        m_nByteRate;         //28
uint16        m_nBlockAlign;       //30
uint16        m_nBitsPerSample;    //32

//sub chunk 2 "data"
unsigned char m_szSubChunk2ID[4]; //36
uint32        m_nSubChunk2Size;    //40

//then comes the data!
};

//this writes a wave file
template <typename T>
bool WriteWaveFile(const char *fileName, const std::vector<TAmplitude> &samples, const SSoundSettings &sound)
{
//open the file if we can
FILE *file = fopen(fileName, "w+b");
if (!file)
return false;

//calculate bits per sample and the data size
const int32 bitsPerSample = sizeof(T) * 8;
const int dataSize = samples.size() * sizeof(T);

//fill out the main chunk
SMinimalWaveFileHeader waveHeader;
memcpy(waveHeader.m_szChunkID, "RIFF", 4);
waveHeader.m_nChunkSize = dataSize + 36;
memcpy(waveHeader.m_szFormat, "WAVE", 4);

//fill out sub chunk 1 "fmt "
memcpy(waveHeader.m_szSubChunk1ID, "fmt ", 4);
waveHeader.m_nSubChunk1Size = 16;
waveHeader.m_nAudioFormat = 1;
waveHeader.m_nNumChannels = 1;
waveHeader.m_nSampleRate = sound.m_sampleRate.Value();
waveHeader.m_nByteRate = sound.m_sampleRate.Value() * 1 * bitsPerSample / 8;
waveHeader.m_nBlockAlign = 1 * bitsPerSample / 8;
waveHeader.m_nBitsPerSample = bitsPerSample;

//fill out sub chunk 2 "data"
memcpy(waveHeader.m_szSubChunk2ID, "data", 4);
waveHeader.m_nSubChunk2Size = dataSize;

//write the header
fwrite(&waveHeader, sizeof(SMinimalWaveFileHeader), 1, file);

//write the wave data itself, converting it from float to the type specified
std::vector<T> outSamples;
outSamples.resize(samples.size());
for (size_t index = 0; index < samples.size(); ++index)
outSamples[index] = AmplitudeToAudioSample<T>(samples[index]);
fwrite(&outSamples[0], dataSize, 1, file);

//close the file and return success
fclose(file);
return true;
}

//loads a wave file in.  Converts from source format into the specified format
// TOTAL HONESTY: some wave files seem to have problems being loaded through this function and I don't have
// time to investigate why.  It seems to work best with 16 bit mono wave files.
// If you need more robust file loading, check out libsndfile at http://www.mega-nerd.com/libsndfile/
bool ReadWaveFile(const char *fileName, std::vector<TAmplitude>& samples, int32 sampleRate)
{
//open the file if we can
FILE *File = fopen(fileName, "rb");
if (!File)
{
return false;
}

//read the main chunk ID and make sure it's "RIFF"
char buffer[5];
buffer[4] = 0;
if (fread(buffer, 4, 1, File) != 1 || strcmp(buffer, "RIFF"))
{
fclose(File);
return false;
}

uint32 nChunkSize;
if (fread(&nChunkSize, 4, 1, File) != 1)
{
fclose(File);
return false;
}

//read the format and make sure it's "WAVE"
if (fread(buffer, 4, 1, File) != 1 || strcmp(buffer, "WAVE"))
{
fclose(File);
return false;
}

long chunkPosFmt = -1;
long chunkPosData = -1;

while (chunkPosFmt == -1 || chunkPosData == -1)
{
//read a sub chunk id and a chunk size if we can
if (fread(buffer, 4, 1, File) != 1 || fread(&nChunkSize, 4, 1, File) != 1)
{
fclose(File);
return false;
}

//if we hit a fmt
if (!strcmp(buffer, "fmt "))
{
chunkPosFmt = ftell(File) - 8;
}
//else if we hit a data
else if (!strcmp(buffer, "data"))
{
chunkPosData = ftell(File) - 8;
}

fseek(File, nChunkSize, SEEK_CUR);
}

//we'll use this handy struct to load in the wave file header
SMinimalWaveFileHeader waveData;

//load the fmt part if we can
fseek(File, chunkPosFmt, SEEK_SET);
if (fread(&waveData.m_szSubChunk1ID, 24, 1, File) != 1)
{
fclose(File);
return false;
}

//load the data part if we can
fseek(File, chunkPosData, SEEK_SET);
if (fread(&waveData.m_szSubChunk2ID, 8, 1, File) != 1)
{
fclose(File);
return false;
}

//verify a couple things about the file data
if (waveData.m_nAudioFormat != 1 ||       //only pcm data
waveData.m_nNumChannels < 1 ||        //must have a channel
waveData.m_nNumChannels > 2 ||        //must not have more than 2
waveData.m_nBitsPerSample > 32 ||     //32 bits per sample max
waveData.m_nBitsPerSample % 8 != 0 || //must be a multiple of 8 bits
waveData.m_nBlockAlign > 8)           //blocks must be 8 bytes or lower
{
fclose(File);
return false;
}

//figure out how many samples and blocks there are total in the source data
int nBytesPerBlock = waveData.m_nBlockAlign;
int nNumBlocks = waveData.m_nSubChunk2Size / nBytesPerBlock;
int nNumSourceSamples = nNumBlocks * waveData.m_nNumChannels;

//allocate space for the source samples
samples.resize(nNumSourceSamples);

//maximum size of a block is 8 bytes.  4 bytes per samples, 2 channels
unsigned char pBlockData[8];
memset(pBlockData, 0, 8);

//read in the source samples at whatever sample rate / number of channels it might be in
int nBytesPerSample = nBytesPerBlock / waveData.m_nNumChannels;
for (int nIndex = 0; nIndex < nNumSourceSamples; nIndex += waveData.m_nNumChannels)
{
if (fread(pBlockData, waveData.m_nBlockAlign, 1, File) != 1)
{
fclose(File);
return false;
}

//get the first sample
samples[nIndex].Value() = PCMToFloat(pBlockData, nBytesPerSample);

//get the second sample if there is one
if (waveData.m_nNumChannels == 2)
{
samples[nIndex + 1].Value() = PCMToFloat(&pBlockData[nBytesPerSample], nBytesPerSample);
}
}

//re-sample the sample rate up or down as needed
ResampleData(samples, waveData.m_nSampleRate, sampleRate);

//handle switching from mono to stereo or vice versa
ChangeNumChannels(samples, waveData.m_nNumChannels, 1);

return true;
}

//=====================================================================================
// Oscilators
//=====================================================================================

void AdvancePhase(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
phase += TPhase(frequency.Value() / (float)sampleRate.Value());
while (phase >= TPhase(1.0f))
phase -= TPhase(1.0f);
while (phase < TPhase(0.0f))
phase += TPhase(1.0f);
}

TAmplitude AdvanceOscilator_Sine(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
return TAmplitude(sin(phase.Value()*c_twoPi));
}

TAmplitude AdvanceOscilator_Saw(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
return TAmplitude(phase.Value() * 2.0f - 1.0f);
}

TAmplitude AdvanceOscilator_Square(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
return TAmplitude(phase.Value() > 0.5f ? 1.0f : -1.0f);
}

TAmplitude AdvanceOscilator_Triangle(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
if (phase > TPhase(0.5f))
return TAmplitude((((1.0f - phase.Value()) * 2.0f) * 2.0f) - 1.0f);
else
return TAmplitude(((phase.Value() * 2.0f) * 2.0f) - 1.0f);
}

TAmplitude AdvanceOscilator_Saw_BandLimited(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{

// sum the harmonics
TAmplitude ret(0.0f);
for (int harmonicIndex = 1; harmonicIndex <= 4; ++harmonicIndex)
{
TPhase harmonicPhase = phase * TPhase((float)harmonicIndex);
ret += TAmplitude(sin(harmonicPhase.Value()*c_twoPi) / (float)harmonicIndex);
}

ret *= TAmplitude(2.0f / c_pi);

return ret;
}

TAmplitude AdvanceOscilator_Square_BandLimited(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{

// sum the harmonics
TAmplitude ret(0.0f);
for (int harmonicIndex = 1; harmonicIndex <= 4; ++harmonicIndex)
{
float harmonicFactor = (float)harmonicIndex * 2.0f - 1.0f;
TPhase harmonicPhase = phase * TPhase(harmonicFactor);
ret += TAmplitude(sin(harmonicPhase.Value()*c_twoPi) / harmonicFactor);
}

ret *= TAmplitude(4.0f / c_pi);

return ret;
}

TAmplitude AdvanceOscilator_Triangle_BandLimited(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{

// sum the harmonics
TAmplitude ret(0.0f);
TAmplitude signFlip(1.0f);
for (int harmonicIndex = 1; harmonicIndex <= 4; ++harmonicIndex)
{
float harmonicFactor = (float)harmonicIndex * 2.0f - 1.0f;
TPhase harmonicPhase = phase * TPhase(harmonicFactor);
ret += TAmplitude(sin(harmonicPhase.Value()*c_twoPi) / (harmonicFactor*harmonicFactor)) * signFlip;
signFlip *= TAmplitude(-1.0f);
}

ret *= TAmplitude(8.0f / (c_pi*c_pi));

return ret;
}

//=====================================================================================
// Main
//=====================================================================================
int main(int argc, char **argv)
{
//our desired sound parameters
SSoundSettings sound;
sound.m_sampleRate = TSamples(44100);
sound.m_lengthMs = SecondsToMilliseconds(4.0f);

// create a reverb object with a list of taps
CMultitapReverb reverb(
{
{ MilliSecondsToSamples(sound,  79.0f), DBToAmplitude(TDecibels(-25.0f)) },
{ MilliSecondsToSamples(sound, 130.0f), DBToAmplitude(TDecibels(-23.0f)) },
{ MilliSecondsToSamples(sound, 230.0f), DBToAmplitude(TDecibels(-15.0f)) },
{ MilliSecondsToSamples(sound, 340.0f), DBToAmplitude(TDecibels(-23.0f)) },
{ MilliSecondsToSamples(sound, 470.0f), DBToAmplitude(TDecibels(-17.0f)) },
{ MilliSecondsToSamples(sound, 532.0f), DBToAmplitude(TDecibels(-21.0f)) },
{ MilliSecondsToSamples(sound, 662.0f), DBToAmplitude(TDecibels(-13.0f)) },
}
);

// load the wave file if we can
std::vector<TAmplitude> inputData;
if (!ReadWaveFile("in.wav", inputData, sound.m_sampleRate.Value()))
{
return 0;
}

// allocate space for the output file
std::vector<TAmplitude> samples;
samples.resize(inputData.size());

//apply the reverb effect to the file
const TSamples c_envelopeSize = MilliSecondsToSamples(sound, 50.0f);
for (TSamples index = TSamples(0), numSamples(samples.size()); index < numSamples; ++index)
{
// calculate envelope at front and end of sound.
TAmplitude envelope(1.0f);
if (index < c_envelopeSize)
envelope = TAmplitude((float)index.Value() / (float)c_envelopeSize.Value());
else if (index > (numSamples - c_envelopeSize))
envelope = TAmplitude(1.0f) - TAmplitude((float)(index - (numSamples - c_envelopeSize)).Value() / (float)c_envelopeSize.Value());

// put our input through the reverb process
TAmplitude outSample = reverb.ProcessSample(inputData[index.Value()]);

// apply the envelope for the front and back of the sound
samples[index.Value()] = outSample * envelope;
}

// normalize the amplitude of the samples to make sure they are as loud as possible without clipping
NormalizeSamples(samples, DBToAmplitude(TDecibels(-3.0f)));

// save as a wave file
WriteWaveFile<int16_t>("out.wav", samples, sound);

return 0;
}
```

Even though this is a pretty ghetto way to do reverb, it can be passable, and is not as expensive as some other reverb methods computationally.

There will soon be a post on how to do convolution reverb, which is how the pros do reverb. It also makes it a lot easier to get the exact reverb sound you want, because it lets you use a "reference recording" of the particular reverb you are after. It's cool stuff!

# DIY Synth: Delay Effect (Echo)

This is a part of the DIY Synthesizer series of posts where each post is roughly built upon the knowledge of the previous posts. If you are lost, check the earlier posts!

The delay effect is a pretty simple effect that adds echo to a sound.

Two parameters at minimum are usually exposed on delay effects: delay time and feedback.

Delay time is how long the echo is, and feedback controls how much the sound feeds back into itself – or how loud the echo is.

Implementing delay is actually super simple. You figure out how many samples you need to hold “delay time” amount of sound (using the sample rate of the sound), and then just keep a circular buffer of previous sound samples. The feedback value controls how much the sound feeds back into the delay buffer.

Here is some pseudo code for how it might be implemented:

```//The size of the delay buffer is controlled by the time parameter
// make sure the delay buffer is initialized with silence
delayBuffer[delaySamples] = 0;

// start the circular delay buffer index at 0
delayIndex = 0;

// process each input sample
for (i = 0; i < numSamples; ++i)
{
// the output sample is the input sample plus the delayed sample from the buffer
outSample[i] = inSample[i] + delayBuffer[delayIndex];

// feed the output back into the delay buffer, scaled by the feedback amount
delayBuffer[delayIndex] = outSample[i] * feedback;

// advance the circular buffer index
delayIndex++;
if (delayIndex >= delaySamples)
delayIndex = 0;
}
```

Delay is an effect in its own right, but it’s also the basis for many other effect types as well. Flange, phaser, and chorus, for instance, all rely on a delay buffer to do their work.

Some common variations on the delay effect include having different delay parameters for the left and right channel in a stereo sound, or modifying the output sound before it goes into the delay buffer to make it so that the echo sounds a bit different than the original. For instance you could put a lowpass or highpass filter on the echo, or even flange it!

Play around with it and get creative. You might make some interesting and unique sounding sounds (:

## Audio Samples

These files were processed with the sample code in this post.

| | Legend Quote | Cymbals |
|---|---|---|
| Raw | legend.wav | cymbal.wav |
| 250ms delay, -3dB feedback | legend_250_3.wav | cymbal_250_3.wav |
| 250ms delay, -12dB feedback | legend_250_12.wav | cymbal_250_12.wav |
| 333ms delay, -6dB feedback | legend_333_6.wav | cymbal_333_6.wav |

## Sample Code

The sample code below reads in.wav, applies a delay of 333ms with -12dB feedback, and writes the result out as out.wav.

Usual caveat: the wave reading code isn’t bullet proof (sorry). Seems to work best with 16 bit mono wave files. You can use libsndfile if you want more reliable and more diverse sound loading options!

```#define _CRT_SECURE_NO_WARNINGS

#include <array>
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <cmath>
#include <vector>
#include <algorithm>
#include <limits>

#define _USE_MATH_DEFINES
#include <math.h>

//=====================================================================================
// SNumeric - uses phantom types to enforce type safety
//=====================================================================================
template <typename T, typename PHANTOM_TYPE>
struct SNumeric
{
public:
explicit SNumeric(const T &value) : m_value(value) { }
SNumeric() : m_value() { }
inline T& Value() { return m_value; }
inline const T& Value() const { return m_value; }

typedef SNumeric<T, PHANTOM_TYPE> TType;
typedef T TInnerType;

// Math Operations
TType operator+ (const TType &b) const
{
return TType(this->Value() + b.Value());
}

TType operator- (const TType &b) const
{
return TType(this->Value() - b.Value());
}

TType operator* (const TType &b) const
{
return TType(this->Value() * b.Value());
}

TType operator/ (const TType &b) const
{
return TType(this->Value() / b.Value());
}

TType& operator+= (const TType &b)
{
Value() += b.Value();
return *this;
}

TType& operator-= (const TType &b)
{
Value() -= b.Value();
return *this;
}

TType& operator*= (const TType &b)
{
Value() *= b.Value();
return *this;
}

TType& operator/= (const TType &b)
{
Value() /= b.Value();
return *this;
}

TType& operator++ ()
{
Value()++;
return *this;
}

TType& operator-- ()
{
Value()--;
return *this;
}

// Extended Math Operations
template <typename R>
R Divide(const TType &b)
{
return ((R)this->Value()) / ((R)b.Value());
}

// Logic Operations
bool operator< (const TType &b) const {
return this->Value() < b.Value();
}
bool operator<= (const TType &b) const {
return this->Value() <= b.Value();
}
bool operator> (const TType &b) const {
return this->Value() > b.Value();
}
bool operator>= (const TType &b) const {
return this->Value() >= b.Value();
}
bool operator== (const TType &b) const {
return this->Value() == b.Value();
}
bool operator!= (const TType &b) const {
return this->Value() != b.Value();
}

private:
T m_value;
};

//=====================================================================================
// Typedefs
//=====================================================================================

typedef uint8_t uint8;
typedef uint16_t uint16;
typedef uint32_t uint32;
typedef int16_t int16;
typedef int32_t int32;

// type safe types!
typedef SNumeric<float, struct S__Frequency>      TFrequency;
typedef SNumeric<uint32, struct S__TimeMs>        TTimeMs;
typedef SNumeric<uint32, struct S__Samples>       TSamples;
typedef SNumeric<float, struct S__FractSamples>   TFractionalSamples;
typedef SNumeric<float, struct S__Decibels>       TDecibels;
typedef SNumeric<float, struct S__Amplitude>      TAmplitude;
typedef SNumeric<float, struct S__Phase>          TPhase;

//=====================================================================================
// Constants
//=====================================================================================

static const float c_pi = (float)M_PI;
static const float c_twoPi = c_pi * 2.0f;

//=====================================================================================
// Structs
//=====================================================================================

struct SSoundSettings
{
TSamples        m_sampleRate;
TTimeMs         m_lengthMs;
TSamples        m_currentSample;
};

//=====================================================================================
// CDelay -> the delay buffer object
//=====================================================================================

class CDelay
{
public:
CDelay(TSamples delayTime, TAmplitude feedback)
: m_feedBack(feedback)
, m_sampleIndex(0)
{
m_samples.resize(delayTime.Value());
std::fill(m_samples.begin(), m_samples.end(), TAmplitude(0.0f));
}

TAmplitude ProcessSample(TAmplitude sample)
{
TAmplitude ret = sample + m_samples[m_sampleIndex];
m_samples[m_sampleIndex] = ret * m_feedBack;
m_sampleIndex++;
if (m_sampleIndex >= m_samples.size())
m_sampleIndex = 0;
return ret;
}

private:
TAmplitude              m_feedBack;
std::vector<TAmplitude> m_samples;
size_t                  m_sampleIndex;
};

//=====================================================================================
// Conversion Functions
//=====================================================================================
inline TDecibels AmplitudeToDB(TAmplitude volume)
{
return TDecibels(20.0f * log10(volume.Value()));
}

inline TAmplitude DBToAmplitude(TDecibels dB)
{
return TAmplitude(pow(10.0f, dB.Value() / 20.0f));
}

TSamples SecondsToSamples(const SSoundSettings &s, float seconds)
{
return TSamples((int)(seconds * (float)s.m_sampleRate.Value()));
}

TSamples MilliSecondsToSamples(const SSoundSettings &s, float milliseconds)
{
return SecondsToSamples(s, milliseconds / 1000.0f);
}

TTimeMs SecondsToMilliseconds(float seconds)
{
return TTimeMs((uint32)(seconds * 1000.0f));
}

TFrequency Frequency(float octave, float note)
{
/* frequency = 440×(2^(n/12))
Notes:
0  = A
1  = A#
2  = B
3  = C
4  = C#
5  = D
6  = D#
7  = E
8  = F
9  = F#
10 = G
11 = G# */
return TFrequency((float)(440 * pow(2.0, ((double)((octave - 4) * 12 + note)) / 12.0)));
}

template <typename T>
T AmplitudeToAudioSample(const TAmplitude& in)
{
const T c_min = std::numeric_limits<T>::min();
const T c_max = std::numeric_limits<T>::max();
const float c_minFloat = (float)c_min;
const float c_maxFloat = (float)c_max;

float ret = in.Value() * c_maxFloat;

if (ret < c_minFloat)
return c_min;

if (ret > c_maxFloat)
return c_max;

return (T)ret;
}

TAmplitude GetLerpedAudioSample(const std::vector<TAmplitude>& samples, TFractionalSamples& index)
{
// get the index of each sample and the fractional blend amount
uint32 a = (uint32)floor(index.Value());
uint32 b = a + 1;
float fract = index.Value() - floor(index.Value());

// get our two amplitudes
float ampA = 0.0f;
if (a >= 0 && a < samples.size())
ampA = samples[a].Value();

float ampB = 0.0f;
if (b >= 0 && b < samples.size())
ampB = samples[b].Value();

// return the lerped result
return TAmplitude(fract * ampB + (1.0f - fract) * ampA);
}

void NormalizeSamples(std::vector<TAmplitude>& samples, TAmplitude maxAmplitude)
{
// nothing to do if no samples
if (samples.size() == 0)
return;

// 1) find the largest absolute value in the samples.
TAmplitude largestAbsVal = TAmplitude(abs(samples.front().Value()));
std::for_each(samples.begin() + 1, samples.end(), [&largestAbsVal](const TAmplitude &a)
{
TAmplitude absVal = TAmplitude(abs(a.Value()));
if (absVal > largestAbsVal)
largestAbsVal = absVal;
}
);

// 2) adjust largestAbsVal so that when we divide all samples, none will be bigger than maxAmplitude
// if the value we are going to divide by is <= 0, bail out
largestAbsVal /= maxAmplitude;
if (largestAbsVal <= TAmplitude(0.0f))
return;

// 3) divide all numbers by the largest absolute value seen so all samples are [-maxAmplitude,+maxAmplitude]
std::for_each(samples.begin(), samples.end(), [&largestAbsVal](TAmplitude &a)
{
a /= largestAbsVal;
}
);
}

void ResampleData(std::vector<TAmplitude>& samples, int srcSampleRate, int destSampleRate)
{
//if the requested sample rate is the sample rate it already is, bail out and do nothing
if (srcSampleRate == destSampleRate)
return;

//calculate the ratio of the old sample rate to the new
float fResampleRatio = (float)destSampleRate / (float)srcSampleRate;

//calculate how many samples the new data will have and allocate the new sample data
int nNewDataNumSamples = (int)((float)samples.size() * fResampleRatio);

std::vector<TAmplitude> newSamples;
newSamples.resize(nNewDataNumSamples);

//get each lerped output sample.  There are higher quality ways to resample
for(int nIndex = 0; nIndex < nNewDataNumSamples; ++nIndex)
newSamples[nIndex] = GetLerpedAudioSample(samples, TFractionalSamples((float)nIndex / fResampleRatio));

//free the old data and set the new data
std::swap(samples, newSamples);
}

void ChangeNumChannels(std::vector<TAmplitude>& samples, int nSrcChannels, int nDestChannels)
{
//if the number of channels requested is the number of channels already there, or either number of channels is not mono or stereo, return
if(nSrcChannels == nDestChannels ||
nSrcChannels < 1 || nSrcChannels > 2 ||
nDestChannels < 1 || nDestChannels > 2)
{
return;
}

//if converting from mono to stereo, duplicate the mono channel to make stereo
if(nDestChannels == 2)
{
std::vector<TAmplitude> newSamples;
newSamples.resize(samples.size() * 2);
for (size_t index = 0; index < samples.size(); ++index)
{
newSamples[index * 2] = samples[index];
newSamples[index * 2 + 1] = samples[index];
}

std::swap(samples, newSamples);
}
//else converting from stereo to mono, mix the stereo channels together to make mono
else
{
std::vector<TAmplitude> newSamples;
newSamples.resize(samples.size() / 2);
for (size_t index = 0; index < samples.size() / 2; ++index)
newSamples[index] = samples[index * 2] + samples[index * 2 + 1];

std::swap(samples, newSamples);
}
}

float PCMToFloat(unsigned char *pPCMData, int nNumBytes)
{
switch(nNumBytes)
{
case 1:
{
uint8 data = pPCMData[0];
return (float)data / 255.0f;
}
case 2:
{
int16 data = (int16)((pPCMData[1] << 8) | pPCMData[0]);
return ((float)data) / ((float)0x00007fff);
}
case 3:
{
int32 data = (pPCMData[2] << 16) | (pPCMData[1] << 8) | pPCMData[0];
return ((float)data) / ((float)0x007fffff);
}
case 4:
{
int32 data = (pPCMData[3] << 24) | (pPCMData[2] << 16) | (pPCMData[1] << 8) | pPCMData[0];
return ((float)data) / ((float)0x7fffffff);
}
default:
{
return 0.0f;
}
}
}

//=====================================================================================
// Wave File Writing Code
//=====================================================================================
struct SWaveFileHeader
{
//the main chunk
unsigned char m_szChunkID[4];   //0
uint32        m_nChunkSize;        //4
unsigned char m_szFormat[4];    //8

//sub chunk 1 "fmt "
unsigned char m_szSubChunk1ID[4];  //12
uint32        m_nSubChunk1Size;    //16
uint16        m_nAudioFormat;      //18
uint16        m_nNumChannels;      //20
uint32        m_nSampleRate;       //24
uint32        m_nByteRate;         //28
uint16        m_nBlockAlign;       //30
uint16        m_nBitsPerSample;    //32

//sub chunk 2 "data"
unsigned char m_szSubChunk2ID[4];  //36
uint32        m_nSubChunk2Size;    //40

//then comes the data!
};

//this writes a wave file
template <typename T>
bool WriteWaveFile(const char *fileName, const std::vector<TAmplitude> &samples, const SSoundSettings &sound)
{
//open the file if we can
FILE *file = fopen(fileName, "w+b");
if (!file)
return false;

//calculate bits per sample and the data size
const int32 bitsPerSample = sizeof(T) * 8;
const int dataSize = samples.size() * sizeof(T);

//fill out the main chunk
SWaveFileHeader waveHeader;
memcpy(waveHeader.m_szChunkID, "RIFF", 4);
waveHeader.m_nChunkSize = dataSize + 36;
memcpy(waveHeader.m_szFormat, "WAVE", 4);

//fill out sub chunk 1 "fmt "
memcpy(waveHeader.m_szSubChunk1ID, "fmt ", 4);
waveHeader.m_nSubChunk1Size = 16;
waveHeader.m_nAudioFormat = 1;
waveHeader.m_nNumChannels = 1;
waveHeader.m_nSampleRate = sound.m_sampleRate.Value();
waveHeader.m_nByteRate = sound.m_sampleRate.Value() * 1 * bitsPerSample / 8;
waveHeader.m_nBlockAlign = 1 * bitsPerSample / 8;
waveHeader.m_nBitsPerSample = bitsPerSample;

//fill out sub chunk 2 "data"
memcpy(waveHeader.m_szSubChunk2ID, "data", 4);
waveHeader.m_nSubChunk2Size = dataSize;

//write the header
fwrite(&waveHeader, sizeof(SWaveFileHeader), 1, file);
//write the wave data itself, converting it from float to the type specified
std::vector<T> outSamples;
outSamples.resize(samples.size());
for (size_t index = 0; index < samples.size(); ++index)
outSamples[index] = AmplitudeToAudioSample<T>(samples[index]);
fwrite(&outSamples[0], dataSize, 1, file);

//close the file and return success
fclose(file);
return true;
}

//loads a wave file in.  Converts from source format into the specified format
// TOTAL HONESTY: some wave files seem to have problems being loaded through this function and I don't have
// time to investigate why.  It seems to work best with 16 bit mono wave files.
// If you need more robust file loading, check out libsndfile at http://www.mega-nerd.com/libsndfile/
bool ReadWaveFile(const char *fileName, std::vector<TAmplitude>& samples, int32 sampleRate)
{
//open the file if we can
FILE *File = fopen(fileName,"rb");
if(!File)
{
return false;
}

//read the main chunk ID and make sure it's "RIFF"
char buffer[5];
buffer[4] = 0;
if (fread(buffer, 4, 1, File) != 1 || strcmp(buffer, "RIFF"))
{
fclose(File);
return false;
}

//read the main chunk size
uint32 nChunkSize;
if (fread(&nChunkSize, 4, 1, File) != 1)
{
fclose(File);
return false;
}

//read the format and make sure it's "WAVE"
if (fread(buffer, 4, 1, File) != 1 || strcmp(buffer, "WAVE"))
{
fclose(File);
return false;
}

long chunkPosFmt = -1;
long chunkPosData = -1;

while(chunkPosFmt == -1 || chunkPosData == -1)
{
//read a sub chunk id and a chunk size if we can
if (fread(buffer, 4, 1, File) != 1 || fread(&nChunkSize, 4, 1, File) != 1)
{
fclose(File);
return false;
}

//if we hit a fmt
if(!strcmp(buffer,"fmt "))
{
chunkPosFmt = ftell(File) - 8;
}
//else if we hit a data
else if(!strcmp(buffer,"data"))
{
chunkPosData = ftell(File) - 8;
}

fseek(File,nChunkSize,SEEK_CUR);
}

//we'll use this handy struct to load in
SWaveFileHeader waveData;

//load the fmt part if we can
fseek(File,chunkPosFmt,SEEK_SET);
if (fread(&waveData.m_szSubChunk1ID, 24, 1, File) != 1)
{
fclose(File);
return false;
}

//load the data part if we can
fseek(File,chunkPosData,SEEK_SET);
if (fread(&waveData.m_szSubChunk2ID, 8, 1, File) != 1)
{
fclose(File);
return false;
}

//verify a couple things about the file data
if(waveData.m_nAudioFormat != 1 ||       //only pcm data
waveData.m_nNumChannels < 1 ||        //must have at least 1 channel
waveData.m_nNumChannels > 2 ||        //must not have more than 2
waveData.m_nBitsPerSample > 32 ||     //32 bits per sample max
waveData.m_nBitsPerSample % 8 != 0 || //must be a multiple of 8 bits
waveData.m_nBlockAlign > 8)           //blocks must be 8 bytes or lower
{
fclose(File);
return false;
}

//figure out how many samples and blocks there are total in the source data
int nBytesPerBlock = waveData.m_nBlockAlign;
int nNumBlocks = waveData.m_nSubChunk2Size / nBytesPerBlock;
int nNumSourceSamples = nNumBlocks * waveData.m_nNumChannels;

//allocate space for the source samples
samples.resize(nNumSourceSamples);

//maximum size of a block is 8 bytes.  4 bytes per samples, 2 channels
unsigned char pBlockData[8];
memset(pBlockData,0,8);

//read in the source samples at whatever sample rate / number of channels it might be in
int nBytesPerSample = nBytesPerBlock / waveData.m_nNumChannels;
for(int nIndex = 0; nIndex < nNumSourceSamples; ++nIndex)
{
//read in a sample
if(fread(pBlockData,nBytesPerSample,1,File) != 1)
{
fclose(File);
return false;
}

//convert the sample to a float amplitude and store it
samples[nIndex] = TAmplitude(PCMToFloat(pBlockData,nBytesPerSample));
}

//re-sample the data to the requested sample rate
ResampleData(samples, waveData.m_nSampleRate, sampleRate);

//if the source was stereo, mix it down to mono
ChangeNumChannels(samples, waveData.m_nNumChannels, 1);

//close the file and return success
fclose(File);
return true;
}

//=====================================================================================
// Oscilators
//=====================================================================================
TAmplitude AdvanceOscilator_Sine(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
phase += TPhase(frequency.Value() / (float)sampleRate.Value());

while (phase >= TPhase(1.0f))
phase -= TPhase(1.0f);

while (phase < TPhase(0.0f))
phase += TPhase(1.0f);

return TAmplitude(sin(phase.Value()*c_twoPi));
}

TAmplitude AdvanceOscilator_Saw(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
phase += TPhase(frequency.Value() / (float)sampleRate.Value());

while (phase >= TPhase(1.0f))
phase -= TPhase(1.0f);

while (phase < TPhase(0.0f))
phase += TPhase(1.0f);

return TAmplitude((phase.Value() * 2.0f) - 1.0f);
}

TAmplitude AdvanceOscilator_Square(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
phase += TPhase(frequency.Value() / (float)sampleRate.Value());

while (phase >= TPhase(1.0f))
phase -= TPhase(1.0f);

while (phase < TPhase(0.0f))
phase += TPhase(1.0f);

return TAmplitude(phase.Value() > 0.5f ? 1.0f : -1.0f);
}

TAmplitude AdvanceOscilator_Triangle(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
phase += TPhase(frequency.Value() / (float)sampleRate.Value());

while (phase >= TPhase(1.0f))
phase -= TPhase(1.0f);

if (phase > TPhase(0.5f))
return TAmplitude((((1.0f - phase.Value()) * 2.0f) * 2.0f) - 1.0f);
else
return TAmplitude(((phase.Value() * 2.0f) * 2.0f) - 1.0f);
}

TAmplitude AdvanceOscilator_Saw_BandLimited(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
phase += TPhase(frequency.Value() / (float)sampleRate.Value());

while (phase >= TPhase(1.0f))
phase -= TPhase(1.0f);
// sum the harmonics
TAmplitude ret(0.0f);
for (int harmonicIndex = 1; harmonicIndex <= 4; ++harmonicIndex)
{
TPhase harmonicPhase = phase * TPhase((float)harmonicIndex);
ret += TAmplitude(sin(harmonicPhase.Value()*c_twoPi) / (float)harmonicIndex);
}

ret *= TAmplitude(2.0f / c_pi);

return ret;
}

TAmplitude AdvanceOscilator_Square_BandLimited(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
phase += TPhase(frequency.Value() / (float)sampleRate.Value());

while (phase >= TPhase(1.0f))
phase -= TPhase(1.0f);
// sum the harmonics
TAmplitude ret(0.0f);
for (int harmonicIndex = 1; harmonicIndex <= 4; ++harmonicIndex)
{
float harmonicFactor = (float)harmonicIndex * 2.0f - 1.0f;
TPhase harmonicPhase = phase * TPhase(harmonicFactor);
ret += TAmplitude(sin(harmonicPhase.Value()*c_twoPi) / harmonicFactor);
}

ret *= TAmplitude(4.0f / c_pi);

return ret;
}

TAmplitude AdvanceOscilator_Triangle_BandLimited(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
phase += TPhase(frequency.Value() / (float)sampleRate.Value());

while (phase >= TPhase(1.0f))
phase -= TPhase(1.0f);
// sum the harmonics
TAmplitude ret(0.0f);
TAmplitude signFlip(1.0f);
for (int harmonicIndex = 1; harmonicIndex <= 4; ++harmonicIndex)
{
float harmonicFactor = (float)harmonicIndex * 2.0f - 1.0f;
TPhase harmonicPhase = phase * TPhase(harmonicFactor);
ret += TAmplitude(sin(harmonicPhase.Value()*c_twoPi) / (harmonicFactor*harmonicFactor)) * signFlip;
signFlip *= TAmplitude(-1.0f);
}

ret *= TAmplitude(8.0f / (c_pi*c_pi));

return ret;
}

//=====================================================================================
// Main
//=====================================================================================
int main(int argc, char **argv)
{
//our desired sound parameters
SSoundSettings sound;
sound.m_sampleRate = TSamples(44100);
sound.m_lengthMs = SecondsToMilliseconds(4.0f);

// create a delay effect object with the specified parameters
const TSamples c_delayTime = MilliSecondsToSamples(sound, 333.0f);
const TAmplitude c_delayFeedback = DBToAmplitude(TDecibels(-12.0f));
CDelay delay(c_delayTime, c_delayFeedback);

// load the wave file if we can
std::vector<TAmplitude> inputData;
if (!ReadWaveFile("in.wav", inputData, sound.m_sampleRate.Value()))
{
return 0;
}

// allocate space for the output file
std::vector<TAmplitude> samples;
samples.resize(inputData.size());

//apply the delay effect to the file
const TSamples c_envelopeSize = MilliSecondsToSamples(sound, 50.0f);
for (TSamples index = TSamples(0), numSamples(samples.size()); index < numSamples; ++index)
{
// calculate envelope at front and end of sound.
TAmplitude envelope(1.0f);
if (index < c_envelopeSize)
envelope = TAmplitude((float)index.Value() / (float)c_envelopeSize.Value());
else if (index > (numSamples - c_envelopeSize))
envelope = TAmplitude(1.0f) - TAmplitude((float)(index - (numSamples - c_envelopeSize)).Value() / (float)c_envelopeSize.Value());

// put our input through the delay buffer
TAmplitude outSample = delay.ProcessSample(inputData[index.Value()]);

// mix the sample with the offset sample and apply the envelope for the front and back of the sound
samples[index.Value()] = outSample * envelope;
}

// normalize the amplitude of the samples to make sure they are as loud as possible without clipping
NormalizeSamples(samples, DBToAmplitude(TDecibels(-3.0f)));

// save as a wave file
WriteWaveFile("out.wav", samples, sound);

return 0;
}
```

Wikipedia: Delay Effect

Next post will be about multitap reverb, which is similar to delay, but a little bit different.

# DIY Synth: Flange Effect

This is a part of the DIY Synthesizer series of posts where each post is roughly built upon the knowledge of the previous posts. If you are lost, check the earlier posts!

Flange is a pretty interesting audio effect. It can give character to the most monotonous of sounds and it can also make good sounds even better.

Before we dive into it, check out these raw and flanged sound files to get a glimpse of what we’re talking about. The flanged files were created with the simple C++ sample code at the end of this chapter. Only standard header files are used, so you too will be able to flange sounds by the end of this post!

A clip from the movie “legend”
Raw: legend.wav
Flanged: legend_f.wav

A drum loop
Raw: cymbal.wav
Flanged: cymbal_f.wav

## How Does it Work?

The idea behind flange is actually pretty simple. All you do is mix a sound with itself, but have one of the copies speed up and slow down (on a sine wave, for example) instead of letting it play at normal speed. This makes the sounds mix together at different (but similar) points in time. Since sound is made up of peaks (positive numbers) and valleys (negative numbers), mixing the sound with the time offset sound causes some of the peaks and valleys to grow larger, and causes others to shrink as they cancel each other out. This results in the distinctive flange sound.

The simple way to flange a file would be to load all of the audio samples into memory and do something like this:

```for (int i = 0; i < numSamples; ++i)
{
// the LFO sine wave picks an offset between 0 and flangeSampleDepth
int offset = CalculateFlangeOffset(i);
output[i] = input[i];
if (i >= flangeSampleDepth)
output[i] += input[i - offset];
}
```

It’s important to note though that for better quality flanging sounds, you should actually use a flange with sub-sample accuracy. That way, if your sine wave says it wants sample 3.6, your resulting sample should be sample[3] * 0.4 + sample[4] * 0.6. That is just doing a linear interpolation to get the “inbetween” data of the samples, which works well enough for my needs, but higher quality flangers will use higher quality interpolation techniques and curve fitting.
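With plain floats (dropping the type safe wrappers used in the sample code below), that sub-sample lookup might look like this; `GetLerpedSample` is a hypothetical helper name for illustration:

```cpp
#include <cmath>
#include <vector>

// Get an "in between" audio sample at a fractional index via linear interpolation.
// For index 3.6 this returns samples[3] * 0.4 + samples[4] * 0.6.
float GetLerpedSample(const std::vector<float>& samples, float index)
{
    // split the fractional index into two whole indices and a blend amount
    std::size_t a = (std::size_t)std::floor(index);
    std::size_t b = a + 1;
    float fract = index - std::floor(index);

    // treat reads past the end of the data as silence
    float ampA = (a < samples.size()) ? samples[a] : 0.0f;
    float ampB = (b < samples.size()) ? samples[b] : 0.0f;

    return ampA * (1.0f - fract) + ampB * fract;
}
```

Out-of-range reads come back as silence here, which matches how the sample code's GetLerpedAudioSample behaves.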

Who invented the flanger is apparently not agreed upon, but its origin is back in the days of tape deck based audio recording studios. If you put your finger on the flange of one of the tape reels to slow it down, and then mix that result with an undelayed version of the same sound, you start to hear the flanging effect.

These days we rely on hardware and software to emulate that.

If you have ever accidentally played too many copies of the same sound too closely together, you’ve probably heard a flange-like effect. It sounds fairly similar, but you don’t get the sweeping effect that you do with flange.

Some flanges also feed their output back into their input to further the effect and add some resonance. We aren’t doing that in this post, but feel free to experiment with that on your own! (check the links section for more info)

It’s important to note that you can use the same process on LIVE music to do flanging in real time. If you have a “delay buffer” to hold the last N seconds of sound, you can use the sine wave to control what part of that delay buffer mixes with the current sound coming out.
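To make that concrete, here is a minimal sketch of a real-time flanger built on a circular delay buffer; the `CFlanger` class and its parameter choices are my own illustration, not code from this post:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Keep the last N samples in a circular buffer, and let an LFO sine wave
// choose how far back in time to read. Mixing that delayed sample with the
// live sample gives the flange effect in real time.
class CFlanger
{
public:
    CFlanger(std::size_t depthSamples, float lfoFrequency, float sampleRate)
        : m_buffer(depthSamples + 1, 0.0f)
        , m_writeIndex(0)
        , m_phase(0.0f)
        , m_phaseAdvance(lfoFrequency / sampleRate)
    { }

    float ProcessSample(float input)
    {
        // remember the newest sample
        m_buffer[m_writeIndex] = input;

        // LFO sine mapped to [0,1] controls the read offset
        float lfo = std::sin(m_phase * 2.0f * 3.14159265f) * 0.5f + 0.5f;
        m_phase += m_phaseAdvance;
        if (m_phase >= 1.0f)
            m_phase -= 1.0f;

        // read a sample from up to depthSamples back in time
        std::size_t offset = (std::size_t)(lfo * (float)(m_buffer.size() - 1));
        std::size_t readIndex = (m_writeIndex + m_buffer.size() - offset) % m_buffer.size();

        m_writeIndex = (m_writeIndex + 1) % m_buffer.size();

        // mix the live sample with the delayed sample
        return (input + m_buffer[readIndex]) * 0.5f;
    }

private:
    std::vector<float> m_buffer;
    std::size_t m_writeIndex;
    float m_phase;
    float m_phaseAdvance;
};
```

For better quality you would use the sub-sample interpolation described earlier instead of rounding the offset to a whole sample, but this shows the delay buffer idea.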

## Flange Parameters

Flangers often have (at least) two parameters. One parameter controls the frequency of the LFO (low frequency oscillator) sine wave. The other parameter controls its “depth”, which means how far backwards or forwards in time the time-shifted copy can go.

Good frequency values for the oscillator depend entirely on the sound you are flanging as well as the style you are going for, but small values below 5Hz usually work best. I usually use a value less than 1, and for best results I like to make it a value that isn’t likely to line up with the tempo of the music – such as perhaps 0.374.

The reason for this is that flange adds some interesting flavor to your sound, and if you had a value like 0.25 for your flanger, every 4 notes would always sound the same and line up with the flange effect. If instead you have it at something like 0.374, you can play a repeating melody SEVERAL times over and over, and due to the flange effect, each time through the notes will sound different and acoustically interesting.

The best value for the other parameter (the flange depth) also varies depending on your source sounds and the sound you are going after. People usually suggest doing no more than 20ms though. I personally really enjoy the sound of a much smaller value, such as 1ms. Play around with different values and see what you like!
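To relate those depth values to samples: a depth in milliseconds becomes a sample count by scaling by the sample rate. A sketch with plain types (the sample code's MilliSecondsToSamples does the equivalent with its type safe wrappers):

```cpp
// Convert a time in milliseconds to a whole number of samples at a sample rate.
// At 44100Hz, a 1ms flange depth is 44 samples; the suggested 20ms max is 882.
int MilliSecondsToSampleCount(float milliseconds, int sampleRate)
{
    return (int)(milliseconds * (float)sampleRate / 1000.0f);
}
```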

## Flanging Basic Wave Forms

Here are some more flange samples of the basic wave forms, to give you an idea of how flange behaves with the various wave forms:

Triangle:
Raw: triangle.wav
Flanged: triangle_f.wav

Bandlimited Triangle:
Raw: triangleBL.wav
Flanged: triangleBL_f.wav

Saw:
Raw: saw.wav
Flanged: saw_f.wav

Bandlimited Saw:
Raw: sawBL.wav
Flanged: sawBL_f.wav

Square:
Raw: square.wav
Flanged: square_f.wav

Bandlimited Square:
Raw: squareBL.wav
Flanged: squareBL_f.wav

Sine:
Raw: sine1.wav
Flanged: sine_f.wav

## Sample Code

This sample code reads in “in.wav”, flanges it with a 0.4Hz LFO and a 1ms depth, and writes out “out.wav”. Note, the wave file reading code is not bulletproof, sorry! It seems to work well with mono 16 bit wave files, but if you need better sound file reading, I suggest looking at libsndfile (link in links section!)

```#define _CRT_SECURE_NO_WARNINGS

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <vector>
#include <limits>
#include <algorithm>

#define _USE_MATH_DEFINES
#include <math.h>

//=====================================================================================
// SNumeric - uses phantom types to enforce type safety
//=====================================================================================
template <typename T, typename PhantomType>
struct SNumeric
{
public:
explicit SNumeric(const T &value) : m_value(value) { }
SNumeric() : m_value() { }
inline T& Value() { return m_value; }
inline const T& Value() const { return m_value; }

typedef SNumeric<T, PhantomType> TType;
typedef T TInnerType;

// Math Operations
TType operator+ (const TType &b) const
{
return TType(this->Value() + b.Value());
}

TType operator- (const TType &b) const
{
return TType(this->Value() - b.Value());
}

TType operator* (const TType &b) const
{
return TType(this->Value() * b.Value());
}

TType operator/ (const TType &b) const
{
return TType(this->Value() / b.Value());
}

TType& operator+= (const TType &b)
{
Value() += b.Value();
return *this;
}

TType& operator-= (const TType &b)
{
Value() -= b.Value();
return *this;
}

TType& operator*= (const TType &b)
{
Value() *= b.Value();
return *this;
}

TType& operator/= (const TType &b)
{
Value() /= b.Value();
return *this;
}

TType& operator++ ()
{
Value()++;
return *this;
}

TType& operator-- ()
{
Value()--;
return *this;
}

// Extended Math Operations
template <typename R>
R Divide(const TType &b)
{
return ((R)this->Value()) / ((R)b.Value());
}

// Logic Operations
bool operator< (const TType &b) const {
return this->Value() < b.Value();
}
bool operator<= (const TType &b) const {
return this->Value() <= b.Value();
}
bool operator> (const TType &b) const {
return this->Value() > b.Value();
}
bool operator>= (const TType &b) const {
return this->Value() >= b.Value();
}
bool operator== (const TType &b) const {
return this->Value() == b.Value();
}
bool operator!= (const TType &b) const {
return this->Value() != b.Value();
}

private:
T m_value;
};

//=====================================================================================
// Typedefs
//=====================================================================================

typedef uint8_t uint8;
typedef uint16_t uint16;
typedef uint32_t uint32;
typedef int16_t int16;
typedef int32_t int32;

// type safe types!
typedef SNumeric<float, struct S__Frequency>          TFrequency;
typedef SNumeric<uint32, struct S__TimeMs>            TTimeMs;
typedef SNumeric<uint32, struct S__Samples>           TSamples;
typedef SNumeric<float, struct S__FractionalSamples>  TFractionalSamples;
typedef SNumeric<float, struct S__Decibels>           TDecibels;
typedef SNumeric<float, struct S__Amplitude>          TAmplitude;
typedef SNumeric<uint16, struct S__ChannelCount>      TChannelCount;
typedef SNumeric<float, struct S__Phase>              TPhase;

//=====================================================================================
// Constants
//=====================================================================================

static const float c_pi = (float)M_PI;
static const float c_twoPi = c_pi * 2.0f;

//=====================================================================================
// Structs
//=====================================================================================

struct SSoundSettings
{
TSamples        m_sampleRate;
TTimeMs         m_lengthMs;
TChannelCount   m_numChannels;
TSamples        m_currentSample;
};

//=====================================================================================
// Conversion Functions
//=====================================================================================
inline TDecibels AmplitudeToDB(TAmplitude volume)
{
return TDecibels(20.0f * log10(volume.Value()));
}

inline TAmplitude DBToAmplitude(TDecibels dB)
{
return TAmplitude(pow(10.0f, dB.Value() / 20.0f));
}

TSamples SecondsToSamples(const SSoundSettings &s, float seconds)
{
return TSamples((int)(seconds * (float)s.m_sampleRate.Value()));
}

TSamples MilliSecondsToSamples(const SSoundSettings &s, float milliseconds)
{
return SecondsToSamples(s, milliseconds / 1000.0f);
}

TTimeMs SecondsToMilliseconds(float seconds)
{
return TTimeMs((uint32)(seconds * 1000.0f));
}

TFrequency Frequency(float octave, float note)
{
/* frequency = 440×(2^(n/12))
Notes:
0  = A
1  = A#
2  = B
3  = C
4  = C#
5  = D
6  = D#
7  = E
8  = F
9  = F#
10 = G
11 = G# */
return TFrequency((float)(440 * pow(2.0, ((double)((octave - 4) * 12 + note)) / 12.0)));
}

template <typename T>
T AmplitudeToAudioSample(const TAmplitude& in)
{
const T c_min = std::numeric_limits<T>::min();
const T c_max = std::numeric_limits<T>::max();
const float c_minFloat = (float)c_min;
const float c_maxFloat = (float)c_max;

float ret = in.Value() * c_maxFloat;

if (ret < c_minFloat)
return c_min;

if (ret > c_maxFloat)
return c_max;

return (T)ret;
}

TAmplitude GetLerpedAudioSample(const std::vector<TAmplitude>& samples, const TFractionalSamples& index)
{
// get the index of each sample and the fractional blend amount
uint32 a = (uint32)floor(index.Value());
uint32 b = a + 1;
float fract = index.Value() - floor(index.Value());

// get our two amplitudes
float ampA = 0.0f;
if (a >= 0 && a < samples.size())
ampA = samples[a].Value();

float ampB = 0.0f;
if (b >= 0 && b < samples.size())
ampB = samples[b].Value();

// return the lerped result
return TAmplitude(fract * ampB + (1.0f - fract) * ampA);
}

void NormalizeSamples(std::vector<TAmplitude>& samples, TAmplitude maxAmplitude)
{
// nothing to do if no samples
if (samples.size() == 0)
return;

// 1) find the largest absolute value in the samples.
TAmplitude largestAbsVal = TAmplitude(abs(samples.front().Value()));
std::for_each(samples.begin() + 1, samples.end(), [&largestAbsVal](const TAmplitude &a)
{
TAmplitude absVal = TAmplitude(abs(a.Value()));
if (absVal > largestAbsVal)
largestAbsVal = absVal;
}
);

// 2) adjust largestAbsVal so that when we divide all samples, none will be bigger than maxAmplitude
// if the value we are going to divide by is <= 0, bail out
largestAbsVal /= maxAmplitude;
if (largestAbsVal <= TAmplitude(0.0f))
return;

// 3) divide all samples by largestAbsVal so that none are larger than maxAmplitude
std::for_each(samples.begin(), samples.end(), [&largestAbsVal](TAmplitude &a)
{
a /= largestAbsVal;
}
);
}

void ResampleData(std::vector<TAmplitude>& samples, int srcSampleRate, int destSampleRate)
{
//if the requested sample rate is the sample rate it already is, bail out and do nothing
if (srcSampleRate == destSampleRate)
return;

//calculate the ratio of the old sample rate to the new
float fResampleRatio = (float)destSampleRate / (float)srcSampleRate;

//calculate how many samples the new data will have and allocate the new sample data
int nNewDataNumSamples = (int)((float)samples.size() * fResampleRatio);

std::vector<TAmplitude> newSamples;
newSamples.resize(nNewDataNumSamples);

//get each lerped output sample.  There are higher quality ways to resample
for(int nIndex = 0; nIndex < nNewDataNumSamples; ++nIndex)
newSamples[nIndex] = GetLerpedAudioSample(samples, TFractionalSamples((float)nIndex / fResampleRatio));

//free the old data and set the new data
std::swap(samples, newSamples);
}

void ChangeNumChannels(std::vector<TAmplitude>& samples, int nSrcChannels, int nDestChannels)
{
//if the number of channels requested is the number of channels already there, or either number of channels is not mono or stereo, return
if(nSrcChannels == nDestChannels ||
nSrcChannels < 1 || nSrcChannels > 2 ||
nDestChannels < 1 || nDestChannels > 2)
{
return;
}

//if converting from mono to stereo, duplicate the mono channel to make stereo
if(nDestChannels == 2)
{
std::vector<TAmplitude> newSamples;
newSamples.resize(samples.size() * 2);
for (size_t index = 0; index < samples.size(); ++index)
{
newSamples[index * 2] = samples[index];
newSamples[index * 2 + 1] = samples[index];
}

std::swap(samples, newSamples);
}
//else converting from stereo to mono, mix the stereo channels together to make mono
else
{
std::vector<TAmplitude> newSamples;
newSamples.resize(samples.size() / 2);
for (size_t index = 0; index < samples.size() / 2; ++index)
newSamples[index] = samples[index * 2] + samples[index * 2 + 1];

std::swap(samples, newSamples);
}
}

float PCMToFloat(unsigned char *pPCMData, int nNumBytes)
{
switch(nNumBytes)
{
case 1:
{
uint8 data = pPCMData[0];
return (float)data / 255.0f;
}
case 2:
{
int16 data = (int16)((pPCMData[1] << 8) | pPCMData[0]);
return ((float)data) / ((float)0x00007fff);
}
case 3:
{
int32 data = (pPCMData[2] << 16) | (pPCMData[1] << 8) | pPCMData[0];
return ((float)data) / ((float)0x007fffff);
}
case 4:
{
int32 data = (pPCMData[3] << 24) | (pPCMData[2] << 16) | (pPCMData[1] << 8) | pPCMData[0];
return ((float)data) / ((float)0x7fffffff);
}
default:
{
return 0.0f;
}
}
}

//=====================================================================================
// Wave File Writing Code
//=====================================================================================
struct SWaveFileHeader
{
//the main chunk
unsigned char m_szChunkID[4];   //0
uint32        m_nChunkSize;        //4
unsigned char m_szFormat[4];    //8

//sub chunk 1 "fmt "
unsigned char m_szSubChunk1ID[4];  //12
uint32        m_nSubChunk1Size;    //16
uint16        m_nAudioFormat;      //18
uint16        m_nNumChannels;      //20
uint32        m_nSampleRate;       //24
uint32        m_nByteRate;         //28
uint16        m_nBlockAlign;       //30
uint16        m_nBitsPerSample;    //32

//sub chunk 2 "data"
unsigned char m_szSubChunk2ID[4];  //36
uint32        m_nSubChunk2Size;    //40

//then comes the data!
};

//this writes a wave file
template <typename T>
bool WriteWaveFile(const char *fileName, const std::vector<TAmplitude> &samples, const SSoundSettings &sound)
{
//open the file if we can
FILE *file = fopen(fileName, "w+b");
if (!file)
return false;

//calculate bits per sample and the data size
const int32 bitsPerSample = sizeof(T) * 8;
const int dataSize = samples.size() * sizeof(T);

//fill out the main chunk
SWaveFileHeader waveHeader;
memcpy(waveHeader.m_szChunkID, "RIFF", 4);
waveHeader.m_nChunkSize = dataSize + 36;
memcpy(waveHeader.m_szFormat, "WAVE", 4);

//fill out sub chunk 1 "fmt "
memcpy(waveHeader.m_szSubChunk1ID, "fmt ", 4);
waveHeader.m_nSubChunk1Size = 16;
waveHeader.m_nAudioFormat = 1;
waveHeader.m_nNumChannels = sound.m_numChannels.Value();
waveHeader.m_nSampleRate = sound.m_sampleRate.Value();
waveHeader.m_nByteRate = sound.m_sampleRate.Value() * sound.m_numChannels.Value() * bitsPerSample / 8;
waveHeader.m_nBlockAlign = sound.m_numChannels.Value() * bitsPerSample / 8;
waveHeader.m_nBitsPerSample = bitsPerSample;

//fill out sub chunk 2 "data"
memcpy(waveHeader.m_szSubChunk2ID, "data", 4);
waveHeader.m_nSubChunk2Size = dataSize;

//write the header
fwrite(&waveHeader, sizeof(SWaveFileHeader), 1, file);
//write the wave data itself, converting it from float to the type specified
std::vector<T> outSamples;
outSamples.resize(samples.size());
for (size_t index = 0; index < samples.size(); ++index)
outSamples[index] = AmplitudeToAudioSample<T>(samples[index]);
fwrite(&outSamples[0], dataSize, 1, file);

//close the file and return success
fclose(file);
return true;
}

//loads a wave file in.  Converts from source format into the specified format
// TOTAL HONESTY: some wave files seem to have problems being loaded through this function and I don't have
// time to investigate why.  It seems to work best with 16 bit mono wave files.
// If you need more robust file loading, check out libsndfile at http://www.mega-nerd.com/libsndfile/
bool ReadWaveFile(const char *fileName, std::vector<TAmplitude>& samples, int16 numChannels, int32 sampleRate)
{
//open the file if we can
FILE *File = fopen(fileName,"rb");
if(!File)
{
return false;
}

//read the main chunk ID and make sure it's "RIFF"
char buffer[5];
buffer[4] = 0;
if (fread(buffer, 4, 1, File) != 1 || strcmp(buffer, "RIFF"))
{
fclose(File);
return false;
}

//read the main chunk size
uint32 nChunkSize;
if (fread(&nChunkSize, 4, 1, File) != 1)
{
fclose(File);
return false;
}

//read the format and make sure it's "WAVE"
if (fread(buffer, 4, 1, File) != 1 || strcmp(buffer, "WAVE"))
{
fclose(File);
return false;
}

long chunkPosFmt = -1;
long chunkPosData = -1;

while(chunkPosFmt == -1 || chunkPosData == -1)
{
//read a sub chunk id and a chunk size if we can
if (fread(buffer, 4, 1, File) != 1 || fread(&nChunkSize, 4, 1, File) != 1)
{
fclose(File);
return false;
}

//if we hit a fmt
if(!strcmp(buffer,"fmt "))
{
chunkPosFmt = ftell(File) - 8;
}
//else if we hit a data
else if(!strcmp(buffer,"data"))
{
chunkPosData = ftell(File) - 8;
}

fseek(File,nChunkSize,SEEK_CUR);
}

//we'll use this handy struct to load in
SWaveFileHeader waveData;

//load the fmt part if we can
fseek(File,chunkPosFmt,SEEK_SET);
if (fread(&waveData.m_szSubChunk1ID, 24, 1, File) != 1)
{
fclose(File);
return false;
}

//load the data part if we can
fseek(File,chunkPosData,SEEK_SET);
if (fread(&waveData.m_szSubChunk2ID, 8, 1, File) != 1)
{
fclose(File);
return false;
}

//verify a couple things about the file data
if(waveData.m_nAudioFormat != 1 ||       //only pcm data
waveData.m_nNumChannels < 1 ||        //must have at least 1 channel
waveData.m_nNumChannels > 2 ||        //must not have more than 2
waveData.m_nBitsPerSample > 32 ||     //32 bits per sample max
waveData.m_nBitsPerSample % 8 != 0 || //must be a multiple of 8 bits
waveData.m_nBlockAlign > 8)           //blocks must be 8 bytes or lower
{
fclose(File);
return false;
}

//figure out how many samples and blocks there are total in the source data
int nBytesPerBlock = waveData.m_nBlockAlign;
int nNumBlocks = waveData.m_nSubChunk2Size / nBytesPerBlock;
int nNumSourceSamples = nNumBlocks * waveData.m_nNumChannels;

//allocate space for the source samples
samples.resize(nNumSourceSamples);

//maximum size of a block is 8 bytes.  4 bytes per samples, 2 channels
unsigned char pBlockData[8];
memset(pBlockData,0,8);

//read in the source samples at whatever sample rate / number of channels it might be in
int nBytesPerSample = nBytesPerBlock / waveData.m_nNumChannels;
for(int nIndex = 0; nIndex < nNumSourceSamples; ++nIndex)
{
//read in a sample
if(fread(pBlockData,nBytesPerSample,1,File) != 1)
{
fclose(File);
return false;
}

//convert the sample to a float amplitude and store it
samples[nIndex] = TAmplitude(PCMToFloat(pBlockData,nBytesPerSample));
}

//re-sample the data to the requested sample rate
ResampleData(samples, waveData.m_nSampleRate, sampleRate);

//convert the data to the requested number of channels
ChangeNumChannels(samples, waveData.m_nNumChannels, numChannels);

//close the file and return success
fclose(File);
return true;
}

//=====================================================================================
// Oscilators
//=====================================================================================
TAmplitude AdvanceOscilator_Sine(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
phase += TPhase(frequency.Value() / (float)sampleRate.Value());

while (phase >= TPhase(1.0f))
phase -= TPhase(1.0f);

while (phase < TPhase(0.0f))
phase += TPhase(1.0f);

return TAmplitude(sin(phase.Value()*c_twoPi));
}

TAmplitude AdvanceOscilator_Saw(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
phase += TPhase(frequency.Value() / (float)sampleRate.Value());

while (phase >= TPhase(1.0f))
phase -= TPhase(1.0f);

while (phase < TPhase(0.0f))
phase += TPhase(1.0f);

return TAmplitude((phase.Value() * 2.0f) - 1.0f);
}

TAmplitude AdvanceOscilator_Square(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
phase += TPhase(frequency.Value() / (float)sampleRate.Value());

while (phase >= TPhase(1.0f))
phase -= TPhase(1.0f);

while (phase < TPhase(0.0f))
phase += TPhase(1.0f);

return TAmplitude(phase.Value() > 0.5f ? 1.0f : -1.0f);
}

TAmplitude AdvanceOscilator_Triangle(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
phase += TPhase(frequency.Value() / (float)sampleRate.Value());

while (phase >= TPhase(1.0f))
phase -= TPhase(1.0f);

if (phase > TPhase(0.5f))
return TAmplitude((((1.0f - phase.Value()) * 2.0f) * 2.0f) - 1.0f);
else
return TAmplitude(((phase.Value() * 2.0f) * 2.0f) - 1.0f);
}

TAmplitude AdvanceOscilator_Saw_BandLimited(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
phase += TPhase(frequency.Value() / (float)sampleRate.Value());

while (phase >= TPhase(1.0f))
phase -= TPhase(1.0f);
// sum the harmonics
TAmplitude ret(0.0f);
for (int harmonicIndex = 1; harmonicIndex <= 4; ++harmonicIndex)
{
TPhase harmonicPhase = phase * TPhase((float)harmonicIndex);
ret += TAmplitude(sin(harmonicPhase.Value()*c_twoPi) / (float)harmonicIndex);
}

ret *= TAmplitude(2.0f / c_pi);

return ret;
}

TAmplitude AdvanceOscilator_Square_BandLimited(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
phase += TPhase(frequency.Value() / (float)sampleRate.Value());

while (phase >= TPhase(1.0f))
phase -= TPhase(1.0f);
// sum the harmonics
TAmplitude ret(0.0f);
for (int harmonicIndex = 1; harmonicIndex <= 4; ++harmonicIndex)
{
float harmonicFactor = (float)harmonicIndex * 2.0f - 1.0f;
TPhase harmonicPhase = phase * TPhase(harmonicFactor);
ret += TAmplitude(sin(harmonicPhase.Value()*c_twoPi) / harmonicFactor);
}

ret *= TAmplitude(4.0f / c_pi);

return ret;
}

TAmplitude AdvanceOscilator_Triangle_BandLimited(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
phase += TPhase(frequency.Value() / (float)sampleRate.Value());

while (phase >= TPhase(1.0f))
phase -= TPhase(1.0f);
// sum the harmonics
TAmplitude ret(0.0f);
TAmplitude signFlip(1.0f);
for (int harmonicIndex = 1; harmonicIndex <= 4; ++harmonicIndex)
{
float harmonicFactor = (float)harmonicIndex * 2.0f - 1.0f;
TPhase harmonicPhase = phase * TPhase(harmonicFactor);
ret += TAmplitude(sin(harmonicPhase.Value()*c_twoPi) / (harmonicFactor*harmonicFactor)) * signFlip;
signFlip *= TAmplitude(-1.0f);
}

ret *= TAmplitude(8.0f / (c_pi*c_pi));

return ret;
}

//=====================================================================================
// Main
//=====================================================================================
int main(int argc, char **argv)
{
//our desired sound parameters
SSoundSettings sound;
sound.m_sampleRate = TSamples(44100);
sound.m_lengthMs = SecondsToMilliseconds(4.0f);
sound.m_numChannels = TChannelCount(1);

// flange effect parameters
const TFrequency c_flangeFrequency(0.4f);
const TSamples c_flangeDepth(MilliSecondsToSamples(sound, 1.0f));

// load the wave file if we can
std::vector<TAmplitude> inputData;
if (!ReadWaveFile("in.wav", inputData, sound.m_numChannels.Value(), sound.m_sampleRate.Value()))
{
return 0;
}

// allocate space for the output file
std::vector<TAmplitude> samples;
samples.resize(inputData.size());

TSamples envelopeSize = MilliSecondsToSamples(sound, 50.0f);

//apply the phase effect to the file
TPhase flangePhase(0.0f);
for (TSamples index = TSamples(0), numSamples(samples.size()); index < numSamples; ++index)
{
// calculate envelope at front and end of sound.
TAmplitude envelope(1.0f);
if (index < envelopeSize)
envelope = TAmplitude((float)index.Value() / (float)envelopeSize.Value());
else if (index > (numSamples - envelopeSize))
envelope = TAmplitude(1.0f) - TAmplitude((float)(index - (numSamples - envelopeSize)).Value() / (float)envelopeSize.Value());

// make a sine wave that goes from -1 to 0 at the specified frequency
TAmplitude flangeSine = AdvanceOscilator_Sine(flangePhase, c_flangeFrequency, sound.m_sampleRate) * TAmplitude(0.5f) - TAmplitude(0.5f);

// use that sine wave to calculate an offset backwards in time to sample from
TFractionalSamples flangeOffset = TFractionalSamples((float)index.Value()) + TFractionalSamples(flangeSine.Value() * (float)c_flangeDepth.Value());

// mix the sample with the offset sample and apply the envelope for the front and back of the sound
samples[index.Value()] = (inputData[index.Value()] + GetLerpedAudioSample(inputData, flangeOffset)) * envelope;
}

// normalize the amplitude of the samples to make sure they are as loud as possible without clipping
NormalizeSamples(samples, DBToAmplitude(TDecibels(-3.0f)));

// save as a wave file
WriteWaveFile("out.wav", samples, sound);

return 0;
}
```

More DIY synth stuff coming soon, I have like 5 more posts I want to make right now, with the last couple being about some pretty awesome stuff I learned about recently!

# DIY Synth: Basic Drum

This is a part of the DIY Synthesizer series of posts where each post is roughly built upon the knowledge of the previous posts. If you are lost, check the earlier posts!

Hello! It’s time for another installment of DIY synth. It’s been so long since the last one, that when I look back at the code in that post now, I’m mortified by some of the stylistic and implementation choices I made, EEK! The curse of learning new stuff… we all experience that from time to time hehe.

Following the previous DIY synth posts, you are now able to do quite a bit of synth stuff, but the only way you have to make percussion is to use recorded sound samples of drums. There’s nothing wrong with that, and in fact is an easy way to get good quality percussion, but if you want to be a purist and synthesize everything yourself, that might make you sad to have to rely on a recorded sample.

In today’s post, I’ll walk you through the process of how to create a simple drum in 3 easy steps.

It may not be the best synthesized drum, but it definitely passes as a drum sound, and I provide links that explain how to refine it further.

## Step 1: Sine Wave

Starting out is pretty simple, we just need a tone. Here is a sine wave that lasts 1 second and is a F note in the 1st octave, or approximately 87hz. Drums are low sounds, so we need to start with a low frequency sound.

sine.wav

## Step 2: Envelope

There is a concept in audio called an “envelope”. An envelope is just a fancy way of saying that you are changing the volume over time. The envelope is the “shape” of the volume changes over time.

If you notice, in step 1 the volume (amplitude of the sine wave) isn’t constant throughout the whole thing: it has a part at the beginning that gradually goes from 0 volume to full volume, and a part at the end that goes from full volume to 0 volume (it’s 50 milliseconds on each side if you are curious). That fading is an envelope too, and is actually there to prevent “popping”, which can occur when adjacent audio samples aren’t smooth from one to the next. It might seem like that would be a minor problem, but it’s actually VERY noticeable. Check out the previous DIY synth posts for more info and examples of that!

Anyhow, if you think about the sound a drum makes when you hit it, it starts out loud right away and then quickly fades out. You can play with the specific values of the envelope and get a different sound, but what I went with was 10 milliseconds of fading in (0.01 seconds), 10 milliseconds at full volume (0.01 seconds), and 175 milliseconds of fading out (0.175 seconds).

The fade in time is called the “attack”, the time it remains at full volume is called the “hold”, and the time that it fades out is called the “release”. There are other common stages to envelopes that you might hear about if looking up more info about them; “sustain” and “decay” are two other common parts of an envelope, for instance.
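If it helps to see the envelope shape as code, here is a small sketch of an attack/hold/release envelope evaluated at a point in time. The function name and parameters are mine, just for illustration — the full sample code further down does this with samples instead of seconds.

```cpp
#include <cassert>
#include <cmath>

// Amplitude of an attack/hold/release envelope at a given time.
// All times are in seconds. Returns 0 once the release has finished.
float EnvelopeAmplitude(float time, float attack, float hold, float release)
{
    if (time < 0.0f)
        return 0.0f;
    if (time < attack)                  // fading in
        return time / attack;
    if (time < attack + hold)           // full volume
        return 1.0f;
    if (time < attack + hold + release) // fading out
        return 1.0f - (time - attack - hold) / release;
    return 0.0f;                        // envelope finished
}
```

Multiply each audio sample by the envelope value at that sample’s time and you’ve applied the envelope.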

Envelopes are a big part of what make notes sound like specific instruments, so have fun playing with those values and listening to the results.

Here is the envelope applied to our low frequency sine wave (which you apply by just multiplying them together!)

sineenvelope.wav

## Step 3: Frequency Decay

We have something that sounds a little more interesting than a plain vanilla sine tone, but it doesn’t sound much like a drum yet…

What we are missing is that in a real drum, the frequency of the note decays over time. If that isn’t intuitive, don’t worry it wasn’t for me either. It took me a good amount of reading and investigation to find that out a few years back.

To add frequency decay, let’s have the frequency decay 80% of the way (towards frequency 0) over the “release” (fade out) portion of the envelope. So, the frequency will still be F1 through the entire drum note of attack and hold, but then starting with release, it will decay linearly over time for that 175 ms, until at the end, the frequency should only be 20% of 87hz, or about 17hz.
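That decay can be sketched as a small helper (my own function, not part of the sample code below): during attack and hold, releaseProgress stays at 0, and during release it ramps linearly from 0 to 1.

```cpp
#include <cassert>
#include <cmath>

// Linearly decay a frequency toward zero over the release portion of a note.
// releaseProgress: 0.0 at the start of release, 1.0 at the end.
// decayAmount: 0.8f means the frequency falls 80% of the way toward zero.
float DecayedFrequency(float baseFrequency, float releaseProgress, float decayAmount)
{
    return baseFrequency * (1.0f - decayAmount * releaseProgress);
}
```

At releaseProgress = 1.0 with a base of 87hz this gives 87 * 0.2 = 17.4hz, matching the “about 17hz” figure above.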

Here’s what we end up with:
drum.wav

## Good Enough!

That’s pretty passable as a drum, even if it isn’t the best. A neat thing too is that by changing the starting frequency, you can get different frequencies of your drum and get some different drum sounds.

Here’s a little drum melody showing what I mean:

melody.wav

## Sample Code

Here’s the code with everything above implemented, which created the drum melody. It uses only standard include files, and writes a wave file called “out.wav” when you run it. Play around with the code, adjusting envelope times, frequencies, frequency decay, or even change it from using a sine wave to a different wave form (I included some standard wave forms for you).

Often times, synthesis / music making is all about just playing around with the knobs that are exposed to you until you find something really interesting.

```#define _CRT_SECURE_NO_WARNINGS

#include <stdio.h>
#include <memory.h>
#include <stdint.h>
#include <vector>
#include <algorithm>
#include <limits>

#define _USE_MATH_DEFINES
#include <math.h>

//=====================================================================================
// SNumeric - uses phantom types to enforce type safety
//=====================================================================================
template <typename T, typename PHANTOM_TYPE>
struct SNumeric
{
public:
explicit SNumeric(const T &value) : m_value(value) { }
SNumeric() : m_value() { }
inline T& Value() { return m_value; }
inline const T& Value() const { return m_value; }

typedef SNumeric<T, PHANTOM_TYPE> TType;
typedef T TInnerType;

// Math Operations
TType operator+ (const TType &b) const
{
return TType(this->Value() + b.Value());
}

TType operator- (const TType &b) const
{
return TType(this->Value() - b.Value());
}

TType operator* (const TType &b) const
{
return TType(this->Value() * b.Value());
}

TType operator/ (const TType &b) const
{
return TType(this->Value() / b.Value());
}

TType& operator+= (const TType &b)
{
Value() += b.Value();
return *this;
}

TType& operator-= (const TType &b)
{
Value() -= b.Value();
return *this;
}

TType& operator*= (const TType &b)
{
Value() *= b.Value();
return *this;
}

TType& operator/= (const TType &b)
{
Value() /= b.Value();
return *this;
}

TType& operator++ ()
{
Value()++;
return *this;
}

TType& operator-- ()
{
Value()--;
return *this;
}

// Extended Math Operations
template <typename NEWTYPE>
NEWTYPE Divide(const TType &b)
{
return ((NEWTYPE)this->Value()) / ((NEWTYPE)b.Value());
}

// Logic Operations
bool operator< (const TType &b) const {
return this->Value() < b.Value();
}
bool operator<= (const TType &b) const {
return this->Value() <= b.Value();
}
bool operator> (const TType &b) const {
return this->Value() > b.Value();
}
bool operator>= (const TType &b) const {
return this->Value() >= b.Value();
}
bool operator== (const TType &b) const {
return this->Value() == b.Value();
}
bool operator!= (const TType &b) const {
return this->Value() != b.Value();
}

private:
T m_value;
};

//=====================================================================================
// Typedefs
//=====================================================================================

typedef uint8_t uint8;
typedef uint16_t uint16;
typedef uint32_t uint32;
typedef int16_t int16;
typedef int32_t int32;

// type safe types!
typedef SNumeric<float, struct S__Frequency>    TFrequency;
typedef SNumeric<uint32, struct S__TimeMs>      TTimeMs;
typedef SNumeric<uint32, struct S__Samples>     TSamples;
typedef SNumeric<float, struct S__Decibels>     TDecibels;
typedef SNumeric<float, struct S__Amplitude>    TAmplitude;
typedef SNumeric<uint8, struct S__ChannelCount> TChannelCount;
typedef SNumeric<float, struct S__Phase>        TPhase;

//=====================================================================================
// Constants
//=====================================================================================

static const float c_pi = (float)M_PI;
static const float c_twoPi = c_pi * 2.0f;

//=====================================================================================
// Structs
//=====================================================================================

struct SSoundSettings
{
TSamples        m_sampleRate;
TTimeMs         m_lengthMs;
TChannelCount   m_numChannels;
TSamples        m_currentSample;
};

struct SDrumSettings
{
TFrequency  m_frequency;
TSamples    m_attack;
TSamples    m_sustain;
TSamples    m_release;
TAmplitude  m_volume;
};

struct SDrumInstance
{
SDrumInstance(TSamples startTime, const SDrumSettings &settings)
: m_startTime(startTime)
, m_settings(settings)
, m_phase(0.0f)
{

}

const SDrumSettings     &m_settings;
TSamples                m_startTime;
TPhase                  m_phase;
};

//=====================================================================================
// Globals
//=====================================================================================

std::vector<SDrumInstance> g_drumInstances;

//=====================================================================================
// Conversion Functions
//=====================================================================================
inline TDecibels AmplitudeToDB(TAmplitude volume)
{
return TDecibels(20.0f * log10(volume.Value()));
}

inline TAmplitude DBToAmplitude(TDecibels dB)
{
return TAmplitude(pow(10.0f, dB.Value() / 20.0f));
}

TSamples SecondsToSamples(const SSoundSettings &s, float seconds)
{
return TSamples((int)(seconds * (float)s.m_sampleRate.Value()));
}

TSamples MilliSecondsToSamples(const SSoundSettings &s, float milliseconds)
{
return SecondsToSamples(s, milliseconds / 1000.0f);
}

TTimeMs SecondsToMilliseconds(float seconds)
{
return TTimeMs((uint32)(seconds * 1000.0f));
}

TFrequency Frequency(float octave, float note)
{
/* frequency = 440×(2^(n/12))
Notes:
0  = A
1  = A#
2  = B
3  = C
4  = C#
5  = D
6  = D#
7  = E
8  = F
9  = F#
10 = G
11 = G# */
return TFrequency((float)(440 * pow(2.0, ((double)((octave - 4) * 12 + note)) / 12.0)));
}

template <typename T>
T AmplitudeToAudioSample(const TAmplitude& in)
{
const T c_min = std::numeric_limits<T>::min();
const T c_max = std::numeric_limits<T>::max();
const float c_minFloat = (float)c_min;
const float c_maxFloat = (float)c_max;

float ret = in.Value() * c_maxFloat;

if (ret < c_minFloat)
return c_min;

if (ret > c_maxFloat)
return c_max;

return (T)ret;
}

//=====================================================================================
// Wave File Writing Code
//=====================================================================================
struct SMinimalWaveFileHeader
{
//the main chunk
unsigned char m_szChunkID[4];      //0
uint32        m_nChunkSize;        //4
unsigned char m_szFormat[4];       //8

//sub chunk 1 "fmt "
unsigned char m_szSubChunk1ID[4];  //12
uint32        m_nSubChunk1Size;    //16
uint16        m_nAudioFormat;      //20
uint16        m_nNumChannels;      //22
uint32        m_nSampleRate;       //24
uint32        m_nByteRate;         //28
uint16        m_nBlockAlign;       //32
uint16        m_nBitsPerSample;    //34

//sub chunk 2 "data"
unsigned char m_szSubChunk2ID[4];  //36
uint32        m_nSubChunk2Size;    //40

//then comes the data!
};

//this writes a wave file
template <typename T>
bool WriteWaveFile(const char *fileName, const std::vector<TAmplitude> &samples, const SSoundSettings &sound)
{
//open the file if we can
FILE *file = fopen(fileName, "w+b");
if (!file)
return false;

//calculate bits per sample and the data size
const int32 bitsPerSample = sizeof(T) * 8;
const int dataSize = samples.size() * sizeof(T);

//fill out the main chunk
SMinimalWaveFileHeader waveHeader;
memcpy(waveHeader.m_szChunkID, "RIFF", 4);
waveHeader.m_nChunkSize = dataSize + 36;
memcpy(waveHeader.m_szFormat, "WAVE", 4);

//fill out sub chunk 1 "fmt "
memcpy(waveHeader.m_szSubChunk1ID, "fmt ", 4);
waveHeader.m_nSubChunk1Size = 16;
waveHeader.m_nAudioFormat = 1;
waveHeader.m_nNumChannels = sound.m_numChannels.Value();
waveHeader.m_nSampleRate = sound.m_sampleRate.Value();
waveHeader.m_nByteRate = sound.m_sampleRate.Value() * sound.m_numChannels.Value() * bitsPerSample / 8;
waveHeader.m_nBlockAlign = sound.m_numChannels.Value() * bitsPerSample / 8;
waveHeader.m_nBitsPerSample = bitsPerSample;

//fill out sub chunk 2 "data"
memcpy(waveHeader.m_szSubChunk2ID, "data", 4);
waveHeader.m_nSubChunk2Size = dataSize;

//write the header
fwrite(&waveHeader, sizeof(SMinimalWaveFileHeader), 1, file);

//write the wave data itself, converting it from float to the type specified
std::vector<T> outSamples;
outSamples.resize(samples.size());
for (size_t index = 0; index < samples.size(); ++index)
outSamples[index] = AmplitudeToAudioSample<T>(samples[index]);
fwrite(&outSamples[0], dataSize, 1, file);

//close the file and return success
fclose(file);
return true;
}

//=====================================================================================
// Oscilators
//=====================================================================================

void AdvancePhase(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
phase += TPhase(frequency.Value() / (float)sampleRate.Value());
while (phase >= TPhase(1.0f))
phase -= TPhase(1.0f);
while (phase < TPhase(0.0f))
phase += TPhase(1.0f);
}

TAmplitude AdvanceOscilator_Sine(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
AdvancePhase(phase, frequency, sampleRate);
return TAmplitude(sin(phase.Value() * c_twoPi));
}

TAmplitude AdvanceOscilator_Saw(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
AdvancePhase(phase, frequency, sampleRate);
return TAmplitude((phase.Value() * 2.0f) - 1.0f);
}

TAmplitude AdvanceOscilator_Square(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
AdvancePhase(phase, frequency, sampleRate);
return TAmplitude(phase.Value() > 0.5f ? 1.0f : -1.0f);
}

TAmplitude AdvanceOscilator_Triangle(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
AdvancePhase(phase, frequency, sampleRate);
if (phase > TPhase(0.5f))
return TAmplitude((((1.0f - phase.Value()) * 2.0f) * 2.0f) - 1.0f);
else
return TAmplitude(((phase.Value() * 2.0f) * 2.0f) - 1.0f);
}

TAmplitude AdvanceOscilator_Saw_BandLimited(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
AdvancePhase(phase, frequency, sampleRate);

// sum the harmonics
TAmplitude ret(0.0f);
for (int harmonicIndex = 1; harmonicIndex <= 4; ++harmonicIndex)
{
TPhase harmonicPhase = phase * TPhase((float)harmonicIndex);
ret += TAmplitude(sin(harmonicPhase.Value()*c_twoPi) / (float)harmonicIndex);
}

ret *= TAmplitude(2.0f / c_pi);

return ret;
}

TAmplitude AdvanceOscilator_Square_BandLimited(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
AdvancePhase(phase, frequency, sampleRate);

// sum the harmonics
TAmplitude ret(0.0f);
for (int harmonicIndex = 1; harmonicIndex <= 4; ++harmonicIndex)
{
float harmonicFactor = (float)harmonicIndex * 2.0f - 1.0f;
TPhase harmonicPhase = phase * TPhase(harmonicFactor);
ret += TAmplitude(sin(harmonicPhase.Value()*c_twoPi) / harmonicFactor);
}

ret *= TAmplitude(4.0f / c_pi);

return ret;
}

TAmplitude AdvanceOscilator_Triangle_BandLimited(TPhase &phase, TFrequency frequency, TSamples sampleRate)
{
AdvancePhase(phase, frequency, sampleRate);

// sum the harmonics
TAmplitude ret(0.0f);
bool subtract = true;
for (int harmonicIndex = 1; harmonicIndex <= 10; ++harmonicIndex)
{
float harmonicFactor = (float)harmonicIndex * 2.0f - 1.0f;
TPhase harmonicPhase = phase * TPhase(harmonicFactor);
ret += TAmplitude(sin(harmonicPhase.Value()*c_twoPi) / (harmonicFactor*harmonicFactor)) * TAmplitude(subtract ? -1.0f : 1.0f);
subtract = !subtract;
}

ret *= TAmplitude(8.0f / (c_pi*c_pi));

return ret;
}

//=====================================================================================
// Drum Synthesis
//=====================================================================================
TAmplitude Drum(const SSoundSettings &sound, SDrumInstance &drum)
{
// if the drum hasn't started yet, nothing to do!
if (sound.m_currentSample < drum.m_startTime)
return TAmplitude(0.0f);

TFrequency frequencyMultiplier(1.0f);
TAmplitude envelopeVolume(0.0f);
TSamples sampleRelative = sound.m_currentSample - drum.m_startTime;

if (sampleRelative < drum.m_settings.m_attack)
{
envelopeVolume = TAmplitude(sampleRelative.Divide<float>(drum.m_settings.m_attack));
}
else if (sampleRelative < drum.m_settings.m_attack + drum.m_settings.m_sustain)
{
envelopeVolume = TAmplitude(1.0f);
}
else if (sampleRelative < drum.m_settings.m_attack + drum.m_settings.m_sustain + drum.m_settings.m_release)
{
sampleRelative -= (drum.m_settings.m_attack + drum.m_settings.m_sustain);
envelopeVolume = TAmplitude(1.0f - sampleRelative.Divide<float>(drum.m_settings.m_release));
frequencyMultiplier = TFrequency(envelopeVolume.Value());
}
else
{
return TAmplitude(0.0f);
}

const TFrequency freqDecay(0.8f);
envelopeVolume *= drum.m_settings.m_volume;
TFrequency frequency = drum.m_settings.m_frequency * ((TFrequency(1.0f) - freqDecay) + (frequencyMultiplier*freqDecay));
return AdvanceOscilator_Sine(drum.m_phase, frequency, sound.m_sampleRate) * envelopeVolume;
}

//=====================================================================================
// Main
//=====================================================================================
int main(int argc, char **argv)
{
//our sound parameters
SSoundSettings sound;
sound.m_sampleRate = TSamples(44100);
sound.m_lengthMs = SecondsToMilliseconds(9.0f);
sound.m_numChannels = TChannelCount(1);

// set up the data for our drums.
SDrumSettings drum1;
drum1.m_frequency = Frequency(1, 8);
drum1.m_attack = MilliSecondsToSamples(sound, 10.0f);
drum1.m_sustain = MilliSecondsToSamples(sound, 10.0f);
drum1.m_release = MilliSecondsToSamples(sound, 175.0f);
drum1.m_volume = DBToAmplitude(TDecibels(-3.0f));

SDrumSettings drum2 = drum1;
drum2.m_frequency = Frequency(2, 5);

SDrumSettings drum3 = drum1;
drum3.m_frequency = Frequency(2, 5);

SDrumSettings drum4 = drum1;
drum4.m_frequency = Frequency(1, 10);

SDrumSettings drumBG = drum1;
drumBG.m_frequency = Frequency(1, 2);
drumBG.m_volume = DBToAmplitude(TDecibels(-12.0f));

// setup drums: make a 4 beat pattern that occurs every other second
for (uint32 i = 1; i < sound.m_lengthMs.Value() / 1000; i += 2)
{
g_drumInstances.push_back(SDrumInstance(SecondsToSamples(sound, (float)i + 0.00f), drum1));
g_drumInstances.push_back(SDrumInstance(SecondsToSamples(sound, (float)i + 0.25f), drum2));
g_drumInstances.push_back(SDrumInstance(SecondsToSamples(sound, (float)i + 0.50f), drum3));
g_drumInstances.push_back(SDrumInstance(SecondsToSamples(sound, (float)i + 1.00f), drum4));
}

// setup drums: make a background beat
for (uint32 i = 0, c = sound.m_lengthMs.Value() / 1000 * 4; i < c; ++i)
g_drumInstances.push_back(SDrumInstance(SecondsToSamples(sound, (float)i / 4.0f), drumBG));

//make our buffer to hold the samples
TSamples numSamples = TSamples(sound.m_sampleRate.Value() * sound.m_numChannels.Value() * sound.m_lengthMs.Value() / 1000);
std::vector<TAmplitude> samples;
samples.resize(numSamples.Value());

// render our audio samples from our drum list
for (TSamples index = TSamples(0); index < numSamples; ++index)
{
sound.m_currentSample = index;
TAmplitude &sample = samples[index.Value()];
sample = TAmplitude(0.0f);

std::for_each(
g_drumInstances.begin(),
g_drumInstances.end(),
[&sample, &sound](SDrumInstance& drum)
{
sample += Drum(sound, drum);
}
);
}

// save as a wave file
WriteWaveFile<int16>("out.wav", samples, sound);
}
```

If you want a better sounding drum, check out these links. Frankly, read everything on that site… there is such great stuff there. If you like synth, that is the place to read about cool stuff, but unfortunately it’s bent towards electrical engineers and musicians, not programmers.

Here is some more info about envelopes too. Envelopes really are a core ingredient of synthesizers. Applying different envelopes to the same sine wave frequency, you can make a variety of different sounding instruments. They are pretty darn powerful.
Wikipedia: ADSR Envelope

I got some REALLY cool synth stuff planned in the near future, so keep an eye out!

# HyperLogLog: Estimate Unique Value Counts Like The Pros

This is the last of my planned posts on probabilistic algorithms. There may be more in the future, but the initial volley is finished 😛

Thanks again to Ben Deane for exposure to this really interesting area of computer science.

This post is about HyperLogLog, which is used to estimate a count of how many “uniques” there are in a stream of data. It can also do intersection and union between two HyperLogLog objects, allowing you to see how many items HLL objects have in common. It was invented in 2007 and is currently used in many big data situations, including by Google, Redis, and Amazon.

This algorithm is probabilistic in that you can trade storage space for accuracy. You can calculate how much accuracy to expect for a given amount of storage, or calculate how much storage you’d need for a specific amount of accuracy.

If this sounds a lot like the first probabilistic algorithm post I made (Estimating Counts of Distinct Values with KMV), that means you have been paying attention. HyperLogLog is in the same family of algorithms, but it is way better at most things than KMV is, and seems to be the current standard for DV sketches (distinct value estimators). The one thing KMV seems to be better at is calculating intersections between objects, which I’ll talk about more below.

To give you an idea of the power of HyperLogLog, here’s a quote from the paper it was first described in (link at end of post):

“…the new algorithm makes it possible to estimate cardinalities well beyond 10^9 with a typical accuracy of 2% while using a memory of only 1.5 kilobytes”

By the end of this post you should understand how that is possible, and also be able to start with the sample code and have HyperLogLog capabilities in your own C++ project immediately!

## A Usage Case: Upvoting

A usage case for this algorithm would be adding an “upvote” button to a webpage.

A naive solution to this would be to have an upvote button that when clicked would increment a counter on the server. That is problematic though because people can vote as many times as they want. You could hack together a solution to this by having some client side cookies and javascript limiting people from doing that, but all your security is out the window since it’s a client side fix, and you will soon have trolls spamming your vote counters by doing raw HTTP requests to your servers, to ruin your day, just because they can.

A less naive solution would be to have some unique identifier per user – whether that was something like a username, or just an IP address – and store that in a voting table, only allowing the counter to increment if the person wasn’t already in the table, and entering them into the table when doing the increment.

The problem with that solution is that the table of users who have already voted might get extremely huge, causing lots of memory usage to hold it, lots of processing time to see if a user exists in the table already (even with a hash based lookup), and it doesn’t parallelize very well to multiple servers. You also have to implement some protection against race conditions around the “look for user in table, increment vote counter, add user to table” work, which means some potentially costly synchronization logic.

A better solution would be to use HyperLogLog, and here are some reasons why:

• It has a fixed size you determine in advance. The bold quote from the paper would indicate that 1.5KB is likely enough for our needs, being able to count over one billion unique values. Bringing it up to 2KB would be enough to store a heck of a lot more, like somewhere up near 2^250 unique items.
• It automatically disallows the same item being accounted for more than once, so our multiple votes from same voter problem is gone with no need for costly synchronization work.
• It lends itself well to being parallelized across machines, or just using SIMD / GPGPU if you wanted to squeeze some more performance out of it.
• You can very quickly do set operations on multiple HLL objects.

The first three items solve the problems in the naive solutions, but the fourth adds some bonus functionality that is pretty cool.

Using set operations, you can do things like figure out if the same people are upvoting a dulce de leche flan dessert compared to a cheesy grits recipe, and you can calculate that VERY QUICKLY.

A possibly naive way to suggest a recipe to a user would be to show them recipes that a lot of people have upvoted.

A more custom tailored suggestion (and thus, hopefully a better suggestion) would be when a person votes on a recipe, you can tell them “hey, a lot of the same people that upvoted the recipe you just upvoted also upvoted this other recipe, why don’t you check it out!”

So now, you don’t just have a voting system, you now also have a CLASSIFICATION SYSTEM.

Yeah ok, so that’s a little hand wavy and making a real suggestion system has more details to solve than just those etc etc, but hopefully you can see how this algorithm can be a fundamental building block to many “big data” problems.

Onto the details of how it works!

## Basic Premise – Coin Flips

When you flip a coin, what are the odds that it will be heads? There’s an even chance of heads or tails, so the chance of heads is 1 in 2, or 1/2, or 50%.

What are the odds that if you flip a coin twice in a row it will be heads both times? Well, on each flip there is a 50% chance, so you multiply .5 * .5 to get the answer, which is .25 or 25%. There’s a 25% chance, or a 1 in 4 chance, that you will flip heads twice in a row with two coin flips. It’s the same chance that you will flip two tails in a row, or that you will flip heads then tails, or tails then heads. All possible outcomes have the same probability.

Another way to look at this, instead of probability, is to see how many possible combinations there are, like this:

| Result | First Flip | Second Flip |
|--------|------------|-------------|
| 0 | Heads | Heads |
| 1 | Heads | Tails |
| 2 | Tails | Heads |
| 3 | Tails | Tails |

There’s a 100% chance that two coin flips will be somewhere in those four results, and all results have an even chance of happening, so result 0 “heads, heads” has a 1 in 4 chance of happening. All of the results above have a 1 in 4 chance. Interestingly, looking at the above table you can also see that there is a 2/4 chance that the first flip will be heads, which is also 1/2, 1 in 2 or 50% chance. That agrees with what we said before, that if only doing one coin flip (the first coin flip), there is a 50% chance of heads or tails.

If you switched to 3 coin flips, there would be a 1 in 8 chance of any specific event happening, since there are 8 possible outcomes:

| Result | Flips |
|--------|-------|
| 0 | Heads, Heads, Heads |
| 1 | Heads, Heads, Tails |
| 2 | Heads, Tails, Heads |
| 3 | Heads, Tails, Tails |
| 4 | Tails, Heads, Heads |
| 5 | Tails, Heads, Tails |
| 6 | Tails, Tails, Heads |
| 7 | Tails, Tails, Tails |

It doesn’t matter which specific sequence you are looking at above, the chance that 3 coin flips will result in any of those specific sequences is 1 in 8.

In fact, for N coin flips, the probability that you will encounter any specific sequence (permutation) of heads and tails is 1 in 2^N, or 1/(2^N).

Now, let’s change it up a bit. What are the odds that you will have some number of heads in a row, and then a tails? In other words, what are the odds that you will have a “run” of heads of a specified length?

Well, a run of 0 would just mean you flip the coin once and get tails. Since you are doing one coin flip and you are looking for a specific one of two possible outcomes, the probability is 1/2, or 1 in 2, or 50%.

A run of 1 would mean you flipped the coin once, got heads, flipped it again and got tails. Since you are doing two coin flips and are looking for a specific one of four possible outcomes, the probability of that happening is 1/2*1/2 = 1/4, or 1 in 4, or 25%.

A run of 2 means heads, heads, tails. 3 coin flips = 1/2*1/2*1/2 = 1/8, or 1 in 8, or 12.5%.

By now you may have noticed that the probability of getting N heads and then a tail is just 1 in the number of possible outcomes, and the number of possible outcomes for those N+1 coin flips is 2^(N+1).

More formally, the odds of getting N heads and then a tail is 1/(2^(N+1)).

Let’s now swap out the idea of heads and tails with the idea of binary digits 0 and 1 in a sequence of random numbers. Think of “heads” as 0, and “tails” as 1.

If you generate a random bit (binary digit), there is a 50% chance it will be a 0, and a 50% chance it will be a 1.

If you generate two random bits, there is a 25% chance for each of the possible outcomes below:

| Sequence Number | Sequence |
|-----------------|----------|
| 0 | 00 |
| 1 | 01 |
| 2 | 10 |
| 3 | 11 |

Now, what are the odds that in a random binary number, we will have N zeros and then a 1?

Don’t let the binary digits scare you, it’s the same answer as the coin flip question: 1/(2^(N+1))

An interesting property of this is that if you ask for a random 8 bit binary number and get xxxxx100, using the formula above, you know there is a 1 in 8 chance that a random number would end in “100”.

Using this information you can say to yourself “i’ll bet I’ve seen about 8 different random numbers at this point”, and that’s a fairly decent guess, without actually having had to pay attention to any of the numbers that came before.
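Counting the run of zeros on the right side of a number is just a short loop, by the way. Here’s a from-scratch sketch (my own function; production code would typically use a count-trailing-zeros compiler intrinsic instead):

```cpp
#include <cassert>
#include <cstdint>

// Count the 0 bits on the right side of value, up to numBits.
// e.g. ...100 has a run of 2, ...10000 has a run of 4.
uint32_t CountRunOfZeros(uint32_t value, uint32_t numBits = 32)
{
    uint32_t run = 0;
    while (run < numBits && (value & 1) == 0)
    {
        ++run;
        value >>= 1;
    }
    return run;
}
```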

A better idea though would be to watch all the random numbers as they are generated, and keep track of the longest run of zeros you’ve seen on the right side of any number.

Using this “longest run seen” value, you can guess how many unique random numbers you’ve seen so far. If the longest run you’ve seen is N zeros and then a 1, the guess as to how many random numbers you’ve seen is 2^(N+1).

If you’ve seen a maximum of 4 zeros and a 1 (xxx10000), you’ve probably seen about 32 numbers on average. If you’ve seen at maximum 2 zeros and a 1 (xxxxx100), you’ve probably seen about 8 numbers on average.
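Putting those pieces together, a single-register version of this estimator is only a few lines (the names here are mine; this is the one-register idea before HyperLogLog’s multi-register averaging):

```cpp
#include <cassert>
#include <cstdint>

// Remember the longest run of right-side zeros seen; estimate count as 2^(N+1).
struct SSingleRegisterEstimator
{
    uint32_t m_longestRun = 0;

    void AddValue(uint32_t value)
    {
        // count the zeros on the right side of the value
        uint32_t run = 0;
        while (run < 32 && (value & 1) == 0)
        {
            ++run;
            value >>= 1;
        }
        if (run > m_longestRun)
            m_longestRun = run;
    }

    // estimate of how many unique random values have been seen so far
    uint64_t Estimate() const
    {
        return uint64_t(1) << (m_longestRun + 1);
    }
};
```

Feeding it 10110100 gives an estimate of 8, and then seeing 10011000 bumps the estimate to 16, exactly as described below.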

Since by definition randomness is RANDOM, there will be fluctuation and your guess will not always be so accurate. You might have only seen one random number, but its value may have been 10000000 (0x80), which would incorrectly cause you to estimate that 256 items have been seen (2^8), when in fact only a single item has been seen.

To combat this, HyperLogLog uses multiple registers (counters) to keep multiple counts, and then averages the estimates together. More info on that below, but for now, hopefully you can see how that would smooth things out towards a more average case. The more registers you use, the more “average case” your estimation should be, so the more accurate your estimate should be.

There’s an interesting twist here though… you aren’t actually estimating how many random numbers you’ve seen, but instead are estimating how many UNIQUE random numbers you’ve seen. Random numbers can repeat, but this count estimation will only count each unique value once, no matter how many times it appears.

To help visualize that, no matter how many times you see 10110100 – whether it’s only once, or ten thousand times – the longest run will still be 2. After you’ve seen ten thousand of those numbers, as soon as you see the next number 10011000, the longest run will then be 3.

That may sound like a trivial difference, that we are counting uniques and not total values, but as the next section will show, it’s actually a very important difference, and is where this technique derives its power.

Also, if we were counting non uniques, we could just use an integer and increment it for each item we saw (;

## Hashing Functions as Pseudorandom Number Generators

An interesting property of good hashing algorithms is that the output you get when you hash an object will be indistinguishable from random numbers. If you hash 3, you’ll get a random number, and if you hash 4, you’ll likely get a completely different random number. You can’t tell from the output what the nature of the input was, or how similar the input objects were.

But, of course, when you put the same input into a hash function, you will always get the same output.

These two properties are what makes hash functions so useful in what we are trying to do in HLL.

Quick Tangent: These same properties also apply to encryption, by the way. The fact that the output looks random is why hashed and encrypted data doesn’t compress very well. There are no patterns in the data that can be exploited to express the data as anything shorter than the full length of the data itself. You should not be able to gain any knowledge about the nature of the input by looking at the output, except perhaps the size of the source data in the case of encryption. Also, whereas hashes are not reversible at all, encryption is only reversible if you have the encryption key (password). HLL and similar algorithms use hashes, not encryption, because they want a fixed size output, and they don’t care about reversing the process.

The output of the hash function is the source of the pseudorandom numbers that we plug into the HyperLogLog algorithm, and is what allows us to count uniques of any type of thing, so long as that thing can be hashed.

So, to do HLL counting, you hash every object you see in a stream keeping track of the longest run of zeros (in binary) you’ve seen in the resulting hashes. You store “longest runs seen” in multiple registers which you can then later use to get an averaged estimate of unique items encountered. That’s all there is to it.
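Here’s a toy end-to-end sketch of that, using std::hash as the hash function and the “raw estimate” formula from the HLL paper (the 0.673 bias-correction constant is the paper’s alpha value for 16 registers). This is my own minimal illustration, not the sample code from this post — a real implementation also needs the paper’s small-range and large-range corrections, which I’ve left out:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

// A toy HyperLogLog: 16 registers, std::hash as the pseudorandom number source.
struct SToyHLL
{
    std::vector<uint8_t> m_registers;

    SToyHLL() : m_registers(16, 0) {}

    void AddItem(const std::string &item)
    {
        uint64_t hash = std::hash<std::string>()(item);

        // use the low 4 bits to select a register, then throw those bits away
        const size_t registerIndex = hash & 0xF;
        hash >>= 4;

        // count the run of zeros on the right side of the remaining bits
        uint8_t run = 0;
        while (run < 60 && (hash & 1) == 0)
        {
            ++run;
            hash >>= 1;
        }

        // each register remembers the longest run (plus one) it has seen
        if (run + 1 > m_registers[registerIndex])
            m_registers[registerIndex] = run + 1;
    }

    // raw estimate from the paper: alpha * m^2 / sum(2^-register)
    double Estimate() const
    {
        double sum = 0.0;
        for (uint8_t reg : m_registers)
            sum += pow(2.0, -double(reg));
        const double m = double(m_registers.size());
        const double alpha16 = 0.673; // bias correction constant for 16 registers
        return alpha16 * m * m / sum;
    }
};
```

Note how adding the same item twice leaves the registers untouched, which is exactly the “only counts uniques” property described above.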

MAGIC!

That’s how things work from a high level, now let’s get into the nitty gritty a bit…

## Handling Multiple Registers

Let’s say you have a hash function that spits out a 32 bit hash, which is a pretty common thing for HLL implementations.

We talked about figuring out the length of the run of 0’s in the hash output, but if you had 16 registers to store run lengths in, how do you choose which register to store each run length in?

The common way to solve this is to use some of your hash bits for register selection. If you have 16 registers, you could use the lowest 4 bits of your hash as the register index to store the count in for instance.

There is a problem here though, that hopefully you can see already. The problem is that if we have a run of 3 with a hash that ends in the binary number 1000, we will only ever store that run length in register 8! By using the same bits for register selection as we do for counting runs, we’ve biased our count and introduced inaccuracy (error), because certain numbers will only get accounted for by specific registers. The ideal situation is that every number is equally likely to end up in any register. The algorithm should “randomly” choose which register to use, but be deterministic in how it chooses that register.

The bits you use for register selection cannot be reused for counting runs, or else you’ll fall into the trap of only storing certain numbers in specific registers.

You could perhaps hash your hash to get another pseudo random number to use for register selection, but a better option is to just throw out those register selection bits once you use them.
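For example, with 16 registers, splitting a 32 bit hash might look like this (a sketch with made-up names, not code from this post's sample program):

```cpp
#include <cassert>
#include <cstdint>

// With 16 registers, the low 4 bits of the hash pick the register, and the
// remaining 28 bits are what we scan for runs of zeros.
struct HashSplit
{
    uint32_t registerIndex; // which register this hash updates
    uint32_t runBits;       // the 28 bits left over for counting zeros
};

HashSplit SplitHash (uint32_t hash)
{
    HashSplit result;
    result.registerIndex = hash & 0xF; // low 4 bits -> register 0..15
    result.runBits = hash >> 4;        // throw those 4 bits away for counting
    return result;
}
```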

Reducing the number of bits you evaluate for runs of 0’s comes at a cost though. It means that your estimation of unique values seen is capped at a lower number. With 32 bits, you can estimate a count of up to 2^32 (~4 billion), but at 28 bits, after using 4 bits for register selection, you can only estimate a count of up to 2^28 (~268 million).

I believe this is one of the reasons why Google invented “HyperLogLog++”, which uses a 64 bit hash and has some other improvements. Check the links at the bottom of this post for more information.

It’s a bit overkill, but in the sample code in this post, we create a 128 bit hash, use the top 32 bits for register selection, and the lower 96 bits for looking at runs. I say it’s overkill because at 96 bits, we can estimate counts up to 79 billion billion billion, which is way huger than anyone (even google!) is ever likely to need.

## Register Sizes

As I mentioned above, many people use 32 bit hashes, for estimating at most about 4 billion unique objects. Google bumps it up to 64 bits for up to 18 billion billion uniques, and our sample code uses 96 bits for run evaluation, letting us estimate counts up to 79 billion billion billion.

These numbers are beyond huge, but believe it or not, the size of the registers themselves used to track these things are pretty darn tiny.

Since we are looking for runs of zeros, if we have a 32 bit hash, we only need to be able to store the values 0 to 32. 0 to 31 can be stored in 5 bits, and chances are that people aren’t going to bump it up to 6 bits just to get that extra value – especially when in practice you are going to use a few bits of the hash as register selection.

So, for a 32 bit hash, you really only need 5 bits per register to keep track of longest runs seen.

For a 64 bit hash, you need to be able to store the values 0 to 64. Similar to the above, 0-63 can be stored in 6 bits, and we can ditch being able to store 64, so 6 bits per register is plenty.

For our 96 bit hash (since we use 32 bits for register selection), we’d only need to be able to store 0-96, which can fit entirely in 7 bits, since 7 bits can store 0-127.

In our example code, I’m an excessive glutton however, and store our longest run value in 8 bits, wasting an entire bit of memory per register.

Yep, in our excessively gigantic 128 bit hash HLL DV Sketch code, I use an ENTIRE BYTE of memory per register. The Horror!!!

With a 32 or 64 bit hash, you could drop that down to 5 or 6 bits per register, and either condense your registers in memory, or perhaps even use those extra bits for something else if you wanted (need some flags?!).

## Register Counts

So, our register size itself is fairly tiny, where my gluttonous, wasteful programming uses a single byte per register. How many registers do we need though?

The answer to that depends on how much accuracy you want.

To calculate how much error there is for M registers, the equation is: expectedError = 1.04 / sqrt(M)

To calculate how many registers you need to achieve a specific expected error, the equation is: M = 676 / (625 * expectedError^2)

In those equations, an expectedError of 0.03 would mean 3%.
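Those two formulas in code (the second is just the first solved for M, since 676 / 625 = 1.04^2):

```cpp
#include <cassert>
#include <cmath>

// expectedError = 1.04 / sqrt(M): the expected error from using M registers
double ExpectedError (double numRegisters)
{
    return 1.04 / std::sqrt(numRegisters);
}

// M = 676 / (625 * expectedError^2): registers needed for a target error.
// This is the first formula solved for M, since 676 / 625 = 1.04^2.
double RegistersNeeded (double expectedError)
{
    return 676.0 / (625.0 * expectedError * expectedError);
}
```

Plugging in 2048 registers gives ExpectedError(2048) ≈ 0.023, which is the roughly 2% accuracy talked about in this post.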

Check out the table below to get an idea of accuracy vs size.

Note that since we use bits from our hash to do register selection, our number of registers is a power of 2.

| Register Bits | Register Count | Error |
|---------------|----------------|-------|
| 0 | 1 | 104% |
| 1 | 2 | 73.5% |
| 2 | 4 | 52% |
| 3 | 8 | 36.7% |
| 4 | 16 | 26% |
| 5 | 32 | 18.3% |
| 6 | 64 | 13% |
| 7 | 128 | 9% |
| 8 | 256 | 6.5% |
| 9 | 512 | 4.6% |
| 10 | 1024 | 3.2% |
| 11 | 2048 | 2.2% |
| 12 | 4096 | 1.6% |

Here is a graph showing how number of bits used affects error. Bits used is the x axis, and expected error is the y axis. From the table and the graph you can see that adding bits (registers) gives diminishing returns in error reduction. It’s especially diminishing because whenever we add another bit, we double our storage size (double the number of registers we use).

This shows us that this algorithm is great at counting a large number of uniques, since one byte per counter can count up to about 2^256 (2^2^8) uniques, but it isn’t super great at getting a low error rate. If you are ok with about 2% accuracy, the storage space needed is pretty small though!

Remember the claim at the top of this post?

“…the new algorithm makes it possible to estimate cardinalities well beyond 10^9 with a typical accuracy of 2% while using a memory of only 1.5 kilobytes”

Looking at the bits used / error table, you can see 11 bits, or 2048 registers, gives just a little over 2% accuracy.

If you are using 32 bits of hash to look for runs of zeros, you can use 6 bit registers to store the longest run seen, if you want to waste a bit to be able to store 0-32 instead of just 0-31.

So, 2048 registers * 6 bits = 12288 bits of register storage needed. That is 1536 bytes, or exactly 1.5KB.

You could count up to ~4 billion uniques (2^32) with that configuration, but error increases as you get closer to that limit, so I think that’s why they limited their statement to counting ~1 billion uniques (10^9).

## Estimating Counts

The math behind the count estimation is a bit complex (check the paper in the links section for more info!) but part of how it works is that it uses the harmonic mean to average the data from the registers together. Since there is randomness involved, and differences in our run lengths mean being off by exponential amounts, the harmonic mean is great at filtering out large outliers due to random fluctuations. Fluctuations to the “too small” end won’t matter too much, since they will be overwritten by other values (we store the maximum value seen). Fluctuations to the “too large” end are mitigated by the harmonic mean.

Here’s some pseudo code for estimating the count. Note that we are storing the position of the first 1 in the registers, not the run length of zeros. That’s an important distinction because it means the number is 1 higher than it would be otherwise, which, if you get it wrong, makes your estimation half as large as it should be. It also means that if you see a zero register, you know it is uninitialized and hasn’t seen a value yet.

```Alpha = 0.7213 / (1 + 1.079 / NumRegisters)
Sum = 0
for i = 0 to NumRegisters - 1
    Sum = Sum + pow(2, -Register[i])

Estimation = Alpha * NumRegisters^2 / Sum

// do small range correction
if Estimation < 5 / 2 * NumRegisters
{
    if NumEmptyRegisters > 0
        Estimation = NumRegisters * ln(NumRegisters / NumEmptyRegisters)
}
// do large range correction
else if Estimation > 2^32 / 30
{
    Estimation = -2^32 * ln(1 - Estimation / 2^32)
}
```

Small range correction is there because when not enough registers have been filled in (they haven’t gotten any data yet), the normal algorithm path has expected error greater than 1.04. Large range correction is there for the same reason, but on the high end side, when the registers are saturated.
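The “position of the first 1” bookkeeping can be sketched like this (a standalone helper, not code from the sample program below):

```cpp
#include <cassert>
#include <cstdint>

// 1-based position of the lowest set bit. A hash ending in binary 1000 has a
// run of 3 zeros, so this returns 4. Returning 0 for an input of 0 doubles as
// the "this register has never seen a value" marker.
uint8_t PositionOfFirstOne (uint32_t bits)
{
    if (bits == 0)
        return 0;
    uint8_t position = 1;
    while ((bits & 1) == 0)
    {
        bits >>= 1;
        ++position;
    }
    return position;
}
```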

## Set Operations

You can do set operations on hyperloglog objects so long as they use the same number of registers, same sized registers, and same hashing algorithm.

There’s a link in the links section at the end of this post that shows you how to resize the number of registers so that you can do set operations on HLL objects that have different numbers of registers.

### Union

Taking a union of two HLL objects is actually really simple.

If you have two HLL objects A and B that you want to union to get C, all you do is take the maximum register value from A and B and store it in C. Check out this pseudocode to see what I mean:

```for i = 0 to NumRegisters - 1
    C.Register[i] = Max(A.Register[i], B.Register[i])
```

The neat thing about doing this union is that it is LOSSLESS and doesn’t introduce any new error. Doing a union of two HLL objects is just the same as if you had a third HLL object that processed all the same objects that A and B both processed.

### Intersection

To do an intersection is a tiny bit trickier, but not by much. We have to use what is called the Inclusion-Exclusion Principle (check links section for more info).

Using that principle, we can estimate the count of how many items are in the intersection, but we can’t get a HLL object representing the intersection of the two objects unfortunately.

The formula is this:

Count(Intersection(A,B)) = Count(A) + Count(B) – Count(Union(A,B))

And here’s some more pseudocode to show you how to do it:

```C = Union(A,B)
IntersectionCountEstimate = A.CountEstimate() + B.CountEstimate() - C.CountEstimate()
```

Pretty simple eh?

At the beginning I mentioned that KMV was actually better at intersections than HyperLogLog. The reason is that with KMV, you have a small, random sample from both objects, and you can intersect those two samples directly to get your result.

KMV really starts to shine when you need to do an intersection between more than 2 or 3 lists, because using the inclusion-exclusion principle causes a combinatorial explosion, while the KMV intersection easily extends for N sets.

### Jaccard Index

There’s no special trick to calculating the Jaccard Index, as per usual it’s just:

JaccardIndex = Count(Intersection(A,B)) / Count(Union(A,B))

Which will give you a value from 0 to 1 indicating how similar the two sets A and B are, where 1 is totally the same, and 0 is completely different. In the case of HLL, this is an estimated Jaccard Index of course, not the actual value!
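Given count estimates for A, B and their union, the intersection estimate and Jaccard Index fall right out (the helper names here are made up for illustration):

```cpp
#include <cassert>
#include <cmath>

// Inclusion-exclusion: count(Intersection(A,B)) = count(A) + count(B) - count(Union(A,B))
float IntersectionFromCounts (float countA, float countB, float countUnion)
{
    return countA + countB - countUnion;
}

// JaccardIndex = count(Intersection(A,B)) / count(Union(A,B))
float JaccardFromCounts (float countA, float countB, float countUnion)
{
    return IntersectionFromCounts(countA, countB, countUnion) / countUnion;
}
```

Since the inputs are estimates, the intersection estimate can even come out slightly negative for nearly disjoint sets, which is part of why intersection error is worse than union error.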

### Contains Percent

I noticed in runs of the sample code below that while the union operation had pretty decent error levels, the intersection operation did not, and thus the Jaccard Index wasn’t very accurate either. This is mostly because the intersection counts were pretty small, so being off by even +1 or -1 was a large percentage of the actual value.

Despite having a reasonable explanation that diminished the actual impact of the “high error levels”, I wanted to see if I could come up with a different similarity metric and see how the error rate was on it.

What I came up with was a “Contains Percent”, which is the percentage of how many items in A are contained in B. You calculate it like this:

ContainsPercent = Count(Intersection(A,B)) / Count(A)

Since it uses the intersection value (which is not so accurate), it didn’t give great results, but I wanted to mention it because it is a completely different measurement from the Jaccard Index, with a different meaning. All of the items in A could be in B, which would give a “ContainsPercent” of 1.0, but B may have 10 times as many items that don’t appear in A, which would make the Jaccard Index very low.

In some cases, you may want to use the information that the Jaccard Index represents to make decisions, and in other cases you may want this “Contains Percent” metric, or maybe even something else.

It’s a bit subtle, but it’s good to think about what it is that you are actually looking for if using these things in actual production code (:

### Estimating Set Membership

So, HLL is NOT a bloom filter, but can it still be used like one?

The answer is yes, but I don’t have a whole lot of information about the formal accuracy of that.

Basically how you’d do this is create a temporary HLL object, make it store the single item you want to check the other HLL for set membership of, and then you’d do an estimated intersection count between the two HLL objects.

As crazy as it sounds, it looks like Redis exposes this functionality and says it is pretty accurate (for however many registers they use, anyway), which is pretty neat:
Redis: PFCOUNT

### Dot Product

The dot product between two sets (or multi sets – where you have a count associated with each item) can be really useful to help gauge the similarity between the two sets.

You can do a dot product operation between two HLL objects too. If you think about it, getting the dot product between two HLL objects is the same as getting the estimated count of the intersection between those two objects.

## Example Code

Here is some example code in C++ that implements HyperLogLog. It has no external dependencies (only standard headers, plus &lt;intrin.h&gt; for an MSVC intrinsic) and is in a single file for convenience.

I use a hash called MurmurHash3 to generate 128 bits of hash, 32 bits of which are used to generate the register index, and the remaining 96 bits are used for looking at runs of zeros.

Below the code is the output of a run of this program

```#include <algorithm>
#include <array>
#include <cctype>
#include <cmath>
#include <cstdio>
#include <string>
#include <assert.h>
#include <unordered_set>
#include <stdint.h>
#include <memory>

// microsoft only, for _BitScanForward to quickly find the index of the first 1 bit.
// On gcc / clang, use __builtin_ctz instead.
#include <intrin.h>

//=====================================================================================================
// MurmurHash3
//=====================================================================================================

// note that this 128 bit MurmurHash3 is optimized for x86.  There is also a version
// optimized for x64, but it gives different output for the same input.

#define ROTL32(x,y)     _rotl(x,y)

inline uint32_t getblock32 (const uint32_t * p, int i)
{
return p[i];
}

inline uint32_t fmix32 (uint32_t h)
{
h ^= h >> 16;
h *= 0x85ebca6b;
h ^= h >> 13;
h *= 0xc2b2ae35;
h ^= h >> 16;

return h;
}

void MurmurHash3_x86_128 (const void * key, const int len,
uint32_t seed, std::array<uint32_t, 4> & out)
{
const uint8_t * data = (const uint8_t*)key;
const int nblocks = len / 16;

uint32_t h1 = seed;
uint32_t h2 = seed;
uint32_t h3 = seed;
uint32_t h4 = seed;

const uint32_t c1 = 0x239b961b;
const uint32_t c2 = 0xab0e9789;
const uint32_t c3 = 0x38b34ae5;
const uint32_t c4 = 0xa1e38b93;

//----------
// body

const uint32_t * blocks = (const uint32_t *)(data + nblocks * 16);

for (int i = -nblocks; i; i++)
{
uint32_t k1 = getblock32(blocks, i * 4 + 0);
uint32_t k2 = getblock32(blocks, i * 4 + 1);
uint32_t k3 = getblock32(blocks, i * 4 + 2);
uint32_t k4 = getblock32(blocks, i * 4 + 3);

k1 *= c1; k1 = ROTL32(k1, 15); k1 *= c2; h1 ^= k1;

h1 = ROTL32(h1, 19); h1 += h2; h1 = h1 * 5 + 0x561ccd1b;

k2 *= c2; k2 = ROTL32(k2, 16); k2 *= c3; h2 ^= k2;

h2 = ROTL32(h2, 17); h2 += h3; h2 = h2 * 5 + 0x0bcaa747;

k3 *= c3; k3 = ROTL32(k3, 17); k3 *= c4; h3 ^= k3;

h3 = ROTL32(h3, 15); h3 += h4; h3 = h3 * 5 + 0x96cd1c35;

k4 *= c4; k4 = ROTL32(k4, 18); k4 *= c1; h4 ^= k4;

h4 = ROTL32(h4, 13); h4 += h1; h4 = h4 * 5 + 0x32ac3b17;
}

//----------
// tail

const uint8_t * tail = (const uint8_t*)(data + nblocks * 16);

uint32_t k1 = 0;
uint32_t k2 = 0;
uint32_t k3 = 0;
uint32_t k4 = 0;

switch (len & 15)
{
case 15: k4 ^= tail[14] << 16;
case 14: k4 ^= tail[13] << 8;
case 13: k4 ^= tail[12] << 0;
k4 *= c4; k4 = ROTL32(k4, 18); k4 *= c1; h4 ^= k4;

case 12: k3 ^= tail[11] << 24;
case 11: k3 ^= tail[10] << 16;
case 10: k3 ^= tail[ 9] << 8;
case  9: k3 ^= tail[ 8] << 0;
k3 *= c3; k3 = ROTL32(k3, 17); k3 *= c4; h3 ^= k3;

case  8: k2 ^= tail[ 7] << 24;
case  7: k2 ^= tail[ 6] << 16;
case  6: k2 ^= tail[ 5] << 8;
case  5: k2 ^= tail[ 4] << 0;
k2 *= c2; k2 = ROTL32(k2, 16); k2 *= c3; h2 ^= k2;

case  4: k1 ^= tail[ 3] << 24;
case  3: k1 ^= tail[ 2] << 16;
case  2: k1 ^= tail[ 1] << 8;
case  1: k1 ^= tail[ 0] << 0;
k1 *= c1; k1 = ROTL32(k1, 15); k1 *= c2; h1 ^= k1;
};

//----------
// finalization

h1 ^= len; h2 ^= len; h3 ^= len; h4 ^= len;

h1 += h2; h1 += h3; h1 += h4;
h2 += h1; h3 += h1; h4 += h1;

h1 = fmix32(h1);
h2 = fmix32(h2);
h3 = fmix32(h3);
h4 = fmix32(h4);

h1 += h2; h1 += h3; h1 += h4;
h2 += h1; h3 += h1; h4 += h1;

out[0] = h1;
out[1] = h2;
out[2] = h3;
out[3] = h4;
}

struct SMurmurHash3
{
std::array<uint32_t, 4> HashBytes (const void *key, size_t len)
{
// use a random 32 bit number as the seed (salt) of the hash.
// Gotten from random.org
// https://www.random.org/cgi-bin/randbyte?nbytes=4&format=h
static const uint32_t c_seed = 0x2e715b3d;

// MurmurHash3 doesn't do well with small input sizes, so if the input is too small,
// make it longer in a way that hopefully doesn't cause likely collisions.
// "the" hashed before the fix (notice the last 3 are the same)
//  0x45930d0e
//  0xfc76ee5b
//  0xfc76ee5b
//  0xfc76ee5b
// and after the fix:
//  0x70220da0
//  0xe7d0664a
//  0xb4e4d832
//  0x25940640
std::array<uint32_t, 4> ret;
static const size_t c_minLen = 16;
if (len < c_minLen)
{
unsigned char buffer[c_minLen];

for (size_t i = 0; i < len; ++i)
buffer[i] = ((unsigned char*)key)[i];

for (size_t i = len; i < c_minLen; ++i)
buffer[i] = buffer[i%len] + i;

MurmurHash3_x86_128(buffer, c_minLen, c_seed, ret);
}
else
{
MurmurHash3_x86_128(key, len, c_seed, ret);
}

return ret;
}

template <typename T>
std::array<uint32_t, 4> operator() (const T &object);

template <>
std::array<uint32_t, 4> operator() <std::string> (const std::string &object)
{
return HashBytes(object.c_str(), object.length());
}

// NOTE: if you need to hash other object types, just make your own template specialization here
};

//=====================================================================================================
// The CHyperLogLog class
//=====================================================================================================
//
// TKEY is the type of objects to keep track of
// NUMREGISTERBITS is how many bits of the hash are used to index into registers.  It also controls
//   how many registers there are since that count is 2^NUMREGISTERBITS
// HASHER controls how the keys are hashed
//
template <typename TKEY, size_t NUMREGISTERBITS, typename HASHER>
class CHyperLogLog
{
public:
CHyperLogLog()
: m_counts{} // init counts to zero
{ }

// friends
template <typename TKEY2, size_t NUMREGISTERBITS2, typename HASHER2>
friend float UnionCountEstimate (const CHyperLogLog<TKEY2, NUMREGISTERBITS2, HASHER2>& A, const CHyperLogLog<TKEY2, NUMREGISTERBITS2, HASHER2>& B);

// constants
static const size_t c_numRegisters = 1 << NUMREGISTERBITS;
static const size_t c_registerMask = (c_numRegisters - 1);

// interface
void AddItem (const TKEY& item)
{
// because 2^32 does not fit in 32 bits
static_assert(NUMREGISTERBITS < 32, "CHyperLogLog must use fewer than 32 register bits");

// make as many hashed bits as we need
std::array<uint32_t, 4> hash = HASHER()(item);

// use the highest 32 bits for getting our register index
unsigned int registerIndex = hash[3] & c_registerMask;

// use the other 96 bits as our "unique number" corresponding to our object.  Note that we
// don't use the high 32 bits that we've already used for the register index because that
// would bias our results.  Certain values seen would ONLY get tracked by specific registers.
// That's ok though that we are only using 96 bits because that is still an astronomically large
// value.  Most HyperLogLog implementations use only 32 bits, while google uses 64 bits.  We are
// doing even larger with 96 bits.
// 2^(bits) = how many unique items you can track.
//
// 2^32  = 4 billion <--- what most people use
// 2^64  = 18 billion billion <--- what google uses
// 2^96  = 79 billion billion billion <--- what we are using
// 2^128 = 340 billion billion billion billion <--- way huger than anyone needs. Beyond astronomical.

// Figure out where the lowest set (1) bit is, scanning from the low 32 bits upward
unsigned long bitSet = 0;
if (_BitScanForward(&bitSet, hash[0]))
bitSet += 1;
else if (_BitScanForward(&bitSet, hash[1]))
bitSet += 32 + 1;
else if (_BitScanForward(&bitSet, hash[2]))
bitSet += 64 + 1;

// store the highest seen value for that register
assert(bitSet < 256);
unsigned char value = (unsigned char)bitSet;
if (m_counts[registerIndex] < value)
m_counts[registerIndex] = value;

}

unsigned int EmptyRegisterCount () const
{
unsigned int ret = 0;
std::for_each(m_counts.begin(), m_counts.end(), [&] (unsigned char count) {
if (count == 0)
ret++;
});
return ret;
}

float GetCountEstimation () const
{
// calculate dv estimate
const float c_alpha = 0.7213f / (1.0f + 1.079f / c_numRegisters);
float sum = 0.0f;
std::for_each(m_counts.begin(), m_counts.end(), [&](unsigned char count) {
sum += std::pow(2.0f, -(float)count);
});

float dv_est = c_alpha * ((float)c_numRegisters / sum) * (float)c_numRegisters;

// small range correction
if (dv_est < 5.0f / 2.0f * (float)c_numRegisters)
{
// if no empty registers, use the estimate we already have
unsigned int emptyRegisters = EmptyRegisterCount();
if (emptyRegisters == 0)
return dv_est;

// balls and bins correction
return (float)c_numRegisters * log((float)c_numRegisters / (float)emptyRegisters);
}

// large range correction
if (dv_est > 143165576.533f) // 2^32 / 30
return -pow(2.0f, 32.0f) * log(1.0f - dv_est / pow(2.0f, 32.0f));

return dv_est;
}

private:
std::array<unsigned char, c_numRegisters> m_counts;
};

// there seem to be numerical problems when using 26 bits or larger worth of registers
typedef CHyperLogLog<std::string, 10, SMurmurHash3> TCounterEstimated;
typedef std::unordered_set<std::string> TCounterActual;

//=====================================================================================================
// Set Operations
//=====================================================================================================

template <typename TKEY, size_t NUMREGISTERBITS, typename HASHER>
float UnionCountEstimate (
const CHyperLogLog<TKEY, NUMREGISTERBITS, HASHER>& A,
const CHyperLogLog<TKEY, NUMREGISTERBITS, HASHER>& B)
{
// dynamically allocate our hyperloglog object to not bust the stack if you are using a lot of registers.
std::unique_ptr<CHyperLogLog<TKEY, NUMREGISTERBITS, HASHER>> temp =
std::make_unique<CHyperLogLog<TKEY, NUMREGISTERBITS, HASHER>>();

// To make a union between two hyperloglog objects that have the same number of registers and use
// the same hash, just take the maximum of each register between the two objects.  This operation is
// "lossless" in that you end up with the same registers as if you actually had another object
// that tracked the items each individual object tracked.
for (size_t i = 0; i < (*temp).c_numRegisters; ++i)
(*temp).m_counts[i] = std::max(A.m_counts[i], B.m_counts[i]);
return (*temp).GetCountEstimation();
}

template <typename TKEY, size_t NUMREGISTERBITS, typename HASHER>
float IntersectionCountEstimate (
const CHyperLogLog<TKEY, NUMREGISTERBITS, HASHER>& A,
const CHyperLogLog<TKEY, NUMREGISTERBITS, HASHER>& B)
{
// We have to use the inclusion-exclusion principle to get an intersection estimate
// count(Intersection(A,B)) = (count(A) + count(B)) - count(Union(A,B))
// http://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle

return (A.GetCountEstimation() + B.GetCountEstimation()) - UnionCountEstimate(A, B);
}

float UnionCountActual (const TCounterActual& A, const TCounterActual& B)
{
TCounterActual temp;
std::for_each(A.begin(), A.end(), [&] (const TCounterActual::key_type &s){temp.insert(s);});
std::for_each(B.begin(), B.end(), [&] (const TCounterActual::key_type &s){temp.insert(s);});
return (float)temp.size();
}

float IntersectionCountActual (const TCounterActual& A, const TCounterActual& B)
{
float ret = 0;
std::for_each(A.begin(), A.end(), [&](const TCounterActual::key_type &s)
{
if (B.find(s) != B.end())
++ret;
});
return ret;
}

//=====================================================================================================
// Error Calculations
//=====================================================================================================

float ExpectedError (size_t numRegisters)
{
return 1.04f / ((float)(sqrt((float)numRegisters)));
}

float IdealRegisterCount (float expectedError)
{
return 676.0f / (625.0f * expectedError * expectedError);
}

//=====================================================================================================
// Driver Program
//=====================================================================================================

template <typename L>
void ForEachWord(const std::string &source, L& lambda)
{
size_t prev = 0;
size_t next = 0;

while ((next = source.find_first_of(" ,.-\":\n", prev)) != std::string::npos)
{
if ((next - prev != 0))
{
std::string word = source.substr(prev, next - prev);
std::transform(word.begin(), word.end(), word.begin(), ::tolower);
lambda(word);
}
prev = next + 1;
}

if (prev < source.size())
{
std::string word = source.substr(prev);
std::transform(word.begin(), word.end(), word.begin(), ::tolower);
lambda(word);
}
}

template <typename T>
const char *CalculateError (const T&estimate, const T&actual)
{
float error = 100.0f * ((float)estimate - (float)actual) / (float)actual;
if (std::isnan(error) || std::isinf(error))
return "undef";

// bad practice to return a static local string, dont do this in production code!
static char ret[32];
sprintf_s(ret, sizeof(ret), "%0.2f%%", error);
return ret;
}

const char *BytesToHumanReadable (unsigned int bytes)
{
// bad practice to return a static local string, dont do this in production code!
static char ret[32];
if (bytes >= 1024 * 1024 * 1024)
sprintf_s(ret, sizeof(ret), "%0.2fGB", ((float)bytes) / (1024.0f*1024.0f*1024.0f));
else if (bytes >= 1024 * 1024)
sprintf_s(ret, sizeof(ret), "%0.2fMB", ((float)bytes) / (1024.0f*1024.0f));
else if (bytes >= 1024)
sprintf_s(ret, sizeof(ret), "%0.2fKB", ((float)bytes) / (1024.0f));
else
sprintf_s(ret, sizeof(ret), "%u Bytes", bytes);
return ret;
}

void WaitForEnter()
{
printf("\nPress Enter to quit");
fflush(stdin);
getchar();
}

// These one paragraph stories are from http://birdisabsurd.blogspot.com/p/one-paragraph-stories.html

// The Dino Doc : http://birdisabsurd.blogspot.com/2011/11/dino-doc.html (97 words)
const char *g_storyA =
"The Dino Doc:\n"
"Everything had gone according to plan, up 'til this moment. His design team "
"had done their job flawlessly, and the machine, still thrumming behind him, "
"a thing of another age, was settled on a bed of prehistoric moss. They'd "
"done it. But now, beyond the protection of the pod and facing an enormous "
"tyrannosaurus rex with dripping jaws, Professor Cho reflected that, had he "
"known of the dinosaur's presence, he wouldn't have left the Chronoculator - "
"and he certainly wouldn't have chosen \"Stayin' Alive\", by The Beegees, as "
"his dying soundtrack. Curse his MP3 player";

// The Robot: http://birdisabsurd.blogspot.com/2011/12/robot.html (121 words)
const char *g_storyB =
"The Robot:\n"
"The engineer watched his robot working, admiring its sense of purpose. It knew "
"what it was, and what it had to do. It was designed to lift crates at one end "
"of the warehouse and take them to the opposite end. It would always do this, "
"never once complaining about its place in the world. It would never have to "
"agonize over its identity, never spend empty nights wondering if it had been "
"justified dropping a promising and soul-fulfilling music career just to "
"collect a bigger paycheck. And, watching his robot, the engineer decided that "
"the next big revolution in the robotics industry would be programming "
"automatons with a capacity for self-doubt. The engineer needed some company.";

// The Internet: http://birdisabsurd.blogspot.com/2011/11/internet.html (127 words)
const char *g_storyC =
"The Internet:\n"
"One day, Sandra Krewsky lost her mind. Nobody now knows why, but it happened - "
"and when it did, Sandra decided to look at every page on the Internet, "
"insisting that she wouldn't eat, drink, sleep or even use the washroom until "
"the job was done. Traps set in her house stalled worried family members, and by "
"the time they trounced the alligator guarding her bedroom door - it managed to "
"snap her neighbour's finger clean off before going down - Sandra was already "
"lost… though the look of despair carved in her waxen features, and the cat "
"video running repeat on her flickering computer screen, told them everything "
"they needed to know. She'd seen too much. She'd learned that the Internet "
"played for keeps.";

int main (int argc, char **argv)
{
// show basic info regarding memory usage, precision, etc
printf("For %u registers at 1 byte per register, the expected error is %0.2f%%.\n",
TCounterEstimated::c_numRegisters,
100.0f * ExpectedError(TCounterEstimated::c_numRegisters)
);
printf("Memory Usage = %s per HyperLogLog object.\n", BytesToHumanReadable(TCounterEstimated::c_numRegisters));
static const float c_expectedError = 0.03f;
printf("For expected error of %0.2f%%, you should use %0.2f registers.\n\n",
100.0f*c_expectedError,
IdealRegisterCount(c_expectedError)
);

// populate our data structures
// dynamically allocate estimate so we don't bust the stack if we have a large number of registers
std::unique_ptr<TCounterEstimated> estimateTotal = std::make_unique<TCounterEstimated>();
std::unique_ptr<TCounterEstimated> estimateA = std::make_unique<TCounterEstimated>();
std::unique_ptr<TCounterEstimated> estimateB = std::make_unique<TCounterEstimated>();
std::unique_ptr<TCounterEstimated> estimateC = std::make_unique<TCounterEstimated>();
TCounterActual actualTotal;
TCounterActual actualA;
TCounterActual actualB;
TCounterActual actualC;
{
auto f = [&](const std::string &word)
{
estimateTotal->AddItem(word);
actualTotal.insert(word);
};

auto fA = [&](const std::string &word)
{
estimateA->AddItem(word);
actualA.insert(word);
f(word);
};

auto fB = [&](const std::string &word)
{
estimateB->AddItem(word);
actualB.insert(word);
f(word);
};

auto fC = [&](const std::string &word)
{
estimateC->AddItem(word);
actualC.insert(word);
f(word);
};

ForEachWord(g_storyA, fA);
ForEachWord(g_storyB, fB);
ForEachWord(g_storyC, fC);
}

// Show unique word counts for the three combined stories
{
float estimateCount = estimateTotal->GetCountEstimation();
float actualCount = (float)actualTotal.size();
printf("Unique words in the three stories combined:\n");
printf("  %0.1f estimated, %.0f actual, Error = %s\n\n",
estimateCount,
actualCount,
CalculateError(estimateCount, actualCount)
);
}

// show unique word counts per story
{
printf("Unique Word Count Per Story:\n");
auto g = [](const char *name, const TCounterEstimated &estimate, const TCounterActual &actual)
{
float estimateCount = estimate.GetCountEstimation();
float actualCount = (float)actual.size();
printf("  %s = %0.1f estimated, %.0f actual, Error = %s\n",
name,
estimateCount,
actualCount,
CalculateError(estimateCount, actualCount)
);
};

g("A", *estimateA, actualA);
g("B", *estimateB, actualB);
g("C", *estimateC, actualC);
}

// Set Operations
{
printf("\nSet Operations:\n");

auto f = [] (
const char *name1,
const TCounterEstimated &estimate1,
const TCounterActual &actual1,
const char *name2,
const TCounterEstimated &estimate2,
const TCounterActual &actual2
)
{
printf("  %s vs %s...\n", name1, name2);

// union
float estimateUnionCount = UnionCountEstimate(estimate1, estimate2);
float actualUnionCount = UnionCountActual(actual1, actual2);
printf("    Union: %0.1f estimated, %.0f actual, Error = %s\n",
estimateUnionCount,
actualUnionCount,
CalculateError(estimateUnionCount, actualUnionCount)
);

// intersection
float estimateIntersectionCount = IntersectionCountEstimate(estimate1, estimate2);
float actualIntersectionCount = IntersectionCountActual(actual1, actual2);
printf("    Intersection: %0.1f estimated, %.0f actual, Error = %s\n",
estimateIntersectionCount,
actualIntersectionCount,
CalculateError(estimateIntersectionCount, actualIntersectionCount)
);

// jaccard index
float estimateJaccard = estimateIntersectionCount / estimateUnionCount;
float actualJaccard = actualIntersectionCount / actualUnionCount;
printf("    Jaccard Index: %0.4f estimated, %.4f actual, Error = %s\n",
estimateJaccard,
actualJaccard,
CalculateError(estimateJaccard, actualJaccard)
);

// Contains Percent.
// What percentage of items in A are also in B?
float estimateSim = estimateIntersectionCount / estimate1.GetCountEstimation();
float actualSim = actualIntersectionCount / actual1.size();
printf("    Contains Percent: %0.2f%% estimated, %0.2f%% actual, Error = %s\n",
100.0f*estimateSim,
100.0f*actualSim,
CalculateError(estimateSim, actualSim)
);
};

f("A", *estimateA, actualA, "B", *estimateB, actualB);
f("A", *estimateA, actualA, "C", *estimateC, actualC);
f("B", *estimateB, actualB, "C", *estimateC, actualC);
}

WaitForEnter();
return 0;
}
```

Here is the output of the above program:

## Want More?

As per usual, the rabbit hole goes far deeper than what I’ve shown. Check out the links below to go deeper!