Macro Lists For The Win – Side B

Example Code: Code_050713.zip

In the previous post I talked about how you can use macro lists to solve the problem of wanting to generate a bunch of code based on the same set of data. This was useful for doing things like defining a list of resources a player could accumulate, and then being able to generate code to store and manipulate each resource type. You only had to update the resource list to add a new resource and the rest of the code would almost magically generate itself.

What if you wanted the reverse though? What if you had a fixed set of code that you want to apply to a bunch of different sets of data? This post is going to show you a way to do that.

In the example code, we are going to make a way to define several lists of items, and expand each list into an enum that also has a ToString and FromString function associated with it.

Another usage case for this technique might be to define lists of data fields, and expand each list into a data structure that contains serialization and deserialization functions. This would allow you to make data structures that could be saved to and loaded from disk, or sent and received over a network connection, just by defining what data fields they contained.
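
As a rough sketch of that idea (StructBuilder.h and the FIELDENTRY macro here are hypothetical, in the spirit of the EnumBuilder.h shown later):

#define STRUCTNAME PlayerSave
#define FIELDLIST \
	FIELDENTRY(int,   m_level) \
	FIELDENTRY(float, m_health)

// a hypothetical header that would expand FIELDLIST into a struct
// with Serialize / Deserialize functions, like EnumBuilder.h does for enums
#include "StructBuilder.h"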

I haven’t yet seen this technique in the wild, which kind of makes me wonder why, since the two approaches are just two sides of the same coin.

GameEnums.h

In the last post, our data was always the same and we just applied it to different code. To do this, we had the code in one .h and the data in another .h that would get included multiple times. This allowed us to define different pieces of code in one .h, then include the other .h file to apply the fixed data to each piece of code.

In this post, it’s going to be the exact opposite. Our code will always stay the same and we will apply it to different data so our data will be in one .h and the code will be in another .h that gets included multiple times.

Here’s GameEnums.h:

//////////////////////
//     EDamageType
//////////////////////
#define ENUMNAME DamageType
#define ENUMLIST \
	ENUMENTRY(Normal) \
	ENUMENTRY(Electricity) \
	ENUMENTRY(Fire) \
	ENUMENTRY(BluntForce)

#include "EnumBuilder.h"
//////////////////////
//     EDeathType
//////////////////////
#define ENUMNAME DeathType
#define ENUMLIST \
	ENUMENTRY(Normal) \
	ENUMENTRY(Electrocuted) \
	ENUMENTRY(Incinerated) \
	ENUMENTRY(Smashed)

#include "EnumBuilder.h"
//////////////////////
//     EFruit
//////////////////////
#define ENUMNAME Fruit
#define ENUMLIST \
	ENUMENTRY(Apple) \
	ENUMENTRY(Banana) \
	ENUMENTRY(Orange) \
	ENUMENTRY(Kiwi)

#include "EnumBuilder.h"
//////////////////////
//     EPlayers
//////////////////////
#define ENUMNAME Player
#define ENUMLIST \
	ENUMENTRY(1) \
	ENUMENTRY(2) \
	ENUMENTRY(3) \
	ENUMENTRY(4)

#include "EnumBuilder.h"

EnumBuilder.h

This header file is where the real magic is; it’s responsible for taking the previously defined ENUMNAME and ENUMLIST macros as input, and turning them into an enum and the string functions. Here it is:

#include <string.h> // for _stricmp, for the enum FromString function

// this EB_COMBINETEXT macro works in visual studio 2010.  No promises anywhere else.
// Check out the boost preprocessor library if this doesn't work for you.
// BOOST_PP_CAT provides the same functionality, but ought to work on all compilers!
#define EB_COMBINETEXT(a, b) EB_COMBINETEXT_INTERNAL(a, b)
#define EB_COMBINETEXT_INTERNAL(a, b) a ## b

// make the enum E
#define ENUMENTRY(EnumValue) EB_COMBINETEXT(e, EB_COMBINETEXT(ENUMNAME, EnumValue)),
enum EB_COMBINETEXT(E,ENUMNAME) {
	EB_COMBINETEXT(EB_COMBINETEXT(e, ENUMNAME), Unknown) = -1,
	ENUMLIST
	EB_COMBINETEXT(EB_COMBINETEXT(e, ENUMNAME), Count),
	EB_COMBINETEXT(EB_COMBINETEXT(e, ENUMNAME), First) = 0,
	EB_COMBINETEXT(EB_COMBINETEXT(e, ENUMNAME), Last) = EB_COMBINETEXT(EB_COMBINETEXT(e, ENUMNAME), Count) - 1
};
#undef ENUMENTRY

// make the EToString function
const char *EB_COMBINETEXT(EB_COMBINETEXT(E,ENUMNAME), ToString)(EB_COMBINETEXT(E,ENUMNAME) value)
{
	switch(value)
	{
		#define ENUMENTRY(EnumValue) \
			case EB_COMBINETEXT(e, EB_COMBINETEXT(ENUMNAME, EnumValue)): \
			return #EnumValue;
		ENUMLIST
		#undef ENUMENTRY
	}
	return "Unknown";
}

// make the EFromString function
EB_COMBINETEXT(E,ENUMNAME) EB_COMBINETEXT(EB_COMBINETEXT(E,ENUMNAME), FromString)(const char *value)
{
	#define ENUMENTRY(EnumValue) \
	if(!_stricmp(value,#EnumValue)) \
		return EB_COMBINETEXT(e, EB_COMBINETEXT(ENUMNAME, EnumValue));
	ENUMLIST
	#undef ENUMENTRY
	return EB_COMBINETEXT(EB_COMBINETEXT(e, ENUMNAME), Unknown);
}

// clean up
#undef EB_COMBINETEXT
#undef EB_COMBINETEXT_INTERNAL

// these were defined by the caller but clean them up for convenience
#undef ENUMNAME
#undef ENUMLIST
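
To make it concrete, the first #include "EnumBuilder.h" above (for DamageType) expands to roughly this:

enum EDamageType {
	eDamageTypeUnknown = -1,
	eDamageTypeNormal,
	eDamageTypeElectricity,
	eDamageTypeFire,
	eDamageTypeBluntForce,
	eDamageTypeCount,
	eDamageTypeFirst = 0,
	eDamageTypeLast = eDamageTypeCount - 1
};

// ...plus the EDamageTypeToString and EDamageTypeFromString function definitions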

Main.cpp

Now, here’s how you can actually use this stuff!

#include "GameEnums.h"

int main(int argc, char* argv[])
{
	EDamageType damageType = eDamageTypeBluntForce;
	EDeathType deathType = EDeathTypeFromString("smashed");
	EFruit fruit = eFruitLast;
	EPlayer player = EPlayerFromString(EPlayerToString(ePlayer1));
	return 0;
}

Combining the files

As a quick aside, in both this and the last post, I separated the code and data files. This is probably how you would normally want to do things because it’ll usually be cleaner, but it isn’t required. Here’s a cool technique I came across today…

Here’s Macro.cpp:

#ifdef MACROHEADER
	// Put your header stuff here
#else
	// Put cpp type stuff here

	// Include the "header"
	#define MACROHEADER
	#include "Macro.cpp"
	#undef MACROHEADER
#endif

If you ever really just want to combine all your code and data into a single file and not muddy up a directory or project with more files, this technique can help you do that. IMO you really ought to just use separate files, but I wanted to share this for when there are exceptions to that rule (as there always seem to be for every rule!).

Being Data Driven

After my last post, a fellow game developer friend of mine pointed out:

I hope that one day we hurl C++ into a raging sea of fire.

I do like this technique, in theory at least, but whenever I feel like it’s the right solution, a voice in the back of my head yells that I’m digging too greedily and too deeply and should step back for a second and consider what design choices have led me to this point.

And I think that usually that introspection ends up at the intersection of “we’re not data-driven enough but want to be” and “we decided to use C++ for our engine.”

— jsola

He does have a point. For instance, in the case of our resource list from the last post, it would be better if you had some “source data”, such as an xml file, listing all the resources a player could have. The game should load that data on startup and make a dynamic array, etc. to handle those resources. When game actions add or subtract specific resources from a player, the details of which resources get modified, and by how much, should also be specified in data.

When you save or pack your data, or at runtime (as your situation calls for), the game can verify that your data is well formed – for instance, making sure that if a data field is meant to specify a resource type, it actually corresponds to a resource listed in the list of resources.

That is closer to the ideal situation when making a game – especially when making larger games with a lot of people.

But there are still some good usage cases for this kind of macro magic (and template metaprogramming as well). For instance, maybe you use macros to define your data schemas so that your application can be data driven in the first place – I’ve done that on several projects myself and have seen other well respected people do it as well. So, add these things to your toolbox I say, because you never know when you might need them!

Next Post…

The next post will be what I promised at the end of the last one. I’m going to talk about a way to define a list of lists and then be able to expand that list of lists in a single go, instead of having to do a file include for each list individually.

Example Code: Code_050713.zip

Macro Lists For The Win

Around 6 years ago I was introduced to a programming technique that really blew my mind. John, my boss at the time and the tech director at inXile, had written it as part of the code base for the commercial version of a game called Line Rider, and I believe he said he first heard about the technique from a friend at Sony.

Since seeing it at inXile, I’ve seen the same technique or variations on it at several places including Monolith and Blizzard, and have had the opportunity to use it on quite a few occasions myself.

What is this technique? I’m not sure if it has an actual name, but I refer to it as “Macro Lists”, and it’s a good tool for achieving the DRY principle (don’t repeat yourself – http://en.wikipedia.org/wiki/Don’t_repeat_yourself).

Macro lists are often useful when you find yourself copy / pasting code just to change a couple things, and have multiple places you need to update whenever you need to add a new entry into the mix.

For example, let’s say that you have a class to store how much of each resource a player has, and let’s say you start out with two resources – gold and wood.

To implement this, you might write some code like this:

enum EResource
{
	eResourceGold,
	eResourceWood,
};

class CResourceHolder
{
public:
	CResourceHolder()
	{
		m_resources[eResourceGold] = 100.0f;
		m_resources[eResourceWood] = 0.0f;
	}

	float GetGold() const
		{ return m_resources[eResourceGold]; }
	void SetGold(float amount)
		{ m_resources[eResourceGold] = amount; }

	float GetWood() const
		{ return m_resources[eResourceWood]; }
	void SetWood(float amount)
		{ m_resources[eResourceWood] = amount; }
private:
	float m_resources[2];
};

That seems pretty reasonable right?

Now let’s say that you (or someone else who doesn’t know the code) wants to add another resource type. What do you need to do to add a new resource?

  1. Add a new enum value to the enum
  2. Initialize the new value in the array to zero
  3. Make a Get and Set function for the resource
  4. Increase the array size of m_resources to hold the new value

If #1 or #3 are forgotten, it will probably be really obvious and it’ll be fixed right away. If #2 or #4 are missed though, you are going to have some bugs that might potentially be very hard to track down because they won’t happen all the time, and they may only happen in release, or only when doing some very specific steps that don’t seem to have anything to do with the resource code.

Kind of a pain right? As the code gets more mature and more features are added, there will likely be other places that need to be updated too that will easily be forgotten. Also, when this sort of stuff comes up, people tend to copy/paste existing patterns and then change what needs to be changed – which can be really dangerous if people forget to change some of the values which need to be changed.

Luckily macro lists can help out here to ensure that it’s IMPOSSIBLE for you, or anyone else, to forget the steps of what to change. Macro lists make it impossible to forget because they do the work for you!

Check out this code to see what I mean. It took me a little bit to wrap my head around how this technique worked when I first saw it, so don’t get discouraged if you have trouble wrapping your head around it as well.

#define RESOURCE_LIST \
	RESOURCE_ENTRY(Gold, 100.0) \
	RESOURCE_ENTRY(Wood, 0)

// make the enum
#define RESOURCE_ENTRY(resource, startingValue) \
	eResource##resource,
enum EResource
{
	eResourceUnknown = -1,
	RESOURCE_LIST
	eResourceCount,
	eResourceFirst = 0
};
#undef RESOURCE_ENTRY

class CResourceHolder
{
public:
	CResourceHolder()
	{
		// initialize to starting values
		#define RESOURCE_ENTRY(resource, startingValue) \
			m_resources[eResource##resource] = startingValue;
		RESOURCE_LIST
		#undef RESOURCE_ENTRY
	}

// make a Get and Set for each resource
#define RESOURCE_ENTRY(resource, startingValue) \
	float Get##resource() const \
	{return m_resources[eResource##resource];} \
	void Set##resource(float amount) \
	{m_resources[eResource##resource] = amount;}
RESOURCE_LIST
#undef RESOURCE_ENTRY

private:
	// ensure that our array is always the right size
	float m_resources[eResourceCount];
};

In the above code, the steps mentioned before happen automatically. When you want to add a resource, all you have to do is add an entry to the RESOURCE_LIST and the macros do the rest for you. You can’t forget any of the steps, and as people add new features, they can work with the macro list to make sure people in the future can add resources without having to worry about the details.
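
For reference, after the preprocessor runs, the enum above expands to exactly this:

enum EResource
{
	eResourceUnknown = -1,
	eResourceGold,
	eResourceWood,
	eResourceCount,
	eResourceFirst = 0
};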

Include File Variation

If you used the above technique a lot in your code base, you could imagine that someone might name their macros the same things you named yours, which could lead to a naming conflict.

Keeping the “global macro namespace” as clean as possible is a good practice to follow and there’s a variation of the macro list technique that doesn’t pollute the global macro namespace like the above.

Basically, you put your macro list in a header file, and then include that header file every place you would normally put a RESOURCE_LIST.

Here’s the same example broken up that way. First is ResourceList.h:

///////////////////////////////////
//	RESOURCE_ENTRY(ResourceName, StartingValue)
//
//	ResourceName - the name of the resource
//	StartingValue - what to start the resource at
//
RESOURCE_ENTRY(Gold, 100.0)
RESOURCE_ENTRY(Wood, 0)
///////////////////////////////////

And now here is CResourceHolder.h:

///////////////////////////////////
// make the enum
#define RESOURCE_ENTRY(resource, startingValue) \
	eResource##resource,
enum EResource
{
	eResourceUnknown = -1,
	#include "ResourceList.h"
	eResourceCount,
	eResourceFirst = 0
};
#undef RESOURCE_ENTRY

class CResourceHolder
{
public:
	CResourceHolder()
	{
		// initialize to starting values
		#define RESOURCE_ENTRY(resource, startingValue) \
			m_resources[eResource##resource] = startingValue;
		#include "ResourceList.h"
		#undef RESOURCE_ENTRY
	}

// make a Get and Set for each resource
#define RESOURCE_ENTRY(resource, startingValue) \
	float Get##resource() const \
	{return m_resources[eResource##resource];} \
	void Set##resource(float amount) \
	{m_resources[eResource##resource] = amount;}
#include "ResourceList.h"
#undef RESOURCE_ENTRY

private:
	// ensure that our array is always the right size
	float m_resources[eResourceCount];
};

The Downside of Macro Lists

So, while doing the above makes code a lot easier to maintain and less error prone, it comes at a cost.

Most notably, it can be really difficult to figure out what code the macros will expand to, and it can be difficult to alter the functionality of the macros. A way to lessen this problem is to tell your compiler to make a file that shows what your code looks like after the preprocessor is done with it (most compilers can do this). It can still be difficult even with this feature, but it does help a lot.
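
For example (double check your compiler’s docs, since options change between versions):

cl /P MyFile.cpp     (MSVC: writes the preprocessed output to MyFile.i)
g++ -E MyFile.cpp    (gcc / clang: prints the preprocessed output to stdout)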

When you get compiler errors due to macros (because perhaps you forgot a parameter, or passed the wrong type), the errors can sometimes be pretty difficult to understand.

Another problem with macros is that I don’t know of any debuggers that will let you step through macro code, so in a lot of ways it’s a black box while you are debugging, which sucks if that code malfunctions. If you keep your functionality simple and straightforward, and format it cleanly, you ought not to hit many of these problems though.

Instead of using macro lists, some people prefer to put their data into something like an xml or json data file, and then as a custom build step, use XSLT or the like to convert that data into some code, just like the C++ preprocessor would. The benefit here is that you can see the resulting code and step through it while debugging, but of course the downside is it can be more difficult for someone else to get set up to be able to compile your code.

To Be Continued…

Macro lists are great, but what if you want your lists to have sublists? For instance, what if you wanted to define network messages for your game as a list of lists, and have it automatically expand into full fledged classes, to be able to ensure that message parsing and data serialization was always done in a consistent way to minimize bugs and maximize efficiency (less code to write and less testing to do)?

As you might have noticed, macro lists can take parameters to help them be flexible (like, the starting value of the resources… you could add more parameters if you wanted to), but, a macro list can’t contain another macro list. At least not how the above implementations work.

I’m going to show you how to tackle this problem in the next post, so stay tuned! (:

Permutation Programming Without Maintenance Nightmares

Example Code: TemplateSlots.cpp

When writing real time software, we sometimes hit situations where we have code that needs to be lightning fast, but also needs to be configurable to change how it behaves.

For instance, if you are writing something that renders 3d graphics (either traditional rasterized 3d graphics, or raytraced 3d graphics) you might be forced to put an if statement in a deep inner loop saying “is this object textured? Does this object have emissive lighting? If so, call the appropriate functions”.  Or, in the case of something like DSP / Audio processing, you might have an if statement saying “do we need to increase the amplitude of our samples?  if so do that”.

Depending on your use case, this can result in tens of thousands of checks per second or more, which can impact performance quite a bit.

This puts us in a pickle between two extremes:

  1. We can decide to live with the checks to see which functionality is enabled.  The end result is code that is easy to maintain, but it comes at a cost in execution speed.
  2. We can hand craft each permutation of functionality needed, and call the correct function based on parameters in an outer loop. For example, in an outer loop, we call a function like “RenderPolygonNoTextureYesEmissive”.  This means that we copy / paste a lot of code and change small pieces of each function to act like it should for the specific permutation.  This results in form fitted code that can be a lot faster, but will be harder to maintain.

Someone might look at the above and say “#1 makes slower code that is easier to maintain?  Programmers need to stop being lazy and just go with #2 so that it’s faster at runtime”.  When code is easier to maintain though, it means that programmers can spend less time working on bugs and features in this code, freeing them up to work on other things, and it also means that there will be fewer bugs in the code to begin with.  Making code that’s easier to maintain is a decision that affects team productivity and the business as a whole, so it isn’t something we can very easily dismiss, even at the cost of runtime performance.

Quick aside: there are features in modern CPUs like branch prediction, as well as features in modern compilers / linkers / optimizers that can mitigate some of this stuff, but they can’t always, and if you rely on the behavior of a specific compiler or CPU, you’ll have code maintenance problems when you have to start supporting a different compiler or CPU.  Console and mobile processors are very different beasts than desktop processors for example, which can make relying on assumptions like this a big problem.

One common solution to the code permutation problem is to generate code for all the permutations needed to get the speed of solution #2, but have that code generated based on a few core pieces of code to get the maintainability of solution #1.   People can do this with clever macro usage, or even actually generate code using a custom build step while compiling.

In the graphics world, it’s fairly common that AAA game engines actually have a shader permutation compiler built into the engine to handle this exact problem.  The situation there is a little bit different because shaders aren’t written in C++, but it’s the same basic problem.

I’m going to show a variation to the code generation solution using templates, template parameters, and inline functions.

Before we get into it I want to note two important things:

  • All observations I make about function inlining and optimizations performed by the compiler were made using Microsoft Visual Studio 2010, using the default settings for a release build of a console application.  Your mileage may vary!
  • The example code attached to this post works as a makeshift compressor and limiter.  That DSP code is just there as an example of using this technique and this should in no way be used as a lesson on how to write a compressor or limiter.  Many features are missing and major shortcuts have been taken to keep the code simple!

OOPS!

After posting this, a friend pointed out two things to me that I want to share…

  1. Apparently this technique is already well known and used. It’s called “traits” and it’s talked about in the book Modern C++ Design by Andrei Alexandrescu.
  2. The sample code doesn’t seem to compile in llvm or clang. I don’t have easy access to those guys so… sorry about that.

Inline Functions – The Silent Killer

Ok, I’m joking, inline functions aren’t the #1 cause of death for males between the ages of 23 and 25.

When people talk about using inline functions, they always seem to only talk about how inline functions remove the overhead of the function call, setting up the stack for the local parameters, the cost of the return, and any object copies that might have happened for the parameters or return value.

A far less talked about feature of inline functions is that it promotes the function code to be a full sibling to the calling code in the eyes of the optimizer.

If your inlined function checks a boolean parameter to take different action depending on whether it’s true or false, and you call that inlined function using a compile time constant for that parameter (i.e. just pass “true”, not a variable), the optimizer has the opportunity to completely get rid of the branch of code that will never get executed, and get rid of the if statement altogether.  The optimizer just ditched the inner loop “if check” we talked about earlier that was happening tens of thousands of times a second.

If you call the same inlined function in another place, passing a boolean variable for the parameter, that other location will keep the if statement intact and will act as you would expect.
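
Here’s a minimal sketch of that idea (the function and names are made up for illustration):

inline float ApplyGain(float sample, bool gainEnabled, float gain)
{
	// with a compile time constant for gainEnabled, the optimizer can delete this branch
	if (gainEnabled)
		return sample * gain;
	return sample;
}

// here the branch can be eliminated entirely, since 'true' is a compile time constant
void AmplifyBuffer(float *samples, int count)
{
	for (int i = 0; i < count; ++i)
		samples[i] = ApplyGain(samples[i], true, 2.0f);
}

// here the branch stays intact, since gainEnabled is a runtime variable
void MaybeAmplifyBuffer(float *samples, int count, bool gainEnabled)
{
	for (int i = 0; i < count; ++i)
		samples[i] = ApplyGain(samples[i], gainEnabled, 2.0f);
}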

This is the first step in our battle against code permutations – inline functions can be used to write code once that shrink wraps itself to whatever your specific use cases may be!

This also gives the optimizer the ability to combine various pieces of  code together in clever ways, especially if you call multiple inline functions and it has more code to work with.

Not All Unicorns and Lollipops – Code Bloat and Cache Misses

Inline functions do come at a cost though and are not a magical solution to every problem.  Most notably, inline functions bloat the size of your compiled code.  For most people, code size on disk isn’t really a huge problem because hard drives are huge, and loading an exe sized file into RAM is not going to be a problem in normal cases.

The problem with increasing code size mainly comes up when dealing with the instruction cache.

Processors have built in cache memory that they can read and write to super quickly, but the cache memory can only hold so much data.  Whenever the processor needs to read something from RAM that isn’t in the cache, it has to load it from RAM into the cache, and then it can read it.  This is called a cache miss and can be a major source of performance problems.  Desktop CPUs can get around this problem by doing other things while waiting for the data to come in (http://en.wikipedia.org/wiki/Out-of-order_execution), but other processors like consoles and mobile devices aren’t able to do that, so they just sit there stalled while waiting for the memory to come back.  Very slow!

To minimize cache misses, you should set up your code and data structures so that they don’t have to jump around in memory a lot, which can cause the CPU to have to load and unload the same pieces of RAM into the cache needlessly.

Another thing you can do to minimize cache misses is to try to make your data as small as possible, by storing less data or using smaller data types.  This way, the CPU is able to hold more meaningful data in its cache at a time, which means less RAM access needed.

These techniques also apply to your program itself inside the instruction cache.  If you have code that jumps around a lot (e.g. gotos that jump really far away, for those that use gotos, or excessive function calls), that can cause the CPU to have to excessively load different parts of your program from RAM into its cache, which can be a costly operation.  Also, just like with data, if you keep your code smaller, the CPU will be able to hold more meaningful data in the cache at a time and have to hit RAM less.

Lastly, if you ever make a for or while loop that is too large to fit in the instruction cache, that means that you can get a cache miss each time you iterate through the loop!  Making functions inlined, especially if they are called multiple times inside of the loop, can cause the contents of a loop to become larger and cause this problem.

In the end though, if you’d like, you can just mark your functions as inline and let the compiler decide if it would like to actually inline them or not, since it’s just a hint.  A lot of the time the compiler ought to make decent choices for you, but of course you are free to go as far down this rabbit hole as you want.  Since the compiler doesn’t actually understand your algorithm, you can certainly do better than it can in some situations and make better trade offs.  In MSVC there is an option to disallow the compiler from going against your wishes for inlining a function if you want to go this route.  Other compilers probably have similar options.
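
For reference, I believe the MSVC option meant here is __forceinline:

// MSVC specific: ask the compiler not to second guess the inline request
__forceinline float MixSamples(float a, float b) { return (a + b) * 0.5f; }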

The Main Course – Templated Classes With Worker Slots

Now that you see how inline functions can be used to write custom fitting code permutations for you, you can use this to your advantage by having the class or function that does your work take a template parameter for each piece of configurable work that you would like it to do.

You might define a class like this:

template <class EmissiveLighter, class Texturer>
class CRenderer
{
public:
	static void Render(const CSomeDummyParams &params)
	{
		// render
		...
		// now apply emissive lighting and texture
		EmissiveLighter::ApplyEmissive(params);
		Texturer::ApplyTexture(params);
	}
};

Then, you might define “Slot Classes” like the ones below:

//do nothing for emissive lighting and pay no runtime cost for using this
class EmissiveLighter_None
{
public:
	static inline void ApplyEmissive(const CSomeDummyParams &) {}
};

//apply the emissive color defined in the params struct
//Note, we are using templates but not interfaces so the parameter declarations can change to suit our needs
class EmissiveLighter_Params
{
public:
	static inline void ApplyEmissive(CSomeDummyParams &params)
	{
		params.m_outColor += params.m_emissive;
	}
};

Now, you can call the render function and supply what operations you want to use for each slot and it will generate the form fitted permutations for you.

// no emissive lighting, no texture
CRenderer<
	EmissiveLighter_None,
	Texturer_None>
	::Render(params);

// emissive lighting from params, no texture
CRenderer<
	EmissiveLighter_Params,
	Texturer_None>
	::Render(params);

// emissive lighting from params, texture from params
CRenderer<
	EmissiveLighter_Params,
	Texturer_Params>
	::Render(params);

// emissive lighting from params, use procedural noise texture generator
CRenderer<
	EmissiveLighter_Params,
	Texturer_ProceduralNoise>
	::Render(params);

Also, a quick note… templates also cause code bloat like inline functions do, because behind the scenes, the compiler will create a different class for each permutation of template parameters that you actually use.  Just something to be wary of!

Why No Interface Classes?

You might be saying “you know, you could make an interface class with pure virtuals to make sure that your slot classes implemented the required functions in the required ways.  That would increase type safety and such”.

That is definitely true, but in this case I think there is actually good value in NOT making interface classes that the various slot classes derive from.

To see what I mean, check out the difference between the two emissive lighter classes I defined.  One takes the params parameter as a const reference, and the other takes it as a non const reference.  This lets you decide what you want to do with the parameters on an implementation by implementation basis.

This lets you give further optimization hints to the compiler, allowing you to define unused parameters as const refs.

It lets you further “shrink wrap” the code to suit your specific needs in each permutation, and you still get a lot of compile time protection when you use the type in a template.

Last Words

If you go this route, it definitely could make some issues more difficult to debug, especially if you stray from the straight and narrow usage. For instance, you might make the EmissiveLighter_Params set some value on the params that the Texturer_Params uses, because Texturer_Params assumes that emissive will always use params too when texture params is used. If you switch the emissive to something else but leave the texturer alone, the texturer could stop working and it could be hard to realize why.

My advice if using this technique is to make each slot class as isolated as possible, and make as few assumptions as possible about other slots. Also, make sure to name slot classes so that it’s very obvious what they do from the name, and make sure they don’t do anything other than what the name implies. If they do anything else, change the name to reflect that!

Don’t worry about not being able to share calculations between slot classes to improve efficiency either… since they ought to all be inlined, the optimizer ought to be able to combine some of that and increase re-use for you when it’s appropriate.

When using this in the real world, you might have to come up with some sort of “multiplexer” function where you pass it your variable parameters and it returns a function pointer to the specific function you ought to use.  A function pointer points at the function in the specific template instantiation asked for, so getting a function pointer back would be enough to translate your parameters into the right template instantiation to use.  I’m not sure of a way to do this other than a bunch of embedded switch statements, but I’ll be thinking about it and report back or write a new article if I come up with an interesting way to do it.  The main problem there comes in mapping run time dynamic values to compile time static values.
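
For what it’s worth, here’s a sketch of what that multiplexer might look like for the renderer example above, using nested ifs (and assuming the slot classes from the earlier snippets):

typedef void (*RenderFunc)(const CSomeDummyParams &);

// map run time flags to a compile time template instantiation
RenderFunc GetRenderFunc(bool emissive, bool textured)
{
	if (emissive)
	{
		if (textured)
			return &CRenderer<EmissiveLighter_Params, Texturer_Params>::Render;
		return &CRenderer<EmissiveLighter_Params, Texturer_None>::Render;
	}
	if (textured)
		return &CRenderer<EmissiveLighter_None, Texturer_Params>::Render;
	return &CRenderer<EmissiveLighter_None, Texturer_None>::Render;
}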

Since this technique is using template classes and/or template functions you might be able to do some interesting things with template specialization, partial template specialization and template parameter deduction.

For instance, you could introduce SSE calls if you have to process more than a specific amount of data, but just do simple loops if doing less than that.
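
As a sketch of that last idea (the names are made up, and the SSE path is just stubbed with a comment):

// primary template: pick a path at compile time based on the data count
template <int N, bool BigEnoughForSSE = (N >= 16)>
struct SampleScaler;

// large buffers: this specialization is where SSE intrinsics could go
template <int N>
struct SampleScaler<N, true>
{
	static void Scale(float *samples, float amount)
	{
		// e.g. process 4 samples at a time with _mm_mul_ps here
		for (int i = 0; i < N; ++i)
			samples[i] *= amount;
	}
};

// small buffers: a simple loop avoids the SSE setup cost
template <int N>
struct SampleScaler<N, false>
{
	static void Scale(float *samples, float amount)
	{
		for (int i = 0; i < N; ++i)
			samples[i] *= amount;
	}
};

// usage: SampleScaler<64>::Scale(buffer, 0.5f) gets the "SSE" version,
// while SampleScaler<8>::Scale(buffer, 0.5f) gets the simple loop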

I made some example code that goes into this stuff in some more depth and has a few interesting techniques I didn’t go over in the article.

The example source code makes frequent use of casting like “(DataType)0.5”, but don’t worry, those ought to be compile time operations for all basic types and have no runtime cost (verified in my compiler anyways).

Example Code: TemplateSlots.cpp

Bias And Gain Are Your Friend

Often times in game development, you have situations where you want an object to move from one place to another, you want something to grow or shrink from one size to another, you want a color to change from A to B, or any other one of the myriad tasks where you want to do something from A to B over time (or over distance).

That’s pretty abstract but let’s take some examples:

  1. You want to move a camera along a straight line from A to B
  2. You want to raise the lighting from dark to bright in a room
  3. When the player clicks an icon, you want to grow a window from small to big
  4. You want to cross fade one skeletal animation to another via the blend weights of the animations (an example from my Anatomy of a Skeletal Animation System articles)
  5. You want to use a gradient in a shader for some effect.

When you are doing these things, it’s real easy to take a percent based on time or distance and just use that percent raw to make a linear effect.   Often times a linear effect just isn’t good enough though because it looks or feels mechanical instead of organic, and unpolished.

Often times the way these things are softened and made more organic is by giving a content creator a curve editor so that they can soften the edges, speed up or slow down the processes over time or distance.

Many game engines don’t come with curve editors that can be easily used for these purposes, and other times you just want to deal with it in code for one reason or another, so you don’t have the luxury of giving a content creator carte blanche with a curve editor.

There are a couple techniques for handling these situations but I want to talk to you about 2 of my favorite techniques, which are Ken Perlin’s bias and gain functions.  I actually use Christophe Schlick’s faster approximation functions (as seen in game programming gems 2), but the end result is the same thing.

If you want to skip ahead and see these things in action, I made an interactive demonstration about these functions, check em out! HTML5 Bias and gain

Bias – Not as in bigotry

The bias function takes in a number between 0 and 1 as input (I like to think of this as the percent) and also takes a number between 0 and 1 as the “tuning parameter” which defines how the function will bend your curve.

With a value of 0.5, the percent you put in is the percent you get out (so it’s linear), but if you put in a number > 0.5 or < 0.5, that’s when the interesting things happen.

Shown here are graphs of the bias function with parameters of 0.5, 0.25, 0.75 and 0.97:

[Graphs of the bias function with parameters 0.5, 0.25, 0.75 and 0.97]

In javascript, the code for bias looks like this:

function GetBias(time,bias)
{
  return (time / ((((1.0/bias) - 2.0)*(1.0 - time))+1.0));
}

Gain – Not as in my weight during the holidays

The gain function is like bias in that it takes in both a 0 to 1 input (I think of this as the percent as well) and also takes a number between 0 and 1 as the “tuning parameter”.

Again, with a value of 0.5, the percent you put in is the percent you get out (again, this makes it linear) but if you put in other numbers, you get interesting curves.

Here are graphs of the gain function with the same parameters of 0.5, 0.25, 0.75 and 0.97:

[Graphs of the gain function with parameters 0.5, 0.25, 0.75 and 0.97]

In javascript, the code for gain looks like the below. You might notice it makes use of the GetBias function. Gain is just bias and reflected bias.

function GetGain(time,gain)
{
  if(time < 0.5)
    return GetBias(time * 2.0,gain)/2.0;
  else
    return GetBias(time * 2.0 - 1.0,1.0 - gain)/2.0 + 0.5;
}

That’s It!

Well that’s about it, pretty straightforward stuff. Wherever you find yourself using a percent in your code, you can try passing it through a bias / gain function (and optionally exposing the tuning parameter to content creators) and see if you can make things feel a little more organic and polished.
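
For example, in C++ it might look something like this (assuming a port of the GetGain function above, plus a typical Vec3 type and Lerp helper):

float GetGain(float time, float gain);  // assumed C++ port of the javascript above

struct Vec3 { float x, y, z; };
Vec3 Lerp(const Vec3 &a, const Vec3 &b, float t);  // assumed helper

// ease a camera from A to B: starts slow, speeds up, then slows into the target
Vec3 GetCameraPos(const Vec3 &start, const Vec3 &end, float elapsed, float duration)
{
	float percent = elapsed / duration;       // 0 to 1, linear
	float eased = GetGain(percent, 0.75f);    // the tuning parameter could be exposed to content creators
	return Lerp(start, end, eased);
}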

Sometimes it’s the little things that make the biggest difference!

Once again, the link to the interactive example of these things is at:
HTML5 Bias and Gain

Anatomy of a Skeletal Animation System Part 3

This is part three of “Anatomy of a Skeletal Animation System”

Animation System Optimizations and Features

Here are some various animation system optimizations and techniques that you might find useful…

Multithreaded Animation Blending

If you are even mildly comfortable writing multithreaded code, this one is fairly easy to implement.

Basically every animated model that needs an update goes into a queue every frame.  (Things that haven’t been on screen for a little while could be exempt from the list so you don’t waste time on things that aren’t being rendered)

At some point in your main loop, you do the animation sampling / anim blend tree blending / etc work to come up with the final bone group. You do this by grabbing the first model in the queue, processing it, then moving to the next model.

Your main loop doesn’t continue until all of the models have been processed.

Now, imagine that you had other worker threads also grabbing models from the queue and processing them, and that the main thread will wait to continue the main loop until the queue was empty and all models had been processed.

TA-DA! You are done and have multithreaded animation blending. It can help A LOT, depending on how many hardware threads you have available for helping work.
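
Here’s a minimal sketch of that worker queue, assuming C++11 threads and a Model type with an UpdateAnimation function:

#include <atomic>
#include <thread>
#include <vector>

class Model { public: void UpdateAnimation(); };  // stand-in for your engine's model type

std::vector<Model*> g_animQueue;     // filled each frame with the models needing an anim update
std::atomic<size_t> g_nextModel(0);

// workers (and the main thread) grab models until the queue is exhausted
void AnimWorker()
{
	size_t index;
	while ((index = g_nextModel.fetch_add(1)) < g_animQueue.size())
		g_animQueue[index]->UpdateAnimation();
}

void UpdateAllAnimations(unsigned numWorkerThreads)
{
	g_nextModel = 0;
	std::vector<std::thread> workers;
	for (unsigned i = 0; i < numWorkerThreads; ++i)
		workers.push_back(std::thread(AnimWorker));
	AnimWorker();                    // the main thread helps too
	for (size_t i = 0; i < workers.size(); ++i)
		workers[i].join();           // main loop doesn't continue until all models are done
}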

Bias / Gain Curves in Anim Blends

With normal animation blending, it’s a linear crossfade from one animation to another.

Sometimes, an animator can make things look nicer if they have the option of doing non linear crossfading.   One nice option for doing this is exposing a bias and gain parameter to the blend in / out parameters.

Bias and gain are great ways of letting content creators create non linear curves for a variety of uses.  Ken Perlin did a lot of work in this area, but in “Game Programming Gems 2”, a guy named Christophe Schlick presented some simplified, quick equations to calculate approximations of bias and gain.

I highly recommend checking that out and using them for this, and everything else in your game.  Using bias and gain you can do things like have your camera move from point A to point B, but start out fast and slow down as it gets closer to B, giving it a nice organic feel instead of a rigid lerp.  With bias and gain you pass in a % and get out a different %.  Real simple to use, and extremely useful in just about every part of your game.

Here’s an interactive demonstration of the bias/gain functions I made. The source code for the functions are there too:
HTML5 Bias and Gain

Round Robin Anim Evaluation

There are some situations when you don’t need every model to have perfectly up to date animation data every single frame. One example of this is if you are simulating the game world on a server, where skeletal animation data doesn’t need to be perfectly up to date since network latency already makes it somewhat inaccurate.

In these cases, one thing you could do is split the list of models you need to update into perhaps 4 different lists. Then, each frame, you only process one of the 4 lists, thus reducing your animation CPU load down to 25% of what it was. Quick and easy way to save some real CPU time quickly if you don’t need the most up to date animation data all the time.
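
A sketch of the idea (again assuming a Model type with an UpdateAnimation function):

// update a quarter of the models each frame, rotating which quarter gets updated
void RoundRobinAnimUpdate(std::vector<Model*> &models, unsigned frameNumber)
{
	for (size_t i = frameNumber % 4; i < models.size(); i += 4)
		models[i]->UpdateAnimation();
}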

Pose Sharing

Sometimes you have a lot of different models where many of the models are performing the same animations – such as if you have a crowd of people in a crowded area.

One way to deal with this is to let some of the people doing the same animations SHARE their computed animation data.

If you are in a crowd, and there’s lots of different looking people walking all sorts of different directions, you aren’t going to easily notice that there are people who are using the exact same bone data, but facing different directions.

Going this route, if you have a group of, let’s say, 4 people that all share the same bone data, you only need to calculate it for one person, and the rest of the group uses the data already calculated.

Fewer animations to sample and blend, so you gain some CPU back.

Skeleton LODing

As things get farther away, or smaller, the smaller details are less noticeable. Because of this, you can “remove” bones from a skeleton as a model is farther away. I mentioned this briefly with facial animations, but the same is true of arm bones, leg bones, hand bones, etc.

You just have to make sure your anim system is able to handle LODing out bones gracefully (no popping) and efficiently (no excessive processing to get a lower LOD skeleton, it should just be a flag on the bones or something).

Runtime Debugging Essentials

Here are some debugging tools that I’ve found essential in debugging day to day animation bugs (popping, twitching, incorrect animations, etc).

Real Time Info On Screen

You really need the ability to show some kind of status on screen for a specified model. The info should show what animations are playing on which animation controllers, the current time of the animation controller, the playback rate of the controller, the state of the state machine, etc.

Using this, when you see a pop, you might see that for a fraction of a second, that an animation switches from one animation to another, then back to the first. From there you can go on debugging it further.

Timeline Log

Sometimes it’s useful to be able to turn on animation logging for a specified model. This way, you can generally log more info than you can on the screen in real time, and can also take your sweet time looking at very small intervals of time to see what went wrong and why.

Very useful.

Show the Bones

Sometimes you really just need to be able to look at the skeleton to see an issue more clearly, or be able to determine if the problem is with a model or the animation data.

Having a way to turn on bone rendering such that it draws 2d (unprojected) lines on the screen showing the bones of a specified model is very useful. Also sometimes it’s nice to be able to see the bones of all the animation data that went into the final blended pose, instead of just seeing the final blended pose.

Control Time Itself!

Lastly, sometimes it’s really useful to be able to slow down time to see a problem in greater detail. Rarely, it’s also useful to be able to speed up time. Having the ability to do both while the game running can be a really big help.

That’s All She Wrote

That, and MURDER I mean.

I hope you enjoyed these articles on the anatomy of a skeletal animation system. Drop me a line or post a comment if you have any questions or comments (:

Anatomy of a Skeletal Animation System Part 2

This is part two of “Anatomy of a Skeletal Animation System”

Animation Controller v3 – Bone Groups

In part 1, we talked about how to make a skeletal animation system that was able to play smooth, non popping animations on a model.  It could communicate back to the engine to play sound effects, spawn objects in specific spots, and many other things as well.  What it could not do, however, was play a different animation on the upper body and lower body.

To solve this, instead of having a single animation controller for our model, we need to have multiple animation controllers, where each controller controls a specific set of bones.  Note that multiple controllers should be able to affect the same set of bones, and in the end result, a bone’s position is made up by blending the data from all animation controllers that affect it.

Each animation controller should have a blend weight so that it can be blended in and out to keep animation motion smooth and continuous, and also the blend weighting allows you to turn on and off specific animation controllers as needed.

Some great example uses for this are…

  • Having a separate animation controller for the upper and lower body so that they can work independently (the lower body can look like it’s jumping, without having to care if the upper body is firing a gun or not).
  • Having a separate full body animation controller that affects all bones.  In most situations, this animation controller would be off, but in the rare cases that you want to play a full body animation, you turn this one on and play an animation on it.
  • Having a facial animation anim controller that only turns on if the camera is close enough to a character’s face.  This way, if you look closely at another player, you can see their face moving, but if you are far away from them, the game engine doesn’t bother animating the facial bones since you can’t see them very well anyways.

The order that these animation controllers are evaluated should be explicit (instead of left up to load order or things like that).  You want to be very clear about which animation controllers over-ride which other animation controllers for the case of having multiple on at the same time, affecting the same bones.

For the sake of efficiency, when trying to blend the animation data together from each animation controller that affects a given bone, you should start at the last full weight (100% weight) anim controller in the anim controller list.  This way, you don’t bother evaluating animations for anim controllers that are just going to be completely masked out by other animation controllers.

If there is no full weight anim controller in the list that affects the specific bone, initialize the bone data to the “T-Pose” animation position before blending the other anim controller bone data on top of it.
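
Putting the last few paragraphs together, here’s a sketch of that per bone evaluation (the AnimController interface and helper functions here are assumed for illustration):

#include <vector>

struct BoneTransform { /* rotation, translation, scale */ };

class AnimController
{
public:
	bool AffectsBone(int bone) const;
	float GetBlendWeight() const;
	BoneTransform SampleBone(int bone) const;
};

BoneTransform GetTPoseTransform(int bone);                                           // assumed helper
BoneTransform Blend(const BoneTransform &a, const BoneTransform &b, float weight);   // assumed helper

// controllers are in explicit evaluation order
BoneTransform EvaluateBone(int bone, const std::vector<AnimController*> &controllers)
{
	// start at the last full weight controller affecting this bone, since anything
	// earlier in the list would be completely masked out anyways
	int start = -1;
	for (int i = (int)controllers.size() - 1; i >= 0; --i)
	{
		if (controllers[i]->AffectsBone(bone) && controllers[i]->GetBlendWeight() >= 1.0f)
		{
			start = i;
			break;
		}
	}

	// no full weight controller affects this bone: start from the T-Pose
	BoneTransform result = (start >= 0)
		? controllers[start]->SampleBone(bone)
		: GetTPoseTransform(bone);

	// blend the remaining controllers on top, in evaluation order
	for (int i = start + 1; i < (int)controllers.size(); ++i)
		if (controllers[i]->AffectsBone(bone))
			result = Blend(result, controllers[i]->SampleBone(bone), controllers[i]->GetBlendWeight());

	return result;
}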

We now have a very robust animation system, but it isn’t quite there yet.  Interacting with this animation system from game code means having to tell specific animation controllers when to play specific animations.  This is quite cumbersome and not very maintainable.  Ideally, the animation logic would be separated from the game play logic.  Besides making the code more maintainable, this means that non animation programmers will be able to write game play code that interacts with the animation system, which is a big win for everyone.  Fewer development bottlenecks.

Animation Selection

There are two good techniques I’ve seen for separating the logic and performing animation selection for you.

The first way is via “animation properties” and the second way is by using an animation state machine. There are pros and cons to each.

Animation Properties

For the animation properties method, you essentially have a list of enums that describe the player’s state.  These enums include things such as being able to say whether the player is crouched or standing, whether the player is unarmed, holding a pistol, or holding a rifle, or even how injured the player is (not injured, somewhat injured, or near death).

The game play code would be in charge of making sure these enums were set to the right values, and the animation controller(s) would use these values to determine the appropriate animations to play.

For instance, the game code may set the enum values to this:

  • WeaponType = Rifle (vs Unarmed, Pistol, etc)
  • WeaponAction = Idle (vs Firing, Reloading, etc)
  • PlayerHealth = NearDeath (vs healthy, injured, etc)
  • MovementType = WalkForward (vs Idle, Running, LungeRight, etc)

From here, the animation system takes over.

The lower body animation controller perhaps only cares about “MovementType” and “PlayerHealth”.  It notices that the player is walking forward (WalkForward) and that they have very low health (NearDeath).  From this, it uses a table that animators created in advance that says for this combination of animation properties, the lower body animation controller should play the “WalkNearDeathFwd” animation.  So, the lower body animation controller obliges and plays that animation for the lower body bones.

The upper body animation controller perhaps just cares about WeaponAction, WeaponType and PlayerHealth.  It notices that the player has a rifle, they aren’t shooting it, and they have very low health.  From this, the upper body animation controller looks into its animation properties table and sees that it should play the “RifleIdleInjured” animation, so it plays that animation on the upper body bones.

The logic of game play and animation are completely separate, and the animators have a lot of control over what animations to play in which situations.

Once again, you’d want an editor of some sort for animators to set up these animation properties tables so that it’s easier for them to work with, it verifies the data to reduce the bug count, and everyone wins.

Your tool also ought to pack each animation properties table (upper body, lower body, facial animation, full body animation, etc) into some run-time friendly structure, such as perhaps a balanced decision tree to facilitate quick lookups based on animation properties.
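
As a rough sketch of the idea (the enums, table layout and names here are made up for illustration; a real table would be tool generated, and a linear scan is used instead of a decision tree for clarity):

enum EPlayerHealth { eHealthHealthy, eHealthInjured, eHealthNearDeath };
enum EMovementType { eMoveIdle, eMoveWalkForward, eMoveRunning };

struct AnimPropertyRule
{
	EMovementType movement;
	EPlayerHealth health;
	const char   *animName;   // the animation to play when the properties match
};

// the table animators would author in advance (via a tool in practice)
static const AnimPropertyRule s_lowerBodyRules[] =
{
	{ eMoveWalkForward, eHealthNearDeath, "WalkNearDeathFwd" },
	{ eMoveWalkForward, eHealthHealthy,   "WalkFwd"          },
	// ...
};

const char *SelectLowerBodyAnim(EMovementType movement, EPlayerHealth health)
{
	for (size_t i = 0; i < sizeof(s_lowerBodyRules) / sizeof(s_lowerBodyRules[0]); ++i)
		if (s_lowerBodyRules[i].movement == movement && s_lowerBodyRules[i].health == health)
			return s_lowerBodyRules[i].animName;
	return "Idle";  // fallback if no rule matches
}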

Animation State Machine

Another way to handle animation selection is to have the animation controllers run animation state machines, having the game code send animation events to the state machines. Each state of the state machine corresponds to a specific animation.

When the player presses the crouch button for instance, it could send an event to all of the animation controllers saying so, maybe ACTION_BEGINCROUCH.

Depending on the logic of the state that each anim controller state machine is in, it may respond to that event, or ignore it.

The upper body anim controller may be in the “Idle” state. The logic for the idle state says that it doesn’t do anything if it receives the ACTION_BEGINCROUCH event, so it does nothing and keeps doing the animation it was doing before.

The lower body anim controller may also be in a state named “Idle”. The logic for the lower body idle state says that if it receives the ACTION_BEGINCROUCH event, it should transition to the “StartCrouch” state. So, it transitions to that state, which says to play the “CrouchBegin” animation (and perhaps also says to ignore all incoming events), and when that animation is done, it automatically transitions to the “CrouchIdle” state. That state says to play the “Crouching” animation, so it does that, waiting for various events to happen, including an ACTION_ENDCROUCH event to be sent from game code when the player lets go of the crouch button.

The interesting thing about the anim state machine is that it gives content creators a lot more control over the actual control of the player himself (they can say when the player is allowed to crouch for instance!) which can be either a good or bad thing, depending on your needs, use cases and skill sets of your content creators.

Going this route, you are going to want a full on state machine editor for content people to be able to set up states, the rules for state switching, and they should be able to see a model and simulate state switches to see how things look. If you DO make such an editor, it’s also a great place to allow them to define and edit bone groups. You might even be able to combine it with the key string editor and make a one stop shop editor for animation (and beyond).

Animation Controller v4 – Animation Blend Trees

At this point, our animation system is in pretty good shape, but we can do a bit better before calling it shippable.

The thing we can do to really spruce it up is instead of dealing with individual animations (for blending, animation selection, etc), is to replace them with animation blend trees like the below:

[Diagram: an example animation blend tree]

In the animation blend tree above, you can see that it’s playing two animations (FireGun and GunSight) and blending them together to create the final bone data.

As you can imagine, you might have different nodes that perform different functionality, which would result in lots of different kinds of animations using the same animation blend tree.

You will be in good shape if you make a nice animation blend tree editor where a content creator can create an animation blend tree, set parameters on animation blend tree nodes, and preview their work within that editor to be able to quickly iterate on their changes.  Again, without this tool, everyone’s lives will be quite a bit harder, and a little less happy so it’s in your interest to invest the effort!

Some really useful animation nodes for use in the blend trees might include:

  • PlayAnimation – definitely needed!
  • AnimationSequence – This node has N number of “children” and will play each child in order from 1 to N in a sequence.  You may optionally specify (in the editor) that you want the children chosen at random and you specify a weighting to each child for the random choosing.  This is useful for “idle animations” so that periodically an idle character will do silly things.
  • AimGrid – this animation node uses the player data to see yaw and pitch of the player’s aim.  It uses this information to figure out how to blend between a grid of 9 animations of the player pointing in the main directions to give a proper resulting aim.  This node has 9 children, which specify the animations that specify the following aiming animations: Up Left, Up, Up Right, Left, Forward, Right, Down Left, Down, Down Right.  Note that since this is a generalized anim blend tree, these child nodes can be ANY type of animation node, they aren’t required to be a “PlayAnimation” node.  This in essence is the basis of parametric animation (which i mentioned at the beginning of part 1), so this is a way to get some parametric animation into your system without having to go full bore on it.
  • IK / FK Nodes – get full or partial ragdoll on your model.  Also get it to do IK solving to position hands correctly for specified targets and such.
  • BlendBySpeed – You give N number of children, and movement speeds for each child.  This animation node will choose the correct animation, or blend between the correct animations, based on the current traveling speed of the player.  This way you get a smooth blend between walk, run and sprint animations, and the player can move at whatever speed they ought to (perhaps the speed is defined by the pathing system, or the player’s input).  To solve the problem of feet “dancing” as they blend, you need to make sure the footfalls happen at the same time (in %) in each animation that will blend together.  This way, the animations don’t fight each other, and the feet will appear to move properly.
  • BlendByHealth – if you want the player to walk differently when they are injured, this node could be used to specify various walk animations with matching health levels so that it will blend between them (for upper or lower body or whatever else) as is appropriate for the player’s current health level.
  • Additive Blending – to get gun recoils and such

As you can see, animation blend trees have quite a bit of power.  They are also very technical which means engineers may need to help out content folk in making good trees to resolve some edge case bugs.  In my experience, animators are often very technical folk themselves, so can do quite a bit on their own generally.
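
To make the node idea a bit more concrete, here’s a minimal sketch of what a node interface and a simple blend node might look like (BoneGroup and BlendBones here are assumed helpers, not from any specific engine):

struct BoneGroup { /* bone transforms for the bones this tree affects */ };
void BlendBones(BoneGroup &out, const BoneGroup &a, const BoneGroup &b, float weight);  // assumed helper

// base class all blend tree nodes derive from
class CAnimNode
{
public:
	virtual ~CAnimNode() {}
	// evaluate this subtree at the given time, writing blended bone data to outBones
	virtual void Evaluate(float time, BoneGroup &outBones) const = 0;
};

// blends the results of two child subtrees (like FireGun and GunSight above)
class CAnimNodeBlend : public CAnimNode
{
public:
	virtual void Evaluate(float time, BoneGroup &outBones) const
	{
		BoneGroup left, right;
		m_left->Evaluate(time, left);
		m_right->Evaluate(time, right);
		BlendBones(outBones, left, right, m_blendWeight);
	}
private:
	CAnimNode *m_left;
	CAnimNode *m_right;
	float      m_blendWeight;
};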

Combine anim blend trees with the animation selection systems (FSM or anim properties) and the ability to smoothly blend an animation controller between its internal animations (or anim trees), and you have a really robust, high quality animation system.

Often times with this workflow, an animator will just say “hey, I need an anim node which can do X”, so an animation engineer creates the node and the animators start using it to do interesting things.  There’s no need for an engineer to be deeply involved in the process of making the animation work like the animator wants, or to worry about triggering it in the right situations, etc.

Sure there will be bugs, and some things will be more complex than this, but by and large, it’s a very low hassle system that very much empowers content creators, and removes engineers from needing to be involved in most changes – which is a beautiful thing.

End of Part 2

This is the end of part 2. In the next and final part, we’ll talk about a few other miscellaneous features and optimizations.

Anatomy of a Skeletal Animation System Part 1

This is part one of “Anatomy of a Skeletal Animation System”

There is quite a bit of information out there on the basics of skeletal animation, including how to export and read animation and model data, how to animate bones and thus transform a mesh, how to blend bone data together and other related animation topics.

However, there is a lot less information out there about how to set up a system to use these techniques in a realistic way, such as you might find in your average modern 3d video game.

I myself have been an animation programmer on a few games, including an open world Unreal Engine game called “This is Vegas” (unfortunately cancelled due to Midway going bankrupt) and also a multiplayer only first person shooter called “Gotham City Impostors”, which was released earlier this year for PC, 360 and PS3.  The info I’m presenting is based on experience developing those games, as well as info I gathered from other developers, or read about in books or online.

In this article I’m going to assume you already know how to get animation bone data into memory, how to use that animation data to animate models (meshes), and also how to blend animation bone data together.  I’m going to start off with the most simple animation system possible and slowly introduce features until we end up at something that would be fully featured for a typical modern game.

The “next generation” of skeletal animation seems like it’s going to be heavily based on parametric animation, and while we will TOUCH on the basics of parametric animation, we won’t dig into it very much beyond that.   If you are making a next gen AAA title, parametric animation may possibly be for you (and maybe not), but with the rise of 3d in flash, the rise of mobile games, and also indie game development, I think traditional pose driven skeletal animation is here to stay at least for a while.

Depending on the needs of your project, and how high a quality bar you want vs how much CPU time you want to spend on animation, some of these features may not be appropriate.  Feel free to take what is useful to you, and leave what isn’t.  Every game is different.

Animation Controller v1 – Super Simple

The simplest starting point is a mesh with an animation controller on it (to control what animations should play on it and such) that has these features:

  • If you tell it to play a looping animation, it will continue playing that looping animation forever.
  • If you tell it to play a non looping animation, it will play the animation and have some way of notifying you when the animation is done.  This is either by having it call a callback when it’s done, or by setting some flag on itself saying that the animation is done (a flag which will never get set for a looping animation).
  • You should be able to tell it a playback multiplier to play the animation at, such as if you tell it to play at 3.0, it will play 3 times as fast, or if you tell it to play at 0.5, it will play half as fast and look like slow motion.
  • If you tell it to play an animation while another animation is playing, it will instantly stop the animation it’s playing and start playing the new animation.

With this simple animation system, we could conceivably make a game that has animated characters.
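
To make that concrete, here’s a minimal sketch of what such a controller might look like in C++ (all names here are hypothetical, just for illustration):

#include <functional>

struct Animation { float lengthSeconds = 1.0f; };  // stand-in for real animation data

class AnimationController
{
public:
    // Interrupts whatever is playing and starts the new animation immediately.
    // onDone is only meaningful for non looping animations.
    void PlayAnimation(const Animation* anim, bool looping, float playbackRate = 1.0f,
                       std::function<void()> onDone = nullptr)
    {
        m_anim = anim; m_looping = looping; m_playbackRate = playbackRate;
        m_onDone = std::move(onDone); m_time = 0.0f; m_done = false;
    }

    // Advance the animation.  Non looping animations clamp at the end and
    // notify via the callback / done flag; looping animations wrap forever.
    void Update(float elapsedSeconds)
    {
        if (!m_anim || m_done)
            return;
        m_time += elapsedSeconds * m_playbackRate;  // 3.0 = 3x speed, 0.5 = slow motion
        if (m_looping)
        {
            while (m_time >= m_anim->lengthSeconds)
                m_time -= m_anim->lengthSeconds;
        }
        else if (m_time >= m_anim->lengthSeconds)
        {
            m_time = m_anim->lengthSeconds;
            m_done = true;              // a flag the caller can poll...
            if (m_onDone) m_onDone();   // ...or an optional callback
        }
    }

    bool IsDone() const { return m_done; }

private:
    const Animation*      m_anim = nullptr;
    std::function<void()> m_onDone;
    float m_time = 0.0f, m_playbackRate = 1.0f;
    bool  m_looping = false, m_done = false;
};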

That being said, the animation system is lacking in a few ways:

  1.  You can only play full body animations, meaning if you want the lower body to look like it’s jumping, and the upper body to look like it’s firing a rifle, you have to make an animation that looks like that.  If you want the same thing, but you want the lower body to look like it’s standing around while the upper body is firing a rifle, you have to make an entirely different animation that looks like that!  The permutations of actions can get quite large, and you have to decide in advance which animation you want to use.  That is, when the player is jumping, they can’t change their mind and suddenly decide to start shooting.
  2. When you switch animations, there is visible “popping”.  Popping is when a bone goes from doing one thing to doing something else instantly.  It looks like the bone teleported and is very visible to players.  It looks buggy and unpolished.
  3. If you are doing something like having the player throw a grenade, you have no way of knowing when to actually spawn the grenade model, and where to spawn it.  You could “hard code” it to spawn at the same place relative to the player each time, when the animation stops playing, but that is pretty hackish and not very maintainable.

Let’s start off by working on solving problem #3 of not being able to specify where to spawn a grenade, or when to spawn it.

Keyframe Strings

To solve the problem of WHEN to spawn it, a feature common to nearly all animation systems is the ability to put game engine events on animation key frames.

This way, when the arm is at the correct position in the throw animation, someone would be able to put an event like “throw grenade” on that animation key.  When the animation reaches that animation frame, it sends the message to the game engine, which can then create a grenade (with any specified parameters to the event).

Often times I’ve seen this implemented as an actual string that is associated with an animation key frame.  The strings might be things like:

Playsound Laugh.wav   (to play a sound to go along with the animation)

SpawnPhysicsProjectile  Grenade.mdl 0 0 5   (to spawn a projectile with the specified mesh and velocity vector)

FootFallSound (This would tell the engine to play a footstep sound, based on the material the player was standing on, such as a metallic sound if on metal, or a duller thud if walking on dirt)

You could also use it to hide and show attachments or a myriad of other things.  Basically you can use it for anything that you want to be tied to an animation.
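
As a rough sketch of how firing key strings might look in code (the engine hook and type names here are hypothetical placeholders):

#include <string>
#include <vector>

// Provided by the game engine (assumed); parses and executes a key string
// command like "Playsound Laugh.wav".
void GameEngine_HandleKeyString(const std::string& command);

// A key string tied to a specific time in an animation.
struct KeyStringEvent
{
    float       timeSeconds;  // when in the animation this event fires
    std::string command;      // e.g. "SpawnPhysicsProjectile Grenade.mdl 0 0 5"
};

// Called from the animation controller's update: fires every event whose
// time was crossed between last frame's time and this frame's time.
void FireKeyStrings(const std::vector<KeyStringEvent>& events,
                    float prevTime, float newTime)
{
    for (const KeyStringEvent& e : events)
        if (e.timeSeconds > prevTime && e.timeSeconds <= newTime)
            GameEngine_HandleKeyString(e.command);
}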

Usually you’ll want some kind of editor for animators and other content creators to be able to associate these key strings with specific key frames.   If they have to work with a text file where they have to hand enter times and key strings associated with those times, it’s going to be really tedious and they are going to be sad.  Also, it will be very error prone which makes everyone sad when it generates more bugs than it needs to, slowing down dev time.

On the topic of creating unnecessary bugs: while I’ve often seen key strings implemented as actual strings, it’s actually a lot less error prone if you have some kind of structured input system in your key string editor.

For instance, instead of them typing a command name and supplying any required parameters, it would be a lot better for them to have to choose a key string command from a drop down list.  When they choose one, it should display any parameters that might be needed, and have some way of validating that their input is valid.

This editor should be tightly coupled with your game engine.  Example ways of doing this include having a shared header file that defines all key string commands and what parameters they require, or having the key string editor load a game DLL to get at the data that way.

If you have to manually maintain the tool to match game code, it will often get out of sync and cause you pain you don’t need.  Avoiding that pain means you can work on developing more features instead of fighting recurring bugs, and means QA can focus on finding harder-to-find bugs.  In the end it means a better product, which is great for the company, your continued paycheck, and the player’s experience.

Some other potential bugs can come up with key frames that I don’t have a good answer for; they’re just something you have to be mindful of.

One of these bugs is that when an animation is interrupted, a key frame might not get hit when you expect it to.  For instance, if an animation attaches something to a player’s hand, and at the end of the animation hides that attached object, then if you interrupt the animation midway through, it won’t get hidden and the attachment will be stuck to the hand as the player does other things – which looks very weird.  Your best bet is to design things in such a way that if key strings are missed, it isn’t a problem.  Not always possible with all features unfortunately though…

Another problem that comes up when you have more advanced anim systems is that you may be blending out an animation which is no longer relevant, but while it is blending out, it hits a key frame.  For instance, if a player is holstering a weapon while blending out a fire animation that got interrupted, you may get a “firegun” key string command when you really don’t want it, because it’s not relevant anymore.  Sometimes you would want a key string to fire in that case though, so there is no real global solution to the problem that I’m aware of.

Sockets

Now that we have a way of knowing WHEN to spawn a grenade in a grenade throw animation, we don’t know WHERE to spawn it.  This is where sockets come in – no I’m not talking about TCP/IP or UDP sockets!

A seemingly obvious solution is to specify which bone to spawn the grenade on in the “throw grenade” animation key string.  An issue here though is that if you spawn it right on the “rhand” bone, it might clip through the hand (inter-penetrate the hand) and look sloppy.  Also, for other use cases, you might want to attach something where there isn’t a bone nearby.

Another seemingly obvious solution might be to add extra bones to the animation data that aren’t tied to any real geometry.  This way, you can use the bones to attach things to, or spawn things at, but they aren’t tied to any real model geometry so you can make them move however you want.

The problem with this solution is that you are paying the cost of animating those bones even if you aren’t using them for anything.  Enter sockets!

Sockets are a transformation (translation and rotation) away from a specified bone.  They are usually only calculated on demand so that when you aren’t using them, you don’t pay a price for having them.

This way, sockets act as very cheap attachment / reference points on a model during animations to attach other models to (such as capes, helmets, guns, grenades).
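
In code, a socket can be little more than a bone index plus a local offset.  Here’s a minimal sketch, assuming your engine provides some Transform type and a way to compose two transforms (both are stand-ins here):

#include <vector>

struct Transform { /* translation + rotation; engine specific (assumed) */ };

// Assumed engine helper: composes two transforms, parent first.
Transform Compose(const Transform& parent, const Transform& child);

struct Socket
{
    int       parentBoneIndex;  // the bone this socket hangs off of
    Transform localOffset;      // translation + rotation away from that bone
};

// Evaluated on demand only, so unused sockets cost nothing during animation.
Transform GetSocketWorldTransform(const Socket& socket,
                                  const std::vector<Transform>& boneWorldTransforms)
{
    return Compose(boneWorldTransforms[socket.parentBoneIndex], socket.localOffset);
}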

When a key string command takes an attachment point as a parameter, you should have it accept either a bone or a socket.  They should be usable interchangeably, because sometimes you really do want to attach something to a bone, and you shouldn’t make an animator create an extra socket just to match a bone.

We now have a way of specifying WHEN to spawn a grenade (via a key string), and also WHERE to spawn it (specifying a socket to spawn it at as a parameter to the key string command).

Animation Controller v2 – Blending

I mentioned popping earlier and said it was caused by a bone changing where it is or how it’s moving by a drastic amount in a single frame.  If you’ve read my DIY Synth articles, you probably remember how important it is in audio programming to make sure that your sound data stays continuous.  The same is true of animation data: you have to make sure that bone motion / position always stays continuous, or else you’ll get popping.

Just like in audio programming, you use envelopes to help keep things continuous when you add a new animation into the mix, or remove an old animation.

For instance, if a model is playing one animation and you tell it to play another, the new animation should start at a blend weight of 0.0 and slowly increase while the old animation decreases from a blend weight of 1.0 down to 0.0.  This gives you a nice smooth blend between two animations and works for MOST animations (more on that in a second).

Typically, when crossfading from one animation to another, the magic number is to blend over 0.2 seconds, but certain uses may warrant a longer or shorter blend time.  You might also blend out the old animation at a different rate than you blend in the new animation.  Give your animators the option to choose, so they can do whatever they need.  They will be happy that they have the control, and you will be happy that you don’t have to one-off program things all the time for them.  Everyone wins!

What happens if you want to play an animation while an animation blend is in progress already?  0.2 seconds of blend time sounds like a short amount of time, but this actually comes up ALL THE TIME.

There are two ways to deal with this issue that I’m going to talk about.

The first way to deal with this problem is to keep a list of all the animations that are currently playing, so that if you tell the animation controller to play a bunch of different animations really quickly, it will end up sampling a bunch of different animations as various ones blend out and the final one blends in.  This can result in A LOT of animation sampling, which can take a serious toll on your game’s performance.  I encountered a bug on a game I worked on once that caused around 100 animations to be sampled on a single model for several frames due to this problem, and it made the game tank HARD.

The second way to deal with this, and how I like to implement it usually, is to make it so only two animations can play at once (a main animation and a blend animation), and you have another field on the animation controller which says what the next animation to blend in is.

Going this route, when you say to play a new animation while a blend is in progress, it goes into the “next animation” field.  When the current blend is done, that next animation will blend in and the last one will blend out.

If there is already an animation in the “next animation” field, it’s replaced and never seen.

This way, only two animations will be sampled / blended at a time maximum, yet you will get a perfectly smooth blending between animations, and the controls will still feel fairly snappy, although there may be a noticeable delay in control response if animations change a lot really often.  You’ll have to make a judgement call about the needs of your game.
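
Here’s a rough sketch of that two-slot approach (the field and type names are just illustrative):

struct Animation;  // whatever your engine's animation data type is

// Two-slot blending: at most a main animation and one blending-in animation
// are ever sampled; anything else waits in (or is evicted from) "next".
struct BlendingAnimController
{
    const Animation* mainAnim  = nullptr;  // fully blended in (weight 1.0)
    const Animation* blendAnim = nullptr;  // currently blending in
    const Animation* nextAnim  = nullptr;  // queued; replaced if another request arrives
    float blendWeight   = 0.0f;            // 0 = all mainAnim, 1 = all blendAnim
    float blendDuration = 0.2f;            // the usual magic number, tweakable per use

    void PlayAnimation(const Animation* anim)
    {
        if (!mainAnim)       { mainAnim  = anim; }                    // nothing playing yet
        else if (!blendAnim) { blendAnim = anim; blendWeight = 0.0f; } // start a blend
        else                 { nextAnim  = anim; }                    // queue; may be replaced
    }

    void Update(float elapsedSeconds)
    {
        if (!blendAnim)
            return;
        blendWeight += elapsedSeconds / blendDuration;
        if (blendWeight >= 1.0f)  // blend finished: promote, then start any queued anim
        {
            mainAnim    = blendAnim;
            blendAnim   = nextAnim;
            nextAnim    = nullptr;
            blendWeight = 0.0f;
        }
        // sampling happens elsewhere: sample mainAnim and blendAnim, then
        // lerp the bone transforms by blendWeight.
    }
};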

Lastly, I said blending works nicely for most animations but not all.  One exception to this rule is when you try to blend different lower body animations together, such as trying to blend a walk animation and a run animation together.  Often times, the feet will be in different places and when you blend them, it makes the feet look like they are doing a little stuttering dance and it looks ugly.  I’ll talk about getting around this specific problem in the next part, but as a preview, the short version of the solution is to make sure the feet are in the same positions at the same time for the two animations.

End of Part 1

At this point we have a fairly nice animation system but it isn’t quite ready yet. The most glaring problem we have is that we can only play full body animations still, which is not acceptable.  A real animation system NEEDS to be able to play different animations on different sets of bones independently.

We’ll tackle that problem, and others, in part 2.

Recording lagless demo videos of a laggy game

Often when developing a game, you’ll want to record a demo video to show to a publisher, show at E3, or post on Kickstarter, YouTube, or other places to help generate interest or gain funding to keep your project going.

Unfortunately, the point in time that you need a video is often in the beginning of the project, when your game probably doesn’t run very fast, or might have performance spikes, making it difficult to get a high quality video capture.

Many times, developers will have to do a performance push to get the game up to speed for a demo video, spending time on “demo hacks”, which are often just throw-away code to be deleted after the video is made.  I’ve been through a couple of these myself and they are not fun, but they are an unfortunate necessity.

This article will explain a fairly simple technique for getting a full speed recording of your game engine with perfectly synchronized sound, no matter what speed your game actually runs at, saving you the time and effort of demo hacks just to get a presentable video.

Playable demos are a whole other beast, and you are on your own there, but if a video will suit your needs, you’ve come to the right place!

I’ve used this technique myself in a couple different games during development, and in fact included it as a feature of one PC game I shipped in the past, called “Line Rider 2:Unbound”, so this is also a technique for adding video recording to any game you might want to add it to.

Out of the box solutions

There are various “out of the box” ways to record a video of your game, but they have some downsides which make them not so attractive.

For instance, you can get Fraps, which will record any application’s audio and video, and you could use that to record a video of your game.  The downside here is that if your game lags, so does the video, so we still have that problem.  Also, the act of recording competes with your game for resources, causing your game to run at an even lower FPS and making an even worse video.  Fraps is also limited to specific platforms, and you may be working on an unsupported platform.

Lastly, if you want to include this video recording feature in a shipped product, you will have to license Fraps for that use, which may be prohibitive for your project’s budget.

Other video recording software has the same or similar issues.

Rolling your own – Video

Making your own video recorder built into your game has some really easy-to-hit pitfalls that I want to talk about.

When considering only the video portion (not audio yet), our aim is to write all the frames to disk as individual image files (such as png, or raw uncompressed image files), and then after recording is done, use something like ffmpeg to combine the frames into a video.

Writing a compressed image file (such as png or jpg) for each frame may save disk space, but may take longer for your computer to process and write to disk.  You will likely find that a raw file format is more performant, at the cost of increased disk space usage, but hard drives are cheap these days and everyone has huge ones.

Also, at this point you probably want to use lossless image compression (such as png, or a raw image file) for your screen captures so that you don’t have compression artifacts in them.  When you make the final video, you may choose a more highly compressed video format, and it may introduce its own artifacts, but you want to keep your source files as clean as possible so that you don’t introduce UNNECESSARY artifacts too early in the process.

If you dump each rendered frame to disk, the disk i/o can drag your game’s frame rate down to a crawl.  You might think about having the disk write happen on another thread so the game isn’t limited by the disk i/o, but then you’ll have to keep a buffer of previous frames which will grow and grow and grow (since you are making frames faster than it can write to disk) until you run out of memory.  It’s a losing battle for longer videos.

You could get a faster drive, configure a striped raid array, use a ram disk, or things like that, but why fix with hardware what you can fix in software?  Save yourself and your company some cash.

Similarly to the Fraps problem, recording video will likely affect the frame rate of your game as well, making it run slower.  Assuming you are using variable frame rate logic, frames will be skipped, producing a lower quality video: either the output looks “laggy”, or the video actually appears to speed up in the places where you encountered lag while recording, which is very odd looking and definitely not demoable.

The solution (which might be really obvious to the astute reader) is to make your game run its logic at a fixed rate, instead of basing it on frame time.  For instance, instead of measuring the time between frames and using that delta to control logic (making things move farther when more time has passed, etc), you just make your game act as if the same amount of time has always passed between frames, such as ~16ms for a 60fps recording, or ~33ms for a 30fps recording.  IMPORTANT: make it only behave this way when in “recording mode”.  You don’t need to sacrifice variable frame rate logic just to get the ability to record nice videos.  Also, 30fps is fine for a video.  The more FPS your video has, the larger the video file will be.  Movies and TV are something like 24 fps, so you don’t need a 60 fps video for your game demo; 30 or less is just fine.

This way, it doesn’t matter how long it took to render each frame, the game will generate a sequence of frames at whatever frame rate you would like your video to be in.  While recording the demo video, the game may run slowly, and be difficult to control if it’s REALLY laggy, but at least the output video will be smooth and perfectly lagless.   Later in this article I present a possible solution to the problem of difficulty playing the game while recording.
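
Here’s a sketch of what that recording-mode main loop might look like, with the engine-side functions as hypothetical placeholders:

// All of these functions are stand-ins for whatever your engine provides.
bool  GameIsRunning();
float MeasureRealElapsedSeconds();
void  UpdateGame(float deltaSeconds);
void  RenderFrame();
void  SaveScreenshotToDisk(int frameNumber);  // raw / lossless, combined later with ffmpeg

void MainLoop(bool recordingMode)
{
    const float recordingFPS = 30.0f;
    const float fixedDelta   = 1.0f / recordingFPS;  // ~33.33ms of game time per frame

    int frameNumber = 0;
    while (GameIsRunning())
    {
        // While recording, lie to the game about elapsed time; otherwise
        // keep your normal variable frame rate logic.
        float delta = recordingMode ? fixedDelta : MeasureRealElapsedSeconds();
        UpdateGame(delta);
        RenderFrame();
        if (recordingMode)
            SaveScreenshotToDisk(frameNumber);
        ++frameNumber;
    }
}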

Are we done at this point?  NO!  We haven’t talked at all about audio, and as it turns out our audio is in a very odd state going this route.

Rolling your own – Audio

From the section above, we have a nice lagless video stream, but if we just recorded audio as it went, the audio would be out of sync with the frames. This is because we recorded audio in real time, but we recorded the frames in variable time.

You could try to sync the audio in the right places with each frame, but then you’d have to speed up and slow down portions of your audio to hit the right frame numbers, which would make your audio sound really weird as it sped up and slowed down and changed pitch.

Definitely not demoable! So what’s the solution?

The solution is that while you are recording your video frames, you also make an audio timeline of what audio was triggered at which frame numbers.

For instance, if on frame 20, the player swung his sword and on frame 25 hit an exploding barrel, causing it to explode, your timeline would say “at frame 20, play the sword swing sound effect. at frame 25, play the exploding barrel sound effect”.

I’ve found it really easy to capture an audio timeline by hooking into your game or engine’s audio system itself, capturing all sound events.  It usually is not very difficult to implement this part.

After you have recorded all of your video frames, and have an audio timeline, the next step is to re-create the audio from the timeline, which means you need a way of doing “offline” audio mixing.

If you are using an audio library, check the documentation to see if it has an offline mode; many of them do, including the ever popular FMOD.  If your audio library can’t do it for you, there are various command line tools and audio libraries out there that can.  I believe PortAudio (port mixer?) can do this for you, and there is also an open source program called SoX.

What you need to do is render each item in the audio timeline onto a cumulative audio stream.  If your video were a 30fps video, and a 500ms sound effect happened at frame 93, that means the sound effect starts 3.1 seconds in (frame 93 * 33.33 milliseconds per frame) and lasts until 3.6 seconds (since it’s 500ms long).  So, you’d mix it into the output audio stream at the appropriate point in time, and then rinse and repeat with the rest of the audio timeline items until you had the full audio stream for the video.
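
Here’s a sketch of that mixing step, assuming mono float samples that have already been decoded into memory:

#include <vector>

// Mix one timeline item into the cumulative output stream.
// frameNumber * (sampleRate / videoFPS) converts a video frame to a sample
// offset: e.g. frame 93 at 30fps is 3.1 seconds in, i.e. 3.1 * sampleRate samples.
void MixTimelineItem(std::vector<float>& outputStream,
                     const std::vector<float>& soundSamples,
                     int frameNumber, float videoFPS, float sampleRate)
{
    size_t startSample = (size_t)(frameNumber * (sampleRate / videoFPS));
    if (outputStream.size() < startSample + soundSamples.size())
        outputStream.resize(startSample + soundSamples.size(), 0.0f);
    for (size_t i = 0; i < soundSamples.size(); ++i)
        outputStream[startSample + i] += soundSamples[i];  // additive mix; clip / limit later
}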

When you are done with this stage, you have your video frames and your audio stream.  With your video creation software (such as ffmpeg) you can combine these into a single video file which shows your game running perfectly at whatever frame rate you specified, and with perfectly synchronized audio.  It’s a beautiful thing and definitely ready to demo to get some funding.

Recap

To recap, the steps for creating a perfect video recording of your game are:

  1. When in recording mode, make your game run at a fixed frame rate – no matter how long it really was between frames, lie to your game and tell it that 33.33ms have passed each frame for 30fps video or 16ms for 60fps video (or whatever other frame rate you want to run at)
  2. Write each rendered frame to disk as an uncompressed or lossless compression graphics file.
  3. While rendering each frame, build up a timeline of audio events that you can use to re-create the audio later.
  4. After all the frames are captured, render your audio timeline into an audio stream.
  5. After you have your audio stream and each video frame, use software such as ffmpeg to combine them into a perfect, lagless video.
  6. BLOW THE SOCKS OFF OF INVESTORS AND SECURE SOME FUNDING!

Bonus Points – Or making this feature a shippable feature of your game for players to use

At this point, the final product (the video) is as nice as it can possibly be.  However, the process of actually recording the video can be cumbersome, because even though you are making a nice and smooth 30fps video, during recording the game may be running at 2fps (depending on your machine), making it very difficult to control.  Also, in the final video it will appear that the user is traversing menus, inputting commands, and reacting at superhuman speeds.

A good way to handle this is, instead of capturing frames during play, to record all the input that happens during the recording process.  This way you have an input timeline that is tied to frame numbers, the same way the audio timeline is tied to frame numbers.

When the recording process is done, you then put up a nice dialog for the end user saying something like “Rendering video, please wait…” with a progress bar, then re-simulate the user input that occurred during the recording phase and render all those frames to disk (well, screen capture them as image files just like usual, just don’t display them to the end user).

Since building an input timeline is relatively cheap computationally, you should have no slowdown during the “recording” phase of the video while you (or the end user) are actually playing the game.
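
The input timeline mirrors the audio timeline.  A sketch (with a hypothetical engine hook for injecting input during playback):

#include <vector>

// Provided by the game engine (assumed); feeds a recorded input event
// back into the simulation.
void GameEngine_InjectInput(int inputCode);

// One entry per input event, tied to a frame number just like audio events.
struct InputTimelineEvent
{
    int frameNumber;
    int inputCode;    // whatever your engine uses: key, button, axis value, etc.
};

std::vector<InputTimelineEvent> g_inputTimeline;

// During play: record inputs (cheap, no slowdown).
void OnInputEvent(int currentFrameNumber, int inputCode)
{
    g_inputTimeline.push_back({currentFrameNumber, inputCode});
}

// During the offline "Rendering video, please wait..." phase: re-feed the
// recorded inputs to the (deterministic!) simulation at the right frames.
void ReplayInputsForFrame(int frameNumber)
{
    for (const InputTimelineEvent& e : g_inputTimeline)
        if (e.frameNumber == frameNumber)
            GameEngine_InjectInput(e.inputCode);
}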

The “gotcha” here is that your game needs to be deterministic for fixed rate time steps (or at least everything that really matters needs to be deterministic, maybe not particles or something), which can potentially be a bit tricky.  The upside is that if you actually make this happen, you can record lightweight playbacks as “videos” and have users share these faux-videos with each other to watch playbacks of gameplay that other players had.  When you want to export these playbacks as real videos, you can put them through the regular video recording steps and spit out a full mpeg, suitable for sharing and uploading to youtube (from within the app perhaps even?) just like normal.  But until you need to use the video outside of your application, you have very small files users can share with each other to view “videos” of in-game gameplay.

Final tip: if doing this in Windows, I’ve found that in recent versions of Windows, doing the screen capture using GDI functions instead of DirectX is actually WAY faster, so use that if you can.  I’m thinking this must be because Windows already has a screen cap in memory to show those little icons when you mouse over the minimized application or something.

That’s all folks!

That’s all there is to it. With luck this will save some fellow engineers from having to crunch up some “demo hacks” to get performance up for an E3 demo video or the like. If you have any questions or comments, drop me a line (:

MoriRT: Pixel and Geometry Caching to Aid Real Time Raytracing

About half a year ago, some really intriguing ideas came to me out of the blue dealing with ways to speed up raytracing.  My plan was to create a couple games, showing off these techniques, and after some curiosity was piqued, write an article up talking about how it worked to share it with others in case they found it useful.

For those of you who don’t know what raytracing is, check out this wikipedia article:

http://en.wikipedia.org/wiki/Ray_tracing_(graphics)

Due to some distractions and technical setbacks unrelated to the raytracing itself, I’ve only gotten one small game working with these techniques.  One of the distractions is that I implemented the game using google’s native client (NaCl) and for some reason so far unknown to me, it doesn’t work on some people’s machines which is very annoying!  I plan to get a mac mini in the next couple months to try and repro / solve the problem.

Check out the game if you want to.  Open this link in google chrome to give it a play:

https://chrome.google.com/webstore/detail/kknobginfngkgcagcfhfpnnlflhkdmol?utm_source=chrome-ntp-icon

The sections of this article are:

  1. Limitations
  2. Geometry Caching
  3. Pixel Caching
  4. Future Work
  5. Compatible Game Ideas

Limitations

Admittedly, these techniques have some pretty big limitations.  I’ve explained my techniques to quite a few fellow game devs and when I mention the limitations, the first reaction people give me is the same one you are probably going to have, which is “oh… well THAT’S dumb!”.  Usually after explaining it a bit more, people perk up again, realizing that you can still work within the limitations to make some neat things.   So please hold off judgement until checking out the rest of the article! (:

The big fat, unashamed limitations are:

  • The camera can’t move (*)
  • Objects in the scene shouldn’t move too much, at least not all at the same time

(* there is a possible exception to the camera not moving limitation in the “Future Work” section)

So…. what the heck kind of games can you make with that?  We’ll get to that later on, but here’s some things these techniques are good at:

  • Changing light color and intensity is relatively inexpensive
  • Changing object color and animating textures is inexpensive
  • These techniques don’t break the parallel-izable nature of raytracing.  Use all those CPU and GPU cores to your heart’s content!

Seems a bit dodgy I’m sure, but read on.

Geometry Caching

The first technique is geometry caching.

The idea behind geometry caching is:  If no objects have moved since the last frame, why should we test which objects each ray hits?  It’s a costly part of the ray tracing, and we already KNOW that we are going to get the same results as last frame, so why even bother?  Let’s just use the info we calculated last frame instead.

Also, if some objects HAVE moved, but we know that the moving objects don’t affect all rays, we can just recalculate the rays that have been affected, without needing to recalculate all rays.

Just because we know the collision points for rays doesn’t mean that we can skip rendering altogether though.  Several things can make us still need to re-render a ray, including: animating textures, objects changing colors, lights dimming, lights changing color.  When these things happen, we can re-render a ray much less expensively than normal (just recalculate lighting and shading and such), so they are comparatively inexpensive operations compared to objects actually moving around.

How I handle geometry caching is to give each ray (primary and otherwise) a unique ID, and to keep a dynamic array that holds the collision info for each ID.

In the part of the code that actually casts a single ray, I pass the ID and a flag saying whether it’s allowed to use the geometry cache.  If it isn’t allowed to use the geometry cache, or there is no entry in the cache for the ID, the code calculates intersection info and puts it into the geometry cache.

It then uses the geometry cache information (whether it was re-calculated, or was usable as is) and applies phong shading, does texture lookups, recurses for ray refraction and reflection, and does the other things to figure out the color of the pixel.
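
In sketch form, with Ray and IntersectScene standing in for your own ray type and intersection routine, it might look like this:

#include <vector>

struct Ray;                                   // your ray type (assumed)
struct CollisionInfo
{
    bool  hit      = false;
    int   objectId = -1;
    float hitTime  = 0.0f;   // distance along the ray
    // normal, texture coordinates, etc. would live here too
};
CollisionInfo IntersectScene(const Ray& ray); // the expensive part (assumed)

std::vector<CollisionInfo> g_geometryCache;   // collision info, indexed by ray ID
std::vector<bool>          g_cacheValid;      // whether each entry may be reused

CollisionInfo GetRayCollision(const Ray& ray, int rayId, bool mayUseCache)
{
    if (!mayUseCache || !g_cacheValid[rayId])
    {
        g_geometryCache[rayId] = IntersectScene(ray);  // recalculate and store
        g_cacheValid[rayId]    = true;
    }
    // shading, texture lookups, and reflection / refraction recursion happen
    // after this, using the (possibly cached) intersection info.
    return g_geometryCache[rayId];
}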

In my first implementation of the geometry cache, it was very fast to render with once it was filled in, but it was really expensive to invalidate individual cache items.  If an object moved and a couple hundred geometry cache items needed to be marked as dirty, it was a really computationally expensive operation!

A better option, the one I use now, involves both a 2d grid (for the screen pixels) and a 3d grid (to hold the geometry of the world).

Breaking the screen into a grid, when each ray is cast into the world, I’m able to tell the ray what screen cell it belongs to.  This way, as a ray traverses the 3d grid holding the world geometry, it’s able to add itself to a list that each world grid cell maintains, which keeps track of which rays pass through that 3d cell (keeping just the unique values of course!).  Child rays know what screen cell they are in by getting that value from their parent.

If an object moves in the world, you can make a union of which world cells it occupied before it moved, and which world cells it occupies after the move.  From there, you can make a union of which screen cells sent rays into that world cell.  The last step is to mark all those screen cells as “geometry dirty” so that next frame, the rays in those cells are disallowed from using the geometry cache data, and instead will re-calculate new intersection info.

This method means that potentially a lot of rays re-calculate their intersection data when they don’t really need to, but by tuning the size of the screen and world grids, you can find a good happy medium for your use cases.
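
Here’s a sketch of that invalidation path when an object moves (the per-cell ray bookkeeping functions are assumed to be maintained as described above):

#include <set>
#include <vector>

// Assumed bookkeeping, built as rays traverse the world grid: which screen
// cells sent rays through a given world cell, and a way to dirty a screen cell.
const std::vector<int>& ScreenCellsThatTouch(int worldCellIndex);
void MarkScreenCellGeometryDirty(int screenCellIndex);

void InvalidateForMovedObject(const std::set<int>& worldCellsBefore,
                              const std::set<int>& worldCellsAfter)
{
    // union of the world cells the object touched before and after the move
    std::set<int> affectedWorldCells = worldCellsBefore;
    affectedWorldCells.insert(worldCellsAfter.begin(), worldCellsAfter.end());

    // union of the screen cells that sent rays through those world cells
    std::set<int> dirtyScreenCells;
    for (int worldCell : affectedWorldCells)
        for (int screenCell : ScreenCellsThatTouch(worldCell))
            dirtyScreenCells.insert(screenCell);

    // those screen cells must recalculate intersection info next frame
    for (int screenCell : dirtyScreenCells)
        MarkScreenCellGeometryDirty(screenCell);
}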

If you have an idea to better maintain the geometry cache, feel free to post a comment about it!

Pixel Caching

The second technique is pixel caching, which is a fancy way of saying “don’t redraw pixels that we don’t have to”.  The fewer rays you have to cast, the faster your scene will render.

The first challenge to tackle in this problem is knowing which pixels will be affected when an object changes color.  That is solved by the same mechanism that tells us when geometry cache data is invalidated.

When an object changes color (or has some other non-geometry change), you just get the list of world cells the object resides in, and then get the union of screen cells that sent rays through those world cells.

When you have that list, instead of marking the screen cell “geometry dirty”, you mark it as “pixel dirty”.

When rendering the screen cells, any screen cell that isn’t marked as dirty in either way can be completely skipped.  Rendering it is a no-op because it would be the same pixels as last time! (:
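
Putting the two dirty flags together, the per-frame loop over screen cells might look something like this sketch (RenderRay is a hypothetical helper that re-shades a ray, optionally reusing its cached intersection):

#include <vector>

struct ScreenCell
{
    std::vector<int> rayIds;     // primary rays whose pixels live in this cell
    bool geometryDirty = false;  // rays must re-intersect the scene
    bool pixelDirty    = false;  // rays must re-shade, but may reuse intersections
};

// Assumed helper: shades the ray and writes its pixel, using the geometry
// cache when allowed.
void RenderRay(int rayId, bool mayUseGeometryCache);

void RenderAllScreenCells(std::vector<ScreenCell>& screenCells)
{
    for (ScreenCell& cell : screenCells)
    {
        if (!cell.geometryDirty && !cell.pixelDirty)
            continue;  // clean cell: a complete no-op, same pixels as last frame

        for (int rayId : cell.rayIds)
            RenderRay(rayId, /*mayUseGeometryCache=*/!cell.geometryDirty);

        cell.geometryDirty = cell.pixelDirty = false;  // clean until invalidated again
    }
}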

This is the reason why you want to minimize geometry changes (objects moving, rotating, resizing, etc) and, where you can, rely instead on animating textures, object colors, and lighting colors / intensities.

Future Work

Here’s a smattering of ideas for future work that I think ought to bear fruit:

  • Replace the screen and/or world grid with better performing data structures
  • Pre-compute (a pack time process) the primary rays and subsequent rays of static geometry and don’t store static geometry in the world grid, but store it in something else instead like perhaps a BSP tree.  This way, at run time, if a ray misses all objects in the dynamic geometry world grid, it can just use the info from the pre-computed static geometry, no matter how complex the static geometry is.  If something DOES hit a dynamic object however, you’ll have to test subsequent rays against both the dynamic object world grid, and the data structure holding the info about the static geometry but hopefully it’ll be a net win in general.
  • Investigate to see how to integrate this with photon mapping techniques and data structures.  Photon mapping is essentially ray tracing from the opposite direction (from light to camera, instead of from camera to light).  Going the opposite direction, there are some things it’s really good at – like caustics – which ray tracing alone just isn’t suited for: http://en.wikipedia.org/wiki/Photon_mapping
  • In a real game, some things in the world will be obscured by UI overlays.  There might be an opportunity in some places to “early out” when rendering a single ray if it is obscured by UI.  It would complicate caching though, since an individual ray could remain dirty while the screen cell itself was marked as clean.
  • Orthographic camera:  If the camera is orthographic, that means you could pan the camera without invalidating the pixel and geometry cache.  This would allow the techniques to be used for a side scrolling game, overhead view game, and things of that nature – so long as orthographic projection looks good enough for the needs of the game.  I think if you got creative, it could end up looking pretty nice.
  • Screen space effects: enhance the raytracing visuals with screen space particles and such.  Could also keep a “Z-Buffer” by having a buffer that holds the time each ray took to hit the first object.  This would allow more advanced effects.
  • Interlaced rendering: to halve the rendering time, every frame could render every other horizontal line.  Un-dirtying a screen cell would take 2 frames but this ought to be a pretty straight forward and decent win if losing a little bit of quality is ok.
  • red/blue 3d glasses mode:  This is actually a feature of my snake game, but I figured I’d call it out.  It works by rendering the scene twice, which is costly (each “camera” has its own geometry and pixel cache at least).  If keeping the “Z-Buffer” as mentioned above, there might be a way to fake it more cheaply, but I’m not sure.

Compatible Game Ideas

Despite the limitations, I’ve been keeping a list of games that would be compatible with these ideas.  Here are the highlights of that list!

  • Pinball:  Only the flippers and the area around the ball would actually have geometry changes, limiting geometry cache invalidation.  Could do periodic, cycling color / lighting animations on other parts of the board to spice it up in the “non active” areas.
  • Marble Madness Clone: Using an orthographic camera to allow camera panning, a player could control a glass or mirrored ball through a maze with dangerous traps and time limits.  Marble Madness had very few moving objects and was more about the static geometry, so there’d probably be a lot of mileage here.  You could also have animated textures for pools of acid so that they didn’t impact the geometry cache.
  • Zelda 1 and other overhead view type games: Using ortho camera to allow panning, or in the case of Zelda 1, have each “room” be a static camera.  You’d have to keep re-rendering down somehow by minimizing enemy count, while still making it challenging.  Could be difficult.
  • Metroidvania: side scroller with ortho camera to allow panning.  Could walk behind glass pillars and waterfalls for cool refractive effects.
  • Monkey Island type game: LOTS of static geometry in a game like that which would be nice.
  • Arkanoid type game: static camera, make use of screen space effects for brick-breaking particles, etc.
  • Mystery game: Static scenes where you can use a magnifying glass to LITERALLY view things better (magnification due to refraction, just like in real life) to find clues and solve the mystery.  Move from screen to screen to find new clues and find people to talk to, progress the storyline etc.
  • Puzzle Game: could possibly do a traditional “block based” puzzle game like puzzle fighters, tetris, etc.
  • Physics based puzzle game: You set up pieces on a board (only one object moves at once! your active piece!) then press “play”.  Hopefully it’d be something like a ball goes through your contraption which remains mostly motionless and you beat the level if you get the ball in the hole or something.
  • Somehow work optics into gameplay… maybe a puzzle game based on lasers and lights or something
  • Pool and board games: as always, gotta have a chess game with insane, state of the art graphics hehe
  • mini golf: A fixed camera when you are taking your shot, with a minimum of moving objects (windmills, the player, etc).  When you hit the ball, it rolls, and when it stops, the camera teleports to the new location.
  • Security guard game:  Have several raytraced viewports which are played up to be security camera feeds.  Could have scenes unfold in one feed at a time to keep screen pixel redraw low.
  • Turn based overhead view game:  Ortho camera for panning, and since it’s turn based, you can probably keep object movement down to one at a time.

Lastly, here’s a video describing this stuff in action.  When you view the video, the orange squares are screen tiles that are completely clean (no rendering required, they are a no-op).  Purple squares are screen tiles that were able to use the geometry cache.   Where you don’t see any squares at all, it had to re-render the screen tile from scratch and wasn’t able to make use of either caching feature.

Feedback and comments welcomed!  I’d also be really interested in hearing if anyone actually uses this for anything, or tries to implement it on their own.