This is part two of “Anatomy of a Skeletal Animation System”
- Anatomy of a Skeletal Animation System part 1
- Anatomy of a Skeletal Animation System part 2
- Anatomy of a Skeletal Animation System part 3
Animation Controller v3 – Bone Groups
In part 1, we talked about how to make a skeletal animation system that could play smooth, non-popping animations on a model. It could also communicate back to the engine to play sound effects, spawn objects at specific spots, and many other things as well. What it could not do, however, was play a different animation on the upper body and the lower body.
To solve this, instead of having a single animation controller for our model, we need multiple animation controllers, where each controller controls a specific set of bones. Note that multiple controllers should be able to affect the same set of bones, and the final position of a bone is produced by blending the data from all animation controllers that affect it.
Each animation controller should have a blend weight so that it can be blended in and out to keep animation motion smooth and continuous. The blend weighting also allows you to turn specific animation controllers on and off as needed.
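The blend-weight scheme above can be sketched in code. This is a minimal illustration under assumed names, not a definitive implementation: it uses a translation-only `BonePose` (a real system would also carry rotations and blend them via slerp), and each controller in evaluation order lerps the accumulated result toward its own pose by its weight.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Minimal bone pose: translation only, for illustration. A real system
// would also carry rotation (blended via slerp) and scale.
struct BonePose { float x, y, z; };

struct ControllerSample {
    BonePose pose;  // this controller's pose for the bone
    float weight;   // blend weight in [0,1]; 0 = off, 1 = full influence
};

// Blend controller samples in evaluation order: each controller lerps the
// accumulated result toward its own pose by its weight, so later
// controllers override earlier ones as their weight approaches 1.
BonePose BlendBone(const std::vector<ControllerSample>& samples, BonePose tPose)
{
    BonePose result = tPose; // start from the bind / T-pose
    for (const ControllerSample& s : samples) {
        result.x += (s.pose.x - result.x) * s.weight;
        result.y += (s.pose.y - result.y) * s.weight;
        result.z += (s.pose.z - result.z) * s.weight;
    }
    return result;
}
```

With this "lerp toward" formulation, a controller at weight 1 completely replaces everything blended before it, which matters for the evaluation-order details discussed below.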
Some great example uses for this are…
- Having a separate animation controller for the upper and lower body so that they can work independently (the lower body can look like it’s jumping, without having to care whether the upper body is firing a gun or not).
- Having a separate full body animation controller that affects all bones. In most situations, this animation controller would be off, but in the rare cases where you want to play a full body animation, you turn this one on and play an animation on it.
- Having a facial animation controller that only turns on if the camera is close enough to a character’s face. This way, if you look closely at another player, you can see their face moving, but if you are far away from them, the game engine doesn’t bother animating the facial bones since you can’t see them very well anyway.
The order in which these animation controllers are evaluated should be explicit (instead of left up to load order or things like that). You want to be very clear about which animation controllers override which others for the case where multiple are on at the same time, affecting the same bones.
For the sake of efficiency, when blending together the animation data from each animation controller that affects a bone, you should start at the last full-weight (100% weight) anim controller in the anim controller list. This way, you don’t bother evaluating animations for anim controllers that are just going to be completely masked out by other animation controllers.
If there is no full-weight anim controller in the list that affects the specific bone, initialize the bone data to the “T-Pose” animation position before blending the other anim controllers’ bone data on top of it.
We now have a very robust animation system, but it isn’t quite there yet. Interacting with this animation system from game code means having to tell specific animation controllers when to play specific animations. This is quite cumbersome and not very maintainable. Ideally, the animation logic would be separated from the gameplay logic. Besides making the code more maintainable, this means that non-animation programmers will be able to write gameplay code that interacts with the animation system, which is a big win for everyone. Fewer development bottlenecks.
There are two good techniques I’ve seen for separating the logic and performing animation selection for you.
The first way is via “animation properties” and the second way is by using an animation state machine. There are pros and cons to each.
For the animation properties method, you essentially have a list of enums that describe the player’s state. These enums include things such as whether the player is crouched or standing; whether the player is unarmed, holding a pistol, or holding a rifle; or even how injured the player is (not injured, somewhat injured, or near death).
The game play code would be in charge of making sure these enums were set to the right values, and the animation controller(s) would use these values to determine the appropriate animations to play.
For instance, the game code may set the enum values to this:
- WeaponType = Rifle (vs Unarmed, Pistol, etc)
- WeaponAction = Idle (vs Firing, Reloading, etc)
- PlayerHealth = NearDeath (vs healthy, injured, etc)
- MovementType = WalkForward (vs Idle, Running, LungeRight, etc)
From here, the animation system takes over.
The lower body animation controller perhaps only cares about “MovementType” and “PlayerHealth”. It notices that the player is walking forward (WalkForward) and that they have very low health (NearDeath). From this, it uses a table that animators created in advance that says for this combination of animation properties, the lower body animation controller should play the “WalkNearDeathFwd” animation. So, the lower body animation controller obliges and plays that animation for the lower body bones.
The upper body animation controller perhaps just cares about WeaponAction, WeaponType and PlayerHealth. It notices that the player has a rifle, they aren’t shooting it, and they have very low health. From this, the upper body animation controller looks into its animation properties table and sees that it should play the “RifleIdleInjured” animation, so it plays that animation on the upper body bones.
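A sketch of what such a property-table lookup might look like, using the lower body example above. The enum values and table contents are hypothetical, and in a shipping system the table would be loaded from animator-authored data rather than hard-coded:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

// Hypothetical property enums mirroring the examples above.
enum class MovementType { Idle, WalkForward, Running };
enum class PlayerHealth { Healthy, Injured, NearDeath };

// The lower body controller only keys off the properties it cares about.
using LowerBodyKey = std::pair<MovementType, PlayerHealth>;

// Animator-authored mapping from property combination to animation name.
const std::map<LowerBodyKey, std::string> gLowerBodyTable = {
    { { MovementType::WalkForward, PlayerHealth::NearDeath }, "WalkNearDeathFwd" },
    { { MovementType::WalkForward, PlayerHealth::Healthy   }, "WalkFwd" },
    { { MovementType::Idle,        PlayerHealth::Healthy   }, "Idle" },
};

// Look up the animation to play; fall back to the T-pose if the
// combination was never authored (a good place to log a content bug).
std::string SelectLowerBodyAnim(MovementType move, PlayerHealth health)
{
    auto it = gLowerBodyTable.find({ move, health });
    return (it != gLowerBodyTable.end()) ? it->second : "TPose";
}
```

Each controller (upper body, facial, full body) would own its own table keyed only on the properties it cares about.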
The logic of gameplay and animation are completely separate, and the animators have a lot of control over which animations to play in which situations.
Once again, you’d want an editor of some sort for animators to set up these animation properties tables. It makes the tables easier for them to work with, it verifies the data to reduce the bug count, and everyone wins.
Your tool also ought to pack each animation properties table (upper body, lower body, facial animation, full body animation, etc) into some run-time friendly structure, such as perhaps a balanced decision tree to facilitate quick lookups based on animation properties.
Animation State Machine
Another way to handle animation selection is to have the animation controllers run animation state machines, with the game code sending animation events to the state machines. Each state of the state machine corresponds to a specific animation.
When the player presses the crouch button, for instance, the game code could send an event to all of the animation controllers saying so, maybe ACTION_BEGINCROUCH.
Depending on the logic of the state that each anim controller state machine is in, it may respond to that event, or ignore it.
The upper body anim controller may be in the “Idle” state. The logic for the idle state says that it doesn’t do anything if it receives the ACTION_BEGINCROUCH event, so it does nothing and keeps playing the animation it was playing before.
The lower body anim controller may also be in a state named “Idle”. The logic for the lower body idle state says that if it receives the ACTION_BEGINCROUCH event, it should transition to the “StartCrouch” state. So, it transitions to that state, which says to play the “CrouchBegin” animation (and perhaps also says to ignore all incoming events). When that animation is done, it automatically transitions to the “CrouchIdle” state, which says to play the “Crouching” animation. It does that, waiting for various events to happen, including an ACTION_ENDCROUCH event sent from game code when the player lets go of the crouch button.
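The crouch example can be sketched as a small data-driven state machine. This is a minimal illustration with made-up type names; in a real system the states and transitions would come out of the state machine editor rather than code:

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical events, matching the ones used in the example above.
enum class AnimEvent { BeginCrouch, EndCrouch, AnimFinished };

// Each state plays one animation and maps the events it responds to onto
// a destination state; events not listed are simply ignored.
struct AnimState {
    std::string animation;
    std::map<AnimEvent, std::string> transitions;
};

struct AnimStateMachine {
    std::map<std::string, AnimState> states;
    std::string current;

    // Respond to an event if the current state handles it, else ignore it.
    void OnEvent(AnimEvent e) {
        auto& transitions = states[current].transitions;
        auto it = transitions.find(e);
        if (it != transitions.end())
            current = it->second;
    }
};
```

The lower body machine from the example would be authored as: Idle responds to BeginCrouch by going to StartCrouch, StartCrouch goes to CrouchIdle when its animation finishes, and CrouchIdle responds to EndCrouch by returning to Idle. The upper body machine's Idle state would simply have no BeginCrouch entry, so it ignores the event.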
The interesting thing about the anim state machine is that it gives content creators a lot more control over the actual control of the player himself (they can say when the player is allowed to crouch, for instance!), which can be either a good or bad thing depending on your needs, use cases, and the skill sets of your content creators.
Going this route, you are going to want a full-on state machine editor so content people can set up states and the rules for state switching, and they should be able to see a model and simulate state switches to see how things look. If you DO make such an editor, it’s also a great place to allow them to define and edit bone groups. You might even be able to combine it with the key string editor and make a one-stop-shop editor for animation (and beyond).
Animation Controller v4 – Animation Blend Trees
At this point, our animation system is in pretty good shape, but we can do a bit better before calling it shippable.
The thing we can do to really spruce it up is, instead of dealing with individual animations (for blending, animation selection, etc.), to replace them with animation blend trees like the one below:
In the animation blend tree above, you can see that it’s playing two animations (FireGun and GunSight) and blending them together to create the final bone data.
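A minimal sketch of what such a tree might look like in code, with pose data reduced to a single float for illustration: every node evaluates to a pose, leaves sample an animation, and interior nodes blend their children's results. The class names are my own, not from the article.

```cpp
#include <cassert>
#include <cmath>
#include <memory>

struct Pose { float value; }; // stand-in for a full set of per-bone data

// Every node in the blend tree evaluates to a pose; interior nodes
// evaluate their children and combine the results.
struct BlendNode {
    virtual ~BlendNode() = default;
    virtual Pose Evaluate(float time) const = 0;
};

// Leaf node: samples a single animation (here just a constant value).
struct PlayAnimationNode : BlendNode {
    Pose sample;
    explicit PlayAnimationNode(float v) : sample{v} {}
    Pose Evaluate(float) const override { return sample; }
};

// Interior node: blends two children by a weight, e.g. the FireGun and
// GunSight animations from the tree above.
struct Blend2Node : BlendNode {
    std::unique_ptr<BlendNode> a, b;
    float weight = 0.0f; // 0 = all of child a, 1 = all of child b
    Pose Evaluate(float time) const override {
        Pose pa = a->Evaluate(time);
        Pose pb = b->Evaluate(time);
        return { pa.value + (pb.value - pa.value) * weight };
    }
};
```

Because children are just more `BlendNode`s, trees can nest arbitrarily deep, which is where the flexibility of this design comes from.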
As you can imagine, you might have different nodes that perform different functionality, which would result in lots of different kinds of animations using the same animation blend tree.
You will be in good shape if you make a nice animation blend tree editor where a content creator can create an animation blend tree, set parameters on its nodes, and preview their work within that editor to quickly iterate on changes. Again, without this tool, everyone’s lives will be quite a bit harder, and a little less happy, so it’s in your interest to invest the effort!
Some really useful animation nodes for use in the blend trees might include:
- PlayAnimation – definitely needed!
- AnimationSequence – This node has N number of “children” and will play each child in order from 1 to N in a sequence. You may optionally specify (in the editor) that you want the children chosen at random, with a weighting on each child for the random choosing. This is useful for “idle animations” so that periodically an idle character will do silly things.
- AimGrid – this animation node uses the player data to get the yaw and pitch of the player’s aim. It uses this information to figure out how to blend between a grid of 9 animations of the player aiming in the main directions to give a proper resulting aim. This node has 9 children, one for each aiming direction: Up Left, Up, Up Right, Left, Forward, Right, Down Left, Down, Down Right. Note that since this is a generalized anim blend tree, these child nodes can be ANY type of animation node; they aren’t required to be “PlayAnimation” nodes. This in essence is the basis of parametric animation (which I mentioned at the beginning of part 1), so this is a way to get some parametric animation into your system without having to go full bore on it.
- IK / FK Nodes – get full or partial ragdoll on your model. Also get it to do IK solving to position hands correctly for specified targets and such.
- BlendBySpeed – You give N children and a movement speed for each child. This animation node will choose the correct animation, or blend between the correct animations, based on the current traveling speed of the player. This way you get a smooth blend between walk, run, and sprint animations, and the player can move at whatever speed they ought to (perhaps the speed is defined by the pathing system, or the player’s input). To solve the problem of feet “dancing” as they blend, you need to make sure the footfalls happen at the same time (as a percentage of each animation’s length) in every animation that will blend together. This way, the animations don’t fight each other, and the feet appear to move properly.
- BlendByHealth – if you want the player to walk differently when they are injured, this node could be used to specify various walk animations with matching health levels so that it will blend between them (for upper or lower body or whatever else) as is appropriate for the player’s current health level.
- Additive Blending – to get gun recoils and such
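As an illustration of one of these nodes, here is a sketch of the speed-bracketing logic a BlendBySpeed node might use: find the two children whose authored speeds bracket the current speed, and a blend fraction between them. The speeds in the usage example are made up.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Which two children to blend: blend child[index] toward child[index + 1]
// by fraction (0 = all of child[index], 1 = all of child[index + 1]).
struct SpeedBlend { size_t index; float fraction; };

// childSpeeds holds the movement speed each child animation was authored
// for, sorted ascending, with at least two entries. Speeds outside the
// authored range clamp to the slowest or fastest animation.
SpeedBlend BlendBySpeed(const std::vector<float>& childSpeeds, float speed)
{
    if (speed <= childSpeeds.front()) return { 0, 0.0f };
    if (speed >= childSpeeds.back())  return { childSpeeds.size() - 2, 1.0f };
    size_t i = 0;
    while (childSpeeds[i + 1] < speed) ++i; // find the bracketing pair
    return { i, (speed - childSpeeds[i]) / (childSpeeds[i + 1] - childSpeeds[i]) };
}
```

For example, with children authored at walk = 1.5, run = 4 and sprint = 8 units per second, a current speed of 2.75 would blend the walk and run animations at 50% each. The footfall-synchronization caveat from the BlendBySpeed description above still applies to whatever this returns.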
As you can see, animation blend trees have quite a bit of power. They are also very technical, which means engineers may need to help content folk make good trees and resolve some edge case bugs. In my experience, though, animators are often very technical folk themselves, so they can generally do quite a bit on their own.
Combine anim blend trees with the animation selection systems (FSM or anim properties) and the ability to smoothly blend an animation controller between the internal animations (or anim trees) it’s playing, and you have a really robust, high quality animation system.
Often with this workflow, an animator will just say “hey, I need an anim node which can do X”, so an animation engineer creates the node and the animators start using it to do interesting things. There’s no need for an engineer to be deeply involved in the process of making the animation work the way the animator wants, or to worry about triggering it in the right situations, etc.
Sure, there will be bugs, and some things will be more complex than this, but by and large it’s a very low hassle system that empowers content creators and removes engineers from needing to be involved in most changes – which is a beautiful thing.
End of Part 2
This is the end of part 2. In the next and final part, we’ll talk about a few other miscellaneous features and optimizations.