When creating an animation, it is paramount to have a very clear objective and vision of the asset and its function. How much is the object going to deform? How visible is it going to be in the scene? How complex is the overall animation? These are some of the questions to ask as you are making the asset. Outlining and understanding the important aspects of the task can be the difference between smooth progress and a difficult, rocky workflow. In this blog, I hope to address issues that people might encounter when animating and give advice on how to tackle them. The main software used for the examples is Autodesk Maya together with Unity; however, the theory behind the workflow and habits is applicable to any 3D package and game engine out there.
It is important to understand the role of the asset within the scene, as this will determine many aspects of its geometry and design. You can get away with a few extra polygons if your demo is going to be shown on a PC or laptop, but if you are targeting a mobile platform then every single vertex needs to be accounted for. Knowing how much of the budget is available for assets is always useful; that way you can make sure every vertex is used wisely and effectively.
The time spent making sure an asset is finished, optimised and ready to be incorporated into the game is time well spent, as it means little to no modification will be needed in the later stages of the production process. There is nothing worse than finding out the model’s poly count is too high and needs to be reduced after having spent time weighting and animating it. In a case like this, you could reuse the animations, but the model will need new weights, as the vertex count will be different after a reduction. And even then, a reduction in vertices might change how the bones influence the mesh, which could mean the animations have to be discarded too because the deforming mesh behaves strangely.
It is a good habit to spend a reasonable amount of time on a task and not rush through it. Rushing one part of the process because the task seems to drag on and you’re itching to start something else is a very bad idea, as it tends to come back with a vengeance later on. Below is an example of a good workflow for making optimised assets. The diagram defines each stage and allows clear progression from one part of the process to the next.
It is worth emphasising that whilst it is tempting to keep working on an asset to achieve the perfect balance between optimised mesh and high quality, there is a point where you should just declare it finished. You could consider an asset complete when it has a low poly count, its mesh is optimised for its purpose within the scene, it has a good texture map, and it runs on the target device whilst looking its best.
Fig. 1- example of workflow during asset production
Removing Transformations on a model:
Another point to emphasise is the cleanliness of the model. A clean model is one that has no transformations applied to it and sits at the origin of the scene. Any kind of transformation or residue (anything that will influence the model, such as an animation keyframe) left on the model will have an effect on the animation, so it is essential for the asset to be clean and free from anything that could potentially influence the bones.
Before starting anything, freeze all transformations, delete the history of the scene, and make sure the model is where it should be and faces the correct direction. The reason for this is to establish a neutral point to which you can always return during the animation process. The controllers used to move objects around a scene store their transformations as X, Y and Z values. If one wants to return to the initial position at any point in the animation, it makes sense for that point to be 0, 0, 0 rather than some arbitrary values that differ from controller to controller and would be difficult to track.
It is also worth pointing out that if one forgets to freeze the transformations of a controller before binding it to a bone, the transformations of that controller will influence the bone and will almost certainly make it move in ways that are not desired.
Overall, zeroing out the transformations on the asset and on anything that is going to be applied to the asset is a good habit to keep, and one that most definitely pays off throughout the process.
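To illustrate why this matters, here is a minimal sketch in plain Python (not the Maya API; the class and function names are purely illustrative) of what happens when a controller still carries leftover values at bind time versus after a freeze:

```python
# Illustrative sketch: why controllers are frozen before binding.
# This is plain Python, not Maya code; names here are made up.

class Controller:
    def __init__(self, translate=(0.0, 0.0, 0.0)):
        self.translate = list(translate)

    def freeze(self):
        # 'Freeze transformations': the current pose becomes the new zero,
        # so the channel values read 0, 0, 0 from now on.
        self.translate = [0.0, 0.0, 0.0]

def bone_position(bind_position, controller):
    # A bound bone inherits whatever values the controller stores.
    return [b + t for b, t in zip(bind_position, controller.translate)]

bind = [0.0, 5.0, 0.0]                 # where the bone sits in the rig

dirty = Controller((2.0, 0.0, 1.0))    # moved into place, never frozen
print(bone_position(bind, dirty))      # bone is dragged to [2.0, 5.0, 1.0]

clean = Controller((2.0, 0.0, 1.0))
clean.freeze()                         # channels now read 0, 0, 0
print(bone_position(bind, clean))      # bone stays put at [0.0, 5.0, 0.0]
```

The unfrozen controller drags the bone away from its bind position the moment it is connected, which is exactly the kind of unwanted movement described above.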
Fig. 2- Mesh with transformations versus mesh without any transformations.
All the transformations applied to a mesh can be seen in the Channel Box menu.
This is also a good point to introduce some terminology that might be used interchangeably throughout the text, in order to prevent any confusion:
- When talking about the asset or model that is to be animated, one might refer to it as the ‘mesh’ or ‘skin’ as well as the terms used so far.
- ‘Rig’ and ‘skeleton’ are sister terms: both refer to the hierarchy of bones and joints set up inside or around the object in order to animate it.
- The bones are ‘bound’ to the skin, and will influence the mesh and deform it when moved. Skin weights or the action of ‘paint weighting’ allows control over that influence and enables the user to fix any incorrect deformations or influence that might occur.
- Controllers are curves, or sometimes other pieces of geometry, parented to an object or joint in order to make the animation process easier.
Moving the Mesh:
I hope these terms are clear and that it is now easier to follow the elements mentioned so far. Turning back to the clean mesh: at this point one should start considering how to proceed with the animation. Looking at the construction of the mesh tends to be a good starting point, as it can be a deciding factor. Not all meshes need a skeleton in order to be animated; skeletons and skinning can get expensive, so if the asset can be animated through a parented hierarchy it is always better to do so. A character with detached limbs (think Rayman) or the pieces of an engine moving in unison would be good examples of assets that animate just fine with a parent hierarchy.
Here is an image of a very simple parent hierarchy set up in Maya:
Fig. 3a- Parent hierarchy example scene
Fig. 3b- Parent hierarchy example set up
In the example shown in Figure 3a, some simple shapes orbit a cube. Each coloured controller moves its shape individually, the black controller allows control over the small shapes, and the white controller moves the small and big shapes together. It is a simple set-up, but with it one can move the shapes, set the orbit, and even move the whole arrangement with ease.
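The way transforms flow down such a hierarchy can be sketched in a few lines of plain Python (this is a toy scene graph, not Maya code; the node names echo the controllers in Figure 3a but are otherwise made up). Moving the root carries every child with it:

```python
# Toy scene graph mimicking the Figure 3a hierarchy (not Maya code):
# a child's world position composes every ancestor's translation.

class Node:
    def __init__(self, name, translate=(0.0, 0.0, 0.0), parent=None):
        self.name = name
        self.translate = list(translate)
        self.parent = parent

    def world_position(self):
        # Walk up the chain, accumulating each ancestor's translation.
        pos = list(self.translate)
        node = self.parent
        while node is not None:
            pos = [p + t for p, t in zip(pos, node.translate)]
            node = node.parent
        return pos

white = Node("white_ctrl")                          # moves everything
black = Node("black_ctrl", parent=white)            # moves the small shapes
small = Node("small_shape", (1.0, 0.0, 0.0), parent=black)
big   = Node("big_shape",   (3.0, 0.0, 0.0), parent=white)

white.translate = [0.0, 2.0, 0.0]   # move the whole set-up at once
print(small.world_position())       # [1.0, 2.0, 0.0]
print(big.world_position())         # [3.0, 2.0, 0.0]
```

One change on the root controller repositions every shape, which is exactly why a parent hierarchy is such a cheap way to animate rigid pieces.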
On the other hand, humanoids, organic models and more complex assets do benefit from having a skeleton rig drive them. These rigs work in a way similar to how a physical skeleton moves a body. The bones are set up with IK handles, which create an effect close to muscles pulling on a joint to make it move. Rigs are easy to build and get familiar with, but can grow complex very quickly, as shown in the example below:
Fig. 4- Top-down view of the rig on a lizard
This rig contains about 95 bones plus their respective controls, constraints (specific influences that controllers cast on the joints) and weights. It works very smoothly, deforms nicely, allows good control over the lizard mesh, and performs well on a mobile platform. This rig was designed with complex movement in mind: it goes as far as having controls that allow the digits to contract and relax (Fig. 5).
Fig. 5- Close up of finger control
Optimising a Rig:
This level of detail is fine if the camera is going to come quite close to the lizard and take note of the finer movements, but it might not be the ideal set-up for something aimed at a mobile device, or for a scene where the camera never gets close enough to appreciate those movements. In this particular case, the asset happened to be the only animated one in the scene, so there was enough budget to accommodate that number of bones and influences. But what if that were not the case? Bones would need to be removed in order to accommodate more animated characters. Using this example, removing the extra bones in the hands and feet and reducing the bones in the tail would eliminate around 65 bones, which is more than enough to animate another character, and would cut the bone count on the model by roughly two thirds.
Fig. 6- simple rig on a phoenix
Whilst the lizard is not an ideal candidate for a rig driving an animation aimed at a mobile device, the rig on the phoenix is a much better example. In this case, the rig originally featured 15 bones, but an extra three were added to spread the influence across the lower part of the mesh, bringing the total count up to 18. This particular model also features in a scene with other animated elements and plenty of effects, and was not meant to perform any particularly complex animation, so 18 bones is all it needs.
Always think carefully and take care when building the rig and controls that will drive your model. Make sure you understand what the animation is meant to achieve, and aim to build the rig in such a way that it can bring the object to life with as few bones as possible. As shown in Fig. 7, a lot can be achieved with this particular rig.
Fig. 7- Progression of the animations of the phoenix, from complex flight to simpler, looping animation
The Animation Process:
So far, we have touched on the production of the assets, the rigging and skinning process and some optimisation practices, so it is time to address the actual animation process. Within computer animation, animating tends to be carried out by positioning the model and keyframing the pose. The series of keyframes is then played one after another and blended together to form the resulting animation.
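The blending between keyed poses can be sketched with a few lines of Python. This is a deliberately simplified model using linear interpolation between keys; real animation packages default to spline (curve) interpolation with adjustable tangents, but the principle of sampling in between keyframes is the same:

```python
# Simplified sketch of sampling an animated channel between keyframes.
# Uses linear interpolation; DCC tools normally use spline curves.

def sample(keyframes, time):
    """keyframes: sorted list of (time, value) pairs."""
    if time <= keyframes[0][0]:
        return keyframes[0][1]          # clamp before the first key
    if time >= keyframes[-1][0]:
        return keyframes[-1][1]         # clamp after the last key
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= time <= t1:
            f = (time - t0) / (t1 - t0)  # 0..1 between the two keys
            return v0 + f * (v1 - v0)

# A rotation channel keyed at frames 0, 12 and 24:
keys = [(0, 0.0), (12, 90.0), (24, 0.0)]
print(sample(keys, 6))    # 45.0 - halfway between the first two poses
print(sample(keys, 24))   # 0.0  - back at the start, ready to loop
```

Because the last key returns the channel to its starting value, playing this curve on repeat produces a seamless loop, which is the basis of the looping animations discussed next.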
When the animations are brought into the game engine, they can either be matched to triggers and played in response to them, or left to loop on their own for as long as needed. Simple looping animations are a very easy way to bring a scene to life without complex animation, and if done right they can give the illusion of one long string of constant movement.
ARM’s Ice Cave demo makes use of these types of animation to bring the cave to life. The butterfly and the tiger both move around the cave in constant loops that are timed to fit each other, and the phoenix constantly tries to come back to life but is always stopped by its animation looping back to its sleeping starting state.
Fig. 8- The Ice Cave Demo by ARM
Throughout the production of Ice Cave, we found that this was the best way to bring dynamism to the scene, as it allows everything to loop continuously without the demo having to restart when an animation stops.
I have repeated throughout this article that it is important to have a clear vision of what one is trying to achieve with a project, because this knowledge makes many aspects of the production much smoother. More often than not, the result is a set of good, optimised models, a well-constructed scene and cleverly tailored visual effects that, put together, create the illusion that the overall product is of much higher specification than it actually is.
A breakdown of the scene, its elements and their purpose will always help. Also consider how viewers will interact with the finished product: is everything going to reset after a certain point, or will it play continuously? Using these questions as a basic guideline, it will soon become clear which is the best way to animate the objects and how best to go about it.
Animations in a Game Engine:
I hope that by this point it is evident that asset creation and animation is quite a complex process, full of elements to remember and consider at every point in the pipeline. The last step is to export the animation and place it within the scene in your game engine of choice.
There are a few formats you can export your animated model to, but the most widely used are .fbx and .dae. Unity can also handle Maya’s .ma and .mb files, which may contain animations. The theory is simple enough, but in practice a few things can go wrong, resulting in the animation not exporting at all or exporting incorrectly.
3D model viewers are extremely useful for previewing animations, as what looks fine in Maya might not match what you get in Unity or another game engine. Assimp, Open3mod and the Autodesk FBX Converter are some examples of 3D viewers, the FBX Converter being particularly useful as it can convert files from one format to another (fig. 9). This proved very handy in situations where animations would only export correctly in one file format, and not the one that was needed. Even after running the model through some 3D viewers, it is always worth checking one last time within Unity or the game engine of choice. Unity lets the user preview animations within the Inspector tab (fig. 10), which gives an idea of how the animated model will look in the scene. It is worth noting that sometimes the mesh will deform awkwardly; before panicking and assuming the animation exported incorrectly, check how many bones are influencing each vertex, as this is often the root of the problem.
Fig. 9- Screenshot for the Autodesk FBX converter
Fig. 10- Unity inspector tab displaying the animation of the phoenix
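The per-vertex influence issue mentioned above comes from engines capping how many bones may affect a single vertex (four is a common limit, and Unity exposes this in its quality settings). A hedged sketch in plain Python (not engine code; the function and bone names are made up for illustration) of what such a cap does to a vertex's skin weights:

```python
# Illustrative sketch (not Unity/Maya code): when an engine caps bone
# influences per vertex, the weakest influences are dropped and the
# remaining weights are renormalised - which can visibly change how
# the mesh deforms compared to the DCC preview.

def cap_influences(weights, max_influences=4):
    """weights: dict of bone name -> skin weight for one vertex."""
    # Keep only the strongest influences, up to the cap.
    kept = dict(sorted(weights.items(), key=lambda kv: kv[1],
                       reverse=True)[:max_influences])
    # Renormalise so the kept weights still sum to 1.
    total = sum(kept.values())
    return {bone: w / total for bone, w in kept.items()}

# A vertex skinned to five bones - one over a four-bone cap:
vertex = {"spine": 0.4, "hip": 0.3, "tail1": 0.15,
          "tail2": 0.1, "tail3": 0.05}
capped = cap_influences(vertex)
print(sorted(capped))                    # tail3 has been dropped
print(round(sum(capped.values()), 6))    # weights sum back to 1.0
```

If a vertex in the source file relies on that fifth influence, its deformation in the engine will differ from what Maya showed, which is why checking the influence count is the first thing to do when a mesh deforms awkwardly after export.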
Taking an asset from start to finish is a long process, full of points where things can easily go wrong, but understanding how each part of the process works and how best to approach it makes it much easier to handle. Throughout this article, I have talked about 3D assets and how to progress from the initial sculpt to a fully animated asset integrated within the scene, with a focus on the animation part of the process. I hope this has provided insight into the more artistic side of the 3D art world, resolved any doubts, and offered useful advice for avoiding problems and keeping up good habits throughout the process.