

This is a demonstration of a proprietary facial gesturing and animation system that I designed and implemented for the 3ds Max environment. It is presented as a proof of concept and should not be considered a complete solution at this time. The primary goal was a powerful, versatile gesturing system that would allow animation data to be shared easily between characters. Traditionally, complex facial animation is done with either morphing or bones. Morphing is a powerful tool, but the time and overhead needed to build a full set of morph targets for every character, plus the fact that mixing multiple morph channels together can break the volume of a character, convinced me it was not the best solution for facial animation. Rotating bones let me share data between characters easily, and because bones are independent of surface topology, they avoided the time-consuming task of creating morph targets. A bone system could be set up very quickly, but it did not offer the fine control of shape and volume that morph targets provide. So I set out in search of a better solution. The hardest part was developing the idea; the actual implementation was fairly simple. Let's have a look.
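To make the volume-breaking problem concrete, here is a minimal sketch (with made-up vertex data, not from any real character) of standard linear morph-target blending. Each target stores per-vertex deltas from the base mesh, and mixing channels simply sums the weighted deltas, which is why overlapping targets can over-displace vertices:

```python
import numpy as np

# Hypothetical two-vertex "mesh" and two morph targets that both
# displace the same vertices upward. Names (smile, jaw_open) are
# illustrative only.
base = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0]])          # base mesh vertices

smile = np.array([[0.0, 0.5, 0.0],
                  [0.0, 0.5, 0.0]])         # per-vertex deltas, "smile"
jaw_open = np.array([[0.0, 0.4, 0.0],
                     [0.0, 0.4, 0.0]])      # per-vertex deltas, "jaw open"

def blend(base, targets, weights):
    """Return the blended mesh: base + sum(w_i * delta_i)."""
    out = base.copy()
    for delta, w in zip(targets, weights):
        out += w * delta
    return out

# With both channels at full weight, the deltas stack additively:
# each vertex moves 0.9 units up, farther than either target alone.
mixed = blend(base, [smile, jaw_open], [1.0, 1.0])
```

Nothing in the blend prevents the summed displacement from exceeding what either sculpted target intended, so overlapping regions inflate or collapse, breaking volume exactly as described above.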


       The system is based on groups of nodes that slide across hidden "ghost" surfaces, simulating skin sliding over muscle and bone. The red nodes shown in figure 1 are the drive nodes that control the system; they are constrained to slide across the blue ghost surfaces, and they are the nodes the animator interacts with. I set up 22 drive controls to drive the muscle node groups of the face. I wanted to keep the system fast, fun, and easy to use, so the idea here was simplicity. The next step in this area of the system would be to add a method of capturing a gesture, such as a vowel sound or a frown, for later reuse. Now let's look at the core of the system.
Animatable Nodes
Figure 1
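The sliding constraint described above can be sketched in a few lines. This is an illustrative stand-in, not the actual implementation: it uses a sphere as the hidden "ghost" surface and snaps a drive node back onto it, so the node can be dragged freely but always stays on the simulated skull:

```python
import numpy as np

# Hypothetical ghost surface: a sphere standing in for the hidden
# muscle/bone geometry. The class and method names are assumptions
# for illustration, not the author's API.
class GhostSphere:
    def __init__(self, center, radius):
        self.center = np.asarray(center, dtype=float)
        self.radius = float(radius)

    def constrain(self, point):
        """Project an arbitrary point back onto the sphere surface,
        so a drive node slides over the surface rather than leaving it."""
        v = np.asarray(point, dtype=float) - self.center
        n = np.linalg.norm(v)
        if n == 0.0:  # degenerate case: node at the center, pick any surface point
            return self.center + np.array([self.radius, 0.0, 0.0])
        return self.center + v * (self.radius / n)

skull = GhostSphere(center=(0.0, 0.0, 0.0), radius=2.0)
# The animator drags a node off the surface; the constraint snaps it back:
node = skull.constrain((3.0, 4.0, 0.0))
```

In 3ds Max terms this role is played by a surface-constraint controller on real (typically more complex) ghost geometry; the point is only that drive nodes have two degrees of freedom on a surface, like skin over bone.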

       In figure 2 you see what's under the hood. The yellow nodes shown here are driven by the red nodes in figure 1. These arrays of nodes are linked to each other in subtle and sometimes complex ways: each node is aware of the state and position of its neighbors. I also gave the animator direct access to these nodes, both for fine animatable control of the system and to allow it to be tuned for use on a different character. Generally, though, the drive nodes in figure 1 were all I needed for great results. The next step here would be to automate much of the setup process by adding a front-end interface for defining group node reactions.

Core of the System
Figure 2
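The neighbor-awareness idea can be sketched as a simple relaxation pass, in which each muscle node is pulled toward the average of its linked neighbors, so a displacement on one node propagates smoothly to the others. The weights and iteration count here are illustrative assumptions, not the values used in the actual system:

```python
import numpy as np

def relax(positions, neighbors, stiffness=0.5, iterations=3):
    """Pull each node toward the average of its neighbors.

    positions: (n, 3) array of node positions.
    neighbors: list of neighbor-index lists, one per node.
    stiffness: how strongly a node reacts to its neighbors (assumed value).
    """
    pos = positions.copy()
    for _ in range(iterations):
        new = pos.copy()
        for i, nbrs in enumerate(neighbors):
            if not nbrs:
                continue
            avg = pos[nbrs].mean(axis=0)
            new[i] = (1.0 - stiffness) * pos[i] + stiffness * avg
        pos = new
    return pos

# Three nodes in a row; the middle (drive) node has been displaced
# upward, and relaxation drags its neighbors partway along with it.
p = np.array([[0.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],     # displaced drive node
              [2.0, 0.0, 0.0]])
nbrs = [[1], [0, 2], [1]]
settled = relax(p, nbrs)
```

The real system links nodes in richer, hand-tuned ways than a uniform average, which is exactly what the proposed front-end interface for group node reactions would automate.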
       Figure 3 shows what the animator sees while working. On a single Pentium II 400 I was able to animate in real time. In theory the system could be hooked up to a MIDI board to capture animation in real time as well. Since mesh topology and resolution remain independent of the facial system, the mesh can be edited at any time.
What the artist sees
Figure 3
       The final result is shown in figure 4: soft, fluid motion and consistent volume. To view an MPEG demonstration (3.7 MB) of how the system works, click the link below.
FaceSysDemo.mpg
Final Result
Figure 4