What is it?

Image of an animator’s face in the Character Animator program showing the facial data points used for animation creation.

Facial motion capture (Mo-Cap) is a process that uses a camera to map and track points on the user’s face. Software such as Adobe’s Character Animator derives data from the camera to animate cartoon characters in real time. This can greatly reduce the amount of time needed to create an animation and breathes subtle life into the character in a way that would otherwise be difficult to achieve. Character Animator harnesses the power of the webcam to map several parts of the face to the respective parts of the character, allowing it to record in real time. This includes your eyebrows, eyes, mouth, and head position. It also captures audio and changes the character’s mouth shapes to match what the user is saying. In addition to the webcam, the user can operate their keyboard to trigger additional movements, effects, and walk motions. All these different aspects combine to give the character a personalized feel.
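Character Animator handles its tracking entirely inside the application, but the underlying technique of webcam-based facial landmark tracking can be sketched with open-source tools. The snippet below is a minimal illustration of that general idea, not Character Animator’s actual implementation; it assumes the MediaPipe and OpenCV Python libraries, a default webcam, and uses one landmark index (near the nose tip) purely as an example.

# Minimal sketch of webcam facial-landmark tracking, the general technique
# behind facial Mo-Cap. This is NOT Character Animator's implementation; it
# uses the open-source MediaPipe and OpenCV libraries purely for illustration.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(
    static_image_mode=False,  # video mode: track landmarks across frames
    max_num_faces=1,
    refine_landmarks=True,
)

capture = cv2.VideoCapture(0)  # default webcam

while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break

    # MediaPipe expects RGB images; OpenCV captures frames in BGR order.
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

    if results.multi_face_landmarks:
        landmarks = results.multi_face_landmarks[0].landmark
        # Landmark 1 sits near the tip of the nose in MediaPipe's face mesh;
        # an animation rig could map points like this onto a puppet's head.
        nose = landmarks[1]
        print(f"nose (normalized): x={nose.x:.3f} y={nose.y:.3f}")

    cv2.imshow("webcam", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to stop
        break

capture.release()
cv2.destroyAllWindows()
face_mesh.close()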

How does it help?

Image of a character being rigged into a puppet, showing the mesh and body tags.

Cartoon animations currently do not have a large presence in online learning. This is mostly because they take a long time to create and not everyone has the resources to create them. Normally, character animation for cartoons requires drawing each frame or using a pose-to-pose process called keyframing. Innovative technology such as Character Animator greatly reduces the barrier to creating cartoon animations for online learning. Each motion of the face is recorded instantly and gives the character life by adding subtle movements to the face and head. The bulk of the work is completed early on: drawing, rigging, and adding triggers to the character, or in this case, the puppet. Once the puppet is set up to record, it is smooth sailing from there. All movements, audio, and facial expressions are recorded in one take, greatly reducing the amount of development time. However, Character Animator allows you to choose which aspects you want to record, so you can record the eye movements in one pass, then the eyebrows in another. This is helpful for the perfectionists out there who cannot seem to capture it all at once.

How does it work?

To create an animation using Character Animator, there are a handful of stages to complete. The first step is to draw the character in either Photoshop or Illustrator. Next, the graphics are imported into Character Animator and rigged into puppets to prepare for recording. This means the eyes, nose, mouth, etc. are tagged with their respective labels. During this time, you can also create keyboard triggers: animations such as arm movements, walk motions, and more, that the character performs when you press certain keys (sketched conceptually below). After the puppets are prepared, it is time to record. It does not have to be shot perfectly all at once; you can blend the best bits from different recordings into one masterpiece. The last step is to export the character’s recording and composite it into a story using video software such as Premiere Pro or After Effects. Once you get into the flow of facial Mo-Cap, you can start cranking out animations faster than ever before.
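Character Animator’s triggers are set up through its interface rather than through code, but the idea of binding keys to pre-made animations can be sketched generically. The snippet below is only a conceptual illustration; the key bindings, clip names, and play_clip callback are made up for the example and are not part of Character Animator.

# Conceptual sketch of keyboard "triggers": each key is bound to a pre-made
# animation clip on the rigged puppet. This is a generic illustration, not
# Character Animator's API (triggers are configured through its interface).
TRIGGERS = {
    "w": "wave_right_arm",
    "l": "walk_cycle",
    "s": "shrug",
}

def on_key_press(key, play_clip):
    """Look up the clip bound to a key and hand it to a playback callback."""
    clip = TRIGGERS.get(key)
    if clip is not None:
        play_clip(clip)

# Example: pressing "w" would start the wave animation.
on_key_press("w", play_clip=print)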


The video below gives a quick rundown of what it takes to set up a character and how to record it. At the end of the video, there is a sample of multiple characters in one scene.

What does the process look like?

 

Author: Zach Van Stone, Oregon State University Ecampus
