As any character animator will attest, one of the more tedious jobs in animation is lip sync. The task requires animators to scrub through a dialog track one frame at a time, picking out the phonemes or syllables of every word, then assigning the proper mouth shapes to match. Multiplied across a production’s worth of dialog, the job quickly becomes a grind. FaceFX is OC3 Entertainment’s solution for creating facial animation and lip sync directly from audio files, and it promises to save animators valuable time.

The software runs as a stand-alone application, with plug-ins that connect to the major 3D packages (Autodesk’s Maya, 3ds Max, Softimage, and MotionBuilder). The plug-ins are essentially file-format converters: they export models to the main FaceFX application, where the character’s facial and body motions are matched to the soundtrack, and once that work is complete, they bring the animation back into the desired package for finishing.
When preparing a character for export, the facial deformation can be set up in one of two ways: morph targets (blendshapes) can drive the face with shape animation, or bones can manipulate the surface of the face directly. Since these are the two most popular methods of rigging a face, most productions will have no trouble exporting their characters to FaceFX; a toy sketch below makes the distinction concrete.

The exported character is loaded into FaceFX, where the real meat of the character setup begins. The interface is tab-based, with each major task organized under its own tab: the animation is viewed in the preview tab, the audio track is analyzed in the phoneme tab, and so on. Surrounding the main tabs are a time slider and scene browsers.
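To make the two rigging approaches concrete, here is a minimal sketch of what each setup exposes to a lip-sync pass. The class, attribute, and bone names are invented for illustration; this is not FaceFX’s actual data model.

```python
# Illustrative only: not FaceFX's data model, just the two deformation styles.
from dataclasses import dataclass, field

@dataclass
class BlendshapeRig:
    """Morph-target rig: the face is driven by named shapes weighted 0.0-1.0."""
    weights: dict = field(default_factory=dict)

    def set_shape(self, name: str, weight: float) -> None:
        # Clamp so a shape can never over- or under-shoot its sculpted extreme.
        self.weights[name] = max(0.0, min(1.0, weight))

@dataclass
class BoneRig:
    """Bone-driven rig: joints such as the jaw are posed directly."""
    poses: dict = field(default_factory=dict)  # bone name -> (rot_x_deg, trans_y)

    def pose_bone(self, name: str, rot_x: float, trans_y: float) -> None:
        self.poses[name] = (rot_x, trans_y)

# Either style gives a lip-sync pass something to drive:
face = BlendshapeRig()
face.set_shape("OH", 0.8)                       # blend toward the "OH" target
jaw = BoneRig()
jaw.pose_bone("jaw", rot_x=12.0, trans_y=-0.3)  # or drop the jaw bone directly
print(face.weights, jaw.poses)
```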
For FaceFX to work, you need to match the facial shapes in the model to the phonemes the character might speak; an “OH” mouth shape, for example, must be assigned to the “OH” phoneme. The main interface for this is the graph tab, which handles the job through a node-based view much like Maya’s Hypergraph window. Phonemes can be mapped one-to-one to mouth positions, or a combiner node can blend multiple shapes into a single phoneme. To speed things along, FaceFX ships with a number of scripts that streamline the work.
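As a rough illustration of the difference between a one-to-one mapping and a combiner node, consider the following sketch. The phoneme labels and target names are hypothetical, and a real combiner node is a graph node with far more control than a weighted list.

```python
# Hypothetical sketch of the graph tab's two mapping styles. Target and
# phoneme names are invented; this only shows the blending idea.

# One-to-one: each phoneme drives a single mouth target at full weight.
direct_map = {
    "OH": "mouth_OH",
    "EE": "mouth_EE",
    "FV": "mouth_FV",
}

# Combiner-style: one phoneme blends several shapes with set weights.
combiner_map = {
    "W": [("mouth_pucker", 0.7), ("jaw_open", 0.3)],
}

def shapes_for_phoneme(phoneme):
    """Return the (target, weight) pairs a phoneme should drive."""
    if phoneme in combiner_map:
        return combiner_map[phoneme]
    if phoneme in direct_map:
        return [(direct_map[phoneme], 1.0)]
    return []  # unmapped phonemes leave the face at rest

print(shapes_for_phoneme("OH"))  # one-to-one: [('mouth_OH', 1.0)]
print(shapes_for_phoneme("W"))   # combiner: two weighted shapes
```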
In addition to facial animation, FaceFX can animate the head and body of the character. These gestures, such as head nods, blinks, and brow lifts, help add realism, and the events can be generated from specific triggers or in a pseudo-random fashion.
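The sketch below illustrates those two event styles under stated assumptions: trigger-driven nods fired at given times, and pseudo-randomly scattered blinks. The function, gesture names, and timing model are invented; FaceFX’s actual event system is considerably richer.

```python
# Hypothetical gesture-event sketch: nods fire on explicit triggers,
# blinks are scattered pseudo-randomly across the clip.
import random

def gesture_events(duration_s, stressed_times, seed=42):
    """Return a sorted list of (time_s, gesture) events for a clip."""
    rng = random.Random(seed)  # seeded, so "pseudo-random" is repeatable
    # Trigger-driven: a head nod on each stressed word in the dialog.
    events = [(t, "head_nod") for t in stressed_times]
    # Pseudo-random: a blink every 2-6 seconds until the clip ends.
    t = rng.uniform(2.0, 6.0)
    while t < duration_s:
        events.append((t, "blink"))
        t += rng.uniform(2.0, 6.0)
    return sorted(events)

for time_s, gesture in gesture_events(10.0, stressed_times=[1.2, 4.8]):
    print(f"{time_s:5.2f}s  {gesture}")
```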
None of this is something the average animator can use straight out of the box. Much of the setup requires scripting, so getting everything working on a new character probably calls for a technical director with some skill. Once a character is set up, however, the process is relatively straightforward and can be handled by almost anyone.

The actual track reading is done in the phoneme editor. An audio file of the voice is loaded to start the process. To help the voice recognition, FaceFX also asks for a text-based version of the audio track; this is not required, but it does help reduce errors. (The software currently can read seven languages: English, French, German, Italian, Spanish, Korean, and Japanese.) After that, the process moves very quickly: the audio track is analyzed, and the next thing you know, your character is talking. If the sync is off, the results can be tweaked in the phoneme editor, where the duration and timing of each phoneme can be adjusted.
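The following sketch shows the kind of adjustment the phoneme editor makes possible, assuming a simple representation of phonemes as (name, start, end) segments on the audio timeline. None of this reflects FaceFX’s actual format; it only demonstrates the edit.

```python
# Hypothetical phoneme-editor data: each phoneme is a (name, start_s,
# end_s) segment on the audio timeline.

phonemes = [
    ("HH", 0.00, 0.08),
    ("EH", 0.08, 0.20),
    ("L",  0.20, 0.28),
    ("OW", 0.28, 0.55),   # "hello" ends on a long OW
]

def retime(segments, index, new_start, new_end):
    """Re-time one phoneme, shifting later segments by the change in its end."""
    name, _, old_end = segments[index]
    segments[index] = (name, new_start, new_end)
    delta = new_end - old_end
    for i in range(index + 1, len(segments)):
        n, s, e = segments[i]
        segments[i] = (n, s + delta, e + delta)
    return segments

# Hold the "OW" a little longer so the mouth stays open with the audio.
for name, start, end in retime(phonemes, 3, 0.28, 0.62):
    print(f"{name:>2}: {start:.2f}-{end:.2f}s")
```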
The results sync up fairly well, and the mouth movements reflect the audio, but FaceFX does not bring the character to life. The animation it produces is fairly conservative. That isn’t a bad thing, though: FaceFX works mostly from the audio and phonemes to generate basic facial animation, which is only one of several tasks required to truly bring a character to life.

The rest of the work happens outside FaceFX, and this is where the plug-ins come in handy once again. The facial animation pass is exported to a 3D package, where it can be incorporated or built upon with any number of tools. Non-linear animation editors, such as those in Maya and MotionBuilder, are especially useful here because the FaceFX animation can be treated as just another motion track. In fact, FaceFX’s output should be a real benefit for productions using motion capture: the generated lip sync should lay over a motion-captured scene fairly seamlessly rather than fight the capture, and the captured performance supplies much of the additional life needed to make the scene pop. Keyframe-based productions can profit as well. While FaceFX doesn’t bring a character to life on its own, it does a great deal of the groundwork and lets the animator focus on what really matters. In that context, the software does about a third of the animator’s work.
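As a final sketch of the “just another motion track” idea, here is a toy version of layering a FaceFX channel over a mocap channel, the way a non-linear animation editor stacks layers. The per-frame lists and the simple additive blend are assumptions for illustration only.

```python
# Illustrative only: real NLE blending is per-channel and far more
# sophisticated than this additive mix.

def layer_tracks(base, face, face_weight=1.0):
    """Additively blend a facial-animation curve over a mocap curve.

    Both lists hold per-frame values for one channel (say, jaw rotation
    in degrees). A weight below 1.0 dials the FaceFX layer back wherever
    it fights the captured performance.
    """
    return [b + face_weight * f for b, f in zip(base, face)]

mocap_jaw = [0.0, 0.5, 1.0, 0.8, 0.2]    # captured base performance
facefx_jaw = [0.0, 4.0, 9.0, 6.0, 1.0]   # lip-sync pass from FaceFX
print(layer_tracks(mocap_jaw, facefx_jaw, face_weight=0.9))
```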