Frontier Rigging & Pipeline

Frontier was a project I worked on over a year ago, and I have long been planning to come back and write a breakdown of the workflow I used. I was in charge of rigging over 35 human characters and over a dozen spacecraft. In addition, I designed the pipeline that allowed artists all over Australia and New Zealand to work remotely, and created MEL scripts as they were required.

Rigging Workflow

I knew at the beginning of the project that, to rig a large number of human characters of different sizes, I would need an automated approach. The Setup Machine proved very effective; however, The Face Machine required a great deal more setup time, and its automatic weight painting did not seem to like the mesh flow of the FaceGen heads, so a great deal of manual weight painting was still required. Since FaceGen provided all the blendshapes we needed, I decided to create my own automated facial rigging system, as shown in the following video.

1) Clean Face

The FaceGen heads come with a very comprehensive set of blendshapes, which I figured would be the simplest way to create a large number of rigs with little room for error and without a lot of manual labor. The only problem was that many of the expressions had both eyes and the mouth clumped together as a single blendshape, which would be very limiting for our animators.

To solve this, I took advantage of the FaceGen heads' consistent UV layout to separate the expressions. I created a loop that went through every blendshape and activated it on the blendshape node. I then applied a predefined luminance map to the blendshape to limit the area of effect to one of the eyes or the mouth. For each section of the face (eyeR, eyeL, mouth), I duplicated the head and stored it for later to apply as a new blendshape.
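The splitting step above can be sketched in plain Python. This is a minimal illustration, not the actual script: `split_blendshape` and `region_masks` are hypothetical names, and in practice the per-vertex weights would be sampled from the luminance map via the shared UVs.

```python
def split_blendshape(base, target, region_masks):
    """Split one combined blendshape into per-region targets.

    base, target: lists of (x, y, z) vertex positions.
    region_masks: dict of region name -> per-vertex weights in [0, 1],
    assumed to be sampled from a predefined luminance map.
    """
    regions = {}
    for name, mask in region_masks.items():
        shape = []
        for (bx, by, bz), (tx, ty, tz), w in zip(base, target, mask):
            # Blend only the masked portion of the delta onto the base,
            # so each region moves independently of the others.
            shape.append((bx + (tx - bx) * w,
                          by + (ty - by) * w,
                          bz + (tz - bz) * w))
        regions[name] = shape
    return regions
```

With complementary masks for eyeR, eyeL and mouth, the three resulting heads add back up to the original combined expression.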

Finally, all that was left was to delete the useless extra geometry, then handle the naming, grouping, history deletion and optimizing.

2) Proxy Creator

Initially a script I created for another automated rigging tool I am working on, I decided to adapt it for Frontier and alter it to work with The Setup Machine. TSM already creates a proxy, but I find it quite useless for animating as it looks nothing like the character. The Proxy Creator takes the existing character model, chops it up and parents each piece to the joints. The result is a proxy that is almost as good as the bound model, except that it is extremely light on the system, which is great for those heavy scenes with multiple characters.

Limitation: Ideally I would like the user to just identify the joint hierarchy and have the script automatically chop the model up according to the joint positions. I am currently in the process of creating this script; the main hiccup is that when I extract pieces of geometry, Maya is not consistent about which piece is the extracted one and which is the original. So for the sake of this project I proceeded with a more manual approach, but I will get around to solving this limitation at some stage.
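The automatic chop I'm describing boils down to a nearest-joint assignment. Here is a rough sketch of that core idea, under the assumption that face centroids and joint positions have already been read from the scene (the names are hypothetical; the real tool would operate on Maya geometry):

```python
import math

def assign_faces_to_joints(face_centroids, joints):
    """Map each face index to the name of its closest joint.

    face_centroids: list of (x, y, z) face centres.
    joints: dict of joint name -> (x, y, z) world position.
    """
    assignment = {}
    for i, centroid in enumerate(face_centroids):
        best, best_dist = None, float("inf")
        for name, pos in joints.items():
            # Straight Euclidean distance; a production version would
            # also respect the joint hierarchy and symmetry.
            d = math.dist(centroid, pos)
            if d < best_dist:
                best, best_dist = name, d
        assignment[i] = best
    return assignment
```

Each group of faces could then be extracted and parented to its joint, exactly as the manual version of the Proxy Creator does.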

3) Face Rig

I created the initial "Define Face Parts" interface so that the Face Rig script could be used to add FaceGen heads to any rig. After the user identifies the geometry, neck joint and global controller, the script estimates the locations of the jaw bone and tongue from the dimensions of the head's bounding box.

After the rigger has corrected the placement of these placeholders, the script goes through and works its magic. The jaw is deformed using a cluster whose weights, as with the Clean Face weight painting, were obtained from a luminance map that works for all FaceGen heads because they share the same UVs. Similarly, the head was bound to the neck and weighted according to a predefined luminance map.

The joints of the tongue were estimated as a slight curve between the two locators and connected to the tongue controller using either direct connections or constraints. Both the tongue and the jaw were placed into offset groups that change their position according to the phoneme and expression shapes in use. This added a level of polish to the rig: when a phoneme or expression required the mouth to open or the tongue to move, the animator could see the effect on the controllers as well. All of this is controlled via MEL expressions.
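The "slight curve between the two locators" can be sketched as a linear interpolation from root to tip with a small bulge that peaks midway. This is an illustrative guess at the placement logic, not the script itself; `curve_height` is an assumed parameter.

```python
import math

def tongue_joint_positions(root, tip, count, curve_height=0.1):
    """Place `count` joints along a gentle arc between two locators."""
    positions = []
    for i in range(count):
        t = i / (count - 1)
        x = root[0] + (tip[0] - root[0]) * t
        y = root[1] + (tip[1] - root[1]) * t
        z = root[2] + (tip[2] - root[2]) * t
        # Sine bulge: zero at both locators, maximal at the midpoint.
        positions.append((x, y + math.sin(math.pi * t) * curve_height, z))
    return positions
```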

The rest of the script is fairly straightforward rigging, just performed automatically, with all controller positions calculated from the position of the geometry.

Pipeline

We used a combination of Open Pipeline for Maya and Dropbox so that everyone was able to:

  • navigate the large number of shots required
  • not have to worry about folder structure and naming convention
  • and, most importantly, always have the most up-to-date files, as every save was automatically uploaded.

There was the issue of using a large amount of the artists' personal bandwidth, but this was easily solved through Dropbox's "selective sync", which allowed artists to sync only the shots they were working on and the assets they required.

Another benefit of working out of a project on Dropbox was that I was able to create scripts and shelves and have them instantly available for all the animators to use. Which brings me to...

MEL Scripts

Aside from the scripts I created in the rigging process and the basic startup scripts that set the artists' file settings, there were a couple of scripts I created that were pretty neat.

createPlayblast

This automatically creates a playblast using predetermined settings, so that all artists' playblasts had the same resolution, file format and HUD elements, and were correctly named.
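The idea is simply one shared set of settings plus one naming rule. As a sketch (the values and names below are illustrative, not the project's actual settings), the script's job reduces to something like:

```python
# Illustrative shared settings; the real script passed equivalents to
# Maya's playblast command.
PLAYBLAST_SETTINGS = {
    "width": 1280,
    "height": 720,
    "format": "qt",
    "show_hud": True,
}

def playblast_filename(shot, artist, version):
    # One naming convention for everyone, e.g. sh010_jsmith_v003.mov
    return "{}_{}_v{:03d}.mov".format(shot, artist, version)
```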

audioAnimator

Building on Alin Sfetcu's as_audioAnimator, I made some modifications that allowed us to feed the audio track of dialogue into the jaw control of a character. I was able to achieve some pretty convincing results by making it more of a dynamic simulation than a direct connection: it added to or subtracted from the previous frame's rotation value based on the amplitude of the audio. This resulted in much smoother animation by preventing the rotation value from jumping too far in a single frame. I will post some of the results later.
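The smoothing idea can be sketched as a per-frame step limit: each frame, the jaw rotates toward the angle the amplitude asks for, but only by a capped amount. This is my own minimal illustration of the technique, not Alin Sfetcu's code; `max_angle` and `max_step` are assumed parameters.

```python
def jaw_rotation_curve(amplitudes, max_angle=30.0, max_step=5.0):
    """Turn per-frame amplitudes (0..1) into smoothed jaw rotations."""
    rotations = []
    rot = 0.0
    for amp in amplitudes:
        target = amp * max_angle
        delta = target - rot
        # Clamp the per-frame change so the jaw never snaps open or shut.
        delta = max(-max_step, min(max_step, delta))
        rot += delta
        rotations.append(rot)
    return rotations
```

Because the change per frame is clamped, a sudden spike in the audio opens the jaw over several frames rather than in one, which is where the smoother result comes from.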


I have not uploaded any of the scripts mentioned in this post onto Creative Crash because they served a specific purpose for my project and are not very flexible right now. If you are interested in using them yourself, or have a need for them with some modifications, please leave a comment or email me and I would be happy to make them available or modify them for your purposes.