Bake Custom Occlusion Map in Mental Ray

A few months back I created a game asset for Roundabout and was faced with the issue of baking an occlusion map using Mental Ray. It was surprisingly hard to find a decent reference for what I ended up doing, so I hope this helps somebody else trying to do the same thing.

mib_amb_occlusion or Render Settings Ambient Occlusion?

I generally avoid using the render settings controls for ambient occlusion, as I really like the control I get through the mib_amb_occlusion node. Unfortunately, the "Transfer Maps..." tool does not support this node, so the only option you are left with there is the Render Settings Ambient Occlusion option.

Transfer Maps Occlusion options versus the options in the mib_amb_occlusion node.

Using this technique for baking Mental Ray textures and shaders, however, we are able to bake the mib_amb_occlusion node.

Node tree for baking mib_amb_occlusion maps.

Notes:

  • Plug the occlusion node's outValue into "Additional Color" and make the diffuse black so that it acts like an incandescence shader. Otherwise scene lighting will affect your occlusion map (a scripted sketch of this setup is included below).
  • Use an mia_material_x shader and not a Maya shader like a lambert, as the Maya shader will cause tessellation artifacts (shown in the image below).

Occlusion baked using mia_material_x versus lambert
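For anyone who prefers to script it, here is a rough Python sketch of that network. It assumes the Mayatomr plug-in is loaded, and the attribute names (additional_color, samples, max_distance) may vary slightly between Maya versions; assigning the shader and running the bake are left out.

import maya.cmds as cmds

# Assumes the Mayatomr plug-in is loaded so the mental ray node types exist.
occ = cmds.shadingNode('mib_amb_occlusion', asTexture=True, name='bakeOcclusion')
mia = cmds.shadingNode('mia_material_x', asShader=True, name='bakeMaterial')

# Black diffuse and no reflectivity, so the occlusion behaves like incandescence
# and scene lighting cannot affect the baked map.
cmds.setAttr(mia + '.diffuse', 0, 0, 0, type='double3')
cmds.setAttr(mia + '.reflectivity', 0)

# Occlusion result into Additional Color.
cmds.connectAttr(occ + '.outValue', mia + '.additional_color', force=True)

# Tighten the occlusion settings as needed before baking.
cmds.setAttr(occ + '.samples', 64)
cmds.setAttr(occ + '.max_distance', 10)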

Here is an in-game screenshot of my game asset - The Legendary Ryan Davis

V-Ray to Mental Ray - Material Wrapper

For the last couple of years I have been using V-Ray for Maya, but I decided to switch back to Mental Ray and check out the changes they have made with Unified Sampling. For the most part, I love it! However, there are some tools, shaders and workflows that I have grown to love in V-Ray, and I am now looking at ways to accomplish the same things in Mental Ray.

The Material Wrapper is a matte shader that still emits light bounces like Final Gathering and appears in reflections.

The first thing I miss from V-Ray is the Material Wrapper, a matte shader that still catches shadows and reflections, similar to the Use Background shader. A big advantage it has, however, is that you can plug the object's existing shader into it for light bounces and reflections.

I still have not found an elegant solution that does this as well as V-Ray does, but there are a few options.

Contribution Passes

I could create contribution passes, which are one of the most powerful features Mental Ray has over V-Ray. But what if I don't want to spend render time calculating all the other elements seen on screen?

Multiple Objects

I could also create an instance of the object: apply a matte shader to one copy with all render attributes other than primary visibility turned off, and apply the original shader to the other copy with primary visibility turned off. But this can quickly become very messy and complicated, not to mention the issues involved in casting final gather points, where the shaded object has to be slightly larger than the matte object for it to work.

Matteshadow as Material Wrapper

This is the best solution I have found, and the one that most closely resembles the workflow I used with the Material Wrapper.

Zap's method of using the mip_matteshadow covers projecting an image through the camera onto geometry, but what if I want to render an object on its own while it is matted out, receiving light bounces and exchanging shadows & reflections with other 3D objects?

Here is the node tree of Zap's camera projection technique

Using a combination of Zap's workflow and the ideas in this tutorial for creating layered shaders with Mental Ray shaders, I have created a workflow that produces results similar to the Material Wrapper - albeit not perfect ones.

This is actually two node trees, showing a mip shader on the top and a Maya blinn on the bottom.
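Here is a rough Python sketch of how that network could be wired up. The mip_matteshadow attribute names are my best reading of the production shader and may differ between versions; the blinn simply stands in for whatever shader your object already uses.

import maya.cmds as cmds

# Assumes the Mayatomr plug-in is loaded and the mip_* production shaders are available.
blinn = cmds.shadingNode('blinn', asShader=True, name='objectShader')
matte = cmds.shadingNode('mip_matteshadow', asUtility=True, name='matteWrapper')

# Feed the object's existing shader into the matte shader's background slot,
# so light bounces and reflections in other objects still come from the "real" shader.
cmds.connectAttr(blinn + '.outColor', matte + '.background', force=True)

# The catch options layer shadows / reflections / indirect over the top (see Issue 1 below).
cmds.setAttr(matte + '.catch_shadows', 1)
cmds.setAttr(matte + '.catch_reflections', 1)
cmds.setAttr(matte + '.catch_indirect', 1)

# Ambient is on by default, so zero it out to better match the original image.
cmds.setAttr(matte + '.ambient', 0, 0, 0, type='double3')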

The resulting render before and after matting objects out.

Issue 1 - Shadows & Reflections

Attributes like Catch Shadows, Ambient colour, Catch Reflections and Catch Indirect will act like a layered shader over the top of your original shader. So while you will probably want to use these to create a more realistic image, it will affect your light bounces and reflections in other objects. Ambient colour is on by default, so remember to turn it off to better match the original image.

The matted render with its alpha channel and the settings for catching reflections, shadows and indirect light.

As you can see in the image above, there are subtle differences between the images where one is catching shadows and reflections and one is not. The shadows and reflections change the reflected image in the chrome ball and cause lighter indirect bounces onto the ground that aren't in the original image. These differences won't necessarily be an issue, but it's just something to be aware of if you are trying to recreate the original image in compositing.

Issue 2 - Final Gathering Error

If you look closely at the image below, you will notice a difference between the two images (other than the objects being matted out).

Final gathering points will only be calculated from the camera view, without taking into account the other angles from which objects may be seen in reflections.

As you can see in the image on the left, we can see an area below the white sphere that should be red from the light bouncing off of the ground.

This can be fixed by building an FG map from multiple camera angles, but unless the scene is static this is often not a practical solution. As with the first issue, this may not necessarily be a visible problem unless you have really strong reflections in your scene.

I hope you've found this useful. If you know of any other techniques I have not mentioned I would love to hear them!

Random Placement Script

This is a handy little Python script I created for randomly placing a number of objects in a user-specified area.

Download it from Creative Crash

It has a variety of options for specifying where the zero point for the placement of the objects is, but the most useful I find is the Locator option. This allows the user to interactively move a bounding box around their scene and visually determine the area in which objects will randomly be placed.

The most powerful feature of this tool is the "Stop Intersections" option. It takes the bounding box of each object and checks whether it is intersecting with the bounding boxes of all the other objects. If it is, the script picks a new random position.
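To give an idea of the core logic, here is a stripped-down sketch of the placement loop. The function and parameter names are just for illustration; the script on Creative Crash has far more options.

import random
import maya.cmds as cmds

def bbox_overlap(a, b):
    # Axis-aligned bounding box test on [xmin, ymin, zmin, xmax, ymax, zmax].
    return all(a[i] < b[i + 3] and b[i] < a[i + 3] for i in range(3))

def scatter(objects, area_min, area_max, max_tries=50):
    placed = []
    for obj in objects:
        for _ in range(max_tries):
            pos = [random.uniform(area_min[i], area_max[i]) for i in range(3)]
            cmds.xform(obj, worldSpace=True, translation=pos)
            bbox = cmds.exactWorldBoundingBox(obj)
            # "Stop Intersections": reject this position if it overlaps anything placed so far.
            if not any(bbox_overlap(bbox, other) for other in placed):
                placed.append(bbox)
                break

# Example: scatter ten cubes inside a 20 x 20 region on the ground plane.
cubes = [cmds.polyCube()[0] for _ in range(10)]
scatter(cubes, area_min=(-10, 0, -10), area_max=(10, 0, 10))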

Frontier Rigging & Pipeline

Frontier was a project I worked on over a year ago, and I have long been planning to come back and make a breakdown of the workflow I used. I was in charge of rigging over 35 human characters and over a dozen spacecraft. I also designed the pipeline that let artists all over Australia and New Zealand work remotely, and created MEL scripts as they were required.

Rigging Workflow

I knew at the beginning of the project that, to rig a large number of human characters of different sizes, I was going to need an automated approach. The Setup Machine proved to be very effective; however, The Face Machine required a great deal more setup time, and its automatic weight painting did not seem to like the mesh flow of the FaceGen heads, so it required a great deal of manual weight painting. Noticing that FaceGen provided all the blendshapes we needed, I decided to create my own automated facial rigging system, as shown in the following video.

1) Clean Face

The FaceGen heads come with a very comprehensive set of blendshapes, which I figured would be the simplest and easiest way to create a large number of rigs with little room for error and without a lot of manual labor. The only problem was that a lot of the expressions had both the eyes and the mouth clumped together in a single blendshape, which would be very limiting for our animators.

To solve this, I took advantage of the FaceGen heads' consistent UV layout to separate the expressions. I created a loop that went through every blendshape and activated it on the blendShape node. I then applied a predefined luminance map to the blendShape to limit the area of effect to either one of the eyes or the mouth. For each section of the face (eyeR, eyeL, mouth), I duplicated the head and stored it to apply as a new blendshape later.
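Here is a rough sketch of that loop. The function name and region masks are placeholders, and the masks are assumed to have already been sampled from the luminance maps, one 0-1 value per vertex in vertex-id order.

import maya.cmds as cmds

def split_blendshapes(bs_node, head, region_masks):
    # region_masks: {'mouth': [per-vertex weights], 'eyeL': [...], 'eyeR': [...]}
    targets = cmds.listAttr(bs_node + '.weight', multi=True) or []
    split_heads = []
    for region, mask in region_masks.items():
        # Paint the blendShape's per-vertex base weights from the region's luminance mask.
        for vtx, value in enumerate(mask):
            cmds.setAttr('{0}.inputTarget[0].baseWeights[{1}]'.format(bs_node, vtx), value)
        for target in targets:
            cmds.setAttr(bs_node + '.' + target, 1)            # activate this expression
            dup = cmds.duplicate(head, name='{0}_{1}'.format(target, region))[0]
            split_heads.append(dup)                            # store for the new blendShape
            cmds.setAttr(bs_node + '.' + target, 0)            # reset before the next one
    return split_heads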

Finally, all that was left was deleting all of the useless extra geometry, then naming, grouping, deleting history and optimizing.

2) Proxy Creator

The Proxy Creator started as a script for another automated rigging system I am working on; I decided to adapt it for Frontier and alter it to work with The Setup Machine. TSM already creates a proxy, but I find that proxy quite useless for animating as it looks nothing like the character. The Proxy Creator takes the existing character model, chops it up and parents each piece of the model to the joints. The result is a proxy that is almost as good as the bound model but extremely light on the system, which is great for heavy scenes with multiple characters.
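A simplified sketch of the idea, assuming the face selection for each joint has already been made (in the Frontier workflow that part was manual); the joint names are placeholders.

import maya.cmds as cmds

def build_proxy(model, joint_to_faces):
    # joint_to_faces maps each joint to the face ids of the model that should
    # follow it, e.g. {'L_elbow_jnt': [120, 121, ...]}.
    face_count = cmds.polyEvaluate(model, face=True)
    for joint, faces in joint_to_faces.items():
        piece = cmds.duplicate(model, name=joint + '_proxy')[0]
        keep = set(faces)
        # Delete every face that does not belong to this joint, leaving one light chunk.
        unwanted = ['{0}.f[{1}]'.format(piece, f) for f in range(face_count) if f not in keep]
        cmds.delete(unwanted)
        cmds.delete(piece, constructionHistory=True)
        # Parent the chunk straight to its joint so it follows the rig for free.
        cmds.parent(piece, joint)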

Limitation: Ideally I would like the user to just identify the joint hierarchy and have the script automatically chop the model up according to the joint positions. I am currently working on this script; the main hiccup is that when I extract pieces of geometry, it is not consistent which piece is the extracted one and which is the original. So for this project I proceeded with a more manual approach, but I will get around to solving this limitation at some stage.

3) Face Rig

I created the initial "Define Face Parts" interface so that the Face Rig script could be used to add FaceGen heads to any rig. Once the geometry, neck joint and global controller have been identified, the script estimates the location of the jaw bone and tongue from the dimensions of the head's bounding box, making generic guesses from there.
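Something along these lines; the bounding box fractions are arbitrary first guesses and the names are placeholders, since the rigger corrects the locators before the rest of the script runs.

import maya.cmds as cmds

def place_face_placeholders(head_geo):
    x_min, y_min, z_min, x_max, y_max, z_max = cmds.exactWorldBoundingBox(head_geo)
    width, height, depth = x_max - x_min, y_max - y_min, z_max - z_min

    jaw = cmds.spaceLocator(name='jaw_placeholder')[0]
    cmds.xform(jaw, worldSpace=True,
               translation=(x_min + width * 0.5,     # centred on X
                            y_min + height * 0.35,   # roughly a third of the way up
                            z_min + depth * 0.4))    # a little behind the face

    tongue = cmds.spaceLocator(name='tongue_placeholder')[0]
    cmds.xform(tongue, worldSpace=True,
               translation=(x_min + width * 0.5,
                            y_min + height * 0.3,
                            z_min + depth * 0.6))
    return jaw, tongue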

After the rigger has corrected the placement of these placeholders, the script then goes through and completes its magic. The jaw is deformed using a cluster whose weights, like the Clean Face weight painting, were obtained from a luminance map that works for all FaceGen heads because they share the same UVs. Similarly, the head was bound to the neck and weighted according to a predefined luminance map.

The joints of the tongue were estimated as a slight curve between the two locators and were connected to the tongue controller using either direct connections or constraints. Both the tongue and the jaw were placed into offset groups that change their position according to the phoneme expressions used. This added a level of polish to the rig: when a phoneme or expression required the mouth to open or the tongue to move, the animator would see the effect on the controllers as well. All of this is controlled via MEL expressions.

The rest of the script is made up of pretty straightforward rigging techniques, just done automatically, with all controller positions calculated from the position of the geometry.

Pipeline

We used a combination of Open Pipeline for Maya and Dropbox so that everyone was able to:

  • navigate the large number of shots required
  • not have to worry about folder structure and naming convention
  • and, most importantly, always have the most up-to-date files, as every save made was automatically uploaded.

There was the issue of using a large amount of the artists' personal bandwidth, but this was easily solved through Dropbox's "selective sync", which allowed artists to only sync the shots they were working on and the assets they required.

Another benefit of working out of a project on Dropbox was that I was able to create scripts and shelves and have them instantly available for all the animators to use. Which brings me to...

MEL Scripts

Aside from the scripts I created in the rigging process and the basic start-up scripts to set the artists' file settings, there were a couple of scripts I created that were pretty neat.

createPlayblast

This automatically creates a playblast using predetermined settings, so that every artist's playblast had the same resolution, file format and HUD elements and was correctly named.
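A cut-down sketch of the idea; the path, resolution and codec here are examples rather than the settings we actually used.

import maya.cmds as cmds

def create_playblast(shot_name):
    movie = 'playblasts/{0}.mov'.format(shot_name)
    cmds.playblast(filename=movie,
                   format='qt',            # QuickTime; swap for 'avi' where QT is unavailable
                   compression='H.264',
                   widthHeight=(1280, 720),
                   percent=100,
                   quality=90,
                   showOrnaments=True,     # keep the HUD elements visible
                   clearCache=True,
                   viewer=True,
                   forceOverwrite=True)
    return movie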

audioAnimator

Building on Alin Sfetcu's as_audioAnimator, I made some modifications that allowed us to feed the audio track of dialogue into the jaw control of a character. I was able to achieve some pretty convincing results by making it more of a dynamic simulation than a direct connection: it added or subtracted from the previous frame's rotation value based on the amplitude of the audio. This resulted in much smoother animation by preventing the rotation value from jumping too far in a single frame. I will post some of the results from this later.
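The core of the modification looked something like this. The amplitude sampling itself comes from as_audioAnimator, so here I just assume a list of per-frame values; the control and attribute names are placeholders.

import maya.cmds as cmds

def key_jaw_from_audio(jaw_ctrl, amplitudes, start_frame=1, open_angle=-25.0, max_step=3.0):
    # amplitudes: one normalised 0-1 value per frame from the dialogue track.
    # Each frame the rotation moves toward the amplitude's target angle, but never
    # by more than max_step degrees, which keeps the motion smooth.
    rotation = 0.0
    for i, amp in enumerate(amplitudes):
        target = amp * open_angle
        step = max(-max_step, min(max_step, target - rotation))
        rotation += step
        cmds.setKeyframe(jaw_ctrl, attribute='rotateZ', time=start_frame + i, value=rotation)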

 

I have not uploaded any of the scripts mentioned in this post onto Creative Crash because they served a specific purpose for my project and are not very flexible right now. If you are interested in using them yourself, or have a need for them with some modifications, please leave a comment or email me and I would be happy to make them available or modify them for your purposes.

 

Image Sequence Blendshape Baker

With lots of experience MEL scripting, I decided to bite the bullet and give Python scripting a fair go. It turned out to be a lot easier than I expected! The syntax was straightforward and for the most part it just involved converting MEL script into Python.

So I thought I'd test my newly obtained Python skills and create a GUI for a scripted technique that I've used on many projects. I've called it the "Image Sequence Blendshape Baker" (I might come up with a better name later).

Download it from Creative Crash

Basically, how it works is it bakes an animated texture to the UVs of an object as an image sequence, with the option of using this sequence as a luminance matte to animate the strength of a blendshape on an object with the same UVs. For a better look at how it works, check out the video below.

This technique is great for situations where you might otherwise resort to vertex animation: instead you can have a static blendshape object that is easy to alter, and animate its strength on the target object through a series of projections.
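The baking half of the tool boils down to something like this sketch, using Maya's convertSolidTx command once per frame; the names and output path are examples only.

import maya.cmds as cmds

def bake_texture_sequence(texture_attr, mesh, out_dir, start, end, resolution=512):
    # texture_attr is the animated texture's output, e.g. 'animatedFile1.outColor'.
    baked = []
    for frame in range(start, end + 1):
        cmds.currentTime(frame, edit=True)
        image = '{0}/baked.{1:04d}.tif'.format(out_dir, frame)
        # Bake the texture to the mesh's UVs for this frame.
        cmds.convertSolidTx(texture_attr, mesh,
                            fileImageName=image,
                            fileFormat='tif',
                            resolutionX=resolution,
                            resolutionY=resolution,
                            antiAlias=True)
        baked.append(image)
    return baked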

In the past I used it on Ghost Rider: Spirit of Vengeance to animate the skull appearing through his face. After matchmoving geometry to his face, I created a blendshape object to morph the head into matching the shape of the skull beneath. Using the face's UVs, I was able to animate a matte in After Effects to decide where, when and how much the face deformed to show the skull. I then baked the image sequence onto the blendshape to get the exact deformation I wanted.

Another example of where it could be used is for cracks in walls that need to be animated on. It gives you complete control to modify the shape of the cracks on the fly without affecting the animation.

One interesting thing I noticed in my Python travels was that there are some commands that cannot be executed simply by adding "cmds." to the start of them. Such as:

import maya.cmds as cmds
cmds.KeyBlendShapeTargetsWeight()  # this raised an error for me - the runtime command is not exposed through cmds

As a workaround, I had to execute it purely as MEL through the eval command:

import maya.mel
maya.mel.eval("KeyBlendShapeTargetsWeight;")

I'd love to know in what ways you find this technique useful, and of course any queries, bugs or suggestions as well!