Chapter 9: Compositing

Introduction

As a lighting artist, I know this is a controversial and sensitive subject. Compositing on a feature CG animation film is quite different from what is usually called compositing in VFX or straight-up live action.

In VFX, compositing is compulsory since you have to use a filmed plate to light and comp the shots. But what about animated features ? What would be a proper use of the compositing tool ?

Please keep in mind that I am a lighting artist. I do open Nuke daily and understand how the software works. But this does not make me a compositing artist per se. And remember that I am only talking about PBR cartoon movies made for Hollywood.

Why do compositing ?

I am not trying to start a controversy here. But one may ask this question : since rendering techniques have improved so much with PBR and path tracing, why would you want to tweak your renders in compositing software ? How do we define compositing in the context of a full CG animation film ? I personally see two reasons. The natural dividing line would be :

  1. Everything that is not achievable by the render engine.
  2. All the things we do not have the time or money to do.

It is all about the desired flexibility later down the pipe. The rule of thumb is : the more you leave to compensate for in comp, the cheaper the movie will tend to look. Please note the use of the verb “compensate“. If your art direction is based on and planned around compositing, like on Spider-Verse, it may look perfectly fine.

I have seen both cultures : some studios just love heavy compositing work to avoid any extra rendering and some VFX supervisors just want to nail it in 3d to avoid any compositing tweaks.

There is probably some middle ground to find here.

What we “cannot” do in 3d

  1. Artistic direction. Feast and Spider-Man made massive use of Nuke to achieve their amazing looks.
  2. Grading. Exposure adjustments, gamma and saturation are generally done in compositing software.
  3. Masking. Not an ideal solution because of stereo compositing, but it can be a lifesaver too.
  4. Projections. 2d projections on a card or a sphere can be really useful and accurate. A quick setup in Nuke can work miracles.
  5. International versions. We avoid rendering the shot for each country if some writing is present. We’d rather use an ST map setup in compositing.
  6. Transitions. Some stylized effects between shots may be requested and should be done in compositing.
  7. Continuity. It is easy to lose track of it on a feature, and any colorimetric discrepancies should be fixed in compositing. Or during the Digital Intermediate (DI).

What we “SHOULD not” do in 3d

  1. Animated lights. Any light source whose exposure or color is animated during the shot should be kept as a separate LPE, so the animation can be done in comp. I can guarantee this : there will be animation notes until final delivery. It is a safety net. There’s no contest here.
  2. Delivery. If you work in an ACES workflow, you may want to deliver in ACES2065-1 (AP0), and rendering directly to AP0 from the render engine is not recommended : do the conversion in comp instead.
  3. Merging layers. If you are rendering separated passes, like beauty, haze and volumetric, you’ll obviously need compositing software to merge them back together, even if Katana or Guerilla can do it too (see the sketch after this list).
  4. Expensive processes. In some render engines like Arnold, depth-of-field (DOF) or denoising can be expensive and it may be cheaper to run them in comp, although it is debatable.
  5. Lens effects. Chromatic aberration, lens distortion, lens breathing, glow and flares are (almost) never rendered in 3d.
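To make point 3 concrete, here is a minimal sketch of what merging separated passes boils down to in scene-linear, using plain NumPy rather than Nuke (pass names and shapes are hypothetical). Whether a pass should be added (“plus“) or layered (“over“) depends on how it was rendered : with a holdout, an additive merge is enough; with its own alpha, you need the premultiplied over.

```python
import numpy as np

# Hypothetical scene-linear passes as RGBA float arrays (height, width, 4).
beauty = np.zeros((1080, 1920, 4), dtype=np.float32)
volumetric = np.zeros((1080, 1920, 4), dtype=np.float32)

def plus_merge(a, b):
    """Additive merge : in scene-linear, light contributions simply sum."""
    return a + b

def over_merge(fg, bg):
    """Premultiplied 'over' : fg + bg * (1 - fg_alpha)."""
    fg_alpha = fg[..., 3:4]
    return fg + bg * (1.0 - fg_alpha)

# A volumetric pass rendered with the beauty held out can simply be added back;
# a pass carrying its own alpha would instead go over the beauty.
comp = plus_merge(beauty, volumetric)
```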

All these choices really depend on budget, time and culture.

I would like to add that 3d DOF can be a time and money saver. If the DOF is set in layout, like on Lego Batman, it can save tons of work for other departments. You don’t spend any time or energy in set dressing or animation when elements are out-of-focus.

Stylistic choices

This is probably the best reason to use any kind of compositing in a full CG movie. Granted, with LUTs, modern render engines can do pretty much anything.

In modern digital times, many people think of the compositing phase as the equivalent of what developing an image used to be, back in the film days when chemicals were used to reveal the image on the negative.

Film developing

The beauty of this process was that the technique of film development was so refined you could have your own recipe to make the final image stylized and expressive in your own personal way.

For example on Seven (Director : David Fincher, DP : Darius Khondji), silver was retained in the print to make the black tones of the image extra deep and gritty. This process is known as bleach bypass.

Now if we want to draw parallels between traditional filmmaking and computer graphics, we have on one hand the shooting, which would be the rendering, and on the other hand the development, which would be the compositing. There is an art form in both, slightly different but pretty much the same in their fundamentals.

Heavy looks

Now, while adjusting color and contrast is quite a common compositing task, it sits on the subtle end of the image-stylizing spectrum. There are also more extreme effects which can only be achieved in comp.

For example, 300 (Director : Zack Snyder, Art director : Grant Freckelton) or Sin City (Director and Cinematographer : Robert Rodriguez). And of course Spider-Man : Into the Spider-Verse (Directors : Bob Persichetti, Peter Ramsey and Rodney Rothman).

I haven’t worked on any stylized movie like the ones listed above but for a PBR cartoon movie, my best results have always been when my render view was as close as possible to the final result.

Raw lighting ?

Many supervisors like the lighting to be less contrasted than the final result. It can help with sampling and render times, and it adds some flexibility later in compositing. It is a production choice and it depends on your look. Compositing has the great advantage of being flexible, and 3D rendering can be expensive. So rather than being dogmatic, we should adapt to our production constraints, especially if the director is rather uncertain about the look.

As a lighting artist I do not want to guess what the final result will be. What I see in my Render View is what I get. This allows me to push my lighting as far as I can. Only rule : make sure to have enough range in your render.

I clearly remember Pascal or Craig doing the Quality Check (QC) of our renders. They would increase and decrease the exposure by five stops or even play with the gamma to check the range. That is the most important thing : to have enough range !

Many lighting artists make this comparison with photography : we shoot/render in raw/flat and we give the filmic look in post. That is not my personal belief. Nonetheless I do agree that each stage of a movie production should be an improvement of the previous one. And it is definitely the case for compositing.

Production examples

Depth usage on Lego Batman

Gotham City in flames just looks great in the following shots. Most of its visual design was achieved in compositing. It would have been pretty much impossible to achieve in 3d. Maybe with a volumetric box and some incandescence, but render times would have been crazy.

Gotham City just looks amazing.

Animated lights

Here is a very interesting example of LPE animation. Grant wanted to play up the silhouette at the end of the shot to accentuate the line of dialog “Batman works alone“. The lighting artist actually turned off, by the end of the shot, the warm key light seen at its beginning, to emphasize the silhouette. Very bold choice ! I have never seen anything quite like it !

Silhouette and complementary scheme.

Skin despill on Playmobil

Once again we are back to the green color issue. On Playmobil, we had a sequence with plenty of grass, which was bouncing a great amount of green onto the characters’ faces. We thought the best decision was to fix the skin tones like in VFX.

The use of compositing was minimal on this movie.

This technique, called green screen despill, is very common in VFX because of green screens. In a live-action film, you not only have to remove the green screen from the plate but also have to take care of the green bleeding onto the skin tones. We generally use a HueCorrect node for this task.
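A HueCorrect setup can take many shapes, but the core despill idea is simple : wherever green exceeds what the red and blue channels justify, pull it back. Here is a minimal sketch of one classic recipe (the average-based limit is a common choice, not necessarily the exact setup we used on Playmobil), which in practice would be restricted to the skin with a matte :

```python
import numpy as np

def despill_green(rgb, amount=1.0):
    """Limit the green channel to the average of red and blue.

    rgb    : float array of shape (..., 3), scene-linear.
    amount : 0.0 keeps the original image, 1.0 removes all the excess green.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    limit = (r + b) * 0.5                   # one classic despill limit
    spill = np.maximum(g - limit, 0.0)      # green in excess of that limit
    out = rgb.copy()
    out[..., 1] = g - spill * amount
    return out

# Example : a skin tone polluted by green bounce from the grass.
pixel = np.array([[0.35, 0.42, 0.18]])      # hypothetical values
print(despill_green(pixel))                 # green pulled back towards (r + b) / 2
```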

Card projection on Lego Batman

One of my last shots on Lego Batman. We were in crunch time and had to deliver renders pretty fast. When I was lighting it, the NO MORE CRIME sequence was not ready yet.

Buildings in the far background were also placed in Nuke.

The best decision was to integrate it in compositing. I had to place the card carefully to match the screen geometry location for stereo. I used the point visualization in Nuke to place it accurately. Otherwise I would just have been guessing.

ST map on The Secret Life of Pets

Mapping screens is a very common task in 3d, and most of the time image sequences are a pain to render in 3d. You generally have to :

  • Find the correct formula to make it work : “_%d”, “_$03f”, “_####”… You never really know.
  • Offset the sequence as it never starts at the right frame.
  • Use the right color space : is it in linear ? Is the LUT baked in ?
  • If the sequence gets updated, you have to render again. Painful.

A proper solution is to do all of this in Nuke with an ST map and a UV pass, like this Times Square billboard shot from The Secret Life of Pets (Director : Chris Renaud).
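For reference, here is a minimal sketch of what the ST map idea boils down to : the render stores, for each screen pixel, the (s, t) texture coordinate it sees, and the comp simply looks up the latest billboard image at that coordinate. This nearest-neighbour version is only illustrative; a real STMap node does proper filtered sampling.

```python
import numpy as np

def apply_st_map(st_map, texture):
    """Re-texture a screen in comp from a rendered UV/ST pass.

    st_map  : (H, W, 2) float array, UVs in [0, 1] for every screen pixel.
    texture : (h, w, 3) float array, the billboard image to map.
    """
    h, w = texture.shape[:2]
    # Convert UVs to integer pixel indices (nearest neighbour for brevity).
    x = np.clip((st_map[..., 0] * (w - 1)).round().astype(int), 0, w - 1)
    y = np.clip(((1.0 - st_map[..., 1]) * (h - 1)).round().astype(int), 0, h - 1)
    return texture[y, x]

# If the billboard content is updated, only this 2d lookup needs to be redone :
# no offset formulas, no color space guessing, no 3d re-render.
```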

Heat distortion on Minions

Heat distortion is actually quite common. It happens at least on a couple of sequences per movie : it can be motivated by a desert, some lava or a motor engine.

We did the heat distortion in Nuke on this sequence of Minions (Director : Pierre Coffin). It would have been impossible to achieve this effect in lighting, and it really adds something to the sequence I think.

There are some great tutorials out there for you to check.

What we do not have time to do in 3d

  1. If you cannot re-render. That is probably the main reason to do compositing. Rendering takes time and you may not be able to afford more than a couple of iterations. Last-minute notes during crunch time should be done in Nuke.
  2. Quick tests. Rendering can be very long, even interactively. A process I have seen since Planet 51 (Director : Jorge Blanco, Lighting : Barbara Meyers) is to render the lights separately and play with them in Nuke. Animation aesthetics are usually stylized, and as such they require quite some exploring until the right look is found. For quick turnaround and several iterations of experimental looks, the compositing software is an invaluable tool.
  3. Uncertain directors. Compositing gives flexibility with clients and directors who are indecisive and it may just save your production.
  4. Production constraints. I really want to emphasize this : comp can save your show since it is really fast to do corrections.

Production examples

Nuke projection on Lego Batman

I had to tackle a couple of last-minute notes in compositing on the following shot :

  • Smoke at the beginning of the shot was shifted towards blue in Nuke for continuity purposes.
  • At the end of the shot, the sun and its reflection in the water were also added in compositing.

Two render layers in this shot : beauty and volumetric.

It would have been difficult and time-consuming to achieve these effects in 3d, and the shot was already rendered in stereo anyway. Sometimes the shorter path is the right one. Just deliver !

Night shift on the Minions movie

I have picked our next example from the Minions movie (Director : Pierre Coffin, Lighting : Nicolas Brack). We lit the following night sequence quite realistically, using warm street lights as our main source. Unfortunately, a “change“ in art direction occurred after the whole sequence had already been rendered. A blue grade was applied to get closer to the original intention.


This was my last shot on Minions. It got modified while I was on holiday.

Based on my memories, I have tried to revert the shot to its original state, with the warmer street lights. For the record, the color keys from Clément Griselain were indeed blue.

Nuke scripts

Nuke scripts on a full CG PBR movie do NOT have to be complicated. There is a strong belief that complicated setups are just better, as if they had been made by geniuses or comp masters. And for some reason, these scripts just impress people. I personally think the opposite : I am much more impressed by simple setups. So please, do NOT make your Nuke script look like this :


Maybe the artist was in a rush and did not have time to clean his script. I am not here to judge.

On many shows, I have opened Nuke scripts that looked like spider webs. I really felt it was a complete waste of time and I sometimes wondered if the artist knew what they were doing. Many times I had to clean the script myself and ended up turning off 90% of the nodes. In my experience, it has ALWAYS looked better after the cleaning. Some artists just add color correctors one after another without checking. Sometimes these nodes simply cancel each other out. It is crazy.

Alex Fry, compositing supervisor at Animal Logic and probably one of the best artists and technicians I have ever met, once told me about compositing : “The more you cheat, the more it will feel cheated.”

I completely agree with him.

Compositing good practices

Linear Compositing

Just like rendering, compositing should be done in linear. Cinematic Color explains the best practices pretty well :

The benefits of scene-linear compositing are numerous; similar to the benefits found in rendering, shading and lighting. All operations which blend energy with spatially neighboring pixels (motion blur, defocus, image distortion, resizing, etc) have more physically plausible (aka realistic) results by default. Anti-aliasing works better, light mixing preserves the appearance of the original renders and most importantly, even simple compositing operations such as “over” produce more realistic results, particularly on semi-transparent elements (hair, volumetrics, fx elements, etc).
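A tiny numeric sketch of that last point : the same “over“, computed once in scene-linear and once on display-encoded values, gives two different results on a semi-transparent element over a bright background. The 1/2.2 gamma below is just a stand-in for a real display transform.

```python
def over(fg_premult, fg_alpha, bg):
    """Premultiplied 'over' on single float values."""
    return fg_premult + bg * (1.0 - fg_alpha)

encode = lambda x: x ** (1.0 / 2.2)   # stand-in display transform

bg_linear = 4.0                       # bright scene-linear background (above 1.0)
fg_linear = 0.18                      # mid-grey element
alpha = 0.5                           # 50% transparent

# Correct : comp in scene-linear, encode for display afterwards.
in_linear = encode(over(fg_linear * alpha, alpha, bg_linear))

# Wrong : encode both elements first, then comp on the encoded values.
in_display = over(encode(fg_linear) * alpha, alpha, encode(bg_linear))

print(in_linear, in_display)          # ~1.40 vs ~1.17 : the blend has shifted
```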

Grading and Log

I clearly remember that it was forbidden to use the Nuke saturation node on Lego Batman. We had our own grading node and I was not sure why. After reading Cinematic Color and this brilliant post on the acescentral forum, I finally understood the reason : some grading operations behave better with a logarithmic transfer function.

It is absolutely essential that good grading practices inherited from colorists make their way to animation studios. I cannot explain the reason any better than Daniele Siragusano, so let’s just quote him :

The ideal space for an operation depends on what you want to simulate. If you are after modelling physical phenomena, then some sort of scene linear is probably the right domain. If you want to model perceptual phenomena, then typically a perceptual space (like a quasi-log or opponent space) is the right thing.

You may have noticed that Resolve does this by default (in Project Settings -> Color Management -> Color science) but Nuke doesn’t. It is probably because most compositing operations, such as merging layers, depth-of-field and lens effects, should be done in “linear“.

Here is an example of grading on a linear exr from Cloudy with a chance of meatballs (Directors : Phil Lord and Christopher Miller) :

Image gallery : the original frame from Cloudy, the same saturation boost applied in linear and in log, the corresponding vectorscopes and the Nuke node graphs.

Saturating with a logarithmic transfer function is quite common for colorists.
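Here is a minimal sketch of what “saturating in log“ means in practice. The log shaper below is a simple stand-in, not the actual ACEScct formula, and the Rec.709 luma weights are just one common choice; the point is only to show that the same saturation operator behaves differently depending on the domain it runs in.

```python
import numpy as np

LUMA = np.array([0.2126, 0.7152, 0.0722])   # Rec.709 weights

def lin_to_log(x, offset=0.01):
    """Very simple log shaper, standing in for ACEScct or a camera log curve."""
    return np.log2(np.maximum(x, 0.0) + offset)

def log_to_lin(y, offset=0.01):
    return np.exp2(y) - offset

def saturate(rgb, amount):
    """Push each pixel away from its weighted average : > 1 saturates, < 1 desaturates."""
    pivot = np.sum(rgb * LUMA, axis=-1, keepdims=True)
    return pivot + (rgb - pivot) * amount

pixel = np.array([[2.5, 0.4, 0.1]])          # a saturated, scene-linear red

print(saturate(pixel, 1.5))                  # in linear : blue goes negative
print(log_to_lin(saturate(lin_to_log(pixel), 1.5)))  # around a log encoding : stays positive
```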

The main issue we are facing here is that Nuke is not a grading software, contrary to Resolve or Baselight. So we are doing things here that go a bit against its nature. Something worth mentioning is that the current trend is towards color-space-aware tools, as in Resolve 17.

Which working space ?

The very notion of “working space” is slowly becoming obsolete, which I find fascinating. I can only quote Daniele Siragusano with this very interesting comment :

The same is true for a unified working space. Why should we do all operations in ACEScct ? Maybe I want to do a CAT in LMS, a photoshop blend mode in another space and then a saturation operation in a CAM-ish space. In the mid-term future, the concept of a working space will be obsolete.

So, we may certainly pick the radiometric domain to perform an exposure adjustment, another domain to perform a “sharpen” and some other one to add or remove saturation. As Daniele explained above, it really depends on what you want to achieve. Baselight has eight different ways to modify saturation and I agree that saturating in ACEScct may not be ideal in some cases, especially with red tones.

As a friendly reminder, I would list these three pieces of advice :

  • As always, it is best to try for yourself and see what suits your needs best.
  • You should not “force yourself” into a grading workflow because of what I wrote.
  • The important thing is to be comfortable with your own workflow and understand the ins and outs.

Glow in Nuke

It was actually the same thing with the Nuke glow on Lego Batman : it was completely forbidden to use the native glow from Nuke. It just looks cheap and wrong.

Image gallery : the original frame from Cloudy, Nuke’s native glow and a custom lens node.

Using Nuke’s native glow should be forbidden. Really.

If you develop a library for your lens effects, vignettes and grading nodes, and make them available through gizmos, you will achieve consistency.

Exposure Control

Exposure is not something to take lightly. It has a huge impact on our renders and on post-process operations such as anti-aliasing, denoising or glow. Here is a list of tricks to test your exposure range.

Exposure control by Lens effect

Use a “Gotham Lens Node” on your render. Have you read this brilliant article about the Gotham Procedural Lens Flares ? I had the chance to use this node designed by Alex Fry on Lego Batman and it is the BEST one I have ever seen.

What is a lens node supposed to do ? It should reproduce most of what a real camera lens does : distortion, chromatic aberration, glow and flares.

On Lego Batman, I never tweaked nor modified the node. The Lens Node did not have any exposed parameter anyway : it was just a node you would put at the end of your Nuke tree and connect your shot’s camera to it.

If my shot did not have enough glow or flare, it meant that there was not enough energy in my lighting. Basically we used the node as an exposure and energy check. I personally think this is genius.

Don’t arbitrarily tweak the glow, tweak your lights’ exposure instead. It does not make sense to tweak the intensity, color or saturation of a glow on every single shot of a movie. It is pretty much incorrect and very time-consuming.

Glow should never be applied through a mask. A glow and other camera lens effects are applied to the whole image, like in real life : all parts of Gotham Lens operate linearly on all values in the image, without any keys or clips. I think we used the Gotham Lens Effect without any tweaking on 90% of the shots of Lego Batman. We only tweaked the node for storytelling purposes.

Lego Batman example

It is a win-win for everybody : supervisors are happy because they have a proper way to check the render exposure, and artists are happy because they do not fiddle around with their glow. If you are short on time (and we always are !), lens nodes will guarantee a correct amount of glow on your shots in no time. I cannot resist the urge to share with you this video from fxguide :


I actually wrote to Alex to ask him a bit about the design of this node. I was eager to know if there was a scientific approach behind it. Can you calculate the amount of glow for a specific pixel value or is it mostly a visual approach ? Here is his answer :

Alex Fry : Just a lot of tweaking. Through first few weeks of it being used in production I was in dailies, paying attention to any of the shot comments that had anything to do with the lens, and tweaking values till I found the middle ground that produced the least notes. There were no if/or conditions, or anything driven by thresholds, it was all completely linear. 
After a while the balance was found, and it generally didn’t need to be touched by artists. (There were still some exceptions though).

Exposure and clamping example

Let’s take a simple example :

Image gallery : squares with no effect, with a lens effect applied, and with the exposure brought down.

I generally use simple examples for clarity.

This is why clamping our renders can be an issue : you lose the ratio between red, green and blue. So instead of having a nice warm blooming effect, the clamping makes it desaturated.
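A quick numeric illustration, with made-up values for a warm highlight :

```python
import numpy as np

highlight = np.array([8.0, 4.0, 2.0])     # warm scene-linear highlight (R > G > B)
clamped = np.clip(highlight, 0.0, 1.0)    # -> [1.0, 1.0, 1.0]

print(highlight / highlight.max())        # [1.0, 0.5, 0.25] : the warm ratio
print(clamped / clamped.max())            # [1.0, 1.0, 1.0]  : pure white, ratio gone
# Any glow computed from the clamped values will bloom white instead of warm.
```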

One last thing about the lens node : it is not available, as it is the property of Animal Logic. But you can totally build your own. Alex gives us some clues in the article : “[…] using raw Nuke nodes, really just an exotic combination of Convolves, Blurs, Transforms and Vector warps.”

We had some issues doing lens distortion in 3d on Lego Batman but it got fixed during production by the developers. And I recently read this paper from Animal about 3d lens flares on Lego 2 ! Mind-blowing !

Exposure control by DOF

There is another way to control the energy of your render, and that is depth-of-field. We noticed it on Playmobil and it is confirmed by Jeremy Selan in Cinematic Color.

Image gallery : the bridge example from Cinematic Color and depth-of-field tests on spheres in Guerilla.

Great example from Cinematic Color.

Otherwise, you can also :

  • Play with the exposure in Nuke or RV (see the sketch after this list). What happens if you decrease the exposure by 5 stops ? What is the brightest pixel of your render ?
  • Do not hesitate to look at the luminance channel of your render. Color definitely messes with our perception. Just get rid of it !
  • I also sometimes squint my eyes. It helps to better see the contrast.
  • Or look at your shot in thumbnail mode. A smaller scale will help to focus on the bigger picture.
  • And finally you may flip your render. Desperate times call for desperate measures.
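For the first two checks, the math is trivial but worth spelling out : a change of n stops is a multiplication by 2^n in scene-linear, and the luminance channel is just a weighted sum of R, G and B (Rec.709 weights below, pixel values made up).

```python
import numpy as np

def adjust_exposure(rgb, stops):
    """n stops up or down is a plain multiplication by 2**n in scene-linear."""
    return rgb * (2.0 ** stops)

def luminance(rgb):
    """Rec.709 luminance : removes the hue so only the values remain."""
    return np.sum(rgb * np.array([0.2126, 0.7152, 0.0722]), axis=-1)

render = np.array([[12.0, 6.0, 1.5]])     # hypothetical bright pixel
print(adjust_exposure(render, -5))        # 5 stops down : is there still detail ?
print(luminance(render))                  # the value, stripped of its color
print(render.max())                       # the brightest channel in the frame
```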

The only rule

I have been thinking a lot about the relationship between lighting and compositing and their responsibilities. The only rule I could come up with is : lighting and compositing must go in the same direction, not fight against each other. You need to make sure that your lighting and your compositing are CONSISTENT. Don’t make your characters darker in Guerilla only to make them brighter in Nuke. Rolling the values from your Nuke script back into your render engine will always help.

On Lego Batman, most of my compositing scripts would look like this : volumetric over Beauty + Lens effect ! I tried really hard to nail most of my look in CG.

Barbara Meyers, lighting supervisor on Planet 51, used to tell me : “You should push your lighting as far as you can. Go as close as you can to the final result in lighting”.

Funnily enough, ten years later, I did the exact same thing on Lego Batman. From Maya directly to DI (on a couple of shots) !

It’s kind of a challenge, really.

You can actually achieve that when you have a great render engine, some good-looking assets and a solid validation process. Fortunately, we had these three things on Lego Batman. Without Glimpse, Grant and Craig, it would not have been possible to do it this way.

I remember that our TD, Manuel Macha, told me one day : “You use Glimpse as it was designed to.” So I guess Lego Batman was meant for this kind of process, and I am not saying this approach should be applied blindly to EVERY production. But it is worth thinking about it and asking ourselves : what do I want to do in compositing ?

Digital Intermediate (DI)

I really want to make it clear : Lego Batman was done with a big budget, surfacing was close to perfection (having only one plastic shader may help), Glimpse was just fantastic in its look and the artistic direction was consistent (no last-minute changes) ! This happens very rarely. It would be a mistake to apply this process to a small-budget production with artistic uncertainties.

Digital intermediate is a motion picture finishing process which classically involves digitizing a motion picture and manipulating the color and other image characteristics.

Animal Logic also has another advantage : Digital Intermediate is in-house. And this makes a big difference ! I sometimes feel that part of the traditional compositing work on Lego Batman (like restoring natural colors to the characters) was actually done in DI.

It is as if each step had shifted down the pipe : some lighting was done in Nuke using LPEs and some compositing was done in DI using AOVs. I thought this workflow was quite interesting and new.

If you are laying the groundwork for a sequence, it’s really not a good idea to start with heavy tweaks in comp, especially since we’re still in the lighting process ! So avoid it at all costs, at this stage at least.

Compositing constraints

I have had both experiences : I have seen compositing artists save sequences and others destroy them. It actually depends on numerous factors.

According to a friend of mine who is a Compositing Supervisor, here is an interesting guideline :

When reviewing a compositing shot, the first thing to check should be its adequacy with the lighting.

I completely agree on this.

If the look is massively changed during the compositing phase, it is usually because :

  • The lighting sucked or did not respect continuity. Yes, it happens.
  • There has been a change of art direction. And yes, that happens a lot.

It is also true that compositing is probably the most heavily commented process in the whole chain. The reason is pretty simple : there is no department after compositing to fix the image. Because it is the final image, everybody feels entitled to give their opinion.

It is quite common for a compositing artist to face this kind of criticism :

  • I wish you had put less DOF so we see my set better.
  • Because of vignetting, we do not see the secondary animation I had put in the character’s hand, right next to the screen border.
  • I thought particles would be more visible and present in this shot.

All of these comments are ego-related. We are not the directors of the movie and we do not work for our demo reel. Our skills are at the service of the movie. So it is completely necessary to accept that our work will be modified. And not in the way we expected or hoped.

Production examples

Shoji screens on The Master

On the short “The Master“, we had some walls made of translucent paper, shōji screens, like in a traditional dojo. One retake we got during review was to make the paper walls brighter. How would you achieve that ?

  1. Lighting ? We could create a light only for the screens. BUT you would have to deal with some light-linking and that would complicate your setup for nothing.
  2. Compositing ? We could use DeepOpenEXRId to color-correct the shoji screens. BUT that would brighten neither the global illumination nor the reflections. A shame.
  3. Surfacing ? We could tweak the shader to add a bit more SSS or brighten the albedo a bit. YES, this seems like a legit way to do it.

This may seem like a trivial example. But it’s not.

It took me 10 seconds to apply the color corrector in Nuke and about 15 minutes to find the right values to match it in Guerilla. I am really not a big fan of comp shenanigans except for quick fixes.

In my personal experience, I have always found the long-term solutions much more rewarding. When you just pile up small fixes one after another, it will eventually blow up in your face. I highly recommend nailing the renders in CG to get proper results.

Set extension on Blade Runner 2049

Here is an example of set extension : Blade Runner 2049 (Director : Denis Villeneuve, DP : Roger Deakins). I am amazed at how close the final result is to the actual shoot. That is a completely different approach from Mad Max : Fury Road. Here are two articles if you want to read more on the subject.

Image gallery : Blade Runner 2049 set extension frames from Framestore.

Both frames look so close in terms of look.

Sometimes, for artistic, technical or political reasons, you have to split the characters from the background or even the sky, but the idea is the same : simple and efficient. Less human time, more computer time. Let the computer do the heavy work !

Light Path Expressions

Now, this might sound slightly controversial (especially for old-timers), but there are quite a few things done in Nuke that are considered lighting. The most obvious example of this is Light Path Expressions.

I find it quite satisfying to render the lights of my shot as LPEs and get to play with them to find the right balance for my lighting. It is basically baking all of your lighting and bringing it into a piece of software that gives you great flexibility, with great quality and very fast turnaround.

Some scenes on Ninjago were so heavy that I could not render interactively. I would just throw dozens of area lights in some strategic places and render on the farm. Then I would do all the color and exposure work in Nuke.

Since the arrival of LPEs, it has become almost the norm : lighting begins with a basic setup done as quickly as possible and as close as it should be, and then you balance and search for your look in the comp software. You just have to limit yourself to modifying exposure and RGB values, so you can send those back to your lighting software afterwards.
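In practice, rebalancing LPEs in comp is nothing more than a weighted sum : each light’s contribution is an additive pass, so the beauty is rebuilt by multiplying every LPE by its own gain (an exposure offset in stops plus a color multiplier) and summing them. A minimal sketch with hypothetical pass names :

```python
import numpy as np

def rebuild_beauty(lpe_passes, stops=None, tints=None):
    """Rebuild the beauty from per-light LPE passes.

    lpe_passes : dict of light name -> (H, W, 3) scene-linear array.
    stops      : dict of light name -> exposure offset in stops.
    tints      : dict of light name -> RGB multiplier.
    """
    stops, tints = stops or {}, tints or {}
    beauty = None
    for name, rgb in lpe_passes.items():
        gain = (2.0 ** stops.get(name, 0.0)) * np.asarray(tints.get(name, [1.0, 1.0, 1.0]))
        contribution = rgb * gain
        beauty = contribution if beauty is None else beauty + contribution
    return beauty

# Hypothetical passes : key, rim and sky dome rendered as separate LPEs.
h, w = 270, 480
passes = {name: np.random.rand(h, w, 3).astype(np.float32) for name in ("key", "rim", "sky")}

beauty = rebuild_beauty(passes,
                        stops={"key": -0.5, "rim": 1.0},      # rebalance in stops
                        tints={"key": [1.0, 0.9, 0.8]})       # warm up the key
# Because multiplying an LPE is equivalent to changing that light's exposure and
# color, these gains can be copied straight back into the lighting scene.
```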

Image : an LPE template in Nuke.

I’m basically doing lighting in real-time.

Compositing is the perfect process for templates. The one that makes the most sense in my opinion is built around Light Path Expressions (LPE). You then have the flexibility to adjust the lighting in real time without breaking any albedo or GI.

LPE and AOVs

What should we do when an element is lacking specular ? The cleanest and best solution is to send the asset back to surfacing. The problem is, we don’t always have the time to do that.

Let’s say that in a sequence of 18 shots, all of them look fantastic, but in 3 of them the ground lacks information in the specular and looks quite dull and boring due to the camera and light angle combination. How can we fix this ?

  • Adding a light : this is probably the cleanest solution but you will need a new render.
  • Surfacing fix : can either be done by the lighting artist or a surfacing artist. This may require an interaction with the surfacing department, which may cause delays or misunderstandings. You will need a new render as well.
  • Specular AOV : sometimes the specular AOV will provide the information you’re missing. I would only use this solution if I cannot render again.

If a majority of your shots require some kind of surfacing fix, it should totally go back to surfacing. A common mistake is to believe that going back is a waste of money. That is, until you see a bunch of comp artists trying to color-correct several elements across several shots, struggling to keep some sort of consistency.
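Regarding the third option above, the comp fix amounts to adding a scaled copy of the specular AOV back on top of the beauty, with an eyeballed gain per shot, which is exactly why it struggles to stay consistent across a sequence. A hypothetical sketch :

```python
import numpy as np

# Placeholder arrays standing in for the rendered beauty and its specular AOV.
beauty = np.random.rand(270, 480, 3).astype(np.float32)
specular_aov = np.random.rand(270, 480, 3).astype(np.float32)

# The 0.3 gain is an eyeballed, per-shot value; the energy it adds was already
# part of the beauty, so this is a cheat, not a physically correct fix.
boosted = beauty + 0.3 * specular_aov
```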

Conclusion

I want to clearly state that I am not against compositing. It is an amazing tool that has to be used properly. But it is so powerful and so easy to use that it can become quite a dangerous mix. I have seen so many artists tweaking their shots and throwing color corrector after color corrector ! My personal favorite being : grade up, grade down and then grade off to turn off the pass.

I am not saying that compositing is easy. The tool is probably easier than any 3d package. But its difficulty lies in the subtlety it requires.

I found out over time that the most important thing is the final pixels on screen. I’m a happy spectator when something looks beautiful and meaningful, and I can’t figure out whether the comper used LPEs or AOVs, whether surfacing fixed a shot, or two, or none, or whether the vignette is fake and done with a roto. Why ? Because it was done with taste, consistency and subtlety. And most important of all, to support the story and the emotions on screen.

I’ve seen all kinds of horrors done with all kinds of techniques. I’ve seen compers bury a sequence and I’ve seen incredibly boring renders that were technically correct. So there is no technique that will replace vision, taste, talent, direction and love for the craft.

Sources

If you want to dig deeper into the subject, here are some links :