Chapter 5: Master Lighting

A big responsibility

In this chapter, I am going to focus on the methodology to follow when you have to establish a sequence for the team. I think it is all about anticipation and communication: “Think a lot about what you are going to do” and “Explain as best as you can what you did”. This is actually a very important process because people are going to use your light rig. So you had better build it with care!

References

We should never work without visual references. It should even be our first task: to gather as many images as we can for inspiration. They may come from a movie, an exhibition you went to or even some random image from the internet! They can even be something you don’t like, as long as they give you an idea of what you want.

It is essential to have a good knowledge of Art, and I would like to mention some artists who have inspired so many DPs: Caravaggio, Rembrandt, Georges de La Tour, Vermeer, Goya, Monet, Joaquín Sorolla…

I guess the studio where I felt most connected to the History of Art was Animal Logic. Which is crazy, because the sequence I did on Lego Batman did not have any color key. At first I would not have believed it. It was so different from what I had been used to! But Craig and Grant actually developed a very clever process which solves many issues.

Master Lighting generally begins with the sequence launch. This is where you should be provided with all kinds of information to complete your task to the best of your abilities.

Sequence launch on Lego Batman

Grant would launch a sequence by showing us references he liked. They could come from anywhere: movies, paintings, TV shows, Google… There was no judgement at all: a good reference can come from any source. Grant even mentioned a Pink Floyd album cover once!


I remember going to Grant’s office to be briefed. I had to find a path between the drawings, the toys and the models to get to his desk. The room was covered from floor to ceiling.

So cool! It was like a museum for nerds! I remember that Grant even had some shoes made out of Lego sitting in a corner. Where did he get those from? Anyway, back to the lighting launch.


© 1985 Universal Pictures / Copyright © Warner Bros 2017

Then the lighter would do a first pass, probably a second and maybe a third, until Grant said: “We’re good. You have all the ingredients, I’ll provide the recipe.” Grant would then use Light Path Expressions (LPEs) to do some paint-overs. In Photoshop, he would basically tweak four parameters to help us balance our lights:

  • Exposure
  • Color
  • Z-depth
  • Gradient

It was amazing to see what he would come up with. You would often wonder: “How did he do that with my lights?” Finally, we just had to match in Maya what he did. Even this step was interesting, as you would put yourself in his place and follow his steps: “Okay, he decreased the fill 2 stops, increased the rim 1 stop and put a gradient in the key…” That is actually easier than matching 2D concepts, since his work is using your lights. There is no interpretation of a brush stroke. A pure CG workflow.
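If you like scripting, that note-matching step can even be automated. Here is a minimal sketch in Maya Python, assuming Arnold-style lights that expose an exposure attribute measured in stops; the light names and offsets are hypothetical:

```python
# Minimal sketch: apply paint-over notes, expressed in stops, to the lights.
# Assumes lights with an "exposure" attribute where +1 equals one stop;
# the light names and offsets below are hypothetical examples.
from maya import cmds

stop_notes = {
    "lgt_fill_all": -2.0,   # "he decreased the fill 2 stops"
    "lgt_rim_chars": 1.0,   # "he increased the rim 1 stop"
}

for light, stops in stop_notes.items():
    plug = light + ".exposure"
    if cmds.objExists(plug):
        cmds.setAttr(plug, cmds.getAttr(plug) + stops)
```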

On Lego Batman we had to set up a sequence where Batman had mixed feelings. How do you translate that into lighting? Grant imagined some clouds passing by and covering the key to alternate shadow and light. Brilliant. He used the final sequence of Akira as a reference.

Anticipate

Ideally, Lighting should be part of the cinematography and linked to the story. We have all seen making-ofs where the director explains this. This is great, but on a basic Hollywood movie:

  • Lighting must be above all pretty and readable.
  • Many lighters do NOT know what their sequence is about.

Let me just define pretty here, as it could mean lots of things. Pretty, in a basic cartoon PBR movie, means not too dark (well filled) and saturated.

First question you should ask yourself: What is this sequence about? Here is a list of recommendations to start on the right foot:

  • Put on your headphones and watch the edit of the sequence.
  • Listen to the dialogue. Any important pieces of information?
  • Look at the references and color keys. Are they clear enough?
  • Do not hesitate to ask questions to your lead, supervisor or art director.
  • Be a sponge. Starting a sequence is a great way to learn and meet people from other departments.

All of this may sound obvious but I have seen many lighters who would start a sequence without watching its edit.

Analyze your sequence: Don’t start without meticulously planning your work and knowing your sequence. How many shots are there? Can you group them? How many master shots will you need?

Types of shots

There are three types of shots:

  • A master shot (or establishing shot) is where you start from scratch. Using a color key, you create all the lights and take care of layering. The camera angle has to be as wide as possible to cover the largest possible area. You establish the mood of the sequence. Ideally, master shots would be approved before any dispatch to the team.
  • A key shot is where you refine the master for a given angle. For example, if the camera is lower, the floor may look brighter and cause some issues. Sometimes, even if the light rig is the same, it feels different. It is all about shot value: on a certain angle, you may want to rotate the key or tweak its value to get a better look.
  • A child shot is where you copy, paste and render. Ideally, there should not be anything else to do on this kind of shot. Just make sure render times are optimized.

Let’s have a look at this sequence from The Secret Life of Pets (Director: Chris Renaud, Art Director: Colin Stimpson).

Copyright © Illumination 2016

In the examples below, I have organized the sequence according to this convention.


Copyright © Illumination 2016

Shot planning

Bidding and casting are very complicated tasks. You want to keep your team happy. You don’t want people getting jealous of each other or frustrated with their assignments. But what should we do when a junior shows more attitude than a senior? Tough.

All of this stuff may seem basic to you. But you do not find that many organized people in the industry, especially when the company has too much money. My lead at Weta used to tell me: “It is like people don’t want to get organized.”

Ideally, establishing shots would be the same between different departments: from the art department to compositing, so the master shots are always prioritized and finished first. But production reality makes this kind of plan almost impossible.

If the animation department starts with close-ups first, and if production wants to make quotas at any cost, it will make things very complicated. One workflow makes the anticipation really interesting: when lighting starts right after Rough Layout (RLO). I did this on Playmobil and it was a really good idea. Why should lighting be the last process?

From Sharon Calahan: “It should be kept in mind that the sooner in the production process the lighting can be designed, the more involved it can be in the storytelling process.”

Light rig guidelines

Keep it simple

This is actually the most important rule for me. When you begin to work on a sequence, have your lights affect everything. That’s the easiest way to start. If something looks off, that means that surfacing is not correct. We should follow a general-to-specific order like in painting. We start with broad strokes and then refine them.

When an artist asks for my help to check their light rig, I ALWAYS end up removing stuff. For some reason, people think light rigs have to be complex. ON THE CONTRARY! A rig does not have to be complicated, especially in a PBR setup with Global Illumination (GI).

“Hey, without this blocker and these two lights, it looks better.”

Why is that? For a very simple reason: let the render engine do the work! Do not go against it. Less work for you, more work for the computer! Use its full potential by letting the lights bounce naturally. Let them bounce!

In PBR, we actually rely much more on the surfacing. If you have to adjust the specular, SSS or indirect contribution of your lights on every shot, it is a surfacing issue. I do not tweak many options on the lights when I work. Six of them generally give me plenty of control:

  • Position
  • Rotation
  • Angle (shadow softness)
  • Exposure
  • Color (temperature)
  • Light Filters (blockers, black flags…)

Don’t twist your lights. Use two axes (X and Y) for rotation. In 90% of cases, that’s all you need. I hate it when lights have a rotation of -653.4 degrees on the Z axis. This is why I lock all the parameters I don’t want my team to use, as in the sketch below.

Fewer possibilities mean fewer human mistakes.
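Locking is easy to script. A minimal sketch in Maya Python; the attribute list is only an example, adapt it to your renderer’s lights:

```python
# Minimal sketch: lock the light attributes the team should not touch.
# The attribute list is illustrative; adapt it to your renderer's lights.
from maya import cmds

LOCKED_ATTRS = ["rotateZ", "scaleX", "scaleY", "scaleZ"]

for light in cmds.ls("lgt_*", type="transform") or []:
    for attr in LOCKED_ATTRS:
        plug = "{}.{}".format(light, attr)
        if cmds.objExists(plug):
            cmds.setAttr(plug, lock=True)
```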

Focus your energy

Fewer parameters also mean you can focus on what really matters. For example, at Animal Logic our “Glimpse” lights only had 6 or 7 parameters. Simple. Max Liani even removed the decay. Genius.


Training artists is easier when things are simple.

We did NOT have any parameters for spread, shadow color or indirect contribution. Why would you want a light not to cast shadows? In PBR, that does not make sense. I agree that flexibility is important, but keep in mind: would you rather tweak 60 shots or fix 1 asset? I know that under a short deadline these options can come in handy. But ideally, surfacing has to take care of these issues.

Weta Digital uses lumens and lux for their lights’ exposure. A lumen measures the quantity of light emitted by a source. Lux measures the quantity of light that hits a surface. Since it is impossible to say what the brightness of the sky is, they use lux to measure the amount of sky light falling onto the objects.
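As a quick reminder of how the two relate: illuminance in lux is luminous flux in lumens divided by the area it falls on, so 1,000 lumens spread evenly over 10 m² of surface give roughly 100 lux.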

Scientific approach.

Light globally

This is the best technique I know to prepare a sequence. Most lighters do not worry about what is happening outside of their shot’s camera. This is a very common mistake in lighting. They should really take care of the whole environment and work outside of their camera’s field of view. This technique brought me to a whole new level.


This panoramic render is the perfect way to get your establishing lighting approved. It actually comes from the sequence described in the Types of shots section. It will allow you to check that all your lights are correctly calibrated and that you did not forget any area.

Try to think about your light rig as a whole. Sometimes, when you create a light, you are just compensating for something that has been poorly done. And if you are missing some reflections in the eyes, it is maybe because your set is not properly lit.

Another technique is to render several shots using the same light rig to make it as universal as you can. There is no magical light rig that will cover 100% of shots. But refining the light rig on different angles is the best way to make sure your sequence will go smoothly.

You have to find the right balance between a flexible rig and a heavy rig that will be difficult to manipulate. The following three shots use the exact same light rig. It took me three weeks to set it up. Quite challenging!


Ideally, we should be able to test our light rigs on as many shots as we want without even opening Maya or Guerilla. On Playmobil, we would render the First-Middle-Last (FML) frames of the whole sequence using a simple command line. Awesome.
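The idea boils down to something like the sketch below; the render_cli command and its flags are hypothetical placeholders, not Guerilla’s actual interface:

```python
# Minimal sketch: render the First-Middle-Last (FML) frames of each shot from
# the command line. "render_cli" and its flags are hypothetical placeholders.
import subprocess

shots = {"sh010": (1001, 1096), "sh020": (1001, 1057)}  # example frame ranges

for shot, (first, last) in shots.items():
    middle = (first + last) // 2
    for frame in (first, middle, last):
        subprocess.run(["render_cli", "--shot", shot, "--frame", str(frame)], check=True)
```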

Identify the light sources

At Mac Guff, my lead used to tell me: “Lighting is easy. You only have to identify the light sources.” This technique works well if you are only focusing on practical lights. Nonetheless, the first question that any lighting artist should ask is: where is the light coming from?

For example, at Weta Digital, they use 3D scans of the movie set to position their lights! Hard to reproduce on an animated feature. But why couldn’t we learn from this technique? On War for the Planet of the Apes, the lights of the prison were placed at the exact same spots as on the live-action set.

Copyright © 20th Century Fox 2017

Place your lights like in the real world. Make it real, make it physical! Place the practical lights physically where they should be. Think like a Director of Photography (DP). Otherwise it will just look weird. For practical lighting, you really do not have to complicate things:

  • You can use a Mesh Light on the actual object.
  • You can use an Area Light where the object is.
  • Or, in some cases, you can use both!

Placement example

In the next example, the sun hits the floor via a distant light. If I want to fake a bounce light from that impact, I just place an area light at the same spot. Thanks to quadratic attenuation (decay), you only have to tweak color and exposure to make it look good. Easy.

I take it as an axiom that no one should ever, ever, ever touch the decay. It has to be quadratic!
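For reference, quadratic decay simply means the received intensity falls off with the square of the distance. A toy sketch:

```python
# Quadratic (inverse-square) decay: doubling the distance to a light
# divides the received intensity by four.
def received_intensity(source_intensity, distance):
    return source_intensity / (distance ** 2)

print(received_intensity(100.0, 1.0))  # 100.0
print(received_intensity(100.0, 2.0))  # 25.0
```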


At Weta Digital, they even did something crazy to simulate “real-life behavior”: their lights cast shadows as if they were geometry. If I add an area light to bounce some extra light, it will actually cast a shadow on the floor!


Black flags

To go even further, we did NOT have blockers or light-linking on Apes. We used CG black flags, like on a real movie set. Below is a photo from the set of Ida (Director: Paweł Pawlikowski, Cinematography by Lukasz Zal and Ryszard Lenczewski) where you can see the use of black flags. Check out this amazing article about their work!

Copyright © Music Box Films 2014
I cannot resist the urge to share the sequence of this beautiful movie. We will talk about Rembrandt lighting in Chapter 8.
Copyright © Music Box Films 2014

I understand the whole “mimic reality” logic, but it can make the work of the artists really difficult. They probably want the lighting TDs to think like Directors of Photography rather than just throwing lights anywhere they want.

Virtual Movie Set

But in my opinion this kind of workflow is too extreme for a PBR cartoon movie. I have never tested the tools below, but they follow the same logic.

Cine Designer is a plug-in for Cinema 4D that mimics gaffer tools on set. Cine Tracer is a similar tool built on Unreal Engine.

Some supervisors would love this.

Anyway, you have to find a good balance between how much control you want to give to the artists and how many options you have in your software. This is why I use the example of Glimpse quite often: it had very few options but enough control to make things look good. I never felt limited with Glimpse.

On The Big Friendly Giant, at Weta Digital, it was forbidden to use an area light without mapping it! It does not make your render any cheaper and it will probably bring some noise. But it brings you closer to reality.

You can map your lights with some HDR to get more realistic reflections.

Think big

Think of your shot as a movie set. The goal is to achieve something like this one below from Bridge of Spies (Director: Steven Spielberg, DP: Janusz Kaminski).

Copyright © Dreamworks Pictures 2015

There is a light (screen right) pointing directly at a chair. Can you see it? Nice trick from Janusz Kaminski (the DP of Munich). We will come back to this later.

Have you noticed how the light is softened behind those big translucent sheets? They pretty much look like Area Lights, right? Think BIG. Having big lights outside of your camera’s field of view will help with:

  • Better sampling and a better-looking image.
  • Tying your characters and sets together.
  • Making the image look more natural.

In my opinion, we want to get as close as we can to a real movie set. This is actually what I enjoy the most in lighting: How am I going to physically build this light rig? I generally try to have all of my lights affecting everything, like in real life. We’ll come back to light-linking later.

This “big lights” theory was more or less confirmed by a conversation I had with Aymeric Montouchet. The biggest difference between his student work and his professional work was the ability to have big, powerful lights outside of the set, for example behind the windows, to avoid any decay issue. As a student he was using small lights inside the room that would burn the characters but not light the set enough.

Get the best of each light

Work your lights separately and be sure of what they do. Each light should have its very own contribution. When I see artists ONLY working with all their lights on, I always wonder how they can know which light is doing what. Test them ONE BY ONE!

Let me be clear here: you want to go between “All lights ON” and “One by one” when you work on a rig. I constantly jump back and forth between those two.

Here is an example of Light Path Expressions (LPEs) from Guerilla Render. They are very easy to set up and help you a lot when designing your light rig. The following setup will be detailed at the bottom of this article.

For flexibility, it is very important that each light has its own purpose. You don’t want to have two lights doing the same thing. Otherwise, when you get a retake, which light will you change?
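To give a flavour of what per-light outputs look like, here are OSL-style LPE strings of the kind used by several renderers. The exact syntax and the light-group names are assumptions; check Guerilla’s documentation for its own flavour:

```python
# Illustrative per-light AOV definitions using OSL-style Light Path Expressions.
# The light-group names ("key", "fill", "rim") and the exact LPE flavour are
# assumptions; adapt them to your renderer's syntax.
LIGHT_AOVS = {
    "beauty":    "C.*",             # everything
    "key_only":  "C.*<L.'key'>",    # paths ending on lights tagged "key"
    "fill_only": "C.*<L.'fill'>",
    "rim_only":  "C.*<L.'rim'>",
}
```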


If you are working on a daylight setup, try turning on all your lights except your sun. It should then look like a cloudy day. If not, keep working! Since the sun is going to blast everything, it is a very good way to check if the light rig is well balanced. Thanks Anthony Cabula!

Make it cheap

Optimize your rig as much as you can and track the noise. Test your sampling and render times BEFORE sharing with the team. If you do things properly and cleanly, it should be a no-brainer. That’s called karma.

Adaptive sampling is amazing. In Guerilla Render, you just adjust ONE value, the Adaptive Threshold, to fix the noise. Easy and powerful. No need to do it per light, nor shader.

Test your sampling on at least three consecutive frames before rendering full frame range.

You really don’t want to be caught in a situation where your light rig is being propagated with bad sampling values or naming, because it will be a mess and a real pain to fix later.

We want to fix all the technical issues like noise and render times before propagating. But quite often tight schedules will make you propagate the light rig before you actually want to. This is why anticipation is so important. And as a supervisor you have to tell Production what is possible or not.

Many people think that the more lights they use, the longer the render time will be. It is not entirely true. With adaptive sampling, it helps to add a couple of lights to make the direct lighting more uniform. It globally increases the luminance, which means residual noise from other lights is diminished and fewer samples are needed overall.

Of course if you add 10 000 lights…

Name things properly

There is nothing more dangerous than renaming stuff (objects, lights, passes). This is why it is really important to think ahead. So many automations are name-based. You really want to nail this. Which light is your key? Is there a naming convention to respect? Can you use patterns (or expressions)? After giving it much thought, here is a convention I have created with the help of artists from Animal Logic and On Animation.

Albert Camus said: “To name things wrongly is to add to the misfortune of the world.”

  • Prefix: We use the same prefix ‘lgt‘ on every light to use expressions or patterns more easily. Very useful if you want to select or hide all your lights at once. Just use this wild card: lgt_*.
  • Source/Light category: Why did we split these into two categories? Because the sun can be a key or a rim depending on your camera angle. To avoid confusion, naming your light ‘Sun’ covers all these possible cases. That’s actually super useful.
  • Asset category/name: I encourage you to have most of your lights affect everything. It is the only way to get proper bouncing in the scene.

In these three shots, the main light source is the sun. Depending on the camera angle, the sun may be a rim or a key. Therefore, let’s just call it: Sun. In Computer Graphics, it is much easier to call the lights by what they actually are. Name them so it makes sense to everybody. A clean setup will make everybody’s life easier.
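One possible way to assemble such names, as a small sketch; the exact ordering of the tokens is an assumption, not a studio standard:

```python
# Minimal sketch of the naming convention: <prefix>_<source>_<asset>.
# The token order is an assumption for illustration only.
def light_name(source, asset="all", prefix="lgt"):
    return "_".join([prefix, source.lower(), asset.lower()])

print(light_name("Sun"))              # lgt_sun_all
print(light_name("Window", "chars"))  # lgt_window_chars
```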


Copyright © Illumination 2016

If you create a light in the establishing shot, it will likely be for ‘all’ or ‘set’. In shot lighting, it will likely be for your character.

Less is more

I am a huge fan of this philosophy: don’t work harder, work smarter. When I arrived at Cinesite FA, people were doing their layering manually. I mean they were dragging and dropping their assets ONE BY ONE. I highly recommend the use of wildcards.


And the same goes for a nodal software like Guerilla Render. On Playmobil, one of our artists wanted to create a set of our main character Marla. He brought in the assets one by one: marla_shoe, marla_head, marla_body... when you actually only need ONE node with just marla in it. Thanks to Lua patterns, you don’t even need a * to match the full name.
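The same “one pattern instead of N entries” idea, shown here as a Python analogy since Guerilla itself uses Lua patterns; the extra asset names are made up for the example:

```python
# Python analogy: one substring match instead of listing marla_shoe,
# marla_head, marla_body... individually. Extra names are made up.
assets = ["marla_shoe", "marla_head", "marla_body", "del_head", "set_street"]

marla_assets = [a for a in assets if "marla" in a]
print(marla_assets)  # ['marla_shoe', 'marla_head', 'marla_body']
```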


Maybe it is a bit off topic, but I found this video, in which Alexandre Aja comments on the use of “less is more” by producers in Hollywood, pretty interesting. Basically, these executives use Jaws as an excuse to invest less money in the special effects of a movie. We don’t need to see the shark, the music alone will do! Less is more!

Light rig examples

Lego Batman

My light rig would look like this on “Lego Batman”. I usually spend a lot of time renaming and cleaning stuff. Light categories have been grouped for clarity: Natural, Practical and Dramatic. I personally do not think we should put the light categories in our lights’ names.

During Master Lighting, your light rig is going to be a mess and that is perfectly normal. It is very difficult to keep things clean while experimenting.

But when the look is approved, you need to do a cleaning pass before handing the rig to the team. Just delete everything that is useless. And rename your lights according to their roles. You will also need to make sure that the rig still works after the cleaning pass!

If your light rig only contains three lights, that’s no big deal. Lighters will be able to figure it out. But when you have 50 lights with GOBOs, blockers and some light-linking, being organized makes a huge difference for the crew. So for the sake of our industry, please do not name your lights like this: “top_rim_blue_chars_all_04_copy_2_dup”.

The Secret Life of Pets

A lighter once asked me: “Dude, I just inherited a terrible light rig. Everything is upside down, poorly named and over-complicated. What should I do? Just roll with it?” My answer is always the same: “For the long term, it is always best to clean. You are like a wall that blocks the crap.” Some lighters actually thanked him afterwards, so I guess it was worth it.

We named the lights “Brooklyn”, “Bridge” and “Manhattan” on the sequence below. We were able to turn them off with one simple expression for optimization. If you are facing Brooklyn, you do not need the 500 lights of Manhattan to be on. And vice-versa. Huge time saver!


Copyright © Illumination 2016

Do NOT publish important stuff on Friday! It is too risky for the weekend renders. On The Secret Life of Pets, I actually broke an asset on a Friday afternoon, published it, and all the renders on Monday morning were screwed. Well done me!

Layering

First things first: what is the case for splitting shots into layers? Well, there are a few reasons why this might be the way to go:

  • Efficiency. This is perhaps the most common reason of all. Let’s say you have quite a complex background, with several million polygons and a moving camera. The background looks great, your lighting is amazing, BUT the character needs some tweaking. Re-rendering that background is inefficient and a waste of resources.
  • Flexibility. For volumetric effects, the most common practice is to render your geometry as a matte while having your volumes rendered in RGB, so you can do the fine tuning in compositing.
  • Optimization. Smoke, clouds and several other FX usually require special settings to render as beautifully as they were designed. But these settings might go against the requirements of your character. So what do we do? Split them into layers.
  • DOF and motion blur in comp. They usually require clever layer splits to avoid jagged edges and borders. This is quite shot-dependent, but it is a strong reason to split a shot into layers.
  • Animated lights. If you can’t use LPEs, layers are your best friends for splitting lights. This way you can animate and tweak in Nuke as much as you want. A major time saver and flexibility provider. And unless you have access to anti-aliasing targeting, layering will be your only option for some clean light passes.

I acknowledge the fact that layering is also a budget and time decision. If the client is difficult and you are on a short deadline, layering may just save you.

Layering issues

So where is the catch with layers, then? It sounds like a perfect solution. As with all other techniques in CG, overdoing it is quite easy and opens the door to a massive amount of unnecessary pain. What could possibly go wrong with such a technique? Let’s see:

  • Too many layers. Yes, that’s a thing. The rule of thumb is to use as few layers as possible while still serving the reason you are splitting them in the first place. No more, no less.
  • Hold outs. Avoid them like the plague. Unless you love dark edges around your layers, and many other potential problems.
  • Interaction. Splitting objects that interact or are very close to each other into separate layers is an invitation to never be able to match them.
  • Versioning. It can easily become a mess if you have rendered v29 of the characters and v14 of the sets. Best solution would be to flag them.

What are layers good for? Efficiency. What problems can they cause? If not done correctly, it’s easy to lose consistency.

Good practices of layering

So what are the good practices with layers? Easy:

  • Find the natural divisions in the geometry of the shot. If you have trees in the far background and a railing much closer, then your layers have been set up for you.
  • Do NOT use any kind of cutout: keep a clear separation between foreground and background. You’ll appreciate it later when you just put one layer on top of the other. A over B (see the sketch right after this list). Simple. No edges, no alphas to combine, no premultiply. And DOF will love you for this.
  • Objects that interact should always be in the same layer. Period.
  • Layering should be set on a sequence level. Anticipate as much as you can so compositing goes smoothly. Use your passes to split the geometry: sets, chars, props…
  • Lowest possible number of layers.
  • Avoid shadow and occlusion passes. Well, occlusion can sometimes be OK, but never multiplied on top of everything. NEVER.
  • ALWAYS use your background layer as a plate if you’re going to tweak your character lights.
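For the record, the “A over B” operation mentioned above is the classic premultiplied merge. A minimal sketch:

```python
# Classic premultiplied "A over B": out = A + B * (1 - alpha_A),
# applied per channel, including the alpha itself.
def over(a_rgb, a_alpha, b_rgb, b_alpha):
    out_rgb = tuple(ac + bc * (1.0 - a_alpha) for ac, bc in zip(a_rgb, b_rgb))
    out_alpha = a_alpha + b_alpha * (1.0 - a_alpha)
    return out_rgb, out_alpha
```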

Of course, there is always the matter of philosophy in each production, and it varies not even per studio but per individual show! Some places love efficiency while others are pure brute force. The important thing is to be able to adapt and know how to switch mindsets.

There is not one technique to rule them all, and you can for sure do amazing things either way. Do not become dependent on either one of the techniques; that will render you (no pun intended) obsolete pretty quickly in a production environment.

Example of non-deep layering

On most movies I have worked on, we would render many layers:

  • Foreground, Middle ground, Background and Characters on separated layers.
  • The Set would be in Primary Visibility Off (PVO) in the Chars layers and vice-versa.
  • We would use an RGB mask to merge them together. No hold-outs!

Smart but old-fashioned: motion-blur and depth-of-field would be done in 2d.


Deep Compositing

A deep image stores depth values per pixel. The depth data is used by the deep merge to figure out which pixel goes in front of which. In my experience, this works for most situations but not always. It really depends on the amount of depth of field and the distance between objects.

Deep compositing can be quite useful as it completely removes the need for hold outs. The drawback to this technique is the weight of each frame because of all the data stored. This makes it impossible for some budgets and it can make your compositing software much slower.


Deep example

The following shot from The Star (Director: Timothy Reckart, Art Direction: Sean Eckols) gave me a bit of a headache. Not only did I have to manually split FG and BG characters, but also FG and BG sets! On top of that, there were some precision issues with the hand contact on the well. Deep is great, but it is not the ultimate solution.

Copyright © Sony Pictures Entertainment 2017

As usual, your best bet is to evaluate the needs of your production. If your show requires a massive amount of layering, with very very complex objects in each layer then maybe it’s a good idea to adopt a system that is not so sensitive to layer ordering and masking.

A good example of this could be Dawn of the Planet of the Apes. Imagine the amount of care and planning that every shot would need if you had to keep in mind which ape goes in front of which group, together with elements like leaves, branches, grass… The obvious choice is to render in deep and let Nuke do the rest of the work.

On Apes, we had 3D motion blur and deep compositing. We would render a bunch of characters per layer (those apes were heavy in memory!) and the set separately. Fewer layers, and no matte issues thanks to deep. Depth-of-field was done in Nuke to match the plate. More efficient.

I am just going to mention it quickly since I have never used it personally, but in deep layering there is the possibility of using camera clipping. Rather than separating CHARS, SETS and PROPS, which can be time-consuming, camera clipping allows you to split into FG, MG and BG quickly. Using a distance threshold with a small margin may be a safe way to do it too.
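A minimal sketch of that idea in Maya Python; the camera name and the distance thresholds are placeholders:

```python
# Minimal sketch: bucket objects into FG/MG/BG by distance to the camera.
# The camera name and the near/far thresholds are hypothetical placeholders.
import math
from maya import cmds

cam_pos = cmds.xform("renderCam", query=True, worldSpace=True, translation=True)

def bucket(obj, near=10.0, far=50.0):
    pos = cmds.xform(obj, query=True, worldSpace=True, translation=True)
    distance = math.dist(cam_pos, pos)
    if distance < near:
        return "FG"
    return "MG" if distance < far else "BG"

layers = {"FG": [], "MG": [], "BG": []}
for obj in cmds.ls(type="transform") or []:
    layers[bucket(obj)].append(obj)
```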

The no layering solution

On Lego Batman, since we had motion blur and depth-of-field in 3D, we could afford to render everything in one beauty pass, even with the FX smoke! Awesome. We would output our lights separately (via Light Path Expressions) to keep some flexibility in Nuke.

It makes so much sense: split the lights, not the geometry. Just think about the genius of this! Having one beauty pass actually removes lots of issues: fewer cheats, no borders, getting closer to reality and still plenty of flexibility!
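The nice property behind “split the lights, not the geometry” is that, in linear scene-referred space, the per-light AOVs simply add back up to the beauty, so regrading one light in comp is just a multiply before the sum. A toy sketch:

```python
# Per-light AOVs add back up to the beauty (in linear, scene-referred space);
# regrading one light is just scaling its AOV before the sum.
import numpy as np

def rebuild_beauty(light_aovs, gains=None):
    gains = gains or {}
    return sum(aov * gains.get(name, 1.0) for name, aov in light_aovs.items())

# Toy 1-pixel "images": halving the fill only darkens that contribution.
aovs = {"key": np.array([0.8]), "fill": np.array([0.2]), "rim": np.array([0.1])}
print(rebuild_beauty(aovs))                       # ≈ [1.1]
print(rebuild_beauty(aovs, gains={"fill": 0.5}))  # ≈ [1.0]
```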

Many people are not convinced by this solution because of the cost. My argument is simple: which is more expensive, human time or machine time? Hours sometimes spent splitting layers, or hours of CPU?

The choice looks pretty logical to me.

All of these shots have been rendered in one beauty pass and one volumetric pass. I could have combined both into one single pass but did not want to go that far. The CG renders of this sequence are 99% similar to the final movie. Barely any comp. Awesome! If I had to choose one solution for our layering issues, that would be the one: NO LAYERING!


The LEGO Batman Movie: Warner Bros. Pictures and Warner Animation Group, in association with LEGO System A/S, a Lin Pictures / Lord Miller / Vertigo Entertainment production

I am not against depth-of-field in Nuke. We have to go with what is more efficient. On Lego Batman, depth-of-field was set by Layout. It was very handy because other departments would know what was in focus and where to concentrate their energy. Put your money where it matters. A few special shots had their depth-of-field done in Nuke for narrative purposes. Clever!

Lighting in layering

There is another reason to do layering. Something I particularly disagree with. But since it exists and I have seen it with my own eyes, I won’t bury my head in the sand. Here it is: layering may be faster for cheating directions.

I agree that many times the direction of the sun on a character is different from the one on the set. Many lighters think the easiest solution would be to fix it in layering. This technique is just WRONG. You MUST nail your lighting in your master layer.

When I open a scene, I want to be able to hit render and get a proper look at the lighting. I don’t want to be switching from one layer to another trying to imagine how it will look. Do NOT turn lights on and off in your layers.

Think of layering as a simple split of the geometry. Back in the day, we used to turn off lights to save memory or render time. This is not necessary anymore, and doing it can make it very difficult to integrate the characters with the set. Like in this example from The Star (Director: Timothy Reckart, Art Direction: Sean Eckols):

Copyright © Sony Pictures Entertainment 2017

Thanks to global illumination, we do not need to create as many lights as we used to. Back in the day, you had to create a light for everything. Painful.


Copyright © Sony Pictures Entertainment 2017

Below is an example of a kick light on a character, with and without light-linking, to show the difference. This example also applies to lighting in layering.

Templates

A preset format is very helpful as well. It is the best way to make sure the team shares the same settings for their light rigs and Nuke files. If you are a supervisor and you want your team to work a certain way (naming convention, types of lights, compositing scripts…), the use of templates is the ideal solution.

I have worked in many studios where templates were not prepared. You would have to build your Nuke script from scratch or fill in your render settings manually every time you built a shot. This is an open door for trouble.
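A minimal sketch of the idea: one shared template that every shot build reads, instead of artists typing settings by hand. The keys and values below are illustrative, not any particular renderer’s settings:

```python
# Minimal sketch: a shared render-settings template applied at shot-build time.
# Keys and values are illustrative, not any particular renderer's settings.
import json

TEMPLATE = {
    "adaptive_threshold": 0.02,
    "motion_blur": True,
    "light_prefix": "lgt",
}

def write_template(path="lighting_template.json"):
    with open(path, "w") as f:
        json.dump(TEMPLATE, f, indent=2)

def load_template(path="lighting_template.json"):
    with open(path) as f:
        return json.load(f)
```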

No compositing… Yet!

On Ninjago, we had a Master that had been heavily tweaked with 10 different color-correction nodes in Nuke. The final result was so different from the lighting that you could not rely on the CG at all. It was impossible to propagate to the team. The best solution in this scenario is to report the values back to your 3D scene.

For establishing shots you can render your lights separately and tweak them in Nuke. We did this on Planet 51 and Lego Batman. Once you are happy with the result, report the values back to Maya or Guerilla. This way your light rig will be more CONSISTENT and EASIER to share. You can also use Nuke to test stuff like Haze and Camera Lens Effect.
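Reporting a grade back to the lights is mostly a matter of converting a linear gain into stops, since a gain of 2.0 is +1 stop. A small sketch:

```python
# Convert a linear gain (e.g. the multiply of a grade node) into stops,
# so the value can be reported back onto the light's exposure.
import math

def gain_to_stops(gain):
    return math.log2(gain)

print(gain_to_stops(2.0))  # 1.0 stop brighter
print(gain_to_stops(0.5))  # -1.0 stop darker
```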

In my experience, you get best results when your CG output is as close as possible to the final look. Apparently it is the same thing in live-action. Jarin Blaschke (The Witch DP) explains:

I’m not a fan of heavy post looks in film and I just monitor on set in Rec709. […] but overall the sets and costumes were plenty desaturated to show an idea of what we were shooting.

Desaturated costumes on set? Sounds like a Character Surfacing retake, right?
Copyright © A24 Films 2016

I really want to emphasize this point. During Master Lighting, it is very interesting to play with your rig in Nuke to find a look. Master Lighting and Compositing are not opposed tasks, they are really complementary. But when you are satisfied and the Master gets approved, you need to put the correct values back into 3D.

Communicate

It is not only about being a great lighter and doing shots. How do you make sure your team will use the rig correctly? Since I became a lead, I have always written a wiki page about my light rig. But then I realized the ugly truth: nobody reads the wiki. And those who do generally read too quickly and end up understanding the opposite of what I meant… #epicfail

This is why you have to show your light rig during a presentation. It does not have to be 3 hours long. But going through the wiki and showing the light rig is the best way to start a sequence with the team. The wiki will also be useful as a memo, especially if you have to go back on a sequence three months later.

Explain

  • Have you noticed how helpful it is to talk to someone about an issue? Sometimes you just find the solution as you are explaining the issue to someone else.
  • Have someone test the light rig. Get feedback to improve it.
  • Explain what you did and share the awesomeness with the team. Do not let your ego get in the way.

I guess my point is: what is the best way to communicate with your team? Actually, many people do not want to bother themselves with this. A key lighter at Cinesite even told me one day: “I do this job so I don’t have to speak to people. I just want to put my headphones on and be untroubled.”

Master example

I am going to detail here an example of Master Lighting from scratch. Hopefully it will give you a more concrete view of the whole process. Let’s take a daylight scene as an example. I have used the Mery CG character and Zoma the zombie from Sony Pictures Animation for the characters, and I did some quick modeling for the set.


MeryProject: José Manuel García Alvarez – Antonio Francisco Méndez Lora @ 2014. All rights reserved.

In order to focus on lighting, we will be working with a mid-gray shader (0.214) and no specular on all the assets. It is actually quite a difficult exercise, but it makes you really concentrate on the quality of the light. I am using ACEScg as the Rendering Space and Rec.709 (ACES) as the Output Display Transform.

No light-linking or blockers have been used in this light rig. I could have, but for the sake of this exercise I did not. It is the exact same rig on all four shots. I thought it would be important to mention this before going through the breakdown of the lights.

Start with the sun

How do I approach a scene like this? Since the sequence takes place in daylight, I generally start with the Sun. It will probably be your most powerful light. It really takes me some time to find the proper orientation and exposure. I tweak A LOT interactively before being satisfied with the result. Let’s create a distant light.
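In Maya, for example, the starting point can be as simple as the sketch below; the rotation and intensity values are placeholders meant to be tweaked interactively:

```python
# Minimal sketch: create and orient a directional light as the sun.
# Rotation and intensity values are placeholders to tweak interactively.
from maya import cmds

sun_shape = cmds.directionalLight(name="lgt_sun_all", intensity=5.0)
sun = cmds.listRelatives(sun_shape, parent=True) or [sun_shape]
cmds.xform(sun[0], rotation=(-35.0, 40.0, 0.0))  # two axes only, no Z twist
```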


Follow with the sky

Most of the time I follow with the Sky. I used an Environment light with an HDRI for this exercise. I could have used a Skylight as well.


Add the big window

Now that I have tackled the natural lights, I can move on to the practical ones. I know from experience that I am going to stick a big area light in this window. It is almost an automatic thing in a PBR movie. No need to overthink it.


And some dramatic lights

At this stage I feel like there is something missing on the zombie character. The sun lights are not doing what I was hoping for. After many tweaks, I decided to go for a kick light on him to reinforce the sun. It is very common when creating a light rig to go back and forth to test things. I generally end up with the simplest solution.


I am going to follow with some bounce lights. They are useful for two things: they give shape to the characters and reinforce the solar sensation, and they also help the render engine rely less on the indirect. Basically, it makes your renders less noisy. Let’s start with the bounce lights from the floor:


I have also added two bounce lights from the table. It adds a very subtle touch to the characters by filling some dark areas.


Finally I feel like we are missing some blue light. It is very common to add a top light in cartoon movies. We generally want to avoid any black areas on the characters and the top light helps to bring some freshness to the shot and a nice complementary scheme.


Here is the final result with the Camera Lens Effect. I do not want you to think that it was easy to deliver this rig. It took me several days and many tests to balance the whole thing. I always go back and forth by switching off all the lights and then turning them back on one by one.


Conclusion

In this chapter, I have tried to describe a typical Master Lighting process on a PBR movie. Please be aware that you may do things differently based on your art direction, resources, software or schedule. I also have to acknowledge the fact that in production, things rarely go this smoothly.

There is always a broken or dodgy asset which makes you do unorthodox things in lighting. Sometimes it can even be a note from the director himself! We have to be flexible so we can adapt to any demand if necessary.

What is a good light rig?

In conclusion, I have come up with three adjectives that best qualify a light rig on a PBR cartoon movie.

  1. Solid: if you move the camera, the lighting still works. It is physically plausible.
  2. User-friendly: if you change something, it is easy to do so. Each light has its own purpose.
  3. Cheap: it renders fast and with little noise. Low render times will allow you more iterations.

Summary

When you start a light rig:

  • Do not place your lights in the camera field of view yet.
  • Let’s not use tiny area lights because sampling will be terrible. Think BIG.
  • Avoid any light-linking for the moment. Have your lights affect everything when you start a sequence. Simple.
  • Lighting is exactly the same as Painting: we start with broad strokes.
  • Test your light rig with panoramic renders and by rendering the key shots of the sequence.
  • Refine the light rig as much as you can and THEN share it with the team. It is really important NOT to share too early.
