Chapter 1: Color Management

Introduction

We cannot talk about cinematography without addressing the question of color management first. It is essential to know which color space you are working in and what your display target is.

Yes, color management can be intimidating, but it will give you strong foundations on which to build proper lighting later on. Lighting in computer graphics is so connected to technology that a good understanding of these concepts is important.

There are some good introductory courses out there, such as Cinematic Color, but I find it a bit frustrating to get lost in mathematical formulas by page 9. So I have tried to keep this chapter as simple and artist-friendly as I could.

The first and most important decision

Based on my experience, the choice of a Color Management Workflow (CMW) is the first decision to make when you start a project, because every single artistic decision will be based on it:

  • Albedo values
  • Lights’ exposures
  • Look development (texturing and surfacing)

The trailer below is an example of thousands of hours of hard work by hundreds of artists that got “wasted” by incorrect color management. Look at the lava: it is clamped!

What is color?

Some of my data comes from this article by Thomas Mansencal. As it is a bit technical, I have tried to simplify it for readers who are not familiar with this topic. Let’s start with this great quote from Mark D. Fairchild:

“Why should it be particularly difficult to agree upon consistent terminology in the field of color appearance? Perhaps the answer lies in the very nature of the subject. Almost everyone knows what color is. After all, they have had firsthand experience of it since shortly after birth. However, very few can precisely describe their color experiences or even precisely define color.”

Preliminary observations

Human Visual System (HVS)

There are three things to take into account when we talk about color:

  • The eye
  • The brain
  • The subject

Here are a few examples that will show you that our brain can be easily tricked and that we should speak about color with humility.

[Figure: optical illusions]

The dress is probably the most famous example. It has even been studied by scientists!

The first thing to know is that the human visual system is an incredibly complex “technology”. No one fully understands how it works, but I can tell you that:

  • Color is a sensation.
  • Color only exists in the brain.

The brain plays a very important part in our visual system, since it is responsible for managing all the data received by the eyes. For instance, when the “image” hits the retina, it is actually upside down. The brain puts it back the right way up:

What we really see is our mind’s reconstruction of objects based on input provided by the eyes, not the actual light received by our eyes.

From this article.

I thought it would be interesting to discuss the human eye with the creator of Guerilla Render, Benjamin Legros, who explained to me: “Our eyes actually only see in ‘red, green and blue’. They are crap! Nothing compared to the mantis shrimp.”

[Figure: mantis shrimp]

Here are some pretty amazing eyes: mantis shrimps have twelve types of photoreceptors, while humans only have three! They can even see UV and infrared, like the Predator!

What is light?

To define color, you must first define light because no color is perceptible without light.

  • Light is a type of energy.
  • Light energy travels in waves.

Some light travels in short, “choppy” waves. Other light travels in long, lazy waves. Blue light waves are shorter than red light waves.

From Cinematic Color: a study of color science begins with the spectrum. One measures light energy as a function of wavelengths. […] Light towards the middle of this range (yellow-green) is perceived as being most luminous.

[Figure: the visible spectrum]

Light is electromagnetic radiation that is visible to the human eye. Visible light is usually defined as having wavelengths in the range of 400-700 nanometres, between the infrared (longer wavelengths) and the ultraviolet (shorter wavelengths). These waves are made of photons.

All light travels in a straight line unless something gets in the way and does one of these things:

  • Reflect it (like a mirror).
  • Refract it (bend like a prism).
  • Scatter it (like molecules of the gases in the atmosphere).

We could add diffraction to this list, even if it is actually part of the phenomena listed above.

Light is the source of all colors. It is actually stunning how important light is in our lives. When a lemon appears yellow, it is because its surface reflects the yellow wavelengths rather than because it is inherently yellow. This confused me a lot in the past, but pigments appear colored because they selectively reflect and absorb certain wavelengths of visible light.

Most of these notions come from Wikipedia and you can find plenty of articles on this topic online.

From light to color

So how do we go from light spectra to color? I cannot explain the Color Matching Functions more clearly than Jeremy Selan, so I’ll just quote him:

The human visual system […] is trichromatic. Thus, color can be fully specified as a function of three variables. Through a series of perceptual experiments, the color community has derived three curves, the CIE1931 color matching functions, which allow for the conversion of spectral energy into a measure of color.

From the amazing Cinematic Color.
[Figure: the CIE 1931 color matching functions]

The CIE 1931 Color Matching Functions convert spectral energy distributions into a measure of color, XYZ. […] When you integrate a spectral power distribution with CIE 1931 curves, the output is referred to as CIE XYZ tristimulus values.

I will not cover metamerism here. Feel free to have a look for yourself.

It has taken me a while to connect the dots and I do not think I could have figured this out without this amazing diagram from hg2dc.com:

[Figure: recap diagram from hg2dc.com]

When one converts all possible spectra into x,y,Y space and plots x,y they fall into a horse-shoe shaped region on the chromaticity chart.
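
To make this concrete, here is a minimal sketch using the open-source colour-science Python library by Thomas Mansencal (cited above). The names come from its 0.4 API, so treat them as assumptions if your version differs:

```python
import colour

# Spectral power distribution of CIE Standard Illuminant D65 (daylight).
sd = colour.SDS_ILLUMINANTS["D65"]

# Integrate the spectrum against the CIE 1931 2-degree color matching
# functions to obtain the XYZ tristimulus values.
cmfs = colour.MSDS_CMFS["CIE 1931 2 Degree Standard Observer"]
XYZ = colour.sd_to_XYZ(sd, cmfs=cmfs) / 100  # scale the Y=100 convention to [0, 1]

# Project XYZ onto the chromaticity plane: this xy coordinate is where
# the spectrum lands on the horseshoe-shaped diagram.
xy = colour.XYZ_to_xy(XYZ)
print(XYZ, xy)  # xy should land near (0.3127, 0.3290), the D65 white point
```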

The CIE Chromaticity Diagrams from 1931 and 1976

“CIE XYZ” is a chromaticity diagram defined by the International Commission on Illumination (CIE) in 1931. It is the first step towards describing colors based on human vision.

An updated version was created 45 years later to improve on it: “CIE U’V’” was born. Even though the CIE U’V’ diagram from 1976 is a more “perceptually uniform” variation, the one from 1931 is still the most used in the color community. Old habits die hard.

[Figure: CIE chromaticity diagrams]

The two main CIE diagrams from 1931 and 1976.

The research of David L. MacAdam (1942) showed that the CIE 1931 xy chromaticity diagram did not offer perceptual uniformity. What this means is that the relation between the measurable chromaticity of a color and the error margin in observation, was not consistent within the CIE 1931 xy chromaticity diagram.

From this article.

Here are two important notions about chromaticity diagrams:

  • These chromaticity diagrams are the visualization of all chromaticities perceivable by the human eye.
  • Two axes are enough to give each chromaticity a unique coordinate on this diagram.

From Cinematic Color: the region inside the horse-shoe represents all possible integrated color spectra; the region outside does not correspond to physically-possible colors.

We describe it as a horseshoe- or tongue-shaped area. It could just as well be a windsurf sail…

The CIE XYZ serves as a standard reference against which many other color spaces are defined. Keep these diagrams in mind, because we are going to refer to them constantly later on.

Chromaticity or Color?

At this point, you may ask yourself: what is the difference between a “chromaticity” and a “color”? It is a fair question. Basically, every time we mention the terms “color” or “hue”, we enter the field of “perception”. This means that a color only exists when it is perceived by a human being (or what we call a “Standard Observer”).

On the other hand, a chromaticity is a stimulus. It does not include any notion of someone looking at it and perceiving something. And yes, as you have probably guessed, the same stimulus can sometimes generate four different perceived colors. It is demonstrated very clearly in this Siggraph 2018 course.

Interestingly enough, the International Commission on Illumination actually lists two entries for the word “color”.

RGB Color space and its components

An RGB color space is composed of all the colors available in our workspace. It is defined by three components: its primaries, its white point and its transfer functions. We will detail each of them below.

This is what an RGB color space looks like:

[Figure: an RGB color space plotted in 3D and 2D]

This GIF comes from the Colour Science article mentioned above.

In the image above, there are three important things to notice:

  • RGB color spaces actually exist in 3D space (screen left).
  • To make them easier to visualize, we generally only use a 2D slice, like a “top” view (bottom screen right). In this example you can see the Rec. 709 color space against the CIE 1931 diagram.
  • Notice the black points? They represent the pixel values of the image (top screen right) within an RGB color space (in this case Rec. 709). This is called plotting the gamut.

Primaries

“The primaries chromaticity coordinates define the gamut (the triangle of colors) that can be encoded by a given RGB color space.”

In other words, the primaries are the vertices of the triangle. We are going to complicate things a bit… Pay attention! Each of these vertices has a pure RGB value expressed in its own color space:

  • Red = 1, 0, 0
  • Green = 0, 1, 0
  • Blue = 0, 0, 1

But each of these vertices has a unique xy coordinate on the CIE diagram. That is how we are able to compare them. The only way to define a color (independently of its luminance) in a universal way is to give its xy coordinates.
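
If you have the colour-science library installed, you can query these coordinates yourself. A small sketch; the color space names below are the ones registered by colour 0.4, so adjust them to your version:

```python
import colour

# Each "primaries" row is the xy chromaticity of the R, G and B vertices
# of the gamut triangle, followed by the white point of the color space.
for name in ("sRGB", "ACEScg", "ITU-R BT.2020"):
    cs = colour.RGB_COLOURSPACES[name]
    print(name, cs.primaries, cs.whitepoint)
```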

In this chart, you can see the coordinates for different color spaces:

[Figure: primaries coordinates chart]

These are the most common RGB color spaces.

For instance, there is nothing on earth more saturated than the BT.2020 primaries, which correspond to laser wavelengths: they all sit on the “Spectral Locus” (the border of the Chromaticity Diagram). The closer a chromaticity is to the spectral locus, the more colorful it will appear to us.

Primaries comparison

You should now clearly see that the different primaries of each color space (as shown in the image below):

  • Have different xy coordinates.
  • Have the same “pure” RGB values.
  • Are different stimuli.

[Figure: primaries comparison]

I used both Chromaticity Diagrams (1931 and 1976) for comparison.

Whitepoint

The whitepoint defines the white color for a given RGB color space. Any set of colors lying on the neutral axis passing through the whitepoint, no matter their luminance, will be neutral to that RGB color space.

[Figure: CIE Illuminants D Series]

The neutral axis is also called the achromatic axis.

There are different types of white points. For instance, the “sRGB” and “Rec.2020” color spaces have a white point of D65. This is part of their intrinsic characteristics. But depending on the usage context, the white point can also be a creative choice, so there is a notion of a “creative white point” (just like a white balance); the snippet after this list prints the chromaticity coordinates of these illuminants:

  • If you wish to simulate the light quality of a standard viewing booth, choose D50. Selecting a warm color temperature such as D50 will create a warm-colored white.
  • If you prefer the quality of daylight at noon, choose D65. A higher temperature setting such as D65 will create a white that is slightly cooler.
  • If you even prefer cooler daylight, choose D75.
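
The xy coordinates of these D-series illuminants are tabulated by the CIE, and a quick way to inspect them is, again, colour-science (names assumed from version 0.4):

```python
import colour

# xy chromaticities of the D-series illuminants for the CIE 1931 observer.
observer = colour.CCS_ILLUMINANTS["CIE 1931 2 Degree Standard Observer"]
for illuminant in ("D50", "D65", "D75"):
    # The coordinates slide along the daylight locus from warm to cool.
    print(illuminant, observer[illuminant])
```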

Transfer functions (OETF and EOTF)

The transfer functions perform the mapping between the linear light components (tristimulus values) and a non-linear R’G’B’ video signal (most of the time for coding optimisation and bandwidth performance).

The Substance PBR guide actually gives a bit more context:

The Human Visual System (HVS) is more sensitive to relative differences in darker tones rather than brighter tones. Because of this, not using a “gamma correction” is wasteful as too many bits will be allocated to tonal regions where the HVS cannot distinguish between tones.

Transfer functions (or “gamma”) help encode the bits more efficiently (thus increasing performance). There are two transfer functions:

  • OETF: the opto-electronic transfer function converts linear scene light into the video signal, typically within a camera. Used when you shoot or scan (for encoding).
  • EOTF: the electro-optical transfer function converts the video signal into the linear light output of the display. Used when you send a signal to the screen (for decoding).

[Figure: transfer functions]

There are also logarithmic transfer functions but we won’t mention them here.
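
To make the encode/decode pair tangible, here is a small NumPy sketch of the sRGB piecewise functions (constants from IEC 61966-2-1; this deliberately ignores the logarithmic and HDR variants):

```python
import numpy as np

def srgb_oetf(x):
    # Encoding: linear scene light -> non-linear signal ("gamma up").
    x = np.asarray(x, dtype=float)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1 / 2.4) - 0.055)

def srgb_eotf(v):
    # Decoding: non-linear signal -> linear display light ("gamma down").
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

print(srgb_oetf(0.18))             # scene 18% gray encodes to ~0.46
print(srgb_eotf(srgb_oetf(0.18)))  # round-trips back to 0.18
```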

Common mistakes

There are a few common mistakes in the industry (even among veteran VFX supervisors):

  • “Linear” is not a color space.
  • “Linear” is not necessarily wide gamut and certainly not infinite.
  • It is a transfer function which is 100% gamut dependent.

99% of the time, when people refer to “linear”, they actually mean: “BT.709 primaries with a D65 white point and a linear transfer function”. But since we have used the term “linear” for the past 20 years or so, it is very hard to make people transition to more accurate terminology.

It is also interesting to note that ACEScg, for instance, is also “linear”. Or, to put it more accurately: “AP1 primaries with an ACES white point (~D60) and a linear transfer function.” There is just no way to escape it: a color space has three components.

For the sake of simplicity, I will not mention here other color space “properties” such as the “viewing environment” or “image state”.

Industry Standards

Screen manufacturers and Cinema majors have agreed on some standards. Their characteristics change the way our images are displayed. Here are the five most important for us:

  • sRGB for internet, Windows and camera photos.
  • Rec. 709 has the same primaries as sRGB but differs in transfer function/gamma (this is because the target use of Rec. 709 is video, which is supposed to be viewed in a dim surround).
  • DCI-P3 for digital movie projection, defined by the DCI organization and published by the SMPTE.
  • Rec. 2020, also called UHD TV, the future of colorimetry.
  • AdobeRGB for printing projects.

[Figure: industry-standard color spaces compared]

A green BT.709 primary displayed on a BT.709 monitor will NOT look like a green BT.2020 primary displayed on a BT.2020 monitor. They are completely different chromaticities!

How can we use these color spaces in CG? Mainly for two things:

  • Rendering space: also called working space, for our lighting calculation (the scene-referred state).
  • Display space: which should match our monitors with proper calibration (the display-referred state).

Rendering and Display spaces

The rendering and display spaces do NOT have to be the same. It is really important to understand the distinction between those two. In CG, the rendering space will always have a linear transfer function.

Basically, the display space should match the monitors used at your company. Also, when you work on a project, you have to know what your display targets are. Are our images going to be seen on a smartphone, on a TV set or in a theater?

This is where Color Management really comes in handy. In a CG workflow, it is vital to know for every single step which color space you are working in.

When you buy a monitor, you should check its coverage of these color spaces. There is no point in working in a display space that does not exactly match the specifications of your monitor or the needs of your project.

I would recommend a coverage of 100% if your budget allows for it.

What is sRGB?

For many historical reasons, there is no agreement on what exactly sRGB is. But I don’t want to start another Gamma / sRGB EOTF “war”, so I will not share my opinion on this topic.

But basically, we are still unsure whether our monitor’s EOTF should be an “sRGB Piecewise Function” or a pure “Power Function” (what people misleadingly call “Gamma”). Both exist in the real world. I would recommend watching Daniele Siragusano’s video if you want to know more on this topic.
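
The disagreement matters most in the shadows. Here is a quick comparison (my own illustration, not from any of the sources above) of the piecewise sRGB decode against a pure 2.2 power function:

```python
import numpy as np

def srgb_piecewise_eotf(v):
    # The official IEC 61966-2-1 decode, with its linear toe near black.
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

signal = 0.02  # a dark pixel value
print(srgb_piecewise_eotf(signal))  # ~0.00155
print(signal ** 2.2)                # ~0.00018 - roughly 8x darker near black
```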

So unfortunately, sRGB is still a confusing notion for many artists:

  • Some say: “It is a color space!”
  • Others reply: “It is a transfer function!”

sRGB is actually both! As explained in the Substance PBR guide:

“It is critically important to disambiguate the sRGB OETF from the sRGB color space; the OETF is only one of the three components that make an RGB color space.”

Scene-Linear workflow

This is, for me, the biggest step between student work and professional work. A scene-linear workflow is compulsory in the industry. I do not know any respectable studio that does not use it.

Jeremy Selan: it is often preferable to carry over those aspects of the traditional film workflow which are beneficial even in a fully-CG context. We thus advocate thinking of an animated feature as creating a “virtual movie-set” within the computer, complete with a virtual camera, virtual negative stock, and virtual print stock.

[Figure: linear workflow diagram]

In this example, the input is display-referred and gets linearized for the lighting calculation. Then, we take the scene-referred output and use tone-mapping to view it properly.

As seen earlier, monitors apply a gamma correction to display images correctly. It has been an industry standard for visual comfort since CRT monitors (whose light intensity does not change linearly with the voltage).

We correct this problem by saving an inverted EOTF into our display-referred images. And this is why you need to linearize them for rendering. This is a very important point: your renders will NEVER be correct if you do not use a linear transfer function for your rendering space. Rendering (and displaying) in plain sRGB is just wrong!
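
A toy example of why this matters: doubling light energy must happen on linear values, not on encoded ones. A minimal sketch, assuming sRGB-encoded input:

```python
import numpy as np

def srgb_decode(v):
    # sRGB EOTF: display-referred [0, 1] values -> linear light.
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def srgb_encode(x):
    # sRGB OETF: linear light -> display-referred values.
    x = np.asarray(x, dtype=float)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1 / 2.4) - 0.055)

# Doubling the light on a surface that reads 0.4 in a display-referred image:
wrong = 0.4 * 2.0                            # math done on encoded values
right = srgb_encode(srgb_decode(0.4) * 2.0)  # linearize, double the energy, re-encode
print(wrong, right)  # 0.8 vs ~0.55: the non-linearized math overshoots badly
```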

Display EOTF: is it “up” or “down”?

So when we say that our displays have a gamma of 2.2 and that we save our display-referred images in “sRGB”, does it mean we apply the same gamma correction twice? Well, actually, no.

It is very well explained in this presentation by John Hable: the gamma 2.2 of our displays (the EOTF) is actually a “gamma down”! Not a “gamma up”! This was very unsettling for me, since a gamma of 2.2 in Nuke is a “gamma up”, and I had got used to that behavior over the past fifteen years…

But for displays, it is the other way around. So no, we do not apply the same gamma correction twice; that would be completely broken! This is what we do when we save a display-referred image:

  • Encode a JPG with an OETF (or inverted EOTF), so a “gamma up”.
  • Decode it on a monitor/display which has an EOTF (most likely a “gamma” of 2.2), so a “gamma down”.

To clarify even further, we could distinguish “gamma” in the context of “grading” from “gamma” in the context of the “display EOTF”. You may refer to Charles Poynton’s FAQ for further explanation.

In the end, we should not obsess over the word “Gamma”; it is just a Greek letter! We might also describe the Gamma function as a Power function with an inverted exponent, as I try to recap in the table below:

Name      | Function         | Description                                                                   | Artist-friendly terminology
sRGB OETF | (Gamma) Encoding | a slight tweaking of the simple 2.2 Gamma function (or 1/2.2 Power function) | “Gamma Up” or “Inverse EOTF”
sRGB EOTF | (Gamma) Decoding | a slight tweaking of the pure 2.2 Power function (or 1/2.2 Gamma function)   | “Gamma Down”
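
A tiny numerical check of the table above, using the simple 2.2 power approximation instead of the exact piecewise sRGB curves:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 5)      # linear light values
encoded = x ** (1 / 2.2)          # OETF when saving the image: "gamma up"
displayed = encoded ** 2.2        # display EOTF: "gamma down"
print(np.allclose(displayed, x))  # True: the round trip is a no-operation
```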

Display Linear

For a very long time, I thought that “Display” and “Linear” were opposite terms, as if they could not co-exist. Until I read Cinematic Color:

Both the motion-picture and computer graphics communities are far too casual about using the word “linear” to reference both scene-referred and display-referred linear imagery. We highly encourage both communities to set a positive example, and to always distinguish between these two image states even in casual conversation.

To clarify, are you referencing the linear light as emitted by a display? Did you use the word “gamma”? Are there scary consequences to your imagery going above 1.0? If so, please use the term display-linear.

Are you referencing high-dynamic range imagery? Is your middle gray at 0.18? Are you talking about light in terms of “stops”? Does 1.0 have no particular consequence in your pipeline? If so, please use the term scene-linear.

A Plea for Precise Terminology.

So it might be a bit of a mind-bender, but what we see on our displays, the actual light they emit, is… “linear”! Or, better put, “display-linear”, since the encoding and the decoding of our images are supposed to be a no-operation!

Example of scene-linear rendering space

The following example uses an sRGB OETF with no LUT. We will see why this is pretty much wrong. I have done a couple of really simple renders where I use mid-gray shaders to illustrate a scene-linear workflow:

  • Non scene-linear workflow: there is no linearization of the value, so 0.5 is used for the rendering.
  • Scene-linear workflow: middle gray is 0.18 in my scene, which makes the calculation correct.

[Figure: scene-linear workflow renders]

I agree that these renders are blown out and are not the best at illustrating my point.

In this very limited experiment, you can see that the light bouncing from the plane onto the sphere is more “realistic” in the scene-linear rendering. From Cinematic Color:

Why is scene-linear preferred for lighting? First, the render itself benefits. Physically plausible light transport […] such as global illumination yield natural results when given scenes with high dynamic ranges. […] Light shaders also benefit from working with scene-referred linear, specifically in the area of light falloff. […] when combined with physically-based shading models, using an r² light falloff behaves naturally.

r² light falloff is just another term for quadratic decay (the inverse-square law).
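
For reference, here is quadratic decay in a few lines of plain Python (an illustration only, not any renderer’s actual code):

```python
from math import pi

def received_intensity(power, distance):
    # Inverse-square law: energy spreads over a sphere of area 4*pi*r^2.
    return power / (4 * pi * distance ** 2)

print(received_intensity(100.0, 1.0))  # ~7.96
print(received_intensity(100.0, 2.0))  # ~1.99 - double the distance, a quarter of the light
```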

“Linear” confusion

Scene-linear confusion

Basically, scene-linear rendering provides the correct math for lighting. There are plenty of websites that show examples of a scene-linear workflow. But if you pay attention, you will notice that they use a bit of “loose” terminology! So take them with a pinch of salt.

In Nuke, this is particularly disturbing: “linear” is listed as a color space, same as sRGB or Rec.709. In the support pages from The Foundry, you can find an explanation: “However, Nuke’s color space isn’t a standard color space.” Guys, this should be written in BIG RED LETTERS!

[Figure: “linear” listed in software color space menus]

These “linear” options have confused me for a very long time.

I think it is worth mentioning that working in scene-linear does not magically give you access to infinite color ranges. You still have to take into account the primaries (or gamut) you are working in.

Display-linear confusion

We are missing one key ingredient in the mix to finalize this section about “linear workflows“, which is how we display our images:

The way that Linear Workflow is often described contains a crucial error. Usually the description implies that the viewing process simply inverts the gamma adjustment applied on the input side. However, this is wrong and will lead to results that are too low in contrast, have too light a mid-gray, and have clipped highlights, therefore requiring artists to compensate by biasing the lighting and material decisions in unnatural ways. In order to correct this, the viewing transform must account for the fact that the input is what color scientists call “scene-referred” whereas the image being viewed on the monitor is “display-referred”. This means that the viewing transform should not be a simple gamma and instead needs to incorporate what is sometimes called a tone mapping step.

By Doug Walker, Technology lead for color science at Autodesk.

Let’s dive in!

Tone mapping

Definition

We are now going to explain the part that converts from scene-linear to display-linear. Most artists know this process as “tone mapping” but it can also be called a “Display Rendering Transform“, a “View Transform” or an “Output Transform” for instance.

Tone mapping is the intentional modification of the relationship between relative scene luminance values and display luminance values usually intended to compensate for limitations in minimum and maximum achievable luminance levels of a particular display, perceptual effects associated with viewing environment differences between the scene and the reproduction, and preferential image reproduction characteristics. Tone mapping may be achieved through a variety of means but is quantified by its net effect on relationship between relative scene luminance values and display luminance values.

Alex Forsythe.

Before going any deeper in this topic, let’s have a look at some images first.

A simple visual example

Check the example below: this is the SAME render displayed differently! I repeat: the EXR from the render engine does not change; it is just the way we view it.

[Figure: the same render through different display transforms]

“Tone mapping” is the only way to properly display a scene-linear render.

If you do not use some “tone mapping”, every time your shot is overexposed you will have to manually compensate for it, and you will NEVER get the right amount of energy in your scene.

Without it, you will decrease your light’s intensity whenever it burns out. BUT you will lose some bouncing from Global Illumination (GI), some Sub-Surface Scattering (SSS)… So you will probably have to create plenty of lights to compensate for the loss of energy. And this is how you end up with a complicated rig of 50 lights!

The important part is that “tone-mapping” is not an option. It is compulsory to use a proper display transform to review your work properly. From Cinematic Color:

[…] for those using scene-linear workflows remember to use a viewing transform that goes beyond a simple gamma model. Friends don’t let friends view scene-linear imagery without an “S-shaped” view transform.

Jeremy Selan.

Filmic look

As a student, I was obsessed with getting a filmic look and pure blacks without making my renders look dirty. The only solution to achieve this is to use some “tone mapping”. It will give your images proper contrast and nicely roll off any pixels over 1. Here is what a “filmic” s-curve looks like:

[Figure: a filmic s-curve]

All the scene-referred values between 1 and 10 (X axis) will be displayed in a range from 0.8 to 1 (Y axis).

From Cinematic Color: most tone renderings map a traditional scene gray exposure to a central value on the output display. […] one adds a reconstruction slope greater than 1:1 to bump the midtone contrast. Of course, with this increase in contrast the shadows and highlights are severely clipped, so a rolloff in contrast of lower than 1:1 is applied on both the high and low ends to allow for highlight and shadow detail to have smooth transitions. With this high contrast portion in middle, and low contrast portions at the extrema, the final curve resembles an “S” shape as shown below.
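
As a crude illustration of the rolloff idea, here is a toy Reinhard-style global operator. It only shows the highlight compression, not the full S-shape or any of the production display transforms discussed next:

```python
import numpy as np

def toy_tonemap(x):
    # Maps [0, inf) smoothly into [0, 1), compressing highlights
    # instead of clipping them at 1.0.
    return x / (1.0 + x)

hdr = np.array([0.0, 0.18, 1.0, 10.0])  # scene-referred values
print(toy_tonemap(hdr))                 # 10.0 rolls off to ~0.91 instead of clipping
```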

Lookup table (LUT)

“Display transforms” generally contain more than a “simple” tone-mapping operation. The famous s-curve is only one part of a complex series of operations. And these algorithms can sometimes be heavy and need to be baked into a Lookup Table (LUT) for real-time computation. From Cinematic Color:

Lookup tables (LUT) are a technique for optimizing the evaluation of functions that are expensive to compute and inexpensive to cache.

LUTs are also convenient for sharing data without giving away any Intellectual Property (“IP”).

Thanks to a list of values (a table), 1D LUTs make it possible to display high-dynamic range (HDR) images on standard-dynamic range (SDR) monitors. Our monitors and film projectors are not able to display the entire HDR range (hence the name “display LUTs”).

LUT description

There are several kinds of LUT, but I will describe here the most common:

  • 1D LUT (left image): it contains only one column of numbers, as it affects R, G and B pixel values the same way. They generally contain a transfer function and an S-curve shape (I used the vd16.spi1d from “Sony Pictures Imageworks Animation” as an example). A 1D LUT will not change the gamut of your image.
  • 3D LUT (right image): it contains three columns of numbers, as it affects R, G and B pixel values differently. They can be used to map one color space to another. I used the Rec. 709 for ACEScg Maya.csp from the ACES 1.2 (Academy Color Encoding System) config as an example. A 3D LUT allows you to modify the gamut.

From Cinematic Color: A lookup table (LUT) is characterized by its dimensionality, that is, the number of indices necessary to index an output value.

[Figure: 1D and 3D LUT file contents]

I have shortened the screenshots for convenience. These files actually have hundreds of thousands of lines.
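
Conceptually, applying a 1D LUT is just interpolation into that column of numbers. A sketch with a made-up five-entry table (real LUTs such as vd16.spi1d have far more entries):

```python
import numpy as np

# A hypothetical 1D LUT: five output values sampled over the input range [0, 1].
lut = np.array([0.0, 0.25, 0.60, 0.85, 1.0])
positions = np.linspace(0.0, 1.0, len(lut))

def apply_1d_lut(pixels, lut, positions):
    # Piecewise-linear interpolation between table entries,
    # applied identically to the R, G and B channels.
    return np.interp(pixels, positions, lut)

print(apply_1d_lut(np.array([0.1, 0.5, 0.9]), lut, positions))
```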

We could also divide LUTs into two categories (although it could be considered an arbitrary distinction):

  • Technical LUTs: like for instance a “Cineon Log to Linear” spi1d LUT.
  • Artistic LUTs: such as a “Look” or “Look Modification Transform” (LMT).

There are also several file formats available for LUTs: spi1d, spi3d, csp, cube…

OCIO (OpenColorIO)

To load a LUT into Maya, Nuke, Guerilla Render or Mari, we will use an OCIO configuration. The OCIO config is the easiest way to share a LUT between different programs. And guess what? There are several OCIO configs available for free to help you set up your Color Management Workflow.

OCIO is an open source framework for configuring and applying color transformations. […] It is designed for rolling out complex color pipeline decisions to lots of people who might not individually understand what’s going on (ie: not colorists).

Great OCIO definition from this post.

Here is what the OCIO configs from Sony Pictures Imageworks (left side) and ACES 1.2 (right side) look like:

[Figure: OCIO config files]

I have also shortened the files on purpose for display convenience.

Here are a few general observations about OCIO Configs:

  • Different roles like color picking, texture painting or compositing are predefined. Watch out!
  • Different displays are available such as DCI-P3, sRGB or Rec.709.
  • OCIO configs are editable in text editors and generally contain some description to help you.

I have written a couple of articles about “OCIO and Display Transforms” and “Picture formations” if you are keen on exploring this topic. Watch out, it is a rabbit hole!
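
For the curious, this is roughly what applying a color space conversion through OCIO looks like from Python. A sketch using the OCIO v2 bindings; the config path is hypothetical, and the color space names depend entirely on your config (the ones below are from the ACES 1.2 config):

```python
import PyOpenColorIO as OCIO

# Load a config (the path is a placeholder), or use OCIO.GetCurrentConfig()
# to pick up the one pointed to by the $OCIO environment variable.
config = OCIO.Config.CreateFromFile("/path/to/config.ocio")

# Build a processor converting from the rendering space to a display space.
processor = config.getProcessor("ACES - ACEScg", "Output - sRGB")
cpu = processor.getDefaultCPUProcessor()

# Transform a scene-linear mid-gray pixel for display.
print(cpu.applyRGB([0.18, 0.18, 0.18]))
```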

Look development and LUTs

It is crucial that anyone doing look development or rendering uses the same display transform. Artists from the surfacing, grooming, FX, DMP, lighting and compositing departments need to see the same thing. There are plenty of solutions for Mari, Substance Painter or Photoshop to make our work consistent.

I could make an analogy here: we generally need to test our look development assets under different HDRIs (or lighting conditions) to make sure they react well. The same goes for Display Transforms. It is not uncommon to work on the assets under a “neutral” studio LUT and check how they look under the “show” LUT.

The OCIO configs are very handy because you can share them between software packages. You can use the official ones or build your own in Python (or manually). LUTs arrived pretty late in animated movies compared to visual effects (VFX), but they are kind of a game changer in the industry (Naughty Dog has published some really good papers about them).

Output recommendations

Format

We are all set for proper Color Management and ready to make some beautiful renders. But how do we keep all these values, especially those above 1, in a proper format? We can thank ILM for inventing OpenEXR.

OpenEXR allows you to save in 16-bit half float to preserve all the “raw” data. You will also be able to write Arbitrary Output Variables (AOVs) for your compositing process. You can check the Arnold documentation about EXR files.

From Cinematic Color: It is important to remember when dealing with float-16 data, that the bits are allocated very differently from integer encodings. Whereas integer encodings have a uniform allocation of bits over the entire coding space, the floating-point encodings have increased precision in the low end, and decreased precision in the high end.

It is also worth mentioning that 16- or 32-bit files do not magically give you access to infinite color ranges. You still have to take into account the primaries (or gamut) you are working in.

We did render some AOVs in 32 bits on Playmobil: Ncam, Pcam and Pworld for better precision.

For Z Depth and position.

There is also a common misconception about the weight of EXR files. In my experience, they are actually lighter than most formats, such as TGA, TIF, PNG and sometimes even JPG! Just use a proper compression such as “DWAB” (developed by DreamWorks Animation), for example.
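
As an example, writing a half-float EXR with DWAB compression through OpenImageIO’s Python bindings might look like this (a sketch; the filename and resolution are arbitrary placeholders):

```python
import numpy as np
import OpenImageIO as oiio

# Scene-referred pixels: note the values above 1.0 that we want to keep.
pixels = (np.random.rand(64, 64, 3) * 4.0).astype(np.float32)

spec = oiio.ImageSpec(64, 64, 3, "half")  # 16-bit half-float storage
spec.attribute("compression", "dwab")     # DreamWorks' lossy DWAB codec

out = oiio.ImageOutput.create("render.exr")
out.open("render.exr", spec)
out.write_image(pixels)  # float32 data is converted to half on write
out.close()
```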

Range

Finally we have to take care of our dynamic range. Basically you want to keep as much data as possible. There are two settings to take care of:

  • Indirect Clamp Value: Clamp the GI sample values.
  • AA Clamp Value: Clamp the pixel values.

Clamping the GI sample values will help you to reduce fireflies and noise. The lowest value I ever used was 10 and it helped me greatly. Clamping the pixel values will help your anti-aliasing for a smoother result.

If you have a very dark pixel next to an overexposed pixel, you do NOT want the overexposed pixel to blow out every other pixel next to it. This is why we clamp our renders.

It may also depend on your scene and your render engine. Some studios clamp their renders at 50. I have a couple of times tried values as low as 30… I guess you need to test for yourself, especially on HDR displays! The Arnold developers explain the topic of clamping pretty well.

We will see in chapter 9 why clamping pixel values too low can be an issue.

Summary

We have seen some important and complex notions in this chapter. Hopefully I was able to explain them without causing too much headache. Here is a quick recap:

  • A color space has three components: three primaries, a white point and a transfer function.
  • The difference between Transfer Functions (EOTF and OETF), a Gamma function and a Power function.
  • The different properties of a color space: scene-referred, display-referred and viewing environment.
  • A presentation of the Industry standards: sRGB, Rec.709, DCI-P3 and Rec.2020.
  • The importance of scene-linear and display-linear terminology, and the “linear” confusion.
  • The benefits of a scene-linear workflow and the importance of tone-mapping.
  • The use of different LUTs through OpenColorIO (OCIO) configuration files.
  • And finally, how to save all our High Dynamic Range data into a proper file format: OpenEXR.

We have also seen that your rendering space (or scene-referred) should always have a linear transfer function and that you should NEVER display a linear image without a proper display transform.

Conclusion

Color Management is a never-ending topic that I continue to explore on a daily basis. I have tried to introduce it to you in an easily understandable post. Good color management means seeing the same thing all along the color pipeline, across different software packages.

If your colors or your contrast shift when you go from Substance Painter to Maya, it may be worth analyzing the issue. A good color pipeline is all about controlling the color space from the first sketches (probably done by the Art Department) to final delivery. Consistency is key.

This has been a great and long journey through color with my former colleague Christophe Verspieren. But it doesn’t end here. That would be too easy. If you are still up for it, we can move on to chapter 1.5 about the Academy Color Encoding System (ACES).