Chapter 1: Color Management


You cannot talk about cinematography without addressing the question of Color Management. It is essential to know which color space you are working in and what your display target is. Chapters 1 and 1.5 are the only technical chapters of the book; we will focus on the artistic side in the next chapters. Don’t be scared!

Sure, you could work in full sRGB with no LUT and still get a decent result… But let’s not go down that path.

Yes, color management is a pain in the arse, but it will give you strong foundations to build proper lighting on later. Lighting in computer graphics is so connected to technology that a good understanding of these concepts is important.

There are some good introductory courses out there, such as Cinematic Color, but it is always a bit frustrating for me to get lost in mathematical formulas by page 9! So I have tried to keep this chapter as simple and artist-friendly as I could. Thanks for reading!

If you don’t feel like starting with a technical chapter, I don’t blame you. You can skip directly to chapter 2.

But maybe it is worth the effort…

The first and most important decision

Based on my experience, the choice of a Color Management Workflow (CMW) is the first decision to make when you start a project, because every single artistic decision will be based on it:

  • Albedo values
  • Lights’ exposures
  • Look development (texturing and surfacing)

The trailer below is an example of thousands of hours of hard work by hundreds of artists wasted by wrong color management. Look at the lava: it is clamped!

Introductory Quote

Some of my data comes from this article by Thomas Mansencal. As it is a bit technical, I have tried to simplify it for readers who are not familiar with this topic. Let’s start with this great quote from Mark D. Fairchild:

“Why should it be particularly difficult to agree upon consistent terminology in the field of color appearance? Perhaps the answer lies in the very nature of the subject. Almost everyone knows what color is. After all, they have had firsthand experience of it since shortly after birth. However, very few can precisely describe their color experiences or even precisely define color.”

Optical Illusions

There are three things to take into account when we talk about color:

  • The eye
  • The brain
  • The subject

Here are a few examples that will show you that our brain can be easily tricked and that we should speak about color with humility.


You can find plenty of optical illusions on the internet, like our last example: is the dress blue or white?

The Dress has also been studied by scientists!

Human Visual System (HVS)

The human eye

The first thing to know is that the human eye is an incredibly complex and advanced piece of technology. We poorly try to replicate its reconstruction process with cameras and screens. How does it work?


I have copied most of this text from the websites linked above.

The cornea is the clear, transparent front covering which admits light. Its refractive power bends the light rays in such a way that they pass freely through the pupil. The pupil is the opening in the center of the iris and works like a shutter in a camera. It is an adjustable opening that controls the intensity of light permitted to strike the crystalline lens. It has the ability to enlarge and shrink, depending on how much light is entering the eye.

After passing through the iris, the light rays pass through the eye’s natural crystalline lens. This clear, flexible structure works like the lens in a camera, shortening and lengthening its width in order to focus light rays properly. The crystalline lens focuses light through the vitreous humor, a dense, transparent gel-like substance that fills the globe of the eyeball and supports the retina.

The retina functions much like the film in a camera. It receives the image that the cornea focuses through the eye’s internal lens and transforms this image into electrical impulses that are carried by the optic nerve to the brain. Retinas are made of photoreceptors, which come in two kinds: rods and cones.

We will come back to rods and cones later.

The mantis shrimp eye

I thought it was interesting to discuss the human eye with the creator of Guerilla Render, Benjamin Legros, who explained to me: our eyes only see in red, green and blue. They are crap!

Nothing compared to the mantis shrimp, which has some pretty amazing eyes: it has twelve types of photoreceptors, when humans only have three! It can even see UV and infrared, like the Predator! I really liked this original take on our trichromatic system.


The mantis shrimp is a reference for color scientists.

Mantis shrimps, or stomatopods, are marine crustaceans of the order Stomatopoda. They are among the most important predators in many shallow, tropical and subtropical marine habitats. However, despite being common, they are poorly understood, as many species spend most of their lives tucked away in burrows and holes.

The human brain

Our brain also plays an important part in our visual system. It is responsible for managing all this data. When the image hits the retina, it is actually upside down. The brain puts it back the right way up:

What we really see is our mind’s reconstruction of objects based on input provided by the eyes, not the actual light received by our eyes.

From this article.

What is light?

To define color, you must first define light because no color is perceptible without light.

  • Light is a type of energy.
  • Light energy travels in waves.

Some light travels in short, “choppy” waves. Other light travels in long, lazy waves. Blue light waves are shorter than red light waves.

From Cinematic Color: a study of color science begins with the spectrum. One measures light energy as a function of wavelengths. […] Light towards the middle of this range (yellow-green) is perceived as being most luminous.


Light is electromagnetic radiation which is visible to the human eye.

All light travels in a straight line unless something gets in the way and does one of these things:

  • Reflect it (like a mirror).
  • Refract it (bend like a prism).
  • Scatter it (like molecules of the gases in the atmosphere).

We could also add diffraction to this list, even if it is a more complex phenomenon.

Light is the source of all colors. It is actually stunning how important light is in our lives. When a lemon appears yellow, it is because its surface reflects yellow wavelengths, not because it is intrinsically yellow. This confused me a lot in the past, but pigments appear colored because they selectively reflect and absorb certain wavelengths of visible light.

Most of these notions come from Wikipedia and you can find plenty of articles on this topic online. Here is a more technical description:

Light is electromagnetic radiation which is visible to the human eye. Visible light is usually defined as having wavelengths in the range of 400-700 nanometres, between the infrared (with longer wavelengths) and the ultraviolet (with shorter wavelengths). These waves are made of photons.

From Light to Colour

So how do we go from light spectra to colour? I cannot explain the Color Matching Functions more clearly than Jeremy Selan, so I’ll just quote him:

The human visual system […] is trichromatic. Thus, color can be fully specified as a function of three variables. Through a series of perceptual experiments, the color community has derived three curves, the CIE1931 color matching functions, which allow for the conversion of spectral energy into a measure of color.

From the amazing Cinematic Color.

The CIE 1931 Color Matching Functions convert spectral energy distributions into a measure of color, XYZ. XYZ predicts if two spectral distributions appear identical to an average human observer. […] When you integrate a spectral power distribution with the CIE 1931 curves, the output is referred to as CIE XYZ tristimulus values.

Metamerism is at the basis of everything.
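To make the idea concrete, here is a toy Python sketch. The box-shaped sensitivity curves below are made up for illustration (they are NOT the real CIE 1931 curves), but they show the principle: two different spectra can integrate to the same tristimulus values, which is metamerism.

```python
import numpy as np

wavelengths = np.arange(400, 700, 10)  # nm, 30 samples

def box(lo, hi):
    """A crude box-shaped sensitivity curve between lo and hi nm (toy CMF)."""
    return ((wavelengths >= lo) & (wavelengths < hi)).astype(float)

# Three made-up "color matching functions" standing in for X, Y, Z.
cmf = np.stack([box(560, 700), box(480, 560), box(400, 480)])

# Spectrum A: flat. Spectrum B: spiky, but with the same total energy
# inside each sensitivity band, so the integrals come out identical.
spd_a = np.ones_like(wavelengths, dtype=float)
spd_b = np.zeros_like(wavelengths, dtype=float)
for lo, hi in [(400, 480), (480, 560), (560, 700)]:
    band = (wavelengths >= lo) & (wavelengths < hi)
    spd_b[np.argmax(band)] = band.sum()  # dump the band energy into one spike

xyz_a = cmf @ spd_a  # integrate each spectrum against the toy CMFs
xyz_b = cmf @ spd_b

print(xyz_a, xyz_b)  # same "tristimulus" values, very different spectra
```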

It took me a while to connect the dots and I do not think I could have figured this out without this amazing diagram:


When one converts all possible spectra into x,y,Y space and plots x,y, they fall into a horse-shoe shaped region on the chromaticity chart.

Let’s recap! So far we have seen that:

  • We, poor humans, can be tricked very easily.
  • The Human Visual System (HVS) is incredibly complex.
  • Light and Colour are two inseparable subjects.
  • Spectral wavelengths were “converted” to chromaticity diagrams (the horse-shoe shape) for the first time in 1931.

The CIE Chromaticity Diagrams from 1931 and 1976

CIE XYZ is a color space defined by the International Commission on Illumination (CIE) in 1931, and its chromaticity diagram is the first step to describing colors based on human vision.


The two main CIE diagrams from 1931 and 1976.

CIE scientists met in 1931 to represent colors as we see them. The CIE XYZ chromaticity diagram was born! They met 45 years later to improve it: CIE u’v’ was created. Even if the CIE u’v’ diagram from 1976 is a more “perceptually uniform” variation, the one from 1931 is still the most used in the color community. Old habits die hard.

The research of David L. MacAdam (1942) showed that the CIE 1931 xy chromaticity diagram did not offer perceptual uniformity. What this means is that the relation between the measurable chromaticity of a color and the error margin in observation was not consistent within the CIE 1931 xy chromaticity diagram.

From this amazing article.

Here are two important notions about chromaticity diagrams:

  • These chromaticity diagrams are the visualization of all chromaticities perceivable by the human eye.
  • Two axes are available to give each chromaticity a unique coordinate on the diagram.

From Cinematic Color: the region inside the horse-shoe represents all possible integrated color spectra; the region outside does not correspond to physically-possible colors.

Horseshoe- or tongue-shaped area? It really depends. It could be a windsurf sail as well.

The CIE XYZ serves as a standard reference against which many other color spaces are defined. Keep these diagrams in mind because we are going to constantly refer to them later on.

Chromaticity or Color?

At this point, you may ask yourself: what is the difference between a chromaticity and a color? It is a fair question. Basically, every time we mention the terms “color” or “hue”, we enter the field of “perception”. This means that a color only exists when it is perceived by a human being (or what we call a “Standard Observer”).

On the other hand, a chromaticity is a stimulus. It does not include any notion of someone looking at it and perceiving something. And yes, as you have probably guessed, the same stimulus can generate four different perceived colours. This is demonstrated very clearly in this Siggraph 2018 course.

Interestingly enough, the Commission Internationale de l’Éclairage (CIE) does list two entries for the word “colour”:

RGB Color space and its components

An RGB color space is composed of all the colors available in our workspace. It is defined by three components:

  • The primaries
  • The whitepoint
  • The transfer functions

This is what an RGB color space looks like:


This screenshot comes from colour-science.

In the image above, there are three important things to notice:

  • RGB color spaces actually exist in 3D space (screen left).
  • To make it easier to visualize, we generally only use a 2D slice, like a top view (bottom screen right). In this example you can see the Rec. 709 color space against the CIE 1931 diagram.
  • Notice the black points? They represent the pixel values of the image (top screen right) within an RGB color space (in this case Rec. 709). This is called plotting the gamut.


“The primaries chromaticity coordinates define the gamut (the triangle of colors) that can be encoded by a given RGB color space.”

In other words, the primaries are the vertices of the triangle. We are going to complicate things a bit… Pay attention! Each of these vertices has a pure RGB value expressed in its own color space:

  • Red = 1, 0, 0
  • Green = 0, 1, 0
  • Blue = 0, 0, 1

But each of these vertices has a unique xy coordinate on the CIE diagram. That is how we are able to compare them. The only way to define a color (independently from its luminance) in a universal way is to give its xy coordinates.
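As an illustration, the xy coordinates of the sRGB/BT.709 primaries can be computed from the standard (rounded) sRGB-to-XYZ matrix; a minimal sketch:

```python
import numpy as np

# Rounded sRGB-to-XYZ matrix for a D65 white point, from the sRGB spec.
SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def xy_chromaticity(rgb):
    """Project a linear sRGB triplet to its xy chromaticity coordinates."""
    X, Y, Z = SRGB_TO_XYZ @ np.asarray(rgb, dtype=float)
    total = X + Y + Z
    return X / total, Y / total

# Each pure primary in its own space maps to a universal xy coordinate.
for name, rgb in [("red", (1, 0, 0)), ("green", (0, 1, 0)), ("blue", (0, 0, 1))]:
    x, y = xy_chromaticity(rgb)
    print(f"sRGB {name} primary: x={x:.3f} y={y:.3f}")
# red   -> (0.640, 0.330)
# green -> (0.300, 0.600)
# blue  -> (0.150, 0.060)
```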

In this chart, you can see the coordinates for different color spaces:


These are the most common RGB color spaces.

For instance, there is nothing more saturated on earth than the BT.2020 primaries or lasers; they are both located on the Spectral Locus (the border of the chromaticity diagram). The closer a chromaticity is to the Spectral Locus, the more saturated it is.

Here is a conversion table of some primaries to illustrate this relationship:

                             sRGB/BT.709 RGB values         BT.2020 RGB values
  sRGB/BT.709 red primary    1, 0, 0                        0.62742, 0.06910, 0.01639
  sRGB/BT.709 green primary  0, 1, 0                        0.32928, 0.91594, 0.08803
  sRGB/BT.709 blue primary   0, 0, 1                        0.04331, 0.01136, 0.89559
  BT.2020 red primary        1.66045, -0.12455, -0.01815    1, 0, 0
  BT.2020 green primary      -0.58762, 1.13290, -0.10059    0, 1, 0
  BT.2020 blue primary       -0.07284, -0.00835, 1.11874    0, 0, 1

This table clearly shows what negative pixel values are: out-of-gamut values. The BT.2020 primaries are outside the sRGB/BT.709 gamut.
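The table above can be reproduced with the standard (rounded) BT.709-to-BT.2020 conversion matrix; a minimal numpy sketch:

```python
import numpy as np

# Rounded BT.709-to-BT.2020 conversion matrix (as published in ITU-R BT.2087).
BT709_TO_BT2020 = np.array([
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
])
# The reverse direction is simply the inverse matrix.
BT2020_TO_BT709 = np.linalg.inv(BT709_TO_BT2020)

# A BT.709 primary fits inside BT.2020: all components stay positive.
print(BT709_TO_BT2020 @ [1.0, 0.0, 0.0])   # ~[0.627, 0.069, 0.016]

# A BT.2020 primary does NOT fit inside BT.709: negative components
# appear, which is exactly what "out of gamut" means.
print(BT2020_TO_BT709 @ [1.0, 0.0, 0.0])   # ~[1.66, -0.125, -0.018]
```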

Primaries comparison

You should now clearly see that the different primaries of each color space (as shown in the image below):

  • Have different xy coordinates.
  • Have the same RGB values.
  • Are different stimuli.

I used both Chromaticity Diagrams (1931 and 1976) for comparison.


The whitepoint

The whitepoint defines the white color for a given RGB color space. Any set of colors lying on the neutral axis passing through the whitepoint, no matter their luminance, will be neutral to that RGB color space.


D65 corresponds roughly to the average midday light in Western Europe.

There are different types of white points. When a white point is supplied with a set of primaries, it is used to balance the primaries. For instance, the sRGB and Rec.2020 color spaces have a white point of D65. This is part of their intrinsic characteristics.

But a viewing environment will also have a white point which may be the same or different. So there is a notion of a calibration white point and a creative white point. An example of where they are different is for digital cinema which is calibrated to the DCI white point, but often movies will use another white point (D60 or D65) as the creative white point.

The neutral axis is also called the achromatic axis.

An RGB color space can have different whitepoints depending on its usage context. It can be a creative choice:

  • If you wish to simulate the light quality of a standard viewing booth, choose D50. Selecting a warm color temperature such as D50 will create a warm-colored white.
  • If you wish to simulate the quality of daylight at noon, choose D65. A higher temperature setting such as D65 will create a white that is slightly cooler.
  • If you prefer cooler daylight, choose D75.

Transfer functions (OETF and EOTF)

The transfer functions perform the mapping between the linear light components (tristimulus values) and a non-linear R’G’B’ video signal (most of the time for coding optimisation and bandwidth performance).

Okay. Here things get a bit more complicated… We have seen earlier what the tristimulus values are. But what about coding optimization and bandwidth performance? Here is the answer:

From Substance PBR guide : The Human Visual System (HVS) is more sensitive to relative differences in darker tones rather than brighter tones. Because of this, not using a gamma correction is wasteful as too many bits will be allocated to tonal regions where the HVS cannot distinguish between tones.

Transfer functions (or gamma) help encode the bits more efficiently, thus increasing performance. There are two transfer functions:

  • OETF: the opto-electronic transfer function converts linear scene light into the video signal, typically within a camera. Used when you shoot or scan (for encoding).
  • EOTF: the electro-optical transfer function converts the video signal into the linear light output of the display. Used when you send a signal to the screen (for decoding).
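Using the sRGB curves as an example, both directions can be written down straight from the sRGB specification (IEC 61966-2-1); a minimal sketch of the encoding/decoding pair:

```python
# The sRGB transfer functions: a linear toe for very dark values, then a
# 2.4 power segment. Encoding and decoding are exact inverses, so a
# round trip is lossless.

def srgb_encode(c):
    """Linear light -> non-linear sRGB signal (encoding direction)."""
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * c ** (1 / 2.4) - 0.055

def srgb_decode(v):
    """Non-linear sRGB signal -> linear light (the EOTF, decoding direction)."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

mid_gray = 0.18
encoded = srgb_encode(mid_gray)
print(encoded)               # ~0.461: scene mid-gray lands near half signal
print(srgb_decode(encoded))  # ~0.18: decoding inverts encoding
```

Notice how the encoding spends roughly half of the signal range on values below mid-gray, which is exactly the bit-allocation argument from the Substance PBR guide quote above.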

There are also logarithmic transfer functions but that’s another topic.

Common mistakes

The important things to notice are that:

  • “Linear” is not a color space.
  • “Linear” is not necessarily wide gamut and certainly not infinite.
  • It is a transfer function, which is 100% gamut dependent.

That is a very common mistake, even among veteran VFX supervisors in the industry. Secondly, to simplify, let’s say that these transfer functions are related to gamma and the TV signal:

A common belief is that a non-linear Optical to Electrical Transfer Function is needed because of the non-linearity of the human visual system. It is true that the human visual system is very complex and lightness perception is non-linear, approximating to a cube-root power law.

This is a similar explanation as the one from the Substance PBR guide.

This brings us back to our first point: human vision is incredibly complex. And we need these transfer functions for our visual comfort.

One of the biggest fights I ever had was to convince my supervisors that linear was NOT some kind of infinite color space.

In this case infinite means access to all colors of the spectrum.

Properties of a color space

Two properties are also necessary in order to properly interpret a set of RGB values : image state and viewing environment.

Image State

Image state is defined in ISO 22028-1, which is basically the international standard for how to define an RGB color space.


Scene-referred

From Charles Poynton: scene-referred means there is a documented mathematical mapping between the original scene light and the values recorded in the pixels/data. It is not exactly the same as “scene-linear”, even if these notions can overlap.

A scene referred image is one whose light values are recorded as they existed at the camera focal plane before any kind of in-camera processing. These linear light values are directly proportional to the objective, physical light of the scene exposure. By extension, if an image is scene referred then the camera that captured it is little more than a photon measuring device.

In this quote, scene-referred and scene-linear are the same.


Display-referred

From Charles Poynton: display-referred means that the image data has a documented mathematical mapping from image code value to absolute colorimetric light at a display.

A display-referred image is one defined by how it will be displayed. Rec.709, for example, is a display-referred color space, meaning that the contrast range of Rec.709 images is mapped to the contrast range of the display device, an HD television.

Viewing environment

As surprising as it sounds, yes, the viewing environment is important when it comes to color spaces. The room you are sitting in, the light that you are using, the color of your walls… All of this has an influence on your viewing conditions, and for BT.709 they are defined here.

The reference viewing environment is intended to provide an environment which can be replicated from one facility to another. It defines the Room Illumination, the Chromaticity background, the Observation angle and the Display characteristics and adjustment.

This article and this one by Autodesk also explain this topic.

Industry Standards

Screen manufacturers and Cinema majors have agreed on some standards. Their characteristics change the way our images are displayed and rendered. Here are the five most important for us:

  • sRGB for internet, Windows and camera photos.
  • Rec. 709 has the same primaries as sRGB but differs in transfer function/gamma. This is because the target use of Rec. 709 is video, which is supposed to be viewed in a dim surround.
  • DCI-P3 for cinema projectors.
  • Rec. 2020, also called UHD TV, the future of colorimetry.
  • AdobeRGB for printing projects.

A green BT.709 primary displayed on a BT.709 monitor will NOT look like a green BT.2020 primary displayed on a BT.2020 monitor. They are completely different chromaticities!

What are these color spaces for? Two things:

  • Rendering Space: also called working space, for our lighting calculations (the scene-referred state).
  • Display Space: should match our monitors, which obviously need proper calibration (the display-referred state).

Rendering and Display spaces

The rendering and display spaces do NOT have to be the same. It is really important to understand the distinction between those two. In CG, the rendering space will always have a linear transfer function. We will detail the scene-linear workflow in a bit.

For the Display Space, when you work on a project, you have to ask yourself: what is our display target? Are our images going to be seen on a smartphone, on a TV set or in a theater? Basically, the Display Space should match the monitors used at your company.

This is where Color Management really comes in handy. In a CG workflow, it is vital to know for every single step which color space you are working in.

When you buy a monitor, you should check its coverage of these color spaces. For example, the monitors used at a famous Parisian studio only cover 93.7% of the P3 gamut. That’s too low! We will never perfectly match the projectors from theaters. There is no point in working in a Display Space that does not exactly match the specifics of your monitor or the needs of your project.

P3 is a common RGB color space for digital movie projection, defined by the Digital Cinema Initiatives (DCI) organization and published by the Society of Motion Picture and Television Engineers (SMPTE).

What is sRGB?

As explained in Thomas Mansencal’s article, sRGB is still a confusing notion for many artists. What is sRGB?

  • Some say: “It is a color space!”
  • Others reply: “It is a transfer function!”

sRGB is actually both! It is a color space that includes a transfer function, which is a slight tweaking of the simple gamma 2.2 curve. This is why you can also use gamma corrections (0.4545 / 2.2) to get in and out of linear.

From the Substance PBR guide : “It is critically important to disambiguate the sRGB OETF from the sRGB color space; the OETF is only one of the three components that make an RGB color space.”
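To see just how slight that tweaking is, here is a small numpy comparison of the piecewise sRGB curve against a plain 2.2 power function:

```python
import numpy as np

# Sample both curves over [0, 1] and measure where they disagree most.
x = np.linspace(0.0, 1.0, 1001)

# Piecewise sRGB encoding: linear toe, then a 2.4 power segment.
srgb = np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1 / 2.4) - 0.055)

# Plain "gamma 2.2" approximation.
gamma22 = x ** (1 / 2.2)

diff = np.abs(srgb - gamma22)
print(f"max difference: {diff.max():.3f} at x={x[diff.argmax()]:.3f}")
# -> the maximum gap is ~0.03, down in the dark toe region
```

So the 0.4545 / 2.2 shortcut is fine for quick checks, but the two curves are not identical, which matters for precise round trips.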

Scene-Linear workflow

This is, for me, the biggest step between student work and professional work. A scene-linear workflow is compulsory in the industry. I do not know any respectable studio that does not use it.

Jeremy Selan: it is often preferable to carry over those aspects of the traditional film workflow which are beneficial even in a fully-CG context. We thus advocate thinking of an animated feature as creating a “virtual movie-set” within the computer, complete with a virtual camera, virtual negative stock, and virtual print stock.


In this example, the input is display-referred and gets linearized for the lighting calculation. Then, we take the scene-referred output and use tone-mapping to view it properly.

As seen earlier, monitors have a gamma correction to display images correctly. It has been an industry standard for visual comfort since the days of CRT monitors (because their light intensity does not change linearly with the voltage).

In most cases, if a computer runs the Windows operating system, we can achieve close to ideal colors by using a monitor with a gamma value of 2.2. This is because Windows assumes a monitor with a gamma value of 2.2, the standard gamma value for Windows.

From Eizo’s website. We will see in the next paragraph why this terminology has confused me.

We correct this problem by baking an inverted EOTF into our display-referred images. And this is why you need to linearize them for rendering. This is a very important point: your renders will NEVER be correct if you do not use a linear transfer function for your rendering space. Rendering (and displaying) in plain sRGB is just wrong!

Display EOTF: is it “up” or “down”?

So when we say that our displays have a gamma of 2.2 and that we save our display-referred images in “sRGB”, does it mean we apply the same gamma correction twice? Well, actually, no.

It is very well explained in this presentation by John Hable: the gamma 2.2 of our displays (the EOTF) is actually a “Gamma Down”! Not a “Gamma Up”! This was very unsettling for me, since a gamma of 2.2 in Nuke is a “Gamma Up”, and I had gotten used to that behaviour over the past fifteen years…

But for displays, it is the other way around. So no, we do not apply the same gamma correction twice; that would be completely broken! This is what we do when we save a display-referred image:

  • Encode a JPG with an OETF (or inverted EOTF), so a “Gamma Up”.
  • Decode it on a monitor/display which has an EOTF (most likely a “gamma” of 2.2), so a “Gamma Down”.

OETF and EOTF: a no-operation?

In an ideal world, these two operations cancel out and we would call the whole thing a “no-operation”. But unfortunately this is not exactly true, for many historical reasons. I don’t want to start another Gamma / sRGB EOTF “war”, so I will not go further on this topic.

But basically, we are still unsure whether our monitor’s EOTF should be the sRGB piecewise function or a pure power function (what people misleadingly call “gamma”). Both exist in the real world. Nobody explains it better than Daniele Siragusano in this video:

It is interesting though to observe that we used to encode with an OETF and decode with an EOTF that would not match, to take flare/surround compensation into account. But in 2022 we would rather let the Display Transform handle that and have the whole encoding/decoding phase be a “no-operation”.

In the end, we should not obsess over the word “gamma”; it is just a Greek letter! You could also see the gamma function as a power function with an inverted exponent, as I try to recap in the table below:

  Name               Function  Description                                                                    Artist-friendly terminology
  sRGB OETF (Gamma)  Encoding  a slight tweaking of the simple 2.2 Gamma function (or 1/2.2 Power function)   “Gamma Up” or “Inverse EOTF”
  sRGB EOTF (Gamma)  Decoding  a slight tweaking of the pure 2.2 Power function (or 1/2.2 Gamma function)     “Gamma Down”

Display Linear

And this little paragraph about displays’ EOTF leads me to another misconception I had. For me, “display” and “linear” had always been opposite terms. They could not co-exist. Until I read Cinematic Color:

Both the motion-picture and computer graphics communities are far too casual about using the word “linear” to reference both scene-referred and display-referred linear imagery. We highly encourage both communities to set a positive example, and to always distinguish between these two image states even in casual conversation.

To clarify, are you referencing the linear light as emitted by a display? Did you use the word “gamma”? Are there scary consequences to your imagery going above 1.0? If so, please use the term display-linear.

Are you referencing high-dynamic range imagery? Is your middle gray at 0.18? Are you talking about light in terms of “stops”? Does 1.0 have no particular consequence in your pipeline? If so, please use the term scene-linear.

Finally, for those using scene-linear workflows, remember to use a viewing transform that goes beyond a simple gamma model. Friends don’t let friends view scene-linear imagery without an “S-shaped” view transform.

A Plea for Precise Terminology.

So it might be a bit of a mind-bender, but what we see on our displays, the actual light they emit, is… “linear”! Or better put, “display-linear”, since the encoding and the decoding of our images are supposed to cancel each other out.

Example of scene-linear rendering space

The following example uses an sRGB OETF with no LUT, as most rendering software offers by default. We will see why this is pretty much wrong. I have done a couple of really simple renders using mid-gray shaders to illustrate a scene-linear workflow:

  • Non scene-linear workflow: there is no linearization of the value, so 0.5 is used for the rendering.
  • Scene-linear workflow: middle gray is 0.18 in my scene, which makes the calculation correct.
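A quick sketch of the mistake: the swatch value 0.5 is display-referred (sRGB-encoded), and decoding it reveals the much lower linear value a renderer should actually compute with:

```python
# Picking "0.5" on a color swatch and feeding it straight to the renderer
# skips the linearization step. The sRGB decode below shows what that
# swatch value really represents in linear light.

def srgb_decode(v):
    """sRGB-encoded value -> linear light."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

swatch = 0.5
print(srgb_decode(swatch))  # ~0.214: close to the 0.18 scene mid-gray,
                            # and far from the 0.5 used in the naive setup
```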

Warning: a scene-linear workflow is only complete with a tone mapping step.

In this very limited experiment, you can see that the bounce light from the plane onto the sphere is more “realistic” in the scene-linear rendering. Unfortunately, I was not able to produce a better comparison, but hopefully you get the idea. Otherwise, we can just ask Jeremy Selan:

From Cinematic Color: why is scene-linear preferred for lighting? First, the render itself benefits. Physically plausible light transport […] such as global illumination yields natural results when given scenes with high dynamic ranges. […] Light shaders also benefit from working with scene-referred linear, specifically in the area of light falloff. […] when combined with physically-based shading models, using an r² light falloff behaves naturally.

r² light falloff is just another term for quadratic decay.
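Quadratic decay fits in one line (the light intensity of 100 units below is a made-up number for illustration):

```python
# Inverse-square law: double the distance, quarter the received light.

def falloff(intensity, distance):
    """Received light = intensity / distance^2."""
    return intensity / distance ** 2

for d in (1, 2, 4):
    print(d, falloff(100.0, d))  # 100.0, 25.0, 6.25
```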

Basically, scene-linear rendering provides the correct math for lighting. There are plenty of websites that show examples of a scene-linear workflow. But if you pay attention, you will notice that these professional websites use a bit of “loose” terminology! And none of them mentions tone mapping, which is arguably an essential step! So take them with a pinch of salt.

“Linear” confusion

Scene-linear confusion

That is really the original sin of the VFX industry: thinking that “linear” was a color space. The error is still quite common, even among VFX industry veterans. Even some software gets it wrong!


These “linear” options have confused me for a very long time.

In Nuke, this is particularly disturbing: “linear” is listed as a color space, same as sRGB or Rec.709. In the support pages from The Foundry, you can find an explanation: “However, Nuke’s color space isn’t a standard color space.” Guys, this should be written in BIG RED LETTERS!

After a couple of e-mails following the release of my website, I thought it would be worth mentioning that working in scene-linear does not magically give you access to infinite color ranges. You still have to take into account the primaries (or gamut) you are working in.

We will see in the next chapter how all of this color space confusion gets clarified with proper terminology.

Display-linear confusion

I recently (April 2020) had the chance to talk with Doug Walker about linear issues regarding scene-referred and display-referred:

The way that Linear Workflow is often described contains a crucial error. Usually the description implies that the viewing process simply inverts the gamma adjustment applied on the input side. However, this is wrong and will lead to results that are too low in contrast, have too light a mid-gray, and have clipped highlights, therefore requiring artists to compensate by biasing the lighting and material decisions in unnatural ways.

By Doug Walker, Technology lead for color science at Autodesk.

Basically, what most schools and students do is… wrong! The sRGB view transform maps 1.0 in the rendering space to 1.0 in the display space. This is one of the problems with using it as a view transform: it does not leave room for anything above 1.0, and those values are simply clipped.

In order to correct this, the viewing transform must account for the fact that the input is what color scientists call “scene-referred” whereas the image being viewed on the monitor is “display-referred”. This means that the viewing transform should not be a simple gamma and instead needs to incorporate what is sometimes called a tone mapping step.

Which is the topic of our next paragraph.

Tone mapping

Until now we have mostly focused on the scene-referred part of the process aka the rendering space. We are now going to detail the part that converts from scene-linear to display-linear. Most artists know this process as “tone mapping“, but it can also be called a “Display Rendering Transform”, a “View Transform” or an “Output Transform” for instance.

Tone mapping is the intentional modification of the relationship between relative scene luminance values and display luminance values usually intended to compensate for limitations in minimum and maximum achievable luminance levels of a particular display, perceptual effects associated with viewing environment differences between the scene and the reproduction, and preferential image reproduction characteristics. Tone mapping may be achieved through a variety of means but is quantified by its net effect on relationship between relative scene luminance values and display luminance values.

Alex Forsythe.

In any case, if you just apply an sRGB OETF for display, your contrast will be off, your highlights will be clipped and your mid-grays will be in the wrong place. Most CG artists will struggle and compensate for this error by biasing their lighting and material values but that is not necessary if a proper color management system is used.
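A quick sketch of the problem in plain Python (the function below is the standard piecewise sRGB encoding, which is only defined on [0, 1]) : any scene-linear value above 1.0 has to be clipped before encoding, so a bright highlight at 4.0 displays exactly like a diffuse white at 1.0.

```python
def srgb_oetf(x):
    # Piecewise sRGB encoding function, defined only on [0, 1] --
    # scene-linear values above 1.0 must be clipped first.
    x = min(max(x, 0.0), 1.0)
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1.0 / 2.4) - 0.055

diffuse_white = srgb_oetf(1.0)   # ~1.0
highlight     = srgb_oetf(4.0)   # identical -- all detail above 1 is gone
```

Four times more scene energy, same display pixel : this is why a view transform needs a rolloff above 1.0 instead of a hard clip.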

[…] for those using scene-linear workflows remember to use a viewing transform that goes beyond a simple gamma model. Friends don’t let friends view scene-linear imagery without an “S-shaped” view transform.

Jeremy Selan.

Lookup table (LUT)

We won’t go into too much detail about “Display Transforms” in this chapter (I have written a full article about them) but it is important to know that they generally contain more than a “simple” tone-mapping operation. The famous s-curve is only one part of a complex series of operations. And these algorithms can sometimes be heavy and need to be baked into a Lookup Table (LUT) for real-time computation.

From Cinematic Color : Lookup tables (LUT) are a technique for optimizing the evaluation of functions that are expensive to compute and inexpensive to cache.

LUTs are also a convenient way to share data without exposing any Intellectual Property (IP).

You can put many things in a LUT. As far as I know, LUTs originally come from live-action : they allow grading operations to be transported between the set and the VFX facilities or DI house. But here we are mostly going to focus on one specific type of LUT.

Basically, thanks to a list of values (a table), 1D LUTs allow us to display high-dynamic-range (HDR) images on standard-dynamic-range (SDR) monitors. Our monitors and film projectors are not able to display the entire HDR range. Generally, a 1D LUT is the best way to display pixels on a screen and to get a nice filmic look. We call them “Display LUTs”.

LUT description

There are several kinds of LUT, but I will describe here the most common :

  • 1D LUT (left image) : it contains a single column of numbers, as it affects the R, G and B channels the same way. These generally encode a Transfer Function or an S-curve shape. I used the vd16.spi1d from the spi-anim config (Sony Pictures Imageworks Animation). If the gamut of your image is Rec. 709, a 1D LUT will NOT change it.
  • 3D LUT (right image) : each entry contains three numbers, as it can affect the R, G and B channels differently. They can be used to map one color space to another. I used the Rec. 709 for ACEScg Maya.csp from the ACES 1.2 (Academy Color Encoding System) config as an example. A 3D LUT allows you to switch from one gamut to another.

From Cinematic Color : A lookup table (LUT) is characterized by its dimensionality, that is, the number of indices necessary to index an output value.


I have shortened the files on purpose for display convenience. These files actually have hundreds of thousands of lines.
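As a sketch of what happens when a 1D LUT is applied (pure Python, with an invented five-entry table ; real files like vd16.spi1d hold thousands of entries), the input value indexes into the table and the output is linearly interpolated between the two nearest entries :

```python
# A tiny hypothetical 1D LUT : five output values sampled over the
# input domain [0, 1]. Real display LUTs hold thousands of entries.
lut = [0.0, 0.35, 0.62, 0.84, 1.0]

def apply_1d_lut(x, table, lo=0.0, hi=1.0):
    """Look up x in the table, interpolating between neighbouring entries."""
    # Map x into the table's index range and clamp it to the domain.
    t = (min(max(x, lo), hi) - lo) / (hi - lo) * (len(table) - 1)
    i = min(int(t), len(table) - 2)   # index of the lower entry
    frac = t - i                      # fractional position between entries
    return table[i] * (1.0 - frac) + table[i + 1] * frac

mid = apply_1d_lut(0.5, lut)      # falls exactly on the middle entry : 0.62
between = apply_1d_lut(0.625, lut)  # halfway between 0.62 and 0.84 : ~0.73
```

The same channel-independent lookup is applied to R, G and B, which is precisely why a 1D LUT can reshape contrast but can never change the gamut.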

We could also divide LUTs into two categories (although it could be considered an arbitrary distinction) :

  • Technical LUTs, like for instance a “Cineon Log to Linear” spi1d LUT.
  • Artistic LUTs, such as a “Look” or “Look Modification Transform” (LMT).

OCIO (OpenColorIO)

OCIO is an open source framework for configuring and applying colour transformations. […] It is designed for rolling out complex colour pipeline decisions to lots of people who might not individually understand what’s going on (ie : not colourists).

Great OCIO definition that I copied from this post.

To load a LUT into Maya, Nuke, Guerilla Render or Mari, we will use an OCIO configuration. The OCIO config is the easiest way to share a LUT between different programs. And guess what ? There are several OCIO Configs available for free to help you set up your Color Management Workflow without breaking a sweat.

Here is what the OCIO configs from Sony Pictures Imageworks (left side) and ACES 1.2 (right side) look like :


I have also shortened the files on purpose for display convenience.

Here are a few general observations about OCIO Configs :

  • Different roles like color picking, texture painting or compositing are predefined. Watch out !
  • Different displays are available, such as DCI-P3, sRGB or Rec.709.
  • In the spi-anim config, the “Film View” loads the file vd16.spi1d with a quick description. Handy !

There are several file formats for LUTs : spi1d, spi3d, csp, cube… Photoshop only accepts cube files. Adobe software is not OCIO-friendly, unfortunately. So you could try to use an ICC profile instead.

Look development and LUTs

It is crucial that anyone doing look development or rendering uses the same Display Transform. Artists from the Surfacing, Grooming, FX, Lighting and Compositing departments need to see the same thing. Texturing in plain sRGB but rendering with an s-curve LUT can cause issues. There are plenty of solutions for Mari, Substance Painter or Photoshop to keep our work consistent.

I could make an analogy here : we generally need to test our look-development assets under different HDRIs (or lighting conditions) to make sure they react well. The same goes for Display Transforms/LUTs. It is not uncommon to work on assets under a “neutral” studio LUT and then check how they look under the show LUT.

The OCIO configs are very handy because you can share them between software packages. You can use the official ones or build your own in Python (or manually). LUTs arrived pretty late in animated movies compared to visual effects (VFX), but they are kind of a game changer in the industry. Naughty Dog has published some really good papers about them.

1D LUT example

What happens if you do not use a LUT ? Every time your shot is overexposed, you will have to manually compensate for it. This is just wrong.

Without a LUT, you will decrease your light’s intensity if it burns out. BUT you will lose some bounce from Global Illumination (GI), some Sub-Surface Scattering (SSS)… Without a Display Transform, you will NEVER get the right amount of energy in your scene.

So you will probably have to create plenty of lights to compensate for the loss of energy. And this is how you end up with a complicated rig of 50 lights ! You want to be able to put as much energy as you need in a shot.

Check the example below : this is the SAME render displayed differently ! I repeat : the exr from the render engine does not change, it is just the way we view it.


This is the only way to display a scene-linear render.

Filmic look

As a student, I was obsessed with getting a filmic look and pure blacks without making my renders look dirty. The only solution to achieve that is to use Tone Mapping (or better put, a Display Transform). It will give your images proper contrast and nicely roll off any pixels over 1. Here is what a “filmic” s-curve for tone mapping looks like :


Please note that this is a semi-log plot. The X axis is logarithmic !

From Cinematic Color : most tone renderings map a traditional scene gray exposure to a central value on the output display. […] one adds a reconstruction slope greater than 1:1 to bump the midtone contrast. Of course, with this increase in contrast the shadows and highlights are severely clipped, so a rolloff in contrast of lower than 1:1 is applied on both the high and low ends to allow for highlight and shadow detail to have smooth transitions. With this high contrast portion in middle, and low contrast portions at the extrema, the final curve resembles an “S” shape as shown below.

The rolloff of the highlights is pretty clear in the image above : all the values between 1 and 10 will be displayed in a range from 0.8 to 1. Only a tonemapping s-curve will give you a proper contrast without breaking a sweat.
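As an illustration, here is a toy tone-mapping curve in plain Python (the numbers are invented to match the plot described above, this is NOT a production transform) : scene values up to 1.0 fill the display range up to 0.8 with a gamma-like segment, and the highlights between 1 and 10 are logarithmically rolled off into the remaining 0.8 to 1.0 range.

```python
import math

def toy_tonemap(x):
    # Toy display transform, invented for illustration only.
    # Scene-linear [0, 1] fills display [0, 0.8] with a gamma segment ;
    # the highlights [1, 10] are logarithmically rolled off into the
    # last 20% of the display range ; anything brighter clips at 1.0.
    x = max(x, 0.0)
    if x <= 1.0:
        return 0.8 * x ** (1.0 / 2.2)
    return min(0.8 + 0.2 * math.log10(x), 1.0)

white = toy_tonemap(1.0)       # diffuse white lands at 0.8
bright = toy_tonemap(10.0)     # a 10x highlight still fits, at 1.0
blown = toy_tonemap(50.0)      # clipped at 1.0, but only above 10x
```

Compare with the plain sRGB encoding : here a highlight ten times brighter than diffuse white is still distinguishable on screen instead of being clipped at 1.0.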

It is the same thing if you’re lighting an interior : make it “real” with a ceiling, a floor and four walls. I see too many people lighting with holes or missing walls… Lots of light leaking and therefore no contrast ! It cannot work in my opinion.


We are all set for proper Color Management and ready to make some beautiful renders. But how do we keep all these values, especially those above 1, in a proper format ? We can thank ILM for inventing OpenEXR.

OpenEXR allows you to save in 16-bit half float to preserve all the “raw” data. You will also be able to write some Arbitrary Output Variables (AOVs) for your compositing process. You can check the Arnold documentation about exr files.

From Cinematic Color : It is important to remember when dealing with float-16 data, that the bits are allocated very differently from integer encodings. Whereas integer encodings have a uniform allocation of bits over the entire coding space, the floating-point encodings have increased precision in the low end, and decreased precision in the high end.
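You can check this non-uniform precision with Python's standard library alone (the '<e' struct format packs a 16-bit half float) : in the low end, half floats resolve tiny steps, while above 2048 they cannot even tell consecutive integers apart.

```python
import struct

def to_half(x):
    """Round-trip a Python float through a 16-bit half float."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# Low end : plenty of relative precision for dark, scene-linear values.
dark = to_half(0.001)          # ~0.0010004 -- tiny rounding error

# High end : above 2048, consecutive integers collapse together.
bright = to_half(2049.0)       # rounds down to 2048.0
```

That allocation is exactly what you want for scene-linear renders : fine gradations in the shadows, coarse steps in the extreme highlights where the eye cannot tell the difference anyway.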

After a couple of e-mails following the release of my website, I thought it would be worth mentioning that 16- or 32-bit files do not magically give you access to infinite color ranges. You still have to take into account the primaries (or gamut) you are working in.

We did render some AOVs in 32 bits on Playmobil : Ncam, Pcam and Pworld for better precision.

For Z Depth and position.

There is also a common misconception about the weight of EXR files. I strongly believe that they are actually lighter than most formats, such as TGA, TIF, PNG and even JPG ! Just use a proper compression such as “dwab” (developed by DreamWorks Animation) for example.


Finally we have to take care of our range. This topic is subject to controversy in many studios ! Basically you want to go as low as possible without affecting the data. There are two settings to take care of :

  • Indirect Clamp Value : Clamp the sample values.
  • AA Clamp Value : Clamp the pixel values.

Clamping the sample values will help you to reduce fireflies and noise. The lowest value I ever used was 10 and it helped me greatly. Clamping the pixel values will help your anti-aliasing for a smoother result.

If you have a very dark pixel next to an overexposed pixel, you do NOT want the overexposed pixel to blow out every other pixel next to it. This is why we clamp our renders.

It may also depend on your scene and your render engine. Some studios clamp their renders at 50. I have tried a couple of times as low as 30… I guess you need to try and test for yourself, especially on HDR displays ! Solid Angle explains the topic of clamping pretty well.
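A minimal sketch of what clamping does during pixel filtering (plain Python ; the threshold 10 comes from the value quoted above, and the sample values are invented) : a single super-bright firefly sample no longer dominates the filtered pixel.

```python
# Hypothetical samples gathered for one pixel : most are plausible,
# one is a "firefly" (a rare, absurdly energetic indirect sample).
samples = [0.4, 0.5, 0.45, 900.0]

def filter_pixel(samples, clamp_value=None):
    """Box-filter the samples, optionally clamping each one first."""
    if clamp_value is not None:
        samples = [min(s, clamp_value) for s in samples]
    return sum(samples) / len(samples)

unclamped = filter_pixel(samples)                    # ~225.3 -- ruined by the firefly
clamped = filter_pixel(samples, clamp_value=10.0)    # ~2.84 -- firefly tamed
```

In production the clamp is applied per sample inside the renderer before pixel filtering, not in Python ; this only illustrates why it reduces fireflies and helps anti-aliasing.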

We will see in chapter 9 why clamping pixel values can be an issue.


We have seen some important and complex notions in this chapter. Hopefully I was able to explain them without causing too much headache. Here is a quick recap :

  • A color space has three components : three primaries, a white point and a transfer function.
  • The difference between Transfer Functions (EOTF and OETF), a Gamma function and a Power function.
  • The different properties of a color space : scene-referred, display-referred and viewing environment.
  • A presentation of the Industry standards : sRGB, Rec.709, DCI-P3 and Rec.2020.
  • The importance of scene-linear and display-linear terminology, and the “linear” confusion.
  • The benefits of a scene-linear workflow and the importance of tone-mapping.
  • The use of different LUTs through OpenColorIO (OCIO) configuration files.
  • And finally, how to save all our High Dynamic Range data into a proper file format : OpenEXR.

We have also seen that your rendering space (or scene-referred) should always have a linear transfer function and that you should NEVER display a linear image without a proper display transform.


Color Management is a never-ending topic that I continue to explore on a daily basis. I have tried to introduce it to you in an easily understandable post. Good color management means seeing the same thing all along the color pipeline and across different software.

If your colors or your contrast shift when you go from Substance Painter to Maya, it may be worth analyzing the issue. A good color pipeline is all about controlling the color space from the first sketches (probably done by the Art Department) to final delivery. Consistency is key.

This has been a great and long journey through color with my former colleague Christophe Verspieren. But it doesn’t end here. That would be too easy. If you are still up for it, we can move to chapter 1.5 about the Academy Color Encoding System (ACES).



Recommended sources for beginners

I was lucky enough to come across this list of recommended sources for beginners, by Daniele Siragusano :

Videos about the red colour

I was sent these videos in French about the red colour. They look quite interesting :

Articles and blogs