Chapter 1.5: Academy Color Encoding System (ACES)


The previous chapter was mainly about Color Management prior to 2014. I am now going to describe ACES, a Color Management System (CMS) developed by dozens of professionals under the auspices of the Academy of Motion Picture Arts and Sciences (AMPAS).

More than 300 movies have been made using ACES, and many VFX studios such as ILM, Framestore, Double Negative and Animal Logic use it. ACES has also become a delivery standard for Netflix. The whole idea behind ACES is to set a standard that helps professionals with their color management.

ACES is available through OCIO (here is the link to the ACES 1.2 config), just like the spi-anim config. ACES has also been implemented in CTL in Resolve and GLSL / HLSL in Unreal and Unity.

If you don’t feel like going through another technical chapter, I don’t blame you. You can skip to chapter 2.

Otherwise let’s dive in.

ACES overview

Something that really hit me when I arrived at Animal Logic in 2016 was their range of colors. The artists were working on a beautiful and very saturated movie called Lego Batman. It was my first day and I saw this shot on a monitor (I think Nick Cross lit this shot).


I really thought to myself: “Wow! This looks good! How did they get these crazy colors?” The range of colors really seemed wider than at my previous studio. I realized later that it was thanks to ACES:

  • We have seen in the previous chapter that many studios and schools render within the sRGB gamut with a linear transfer function and display in sRGB through a 1D LUT. That is not ideal as they work in the smallest gamut of all.
  • Animal Logic (and many other studios such as ILM or MPC) render in ACEScg (which is similar to Rec. 2020) and display in P3 which is the industry standard for cinema theaters. ACES helps them to manage different gamuts.

Why ACES ?

ACES has been developed by the Academy of Motion Picture Arts and Sciences, some VFX studios (MPC, Animal Logic…) and camera manufacturers (Arri, Red, Sony…). The idea behind it is pretty genius.

When cameras were analog, things were simple. There were only a couple of formats : 35mm and 70mm. The Original Print, shot on film, was available for eternity.

But with the digital revolution, multiple cameras and formats have emerged. These proprietary systems, used for the Digital Cinema Distribution Master (DCDM), could become outdated quite quickly. Indeed, the technology of digital cameras evolves pretty fast. The issue is that when these movies had to be remastered for new media, the DCDMs were not relevant anymore.

What is ACES ?

ACES is a series of color spaces, together with the transforms that allow you to move between them. It is a very powerful CMS, available for free since it is an open-source project. The reference color space developed by the Academy is called ACES2065-1 (AP0 primaries). Here are its characteristics:

  • Ultra Wide Gamut (non-physically realizable primaries)
  • Linear
  • High Dynamic Range
  • Standardised
  • RGB

With ACES2065-1 (AP0), the idea is to get a DCDM (Digital Cinema Distribution Master) that lasts for eternity. We do NOT know how movies will be watched in 50 or 100 years. ACES has been created for this specific reason: its purpose is to stand the test of time!

ACES2065-1 is also called the ACES colorspace. But I’d rather use its full name.

Accuracy is critical in color management.

ACES in one picture

ACES is composed of three main processes described in the following image :

  • A. IDT is the import/conversion of the images to the ACEScg color space.
  • B. ACEScg is the rendering/working space.
  • C. RRT + ODT are the output to any monitor or video projector.

The idea behind ACES is to deal with any color transform you may need :

  • Is your texture in sRGB from Photoshop? Or is it linear within the sRGB gamut? ACES provides all the matrices and LUTs you need to jump from one color space to another with the IDT (Input Device Transform).
  • Does your monitor cover Rec.709 or P3? ACES provides all the LUTs to view your renders with the most appropriate ODT (Output Device Transform).

This is one of the reasons I like ACES so much: you always know in which gamut each process happens.

I really think that ACES makes everything clearer.

ACES color spaces

Here is a list of the five ACES color spaces :

  • ACES 2065-1 is scene linear with AP0 primaries. It remains the core of ACES and is the only interchange and archival format (for DCDM).
  • ACEScg is scene linear with AP1 primaries (the smaller “working” color space for Computer Graphics).
  • ACEScc, ACEScct and ACESproxy all have AP1 primaries and their own specified logarithmic transfer functions.
  • ACES2065-1: AP0 primaries (non-physically realizable), ~D60 white point, linear transfer function. Interchange and archival space.
  • ACEScc: AP1 primaries (non-physically realizable), ~D60 white point, logarithmic transfer function. Working space (color grading).
  • ACEScct: AP1 primaries (non-physically realizable), ~D60 white point, logarithmic (Cineon-like) transfer function. Working space (color grading).
  • ACEScg: AP1 primaries (non-physically realizable), ~D60 white point, linear transfer function. Working space (rendering, compositing).
  • ACESproxy: AP1 primaries (non-physically realizable), ~D60 white point, logarithmic transfer function. Transport space.

The ACES white point is not exactly D60 (many people get this wrong actually). A distinct white point was chosen to avoid any misunderstanding that ACES would only be compatible with scenes shot under CIE Daylight with a CCT of about 6000K.

It’s all explained here.
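As a sanity check, McCamy's CCT approximation applied to the ACES white point chromaticity (0.32168, 0.33767, the value published in the ACES specification) indeed lands near 6000K. This is only a rough estimate, not a colorimetric reference:

```python
# Approximate CCT of the ACES white point, using McCamy's approximation.
# The xy chromaticity (0.32168, 0.33767) is the value published in the ACES spec.

def mccamy_cct(x, y):
    """McCamy's approximation of correlated color temperature from CIE xy."""
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

aces_white_xy = (0.32168, 0.33767)
cct = mccamy_cct(*aces_white_xy)
print(round(cct))  # lands very close to 6000 K, hence "approximately D60"
```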

There is also an absolutely brilliant article about the different ACES color spaces if you want to read more on the topic.

Please note that the ACES2065-1 color space is not recommended for rendering. You should use ACEScg (AP1 primaries).

More explanations are provided right below.

Why ACEScg ?

What about Computer Graphics? How can ACES benefit our renders? Some tests have been conducted by Steve Agland from Animal Logic and Anders Langlands on rendering in ACES2065-1.

An unexpected issue occurred when rendering in the ACES2065-1 color space: it is so big that it produced some negative values and messed with energy conservation. It is very well explained in this post. Some color peeps refer to this event as The Great Discovery.


On top of that, grading on ACES2065-1 did not feel “natural“. From ACEScentral, Nick Shaw explains :

The AP1 primaries are a compromise which code most colors likely to occur in images from real cameras using positive values. Because even the most saturated ACEScg colors are still “real”, this means that the maths of grading operations works in a way which “feels” better to colorists.

ACEScg is more artist friendly.

Therefore, another color space has been created especially for Computer Graphics : ACEScg (AP1 primaries). I will repeat in bold and with emphasis because it is CRITICAL : you should only render in ACEScg.

ACEScg : the ultimate rendering space

Why would we render in one color space and display in another ? What is the point ? Remember the Rendering space and the Display space from Chapter 1 ? We have already seen that they do NOT have to be the same. It is something that surprises a lot of supervisors but, yes, rendering within different primaries will NOT give the same result.

Rendering in Linear – sRGB or ACEScg will not give the same image. Many supervisors have told me : “What is the point in rendering in a different color space if we do NOT have the monitor to show it ?” They are mistaken. It makes complete sense to render in one color space and view it in another.

Another nonsensical argument I have often been given against ACES is: “We don’t care about ACES, we render in linear.”

As if linear were an infinite color space… Linear only describes the transfer function; it says nothing about the primaries, and therefore nothing about the gamut.

Basically, we want to use the most appropriate color space to get the best render possible : ACEScg. This is called Wide Gamut Rendering. How do we know that this particular color space is the most appropriate ?

To perform such a test, you need a reference called the “ground truth”. In our case, it would be an unbiased render from a spectral render engine like Mitsuba. Otherwise we could not compare the renders objectively.

Comparison between Spectral, Rec.709 and Rec. 2020

Some really interesting tests and research have been conducted by Anders Langlands and Thomas Mansencal. They are brilliantly explained in this post. Three different renders have been done :

  • Rec.709, the smallest gamut of all.
  • Spectral, the ground truth using wavelengths for its calculation.
  • Rec. 2020, which is similar to ACEScg.

Then, you subtract them from one another. The darker the difference image gets, the closer the render is to spectral! Just brilliant! If you have a look at the bottom row, the average value is overall darker, which means that Rec. 2020 gets us closer to spectral rendering.
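The comparison method itself is trivial to sketch in numpy; the random arrays below are just stand-ins for the actual renders:

```python
import numpy as np

# Toy stand-ins for the three renders (same scene, same resolution). In the real
# test these arrays would be the Rec.709, Rec.2020 and spectral ground-truth images.
rng = np.random.default_rng(0)
spectral = rng.random((4, 4, 3))                      # ground truth
rec709 = spectral + rng.normal(0, 0.10, (4, 4, 3))    # larger deviation from truth
rec2020 = spectral + rng.normal(0, 0.03, (4, 4, 3))   # smaller deviation from truth

def difference_image(render, reference):
    """Per-pixel absolute difference: the darker it is, the closer to the reference."""
    return np.abs(render - reference)

err_709 = difference_image(rec709, spectral).mean()
err_2020 = difference_image(rec2020, spectral).mean()
print(err_2020 < err_709)  # the render with the smaller deviation is "darker" overall
```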

What is the difference between ACEScg and Rec. 2020? What is the advantage of having the green primary outside the CIE diagram in ACEScg? Mostly to encompass P3: ACEScg is a gamut close to BT.2020, but one that fully encompasses P3, which requires non-physically realizable primaries.

Thanks Thomas for the explanation !

ACEScg explanation

The technical reason behind this difference is given in a series of posts :

From Thomas Mansencal : On a strictly technical point of view, rendering engines are indeed colourspaces agnostic. They just chew through whatever data you throw at them without making any consideration of the colorspace the data is stored into. However the choice of colorspace and its primaries is critical to achieve a faithful rendering. […]

If you are using sRGB textures, you will be rendering in this particular gamut (by default). Only the use of an Input Device Transform (IDT), or a conversion beforehand, will allow you to render in ACEScg.

From Thomas Mansencal : […] some RGB colorspaces have gamuts that are better suited for CG rendering and will get results that overall will be closer to a ground truth full spectral rendering. ACEScg / BT.2020 have been shown to produce more faithful results in that regard. […] Yes, the basis vectors are different and BT.2020 / ACEScg are producing better results, likely because the primaries are sharper along the fact that the basis vectors are rotated in way that reduces errors. A few people (I’m one of them) have written about that a few years ago about it. […] Each RGB colorspace has different basis vectors as a result of which mathematical operations such as multiplication, division and power are not equivalent.
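A tiny numpy sketch illustrates Thomas's last point about basis vectors. The 3×3 matrix below uses approximate, rounded sRGB-to-ACEScg values, so treat it as an illustration rather than a reference implementation:

```python
import numpy as np

# Approximate linear-sRGB -> ACEScg matrix (values rounded, CAT02 D65->D60).
M = np.array([
    [0.6131, 0.3395, 0.0474],
    [0.0702, 0.9164, 0.0135],
    [0.0206, 0.1096, 0.8698],
])

# Two arbitrary reflectance-like colors, expressed in linear sRGB.
a = np.array([0.8, 0.3, 0.1])
b = np.array([0.2, 0.7, 0.4])

# Multiply in sRGB, then convert the product to ACEScg...
prod_multiply_then_convert = M @ (a * b)
# ...versus convert both colors first, then multiply in ACEScg.
prod_convert_then_multiply = (M @ a) * (M @ b)

# The results differ: component-wise multiplication depends on the basis vectors.
print(np.abs(prod_multiply_then_convert - prod_convert_then_multiply).max())
```

This is exactly why the same scene, rendered with the same shading math, produces different images in different working spaces.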

Input Device Transform (IDT)

The IDT is the process to import the textures/images to your working/rendering space, which most likely will be ACEScg.

Cornell box example

Here are two renders of a Cornell Box in Guerilla Render. I have used the same sRGB textures for both renders, with the following values :

  1. Green sRGB primary at (0, 1, 0)
  2. Red sRGB primary at (1, 0, 0)
  3. Middle gray at (0.18, 0.18, 0.18)

The only difference between these Cornell boxes is the rendering space :

  • In the first one, the rendering space is what many software packages call linear, which actually means the sRGB gamut with a linear transfer function.
  • In the second one, the rendering space is ACEScg. I had to set the IDT correctly to take full advantage of the wide gamut.

The main thing to take into account in this test is that I used textures. If you pick colors directly in your software, you will not get the same result (especially with the Color Picking role activated).

You must be extra careful with your mipmap generation (tex files). If you switch your rendering space, it is safer to delete the existing tex files. Otherwise you may get some incorrect results.

Why do we get a better global illumination ?

Thanks to ACES, we have changed the primaries of our scene and obtained better GI in our render. We can reproduce the conversion in Nuke to analyze what is actually happening:

  • On the left, we have a pure green constant at (0, 1, 0).
  • We convert it from sRGB to ACEScg using an OCIOColorSpace node.
  • The same color expressed in ACEScg has some information in the red and blue channels. It is really just a conversion : ACES does not “add” anything.

The conversion does not change the color. It gives the same color (or chromaticity) but expressed differently.


Here is another way of explaining it :

  • On the left, we have a green primary in the sRGB/Rec.709 color space.
  • Using a Matrix 3×3 to switch from sRGB to ACEScg, this color with unique XY coordinates has been converted.
  • The color is not a pure green anymore in the ACEScg color space (right image).

Thanks to ACES and its conversion process, rays are no longer stopped by a zero in some channels (red and blue in this case): light paths are much less likely to be killed by a null channel.
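Here is a minimal numpy sketch of this idea. The sRGB-to-ACEScg matrix values are approximate and rounded, and the three-bounce multiplication is a deliberately naive stand-in for real light transport:

```python
import numpy as np

# Approximate linear-sRGB -> ACEScg matrix (values rounded, CAT02 D65->D60).
SRGB_TO_ACESCG = np.array([
    [0.6131, 0.3395, 0.0474],
    [0.0702, 0.9164, 0.0135],
    [0.0206, 0.1096, 0.8698],
])

green_srgb = np.array([0.0, 1.0, 0.0])
green_acescg = SRGB_TO_ACESCG @ green_srgb
print(green_acescg)  # ~[0.3395, 0.9164, 0.1096]: red and blue are no longer zero

# Naive stand-in for three diffuse bounces off that green wall: the albedo is
# multiplied in at every bounce.
white_light = np.array([1.0, 1.0, 1.0])
bounced_srgb = white_light * green_srgb**3      # red and blue die on the first bounce
bounced_acescg = white_light * green_acescg**3  # every channel keeps some energy
print(bounced_srgb)    # [0. 1. 0.]
print(bounced_acescg)  # all three channels stay above zero
```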

IDT overview

ACES provides all the 3D LUTs and matrices we need to process these transforms. This is why it is so powerful! The most common IDTs for Computer Graphics are:

  • Utility – sRGB – Texture: if your texture comes from Photoshop or the Internet. Only for 8-bit textures, like an albedo map.
  • Utility – Linear – sRGB: if your texture is linear within the sRGB primaries and you want to convert it to ACEScg.
  • Utility – Raw: if you do NOT want any transform applied to your texture, like normal maps.

Something that took me some time to understand is that if your rendering space is ACEScg, then Utility – Raw and ACES – ACEScg are the same IDT: no transform is applied in either case.

Most studios nowadays work in “lazy ACES” because of the lack of OCIO in Substance Painter. It means that we actually paint textures in an sRGB gamut and convert them on the fly to ACEScg in the render engine.
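For the curious, here is roughly what the Utility – sRGB – Texture IDT boils down to, sketched in Python with approximate matrix values. The real transform lives in the OCIO config, not in this snippet:

```python
import numpy as np

def srgb_decode(v):
    """Piecewise sRGB transfer function decoding (display-encoded -> linear)."""
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

# Approximate linear-sRGB -> ACEScg matrix (values rounded, CAT02 D65->D60).
SRGB_TO_ACESCG = np.array([
    [0.6131, 0.3395, 0.0474],
    [0.0702, 0.9164, 0.0135],
    [0.0206, 0.1096, 0.8698],
])

def idt_srgb_texture(rgb8):
    """8-bit sRGB texture value -> linear ACEScg."""
    return SRGB_TO_ACESCG @ srgb_decode(np.asarray(rgb8) / 255.0)

# A mid-gray albedo painted in Photoshop: sRGB 118 decodes to roughly 0.18 linear.
print(idt_srgb_texture([118, 118, 118]))  # close to (0.18, 0.18, 0.18)
```

Note that a neutral gray stays neutral: each row of the matrix sums to roughly one, so the conversion only redistributes saturated values.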

To plot the gamut

Plotting the gamut of an image allows you to map its pixels against the CIE 1931 Chromaticity Diagram. This is a pretty brilliant concept ! And a great way to debug ! This function is available in colour-science, developed by Thomas Mansencal.

  • On the first image, we have plotted a render done in sRGB. The pixels are clearly limited by the sRGB primaries. They are compressed against the basis vectors of the gamut.
  • On the second image, we have plotted a render done in ACEScg. The pixels, especially the green ones, are not limited anymore and offer a much more satisfying coverage of the gamut.
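The core of such a plot is simply projecting each pixel to CIE xy; colour-science wraps this (plus the actual drawing) in its plotting module, but a bare numpy sketch of the projection looks like this:

```python
import numpy as np

# Linear sRGB (D65) -> CIE XYZ matrix, from IEC 61966-2-1.
SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def xy_chromaticities(rgb_pixels):
    """(N, 3) linear sRGB pixels -> (N, 2) CIE 1931 xy coordinates."""
    xyz = rgb_pixels @ SRGB_TO_XYZ.T
    s = xyz.sum(axis=1, keepdims=True)
    return xyz[:, :2] / np.where(s == 0.0, 1.0, s)

pixels = np.array([[1.0, 1.0, 1.0],   # white
                   [0.0, 1.0, 0.0]])  # pure sRGB green
print(xy_chromaticities(pixels))
# white plots near (0.3127, 0.3290); green lands on the sRGB green primary (0.30, 0.60)
```

Scatter these xy points over a chromaticity diagram and you get exactly the kind of plot shown above.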

There is also an app available on Windows and Mac called Color Spatioplotter if you want to plot the gamut of an image. I haven’t tried it myself but from the feedback I got, it seems to be working fine at a very affordable price.

Reference Rendering Transform (RRT)

A paragraph about the RRT is long overdue and will be written for Q1 2021. Even if, in practice, the RRT + ODT process is combined for the user, I think it is worth describing some components of the RRT here. I am particularly interested in the infamous “sweeteners”: the glow module, the red modifier and the global desaturation.

It is worth noting that the output of the RRT is called Output Color Encoding Specification (OCES).

Glow module

Red modifier

Global desaturation

Output Device Transform (ODT)

The ODT is the process to display your renders, which most likely will be in ACEScg, on your monitor. The academy recommends the use of an ODT adapted to your display. The display is based on your project and your monitor :

  • Do you work for TV and Internet ? You should display in sRGB or Rec.709.
  • Are you working in Feature Film ? You should display in P3.
  • Do you want to output for a UHDTV ? You should display in Rec. 2020.

Rec. 2020 is clearly the future, but there are no projectors able to cover 100% of this color space. The technology is not there yet. Maybe in ten years it will be the new norm. And the best part is that when this day comes, ACES will still be a solution, since it encompasses most color spaces.

Not there yet, unless you own a Christie.

Examples and comparison of ODT

Here is the secret recipe of why ACES looks so good ! Check the highlights on the second render, they just look amazing !


I have also done a test on the MacBeth chart to compare the Film (sRGB) from the spi-anim config with the ACES config. The results speak for themselves.


I’ll just put it out there so that it is clear : there is no point in using a P3D65 ACES ODT if your monitor only covers sRGB. It won’t make your renders look prettier.

Your ODT should match your screen basically.

ODT clarification

Many artists have been confused by Nuke’s default display transform :

  • Why do the sRGB display transform and sRGB (ACES) NOT match?
  • Because the ACES ODT includes some tone mapping!

In ACES, we call this the “rendering” step and in ITU-R BT.2100 (which is the standard for HDR television) it is called the OOTF.

Most artists know this process as “tone mapping”.

From ACEScentral, Nick Shaw explains :

The ACES Rec.709 Output Transform is a much more sophisticated display transform, which includes a colour space mapping from the ACEScg working space to Rec.709, and tone mapping to expand mid-tone contrast and compress the shadows and highlights. The aim of this is to produce an image on a Rec.709/BT.1886 display which is a good perceptual match to the original scene.

It is not an artistic LUT. Not at all.

ODT overview

Some people complain about the tone mapping included in the ODT. I personally love it. Here are a few things to know:

The RRT and ODT splines and thus the ACES system tone scale (RRT+ODT) were derived through visual testing on a large test set of images […] from expert viewers. So no, the values are not arbitrary.

From Scott Dyer, ACES mentor.

Some additional explanations about the RRT/ODT process from this post :

  • The ACES RRT was designed for Theatrical Exhibition where Viewing Conditions are Dark. Content for cinema tends to be authored with more contrast to compensate for the dark surround.
  • Even though there is a surround compensation process (Dark <–> Dim), the values to drive that process were subjectively obtained and it might not be enough for all the cases.
  • The RRT + ODTs are also the results of viewing images by an expert viewer, so there is undeniably some subjectivity built-in.
  • Some companies such as Epic Games pre-expose the scene-referred values with a 1.45 gain (which would roughly match an exposure increase of 0.55 stops on your lights).

Another description of the ODT tone scale can be found here.


The ACES Output Transform includes a shaper, which is a logarithmic color space, to optimize the data. It is a transparent process, nothing more than an intermediate state for data, with purely technical goals.

What exactly happens when we display in sRGB (ACES) with an OCIO config? To go to sRGB (ACES), OCIO first transforms the color to ACES2065-1 (AP0 primaries). Then, from AP0, we go to a color space called the Shaper thanks to a 1D LUT, and finally to sRGB thanks to a 3D LUT.

From the ACES 1.2 OCIO Config :

  - !<ColorSpace>
    name: Output - sRGB
    family: Output
    equalitygroup: ""
    bitdepth: 32f
    description: |
      ACES 1.0 Output - sRGB Output Transform
      ACES Transform ID : urn:ampas:aces:transformId:v1.5:ODT.Academy.RGBmonitor_100nits_dim.a1.0.3
    isdata: false
    allocation: uniform
    allocationvars: [0, 1]
    to_reference: !<GroupTransform>
        - !<FileTransform> {src: InvRRT.sRGB.Log2_48_nits_Shaper.spi3d, interpolation: tetrahedral}
        - !<FileTransform> {src: Log2_48_nits_Shaper_to_linear.spi1d, interpolation: linear}
    from_reference: !<GroupTransform>
        - !<FileTransform> {src: Log2_48_nits_Shaper_to_linear.spi1d, interpolation: linear, direction: inverse}
        - !<FileTransform> {src: Log2_48_nits_Shaper.RRT.sRGB.spi3d, interpolation: tetrahedral}

A shaper is needed because a 3D LUT (even a 64^3 one) is not suitable for sampling linear data like ACEScg directly: most of its entries would be wasted.
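To make this concrete, here is a sketch of a log2 shaper in Python. The ±6.5 stops around 0.18 mid-gray is my assumption of the range covered by the Log2 48 nits shaper; the exact constants live in the config's 1D LUT:

```python
import math

# Log2 shaper sketch: map +/- 6.5 stops around 0.18 mid-gray onto [0, 1], so a
# LUT can sample the range uniformly. The 6.5-stop range is an assumption.
MID_GRAY = 0.18
STOPS = 6.5

def shaper_forward(linear):
    """Linear scene value -> normalized log value in [0, 1]."""
    stops_from_gray = math.log2(linear / MID_GRAY)
    return (stops_from_gray + STOPS) / (2.0 * STOPS)

def shaper_inverse(norm):
    """Normalized log value -> linear scene value."""
    return MID_GRAY * 2.0 ** (norm * 2.0 * STOPS - STOPS)

print(shaper_forward(MID_GRAY))              # mid-gray lands exactly at 0.5
print(shaper_inverse(shaper_forward(4.2)))   # the round trip returns 4.2
```

Uniform samples of the shaped value spend as many LUT entries on the shadows as on the highlights, which is the whole point.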


Once you’re happy with your renders and pretty much done with the project, you are ready to deliver your frames. In animation studios, we generally deliver linear exr files to a digital laboratory, such as this one.

With ACES, it is pretty much the same concept with a couple of important notes. For final delivery to the Digital Intermediate, you will have to deliver ACES compliant EXR files.

This is the standard set by the Academy to exchange files between facilities. This is really important. Your render output will be ACEScg (AP1) but your compositing output has to be ACES2065-1 (AP0) with the correct metadata.


The interchange and archival files should be written as OpenEXRs conforming to SMPTE 2065-4. In Nuke, you should set the Write node’s colorspace to ACES2065-1 and check the box write ACES compliant EXR to get the correct metadata.

From ACEScentral, Doug Walker explains :

The SMPTE ST 2065-4 spec “ACES Image Container File Layout” currently requires uncompressed files. Also, there are other restrictions such as only 16-bit float and only certain channel layouts (RGB, RGBA, and stereo RGB/RGBA). These limitations do make sense for use-cases that involve archiving or real-time playback.

ACES implementation


Most color pipelines nowadays are set through OCIO, which is great because of its compatibility with many software packages: Maya, Guerilla, Nuke, Mari, RV… But there is one downside to using OCIO and LUTs: you lose precision. It is really well explained in this post and also here.

The reason the CTF works better is because it is more of an “exact math” implementation of the ACES CTL code rather than baking down into a LUT-based representation.

From Doug Walker, ACES mentor.

Discrete and Continuous transforms

What is happening here? The answer was given to me by my colleague Christophe Verspieren. He showed me the concept of continuous versus discrete transforms at play with the baked LUTs from OCIO. It is actually pretty easy to understand. Check this image from this site:


When we go from scene-referred to display-referred, we have to cover a huge dynamic range (ACES deals with something like 15 stops). Each slice of the discrete transform therefore covers a huge zone.

We do not split the dynamic range into equal zones: we prefer to sample the most common values in detail, at the expense of the highlights. The display tone mapping (ODT) then makes these false chromaticities really visible when the exposure is increased.

Also, even if the transformation is mathematically defined in OCIO, the fact that it runs on the GPU rather than on the CPU leads to a discretization of the formula: the graphics card actually creates a LUT!

Really these issues are endless…

Color interpolation gaps

Furthermore, these gaps (from the discretization) are filled linearly, which is not necessarily the most natural way. Even if we change gamut, we still work in RGB, and linear interpolations are done on the straight line going through A and B. Sometimes it would be better to manage color interpolation in the Lab color space.

Not every mathematical operator is available in OCIO 1.1.1; only OCIO 2.0 will allow the calculations required by ACES to be represented exactly.

To sum it up :

  • Between each slice we have linear interpolations.
  • In very large areas these interpolations lack accuracy.
  • This results in chromaticity errors in the highlights due to the discretization.
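This can be reproduced numerically: sample a stand-in tone curve with a coarse 1D LUT, once with linear-spaced knots and once with log-spaced knots (i.e. through a shaper), and compare the interpolation errors. The tone curve below is a Reinhard-like placeholder, not the actual RRT:

```python
import numpy as np

def tone(x):
    """Stand-in tone curve (Reinhard-like), just to expose interpolation error."""
    return x / (x + 1.0)

# Scene-linear range of roughly 13 stops around mid-gray, densely sampled.
x_dense = np.geomspace(0.18 * 2.0**-6.5, 0.18 * 2.0**6.5, 4096)
ref = tone(x_dense)

n = 33  # a coarse 1D LUT, comparable to one axis of a small 3D LUT

# Knots spread uniformly in linear light: all the shadow and mid-tone detail
# falls inside the very first interval.
knots_lin = np.linspace(x_dense[0], x_dense[-1], n)
err_lin = np.abs(np.interp(x_dense, knots_lin, tone(knots_lin)) - ref).max()

# Knots spread uniformly in log2, i.e. through a shaper: one knot every ~0.4 stop.
knots_log = np.geomspace(x_dense[0], x_dense[-1], n)
lut_log = np.interp(np.log2(x_dense), np.log2(knots_log), tone(knots_log))
err_log = np.abs(lut_log - ref).max()

print(err_lin, err_log)  # the shaped LUT is more accurate by a wide margin
```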

How does this translate visually ? Let’s have a look at some renders with extreme saturation to compare different solutions.


Because of Arnold’s default settings, the base color weight is at 0.8 in these renders.

CTL nodes in Nuke

How important is this ? Should we stick to OCIO or look into this CTL implementation ? I guess the best answer I have read on this topic comes (once again !) from Alex Fry :

With normally exposed images, especially from live action cameras, you’re unlikely to see any values that render differently to the OCIO config, but with CG images containing extreme intensity and saturation you will see more pleasing roll off and desaturation.

December 2016 and Alex Fry had already understood so much stuff…

So it looks like CTL would be worth using especially if you are working on saturated animated feature films ! Alex Fry was kind enough to share with us this Pure Nuke ACES RRT & ODT.

These Nuke nodes match a pure CTL implementation 100% but are way faster. Please note that these nodes expect ACES2065-1 footage. Since we render in ACEScg, you will need to convert your footage with an OCIOColorSpace node before plugging in these nodes.

You can also check different examples from Alex Fry’s presentation. I don’t know if these images were generated in CTL or OCIO though. Or even using FilmLight’s Baselight… But they look great !

Important update : thanks to OCIO 2.0, the ACES OCIO config will not present any discrepancies anymore. It will be a huge improvement (probably available for Q1 2021).

The release notes for the ACES OCIO configs are available here.

ACES in render engines

Most render engines have integrated OCIO, which gives us access to ACES. Autodesk has even come up with a CTL integration of ACES in Maya. But as surprising as it may sound, there are different levels of integration for OCIO/ACES. And in most render engines, calculations based on light spectra are still done with sRGB/Rec.709 primaries, such as:

  • Kelvin temperatures (for lights, black-body radiation and camera white balance).
  • Physical sun & sky (or Skylight).

For example, in most render engines, the Skylight is a spectral representation converted to sRGB/Rec.709 primaries. I am quoting here the example of V-Ray :

V-Ray uses sRGB primaries by default, but this is really only relevant if you use any V-Ray features that deal with spectra – like light temperature, camera white balance as temperature, physical sun and sky. […] Normally, V-Ray treats all colors as triplets of floating point values; for the most part V-Ray doesn’t really care what the three numbers actually mean. However some calculations in V-Ray are based on light spectra; the conversion of these spectra to floating-point color triplets assumes that the three numbers mean something specific – a color in some predefined internal renderer color space. This means that when converting from Kelvin temperature to RGB colors, V-Ray must know what that internal renderer color space is.

From this post in 2016.

I am pretty sure that developers will soon catch up on these points. In the meantime, you may simply use a 3×3 matrix from sRGB to ACEScg to convert these values. But it is true that, unfortunately, this final step of integration is missing in most DCC software.

Inverted ODT Workflow

Preserving Logos and Graphics

This topic has been addressed many, many times on the ACEScentral forum. How do we preserve the look of an image from the Internet in an ACES workflow? The ODT can be used as an IDT to do so. That is so powerful and absolutely genius. This process is simply called an Inverted LUT.

Importing an image using an ODT is just a way to tell ACES : this image, wherever it comes from, is my final result and I want to convert it to my working space. When you think about it, the (almost) perfect reversibility of the RRT/ODT is pretty cool. What happens is just a transparent round trip :

  • IDT : Output – sRGB -> ACEScg
  • Working/Rendering space : ACEScg
  • ODT : ACEScg -> Output-sRGB

A friendly reminder : you should never ever use this technique to load a texture into a shader. It has been explained several times on the ACEScentral forum.

Constant color

One of the most frequent questions I get asked is about converting assets: how should we deal with assets created in linear – sRGB when converting them to ACEScg?

Here is an example I had to face recently. Let’s say you work on a movie with a famous character dressed in red. Generally, the client will be very picky about this red value. It has to match !


This workflow works because Nuke and Guerilla color select in ACEScg.

I have written this tutorial for a constant color because I was able to check that the converted values did not break energy conservation. And this is why you should never do it for a texture.

Color key

I have used the inverted ODT technique for our next example. The challenge was to maintain the same colors from a boat concept painted in Photoshop into an ACEScg render. I know that fidelity to color keys and concepts is critical for many studios.


This workflow works because Nuke and Guerilla color select in ACEScg.

I totally accept the fact that this concept has some lighting information, so it can become arbitrary to pick a color in one place or another.

Hence the use of a large picking zone in Nuke.

This process has a couple of limitations :

  • First of all, if you work in ACES, you should only color pick in an ACES environment.
  • It struggles a bit with very saturated and extreme values, especially the yellow ones. You could possibly end up with some negative values.
  • If you are color picking an albedo value, you should be extremely careful that this value is within the PBR range.

We can now move to more in-depth information about the Color Picking role. By default, the Color Picking role is set to Output – sRGB in the ACES 1.2 OCIO configuration to match the default display transform. This is why the Color Picking role is “contextually related” to the inverted ODT workflow.

Color Picking role

Different implementations

When it comes to the Color Picking role, the first thing to know is that not all software is equal regarding this feature. When developers implement OCIO in their software, they can choose to integrate things to a certain level. Let’s take Guerilla and Maya as examples:

  • The Color Picking role in Guerilla only drives the color picker hue board.
  • The Color Picking role in Maya drives the whole color selection process.

It only took me six months to understand the Color Picking role in Maya. But I think I have finally cracked it. First things first : what is the Color Picking role for ? From the OCIO documentation :

color_picking – colors in a color-selection UI can be displayed in this space, while selecting colors in a different working space (e.g. scene_linear or texture_paint).

This is what Guerilla has implemented basically.

But Maya has a different use of the Color Picking role. It actually uses this role to select any RGB color in a shader or a light. In this context, Color Picking could also be called Color Selection.

RGB triplets by themselves do not really make sense: you need a context to interpret them correctly. Maya lets you choose that context, while Guerilla and Nuke do not.

This is why you have to be extra careful when it comes to Color Picking or Color Selection depending on the DCC software you use.

If your ODT is different from Output – sRGB, I suggest that you modify the color_picking role in the OCIO config file to match the ODT you are using.

Your ODT and color picking role should be the same.

Color Picking in Maya

Please be aware that in this section I will focus on Maya, since its integration of the color_picking role is quite unique.

When it comes to choosing a Color Picking role for Maya, I have seen three different philosophies :

  • Output – sRGB. Pros: color selection is display-referred, which is artist-friendly. Cons: you do not know the exact value used for rendering (unless you constantly check the Channel Editor), which could break the PBR of your scene.
  • Utility – Linear – sRGB. Pros: legacy mode; artists may select colors they are used to. Cons: you do not take advantage of the wide gamut in the color selection.
  • ACES – ACEScg. Pros: you know exactly which value is used for rendering and can reach very saturated values (like a laser). Cons: you have access to crazy saturated values, unsuitable for albedo, that may eventually break energy conservation.

If you use (1, 1, 1) with a Color Picking role set to Output – sRGB in Maya, you actually use a value of (18.91, 18.91, 18.91) in your render. And that is very wrong if it is set in the Base Color for example !


It is true that when you use “Output – sRGB”, you are never really sure of what color is being used during rendering. And this is why some studios have set their Color Picking role to ACEScg to know what values are used by the render engine. I cannot honestly say that one system is better than the other. You just have to be aware of what you are doing.

Color Picking in ACEScg

If you use ACEScg as a Color Picking role, you should be aware of this : no non-emissive surface has such saturated colors. Only lasers (emissive sources) can reach BT.2020 primaries.

This particular example really made things clearer for me. When you study color science, you may sometimes get lost in abstract stuff. Knowing this made all of these concepts more grounded and more real in a way.
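You can see how extreme these primaries are with a quick sketch (plain Python, using the commonly published ACEScg to linear sRGB matrix, rounded values) : the ACEScg red primary lands outside the sRGB gamut, with negative green and blue components.

```python
# ACEScg (AP1) -> linear sRGB, Bradford-adapted, rounded values.
ACESCG_TO_SRGB = [
    [ 1.70505, -0.62179, -0.08326],
    [-0.13026,  1.14080, -0.01055],
    [-0.02400, -0.12897,  1.15297],
]

def acescg_to_srgb(rgb):
    """Convert an ACEScg triplet to linear sRGB (no clamping)."""
    return tuple(sum(m * c for m, c in zip(row, rgb)) for row in ACESCG_TO_SRGB)

red = acescg_to_srgb((1.0, 0.0, 0.0))
print(red)  # negative green and blue: no sRGB monitor can display this color
```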

If you want to Color Select in ACEScg, you just need to edit the OCIO Config and modify the color_picking role :

  color_picking: ACES - ACEScg

If you are working in a realistic context, all of this concerns you. And if you are working on a cartoon… Well, it concerns you as well ! Some producers out there just love the most outrageous saturation, but most cartoon movies want their look to be believable. So my advice is to do your look development in a realistic way ; you will still be able to push saturation with a grade afterwards.

All of this only applies to a PBR cartoon movie, of course.

ACES limitations

ACES Retrospective and Enhancements

In March 2017, a brilliant study listed some possible improvements for ACES : ACES Retrospective and Enhancements. It is a really interesting document that has led to several changes in the ACES organization. Here is a link to the official response from the ACES Leadership.

A list of 48 points to improve has also been published on the forum and the creation of several Virtual Working Groups has already brought some solutions to the table. ACES is pretty much alive and improving. Do not hesitate to join the process !

Hue skews and posterization

The two biggest issues I have encountered are called Hue skews and Posterization. Some image makers believe that the audience has gotten used to them and is not bothered. Others find them truly horrific. I’ll let you decide for yourselves.


I left out the green primary on purpose, as I didn’t notice the effect as much on it.

There are different reasons for these kinds of issues. They are pretty technical and beyond the scope of this chapter, but I have listed them here :

  • A 3×3 matrix can only model linear transformations. Brute-force gamut clips may also induce the Abney effect because they bring colors back to the gamut boundary along straight lines.
  • Discrete per-channel lookups skew the original intention. Any aesthetic transfer function that asymptotes at 1.0 suffers from this.
  • The aesthetic transfer function ends up collapsing a boatload of values into the same value, hence Posterization. The non-physically realizable primaries of ACEScg may also be responsible.
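A toy example makes the per-channel skew obvious. Below is a sketch (plain Python, using a simple Reinhard-style curve x / (x + 1) as a stand-in for any aesthetic transfer function) : applying the curve to each channel independently changes the ratios between channels, which is precisely a hue skew.

```python
def tonemap(x):
    """Toy per-channel aesthetic curve that asymptotes at 1.0 (Reinhard)."""
    return x / (x + 1.0)

saturated_red = (4.0, 0.5, 0.05)  # bright, very saturated scene-referred value
mapped = tuple(tonemap(c) for c in saturated_red)

# Before: R/G ratio = 8.0. After: the ratio collapses, skewing hue/saturation.
print(saturated_red[0] / saturated_red[1])  # 8.0
print(mapped[0] / mapped[1])                # 2.4
```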

Most of these notions are eventually related to what we call gamut mapping. I have thought for a very long time that the ODT would remap the gamut in a smart way. Unfortunately it is not the case : it just does a 3×3 matrix transform and clamps !


Indeed a P3 ODT brings the values back into its gamut through a clamp, which causes some Posterization. This is one of the issues we are facing with very saturated values.

// Handle out-of-gamut values
// Clip values < 0 or > 1 (i.e. projecting outside the display primaries)
linearCV = clamp_f3( linearCV, 0., 1.);

All ODTs clamp to the target gamut, so it is impossible to get anything outside of it. All values are assumed to be between 0 and 1 after this process, and this is the penultimate step before the transfer function (which will not change this result).
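The posterization caused by this clamp is easy to reproduce. Here is a sketch (plain Python, mimicking the clamp_f3 call above) : several distinct out-of-gamut values all collapse onto the exact same in-gamut value.

```python
def clamp_f3(rgb, lo=0.0, hi=1.0):
    """Per-channel clamp, mimicking the ODT's clamp_f3."""
    return tuple(min(max(c, lo), hi) for c in rgb)

# Three distinct, very saturated values (e.g. outside the display gamut)...
values = [(1.2, -0.1, 0.3), (1.8, -0.4, 0.3), (3.0, -0.9, 0.3)]
clamped = [clamp_f3(v) for v in values]

# ...all become exactly the same color once clamped: posterization.
print(clamped)  # [(1.0, 0.0, 0.3), (1.0, 0.0, 0.3), (1.0, 0.0, 0.3)]
```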

Gamut mapping

What is gamut mapping ? A proper Display Rendering Transform (DRT) should be composed of two main elements :

  • Tone mapping (or intensity mapping) to compress an infinite range (HDR) onto a limited range (SDR).
  • Gamut mapping to compress a wide gamut (ACEScg – scene) into a smaller gamut (P3 – display) while maintaining the intention of the scene as best as possible.
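The two steps above can be illustrated with a minimal toy DRT (plain Python ; both the curve and the compression function are my own simplistic placeholders, not what an actual DRT such as the ACES one uses) : an intensity mapping step, followed by a gamut mapping step that pulls out-of-gamut values toward the achromatic axis instead of clamping them.

```python
def tone_map(rgb):
    """Toy intensity mapping: compress [0, inf) onto [0, 1) per channel."""
    return tuple(c / (c + 1.0) for c in rgb)

def gamut_compress(rgb):
    """Toy gamut mapping: blend toward the achromatic axis just enough to
    bring every channel into [0, 1], preserving channel ratios (the scene's
    intention) better than a hard clamp would."""
    luma = sum(rgb) / 3.0
    t = 0.0  # smallest blend factor toward luma that puts all channels in range
    for c in rgb:
        if c > 1.0 and c != luma:
            t = max(t, (c - 1.0) / (c - luma))
        if c < 0.0 and c != luma:
            t = max(t, (0.0 - c) / (luma - c))
    return tuple(c + t * (luma - c) for c in rgb)

scene = (8.0, 0.2, -0.05)  # bright saturated scene-referred value
display = gamut_compress(tone_map(scene))
print(display)  # every channel now within [0, 1], ratios only partly collapsed
```

This is only a sketch of the idea ; real gamut mapping operates in a perceptual space and handles many edge cases this toy version ignores.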

I have used this experimental OCIO config to compare different Output Transforms.

This process is actually super complex and there have been many attempts these last few years to solve this riddle : AMD FidelityFX, AMD Variable Dynamic Range, AMD FreeSync HDR Gamut Mapping, Frostbite… with more or less success.

Apart from ICC, there are not really any systems that do [gamut mapping]. It is the responsibility of the colorist to manage this kind of problem by desaturating the red a bit. But it is not necessarily a limitation of ACES, on the contrary. The system allows you to use extreme values, so with great power comes great responsibility. This is where gamut mapping would be useful. The reality is that all the technology changes super fast and it takes a lot of time to build the tools. The research is not even finished, in fact : for example, LED on-set lighting is very recent.

A bit of advice from Thomas Mansencal.

One issue that has often been noticed is what we call the Blue Light Artifact. It is very well described in this post from ACEScentral. A temporary fix has also been provided until a more long-term solution is found.


ACES has been developed by dozens of industry professionals, is available for free and has many advantages :

  • Compatibility with many softwares through OCIO.
  • Free, with a lot of support from the ACEScentral community.
  • Lighting calculations in a wide gamut : ACEScg.
  • Less guesswork and a quality jump with the Output Device Transforms.
  • The ability to generate a Digital Cinema Distribution Master (DCDM) that will still be valid in many years.

In all my testing, I never came across a case where ACES would not make something look better. It is such a quality improvement : everything looks more real and behaves more correctly.

We get so much closer to a photographic approach with ACES. All the hard work has been done by the Academy of Motion Picture Arts and Sciences (AMPAS) and the ACES community. And it is up to us, CG Supervisors and Cinematographers, to spread the word !


To sum it up :

  • Render in ACEScg to get the best lighting calculation possible, even if our monitors are not capable of displaying all of it.
  • Display using a Display Transform that suits your monitor and project.
  • Use a monitor that covers your needs (ideally 100% P3 coverage for feature film).

We can now move to less technical chapters and focus on cinematography. Yay !