Chapter 1.5: Academy Color Encoding System (ACES)


In the previous chapter, we looked at Color Management prior to 2014. I am now going to describe the revolutionary system developed by hundreds of professionals under the auspices of the Academy of Motion Picture Arts and Sciences.

Spoiler alert : ACES is available through OCIO (here is the link to the ACES 1.2 config), just like the spi-anim config (seen in the previous chapter). ACES has also been implemented in CTL in Resolve and GLSL / HLSL in Unreal and Unity.

More than 300 movies have been made using ACES and big VFX studios such as ILM, Framestore, Double Negative and Animal Logic use it. ACES has also become a delivery standard for Netflix. The whole idea behind ACES is to set a standard to help professionals with their color management.

If you don’t feel like sitting through another technical chapter, I don’t blame you. You can skip to chapter 2.

Otherwise let’s dive in.

Academy Color Encoding System (ACES)

Something that really hit me when I arrived at Animal Logic in 2016 was their range of colors. The artists were working on a beautiful and very saturated movie called Lego Batman. It was my first day and I saw this shot on a screen (I think Nick Cross lit this shot).

The LEGO Batman Movie : Warner Bros. Pictures and Warner Animation Group, in association with LEGO System A/S, a Lin Pictures / Lord Miller / Vertigo Entertainment production

I really thought to myself : “Wow ! This looks good ! How did they get these crazy colors ?” The range of colors really seemed wider than my previous studio. I realized later that it was due to ACES :

  • We have seen in the previous chapter that many studios and schools render within the sRGB gamut with a linear transfer function and display in sRGB through a 1D LUT. That is not ideal as they work in the smallest gamut of all.
  • Animal Logic (and many other studios such as ILM or MPC) render in ACEScg (which is similar to Rec. 2020) and display in DCI-P3 which is the industry standard for cinema theaters. ACES helps them to manage gamut in the best way.

Why ACES ?

ACES has been developed by the Academy of Motion Picture Arts and Sciences, some VFX Studios (MPC, Animal Logic…) and the Camera Manufacturers (Arri, Red, Sony, Canon…). The idea behind it is pretty genius.

When cameras were analog, things were simple. There were only a couple of formats : 35mm and 70mm. The Original Print, shot on film, was available for eternity.

But with the digital revolution, multiple cameras and formats have emerged. These proprietary systems, used for the Digital Cinema Distribution Master (DCDM), could become outdated quite quickly. Indeed, the technology of digital cameras evolves pretty fast. The issue is that when these movies had to be remastered for new media, the DCDMs were no longer relevant.

What is ACES ?

ACES is a series of color spaces and the transforms that allow you to move between them. It is currently the most powerful tool in Color Management. The reference color space developed by the Academy is called ACES2065-1 (AP0 primaries). It is simply the ultimate color space and here are its characteristics :

  • Ultra Wide Gamut
  • Linear
  • High Dynamic Range
  • Standardised
  • RGB
ACES2065-1 includes all these color spaces. That makes it perfect for storage.

With ACES2065-1 (AP0), the idea is to get a DCDM (Digital Cinema Distribution Master) for eternity. We do NOT know how movies will be watched in 50 or 100 years. ACES has been created for this specific reason : its purpose is to last in time !

The ACES White Point is not exactly D60 (many people get this wrong actually). It was chosen this way to avoid any misunderstanding that ACES would only be compatible with scenes shot under CIE Daylight with a CCT of 6000K.

It’s all explained in here.

ACES color spaces

The color space ACES2065-1 (AP0 primaries) includes most color spaces. This is why it will last for a very long time. Here is a list of all the color spaces available in ACES :

  • ACES 2065-1 is scene linear with AP0 primaries. It remains the core of ACES and is the only interchange and archival format (for DCDM).
  • ACEScg is scene linear with AP1 primaries (the smaller “working” color space for CG).
  • ACEScc, ACEScct and ACESproxy all have AP1 primaries and their own specified logarithmic transfer functions.
  • ACEScc and ACEScct are for color grading. They are working color spaces.
  • ACEScct is very similar to ACEScc across most of the range, except that it adds a “toe” to make it more akin to traditional “log” curves (e.g. Cineon, LogC, S-Log, etc.).
  • ACESproxy is for camera playback and video displays. It is a transport color space.
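The logarithmic encodings above can be sketched in a few lines of Python. These are my transcriptions of the published ACEScc (S-2014-003) and ACEScct (S-2016-001) forward transforms; double-check the constants against the specs before relying on them :

```python
import math

def lin_to_ACEScc(x):
    """AP1 linear -> ACEScc (S-2014-003 forward transform, as published)."""
    if x <= 0.0:
        return (math.log2(2.0 ** -16) + 9.72) / 17.52
    elif x < 2.0 ** -15:
        return (math.log2(2.0 ** -16 + x * 0.5) + 9.72) / 17.52
    return (math.log2(x) + 9.72) / 17.52

def lin_to_ACEScct(x):
    """AP1 linear -> ACEScct : the same log curve, but with a linear 'toe'."""
    if x <= 0.0078125:
        return 10.5402377416545 * x + 0.0729055341958355
    return (math.log2(x) + 9.72) / 17.52

# 18 % grey lands around 0.4135 in both encodings,
# which is what makes them comfortable working spaces for grading.
```

The toe is the only difference : below the breakpoint ACEScct stays linear instead of diving towards minus infinity, which is what makes it behave like Cineon-style log curves under a lift operation.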

There is also an absolutely brilliant post about the different color spaces available in ACES if you want to read more on the topic.

Please note that the ACES2065-1 color space is not recommended for rendering. You should use ACEScg (AP1 primaries). More explanations are provided right below.

I will mostly focus on ACEScg in this post since my book is about full CG projects.

What about ACEScg ?

What about Computer Graphics ? How can ACES benefit our renders ? Alex Fry explains it really well in this video. Some tests have also been conducted by Steve Agland from Animal Logic and Anders Langlands to render in ACES2065-1.

An unexpected issue occurred when rendering in the ACES2065-1 color space : its gamut is so big that it produced some imaginary colors. It is very well explained in this post.

Therefore, another color space has been created especially for CG rendering : ACEScg (AP1 primaries). It is more or less the same as Rec. 2020. I will repeat in bold and with emphasis because it is CRITICAL : you should only render in ACEScg.

What is the difference between ACEScg and Rec. 2020 ? What is the advantage of having the green primary outside of the CIE diagram in ACEScg ? Mostly to encompass P3 : ACEScg is a gamut close to BT.2020, but one that fully encompasses P3. This requires one non-physically-realizable primary : the green one.

Thanks Thomas for the explanation !
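You can check Thomas's point numerically with the published xy chromaticities of the primaries (AP1 from the ACES documentation, BT.2020 from ITU-R BT.2020, P3 from SMPTE) and a simple point-in-triangle test. A sketch :

```python
# CIE xy chromaticities of the primaries, from the published specs.
AP1    = [(0.713, 0.293), (0.165, 0.830), (0.128, 0.044)]  # ACEScg
BT2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]
P3     = [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)]

def inside(point, triangle):
    """True if `point` lies inside the gamut `triangle` (same-side test)."""
    (ax, ay), (bx, by), (cx, cy) = triangle
    px, py = point
    def cross(ox, oy, dx, dy, qx, qy):
        return (dx - ox) * (qy - oy) - (dy - oy) * (qx - ox)
    s1 = cross(ax, ay, bx, by, px, py)
    s2 = cross(bx, by, cx, cy, px, py)
    s3 = cross(cx, cy, ax, ay, px, py)
    return (s1 >= 0 and s2 >= 0 and s3 >= 0) or (s1 <= 0 and s2 <= 0 and s3 <= 0)

# All three P3 primaries fall inside the AP1 triangle :
print(all(inside(p, AP1) for p in P3))  # → True
```

Note the AP1 green primary at (0.165, 0.830) : that is the point sitting outside the spectral locus, the price paid for wrapping P3 so tightly.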

The ultimate rendering space

Why would we render in one color space and display in another ? What is the point ? Remember the Rendering Space and the Display Space ? Well they do NOT have to be the same. It is something that surprises a lot of people but, yes, rendering within different primaries will NOT give the SAME result.

Rendering in Linear – sRGB or ACEScg will not give the same image. Many supervisors have told me : “What is the point in rendering in a different color space if we do NOT have the monitor to show it ?” But they are mistaken. It makes complete sense to render in one color space and view it in another.

Basically, we want to use the most appropriate color space to get the best render possible : ACEScg. That is called Wide Gamut Rendering ! How do we know that this particular color space is the most appropriate ?

To perform such a test, you need a reference called the “Ground truth”. In our case, it would be a render with no bias from a spectral render engine such as Mitsuba. Otherwise you could not compare the renders objectively.

ACEScg comparison

Tests and research have been conducted by Anders Langlands and Thomas Mansencal and are brilliantly explained in this post. Three renders have been done :

  • Rec.709, the smallest gamut of all.
  • Spectral, the ground truth using wavelengths for its calculation.
  • Rec. 2020, which is pretty much equivalent to ACEScg.

Then, you subtract them from one another. The darker it gets, the closer it is to spectral ! Just brilliant ! So if you have a look at the bottom row, the average value is overall darker, which means that Rec. 2020 gets us closer to spectral rendering.
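The comparison itself boils down to very little code. A sketch with made-up pixel values (NOT the actual renders from the post), just to show the mechanics :

```python
def mean_abs_diff(render, reference):
    """Average per-pixel absolute difference between two images
    stored as flat lists of (R, G, B) tuples."""
    total = 0.0
    for (r1, g1, b1), (r2, g2, b2) in zip(render, reference):
        total += abs(r1 - r2) + abs(g1 - g2) + abs(b1 - b2)
    return total / (3 * len(render))

# Hypothetical pixels : 'spectral' is the ground truth, the other two
# stand in for the same scene rendered with different primaries.
spectral = [(0.50, 0.20, 0.10), (0.30, 0.60, 0.05)]
rec709   = [(0.58, 0.15, 0.12), (0.22, 0.70, 0.09)]
rec2020  = [(0.52, 0.19, 0.10), (0.28, 0.62, 0.06)]

# The darker (smaller) the difference image, the closer to ground truth.
print(mean_abs_diff(rec2020, spectral) < mean_abs_diff(rec709, spectral))  # → True
```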

ACEScg explanation

The technical reason behind it is given in a series of articles and posts :

From Thomas Mansencal : On a strictly technical point of view, rendering engines are indeed colourspaces agnostic. They just chew through whatever data you throw at them without making any consideration of the colorspace the data is stored into. However the choice of colorspace and its primaries is critical to achieve a faithful rendering. […]

So basically, if you are using textures done within an sRGB colorspace, you will be rendering in this particular colorspace. Only the use of an Input Device Transform (IDT) will allow you to render in ACEScg.

From Thomas Mansencal : […] some RGB colorspaces have gamuts that are better suited for CG rendering and will get results that overall will be closer to a ground truth full spectral rendering. ACEScg / BT.2020 have been shown to produce more faithful results in that regard. […] Yes, the basis vectors are different and BT.2020 / ACEScg are producing better results, likely because the primaries are sharper along the fact that the basis vectors are rotated in way that reduces errors. A few people (I’m one of them) have written about that a few years ago about it. […] Each RGB colorspace has different basis vectors as a result of which mathematical operations such as multiplication, division and power are not equivalent.
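Thomas's last point — that multiplication is not equivalent across RGB colorspaces — is easy to demonstrate. Below I use the commonly published linear sRGB ↔ ACEScg matrices (Bradford-adapted D65 ↔ D60); the exact coefficients are worth verifying against your own OCIO config :

```python
# Commonly published 3x3 matrices between linear sRGB and ACEScg primaries.
SRGB_TO_ACESCG = [[0.6131, 0.3395, 0.0474],
                  [0.0702, 0.9164, 0.0134],
                  [0.0206, 0.1096, 0.8698]]
ACESCG_TO_SRGB = [[ 1.7051, -0.6218, -0.0833],
                  [-0.1303,  1.1408, -0.0105],
                  [-0.0240, -0.1290,  1.1530]]

def apply(m, rgb):
    return tuple(sum(m[i][j] * rgb[j] for j in range(3)) for i in range(3))

a = (0.9, 0.2, 0.1)  # two arbitrary colors, linear sRGB primaries
b = (0.1, 0.8, 0.3)

# Multiply in sRGB primaries...
in_srgb = tuple(x * y for x, y in zip(a, b))

# ...versus convert to ACEScg, multiply there, convert back.
a_cg, b_cg = apply(SRGB_TO_ACESCG, a), apply(SRGB_TO_ACESCG, b)
in_acescg = apply(ACESCG_TO_SRGB, tuple(x * y for x, y in zip(a_cg, b_cg)))

# The two products disagree : the same shading operation
# gives a different answer depending on the basis vectors.
```

That is the whole story of Wide Gamut Rendering in two tuples : the renderer performs the same multiplications either way, but the basis they are performed in changes the light transport result.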

Display Target

The display is based on your project :

  • Do you work for TV and Internet ? You should display in sRGB or Rec.709.
  • Are you working in Feature Film ? Well you should display in DCI-P3.
  • Do you want to output for a UHDTV ? You should display in Rec. 2020.

Rec. 2020 is clearly the future but there are no monitors able to cover 100% of this color space. The technology is not there yet. But in ten years maybe, it will be the new norm. And the best part is that when this day comes, ACES will still be a proper solution since it includes most color spaces.

To sum it up :

  • Render in ACEScg to get the best lighting calculation possible, even if our monitors are not capable of displaying it.
  • Display in Nuke or RV using a Display Transform that suits your project.
  • Use a monitor that covers your needs (which should be ideally 100% coverage of DCI-P3 for feature film).

I’ll just put it out there so that it is clear : there is no point in using a P3D60 ACES ODT if your monitor only covers sRGB. It won’t make your renders look prettier.

Your ODT should match your screen basically.

ACES in one picture

ACEScg is not equal to Rec.2020 but they are quite similar.
  • A. IDT is the import/conversion of the images to the ACEScg color space.
  • B. ACEScg is the Rendering Space.
  • C. RRT + ODT are the output transforms (LUTs) to any monitor or video projector.

The idea behind ACES is to deal with any color transform you may need :

  • Is your texture in sRGB from Photoshop ? Or is it in linear within the sRGB gamut ? ACES provides all the matrices and LUTs you need to jump from one color space to another in the IDT (Input Device Transform).
  • Does your monitor cover Rec.709 or DCI-P3 ? ACES provides all the LUTs to view your renders with the most appropriate display transform.

This is why I love ACES so much : you always know in which gamut you are. Another argument I was given against ACES was : “We don’t care about ACES, we render in linear.”

Thinking that linear was an infinite color space…

Input Device Transform (IDT)

Here are two renders of a Cornell Box in Guerilla Render. I have used some textures with pure green at 0,1,0 and pure red at 1,0,0 for both renders.


Many readers have asked me about this test. So I’ll give here some extra information. The main difference between these Cornell boxes is the rendering space :

  • In the first one, the rendering space is what many software packages call linear. Which actually means sRGB gamut with a linear transfer function.
  • In the second one, the rendering space is ACEScg. In this situation I had to set the IDT correctly to take full advantage of the wide gamut.

The main thing to take into account about this test is that I used some textures. If you use colors directly in your software, you will not get the same result (especially with the Color Picking role activated).

You must be extra careful with your mipmap generation (tex files). If you switch your rendering space, it is safer to delete the existing tex files. Otherwise you may get some incorrect results.

Why do we get a better global illumination ?

Thanks to ACES we have changed the primaries of our scene and we got a better GI in our render. We can do the same process in Nuke to analyze what is actually happening :

  • On the left, we have a pure green constant at 0,1,0.
  • We convert it from sRGB to ACEScg using an OCIOColorSpace node.
  • The same color expressed in ACEScg has some information in the red and blue channels. ACES does not add anything. It is really just a conversion.

The conversion does not change the color. It gives the same color (or chromaticity) but expressed differently.

We should really not say that ACES adds anything.
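You can reproduce the Nuke experiment with nothing more than a 3×3 matrix. The coefficients below are the commonly published linear sRGB to ACEScg values (Bradford-adapted); check them against your own config :

```python
# Commonly published linear sRGB -> ACEScg matrix (Bradford-adapted).
SRGB_TO_ACESCG = [[0.6131, 0.3395, 0.0474],
                  [0.0702, 0.9164, 0.0134],
                  [0.0206, 0.1096, 0.8698]]

def srgb_to_acescg(rgb):
    return tuple(sum(row[j] * rgb[j] for j in range(3)) for row in SRGB_TO_ACESCG)

# The pure sRGB green primary, once expressed in ACEScg,
# has energy in all three channels : nothing was added, the
# same chromaticity is simply written in a different basis.
green = srgb_to_acescg((0.0, 1.0, 0.0))
print(green)  # roughly (0.3395, 0.9164, 0.1096)
```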

Here is another way of explaining it :

  • On the left, we have a green primary in the sRGB/Rec.709 color space.
  • Using a 3D LUT to switch from sRGB to ACEScg, this color with unique XY coordinates has been converted.
  • The color is not a pure green anymore in the ACEScg color space (right image).

Thanks to ACES and its conversion process, the ray is no longer stopped by a zero on some channels (red and blue in this case). Light paths are therefore less likely to be stopped by a null channel.

IDT (Input Device Transform) description

ACES provides all the 3D LUTs and matrices we need to process these transforms. This is why it is so powerful ! The most common IDTs for Computer Graphics are :

  • Utility – sRGB – Texture : If your texture comes from Photoshop or the Internet. Only for 8-bit textures, like an albedo map.
  • Utility – Linear – sRGB : If your texture is linear within the sRGB primaries and you want to convert it to ACEScg.
  • Utility – Raw : If you do NOT want any transform applied on your texture, like normal maps.
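Under the hood, Utility – sRGB – Texture is simply the sRGB decoding curve followed by a change of primaries. A sketch of the same maths in Python (the matrix values are the commonly published ones, to be verified against your config) :

```python
SRGB_TO_ACESCG = [[0.6131, 0.3395, 0.0474],
                  [0.0702, 0.9164, 0.0134],
                  [0.0206, 0.1096, 0.8698]]

def srgb_decode(v):
    """Inverse sRGB transfer function (IEC 61966-2-1) : display value -> linear."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def srgb_texture_to_acescg(rgb):
    """What 'Utility - sRGB - Texture' amounts to : decode, then change primaries."""
    linear = [srgb_decode(v) for v in rgb]
    return tuple(sum(row[j] * linear[j] for j in range(3)) for row in SRGB_TO_ACESCG)
```

Utility – Linear – sRGB is the same thing without the decoding step, and Utility – Raw skips both.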

Most studios nowadays work in “lazy ACES” because of the lack of OCIO in Substance. It means that we actually paint textures in a sRGB gamut and convert them on the fly to ACEScg in the render engine.

Something that took me some time to understand as well : if your rendering space is ACEScg, then Utility – Raw and ACEScg are the same IDT. No transform is applied with either option.

To plot the gamut

Plotting the gamut of an image allows you to map its pixels against the CIE 1931 Chromaticity Diagram. That is a pretty brilliant concept ! And a great way to debug ! This function is available in colour-science, developed by Thomas Mansencal.

  • On the first image, we have plotted a render done in Rec.709. The pixels are clearly limited by the Rec.709 primaries. They are compressed against the basis vectors of its gamut.
  • On the second image, we have plotted a render done in ACEScg. The pixels, especially the green ones, are not limited anymore and offer a much more satisfying coverage of the gamut.
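The projection behind those plots is short enough to write by hand : go from linear RGB to CIE XYZ, then down to xy. This sketch uses the standard sRGB-to-XYZ matrix; colour-science wraps the whole thing, diagram included, into a single plotting call :

```python
# Standard linear sRGB -> CIE XYZ matrix (D65).
SRGB_TO_XYZ = [[0.4124, 0.3576, 0.1805],
               [0.2126, 0.7152, 0.0722],
               [0.0193, 0.1192, 0.9505]]

def xy_chromaticity(rgb):
    """Project a linear sRGB pixel onto the CIE 1931 chromaticity diagram."""
    X, Y, Z = (sum(row[j] * rgb[j] for j in range(3)) for row in SRGB_TO_XYZ)
    s = X + Y + Z
    return (X / s, Y / s)

# The sRGB red primary lands at its textbook coordinates :
x, y = xy_chromaticity((1.0, 0.0, 0.0))
print(round(x, 3), round(y, 3))  # → 0.64 0.33
```

Run that over every pixel of a render and scatter the results : pixels from a Rec.709 render will all pile up inside the Rec.709 triangle, exactly as in the plots above.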

There is also an app available on Windows and Mac called Color Spatioplotter if you want to plot the gamut of an image. I haven’t tried it myself but from feedback I got, it seems to be working fine at a very affordable price.

Reference Rendering Transform + Output Device Transform (RRT + ODT)

The academy recommends the use of an ODT adapted to our output. Many artists have been confused by Nuke’s display transform :

  • Why do the sRGB display transform and sRGB (ACES) NOT match ?
  • Because the ODT of ACES includes some tone mapping !

In ACES, we call this the “rendering” step and in ITU-R BT.2100 (which is the standard for HDR television) it is called the OOTF.

Most artists know this process as “tone mapping”.

From ACEScentral, Nick Shaw explains :

The ACES Rec.709 Output Transform is a much more sophisticated display transform, which includes a colour space mapping from the ACEScg working space to Rec.709, and tone mapping to expand mid-tone contrast and compress the shadows and highlights. The aim of this is to produce an image on a Rec.709/BT.1886 display which is a good perceptual match to the original scene.

It is not an artistic LUT. Not at all.
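To see why a tone-mapped output transform can never match a plain gamma, here is a toy comparison. The Reinhard curve below is NOT the actual RRT maths, just the simplest possible stand-in for “compress the highlights” :

```python
def plain_display(x):
    """Naive display transform : clip scene values, then gamma-encode."""
    return min(max(x, 0.0), 1.0) ** (1.0 / 2.4)

def toy_tonemapped(x):
    """Stand-in for a filmic output transform : Reinhard roll-off, then gamma."""
    return (x / (1.0 + x)) ** (1.0 / 2.4)

# A scene value of 4.0 clips to pure white with the naive transform,
# but keeps highlight detail with the roll-off :
print(plain_display(4.0))               # → 1.0
print(round(toy_tonemapped(4.0), 3))    # → 0.911
```

The two curves disagree almost everywhere, which is exactly why an sRGB viewer LUT and the sRGB (ACES) Output Transform cannot look the same.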

Some interesting explanations about the RRT/ODT process from this post :

  • The ACES RRT was designed for Theatrical Exhibition where Viewing Conditions are Dark. Content for cinema tends to be authored with more contrast to compensate for the dark surround.
  • Even though there is a surround compensation process (Dark <–> Dim), the values to drive that process were subjectively obtained and it might not be enough for all the cases.
  • The RRT + ODTs are also the results of viewing images by an expert viewer, so there is undeniably some subjectivity built-in.
  • Some companies such as Epic Games are pre-exposing the Scene-Referred Values with a 1.45 gain (which would match roughly an exposure increase of 0.55 in your lights).
  • A lot of that is described in the ACES RAE paper: ACES Retrospective and Enhancements.

Examples and comparisons

Here is the secret recipe of why ACES looks so good ! Check the highlights on the second render, they just look amazing !


Another description of the ODT tone scale can be found here. I have also done a test on the MacBeth chart to compare the Film (sRGB) from the spi-anim config with the ACES config. The results speak for themselves.


ODT technical description

What exactly happens when we view in Rec.709 (ACES) with an OCIO config ? To go to Rec.709 (ACES), OCIO first transforms the color to ACES2065-1 (AP0 primaries). Then from AP0 we go to a colour space called Shaper thanks to a 1D LUT and finally to Rec.709 thanks to a 3D LUT.

Some people complain about the tone mapping included in the ODT. I personally love it. Here are a few things to know :

The RRT and ODT splines and thus the ACES system tone scale (RRT+ODT) were derived through visual testing on a large test set of images […] from expert viewers. So no, the values are not arbitrary.

From Scott Dyer, ACES mentor.

ODT limitations

There is no gamut mapping in ACES yet. This feature will be probably integrated in ACES 2.0. What does it mean exactly ?

If you set a color in ACEScg, like a red primary (1/0/0), this value will be most likely clipped by the ODT for display. For example, if you work in movies, an ACEScg red is outside of the P3 gamut.

All ODTs clamp to the target gamut so it is impossible to have something outside the gamut.

We could also explain it this way : an ODT P3 brings the value back into its gamut through the clamp (which may cause some banding).

// Handle out-of-gamut values
// Clip values < 0 or > 1 (i.e. projecting outside the display primaries)
linearCV = clamp_f3( linearCV, 0., 1.);

All values are assumed to be between 0 and 1 after this process, and this is the penultimate step before the transfer function (which will not change this result).

The clamp is what brings the red back since ACES has no advanced gamut mapping. Besides, apart from ICC, there are not really any systems that do that. It is mostly the responsibility of the colorist to manage this kind of problem by desaturating the red a little.
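You can check this numerically. Converting the ACEScg red primary towards P3-D65 with the published AP1-to-XYZ and XYZ-to-P3-D65 matrices (I am ignoring the D60-to-D65 chromatic adaptation here, so treat the exact numbers as approximate) :

```python
# Published AP1 (ACEScg) -> CIE XYZ matrix, D60 white.
AP1_TO_XYZ = [[ 0.6624542,  0.1340042,  0.1561877],
              [ 0.2722287,  0.6740818,  0.0536895],
              [-0.0055746,  0.0040607,  1.0103391]]
# Published CIE XYZ -> P3-D65 matrix.
XYZ_TO_P3D65 = [[ 2.4934969, -0.9313836, -0.4027108],
                [-0.8294890,  1.7626641,  0.0236247],
                [ 0.0358458, -0.0761724,  0.9568845]]

def apply(m, v):
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

acescg_red = (1.0, 0.0, 0.0)
p3 = apply(XYZ_TO_P3D65, apply(AP1_TO_XYZ, acescg_red))
print([round(c, 3) for c in p3])  # red above 1, green below 0 : outside P3

# The ODT simply clamps, exactly like clamp_f3 in the CTL snippet above :
clamped = tuple(min(max(c, 0.0), 1.0) for c in p3)
```

The out-of-range red and negative green are precisely the values the `clamp_f3` call throws away.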

Having said that, it is important to mention that :

  • There is no projector at the moment that really covers 100% of BT.2020 and none will ever cover ACEScg.
  • BT.2020 primaries can only be produced by lasers (and certainly not by reflective surfaces).
  • There is no non-emissive surface that could have such saturated colors (a red ACEScg primary).

But let’s not see this as a limitation of the system, on the contrary ! The system allows you to use extreme values, so with great power comes great responsibility.

More on this stuff later in the paragraphs about Pointer’s Gamut and ACES limitations.


Most color pipelines nowadays are set through OCIO which is great because of its compatibility with many software packages : Maya, Guerilla, Nuke, Mari, RV… But there is one downside to using OCIO and LUTs : you lose precision. It is really well explained in this post and also here.

The reason the CTF works better is because it is more of an “exact math” implementation of the ACES CTL code rather than baking down into a LUT-based representation.

From Doug Walker, ACES mentor.

Discrete and Continuous transforms

What is happening here ? The answer has been given to me by my colleague, Christophe Verspieren. He showed me the concept of Continuous and Discrete that is at play with the baked LUTs from OCIO. It is actually pretty easy to understand. Check this image from this site :

Discrete and Continuous are two paradigms of calculation. On the left our baked LUTs. On the right the equations, which are much more accurate as there are no gaps to fill.

When we go from Scene Referred to Display Referred, we have to cover a huge dynamic range (ACES deals with something like 15 stops). The discrete transform actually covers huge zones.

We do not split the dynamic range into equal zones : we prefer to describe the most common values in detail, at the expense of the highlights. The display tone mapping (ODT) therefore makes these false chromaticities really visible by increasing the exposure.

Also, even if the transformation is mathematically defined in OCIO, the fact that it runs on GPU rather than on CPU leads to a discretization of the formula : the graphics card actually creates a LUT !

Really these issues are endless…

Color interpolation gaps

Furthermore, these gaps (from the discretization) are filled linearly which is not necessarily the most natural way. Even if we change gamut, we still work in RGB and linear interpolations are done on the line going through A and B. Sometimes it would be better to manage color interpolation in Lab colorspace which will only be supported in OCIO 2.0.

Not every mathematical formula is available in OCIO 1.0 : only OCIO 2.0 will allow the calculations needed by ACES to be represented correctly.

To sum it up :

  • Between each slice we have linear interpolations.
  • In very large areas these interpolations lack accuracy.
  • This results in chromaticity errors in the highlights due to the discretisation.
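The accuracy loss from discretisation is easy to measure on a toy curve. Below, a log-shaped stand-in for a shaper/tone curve is baked into 1D LUTs of different sizes and linearly interpolated back, sketching the Discrete versus Continuous trade-off :

```python
import math

def bake_lut(f, size, lo, hi):
    """Sample f at `size` uniform points : the Discrete representation."""
    return [f(lo + (hi - lo) * i / (size - 1)) for i in range(size)]

def lookup(lut, x, lo, hi):
    """Linearly interpolate between LUT entries, as happens between slices."""
    t = (x - lo) / (hi - lo) * (len(lut) - 1)
    i = min(int(t), len(lut) - 2)
    frac = t - i
    return lut[i] * (1.0 - frac) + lut[i + 1] * frac

curve = lambda x: math.log2(x + 1.0)   # stand-in for a shaper curve
lo, hi = 0.0, 16.0                     # a wide, HDR-ish domain

def max_error(size):
    lut = bake_lut(curve, size, lo, hi)
    probes = [lo + (hi - lo) * k / 997.0 for k in range(998)]
    return max(abs(lookup(lut, x, lo, hi) - curve(x)) for x in probes)

# More samples, less interpolation error — but never zero :
print(max_error(8) > max_error(64) > 0.0)  # → True
```

The error concentrates exactly where the curve bends the most and the slices are widest, which is why the artefacts show up in the extreme values.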

How does this translate visually ? Let’s have a look at some renders with extreme saturation to compare different solutions.


In the version 2.0 of this book (to be released in Q4 2020) I will try to address the LMT process of ACES.

Sounds like an interesting topic !

All paths lead to ACES

ACES implementation

How important is this ? Should we stick to OCIO or look into this CTL implementation ? I guess the best answer I have read on this topic comes (once again !) from Alex Fry :

With normally exposed images, especially from live action cameras, you’re unlikely to see any values that render differently to the OCIO config, but with CG images containing extreme intensity and saturation you will see more pleasing roll off and desaturation.

December 2016 and Alex Fry had already understood so much stuff…

So it looks like CTL would be worth using especially if you are working on saturated animated feature films ! To sum it up, here are my personal ACES recommendations :

  • ACES 1.0.3 : very good config but some limitations as we have seen.
  • ACES 1.1 : improved version with the increased shaper dynamic range.
  • The ACES 1.2 config is available since 5th of April 2020.
  • ACES 2.0 and OCIO 2.0 should bring a perfect match to AMPAS CTL as stated in the roadmap. Beta testing will start after Siggraph 2020.
  • ACES CTL has been implemented through SynColor into Maya and also in Resolve. No color shifting due to loss of precision.

CTL nodes in Nuke

If you’re keen on going down the CTL road, you may ask : how can I maintain a color management chain without OCIO ? That is a very tricky question. I don’t have the answer but Alex Fry was kind enough to share with us this Pure Nuke ACES RRT & ODT.

These Nuke nodes are a 100% match to a pure CTL implementation but run way faster. Please note that these nodes work with ACES2065-1 footage. Since we render in ACEScg, you will need to convert your footage with an OCIOColorSpace node before plugging in these nodes.

Quick update : the Pure Nuke ACES node has been updated by Nick Shaw to work in P3D65, Rec.709, Rec.2020…

Thanks Nick !


This chapter was never intended to be a tutorial about ACES implementation in different software packages. I’d rather focus on the concepts of ACES and how to hopefully master them.

Having said that, and due to popular demand, I have listed below the software I personally use. If you want to know about Mari, Affinity, Clarisse, c4d or Houdini, please join the ACEScentral forum. You’ll find plenty of artists and TDs to help you !

Maya and ACES

The Color Management Module from Maya is pretty self-explanatory :

Autodesk Maya can load ACES in two ways : OCIO and CTL through SynColor.

If you load the ACES 1.2 OCIO configuration, you will get some presets that work well. Solid Angle has done a pretty decent tutorial describing the ACES integration in Maya.

If you are using SynColor, Maya assumes an sRGB display by default, although it is possible to use any of the other ACES display options by editing the synColorConfig file (please see the user guide).

Resolve and ACES

All ACES transforms have been implemented in Resolve with CTL for better accuracy. I am not a frequent Resolve user but I have tried to come up with a few screenshots to set up Resolve in ACES.


Some schools and studios have a final grading pass before delivery. This stage is called Digital Intermediate (DI) and is often done in Resolve. You may notice some visual improvements (with extreme values) if you have worked with OCIO during the whole process except this Resolve stage.

Some posts on the forum of Blackmagic also explain this setup and how to work between Fusion and Resolve.

All of this available for free… Pretty neat !

Guerilla and ACES

Since Guerilla 2.1 (available for free) OCIO management is fully supported with looks and input LUTs, color picker and color boxes. All the roles are based on the OCIO config.

There is no need to use a .bat file anymore to load an OCIO Config. And if you want to, you can still use a MaterialOverride node in your Render Graph to override the Input Device Transform (IDT) for the Color inputs.

I will detail later important differences about the color picking role between Maya and Guerilla.

Nuke and ACES

ACES has been integrated natively in Nuke since version 10. Nuke 12 integrates the ACES 1.1 OCIO config by default and the Foundry has even removed the spi-anim config !

For final delivery of your project to the Digital Intermediate, you will have to deliver ACES compliant EXR files. This is the standard set by the Academy to exchange files between facilities. This is really important. The output of your 3D render will be ACEScg (AP1) but the output of Nuke has to be ACES2065-1 (AP0) with the correct metadata.

The interchange and archival files should be written as OpenEXRs conforming to SMPTE 2065-4. In Nuke, you should set the Write node’s colorspace to ACES2065-1 and check the box write ACES compliant EXR to get the correct metadata.

Non-OCIO softwares and ACES

Unfortunately some packages have not implemented OCIO yet : Redshift, 3dsMax, Octane… So what should you do if you cannot specify ACEScg as a rendering space in your favorite DCC ? Here is a quick workaround we have come up with :

  • Do a conversion of the textures to ACEScg.
  • Load them as linear.
  • Look at the render through an ACES ODT.

One may think : if I do that, the software would still render in “linear” but with the textures converted to ACEScg. Well, not exactly : if you do that, you are rendering in ACEScg. Instead of the IDT doing the conversion, you just did it yourself manually.

Render engines are agnostic. They just chew what you feed them. If you feed them with ACEScg textures and render in linear, your render is ACEScg.

The IDT and ACEScg rendering space is just a faster way to work. In an OCIO friendly environment, you set the IDT on the texture and it converts automatically to the Rendering space (ACEScg in this case).

BUT since you converted the textures manually, you are doing the exact same process. It is just longer, I reckon. When Redshift or 3dsMax integrate OCIO, you will no longer need to convert the textures. That’s about it.

Manual conversion process

There are several ways to convert your textures manually (I haven’t tested them all myself, and if you have used different techniques, let me know) :

Here is an example of an Arnold command line to convert HDRIs (you’ll need maketx to make it work) :

"C:\Program Files\Autodesk\Arnold\maya2020\bin\maketx.exe" -v -u --stats --envlatl --colorconfig %OCIO% --colorconvert "Utility - Linear - sRGB" "ACES - ACEScg" "path\to\*.hdr" -o "path\to\*.tx"

%OCIO% here is the system variable set to the OCIO config file. If you haven’t set it yet, just use the path to the config.ocio instead.

Last but not least, which textures should you convert ? You don’t need to convert grayscale linear maps (aka data maps) such as roughness or displacement maps. They will look exactly the same in “linear” and “ACEScg”. You only need to convert color maps such as base color (albedo) or emission.
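The reason data and grayscale maps can be left alone : neutral values are preserved by the gamut conversion, because each row of the sRGB-to-ACEScg matrix sums to 1. A quick check (same commonly published coefficients as elsewhere in this chapter) :

```python
SRGB_TO_ACESCG = [[0.6131, 0.3395, 0.0474],
                  [0.0702, 0.9164, 0.0134],
                  [0.0206, 0.1096, 0.8698]]

def convert(rgb):
    return tuple(sum(row[j] * rgb[j] for j in range(3)) for row in SRGB_TO_ACESCG)

# A neutral value (e.g. a 0.5 roughness) is untouched by the conversion...
print(convert((0.5, 0.5, 0.5)))   # ≈ (0.5, 0.5, 0.5)

# ...whereas a saturated albedo is not :
print(convert((0.8, 0.1, 0.1)))
```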

Here is the setup in Redshift :


The primaries are set by the texture itself since the software is not handling them.

Substance and ACES

In December 2019, an announcement was made about Substance Designer supporting OCIO Color Management. Substance Painter is not OCIO friendly yet. But there is a workaround available to paint with an ACES ODT.

Substance Painter uses a Color Profile (exr) as a display LUT. This Color Profile must contain the transfer function adapted to your display device. Basically this display LUT for Substance Painter is a 2D texture. Its generation is documented on Allegorithmic website.

If you are not familiar with this method you can also generate the texture in Nuke by using the image reference provided by Allegorithmic. You just need to apply a colorspace transform from Utility – Linear – sRGB to Output – Rec.709 using an OCIOColorSpace node. The method is detailed in the following slide :


In the slide above, I detail two methods that will give the exact same result. I just wanted to show you both ways. Thomas Mansencal provides the Nuke nodes in this post.

Photoshop and ACES

ACES and Photoshop are giving headaches to many people and I wanted to describe a logarithmic workaround we have used for a couple of years now. I understand it is not as Photoshop friendly as an ICC profile but I know it would be helpful to some artists and students. We will start this workflow in Nuke and then move to Photoshop.

ACES uses two logarithmic color spaces : ACEScc and ACEScct. If you want to know more about the differences between them, just follow the link.

ACESproxy is not considered a working space, more like a transport one.

Photoshop can load a csp LUT as a display LUT layer. This display LUT must contain the transfer function adapted to your display device. Artists generally paint their Matte-Paintings in logarithmic in Photoshop and use a csp LUT to display the image correctly. Here is a description of this workaround method :


I know it is not a perfect ACES workflow but I think it is an easy workaround for artists who are not familiar with ICC profiles like myself. You can simply generate the csp display LUT in Nuke thanks to three nodes CMSTestPattern, OCIOColorSpace and GenerateLut. Here is a slide to show you how :


I really want to state that this is only a workaround and you could probably check Affinity as well, which has a better ACES integration.

If you’re looking for other ACES workflows with Photoshop, check this post.

Thanks Muhammed Hamed !

Render engines and ACES

Most render engines have integrated OCIO, which gives us access to ACES. Autodesk has even come up with a CTL integration of ACES in Maya. But there is one thing I haven’t seen addressed so far (maybe in V-Ray ?) : features and calculations based on light spectra are still done with sRGB/Rec.709 primaries, such as :

  • Light temperature.
  • Camera white balance as temperature.
  • Physical sun & sky.

My favorite quote on this topic :

The Skylight is a spectral representation converted to sRGB/Rec.709 primaries.

This is true for most render engines.

I am using V-Ray as an example here, but this may apply to most render engines :

V-Ray uses sRGB primaries by default, but this is really only relevant if you use any V-Ray features that deal with spectra – like light temperature, camera white balance as temperature, physical sun and sky. […] Normally, V-Ray treats all colors as triplets of floating point values; for the most part V-Ray doesn’t really care what the three numbers actually mean. However some calculations in V-Ray are based on light spectra; the conversion of these spectra to floating-point color triplets assumes that the three numbers mean something specific – a color in some predefined internal renderer color space. This means that when converting from Kelvin temperature to RGB colors, V-Ray must know what that internal renderer color space is.

From this post in 2016.

I am pretty sure that developers will soon catch up on these points. Otherwise you may simply use a 3×3 transformation from sRGB -> ACEScg on the output.
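Such a 3×3 conversion is easy to sketch. The matrix below is the commonly published Bradford-adapted linear sRGB/Rec.709 -> ACEScg matrix ; double-check it against your own config before relying on it :

```python
# Commonly published linear sRGB/Rec.709 -> ACEScg matrix (Bradford CAT, D65 -> D60).
SRGB_TO_ACESCG = [
    [0.613097, 0.339523, 0.047379],
    [0.070194, 0.916354, 0.013452],
    [0.020616, 0.109570, 0.869815],
]

def srgb_lin_to_acescg(rgb):
    """Apply the 3x3 primaries conversion to a linear RGB triplet."""
    return tuple(sum(m * c for m, c in zip(row, rgb)) for row in SRGB_TO_ACESCG)

# A Kelvin-derived white computed with Rec.709 primaries stays white after conversion,
# since each row of the matrix sums to 1 :
print(srgb_lin_to_acescg((1.0, 1.0, 1.0)))
```

Applying this matrix on the output of a sun & sky or light-temperature feature brings its sRGB-primaries result into the ACEScg rendering space.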

Asset conversion

Manual workflow

One of the most frequent questions I have been asked is about converting assets. How should we deal with assets created in linear – sRGB to convert them to ACEScg ? Here is an example I had to face recently. Let’s say you work on a movie with a famous character dressed in red. Generally, the client will be very picky about this red value. It has to match !


I have also used the same technique for our next example. The challenge was to maintain the same colors from a boat concept painted in Photoshop into an ACES render. I know that fidelity to color keys and concepts is critical for many studios.


The workflow shown in our two previous examples is quite manual and a bit tricky. A simpler workflow is described right below. At least you have both techniques to play with.

I totally accept the fact that this concept has some lighting information. So it can become arbitrary to pick a color in one place or another. I tried to be as accurate as I could.

I describe how to pick the values from a larger zone in Nuke right below.

Inverted RRT/ODT Workflow

This topic has been addressed many many many many many times on acescentral. How do we preserve the look of an image from the internet into an ACES workflow ? The Color Picking role can be used as an IDT to do so. That is so powerful and absolutely genius. This process is simply called Inverted LUT.

Importing an image in Nuke with a Color Picking role (which is set by default to Output – Rec.709) is just a way to tell ACES : This image, wherever it comes from, is my final result and I want to pick some ACEScg values from it. When you think about it, the (almost) perfect reversibility of the RRT/ODT is just mind-blowing. What happens is just a perfect round trip :

  • IDT : Output – Rec.709 -> ACEScg
  • ODT : ACEScg -> Output – Rec.709

If your ODT is different from Output – Rec.709, I suggest that you modify the OCIO config file with the ODT you are using.

Otherwise the round trip would be incomplete.

This process has a couple of limitations :

  • It struggles a bit with very saturated and extreme values, especially the yellow ones. You could possibly end up with some negative values for example.
  • If you are color picking an albedo value (like in Nuke), you should be extremely careful that this value is within the PBR range.
  • Using an inverted ODT as a Color Picking role in Maya, you don’t really know which value is used in the rendering space (ACEScg most likely).
  • Finally, if you render in ACES, you should only color pick in an ACES environment.

We can now move to more in-depth information about the Color Picking role.

Color Picking

It took me a while to understand the Color Picking role in Maya. But after six months I think I have finally cracked it. First things first : what is Color Picking ?

  • Color Picking has nothing to do with the Color Picker tool.
  • Color Picking in the OCIO config is a role that you may use to set any image into the ACEScg ecosystem (without modifying it) and pick ACEScg values from it.
  • But Maya has a very interesting use of the Color Picking role. It actually uses this role to set any RGB color in a shader or a light (which is really what Color Picking is about).

So as you can see, an apparently simple question has been answered differently by two DCC applications. The second thing that took me a lot of time to understand is : why Output – Rec.709 as a Color Picking role ?

Well actually it is super easy to understand. The OCIO config comes by default with this Color Picking role because most users will use an Output – Rec.709 (or Output – sRGB) ODT in their viewer.

If your ODT is different from Output – Rec.709, I suggest that you modify the OCIO config file with the ODT you are using.

Your ODT and color picking role should be the same.
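In practice, that change is a single line in the OCIO config file. Here is a sketch of the relevant section, using the ACES 1.2 naming (the exact color space strings depend on your config version, so double-check them against your own file) :

```yaml
roles:
  # Set this role to the ODT you actually view through.
  # For example, a DCI-P3 display would use the matching
  # "Output - ..." P3 color space from your config.
  color_picking: Output - Rec.709
```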

We have seen in the example above that the Color Picking role in Maya allows you to set your colors taking into account the ODT of ACES, which is super user-friendly.

Which Color Picking role ?

In Maya, you should see the Output – Rec.709 color picking role as a way to bypass the ODT for color declaration. Many users complain about the fact that their colors are being modified by the ODT (the tonemapping process). Using your ODT as the color picking role is a way to avoid this.

The real purpose of the color picking role is to know in which color space your colors will be chosen/declared. It is really based on the artist’s preferences. There is no better or worse way.

  • If I want the color I pick to be exactly the same after the ODT, I will choose my ODT as color picking role.
  • If I think it is more convenient to work in linear-sRGB or sRGB-Texture, I can do so as well.
  • All these values will be converted to ACEScg for rendering.

Another way to explain it would be :

  • You find a sRGB value on the internet and want the exact same value after the ODT : you should use your ODT as a color picking role.
  • You find a sRGB value on the internet and do not mind this value being modified by tonemapping : you should use Utility – sRGB – Texture.
  • Someone would choose ACEScg as a color picking role to reach super saturated values, like a laser for example.

Let’s say a client wants a really saturated red : using (1, 0, 0) in sRGB might not be enough. Some people choose ACEScg because they want to know what values are used by the render engine.

It is true that when you use “Output – Something”, you are never really sure of what color is being used during rendering.

Wrong color picking workflow

Here is an example of a very wrong workflow I have heard about. Please be aware that I am only showing this as a counter-example !


And the same goes with these values :

  • If you use 1,1,1 with a Color Picking role set to Output – Rec.709 in Maya, you actually use a value of 16.29, 16.29, 16.29 in your render. And that is very wrong !
  • Same thing with primary values. If you want to reproduce the Cornell Box test in Maya using 1, 0, 0, it won’t work ! You would actually be using 1.24, 0.11, 0.02 (ACEScg values), which is not PBR since the red value exceeds 1.

This is why some studios have set their Color Picking role to ACEScg to avoid this kind of mistake in Maya. I cannot honestly say that one system is better than the other. You just have to be aware of what you are doing.

Color Picking in ACEScg

If you use ACEScg as a Color Picking role, you may face another issue. If you set a primary (like a pure red (1,0,0) for example) in ACEScg :

  • First of all, you should be aware that no monitor is capable of displaying this value. And since there is no gamut mapping in ACES (yet), a red ACEScg primary displayed in Rec.709 will just be clipped !
  • Secondly, there is no non-emissive surface that could have such saturated colors. Only lasers (an emissive source) can reach BT.2020 primaries.

If you are working in a realistic context, all of this concerns you. And if you are working on a cartoon… Well, it concerns you as well ! Some producers out there just like the most outrageous saturation. But most cartoon movies want to be believable in their look. So my advice is to do your look development in a realistic way ; you will still be able to push saturation with a grade afterwards.

All of this only stands for a PBR cartoon movie of course.

If you set your Base Color (or Diffuse Color) directly in ACEScg, it is important to be aware that there should be some limit in terms of saturation. One may ask : what would be a proper limit for the Diffuse Color ?

Here is my personal answer : The Pointer’s gamut ! Before we go deep into its definition, let’s step back a bit and have a proper look at the albedo.

Albedo definition

So… After reading all of this, where do I start ? How do I set all these values correctly ? Sometimes we struggle to balance the look development of our assets between lighting, shading and texturing. For example, how strong should our lights be in a turntable ? Or how bright should our textures be ?

I personally consider the albedo color to be a proper reference. Since albedo comes from real life, it is only natural to consider it as a proper way to balance our assets.

From Substance PBR guide : The visible color of a surface is due to the wavelengths emitted by the light source. These wavelengths are absorbed by the object and reflected both specularly and diffusely. The remaining reflected wavelengths are what we see as color.

The skin of an apple mostly reflects red light. Only the red wavelengths are scattered back outside the apple skin, while the others are absorbed.

From Jeremy Selan : In a real scene, if you measure luminance values with a tool such as a spectroradiometer, one can observe a very wide range of values in a single scene. […] Very dark materials (such as charcoal) reflect a small fraction of incoming light, often in the 3-5% range. As a single number, this overall reflectivity is called “albedo”.

Albedo explanation

We just got two great definitions from Substance and Jeremy Selan. And I think it is really worth it to pause a bit and think about what the albedo really is. Because there are so many misconceptions about it.

From Wikipedia : Surface albedo is defined as the ratio of radiosity to the irradiance (flux per unit area) received by a surface. […] albedo is the directional integration of reflectance over all solar angles in a given period.

Great article about this as well: Everything is shiny.

What do all these definitions tell us ? That you get some specular/glossiness information embedded in the albedo value. That is very important.

Let’s take charcoal as an example. Charcoal not being a pure Lambertian surface, you will get some “specular” reflection at grazing angles. For artistic control in CG, we generally split diffuse and specular reflections, but in real life they are really just the same thing.

Great explanation from Thomas Mansencal.

Diffuse Color or Albedo AOV ?

In real life, albedo includes both diffuse reflection and specular reflection. That’s the first thing we need to clear up. But many render engines, like Guerilla, have simplified this process : the albedo AOV is simply the Diffuse Color (also called Base Color or Diffuse Reflection). Arnold, on the other hand, seems to have kept the “real” thing :

The fresnel in the diffuse_albedo is a result of the Specular IOR and how it affects the albedo of the diffuse to make it energy efficient. So with no Specular, there would be no fresnel on the diffuse_albedo, as all the energy would be in the diffuse.

From the Arnold documentation.

I found this last part particularly interesting. The misconception about the albedo I was talking about earlier may come from these different behaviors. This is particularly critical for our next paragraph.

If you are a purist, working with a spectral render engine, you probably want to measure the BTF of a surface spectrally. This will get an almost perfect representation of the surface, but by doing so you’ll also lose all your artistic controls over it. Otherwise you can check VRScans (which are non-spectral BTFs) or Quixel Megascans.

Spectral BTF measurements are beyond the reach of most studios anyway.

Which Albedo limit ?

If it was not clear enough, I clearly split Color Picking into two categories :

  • Lights and emissive surfaces -> Rec.2020 or ACEScg gamuts.
  • Albedo and specular colors of non-emissive surfaces -> Pointer’s gamut.

To have a saturated color, the spectral distribution must be narrow-band. The laser, i.e. a line, being the most saturated in the world. It’s the opposite of surfaces that are pretty smooth. So they cannot be extremely saturated by the very nature of their spectral distribution of reflection.

Perfect explanation from Thomas Mansencal.

Hence the two categories :

  • With a light source, the spectrum can vary a lot, with spikes.
  • With a natural or man-made surface, the spectrum is very smooth.

The problem you may face, however, is when you light a surface with a narrow-band light. You find yourself in a situation where, even if your surface is indeed smooth, it reflects something narrow-band. So you can end up with a super saturated surface (it often happens at concerts for example). And sometimes (often) it doesn’t go as we would like, e.g. the blue highlights fix.

I just love Thomas’ concert example. So visual !

Finally I had to ask him about fluorescence.

Fluorescence is actually considered emission. So it is not limited like a non-emissive surface would be. Fluorescence is simply a re-emission at a different wavelength. We even talk about Optical Brighteners in laundry detergent.

I love when all the dots start to connect like this.

Existing solutions

Studios have provided different answers to this issue : how do we limit the albedo range to a PBR one ? Here are a couple of solutions I have seen :

  • A technical check scanning albedo textures and stopping the publish if some out-of-range values are found in them.
  • A soft clip limit directly in the shader to fit the range of any color input.
  • A visual check of the Albedo AOV for out-of-range values (meaning the Albedo AOV is correctly set).
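The first of these checks (scanning textures for out-of-range luminance) can be sketched in a few lines of Python. The luminance weights below come from the commonly published ACEScg (AP1) -> XYZ matrix ; the 0.03 floor echoes the 3–5 % charcoal figure quoted earlier, and the 0.9 ceiling is an arbitrary example threshold, not a standard :

```python
# Luminance weights of the AP1 (ACEScg) primaries (Y row of the AP1 -> XYZ matrix).
AP1_LUMA = (0.2722287, 0.6740818, 0.0536895)

# EXAMPLE thresholds : ~3% floor (charcoal territory), 0.9 ceiling (fresh snow).
ALBEDO_MIN, ALBEDO_MAX = 0.03, 0.9

def albedo_luminance_ok(rgb):
    """Return True if an ACEScg albedo triplet has a plausible PBR luminance."""
    y = sum(w * c for w, c in zip(AP1_LUMA, rgb))
    return ALBEDO_MIN <= y <= ALBEDO_MAX

print(albedo_luminance_ok((0.18, 0.18, 0.18)))  # mid grey : plausible
print(albedo_luminance_ok((0.0, 0.0, 0.0)))     # pure black : flagged
```

A publish check would simply run this over every texel of the base color maps and report the offenders.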

All these solutions are okay, even if most of the time they only take into account the luminance of the maps. But what about saturation ? And this is where the Pointer’s Gamut comes in handy !

Interestingly enough Thomas Mansencal made a plea for Colour Analysis tools… In 2014 !

I am only six years late.

Pointer’s gamut

I have been wondering for a while if there was any study on the Diffuse Color and saturation. Until I found out about the Pointer’s Gamut. Most of my data comes from this great article.

First of all, a very basic question that has bothered many people : why is it called Pointer‘s ? It is actually very simple ! In 1980, a scientist named Michael Pointer took over 4000 references to study their colors and came up with a gamut named after him.

The Pointer’s gamut is (an approximation of) the gamut of real surface colors as can be seen by the human eye, based on the research by Michael R. Pointer (1980). […] What this means is that every color that can be reflected by the surface of an object of any material is inside the Pointer’s gamut.

This totally sounds like a legit solution. But to what should we compare the Pointer’s Gamut ? The answer is given to us in the same article (just read it) :

Pointer’s gamut is defined for diffuse reflection (matte surface). As opposed to diffuse reflection there is specular reflection, or mirror like reflection. By specular reflection objects can reflect colors that are outside the Pointer’s gamut.

It could not be any clearer. Really.

The Pointer’s Gamut is a study done for Kodak originally and the list of 4000 samples used is not available.

A technical check based on Pointer’s Gamut

We could definitely think about developing an application allowing us to compare our base color textures to the Pointer’s Gamut. A few studios have already developed some solutions internally and an open-source software would be more than welcome for the community.

A few recommendations about this Pointer’s Gamut check :

  • What is important is to prevent the majority of cases.
  • The Pointer’s Gamut is not exhaustive ; it does not represent all the possible reflectances.
  • Pointer did not measure all actual surfaces, but he had a quite large representative sample.
  • We have to apply all of this wisely and filter 98% of problematic cases.
  • It shouldn’t become a brake on creativity, on the contrary it should help it.
  • The system is not made to stop people from doing their job, but to help them do it better and faster.
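Such a check could be built on a simple point-in-polygon test : convert each texel to a chromaticity (x, y) and test it against the gamut boundary at the corresponding lightness. The boundary coordinates in this sketch are purely hypothetical placeholders ; a real tool would load the actual Pointer’s Gamut data set linked just below :

```python
def point_in_polygon(px, py, polygon):
    """Ray-casting test : is the point (px, py) inside the closed polygon ?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > py) != (y2 > py):
            # x coordinate where this edge crosses the horizontal ray
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

# HYPOTHETICAL boundary slice -- replace with real Pointer's Gamut (x, y) data.
boundary = [(0.2, 0.2), (0.6, 0.25), (0.55, 0.55), (0.25, 0.5)]

print(point_in_polygon(0.35, 0.35, boundary))  # plausible surface chromaticity
print(point_in_polygon(0.7, 0.3, boundary))    # too saturated : flagged
```

The 98 % filtering philosophy from the list above applies here too : flag the obvious offenders, don't block the artist.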

Hopefully I’ll convince a developer to develop this tool in 2020.

The Pointer’s Gamut Data Set is available here.

Albedo charts and their limits

We now have a clear target for our Diffuse Reflection. We may translate this into a technical check or even an albedo chart.

There are some interesting albedo charts out there even from different render engines, like Unity and Unreal. Since these values have been obtained from real-world measured values, they are pretty good guidelines.

It seems to me that nobody has worked on them like Sebastien Lagarde from Unity. In 2013, Sebastien was already talking about their use and their limits.

Paragraph coming in May 2020. Stay tuned !

I sometimes find shaders with the 0 value in albedo. I agree that the 0 value can be an optimization since there is no value nor bsdf to evaluate. But that’s really not the way to go in PBR.

Albedo chart for ACEScg

My process to generate this chart was pretty simple. I merged one chart posted on ACEScentral, one from Sebastien Lagarde and the Macbeth color checker into one. Since most of these charts have sRGB values between 0 and 255, I used this converter to normalize them. I had to dig a bit to find the Macbeth values in sRGB. Then in Nuke, I used an OCIOColorSpace node to convert from “Utility – Linear – sRGB” to “ACES – ACEScg”. And voila !
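The normalization step can be sketched in Python : it decodes 0–255 sRGB values through the sRGB transfer function (as defined in IEC 61966-2-1) into the linear values that an OCIOColorSpace node then converts to ACEScg :

```python
def srgb_8bit_to_linear(v):
    """Decode one 0-255 sRGB channel to a linear 0-1 value (IEC 61966-2-1)."""
    x = v / 255.0
    if x <= 0.04045:
        return x / 12.92
    return ((x + 0.055) / 1.055) ** 2.4

# 8-bit sRGB 119 decodes to roughly 0.18 linear, i.e. mid grey :
print(round(srgb_8bit_to_linear(119), 4))
```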

This whole process, similar to the asset conversion shown above, can be simplified thanks to the color picking role.

If you wish to use this in a render for comparison, please use the IDT : “Utility – sRGB – Texture”. Otherwise I can provide you with an ACEScg exr if you contact me.

The luminance values come from Nuke’s viewer and are linear. I have ordered the albedo values by luminance as I thought it would be more convenient. The chart itself has been written as a jpg file with the following colorspace : “Utility – sRGB – Texture”.

ACES limitations

We could debate whether we are talking here about ACES or OCIO limitations. But one limit I have encountered in my tests is that there is no advanced gamut mapping. I have thought for a very long time that the ODT would scale the gamut or remap it in a smart way. Unfortunately it is not the case : it just clamps !

Apart from ICC, there are not really any systems that do it. It is the responsibility of the colorist to manage this kind of problem by desaturating the reds a bit. But it is not necessarily a limitation of ACES, on the contrary. The system allows you to use extreme values, so with great power comes great responsibility. This is where gamut mapping would be useful. The reality is that all the technology changes super fast and it takes a lot of time to build the tools. The research is not even finished in fact : for example, LED on-set lighting is very recent.

A bit of advice from Thomas Mansencal.

In January 2020, an ACES Working Group about Gamut Mapping has been created. Stay tuned !

ACES : a practical example

Paragraph coming in May 2020. Stay tuned !

When you convert a blue primary color from sRGB to ACES, here is what happens : with the ACEScg conversion, the same chromaticity is expressed with a non-zero red component. It does not give purple. It gives the same color but expressed differently. Then you have to see what the lighting does with these new coordinates. It is by increasing the exposure on these non-zero coordinates that you see purple.
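This can be verified numerically with the commonly published linear Rec.709 -> ACEScg matrix (verify the values against your own config) :

```python
# Commonly published linear sRGB/Rec.709 -> ACEScg matrix (Bradford CAT).
SRGB_TO_ACESCG = [
    [0.613097, 0.339523, 0.047379],
    [0.070194, 0.916354, 0.013452],
    [0.020616, 0.109570, 0.869815],
]

blue = (0.0, 0.0, 1.0)  # sRGB/Rec.709 blue primary, linear
r, g, b = (sum(m * c for m, c in zip(row, blue)) for row in SRGB_TO_ACESCG)
print(r, g, b)  # ~0.047, ~0.013, ~0.870 : same chromaticity, new coordinates
```

The small red component is exactly what becomes visible (as purple) once exposure pushes these coordinates up.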


Everyone should work with ACES. As you may have noticed, I am a big fan of ACES and hopefully it will be more and more used. ACES has been developed by hundreds of industry professionals and is available for free. There is no valid reason NOT to adopt it. ACES has many advantages :

  • Compatibility through OCIO with many applications.
  • Free, with a lot of support from the acescentral community.
  • Lighting calculations in the best gamut : ACEScg.
  • Less guesswork and a quality jump with the amazing Output Device Transforms.
  • A Digital Cinema Distribution Master (DCDM) that will still be valid in many years.

In all my testing, I never had a case where ACES would not make something look better. It is such an improvement in quality : everything looks more real and behaves more correctly. We get so much closer to a photographic approach with ACES. All the hard work has been done by the Academy of Motion Picture Arts and Sciences (AMPAS). And it is up to us, CG Supervisors and Cinematographers, to spread the word !


Here are my thoughts on ACES. This is an extensive topic and some people would explain it differently. All the examples from this chapter actually use three different gamuts :

  • sRGB for texturing.
  • ACEScg for rendering.
  • ACES2065-1 for delivery.

We can now move to less technical chapters and focus on cinematography. Yay !