Chapter 1.5: Academy Color Encoding System (ACES)


The previous chapter was mainly about Color Management prior to 2014. I am now going to describe ACES, a Color Management Workflow (CMW) developed by dozens of professionals under the auspices of the Academy of Motion Picture Arts and Sciences (AMPAS).

If you want to understand the difference between a Color Management System (CMS) and a Color Management Workflow (CMW), please check the excellent proposal from Daniele Siragusano.

More than 300 movies have been made using ACES and many VFX studios such as ILM, Framestore, Double Negative and Animal Logic use it. ACES has also become a delivery standard for Netflix. The whole idea behind ACES is to set a standard that helps professionals with their color management.

After some investigation, I should be a bit more careful about the previous statements. It has been confirmed during one of the TAC meetings (at 1:02:28) that some of the movies listed were not using the ACES Output Transform.

It all comes down to the question : which requirements make a project ACES compliant ?

ACES is available through OCIO (here is the link to the ACES 1.2 config), just like the spi-anim config. ACES has also been implemented in CTL in Resolve and GLSL / HLSL in Unreal and Unity.

If you don’t feel like sitting through another technical chapter, I don’t blame you. You can skip to chapter 2.

Otherwise let’s dive in.

ACES overview

Something that really hit me when I arrived at Animal Logic in 2016 was their range of colors. The artists were working on a beautiful and very saturated movie called Lego Batman. It was my first day and I saw this shot on a monitor (I think Nick Cross lit this shot).


I really thought to myself : “Wow ! This looks good ! How did they get these crazy colors ?” The range of colors really seemed wider than in my previous studio :

  • We have seen in the previous chapter that many studios and schools render within the sRGB gamut with a linear transfer function and display in sRGB through a 1D LUT (or a simple sRGB EOTF).
  • Animal Logic (and many other studios such as ILM or MPC) render in ACEScg (which is similar to Rec. 2020) and display in P3 which is the industry standard for cinema theaters. ACES helps them to manage these different colorspaces.
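To give a rough sense of scale, we can compare the areas of these gamut triangles on the CIE xy diagram. This is only an illustration (xy area is not perceptually uniform, and the primary coordinates below are the published values, rounded), but it shows how much wider ACEScg is than sRGB :

```python
# Compare the CIE xy triangle areas of common gamuts via the shoelace formula.
# Primary coordinates are the published CIE xy values (rounded).

def shoelace_area(points):
    """Area of a triangle given as [(x, y), (x, y), (x, y)]."""
    (x1, y1), (x2, y2), (x3, y3) = points
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

PRIMARIES = {
    "sRGB/Rec.709": [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)],
    "P3":           [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)],
    "Rec.2020":     [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)],
    "ACEScg (AP1)": [(0.713, 0.293), (0.165, 0.830), (0.128, 0.044)],
}

srgb_area = shoelace_area(PRIMARIES["sRGB/Rec.709"])
for name, primaries in PRIMARIES.items():
    ratio = shoelace_area(primaries) / srgb_area
    print(f"{name}: {ratio:.2f}x the sRGB triangle area")
```

On these numbers, ACEScg covers roughly twice the xy area of sRGB, with Rec.2020 just below it and P3 in between — which matches the ranking described above.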

Why ACES ?

ACES has been developed by the Academy of Motion Picture Arts and Sciences with the help of camera manufacturers (Arri, Red, Sony…) and some VFX studios (ILM, Animal Logic…). The idea behind it is pretty interesting.

You can check the contributors and the TAC members here (in the CONTRIBUTORS tab).

When cameras were analog, things were simple. There were only a couple of formats : 35mm and 70mm. The Original Print, shot on film, was available for eternity.

But with the digital revolution, multiple cameras and formats have emerged. These proprietary systems, used to produce the Digital Cinema Distribution Master (DCDM), can become outdated quite quickly. Indeed, digital camera technology evolves pretty fast. The issue is that when these movies have to be remastered for new media, the DCDMs are not relevant anymore.

What is ACES ?

ACES is a CMW that includes a series of color spaces and the transforms that allow you to move between them. It is available for free since it is an open-source project. The reference color space developed by the Academy is called ACES2065-1 (AP0 primaries). Here are its characteristics :

  • Ultra Wide Gamut (non-physically realizable primaries)
  • Linear
  • High Dynamic Range
  • Standardised
  • RGB

With ACES2065-1 (AP0), the idea is to get a DCDM (Digital Cinema Distribution Master) that lasts for eternity. We do NOT know how movies will be watched in 50 or 100 years. ACES has been created for this specific reason : its purpose is to stand the test of time !

ACES2065-1 is also called the ACES colorspace. But I’d rather use its full name.

Terminology accuracy is critical in color management.

ACES in one picture

ACES is composed of three main processes described in the following image :

  • A. IDT is the import/conversion of the textures/images/renders to the ACEScg colorspace.
  • B. ACEScg is the rendering/working space.
  • C. RRT + ODT are the Output Transform to any monitor or video projector.

We used to say RRT (Reference Rendering Transform) and ODT (Output Device Transform). I shall refer to them as the Output Transform.

The idea behind ACES is to deal with any color transform you may need :

  • Is your texture in sRGB from Photoshop ? Or is it linear within the sRGB gamut ? ACES provides all the matrices and LUTs you need to convert one colorspace to another with the IDT (Input Device Transform).
  • Is your monitor Rec.709 or P3 ? ACES provides all the LUTs to view your renders with the appropriate Output Transform.

This is one of the important things about Color Management : you must know the colorspace (primaries, transfer function and white point) for each process (input, working space and output).

ACES tries to clarify that.

ACES color spaces

Here is a list of the five ACES color spaces :

  • ACES2065-1 is scene linear with AP0 primaries. It remains the core of ACES and is the only interchange and archival format (for DCDM).
  • ACEScg is scene linear with AP1 primaries (the smaller “working” color space for Computer Graphics).
  • ACEScc, ACEScct and ACESproxy all have AP1 primaries and their own specified logarithmic transfer functions.
| Color space | Primaries | White Point | Transfer function | Usage |
|---|---|---|---|---|
| ACES2065-1 | AP0 (non-physically realizable) | ~D60 | Linear | Interchange and archival space |
| ACEScc | AP1 (non-physically realizable) | ~D60 | Logarithmic | Working space (color grading) |
| ACEScct | AP1 (non-physically realizable) | ~D60 | Logarithmic (Cineon-like) | Working space (color grading) |
| ACEScg | AP1 (non-physically realizable) | ~D60 | Linear | Working space (rendering, compositing) |
| ACESproxy | AP1 (non-physically realizable) | ~D60 | Logarithmic | Transport space (deprecated) |

The ACES white point is not exactly D60 (many people get this wrong actually) : its chromaticity coordinates are (0.32168, 0.33767), close to but not identical to CIE D60. It was chosen to avoid any misunderstanding that ACES would only be compatible with scenes shot under CIE Daylight with a CCT of 6000K.

It’s all explained in here.

There is also an absolutely brilliant article about the different ACES color spaces if you want to read more on the topic.

Please note that the ACES2065-1 color space is not recommended for rendering. You should use ACEScg (AP1 primaries).

More explanations are provided right below.

Why ACEScg ?

What about Computer Graphics ? How can ACES benefit our renders ? Tests have been conducted by Steve Agland (Animal Logic) and Anders Langlands (Weta Digital) on rendering in ACES2065-1.

An unexpected issue occurred when rendering in the ACES2065-1 color space : its gamut is so wide that renders produced negative values, which messed with energy conservation. It is very well explained in this post. Some color peeps refer to this event as The Great Discovery.


On top of that, grading in ACES2065-1 did not feel “natural”. From ACEScentral, Nick Shaw explains :

The AP1 primaries are a compromise which code most colors likely to occur in images from real cameras using positive values. Because even the most saturated ACEScg colors are still “real”, this means that the maths of grading operations works in a way which “feels” better to colorists.

ACEScg is more artist friendly.

Therefore, another color space has been created especially for Computer Graphics : ACEScg (AP1 primaries). I will repeat in bold and with emphasis because it is CRITICAL : you should never render in ACES2065-1.

ACEScg : our working/rendering space

Why would we render in one color space and display in another ? What is the point ? Remember the Rendering space and the Display space from Chapter 1 ? We have already seen that they do NOT have to be the same. It is something that surprises a lot of CG artists but, yes, rendering within different primaries will NOT give the same result.

I’ll repeat for clarity : rendering in Linear – sRGB or ACEScg will not give the same image. Many supervisors have told me : “What is the point in rendering in a wide gamut if we do NOT have the monitors to display it ?” They are mistaken. Rendering in one color space and viewing images in another makes sense. But it is certainly not a trivial decision to make !

Lego Batman and Ninjago were rendered in linear – P3-D60, which was entirely down to the space the surfacing was done in. From Peter Rabbit onward, movies were rendered in ACEScg.

The choice of rendering primaries is definitely not a trivial decision…

No ultimate rendering colourspace

First of all, we should all agree that RGB rendering is kind of broken from first principles (compared to spectral rendering). So we want a rendering colorspace that makes it a bit less broken, and ACEScg is a good candidate. By switching from Linear – sRGB to ACEScg, you get access to Wide Gamut Rendering. How do we know which particular color space is suited to our needs ? As always, by doing some tests !

Another argument I have often been given against ACES was : We don’t care about ACES, we render in linear.

Thinking that “linear” was an infinite color space…

To perform a proper comparison between rendering color spaces, we’d need a reference called the “ground truth”. In our case, it would be an unbiased image like the ones a spectral render engine such as Mitsuba allows you to generate. Otherwise we could not compare the renders objectively !

The claim is not that BT2020 or ACEScg are the ultimate colourspaces, in fact none is, the claim is that they tend to reduce error generally a bit better compared to others. They happen to have an orientation that is the jack-of-all-trades of all the major RGB colourspaces.

Thomas Mansencal

Comparison between Spectral, Rec.709 and Rec. 2020

Some really interesting tests and research have been conducted by Anders Langlands and Thomas Mansencal. They are brilliantly explained in this post. Three different renders have been done :

  • Rec.709, the smallest gamut of all.
  • Spectral, the ground truth using wavelengths for its calculation.
  • Rec. 2020, which is similar to ACEScg.

Then, you subtract them from one another. The darker it gets, the closer it is to spectral ! Just brilliant ! If you have a look at the bottom row, the average value is overall darker, which means that Rec. 2020 gets us closer to spectral rendering.

What is the difference between ACEScg and Rec. 2020 ? What is the advantage of having the green primary outside the CIE diagram in ACEScg ? Mostly to encompass P3 : ACEScg is a gamut close to BT.2020 that also encompasses P3, which requires non-physically realizable primaries.

Thanks Thomas for the explanation !

ACEScg explanation

The technical reason behind this difference is given in a series of posts :

From Thomas Mansencal : On a strictly technical point of view, rendering engines are indeed colourspaces agnostic. They just chew through whatever data you throw at them without making any consideration of the colorspace the data is stored into. However the choice of colorspace and its primaries is critical to achieve a faithful rendering. […]

If you are using sRGB textures, you will be rendering in this particular gamut (by default). Only the use of an Input Device Transform (IDT) will allow you to render in ACEScg (or a conversion beforehand).

From Thomas Mansencal : What most CG artists are referring to as linear is currently sRGB / BT.709 / Rec. 709 colourspace with a linear transfer function. ACEScg is intrinsically linear which makes it perfect for rendering. […] some RGB colorspaces have gamuts that are better suited for CG rendering and will get results that overall will be closer to a ground truth full spectral rendering. ACEScg / BT.2020 have been shown to produce more faithful results in that regard.

And if this was not clear enough :

Yes, the basis vectors are different and BT.2020 / ACEScg are producing better results, likely because the primaries are sharper along the fact that the basis vectors are rotated in way that reduces errors. A few people (I’m one of them) have written about that a few years ago about it. […] Each RGB colorspace has different basis vectors as a result of which mathematical operations such as multiplication, division and power are not equivalent. […] Generally, you should avoid rendering with ACES2065-1 because it is far from optimal for computer graphics rendering, […].

A closer look at virtual primaries

One of my favorite posts on ACESCentral contains some interesting information :

The reason for unreal primaries is that they are necessary in order to code all colours within the CIE “horseshoe” using only positive values. The AP0 primaries form the smallest possible triangle which contains all the real colours. This has the knock-on effect that a significant proportion of code values are “wasted” on unreal colours. […] The AP1 primaries are a compromise which code most […] colours likely to occur in images from real cameras using positive values. Because even the most saturated ACEScc/ACEScct/ACEScg colours are still real, this means that the maths of grading operations works in a way which “feels” better to colourists.

Nick Shaw

AP1 was designed to produce more reasonable ‘RGB’ grading (so that the dials move in the direction of R and G and B), to pick up critical yellows and golds along the spectral locus (to get that entire edge of the locus), and to clearly encompass Rec.2020 primaries by just a small amount. […] Getting rid of the negative blue primary location in AP0 was also a goal.

Jim Houston

You can also check Scott Dyer’s answer from the same thread.

Input Device Transform (IDT)

The IDT is the process of importing textures/images into your working/rendering space, which will most likely be ACEScg.

Cornell box example

Here are two renders of a Cornell Box in Guerilla Render. I used the same sRGB textures for both renders, with the following values :

  1. Green sRGB primary at (0, 1, 0)
  2. Red sRGB primary at (1, 0, 0)
  3. Middle gray at (0.18, 0.18, 0.18)

The only difference between these Cornell boxes is the rendering space :

  • In the first one, the rendering space is what many software packages call “linear”. Which actually means the sRGB gamut with a linear transfer function.
  • In the second one, the rendering space is ACEScg. I had to set the IDT correctly to take full advantage of the wide gamut.

The main thing to take into account about this test is that I used textures. If you use colors directly in your software, you may not get the same result. It would also depend on how the color_picking role has been implemented. So use the following values carefully :

| | sRGB primaries | sRGB primaries converted to ACEScg |
|---|---|---|
| Red primary | 1, 0, 0 | 0.61312, 0.07020, 0.02062 |
| Green primary | 0, 1, 0 | 0.33951, 0.91636, 0.10958 |
| Blue primary | 0, 0, 1 | 0.04737, 0.01345, 0.86980 |
| Mid gray | 0.18, 0.18, 0.18 | 0.18, 0.18, 0.18 |
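As a sanity check, this conversion is nothing more than a 3×3 matrix multiplication. Here is a minimal Python sketch : the matrix columns are the ACEScg coordinates of the sRGB primaries from the table, so the digits are rounded.

```python
# Linear sRGB -> ACEScg via a 3x3 matrix. Each column holds the ACEScg
# coordinates of one sRGB primary, taken from the table above (rounded).
SRGB_TO_ACESCG = [
    [0.61312, 0.33951, 0.04737],
    [0.07020, 0.91636, 0.01345],
    [0.02062, 0.10958, 0.86980],
]

def srgb_linear_to_acescg(rgb):
    """Multiply an [r, g, b] triplet by the conversion matrix."""
    return [sum(m * c for m, c in zip(row, rgb)) for row in SRGB_TO_ACESCG]

print(srgb_linear_to_acescg([0.0, 1.0, 0.0]))     # -> [0.33951, 0.91636, 0.10958]
print(srgb_linear_to_acescg([0.18, 0.18, 0.18]))  # mid gray stays ~0.18
```

Notice that each row of the matrix sums to ~1.0, which is why a neutral gray keeps the same values in both spaces.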

You must also be careful with your mipmap generation (tex files). If you switch your rendering space, it is safer to delete the existing tex files. Otherwise you may get some incorrect results.

Why do we get a better global illumination ?

ACES allows us to set the primaries of our scene to ACEScg and to have a closer-to-spectral GI in our render. We can do the same process in Nuke to analyze what is actually happening :

  • On the left, we have a pure green constant at (0, 1, 0).
  • We convert it from sRGB to ACEScg using an OCIOColorSpace node.
  • The same color expressed in ACEScg has some information in the red and blue channels. It is really just a conversion : ACES does not “add” anything.

The conversion does not change the color. It gives the same color (or chromaticity) but expressed differently.


Here is another way of explaining it :

  • On the left, we have a green primary in the sRGB/Rec.709 color space.
  • Using a 3×3 matrix to switch from sRGB to ACEScg, this color, with unique xy coordinates, has been converted.
  • The color is not a pure green anymore in the ACEScg color space (right image).

Because of the conversion process, some channels (red and blue in this case) are no longer zero, so light paths are less likely to be stopped by a null channel.
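A toy sketch makes this visible. This is not how a renderer actually traces light, just the per-channel energy math : raising the albedo to the n-th power simulates n diffuse bounces off the same surface. The ACEScg triplet is the converted green primary from the Cornell box table earlier in this chapter.

```python
# Toy bounce simulation: energy carried by a "pure green" surface after
# n bounces, comparing the sRGB encoding with the same chromaticity
# expressed in ACEScg.
green_srgb = [0.0, 1.0, 0.0]
green_acescg = [0.33951, 0.91636, 0.10958]

def after_bounces(albedo, n):
    # Each bounce multiplies the carried energy by the albedo, per channel.
    return [c ** n for c in albedo]

print(after_bounces(green_srgb, 3))    # red and blue stay exactly 0.0
print(after_bounces(green_acescg, 3))  # every channel keeps some energy
```

In the sRGB version, the red and blue channels are dead on the very first bounce ; in ACEScg they keep carrying (small amounts of) energy, which is part of why the indirect lighting looks richer.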

IDT overview

ACES provides all the 3D LUTs and matrices we need to process these transforms. The most common IDTs for Computer Graphics are :

  • Utility – sRGB – Texture : if your texture comes from Photoshop or the Internet. Only for 8-bit textures, like an albedo map.
  • Utility – Linear – sRGB : if your texture is linear within the sRGB primaries and you want to convert it to ACEScg.
  • Utility – Raw : if you do NOT want any transform applied to your texture, like normal maps.
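As a rough sketch of what Utility – sRGB – Texture amounts to : decode the piecewise sRGB transfer function to linear, then apply the sRGB → ACEScg matrix (the one from the Cornell box table in this chapter). The real IDT is implemented by OCIO, this is just the underlying math.

```python
# What the "Utility - sRGB - Texture" IDT boils down to: sRGB decode,
# then a 3x3 matrix to ACEScg. Matrix values are rounded.
SRGB_TO_ACESCG = [
    [0.61312, 0.33951, 0.04737],
    [0.07020, 0.91636, 0.01345],
    [0.02062, 0.10958, 0.86980],
]

def srgb_decode(c):
    """Inverse of the piecewise sRGB encoding (IEC 61966-2-1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_texture_to_acescg(rgb):
    linear = [srgb_decode(c) for c in rgb]
    return [sum(m * c for m, c in zip(row, linear)) for row in SRGB_TO_ACESCG]

# An encoded mid-gray texture value decodes to scene-linear ~0.18-0.21:
print(srgb_texture_to_acescg([0.5, 0.5, 0.5]))  # ~[0.214, 0.214, 0.214]
```

Utility – Linear – sRGB is the same thing without the decode step, and Utility – Raw skips both.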

Something that took me some time to understand is that if your rendering space is ACEScg, Utility – Raw and ACEScg are equivalent IDTs : neither applies any transform.

Some studios nowadays work in “lazy ACES” because of the lack of OCIO in Substance Painter. It means that artists actually paint textures in an sRGB gamut and convert them on the fly to ACEScg in the render engine.

To plot the gamut

Plotting the gamut of an image allows you to map its pixels against the CIE 1931 Chromaticity Diagram. This is a pretty brilliant concept and a great way to debug ! This function is available in colour-science, developed by Thomas Mansencal.

  • On the first image, we have plotted a render done in sRGB. The pixels are clearly limited by the sRGB primaries. They are compressed against the basis vectors of the gamut.
  • On the second image, we have plotted a render done in ACEScg. The pixels, especially the green ones, are not limited anymore and offer a wider coverage of the gamut.
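The per-pixel operation behind these plots is simple to sketch : convert each linear pixel to CIE XYZ, then project to xy. This uses the standard IEC sRGB → XYZ (D65) matrix, so it only applies to linear sRGB pixels ; a real plot (like the colour-science one) handles arbitrary colorspaces.

```python
# Computing the CIE xy chromaticity of a linear sRGB pixel, the per-pixel
# operation behind a gamut plot. Matrix: IEC sRGB -> XYZ (D65), rounded.
SRGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

def srgb_linear_to_xy(rgb):
    X, Y, Z = (sum(m * c for m, c in zip(row, rgb)) for row in SRGB_TO_XYZ)
    s = X + Y + Z
    return (X / s, Y / s)

# A pure red pixel lands exactly on the sRGB red primary:
print(srgb_linear_to_xy([1.0, 0.0, 0.0]))  # ~(0.640, 0.330)
# White lands on the D65 white point:
print(srgb_linear_to_xy([1.0, 1.0, 1.0]))  # ~(0.3127, 0.3290)
```

This is why an sRGB render can never plot outside the sRGB triangle : every pixel is a positive mix of those three primaries.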

There is also an app available on Windows and Mac called Color Spatioplotter if you want to plot the gamut of an image. I haven’t tried it myself but from the feedback I got, it seems to be working fine at a very affordable price.

Output Transform

The ACES Output Transform is made of two separate steps called the Reference Rendering Transform (RRT) and the Output Device Transform (ODT). This was and still is true for all Output Transforms in ACES 1.0.X. The release of ACES 1.1 introduced some new HDR Output Transforms computed in a single step, called the Single Stage Tone Scale (SSTS).

Tone Scale is the ACES terminology for what people generally call Tone Mapping.

The origin of the two-step Output Transform (RRT+ODT) can be found in this very informative document by Ed Giorgianni. The idea behind it was the following :

  • RRT : intermediate rendering to an idealized, hypothetical reference display. It is the “ACES look”, like a virtual film stock.
  • ODT : final rendering for a specific real-world display device (primaries, EOTF and white point). It also takes into account the Viewing Environment (dark, dim or normal surround) and the nits.

If you display your sRGB render directly on P3 without transformation, I would say that it is “Absence of Colour Management”.

Thomas Mansencal

Reference Rendering Transform (RRT)

In practice, the RRT + ODT process is combined for the user, but I think it is worth describing here some components of the RRT. I am particularly interested in the infamous “sweeteners” : the glow module, the red modifier and the global desaturation.

The output of the RRT is called Output Color Encoding Specification (OCES).

These “sweeteners” have generated much debate about where they belong and if they should be part of a Look Modification Transform (LMT). They also cause problems for invertibility. Here are a few quotes about their history :

[They] originally came from an aim to be “pseudo filmic” in the early days. [..] Glow came from perceived filmic look. […] Red modifier and glow are different. Glow is aesthetic.

Scott Dyer

I don’t consider [the red modifier] a “sweetener”. It’s compensating for saturation effect of RGB tone scale.[…] It is compensating for “hot” reds.

Doug Walker and Alex Forsythe

It is worth noting that at some point in the future, the whole Output Transform architecture may be modified for ACES 2.0. There is a trend to try matching the three OCIO steps : Look, Display, View.

Output Device Transform (ODT)

The ODT is the process of displaying the reference (OCES) on your monitor. The Academy recommends the use of an ODT adapted to your monitor. It should be based on your project’s needs :

  • Do you work for TV and Internet ? You should display in sRGB or Rec.709.
  • Are you working in Feature Film ? You should display in P3.
  • Do you want to output for a UHDTV ? You should display in Rec. 2020.

Rec. 2020 is clearly the future but there are no projectors that are able to cover 100% of this color space. The technology is not there yet. But in ten years maybe, it will be the new norm.

Not there yet, unless you own a Christie.

Examples and comparison of Output Transforms

Here are some examples comparing the nuke-default OCIO setup with ACES 1.1. Please note that Nuke is wrong in its OCIO config :

  • As it has been implemented in Nuke, rec709 (approximately a gamma value of 1.95) is a camera encoding OETF ! This is completely wrong for display !
  • Rec.709 (ACES) uses a BT.1886 EOTF (equivalent to a gamma of 2.4) and is defined in reference to an EOTF output display.

I have also done a test on the MacBeth chart to compare the Film (sRGB) from the spi-anim config with the ACES config.


I’ll just put it out there so that it is clear : there is no point in using a P3D65 ACES ODT if your monitor only covers sRGB. It won’t make your renders look prettier.

Your ODT should match your monitor characteristics basically.

Output Transform clarification

Many artists have been confused by Nuke’s default display transform :

  • Why do the sRGB display transform and sRGB (ACES) NOT match ?
  • Because the sRGB (ACES) Output Transform includes some tone scale !

In ACES, we call this the “rendering” step. Going from ACEScg (scene-referred) to your display is not a simple color space conversion. It is actually a “complex” (color and tonality) rendering operation.

Most artists know this process as “tone mapping”.

From ACEScentral, Nick Shaw explains :

The ACES Rec.709 Output Transform is a much more sophisticated display transform, which includes a colour space mapping from the ACEScg working space to Rec.709, and tone mapping to expand mid-tone contrast and compress the shadows and highlights. The aim of this is to produce an image on a Rec.709/BT.1886 display which is a good perceptual match to the original scene.

Output Transform overview

Some people complain about the tone mapping included in the Output Transform. Here are a few things to know :

The RRT and ODT splines and thus the ACES system tone scale (RRT+ODT) were derived through visual testing on a large test set of images […] from expert viewers. So no, the values are not arbitrary.

From Scott Dyer, ACES mentor.

Some additional explanations about the RRT/ODT process from this post :

  • The ACES RRT was designed for Theatrical Exhibition where Viewing Conditions are Dark. Content for cinema tends to be authored with more contrast to compensate for the dark surround.
  • Even though there is a surround compensation process (Dark <–> Dim), the values to drive that process were subjectively obtained and it might not be enough for all the cases.
  • The RRT + ODTs are also the results of viewing images by an expert viewer, so there is undeniably some subjectivity built-in.
  • Some companies such as Epic Games are pre-exposing the Scene-Referred Values with a 1.45 gain (which would match roughly an exposure increase of 0.55 in your lights).
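A quick check of the numbers in that last bullet point : a flat gain applied to scene-linear values corresponds to a change of exposure of log2(gain) stops.

```python
import math

# A flat gain on scene-linear values is log2(gain) stops of exposure.
gain = 1.45
stops = math.log2(gain)
print(f"A gain of {gain} is about {stops:.2f} stops")
```

log2(1.45) ≈ 0.54 stops, which is indeed roughly the 0.55 stops quoted.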

Another description of the ODT tone scale can be found here.


The ACES Output Transform includes a shaper, which is a logarithmic color space, to optimize the data. It is a transparent process, nothing more than an intermediate state for data, with purely technical goals.

What exactly happens when we display in sRGB (ACES) with an OCIO config ? To go to sRGB (ACES), OCIO first transforms the color to ACES2065-1 (AP0 primaries). Then from AP0 we go to a colour space called the Shaper thanks to a 1D LUT, and finally to sRGB thanks to a 3D LUT.

From the ACES 1.2 OCIO Config :

  - !<ColorSpace>
    name: Output - sRGB
    family: Output
    equalitygroup: ""
    bitdepth: 32f
    description: |
      ACES 1.0 Output - sRGB Output Transform
      ACES Transform ID : urn:ampas:aces:transformId:v1.5:ODT.Academy.RGBmonitor_100nits_dim.a1.0.3
    isdata: false
    allocation: uniform
    allocationvars: [0, 1]
    to_reference: !<GroupTransform>
        - !<FileTransform> {src: InvRRT.sRGB.Log2_48_nits_Shaper.spi3d, interpolation: tetrahedral}
        - !<FileTransform> {src: Log2_48_nits_Shaper_to_linear.spi1d, interpolation: linear}
    from_reference: !<GroupTransform>
        - !<FileTransform> {src: Log2_48_nits_Shaper_to_linear.spi1d, interpolation: linear, direction: inverse}
        - !<FileTransform> {src: Log2_48_nits_Shaper.RRT.sRGB.spi3d, interpolation: tetrahedral}

A shaper is needed because a 3D LUT (even a 64^3 one) is not suitable for direct application to linear data like ACEScg : most of its entries would be wasted.
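To make the shaper less abstract, here is a minimal sketch of a Log2 shaper’s forward direction. The parameters (mid gray at 0.18, a range of −6.5 to +6.5 stops) are those the 48-nits shaper of the ACES OCIO config is built around, but treat this as an illustration rather than the exact transform :

```python
import math

# Sketch of a Log2 shaper: map scene-linear values into [0, 1] so that a
# 3D LUT samples the dynamic range evenly in stops instead of linearly.
MID_GRAY = 0.18
MIN_EXPOSURE = -6.5  # stops below mid gray
MAX_EXPOSURE = 6.5   # stops above mid gray

def log2_shaper(linear):
    stops = math.log2(max(linear, 1e-10) / MID_GRAY)
    t = (stops - MIN_EXPOSURE) / (MAX_EXPOSURE - MIN_EXPOSURE)
    return min(max(t, 0.0), 1.0)

print(log2_shaper(0.18))             # mid gray lands exactly at 0.5
print(log2_shaper(0.18 * 2 ** 6.5))  # top of the range -> ~1.0
```

Each doubling of the linear value moves the shaped value by the same step, which is exactly what lets the 3D LUT spend its entries sensibly across 13 stops.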


Once you’re happy with your renders and pretty much done with the project, you are ready to deliver your frames. In animation studios, we generally deliver linear exr files to a digital laboratory, such as this one.

With ACES, it is pretty much the same concept with a couple of important notes. For final delivery to the Digital Intermediate, you will have to deliver ACES compliant EXR files.

This is the standard set by the Academy to exchange files between facilities. This is really important. Your render output will be ACEScg (AP1) but your compositing output has to be ACES2065-1 (AP0) with the correct metadata.

Rendering in ACEScg uses color primaries that are closer to actual devices – a little bigger than Rec2020, but AP0 is the target for File Outputs (archive and interchange). When working completely within your own facility without sharing of files, ACEScg is sometimes used for convenience but using the format in the name of the file to distinguish it from the ACES standard (putting ACEScg in EXR with the primaries specified – a device or AP1 – means it is not an ACES file). The ACES flag in a header should not be set.

Critical explanation by Jim Houston

The interchange and archival files should be written as OpenEXRs conforming to SMPTE 2065-4. In Nuke, you should set the Write node’s colorspace to ACES2065-1 and check the box write ACES compliant EXR to get the correct metadata.

From ACEScentral, Doug Walker explains :

The SMPTE ST 2065-4 spec “ACES Image Container File Layout” currently requires uncompressed files. Also, there are other restrictions such as only 16-bit float and only certain channel layouts (RGB, RGBA, and stereo RGB/RGBA). These limitations do make sense for use-cases that involve archiving or real-time playback.

ACES implementation


Most color pipelines nowadays are set up through OCIO, which is great because of its compatibility with many applications : Maya, Guerilla, Nuke, Mari, Rv… But there is one downside to using OCIOv1 and LUTs : you lose precision. It is really well explained in this post and also here.

The reason the CTF works better is because it is more of an “exact math” implementation of the ACES CTL code rather than baking down into a LUT-based representation.

From Doug Walker, ACES mentor.

Discrete and Continuous transforms

What is happening here ? The answer was given to me by my colleague, Christophe Verspieren. He showed me the distinction between continuous and discrete transforms, which comes into play with the LUTs baked by OCIO. It is actually pretty easy to understand. Check this image from this site :


When we go from Scene-Referred to Display-Referred, we have to cover a high dynamic range (ACES deals with something like 15 stops). The discrete transform actually covers huge zones.

We do not split the dynamic range into equal zones : we prefer to sample the most common values in detail, at the expense of the highlights. Therefore the display tone mapping (ODT) makes these false chromaticities really visible by increasing the exposure.

Also, even if the transformation is mathematically defined in OCIO, the fact that it runs on the GPU rather than on the CPU leads to a discretization of the formula : the graphics card actually creates a LUT !

Really these issues are endless…

Color interpolation gaps

Furthermore, these gaps (from the discretization) are filled linearly, which is not necessarily the most natural way. Even if we change gamut, we still work in RGB, and linear interpolations are done on a straight line going from A to B. Sometimes it would be better to manage color interpolation in the Lab colorspace.

Not every mathematical formula is available in OCIO 1.1.1 ; only OCIO 2.0 will allow the calculations required by ACES to be represented correctly.

To sum it up :

  • Between each slice, we have linear interpolations.
  • In very large areas, these interpolations lack accuracy.
  • This results in chromaticity errors in the highlights due to the discretization.
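A toy illustration of that discretization error : sample a tone curve at a few points (a tiny “1D LUT”), rebuild it by linear interpolation, and compare with the exact curve. The curve here is a simple Reinhard-style rolloff standing in for a real tone scale, and the five-entry LUT is absurdly coarse on purpose. The error concentrates wherever the curve bends the most relative to the sample spacing, which is precisely the problem the log shaper and bigger LUTs try to mitigate.

```python
# Coarse 1D LUT + linear interpolation vs the exact tone curve.

def tone(x):
    # Stand-in tone curve (Reinhard-style rolloff), NOT the ACES tone scale.
    return x / (1.0 + x)

def lerp_lut(xs, ys, x):
    """Piecewise-linear lookup, like a 1D LUT with linear interpolation."""
    if x <= xs[0]:
        return ys[0]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + t * (ys[i] - ys[i - 1])
    return ys[-1]

lut_x = [0.0, 4.0, 8.0, 12.0, 16.0]  # very coarse, linearly spaced samples
lut_y = [tone(x) for x in lut_x]

for x in (0.5, 2.0, 10.0):
    approx = lerp_lut(lut_x, lut_y, x)
    err = abs(approx - tone(x))
    print(f"x={x}: exact={tone(x):.4f} lut={approx:.4f} error={err:.4f}")
```

With linearly spaced samples, the error piles up in the toe and midtones where the curve bends hardest ; spacing the samples in stops (the shaper) trades that for smaller, residual errors at the extremes.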

How does this translate visually ? Let’s have a look at some renders with extreme saturation to compare different solutions.


Because of Arnold default settings, the base color weight is at 0.8 in these renders.

CTL nodes in Nuke

How important is this ? Should we stick to OCIO or look into this CTL implementation ? I guess the best answer I have read on this topic comes (once again !) from Alex Fry :

With normally exposed images, especially from live action cameras, you’re unlikely to see any values that render differently to the OCIO config, but with CG images containing extreme intensity and saturation you will see more pleasing roll off and desaturation.

December 2016 and Alex Fry had already understood so much stuff…

So it looks like CTL would be worth using especially if you are working on saturated animated feature films ! Alex Fry was kind enough to share with us this Pure Nuke ACES RRT & ODT. Jed Smith has also done a Nuke implementation of the ACES Output Transforms.

These Nuke nodes have a 100% match to a pure CTL implementation but are way faster. Please note that these nodes work with ACES2065-1 footage. Since we render in ACEScg, you will need to convert your footage with an OCIOColorSpace node before plugging in these nodes.

You can also check different examples from Alex Fry’s presentation. I don’t know if these images were generated in CTL or OCIO though. Or even using FilmLight’s Baselight… With OCIOv2, the ACES OCIO config should not present any discrepancies anymore (probably available for Q3 2021).

The release notes for the ACES OCIO configs are available here.

ACES in render engines

Most render engines have integrated OCIO, which gives us access to ACES. Autodesk has even come up with a CTL integration of ACES in Maya. But as surprising as it may sound, there are different levels of integration for OCIO/ACES. And in most render engines, calculations based on light spectra are still done with sRGB/Rec.709 primaries, such as :

  • Kelvin temperatures (for lights, black-body radiation and camera white balance).
  • Physical sun & sky (or Skylight).

For example, in most render engines, the Skylight is a spectral representation converted to sRGB/Rec.709 primaries. I am quoting here the example of V-Ray :

V-Ray uses sRGB primaries by default, but this is really only relevant if you use any V-Ray features that deal with spectra – like light temperature, camera white balance as temperature, physical sun and sky. […] Normally, V-Ray treats all colors as triplets of floating point values; for the most part V-Ray doesn’t really care what the three numbers actually mean. However some calculations in V-Ray are based on light spectra; the conversion of these spectra to floating-point color triplets assumes that the three numbers mean something specific – a color in some predefined internal renderer color space. This means that when converting from Kelvin temperature to RGB colors, V-Ray must know what that internal renderer color space is.

From this post in 2016.

I am pretty sure that developers will soon catch up on these points. In the meantime, you may simply use a 3×3 matrix transform from sRGB to ACEScg to convert. But it is true that, unfortunately, this final step of integration is missing in most DCC software.
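That 3×3 workaround can be sketched in a few lines. The matrix below is the commonly published linear sRGB/Rec.709 to ACEScg matrix (Bradford chromatic adaptation from D65 to D60), rounded to four decimals for illustration ; double-check it against your own pipeline before relying on it :

```python
# Linear sRGB/Rec.709 -> ACEScg (AP1), Bradford adaptation D65 -> D60.
# Rounded values, shown for illustration only.
SRGB_TO_ACESCG = (
    (0.6131, 0.3395, 0.0474),
    (0.0702, 0.9164, 0.0134),
    (0.0206, 0.1096, 0.8698),
)

def srgb_lin_to_acescg(rgb):
    """Convert one linear sRGB triplet to ACEScg (a plain 3x3 matrix multiply)."""
    return tuple(sum(m * c for m, c in zip(row, rgb)) for row in SRGB_TO_ACESCG)

# A neutral value stays neutral : each matrix row sums to ~1.0.
white = srgb_lin_to_acescg((1.0, 1.0, 1.0))
```

Note that this only handles the gamut change of linear values ; display-referred inputs must be linearized first.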

Modifying the ACES OCIO config

I thought for quite some time that the ACES OCIO config should be used as is. Not at all ! There is a thread on ACESCentral that explains it really well :

It’s very common practice to make a custom config by editing the standard ACES config in a text editor. […] Not only that but recommended ! One of the goals of the ACES config has always been to be a starting point for people to tailor it to their needs.

Nick Shaw and Thomas Mansencal.

The simplest approach is to edit the “config.ocio” file in a text editor, such as Notepad. You should back up the original config file, then make simple edits based on your needs and requirements. Here are my personal preferences :

| Role | ACES 1.2 OCIO Config | Custom ACES OCIO Config |
| --- | --- | --- |
| color_picking | Output – sRGB | lin_srgb |
| color_timing | ACES – ACEScc | acescct |
| compositing_linear | ACES – ACEScg | acescg |
| compositing_log | Input – ADX – ADX10 | acescct |
| data | Utility – Raw | raw |
| default | ACES – ACES2065-1 | lin_ap0 |
| matte_paint | Utility – sRGB – Texture | acescct |
| reference | Utility – Raw | raw |
| rendering | ACES – ACEScg | acescg |
| scene_linear | ACES – ACEScg | acescg |
| texture_paint | ACES – ACEScc | lin_srgb |
| active views | [sRGB, DCDM, DCDM P3D60 Limited, DCDM P3D65 Limited, P3-D60, P3-D65 ST2084 1000 nits, P3-D65 ST2084 2000 nits, P3-D65 ST2084 4000 nits, P3-DCI D60 simulation, P3-DCI D65 simulation, P3D65, P3D65 D60 simulation, P3D65 Rec.709 Limited, P3D65 ST2084 108 nits, Rec.2020, Rec.2020 P3D65 Limited, Rec.2020 Rec.709 Limited, Rec.2020 HLG 1000 nits, Rec.2020 ST2084 1000 nits, Rec.2020 ST2084 2000 nits, Rec.2020 ST2084 4000 nits, Rec.709, Rec.709 D60 sim., sRGB D60 sim., Raw, Log] | [sRGB, Raw, Log] |

Reasons for modifying the ACES OCIO config

Here are a few explanations on my choices :

  • I use Aliases instead of the ColorSpaces’ full names. I find them shorter and quite useful.
  • The color_picking role is limited to the sRGB gamut, in order to avoid any gamut clipping from the Output Transform. This feature may highly depend on your DCC’s implementation though.
  • The color_timing and compositing_log roles both use acescct as it gave me good results in several cases (such as saturation or sharpen).
  • The matte_paint role is in acescct due to a Photoshop workaround.
  • The texture_paint role is in lin_srgb since all my inputs/textures have a linear transfer function (by convention).
  • I have also limited the number of active views to shorten the drop-down menus (like Nuke’s viewer).
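For reference, here is roughly what the roles section of such a customized “config.ocio” could look like. This is only a sketch based on the table above, assuming your OCIO version resolves aliases when they are used in roles :

```yaml
# Excerpt of a customized ACES 1.2 "config.ocio" (sketch, roles section only).
roles:
  color_picking: lin_srgb
  color_timing: acescct
  compositing_linear: acescg
  compositing_log: acescct
  data: raw
  default: lin_ap0
  matte_paint: acescct
  reference: raw
  rendering: acescg
  scene_linear: acescg
  texture_paint: lin_srgb

# Shorten the drop-down menus to the views you actually use.
active_views: [sRGB, Raw, Log]
```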

Inverted ODT Workflow

Preserving Logos and Graphics

This topic has been addressed many many many many many many many times on the ACEScentral forum. How do we preserve the look of an image from the internet in an ACES workflow ? The ODT can be used as an IDT to do so. This process is simply called an inverted LUT.

Importing an image using the RRT/ODT is just a way to tell ACES : this image, wherever it comes from, is my final result and I want to convert it back to my working space. Unfortunately the RRT/ODT is not perfectly invertible. So what happens is (almost but not quite) a transparent round trip :

  • IDT : Output – sRGB -> ACEScg
  • Working/Rendering space : ACEScg
  • ODT : ACEScg -> Output-sRGB

A friendly reminder : you should never ever use this technique to load a texture into a shader. It has been explained several times on the ACEScentral forum.

Constant color

One of the most frequent questions I have been asked is about converting assets : how should we deal with assets created in linear sRGB that need to be converted to ACEScg ?

Here is an example I had to face recently. Let’s say you work on a movie with a famous character dressed in red. Generally, the client will be very picky about this red value. It has to match !


This workflow works because Nuke and Guerilla color select in ACEScg.

I have written this tutorial for a constant color because I was able to check that the converted values did not break energy conservation. And this is why you should never do it for a texture.

Color key

I have used the “inverted ODT technique” for our next example. The challenge was to maintain the same colors from a boat concept painted in Photoshop into an ACEScg render. I know that fidelity to color keys and concepts is critical for many studios.


This workflow works because Nuke and Guerilla color select in ACEScg.

I totally accept the fact that this concept has some lighting information. So it can become arbitrary to pick a color in one place or another.

Hence the use of a large picking zone in Nuke.

This process has a few limitations :

  • First of all, if you work in ACES, you should only color pick in an ACES environment.
  • It struggles a bit with very saturated and extreme values, especially the yellow ones. You could possibly end up with some negative values.
  • If you are color picking an albedo value, you should be extremely careful that this value is within the PBR range.
  • It is not suitable for grading and motion graphics, especially with LDR imagery as explained in this post.

We can now move to more in-depth information about the Color Picking role. By default, the Color Picking role is set to Output – sRGB in the ACES 1.2 OCIO config to match the default Output Transform. This is why the Color Picking role is “contextually related” to the inverted ODT workflow.

Color Picking role

Different implementations

When it comes to the Color Picking role, the first thing to know is that not all software is equal regarding this feature. When developers implement OCIO in their software, they can choose to integrate things to a certain level. Let’s take Guerilla and Maya as an example :

  • The Color Picking role in Guerilla only drives the color picker hue board (for display).
  • The Color Picking role in Maya drives the whole color selection process.

It only took me six months to understand the Color Picking role in Maya. But I think I have finally cracked it. First things first : what is the Color Picking role for ? From the OCIO documentation :

color_picking – colors in a color-selection UI can be displayed in this space, while selecting colors in a different working space (e.g. scene_linear or texture_paint).

This is basically what Guerilla has implemented.

But Maya has a different use of the Color Picking role. It actually uses this role to select any RGB color in a shader or a light. In this context, Color Picking could also be called Color Selection.

RGB triplets by themselves do not really make sense. You need a context to interpret them correctly. Maya lets you choose the context, while Guerilla and Nuke do not.

This is why you have to be extra careful when it comes to Color Picking or Color Selection depending on the DCC software you use.

If your ODT is different from Output – sRGB, I suggest that you modify the color_picking role in the OCIO config file to match the ODT you are using.

Your ODT and color picking role should be the same.

Color Picking in Maya

Please be aware that in this section I will focus on Maya, since its integration of the color_picking role is quite unique.

When it comes to choosing a Color Picking role for Maya, I have seen three different philosophies :

| Color Picking role | Pros | Cons |
| --- | --- | --- |
| Output – sRGB | Color selection is display-referred. It is “artist-friendly”. | You do not know the exact value used for rendering (unless you constantly check the Channel Editor). This could break the PBR of your scene. |
| Utility – Linear – sRGB | “Legacy” mode. Artists may select colors they are used to. | You do not take advantage of the wide gamut in the color selection. |
| ACES – ACEScg | You know exactly which value is used for rendering and can reach very saturated values (like a laser). | You have access to crazy saturated values, unsuitable for albedo, that may eventually break energy conservation (and clip !). |

If you use (1, 1, 1) with a Color Picking role set to Output – sRGB in Maya, you actually use a value of (18.91, 18.91, 18.91) in your render. And that is very wrong if it is set in the Base Color for example !


It is true that when you use “Output – sRGB”, you are never really sure of what color is being used during rendering. And this is why some studios have set their Color Picking role to ACEScg to know what values are used by the render engine.

I cannot honestly say that one system is better than the other. You just have to be aware of what you are doing. If you are interested in this specific topic, I would suggest reading these three threads on ACESCentral.

Color Picking in ACEScg

If you use ACEScg as a Color Picking role, you should be aware of this : there is no non-emissive surface that has such saturated colors. Only lasers (an emissive source) can reach BT.2020 primaries.

This particular example really made things clearer for me. When you study color science, you may sometimes get lost in abstract stuff. Knowing this made all of these concepts more grounded and more real in a way.

If you want to Color Select in ACEScg, you just need to edit the OCIO Config and modify the color_picking role :

  color_picking: ACES - ACEScg

If you are working in a realistic context, all of this concerns you. And if you are working on a cartoon… Well, it concerns you as well ! Some producers out there just like the most outrageous saturation. But most cartoon movies want to be believable in their look. So my advice is to do your look development in a realistic way ; you will still be able to push saturation with a grade afterwards.

All of this only stands for a PBR cartoon movie of course.

ACES limitations

ACES Retrospective and Enhancements

In March 2017, a study listed some possible improvements for ACES : ACES Retrospective and Enhancements. It is an interesting document that has led to several changes in the ACES organization. Here is a link to the official response from the ACES Leadership.

A list of 48 points to improve has also been published on the forum and the creation of several Virtual Working Groups has already brought some solutions to the table. Do not hesitate to join the process !

This interesting article also describes ACES’ issues and a proposal to solve them.

Hue skews and Gamut Clipping

The two biggest issues I have encountered are called Hue skews and Gamut Clipping. Some image makers believe that the audience has gotten used to them and is not bothered. Some find them truly horrific. I’ll let you decide for yourselves.


In these slides, I described the issue as “posterization”. It has been discussed on ACESCentral and described as “Gamut Clipping”.

There are different reasons for this kind of issue. They are pretty technical and beyond the scope of this chapter but I have listed them here :

  • A 3×3 matrix can only model linear transformations, which may induce the Abney effect because they follow straight lines (just like brute force gamut clips).
  • Discrete per-channel lookups (also called RGB tone mapping) skew the intention. Any aesthetic transfer function that asymptotes at 1.0 suffers this.
  • The aesthetic transfer function ends up collapsing a boatload of values into the same value, hence Gamut Clipping. The non-physically realizable primaries of ACEScg may also be responsible.
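The per-channel skew is easy to reproduce with a toy tonescale (a simple Reinhard curve here, purely for illustration ; the actual RRT tonescale is different) :

```python
def reinhard(x):
    """Toy tonescale that asymptotes at 1.0 (a stand-in for a real one)."""
    return x / (1.0 + x)

# A bright orange in scene-linear values.
orange = (8.0, 4.0, 0.5)

# Applying the curve per channel skews the ratios between channels.
mapped = tuple(reinhard(c) for c in orange)

ratio_before = orange[0] / orange[1]   # 2.0
ratio_after = mapped[0] / mapped[1]    # ~1.11 : the hue drifts toward yellow
```

All bright values also collapse toward 1.0, which is the “boatload of values into the same value” problem described above.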

Most of these notions are eventually related to what we call gamut mapping. I thought for a very long time that the ODT would remap the gamut in a smart way. Unfortunately it is not the case : it just does a 3×3 matrix transform and clamps !


Indeed a P3 ODT brings the value back into its gamut through the clamp, which causes some Gamut Clipping. This is one of the issues we are facing with very saturated values.

// Handle out-of-gamut values
// Clip values < 0 or > 1 (i.e. projecting outside the display primaries)
linearCV = clamp_f3( linearCV, 0., 1.);

All ODTs clamp to the target gamut so it is impossible to have anything outside the gamut. All values are assumed to be between 0 and 1 after this process, and this is the penultimate step before the transfer function (which will not change this result).

A word about 3×3 matrices

Thomas Mansencal was kind enough to share some knowledge about matrices and their use in ACES. Matrices are currently used in more than 50 “places” such as IDTs, the BlueLightArtifactFix LMT, the RRT and ODTs. They are mostly used in definitions for colorspace changes, chromatic adaptation and saturation adjustment.

The advantage is that they model a linear transformation, which is very fast to apply, very stable numerically, invertible (most of the time), easy to implement and does not suffer from exposure variance. This advantage is also their curse because they can only model linear transformations.

An online app for RGB Colourspace Transformation Matrix.

Handling a cube

To better understand how a 3×3 matrix works, Thomas used a comparison with a cube. I like simple examples !

Imagine that you have two cubes with different rotations and scales. A 3×3 matrix could make one cube fit the other perfectly. Now imagine that you have a cube and a sphere and you want to fit them together. The 3×3 matrix would get you to the point where the sphere and the cube share the same space, but they would not have the same shape. This is where you basically need a non-linear transformation, one that does more than rotating and scaling your space : you will need to locally distort it ! A 3×3 matrix is basically putting two large handles around your space and distorting it globally. Note that a 4×4 matrix would also allow you to translate ; by convention, a 3×3 matrix commonly only scales and rotates (in 3D spaces).

More information about matrices can be found here.

From what I understood, matrices are ideal for colorspace conversions, such as the ones used in the IDTs and Utilities, but are less than ideal when it comes to the Display Transform. Gamut Mapping would be a more suitable process in this case.

Gamut mapping

What is gamut mapping ? A proper Output Transform (also called Display Rendering Transform) should be composed of two main elements :

  • Tone mapping (or intensity/luminance mapping) to compress an infinite range (HDR) onto a limited range (SDR).
  • Gamut mapping to compress a wide gamut (ACEScg – scene) to a smaller gamut (P3 – display) by maintaining the intention of the scene as best as possible.

Jed Smith actually does a much more complete description of which “modules” we might need for an Output Transform.
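As a toy illustration of such a module, a chromaticity-preserving tonescale can be sketched by applying the curve to a single norm (the max channel here, just one possible choice among many) and preserving the RGB ratios. This is only a sketch, not any actual Output Transform :

```python
def tonescale(x):
    """Toy tonescale (Reinhard), purely for illustration."""
    return x / (1.0 + x)

def tonemap_hue_preserving(rgb):
    """Apply the tonescale to the max channel and rescale the whole triplet,
    so the channel ratios (and therefore the hue) are preserved."""
    peak = max(rgb)
    if peak <= 0.0:
        return tuple(rgb)
    scale = tonescale(peak) / peak
    return tuple(c * scale for c in rgb)

# The R:G ratio of this orange stays exactly 2:1 after tone mapping,
# unlike with a per-channel application of the same curve.
mapped = tonemap_hue_preserving((8.0, 4.0, 0.5))
```

The trade-off is that such a transform desaturates less at high intensities, which is why real Output Transforms combine several modules rather than a single norm-based curve.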


I have used this experimental OCIO config to compare different Output Transforms.

This process is actually super complex and there have been many attempts these last few years to solve this riddle : AMD Fidelity FX, AMD Variable Dynamic Range, AMD FreeSync HDR Gamut Mapping, Frostbite… with more or less success.

Apart from ICC, there are not really any systems that do [gamut mapping]. It is the responsibility of the colorist to manage this kind of problem by desaturating the red a bit. But it is not necessarily a limitation of ACES, on the contrary. The system allows you to use extreme values, so with great power comes great responsibility. This is where gamut mapping would be useful. The reality is that all the technology changes super fast and it takes a lot of time to build the tools. The research is not even finished in fact : for example, LED on-set lighting is very recent.

A bit of advice from Thomas Mansencal.

One issue that has often been noticed is what we call the Blue Light Artifact. It is very well described in this post from ACEScentral. A temporary fix has also been provided until a more long-term solution is found, such as the Gamut Compress algorithm by Jed Smith.
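To give an intuition of distance-based gamut compression, here is a heavily simplified sketch. The thresholds are illustrative only, and this is not the actual algorithm or its parameters :

```python
def compress_distance(d, threshold=0.8, limit=1.3):
    """Reinhard-style rolloff : distances below the threshold are untouched,
    larger distances are smoothly compressed toward the limit."""
    if d <= threshold:
        return d
    span = limit - threshold
    return threshold + (d - threshold) / (1.0 + (d - threshold) / span)

def gamut_compress(rgb):
    """Pull out-of-gamut (negative) components toward the gamut boundary,
    using each channel's distance from the achromatic axis (max channel)."""
    ach = max(rgb)
    if ach <= 0.0:
        return tuple(rgb)
    compressed = []
    for c in rgb:
        d = (ach - c) / ach   # 0 on the achromatic axis, > 1 outside the gamut
        compressed.append(ach - compress_distance(d) * ach)
    return tuple(compressed)

# A saturated value with a negative channel (outside the target gamut) :
result = gamut_compress((1.0, -0.2, 0.1))
```

The key property is that in-gamut values close to the achromatic axis pass through untouched, while extreme values are brought smoothly toward the boundary instead of being hard clipped.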


ACES is available for free and provides the following benefits :

  • Compatibility with many software packages through OCIO.
  • A free license and a lot of support from the ACEScentral community.
  • Lighting calculations in a wide gamut : ACEScg.
  • Less guesswork thanks to the Output Transforms.
  • A Digital Cinema Distribution Master (DCDM) that will still be valid in many years.

However, several aspects should be addressed for ACES 2.0 to make it more robust and reliable :

  • Hue Skews due to per-channel (RGB) lookup.
  • Gamut clipping (or posterization) due to lack of signal compression (aka gamut mapping).
  • The current Output Transforms are not neutral/chromaticity-linear/hue-preserving (light mixtures in the working space are not respected).
  • Lack of predictability between the SDR and HDR Output Transforms.


To sum up :

  • Render in ACEScg to get the best lighting calculation possible, even if our monitors are not capable of displaying it.
  • Display using a Display Transform that suits your monitor and project.
  • Use a monitor that covers your needs (which should be ideally 100% coverage of P3 for feature film).

We can now move to less technical chapters and focus on cinematography. Yay !


Articles and blogs