Chapter 1.5: Academy Color Encoding System (ACES)


The careful reader will have noticed that over the two years of existence of my website, my analysis of ACES has changed quite a bit, going from “ACES is awesome” to “ACES has issues and limitations“. Don’t get me wrong : the idea of ACES (an actual framework to help professionals with color management) is great and ACES has been my entry point to color management. Over the past two years (since September 2018, my first post), I have learned so much and met amazing color-obsessed people, partly thanks to ACES.

But in reality, and this has been stated at meeting #5 (of the Output Transforms VWG) : “The two big successes of ACES are the IDT concept, and the understanding that color management is needed.” The Output Transforms (the module we use to display our renders) are ACES’ weakness, which is an issue since they are probably the most critical part of the “Image Formation Chain“.

The biggest point overall is that many image makers who are uncomfortable with Color Management (because let’s face it, it is a scary and complex topic) have completely given up on learning and actually trusting their skills or eyes. There is like a psychological effect : “if it has been done by the Academy and used on the Lego movies, then it MUST be good. It MUST be.” You don’t believe me ? Check this out :


Why on earth would a blue rabbit turn magenta ? Is there something weird going on here ? Can we do better than this ? These two images come from the Redshift and Unreal documentation, two respectable render engines. And this adds to the fashion trend (in which my website participated) : “if ACES is implemented everywhere, it HAS TO be perfect. It HAS TO.” So let’s see what is going on here…


The previous chapter was mainly about Color Management prior to 2014. I am now going to describe ACES, a Color Management Workflow (CMW) developed by dozens of professionals under the auspices of the Academy of Motion Picture Arts and Sciences (AMPAS).

If you want to check the difference between a Color Management System (CMS) and a Color Management Workflow (CMW), please check the explanation and proposal by Daniele Siragusano.

The whole idea behind ACES is to set a standard to help professionals with their color management. Many VFX studios such as ILM, Framestore, Double Negative and Animal Logic use it to exchange files, and officially more than 300 movies have been made using ACES.

After some investigation, I would be careful about the previous statement. It has been confirmed during one of the TAC meetings (at 1:02:28) that some of the movies listed were not using the ACES Output Transform.

It all comes down to the question : which requirements make a project ACES compliant ?

ACES is available for free through OCIO (here is the link to the ACES 1.2 config), just like the spi-anim config. ACES has also been implemented in CTL in Resolve and GLSL / HLSL in Unreal and Unity.

If you don’t feel like going through another technical chapter, I don’t blame you. You can skip to chapter 2.

Otherwise let’s dive in.

ACES overview

Something that really hit me when I arrived at Animal Logic in 2016 was their range of colors. The artists were working on a beautiful and very saturated movie called Lego Batman. It was my first day and I saw this shot on a monitor (I think Nick Cross lit this shot).


I really thought to myself : “Wow ! This looks good ! How did they get these crazy colors ?” The range of colors really seemed wider than in my previous studio :

  • We have seen in the previous chapter that many studios and schools render within the sRGB gamut with a linear transfer function and display in sRGB through a 1D LUT (or a simple sRGB OETF).
  • Animal Logic uses P3 monitors which is the industry standard for cinema theaters. As we have seen in the previous chapter, it gives a wider range of colours to display. The amazing art direction by Grant Freckelton also helped to achieve this amazing result.

Why ACES ?

ACES has been developed by the Academy of Motion Picture Arts and Sciences with the help of camera manufacturers (Arri, Red, Sony…) and some VFX studios (ILM, Animal Logic…).

You can check the contributors and the TAC members here (in the CONTRIBUTORS tab).

When cameras were analog, things were simple. There were only a couple of formats : 35mm and 70mm. The Original Print, shot on film, was available for eternity.

But with the digital revolution, multiple cameras and formats have emerged. These proprietary systems, used for Digital Source Master (DSM), can be outdated quite quickly or even worse, not sufficiently documented.

The issue is that when these movies have to be remastered for new media, the DSMs are not relevant anymore and the original creative intent is lost. So the original idea behind ACES was to encode these archives in a “standard” format to make them less ambiguous. The first name of ACES was indeed the “Image Interchange Framework” (IIF).

BUT it has been explained quite clearly in this post that :

The ST2065-1 file format is far from unambiguous. […] If I give you a ST2065-1 file you don’t know if it was made with the ut33, 0.1.1, 0.2.1, 0.71, 1.03 or 1.1 version of ACES. Which all will produce quite a different version of the movie. […] Is it a bad thing that we have an ambiguous archive file format? I don’t think so. It is good to have ST2065 and nail down a few parameters for the encoding. It is a better DSM (Digital Source Master), that’s it and it is great.

So it looks like the original purpose of ACES has not been achieved…

What is ACES ?

ACES is a CMW that includes a series of color spaces and the transforms that allow you to move between them. The reference color space developed by the Academy is called ACES2065-1 (AP0 primaries). Here are its characteristics :

  • Ultra Wide Gamut (non-physically realizable primaries)
  • Linear
  • High Dynamic Range
  • Standardised
  • RGB

ACES2065-1 is also called the ACES colorspace. But I’d rather use its full name.

Terminology accuracy is critical in color management.

ACES in one picture

ACES is composed of three main processes described in the following image :

  • A. IDT is the import/conversion of the textures/images/renders to the ACEScg colorspace.
  • B. ACEScg is the rendering/working space.
  • C. RRT + ODT are the Output Transform to the monitor or video projector.

We used to say RRT (Reference Rendering Transform) and ODT (Output Device Transform). I shall refer to them as the Output Transform.

RRT and ODT were merged for the HDR Output Transforms in ACES 1.1.

The idea behind ACES is to deal with any color transform you may need :

  • Is your texture in sRGB from the Internet ? Or is it linear within the sRGB gamut ? ACES provides all the matrices and LUTs to convert from one colorspace to another with the IDT (Input Device Transform).
  • Is your monitor Rec.709 or P3 ? ACES provides all the LUTs to view your renders with the appropriate Output Transform.

This is one of the most important things about Color Management : you must know the colorspace (primaries, transfer function and white point) for each process (input, working space and output).

ACES tries to clarify that.

ACES color spaces

Here is a list of the five ACES color spaces :

  • ACES 2065-1 is scene linear with AP0 primaries. It remains the core of ACES and is the only interchange and archival format (for DCDM).
  • ACEScg is scene linear with AP1 primaries (the smaller “working” color space for Computer Graphics).
  • ACEScc, ACEScct and ACESproxy all have AP1 primaries and their own specified logarithmic transfer functions.
| Color space | Primaries | White Point | Transfer function | Usage |
| ACES2065-1 | AP0 (non-physically realizable) | ~D60 | Linear | Interchange and archival space |
| ACEScc | AP1 (non-physically realizable) | ~D60 | Logarithmic | Working space (color grading) |
| ACEScct | AP1 (non-physically realizable) | ~D60 | Logarithmic (Cineon like) | Working space (color grading) |
| ACEScg | AP1 (non-physically realizable) | ~D60 | Linear | Working space (rendering, compositing) |
| ACESproxy | AP1 (non-physically realizable) | ~D60 | Logarithmic | Transport space (deprecated) |

The ACES White Point is not exactly D60 (many people are wrong about this actually). It was chosen to avoid any misunderstanding that ACES would only be compatible with scenes shot under CIE Daylight with a CCT of 6000K.

It’s all explained in here.

There is also an interesting article about the different ACES color spaces if you want to read more on the topic.

Please note that the ACES2065-1 color space is not recommended for rendering. You should use ACEScg (AP1 primaries).

More explanations are provided right below.

Why ACEScg ?

What about Computer Graphics ? How can ACES modify our renders ? Some tests have been conducted by Steve Agland (Animal Logic) and Anders Langlands (Weta Digital) to render in ACES2065-1.

An unexpected issue occurred when rendering in the ACES2065-1 color space : the gamut is so big that renders produced negative values, which messed up energy conservation. It is very well explained in this post. Some color peeps refer to this event as The Great Discovery.


On top of that, grading on ACES2065-1 did not feel “natural“. From ACEScentral, Nick Shaw explains :

The AP1 primaries are a compromise which code most colors likely to occur in images from real cameras using positive values. Because even the most saturated ACEScg colors are still “real”, this means that the maths of grading operations works in a way which “feels” better to colorists.

ACEScg is more artist friendly.

Therefore another color space has been created for Computer Graphics : ACEScg (AP1 primaries), which is similar to Rec. 2020. I will repeat it in bold and with emphasis because it is CRITICAL : you should never render in ACES2065-1.

ACEScg : our working/rendering space

Why would we render in one color space and display in another ? What is the point ? Remember the Rendering space and the Display space from Chapter 1 ? We have already seen that they do not have to be the same.

But what if I told you that rendering within different primaries will NOT give the same result ? I’ll repeat for clarity : rendering in Linear – sRGB or ACEScg will not give the same “beauty” exr. The lighting calculation and therefore result will be different.

Lego Batman and Ninjago were rendered in linear – P3-D60, which was entirely down to the space the surfacing was done in. From Peter Rabbit onward, movies were rendered in ACEScg.

The choice of rendering primaries is definitely not a trivial decision…

First of all, we shall all agree that RGB rendering is kind of broken from first principles (compared to spectral rendering). So we want to use a colorspace for rendering that makes it a bit less broken and ACEScg is a good candidate for that. By switching from Linear – sRGB to ACEScg, you will access Wide Gamut Rendering. How do we know which particular color space is suited for our needs ? As always, by doing some tests !

To perform a proper comparison between rendering color spaces, we’d need a reference called the “ground truth“. In our case, it would be an unbiased image such as the spectral render engine Mitsuba can generate. Otherwise we could not compare the renders objectively !

The claim is not that BT2020 or ACEScg are the ultimate colourspaces, in fact none is, the claim is that they tend to reduce error generally a bit better compared to others. They happen to have an orientation that is the jack-of-all-trades of all the major RGB colourspaces.

Thomas Mansencal.

Comparison between Spectral, Rec.709 and Rec. 2020

Some really interesting tests and research have been conducted by Anders Langlands and Thomas Mansencal. They are explained in this post. Three different renders have been done :

  • sRGB/Rec.709, the smallest gamut of all.
  • Spectral, the ground truth using wavelengths for its calculation.
  • Rec. 2020, which is similar to ACEScg.

Then, you subtract them from one another. The darker it gets, the closer it is to spectral ! If you have a look at the bottom row, the average value is overall darker, which means that Rec. 2020 gets us “closer” to spectral rendering.
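The comparison can be sketched numerically : subtract each RGB render from the spectral ground truth and look at the mean absolute error. The arrays below are made-up stand-ins for actual renders, just to show the bookkeeping :

```python
import numpy as np

def mean_abs_error(render, ground_truth):
    """Average per-pixel absolute difference : a darker difference image = a lower error."""
    return np.mean(np.abs(render - ground_truth))

# Hypothetical 2x2 single-channel "renders" (a real comparison would use actual images).
spectral = np.array([[0.50, 0.20], [0.10, 0.80]])
rec709   = np.array([[0.60, 0.10], [0.25, 0.70]])
rec2020  = np.array([[0.53, 0.18], [0.14, 0.78]])

print(mean_abs_error(rec709, spectral))   # larger error
print(mean_abs_error(rec2020, spectral))  # smaller error -> "closer" to spectral
```

With real renders, the difference images in the post above are exactly `np.abs(render - ground_truth)` displayed as pictures.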

What is the difference between ACEScg and Rec. 2020 ? What is the advantage of having the three ACEScg primaries outside the CIE diagram ? Mostly to encompass P3 : ACEScg is a gamut close to BT.2020 that also encompasses P3, which requires non-physically realizable primaries.

Explanation by Thomas Mansencal. It is worth noting that the choice of the ACEScg primaries was controversial.

ACEScg explanation

The technical reason behind this difference is given in a series of posts :

From Thomas Mansencal : On a strictly technical point of view, rendering engines are indeed colourspaces agnostic. They just chew through whatever data you throw at them without making any consideration of the colorspace the data is stored into. However the choice of colorspace and its primaries is critical to achieve a faithful rendering. […]

If you are using sRGB textures, you will be rendering in this particular gamut (by default). Only the use of an Input Device Transform (IDT), or a conversion beforehand, will allow you to render in ACEScg.

From Thomas Mansencal : What most CG artists are referring to as linear is currently sRGB / BT.709 / Rec. 709 colourspace with a linear transfer function. ACEScg is intrinsically linear which makes it perfect for rendering. […] some RGB colorspaces have gamuts that are better suited for CG rendering and will get results that overall will be closer to a ground truth full spectral rendering. ACEScg / BT.2020 have been shown to produce more faithful results in that regard.

And if this was not clear enough :

Yes, the basis vectors are different and BT.2020 / ACEScg are producing better results, likely because the primaries are sharper along the fact that the basis vectors are rotated in way that reduces errors. A few people (I’m one of them) have written about that a few years ago about it. […] Each RGB colorspace has different basis vectors as a result of which mathematical operations such as multiplication, division and power are not equivalent. […] Generally, you should avoid rendering with ACES2065-1 because it is far from optimal for computer graphics rendering, […].
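The point about basis vectors can be demonstrated in a few lines : elementwise multiplication (the bread and butter of shading) does not commute with a change of RGB basis. A sketch using the commonly published Bradford-adapted sRGB-to-ACEScg matrix (the same coefficients as the Cornell box conversion table in this chapter) :

```python
import numpy as np

# Linear sRGB -> ACEScg, Bradford-adapted (coefficients as published for ACES).
SRGB_TO_ACESCG = np.array([
    [0.61312, 0.33951, 0.04737],
    [0.07020, 0.91636, 0.01345],
    [0.02062, 0.10958, 0.86980],
])

light  = np.array([1.0, 0.0, 0.0])  # pure sRGB red
albedo = np.array([0.0, 1.0, 0.0])  # pure sRGB green

# Multiply in sRGB, then convert the result :
in_srgb = SRGB_TO_ACESCG @ (light * albedo)

# Convert first, then multiply in ACEScg :
in_acescg = (SRGB_TO_ACESCG @ light) * (SRGB_TO_ACESCG @ albedo)

print(in_srgb)    # [0, 0, 0] -- the red light is fully absorbed by the green albedo
print(in_acescg)  # non-zero -- some energy survives the same bounce
```

Same scene, same operation, different basis : the "beauty" render cannot be identical.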

A closer look at virtual primaries

One of my favorite posts on ACESCentral contains some interesting information :

The reason for unreal primaries is that they are necessary in order to code all colours within the CIE “horseshoe” using only positive values. The AP0 primaries form the smallest possible triangle which contains all the real colours. This has the knock-on effect that a significant proportion of code values are “wasted” on unreal colours. […] The AP1 primaries are a compromise which code most […] colours likely to occur in images from real cameras using positive values. Because even the most saturated ACEScc/ACEScct/ACEScg colours are still real, this means that the maths of grading operations works in a way which “feels” better to colourists.

Nick Shaw.

AP1 was designed to produce more reasonable ‘RGB’ grading (so that the dials move in the direction of R and G and B), to pick up critical yellows and golds along the spectral locus (to get that entire edge of the locus), and to clearly encompass Rec.2020 primaries by just a small amount. […] Getting rid of the negative blue primary location in AP0 was also a goal.

Jim Houston.

You can also check Scott Dyer’s answer from the same thread. From my personal experience, I can share that rendering in ACEScg :

  • Is not “better” than rendering in Linear – sRGB. It is just different and you should test by yourself to see if Wide Gamut Rendering fits your needs.
  • Gives a result which is closer to Spectral Rendering, but this is not necessarily suitable for all projects, especially in Full CG.
  • Does not give access to brighter and more saturated colors. That is the biggest misconception about it. These values do not make it to the display !

So yeah, Wide Gamut rendering has both pros and cons. And as always, you should test for yourself.

Input Device Transform (IDT)

The IDT is the process to import the textures/images to your working/rendering space, which most likely will be ACEScg.

Cornell box example

Here are two renders of a Cornell Box in Guerilla Render. I have used the same sRGB textures for both renders, with the following values :

  1. Green sRGB primary at (0, 1, 0).
  2. Red sRGB primary at (1, 0, 0).
  3. Middle gray at (0.18, 0.18, 0.18).

The only difference between these Cornell boxes is the rendering space :

  • In the first one, the rendering space is what many applications call “linear“, which actually means sRGB gamut with a linear transfer function (“Utility – Linear – sRGB“).
  • In the second one, the rendering space is ACEScg. I set the IDT correctly to properly test the wide gamut rendering.

The main thing to take into account about this test is that I used textures. If you use colors directly in your software, you may not get the same result. It would also depend on how the color_picking role has been implemented. So use the following values carefully :

| | sRGB primaries | Converted to ACEScg |
| Red primary | 1, 0, 0 | 0.61312, 0.07020, 0.02062 |
| Green primary | 0, 1, 0 | 0.33951, 0.91636, 0.10958 |
| Blue primary | 0, 0, 1 | 0.04737, 0.01345, 0.86980 |
| Mid gray | 0.18, 0.18, 0.18 | 0.18, 0.18, 0.18 |
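These values come from the standard 3×3 Bradford-adapted sRGB-to-ACEScg matrix. A minimal numpy sketch to reproduce them (the coefficients below are copied from the table, so this is a consistency check rather than a derivation) :

```python
import numpy as np

# Linear sRGB -> ACEScg 3x3 matrix, D65 -> ~D60 Bradford adaptation.
SRGB_TO_ACESCG = np.array([
    [0.61312, 0.33951, 0.04737],
    [0.07020, 0.91636, 0.01345],
    [0.02062, 0.10958, 0.86980],
])

def srgb_lin_to_acescg(rgb):
    """Convert a linear-sRGB triplet to ACEScg."""
    return SRGB_TO_ACESCG @ np.asarray(rgb, dtype=float)

print(srgb_lin_to_acescg([1.0, 0.0, 0.0]))    # red primary -> ~[0.613, 0.070, 0.021]
print(srgb_lin_to_acescg([0.18, 0.18, 0.18])) # mid gray    -> ~[0.18, 0.18, 0.18]
```

Note that each row of the matrix sums to ~1.0, which is exactly why neutral grays are preserved by the conversion.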

You must also be careful with your mipmap generation (tex files). If you switch your rendering space, it is safer to delete the existing tex files. Otherwise you may get some incorrect results.

Why do we get a different global illumination ?

OCIO allows us to set the color space of our scene to ACEScg and to have a closer-to-spectral GI in our render. We can do the same process in Nuke to analyze what is actually happening :

  • On the left, we have a pure green constant at 0,1,0.
  • We convert it from sRGB to ACEScg using an OCIOColorSpace node.
  • The same color expressed in ACEScg has some information in the red and blue channels. It is really just a conversion : ACES does not “add” anything.

The conversion does not change the color. It gives the same chromaticity but expressed differently.


Here is another way of explaining it :

  • On the left, we have a green primary in the sRGB/Rec.709 color space.
  • Using a 3×3 matrix to switch from sRGB to ACEScg, this chromaticity with unique xy coordinates has been converted.
  • The color is not a pure green anymore in the ACEScg color space (right image).

Because of the conversion process, the ray is no longer stopped by a zero on some channels (red and blue in this case). Light paths are therefore less likely to be stopped by a null channel.
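A small numerical sketch of this effect, simulating indirect bounces as repeated multiplication by the albedo. The matrix is the usual Bradford-adapted sRGB-to-ACEScg matrix, and the "renderer" is obviously a toy :

```python
import numpy as np

SRGB_TO_ACESCG = np.array([
    [0.61312, 0.33951, 0.04737],
    [0.07020, 0.91636, 0.01345],
    [0.02062, 0.10958, 0.86980],
])
ACESCG_TO_SRGB = np.linalg.inv(SRGB_TO_ACESCG)

albedo_srgb   = np.array([0.0, 1.0, 0.0])      # pure green wall
albedo_acescg = SRGB_TO_ACESCG @ albedo_srgb   # same chromaticity, non-zero R and B

light = np.array([1.0, 1.0, 1.0])              # white light

bounces = 3
srgb_result   = light * albedo_srgb ** bounces    # R and B are killed on the first bounce
acescg_result = light * albedo_acescg ** bounces  # R and B survive, attenuated each bounce

# Bring the ACEScg result back to sRGB to compare like with like :
print(srgb_result)                    # [0, 1, 0] : stays a pure primary forever
print(ACESCG_TO_SRGB @ acescg_result) # a different green : the bounced chromaticity has drifted
```

The two results disagree after a few bounces, which is the different global illumination we observe in the Cornell boxes.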

IDT overview

ACES provides all the 3D LUTs and matrices we need to process these transforms. The most common IDTs for Computer Graphics are :

  • Utility – sRGB – Texture : If your texture comes from Photoshop or the Internet. Only for display-referred textures encoded with an sRGB OETF, like an albedo map.
  • Utility – Linear – sRGB : If your texture is linear within the sRGB primaries and you want to convert it to ACEScg, like an exr file.
  • Utility – Raw : If you do NOT want any transform applied on your texture, like normal maps.

Please note that if your rendering space is ACEScg, Utility – Raw and ACEScg are contextually the “same” IDT : no transform is applied with either option.
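As a rough sketch, “Utility – sRGB – Texture” amounts to two steps : undo the sRGB encoding, then apply the 3×3 matrix to ACEScg. A minimal version below (the piecewise function is the standard sRGB EOTF decode; the matrix is the usual Bradford-adapted one) :

```python
import numpy as np

SRGB_TO_ACESCG = np.array([
    [0.61312, 0.33951, 0.04737],
    [0.07020, 0.91636, 0.01345],
    [0.02062, 0.10958, 0.86980],
])

def srgb_decode(v):
    """Inverse of the sRGB encoding (piecewise : linear toe + 2.4 power segment)."""
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def idt_srgb_texture(rgb):
    """'Utility - sRGB - Texture'-like IDT : decode to linear, then rotate into ACEScg."""
    return SRGB_TO_ACESCG @ srgb_decode(rgb)

# An 8-bit albedo value of (118, 118, 118)/255 decodes to ~0.18 linear :
print(idt_srgb_texture([118 / 255] * 3))
```

“Utility – Linear – sRGB” is the same thing without the decode step, and “Utility – Raw” is neither.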

To plot the gamut

Plotting the gamut of an image allows you to map its pixels against the CIE 1931 Chromaticity Diagram. This function is available in colour-science, developed by Thomas Mansencal.

  • On the first image, we have plotted a render done in sRGB. The pixels are within the sRGB gamut.
  • On the second image, we have plotted a render done in ACEScg. The pixels have a wider coverage of the gamut.

There is also an app available on Windows and Mac called Color Spatioplotter if you want to plot the gamut of an image. I haven’t tried it myself but from the feedback I got, it seems to work fine at a very affordable price.

As a conclusion on this IDT / ACEScg section, I would like to add that :

  • The Cornell boxes examples are best-case scenarios. In real production, with much more complex stimuli, the differences are not that important.
  • Closer to Spectral rendering should be interpreted as “the chromaticities after indirect bounces are closer“, which again does not necessarily look “better” (as shown here).
  • Wide gamut rendering adds another layer of complexity when it comes to displaying these images, since a proper gamut compression (which ACES does not have) would be needed.

Again, test for yourself, trust your eyes and compare both pros and cons.

Output Transform

The ACES Output Transform is made of two separate steps called the Reference Rendering Transform (RRT) and the Output Device Transform (ODT). This was and still is true for all Output Transforms in ACES 1.0.X. The release of ACES 1.1 introduced some new HDR Output Transforms as a single step, called the Single Stage Tone Scale (SSTS).

Tone Scale is the ACES terminology for what generally people call Tone Mapping.

The origin of the two steps Output Transform (RRT+ODT) can be found in this document by Ed Giorgianni. The idea behind it is the following :

  • RRT : intermediate rendering to an idealized and hypothetical reference display. It is the “ACES look“, like a virtual film stock.
  • ODT : final rendering for a specific real-world display device (primaries, EOTF and white point). It also takes into account the Viewing Environment (dark, dim or normal surround) and the peak luminance (nits).

If you display your sRGB render directly on P3 without transformation, I would say that it is “Absence of Colour Management”.

Thomas Mansencal.

In the next paragraphs, you will find a description of the ACES 1.X Output Transforms. It is not an in-depth analysis, but rather a collection of information from ACESCentral and some simple visual examples. We now know (in January 2022), partly thanks to the Output Transform Virtual Working Group for ACES2.0, their limitations and issues. I have described them a bit below and also more extensively in this article. Watch out !

Reference Rendering Transform (RRT)

In practice, the RRT + ODT process is combined for the user but I think it is worth describing here some components of the RRT. I am particularly interested in the infamous “sweeteners” : glow module, red modifier and global desaturation.

The output of the RRT is called Output Color Encoding Specification (OCES).

These “sweeteners” have generated much debate about where they belong and whether they should be part of a Look Modification Transform (LMT). They also cause problems for invertibility. Here are a few quotes about their history :

[They] originally came from an aim to be “pseudo filmic” in the early days. [..] Glow came from perceived filmic look. […] Red modifier and glow are different. Glow is aesthetic.

Scott Dyer

I don’t consider [the red modifier] a “sweetener”. It’s compensating for saturation effect of RGB tone scale.[…] It is compensating for “hot” reds.

Doug Walker and Alex Forsythe

It is worth noting that at some point in the future, the whole Output Transform architecture may be modified for ACES 2.0. There is a thread about “following” the three OCIO steps : Look, Display, View.

Output Device Transform (ODT)

The ODT is the process to display the reference (OCES) on your monitor. The Academy recommends the use of an ODT adapted to your screen, based on your project needs :

  • Do you work for TV or the Internet ? You should display in sRGB or Rec.709.
  • Do you work in Feature Film ? You should display in P3.

Rec. 2020 is clearly the future but no projector is able to cover 100% of this color space. The technology is not there yet; in ten years maybe, it will be the new norm. So as far as I know, in 2022, Rec.2020 is only used as a container for P3 deliveries.

Not there yet, unless you own a Christie.

Examples and comparison of Output Transforms

Here are some examples comparing nuke-default OCIO setup with ACES 1.1. Please note that Nuke is wrong in its OCIO config :

  • As implemented in Nuke, rec709 (approximately a Gamma value of 1.95) is a camera encoding OETF !
  • Rec.709 (ACES) uses a BT.1886 EOTF (equivalent to a power function of 2.4) and targets a display EOTF.
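The difference can be checked numerically. Nuke's rec709 curve is the camera OETF from ITU-R BT.709 (roughly a 1.95 gamma overall), while the ACES ODT targets a BT.1886 display (close to a 2.4 power function). A quick sketch of where mid-gray lands under each :

```python
def bt709_oetf(L):
    """Camera encoding from ITU-R BT.709 (scene light -> code value)."""
    return 4.5 * L if L < 0.018 else 1.099 * L ** 0.45 - 0.099

def bt1886_inverse_eotf(L):
    """Simplified BT.1886 display inverse (code value for a given display light)."""
    return L ** (1.0 / 2.4)

mid_gray = 0.18
print(bt709_oetf(mid_gray))          # ~0.41
print(bt1886_inverse_eotf(mid_gray)) # ~0.49 -- a visibly different code value
```

Sending a camera-encoded image straight to a BT.1886 display (or vice versa) therefore shifts the whole tonal range, which is why the two configs do not match.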

I have also done a test on the MacBeth chart to compare the “Film (sRGB)” from the spi-anim config with the ACES config.


I’ll just put it out there so that it is clear : there is no point in using a P3D65 ACES ODT if your monitor only covers sRGB. It won’t make your renders look prettier.

Basically, your ODT should match your monitor's characteristics.

Output Transform clarification

Many artists have been confused by Nuke’s default display transform :

  • Why do the sRGB display transform and sRGB (ACES) NOT match ?
  • Because the sRGB (ACES) Output Transform includes some tone scale !

In ACES, we call this the “rendering” step. Going from ACEScg (scene-referred) to your display is not a simple color space conversion. It is actually a “complex” (color and tonality) rendering operation.

Most artists know this process as “tone mapping”.

From ACEScentral, Nick Shaw explains :

The ACES Rec.709 Output Transform is a much more sophisticated display transform, which includes a colour space mapping from the ACEScg working space to Rec.709, and tone mapping to expand mid-tone contrast and compress the shadows and highlights. The aim of this is to produce an image on a Rec.709/BT.1886 display which is a good perceptual match to the original scene.

Output Transform overview

Some people complain about the tone mapping included in the Output Transform. Here are a few things to know :

The RRT and ODT splines and thus the ACES system tone scale (RRT+ODT) were derived through visual testing on a large test set of images […] from expert viewers. So no, the values are not arbitrary.

From Scott Dyer, ACES mentor.

Some additional explanations about the RRT/ODT process from this post :

  • The ACES RRT was designed for Theatrical Exhibition where Viewing Conditions are Dark. Content for cinema tends to be authored with more contrast to compensate for the dark surround.
  • Even though there is a surround compensation process (Dark <–> Dim), the values to drive that process were subjectively obtained and it might not be enough for all the cases.
  • The RRT + ODTs are also the results of viewing images by an expert viewer, so there is undeniably some subjectivity built-in.
  • Some companies such as Epic Games are pre-exposing the Scene-Referred Values with a 1.45 gain (which would match roughly an exposure increase of 0.55 in your lights).

Another description of the ODT tone scale can be found here.


The ACES Output Transform includes a shaper, which is a logarithmic color space, to optimize the data. It is a transparent process, nothing more than an intermediate state for data, with purely technical goals.

What exactly happens when we display in sRGB (ACES) with an OCIO config ? To go to sRGB (ACES), OCIO first transforms the color to ACES2065-1 (AP0 primaries). Then from AP0 we go to a colour space called the Shaper thanks to a 1D LUT, and finally to sRGB thanks to a 3D LUT.

From the ACES 1.2 OCIO Config :

  - !<ColorSpace>
    name: Output - sRGB
    family: Output
    equalitygroup: ""
    bitdepth: 32f
    description: |
      ACES 1.0 Output - sRGB Output Transform
      ACES Transform ID : urn:ampas:aces:transformId:v1.5:ODT.Academy.RGBmonitor_100nits_dim.a1.0.3
    isdata: false
    allocation: uniform
    allocationvars: [0, 1]
    to_reference: !<GroupTransform>
        - !<FileTransform> {src: InvRRT.sRGB.Log2_48_nits_Shaper.spi3d, interpolation: tetrahedral}
        - !<FileTransform> {src: Log2_48_nits_Shaper_to_linear.spi1d, interpolation: linear}
    from_reference: !<GroupTransform>
        - !<FileTransform> {src: Log2_48_nits_Shaper_to_linear.spi1d, interpolation: linear, direction: inverse}
        - !<FileTransform> {src: Log2_48_nits_Shaper.RRT.sRGB.spi3d, interpolation: tetrahedral}

A shaper is needed because a 3D LUT (even 64³) is not suited to being applied directly to linear data like ACEScg : most of the uniformly spaced LUT samples would be wasted on a narrow range of exposures.
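For reference, the Log2 48-nits shaper maps ±6.5 stops around mid-gray (0.18) to the [0, 1] range, so the 3D LUT samples are spread evenly in stops rather than in linear light. A sketch of that encoding (the ±6.5-stop range matches the shaper named in the config above; treat the exact bounds as an assumption if you use a different config) :

```python
import math

def log2_48nits_shaper(x, mid_gray=0.18, min_stops=-6.5, max_stops=6.5):
    """Map linear scene light to [0, 1] on a log2 scale centred on mid-gray."""
    stops = math.log2(x / mid_gray)
    return (stops - min_stops) / (max_stops - min_stops)

print(log2_48nits_shaper(0.18))              # 0.5 -- mid-gray sits in the middle of the LUT
print(log2_48nits_shaper(0.18 * 2 ** 6.5))   # 1.0 -- top of the LUT range
print(log2_48nits_shaper(0.18 * 2 ** -6.5))  # 0.0 -- bottom of the LUT range
```

In linear light, the top stop alone would consume half of the LUT samples; in the shaper space, every stop gets the same number of samples.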


Once you’re happy with your renders and pretty much done with the project, you are ready to deliver your frames. In animation studios, we generally deliver scene-linear exr files to a digital laboratory, such as this one.

With ACES, it is pretty much the same concept with a couple of important notes. For final delivery to the Digital Intermediate, it is recommended to deliver ACES compliant EXR files.

This is the standard set by the Academy to exchange files between facilities. This is really important. Your render output will be ACEScg (AP1) but your compositing output has to be ACES2065-1 (AP0) with the correct metadata.

Rendering in ACEScg uses color primaries that are closer to actual devices – a little bigger than Rec2020, but AP0 is the target for File Outputs (archive and interchange). When working completely within your own facility without sharing of files, ACEScg is sometimes used for convenience but using the format in the name of the file to distinguish it from the ACES standard (putting ACEScg in EXR with the primaries specified – a device or AP1 – means it is not an ACES file). The ACES flag in a header should not be set.

Explanation by Jim Houston.

The interchange and archival files should be written as OpenEXRs conforming to SMPTE 2065-4. In Nuke, you should set the Write node’s colorspace to ACES2065-1 and check the box write ACES compliant EXR to get the correct metadata.

From ACEScentral, Doug Walker explains :

The SMPTE ST 2065-4 spec “ACES Image Container File Layout” currently requires uncompressed files. Also, there are other restrictions such as only 16-bit float and only certain channel layouts (RGB, RGBA, and stereo RGB/RGBA). These limitations do make sense for use-cases that involve archiving or real-time playback.

ACES limitations

ACES Retrospective and Enhancements

In March 2017, a study listed some possible improvements for ACES : ACES Retrospective and Enhancements. It is an interesting document that has led to several changes in the ACES organization. Here is a link to the official response from the ACES Leadership.

A list of 48 points to improve has also been published on the forum and the creation of several Virtual Working Groups has already brought some solutions to the table. Do not hesitate to join the process !

This interesting article also describes ACES’ issues and a proposal to solve them.

Hue skews and Gamut Clipping

The two biggest issues I have encountered with ACES are called Hue skews and Gamut Clipping. Some image makers believe that the audience has gotten used to them and is not bothered. Others find them truly horrific. I’ll let you decide for yourselves.


In these slides, I described the issue as “posterization”. It has been discussed on ACESCentral and described as “Gamut Clipping”.

There are different reasons for these kinds of issues. They are pretty technical and beyond the scope of this chapter, but I have listed them here (and more extensively in this article) :

  • A 3×3 matrix can only model linear transformations, which may induce the Abney effect because its paths are straight lines (just like brute-force gamut clips).
  • Discrete per-channel lookups (also called RGB tone mapping) skew the intention. Any aesthetic transfer function that asymptotes at 1.0 suffers this.
  • The aesthetic transfer function ends up collapsing lots of different values into the same one, hence Gamut Clipping. The non-physically realizable primaries of ACEScg may also be responsible.
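The second point is easy to demonstrate numerically. Below is a minimal Python sketch (using a simple Reinhard curve as a stand-in, not the actual ACES tone scale) showing how any asymptotic per-channel curve changes the ratios between channels, which is exactly what skews hue :

```python
def reinhard(x):
    # Simple asymptotic tone curve: approaches 1.0 as x grows.
    return x / (1.0 + x)

# A bright, saturated red in scene-linear values.
rgb = [8.0, 0.5, 0.1]
mapped = [reinhard(c) for c in rgb]

# Red saturates near 1.0 while green barely compresses, so the
# green/red ratio grows and the hue drifts toward orange/yellow.
ratio_before = rgb[1] / rgb[0]       # ≈ 0.0625
ratio_after = mapped[1] / mapped[0]  # ≈ 0.375
```

A hue-preserving transform would keep these ratios (or at least the perceived hue) constant while compressing intensity.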

The ODTs clip values

Most of these notions are eventually related to what we call gamut mapping. This is the key element missing in ACES. I thought for a very long time that the ODT would remap the gamut in a smart way. Unfortunately it is not the case : it just does a 3×3 matrix transform, a tone scale and a clamp !


As you can see in the code below, an ODT brings the values back into its gamut through a 3×3 matrix and a clamp, which causes some Gamut Clipping. This is one of the issues we face with very saturated values :

// Handle out-of-gamut values
// Clip values < 0 or > 1 (i.e. projecting outside the display primaries)
linearCV = clamp_f3( linearCV, 0., 1.);

All ODTs clamp to the target gamut so it is impossible to have anything outside the gamut. All values are assumed to be between 0 and 1 after this process, and this is the penultimate step before the transfer function (which will not change this result).
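As a concrete sketch of this matrix-then-clamp behaviour, here is a small Python example. The matrix is an approximate AP1 (ACEScg) to Rec.709 conversion quoted from memory, so treat the exact coefficients as illustrative :

```python
# Approximate ACEScg (AP1) -> Rec.709 matrix; coefficients are
# illustrative, use the official transforms for production work.
M = [
    [ 1.70505, -0.62179, -0.08326],
    [-0.13026,  1.14080, -0.01055],
    [-0.02400, -0.12897,  1.15297],
]

def mat_mul(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def clamp01(v):
    # What the ODT snippet above does: clamp_f3(linearCV, 0., 1.)
    return [min(max(c, 0.0), 1.0) for c in v]

# Two clearly different saturated ACEScg blues...
a = clamp01(mat_mul(M, [0.05, 0.02, 0.95]))
b = clamp01(mat_mul(M, [0.02, 0.05, 0.95]))
# ...both come out with red clipped to 0.0 and blue clipped to 1.0:
# distinct scene values collapse toward the same display value.
```

This collapse of neighbouring values is what shows up visually as posterization.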

Let’s now look at the difference between 3×3 matrices and Gamut Mapping. We could say that the Gamut Mapping in ACES is very “crude”, or completely missing, depending on how you see things.

A word about 3×3 matrices

Thomas Mansencal shared some knowledge about matrices and their use in ACES. Matrices are currently used in more than 50 “places” such as the IDTs, the BlueLightArtifactFix LMT, the RRT and the ODTs. They are mostly used in definitions for colorspace changes, chromatic adaptation and saturation adjustment.

The advantage is that they model a linear transformation, which is very fast to apply, numerically very stable, invertible (most of the time), easy to implement and does not suffer from exposure variance. This advantage is also their curse : they can only model linear transformations.
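The exposure-invariance property is easy to verify numerically : a 3×3 matrix commutes with a global exposure gain, so grading exposure before or after the matrix gives the same result. A quick Python check (the matrix coefficients are made up for the example, not a real IDT matrix) :

```python
# Illustrative coefficients, not a real IDT matrix.
M = [
    [0.6, 0.3, 0.1],
    [0.2, 0.7, 0.1],
    [0.1, 0.2, 0.7],
]

def mat_mul(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

rgb = [0.18, 0.4, 0.9]
gain = 2.0  # +1 stop of exposure

a = mat_mul(M, [gain * c for c in rgb])  # gain, then matrix
b = [gain * c for c in mat_mul(M, rgb)]  # matrix, then gain
# a and b are identical: the matrix does not care where the
# exposure adjustment happens in the chain.
```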

An online app for RGB Colourspace Transformation Matrix.

Handling a cube

To better understand how a 3×3 matrix works, Thomas used a comparison with a cube. I like simple examples !

Imagine that you have two cubes with different rotations and scales. A 3×3 matrix could make one cube fit the other perfectly. Now imagine that you have a cube and a sphere and you want to fit them together. The 3×3 matrix would get you to the point where the sphere and the cube share the same space, but they would not have the same shape. This is where you need a non-linear transformation that does more than rotating and scaling your space : you need to locally distort it ! A 3×3 matrix basically puts two large handles around your space and distorts it globally. Note that a 4×4 matrix would also allow you to translate; by convention, a 3×3 matrix commonly only scales and rotates (in 3D spaces).

More information about matrices can be found here.

Matrices are a good tool for color space conversions, such as the ones used in the IDTs and Utilities, but they are less than ideal when it comes to the Display Transform. A more sophisticated Gamut Mapping process would be appropriate in this case.

Gamut mapping

What is gamut mapping ? A proper Output Transform (also called Display Rendering Transform) should be composed of two main elements :

  • Tone mapping (or intensity/luminance mapping) to compress an infinite range (HDR) onto a limited range (SDR).
  • Gamut mapping to compress a wide gamut (ACEScg) into a smaller gamut (P3) while maintaining the intention of the scene as best as possible.
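As a toy illustration of these two modules, here is a deliberately naive Python sketch. It is not the ACES (or any production) transform : the tone curve is a plain Reinhard and the gamut mapping simply desaturates toward a crude achromatic axis instead of clamping :

```python
def tone_map(x):
    # Compress an unbounded scene-linear range into [0, 1).
    return x / (1.0 + x)

def gamut_map(rgb):
    # Instead of clamping negative components, blend toward the
    # achromatic axis just enough to bring the color inside the gamut.
    ach = sum(rgb) / 3.0  # crude achromatic anchor
    m = min(rgb)
    if m >= 0.0 or ach <= 0.0:
        return rgb
    t = ach / (ach - m)  # smallest blend that lifts the min channel to 0
    return [ach + t * (c - ach) for c in rgb]

# An out-of-gamut value, e.g. after a wide-gamut 3x3 matrix.
out = gamut_map([tone_map(c) for c in [4.0, -0.2, 0.1]])
# The negative channel is lifted to ~0.0 while channel ratios are
# preserved better than with a hard clip.
```

A real gamut mapper would of course work in a perceptual space and only compress values near the gamut boundary, but the structure (a tone scale plus an explicit gamut compression step instead of a hard clamp) is the point.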

Jed Smith actually gives a much more complete description of the “modules” we might need for an Output Transform.


I have used this experimental OCIO config to compare different Output Transforms.

This process is actually quite complex and there have been many attempts over the last few years to solve this riddle : AMD Fidelity FX, AMD Variable Dynamic Range, AMD FreeSync HDR Gamut Mapping, Frostbite… with more or less success.

Apart from ICC, there are not really any systems that do [gamut mapping]. It is the responsibility of the colorist to manage this kind of problem by desaturating a bit the red. But it is not necessarily a limitation of ACES, on the contrary. The system allows you to use extreme values so with great power comes great responsibilities. This is where gamut mapping would be useful. The reality is that all the technology changes super super fast and it takes a lot of time to build the tools. The research is not even finished in fact : for example, LED onset lighting is very recent.

Some advice from Thomas Mansencal.

One issue that has often been noticed is what we call the Blue Light Artifact. It is very well described in this post from ACEScentral. A temporary fix has also been provided until a more long-term solution is found, such as the Gamut Compress algorithm by Jed Smith.
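For intuition, here is a very simplified Python sketch of the idea behind such a gamut compression : measure each channel’s distance from the achromatic axis and compress only the distances beyond a threshold, instead of hard clipping. The threshold and curve below are illustrative, not the ratified ACES Reference Gamut Compression parameters :

```python
def compress(dist, threshold=0.8, limit=1.2):
    # Identity below the threshold; distances above it are mapped
    # smoothly toward (but never past) 1.0 with a Reinhard-style curve.
    if dist <= threshold:
        return dist
    over = (dist - threshold) / (limit - threshold)
    return threshold + (1.0 - threshold) * over / (1.0 + over)

def gamut_compress(rgb):
    ach = max(rgb)  # achromatic reference: the largest channel
    if ach <= 0.0:
        return rgb
    dists = [(ach - c) / ach for c in rgb]  # 0 on the axis, > 1 outside
    return [ach - compress(d) * ach for d in dists]

# A negative channel, i.e. a value outside the working gamut:
out = gamut_compress([0.1, -0.1, 1.0])
# All channels are now >= 0, without a hard clamp, and values close
# to the achromatic axis are left untouched.
```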


ACES provides the following characteristics :

  • Compatibility through OCIO with many software packages.
  • Free, with support from the community on ACEScentral.

However, several aspects should be addressed for ACES 2.0 to make it more robust and reliable :

  • Hue Skews due to per-channel (RGB) lookup.
  • Gamut clipping (or posterization) due to lack of signal compression (aka gamut mapping).
  • The current Output Transforms are not “neutral”/chromaticity-linear/hue-preserving (light mixtures in the working space are not respected).
  • The path to white is just a “happy accident” where colours converge to primaries and their complements.
  • Lack of predictability between the SDR and HDR Output Transforms.
  • ACES deliveries are still ambiguous and should be fixed by the ACES Metadata File (AMF).

On paper, the idea of ACES is interesting : a standard for professionals to help them with color management. In practice, it is an over-complex system that is not really recommended nor even used by color professionals (except for exchanging files).



Here are the key points we have seen in this chapter :

  • ACES is an attempt at setting a standard Color Management Workflow.
  • It is designed around three important steps : the Input Transforms (for input/import), ACEScg (working/rendering space) and the Output Transforms (for display).
  • Ideally you should know and control the color spaces used for each step. ACES gives you access through OCIO to the transforms needed to do so.
  • ACES lets you manipulate most (all ?) of color spaces but is composed of four different color spaces itself : ACES2065-1, ACEScc, ACEScct and ACEScg.
  • Having access to Wide Gamut Rendering allows for interesting experiments but should be studied with care.
  • ACES unfortunately has issues, such as Hue Skews and Gamut Clipping, which in my opinion cannot be solved by a colorist : “LMTs for a DRT with wiggles are difficult.” (from meeting#36). It actually prevents you from reaching certain colors on display :

From this post originally. It is virtually impossible to compensate for this behaviour.

Maybe these limitations come from ACES’ history itself. I have tried to summarize it below from the information I could gather on ACESCentral and during the VWG meetings.

ACES history

ACES has a complex history :

  • It started as an “Interchange Framework” with a single color space ACES2065-1 (AP0 primaries).
  • Then, if I understood correctly, feedback from colorists and VFX studios was not great. So AP1 primaries were created to give birth to ACEScg, ACEScc, ACEScct and ACESproxy. Two of these four color spaces are now deprecated (or almost).
  • Having non-physically realizable primaries (AP1) for a working space is a controversial choice. They are outside of the Spectral Locus !
  • Same thing for the White Point, which is almost D60 ! Would Rec.2020 (with actual physical primaries and a D65 White Point) have been a good candidate for the working space ?
  • Finally, the Output Transforms were created with “contradictory design requirements” and a core technology (Per-Channel Lookup) that has proven to be inadequate and overly complex (the SSTS is over a thousand lines of code).

I actually try to explain and detail all of these points in my latest article (from September 2021) here. I would recommend reading it once you are done with this chapter. Or you can now move on to less technical chapters and focus on cinematography. Your call !


Articles and blogs

I have removed most of the article links because they generally only scratch the surface or blindly praise ACES.