What makes a good picture (formation) ?

Introduction

Yes, this is me again. Back with the same topic. Since my last post from September 2021, I have learned a few things that I would like to share. What is this about again ? Well, it comes with many names :

  • Display Transform
  • View Transform
  • Output Transform
  • Display Rendering Transform (DRT)
  • Tone-Mapping
  • Lookup table (LUT)
  • Print Film Emulation (PFE)

In this article I will use a different name which I think is more appropriate : Picture Formation.

The scene-referred/display-referred workflow has proven to be successful for the past twenty years. But interestingly enough, this workflow seems to have overlooked an important step in our chain. Where does the picture get formed ?

The lack of proper terminology to describe images is one of the biggest issues in our industry. And this is exactly why we need to name this “mechanism” properly. Let’s have a look at a simple diagram :

[Image : 160_pictureFormation_0010_colorManagement_FHD]

The picture formation is NOT just “putting data on the screen” (like a display encoding), it is building up aesthetic decisions about creative image appearance. In my experience, it will have the biggest impact on the quality of your project (positively or negatively). Hence the obsession.

Anyone who cares for their art seeks the essence of their own technique.
Dziga Vertov

Why does it matter ?

We spend more than 8 hours a day in front of our monitors “staring” at images, but we barely ask ourselves how they work. Just think about it for a minute. Most artists (and even supervisors) believe that displaying their work in a viewer is “automatic”.

This could not be further from the truth. Let’s put it simply : no one on the planet has fully figured this out yet. There have been several attempts, more or less successful, and that’s about it. And our winner so far, the “apex predator”, is the chemical film processing developed last century.

[Image : 160_pictureFormation_0020_rocketLaunch_FHD]

And just to be super clear, we do not paint pixels in CG workflows. This is NOT like Photoshop where you pick a color and it displays “automatically”. That would be true for textures because their range fits between 0 and 1. But not for CG renders where our lights and the Global Illumination (GI) allow for more complex scenarios and much higher values.

Before going any deeper into our analysis, I just want to emphasize that choosing the proper picture formation is going to be the most important decision that you can make on a show. Because every single artistic note that you will give will be dependent on it :

  • Evaluate the roughness of a material.
  • Evaluate the color of a fur shader.
  • Evaluate the correct exposure of a shot.
  • Evaluate the lens flare and noise of a delivery.

So the better this decision, the easier your production will be. It is as simple as that.

This article is based on my experience with full CG shows. And its goal is to provide a guide to help supervisors choose the “best” color management workflow for their project. It applies mostly to feature animation and video games, but the fundamentals will also be true for any VFX work.

Our monitors are cubes

Most websites (including mine) about color management start their explanations by looking at “stimuli data” (inputs and/or renders). But this time, I would like to do it the other way around. We will start from the “end target” (our displays) and then go backwards.

Because we look at everything (shots, assets, renders, playblasts…) through a screen !

Basically, a monitor can be represented as a cube :

  • A cube has 6 faces, right ?
  • So imagine 3 channels going from 0 to 100% emission for R, G and B.
  • Then picture 3 more channels from 0 to 100% emission for C, M and Y.
  • Finally visualize one corner being (0,0,0) and one corner being (1,1,1).

This eventually forms a cube. That is your monitor.

[Image : 160_pictureFormation_0030_display_FHD]

Now pay attention, this is important. No matter how good, expensive or fancy your monitor is, it can only display a range from 0 to 100%. That’s it ! A cube !

There is no magic : a monitor comes with its own physical limitations.
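
To make this concrete, here is a minimal sketch (Python and NumPy, like all the snippets in this article) of what the cube implies : whatever values our renders contain, the display can only emit something inside the 0-1 range.

```python
import numpy as np

# Toy scene-linear pixels : the last two are brighter than the display maximum.
pixels = np.array([[0.05, 0.10, 0.20],
                   [0.80, 0.90, 1.00],
                   [4.00, 2.50, 1.20],
                   [16.0, 16.0, 16.0]])

# Whatever happens upstream, the display itself cannot emit outside [0, 1] :
# every code value ends up somewhere inside the cube.
on_screen = np.clip(pixels, 0.0, 1.0)
print(on_screen)
```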

Let’s start simple

At this point, you may want to tell me : “Chris, you are making things complicated again. I just wanna display an exr render on my monitor. Nothing more.” Sure ! Let’s take an exr “HDR” render and display it.

[Image : 160_pictureFormation_0040_lightSabers_FHD]

Here comes the first question : why does it look wrong (or dark) ?

This is because our monitors apply an “EOTF”. The simplest explanation for an EOTF is that it behaves like a “gamma down” (a power function of around 2.2 in most cases) which darkens the signal. This is part of the hardware and there is nothing we can do about it. So we need to compensate for that (with the inverse, around 0.45~ish) in our software. Here is the result :

[Image : 160_pictureFormation_0050_lightSabers_FHD]

And here comes the second question : why does it still look wrong ?

Well, we are displaying a “scene-linear” exr with a simple “Gamma 2.2 function” (the inverse EOTF, or EOTF⁻¹), which means that any value above 1 simply gets clipped. Because remember :

  • Our exr files have values way above 1 (technically at 16-bit half float, it can go up to 65504~ish).
  • Our monitors only display a range from 0 to 1.

Hopefully we can all see the issue we are trying to solve at this point. Somehow we need an operation that allows for a faithful representation of “high dynamic range data” onto a “standard dynamic range monitor”.
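
For the curious, here is a minimal sketch of this naive viewing chain, assuming a plain 2.2 power function for the compensation (real display encodings such as the sRGB EOTF differ slightly) :

```python
import numpy as np

def naive_display(scene_linear, gamma=2.2):
    """Naive "Gamma 2.2" viewing : compensate the display EOTF, clip the rest.

    This is NOT a picture formation : any scene value above 1.0 is lost.
    """
    clipped = np.clip(scene_linear, 0.0, 1.0)   # values above 1 are simply discarded
    return np.power(clipped, 1.0 / gamma)       # inverse-EOTF compensation

# Mid grey pushed up and down in one-stop increments :
stops = np.array([0.18 * 2.0 ** n for n in range(-4, 7)])
print(naive_display(stops))  # everything above 1.0 collapses to pure white
```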

Maybe now you want to add : “Chris, I know about this. We need to use some kind of s-curve to display linear exrs properly on a monitor.” Well, this is partly true. Let me quote Cinematic Color here :

“Friends don’t let friends view scene-linear imagery without an “S-shaped” view transform.”
— Jeremy Selan
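
To illustrate the general idea (and ONLY the idea : this is a toy sigmoid, not the spi-anim curve), an S-shaped tonescale typically works on log exposure around mid grey and rolls off both the toe and the shoulder :

```python
import numpy as np

def toy_s_curve(x, mid_grey=0.18, contrast=1.6):
    """A toy S-shaped tonescale sketch (NOT the spi-anim film emulation)."""
    x = np.maximum(x, 1e-6)
    stops = np.log2(x / mid_grey)                          # exposure around mid grey
    return 1.0 / (1.0 + np.exp(-contrast * 0.5 * stops))  # sigmoid back into 0..1

print(toy_s_curve(np.array([0.01, 0.18, 1.0, 8.0, 64.0])))
```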

So let’s look at our image through the “spi-anim” config created by Sony Pictures Imageworks (2010) and used on Cloudy with a Chance of Meatballs, Surf’s Up, Arthur Christmas and My Little Pony :

[Image : 160_pictureFormation_0060_lightSabers_FHD]

And here comes the third question : why does it still look wrong ?

Several years have passed since Cinematic Color (2012) was written. And we now know that the “S-curve” is just ONE of the different elements necessary for a proper picture formation. Our image now has more contrast, but why doesn’t it feel… “filmic” somehow ?

Well, how would our light sabers look with the “Filmic” config ? This picture formation was developed by Troy Sobotka (2017) and used on Next Gen and The Witness.

[Image : 160_pictureFormation_0070_lightSabers_FHD]

And here comes the fourth question : why does it still look wrong ?

Well, we have made some progress. The light sabers are now white and the contrast seems consistent with our previous attempt. But our characters do not have any shaping : it looks like all our values are collapsing, like a complete loss of tonality.

Okay, why don’t we try one last time ? This time I will use a new picture formation released in 2025, let’s call it “Candidate A” :

[Image : 160_pictureFormation_0080_lightSabers_FHD]

And here comes our final question : what do YOU think of this image ?

Now, if you are a supervisor, can you imagine the amount of notes you would have given to the artist if the image had not been formed properly ? This is what is at stake here.

For clarity, let’s make a summary :

  • We have seen that our monitors are inherently limited devices.
  • We have displayed THE SAME EXR five times : “None”, “Gamma 2.2”, “spi-anim”, “Filmic Very High Contrast” and “Candidate A”.
  • We have seen that a simple gamma function is not enough to properly display HDR imagery.
  • We have observed that different picture formations will give different results. For better or for worse.

If you agree with the above points, let’s move forward.

If you are familiar with my previous post, you should not have learned anything new yet. So far this is mostly a recap.

What about ACES 1.X ?

Possibly after reading this, you may want to point out : “Chris, why do you re-invent the wheel again ? ACES has been implemented in most DCC software, providing the industry with a standard for color management.”

First, just a reminder that I already wrote about ACES 1.X flaws and reported them back to the Academy in 2021. And I am not the only one : an important piece of feedback about ACES 1.0.3, called “ACES Retrospective and Enhancements”, was published in 2017.

Also, I agree that ACES has mainly brought three good things to the industry:

  • It has brought attention to the importance of color management.
  • It has been an entry point to learn about color management (via their forum mainly).
  • It has given an agreed exchange color space (ACES2065-1) between studios.

Basically, I would not be writing those lines without ACES. So thank you !

So why don’t we look at our light sabers with an ACES 1.X config (created by the AMPAS in 2016 and used on Super Mario, Migration and The Wild Robot) ?

[Image : 160_pictureFormation_0090_lightSabers_FHD]

My main comment here is that our colors have shifted. The light sabers appear violet in the back, orange on Mery and yellow on our beloved zombie. We will see later why hue shifts are necessary, but the issue here is that they are a byproduct of the transform. You cannot escape them.

Maybe at this point, you want to reply : “Chris, this render is clearly using sRGB-BT.709 primaries. Why not use ACEScg primaries ?” Sure, let’s do that.

[Image : 160_pictureFormation_0100_lightSabers_FHD]

It is fascinating to me how this ACES image looks similar to the one formed with Filmic. Sure, they both use different stimuli data, but this shows that we are back to square one. We haven’t solved anything (yet).

Please check carefully these two images above and then ask yourself : “Do these examples look like they are color managed ?” Or just put differently : “What does color management even mean at this point ?”

Anticipating some feedback

I can already anticipate three reactions to this article so far. Let’s see if we can address those concerns. The first one is : “Chris, your light sabers example is an edge case. Not all exr files have this kind of extreme data.”

I would argue the following :

  • There are no edge cases. All we have are exr files with “light data” in them. And even on a project with a “natural” look, I have faced plenty of situations where the picture formation is being pushed to its limits. You don’t need light sabers or fancy neon lights to see the possible limitations.
  • Another way to put it is that “edge cases” are extremely important because they tell the full story. They make the potential flaws of the picture formation obvious and help us spot the patterns more easily.

Here is an interesting picture formed on stimuli data made of “wavelengths”. I think this is the best illustration of what I am trying to express above :

[Images : 160_pictureFormation_0110 and 0120, waveLength]

The second argument I often hear is : “Chris, a colorist would fix those artifacts. That is part of their job.” For instance :

I think sometimes people are getting too fixated on every single image coming out beautifully when it’s untouched through the transform. Because, you know, we’ve got a colorist in the loop here and there are possibilities to do all sorts of other things to images that may be problematic.
— A color specialist

I would reply the following :

  • Sometimes it looks like we have given up on forming proper pictures. Our goal should be to come up with a “good” picture formation that allows colorists to focus on their creative goals. Not duct taping.
  • Also, can you imagine if, in a review with a director, I said something like : “A colorist will fix it in DI” ? I would probably get fired. In full CG, we expect the images that we deliver to look “correct”.

For instance, I can replicate the exact same demo as above with an exr file from “The Grinch”. And we could ask ourselves : “Is the light bulb an edge case ? Should a colorist fix the light bulb ?”

[Images : 160_pictureFormation_0130 to 0180, grinch]

The third argument would be : “Chris, I know you can show images where stuff looks clearly broken. BUT there are obviously many shows and projects being delivered under the ACES 1.X standard and I can’t say I am noticing some crazy problems anywhere.”

I would answer the following :

  • I already explained this in 2021 : most movies that claim to be ACES projects actually have their plates encoded in ACES2065-1 and use their own picture formation (not the ACES Output Transforms).
  • And for the projects that use them, here are some of the most extreme examples I was able to find (from The Wild Robot) :
[Images : 160_pictureFormation_0190 to 0210, wildRobot]

Some feedback I am ready to accept at this point is the following : “Chris, you have mentioned a few times stuff looking good. But what does good even mean ?”

Well now we’re talking. This is exactly what I want to answer here, by proposing objective criteria rather than the subjective “golden eye” that is too often the norm. So let’s move forward.

My picture formation requirements

First, it is important to acknowledge that there is no perfect picture formation. It is a game of compromises. If you optimize for one thing, you sacrifice other things. For instance, you may favor saturation over “smoothness” or an accurate inverse instead of a “pleasing” picture.

It took me several years to find these requirements. They are the result of comparing hundreds of images through different picture formations and discussing them online. Before diving in, I will share some tips to evaluate them properly :

  • Test as much as possible on a great variety of footage. The broader your samples, the better.
  • Try increasing the exposure a lot. It is very important to push the limits.
  • Try grading. All our work is being done under the picture formation, so we shall not fight it.
  • I never test invertibility since we do not need it in a full CG workflow. That makes things easier for us.
  • It is critical to constantly compare with other LUTs. Because we have to fight visual adaptation.

Visual adaptation means that if you stare at an image for long enough, it will eventually start to look good. This is because our visual system constantly adapts to what we are looking at.

It shall not break visual cognition

This is my main requirement. On an animated feature, where hundreds of artists work, you cannot afford a picture formation that breaks visual cognition. Because you will be constantly fighting it.

Think of images that you like (from movies or TV shows). I bet that they all have one point in common : they do not break visual cognition.

Defining visual cognition is not an easy task : I already tried once and failed. So let me start with the simplest example I can think of : a gradient (or ramp).

If we can all agree that a gradient should look like a gradient, then maybe we can move forward. And yes I can picture you saying : “But Chris, you’ve gone mad. What the heck are you talking about ? Of course a ramp should look like a ramp !”

Let’s say I want to make a blue gradient for a sky in a shot. Something like that (this is “Candidate A”) :

[Image : 160_pictureFormation_0220_blueGradient_FHD]

This gradient goes from (0, 0, 0) to (0, 0, 20) using a BT.709 blue primary. Now let’s have a look at the same gradient with another picture formation released in 2025 (let’s call it “Candidate B” for now) :

[Image : 160_pictureFormation_0230_blueGradient_FHD]

If your display is correctly calibrated, you should see a line cutting off the transition. A disruption. Not ideal for a gradient though, right ? Now think of all the gradients we have to deal with in CG :

  • Sky ? A gradient.
  • Specular roughness ? A gradient.
  • Glossy reflection ? A gradient.
  • Defocus ? A gradient.
  • Glow ? A gradient.
  • Light decay ? A gradient.
  • Blocker falloff ? A gradient.
  • Volumetric anisotropy ? A gradient.
  • Spotlight cone ? A gradient.
  • Subsurface ? A gradient.
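
By the way, this ramp test is trivial to reproduce. Here is a minimal sketch that generates the scene-linear blue gradient from above (writing it out as an exr, with OpenImageIO or similar, is left to you) :

```python
import numpy as np

# A horizontal gradient from (0, 0, 0) to (0, 0, 20) in scene-linear BT.709 :
# a pure blue primary swept over a large range. Feed it through any candidate
# picture formation and look for disruptions in the transition.
width, height = 1920, 256
ramp = np.zeros((height, width, 3), dtype=np.float32)
ramp[..., 2] = np.linspace(0.0, 20.0, width)[None, :]
```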

You might think : “Why are we spending so much time on ramps ? All of this is kind of obvious…” Well, you cannot imagine the effort that it takes to get there. Because a color “specialist” might come up with these explanations in a debate :

  • But a colorist can fix it.
  • But the BT.709 blue primary is an edge case : a sky would never have that color.
  • But does this picture formation invert perfectly ?

To help you understand the possible consequences of such a disruption, I share below a CG render using a BT.709 blue primary for the color of the lights. If you look carefully at the glossy surfaces, you will spot a hard line (like on the table, mirror frame and faces).

[Images : 160_pictureFormation_0232 and 0234, blueMirror]

Similar to the gradient images, I will share another simple example : a sphere should look like a sphere. Hopefully we can agree on this as well !

[Images : 160_pictureFormation_0240 and 0250, spheres]

In the examples above, the spheres are pure blue, red and yellow BT.709 chromaticities. The plane is a green BT.709 primary. And if you pay attention, the spheres have a better shaping with “Candidate A”.

I personally like to use very simple examples to get to the essence of the issue. They also make the facets we are trying to identify more obvious than with much more complex stimuli data. Visual cognition is so hard to understand and explain that I think the best approach is to keep it as simple as we can.

And if you like some mind-bending ideas, these examples of ramp and spheres are in the end the same. Isn’t a ramp just a cylinder seen from a top view ?

It shall preserve tonality

At this point, I can imagine some feedback : “But Chris, it is easy. You are trading chrominance for luminance. Candidate A looks less saturated and probably cannot reach the display primaries.”

And my answer would be : “Yes, you are totally right. Remember about how I talked about compromises earlier ? That is the main one in my opinion : in a picture formation, tonality cannot be compromised.” And if reaching the display primaries is an obstacle to it, well this is a trade-off I am willing to accept.

Do you remember the previous light sabers examples ? How some images had a complete loss of tonality ? Back in February 2021, Jed Smith shared a visualization in Nuke that makes this issue easy to understand. I think it is worth sharing again :

[Images : 140_misconceptions_1230 to 1270, gamut visualization]

This example would probably trigger the following comment : “But Chris, if we cannot reach the display primaries, we are losing saturation. Isn’t that an issue ?”

We already discussed what “tone” meant in my previous post and I won’t go over that again. But all I can say is that we do NOT need to reach the display gamut primaries (the corners of the cube we have seen earlier) to get a colorful image.

For example, the video game below uses AgX as a picture formation (which does not reach the display primaries) and it does look colorful :

And believe me, it has taken me so much time to take that leap. I come from an animation background where our images have the most outrageous saturation. But that is not the right way to go : tonality is a key component to avoid cognitively dissonant images. Otherwise why would we call it “Tone Mapping” ?

I will finish this paragraph with one more example. I think it will nicely wrap up the two points I made above about visual cognition and tonality :

[Images : 160_pictureFormation_0260 to 0280, spheres]

Here is a recap :

  • Those spheres have ACEScg primaries and secondaries.
  • “Candidate A” preserves much more tonality than “Candidate B”.
  • The red and yellow spheres with “Candidate B” do not read as spheres.

And if you still have doubts, I have added a grayscale image because that is our “reference” for tonality. When you remove colors, 99% of our issues are basically gone. Grayscale images are our only ground truth. So compare the two images carefully and ask yourself : “Which one is closer to our ground truth ?”

Our main issue here is that no one has defined exactly what tonality (or “tone”) means. Is it luminance, brightness, lightness, brilliance or value ? What is the actual metric ?

An explanation that really helped me move forward is that chrominance and luminance are on the same plane. So there must be some kind of trade-off. And trying to reach the corners of the display gamut should NOT be prioritized over forming “pleasing” images.

It shall be smooth

This requirement is very similar and related to the previous two, but I think it is worth spending a bit of time on it.

[Images : 160_pictureFormation_0290 to 0320, eiskoLouise]

The footage above has massively helped me to evaluate different picture formations. It is a render of the Eisko Louise model. I tried to light this scene in the most holistic way possible with spot lights and volumetrics.

And just to be clear, yes, this is the same as the ramp test but in a CG render. Hopefully it shows even more clearly the possible issues if the picture formation is not solid enough.

The most obvious difference is of course the light sources themselves, where a disruption of tonality has broken the visual cognition. In the red example, we can even spot some Mach bands around the light source.

But I would suggest you also compare carefully the forehead, cheeks, lips and chin of Louise (anywhere where the light impacts directly) between both candidates. Stunning, right ?

It shall not pass the g0 threshold

Before explaining this requirement, we should define what the g0 threshold is. Interestingly enough, this requirement was mentioned in the ACES 2.0 document workspace :

highlights shall desaturate at a certain point for low-saturation things and less so for items that are bright and saturated (e.g. neons, car taillights, lightsabers, etc.) – (how do we determine the threshold? – is this purely subjective? can we make it objective?)
Output Transforms Architecture VWG

Even if the wording is not super accurate, the idea is there. When you increase the exposure of an image, reflective objects will look emissive if the picture formation is not properly engineered. This means that you have passed the g0 threshold.

This concept originally comes from Ralph M. Evans, a physicist who worked at the Eastman Kodak Company.

“As the luminance is increased from G0, lightness and hue continue to strengthen and a new perception appears in the central stimulus that […] can best be described […] as though it were fluorescent.”
— Ralph M. Evans quoted in this pdf from the Munsell Color Science Laboratory

An easy way to test this is to have a look at different stripes of exposure of an image. This is how I understood why the g0 threshold is so important and why the “path to maximal brightness” is a key part of a picture formation.

[Images : 160_pictureFormation_0330 and 0340, legoSailors]

If you look carefully at the legs of our Lego sailor, you will see that they look emissive with “Candidate B”, even though this figurine clearly does not have any emission in its material ! And yes, there is a thin line between the two images, but I have found this threshold to be a very good indicator of the quality of a picture formation.

It looks unnatural if something which is clearly a reflective object suddenly looks like as if it would emit color. This is a perception based threshold named g0 by Evans.
— Daniele Siragusano in this video

[Images : 160_pictureFormation_0350 and 0360, grinch]

Same observation as above. If you look at the bike and the side-car, they appear emissive with “Candidate B”. This is kind of an issue for reflective (or in this case metallic) objects.

Remember, there are NO edge cases. So take advantage of these exposure stripes and compare carefully your footage. Exposure stripes are incredibly helpful to evaluate a picture formation (as you can see in Liam Collod’s picture lab website).
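
Since exposure is just a gain in scene-linear, such stripes are trivial to generate. A minimal sketch (the stop values are arbitrary, and the input is assumed to be a float scene-linear image) :

```python
import numpy as np

def exposure_stripes(img, stops=(-2, 0, 2, 4, 6)):
    """Split a float scene-linear image into vertical stripes, each pushed
    by a different number of stops : a quick probe for the g0 behaviour."""
    h, w, _ = img.shape
    out = img.copy()
    edges = np.linspace(0, w, len(stops) + 1).astype(int)
    for (x0, x1), stop in zip(zip(edges[:-1], edges[1:]), stops):
        out[:, x0:x1] *= 2.0 ** stop   # exposure = a simple scene-linear gain
    return out
```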

The g0 threshold could possibly be re-stated as “a plausible purity/exposure ratio”, something that has been referred to as “pictorial exposure”.

It shall not break polarity

This requirement is a bit difficult to explain but it is related to g0. It can be defined like this : “at equal energy, a color cannot appear brighter than its achromatic corresponding value.” Basically, the chromatic strength cannot overcome the achromatic intensity. Because a picture is worth a thousand words :

[Images : 160_pictureFormation_0370 and 0380, polarity]

If you really pay attention to the highlight area, you should see a dark ring around the specular with “Candidate B”. Which is basically… another disruption of a gradient ! As if the red color of the sphere exceeded the white specular in intensity.

Maybe at this point you are thinking : “Wow, Chris really lost it. We are nitpicking the specular of a red sphere…” But I can promise you that all of these requirements are important and will help you to improve the quality of your work. Here is the most extreme example of polarity I could find :

[Images : 160_pictureFormation_0390 and 0400, smekal]

Now pay attention because this is where it gets really interesting. Remember how I desaturated the spheres’ example earlier ? This was a grade pre-image formation. Here I am going to do the same operation but post-image formation. Check this out :

[Images : 160_pictureFormation_0403 and 0406, smekal]

With “Candidate B”, the blue light on the pool table is completely gone ! And if someone came up with the argument : “But Chris, you were not in the bar when this picture was taken. How would you know which one is correct ?”, I would reply that a picture is NOT a depiction of “a scene as we were standing there” !

At this point, the only thing I know and care about is that I can cognize one image while the other one breaks my brain.
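
For clarity, here is a minimal sketch of the desaturation operation itself, assuming Rec.709 luma weights (the exact weights and grading tool used for the images above may differ). The only thing that changes between the two demos is whether it runs before or after the picture formation :

```python
import numpy as np

REC709_LUMA = np.array([0.2126, 0.7152, 0.0722])

def desaturate(rgb, amount=1.0):
    """Lerp each pixel towards its luma-weighted grey."""
    grey = (rgb @ REC709_LUMA)[..., None]
    return rgb + amount * (grey - rgb)

# Pre-formation  : picture_formation(desaturate(scene_linear))
# Post-formation : desaturate(picture_formation(scene_linear))
# If the formation respects polarity, the post-formation grade should not
# make picture content (like the blue pool-table light) vanish.
```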

I also find the two images below fascinating to compare. Not only the difference of color (the orange salmon versus the more pleasing orange-yellow of “Candidate A”) but the relationship between the emissive part and the smoke. It is like the grey smoke reads as a separate form with “Candidate B”. Polarity again !

[Images : 160_pictureFormation_0410 and 0420, explosion]

If you want to know more about the topic of polarity, you might want to dig into this thread.

It shall not be chromaticity linear

Did you observe a difference of color between the two candidates in the explosion image above ? This is what this requirement is about. Somehow we need a deviation of the different hues (I used to call them “hue shifts”) on their “paths-to-white”. Otherwise a “neutral” picture formation will make fires look salmon :

[Images : 160_pictureFormation_0430 and 0440, filmFire]

In my opinion, hue-path bendings are necessary to form a pleasing picture. Yes, I can hear you scream at me : “But Chris, what are you saying ? I set a color in my scene and another one shows up on my screen ?” Short answer is : “Yes, we need this. We ABSOLUTELY need this”.

Let’s roll back a bit. Back in November 2020, I was pointing out an issue with the hue skews in ACES 1.X. And almost five years later, it seems I have completely changed my mind. I understand the confusion. Let me try to make a summary :

  • We definitely need carefully engineered hue-path bendings in the picture formation (see the toy sketch below).
  • The issue with ACES 1.X is that its hue skews are just an accidental consequence of the transform.
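
Here is the toy sketch mentioned above. It is NOT how any real DRT is built, but it shows what a deliberate (rather than accidental) hue deviation along the path-to-white could look like :

```python
import colorsys

def toy_path_to_white(rgb, exposure):
    """A toy, deliberately engineered hue-path bend (purely illustrative)."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    v = min(v * exposure, 1.0)      # brighten towards the display maximum
    h = (h + 0.04 * v) % 1.0        # deliberate hue deviation, e.g. fire red -> yellow
    s = s * (1.0 - 0.6 * v)         # purity drops along the path-to-white
    return colorsys.hsv_to_rgb(h, s, v)

for ev in (0.25, 0.5, 1.0, 2.0, 4.0):
    print(toy_path_to_white((1.0, 0.25, 0.05), ev))
```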

Just like our previous images, you may notice that fires get a “salmon” color with “Candidate B” :

[Images : 160_pictureFormation_0450 to 0480, fire]

And the truth is that NO ONE on the planet has figured out the perceptual hue paths. So our best option for now is a system that allows some possible tweaks.

[Image : 160_pictureFormation_0490_huePaths_FHD]

This diagram from Wikipedia shows purity-on-hue lines in the CIE 1931 chromaticity diagram. As you can see, there is no consensus on the matter. It is also worth pointing out that these studies measure simple flat-field stimuli and make generalized assumptions based on those measurements.

They say nothing about complex stimuli like images.

And as a side question, can you imagine the complexity of what we are trying to address ? This diagram is a top view of an actual 3d model where each color has to go down a certain path at a certain rate… Insane.

It shall respect the air material (e.g. atmosphere)

This is another requirement that is difficult to explain, and it also revolves around the ideas of g0 and polarity. But basically, something absolutely fascinating to observe is images with volumetrics such as haze, smoke or mist.

If the picture formation has not been properly engineered, you may see objects “punching through” the atmosphere in unexpected ways.
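
A tiny worked example of why this matters, with a simple Reinhard-style curve standing in for the tonescale (any smooth monotonic curve shows the same behaviour) : the additive haze offset should shrink smoothly through the formation, never flip sign or jump around.

```python
import numpy as np

def tonescale(x):
    return x / (x + 1.0)                   # Reinhard-style stand-in

subject = np.array([0.02, 0.5, 12.0])      # dark, mid and very bright object
haze = 0.3                                 # flat additive in-scattering

lift = tonescale(subject + haze) - tonescale(subject)
print(lift)  # positive everywhere, smoothly decreasing : nothing punches through
```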

[Images : 160_pictureFormation_0500 and 0510, airMaterial]

Because volumetric effects are basically an addition (or offset), it becomes quite noticeable if the mechanics at play do not respect that. Here are some live-action examples in a night club where one of the crowd members will “punch through” the volumetrics depending on the picture formation used :

[Images : 160_pictureFormation_0520 to 0550, fmatas]

And again, you do not need extreme examples to illustrate this issue. Just having some vegetation (such as some leaves or grass) hit by the sunlight with some atmosphere might just tear apart your picture formation.

[Images : 160_pictureFormation_0560 and 0570, treesAtmos]

And interestingly, you may observe this phenomenon through any transparent object (such as a glass of milk, a glow or some smoke). So pay attention to those !

You may want to read this thread if you want to dig a bit more.

It shall fit the cube

Remember how we discussed at the beginning of this article that our monitors are basically cubes ? Well, a good test is to check how your formed image fits the cube (or not). It allows you to visualize if there are any kinks or disruptions, if it reaches the corners, if the ramps are smooth…

[Images : 160_pictureFormation_0580 and 0590, display]

I think this is an interesting and complementary way to check that things go as planned but it cannot substitute actual viewing of images.
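
Such a check is easy to sketch with matplotlib : scatter the formed (display-referred) pixels inside the unit cube and look for kinks, clipped walls or broken ramps. The random data below just stands in for a real formed render :

```python
import numpy as np
import matplotlib.pyplot as plt

formed = np.random.rand(64, 64, 3)      # stand-in for an image after formation
rgb = formed.reshape(-1, 3)

ax = plt.figure().add_subplot(projection="3d")
ax.scatter(rgb[:, 0], rgb[:, 1], rgb[:, 2], c=np.clip(rgb, 0, 1), s=2)
ax.set_xlabel("R"); ax.set_ylabel("G"); ax.set_zlabel("B")
plt.show()
```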

This was the last requirement on my list. As you can see, those are pretty much oriented towards forming a pleasing image. This is actually my main goal. Of course, there are other factors at play such as the SDR/HDR behaviour and the support of wide gamut color spaces.

But I will say it one last time : none of them should go against our main requirement of crafting beautiful images because we are… image makers !

What are our options then ?

Since 2017, we have learned a lot about picture formations and new color management workflows have emerged. I have tried to list the main ones :

| Name | Author | Date | Comments |
| --- | --- | --- | --- |
| ARRI K1S1 | Harald Brendel | 2011 | THE most used LUT on the planet (ARRI Alexa workflow). |
| ACES 1.0 | AMPAS | 2016 | Color Encoding System developed by the Academy. |
| Filmic | Troy Sobotka | 2017 | Original Blender Color Management used on this movie. |
| RED IPP2 | Graeme Nattress | 2017 | RED Image Processing Pipeline explained here. |
| Sony Venice | Sony / Picture Shop | 2022 | LUT files made in partnership with Picture Shop. |
| ARRI Reveal | Sean Cooper | 2022 | The new ARRI Alexa35 workflow described here. |
| Tony | Tomasz Stachowiak | 2023 | A cool-headed display transform. |
| AgX Blender | Eary Chow | 2023 | Blender Color Management used on this video game. |
| TCAMv3 | Daniele Siragusano | 2024 | Baselight Color Management Workflow explained here. |
| AgX SB2383 | Troy Sobotka | 2024 | Minimal AgX OCIO config using Linear BT.709. |
| JP-2499 | JP Zambrano | 2024 | A popular picture formation pipeline described here. |
| ACES 2.0 | AMPAS | 2025 | Color Encoding System developed by the Academy. |
| OpenDRT | Jed Smith | 2025 | State-of-the-art Color Management Workflow. |

They are all available through OCIO and they all have their pros/cons. But hopefully this article has given you the keys to evaluate them properly.

I can almost hear you ask : “But hey, Chris, are you going to leave us in the dark ? What about Candidate A and Candidate B ?” Well, let’s put it this way : in 2025, two picture formations got released and, given that I recommend OpenDRT… I will let you guess which is which.

But what about OpenColorIO ? OCIO is “just” a container. Basically, OCIO is the piece of software that allows us to load LUTs easily in our DCC applications. What really makes a difference is the “content” (what you put inside OCIO), not the container.
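
As a minimal sketch of the container at work, assuming OCIO v2’s Python bindings (the color space names are placeholders : they depend entirely on the config you load) :

```python
import PyOpenColorIO as ocio

# Load the config referenced by the $OCIO environment variable.
config = ocio.Config.CreateFromEnv()

# Build a processor between two color spaces declared in the config.
processor = config.getProcessor("ACEScg", "sRGB - Display")
cpu = processor.getDefaultCPUProcessor()

print(cpu.applyRGB([0.18, 0.18, 0.18]))  # one mid-grey triplet through the chain
```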

The ACES topic

I guess a valid question from you would be : “Hey Chris, what is your take on ACES ?” I have been thinking about it for quite a while and I will try to explain as best as I can :

  • Having a color management standard implemented everywhere is great. It gives artists with little knowledge something to start with.
  • ACES was actually my entry point into this wonderful world of color management and I have met the most amazing people on their forum and in the meetings.

But here is the issue :

  • Because it is the only open-source standard out there and it comes from the “Academy of Motion Picture Arts and Sciences” (AMPAS), no one questions it.
  • And because color management is still a black box to most of the VFX industry, artists just blindly accept that it’s great and use it without knowing the problems.

How many times have I heard : “we want to use ACES because it is a standard.”

But somehow, no one ever asks : “what does this standard give us ?” (Troy Sobotka actually tried but no one answered him). Just stop and think about it for a minute. Do you really think that one unique picture formation can be used across the whole industry ? Why would you want your movies to look the same as everyone else’s ?

I will share again this quote that really broadened my perspective on this matter :

I cannot understand why anybody would like to limit all of their productions to use the same output transform. It would be the same as limiting productions to use a single camera. Documentaries, features, animation, hand drawn, all of them have their unique challenges. Do you think film would have flourished in the last 100 years if the Academy had standardised the chemical recipe ? Instead the Academy standardised the transport mechanism. The 35mm perf. And this was exactly the right thing to do. People could innovate and interchange.
— Daniele Siragusano

There is a risk of leveling down in quality if the entire industry uses the same color management. And because ACES comes from the AMPAS, the appeal to authority is very real. How many artists, executives and supervisors out there think that ACES is the only valid and safe solution ?

And this image below (from a RenderMan video) is the perfect illustration of the complete gap between the marketing and the reality. None of these “headlines” are actually visible in this example but in this post-truth era, who actually looks at this image and cares ?

[Image : 160_pictureFormation_0600_renderman_FHD]

There was a proposal for ACES 2.0 to open the system to other Output Transforms (called the “Meta Framework”) and it got rejected by the Technical Advisory Council (TAC) and… Netflix. It was interesting to follow this topic because it raised all sorts of questions such as : “if a project only uses one component of ACES, is it still considered an ACES project ?”

And why do you think Netflix pushed back on it ? They did a lot of advocacy for ACES because it is easier to deal with hundreds of vendors by using ONE standard. Because it is cheaper. Indirectly, ACES has become an imposed norm to reduce costs. And because it only allows ACES Output Transforms, it limits creativity and innovation.

Finally, let me ask you this : why would you use ACES when there are better options out there ? YOU should aim at the BEST color management workflow for your project, not necessarily a “standard”.

[Images : 160_pictureFormation_0610 to 0720 — redXmas, dragon, club, cornell, funfair, blueBar]

You will never meet a DP, a colorist or a DIT who recommends ACES 1.X.

ACES 2.0 Opinion

I will just share some final thoughts about ACES 2.0. I have nothing but respect for the incredible team that met for more than four years (from December 2nd, 2020 to March 12th, 2025) during 184 meetings. Those are top-notch color specialists and experienced supervisors.

But they were given an impossible task : to design a fully invertible picture formation that should be able to reach the display primaries and form “pleasing” images out-of-the-box. Unfortunately those requirements were contradictory and they set the group down the wrong path.

We tried to warn them several times and share our concerns. But there were so many parties involved that it became impossible to make our voices heard. Somehow the group lost sight of what should have been our main goal : crafting beautiful pictures for the centuries to come.

I can even point out the three meetings that diverted the group from their noble quest :

  • Meeting #26, September 15th 2021 : the meta-framework proposal was discussed.
  • Meeting #27, September 29th 2021 : post-TAC meeting, Netflix pushes back on the meta-framework.
  • Meeting #29, October 27th 2021 : CAMs are introduced to the group because “ACES is science”.

That was the turning point where the group got derailed. Because ACES tries to be everything to everyone, it just ends up being an over-complex and average picture formation (a “jack-of-all-trades”).

Standards and professional organizations are motivated by profit and efficiency. This is incompatible with art.

And I will repeat for clarity : the members of the Virtual Working Group were top-notch, the actual issue was the requirements (and the political forces behind them). So can ACES 2.0 form pictures ? Yes, totally. It has been used on this movie for instance.

But is it a good starting point for the artists ? I doubt it. Even Luke Hellwig mentioned in one of the meetings that his research work (the infamous CAM that ACES 2.0 is built with) would not necessarily apply to picture formations !

About OpenDRT

After OpenDRT was withdrawn from the ACES 2.0 candidates back in April 2022, I honestly thought that its development had been canceled. But Jed did not stop. He kept working on it and, between Christmas 2024 and March 2025, around 50 beta versions were pushed until the 1.0.0 release.

When OpenDRT 1.0.0 was designed, the requirements of being “neutral” (whatever that means) and of a picture being a reproduction of “a scene as we were standing there” were not taken into account. We thought they were completely irrelevant to our goal (just like some kind of perfect invertibility).

I fell into these mind traps myself and, after spending years trying to figure them out, I came to this very simple conclusion : those requirements do not matter when it comes to picture formation. I am aware that this claim might go against certain trends in the industry so I will just share one final quote :

I know that most men, including those at ease with problems of the greatest complexity, can seldom accept even the simplest and most obvious truth if it be such as would oblige them to admit the falsity of conclusions which they have delighted in explaining to colleagues, which they have proudly taught to others, and which they have woven, thread by thread, into the fabric of their lives.
— Leo Tolstoy

The claim of this article is NOT that OpenDRT is the perfect, ultimate picture formation. Because no one on the planet has figured out what colour is. The claim is that it is the only open-source solution that empowers image makers to create their own aesthetics.

[Images : 160_pictureFormation_0730 to 0770, portraits]

For instance, in this article I have used a tweaked version of OpenDRT, not one of the presets. And this is where its real interest lies in my opinion. The different modules exposed to the user (Tonescale, Purity, Brilliance and Hue Shift) give all the needed flexibility and control.

Conclusion

Hopefully at this point I have shown that it is illusory to think that ONE picture formation can fit all projects. And this thought is terrifying because it questions the very existence of the ACES Output Transforms.

Picture formation is a delicate craft that should be treated seriously and carefully. And just like live-action projects, where the show LUT is usually a collaboration between the DP, the dailies post house and the DIT during pre-production, a similar process should be used for full CG animated features.

Because the show LUT influences on-set lighting, set design and costume decisions ! And in an era where everything is made to decrease production costs (AI and forced standardization to name a few), we have a responsibility as image makers to push back.

In summary, this article discussed how we shape the image data and what cognitive weirdness results if the data is shaped badly :

  • If data slams into the display cube, you get gradient disruptions.
  • If data doesn’t trend downwards as purity increases, you get g0 threshold disruptions.
  • If purity doesn’t get “compressed”, there are disruptions in “surface and atmosphere perception”.
  • If data doesn’t bend as intensity increases, colors don’t look like the right colors anymore !

And if you’re still not convinced by the examples I shared because they are too simple or full CG, I have added below some live-action footage where we can spot similar issues. Hopefully you now have the proper terminology and insight to spot and name these issues accordingly. Thanks for reading !

[Images : 160_pictureFormation_0790 to 1120, live-action examples]

Last words

A few things to give a bit more context for the article :

  • All images have been encoded for sRGB display (using a gamma 2.2 power function).
  • The attentive reader may have noticed that I do not mention “looks” (or “Look Modification Transforms”).

I will explain briefly why I don’t mention them in the article :

  • Having scene-referred grading operations has proven to be a valid workflow.
  • But generating a proper look is not trivial : you need good grading tools to do so.
  • Also, manipulating data prior to the picture formation has its own limitations.

This is why I found the parametric approach of OpenDRT 1.0.0 so powerful. Because in the end, it all comes down to this : the transform should help the artists do their work. That’s about it…

Acknowledgements

I would not have been able to write this article without the help of these truly one-of-a-kind, amazing people :

  • Troy Sobotka (aka “the idea factory”, who is behind 90% of the ideas presented here)
  • Jed Smith (aka “the stubborn builder”, who built openDRT based on those principles)
  • Zach Lewis (aka “the swiss army man”, who patiently supported us during the whole process)

I would also like to mention the ACES 2.0 Virtual Working Group, which gave us a unique opportunity to learn and share : Alex Fry, Nick Shaw and Kevin Wheatley.

Sources