In this post I’m going to explain something that I had been doing wrong for a while in my at-home graphics programming projects, and show you the noticeable loss in image quality it causes.
The C++ code that generated the data and images for this post is on GitHub: https://github.com/Atrix256/RandomCode/tree/master/sRGBPrecision
sRGB vs Linear
Every image that is meant to be displayed on your screen is an sRGB image; being encoded for display on a monitor is essentially what it means to be an sRGB image.
When doing things like applying lighting, generating mip maps, or blurring an image, we need to be in linear space (not sRGB space) so that the operations give results that appear correct on the monitor.
This means an sRGB image needs to be converted to linear space, the operations can then be done in linear space, and then the result needs to be converted back to sRGB space to be displayed on a monitor.
If this is news to you, or you are unsure of the details, this is a good read on the topic: Linear-Space Lighting (i.e. Gamma)
A small example of why this matters is really driven home when you try to interpolate between colors. The image below interpolates from green (0, 1, 0) to red (1, 0, 0).
The top row interpolates in sRGB space, meaning it interpolates between those colors and writes out the result without doing any other steps. As you can see, there is a dip in brightness in the middle. That comes from not doing the operation in linear space.
The second row uses gamma 1.8. What is meant by that is that the color components are raised to the power of 1.8 to convert from sRGB to linear, the interpolation happens in linear space, and then they are raised to the power of 1.0/1.8 to convert from linear to sRGB. As you can hopefully see, the result is much better and there is no obvious drop in brightness in the middle.
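The two approaches above can be sketched in C++ like this (the post’s real code is in the linked repo; these helper functions are mine, for illustration):

```cpp
#include <cmath>

// Naive approach: interpolate a color channel directly in sRGB space.
float LerpSRGB(float a, float b, float t)
{
    return a + (b - a) * t;
}

// Gamma-aware approach: decode both endpoints to linear with pow(x, gamma),
// interpolate there, then re-encode to sRGB with pow(x, 1/gamma).
float LerpGamma(float a, float b, float t, float gamma)
{
    float linA = std::pow(a, gamma);
    float linB = std::pow(b, gamma);
    float lin = linA + (linB - linA) * t;
    return std::pow(lin, 1.0f / gamma);
}
```

Halfway between green (0, 1, 0) and red (1, 0, 0), the sRGB-space lerp gives 0.5 in the red and green channels, while the gamma 1.8 version gives about 0.68, which is why the top row dips in brightness in the middle.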
Getting into and out of linear space isn’t so simple though, as it depends on your display. Most displays use a gamma of 2.2, but some use 1.8. Furthermore, some people do a cheaper approximation of gamma operations using a value of 2.0 which translates into squaring the value to make it linear, and square rooting the value to take it back to sRGB. You can see the difference between those options on the image.
The last row is “sRGB”, which means it uses a standard formula to convert from sRGB to linear, does the interpolation in linear space, and then uses another standard formula to convert back to sRGB.
You can read more about those formulas here: A close look at the sRGB formula
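In C++ those formulas look something like this (a minimal sketch; the constants come from the sRGB standard, the function names are mine). Values are normalized floats in [0, 1]:

```cpp
#include <cmath>

// sRGB encoded value -> linear value. Near zero the curve is a straight
// line; above the threshold it's a power curve.
float SRGBToLinear(float s)
{
    return (s <= 0.04045f) ? s / 12.92f : std::pow((s + 0.055f) / 1.055f, 2.4f);
}

// Linear value -> sRGB encoded value, the inverse of the above.
float LinearToSRGB(float l)
{
    return (l <= 0.0031308f) ? l * 12.92f : 1.055f * std::pow(l, 1.0f / 2.4f) - 0.055f;
}
```

The linear segment near zero is part of why sRGB behaves a little differently from a pure gamma 2.2 curve in the graphs below.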
The Mistake!
The mistake I was making seemed innocent enough to me…
Whenever I loaded an image containing color information (I’m not talking about normal maps or roughness maps here, just things that are colors), as part of the loading process I’d take the u8 image (aka 8 bits per channel) and convert it from sRGB to linear, giving a result still stored in u8.
From there, I’d do my rendering as normal, come up with the results, convert back to sRGB, and go on my way.
Doing this, you can look at your results and think “Wow, doing lighting in linear space sure does make it look better!” and you’d be right. But confirmation bias bites us a bit here. We are missing the fact that converting to linear and storing the result in 8 bits loses quite a bit of precision in the dark colors.
Here are some graphs to show the problem. Blue is the input color, red is the color after converting to linear u8 and then back to sRGB u8. Yellow is the difference between the two. (If you’re wondering why I’m showing the round trip instead of just one way, think about what you are going to do with the linear u8 image. You are going to use it for something, then convert it back to sRGB u8 for the display!)
As you can see, there is quite a bit of error in the lower numbers! This translates to error in the darker colors, or in any color that has a low-valued color component.
The largest amount of error comes up in gamma 2.2. sRGB has lower error initially, but more error after that. I would bet that was a motivation for the sRGB formulas: to spread the error out a bit more evenly at the low end.
Even though gamma 2.2 and sRGB looked really similar in the green to red color interpolation, this shows a reason you may prefer to use the sRGB formulas instead.
Another way of thinking about these graphs is that there are quite a few input numbers that get clamped to zero. At gamma 1.8, an input u8 value of 12 (aka 0.047) has to be reached before the output is non-zero. At gamma 2.0, that value is 16. At gamma 2.2 it’s 21. At sRGB it’s 13.
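Those cutoff values are easy to check with a sketch like this (my own helper, not from the post’s repo; note it truncates to u8 rather than rounding, which is what produces the numbers above):

```cpp
#include <cmath>
#include <cstdint>

// For a simple power-curve gamma, find the smallest u8 sRGB input whose
// linear encoding, truncated to u8, is non-zero. Every input below this
// value gets crushed to black by the sRGB u8 -> linear u8 conversion.
uint8_t FirstNonZeroLinearU8(float gamma)
{
    for (int i = 0; i < 256; ++i)
    {
        float linear = std::pow(float(i) / 255.0f, gamma);
        if (uint8_t(linear * 255.0f) > 0)
            return uint8_t(i);
    }
    return 255;
}
```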
Showing graphs and talking about numbers is one thing, but looking at images is another, so let’s check it out!
Below are images put through the round trip process, along with the error shown. I multiplied the error by 8 to make it easier to see.
Gamma 1.8
Gamma 1.8 isn’t the most dramatic of the tests but you should be able to see a difference.
Error*8:
Gamma 2.0
Gamma 2.0 is a bit more noticeable.
Error*8:
Gamma 2.2
Gamma 2.2 is a lot more noticeable, and even has some sections of the image turning from dark colors into complete blackness.
Error*8:
sRGB
sRGB seems basically as bad as Gamma 2.2 to me, despite what the graphs showed earlier.
Error*8:
Since this dark image was basically a “worst case scenario”, you might wonder how the round trip operation treats a more typical image.
It actually has very little effect, except in the areas of shadows. (These animated gifs do show more color banding than they should, and some other compression artifacts. Check out the source images on GitHub to get a clean view of the differences!)
Gamma 1.8
Error*8:
Gamma 2.0
Error*8:
Gamma 2.2
Error*8:
sRGB
Error*8:
So What Do We Do?
So, while we must work in linear space, converting our sRGB u8 source images into linear u8 source images causes problems with dark colors. What do we do?
Well there are two solutions, depending on what you are trying to do…
If you are going to be using the image in a realtime rendering context, your graphics API will have texture format types that let you specify that a texture is sRGB and needs to be converted to linear before being used. In DirectX, for instance, you would use DXGI_FORMAT_R8G8B8A8_UNORM_SRGB instead of DXGI_FORMAT_R8G8B8A8_UNORM.
If you are going to be doing a blur or generating mip maps, one solution is that you convert from sRGB u8 to linear f32, do your operation, and then convert from linear f32 back to sRGB u8 and write out the results. In other words, you do your linear operations with floating point numbers so that you never have the precision loss from converting linear values to u8.
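As an example of that workflow, a 50/50 blend of two sRGB u8 values could look like this (a minimal sketch using the standard sRGB formulas; the helper names are mine):

```cpp
#include <cmath>
#include <cstdint>

// Standard sRGB decode / encode, operating on normalized floats in [0, 1].
float SRGBToLinear(float s)
{
    return (s <= 0.04045f) ? s / 12.92f : std::pow((s + 0.055f) / 1.055f, 2.4f);
}

float LinearToSRGB(float l)
{
    return (l <= 0.0031308f) ? l * 12.92f : 1.055f * std::pow(l, 1.0f / 2.4f) - 0.055f;
}

// Average two sRGB u8 channel values correctly: decode to linear f32, do the
// operation in float, and only quantize back to u8 at the very end.
uint8_t AverageSRGBU8(uint8_t a, uint8_t b)
{
    float linA = SRGBToLinear(float(a) / 255.0f);
    float linB = SRGBToLinear(float(b) / 255.0f);
    float avg = (linA + linB) * 0.5f;
    return uint8_t(LinearToSRGB(avg) * 255.0f + 0.5f); // round, don't truncate
}
```

Because the intermediate value stays in f32, the dark end of the range never gets crushed the way it does when linear values are stored in u8.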
Apparently you can also do your operations in u16 instead of u8, or in f16, which is a half-precision float.
The takeaway is that you should “never” (there are always exceptions) store linear color data as uint8 – whether in memory, on disk, or anywhere else.
I’ve heard that u12 is enough for storage though, for what that’s worth.
Links Etc
Thanks @romainguy for suggesting a color interpolation for the opening image of this post. It’s a great, simple example for seeing why sRGB vs linear operations matter.
Here is some more info on sRGB and related things from Bart Wronski (@BartWronsk):
Part 1 – https://bartwronski.com/2016/08/29/localized-tonemapping/
Part 2 – https://bartwronski.com/2016/09/01/dynamic-range-and-evs/
And this great presentation from Timothy Lottes (@TimothyLottes)
Advanced Techniques and Optimization of HDR Color Pipelines
This from Matt Pettineo (@MyNameIsMJP) is also very much on topic for this post:
https://www.gamedev.net/forums/topic/692667-tone-mapping/?page=3&tab=comments#comment-5360306
“Don’t Convert sRGB U8 to Linear U8”. OK. But what if the platform does not have floating point textures or other more-than-8-bit formats?
What platform are you on? There should be an sRGB or otherwise gamma-corrected texture format. The conversion happens on the GPU, as I understand it.
The author has illustrated an issue which I have encountered and dealt with. Here is a solution: every shader supports floating point, so take the input as sRGB, convert it to linear in the shader before any calculation, do the lighting or any other calculation, then convert back to sRGB to store in the back buffer.
Surely if you store textures in linear space then that makes your bilinear filtering correct though? If I store them in sRGB and only convert to linear after sampling them I’m going to get a different manifestation of the darkening effect described at the top of this page? Is that less noticeable than loss of precision at the low end?
If you store your textures in linear space at 8 bits per channel, you’ve destroyed your dark colors, which you’ll see when the image is converted back to sRGB for display. If you store them in a higher-precision format, like f32 or f16, then it will be OK, but storage costs are higher. Converting to linear after sampling will indeed be a problem too, and will give you wrong-looking results. When you create the texture, you tell the driver / GPU that it’s an sRGB format, and when the GPU actually samples the texture it handles things appropriately, so that you get a properly converted and filtered value for use in your shaders.
In answer to my own question, it appears that there are sRGB versions even of the DXT compressed formats, so the GPU can convert the individual samples and then interpolate in linear space (see EXT_texture_sRGB in OpenGL). I’ll be using those from now on. Thanks for the post.
Yep, that is the way to go! No problem, glad you found it useful (: