Dogway's grading shader (slang)

You mean this? You’ll clip tho:

That’s indeed the triangle I mean. So from your answer I understand that setting the “Blue-Green Tint” bg parameter to -0.10 corresponds to a shift of the color gamut “triangle”, more precisely a downshift of the G and B coordinates?

Moving the “Blue-Green Tint” bg parameter to -0.10 doesn’t seem to introduce clipping, so I’m not quite understanding why changing the chromaticity values for G and B would? (I understand theoretically for Blue it would be the case, but see below…)

Or is it actually a case where only the green corner shrinks? I.e. would moving the “Blue-Green Tint” bg parameter to -0.10 only correspond to a lowering of the “G” coordinate: the gamut gets less saturated in green, so all values that are a mixture of green and blue get less saturated with green? Would that be the right analogy?

Edit: shrinking only green would also affect the green saturation for all values that are a mixture of red and green, which setting the “Blue-Green Tint” bg parameter to -0.10 doesn’t seem to do (at least not very noticeably). So I guess I still don’t understand how lowering “Blue-Green Tint” relates to the chromaticity diagram? Any help in understanding this relation would be great.

Edit 2: I’m aware that the triangle is a 2D simplification of a 3D gamut space (because of mixing with black and the white point), just so you know.

I think you are only changing the blue primary coordinates in the direction of the blue-green vector (arrow). The green and red primaries stay in place.
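To make that concrete, here is a toy sketch of the idea (this is not grade’s actual code, just an assumption of how a blue-green tint could move the blue primary along the blue→green direction in CIE xy, using the sRGB primaries):

```python
# Toy model only: assumes the tint linearly offsets the blue primary along the
# blue->green direction in CIE xy. Grade's real math may differ.
blue  = (0.150, 0.060)   # sRGB blue primary (x, y)
green = (0.300, 0.600)   # sRGB green primary (x, y)

def shifted_blue(bg):
    # bg > 0 pulls blue toward green (the triangle shrinks along that edge);
    # bg < 0 pushes it past the sRGB corner, i.e. outside the display gamut.
    return (blue[0] + bg * (green[0] - blue[0]),
            blue[1] + bg * (green[1] - blue[1]))

print(shifted_blue(-0.10))  # (0.135, 0.006): outside the sRGB triangle, so it would clip
```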

Edit 2: And that’s why a LUT is better than a simple matrix.

1 Like

OK thanks, so then we arrive at the “problem” that even wide gamut (DCI, Adobe) has blue fixed at the sRGB primary of 0.15, 0.06? I.e. even with “wide” gamut monitors we’ll clip in blue if we change the CRT gamut (grade shader) to use any blue values more saturated than sRGB blue? :frowning:

Edit: I think one of the caveats with the LUT is that with DisplayCAL you need to set a panel technology type, which for some laptops or even desktop monitors is not known with certainty. That makes it a trial-and-error procedure, where you may end up with a calibration that is less accurate than no calibration at all, because it was done with the “wrong” panel type selected.

Edit 2: I’m looking into the various monitor options, so I was contemplating the LG-27GL850. But from reading the DisplayCAL forums, it uses nano-IPS, for which there are NO correction curves available in DisplayCAL. In other words, this panel can’t be usefully calibrated through DisplayCAL, at least as far as my understanding goes.

Hence my interest in gamut mapping. You compress values to keep the perceptual relative saturation (hue aligned) instead of clipping. This might work better for highly saturated sources like games, but it’s hard maths and totally undocumented (math-wise). I haven’t studied maths since school, a long time ago, so for me it’s hard to guess how to intersect a plane with a 3D gamut and things like that.

The Wikipedia link I shared stops just short of explaining the compression algorithm. Here, in an AMD brochure, there is a very graphical explanation of how the compression is performed, but no code or maths.
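For the basic idea (not the AMD method, just a minimal sketch of “pull toward neutral instead of clipping per channel”): an out-of-range linear RGB value can be desaturated toward its own luma just enough to fit, which preserves the hue at the cost of saturation. A proper compressor would add a smooth knee so only the most saturated values are affected; this hard version is only meant to show the geometry.

```python
import numpy as np

def desaturate_into_gamut(rgb):
    """Pull an out-of-range linear RGB triplet toward its own luma until it
    fits [0, 1]. Illustrative only: a real gamut compressor rolls off smoothly
    above a knee instead of scaling every out-of-gamut colour all the way in.
    """
    rgb = np.asarray(rgb, dtype=float)
    luma = float(rgb @ np.array([0.2126, 0.7152, 0.0722]))  # Rec.709 weights as the neutral anchor
    t = 1.0  # fraction of the original chroma we can keep
    for c in rgb:
        if c > 1.0:
            t = min(t, (1.0 - luma) / (c - luma))
        elif c < 0.0:
            t = min(t, (0.0 - luma) / (c - luma))
    return luma + t * (rgb - luma)

print(desaturate_into_gamut([1.20, 0.40, -0.05]))  # same hue direction, less saturated
```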

One option would be to create a LUT to convert from a specific gamut (like a CRT gamut) to sRGB. DisplayCAL (Argyll, really) uses CIECAM02 I think, so that’s an option; I just haven’t delved into how to use a user-predefined characterization.

As for the correction files, I (and most people, really) don’t fuss much about them, for the same reason: not all devices are the same anyway. There are worse offenders, like surround luminance.

I read a few people raising questions about the color temperature of grade, so I went ahead and added a mathematically accurate function in CIE xy.

Colors are now more accurate as far as I could check, but for a no-op still use D55, as D65 is slightly blue. I don’t want to add source-to-target color temperature conversion since it’s a lot of code, but I guess we can subtract the difference for our target temperature. That is, if we want a temperature of D93 (8942K), then use 7942K. Do you think I should perform this under the hood, or what’s your opinion?
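For reference, the standard CIE daylight-locus approximation gives white point xy coordinates directly from a correlated color temperature; I’m only assuming grade’s new function is something along these lines, but it shows what “mathematically accurate in CIE xy” can look like:

```python
def daylight_xy(cct):
    """CIE daylight-locus approximation, valid roughly from 4000K to 25000K.
    Sketch only: I don't know grade's exact implementation."""
    if cct <= 7000.0:
        x = 0.244063 + 0.09911e3 / cct + 2.9678e6 / cct**2 - 4.6070e9 / cct**3
    else:
        x = 0.237040 + 0.24748e3 / cct + 1.9018e6 / cct**2 - 2.0064e9 / cct**3
    y = -3.000 * x * x + 2.870 * x - 0.275
    return x, y

print(daylight_xy(6504))  # ~ (0.3127, 0.3291), i.e. D65
print(daylight_xy(7942))  # the "ask for 7942K to land on a D93 target" workaround above
```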

Updated grade is in official slang repo.

2 Likes

IMO it’s a good idea for this to be done under the hood since the user probably doesn’t know that they should subtract the difference for the target temperature.

2 Likes

OK, this changes the current behavior, at least for you guys that are up to date, but I will assume the content (games) is D65, so this should better reflect the target temperature; as far as I could test it’s a bit more neutral (less blue).

I feel bad for hunterk as I said I wasn’t going to do many changes >.<

2 Likes

Hi @Dogway, I’m working the grade shader into the mega bezel and I wanted to know which LUT files are supposed to be used for LUT1 and LUT2?

2 Likes

Any one that matches the LUT size is fine. My LUT size defaults are 16 and 64, so for simplicity I use the “reshade\shaders\LUT\16.png” and “reshade\shaders\LUT\64.png” identity LUTs as no-ops.
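If you ever need to regenerate such a no-op LUT yourself, here is a small sketch assuming the usual ReShade-style layout (a horizontal strip of size×size tiles, red along x within a tile, green along y, blue per tile); check the layout your preset expects before relying on it:

```python
from PIL import Image

def identity_lut(size):
    """Build a no-op LUT strip: width = size*size, height = size."""
    img = Image.new("RGB", (size * size, size))
    step = 255 / (size - 1)
    for b in range(size):            # one size x size tile per blue level
        for g in range(size):
            for r in range(size):
                img.putpixel((b * size + r, g),
                             (round(r * step), round(g * step), round(b * step)))
    return img

identity_lut(16).save("16.png")
identity_lut(64).save("64.png")
```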

2 Likes

Sort of on topic:

Is there a way to do gamma correction as the very last step, after scanlines, mask, glow etc?

All of these things alter the gamma, so if we do gamma correction first we don’t have the same gamma after applying scanlines etc. Wouldn’t it make sense to do all the other stuff first, then correct gamma?

Yeah, you’d either have to add gamma correction to the final pass or add another pass at the end for gamma correction.

Makes the most sense to incorporate it into the last pass though.

You mean scanline dynamics affect gamma? Well, that’s what they’re supposed to do, I guess. You can compensate using grade’s gamma; after all, gamma adjustments compose (the exponents simply multiply), so it doesn’t matter in what position you place them, unless you need an operation in a certain gamma space.

I’m emulating signal gamma, and as far as I’m concerned that happens before the scanlines, so it’s wise to keep some parity with what the hardware does.
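A quick numeric check of the “placement doesn’t matter for the gamma itself” point (assuming plain power-law adjustments; anything that happens in between is of course evaluated in whichever gamma space it sits in):

```python
x = 0.5
a, b = 2.5, 1.0 / 2.2
# Pure power-law gamma steps commute: the exponents just multiply,
# so applying them in either order gives the same result.
print((x ** a) ** b)   # ~0.4549
print((x ** b) ** a)   # ~0.4549
print(x ** (a * b))    # ~0.4549
```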

So is CRT gamma more accurately described as “CRT signal gamma?”

Yes, I would think this “signal gamma” is what brands cranked up to compensate for the dim/dark surround, as 2.222 is more for daylight. As far as I’m concerned scanlines are something that happens very close to the end of the chain, so they would go after this signal gamma.

I also want to take the opportunity to announce here a video guide I just made about Calibration and Color Management. As you might already know, I do computer graphics and develop tools. The video guide includes a calibration and profiling tutorial for DisplayCAL, as well as LUT authoring for several CG packages, but also other media apps like Retroarch, OBS, and so on. If you own a colorimeter and are interested, PM me and I can provide a 33% discount code at Gumroad.

4 Likes

@Dogway I was reading the Poynton paper “The rehabilitation of gamma” here https://poynton.ca/PDFs/Rehabilitation_of_gamma.pdf .

It’s an interesting read (18 pages of high-level stuff on gamma alone), even though most of it is above my level. I was especially interested in two remarks about misconception vs. fact; see page 2 of the paper, quoted below:

Misconception: A CRT is characterized by a power function that relates luminance L to voltage V′: L = (V′)^γ.

Fact: A CRT is characterized by a power function, but including a black-level offset term: L = (V′ + ε)^γ. Usually, γ has a value quite close to 2.5; if you’re limited to a single-parameter model, L = (V′ + ε)^2.5 is much better than L = (V′)^γ.

Misconception: The exponent γ varies anywhere from about 1.4 to 3.5.

Fact: The exponent itself varies over a rather narrow range, about 2.35 to 2.55. The alleged wide variation comes from variation in the offset term of the equation, not the exponent: the wide variation is due to failure to correctly set the black level.

So I was wondering about two things:

  1. whether Grade already accounts for the “black-level offset term” in the gamma equation as above, or whether it would be useful to do so?
  2. A CRT, when properly adjusted, has a black level of about 0.01 cd/m2, whereas a properly adjusted IPS LED panel has a black level of about 0.30 cd/m2 (or worse): 30 times as high. Could this higher black level of the IPS panel be accounted for in grade’s gamma function by using the above black-level offset term? (See the sketch after this list.)
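On question 2, the standard formulation of exactly that idea is the BT.1886 EOTF, which derives the offset from the display’s measured white and black luminance. This is not necessarily what grade does (or should do), just a sketch of how the offset term and the panel’s black level relate:

```python
def bt1886(v, lw=100.0, lb=0.30, gamma=2.4):
    """BT.1886-style EOTF: a 2.4 power law whose offset is derived from the
    measured white (lw) and black (lb) luminance, in cd/m2."""
    a = (lw ** (1 / gamma) - lb ** (1 / gamma)) ** gamma
    b = lb ** (1 / gamma) / (lw ** (1 / gamma) - lb ** (1 / gamma))
    return a * max(v + b, 0.0) ** gamma

print(bt1886(0.0))  # ~0.30 cd/m2: the IPS black level from the question
print(bt1886(1.0))  # ~100 cd/m2: the white level
```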

Hopefully you can shed some light on this :slight_smile:

3 Likes

Thanks for the paper. There are a few things that are a bit ambiguous to me, but that must be because the paper is a bit old (circa 1998).

It took me a while to grasp, but the whole point of the paper is that “offset” or “black level” from which a power-law gamma is described. We already account for this, as we are aware that CRTs didn’t employ a pure power-law gamma function but one with a “linear segment towards black”. This is already done in the moncurve() functions in grade.
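For readers following along, a moncurve-style transfer function looks roughly like this (this is the ACES utility form of a power law with an offset plus a linear segment near black; I’m assuming grade’s moncurve() is along these lines, the exact parameters may differ):

```python
def moncurve_f(x, gamma=2.4, offs=0.055):
    """Forward monitor curve: power law with offset plus a linear toe.
    With gamma=2.4 and offs=0.055 it essentially reproduces the sRGB EOTF."""
    fs = ((gamma - 1.0) / offs) * (offs * gamma / ((gamma - 1.0) * (1.0 + offs))) ** gamma
    xb = offs / (gamma - 1.0)
    return ((x + offs) / (1.0 + offs)) ** gamma if x >= xb else x * fs
```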

They also talk about a gamma of 2.35-2.55 instead of the theoretical 2.222; that’s also my recommendation in the grade presets, but as the paper indicates, it has to do with the implications of the viewing surround, which was typically dim.

There are other black level offsets in the pipeline. There’s the black level pedestal of 7.5 IRE, but that only happens for NTSC-U and it’s invisible to the user.

Then there’s the exposed black level, which is a bit more artistic; here you can emulate the CRT brightness of 0.01 cd/m2, but to be fair that’s at signal level. CRT displays are way more reflective than current LEDs, as we discussed recently; that’s why they look greyish even when turned off. Link

2 Likes

Thanks for the insight, good to know grade already incorporates the adjustments to the gamma curve with the linear segment towards black.

It’s quite interesting to see how important gamma is to correct color reproduction. I was thinking of reproducing a vintage CRT’s gamma of 2.5 theoretically with the current gamma pipeline, but then I realized I don’t quite understand the complete pipeline. Hopefully you can shed some light on this. On purpose, I’m using an RGB intensity value below to see what goes in and what comes out.

Emulator gamma pipeline

  1. BSNES core outputs a raw RGB pixel value (in range 0-255): R(ed)=128
  2. Dogway’s Grade encodes value with CRT gamma 2.5: R(ed) = ((128 / 255) ^ (1 / 2.5)) * 255 = 194
  3. guest-dr-venom gamma_input DECODES it with gamma 2.4: R(ed) = ((194 / 255) ^ (2.4)) * 255 = 132
  4. guest-dr-venom gamma_out encodes it with gamma 2.4: R(ed) = ((132 / 255) ^ (1 / 2.4)) * 255 = 194
  5. Retroarch graphics API outputs the value for the video card. The Retroarch gfx API encodes it with gamma 2.2: R(ed) = ((194 / 255) ^ (1 / 2.2)) * 255 = 225
  6. My PC attached IPS monitor decodes value with gamma 2.33 (DisplayCAL measured): R(ed) = ((225 / 255) ^ (2.33)) * 255 = 190

So for my pipeline with Grade gamma at 2.5 an RGB pixel intensity value of R(ed)=128 goes in and an RGB pixel intensity value of R(ed)=190 gets sent to my eyes.
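For what it’s worth, the six steps above condensed into a tiny script (it just reproduces the arithmetic as written, using plain power laws; whether that model is the right one is the actual question):

```python
# Reproduces the arithmetic of the six steps above with plain power laws
# (which, as discussed below, is not exactly what grade does internally).
def enc(v, g): return round(((v / 255) ** (1 / g)) * 255)   # gamma-encode
def dec(v, g): return round(((v / 255) ** g) * 255)         # gamma-decode

v = 128            # 1. core output
v = enc(v, 2.5)    # 2. grade CRT gamma        -> 194
v = dec(v, 2.4)    # 3. gamma_input decode     -> 132
v = enc(v, 2.4)    # 4. gamma_out encode       -> 194
v = enc(v, 2.2)    # 5. graphics API 2.2       -> 225
v = dec(v, 2.33)   # 6. monitor 2.33 decode    -> 190
print(v)
```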

Real SNES hardware gamma pipeline

So what happens with a real SNES connected to a vintage CRT? Does the SNES do gamma encoding or not? Both cases below:

SNES does gamma encoding?

  1. SNES encodes R(ed)=128 with gamma 2.5: R(ed) = ((128 / 255) ^ (1 / 2.5)) * 255 = 194
  2. Vintage CRT decodes it with gamma 2.5: R(ed) = ((194 / 255) ^ (2.5)) * 255 = 128

SNES outputs raw RGB intensity values (NO gamma encoding)

  1. SNES outputs R(ed)=128
  2. Vintage CRT decodes it with gamma 2.5: R(ed) = ((128 / 255) ^ (2.5)) * 255 = 46

Mismatch emulator output and real SNES:

So for a real SNES connected to a vintage CRT (gamma = 2.5), when the SNES outputs an intensity value of 128 for a red pixel, what gets displayed on the vintage CRT monitor is an intensity value of either 128 or 46, depending on whether a real SNES gamma-encodes the output signal or not.

In the emulator gamma pipeline, when the RA bsnes core outputs an intensity value of 128 for a red pixel and the Grade shader CRT gamma is set to 2.5, an intensity value of 190 is displayed on the LED monitor. A large discrepancy versus real SNES hardware connected to a vintage CRT monitor!

@Dogway Since the discrepancy between input and output for both pipelines is so large, I’m sure the pipelines are very probably different from what is described above.

This is purposely just a starting point to get to the real answer for both pipelines.

My question is what the real pipelines look like, for both the “Emulator gamma pipeline” and the “Real SNES hardware gamma pipeline”. Could you possibly take both pipelines depicted above and rearrange/add/remove steps where necessary, so that we get a true picture of both?

1 Like

In your examples you are using power-law gamma operations, but as I said, grade uses “sRGB”-type gamma functions with a linear segment towards black. I do several operations for gamma because no developer could confirm to me why the emulators are not outputting RAW values and instead rely on a gamma-encoded output. For this reason I assume the output is already sRGB gamma encoded, so the first step in grade is to linearize that with the inverse operation. Once linearized, I apply a SMPTE gamma function with a gamma_in value, in your case 2.5. This is now our CRT ground truth. We relinearize it with the sRGB inverse function, do all our grade operations, and output with the color_space-related gamma; if sRGB, it does a gamma cancel and we get a match for the CRT ground truth.

If you want to get the correct gamma output from Retroarch to match your calibrated display, you have to color manage Retroarch. For that, use DisplayCAL and use the Reshade LUT output; you can load that in grade. (Or calibrate your display to 2.2 gamma.)

The SNES has to gamma-encode for the composite signal; it uses the SMPTE gamma function.
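Putting the chain described above into a compact sketch (plain power laws used only as stand-ins; grade actually uses the moncurve-style functions with a linear toe mentioned earlier, and the “grade operations” step is elided):

```python
CRT_GAMMA = 2.5   # gamma_in

def srgb_decode(v): return v ** 2.2                       # stand-in for the inverse sRGB curve
def srgb_encode(v): return v ** (1 / 2.2)
def smpte_encode(v, g=CRT_GAMMA): return v ** (1 / g)     # stand-in for the SMPTE curve

v = 0.5                  # core output, assumed to be sRGB-encoded already
v = srgb_decode(v)       # 1. linearize the assumed encoding -> RAW
v = smpte_encode(v)      # 2. encode with gamma_in: the "CRT ground truth"
v = srgb_decode(v)       # 3. relinearize for grade's internal operations
# ... grade operations happen here, in linear light ...
v = srgb_encode(v)       # 4. encode for an sRGB target: cancels step 3
```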

2 Likes

Thanks for the explanation, I think I understand the process now:

  1. bsnes core output is assumed to be encoded with sRGB gamma (2.2)
  2. grade shader decodes output from 1 with inverse operation: we have RAW values now
  3. RAW values are encoded by grade with SMPTE gamma function (2.5 in my case)
  4. grade linearizes again with the inverse sRGB operation, does the grade operations, and outputs/encodes to the target space, which if sRGB cancels out the earlier gamma operation (the output values of step 3 are maintained)
  5. LED monitor decodes with sRGB gamma 2.2 (calibrated)

So in summary, if we strip out all canceling encoding/decoding steps:

  1. effectively, RAW values are encoded by grade with the SMPTE gamma function and a value of 2.5 (in my case; a setting between 2.35 and 2.55 is recommended in the grade shader guide)
  2. LED monitor decodes with sRGB gamma 2.2 (calibrated)

Compare this to real SNES hardware connected to a vintage CRT, which will most likely look like this:

  1. RAW values are encoded by SNES hardware with gamma of 2.35 - 2.55 (is the real value known?).
  2. Vintage CRT decodes with gamma of 2.35 - 2.55 (the exact value will differ per TV set)

Step 1 is comparable between emulation and real hardware.

But I see an issue with step 2: a vintage CRT decodes with a gamma between 2.35 and 2.55, while in the emulated setup the LED monitor decodes at 2.2 (calibrated).
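To put a number on that worry (plain power laws, same caveats as before): encoding with 2.5 and decoding with 2.2 leaves a net exponent of 2.2/2.5 = 0.88 instead of 1.0, so mid-tones come out brighter than they would on the CRT.

```python
raw = 0.5
crt = (raw ** (1 / 2.5)) ** 2.5   # encode 2.5, CRT decodes 2.5 -> 0.500
led = (raw ** (1 / 2.5)) ** 2.2   # encode 2.5, LED decodes 2.2 -> ~0.543
print(crt, led)
```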

So what am I overlooking in this?

1 Like