New CRT shader from Guest + CRT Guest Advanced updates

Hey, i knew you’d figure it out! :grinning:

Meanwhile i added 2 more ‘interlace’ modes 4&5 to the hi-res and ntsc versions. That was still on my mind to add. Also did some parameter range tweaking and cleanup.

Download link:

https://mega.nz/file/g0oXVAgb#7rwEzGqAbTBBxyxwfJ2-QwzVxUMmj8JOfVeJ37VjxWY

Edit: 4k support added for new interlaced modes (untested yet).

7 Likes

With Venom 2 it looks like any passes you add to the top of the stack don't work.

1 Like

Might have to do with how it works now, what are you trying to put at the end of the chain?

1 Like

Good catch! I was already on this - will add a stock shader as the first pass, which can be replaced, and/or previous passes can be added.

Could you tell us the difference between the five types of interlace modes?

Mode0 disables 'interlacing'. Mode1 is the core mode, but only works with 50/60 fps content. Mode2 is 'baked' mode 1 with no flickering, nice for 30fps content, screenshots or when convenient. Mode3 is a 'vertical linear filtering' mode, if Mode2 is too blurry. Mode4-Mode12 - best to test these on preferably high-res content. :wink:

Fixed now. It's now possible to add custom prior passes and/or replace the first stock pass. 'Interlace modes 6-12' added (for convenience with different output resolutions).

Download link

https://mega.nz/file/o9Qn1aBQ#Ir1pWTAAiAcddS8wmVsWXGBTm4Fa8vgMgWLRIiRRAmk

Edit: updated

9 Likes

Noticed a minor thing with the hires shader: the gamma out range is missing the 1.0 minimum value. This breaks gamma out as soon as you change the parameter in the shader.

#pragma parameter gamma_out "Gamma out" 1.8 5.0 0.05
#define gamma_out    params.gamma_out     // output gamma
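
Presumably the intended line has 1.0 as the minimum, i.e. something like:

#pragma parameter gamma_out "Gamma out" 1.8 1.0 5.0 0.05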

Been playing a bit more with the ntsc shader and quite like it :slight_smile:

Could you explain a bit what the scaling settings for the two passes below are actually achieving (first scaling x by 4.0 and then halving it with 0.5), in combination with the 12 pixel filtering? Just curious how you came up with it, as the effect is great; it gives a certain softness to the image without it becoming blurry :slight_smile:

shader3 = shaders/guest/crt-gdv-new/ntsc/ntsc-pass1.slang
shader4 = shaders/guest/crt-gdv-new/ntsc/ntsc-pass2.slang

filter_linear3 = false
filter_linear4 = false

scale_type_x3 = source
scale_type_y3 = source
scale_x3 = 4.0
scale_y3 = 1.0
frame_count_mod3 = 2
float_framebuffer3 = true

scale_type4 = source
scale_x4 = 0.5
scale_y4 = 1.0 
alias4 = NtscPass 

With regard to gamma, I noticed you changed the default from 2.4/2.4 to 1.8/1.8, and I wondered why 1.8/1.8 is darker. Previously I assumed this gamma_input / gamma_out pair was purely a gamma-neutral "decoding/encoding" operation, but apparently it's not.

So my question below relates to gamma correction for "vintage game developer CRT gamma" versus my current LCD gamma. Suppose I have a gamma-neutral output from the shader, but I would like to correct for the difference between the developer's CRT monitor gamma and mine.

The first case would not need adaptation: the developer was on a pro PVM monitor (which is studio calibrated at gamma 2.2) versus my sRGB LCD monitor, which is also at about 2.2.

But now suppose the second case: the game developer worked at the end of the '80s on a vintage CRT monitor, which has a gamma between 2.35 and 2.55 (depending on make/model).

So I would like to make an adjustment that accurately describes the gamma correction between that developer's CRT gamma and my monitor. I.e. in the extreme case I want to do a "2.2 / 2.55" correction, or in the least extreme case a "2.2 / 2.35" gamma correction. How do I do this in your shader?

I have four options:

  1. Lower or raise gamma_input and gamma_out in tandem (the image brightens when both go up, darkens when both go down)
  2. Raise or lower only gamma_input (darkens or brightens the image)
  3. Raise or lower only gamma_out (brightens or darkens the image)
  4. Use “Gamma correction” parameter (default is at 1.0, raising brightens image and vice versa)

Do you have an idea of the most correct way to do an accurate gamma adjustment for the use case described above? I don't want to end up with a correction which I think is theoretically correct, but in effect actually breaks the gamma curve.

2 Likes

A very useful find indeed! Maintaining 3 versions sometimes lets some bugs slip through. It should be caught by the shader compiler though. :woozy_face:

The ntsc passes don't linearize input pixels by default, resulting in output which looks like it should under these circumstances. I was holding on to this look a bit by lowering the gamma. It should be increased to 2.2 though, since i thought about it a bit more.

Gamma is neutral, but there are more steps in the process which involve color interpolation. Glow has less influence with lower gamma, and the general look is affected by the scanline calculation - interpolation with lower gamma produces a darker image. Also 'color edges are darker' etc.

You can't break the gamma curve this simply in the shader. In a pass-through situation you get (color^(2.4))^(1.0/2.2) = color^1.091 etc. Interpolation makes things less predictable, so it's more a matter of taste. You should use an input gamma of your liking and tweak the output gamma together with other variables. Without interpolation and shader parameter influence it would be much simpler though.
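
Roughly, just to illustrate how the input and output gammas combine (parameter names for illustration only, not the exact shader code):

vec3 linearize   (vec3 c, float gamma_in)  { return pow(c, vec3(gamma_in)); }       // decode
vec3 delinearize (vec3 c, float gamma_out) { return pow(c, vec3(1.0/gamma_out)); }  // encode

// pass-through with 2.4 in / 2.2 out and nothing in between:
// delinearize(linearize(c, 2.4), 2.2) = pow(c, 2.4/2.2) = pow(c, ~1.091)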

It's IMO a way to let later passes do some interpolation, since you can typically still do a 2-3x resize afterwards. And the later shaders get acceptable circumstances and don't have to downsample themselves.

The 12 pixel interpolation with adapted coefficients is a bit different, but it gives very nice results with the ntsc preset indeed. That's why i gave myself some work, because the former approach could be improved out of the box.
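
For example, just to illustrate with the preset numbers above: a 256 px wide source becomes 1024 px after ntsc-pass1 (scale_x3 = 4.0) and 512 px after ntsc-pass2 (scale_x4 = 0.5), so the later passes still get to upscale by roughly 2-4x to typical output widths instead of having to downsample.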

Anyway, shader is updated at above dl. link.

2 Likes

Out of interest, what is your current reasoning for putting both at 2.2? Is it to be as gamma neutral as possible for sRGB gamma monitors?

2 Likes

It's not obligatory, but it's more in line with the character of the standard ntsc preset regarding horizontal filtering, while still staying very similar to the standard ntsc preset with the other gamma-related calculations.

Otherwise it's tricky to make direct comparisons with a shader that does a lot of stuff. But, to answer a previous question regarding different input/output gamma combinations - they will mess with saturation, and it's very hard to tell what the monitor will do, since it considers the gamma output neutral, like 2.2/2.2 or 2.4/2.4 etc. In other words, if you are happy with the non-shaded output presentation of a game in Retroarch, then combinations like 2.2/2.55 will look much too bright and de-saturated.

4 Likes

I was thinking maybe one last thing about gamma could be considered.

If I look at the wiki page for sRGB (https://en.wikipedia.org/wiki/SRGB) I see that the transfer function is actually only power gamma (2.4) above a certain luminance threshold, below which it is linear (you are already aware of these functions, but just for discussion purposes).

The actual sRGB forward transformation (linear to encoded) is:

C_srgb = 12.92 * C_linear (for C_linear <= 0.0031308)
C_srgb = 1.055 * C_linear^(1/2.4) - 0.055 (otherwise)

and the reverse transformation (encoded to linear) is:

C_linear = C_srgb / 12.92 (for C_srgb <= 0.04045)
C_linear = ((C_srgb + 0.055) / 1.055)^2.4 (otherwise)

So I was thinking: since users' LCD displays these days use sRGB gamma, it may be more correct if the shader encoding function (gamma_out) doesn't use the current pure power gamma, but uses the above sRGB transfer function?

Since these sRGB transfer functions seem trivial to implement, it may be an interesting exercise to add a switch for both "gamma correction" and "gamma_out", making both switchable between pure power and sRGB gamma. That way we could observe how the power and sRGB transfer functions impact color fidelity in the low range.

I guess gamma_input (the decoding to linear space) should always use pure power, since that is what it was originally “encoded” at?
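
Just to sketch what I mean, a hypothetical 'srgb_out' toggle (names made up by me, not your actual code) could look roughly like this:

vec3 gamma_encode (vec3 c, float gamma_out, float srgb_out)
{
   vec3 power_enc = pow(c, vec3(1.0 / gamma_out));                 // current pure power encode
   vec3 srgb_enc  = mix(12.92 * c,
                        1.055 * pow(c, vec3(1.0 / 2.4)) - 0.055,
                        step(vec3(0.0031308), c));                 // piecewise sRGB encode
   return mix(power_enc, srgb_enc, srgb_out);                      // srgb_out: 0.0 = power, 1.0 = sRGB
}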

1 Like

There are reasons for such procedures indeed, rooted a bit in 32-bit color buffers and the interpolation of very dark colors. But with libretro, float framebuffers can be used, which opens a nice way for different gamma combinations. Sure, there are some philosophical issues with representable numbers in a 16-bit floating point environment. For example, value A is quite small and representable, value B is large and representable, but it happens that A + B = B (the exact sum is not representable) in a limited floating point environment, because of the numerical nature of the implementation. (With 16-bit floats, for instance, 1.0 + 1/4096 rounds back to 1.0.)

But the story itself is quite old even for Libretro shaders.

If i were to use sRGB buffers after linearization, then i should use the transformation; otherwise floating point framebuffers do quite fine.

1 Like

Thanks for the answer. Only I seem to be missing your point.

You seem to be saying that it doesn't matter whether you use the power function for encoding gamma at the end of your shader or the sRGB function (which is what the user's LCD monitor decodes with)?

So are you saying that it doesn't matter if you do your current conversion/encoding at the end of the shader with the power function:

u = (Color / 255)
Power function: Color = (u^(1/Gamma_out)) * 255

Or whether you do the sRGB gamma conversion at the end of the shader:

u = (Color / 255)

If u < 0.00313
Color = (12.92 * u) * 255
Else
Color = (1.055 * u^(1/2.4) - 0.055) * 255

I think grade.slang has a similar switch, but it would be more accurate if this switch were implemented at the stages in your shader mentioned in my previous post.

So are we not understanding each other? In my mind this is simple color manipulation to accurately match the different gammas. I.e. raw emulator values are power gamma encoded (since they were created in the CRT era), but what you send to the user's LCD display is decoded with the sRGB gamma function. So what goes out to the user's monitor should be encoded as such.

Is Dogway, with his grade shader, also wrong on this then?

Could you, in light of the above, give some more explanation of why you think these conversions are not relevant in the context of your shader?

1 Like

I believe Retroarch adds a layer in between, which converts to the RGBA 8888 format. The monitor is set-and-forget at best.

I’m saying that the linearization / delinearization is limited to 2.4/2.4 with the sRGB conversion functions, but i like more variety.

If you use the same procedure for different gamma values it creates a discontinuity.

And we are also talking about emulating systems with 5 bits or less per color component, so float framebuffers do just fine. I didn't say they were perfect.

I think i answered this a couple of times already. :wink:

Edit: on second thought, the 16-bit sfloat format should hold water in most cases. :grin:

3 Likes

I also did a quick test with 8 grayscale colors from 1./255. to 8./255. and multiplied the delinearised result by 32.0 for better visibility.

crt-guest-dr-venom with 2.4/2.4 gamma settings.

All pixels were still visible at 3.0/3.0, the first was crushed at 3.05/3.05.

It’s interesting to know though. :wink:
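
Roughly what the test boils down to (not the actual shader code; the intermediate 16-bit float framebuffer sits between the two steps):

// step 1 - linearize the ramp value and write it to the float framebuffer
vec3 test_pass1 (float i, float gamma_in)      // i = 1.0 .. 8.0
{
   return pow(vec3(i / 255.0), vec3(gamma_in));
}

// step 2 - read it back, delinearize and scale by 32.0 for visibility
vec3 test_pass2 (vec3 stored, float gamma_out)
{
   return 32.0 * pow(stored, vec3(1.0 / gamma_out));
}

// (1.0/255.0)^3.0 is ~6.0e-8 and (1.0/255.0)^3.05 is ~4.6e-8, i.e. right around the
// bottom of the fp16 range, which would fit the observed crush point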

3 Likes

I understand your belief / assumption. Unfortunately a belief without backup is just that. The "problem" is we keep assuming. When I asked about it, @hunterk answered that sRGB space is NOT set on RA's side (https://forums.libretro.com/t/dogways-grading-shader-slang/27148/375). Dogway, who is pretty well versed in color theory, has also been forced to make assumptions, because there are no clear answers. And without clear answers, we could be dead wrong on the correct gamma adaptation for emulating vintage CRT output on our current LCD sRGB monitors.

Why is the above relevant? Because a gamma mismatch affects the shape of the gray scale and has a major impact not only on image brightness and contrast, but also on hue and color saturation! In other words, a gamma mismatch causes the intention and vision of the gfx artists of vintage games to be lost.

So, just to entertain the idea, the following are my observations.

The Poynton paper says the native gamma of a CRT is power gamma with gamma between 2.35 and 2.55. (Remember the gamma of 2.2 on PVMs is NOT native CRT gamma; it is engineered for studio purposes.) Let's assume a specific value of 2.5:

I.e. Brightness = (normalized luminance value) ^ 2.50

Poynton notes that the brightness perception of the human eye is approximately the inverse of the CRT power function. He doesn't go into details, but if you search for this, the theory is that there is roughly a "dark adapted" and a "bright adapted" eye. It depends on whether you've been sitting for half an hour or longer in a dim surround (dark adapted) or a well-lit room (bright adapted), and both can roughly be described by these functions:

Dark adapted eye : Perceived brightness = (normalized luminance value) ^ 0.4

Bright adapted eye: Perceived brightness = (normalized luminance value) ^ 0.5

As you'll quickly see, the dark adapted eye is exactly the inverse of the CRT gamma function (1/0.4 = 2.5).

So suppose most games were developed in dim surround (game developers loved working at night :wink: ).

Conclusion / Assumption 1: In the '80s and '90s there was NO layer that did gamma encoding. The eye's response is approximately the inverse of the CRT gamma function, so the perception end-to-end is completely linear. It was, as Poynton notes, a fortunate coincidence indeed!

So that's the theory on CRTs; now to our emulation chain:

Our eye’s response is still the same as it was in the 80’s / 90’s: Dark adapted eye : Perceived brightness = (normalized luminance value) ^ 0.4

But HEY, something did change. Our current monitors DON'T have a 2.5 CRT power gamma. No, they mostly have an "average gamma" of about 2.2.

Conclusion / Assumption 2: For '80s and '90s vintage systems there's a mismatch in emulated color luminance for R, G and B, because vintage CRTs were at power gamma 2.35 to 2.55 and our current LCD monitors are at gamma 2.2 on average.

Conclusion / Assumption 3: Emulated vintage games from the '80s and '90s on Retroarch have a color luminance mismatch on R, G and B that at worst is 2.55/2.2 = 1.16 times too bright, or at best 2.35/2.2 = 1.07 times too bright.

Since a gamma mismatch affects the shape of the gray scale, it has a major impact not only on image brightness and contrast, but also on hue and color saturation! There's a noticeable mismatch between what the CRT game developer saw on his screen in the '80s and '90s and what we are now seeing through emulation. It gets borked!

So now tell me, how do we adapt for this difference in the emulator chain?

  1. Retroarch should do no gamma encoding in the gfx output anywhere. Who can confirm this is really the case?
  2. If 1: then the shader output should be corrected with -at least- a gamma correction of 2.35/2.2 = 1.07. (In the current development version of guest-venom this means a "gamma correction" that's actually the inverse value, 1/1.07, so a default "gamma correction" setting of 0.935 or lower to be accurate for '80s and '90s vintage games displayed on an LCD monitor with 2.2 gamma; see the sketch at the end of this post.) Right?
  3. If not 1: then who knows where the gray scale curve gets bent by some gamma correction nobody can pin down? Again, we shouldn't settle for belief or assumption here; someone with authority on RA's core code should be able to confirm.

So who else finds it useful to get to the bottom of this?
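
To make point 2 concrete, a minimal sketch of what I mean (the 2.35 CRT gamma and the function name are just my assumptions, not existing shader code):

vec3 crt_to_lcd (vec3 c)
{
   const float crt_gamma = 2.35;   // vintage CRT gamma (2.35 - 2.55 per Poynton)
   const float lcd_gamma = 2.2;    // typical LCD display gamma
   return pow(c, vec3(crt_gamma / lcd_gamma));   // net exponent ~1.07, i.e. a slight darkening
}

// equivalent to the "gamma correction" setting of ~0.935 mentioned in point 2, if that
// parameter is defined as the inverse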

If the output of the CRT gamma was 2.50, then you want an absolute measured output of gamma 2.50 on your LCD after all things are considered (even assuming your display is calibrated to 2.2). That means that the digital linearization and gamma encoding should cancel each other out, as I did, while matching your calibrated display gamma.

For the analogue color work I use the SMPTE-C transfer function, which is similar to the sRGB one but with a different threshold for the linear part. When you use the moncurve functions in grade, the disparity in gamma doesn't create a discontinuity.

I don't know about scanlines and their requisites, but if the shader keeps gamma unaltered/cancelled (other than scanline-related gamma) it should honor grade's gamma.

1 Like

RetroArch doesn’t really do anything with gamma, for better or for worse. It really only comes into play as far as the framebuffers are concerned, otherwise everything is ~2.2.

And if you overscale or underscale the shader's last pass? Or the rotations with arcade games/cores? You could think about how the 'fitting image' can happen - by magic or by applying an additional buffer? Nevertheless, the monitor really doesn't care about internal gamma processing. :wink:

Emulated cores deliver the frames. If, and i mentioned this before, the non-shaded output looks fine to you color-wise, then my crt shader shouldn't do corrections. If the input frames are off, then a simple correction on the pre-shader level should do it. Simple as that. There is really no need for every shader to include a variety of corrections that consider a large set of different crt displays and emulated systems.

It's not relevant, because the default input color space for shaders is relative gamma 1.0, and the default output color space for my shader is also 1.0. It can do a lot of stuff, even different internal gamma processing and a sort of correction, but in this case the layers after the shader, plus the monitor, will still consider the output to be relative gamma 1.0.

I can improve my gamma function, but it will still be a gamma function with possible gamma combinations. I think it’s common sense to use the available shader parameters and tweak to one’s liking without expecting something bad to happen. :wink:

1 Like

I don't understand what you mean when you say that the monitor doesn't care. The only thing the monitor does is apply a gamma of 2.2, right?

I think this is where we differ: you keep saying that if it looks fine to me then I shouldn't change a thing. The issue is I don't know if it looks fine, because I have no reference. Is a "gamma correction" of 1.0 correct? Am I then replicating the 2.5 gamma of a CRT monitor correctly on my LCD with the shader? I don't know; that's why the long explanation in my previous post.

I will do just that :wink:

1 Like