Yeah, I was thinking about this, as well. We’re ultimately constrained by the color of the LCD subpixels themselves, since they’re immutable. The only way to change a perceived color from the subpixels’ own color is to add another subpixel to it at some level of brightness, thereby polluting our pure, perfect mask.
Just to add: although the green axis is the worst (it has the largest shift in rec.2020), the colour temp trick won’t work because of the same problems on the red and blue axes, so I’m not sure how much runway it really has.
Hi, and thanks for your answer. Apart from the shader menu, I don’t see any HDR menu in RetroArch. I tested by enabling HDR through the Windows and Nvidia menus, but without success. As for the SDR shaders, they are much too dark on my screen to really enjoy them.
If colourspace transformations are affecting the mask to the point of creating additional subpixels, then how is this so different from other shaders that deliberately affect the mask using effects like bloom, for example?
If the goal is the most accurate solution possible, I think the mask needs to stay static/untouched, and whatever accommodations are required for accurate colour, from both an output and a visual perspective, should be worked on and applied to the “video signal”/“electron beam”, even if the current theoretical methods of achieving that are inaccurate.
It’s good to have the option to have things either way though.
Hmm, it sounds like you’re possibly not using an up-to-date version of RetroArch - I’d recommend at least 1.10.0 (I think that’s right). If everything is working correctly and you’ve chosen the Vulkan, D3D11 or D3D12 driver, then you should see it under Settings->Video->HDR (see the very first post in the thread for instructions).
Yes! Fixes all the issues I was seeing, thank you!
Just awesome
Yeah, so when a gamut conversion is done, the primary colours can only be simulated by this break-out (which makes sense). I noticed it myself when doing a test case with red: an additional green virtual phosphor is lit on a pure red screen, to soften the DCI/2020 red towards 709 red.
In itself there’s not much wrong with the break-out into virtual phosphors, IMO. The same would happen if you displayed a green that’s (relatively) more yellowish than the native primary green of a real CRT, i.e. the virtual phosphor mask behaves as it should.
I can understand it becomes an issue when, as in your case, you’re seeing greens with the broken-out virtual phosphors in a slightly different hue than before, or than what you’re used to.
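For what it’s worth, the break-out can be seen directly in the numbers. A minimal sketch (Python, using the standard ITU-R BT.2087 Rec.709 → Rec.2020 conversion matrix, in linear light) showing that a pure 709 red needs a non-zero green component in 2020 space:

```python
# Converting a pure Rec.709 red into Rec.2020 space requires lighting
# the green (and blue) "virtual phosphors" too. The matrix is the
# standard ITU-R BT.2087 Rec.709 -> Rec.2020 conversion (linear light).
M_709_TO_2020 = [
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
]

def rec709_to_rec2020(rgb):
    """Convert a linear Rec.709 RGB triple to linear Rec.2020."""
    return tuple(sum(row[i] * rgb[i] for i in range(3)) for row in M_709_TO_2020)

red_2020 = rec709_to_rec2020((1.0, 0.0, 0.0))
print(red_2020)  # green channel ends up around 0.07 - the extra subpixel
```

The ~7% green value is exactly the “additional green virtual phosphor on a pure red screen” observation above.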
However a few thoughts come to mind:
- What is the correct green? Ideally you would verify against a CRT you know for certain is calibrated exactly to the 709 colour gamut (phosphor primaries exactly matching what the matrix in the shader is based on) AND whose gamma is calibrated the same. That would eliminate the subjective factor and leave only the real CRT green hue.
- I noticed that the colour gamut conversion, and the hue impression at the output, is quite sensitive to gamma changes. So you may want to vary gamma in and out a bit to see what it does to the hue.
In that regard there’s a more general thing on my mind with the gamma curves. Some thoughts follow; it would be interesting to hear your view on the matter.
For the CRT era, one could discuss which gamma curve/function best describes a CRT. This could be any of:
- pure power law function
- 709
- sRGB
- Some totally whacked function, because back in the day we would turn the “Contrast” knob on our TVs and monitors probably a bit higher than the studio-recommended “100 nits” (which is quite dim if you calibrate a CRT to that), and we would turn up the “Brightness” so we could see our screens in daylight, totally whacking the “studio recommended” 2.2-gamma black level meant for a dark room. I’m sure these analogue Contrast and Brightness settings did not amount to a clean change of a single “gamma exponent” value, but actually altered the curve’s shape by quite a bit.
From my understanding, sRGB-calibrated curves are a thing from 1996 CRTs onwards. So what best describes ’80s CRTs? Pure power-law gamma?
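To make the candidates above concrete, here is a small sketch comparing the pure power-law decode with the piecewise sRGB decode (IEC 61966-2-1). BT.709 strictly only defines an encode/OETF, and its implied display gamma is its own debate, so it’s left out here:

```python
# Rough comparison of two of the candidate CRT transfer functions:
# pure power law vs the piecewise sRGB decode. They track each other
# in the mid-tones but diverge strongly near black, where sRGB has a
# linear segment instead of the power law's crush.

def srgb_decode(v):
    """sRGB electro-optical transfer function (IEC 61966-2-1)."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def power_decode(v, gamma=2.2):
    """Pure power-law decode, possibly a better model for an 80s CRT."""
    return v ** gamma

for v in (0.01, 0.05, 0.25, 0.5, 0.75):
    print(f"{v:.2f}  sRGB={srgb_decode(v):.5f}  power2.2={power_decode(v):.5f}")
```

The divergence near black is exactly where Brightness/Contrast knob abuse on real sets would have had the most visible effect.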
Then on the output (display) side we also have a multitude of possible functions, especially with the addition of HDR.
The issue at hand is what best describes the CRT we’re simulating (which gamma curve and values) and what describes the display device we’re using (which curve and values)? SDR LED? HDR LED? Chances are the curve of the simulated CRT and the output curve (your monitor) will differ, and ideally we would compensate for that difference.
Since we don’t know for sure which curve (power law, 709 or sRGB) best describes the CRT we’re simulating, especially with regard to the wild west of the ’80s, I think it could make sense for the shader to make these functions separate from the colour gamut.
In the current shader setup, when you choose the 709 primaries colour gamut (which could be correct for an ’80s CRT), the 709 gamma curve is tied to it. The same goes for sRGB. But maybe a pure power law would better describe the ’80s CRT?
Ideally, maybe, you could select a colour gamut and a gamma function separately from each other, as this freedom would allow for a possibly closer/better simulation of CRTs from all eras (’80s through ’90s).
My thoughts so far lead me to believe it would be great to have the gamma curves in the shader, which are now tied to the specific colour spaces, separated out, so that we could accommodate cases like the one above.
Ideally we could do the same on the output side. Currently, power-law gamma is tied to the DCI space, while it could make a lot of sense to have the sRGB curve there instead: many current monitors “target” the DCI colour gamut (e.g. 90% coverage or so), but their gamma is actually better described by sRGB than by a power law.
So the short story is whether it could make sense to make the gamma_in (CRT) curves and the output curves separate parameters, so that the flexibility to mix and match colour spaces with gamma functions is available to more accurately match/simulate the wide range of CRTs out there on the wide range of (O)LEDs out there.
So for both the “Your Display” section and the “CRT” section, a “Gamma Type” could be introduced that offers the various gamma functions as a parameter broken out from the colour spaces (but with the same names, to make clear that “by default” they should be matched up):
So for the “YOUR DISPLAY’S SETTINGS:” it could be something like:
SDR: Display's Colour Space: r709 | sRGB | DCI-P3
SDR: Gamma Type: r709 | sRGB | DCI-P3
SDR: Gamma Value: (1.0 - 5.0)
and on the “CRT SETTINGS:” side it could be something like:
Colour System: r709 | PAL | NTSC-U | NTSC-J
Colour System Gamma Type: r709 | sRGB | Power Law
Colour System Gamma Value: (1.0 - 5.0)
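The decoupling proposed above could look roughly like this as code. Everything here is hypothetical (the parameter names, the lookup table, and the identity matrix are placeholders, not the shader’s real internals); it only illustrates making the transfer function an independent choice from the gamut matrix:

```python
# Hypothetical sketch of "Gamma Type broken out from colour space":
# the transfer function and the gamut matrix become independent choices.

GAMUT_MATRIX = {
    # Placeholder: a real implementation would hold the proper 3x3
    # primaries-conversion matrix per colour system (r709, PAL, NTSC-J...).
    "r709": [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
}

def decode(v, gamma_type, gamma_value):
    """CRT-side decode, chosen independently of the gamut."""
    if gamma_type == "power":
        return v ** gamma_value
    if gamma_type == "srgb":
        return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4
    raise ValueError(f"unknown gamma type: {gamma_type}")

def apply(rgb, gamut="r709", gamma_type="power", gamma_value=2.4):
    """Decode with the chosen gamma, then convert with the chosen gamut."""
    lin = [decode(c, gamma_type, gamma_value) for c in rgb]
    m = GAMUT_MATRIX[gamut]
    return tuple(sum(m[r][c] * lin[c] for c in range(3)) for r in range(3))

# Same gamut, different gamma models - now a free mix-and-match:
print(apply((0.5, 0.5, 0.5), gamma_type="power", gamma_value=2.2))
print(apply((0.5, 0.5, 0.5), gamma_type="srgb"))
```

The same split would apply symmetrically on the output/encode side.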
So you’re absolutely right that switching on these extra subpixels is wrong when strictly considering a CRT screen. What we’ve highlighted here is that, in order to use HDR and therefore the rec.2020 colour primaries, we have to rotate pure green towards red to look right, and so break this ideal.
The only real thing I can say is that blooms and blurs turn on subpixels to simulate luminance (which our eyes are far more sensitive to), whereas in this case we’re turning on subpixels to simulate chromaticity, which our eyes are less sensitive to (this is how JPEG works). I.e. you could argue it’s much more noticeable/inaccurate to turn on those subpixels for blooms and blurs than to turn them on to get the right colour in HDR.
I’ll let you decide how much water that argument holds, but you can definitely see the difference between this shader and most (all?) of the presets in the presets folder, for instance.
Basically I just have to be pragmatic about this and accept that there doesn’t currently appear to be a way to get HDR levels of luminance without shifts in chromaticity.
So yeah, that suggestion makes sense. The only thing I’d say is that maybe it overcomplicates things? Mind you, we’re overcomplicated as it is, so what’s a bit more complication. Let me have a think about it - I need to streamline the shader a bit now anyway, as it’s getting a bit gnarly in my opinion.
With regards to what the old colours looked like: sure, it may be a bit subjective, but I do have two Sony 2730 PVMs, so I’ll do a side-by-side and check - I’m pretty sure they’re not as yellow in the greens. I’ll post some pictures for you.
Just added another pull request, for Sony Megatron V4.1, which adds a switch to toggle between what I currently think is more colour accurate (soon to be proved/disproved) and what is mask accurate.
This lets you easily see the differences, if nothing else. I’ve defaulted it to the shader’s old behaviour (i.e. colour accuracy) for now, but it’s totally up in the air what the default should be and whether its being ‘colour accurate’ is correct.
I’ve been testing HDR mode a bit again, and I can’t seem to get as good/accurate a Gray Ramp in HDR mode as in SDR mode. Where in SDR mode I can tweak the Gray Ramp pretty much to perfection, in HDR something always seems off: either the whites remain blown out, or I can balance the whites but then the whole screen is too dark, etc. I’ve tried everything in the book (max nits, paper white, gamma, contrast).
Just to verify with you: are you able to get as balanced a Gray Ramp in the 240p test suite in HDR mode as in SDR mode?
Maybe it’s because 240p test suite was designed for use in SDR mode?
In my understanding there shouldn’t really be any difference, as HDR mode does tonemapping that recreates the SDR luminance scale in HDR “space”. So basically, SDR images should be recreated in HDR space close to how they appear on SDR hardware.
If I understood correctly from @MajorPainTheCactus then this inverse tonemapping isn’t without caveats though, so maybe he has something to say about this.
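For context, the general shape of such an SDR-in-HDR mapping is: linearise the SDR value, scale it so SDR white lands at a chosen paper-white level, then encode with the SMPTE ST 2084 (PQ) curve. This is not necessarily what the shader does internally; it’s just a generic sketch of the mapping being discussed, with an assumed 200-nit paper white and 2.2 decode gamma:

```python
# Generic sketch of re-expressing an SDR signal in HDR10 "space":
# SDR code value -> linear light -> absolute nits -> PQ signal.
# ST 2084 (PQ) constants:
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(nits):
    """ST 2084 inverse EOTF: absolute luminance (cd/m^2) -> PQ signal."""
    y = nits / 10000.0
    return ((C1 + C2 * y ** M1) / (1 + C3 * y ** M1)) ** M2

def sdr_to_hdr10(v, paper_white=200.0, gamma=2.2):
    """Map an SDR code value into a PQ signal at the given paper white."""
    return pq_encode((v ** gamma) * paper_white)

print(round(sdr_to_hdr10(1.0), 3))  # PQ level at which SDR white lands
```

Any mismatch between this chain and what the grey ramp was authored against (e.g. the gamma assumed at the linearise step) would show up exactly as blown-out whites or a too-dark midrange.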
Closely, but not identically. I’ve yet to see an image or asset designed for SDR look identical, or even close to identical, in HDR mode/colourspace on my Windows PC. So that might account for at least some of the difference.
The solution to this might lie in manual calibration and tweaking as you have embarked upon but with so many things there are tradeoffs.
Maybe if you had a brighter display things might have been better after calibration.
@MajorPainTheCactus seems to be getting stellar results on his hardware setup, though his equipment might not be representative of what the vast majority of users have access to.
I would like to see more feedback from users to gauge what kind of experience people are getting with this shader, because so far it seems some are getting great results, others not so good, and for others something always seems to be missing.
Maybe part of the issue is that the expectations and standards might be a little different for this shader in a sort of uncanny valley kind of way.
The concept is sound and has been proven, and the potential is there, but I think the userbase needs to grow so we can see what sort of awesome presets and photos other users come up with and start sharing with the wider community.
So the first question here is: did this happen before the most recent changes fixing the mask in HDR? I.e. if you toggle the accurate mask/colour switch introduced last night, can you get a balanced grey ramp?
If it did, then you’re probably going to have some issues with the inverse tonemapping - it’s not perfect. But let’s see what we can do.
If it didn’t happen before the mask-accurate change, then I can completely see it not looking right, given that we’re breaking out into virtual phosphors, much like the colour issue we were talking about last night.
But let’s see if there are any bugs lurking in there.
I wouldn’t jump to these conclusions - we’re talking about reasonably minor complaints, and considering everybody has been happy with blooms and blurs and the impact they have on image quality, it’s not really an argument to ditch HDR. I mean, feel free to go back to them, but you’re going to get a better experience, IMO, with more luminance.
The toggle changes the gray ramp in HDR a tiny bit for the better, but it doesn’t come close to the quite perfect gray ramp I’m seeing in the SDR version.
Just for the record I’m super happy with the SDR version, so I (currently at least) have no real need for the HDR version.
Are you seeing any differences when looking at the 240p Gray Ramp in HDR mode and SDR mode?
That’s great to hear the SDR version is working for you. I haven’t had a chance to compare yet, but I’ll do it as soon as I can. Hopefully there’s something we can do to bring the two closer to parity.
I don’t think I was making any conclusions. On the contrary, I would like to see more feedback, buzz and the community using these shaders grow. I know it’s early days still.
I wasn’t trying to be critical in any way - just summing up, in a very simplistic way, the type of feedback I’ve read and observed here so far. That’s why I was careful to qualify my statement with very general terms such as “some” and “others”, which imply an unknown quantity or percentage.
I most definitely am not suggesting that HDR be ditched in favour of anything else or that it’s not already a successful endeavor.
I wasn’t referring to the shader when I said I’d never seen an image or asset designed for SDR look identical, or even close to identical, in HDR mode. The examples I was referring to actually excluded the shader; I was just wondering whether the same phenomenon might also be at play in the shader.
So it was more of a question than a statement about the shader, which is new, being worked on and tested, and therefore variable. No conclusions there - just a thought based on my previous observations of other things (besides the shader), and wondering if the shader is at the mercy of the same phenomenon.
Also, I don’t think there is any issue of going back to anything.
I think one can use any shader and still want to see all of them improve and enjoy supporting novel and different approaches.
So please don’t get me wrong. I am definitely an avid supporter of your work and what you have created. I hope you don’t think that I’m amplifying any minor issues that a few users might be having here and there.
I was trying my best to communicate (not criticize) what I thought might have been obvious: that users’ mileage may vary depending on their particular setup (particularly in the brightness and calibration department).
No harm intended. You’re doing a great job and service to the community here.
Keep up the good work!
This gray ramp issue may be the same issue that I reported several months ago. Glad I’m not the only one. Solving this would likely improve overall brightness as well.
With regards to colour accuracy, I was thinking we need to consider one other thing that may stand in the way of accurate 709 mapping in 2020 space. It’s not related to the shader, but to the fact that HDR monitors don’t have full coverage of the Rec.2020 space themselves.
If you look at RTINGS.com and add “Rec.2020 coverage xy” as a column in their table tool, the best monitor has a coverage of 84%; most others are in the 55%-80% range.
This implies the monitor’s native primaries in HDR mode differ from the theoretical Rec.2020 primaries the shader’s gamut mapping assumes, which will result in a (visible?) colour shift. How large this shift is depends on the individual HDR monitor and how closely it tracks the 2020 colour gamut.
Maybe this could be a factor also with the greens you’re seeing?
I think I’ve posted this before too, but if you’re interested in the native primaries of your HDR monitor, run “dxdiag.exe” in Windows, click “Save All Information…”, and search the log for HDR. There you’ll find the monitor’s reported primaries, so you can compare them to the Rec.2020 spec primaries and see how much smaller the monitor’s gamut actually is.
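Once you have the reported primaries, a quick way to compare them against the Rec.2020 spec is the area of the chromaticity triangle (shoelace formula). Note an area ratio is only a rough figure of merit - true “coverage” is the intersection area, so treat it as an upper bound. The monitor primaries below are made-up example values, not from a real dxdiag log:

```python
# Compare a monitor's xy primaries against the Rec.2020 spec primaries
# by chromaticity-triangle area. MONITOR values are hypothetical.

REC2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]  # per BT.2020
MONITOR = [(0.680, 0.310), (0.265, 0.690), (0.150, 0.060)]  # example only

def triangle_area(pts):
    """Shoelace formula for the area of a triangle in xy space."""
    (x1, y1), (x2, y2), (x3, y3) = pts
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

ratio = triangle_area(MONITOR) / triangle_area(REC2020)
print(f"xy area vs Rec.2020: {ratio:.0%}")
```

A ratio well below 100% means the shader’s assumed Rec.2020 primaries simply don’t exist on the panel, which is the colour-shift mechanism described above.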