Sony Megatron Colour Video Monitor

Great stuff! Thanks @cyber for confirming this. The trick was to separate the rec.2020 primaries conversion from the ST.2084 PQ conversion, doing the former before the scanline and mask simulation and the latter (ST.2084) conversion after it.
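For anyone curious what the "PQ conversion after the mask" step looks like numerically, here's a minimal sketch of the ST.2084 (PQ) encode. The constants are the standard figures from the SMPTE ST 2084 spec; the function name and structure are illustrative, not the shader's actual code.

```python
# SMPTE ST 2084 (PQ) constants, as published in the spec
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(nits: float) -> float:
    """Encode an absolute luminance (0..10000 nits) to a PQ signal in 0..1."""
    y = max(nits, 0.0) / 10000.0
    yp = y ** M1
    return ((C1 + C2 * yp) / (1.0 + C3 * yp)) ** M2
```

The point of the post above is simply ordering: gamut conversion happens in linear light before the mask/scanline simulation, and this non-linear encode is applied last, right before scan-out.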

2 Likes

@Cyber slightly unrelated question: what game are your photos from - I kind of recognise that font.

1 Like

Oh, that’s F-Zero, SNES.

1 Like

So after some side-by-side comparisons, I think I’m going to add an option to move the HDR primaries conversion before or after the mask, as there is a clear trade-off between the two in terms of image quality. Moving the HDR primaries conversion before the mask and scanlines gives a crisper, more accurate mask, whereas moving it after gives more accurate colours and a slightly softer look.

The reason for this is that in HDR, to move (for example) green back to the hue it has in SDR, you have to add red to it, i.e. make it slightly yellower - that’s the rec.709 green we all know and love, compared to the greener green of rec.2020.

This creates a problem for our mask, as the mask breaks this red out into its own virtual phosphor, which has the effect of over-emphasising it, giving all the greens a slightly yellower look than if you do the conversion after the mask. You can see this clearly on the opening map screen of Super Mario World on SNES, but it’s there everywhere once you look.
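You can check this "red breaks out of green" effect numerically: express pure rec.709 green in rec.2020 coordinates. The matrix below is the standard BT.709-to-BT.2020 conversion (derived from the two sets of primaries at D65); the snippet itself is just an illustration, not shader code.

```python
# Standard BT.709 -> BT.2020 RGB conversion matrix (D65 white point)
M_709_TO_2020 = [
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
]

def to_2020(rgb709):
    """Convert a linear rec.709 RGB triple into rec.2020 coordinates."""
    return [sum(m * c for m, c in zip(row, rgb709)) for row in M_709_TO_2020]

r, g, b = to_2020([0.0, 1.0, 0.0])  # pure rec.709 green
# r comes out around 0.33: the rec.2020 red channel must light up to
# reproduce 709 green, which is exactly the extra "virtual phosphor"
# the mask then breaks out next to the green one.
```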

You can mitigate this effect somewhat by raising the colour temperature, resulting in a cooler/bluer image. I’ve found around the 3000 Kelvin mark seems to be a good spot, but you could probably go higher. However, this then gets away from the accuracy of the colour system: NTSC-U/PAL is supposed to use D65 (6500K) whereas NTSC-J is supposed to use D93 (9300K), and we’re effectively throwing all that away.
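For reference, the two broadcast white points mentioned above, as CIE xy chromaticities (the D93 figures are the commonly quoted ones; exact values vary slightly between sources):

```python
# Broadcast white points as CIE 1931 xy chromaticities
D65 = (0.3127, 0.3290)  # NTSC-U / PAL studio white, ~6500K
D93 = (0.2831, 0.2971)  # NTSC-J white, ~9300K (noticeably bluer)

def xy_to_XYZ(x, y, Y=1.0):
    """Convert a chromaticity to XYZ tristimulus values at luminance Y."""
    return (x * Y / y, Y, (1.0 - x - y) * Y / y)

X65, _, Z65 = xy_to_XYZ(*D65)
X93, _, Z93 = xy_to_XYZ(*D93)
# Z93/X93 > Z65/X65: D93 carries relatively more blue energy, which is
# why raising the colour temperature cools/blues the image.
```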

Anyway, long and short: I’m not sure we’re going to get around the fact that we have to do this colour conversion for HDR to make it look right, and that just doesn’t play nicely with masks. Possibly the best we can hope for at this point is really bright SDR displays, but they don’t look like they will ever get as bright as their HDR counterparts.

Mind you, having said all that, it’s quite difficult to tell the difference unless you know what you’re looking for. I personally prefer the original, more accurate colour with the ever-so-slightly softer mask, but that will be display, mask and user dependent!

4 Likes

Yeah, I was thinking about this, as well. We’re ultimately constrained by the color of the LCD subpixels themselves, since they’re immutable. The only way to change a perceived color from the subpixels’ own color is to add another subpixel to it at some level of brightness, thereby polluting our pure, perfect mask.

3 Likes

Just to add that although the green axis is the worst (as it has the largest shift in rec.2020), the colour temp trick won’t fix the same problems found in the red and blue axes, so I’m not sure how much runway it really has.

1 Like

Hi and thanks for your answer. Apart from the shader menu, I don’t see any menu about HDR in RetroArch. I tested enabling HDR through the Windows and Nvidia menus but without success. As for the SDR shaders, they are much too dark on my screen to really enjoy them.

1 Like

If colourspace transformations are affecting the mask to the point of creating additional subpixels then how is this so much different from other shaders which deliberately affect the mask using effects like Bloom for example?

If the goal is to achieve an as accurate as possible solution, I think the mask needs to be static/untouched and whatever accommodations are required to have accurate colour from an output and visual perspective should probably be worked on and applied to the “video signal”/“electron beam” even if the current theoretical methods of achieving such might be inaccurate.

It’s good to have the option to have things either way though.

2 Likes

Hmm, so it sounds like you’re possibly not using an up-to-date version of RetroArch - I’d recommend 1.10.0 at least (I think that’s right). If everything is working correctly and you’ve chosen the Vulkan, D3D11 or D3D12 drivers, then you should see it under Settings->Video->HDR (see the very first post in the thread for instructions).

Yes! Fixes all the issues I was seeing, thank you! :smiley:

Just awesome :star_struck:

Yeah, so when a gamut conversion is done, the primary colours can only be simulated by this break-out (which makes sense). I noticed it myself too when doing a test case with red: an additional green virtual phosphor is lit on a pure red screen, to make the DCI/2020 red softer (like 709 red).

In itself there’s not much wrong with the break-out into virtual phosphors, imo. The same would happen when you displayed a more yellowish green on a CRT than the CRT’s native primary green, i.e. the virtual phosphor mask behaves like it should.

I can understand it becomes an issue when, as in your case, you’re seeing greens with the broken-out virtual phosphors in a slightly different hue than before, or than what you’re used to.

However a few thoughts come to mind:

  • What is the correct green? :slight_smile: Ideally you would verify with a CRT that you know for certain is calibrated to exactly the 709 colour gamut (phosphor primaries exactly matching what the matrix in the shader is based on) AND whose gamma is calibrated the same. That would eliminate the subjective factor and stick with the real CRT green hue.
  • I noticed that the colour gamut conversion, and the hue impression at the output, is quite sensitive to gamma changes. So you may want to vary gamma in and out a bit to see what it does to the hue.

In that regard there’s a more general thing on my mind with the gamma curves. Some thoughts coming up, would be interesting what your view is on the matter.

For the CRT era, one could discuss which gamma curve/function best describes a CRT. This could be any of:

  • pure power law function
  • 709
  • sRGB
  • Some totally whacked function, because back in the day we would turn the “Contrast” knob on our TVs and monitors probably a bit higher than the studio-recommended “100 nits” (which is quite dim if you calibrate a CRT to that), and we would turn up the “Brightness” so we could see our screens in daylight, totally whacking the “studio recommended” 2.2-gamma black level meant for a dark room. I’m sure these analogue Contrast and Brightness settings did not amount to a clean change of a single “gamma exponent” value, but actually altered the curve function by quite a bit.

From my understanding, sRGB-calibrated curves are a thing from 1996 CRTs onwards. So what would best describe 80s CRTs? Power-law gamma?
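To make the candidates concrete, here are the three decode curves under discussion, side by side. The sRGB and BT.709 constants are from their respective specs; the pure power law is the classic "gamma 2.2/2.4" model. (Note the BT.709 function is strictly the inverse of the camera OETF, not a display EOTF, which is part of the ambiguity being discussed.)

```python
def power_eotf(v, gamma=2.4):
    """Pure power-law decode, the classic CRT model."""
    return v ** gamma

def srgb_eotf(v):
    """sRGB decode: linear toe below 0.04045, power segment above."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def bt709_inverse_oetf(v):
    """Inverse of the BT.709 camera OETF (not the same as a display EOTF)."""
    return v / 4.5 if v < 0.081 else ((v + 0.099) / 1.099) ** (1 / 0.45)

# At mid-signal the three curves disagree noticeably, which is exactly why
# "which curve did the 80s CRT follow" matters for the shader's gamma handling.
mids = (power_eotf(0.5), srgb_eotf(0.5), bt709_inverse_oetf(0.5))
```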

Then on the output (display) side we also have a multitude of possible functions, especially with the addition of HDR.

The issue at hand is what best describes the CRT we’re simulating (which gamma curve and values) and what the display device we’re using is (which curve and values)? SDR LED? HDR LED? Chances are the curve of the simulated CRT and the output curve (your monitor’s) will differ, and ideally we would compensate for that difference.

Since we don’t know for sure which curve (power law, 709 or sRGB) best describes the CRT we’re simulating, especially in regards to the wild west of the 80s, I’m thinking it could make sense for the shader to keep these functions separate from the colour gamut.

In the current shader setup, when you choose the 709 primaries colour gamut (which could be correct for an 80s CRT), the 709 gamma curve is stitched to it. Same for sRGB. But maybe a pure power law would better describe the 80s CRT?

Maybe ideally you could select the colour gamut and the gamma function separately from each other, as this freedom would allow possibly closer/better simulation of CRTs from every era (80s through 90s).

My thoughts so far bring me to believe that it would be great to have the gamma curves in the shader, which are now stitched to the specific colour spaces, separated out, just so that for example we could accommodate the above case.

Ideally on the output side we could do the same. Currently power-law gamma is stitched to the DCI space, while it could make a lot of sense to have the sRGB curve in this case, as many current monitors have a “target” of the DCI colour gamut (e.g. 90% coverage or whatnot), but their gamma is actually better described by sRGB than by a power law.

So the short story is whether it could make sense to make the gamma_in (CRT) curves and the output curves separate parameters, so that the freedom to mix and match colour spaces with gamma functions gives more flexibility to accurately match/simulate the wide range of CRTs out there on the wide range of (O)LEDs out there.

So for both the “Your Display” section and the “CRT” section, a “Gamma Type” could be introduced that offers the various gamma functions as a parameter broken out from the colour spaces (but with the same names, for easy identification that “by default” they should be matched up):

So for the “YOUR DISPLAY’S SETTINGS:” it could be something like:

SDR: Display's Colour Space: r709 | sRGB | DCI-P3
SDR: Gamma Type: r709 | sRGB | DCI-P3
SDR: Gamma Value: (1.0 - 5.0)

and on the “CRT SETTINGS:” it could be something like

Colour System: r709 | PAL | NTSC-U | NTSC-J
Colour System Gamma Type: r709 | sRGB | Power Law
Colour System Gamma Value: (1.0 - 5.0)
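A toy sketch of what that decoupling might look like on the CRT side: the gamut matrix and the gamma function become two independent lookups instead of being stitched together. All names here are illustrative (the r709 matrix is left as identity since 709 is the working space); a real implementation would carry the shader's actual NTSC-U / NTSC-J / PAL matrices.

```python
IDENTITY = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

# Gamma function chosen independently of the colour system
GAMMA_FNS = {
    "power": lambda v: v ** 2.4,
    "srgb":  lambda v: v / 12.92 if v <= 0.04045
             else ((v + 0.055) / 1.055) ** 2.4,
}

# Gamut matrix chosen independently of the gamma function.
# r709 is identity here because 709 is the working space in this sketch.
GAMUT_MATRICES = {
    "r709": IDENTITY,
}

def crt_decode(rgb, gamma_type="power", colour_system="r709"):
    """Decode a signal to linear light with gamut and gamma picked separately."""
    linear = [GAMMA_FNS[gamma_type](c) for c in rgb]
    m = GAMUT_MATRICES[colour_system]
    return [sum(a * c for a, c in zip(row, linear)) for row in m]
```

The design point is simply that `gamma_type` and `colour_system` vary independently, so an 80s power-law CRT with 709-ish phosphors is expressible, which the stitched setup can't do.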
1 Like

So you’re absolutely right in saying switching on these extra subpixels is wrong when strictly considering a CRT screen, and what we’ve highlighted here is that in order to use HDR, and therefore the rec.2020 colour primaries, we have to rotate pure green towards red to look right, and so break this ideal.

The only real thing I can say is that blooms and blurs turn on subpixels to simulate luminance (which our eyes are far more sensitive to), whereas in this case we’re turning on subpixels to simulate chromaticity, which our eyes are less sensitive to (this is how JPEG works). I.e. you could argue it’s much more noticeable/inaccurate to turn on those subpixels for blooms and blurs than to turn them on to get the right colour in HDR.

I’ll let you decide how much water you think that argument holds, but you can definitely see the difference between this shader and most (all?) of the presets in the presets folder, for instance.

Basically I just have to be pragmatic about this and accept there doesn’t appear to be a way currently to get HDR levels of luminance without shifts in chromaticity.
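The luma/chroma split being referenced (the "this is how JPEG works" point) is the standard BT.601-style conversion JPEG uses: luminance ends up in Y, colour in Cb/Cr, and codecs spend fewer bits on the latter precisely because our eyes tolerate chroma error better. A minimal version:

```python
def rgb_to_ycbcr(r, g, b):
    """BT.601 full-range RGB -> YCbCr as used by JPEG (Cb/Cr offset to 0.5)."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 0.5
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 0.5
    return y, cb, cr

# White maps to (Y=1, Cb=0.5, Cr=0.5): all the signal is in the luma channel,
# which is the channel our eyes scrutinise, and the one blooms/blurs disturb.
```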

4 Likes

So yeah, that suggestion makes sense; the only thing I’d say is that maybe it over-complicates things? Mind you, we’re over-complicated as it is, so what’s a bit more complication. Let me have a think about it - I need to streamline the shader a bit now anyway, as it’s getting a bit gnarly in my opinion.

With regards to what the old colours looked like: sure, it may be a bit subjective, but I do have two Sony 2730 PVMs, so I’ll do a side-by-side and check, but I’m pretty sure they’re not as yellow in the greens. I’ll check though - I’ll post some pictures for you.

2 Likes

Just added another pull request, for Sony Megatron V4.1, which adds a switch between what I’m currently thinking is more colour accurate (soon to be proved/disproved) and what is mask accurate.

This lets you easily see the differences, if nothing else. I’ve defaulted it to the shader’s old behaviour (i.e. colour accuracy) for now, but it’s totally up in the air what the default should be and whether ‘colour accurate’ is actually correct.

4 Likes

Hi @MajorPainTheCactus

I’ve been testing HDR mode a bit again and I don’t seem to be able to get as good/accurate a Gray Ramp in HDR mode as in SDR mode. Whereas in SDR mode I can tweak the Gray Ramp pretty much to perfection, in HDR something always seems off: either the whites remain blown out, or I can balance the whites but then the whole screen is too dark, etc. I’ve tried everything in the book (max nits, paper white, gamma, contrast).

Just to verify with you: are you able to get as well-balanced a Gray Ramp in the 240p test suite in HDR mode as in SDR mode?

2 Likes

Maybe it’s because 240p test suite was designed for use in SDR mode?

2 Likes

In my understanding there shouldn’t really be any difference, as HDR mode does inverse tonemapping that recreates the SDR luminance scale in HDR “space”. So basically SDR images should be recreated in HDR space close to how they are seen on SDR hardware.
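The core of that idea can be sketched very simply: decode the SDR signal to linear light, then pin SDR white to a chosen "paper white" level in absolute nits (after which the result would be PQ-encoded for the display). The parameter names and the 200-nit default here are illustrative assumptions, not the shader's actual values.

```python
PAPER_WHITE_NITS = 200.0  # illustrative: where SDR 1.0 should land in HDR

def sdr_to_nits(v, gamma=2.2, paper_white=PAPER_WHITE_NITS):
    """Map an SDR signal value (0..1) to absolute luminance in nits."""
    return (max(v, 0.0) ** gamma) * paper_white

# sdr_to_nits(1.0) == paper_white: SDR white lands exactly at the chosen
# level, so the SDR tone scale is reproduced, just relocated in HDR space.
```

The caveats come from everything around this step: the display's own HDR processing, the gamma mismatch discussed earlier, and the mask break-out all perturb the ramp, which may explain the grey-ramp differences being reported.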

If I understood correctly from @MajorPainTheCactus then this inverse tonemapping isn’t without caveats though, so maybe he has something to say about this.

2 Likes

Closely but not identically. I’ve yet to see an image or asset that was designed for SDR look identical or even close to identical in HDR mode/colourspace on my Windows PC. So that might account for at least some of the difference.

The solution to this might lie in manual calibration and tweaking as you have embarked upon but with so many things there are tradeoffs.

Maybe if you had a brighter display things might have been better after calibration.

@MajorPainTheCactus seems to be getting stellar results on his hardware setup, though his equipment might not be representative of what the vast majority of users have access to.

I would like to see more feedback from users to gauge what kind of experience people are getting from this shader, because so far it seems like some are getting great results, others not so good, and for others something always seems to be missing.

Maybe part of the issue is that the expectations and standards might be a little different for this shader in a sort of uncanny valley kind of way.

The concept is sound and has been proven, and the potential is there, but I think the userbase needs to grow so we can see what sort of awesome presets and photos other users come up with and start sharing with the wider community.

1 Like

So the first question here is: did this happen before the most recent changes fixing the mask in HDR? I.e. if you toggle the accurate mask/colour switch introduced last night, can you get a balanced grey ramp?

If it did, then you’re probably going to have some issues with the inverse tonemapping - it’s not perfect. But let’s see what we can do.

If it didn’t happen before the mask-accurate change, then I can completely see it not looking right, given that we’re breaking out to virtual phosphors, much like the colour issue we were talking about last night.

But let’s see if there are any bugs lurking in there.

2 Likes

I wouldn’t jump to these conclusions - we’re talking about reasonably minor complaints, and considering everybody has been happy with blooms and blurs and the impact they have on image quality, it’s not really an argument to ditch HDR. I mean, feel free to go back to them, but you’re going to get a better experience IMO with more luminance.

3 Likes

The toggle changes the gray ramp in HDR a tiny bit for the better, but it doesn’t come close to the quite perfect gray ramp I’m seeing in the SDR version.

Just for the record I’m super happy with the SDR version, so I (currently at least) have no real need for the HDR version.

Are you seeing any differences when looking at the 240p Gray Ramp in HDR mode and SDR mode?

2 Likes