Sony Megatron Colour Video Monitor

Hmm… perhaps keeping the basic setting as Colour Boost and giving the new gamuts nicknames would work? Say, Off=709 (same as now), On=Expanded709 (same as now), Balanced=P3-D65, and Extreme=Rec2020? That way the options are there for advanced users to make WCG SDR shaders work correctly in HDR, while more casual users get the knock-on benefit of additional “options” in a form that is easier to parse?

(It could also be Off/Light/Full/Extreme, or anything else. The gamuts being available and what the setting actually does being documented for advanced users is the important thing.)

What it ultimately comes down to is that it is currently impossible to make gba-color.slangp and other similar shaders work correctly to their full potential with HDR enabled. You have to disable HDR and use WCG SDR instead.

As I see it, the available options to fix this are:

a) Adding those additional gamuts to Colour Boost, regardless of how we name those settings.

b) Making a bespoke “Advanced HDR Options” shader that overwrites the Colour Boost option with those gamuts available as a parameter.

c) Making it possible to turn off scanlines in Megatron like you can in guest-advanced, in which case Megatron would do double duty as the hypothetical “Advanced HDR Options” shader from option b.

I don’t ultimately care how we get there, but I feel very strongly that one of those options needs to happen in some form to make RetroArch’s HDR support more feature complete.

2 Likes

OK, but those shaders would never have worked correctly in the past if they’re not compatible with 709, right?

Shouldn’t those shaders be changed to either offer the option to convert to 709 (a straightforward matrix multiply), OR provide an HDR-native version (which turns off the built-in inverse tonemapper and HDR10 conversion - see below) that itself takes care of the P3-D65 to Rec.2020 conversion, or whatever it wants to do, like the Sony Megatron does?

More work for the latter, but it’s pretty much six of one, half a dozen of the other in terms of end result.
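For reference, the “straightforward matrix multiply” mentioned above really is just a 3x3 matrix. A minimal Python sketch for illustration (my own code, not shader source; the coefficients are the commonly published linear P3-D65 → Rec.709 values, approximate, valid because both gamuts share a D65 white point):

```python
# Linear P3-D65 -> linear Rec.709 conversion matrix (approximate,
# commonly published values; both spaces share a D65 white point,
# so each row sums to 1 and white maps to white).
P3_D65_TO_REC709 = [
    [ 1.22494018, -0.22494018,  0.0       ],
    [-0.04205695,  1.04205695,  0.0       ],
    [-0.01963755, -0.07863605,  1.09827360],
]

def p3_to_709(rgb):
    """Matrix-multiply a linear P3-D65 colour into linear Rec.709.
    Out-of-gamut colours come out with components < 0 or > 1."""
    return [sum(row[i] * rgb[i] for i in range(3)) for row in P3_D65_TO_REC709]
```

Note that a fully saturated P3 red produces negative green/blue components in 709, which is exactly the out-of-gamut information a WCG shader wants to preserve by skipping this conversion.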

The beauty of exposing these settings to the shaders is that shaders can use them to decide for themselves what to do - they can decide what ‘colour boost’ on/off should mean. If they need more than on/off, they can add their own parameter and ignore colour boost in the main menu altogether. It does sound like colour boost should be ignored in this case.

The built-in HDR inverse tonemapping and HDR10 conversion is really only a band-aid to help all the shaders use HDR without the authors having to do any work - it’s not really meant as a proper solution for any one shader.

Just for clarity of implementation: all I did was rename the option - it’s all still expand gamut in the RetroArch source code and does exactly the same thing.

c) Making it possible to turn off scanlines in Megatron like you can in guest-advanced, in which case Megatron would do double duty as the hypothetical “Advanced HDR Options” shader from option b.

We kind of already have that; the key is to put:

#pragma format A2B10G10R10_UNORM_PACK32

in the final pass, which turns off the internal inverse tonemapper and HDR10 conversion that RetroArch provides, and then include:

shaders_slang\hdr\shaders\include\gamma_correct.h

and call:

void GammaCorrect(const vec3 scanline_colour, inout vec3 gamma_corrected)

optionally for the inverse tonemapper they can include:

shaders_slang\hdr\shaders\include\inverse_tonemap.h

and then call:

vec3 InverseTonemap(const vec3 sdr_linear, const float max_nits, const float paper_white_nits)

I’m sure Gemini or Chat Jippity can knock a slang file together in seconds if pointed at the files.

Isn’t this the whole point of hiding certain things under Advanced Settings? You can’t make the proper selection without having the baseline technical knowledge anyway. Abstracting things behind made-up lingo is just annoying.

My consumer television uses the technical terms. If it’s good enough for a TV, I can’t see why it can’t be good enough for someone who is knowledgeable enough to set up emulation.

2 Likes

Hi @MajorPainTheCactus, just a small update: I decided to test the AppImage nightly build of RetroArch on Linux, and the HDR option still doesn’t appear on it, even with these updates to the RetroArch HDR system. For Linux systems, AppImages are self-contained application packages, with all their dependencies bundled directly in the .AppImage binary, making them portable like some Windows .exe files. So the missing HDR option is probably due to some missing library or something going wrong, and it will likely only be fixed if someone looks into what is missing/wrong and fixes it. In the Flatpak build, the HDR option is there, so I just need to wait for the HDR metadata update to test how it behaves; however, I couldn’t compile the Flatpak build the last time I tried, so I may also have to wait for a new release after the HDR metadata update to be able to test it.

But I took the opportunity to download the updated Sony Megatron V2 from your repository, which hasn’t even been merged into the Libretro shaders repository yet, and it works perfectly in SDR. I love that it doesn’t even need to switch between SDR/HDR in the shader parameters like before. As I mentioned, my hardware is modest - an AMD desktop with its iGPU and a cheap entry-level monitor running Debian Linux - but even so, the Sony Megatron shaders look great after smoothing the scanlines. I really like getting close to the screen and seeing the details of the mask, just like I did as a kid with my real CRT (which still exists and works :stuck_out_tongue:).

So, thank you again for the excellent work and dedication!

1 Like

Real CRT

Sony Megatron Colour Video Monitor

Apparently CRT-Guest-Advanced already does this, with the scanline gaps appearing “behind” the phosphors, or at least appearing much weaker, more diffused, and less opaque than what was presented in the Tyris’ forearm example.

It’s controlled via the “Preserve Mask Strength” and “Preserve Scanline Strength” Shader parameters.

When Preserve Scanline Strength is set to 1.00, the Scanline gaps appear sharp, opaque and over the phosphors, bisecting them unnaturally on bright pixels or where the scanline gaps are narrow.

When it is set to a low value, it looks much closer to how it is in the real CRT photo, with the scanline gap appearing behind the phosphors and that portion of the mask appearing dimmer, which makes sense because the phosphors over the tiny scanline gap are less energized.

@MajorPainTheCactus, is it feasible/possible to have the mask/phosphors and scanline gaps have the proper Z order with alpha blending and behave more similarly to what is in the CRT pics and what happens in CRT-Guest-Advanced?

Also, if it is feasible and possible, can you implement the “Base Black Mask” feature, which allows us to see the unexcited vertical phosphor stripes if we get close enough to the screen, and also gives the edges of the phosphors something “non-zero” to fade and blend into as energy levels decrease?

1 Like

V2.6.0 Sony Megatron Shader

Evening all, so @hunterk has kindly merged (thanks!) the latest version of the Sony Megatron Shader, V2.6.0. The eagle-eyed amongst you might notice we’ve jumped from v5.8 to v2.6.0, as in a new major number. This is because it is not only a major rewrite, it is also more tied into the RetroArch runtime, as it now uses the HDR menu’s settings rather than its own. This means it will only work with builds from last night onwards.

Because we allow users to download the latest shaders, we will have the situation where old versions of RetroArch won’t work with the new Sony Megatron shader, and thus I have left the old v1 shader as is; the new v2 shader is named ‘crt-sony-megatron-v2’. This has the pleasant side effect that all presets should still work perfectly fine, and users can upgrade to the new version of the Sony Megatron at their leisure.

Although this is largely a rewrite, it should still be quite close to the original, but with fixes and QoL improvements, and able to target devices like the Raspberry Pi 5 that I’m particularly interested in supporting:

Quality Improvements:

  • Linear inverse tonemapping without SDR elbow - The old version used a piecewise function with a hardcoded knee at 0.75, creating a “shoulder” that treated highlights differently: sdr_shoulder_factor = (1.0 - 0.75) + luma * 0.75 . This meant shadows and midtones (below 0.75) were expanded less aggressively than highlights, compressing the tonal distribution. V2 uses a pure linear rational function across the entire range: input / (1.0 - input * (1.0 - 1.0/peak_ratio)) . Every stop from black to white gets expanded proportionally to the peak_ratio, preserving the original gamma curve’s shape. No artificial kinks in the tone mapping - what goes in at gamma 2.2 comes out at gamma 2.2, just scaled to HDR range.
  • Max-channel scaling instead of luma-weighted - V2 finds the brightest RGB channel and scales all three channels by the same ratio to preserve hue exactly. The old version calculated perceptual luma (0.2126R + 0.7152G + 0.0722B), then scaled RGB proportionally to that, which could shift hues when the max channel and luma diverged (particularly noticeable in saturated blues and reds).
  • Better ST2084 (PQ) conversion - The old version used abs(channel) before the power function, which could produce incorrect results if you somehow got negative values in your linear HDR signal. V2 clamps to positive values first with max(colour, 0.0) , then processes each channel. This is mathematically cleaner - negative light doesn’t exist, so we enforce that constraint before the perceptual quantizer curve rather than silently converting negative values to positive ones inside the transfer function.
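To make the maths in the three bullets above concrete, here is a small Python rendition for illustration (my own sketch, not the shader source; the function names are mine, and the PQ constants are the standard ST.2084 values):

```python
# Sketch of the V2 maths described above: linear rational inverse tonemap,
# hue-preserving max-channel scaling, and ST.2084 (PQ) encoding.

def inverse_tonemap(x, peak_ratio):
    """Pure linear rational expansion: input / (1 - input * (1 - 1/peak_ratio)).
    Maps 0 -> 0 and 1 -> peak_ratio with no knee or shoulder."""
    return x / (1.0 - x * (1.0 - 1.0 / peak_ratio))

def expand_max_channel(rgb, peak_ratio):
    """Scale all three channels by the ratio derived from the brightest one,
    so hue (the channel ratios) is preserved exactly."""
    maxc = max(rgb)
    if maxc <= 0.0:
        return [0.0, 0.0, 0.0]
    scale = inverse_tonemap(maxc, peak_ratio) / maxc
    return [c * scale for c in rgb]

def pq_encode(nits):
    """ST.2084 (PQ) encode of one channel given absolute nits.
    Negative input is clamped to zero first, as described for V2."""
    m1, m2 = 2610.0 / 16384.0, 2523.0 / 4096.0 * 128.0
    c1 = 3424.0 / 4096.0
    c2, c3 = 2413.0 / 4096.0 * 32.0, 2392.0 / 4096.0 * 32.0
    y = max(nits, 0.0) / 10000.0        # normalise to the 10,000-nit PQ peak
    ym1 = y ** m1
    return ((c1 + c2 * ym1) / (1.0 + c3 * ym1)) ** m2
```

As a sanity check: white (input 1.0) expands to exactly `peak_ratio`, channel ratios survive the expansion unchanged, and `pq_encode(10000)` lands exactly on 1.0.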

Performance Gains:

  • 60% smaller shader code (1697 → 683 lines) - Fits easily in the GPU’s instruction cache instead of thrashing it with uber-shader bloat.
  • Single texture read instead of two - V2 only samples SourceHDR, eliminating the redundant SourceSDR texture fetch. Halves memory bandwidth in the hottest code path.
  • Branchless mask lookups via LUTs - Replaced nested switch statements with const array indexing ( kApertureGrilleLUT[lcd][tvl][bgr] ). GPUs perform a single scalar load from constant memory per warp instead of navigating a control flow tree. Array indexing is arithmetic, not branching—GPUs prefer this.
  • Reduced register pressure - The old shader’s massive switch-case structure forced the compiler to reserve registers for potential code paths even if dead. V2’s linear control flow lets the compiler reuse registers aggressively across the simplified GetMask functions.

Usability Wins:

  • Native HDR integration with RetroArch - I modified RetroArch to expose new HDR constants (PaperWhiteNits, ExpandGamut) that the shader can read directly. No more manually duplicating values between RetroArch’s HDR menu and shader parameters - V2 reads exactly what you’ve set in RetroArch’s settings automatically
  • Streamlined parameter menu - Removed the clutter from the old version. V2 puts your display settings (resolution, subpixel layout) right at the top where you need them first. The old version buried these under walls of instruction text and mixed HDR/SDR settings together
  • Clear SDR/HDR separation - V2 has a dedicated “SDR MODE SETTINGS” section that clearly states “These settings only apply when HDR is turned off”. In the old version, HDR and SDR parameters were interleaved in the same section with “HDR:” and “SDR:” prefixes - easy to get confused about what applied when
  • Removed obsolete parameters - Cut the manual HDR toggle, Peak Luminance, and Paper White parameters since the shader now reads these directly from RetroArch. Also removed Pin Phase/Amp controls that most users never touched
  • Better grouping - Instructions and reference info (white point values, gamma cutoffs) are now tucked under the CRT settings where they’re relevant, not blocking the top of the menu
  • Reorganized presets - Split HDR and SDR presets into separate folders for easier navigation
  • Bug fixes galore - Squashed numerous issues from the original implementation

All the existing CRT model presets (PVM-2730, PVM-20L4, JVC TM-H1950CG, etc.) have been updated to V2 versions.

4 Likes

I can assure you every TV out there has marketing terms for pretty much every technical term. I have a Samsung S95 and its menu is full to the brim with them at the top level: ‘Film’ mode, ‘Dynamic’ mode, etc. In Game mode I have RPG mode, FPS mode.

It makes complete sense to have straightforward, non-technical terms in your high-level menus. Every consumer product does this.

Advanced settings are different, yes, but that’s where the shader parameter menu comes in.

Yes, sorry, I haven’t done the Linux work yet - this work was done building up to that. I needed to do some perf work and fix some big issues before I went about enabling HDR on Linux.

1 Like

We are talking about two different things. RetroArch has built-in support for advanced settings which are hidden by default. A specific color gamut selection could be put there if it’s ‘too technical’ for users.

The color boost setting seems to be intended for early ‘HDR’ monitors that had poor tone maps, resulting in washed-out colors. Make the name explicit: ‘Increase color saturation’, with the description ‘Increases saturation of colors at the expense of accuracy, for monitors with poor HDR support’. Just make it explicitly clear what the purpose of the setting is.

1 Like

Yes, the CRT-Guest-Advanced shader is usually trying to simulate the look of a CRT on cheaper SDR monitors by adding loads of software effects (and it does it brilliantly). The Sony Megatron is tackling the problem from a different angle: do as little as possible in software and let the display do all the work, i.e. hardware effects - this means using it with very expensive displays to recreate the CRT look.

The 5K monitor resolution slot mask is just because scanlines then have 12 pixels for 240p. You then divide that by two for 6 pixels high, and then subtract 1 pixel for the horizontal bar. This gets you near the right ratio of horizontal bar to scanline (8-9%) and keeps the horizontal bars in the same position going down the scanlines, as you point out.
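The arithmetic above can be checked quickly (my own worked example, assuming “5K” means a 5120×2880 panel and 240p source content):

```python
# Worked arithmetic for the 5K slot-mask sizing described above
# (assumption: "5K" = 5120x2880 panel, 240p source).

panel_height = 2880                                  # vertical pixels on a 5K display
source_lines = 240                                   # 240p content
pixels_per_scanline = panel_height // source_lines   # 12 panel rows per source line

slot_height = pixels_per_scanline // 2               # 6: slot mask repeats twice per scanline
horizontal_bar = 1                                   # 1-pixel bar taken out of each slot
bar_ratio = horizontal_bar / pixels_per_scanline     # 1/12 ~ 8.3%, inside the 8-9% target
```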

Most of the lighting effects you describe fall out of the higher brightness, but I’m sure there is a bit of halation etc. that can be thrown in. Signal noise is a whole other story, though.

1 Like

OK, you’re right, I could add an advanced section, but this then gets ‘political’ pretty quickly, as the line between what the built-in shader does (it has to support what the main menu does) and what the custom shaders do starts to blur significantly.

As in, if the built-in shader does a lot of the stuff that the custom shaders do, then why would 95% of users bother downloading the custom shaders and getting into all the shader and preset authors’ hard work?

I think there’s a balancing act here - we as a community don’t want the built-in shader doing so much that it takes users away from the custom shaders. It should just do the bare minimum for the high-level features (like HDR), for people who don’t want to get into tweaking halation values and Japanese colour gamuts.

Granted, what one person calls the bare minimum and what another person calls the bare minimum is up for debate. Largely, the features exposed today are a legacy of the initial implementation, when things were far less understood.

Well, one step at a time. I think if you can eliminate that Tyris’ forearm anomaly via optimizing the Z hierarchy and alpha blending between layers/passes/elements/textures/tiles, you would have accomplished a lot, and we already have a proof of concept.

Take a look at this gallery for some more inspiration.

https://www.reddit.com/r/crtgaming/s/vCXuepQFnW

1 Like

It’s easier to maintain RA than it is to maintain a collection of community-contributed shaders. If something like a BT.2020 setting in RA allows a bunch of WCG shaders to work in HDR mode, that seems closer to a no-brainer than to a political debate, IMO, especially when there are situations where a shader needs WCG but not HDR (Game Boy color correction).

Another solution could be a #pragma to inform RA of what colour space the shader expects its output to be in. Then RA could do a final conversion if needed, without needing to update the whole shader.

2 Likes

I’m assuming v2 is broken on my system?

According to Lilium’s HDR analysis ReShade shader, with Luminance at 800 and Paper White at 650, crt-sony-megatron-v2-default has a MaxCLL of only 36.7 nits. Using crt-sony-megatron-default-hdr, 800/650 results in a MaxCLL of 776.7 nits. I have to turn v1 all the way down to 40/30 to get a result similar to the one seen in v2 (37.5 nits).

A Windows 10 issue perhaps?

Relatedly:

I honestly don’t think this is a good idea. I use vastly different Luminance and Paper White values with Megatron compared to my shaderless settings.

(Relatedly, we will also need to add an HDR UI brightness setting now that it is no longer tied to Paper White. 10,000-nit text is ludicrously bright.)

Tbh, between this and your reaction to the gamut discussion, I’m getting the impression you think that HDR in RetroArch has no use beyond CRT simulation and should be disabled otherwise?

HDR’s WCG capabilities also have a variety of applications for emulation. Nearly every single color console and handheld released prior to 2005ish was presumed capable of displaying colors outside of the Rec709 gamut.

1 Like

Okay, so I found the source of the problem.

In crt-sony-megatron-hdr-pass-v2.slang, replacing:

   if(HCRT_HDR >= 1.0f)
   {
      const vec3 rec2020  = hdr_colour * k2020Gamuts[uint(HCRT_EXPAND_GAMUT)];
      transformed_colour  = rec2020 * (HCRT_PAPER_WHITE_NITS / kMaxNitsFor2084);
   }
   else
   {      
      transformed_colour = hdr_colour;
   }

with just

   transformed_colour = hdr_colour;

makes v2 work as expected for me.

That used to be part of the old Mask Accurate/Colour Accurate code. From “crt-sony-megatron-hdr-pass.slang”:

   if((HCRT_HDR >= 1.0f) && (HCRT_COLOUR_ACCURATE < 1.0f))
   {
      const vec3 rec2020  = hdr_colour * k2020Gamuts[uint(HCRT_EXPAND_GAMUT)];
      transformed_colour  = rec2020 * (HCRT_PAPER_WHITE_NITS / kMaxNitsFor2084);
   }
   else
   {      
      transformed_colour = hdr_colour;
   }

So this change would be equivalent to using Colour Accurate in v1.

1 Like

I can also confirm that if the “SDR | HDR”, “HDR: Display’s Peak Luminance”, “HDR: Display’s Paper White Luminance”, and “HDR: Original/Vivid” parameters are added back into v2, v2 will then work on older versions of RetroArch (tested on 1.22.1 stable).

Do with that information what you will.

1 Like

Ah, great stuff, thanks! I must have merged that in incorrectly, as I found that bug myself and fixed it. OK, I’ll get that fixed ASAP. Just a note: I’ve ditched ‘mask accurate’, as it’s essentially impossible for it to be correct from a colour perspective, so it doesn’t make sense to me. I probably should have added that to the release notes.

UPDATE: I know what happened - I fixed it in the internal GLSL/HLSL shaders but not in the v2 slang (hmm, but I did visually check that they all produced identical results - maybe I lost the change; anyway, it’s wrong, as you’ve pointed out).

The equivalent SDR codepath may also need to be fixed in “crt-sony-megatron-source-pass-v2.slang”. I didn’t get around to testing it, but the code looks like it has the same issue.

1 Like

Yup I noticed that too and is now fixed in the new pull request:

V2.6.1 Sony Megatron Shader

Thanks for pointing that out - I don’t know what happened. I’ve got two machines with two lots of shaders each, and multiple shaders inside each set, plus the built-in ones - it’s a bit hectic. (And yes, I should probably ditch the downloaded set and replace it with my forked repo set, to at least remove that duplication. Oh well, something to change going forwards.)

Do you remember why Mask Accurate was added in the first place? Certain displays don’t need it, but LG WOLED displays wouldn’t show accurate RGB triads and be able to pass the Coke-Pepsi test without it.

Why destroy that joy of having triads and slot masks that look like real triads and slot masks for those users? Unless the issue of stray subpixels being turned on in Colour Accurate mode was in fact solved.

As for not being able to get correct colour with Mask Accurate, have you heard a single user complain about the colour they were getting using Mask Accurate on a WOLED display?

It’s not as if colour correction or adjustments can’t be made to compensate, if that’s even necessary on those displays. Why limit user choice? Some might prefer a more accurate-looking CRT element structure over theoretically slightly more accurate colour, because the actual colour you get out of a dimmer WOLED screen might be far from accurate in any case, especially the reds when Saturation or Paper White Luminance is low.

1 Like