Sony Megatron Colour Video Monitor

We could still make the bug report to AMD and see what happens. It could be a genuine bug/mistake on their part, and who knows what else might be affected by it?

I’ve tried with HDR on/off in Windows, in RA, and within the shader params. No combination of settings helps. I’ve tested cyberlab’s miniLED Epic pack and Azmod’s, and all presets using aperture grille are bugged.

I couldn’t get Megatron working in Mega Bezel; it caused RA to close.

1 Like

By the way, have you tried enabling and disabling display scaling?

When I get a chance, I’m going to share an alternative sonymegatron.slang with reversed X (dark subpixel) placement. In Sony Megatron Colour Video Monitor the X comes after the RGB subpixels, but for at least some of CRT-Guest-Advanced’s masks the X is at the beginning.
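To illustrate the difference, here’s a minimal sketch in slang-style GLSL (names and structure are mine, not the actual crt-sony-megatron.slang code):

```glsl
// Sketch only: one aperture grille cell of four subpixels,
// selected by horizontal position, e.g. subpixel = uint(screen_x) % 4u.

vec3 MaskRGBX(uint subpixel)  // stock: dark "X" slot after R, G, B
{
   const vec3 cell[4] = vec3[4](vec3(1.0, 0.0, 0.0),
                                vec3(0.0, 1.0, 0.0),
                                vec3(0.0, 0.0, 1.0),
                                vec3(0.0));
   return cell[subpixel];
}

vec3 MaskXRGB(uint subpixel)  // reversed: dark slot first, matching
                              // some CRT-Guest-Advanced mask layouts
{
   const vec3 cell[4] = vec3[4](vec3(0.0),
                                vec3(1.0, 0.0, 0.0),
                                vec3(0.0, 1.0, 0.0),
                                vec3(0.0, 0.0, 1.0));
   return cell[subpixel];
}
```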

Have you used a test pattern to 100% verify that the problematic AMD driver is correctly outputting 444 chroma? (Though I highly doubt it’s chroma subsampling; that presents quite differently from what you describe.)

Also, for troubleshooting purposes, check if the masks are visible at 1920x1080.

1 Like

@Wilch1, @bigretrofan you can try this version of crt-sony-megatron.slang which has the Black subpixel in front of the RGB subpixels instead of after. Just be sure to backup your original crt-sony-megatron.slang before trying. Only the 4K, Aperture Grille masks have been modified.

Update:

Just did some testing with an AMD Radeon RX 6600 using Driver 25.12.1 and was able to replicate the behavior, even on a 1080p BGR SDR display. D3D11 is completely fine though.

I tested the above crt-sony-megatron.slang and the problem was still there.

Does the mask bug occur with both RetroArch and Megatron in SDR mode?

Yes, SDR/HDR mode doesn’t seem to matter.

I tried switching between 8-bit, 10-bit, and 12-bit modes, as well as between 8-bit and 10-bit OpenGL modes, in my Adrenalin Driver Control Panel. I tried setting Integer Scaling on/off, enabled/disabled GPU Scaling, and set the scaling mode to center. None of that helped.

Yeah, I tried with scaling disabled straight away; forgot to mention it.

1 Like

Yep, tried the test pattern early on too; 444 seems fine. Just tried 1080p with the same result.

1 Like

Ah nice, you’ve been able to replicate :+1:

1 Like

Good evening everyone, long time no see. I hope you’re all doing well.

Ok, so 2026 is going to be a big year for displays if CES 2026 is anything to go by. We saw TCL show off 9000-nit TVs, Samsung and LG show off proper RGB subpixel technology for QD-OLEDs and OLEDs respectively, and of course, we saw Nvidia release G-Sync Pulsar—the ultimate conclusion to LCD backlight strobing for motion clarity.

All that (along with a recent Retro Crisis video showing the pain of setting up HDR) gave me enough enthusiasm to revisit my HDR implementation and upgrade it.

I set out with a number of goals, which I have now implemented:

1. Simplify the end user experience:

Previously, users had a “three-body problem” to solve in the HDR menu by balancing Peak Luminance, Paper White Luminance, and Contrast. I have done away with Contrast as it is not needed. Users now set a one-time Peak Luminance value and from then on use Paper White as their overall brightness control.
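To picture why that works, here’s a minimal sketch in slang-style GLSL, assuming HDR10 output (the names are illustrative, not the actual RetroArch uniforms). PQ (ST 2084) encodes absolute luminance over 0-10000 nits, so Paper White is just a linear scale into that space, while Peak Luminance only sets the ceiling the inverse tonemapper expands into:

```glsl
// Hedged sketch, not the shipped code.
const float kPQMaxNits = 10000.0;  // PQ encodes absolute 0-10000 nits

vec3 SDRToHDR10Linear(vec3 sdr_linear, float paper_white_nits)
{
   // Raising/lowering Paper White slides the whole image up and down
   // the PQ range -- which is why it works as the one brightness control.
   return sdr_linear * (paper_white_nits / kPQMaxNits);
}
```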

Also, previously when users went into native HDR shaders, they were expected to change yet another set of shader-internal parameters that seemingly conflicted with the menu settings, adding another layer of confusion. I’ve exposed the HDR menu settings to the custom shaders so this should no longer be an issue, once the custom shaders have been updated to take advantage of the new features (I will be updating Sony Megatron soon!).

2. Improve quality:

It’s been noted for a long time that there are better ways to do HDR. The previous implementation in RetroArch actually dated back nearly a decade (although it didn’t appear in RetroArch until four years ago). That was in the early days of HDR, and it took into account constraints from the displays of the time that no longer exist.

The old implementation had an artificial “elbow” in the mapping: below 0.5, the SDR image was mapped linearly to HDR space, and above 0.5, the highlights were expanded out to the rest of the HDR range. This effectively only dealt with highlights. I have updated this to remove the elbow and simply map the entire SDR space into the available HDR space, handling shadows and mid-tones much better. Whether you’ll really notice this in retro gaming is another matter, but it is definitely an improvement.
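In rough slang-style GLSL, the difference looks something like this (the curve shapes are illustrative stand-ins, not the code that actually landed):

```glsl
// 'lum' is a linear SDR value in [0,1]; 'hdr_range' = peak / paper white,
// i.e. how far above SDR white the display can go.

// Old (sketch): identity below the 0.5 elbow, then a linear stretch of
// the top half out to the full HDR range -- only highlights expanded.
float OldElbowMap(float lum, float hdr_range)
{
   if (lum <= 0.5)
      return lum;
   return mix(0.5, hdr_range, (lum - 0.5) * 2.0);
}

// New (sketch): one continuous curve over the entire SDR range, so
// shadows and mid-tones are redistributed too. This power curve is a
// stand-in for the real mapping; note it reduces to identity when
// hdr_range = 1 (no HDR headroom).
float NewFullRangeMap(float lum, float hdr_range)
{
   return hdr_range * pow(lum, 1.0 + 0.5 * log2(hdr_range));
}
```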

I also fixed the way we scaled linear color to determine how much to boost a color. Previously, I used a luma coefficient to get a grayscale value; this resulted in reds, for example, not being boosted as much as they should and whites being over-boosted, causing a washed-out look. We now use RGB maxing, which is a much better solution.
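Concretely, the change is in how “how bright is this colour?” is measured before deciding how much to boost it (sketch with my own names; Rec. 709 weights shown as an example of the kind of luma coefficients involved):

```glsl
// Old approach (sketch): scale by a luma estimate.
float LumaScale(vec3 c)
{
   // Pure red only scores ~0.21, so reds were under-boosted;
   // white scores 1.0 from all channels, so it was over-boosted.
   return dot(c, vec3(0.2126, 0.7152, 0.0722));
}

// New approach: scale by the largest channel, so a saturated primary
// is boosted just as much as a neutral at the same channel level.
float MaxScale(vec3 c)
{
   return max(c.r, max(c.g, c.b));
}
```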

3. Future-proofing for efficient hardware:

I believe the future of this tech for retro gaming lies in cheaper devices paired with more expensive displays—i.e., most people are willing to spend an awful lot on the TV in their living room rather than the PC in their bedroom (although looking at memory and 5090 prices, that isn’t universally true!).

To that end, I wanted to target Raspberry Pi 5 level hardware that you can put behind your Samsung or LG OLED in the lounge. To achieve this, I’ve updated the RetroArch HDR system to be as efficient as possible by offloading as much processing to the display as possible. I’ve included a fast, single-pass scanline simulation for HDR aimed at running close to 60fps at 4K.

My aim is to bring this HDR implementation to LakkaTV on the Pi 5. The guys in charge of LakkaTV and the OS it sits on have done most of the heavy lifting, and we just need RetroArch to pass HDR metadata to it. I’ll hopefully get the time soon to add that specific support, but in the meantime, the core RetroArch HDR system is now updated and ready.

So, two things are to come on top of the large update to RetroArch:

  1. The Sony Megatron shader is going to get an upgrade to its scanline simulation, for better HDR blooming and, in addition, better inverse tonemapping and HDR10 support. Hopefully, there will be a good performance boost and the shader parameters will be simplified.

  2. HDR support on Linux through the KMS/DRM layer and the Vulkan driver, rather than desktop Wayland/X11 (the tech used by the Steam Deck/SteamOS). This has actually mostly been done already by others; we just need to pass through the Vulkan HDR metadata correctly and enable HDR.

Anyway, lots to do!

7 Likes

Here’s the merged pull request with some more description of the changes contained inside it:

https://github.com/libretro/RetroArch/pull/18616

3 Likes

That’s great news, thank you for your work and for these amazing shaders! I started using the Sony Megatron shaders only recently, after trying for days to fine-tune the parameters to make them look good on my cheap monitor (1080p, only ~300 nits, but it looks good if I use aperture grille as the mask and smooth the scanlines). Even so, I’m still surprised at how good it looks; I can get close and see the details of the mask simulation.

My aim is to bring this HDR implementation to LakkaTV on the Pi 5. The guys in charge of LakkaTV and the OS it sits on have done most of the heavy lifting, and we just need RetroArch to pass HDR metadata to it. I’ll hopefully get the time soon to add that specific support, but in the meantime, the core RetroArch HDR system is now updated and ready.

Is that related to this issue? https://github.com/libretro/RetroArch/issues/18186

I made some comments there when we were trying to debug it. HDR seems broken on both Linux and Android, although according to some comments there, it depends on the quality of the user’s screen and its HDR support. Given my current monitor, my initial attempts at using it were very bad, so I’m using SDR for now.

1 Like

Welcome back! Good to see you back at it.

When users go to RTINGS they see multiple Peak Luminance values relating to different window sizes, so it’s a bit confusing. Which window size should we use when we go to RTINGS to determine our display’s Peak Luminance value for our RetroArch or Sony Megatron Colour Video Monitor setup? Is it 2% Window Peak Brightness, 5% Window Peak Brightness, 10% Window Peak Brightness, 25% Window Peak Brightness, etc.?

On another note, there seems to be a showstopping bug since AMD updated their Vulkan drivers:

CRT-Guest-Advanced users using supposedly similar masks and TVLs are not experiencing this, nor are DX11/12 video driver users.

Thanks again for your excellent work on RetroArch and your amazing shader(s)!

1 Like

Hi @MatheusWillder, I’m glad you got it working to a satisfactory level.

With regards to that post, that’s exactly it. I believe the underlying OS, driver, and hardware are all supported; the only sticking point is the Pi 5 HDMI port spec for 60fps (I haven’t looked into it). So all that should need setting is that HDR metadata. I got HDR working on a personal build of Android RetroArch, and it kicks and screams a bit saying it doesn’t support rgb10a2, but if you swap it over to rgba16 it’s fine. I’m sure different devices will support different things. Whether the Pi 5 will play ball is another question, but we can only try.

1 Like

Hi Cyber, firstly thanks for all your hard work and effort over the years; it’s greatly appreciated by me (and I’m sure the community at large).

When users go to RTINGS they see multiple Peak Luminance values relating to different window sizes, so it’s a bit confusing. Which window size should we use when we go to RTINGS to determine our display’s Peak Luminance value for our RetroArch or Sony Megatron Colour Video Monitor setup? Is it 2% Window Peak Brightness, 5% Window Peak Brightness, 10% Window Peak Brightness, 25% Window Peak Brightness, etc.?

Yes, it’s a very good point and sadly doesn’t have a simple answer. The general rule of thumb is that OLEDs should look at the 2-5% Peak Luminance figures and LCDs (LED-backlit) should look towards the 50-100% Sustained Luminance figures.

Why?

Well, that’s a bit more interesting. An OLED can turn each subpixel on/off individually; off subpixels draw no power and so don’t contribute to the ABL (Auto Brightness Limiter) power limits. This works great for the Sony Megatron with its strict full mask and scanline simulation, as most of the subpixels are turned off. See this table of calculations:

| Component | Efficiency | Cumulative Percentage |
| --- | --- | --- |
| Native Panel | 100% | 100% |
| RGBX Mask (3/12 subpixels) | 25% | 25% |
| Bezier Scanlines (45% AVL) | 45% | 11.25% |
| G-Sync Pulsar (Strobing)/BFI@240Hz | 25% | 2.8% |

So if you ran the Sony Megatron just on its own on a full white screen you’d be emitting 11.25% of the panel’s full output, and if you have BFI on then it’s just 2.8% over a four-frame cycle. Hence 2-5% Peak Luminance is the metric to look at for OLEDs.
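The cumulative column is just the table’s rows multiplied through:

```
1.00   × 0.25 (mask)          = 0.25   → 25%
0.25   × 0.45 (scanlines)     = 0.1125 → 11.25%
0.1125 × 0.25 (strobing/BFI)  = 0.028  → ~2.8%
```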

For MiniLED LCDs, though, it’s a different story. They use dimming zones, i.e. a group of LEDs lighting up a section of the LCD panel, and it’s these LEDs that use the power and cause the heat. As such they can’t take advantage of turning off subpixels, and so a full white screen with the Sony Megatron uses the same power as a full white screen without the Sony Megatron masks and scanlines. Hence backlit-LCD users need to look at the 50-100% Sustained Luminance figures.

Luckily, it just so happens that LCDs are much better at sustained luminance than OLEDs, so in a lot of cases LCDs at 50-100% sustained are still brighter than OLEDs at 2-5% peak. But the Sony Megatron certainly narrows the brightness gap between the two technologies drastically.

Just out of curiosity, I plugged in the above figures to see what simulating various CRT displays would require on an OLED/LCD with backlight strobing/BFI turned on for motion clarity, and we’re looking at these kinds of figures (it’s worth noting that perceived brightness scales logarithmically with nits, i.e. exponentially higher nits are required for ever smaller perceptual brightness increases - think of the gamma 2.2 curve). The arithmetic is worked through after the list:

  • To match a 100-nit BVM: You need 3,571 nits.

  • To match a 150-nit “Punchy” BVM: You need 5,357 nits.

  • To match an “HDR CRT” (400-nit peaks): You need 14,285 nits.
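For the curious, the arithmetic behind those numbers: with the mask, scanlines, and strobing all active, only ~2.8% of the panel’s light output survives, so the panel must be driven roughly 1/0.028 ≈ 36× brighter than the CRT being matched:

```
100 nits / 0.028 ≈  3,571 nits
150 nits / 0.028 ≈  5,357 nits
400 nits / 0.028 ≈ 14,285 nits
```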

3 Likes

As for the Vulkan AMD driver issue, it’s something to do with the drivers messing up the output. Sony Megatron is pretty simple: it just turns off subpixels. If the driver does some kind of reconstruction/compression/fancy new AI feature, it’s going to wreck the Sony Megatron and, to an extent, all other output. Sadly this is one for AMD, as I can’t do anything about it. Maybe post on the Phoronix forums - they do a lot of work with AMD Vulkan drivers - and see if they know whether it’s a known issue.

1 Like

Welcome back! Very excited to see these updates in action.

(Just as well that I hadn’t gotten around to actually posting that fork topic xD)

2 Likes

This is an absolutely massive night and day upgrade. Well done and thank you for your efforts. I eagerly await digging into a version of Megatron that includes these updates.

In case anyone is wondering, with this update:

  • Setting both Peak Luminance and Paper White to 100 cd/m² matches the SDR baseline.

  • Colour Boost “Off” uses Rec709/sRGB primaries. “On” stretches the gamut to Expanded709.

@MajorPainTheCactus Could you consider adding additional options to Colour Boost for P3-D65 and Rec2020 primaries? This would allow non-HDR aware shaders with WCG options to work correctly.

(Rec601 and PAL/EBU would be nice as well, but admittedly wouldn’t really fall under the definition of “boost” xD)

2 Likes

What’s the impact of this for HDR users who run RetroArch in Gamescope? Steam Deck and SteamOS are growing in popularity, and with the advent of the Steam Machine we may see even more users running RetroArch (or solutions like RetroDECK/EmuDeck). Most (including myself) will be running SteamOS or a fork like Bazzite. Will HDR still work after this change to the KMS/DRM layer?

Excited to see the new updates to the shader!

1 Like