PlainOldPants's Shader Presets

The raw palette is in the NES’s internal hue-level-emphasis format, in that order, with each value scaled to the 0-255 range. The NES PPU writes composite video directly by switching between just a few fixed voltage levels, without any concept of Y or C, so in order to convert that into YC space, you have to simulate the NES PPU’s composite video and decode it.

This article goes over the whole process with some sample code. https://www.nesdev.org/wiki/NTSC_video

For me, the easiest way to understand has been to read source code. These two programs generate only the colors, as opposed to a whole NTSC-filtered image. https://github.com/ChthonVII/gamutthingy https://github.com/Gumball2415/pally

Maybe a text summary would be better, though.

The level is an integer from 0 to 3. This selects a pair of voltages. The NES directly outputs a square wave that alternates between those two voltages. That affects both Y and the saturation of C.

The hue is an integer from 0 to 15. Hues 0 and 13 are grays, which use only one of the two voltages. Hues 1 through 12 output the square wave with one of 12 different phases, giving 12 different hue angles to pick from. If you pick hue 14 or 15, the result is always the same as picking level 1 and hue 13 together, even if level has been set to something other than 1.

Emphasis is 3 bits: one for red, one for green, and one for blue. Each emphasis bit is tied to its own window of the chroma phase; the three windows are offset 120 degrees apart, and each spans half the cycle, just like a hue’s square wave. While the signal is inside a window whose emphasis bit is set to 1, the signal is attenuated down to a different, lower preset voltage. In effect, this causes some reduction of red, green, or blue. Emphasis is skipped if you set hue to 14 or 15.
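
Putting those three paragraphs together, here’s a minimal Python sketch adapted from the reference pseudocode in the nesdev article linked above. I’m reciting the level constants and the emphasis phase anchors (0, 4, 8) from memory, so double-check them against the article:

```python
# The wiki's idealized signal levels (arbitrary units, sync-relative).
LEVELS_LO = [0.350, 0.518, 0.962, 1.550]  # square-wave low for levels 0..3
LEVELS_HI = [1.094, 1.506, 1.962, 1.962]  # square-wave high for levels 0..3
ATTENUATION = 0.746                       # emphasis attenuation factor

def in_color_phase(hue, phase):
    # Each hue's square wave is high for 6 of the 12 samples per chroma cycle.
    return (hue + phase) % 12 < 6

def ntsc_signal(hue, level, emphasis, phase):
    """One composite sample; phase is the running sample counter mod 12."""
    if hue > 13:
        level = 1                     # hues 14-15 force level 1
    low, high = LEVELS_LO[level], LEVELS_HI[level]
    if hue == 0:
        low = high                    # hue 0: constant high (gray)
    if hue > 12:
        high = low                    # hues 13-15: constant low

    signal = high if in_color_phase(hue, phase) else low

    # Each emphasis bit attenuates the half-cycle aligned with hue 0, 4, or 8.
    # (The wiki models this as a constant factor; the measurements discussed
    # later in this thread switch to separate preset voltages instead.)
    if hue < 14 and ((emphasis & 1 and in_color_phase(0, phase))
                  or (emphasis & 2 and in_color_phase(4, phase))
                  or (emphasis & 4 and in_color_phase(8, phase))):
        signal *= ATTENUATION
    return signal
```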

The NES’s output isn’t shaped perfectly, however. For each increase in “level”, you get some shift in hue. Hues 2, 6, and 10 (if memory serves) have a hue shift too, as well as a slight increase in Y. The amount of hue or Y shift varies significantly between different revisions of the NES PPU.

Here’s something that those programs and that article don’t address, though I haven’t ruled out the possibility that my NES just isn’t working properly. On my own NES, choosing between RF or composite also has a significant effect. The NES is much sharper over composite than over RF, with composite having a grainy, sandpapery look, and RF having a more flat, blurry look. Connecting the NES’s composite into a VCR to convert into RF allows you to get the sharp, grainy look on an RF-only TV set. I need to check this again, but I believe I’ve noticed my NES having more saturated colors when connected over RF instead of composite. Don’t forget that the original Famicom in Japan had only RF (though it could be modded for composite), while the NES in the rest of the world had composite, so simulating both is necessary to reflect different developers’ intents.

Edit: I forgot to mention, in the past, I have also video-captured my NES’s palette, but this never turned out great. Decoding the NES’s colors into RGB results in some values becoming less than 0 or more than 255, which makes it impossible to convert them back to YUV/YIQ. That is why using FirebrandX’s Composite Direct palette (or any normal NES palette at all) with a composite video emulation shader does not work. In an attempt to fix this, I did my own NES video capture, with the capture’s black level increased and white level decreased, to keep all values between 0 and 255 without clamping, and in my shaders, I would perfectly undo that change. I have tried this several times, both on Ubuntu and on Windows, and the capture has had problems every single time, so I won’t link it anywhere. Therefore, for now, we’re stuck with emulated palettes, not video capture.
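
For what it’s worth, the math side of that trick is trivial; the capture itself was always the problem. A sketch in Python, with made-up black/white levels:

```python
# Hypothetical capture levels: raise black and lower white so that decoded
# RGB overshooting the nominal 0..255 range still fits without clamping.
BLACK = 32.0   # assumed raised black level
WHITE = 224.0  # assumed lowered white level

def capture_compress(v):
    """What the capture applies: nominal 0..255 squeezed into BLACK..WHITE."""
    return BLACK + v * (WHITE - BLACK) / 255.0

def shader_expand(v):
    """What the shader undoes: the exact inverse of capture_compress()."""
    return (v - BLACK) * 255.0 / (WHITE - BLACK)
```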


I have had two different Sony LCD TVs with HDR capabilities and I have basically zero complaints about them.

@PlainOldPants If you take one of Mesen’s normal palettes, quantize the values according to the measured values, map that to YC, and reconstruct the dot pattern, are you not just doing the same thing effectively? It sounds like you need to use a lookup table and reconstruct the dot pattern either way. Does the raw palette have a bit indicating the missing dot? Can it tell me about the Battletoads exception? But I suppose the raw palette could be more convenient.

I remember trying one of the raw palette shaders and thought it was ugly. But I was a RetroArch noob and maybe was just using it wrong LOL.

IMO we shouldn’t be using these goofy kinds of constructions. They’re hacks. The cores should be able to send a block of metadata to the shaders. Each frame. It would solve a lot of problems and limitations.


Which shaders can I append to your shader to make it look even better?

Pretty much any CRT shader works - guest-advanced, CRT-Royale, etc.


Thanks, Nesguy. I wanted to smooth out his shader with like a CRT shader but wanted to make sure.

@Cyber Thanks for the recommendation. I thought all you needed was the Samsung QD-OLED, since I’m hearing WOLED is at its peak and isn’t going to get better.

That’s based on the testing and opinions of others who have no clue what we’re doing here with CRT shaders, which have different performance requirements compared to general usage.

If WOLED were at its peak, why are the LG G5 and Panasonic Z95B two of the best and most accurate TVs money can buy?

Why are they the brightest OLED TVs ever tested? Why are they brighter than last year’s models? Why do they have wider colour gamuts than last year’s models?

Why is there a roadmap for continued development of the technology?

Out of the two main competing OLED technologies, why are they the only one that has proper black levels in a bright room? You did know about the blacks getting raised and turning brown on QD-OLED displays when there is light in the room, didn’t you?

You have to be able to sift through marketing spin and analyze the individual numbers and characteristics for yourself to determine which TV or display is the best for you. Which may not necessarily be the winner of any annual shootout.

That’s why many end up disappointed. QD-OLED is just not good at these things we do here at all.

OLED on the whole is not universally better or the best. OLED is the best in some aspects of CRT emulation, while good miniLED is the best in many other areas where OLED struggles to compete.

So it’s up to the user or potential purchaser to acknowledge the strengths and limitations and go with the one that they think would give them the best experience for them or the one that has compromises that they are willing to live with.

You will not find a single QD-OLED display in this list:


OLEDs have two things going for them: low black levels and low response time. And people hear about that and must think they’re like CRTs because CRTs had those things, too.

But really I think the most important metric for our shaders is brightness, followed by color linearity (because how can you mimic another display if your own display is inconsistent?). Black levels are nice, but not really as important for 2D games. Contrast is more important than absolute black level, I think. Low input lag is a ‘nice to have’; a certain value is good enough, especially if you have a 120 Hz+ display.


Don’t forget the subpixels man. That’s where OLED displays fall flat on their faces and regular LCD technology shines.

But like nobody in the mainstream realm of RetroGaming seems to be aware of the existence of these things.


So, like, miniLED in a decade or so should be the most optimal since it has the best of both worlds, right?


No, that doesn’t work at all. The NES’s video signal works too differently from normal for us to be able to do that.

The first problem has to do with how the NES lacks true RGB to begin with. As I described in my previous post, decoding the NES’s video signal into RGB results in both some negative numbers and some excessively high numbers. In order to convert back to YC, you need to have those out-of-bounds RGB values intact. Those NES palettes have everything clamped between 0 and 255, which makes it impossible to get the correct YC. If you use that incorrect YC to simulate the composite signal, like with Blargg’s filters, you get less signal interference than you normally would, and the signal interference only gets reduced for colors that got clamped.
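
To make that concrete with numbers: the out-of-range RGB triple below is made up (but representative), and the matrix is the textbook FCC YIQ one rather than any particular decoder chip.

```python
import numpy as np

# Textbook FCC RGB -> YIQ matrix (not any specific decoder chip).
M = np.array([[0.299,  0.587,  0.114],
              [0.596, -0.274, -0.322],
              [0.211, -0.523,  0.312]])

# A made-up (but representative) decoded NES color that lands outside 0..1.
rgb = np.array([-0.12, 0.30, 1.08])

yc_true    = M @ rgb                     # the YIQ the signal actually carried
yc_clamped = M @ np.clip(rgb, 0.0, 1.0)  # the YIQ a clamped palette gives you

print(yc_true)     # ~ [ 0.263 -0.501  0.155]
print(yc_clamped)  # ~ [ 0.290 -0.404  0.155]  <- Y and I are both wrong now
```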

The second problem is with how the NES’s signal has a specific shape that needs to be emulated, which I also described in my last post. You already know that the NES’s signal is a square wave, but it’s not as simple as modulating C as a square and adding it to Y, since there is no Y or C (or “modulating” or “adding”) in the NES video signal. The active portion of the signal is output by switching between just 7 possible voltage values (or 14 if you count de-emphasis), with exactly 12 possible hue angles that are spaced evenly apart by 30 degrees. It’s not hard to understand why sticking to just those 7 voltage values and 12 evenly-spaced hue angles is important for getting convincing signal artifacts, as this does impact how different colors (including grays) of the NES palette are going to interfere with each other when decoding. A less obvious problem is in games that quickly cycle the colors, such as Ninja Gaiden 1 (when getting a game over) or to some extent the Game Genie title text, where real hardware makes it easy to see the chroma artifacts moving at a consistent speed as the colors are rotated, which happens because the 12 hue angles are spaced perfectly 30 degrees apart. I don’t see a good way to recreate these artifacts convincingly without the raw palette.

You don’t need a lookup table for this. GTU-famicom does use a LUT for speed, but this isn’t necessary at all.

Unfortunately, I’m fairly sure this is exactly the only information we don’t get from using the raw palette. It wouldn’t hurt to try to put in an issue or pull request for this feature, but that doesn’t exactly help to solve the bigger problem that we need some way to get more information from the cores.

Looking at them right now, I agree that they’re not that great. All five of them (mine included) are old and each have their own unique problems.

I don’t know what you mean by this “missing dot”. Following the documentation at https://www.nesdev.org/wiki/NTSC_video , the raw palette gives enough information to reconstruct the NES’s video signal in its ideal form, excluding impedance-related distortions like the so-called “row skew”, and excluding Battletoads and Battletoads + Double Dragon’s three-frame phase cycle. From there, we can get convincing graphics by digitally decoding that signal and applying a simple color correction for row-skew. The next step, which isn’t in any shaders to my knowledge, would be to make solid enough approximations of the filtering circuits found in the console and in some known CRTs, instead of using these simple symmetrical FIR filters that are based on standards and eyeballing.

I agree. With the current system, there are so many different workarounds that we’re having to do, when it could all be solved by getting some basic metadata. Even just knowing what core or console is being played would help.

My old NES raw palette shader, which I was just trying a few minutes ago, absolutely reeks of these workarounds, even just from looking at the preset’s file name, which states the specific emulator that it’s compatible with. At the time, only Mesen output the de-emphasis portion of the raw palette correctly, while FCEUmm had a glitch with its raw palette. Then, in that long list of settings, there’s a manual switch for the Battletoads games, along with a bunch of settings meant only for other consoles, which control things like how to “detect” different screen resolutions and cropped overscan (which can only be guessed, not known decisively), plus a switch to convert Genesis Plus GX’s colors into BlastEm’s colors. What a mess. All of this information should have been provided by the core instead. The punch line is that I did create an issue in FCEUmm regarding its raw palette: when the raw palette finally got fixed, it was done in a way that broke some (if not all) custom palettes for that emulator.

Edit:

Something else that’s related is the complex directory structures and long lists of .slangp files that we have in large packs like with sonkun, CyberLab, Mega Bezel, etc. (Not the fault of the authors, but of the system.) It’s starting to look like the entire shader system needs yet another overhaul, but is it really worth the time and effort?


It’s hard to say where things will be in a decade. I’m hoping that LG Display is going to retire the white subpixel as they improve efficiency with RGB Tandem OLED and reemploy MLA, despite the cost issues of implementing it.

That might be the ultimate for CRT Emulation if it materializes in a few years.

Right now folks are sleeping on the TCL QM851G, QM9K and QM8K for CRT Shaders. I have a TCL QM751G and it totally rocks with CRT Shaders, but all of those I’ve listed there have way more brightness and dimming zones, and supposedly better black levels and contrast. Not to mention the QM8K and QM9K have higher-precision backlight technology, wide-viewing-angle tech, faster backlight response and an R-G-B subpixel layout!

Then there are the Bravia 9 and 7 which seem to do more with less zones and peak brightness in the home cinema, sports and TV show sphere.

So things are already at a really nice point technology wise.

Next year we’ll see what happens when RGB miniLED takes the stage.

The worst aspects of current miniLED technology for me, when it comes to CRT emulation, are the blooming that comes with off-angle viewing and dark-room performance, and the generally poor off-axis viewing that results in colour saturation and gamma shifts.

OLED is amazing for CRT Emulation in a dark room, once you can live with the fact that it can’t handle all the RGB Masks and TVLs well all the way down to the subpixel level.

Scanline gaps do cause uneven wear over time though.

The missing dot is from the scanline with one less PPU dot every other frame. But you can’t actually know which frame is odd or even without guessing, right?

Let me put it this way: the NES video output can be represented as a 4096x240 monochrome image, right? If we take that and know the relative time t for each subpixel, we should be able to treat it as the composite signal directly and determine the right phase to demodulate at each subpixel. The raw palette gives us the information needed to reconstruct the monochrome image, but doesn’t give us the info to get the time at that subpixel. We still need to guesstimate that, right? Is there any room bitwise in the raw palette to squeeze in more metadata? Like you would only need one bit somewhere to flag the current field, another to flag if the missing dot is present in the field, etc.
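
Here’s the kind of bookkeeping I mean, under that article’s timing model (341 PPU clocks per line, 8 samples per clock, 12 samples per chroma cycle). The names and the frame_start_phase input are made up:

```python
CLOCKS_PER_LINE = 341
SAMPLES_PER_CLOCK = 8
CHROMA_PERIOD = 12

def sample_phase(frame_start_phase, scanline, clock, sample):
    """Subcarrier phase (0..11) of one sample within a frame."""
    t = (scanline * CLOCKS_PER_LINE + clock) * SAMPLES_PER_CLOCK + sample
    return (frame_start_phase + t) % CHROMA_PERIOD

# frame_start_phase is exactly what the raw palette can't tell us: it
# depends on the field parity and on whether the missing dot was skipped
# at the end of the previous frame, so we have to guess it.
```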

There actually is, since the colors in the NES’s raw format are only 9 bits each, while the raw palette expands this into 24-bit color. Staying compatible with existing shader presets might be slightly tricky, but it’s doable, and it should be simple to modify those shaders anyway, since there are only five of them in total.
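
For example, something like this could work. Purely hypothetical, no emulator does this today, and I’m assuming the level really ends up stored as 0/85/170/255 in the red channel:

```python
def pack_R(level, odd_field, dot_skipped):
    R = level * 85                 # 0, 85, 170, 255, as today
    # Hypothetical: smuggle two flag bits into the low bits of R.
    return (R & 0xFC) | (odd_field << 1) | dot_skipped

def unpack_R(R):
    level = round(R / 85)          # rounding throws the flag bits away
    return level, (R >> 1) & 1, R & 1
```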

Is the raw palette an actual palette like the other palettes? Or is it a different mode entirely? Like, does the emulator actually read off the current PPU state to write out the output color value?

It is just a color palette. Shaders currently have to make up the rest of the information from thin air, like with mine which has a manual switch for a 2-frame or 3-frame phase cycle, or the other shaders which only support a 2-frame cycle.


I remain unconvinced that this is the case for at least LG panels from 2020 on, so long as the Screen Move/pixel shift mitigations remain enabled, and the refresh cycles are allowed to run every 4-6ish hours of use.

Well not everyone who has a WOLED TV has a 2020+ LG Display OLED panel eh?

I can only speak from my experience with my 2016 LG E6P which had Pixel Shift enabled and I never unplugged my TV so it was able to run all its panel refresh cycles on schedule.

The thing is once I noticed that there was an issue, Clear Panel Noise made no difference whatsoever.

Not sure if it was due to the fine pitch between the more worn and less worn areas.

Plus there’s no way to predict how every user is going to use their TV.


“NES Color Decoder” looks very similar to Composite Direct FBX. I think NES Color Decoder + Raw looks a bit better.

I came to the same conclusion: the NES needs its own composite video shader.

I’ve been doing a crude workaround where I just eyeball settings between an RGB and composite video preset (guest-advanced and guest-advanced-ntsc) until everything looks equal, and then applying the NES color decoder. Not ideal, obviously.

This translates to around +20% Saturation and +20% NTSC Saturation using guest-advanced-ntsc.


@PlainOldPants The raw palette will send its RGB values to the shaders in normalized 0…1 values. This is what we have from NESDev:

Standard Video

| Type | IRE level | Voltage (mV) |
|---|---|---|
| Peak white | 120 | |
| White | 100 | 714 |
| Colorburst H | 20 | 143 |
| Black | 0 | 0 |
| Blanking | 0 | 0 |
| Colorburst L | -20 | -143 |
| Sync | -40 | -286 |

NES Measurements

| Signal | Potential | IRE |
|---|---|---|
| SYNC | 48 mV | -37 |
| CBL | 148 mV | -23 |
| 0D | 228 mV | -12 |
| 1D | 312 mV | ≡ 0 |
| CBH | 524 mV | 30 |
| 2D | 552 mV | 34 |
| 00 | 616 mV | 43 |
| 10 | 840 mV | 74 |
| 3D | 880 mV | 80 |
| 20 | 1100 mV | 110 |
| 0Dem | 192 mV | -17 |
| 1Dem | 256 mV | -8 |
| 2Dem | 448 mV | 19 |
| 00em | 500 mV | 26 |
| 10em | 676 mV | 51 |
| 3Dem | 712 mV | 56 |
| 20em | 896 mV | 82 |

I’m changing black to 0 because we are going to assume no setup on black. I believe blargg did his own measurements, but I can’t find them. Do they line up with this chart? I’m concerned about the repeatability of this measurement and about how the IRE values were derived (looks like defining 0 IRE as 312 mV and using the standard 714 mV = 100 IRE scale, e.g. $20: (1100 - 312) / 7.14 ≈ 110 IRE. We know 0D must be less than 0 IRE because of how it can be interpreted as sync on some TVs).

Let’s simplify this to only the grayscale palette values and ignore the IRE:

| Signal | Potential |
|---|---|
| 0D | 228 mV |
| 00 | 616 mV |
| 1D | 312 mV |
| 10 | 840 mV |
| 2D | 552 mV |
| 20 | 1100 mV |
| 3D | 880 mV |
| 30* | 1100 mV |

*Not measured, assuming same as $20.

R value from the raw palette represents a pair of voltages indexed from 0 to 3:

| R value (normalized) | Potential low | Potential high | Vpp |
|---|---|---|---|
| 0 | 228 mV | 616 mV | 388 mV |
| 1/3 | 312 mV | 840 mV | 528 mV |
| 2/3 | 552 mV | 1100 mV | 548 mV |
| 1 | 880 mV | 1100 mV | 220 mV |

However, when there’s emphasis, it will modify this only when the emphasis attenuator is active:

| R value (normalized) | Potential low | Potential high | Vpp |
|---|---|---|---|
| 0 | 192 mV | 500 mV | 308 mV |
| 1/3 | 256 mV | 676 mV | 420 mV |
| 2/3 | 448 mV | 896 mV | 448 mV |
| 1 | 712 mV | 896 mV | 184 mV |

We can turn that into an array:

[[0.228, 0.616],
 [0.312, 0.840],
 [0.552, 1.100],
 [0.880, 1.100],
 [0.192, 0.500],
 [0.256, 0.676],
 [0.448, 0.896],
 [0.712, 0.896]]

x is the low-level PPU clock output position, which we calculate ourselves based on pixel position, current field, and whether we’re playing Battletoads. In the active video portion there are 256 pixels corresponding to 2048 clock cycles. We can calculate Y:

1. Convert the R value into an appropriate integer (R-index), 0 to 3.

2. Check which attenuator bits (B) are set AND check if the attenuator color cycle is active for this x based on which bits are set. If it is, add 4 to R-index.

3. Check if the current cycle for the given hue (G) gives us voltage high or voltage low.

Y(x) = Array[R-index + 4 * Emphasis(B, x)][high or low from VHL(G, x)]

Emphasis(B, x) is a function that returns 0 or 1; VHL(G, x) is a function that returns 0 or 1.
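
Sketched in Python, here’s how I picture those steps. in_color_phase is the usual square-wave test from the nesdev article, and the emphasis phase anchors (0, 4, 8) are my assumption from that same article’s reference code:

```python
VOLTS = [[0.228, 0.616],  # level 0: [low, high]
         [0.312, 0.840],  # level 1
         [0.552, 1.100],  # level 2
         [0.880, 1.100],  # level 3
         [0.192, 0.500],  # level 0, attenuated
         [0.256, 0.676],  # level 1, attenuated
         [0.448, 0.896],  # level 2, attenuated
         [0.712, 0.896]]  # level 3, attenuated

def in_color_phase(hue, x):
    # A hue's square wave is high for 6 of every 12 samples.
    return (hue + x) % 12 < 6

def Emphasis(B, x):
    # 1 if any set attenuator bit's color cycle covers sample x, else 0.
    return int(any(B & (1 << i) and in_color_phase(4 * i, x)
                   for i in range(3)))

def VHL(G, x):
    # 1 while hue G's wave is high, 0 while it's low.
    # Hues 0 and 13 are the constant grays described earlier.
    if G == 0:
        return 1
    if G >= 13:
        return 0
    return int(in_color_phase(G, x))

def Y(R_index, G, B, x):
    return VOLTS[R_index + 4 * Emphasis(B, x)][VHL(G, x)]
```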

Do I have that right? If so, I can figure out how to handle the G and B values later, and then finally scale the voltage levels to the appropriate (unenforced) range, 0 to 1 corresponding to 0 to 100 IRE.


It may look good at first glance, but it’s actually the worst one. In my previous post, I didn’t explain this because I was trying to stay concise and on-topic.

For the heck of it, I’ll go through all five of those NES raw palette shader options and lay out what’s wrong with each one. Information like this isn’t easy to find on the internet. This table is a bit of a rush job, done largely from memory.

| | cgwg-famicom-geom | gtu-famicom | ntsc-nes (or nes-color-decoder) | pal-r57shell-raw | patchy-mesen-raw-palette |
|---|---|---|---|---|---|
| Performance | :white_check_mark: Fastest, highly optimized. | :white_check_mark: Fast, simple. | :white_check_mark: Fast, simple. | Idk. | :x: Crap. Slow, poor-quality code. |
| Actually does the NES signal | :white_check_mark: Implemented. | :white_check_mark: Implemented. | :x: Maister SNES signal. All color filtering is done before encoding, which is wrong. | :warning: The only PAL option available. Horribly wrong colors. | :white_check_mark: Implemented. |
| Battletoads 3-frame phase cycle | :x: Unsupported. | :x: Unsupported. | :x: Unsupported. | Not applicable. | :white_check_mark: Supported via manual toggle in settings. |
| Frequency filters | :white_check_mark: Good “windowed sinc” FIR filters, but chroma is too sharp. | :warning: Just a raised-cosine integral lowpass filter. Luma still contains the subcarrier, and chroma is too sharp. | :white_check_mark: Precomputed in MATLAB. | Idk. | :x: Unfinished “windowed sinc” FIR filters. Settings are just guessed by eye, looking at a CRT and video capture. Technically can fix with settings, but who has the time? |
| Comb filter | :warning: Adaptive with notch. Not a good choice for NES. | :x: Wrong! Mid-line comb filter. | :white_check_mark: Unsupported, trivially correct. | It does something PAL-specific, but idk. | :white_check_mark: Off by default; trivially correct. |
| Row skew | :x: Unsupported. | :x: Unsupported. | :x: Unsupported. | Not applicable. | :warning: Only supported for colors without de-emphasis. |
| NTSC color | :x: Unsupported. | :x: Unsupported. | :warning: CXA2025AS US axes, but wrong whitepoint, no Sony primaries, and wrong default tint/color settings. Defeats the purpose. | Not applicable. | :warning: Various chips’ US and JP axes, but wrong whitepoint by default, and wrong default tint/color settings. Defeats the purpose. Fixable with settings, but who has the time? |
| Gamma/EOTF | Idk. | :white_check_mark: Unsupported, but appendable. | :x: Totally wrong! | Idk. | :x: Wrong, but can technically fix with settings, but who has the time? |
| Over-brightened color clipping | :x: Clamps at 255. | :x: Clamps at 255. | :warning: Can clamp, darken, or desaturate. “Desaturate” is in R’G’B’ space, not great. | Idk, but the whole shader (at least the .slang version, idk about GLSL) looks absolutely disgusting. | :warning: Can darken the entire screen uniformly, and then clamp at 255. |
| Phosphor gamut | :x: Not included, not appendable. | :warning: Not included, appendable. | :warning: Not included, appendable. | It does something? | :white_check_mark: By ChthonVII’s program, gamutthingy: LUT for Sony P22 (and others) with Jzazbz-based gamut compression. |

All of them are wrong. As of today, I recommend using my latest NTSC shader release, with p68k-fast-mp-nes-unfinished, which does all the above steps well, except that row skew is still only supported for colors without de-emphasis (so games like Darkwing Duck (I think) and The Immortal will look off), and the BT.1886 EOTF is unused by default in favor of a straight 2.2 power law. (Edit: Now that I think of it, my shader here doesn’t support the Battletoads phase properly anymore, due to a bug that got introduced when adding interlacing support.)

While I don’t know where blargg’s measurements are, I do know there was at least one other post on the nesdev forums that showed different results. Those different results can be found in gtu-famicom. Notice how gtu-famicom only attenuates by multiplying by a constant factor, instead of switching to another pre-defined value. So, I am also concerned about repeatability. For now, this is the best I think we can do.

Just one thing: If hue is 14 or 15, you set Y(x) to the 1D voltage constantly, regardless of level or emphasis.
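
In terms of the sketch above, that’s just an early return at the top:

```python
def Y(R_index, G, B, x):
    if G >= 14:
        return VOLTS[1][0]  # constant 1D voltage (312 mV); level and emphasis ignored
    return VOLTS[R_index + 4 * Emphasis(B, x)][VHL(G, x)]
```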

1 Like