That’s based on testing and opinions of others who have no clue what we’re doing here with CRT Shaders, which have different performance requirements compared to general usage.
If WOLED were at its peak, why are the LG G5 and Panasonic Z95B two of the best and most accurate TVs money can buy?
Why are the brightest OLED TVs ever tested? Why are they brighter than last year’s models? Why do they have wider colour gamuts than last year’s models?
Why is there a roadmap for continued development of the technology?
Out of the two main competing OLED technologies, why are they the only one that has proper black levels in a bright room? You did know about the blacks getting raised and turning brown on QD-OLED displays when there is light in the room, didn’t you?
You have to be able to sift through marketing spin and analyze the individual numbers and characteristics for yourself to determine which TV or display is best for you, which may not necessarily be the winner of any annual shootout.
That’s why many end up disappointed. QD-OLED is not good for or good at these things we do here at all.
OLED on the whole is not universally better or the best. OLED is the best in some aspects of CRT Emulation, while good miniLED is the best in many other areas where OLED struggles to compete.
So it’s up to the user or potential purchaser to acknowledge the strengths and limitations and go with the one they think will give them the best experience, or the one whose compromises they’re willing to live with.
You will not find a single QD-OLED display in this list:
OLEDs have two things going for them: low black levels and low response time. And people hear about that and must think they’re like CRTs because CRTs had those things, too.
But really I think the most important metric for our shaders is brightness, followed by color linearity (because how can you mimic another display if your own display is inconsistent?). Black levels are nice, but not really as important for 2D games. Contrast is more important than absolute black level, I think. Low input lag is a ‘nice to have’; below a certain value it’s good enough, especially if you have a 120 Hz+ display.
No, that doesn’t work at all. The NES’s video signal works too differently from normal for us to be able to do that.
The first problem has to do with how the NES lacks true RGB to begin with. As I described in my previous post, decoding the NES’s video signal into RGB results in both some negative numbers and some excessively high numbers. In order to convert back to YC, you need to have those out-of-bounds RGB values intact. Those NES palettes have everything clamped between 0 and 255, which makes it impossible to get the correct YC. If you use that incorrect YC to simulate the composite signal, like with Blargg’s filters, you get less signal interference than you normally would, and the signal interference only gets reduced for colors that got clamped.
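As a toy illustration of that first problem (made-up numbers, standard luma weights only), a sketch of what clamping throws away:

```glsl
// Toy example with made-up numbers: a decoded NES color that lands outside
// 0..1 versus the same color after clamping, as a 0-255 palette would store it.
float luma(vec3 rgb) { return dot(vec3(0.299, 0.587, 0.114), rgb); }

void example()
{
    vec3 rgb_raw     = vec3(1.20, 0.00, -0.10);   // out-of-range decode
    vec3 rgb_clamped = clamp(rgb_raw, 0.0, 1.0);  // what the palette keeps
    float y_raw      = luma(rgb_raw);             // ~0.347
    float y_clamped  = luma(rgb_clamped);         // 0.299
    // Re-encoding the clamped color gives a different Y (and a smaller chroma
    // excursion), so the simulated composite interferes less than real hardware.
}
```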
The second problem is with how the NES’s signal has a specific shape that needs to be emulated, which I also described in my last post. You already know that the NES’s signal is a square wave, but it’s not as simple as modulating C as a square and adding it to Y, since there is no Y or C (or “modulating” or “adding”) in the NES video signal. The active portion of the signal is output by switching between just 7 possible voltage values (or 14 if you count de-emphasis), with exactly 12 possible hue angles that are spaced evenly apart by 30 degrees. It’s not hard to understand why sticking to just those 7 voltage values and 12 evenly-spaced hue angles is important for getting convincing signal artifacts, as this does impact how different colors (including grays) of the NES palette are going to interfere with each other when decoding. A less obvious problem is in games that quickly cycle the colors, such as Ninja Gaiden 1 (when getting a game over) or to some extent the Game Genie title text, where real hardware makes it easy to see the chroma artifacts moving at a consistent speed as the colors are rotated, which happens because the 12 hue angles are spaced perfectly 30 degrees apart. I don’t see a good way to recreate these artifacts convincingly without the raw palette.
You don’t need a lookup table for this. GTU-famicom does use a LUT for speed, but this isn’t necessary at all.
Unfortunately, I’m fairly sure this is exactly the one piece of information we don’t get from using the raw palette. It wouldn’t hurt to try to put in an issue or pull request for this feature, but that doesn’t exactly help to solve the bigger problem that we need some way to get more information from the cores.
Looking at them right now, I agree that they’re not that great. All five of them (mine included) are old and each have their own unique problems.
I don’t know what you mean by this “missing dot”. Following the documentation at https://www.nesdev.org/wiki/NTSC_video , the raw palette gives enough information to reconstruct the NES’s video signal in its ideal form, excluding impedance-related distortions like the so-called “row skew”, and excluding Battletoads and Battletoads + Double Dragon’s three-frame phase cycle. From there, we can get convincing graphics by digitally decoding that signal and applying a simple color correction for row-skew. The next step, which isn’t in any shaders to my knowledge, would be to make solid enough approximations of the filtering circuits found in the console and in some known CRTs, instead of using these simple symmetrical FIR filters that are based on standards and eyeballing.
I agree. With the current system, there are so many different workarounds that we’re having to do, when it could all be solved by getting some basic metadata. Even just knowing what core or console is being played would help.
My old NES raw palette shader, which I was just trying a few minutes ago, absolutely reeks of these workarounds, even just from looking at the preset’s file name which states the specific emulator that it’s compatible with. At the time, only Mesen output the de-emphasis portion of the Raw palette correctly, while FCEUmm had a glitch with its raw palette. Then, in that long list of settings, there’s a manual switch for the Battletoads games, along with a bunch of settings meant for only other consoles, which control things like how to “detect” different screen resolutions and cropped overscan (which can only be guessed, not known decisively), plus a switch to convert Genesis Plus GX’s colors into BlastEm’s colors. What a mess. All of this information should have been provided by the core instead. The punch line is when I did create an issue in FCEUmm regarding its raw palette: When the raw palette did finally get fixed, it was done in a way that broke some (if not all) custom palettes for that emulator.
Edit:
Something else that’s related is the complex directory structures and long lists of .slangp files that we have in large packs like with sonkun, CyberLab, Mega Bezel, etc. (Not the fault of the authors, but of the system.) It’s starting to look like the entire shader system needs yet another overhaul, but is it really worth the time and effort?
It’s hard to say where things will be in a decade. I’m hoping that LG Display is going to retire the white subpixel as they improve efficiency with RGB Tandem OLED and reemploy MLA, despite the cost issues of implementing it.
That might be the ultimate for CRT Emulation if it materializes in a few years.
Right now folks are sleeping on the TCL QM851G, QM9K and QM8K for CRT Shaders. I have a TCL QM751G and it totally rocks with CRT Shaders, but all of those I’ve listed there have way more brightness and dimming zones and supposedly better black levels and contrast. Not to mention the QM8K and QM9K have higher-precision backlight technology, wide-viewing-angle tech, faster backlight response and an R-G-B subpixel layout!
Then there are the Bravia 9 and 7, which seem to do more with fewer zones and less peak brightness in the home cinema, sports and TV show sphere.
So things are already at a really nice point technology wise.
Next year we’ll see what happens when RGB miniLED takes the stage.
For me, the worst aspects of current miniLED technology for CRT emulation are the blooming that comes with off-angle viewing and dark-room performance, and the generally poor off-axis viewing that results in colour saturation and gamma shifts.
OLED is amazing for CRT Emulation in a dark room, once you can live with the fact that it can’t handle all the RGB Masks and TVLs well all the way down to the subpixel level.
Scanline gaps do cause uneven wear over time though.
The missing dot is from the scanline with one less PPU dot every other frame. But you can’t actually know which frame is odd or even without guessing, right?
Let me put it this way: the NES video output can be represented as a 4096x240 monochrome image, right? If we take that and know the relative time for each subpixel t, we should be able to treat it as the composite signal directly and determine the right phase to demodulate at each subpixel. The raw palette gives us the information needed to reconstruct the monochrome image, but doesn’t give us the info to get time at that subpixel. We still need to guesstimate that, right? Is there any room bitwise in the raw palette to squeeze in more metadata? Like you would only need one bit somewhere to flag the current field, another to flag if the missing dot is present in the field, etc.
There actually is, since the colors in the NES’s raw format are only 9 bits each, while the raw palette expands this into 24-bit color. Staying compatible with existing shader presets might be slightly tricky, but it’s doable, and it should also be simple to modify those shaders, since there are only five presets total.
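Just to illustrate the idea (a purely hypothetical layout, not something any emulator writes today): if a core stuck the current field into a spare low bit of, say, the blue channel, a shader could read it back with something like this:

```glsl
// Purely hypothetical: assumes the core packs the current field into the
// least-significant bit of the blue channel of each raw-palette color.
// No emulator actually does this today.
bool field_flag(vec3 raw_rgb)
{
    int b = int(raw_rgb.b * 255.0 + 0.5);  // recover the 0-255 byte the core wrote
    return (b & 1) == 1;                   // low bit = odd/even field
}
```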
Is the raw palette an actual palette like the other palettes? Or is it a different mode entirely? Like, does the emulator actually read off the current PPU state to write out the output color value?
It is just a color palette. Shaders currently have to make up the rest of the information from thin air, like with mine which has a manual switch for a 2-frame or 3-frame phase cycle, or the other shaders which only support a 2-frame cycle.
I remain unconvinced that this is the case for at least LG panels from 2020 on, so long as the Screen Move/pixel shift mitigations remain enabled, and the refresh cycles are allowed to run every 4-6ish hours of use.
Well not everyone who has a WOLED TV has a 2020+ LG Display OLED panel eh?
I can only speak from my experience with my 2016 LG E6P which had Pixel Shift enabled and I never unplugged my TV so it was able to run all its panel refresh cycles on schedule.
The thing is once I noticed that there was an issue, Clear Panel Noise made no difference whatsoever.
Not sure if it was due to the fine pitch between the more worn and less worn areas.
Plus there’s no way to predict how every user is going to use their TV.
“NES Color Decoder” looks very similar to Composite Direct FBX. I think NES Color Decoder + Raw looks a bit better.
I came to the same conclusion: the NES needs its own composite video shader.
I’ve been doing a crude workaround where I just eyeball settings between an RGB and composite video preset (guest-advanced and guest-advanced-ntsc) until everything looks equal, and then applying the NES color decoder. Not ideal, obviously.
This translates to around +20% Saturation and +20% NTSC Saturation using guest-advanced-ntsc.
@PlainOldPants The raw palette will send its RGB values to the shaders in normalized 0…1 values. This is what we have from NESDev:
**Standard Video**

| Type | IRE level | Voltage (mV) |
|---|---|---|
| Peak white | 120 | |
| White | 100 | 714 |
| Colorburst H | 20 | 143 |
| Black | 0 | 0 |
| Blanking | 0 | 0 |
| Colorburst L | -20 | -143 |
| Sync | -40 | -286 |
**NES Measurements**

| Signal | Potential | IRE |
|---|---|---|
| SYNC | 48 mV | -37 IRE |
| CBL | 148 mV | -23 IRE |
| 0D | 228 mV | -12 IRE |
| 1D | 312 mV | ≡ 0 IRE |
| CBH | 524 mV | 30 IRE |
| 2D | 552 mV | 34 IRE |
| 00 | 616 mV | 43 IRE |
| 10 | 840 mV | 74 IRE |
| 3D | 880 mV | 80 IRE |
| 20 | 1100 mV | 110 IRE |
| 0Dem | 192 mV | -17 IRE |
| 1Dem | 256 mV | -8 IRE |
| 2Dem | 448 mV | 19 IRE |
| 00em | 500 mV | 26 IRE |
| 10em | 676 mV | 51 IRE |
| 3Dem | 712 mV | 56 IRE |
| 20em | 896 mV | 82 IRE |
I’m changing black to 0 because we are going to assume no setup on black. I believe blargg did his own measurements, but I can’t find them. Do they line up with this chart? I’m concerned about the repeatability of this measurement and about how the IRE values were derived (it looks like 0 IRE was defined as the 1D level of 312 mV, on the standard scale of 714 mV per 100 IRE; for example, 2D works out to (552 - 312) / 7.14 ≈ 34 IRE, matching the chart. We know 0D must be less than 0 IRE because of how it can be interpreted as sync on some TVs).
Let’s simplify this to only the grayscale palette values and ignore the IRE:
| Signal | Potential |
|---|---|
| 0D | 228 mV |
| 00 | 616 mV |
| 1D | 312 mV |
| 10 | 840 mV |
| 2D | 552 mV |
| 20 | 1100 mV |
| 3D | 880 mV |
| 30* | 1100 mV |

*Not measured, assuming same as $20.
R value from the raw palette represents a pair of voltages indexed from 0 to 3:
| R-value (Normalized) | Potential Low | Potential High | Vpp |
|---|---|---|---|
| 0 | 228 mV | 616 mV | 388 mV |
| 1/3 | 312 mV | 840 mV | 528 mV |
| 2/3 | 552 mV | 1100 mV | 548 mV |
| 1 | 880 mV | 1100 mV | 220 mV |
However, when there’s emphasis, it will modify this only when the emphasis attenuator is active:
x is the low-level PPU clock output position which we calculate ourselves based on pixel position, current field, and if we’re playing Battletoads. In the active video portion there are 256 pixels corresponding to 2048 clock cycles. We can calculate Y:
1. Convert R-value into an appropriate integer, 0 to 3 (R-index).
2. Check which attenuator bits (B) are set AND check if the attenuator color cycle is active for this x based on which bits are set. If it is, add 4 to R-index.
3. Check if the current cycle for the given hue (G) gives us voltage high or voltage low.

Y(x) = Array[R-index + 4 * Emphasis(B, x)][high or low from VHL(G, x)]

Emphasis(B, x) is a function that returns 0 or 1, and VHL(G, x) is a function that returns 0 or 1.
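In rough GLSL terms, here is how I picture it (a sketch only, not from any existing shader; voltages in mV straight from the tables above, with the two $30 entries being assumptions since they weren’t measured):

```glsl
// Rows 0-3 are the plain levels, rows 4-7 the attenuated ("em") levels;
// .x is the low voltage of the wave, .y the high voltage.
const vec2 nes_level_mv[8] = vec2[8](
    vec2(228.0,  616.0),  // level 0: $0D / $00
    vec2(312.0,  840.0),  // level 1: $1D / $10
    vec2(552.0, 1100.0),  // level 2: $2D / $20
    vec2(880.0, 1100.0),  // level 3: $3D / $30 (assumed = $20)
    vec2(192.0,  500.0),  // level 0 attenuated: $0Dem / $00em
    vec2(256.0,  676.0),  // level 1 attenuated: $1Dem / $10em
    vec2(448.0,  896.0),  // level 2 attenuated: $2Dem / $20em
    vec2(712.0,  896.0)   // level 3 attenuated: $3Dem / high assumed = $20em
);

// VHL(G, x): is the square wave for hue G high at clock position x?
// Hue 0 stays high for the whole cycle, hue 13 stays low;
// hues 1-12 are a square wave stepped in 30-degree increments of the 12-sample cycle.
bool vhl(int hue, int x)
{
    if (hue == 0)  return true;
    if (hue >= 13) return false;
    return ((hue + x) % 12) < 6;
}

// Y(x) = Array[R-index + 4 * Emphasis(B, x)][VHL(G, x)], in mV.
// emphasis_active is the Emphasis(B, x) result from step 2,
// computed separately from the attenuator bits and x.
float nes_y_mv(int r_index, bool emphasis_active, int hue, int x)
{
    vec2 pair = nes_level_mv[r_index + (emphasis_active ? 4 : 0)];
    return vhl(hue, x) ? pair.y : pair.x;
}
```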
Do I have that right? If so, I can figure out how to handle G and B values later, and then finally scaling the voltage levels to the appropriate (unenforced) range, 0 to 1 corresponding to 0 to 100 IRE.
It may look good at first glance, but it’s actually the worst one. In my previous post, I didn’t explain this because I was trying to stay concise and on-topic.
For the heck of it, I’ll go through all five of those NES raw palette shader options and go over what’s wrong with each one. Information like this isn’t easy enough to find on the internet. This table is a bit of a rush job, being done largely from memory.
| | cgwg-famicom-geom | gtu-famicom | ntsc-nes (or nes-color-decoder) | pal-r57shell-raw | patchy-mesen-raw-palette |
|---|---|---|---|---|---|
| Performance | Fastest, highly optimized. | Fast, simple. | Fast, simple. | Idk. | Crap. Slow, poor quality code. |
| Actually does the NES signal | Implemented. | Implemented. | Maister SNES signal. All color filtering is done before encoding, which is wrong. | The only PAL option available. Horribly wrong colors. | Implemented. |
| Battletoads 3-frame phase cycle | Unsupported | Unsupported | Unsupported | Not applicable | Supported via manual toggle in settings |
| Frequency Filters | Good “windowed sinc” FIR filters, but chroma is too sharp. | Just a raised-cosine integral lowpass filter. Luma still contains the subcarrier, and chroma is too sharp. | Precomputed in MATLAB. | Idk. | Unfinished “windowed sinc” FIR filters. Settings are just guessed by eye, looking at a CRT and video capture. Technically can fix with settings, but who has the time? |
| Comb Filter | Adaptive with notch. Not a good choice for NES. | Wrong! Mid-line comb filter. | Unsupported; trivially correct | It does something PAL-specific, but idk. | Off by default; trivially correct. |
| Row skew | Unsupported | Unsupported | Unsupported | Not applicable | Only supported for colors without de-emphasis. |
| NTSC color | Unsupported | Unsupported | CXA2025AS US axes, but wrong whitepoint, no Sony primaries, and wrong default tint/color settings. Defeats the purpose. | Not applicable | Various chips’ US and JP axes, but wrong whitepoint by default, and wrong default tint/color settings. Defeats the purpose. Fixable with settings, but who has the time? |
| Gamma/EOTF | Idk. | Unsupported, but appendable | Totally wrong! | Idk. | Wrong, but can technically fix with settings, but who has the time? |
| Over-brightened color clipping | Clamps at 255 | Clamps at 255 | Can clamp, darken, or desaturate. “Desaturate” is in R’G’B’ space, not great. | Idk, but the whole shader (at least the .slang version, idk about GLSL) looks absolutely disgusting. | Can darken the entire screen uniformly, and then clamp at 255. |
| Phosphor gamut | Not included, not appendable | Not included, appendable | Not included, appendable | It does something? | By ChthonVII’s program, gamutthingy: LUT for Sony P22 (and others) with Jzazbz-based gamut compression. |
All of them are wrong. As of today, I recommend using my latest NTSC shader release, with p68k-fast-mp-nes-unfinished, which does all the above steps well, except that row skew is still only supported for colors without de-emphasis (so games like Darkwing Duck (I think) and The Immortal will look off) and the BT.1886 EOTF is unused by default in favor of a straight 2.2 power law. (Edit: Now that I think of it, my shader here doesn’t support the Battletoads phase properly anymore, due to a bug that got introduced when adding interlacing support.)
While I don’t know where blargg’s measurements are, I do know there was at least one other post on the nesdev forums that showed different results. Those different results can be found in gtu-famicom. Notice how gtu-famicom only attenuates by multiplying by a constant factor, instead of switching to another pre-defined value. So, I am also concerned about repeatability. For now, this is the best I think we can do.
Just one thing: If hue is 14 or 15, you set Y(x) to the 1D voltage constantly, regardless of level or emphasis.
I’d really like to, but I’m pretty much locked into an HDR setup, now.
That, and there are some (gamma?) difficulties combining it with guest-advanced - I’m not sure what the fix is for that. Maybe a different CRT shader would work better.
@Nesguy have you tried using float_framebuffer in your presets? That may help prevent inaccurate values when going from one shader to the next. If a shader outputs in linear space or outputs negative values or values past one, you must use a float_framebuffer.
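For reference, this is roughly what that looks like in a .slangp (the pass paths here are just placeholders); float_framebuffer0 switches pass 0’s output to a floating-point buffer so negative and greater-than-1.0 values survive into the next pass:

```
shaders = "2"

shader0 = "shaders/first-pass.slang"
float_framebuffer0 = "true"
scale_type0 = "source"

shader1 = "shaders/second-pass.slang"
```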
@PlainOldPants Is row skew the measured phase distortion that increases at brighter values? I believe implementing this would require an NES specific composite demodulation shader in addition to the encoder to modify the phase applied to the mixing functions. A phase error in the output can be represented as an error in demodulation.
Also, what do you mean by gamma/EOTF in that chart?
That is true. My implementation doesn’t correctly do row skew, but instead, it has a correction only in the decoder that detects certain Y values and fakes the hue rotation based on that. That is why mine does not support row skew with de-emphasis. The technically correct way would be to put row skew when encoding.
I screwed up that entire row. It was partly about whether a power law or more complex EOTF (either way) is implemented correctly, but it also became about whether you’re able to adjust gamma, or whether the shader conflates the linear luminances with gamma corrected voltages (as in the colorimetry shader or Drag’s palette generator). I’ll need to fix that row of that table.
Why would a gamma function need to be used at all in an NTSC shader? The output voltages from the device are already in gamma space. On the decoding side, the YPbPr matrix should output gamma-corrected RGB values, either for direct display or for further processing by another shader.
Do the decoding chips on TVs involve linearization and gamma in their intermediary processing?
No, it’s not something that the decoding chips are doing. It’s done separately after simulating the signal, for simulating CRT colors. Some of these video signal shaders, including mine, include these color processing steps, in case you want to play without appending a CRT shader. Mine also let you turn off all this color processing so that a CRT shader can do these steps in its own way. Some of the shaders fail at this, such as ntsc-nes which does the entire NES palette and gamma correction before simulating the signal.