I don’t see anything wrong with the picture but I’m going on very old and unreliable memories.
What does the “uncursed” version look like for comparison?
Standard 1953 NTSC colorimetry: (This is closer to the CXA2025AS palette’s representation, unless you’re on Nestopia, which has its own CXA2025AS palettes)
Uncorrected: (This is more like FirebrandX’s palettes and Wavebeam)
I assume the second one is the intended result, since this game was developed in Europe by Rare.
Notice how, even with the brown ground, there are strips of green in it.
This wasn’t a thing at all.
Normal kids didn’t adjust their sets or tint for retro gaming. We just played our games. There was no reference to tell us we weren’t seeing the right colours.
I know I used to adjust brightness to get better black levels, though, and I might have adjusted the colour to make things nice and vivid, but I wouldn’t have known to adjust tint to get Battletoads colours corrected, and we definitely weren’t adjusting tint between gaming and regular TV watching.
Back then, we used the NTSC colour bars shown when the TV station signed off, eyeballing things to get colours to look right.
A good reference might be a Battletoads review in Nintendo Power, EGM, or GamePro.
I thought this might be the case. This makes me glad that I’ve worked so hard over these months to figure out the decoder’s intended settings based on the broadcast standards. The default settings have the tint and color set for broadcast, and the brightness set so that the game comes out at approximately 2.2 gamma.
The problem is with 16-bit onward, where some games become just obviously wrong, even if you have no reference for how it should look. That’s why my posts about “white trash” have those color test charts: You can’t tell if it’s calibrated/matched, but there are certain colors where you have a general idea of what they should look like and can tell if they’re messed up badly beyond a certain threshold. (I should also clarify, that post has the tint and color settings set to make the test chart results become as acceptably close as possible.)
The main question that’s unanswered for me is how you dealt with bright red at the time. Perhaps it’s not a problem for NES, but definitely elsewhere. On this old PR, I included a game that takes advantage of MegaDrive rainbow banding while also having oversaturated reds: https://github.com/libretro/slang-shaders/pull/627 Hyllian posted just a day ago that they don’t remember having red saturated that much, and the preset in that post has it down to 80%. Did you guys watch TV with 80% saturation on TVs that had an on-screen display?
About the games looking obviously wrong when your TV is set for standard NTSC, I’m wanting to share screenshots of that, but for some reason, everything suddenly just looks good. I could’ve sworn I saw some games getting screwed up badly when I tried these same color filters with Chthon’s color correction (particularly when using my reverse-engineered US 9300K color settings), but now that I’m trying again with my NTSC composite video filtering added on top of that, I’m having trouble finding games that look wrong. I thought for sure Moto Racer 2 looked absolutely horrible with standard NTSC color correction, but I’m starting to think now that the game is meant to have NTSC correction.
As I’ve said before, I grew up in the 2000s (in other words, Generation Z), so CRTs were only still around for a brief part of my life. I distinctly remember Crash Bandicoot being a deep red on those CRTs, not this lighter red that we see in emulation and the PS4 remakes, which points to the CRT not having been adjusted specifically for games. Still today, I generally play PS1 games with my CRT’s default settings (which are based on standard NTSC), or sometimes with color turned down a bit. NES and Genesis absolutely benefit from lower saturation on that CRT.
Can I just say, the PS4 Crash remakes are just… gross. All those goofy cartoonish death animations suddenly become like a realistic horror film, with some Baldi’s Basics style weirdness thrown in. I wanted to remember Crash Bandicoot, not KidPix Deluxe 4 Home Edition.
Even as a kid, I always calibrated (by eye) my parents’ TVs to a neutral color position, and that worked for movies and games. If the set didn’t have a knob to adjust the tint, my only option was to desaturate all colors to some safe position.
Since we were discussing calibration, I’ve worked on this alternative multi-pass shader that more faithfully implements the NTSC video standard. The idea was to look up YouTube videos, watch them through the WindowCast core, and try my best to calibrate the tint and color dials by eye. The original video is converted into NTSC color, filtered with the standard filter characteristics before modulating, lowpassed under 4.2 MHz as if broadcast over air, demodulated with only narrowband filtering, and finally corrected back to sRGB in the same way a consumer CRT corrects NTSC color. All that’s missing right now is interlacing.
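For anyone trying to follow that pipeline, here is a stripped-down sketch of the modulate/demodulate round trip at its core, in Python rather than slang, with function names of my own. It samples at 4 samples per subcarrier cycle and uses an exact one-cycle average in place of the shader’s real filter characteristics; phase offsets, setup, and the 4.2 MHz lowpass are all omitted.

```python
import math

def modulate(y, i, q):
    """QAM-encode per-sample YIQ into a composite scanline, at 4 samples
    per subcarrier cycle (real phase offsets and setup levels ignored)."""
    out = []
    for n in range(len(y)):
        ph = 2 * math.pi * n / 4  # subcarrier phase at this sample
        out.append(y[n] + i[n] * math.sin(ph) + q[n] * math.cos(ph))
    return out

def demodulate(comp, n):
    """Recover (Y, I, Q) near sample n by product detection followed by a
    one-subcarrier-cycle average, which nulls the 2x-subcarrier products
    exactly for a constant color."""
    ysum = isum = qsum = 0.0
    for k in range(n, n + 4):
        ph = 2 * math.pi * k / 4
        ysum += comp[k]
        isum += 2 * comp[k] * math.sin(ph)
        qsum += 2 * comp[k] * math.cos(ph)
    return ysum / 4, isum / 4, qsum / 4
```

For a flat color the YIQ round-trips exactly; all the interesting artifacts come from the filtering stages this sketch leaves out.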
What I didn’t expect to come out of this was a substantial improvement in performance. What the heck just happened? https://www.mediafire.com/file/fdvi71hyi4bi3bs/p68k-fast-multipass-2025-06-12.zip/file SNES and mostly-standard 240p are supported well in the “multipass” subfolder, along with an incomplete NES implementation. If you’re looking for accuracy on NES, you should instead look inside the “presets” subfolder in this zip, and use the NES options in there, which have also been updated slightly.
I’m just very happy that this has unexpectedly happened. I’ll make a more serious, more complete post later.
P.S.: The way to disable gamma correction has changed. Simply change the CRT EOTF type from BT.1886 to the power law. Both of these are the defaults, but also make sure that the power is 2.2 and that the output sRGB EOTF is also a 2.2 power law. As I’ve said before, I suggest increasing sharpness a little, especially if using the 170m 240p preset, which will have fewer artifacts.
P.P.S.: Some comments in the source code of the multipass shader are just blatantly wrong. There’s no need for the resolution to divide 2560 evenly, and there is no automatic letterboxing or cropping. Arbitrary cropped-overscan compensation works completely differently from that: it is now an optional feature handled similarly to Maister NTSC or NTSC Adaptive, with an added feature to detect whether the resolution has been scaled by an arbitrary integer.
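The integer-scale detection mentioned here could look something like the following sketch (the function name and base-width list are mine, not the shader’s actual code): try small integer factors until dividing the reported width lands on a known base resolution.

```python
def detect_integer_scale(width, base_widths=(256, 320, 512, 640), max_scale=8):
    """Guess (base_width, scale) such that base_width * scale == width.
    Ambiguous cases (e.g. 512 = 512x1 = 256x2) resolve to the smallest
    scale; unknown widths fall through to (width, 1)."""
    for scale in range(1, max_scale + 1):
        if width % scale == 0 and width // scale in base_widths:
            return width // scale, scale
    return width, 1
```

For example, a 1280-wide input would be treated as 640 scaled by 2, while 768 would be treated as 256 scaled by 3.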
This is an update to the fast multi-pass shader, now featuring an adaptive comb filter. I’m definitely proud of what I have so far. The problem is that it only gives good results when using baseband (I) filtering on chroma.
This is not to be confused with the very sloppy, unfinished one that I posted here in the crt-beans thread a couple days ago. That one is something at least, but the one here is better. The main big improvement is that the lower 2.2-ish MHz of the signal are now completely untouched by the comb filter, so that neighboring scanlines aren’t blurred together nearly as much as before.
Download: https://www.mediafire.com/file/9owq702yarm8h5j/p68k-fast-multipass-2025-09-24.zip/file
Updated version, 2025-09-26: https://www.mediafire.com/file/9qv5fl2bhkxry4h/p68k-fast-multipass-2025-09-26.zip/file - Added a new setting, “Comb filter sharpness”, to sharpen edges more. The default setting of 1.0 does nothing. I recommend changing it to 1.5.
After hours of experimentation and trial-and-error, I’ve settled on a very simple implementation. Originally, I tried swapping in a notch filter at the exact right times, or only comb-filtering just enough to hide the color carrier, as well as trying to include some kind of controllable sharpness setting for the user, but ultimately, it looked far better to forcibly comb-filter every single line, even in cases where it would seem more mathematically sound to not comb-filter that line.
There is a known issue: It looks horrible if you use narrowband (Q) filtering on chroma.
Here are some screenshots to show how this comb filter looks. I don’t know if any NTSC shaders for RetroArch have such a clean adaptive comb filter yet. Even when I’ve messed with the one in cgwg-famicom-geom, I didn’t get this kind of result.
Original unfiltered image:
Comb filter:
Notch and lowpass filters:
The adaptive comb filter makes a big difference.
This is great! Have you successfully combined this with any CRT shaders? So far I’ve been mostly unsuccessful… I can get it to work but I’m fairly certain that it’s not looking as intended.
The reason why it’s looking wrong is probably because those CRT shaders tend to make their own changes to the color, whether on purpose or by accident. The most common purposeful change comes from gamma, and the most common accidental one comes from shortcutting the scanlines.
I’m short on time to try out all the major CRT shader presets, but just for now, I was able to get plain crt-royale to look right. To make it work, you just have to change both gammas to 2.2. I also like to set “Halation Weight”, “Diffusion Weight”, and “Bloom - Underestimate Levels” all to 0.
While you’re at it, you might feel like disabling the NTSC color simulation in my shader, or changing it between different kinds of color corrections. The setting for that is called “Video decoder/region”, and you can set it to 0 to turn off the color filter entirely.
Another option that isn’t being talked about enough is https://github.com/ChthonVII/chthons_color_correction , which does CRT simulation without compromising on color accuracy and overall screen brightness. It applies a color-corrective LUT, scanlines, an aperture grille mask, and glow, while keeping 95% brightness and not messing up the colors. For me personally, it’s too much for my computer to handle.
Speaking of gamma, I’ve been unsure about how to handle gamma lately. When I measured my RCA ColorTrak from 1989 using my X-Rite i1Display 2 colorimeter, it came out pretty close to a 2.2 power curve, even with my contrast and brightness both set to the exact settings that I like. That’s too far from BT.1886, so I’ve become hesitant about using BT.1886 from now on. Another factor could be how these 1980s CRTs have such light gray screens even when powered off, whereas later CRTs like my Panasonic from 2000 (and my Toshiba from 1995, if memory serves) tend to show a deep black instead. Perhaps the BT.1886 standard is based on later CRTs, or directly on the phosphor luminance or the electron gun’s output, while my i1Display 2 might be assuming a slightly higher black level to account for the viewing environment and/or the gray-ish screen.
I’ve seen that modern displays tend to aim for a 2.2 power law instead of the piecewise sRGB standard EOTF, and that the original sRGB proposal from 1996 says that CRTs at the time are all aiming for a 2.2 power law as well. There’s also the fact that all 3 of my CRT TVs are doing approximate corrections for 1953 NTSC primaries, and that the 1953 NTSC standard specifies a 2.2 power law. Since this is the best information I have right now, I’ve decided to stick to a 2.2 power law on everything, both for the CRT EOTF and for the sRGB OETF.
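For reference, the gap being described can be reproduced directly from the two EOTF definitions. Here is a sketch (Python, luminances in cd/m²; parameter names are mine): BT.1886 is a 2.4 power applied after a black-level-dependent offset, so with a raised black it tracks a 2.2 power law fairly closely at mid tones but lifts the near-black region, which is plausibly what a colorimeter reading a light-gray 1980s tube would react to.

```python
def bt1886(v, lw=100.0, lb=0.1):
    """BT.1886 EOTF: L = a * max(v + b, 0) ** 2.4, with a and b derived
    from the display's white (lw) and black (lb) luminances."""
    g = 2.4
    n = lw ** (1 / g) - lb ** (1 / g)
    a = n ** g
    b = lb ** (1 / g) / n
    return a * max(v + b, 0.0) ** g

def power_law(v, lw=100.0, gamma=2.2):
    """Plain power-law EOTF with a zero black level."""
    return lw * v ** gamma
```

With lb = 0, BT.1886 collapses to a pure 2.4 power; the larger the measured black level, the more it departs from any pure power curve near black.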
Thanks for your explanation- this is what I was suspecting was going on.
Looking forward to trying this!
At least the viable crt-guest-advanced shaders include some closed, mostly neutral gamma loops within the buffer accuracy used, with default settings. The most notable difference from the default RetroArch linear or nearest scaler is that horizontal filtering is done in linear space, and that the mask and scanlines, if used, tend to decrease overall image brightness by design. They can be set up for more brightness, etc. Royale has some big brightness boosts in its default preset. I think it makes a difference.
Vanilla:
Linear upscaling:
guest-hd:
About “shortcutting the scanlines”, I think it’s correct CRT behaviour, since the electron guns only received one “shot” per scanline. They weren’t done in multiple passes per gun AFAIK, regarding vertical positions within a single scanline.
Otherwise, great work, keep it up!
Is this shader written to be HDR aware? Should we be avoiding HDR with this?
My NTSC simulation is currently all for sRGB, but adding HDR support actually is a great idea that I just haven’t gotten to doing yet. That would be able to simulate a known CRT’s color directly. I might just add that today really quick.
Great work @PlainOldPants! I have some assorted feedback and questions.
It looks like most of your presets have a steep low pass filter at 4.2 MHz. Wouldn’t this only be appropriate for simulating an RF connection? Composite connections could support bandwidth above 4.2 MHz. On the other hand, I suspect that most old consoles had lower bandwidth outputs even than that.
I also wonder about the correctness of simply sampling the input and applying a discrete time FIR filter. Nearest neighbor sampling should result in some aliasing. Maybe it isn’t visible, though. To avoid aliasing and try to simulate the DAC behavior, I’ve been using a continuous time FIR filter, treating the input as a piecewise constant function, and then resampling. But maybe my approach isn’t necessary.
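For what it’s worth, the piecewise-constant approach reduces to summing overlap lengths when the kernel itself is simple. Here is a toy sketch of my own (not crt-beans’ code) using a continuous-time box kernel: each output sample integrates the kernel across every input pixel it overlaps, so there is no nearest-neighbor sampling step at all.

```python
def resample_box(pixels, out_n, kernel_width):
    """Filter a piecewise-constant 'analog' line with a continuous-time
    box kernel of the given width (in pixel units), then take out_n
    samples. Because the input is piecewise constant, the convolution
    integral reduces to overlap lengths times pixel values. Edges darken
    slightly because the kernel is normalized by its full width."""
    n = len(pixels)
    out = []
    for k in range(out_n):
        center = (k + 0.5) / out_n * n  # output position in pixel units
        lo, hi = center - kernel_width / 2, center + kernel_width / 2
        acc = 0.0
        for p in range(max(0, int(lo)), min(n, int(hi) + 1)):
            overlap = min(hi, p + 1) - max(lo, p)  # kernel ∩ pixel p
            if overlap > 0:
                acc += pixels[p] * overlap
        out.append(acc / kernel_width)
    return out
```

A windowed-sinc kernel would need its antiderivative evaluated per segment instead of a bare overlap length, but the structure is the same.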
It looks like you pack 4 samples into each vec4, so with a width of 640 there are actually 2560 samples per line. Am I reading your code correctly?
Should the primaries be corrected in the first pass? Most cores, as far as I know, just output the digital data from the console without any correction (NES can be a weird exception, obviously). I have assumed that the input should be treated as-is and any primary corrections, white balance, etc done at the end of the process.
Nicely commented code! It’s much easier to follow than some of the other code I’ve seen.
By the way, this is the comb filter that I am planning to attempt: https://patents.google.com/patent/US5663771A/en
The 4.2 MHz lowpass is broken. All the presets are skipping it by default.
The problem is that my 4.2 MHz filter implementation reduces the chroma saturation. In other words, it is not actually behaving according to the graph, as it’s causing the 3.58 MHz frequency (as generated by the encoding pass) to get decreased too much.
I had implemented the chroma and luma filters first, and it looked like they were working alright. The RF lowpass was added later, so I didn’t realize something was off until then.
I need to check this later, but part of the culprit could be this bug in my hamming window. The original function from my June 12th version looked like this:
vec4 window(vec4 offsets, float hammingSize) {
    const float a0 = 25.0 / 46.0;
    return a0 + (1 - a0) * cos(2 * pi * clamp(offsets / hammingSize, -0.5, 0.5));
}
The fixed version is this. This is found in my latest version.
vec4 window(vec4 offsets, float hammingSize) {
    const float a0 = 25.0 / 46.0;
    vec4 res;
    for(int i = 0; i < 4; i++) {
        res[i] = offsets[i] >= -0.5 && offsets[i] <= 0.5
            ? a0 + (1 - a0) * cos(2 * pi * clamp(offsets[i] / hammingSize, -0.5, 0.5))
            : 0;
    }
    return res;
}
I only remembered to paste this fixed version in the encode and decode passes, but not in the RF lowpass pass. Still, I doubt that this is the entire problem. As you said, the simple discrete time FIR filter isn’t necessarily the best choice. Until I added the RF lowpass, I had thought that I was over-sampling high enough to where this wasn’t an issue, but clearly I was not.
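To make the bug concrete, here is a direct Python port of both versions (ported names are mine). The clamp in the original means every tap outside the window still contributes the Hamming edge weight 2*a0 − 1 ≈ 0.087 instead of 0, so the filter effectively never ends.

```python
import math

A0 = 25.0 / 46.0  # Hamming coefficient, matching the shader

def window_buggy(offset, hamming_size):
    """Port of the June 12th GLSL: the clamp pins out-of-range taps to
    the window's edge value (2*A0 - 1) instead of zeroing them."""
    x = max(-0.5, min(0.5, offset / hamming_size))
    return A0 + (1 - A0) * math.cos(2 * math.pi * x)

def window_fixed(offset, hamming_size):
    """Port of the corrected GLSL: taps outside +/-0.5 get weight 0.
    (Like the shader, it tests the raw offset, not offset/hamming_size.)"""
    if -0.5 <= offset <= 0.5:
        return window_buggy(offset, hamming_size)
    return 0.0
```

That constant ~0.087 tail on every out-of-range tap adds a DC-heavy leakage term to the filter’s response, which would plausibly disturb the gain near 3.58 MHz.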
Based on your previous message to me in your crt-beans thread, and based on what you’re saying here, I can clearly see that you are more experienced with signal processing/filtering than I am.
About the 4.2 MHz lowpass only being applicable to RF, that’s true according to the standard. I need to check the schematic diagrams to make sure that consoles are properly skipping that lowpass when connected only over composite.
I believe I remember the SNES having different filtering for the final RF output and for the final composite output. That’s two different lowpass filters (or whatever kind of filter), where one is for composite and the other is for RF. Again, I need to check to make sure I’m remembering correctly. I do not own a real SNES.
My NES has excessively blurry RF and excessively sharp composite. If I connect the NES’s composite through my VCR to modulate it into RF so that I can connect it to my 1989 RCA CRT, I still see the grainy, sandy look of the NES’s overly sharp composite, although it’s not as horrible as the straight composite signal.
I also believe the NES’s RF modulator may be causing the NES’s chroma to get more saturated. It may also be causing the luma to create more chroma artifacts. I need to check this later.
As for my Genesis, I don’t know. Everyone knows that the Genesis low-passes its audio, and I suspect it used the same audio low-pass filter for both RF and composite, which might suggest that video also goes through the same low-pass for both. I need to check the schematic and see.
Speaking of filtering, I just noticed this past weekend that my 1989 RCA ColorTrak is being relatively generous with its chroma lowpass/bandpass. It looks like it is not lowpassing as low as the standard Q response, but it is definitely lower than the standard I response. I’ll need to investigate.
Long story short, the answers to most of our questions about filtering are in the schematic diagrams.
I have not gotten the capacitors replaced on my consoles. This might be affecting the video filtering. My Genesis has the well-known rainbow banding issue, and I’ve heard that you can reduce it a lot by replacing capacitors. The only CRT I have that’s been serviced is the 1989 RCA one, which still has its original whitepoint intact, but has its default brightness set based on 0 IRE and its default tint and color set to make the yellow area look acceptable without accounting for 1953 NTSC primaries. Everything else that I own may have capacitor-related problems with filtering.
That is correct. The number 2560 ensures that resolutions of 256, 320, 512, and 640 all divide each pixel into an integer number of samples. I don’t remember exactly how my code works and whether it properly handles an input size that doesn’t divide 2560 evenly. I know my previous release would either crop or add pillarboxes to ensure that the sample rate was divided evenly. As a bonus, that meant that if the user had enabled cropping overscan, or if the console (such as SNES or Genesis) supported multiple resolutions, the shader would automatically detect and compensate for that.
As far as I’m aware, never. I’m sure that consoles never ever did this.
The reason why I added that was so that I could try watching YouTube videos with the shader (either using ShaderGlass or the WindowCast core; I recommend ShaderGlass for better performance), and calibrate the NTSC brightness, tint, color, and sharpness manually by eye. This is because of how consumers could’ve done anything with their knobs, and how they may have set their TV’s settings manually instead of sticking to defaults.
The Genesis/MegaDrive is the only exception I know of. The RGB analog voltages are not linearly related to their internal digital values. Different Genesis/MegaDrive models had different voltages. According to the Genesis 240p test suite, the Genesis’s black level is about 6 IRE.
In Genesis Plus GX, you get the console’s raw digital RGB values. In BlastEm, you get the analog voltages. I assume everything except BlastEm also only has the raw digital values. https://github.com/ekeeke/Genesis-Plus-GX/issues/345#issuecomment-1675678054 “Literally all I did in BlastEm was took the voltages others had measured for each of the Mode 5 RGB levels, divided by the max measured voltage and multiplied by 255. Those values are in levels in vdp.c and get used for translating palette entries. Nothing fancy at all.”
Thank you. A lot of the detailed comments in there were just to compensate for my own crappy memory and difficulties with understanding code after it’s been written (and maybe because I’m too nerdy), but I’m happy to see that all my documentation and explanations are paying off, whether it’s in the code itself, or in my detailed posts that have hardly any views/downloads. Just beware that some comments were written before I actually wrote the code and are still unchanged.
I’ll look at this in more detail later, since I’m in a bit of a hurry. It took longer than expected for me to write this.
Edit, about 2 hours later:
I’m looking over the patent now, and this paragraph in particular on page 4 (or page 10 of the PDF) sticks out to me:
“In accordance with the present invention, an adaptive comb filter for NTSC or PAL television standards has only two horizontal scan lines of delay and effectively removes the high frequency luma. The magnitude and phase of the chroma signal from the comb filter are compared to the input signal to the comb filter which has been subject to a bandpass filtering process. Unless both the magnitude and phase of the comb filter chroma signal are equal to or less than that of the bandpass filtered signal, the comb filter chroma signal is assumed to be incorrect (“illegal’) and a different comb filter is adaptively selected to legalize the chroma output signal.”
This is similar to what my latest NTSC shader is doing for its adaptive comb filter. What makes mine different is that, instead of checking whether a comb filter gives a “legal” result before falling back to another comb filter, mine compares all the comb filters to see which one is the most “legal”, even if all three are illegal. In other words, this patent’s invention puts the different comb filters in an order of priority and always picks the highest-priority one that gives a legal result, whereas my shader assigns no priority order at all and always picks the one that has the best result.
Another difference in my shader is that I perform a 1.3 MHz lowpass on each line and assume that this lower frequency range is almost entirely correct luma information. Then, the comb filter is only applied to the higher-frequency information above that. This means that consecutive lines won’t get blurred together as badly. This patent instead claims that the entire luma signal needs to be comb filtered, because even the lowest frequencies are allowed to contain chroma.
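In outline, the selection step described above could be sketched like this (toy code of my own with a placeholder “legality” score; the shader’s actual candidates, taps, and measure differ). The subcarrier inverts phase on adjacent NTSC lines, so line-differencing cancels vertically correlated luma, and each sample keeps whichever candidate best matches a bandpassed reference:

```python
def comb_candidates(above, cur, below, n):
    """Three classic 1D comb outputs for chroma at sample n."""
    return [
        (cur[n] - above[n]) / 2,                   # comb with the line above
        (cur[n] - below[n]) / 2,                   # comb with the line below
        (cur[n] - (above[n] + below[n]) / 2) / 2,  # two-line average comb
    ]

def adaptive_comb(above, cur, below, reference):
    """Per sample, keep whichever comb output best matches a bandpassed
    reference of the current line (a stand-in 'legality' test; the real
    shader's measure is its own)."""
    return [min(comb_candidates(above, cur, below, n),
                key=lambda c: abs(c - reference[n]))
            for n in range(len(cur))]
```

With no priority order, the candidate closest to the reference wins even when every candidate is “illegal”, matching the behavior described above.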
Based on your previous message to me in your crt-beans thread, and based on what you’re saying here, I can clearly see that you are more experienced with signal processing/filtering than I am.
I’m not sure about that. Unfortunately, I took compilers instead of signal processing and now I wish I had taken both.
Until I added the RF lowpass, I had thought that I was over-sampling high enough to where this wasn’t an issue, but clearly I was not.
We can do some calculations for a simple square wave, like black/white alternating pixels. A real signal would have more complicated frequency content, but this would be the worst case scenario for high frequency content. With a 640 pixel wide input and 2560 samples per line, the 5th harmonic would be the first to be reflected as aliasing, and it would be at about -14dB. With a 320 pixel wide input, the 9th harmonic would alias at -19dB. (Assuming I’ve done the math correctly.)
Some of the aliasing will be in the range that is getting attenuated by the low pass filter.
Having the number of samples be an exact multiple of the input width may help hide the aliasing in some cases, since some of it will be reflected back onto the lower harmonics of the signal. It might look a little like sharpening, overemphasizing the higher frequencies.
All of this is to say that I don’t really know how much of a visual impact the aliasing will have in the end. It might be okay with this much oversampling, but the aliasing should be more significant with higher resolutions.
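The harmonic numbers and dB figures above can be double-checked mechanically: a square wave’s nth harmonic has amplitude 1/n relative to the fundamental, so the first odd harmonic past Nyquist sets the strongest alias. A small sketch (my own helper, assuming the alternating-pixel worst case):

```python
import math

def first_aliased_harmonic(pixels_per_line, samples_per_line):
    """For alternating black/white pixels (a square wave with one cycle
    per two pixels), find the first odd harmonic above Nyquist and its
    level in dB relative to the fundamental (harmonic n is 1/n)."""
    f0 = pixels_per_line / 2          # fundamental, in cycles per line
    nyquist = samples_per_line / 2    # in cycles per line
    n = 1
    while n * f0 <= nyquist:
        n += 2                        # square waves have odd harmonics only
    return n, 20 * math.log10(1 / n)
```

This reproduces the 5th harmonic at about −14 dB for a 640-pixel line and the 9th at about −19 dB for a 320-pixel line, both at 2560 samples per line.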
I believe I remember the SNES having different filtering for the final RF output and for the final composite output
It does look that way from the schematic. It looks like there is also:
Nintendo sure put a lot of components onto that board.
I have a SNES but it isn’t currently working. I was hoping to use it for some visual comparisons.
The number 2560 ensures that resolutions of 256, 320, 512, and 640 all divide each pixel into an integer number of samples
Genesis Plus GX will output something like 384 pixels (I don’t have the exact number in front of me right now). The main 320-pixel output doesn’t cover the whole active area, so some border padding is added. In my opinion, other cores should be doing this as well, and Ares seems to have made an effort to do it. So that might be something to consider.
I will have to take a look at your comb filter. It sounds interesting! What happens when there is input like the Genesis, which keeps a consistent phase from line to line and thus can’t be comb filtered? I haven’t been able to test that yet.
3840 is even better than 2560 for covering weird horizontal resolutions like 384, although unlike 2560 it doesn’t divide 512 evenly (3840/512 = 7.5).
On my shaders, when set up for Gen/MD, comb filters killed all of the chroma due to the fixed-phase signal, while notch filters do just fine. It’s surprisingly hard to find first-hand accounts of this, and I never heard anything about it during the CRT era, but I did find this on Reddit, talking about the RetroTINK 2X’s comb filter:
There are just some edge case exceptions for certain Genesis & 32X revisions which the 2X can’t decode the colors accurately. (in such cases a 5X is needed as you can correct it with the phase setting)
So yeah, seems like it indeed buggers the chroma completely. Apparently, some (most? all?) comb-based sets had a fallback notch filter, so maybe it switches over to that if the phase is fixed? or maybe if the chroma signal drops below a certain voltage? Dunno.
Note that most comb filters are adaptive, which means they also use a notch filter for portions of the decoding process depending on the properties of the signal.
That same page includes an excerpt from a JVC model’s owner’s manual that suggests it was a manual switch, at least for that model.
EDIT: I also recall PlainOldPants asking if anyone had done anything with overdriven electron guns that caused smearing to the right. I did a “chroma smear” shader a number of years ago that wasn’t really based on anything but my own memory of failing TVs, but the result was pretty good:
I was looking for something like this before
Thanks! And why hasn’t it been added to https://github.com/libretro/slang-shaders yet?
I didn’t think it warranted that, since it’s just a unidirectional blur in YIQ colorspace. If it did anything super-special, I’d have put it up there.
I’ve made some random progress on my NTSC shader. Hopefully I’ll have it finished enough by the end of this month to where I’m comfortable submitting a PR to the slang-shaders repository.
The NES simulation still needs more work. In particular, it is missing row-skew in its colors, so you won’t get a very accurate NES palette with this.
Download here: https://www.mediafire.com/file/3e652wqa2kfpxuh/p68k-fast-multipass-2025-10-07.zip/file
One of the included presets lets you do only the NTSC color correction. It contains a lot of confusing settings, but there is only one that really matters much: “Video decoder/region (see list below)”