I’m wondering if dull, dim, dirty colors are something we really should be pursuing in our composite video shader presets.
I’ve never actually seen a photo of a well-functioning CRT displaying composite video that had dull, dim, dirty colors. The colors are different from RGB, but not really dull or unsaturated. If anything, I’ve noticed composite video on a CRT tends to be a bit more saturated.
Here’s what I think is happening:
- The saturation loss is caused by bleed. Bleed is reduced or eliminated by notch or comb filtering.
- What little bleed remains causes a slight saturation loss in the colors that are bleeding.
- TV manufacturers often did “nostalgic” calibrations, which probably caused clipping and other “incorrect” things, but it looked good to the consumer (bright and vibrant).
- The consumer then did whatever they wanted with the knobs.
In short, I think a generic “composite video look” really comes down to chroma bleed and artifacting. In guest-advanced-ntsc, we should pay attention to the “NTSC Chroma Scaling / Bleeding” parameters, along with “NTSC Artifacting” and “NTSC Fringing.”
For a more nostalgic look, a slight hue shift of red towards orange (or some other subtle shift) can help, to mimic whatever weird things the manufacturer was doing.
I don’t think we really need to touch NTSC Color Saturation or NTSC Brightness, though.
I think the way chroma bleed currently works might be improved if we can get it to actually cause saturation loss where it’s bleeding from, if that makes any sense. Edit: Actually, I think GTUv50 is doing this. guest-ntsc / ntsc-adaptive kind of do it, but only at a very low chroma bleed setting, and you don’t have full control over YIQ, so it’s not quite as accurate/realistic.
Based on my experience trying to vibe-code an NTSC de/modulation pipeline recently, saturation loss is a tradeoff resulting from the notch/comb filtering, as it’s killing a chunk of the chroma to avoid interfering with the luma.
I don’t really have an answer for how real hardware manages to stay super-saturated even after filtering. Maybe there’s some internal chroma boost after the filter stage? Dunno. Maybe @PlainOldPants has some knowledge here.
I’m glad to see the general public opinion on video signal and color emulation starting to shift, both with you and with beans.
As someone born after 2000, I was introduced to retro gaming only through emulation, both official and unofficial. It wasn’t until the past couple of years that I started finding out how the consoles and TVs actually looked. It is so hard today to get correct information about this, and that’s reflected in how surprisingly incomplete the shaders in RetroArch/LibRetro are. Really, my impression of this site since joining has been that people are aiming for a “nostalgic” look based on their memories and on what they enjoy playing, without quite supporting game preservation in the way that I would like. There’s definitely some solid information available here, especially from users like Dogway and Chthon, but it’s still not telling the whole story of what’s happening.
Because of all this, I’ve been having to experiment around on my own to get that missing information that others aren’t sharing online. As a disclaimer, I’m not an expert on this subject area, and everything I post should be taken with a grain of salt. There is a lot of information that I still need to find out, and there are a lot of assumptions that I’m having to make to fill in the gaps.
What you’re talking about is the NTSC color correction, which happens over RF, composite, and s-video (when demodulating chroma), but not over YPbPr or RGB. This NTSC color correction is why video games get more saturated, and why yellow becomes more orange. Even though all my TVs have these deep colors, very few LibRetro shaders try to do this properly.
This also has to do with the TV’s whitepoint. Many consumer TVs use a nonstandard bluer whitepoint and change their NTSC correction to compensate, and at least three out of four (or possibly all four) of the TVs that I’ve owned do this. In Japan, the standard reference white is different. Many shaders are missing this, and others don’t get it quite right. I don’t know of any shaders other than mine that have the 9300K+27MPCD whitepoint.
Because of how RGB connections skip NTSC color correction, I assume many game developers didn’t try to account for NTSC color. There’s also the fact that professional displays might have knobs instead of on-screen display settings, which meant that game developers could’ve set those knobs to look similar to the RGB video.
I made a post about this in the crt-beans thread recently, so I’m crossing my fingers that they’ll attempt it.
Fully detailed explanation with sources, because I severely need to spread awareness of this
The established standard practice from the mid-1960s onward (and continued into the 2000s) was to mess with the B-Y/R-Y (and G-Y) demodulation to approximate the absolute chrominance, but not necessarily the luminance (meaning you would get the right color at the wrong brightness), and without any compression (meaning that, for the colors that are possible to represent perfectly, you try to represent them perfectly, and you don’t care about the colors that are not possible to represent perfectly). (Sources: I’ll get the links to these later, but look up a paper from 1966 by N. W. Parker, and a paper from around 1975 by Neal; you can thank Chthon for finding these two papers. Look up ITU-R BT.472 from 1997 or 1998 or something. Also, look up basically any CRT TV’s video decoder chip’s data sheet, and you’ll see the nonstandard demodulation settings in there.)
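To make that concrete, here is a minimal sketch (mine, not taken from any of the papers or datasheets above) of what “messing with the demodulation” looks like in shader terms: per-axis angles and gains applied when recovering B-Y and R-Y from the subcarrier. All of the angle/gain numbers below are placeholders, not values from any real chip.

// Hedged sketch: demodulation along nonstandard axes. "chroma" is assumed to
// be the band-passed subcarrier sample and "phase" the reference carrier
// phase at this sample; both would come from earlier passes.
vec3 demod_nonstandard(float luma, float chroma, float phase)
{
    // A standard decoder would use 0 and 90 degrees with unity gains relative
    // to the burst; receiver datasheets list per-axis angles and gains that
    // deviate from this. These numbers are placeholders.
    const float BY_ANGLE = radians(0.0);    // nominal 0
    const float RY_ANGLE = radians(105.0);  // nominal 90
    const float BY_GAIN  = 1.0;
    const float RY_GAIN  = 1.1;

    // Multiply by a reference carrier at the chosen axis angle; a real
    // decoder would low-pass these products over several samples.
    float BmY = BY_GAIN * 2.0 * chroma * sin(phase + BY_ANGLE);
    float RmY = RY_GAIN * 2.0 * chroma * sin(phase + RY_ANGLE);

    // G-Y from the standard NTSC luma weights; receivers tweaked these too.
    float GmY = -0.509 * RmY - 0.194 * BmY;

    return vec3(luma + RmY, luma + GmY, luma + BmY);
}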
A lot of people might incorrectly cite 1987 (SMPTE C, RP 145) or 1994 (SMPTE 170M) as the year that NTSC color correction stopped being used. That’s not true. In 1987, SMPTE C (RP 145) was only six sentences long, and it only specified the standard phosphor primaries for a studio monitor, while still using this same old kind of approximate NTSC correction. (However, according to an ARIB TR-[some number] regarding HD 1125/60 video (thank Dogway for finding this), the phosphors that actually got used were very different, with deeper green and less deep red.) In SMPTE 170M, they explicitly say that you’re allowed to continue using the old primaries. SMPTE 170M and BT.472 don’t agree on whether Illuminant C or D65 is the whitepoint, but I’m 99% sure it is C.
There is no universal 1.0 or 255 on a CRT. Any RGB value that the NTSC color correction causes to go excessively bright will just be over-brightened, and if you turn up your TV’s contrast and/or color too high for the electron guns to handle, you’ll see these really bright RGB values (not B-Y/R-Y; that’s RGB) bleed over to the right, and ONLY to the right, which I haven’t seen any shaders attempt except my older, less finished ones. (Source: All four CRT TVs that I’ve owned have this behavior.)
Consumer TVs, even as late as 1997 with the Sony Trinitron KV-27S22, which had the Sony CXA2025AS (as in the popular NES palette), or the Toshiba FST BlackStripe CF2005 in 1985, often had a crazier correction where the white point would be dragged to a special point called 9300K+27MPCD (x=0.281, y=0.311) while trying to keep the area near red and green as reasonable as possible without taking chromatic adaptation into account, at the expense of the other colors getting dragged towards blue and blue getting oversaturated. That 9300K trick, which Sony called Dynamic Color, also causes games’ over-saturated colors to look different, especially on the NES, where the brown column becomes even browner. I’m not joking: I have a modern TV manufactured in the mid-to-late 2010s which actually has a setting called Dynamic Color that does this same stupid thing. (Source: The 1966 paper by N. W. Parker describes this same 9300K trick. If you look at my post in the crt-beans thread, or my post on the nesdev forums (username patchy68k), you can see output from my program showing similar graphs for the KV-27S22 and CF2005.)
Since at least the 1970s in Japan, the standard white point in NTSC video over the air has been D93, or 9300K+8MPCD (x=0.2831, y=0.2971, or x=0.2838, y=0.2984). (Source: Chthon found a paper from the late 70s or so, in Japanese, which describes this. The paper also cites Parker and Neal. The ARIB paper also notes D93 as a standard whitepoint for monitors.) According to comments in Dogway’s Grading Shader, consumer units still used 9300K+27MPCD (x=0.281, y=0.311) instead of D93.
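For anyone who wants to experiment with these white points in a shader, here is a small sketch (mine, not from any of the shaders discussed in this thread) that builds a Bradford chromatic adaptation matrix between two xy white points, e.g. from D65 toward the 9300K+27MPCD point quoted above. Whether a given set actually adapted or simply dragged everything toward blue is exactly the open question, so treat this purely as a building block.

// Sketch: Bradford chromatic adaptation matrix between two xy white points.
mat3 bradford_adapt(vec2 srcWhite, vec2 dstWhite)
{
    // xy -> XYZ with Y = 1
    vec3 src = vec3(srcWhite.x / srcWhite.y, 1.0, (1.0 - srcWhite.x - srcWhite.y) / srcWhite.y);
    vec3 dst = vec3(dstWhite.x / dstWhite.y, 1.0, (1.0 - dstWhite.x - dstWhite.y) / dstWhite.y);

    // Bradford cone response matrix (written row-major, hence the transpose)
    mat3 M = transpose(mat3( 0.8951,  0.2664, -0.1614,
                            -0.7502,  1.7135,  0.0367,
                             0.0389, -0.0685,  1.0296));

    vec3 s = M * src;
    vec3 d = M * dst;
    mat3 scale = mat3(d.x / s.x, 0.0, 0.0,
                      0.0, d.y / s.y, 0.0,
                      0.0, 0.0, d.z / s.z);
    return inverse(M) * scale * M;
}

// Example: adaptedXYZ = bradford_adapt(vec2(0.3127, 0.3290), vec2(0.281, 0.311)) * XYZ;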
This is yet another key point that not enough people are talking about. Depending on how you set the Brightness (Black Level) knob or on-screen display setting, you’ll get a different approximate “gamma” (or EOTF, as it’s properly called). Consumers could’ve set this to anything. After that, they would’ve probably adjusted their Color and Tint settings to make the yellow region look better, to compensate for an incorrect Brightness setting. Recently, I’ve temporarily settled on a pure 2.2 power function for the EOTF; you can see my latest post in my “PlainOldPants’s Shader Presets” thread for my full reasoning.
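Roughly, that idea can be sketched like this (illustrative only; the black level handling below is a guess, not how any particular shader or set does it):

// Pure 2.2 power EOTF with a hypothetical Brightness (black level) offset
// applied in signal space before the power function.
vec3 crt_eotf(vec3 signal, float blackLevel)
{
    vec3 v = max(signal + vec3(blackLevel), vec3(0.0));
    return pow(v, vec3(2.2));
}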
By the very late 1980s, consumer TVs started using on-screen displays for settings, instead of using physical knobs. This made it possible for users to reset the TV to its defaults. Professional sets continued to have knobs for some reason, so those didn’t have defaults. This is definitely something to be aware of.
I assume that, if a consumer was adjusting their TV’s settings at all, they would have set it to make actual NTSC TV video look right, without changing the settings for their video games. That way, your games would still have the NTSC color correction. Developers might not be able to do that, but I don’t actually know.
Other stuff that I want to spread awareness of
I don’t have a source for this, but I believe that CRTs would usually get a lower default black level as their capacitors aged. I would then guess that many people generally stuck to the default on-screen display settings, which meant that people with older TVs would end up with a deeper gamma. I think I heard somewhere that Toshiba in particular had the worst capacitors. Surely, aging also affected the other settings, and people would adjust the settings too.
To be more precise, it’s caused by the chroma portion of the video signal being low-passed or band-passed. For solid color regions, the saturation stays the same, but when switching from one color to another, saturation is lost on that edge.
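A minimal sketch of that effect, assuming a hypothetical texture that already holds YIQ (this is not from any particular preset): blur only the I/Q channels, and flat regions come back unchanged while color edges lose saturation.

// Box-blur only the chroma (I/Q) part of a YIQ texture. In flat regions the
// blur returns the same chroma, so saturation is untouched; only across a
// color edge does the mixing pull chroma toward the neighbors.
vec3 chroma_lowpass(sampler2D YIQSource, vec2 uv, float texelX)
{
    vec3 c = texture(YIQSource, uv).rgb;                      // Y, I, Q
    vec2 iq = vec2(0.0);
    for (int k = -3; k <= 3; k++)                             // 7-tap box; width is arbitrary
        iq += texture(YIQSource, uv + vec2(float(k) * texelX, 0.0)).gb;
    return vec3(c.r, iq / 7.0);                               // keep Y, blur I/Q
}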
I was meaning to say this in the “cheat sheet” thread that you created. These settings are definitely not enough to simulate the artifacts correctly. For simulating pure notch-filtering displays without a comb filter, this can get very close visually, but it won’t get the exact hardware behavior right. The actual problem is with comb filter TV sets, which work by comparing consecutive lines. In other words, a notch filter set will separate luma/chroma horizontally only, while a comb filter will separate it mostly vertically with some horizontal separation. These NTSC simulation settings won’t make the shader do comb filtering at all. I haven’t checked in a long time, but the only decent comb filter shader that I know of is the one that I’ve just posted in my shader thread.
I have no source, but I believe that comb filters only ever became decently widespread in consumer units sometime in the mid or late 90s.
Sharpness is an important setting too. It affects the Y (luma) signal only. It’s not uncommon for real hardware to have this. Still, there were probably a fair number of sets in the 70s and 80s that lacked a sharpness setting, which meant that they would look blurry. I’ll need to find out if a fair number of 90s TVs didn’t have sharpness, but for now, I assume it was in almost every TV by then.
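For what it’s worth, that kind of Sharpness control can be sketched as an unsharp mask on Y only, with chroma passed through untouched (the parameter and kernel below are placeholders, not modelled on any specific set):

// Luma-only peaking: sharpen Y against a small horizontal blur, leave IQ alone.
vec3 sharpen_luma(vec3 yiq, vec3 yiqLeft, vec3 yiqRight, float sharpness)
{
    float blurred = 0.25 * yiqLeft.x + 0.5 * yiq.x + 0.25 * yiqRight.x;
    float peaked  = yiq.x + sharpness * (yiq.x - blurred);
    return vec3(peaked, yiq.yz);
}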
Over RF, you get a 4.2 MHz lowpass on the entire video signal, so chroma ends up a bit blurrier over RF. While I do think I get more rippling artifacts over RF, I definitely do not get the disgustingly high fringing/artifacting levels that I’ve seen in ntsc-adaptive (though I haven’t used it in a long time), which doubles fringing/artifacting for RF compared to composite.
Something completely missing from your post is RF noise. Hardly anyone gets RF noise right. They usually just do random dots over the image. On my own original hardware, the noise tends to stay close to a few consistent frequencies that change slowly and in repeating patterns. You end up with consistent diagonal lines that crawl across the image and keep disappearing and reappearing. I’ve attempted this in my older shaders, but I can’t get it to look quite right. Something I realized recently is that a large amount of the noise might actually be coming from the game’s audio signal, which is frequency-modulated just barely above the video signal.
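A very rough sketch of that kind of noise (all constants made up for illustration): instead of per-pixel random dots, add a couple of low-amplitude interfering carriers whose frequency and phase drift slowly over time, which shows up as faint diagonal bars that crawl, fade out, and come back.

// Slowly drifting sinusoidal interference, added to the composite signal.
float rf_interference(vec2 pos, float time)
{
    float f1 = 0.37 + 0.020 * sin(time * 0.13);   // drifting frequency (cycles/pixel)
    float f2 = 0.21 + 0.015 * sin(time * 0.07);
    float n  = 0.5 * sin(6.2831853 * (pos.x * f1 + pos.y * 0.18) + time * 2.1)
             + 0.3 * sin(6.2831853 * (pos.x * f2 - pos.y * 0.11) + time * 1.3);
    return 0.02 * n;                               // small amplitude
}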
Point is, no. Real hardware is more saturated, and just better, even when using an RF signal. Here are some videos of my 1989 TV over RF. There’s definitely desaturation and artifacts on edges, but it’s actually not that bad.
The only comb filter I know in the repo is the one from aliaspider’s GTU-Famicom shader:
Not much to it. This preset does manage to keep the saturation up nicely, though.
These are the (inadequate, simplistic) filters (notch first, followed by comb) that I was using in my de/modulation shaders:
#version 450
#pragma format R16G16B16A16_SFLOAT
layout(push_constant) uniform Push
{
vec4 SourceSize;
vec4 OriginalSize;
vec4 OutputSize;
uint FrameCount;
float filterPicker, combLines;
} params;
#pragma parameter filterPicker "Filter Type (0 = notch, 1 = comb)" 0.0 0.0 1.0 1.0
#pragma parameter combLines "Comb Filter Lines" 2.0 2.0 3.0 1.0
layout(std140, set = 0, binding = 0) uniform UBO
{
mat4 MVP;
} global;
#pragma stage vertex
layout(location = 0) in vec4 Position;
layout(location = 1) in vec2 TexCoord;
layout(location = 0) out vec2 vTexCoord;
void main()
{
gl_Position = global.MVP * Position;
vTexCoord = TexCoord * 1.0001;
}
#pragma stage fragment
layout(location = 0) in vec2 vTexCoord;
layout(location = 0) out vec4 FragColor;
layout(set = 0, binding = 2) uniform sampler2D Source;
void main()
{
vec4 composite = texture(Source, vTexCoord);
float dotClockHz = composite.b;
float phase = composite.a;
if(params.filterPicker < 0.5){
// Horizontal texel offset
float texel = params.SourceSize.z;
// Sample composite signal from neighboring pixels
float sM1 = texture(Source, vTexCoord - vec2(texel, 0.0)).r;
float sP1 = texture(Source, vTexCoord + vec2(texel, 0.0)).r;
// Notch filter to suppress 3.58 MHz
// Using simple FIR kernel tuned for chroma suppression
float luma = (sM1 + 2.0 * composite.r + sP1) / 4.0; // i.e. 0.25 * sM1 + 0.5 * composite.r + 0.25 * sP1
// Chroma is the residual (subtract filtered Y from raw composite)
float chroma = composite.r - luma;
// Pack filtered result for next pass
FragColor = vec4(luma, chroma, 0.0, phase);
}else{
float texelX = params.SourceSize.z;
float texelY = params.SourceSize.w;
// Get composite signal from current line and previous line
float curr = composite.r;
float prev = texture(Source, vTexCoord - vec2(0.0, texelY)).r;
float next = texture(Source, vTexCoord + vec2(0.0, texelY)).r;
// Comb filter
float luma = (params.combLines > 2.5) ? 0.25 * prev + 0.5 * curr + 0.25 * next : 0.5 * (curr + prev); // luma stays the same across lines
float chroma = (params.combLines > 2.5) ? curr - 0.5 * (prev + next) : 0.5 * (curr - prev); // chroma alternates phase -> difference isolates it
FragColor = vec4(luma, chroma, 0.0, phase);
}
FragColor.b = dotClockHz;
}
They’re meant to function on a composite video signal, and the dotClockHz value is carried over from the modulation pass for use in the subsequent demodulation pass.
I was tripped up by this too at first. Gtu-famicom’s comb filter is wrong. The problem is that prev6 should be sampled from the previous scanline, not from the current one.
I don’t know where gtu and gtu-famicom get the numbers for their lowpass filters from (but it’s probably obvious). The code for the lowpass filter is obfuscated enough to be called malware.
Cgwg-famicom-geom (or something) has an actually good attempt at comb filtering that averages consecutive scan lines. Ntsc-blastem has a wrong attempt at it, but it has the right idea.
ntsc-blastem’s incorrectness may be my fault from the porting process. Blastem feeds in separate fields on a counter, and I just collapsed that completely since we don’t have a mechanism for it.
That’s because of another issue that I forgot about. The NES doesn’t alternate phase 180 degrees every line. It rotates by 120 degrees instead (hence “3 phase” because 360 / 120 = 3), so if you go straight up, you’ll be misaligned. So, in order to perfectly cancel out chroma, you have to also go two samples over to the left (or maybe right) when you sample the previous scanline. Real CRTs and video captures do actually do this, instead of just blindly going up a line and hoping it cancels out.
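As a hedged sketch of the idea (borrowing the convention from the de/modulation shader earlier in the thread, where the carrier phase rides along in the alpha channel): instead of blindly combing straight up, search a few horizontal offsets on the previous line and comb against the sample whose stored phase is closest to anti-phase with the current one. The offset search range is a guess, and this is illustration rather than how any real chip does it.

// Phase-aware comb: pick the previous-line sample closest to anti-phase.
vec2 phase_aligned_comb(sampler2D Source, vec2 uv, vec2 texel, float curSample, float curPhase)
{
    float bestSample = texture(Source, uv - vec2(0.0, texel.y)).r;
    float bestErr    = 1e9;
    for (int k = -3; k <= 3; k++)
    {
        vec4 s    = texture(Source, uv + vec2(float(k) * texel.x, -texel.y));
        float err = abs(cos(s.a - curPhase) + 1.0);   // 0 when exactly anti-phase
        if (err < bestErr) { bestErr = err; bestSample = s.r; }
    }
    float luma   = 0.5 * (curSample + bestSample);    // chroma cancels when anti-phase
    float chroma = curSample - luma;
    return vec2(luma, chroma);
}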
The comb filter is dead simple: just grab a dot and another dot one line up, then add them. The carrier phase is locked to their position, but the dot up gets +PI added to it before the cos/sin.
const float PI = 3.14159265;
const float NTSC = 3579545.0;                 // NTSC color carrier frequency
const float PIXEL_CLOCK = 21477270.0 / 4.0;   // SNES: 4 master cycles is 1 dot

// just a dummy phase; TEX0.x * TextureSize.x is "time" on an analogue monitor
float phase = (TEX0.x * TextureSize.x) * 2.0 * PI * NTSC / PIXEL_CLOCK;
float phase_of_dot_up = phase + PI;

// 2*PI is a dot
// NTSC color freq. is fixed, won't ever change: 170.666 actual visible "color samples"
// PIXEL_CLOCK is how many actual dots we have on screen, or else "InputSize"
// 170.666 color samples will have to be shared between our InputSize.x
// The composite signal per dot is: dot.r + cos(phase)*dot.g + sin(phase)*dot.b

float luma = (dot + dot_up) / 2.0;
// since we have luma, and our dot carried the whole signal:
float chroma = dot - luma;
Bang, everything in proper order heh
Cancelling the sin/cos wave by adding +PI: see how, at the same spot, the wave is e.g. 1 on the current dot and -1 on the dot up, so when we add them it’s 0. All of the chroma carrier signal is erased.
I don’t know where gtu and gtu-famicom get the numbers for their lowpass filters from (but it’s probably obvious). The code for the lowpass filter is obfuscated enough to be called malware.
I worked out the macros once and as far as I can tell it does the same thing that I do: a continuous time FIR filter using a Hann function (I don’t want to call it a window because it’s not windowing a sinc function). The filter size is set a bit differently. Some of the math isn’t simplified fully, so the performance could be better despite the macros. Not that it’s all that slow, really.
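A discrete sketch of that idea (not the actual GTU code, and ignoring the continuous-time sampling it does): an FIR low-pass whose tap weights follow a raised-cosine (Hann) shape rather than a windowed sinc. The width is arbitrary here.

// Hann-shaped FIR low-pass over the horizontal axis of Source.
float hann_lowpass(sampler2D Source, vec2 uv, float texelX, int halfWidth)
{
    float acc  = 0.0;
    float wsum = 0.0;
    for (int k = -halfWidth; k <= halfWidth; k++)
    {
        float w = 0.5 + 0.5 * cos(3.14159265 * float(k) / float(halfWidth + 1));
        acc  += w * texture(Source, uv + vec2(float(k) * texelX, 0.0)).r;
        wsum += w;
    }
    return acc / wsum;
}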
So, in order to perfectly cancel out chroma, you have to also go two samples over to the left (or maybe right) when you sample the previous scanline. Real CRTs and video captures do actually do this, instead of just blindly going up a line and hoping it cancels out.
Do you have a source describing this, or could you explain how CRTs do it? I was just thinking that, pre-digital, they’d be using simple analog delay lines and being smart about the phase differences might not be possible. It would be great to get more details on this.
Good question. I don’t have a solid source or explanation yet, but I at least know that my Dazzle DVC100 video capture and 2000 Panasonic CRT both aren’t struggling with comb filtering my NES or Genesis.
I think this realignment just makes the most sense as an explanation for why the comb filter isn’t screwing everything up on my real hardware. The horizontal rate is already able to vary within a certain tolerance, so I assume these comb filters had to have somehow made the delay line dynamic to stay in sync. Hardware to keep the delay line exactly aligned with the color carrier would have made a lot of sense too.
It could be that it’s falling back to notch filter decoding because it detects the comb as “illegal” as described in your paper. That’s something that I didn’t consider.
I don’t think I have footage of my Genesis or NES on the Panasonic CRT yet, but using my Dazzle DVC100 for video capture, I at least have a Genesis capture (RF) of the most annoying music ever here https://youtu.be/2pzzyb-YusY and an NES capture (composite, while the TV is over RF) of the coolest SMB1 game genie code here https://youtu.be/OVc38MIgjRg (Having the console connected over both RF and composite simultaneously on NES does seem to affect the video compared to using only one output at a time, but that’s another mess that I’ll pretend doesn’t exist.)
I might take pictures of this later, but if I use the Genesis 240p test suite’s color bleed test screen, the vertical white bars leave artifacts, while the white checkerboard pattern mostly eliminates them. If this TV were doing an exact delay line without realigning to the color carrier, this wouldn’t make sense.
I still want to know how exactly this worked. Time is a little tight for me right now, but I plan to read into this more. Looking at service manuals and datasheets sounds like the best plan.
Edit: I actually just remembered, when I connect my Genesis to my BenQ projector, it becomes completely black-and-white. It still gets cleanly filtered luma, but no chroma. I’ll have to try connecting it to another modern TV that I have, and to a cheap composite-to-HDMI converter, and see what happens.
You can tell they didn’t use RGB when designing these games on SNES, as the colors are over-saturated; they probably used composite. There is a clear difference in saturation when using RGB. I believe saturation is suppressed on composite because of low-pass filtering cutting parts of the chroma, but the composite look is what was intended in the first place. They probably used the default cable that came with the machine, which was composite for most consoles up to the Wii.
I’m confused by this because I thought it was established earlier that saturation loss via composite was always associated with chroma bleed and was a localized desaturation.
This just looks like a general whole-screen desaturation, and I don’t see any bleed…
When connecting a PAL Super Nintendo (SNES) to an American (NTSC) television, it is very common to get a washed-out or too-bright image, or sometimes no color at all. This is due to the fundamental differences in how PAL and NTSC consoles and video cables process and transmit analog color signals.
The technical reason for the washed-out colors
The cause of the washed-out or overly bright image is an impedance mismatch between the console and the cable:
- PAL SNES consoles are designed to work with a cable that has a specific 75-ohm load resistor on the video signal line.
- NTSC SNES consoles and cables do not require or include this resistor, as the impedance is handled differently by the console and TV.
When you use a standard NTSC cable with a PAL console, the missing resistor causes the video signal’s voltage to be roughly twice the standard level. This overdriven signal results in a bright, washed-out, and desaturated picture on your TV.
There are a couple of other videos on his channel which are captured differently. I’ve actually been meaning to open a thread sometime where we can collect channels that capture from real hardware.
I think this is it. Some TVs may have had less aggressive comb filters or notch filters, so you’d have more artifacts but less desaturation.
More aggressive filtering: sharper, fewer artifacts, less saturated.
This also explains the look of composite video on PVMs that lack a comb filter: perfectly well saturated, but all of the artifacts are present, and chroma is smeared even though local (edge) contrast stays sharp, thanks to the excellent sharpening circuit.
The chroma-smeared but sharp look is only seen on high-end displays without comb filters. These are meant to display the entire signal clearly, warts and all, for production purposes.
That would be a nice thing! Not only channels but also images (there was “Calling all CRT owners: photos please!” but it’s kind of random; I think it needs an index in the OP pointing to every post, with the TV type and cables used).