Were composite video colors that bad, in practice?

Yes, the ones that I’ve seen. This is particularly easy to see on the SNES schematic:

snes_filters

Those two identical blocks on the right (for example, R34, R35, C13, and C14) are the two filters for the chroma components (R-Y and B-Y). They use all the same parts and so filter both components the same way.

Some consoles modulate the chroma and then use a band-pass filter rather than low-passing each of the components separately. Some later chips have the filter on-chip rather than relying on the designer to build one off-chip. But I haven’t seen any that filter the chroma components to different bandwidths.
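For reference, each of those filter blocks is a simple RC low-pass, so its -3 dB cutoff is 1/(2πRC). A quick sketch of the math (the component values below are made up for illustration; the real R34/C13 values would have to come from the schematic):

```python
import math

def rc_cutoff_hz(r_ohms: float, c_farads: float) -> float:
    """-3 dB cutoff frequency of a single-pole RC low-pass filter."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# Hypothetical values, for illustration only:
r = 270.0      # ohms
c = 220e-12    # farads (220 pF)
fc = rc_cutoff_hz(r, c)
print(f"cutoff ≈ {fc / 1e6:.2f} MHz")  # ≈ 2.68 MHz
```

Since both chroma blocks use identical parts, plugging the same R and C into this formula for each confirms the point: same parts, same cutoff, same bandwidth for both components.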

3 Likes

The irony is that with PAL they made the new, more expensive PAL-D to improve the colors, while with NTSC they did the opposite, even though NTSC chroma bandwidth is less than PAL's to begin with. That's why I remember NTSC as bad, second only to SECAM in its lack of color accuracy (especially in terms of oversaturation and how far it strayed from RGB). Btw, we used to associate PAL with light colors (which were often crushed in SECAM; the SECAM image was generally oversaturated). It's worth pointing out that we didn't have much contact with NTSC, and most of our NTSC exposure came through game consoles.

Inside the chip

Those "IF" (intermediate frequency) filters limit the actual bandwidth inside the chip separately. PAL uses the same line for both, so the bandwidths are different in NTSC but the same in PAL. Those resistors (R34, etc.) form low-pass filters. Or I could be wrong lol

2 Likes

I can’t tell what the "IF" blocks are from the datasheet, but I don’t see any in the chroma path. The chroma output is on pins 24 and 1, and the input is on pins 11 and 10. It loops back into the chip to allow the designer to use their own filter.

OK, so SCLN and SCLP should be jumpers that activate N(TSC) or P(AL) or something on pin 19. Needs some investigation. It looks like the colors come in as raw RGB on pins 20, 21, 22 from PPU2 pins 97, 96, 95.

Looking at the diagram, it seems the raw RGB passes through the YIQ matrix and comes back to pins 9, 10, 11 as Y, I, Q to be further processed (phase added, etc.) :wink:

It ends up on pin 7 of the S-ENC as Video Out, which is then sent to the RF plug and the multi-out video plug. The chip's internal filters should be the ones that limit bandwidth after the signal passes pins 9, 10, 11, or after pins 8, 19 (probably not).

2 Likes

This is what I’ve always done. I allow my eyes and brain, including my “unreliable” memory, to be the judge, while balancing this against trying not to clip colours and highlights and to maintain gradients, contrast, and details.

Great find and it makes complete sense. This is sometimes done in video and audio capture/recording and production as well to avoid clipping and preserve dynamic range.

1 Like

Quick example. First is a run-of-the-mill guest-advanced RGB preset, second is guest-ntsc with tvouttweaks doing the stuff just described, leaving all the mask/scanline/brightness/etc. settings the same. Mask adjusted to 50% for tolerable viewing on non-HDR displays:

I think this is approximating the differences seen in the example @DariusG posted

There are certain standards in formats like NTSC (YIQ channel bandwidth) or NTSC-J, clearly documented in various sources, that one cannot just revise or amend without solid, valid reasons. Actually, you can't cancel them at all lol. It is what it is.

Is this about the equal bandwidth thing?

I guess this part is somewhat iffy.

Did anyone read the document I posted on the NTSC bandwidth standard? It says that if you make them equal you'll introduce a hell of a lot of crosstalk between I and Q, and that PAL U/V needs extra work on the decoder side to get rid of the crosstalk precisely because they are equal.

1 Like

I see it now, yeah. I’ll leave this to you and beans :smiley:

I think if they were actually using the asymmetric IQ bandwidth, they must have been applying a pretty aggressive automatic gain control / color boost.

It would still be nice to have a definite answer, though!

Remember: only do the YIQ conversion once! You don’t want to repeat this in your shader chain. I made this mistake in the Mega Man example I just posted. :smiley:

Here are the shaders that do the YIQ thing:

TVouttweaks
GTUv50
NTSC Colors
nes-color-decoder
any recent composite video shader

Probably missing a few. I think these can all be combined with ntsc-adaptive without “doubling up” on effects.

EDIT: yeah nvm, this is all kind of mysterious to me, still. I think the general principle is correct, though- only do the YIQ conversion once. How this plays out exactly with all the shaders gets a bit messy :slight_smile:
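For anyone following along, the conversion those shaders perform looks roughly like this (a minimal sketch using the usual rounded FCC coefficients; individual shaders may use slightly different constants):

```python
def rgb_to_yiq(r, g, b):
    """FCC NTSC luma/chroma matrix (rounded coefficients)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return y, i, q

def yiq_to_rgb(y, i, q):
    """Approximate inverse of the matrix above."""
    r = y + 0.956 * i + 0.621 * q
    g = y - 0.272 * i - 0.647 * q
    b = y - 1.106 * i + 1.703 * q
    return r, g, b

# A single round trip returns (approximately) the original color.
y, i, q = rgb_to_yiq(0.2, 0.5, 0.8)
r, g, b = yiq_to_rgb(y, i, q)
```

The round trip itself is nearly lossless; the damage comes from the bandwidth limiting each shader applies while the image is in YIQ form, which is why stacking two such shaders filters the chroma twice.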

If you deep-dive the schematics it's possible to create a most faithful representation: e.g. those R35, R37 (270 Ω) low-pass I and Q to less than 2.7 MHz before going back into the chip, so approximate the detail loss there (assuming full detail is around 5 MHz or something), then the next stage, enter the S-ENC, do what it does, and so on. It will take so much time that you'll only ever do one system, like blargg did lol.

Start from PPU2, follow the color line, and emulate what's going on. I've also seen some NTSC encoders whose datasheets divide Q (as Q/2) after the matrix pass. There's a truckload of small details down this rabbit hole: phase shifts of 1 degree on sin, 4 on cos, and more and more and more lol

2 Likes

I think the document is saying something else. Filtering both at 0.6 MHz should result in the least crosstalk.

The crosstalk they mention is due to the asymmetrical sidebands of I when it is limited to 1.3 MHz. If you allow more than 0.6 MHz of bandwidth for either (or both) of I and Q and center them on the chroma subcarrier, you’ll have chroma data reaching beyond 4.2 MHz. That is not allowed for broadcast (but it may work with composite input). To keep everything under 4.2 MHz, you need to filter out the higher frequencies and you are left with a vestigial sideband. The phase differences that result from this can introduce crosstalk when you demodulate.
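To put rough numbers on that (NTSC subcarrier at ~3.58 MHz):

```python
F_SC = 3.579545e6   # NTSC chroma subcarrier (Hz)
F_MAX = 4.2e6       # top of the NTSC broadcast video band (Hz)

def upper_sideband_top(chroma_bw_hz: float) -> float:
    """Highest frequency the upper chroma sideband reaches."""
    return F_SC + chroma_bw_hz

# 0.6 MHz of chroma bandwidth just fits under 4.2 MHz...
print(upper_sideband_top(0.6e6) / 1e6)   # ~4.18 MHz: fits
# ...but 1.3 MHz does not, so the upper sideband gets truncated,
# leaving the vestigial sideband described above.
print(upper_sideband_top(1.3e6) / 1e6)   # ~4.88 MHz: over the limit
```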

From the same book, p. 435:

When using lowpass filters with a passband greater than about 0.6 MHz for NTSC (4.2 – 3.58) or 1.07 MHz for PAL (5.5 – 4.43), the loss of the upper sidebands of chrominance also introduces ringing and color difference crosstalk.

This isn’t really an issue for composite, though. Unlike broadcast, composite is not limited to 4.2 MHz and so we don’t have to cut the upper sideband.

If you deep-dive the schematics it's possible to create a most faithful representation: e.g. those R35, R37 (270 Ω) low-pass I and Q to less than 2.7 MHz before going back into the chip, so approximate the detail loss there (assuming full detail is around 5 MHz or something), then the next stage, enter the S-ENC, do what it does, and so on. It will take so much time that you'll only ever do one system, like blargg did lol.

I was curious and I actually did try to simulate these low pass filters, but I think either I am doing something wrong or the capacitor values in the schematic are incorrect. According to my simulation, they’d make terrible filters. Like, -6 dB at 4-5 MHz or something (I don’t remember off the top of my head). 🤷
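For what it's worth, a single-pole RC response is easy to sanity-check: the attenuation at frequency f, with cutoff fc, is -10·log10(1 + (f/fc)²). A -6 dB point at 4.5 MHz would correspond to a cutoff around 2.6 MHz, which is indeed a very gentle filter for separating anything:

```python
import math

def rc_attenuation_db(f_hz: float, fc_hz: float) -> float:
    """Attenuation (in dB, negative) of a single-pole RC low-pass
    at frequency f_hz, given -3 dB cutoff fc_hz."""
    return -10.0 * math.log10(1.0 + (f_hz / fc_hz) ** 2)

# With a ~2.6 MHz cutoff, the response is only about -6 dB at 4.5 MHz.
print(rc_attenuation_db(4.5e6, 2.6e6))   # about -6 dB
```

This is just the idealized first-order response; the real circuit's source and load impedances would shift the numbers, which could also explain a simulation disagreeing with the nominal schematic values.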

1 Like

That’s a huge rabbit hole right there, with lots of smaller rabbit holes at each step. Notice how emulator devs never touch it to offer a Composite/RGB option; they just bypass that part and go directly to RGB video out lol. They just forget the S-ENC was ever in there.

1 Like

With the new guest update that adds a safe volt limit

and it reminds me of

so now I think I know why I didn't like to use NTSC back then, aside from those

3 Likes

Another interesting one is the Sony CXA2075M, a similar encoder. You can see there are two LPFs after the YIQ matrix, probably setting the bandwidths; the subcarrier is then added, the chroma is combined and band-passed, and the luma goes through a trap filter to remove the high frequencies where the chroma sits. Y and C are then mixed and end up on pin 20. The luma probably loses some high-frequency detail in the trap filter too.

RGB out in pins 23-22-21, Composite in pin 20, S-Video in pins 16-15.
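As a toy illustration of what a luma trap does (not the CXA2075M's actual analog circuit): if you sample at 4× the subcarrier, averaging samples half a subcarrier period apart nulls the subcarrier frequency exactly, and it also eats any luma detail near that frequency, which matches the detail loss mentioned above:

```python
import math

FSC = 3.579545e6   # NTSC chroma subcarrier (Hz)
FS = 4 * FSC       # sample at 4x the subcarrier, a common digital choice
W = 2 * math.pi * FSC / FS   # subcarrier frequency in radians/sample

def trap_filter(x):
    """Two-tap comb: averaging samples two apart (half a subcarrier
    period at 4x sampling) nulls the subcarrier, a crude 'trap' filter."""
    return [(x[n] + x[n - 2]) / 2 for n in range(2, len(x))]

# A pure tone at the subcarrier frequency is (almost) completely removed...
tone = [math.sin(W * n) for n in range(1000)]
print(max(abs(v) for v in trap_filter(tone)))        # ~0

# ...while DC (flat luma) passes through untouched.
flat = [0.5] * 1000
print(max(abs(v - 0.5) for v in trap_filter(flat)))  # 0.0
```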

1 Like

I think there are some misconceptions in this thread.

There is nothing intrinsic in the design of NTSC or PAL video signals that requires a reduction of saturation before encoding. In fact, both FCC and SMPTE standards for NTSC have no mention of safe colors or anything else regarding desaturation.

For PAL, signal degradation of phase, which is very common, can lead to saturation loss. On NTSC, this results in hue shift, not saturation loss.

On NTSC, saturation loss would have to come from band-limited noise near the subcarrier, since AGC normalizes any weather-related amplitude reduction across the entire signal.

A direct composite connection would not have any significant degradation.

Aggressive filtering by notch or comb filters does not affect saturation. These filters solely affect the resolution or visual clarity of the chroma, which is very hard for us to distinguish when the luminance is good anyway.

The difference between 1.3 MHz and 0.6 MHz chroma bandwidth for Q is inconsequential. A signal with symmetrically filtered U/V is nearly indistinguishable from one with asymmetrically filtered I/Q. In practice, the 0.6 MHz Q filters were no longer in use by the time video game consoles came around. These details don’t affect saturation.

There is overwhelming evidence that palettes commonly used for emulators for systems like the NES, Commodore 64, and Apple II, correlate with what they look like on broadcast monitors and with what they theoretically should look like based on standards. FirebrandX’s NES palettes for example show only minor variance between measurements from the NES composite output, his PVM monitor, and Nintendo’s official NES Classic palette.

From my own experience, on multiple CRT displays, when adjusted to a reasonable calibration, the colors are not significantly different from monitors today.

One thing I will mention is that a great deal of digitization and innovation occurred in the 80s that greatly changed how TVs operated. Broadcasters continued to adhere to a great deal of tribal knowledge to ensure that their broadcasts would look adequate on old TVs. Old TVs were very sensitive to ‘hot’ colors and so broadcasters filtered ‘unsafe’ colors to avoid this situation (as well as to avoid disrupting the audio subcarrier, irrelevant to direct composite connections). This knowledge carried on even to the DVD era where DVD masterers continued to filter out the unsafe colors even though there was almost no risk of it disrupting anyone’s viewing experience by that time.

Phosphors also changed rapidly through the 80s and by the 1990s the phosphors used in TVs were completely different from the ones before. They were more responsive and clear, but needed to be ‘juiced’ to achieve the same color gamut. These formulations were not consistent in the beginning, and cheaper sets could have poor color gamut, leading to some cheap tricks like boosting red to make up for it. Poor circuitry or tuning, or aging, could lead to color bleeding and localized saturation loss, but not significant color changes.

For NTSC, a bad hue setting was much more common than saturation issues. PAL on the other hand did have issues with saturation when phase shift occurred. However, for direct connection there was not a significant phase error.

Finally, even if some part of the signal path degrades the saturation of an RGB image when converting it to a composite signal, the user of the TV is expected to compensate for it; that is, after all, one of the main purposes of color bars. Therefore, in my opinion, an NTSC-type shader should generally not degrade the saturation of its input, and a PAL-type shader should only do so if simulating degradation. An NTSC shader could, however, have a Color control to simulate the setting on TVs. The default setting should not change the base saturation, though.

5 Likes

I believe that for all these debates, one should get the original equipment, test it, compare, trust one's own eyes only, and probably keep the conclusion to oneself.

If you raise it in a debate, everyone will state their own opinion and push their own motives: someone will flex their TV set to make a point (it doesn't look like that on my new Sony Teratrillion 65" when I capture it with my new Snapdragon Gen 21 with HDR 550 and BFI at 240 Hz enabled, just got out of my new Ferrari btw), some other guy has a junk CRT TV with different results, and so on. It only gets messy and leads to nothing really useful, if it doesn't end in a flame war.

4 Likes