Were composite video colors that bad, in practice?

Remember: only do the YIQ conversion once! You don’t want to repeat this in your shader chain. I made this mistake in the Mega Man example I just posted. :smiley:

Here are the shaders that do the YIQ thing:

TVouttweaks
GTUv50
NTSC Colors
nes-color-decoder
any recent composite video shader

Probably missing a few. I think these can all be combined with ntsc-adaptive without “doubling up” on effects.

EDIT: yeah nvm, this is all kind of mysterious to me, still. I think the general principle is correct, though: only do the YIQ conversion once. How this plays out exactly with all the shaders gets a bit messy :slight_smile:
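For reference, here's a minimal Python sketch (mine, not taken from any of the shaders above) of the standard FCC RGB-to-YIQ matrix and its inverse; the point is just that this pair of conversions should appear exactly once in a preset, not once per pass.

```python
# Standard (rounded) FCC NTSC RGB <-> YIQ matrices -- for reference only,
# not lifted from any of the shaders listed above.
import numpy as np

RGB_TO_YIQ = np.array([
    [0.299,  0.587,  0.114],   # Y
    [0.596, -0.274, -0.322],   # I
    [0.211, -0.523,  0.312],   # Q
])
YIQ_TO_RGB = np.linalg.inv(RGB_TO_YIQ)

def rgb_to_yiq(rgb):
    """rgb: array of shape (..., 3), components in 0..1."""
    return rgb @ RGB_TO_YIQ.T

def yiq_to_rgb(yiq):
    return yiq @ YIQ_TO_RGB.T

# Example: pure red -> Y ~0.30, I ~0.60, Q ~0.21.
print(rgb_to_yiq(np.array([1.0, 0.0, 0.0])))
```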

If you deep-dive the schematics it's possible to create a very faithful representation, e.g. those R35/R37 270 low-pass filters that limit I and Q to less than 2.7 MHz before going back into the chip; then approximate the detail loss there (assuming full detail is something like 5 MHz), then the next stage; enter the S-ENC, do what it does, and so on. It will take so much time that you'll only do one system, like blargg did, lol.

Start from PPU2, follow the color line, and emulate what's going on. I've also seen some NTSC encoders that divide Q (like Q/2) in their datasheets after the matrix pass. There is a truckload of small details down this rabbit hole: phase shifts of 1 degree on sin, 4 on cos, and more and more and more, lol.

2 Likes

I think the document is saying something else. Filtering both at 0.6 MHz should result in the least crosstalk.

The crosstalk they mention is due to the asymmetrical sidebands of I when it is limited to 1.3 MHz. If you allow more than 0.6 MHz of bandwidth for either (or both) of I and Q and center them on the chroma subcarrier, you’ll have chroma data reaching beyond 4.2 MHz. That is not allowed for broadcast (but it may work with composite input). To keep everything under 4.2 MHz, you need to filter out the higher frequencies and you are left with a vestigial sideband. The phase differences that result from this can introduce crosstalk when you demodulate.

From the same book, p. 435:

When using lowpass filters with a passband greater than about 0.6 MHz for NTSC (4.2 – 3.58) or 1.07 MHz for PAL (5.5 – 4.43), the loss of the upper sidebands of chrominance also introduces ringing and color difference crosstalk.

This isn’t really an issue for composite, though. Unlike broadcast, composite is not limited to 4.2 MHz and so we don’t have to cut the upper sideband.
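To make the crosstalk mechanism concrete, here is a rough numpy sketch (my own construction, not from the book): an I-only test signal is quadrature-modulated onto the subcarrier, everything above 4.2 MHz is cut as a broadcast channel would, and the result is synchronously demodulated. Any energy that lands on the Q axis is crosstalk from the lost part of the upper sideband. The sample rate and bandwidths are just convenient choices.

```python
# Sketch of I -> Q crosstalk caused by cutting the upper chroma sideband.
import numpy as np
from scipy.signal import butter, filtfilt

FSC = 3_579_545.0               # NTSC color subcarrier
FS = 12 * FSC                   # sample rate: an arbitrary 12x the subcarrier
t = np.arange(200_000) / FS

def lowpass(x, cutoff_hz, order=4):
    b, a = butter(order, cutoff_hz / (FS / 2))
    return filtfilt(b, a, x)

# Baseband test signal: ~1 MHz of content on I, nothing at all on Q.
rng = np.random.default_rng(0)
i_bb = lowpass(rng.standard_normal(t.size), 1.0e6)
q_bb = np.zeros_like(i_bb)

# Ideal double-sideband quadrature modulation onto the subcarrier.
chroma = i_bb * np.cos(2 * np.pi * FSC * t) + q_bb * np.sin(2 * np.pi * FSC * t)

# Broadcast-style channel: nothing above 4.2 MHz, so part of the upper
# sideband (3.58..4.58 MHz) is removed -> vestigial sideband.
chroma_vsb = lowpass(chroma, 4.2e6)

# Synchronous demodulation back to baseband I and Q.
i_out = lowpass(2 * chroma_vsb * np.cos(2 * np.pi * FSC * t), 1.3e6)
q_out = lowpass(2 * chroma_vsb * np.sin(2 * np.pi * FSC * t), 1.3e6)

# Q was zero going in; any energy here is crosstalk from the asymmetric sidebands.
print("RMS of recovered Q (crosstalk):", np.sqrt(np.mean(q_out ** 2)))
print("RMS of recovered I            :", np.sqrt(np.mean(i_out ** 2)))
```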

If you deep-dive the schematics it's possible to create a very faithful representation, e.g. those R35/R37 270 low-pass filters that limit I and Q to less than 2.7 MHz before going back into the chip; then approximate the detail loss there (assuming full detail is something like 5 MHz), then the next stage; enter the S-ENC, do what it does, and so on. It will take so much time that you'll only do one system, like blargg did, lol.

I was curious and I actually did try to simulate these low pass filters, but I think either I am doing something wrong or the capacitor values in the schematic are incorrect. According to my simulation, they’d make terrible filters. Like, -6 dB at 4-5 MHz or something (I don’t remember off the top of my head). 🤷
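For what it's worth, this is the kind of back-of-the-envelope check I'd do for a single-pole RC stage; the 270 value is the resistor from the schematic discussion above, while the capacitor value is a pure placeholder, since that's exactly the number in question.

```python
# Corner frequency and attenuation of a single-pole RC low-pass.
import math

def rc_cutoff_hz(r_ohms: float, c_farads: float) -> float:
    """-3 dB corner: f_c = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

def rc_attenuation_db(f_hz: float, r_ohms: float, c_farads: float) -> float:
    """Voltage attenuation of the same filter at frequency f."""
    fc = rc_cutoff_hz(r_ohms, c_farads)
    return -10.0 * math.log10(1.0 + (f_hz / fc) ** 2)

R = 270.0       # ohms, per the schematic mentioned above
C = 390e-12     # farads -- placeholder; substitute the real schematic value

print(f"cutoff: {rc_cutoff_hz(R, C) / 1e6:.2f} MHz")
print(f"attenuation at 3.58 MHz: {rc_attenuation_db(3.58e6, R, C):.1f} dB")
```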

1 Like

That's a huge rabbit hole right there, with lots of smaller rabbit holes in each step. Notice how emulator devs never touch it to offer some Composite/RGB option; they just bypass that part and go directly to RGB video out, lol. They just forget the S-ENC was in there.

1 Like

With the new guest update that adds a safe volt limit

and it reminds me of

so now I think I know why I didn't like to use NTSC back then, aside from those

3 Likes

Another interesting one is the Sony CXA2075M, a similar encoder. You can see there are two LPFs after the YIQ matrix, probably setting the bandwidths; the subcarrier is then added, the chroma is combined and bandpassed, and the luma goes to a trap filter to remove the high frequencies of the chroma. Y/C are then mixed to end up as composite on pin 20. Luma probably loses some high-frequency detail in the trap filter, too.

RGB out on pins 23/22/21, composite on pin 20, S-Video on pins 16/15.
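Assuming the luma trap is essentially a notch at the color subcarrier, a rough sketch of that stage might look like this (the sample rate and notch width are arbitrary choices, not datasheet values):

```python
# Rough sketch of the luma "trap filter" stage described above: a notch at the
# color subcarrier strips chroma energy out of luma before Y and C are summed.
import numpy as np
from scipy.signal import iirnotch, filtfilt

FSC = 3_579_545.0          # NTSC color subcarrier
FS = 4 * FSC               # hypothetical sample rate

b, a = iirnotch(FSC, Q=2.0, fs=FS)   # narrow-band reject centered on the subcarrier

t = np.arange(4096) / FS
luma = 0.5 + 0.1 * np.sin(2 * np.pi * 15_734 * t)   # slow "picture" content
chroma = 0.2 * np.sin(2 * np.pi * FSC * t)           # subcarrier riding on top

trapped = filtfilt(b, a, luma + chroma)
print(np.max(np.abs(trapped - luma)))   # small residue: most of the chroma is removed
```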

1 Like

I think there are some misconceptions in this thread.

There is nothing intrinsic in the design of NTSC or PAL video signals that requires a reduction of saturation before encoding. In fact, both FCC and SMPTE standards for NTSC have no mention of safe colors or anything else regarding desaturation.

For PAL, signal degradation of phase, which is very common, can lead to saturation loss. On NTSC, this results in hue shift, not saturation loss.

Saturation loss on NTSC would have to come from band-limited noise near the subcarrier, since AGC normalizes any reduction in amplitude across the entire signal (from weather, for example).

A direct composite connection would not have any significant degradation.

Aggressive filtering by notch or comb filters does not affect saturation. These filters solely affect the resolution or visual clarity of the chroma, which is very hard for us to distinguish when the luminance is good anyway.

The difference between 1.3 MHz and 0.6 MHz chroma bandwidth for Q is inconsequential. The difference between a signal with symmetrically filtered U/V and one with asymmetrically filtered I/Q is nearly imperceptible. In practice, the 0.6 MHz Q filters were not used by the time video game consoles came around. These details don't affect saturation.

There is overwhelming evidence that palettes commonly used for emulators for systems like the NES, Commodore 64, and Apple II, correlate with what they look like on broadcast monitors and with what they theoretically should look like based on standards. FirebrandX’s NES palettes for example show only minor variance between measurements from the NES composite output, his PVM monitor, and Nintendo’s official NES Classic palette.

From my own experience, on multiple CRT displays, when adjusted to a reasonable calibration, the colors are not significantly different from monitors today.

One thing I will mention is that a great deal of digitization and innovation occurred in the 80s that greatly changed how TVs operated. Broadcasters continued to adhere to a great deal of tribal knowledge to ensure that their broadcasts would look adequate on old TVs. Old TVs were very sensitive to ‘hot’ colors and so broadcasters filtered ‘unsafe’ colors to avoid this situation (as well as to avoid disrupting the audio subcarrier, irrelevant to direct composite connections). This knowledge carried on even to the DVD era where DVD masterers continued to filter out the unsafe colors even though there was almost no risk of it disrupting anyone’s viewing experience by that time.

Phosphors also changed rapidly through the 80s and by the 1990s the phosphors used in TVs were completely different from the ones before. They were more responsive and clear, but needed to be ‘juiced’ to achieve the same color gamut. These formulations were not consistent in the beginning, and cheaper sets could have poor color gamut, leading to some cheap tricks like boosting red to make up for it. Poor circuitry or tuning, or aging, could lead to color bleeding and localized saturation loss, but not significant color changes.

For NTSC, a bad hue setting was much more common than saturation issues. PAL on the other hand did have issues with saturation when phase shift occurred. However, for direct connection there was not a significant phase error.

Finally, even if any part of the signal process degrades the saturation of an RGB image when converting to a composite signal, the user of the TV is expected to compensate for it. That is, after all, one of the main purposes of color bars. Therefore, it is my opinion that an NTSC-type shader should generally not degrade the saturation of its input, and a PAL-type shader should only do so if simulating degradation. An NTSC shader could, however, have a Color control to simulate the setting on TVs. The default setting should not change the base saturation, however.
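For illustration, here's a minimal sketch of such a Color control (hypothetical, not from any particular shader): a single gain applied to the chroma components only, defaulting to 1.0 so the base saturation is untouched.

```python
# Hypothetical "Color" control: scale only the chroma (I/Q) components,
# defaulting to 1.0 so the base saturation is left alone.
import numpy as np

def apply_color_control(yiq: np.ndarray, color: float = 1.0) -> np.ndarray:
    """yiq: array of shape (..., 3) holding Y, I, Q. color: 0 = grayscale, 1 = unchanged."""
    out = yiq.copy()
    out[..., 1:] *= color     # scale I and Q, leave Y alone
    return out
```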

5 Likes

I believe that for all these debates, one should get the original equipment, test it, compare, trust their own eyes only, and probably keep the conclusion to themselves.

If you raise that in a debate, everyone will state their own opinion and has their own motives; someone will flex his TV set to make a point (it doesn't look like that on my new Sony Teratrillion 65", when I capture it with my new Snapdragon Gen 21, when I enable HDR 550 and BFI at 240 Hz, just got out of my new Ferrari btw), some other guy has a junk CRT TV with different results, and so on. It only gets messy and leads to nothing really useful, if it doesn't end in a flame war.

4 Likes

what about

https://creativecow.net/forums/thread/16-235-vs-0-255ae/

and

from wikipedia

1 Like

The situation is very similar to the usage of sqrt(x). It's abysmal to claim every coder or whatever uses sqrt(max(x, 0.0)) etc. every time x is a float or something. :grin:

2 Likes

That's the 'local broadcaster tribal knowledge' I was talking about. Because the color carrier is a modulated sine signal, the color signals go above 100 IRE. Thus analog TV signals are allowed to go above 100 IRE, but they cannot go too high, especially on old equipment. The recommended limit is 120 IRE, but that is very conservative, and most TVs actually should not have a problem with higher values. Lots of equipment we used, like VHS players and video game consoles, violated the limit, and they still worked fine. The only problem I ever saw was with a weird multi-scanning TV that would lose sync with a bright enough picture.

The Wikipedia page is talking about the limited output range afforded to Rec. 601 signals. Rec. 601 reserves some space in the digital domain to hold negative IRE values and values above 100 IRE. This is so that digital processing can operate on that data without clipping.
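A tiny sketch of that headroom, assuming the usual 8-bit Rec. 601 mapping (Y = 16..235, with codes 0 and 255 reserved): excursions above 100 IRE still land on representable codes instead of clipping.

```python
# Rec. 601 "studio range" quantization for 8-bit luma: 16 = black, 235 = white,
# leaving footroom/headroom so sub-black and over-white excursions survive.
def y_to_8bit(y: float) -> int:
    """Map analog luma (0.0 = black, 1.0 = 100 IRE white) to an 8-bit code."""
    code = round(16 + 219 * y)
    return max(1, min(254, code))   # codes 0 and 255 are reserved

print(y_to_8bit(0.0))    # 16  (black)
print(y_to_8bit(1.0))    # 235 (reference white)
print(y_to_8bit(1.08))   # ~253: an over-white excursion that is still representable
```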

1 Like

Thank you for sharing your knowledge with us.

That tracks: the stuff we're describing should all be happening on the TV side of things. The issue is not the signal itself, but how it's being physically transmitted via a single cable. So there's no desaturation resulting from intentionally limiting I and Q during encoding?

Saturation loss on NTSC would have to come from band-limited noise near the subcarrier, since AGC normalizes any reduction in amplitude across the entire signal (from weather, for example).

I was actually not sure if the safe volt thing was exclusively an RF thing; I was starting to lean in that direction. Good to have that cleared up.

Ok, this tracks: blurring the chroma is a form of local desaturation. Filtering doesn't affect saturation in a uniform way.

The difference between 1.3 MHz and 0.6 MHz chroma bandwidth for Q is inconsequential. The difference between a signal with symmetrically filtered U/V and one with asymmetrically filtered I/Q is nearly imperceptible. In practice, the 0.6 MHz Q filters were not used by the time video game consoles came around. These details don't affect saturation.

Can we determine if TVouttweaks is way off in what it’s doing, then? There’s a very significant difference between the asymmetrical “correct” bandwidth and symmetrical bandwidths (e.g., 54/54 instead of 83/25) as seen in the shader.

Do you have a source for the symmetrical bandwidth? It’s been a point of contention around here, would be great to have this settled.

Are you saying that this is pretty much correct? This thread has been a wild ride.

From the OP

Here’s what I think is happening:

- The saturation loss is caused by bleed. Bleed is reduced or eliminated by notch or comb filtering.

- What little bleed remains causes a slight saturation loss in those colors that are bleeding.

- TV manufacturers often did "nostalgic" calibrations, which probably caused clipping and other "incorrect" things, but it looked good to the consumer (bright and vibrant).

- The consumer then did whatever they wanted with the knobs.

In short, I think a generic “composite video look” really comes down to chroma bleed and artifacting. In guest-advanced-ntsc, we should pay attention to the “NTSC Chroma Scaling / Bleeding” parameters, along with “NTSC Artifacting” and “NTSC Fringing.”

1 Like

It depends how aggressive the filtering is and how we define the cutoff frequency. In analog electronics, the cutoff frequency is conventionally where the signal power drops by 3 dB (half power, or about 70.7% of the voltage). SMPTE, however, specifies a cutoff at 2 dB for some reason, so the effective bandwidth is actually slightly bigger than 1.3 MHz. For the Q filter, SMPTE specifies -6 dB at 0.6 MHz. SMPTE recommends filters have a Gaussian characteristic. A digital filter can implement a Gaussian perfectly, but in analog, the closest would be a Bessel filter. This type of filter is economical to implement.
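To put rough numbers on that, here's a tiny sketch of a Gaussian-characteristic response (my own construction from the points above; the actual spec templates aren't reproduced here):

```python
# Loss of a Gaussian low-pass grows as ~6.02 * (f / f_half)^2 dB,
# where f_half is the half-amplitude (-6 dB) frequency.
import math

def gaussian_loss_db(f_hz: float, f_half_hz: float) -> float:
    """Attenuation in dB of a Gaussian response whose -6 dB point is f_half."""
    return -20.0 * math.log10(2.0 ** (-(f_hz / f_half_hz) ** 2))

# Q filter: the -6 dB point sits right at 0.6 MHz per the spec quoted above.
print(gaussian_loss_db(0.3e6, 0.6e6))            # ~1.5 dB at 0.3 MHz

# I filter: if the spec point is -2 dB at 1.3 MHz, where do -6 dB and -3 dB land?
f_half_i = 1.3e6 * math.sqrt(6.02 / 2.0)          # solve 6.02*(1.3/f_half)^2 = 2
print(f_half_i / 1e6)                             # ~2.25 MHz (-6 dB)
print(f_half_i / math.sqrt(2.0) / 1e6)            # ~1.59 MHz (-3 dB): above 1.3 MHz
```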

I think it's important to consider the history here. The original reason the exact phases for I and Q were chosen was that the eye was found to be less sensitive to color changes on the Q axis compared to the I axis; thus different bandwidths could be used, and it was established that the Q bandwidth would be narrower. While this was a benefit for broadcasters, it made the decoding circuitry more complicated. RCA developed YPbPr (it seems to have developed completely separately from YUV, despite being almost the same) as a more affordable way to decode the YIQ signals, and it used symmetric filtering. The phase difference between YPbPr and YIQ is inconsequential because the color is based on the phase difference, not the absolute phase. Thus the development of YPbPr from YIQ is founded on the basis that the bandlimited Q is more or less visually equivalent to the less bandlimited Pb. Most TVs ended up using the YPbPr system, but a few TVs did have true YIQ decoding.

The standard is SMPTE 170M-2004. The standard states encoding as either YPbPr or YIQ is acceptable. You can find a copy from ITU in the BT.1700 package. https://www.itu.int/rec/R-REC-BT.1700-0-200502-I/en

Yeah, pretty much correct.

EDIT: Another thing I should mention, from the perspective of ‘artist’s intent’, is that S-Video was very common in Japan, nearly standard. Japanese developers certainly had S-Video in mind by the time of the 4th gen.

3 Likes

Yeah, the dithering stuff is mostly a Sega thing, plus the TurboGrafx-16, PSX, and N64. It's just not widely seen on other systems. You can find some examples on the SNES for sure, but the system could do real transparencies and display a lot of colors, so there really isn't a need for dithering. Examples on the NES are few and far between as well.

And let’s not forget the fabled Toys R US SNES kiosk, which featured a razor sharp RGB monitor. I was pissed I couldn’t get that quality at home.

1 Like

That is the video output; the color coding and black levels produced at the CRT's video input are also being ignored.

It is not an attribute in itself. System M and SMPTE-C cover about 71% of the NTSC gamut, and when colors are remapped with gamut calibration, the values fall short of 100%. Compensation is needed in the receiver's (TV's) decoder, with saturation (I suppose), to obtain 100%.

This isn’t quite true. Filtering can affect the saturation in fine detail because it is done in gamma space rather than linear color. That won’t affect something like the color of the sky in Super Mario Bros., but it is noticeable in some scenes. At the very beginning of Mega Man X, for example, try adjusting the bandwidth and watching the red lights on the overpass.

I might have time to come up with a mathematical example tomorrow, but for a quick demo you can try blurring an image in gamma space.
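In the meantime, here's a two-pixel version of that demo (the values and the 2.2 gamma are arbitrary assumptions): averaging gamma-encoded values gives a noticeably different result than averaging in linear light and re-encoding.

```python
# Averaging (blurring) two neighboring pixels in gamma space vs. linear light.
a_gamma, b_gamma = 0.9, 0.1      # gamma-encoded pixel values (made up)
GAMMA = 2.2                      # assumed display gamma

# Blur directly on the gamma-encoded values (what most shaders do by default):
blur_gamma_space = (a_gamma + b_gamma) / 2                    # 0.50

# Blur in linear light, then re-encode:
a_lin, b_lin = a_gamma ** GAMMA, b_gamma ** GAMMA
blur_linear_space = ((a_lin + b_lin) / 2) ** (1 / GAMMA)      # ~0.66

print(blur_gamma_space, blur_linear_space)
```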

2 Likes

What do we make of this? Is this basically composite video without any TV processing? If this is a good source, it tracks with what we've been saying re: desaturation from encoding, and then the decoder adds it back (automatic gain control and whatever else), because the finished image as seen on a TV certainly didn't look this washed out, at least not to my memory.

https://www.chrismcovell.com/gotRGB/rgb_compare.html

@beans Thank you for bringing this up. I agree that filtering is distorted when done in Gamma space. I’d love to have the math documented. Still, such distortions would be localized and not affect the picture as a whole, supporting the assumptions in the OP.

@Nesguy the problem I have with that page is that we don't know what signal processing is done on the capture side. The signal chain needed to show a composite signal vs. an RGB one on a monitor is very different and could introduce the distortions we see. If the capture method is properly terminating the connections and going off solely on voltages, those degradations might exist. Even RGB is not perfectly clean because we are still working with analog circuitry. We have to do our best to eliminate variables on the display/capture side.

1 Like

Thanks, I suspected this would be the answer!