What about
https://creativecow.net/forums/thread/16-235-vs-0-255ae/
and from Wikipedia?
The situation is very similar to the usage of sqrt(x).
It’s absurd to claim that every coder uses sqrt(max(x, 0.0)) or the like every time x is a float.
That’s the ‘local broadcaster tribal knowledge’ I was talking about. Because the color carrier is a modulated sine signal, the color signals go above 100 IRE. Thus analog TV signals are allowed to go above 100 IRE, but they cannot go too high, especially on old equipment. The recommended limit is 120 IRE, but that is very conservative, and most TVs actually should not have a problem with higher values. Lots of equipment we used, like VHS players and video game consoles, violated the limit, and they still worked fine. The only problem I ever saw was with a weird multi-scanning TV that would lose sync with a bright enough picture.
The Wikipedia page is talking about the limited output range afforded to Rec. 601 signals. Rec. 601 reserves some space in the digital domain to hold negative IRE values and values above 100 IRE, so that digital processing can operate on that data without clipping.
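To make that headroom concrete, here is a minimal sketch of the usual 8-bit Rec. 601 “studio range” luma quantization (the 16 + 219·Y mapping; the test values are just for illustration):

```python
# Minimal sketch of 8-bit Rec. 601 "studio range" luma quantization, showing
# the footroom/headroom described above: values below 0 IRE or above 100 IRE
# survive instead of clipping at the nominal 16/235 codes.

def encode_luma_601(y):
    """Map analog luma (0.0 = black, 1.0 = 100 IRE white) to an 8-bit code.

    The nominal range is 16..235; codes 1..15 and 236..254 are footroom and
    headroom for undershoot/overshoot. Codes 0 and 255 are reserved.
    """
    code = round(16 + 219 * y)
    return max(1, min(254, code))  # clip only at the reserved codes

for y in (-0.05, 0.0, 0.5, 1.0, 1.08):
    print(f"luma {y:+.2f} -> code {encode_luma_601(y)}")
# luma -0.05 -> code 5    (below black, still representable)
# luma +1.08 -> code 253  (above 100 IRE, still representable)
```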
Thank you for sharing your knowledge with us.
That tracks - the stuff we’re describing should all be happening on the TV side of things. The issue is not the signal itself, but how it’s being physically transmitted via a single cable. There’s no desaturation resulting from intentionally limiting I and Q during encoding?
Saturation loss on NTSC would need to come from bandlimited noise or attenuation near the subcarrier, because the AGC normalizes any weather-related reduction in amplitude across the entire signal.
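As a toy numerical sketch of that AGC argument (not a real decoder; the amplitudes are made up): a flat fade gets undone because the AGC rescales everything against the sync reference, while a loss confined to the chroma region does not.

```python
# Toy sketch: AGC rescales the whole signal so that a fixed reference
# (sync amplitude here) comes back to nominal. Amplitudes are arbitrary units.
SYNC_NOMINAL = 40.0

def agc(sync, luma, chroma):
    gain = SYNC_NOMINAL / sync
    return sync * gain, luma * gain, chroma * gain

# Case 1: a flat fade (weather) attenuates everything by 30%.
print(agc(40.0 * 0.7, 70.0 * 0.7, 20.0 * 0.7))  # ~ (40, 70, 20): chroma restored

# Case 2: loss only near the 3.58 MHz subcarrier; sync and luma untouched.
print(agc(40.0, 70.0, 20.0 * 0.7))              # ~ (40, 70, 14): chroma stays low
```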
I was actually not sure if the safe volt thing was exclusively an RF thing, I was starting to lean in that direction. Good to have that cleared up.
OK, this tracks: blurring the chroma is a form of local desaturation. Filtering doesn’t affect saturation in a uniform way.
The difference between 1.3 MHz and 0.6 MHz chroma bandwidth for Q is inconsequential. The difference between a signal with symmetrically filtered UV and one with asymmetrically filtered IQ is nearly imperceptible. In practice, the 0.6 MHz Q filters were not used by the time video game consoles came around. These details don’t affect saturation.
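Here is a rough sketch of that “filtering is local” point (Python/NumPy; the sample rate and Gaussian cutoffs are illustrative, not taken from any shader or standard). With either bandwidth, chroma away from an edge comes through untouched; only the transition region blends toward gray, which is the local desaturation mentioned above.

```python
import numpy as np

# Sketch: low-pass filtering a chroma channel desaturates only where the
# chroma changes. Sample rate and cutoffs are illustrative values.
fs = 13.5e6                        # samples per second along a scanline (assumed)
u = np.zeros(200)
u[100:] = 1.0                      # chroma step: gray on the left, saturated on the right

def gaussian_lpf(x, cutoff_hz, att_db=3.0):
    # Gaussian kernel whose frequency response is att_db down at cutoff_hz.
    sigma_f = cutoff_hz / np.sqrt(att_db / (10 * np.log10(np.e)))  # std dev in Hz
    sigma_t = fs / (2 * np.pi * sigma_f)                           # std dev in samples
    n = np.arange(-50, 51)
    k = np.exp(-0.5 * (n / sigma_t) ** 2)
    k /= k.sum()
    return np.convolve(x, k, mode="same")

for bw in (1.3e6, 0.6e6):
    y = gaussian_lpf(u, bw)
    print(f"{bw / 1e6:.1f} MHz: far-left {y[50]:.3f}, far-right {y[150]:.3f}, at the edge {y[100]:.3f}")
# In both cases the flat areas stay at 0.000 and 1.000; only samples within a
# few pixels of the edge move toward the middle (a purely local desaturation).
```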
Can we determine if TVouttweaks is way off in what it’s doing, then? There’s a very significant difference between the asymmetrical “correct” bandwidth and symmetrical bandwidths (e.g., 54/54 instead of 83/25) as seen in the shader.
Do you have a source for the symmetrical bandwidth? It’s been a point of contention around here, would be great to have this settled.
Are you saying that this is pretty much correct? This thread has been a wild ride.
From the OP
Here’s what I think is happening:
- The saturation loss is caused by bleed. Bleed is reduced or eliminated by notch or comb filtering
- What little bleed remains causes a slight saturation loss in those colors that are bleeding
- TV manufacturers often did “nostalgic” calibrations, which probably caused clipping and other “incorrect” things, but it looked good to the consumer (bright and vibrant)
- The consumer then did whatever they wanted with the knobs
In short, I think a generic “composite video look” really comes down to chroma bleed and artifacting. In guest-advanced-ntsc, we should pay attention to the “NTSC Chroma Scaling / Bleeding” parameters, along with “NTSC Artifacting” and “NTSC Fringing.”
It depends how aggressive the filtering is and how we define the cutoff frequency. In analog electronics, the cutoff frequency is conventionally where the signal is reduced to half power, i.e. -3 dB (half the voltage would be -6 dB). SMPTE, however, specifies the cutoff at 2 dB for some reason, so the effective bandwidth is actually slightly bigger than 1.3 MHz. For the Q filter, SMPTE specifies -6 dB at 0.6 MHz. SMPTE recommends the filters have a Gaussian characteristic. A digital filter can implement a Gaussian essentially perfectly, but in analog the closest would be a Bessel filter, which is economical to implement.
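A quick back-of-the-envelope check of that, assuming an ideal Gaussian characteristic pinned at 2 dB down at 1.3 MHz (so the numbers are only as good as that assumption):

```python
import math

# For a Gaussian response, attenuation in dB grows with frequency squared:
# A(f) = c * f^2, so fixing one (frequency, dB) point fixes the whole curve.
c = 2.0 / 1.3 ** 2                 # 2 dB of attenuation at 1.3 MHz

def f_at(db):                      # frequency (MHz) where attenuation reaches `db`
    return math.sqrt(db / c)

print(f"-3 dB point: {f_at(3):.2f} MHz")   # ~1.59 MHz, i.e. wider than 1.3 MHz
print(f"-6 dB point: {f_at(6):.2f} MHz")   # ~2.25 MHz
```

So by the usual -3 dB convention, the I bandwidth comes out around 1.6 MHz rather than 1.3 MHz.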
I think it’s important to consider the history here. The exact phases for I and Q were originally chosen because the eye was found to be less sensitive to color changes along the Q axis than along the I axis, so different bandwidths could be used, and it was established that the Q bandwidth would be narrower. While this was a benefit for broadcasters, it made the decoding circuitry more complicated. RCA developed YPbPr (it seems to have developed completely separately from YUV, despite being almost the same) as a more affordable way to decode YIQ signals, and it used symmetric filtering. The phase difference between YPbPr and YIQ is inconsequential because the color is based on phase difference, not absolute phase. Thus the development of YPbPr from YIQ rests on the basis that the bandlimited Q is more or less visually equivalent to the less bandlimited Pb. Most TVs ended up using the YPbPr system, but a few TVs did have true YIQ decoding.
The standard is SMPTE 170M-2004. The standard states encoding as either YPbPr or YIQ is acceptable. You can find a copy from ITU in the BT.1700 package. https://www.itu.int/rec/R-REC-BT.1700-0-200502-I/en
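To illustrate the phase-difference point: the commonly quoted relation puts the I and Q axes at a 33° offset from U and V (I = -U·sin 33° + V·cos 33°, Q = U·cos 33° + V·sin 33°). Re-describing the same chroma vector on those rotated axes changes neither its length (saturation) nor the relative phase a decoder measures against the burst; only the fixed reference offset changes. A tiny sketch:

```python
import math

A = math.radians(33)

def uv_to_iq(u, v):
    # Commonly quoted UV -> IQ axis relation (33 degree offset).
    return (-u * math.sin(A) + v * math.cos(A),
             u * math.cos(A) + v * math.sin(A))

u, v = 0.30, 0.10
i, q = uv_to_iq(u, v)
print("saturation on (U, V):", round(math.hypot(u, v), 4))  # 0.3162
print("saturation on (I, Q):", round(math.hypot(i, q), 4))  # 0.3162 -- same vector, rotated axes
```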
Yeah, pretty much correct.
EDIT: Another thing I should mention, from the perspective of ‘artist’s intent’, is that S-Video was very common in Japan, nearly standard. Japanese developers certainly had S-Video in mind by the time of the 4th gen.
Yeah, the dithering stuff is mostly a Sega thing, plus the Turbografx-16, PSX, and N64. It’s just not widely seen on other systems. You can find some examples on SNES for sure, but the system could do real transparencies and display a lot of colors, so there really isn’t a need for dithering. Examples on NES are few and far between as well.
And let’s not forget the fabled Toys R Us SNES kiosk, which featured a razor-sharp RGB monitor. I was pissed I couldn’t get that quality at home.
That is just the video output; the color coding and black levels produced at the CRT’s video input are also being ignored.
It is not an attribute in itself. The System M / SMPTE-C gamut is about 71% of the original NTSC gamut, so when colors are remapped with gamut calibration they no longer reach 100%. Compensation is needed in the receiver (TV), with added saturation (I suppose), to get back to 100%.
This isn’t quite true. Filtering can affect the saturation in fine detail because it is done in gamma space rather than linear color. That won’t affect something like the color of the sky in Super Mario Bros., but it is noticeable in some scenes. At the very beginning of Mega Man X, for example, try adjusting the bandwidth and watching the red lights on the overpass.
I might have time to come up with a mathematical example tomorrow, but for a quick demo you can try blurring an image in gamma space.
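In the meantime, here is a minimal one-pixel-pair version of that demo (gamma 2.2 is just an assumed example value):

```python
# Average a bright and a dark sample (a hard edge) with a 2-tap box blur,
# once directly on the gamma-encoded values and once in linear light.
GAMMA = 2.2  # assumed display gamma, for illustration only

def to_linear(v):
    return v ** GAMMA

def to_gamma(v):
    return v ** (1.0 / GAMMA)

bright, dark = 1.0, 0.1           # gamma-encoded values at an edge

blur_in_gamma = (bright + dark) / 2
blur_in_linear = to_gamma((to_linear(bright) + to_linear(dark)) / 2)

print(f"blurred in gamma space : {blur_in_gamma:.3f}")   # 0.550
print(f"blurred in linear light: {blur_in_linear:.3f}")  # ~0.732 -> gamma-space blur comes out darker
```

Do that per channel on a colored edge and the darkening is not uniform across channels, which is where the hue and saturation shifts in fine detail come from.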
What do we make of this? Is this basically composite video without any TV processing? If this is a good source, it tracks with what we’ve been saying re: desaturation from encoding, with the decoder adding it back (automatic gain control and whatever else), because the finished image as seen on a TV certainly didn’t look this washed out - at least not to my memory.
@beans Thank you for bringing this up. I agree that filtering is distorted when done in gamma space. I’d love to have the math documented. Still, such distortions would be localized and would not affect the picture as a whole, which supports the assumptions in the OP.
@Nesguy The problem I have with that page is that we don’t know what signal processing is done on the capture side. The signal chain needed to show a composite signal vs. an RGB one on a monitor is very different and could introduce the distortions we see. If the capture method is properly terminating the connections and going solely off the voltages, those degradations might exist. Even RGB is not perfectly clean, because we are still working with analog circuitry. We have to do our best to eliminate variables on the display/capture side.
Thanks, I suspected this would be the answer!