This post is prompted by Syh’s question in the CRT showoff thread:
@hunterk Could you please explain the difference between the ntsc 2-phase and 3-phase composite/svideo shaders, like what the visual difference is (could it be a quality difference)? Or is the difference something else?
For our uses, the phase has to do with the horizontal resolution: 3-phase is for 256px-wide images, 2-phase is for 320px-wide images. More on this in a bit.
background:
The analog color TV standards were built to be backward-compatible with the black and white standard–which is just luminance (aka luma) data–by stacking color data on top. The signal is encoded in the YIQ (NTSC) or YUV (PAL) colorspace, with Y representing the luma and IQ/UV representing the color data channels (aka chroma).
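For reference, the RGB-to-YIQ step is just a weighted sum; here’s a rough sketch in Python (the exact coefficients vary a bit between sources, so treat these as ballpark NTSC values):

```python
# Rough RGB -> YIQ conversion using ballpark NTSC coefficients.
# Y is what a black-and-white set displays; I and Q carry the color.
def rgb_to_yiq(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.275 * g - 0.321 * b
    q = 0.212 * r - 0.523 * g + 0.311 * b
    return y, i, q

print(rgb_to_yiq(1.0, 0.0, 0.0))  # pure red -> (~0.299, ~0.596, ~0.212)
```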
relevant bit:
Svideo is often referred to as Y/C because it runs the Y (luma) signal on a separate line from the C (chroma, i.e. the IQ data) signal, while composite is a … wait for it… composite of the 2 signals running over the same line.
The luma channel and the chroma subcarrier are mostly separate/separable (the “phase” has to do with how the chroma is encoded onto that subcarrier), but their frequency ranges overlap slightly, and in a composite signal the luma can bleed into the chroma through what’s known as “chroma/luma crosstalk”, which is what makes the characteristic color rainbows and “artifact colors”. (H and V sync pulses are in the signal, as well, but they don’t cause any visible issues.)
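To make the “same line” idea concrete, here’s a toy sketch of how the chroma gets stacked on top of the luma (this is just the general quadrature-modulation scheme, not how any particular shader implements it):

```python
import math

FSC = 3.579545e6  # NTSC color subcarrier frequency, Hz

def composite_sample(y, i, q, t):
    """One toy composite sample: luma plus chroma modulated in quadrature
    onto the color subcarrier. A real signal also carries sync, burst, etc."""
    phase = 2 * math.pi * FSC * t
    return y + i * math.cos(phase) + q * math.sin(phase)
```

The decoder has to pull those back apart by frequency (typically with a notch or comb filter around the subcarrier), so any fine luma detail that lands near 3.58 MHz can get decoded as if it were chroma, and that’s where the crosstalk comes from.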
There are also some other artifacts common to NTSC composite video, like the sawtooth and dot crawl effects, that should be lessened or gone entirely with svideo.
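As for why 256px lines up with 3-phase and 320px with 2-phase: the color subcarrier sits at a fixed ~3.58 MHz while the console’s pixel clock depends on how wide its output is, so the subcarrier phase advances by a different amount from one pixel to the next. Here’s a back-of-the-envelope sketch; the ~5.37 MHz and ~6.71 MHz pixel clocks are my assumed values for typical 256- and 320-wide consoles, not something pulled from the shaders themselves:

```python
FSC = 3.579545e6  # NTSC color subcarrier, Hz

# Assumed pixel clocks: ~5.37 MHz for 256-wide output, ~6.71 MHz for 320-wide.
for label, pixel_clock in [("256px-wide", 5.369318e6), ("320px-wide", 6.711647e6)]:
    cycles_per_pixel = FSC / pixel_clock
    print(f"{label}: chroma phase advances ~{360 * cycles_per_pixel:.0f} degrees per pixel")

# 256px: ~240 degrees/pixel, so the phase pattern repeats every 3 pixels ("3-phase")
# 320px: ~192 degrees/pixel, so adjacent pixels sit roughly opposite in phase ("2-phase")
```

That per-pixel phase step is what the 2-phase and 3-phase shader variants are approximating, which is why the right one to use depends on the horizontal resolution the core puts out.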