Were composite video colors that bad, in practice?

I was curious, so I plugged my PS2 into the AEG TV and did an NTSC/PAL and RGB/composite check.

  • Tint control also works under RGB (what?)
  • Color difference between PAL and NTSC, but also when switching from 50 Hz RGB to 60 Hz. PAL looks a little more muted.
  • Composite and RGB look pretty similar.

I don’t need to test my other CRT TV to know that the differences are even larger (between PAL and NTSC anyway).

I guess the next thing is testing LCDs and searching for my USB grabber. Looking through my captures, I did S-Video versus composite, but not directly PAL versus NTSC :unamused:

Posted here (bottom 3 pics).

2 Likes

Maybe you didn't use the RGB part of SCART and were actually using composite?

Maybe PAL-60 was treated as if it were PAL-M? In any case, I think the amount of field flicker at 50 Hz vs. 60 Hz also affects how the eye perceives colors.

That's why it's better to make sure the console and TV are in good shape and the TV is calibrated, with no mods and only original cables. The TV, console, and games should be from the same region, and it's also better if the TV is from the same era as the console, or a bit older.

Lol, no, I did pay attention; it works both with RGB and composite, as well as with PAL and NTSC. Maybe it's the result of this being an end-of-era (2006-2008) cheap CRT.

Idk, I still kinda think whatever significant color changes we see between composite and RGB are caused by the decoding process.

Themaister NTSC / guest-NTSC etc. is basically very idealized composite video, very close to the “near-perfect” examples I’ve posted. It’s doing all of the steps: conversion to YIQ, then decoding, and so on. What it isn’t doing is the analogue, hardware-specific decoding: specific notch filters and comb filters. I think that’s where we’re getting the big color changes, if anywhere.
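For concreteness, the hardware-specific part is the Y/C separation itself. A sketch of the comb approach, assuming line-rate samples of the composite signal (combSeparate is just an illustrative helper, not a function from any of these shaders):

// NTSC chroma inverts phase on adjacent lines, so a two-line comb can split
// the signal: summing cancels chroma (leaving Y), differencing cancels luma
// (leaving modulated C). Notch decoders instead band-stop the region around
// ~3.58 MHz out of luma, which blurs detail and leaves more crosstalk; this
// is where results get hardware-specific.
void combSeparate(float thisLine, float prevLine, out float luma, out float chroma) {
    luma   = 0.5 * (thisLine + prevLine);
    chroma = 0.5 * (thisLine - prevLine);
}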

This example makes a lot of sense to me. Colors are the same. There’s no comb or notch filtering involved. It’s just a sharpness difference.

Comparison of RGB and composite. RGB is fed from my PC via a second PCI-E Radeon 6450 and a VGA cable; composite via an RPi 2W and Lakka. Brightness has gone down in general, and a bit of saturation too, but overall composite image quality is excellent. Both used fceumm on RetroArch. There's probably just a tiny detail loss due to artifacts and minimal chroma bleed, at least on an RPi 2W.

3 Likes

What kind of TV?

You made sure to select NTSC composite and not PAL composite? sdtv_mode = 0
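(For reference, the legacy Raspberry Pi firmware selects the composite standard in config.txt; these are the values as I remember them from the docs, so double-check:)

# /boot/config.txt, legacy firmware composite output
sdtv_mode=0    # 0 = NTSC, 1 = NTSC-J, 2 = PAL, 3 = PAL-M (Brazil)
sdtv_aspect=1  # 1 = 4:3, 2 = 14:9, 3 = 16:9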

The example just posted by Jamirus basically proved that the conversion from RGB to YUV does not need to entail any loss of brightness or color. It must be something happening during the decoding step.
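That's easy to sanity-check in a shader: a bare matrix round trip, with no filtering in between, returns the input essentially unchanged. A minimal sketch using the usual Rec.601 YUV coefficients (GLSL matrices are column-major, so these are multiplied with the vector on the left):

// RGB -> YUV and back (Rec.601, analog U/V scaling). With nothing between
// the two conversions this is numerically an identity: no brightness or
// color is lost in the colorspace change itself.
const mat3 RGB_TO_YUV = mat3(
    0.299,    0.587,    0.114,     // Y
   -0.14713, -0.28886,  0.436,     // U
    0.615,   -0.51499, -0.10001);  // V

const mat3 YUV_TO_RGB = mat3(
    1.0,  0.0,      1.13983,   // R
    1.0, -0.39465, -0.58060,   // G
    1.0,  2.03211,  0.0);      // B

vec3 roundtrip(vec3 rgb) { return (rgb * RGB_TO_YUV) * YUV_TO_RGB; }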

That’s NTSC composite. But it’s on two different systems, OSes, etc. In general, I could live with composite if I had to.

1 Like

I think there’s too much going on between those two setups.

What happens if you change to limited range for RGB? A change this drastic points to a setup issue, I think.
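(To illustrate: if one device outputs limited/studio range and the display expects full range, the whole image gets darker and flatter, which matches the symptoms. A sketch of the rescale, assuming 8-bit-style normalized values:)

// Expand limited/studio range (16-235, normalized) to full range (0-255).
// Applying this to a source that is already full range would instead crush
// blacks and clip whites, so it has to match the actual signal.
vec3 limitedToFull(vec3 c) {
    return clamp((c - 16.0 / 255.0) * (255.0 / 219.0), 0.0, 1.0);
}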

I don't mind idealized composite video, but it would be nice if we had something that acts like the real hardware (just in case, and for preservation).

Anyway, IIRC this https://github.com/happycube/ld-decode can act like real composite video hardware.

Edit: that one only decodes; to encode, I think this https://github.com/fsphil/hacktv will do it.

I don't know if there's anything better. There are:

  • https://github.com/LMP88959/NTSC-CRT and https://github.com/LMP88959/PAL-CRT
  • https://github.com/Gumball2415/pally, which has also been used in https://github.com/L-Spiro/BeesNES
  • https://github.com/Slamy/fpga-composite-video

Also, maybe we need some SECAM love: https://github.com/tuorqai/libsecam :slight_smile: (Personally, I have never played a video game over SECAM, but it's nice to have.)

Edit: there's also this: https://github.com/svofski/CRT

BTW, we seem to be forgetting the sub-types of composite video, like PAL-N and NTSC-J, which can make a difference in color and even in brightness.
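NTSC-J is the clearest example: it drops the 7.5 IRE black pedestal ("setup") that US NTSC adds, so the same picture sits at different signal levels. Roughly, as a sketch (real hardware does this in the analog domain):

// US NTSC encodes black at 7.5 IRE above blanking; NTSC-J puts black at 0 IRE.
// Decode a Japanese signal on a US-calibrated set and the image comes out
// brighter (or a US signal on a Japanese set comes out darker).
float lumaWithSetup(float y) { return 0.075 + y * 0.925; } // US NTSC
float lumaNoSetup(float y)   { return y; }                 // NTSC-J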

1 Like

The NTSC video output has color clipping, which is compensated for by saturation in the TV modulator.
When I write about this, I am trying to clarify some doubts and terms.

2 Likes

Well, it seems there are two types of official PAL (aside from regional types like N and M): https://youtu.be/A8wNBBv_ttk?t=1067 (saturation loss at 18:50). That's also why PAL uses 4:2:0 chroma subsampling (in DV, not MPEG-2).

(NTSC DV is 4:1:1, but both can be recorded as 4:2:2 in the digital world.)

I think PAL-D is not a signal type but rather a way to decode? The signal is the same?
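That's my understanding too: PAL-D ("delay line" PAL) is a decoder technique, not a different signal. Since V flips sign every line, averaging the demodulated chroma of two adjacent lines cancels differential phase errors, turning would-be hue shifts into a slight desaturation. A sketch of just the averaging step (the helper name is illustrative):

// PAL-D decoding: average demodulated U/V across two adjacent lines (the
// classic glass delay line). Phase errors rotate hue in opposite directions
// on alternate lines, so the average keeps the hue and only loses a little
// saturation; "simple PAL" decoders skip this and show Hanover bars instead.
vec2 palDelayLineDecode(vec2 uvThisLine, vec2 uvPrevLine) {
    return 0.5 * (uvThisLine + uvPrevLine);
}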

Also, regarding “NTSC uses equal bandwidth for both I and Q”: isn't that on the receiver (consumer TV) side, not in the signal? https://youtu.be/CBFlhj2UMEk?t=1343

1 Like

Also, regarding “NTSC uses equal bandwidth for both I and Q”: isn't that on the receiver (consumer TV) side, not in the signal?

I’ve looked at a few data sheets for NTSC decoders and they all describe equiband decoding. They’d need extra components (a separate delay line, for example) to do decoding with different bandwidths for I and Q.
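In shader terms, equiband decoding just means demodulating with sin/cos and low-passing both products with the same filter; a wideband-I decoder would give I a wider filter plus a delay to realign it with Q. A sketch of the demodulation step (the factor of 2 recovers the full amplitude after low-passing):

// Quadrature demodulation of the chroma signal. In an equiband decoder,
// the I and Q products then go through the SAME low-pass kernel (~0.5 MHz);
// a wideband-I decoder would use ~1.3 MHz for I and ~0.4 MHz for Q, plus a
// delay line to keep the two channels time-aligned.
float demodI(float chromaSample, float phase) { return 2.0 * chromaSample * cos(phase); }
float demodQ(float chromaSample, float phase) { return 2.0 * chromaSample * sin(phase); }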

1 Like

What about the encoders? The encoded signal that comes from consoles, for example?

OK, I pretty much fully understand what's happening now.

Gemini:

“NTSC composite video output can suffer from color clipping when saturation levels are too high, causing the combined luma and chroma signal to exceed the legal voltage range. To avoid this, the video encoder compensates by reducing saturation, ensuring the signal stays within limits.”
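In signal terms the clipping concern is simple to state: the composite peak is luma plus the chroma envelope, and fully saturated colors push that past the legal ceiling unless the encoder backs the chroma off. A sketch of that compensation (normalized units; the ceiling value is an assumption):

// Composite peaks at Y + chroma amplitude. If that would exceed the legal
// ceiling (~ +120 IRE, here normalized so 1.0 = 100 IRE), scale chroma down
// instead of hard-clipping; that scaling is the saturation loss described
// above.
float safeChromaGain(float Y, float chromaAmp) {
    const float CEILING = 1.2; // assumed legal peak, normalized
    if (chromaAmp <= 0.0 || Y + chromaAmp <= CEILING) return 1.0;
    return max((CEILING - Y) / chromaAmp, 0.0);
}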

During decoding:

many NTSC decoders (especially in TVs) apply a uniform chroma gain, often called chroma AGC (Automatic Gain Control) or color boost, to ensure the resulting image has reasonable saturation across the board.

But, and this is key, this is not a restoration of lost saturation; it's just a fixed or adaptive gain applied to whatever chroma signal is present.

So, to be “correct” in our composite video shaders, we should do the YIQ conversion (TVout Tweaks or GTUv50 are the older options for this), give equal bandwidth to I and Q (per @beans), and then just crank up saturation. It’s basically that simple; see the sketch below. We can do this in a more hardware-specific way by studying the actual encoders and decoders, but this is the basic process.
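A minimal sketch of that recipe as a single pass, in the same slang-shader style as the code later in this thread (uSaturation is a hypothetical user knob, and the tiny blur stands in for a proper equiband low-pass):

#define uSaturation 1.2 // hypothetical saturation knob; tune to taste

vec3 toYIQ(vec3 rgb) {
    return vec3(dot(rgb, vec3(0.299, 0.587, 0.114)),
                dot(rgb, vec3(0.5959, -0.2746, -0.3213)),
                dot(rgb, vec3(0.2115, -0.5227, 0.3112)));
}

vec3 toRGB(vec3 yiq) {
    return vec3(dot(yiq, vec3(1.0, 0.956, 0.619)),
                dot(yiq, vec3(1.0, -0.272, -0.647)),
                dot(yiq, vec3(1.0, -1.106, 1.703)));
}

void main() {
    // Equal bandwidth for I and Q: one shared horizontal kernel on chroma only.
    // (A real pass would size a proper FIR to the target bandwidth; this is a stand-in.)
    vec2 px = vec2(1.0 / SourceSize.x, 0.0);
    vec3 c  = toYIQ(texture(Source, vTexCoord).rgb);
    vec3 l  = toYIQ(texture(Source, vTexCoord - px).rgb);
    vec3 r  = toYIQ(texture(Source, vTexCoord + px).rgb);
    vec2 iq = 0.25 * l.yz + 0.5 * c.yz + 0.25 * r.yz; // same kernel for both
    FragColor = vec4(toRGB(vec3(c.x, iq * uSaturation)), 1.0);
}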

1 Like

This is the chip the SNES used, probably rebranded. It takes RGB and converts it to S-Video/composite:

https://wiki.console5.com/tw/images/e/e6/BA6592F.pdf

2 Likes

Great work! I hope this topic continues to serve as a catch-all for such finds.

1 Like

If we could translate the Japanese, it could be useful, heh.

1 Like

Rough translation via AI; here's what it's doing and the result:

// Input: either a texture with RGB or you can feed color another way
#define uPixelClock 13.5e6    // pixel clock in Hz (e.g. 13.5e6 typical for SD sampling)
#define uSubcarrierHz 3.57e6  // chroma subcarrier freq (NTSC = 3579545.0, PAL = 4433618.0)
#define uModePAL 0.0         // 0 = NTSC, 1 = PAL (pal alternation enabled)
#define uChromaGain 1.0    // scale factor for chroma carrier amplitude
#define uDeltaRdeg 1.0     // ΔR in degrees (datasheet typical ~ +1°)
#define uDeltaBdeg 4.0     // ΔB in degrees (datasheet typical ~ +4°)
#define uBurstPhaseDeg 180.0 // burst phase relative to R-Y (deg). Datasheet implies ~180°
#define uPixelsPerLine SourceSize.x // visible pixels per line (e.g. 720)

// coefficients: Rec.601-like luminance for simple RGB->Y
const vec3 Ycoeff = vec3(0.299, 0.587, 0.114);

// Helper
float deg2rad(float d) { return d * 3.14159265358979323846 / 180.0; }

void main() {
    // Read the RGB input
    vec3 rgb = texture(Source, vTexCoord).rgb;

    // Compute luminance and color-difference signals
    float Y  = dot(rgb, Ycoeff);      // 0..1
    float RY = rgb.r - Y;             // R - Y
    float BY = rgb.b - Y;             // B - Y

    // Convert ΔR/ΔB and burst phase to radians
    float deltaR = deg2rad(uDeltaRdeg);
    float deltaB = deg2rad(uDeltaBdeg);
    float burstPhase = deg2rad(uBurstPhaseDeg);

    // Calculate pixel time index t_pixel
    // We compute an approximate continuous time value per pixel:
    // t_pixel = (lineIndex * pixelsPerLine + pixelIndex) / pixelClock
    // For simplicity we use pixel coordinates to derive these values.
  
    float pixelIndex = floor(vTexCoord.y*SourceSize.y)*SourceSize.x + vTexCoord.x*SourceSize.x;
    //float t_pixel = pixelIndex / uPixelClock;

    // Subcarrier phase
    float phase = 2.0 * 3.14159265358979323846 * pixelIndex * uSubcarrierHz / uPixelClock;

    // PAL alternation: flip one carrier axis 180° every line
    if (uModePAL > 0.5) {  // robust float comparison instead of == against an int
        // flip sign every other line (equivalent to add PI)
        if (mod(floor(vTexCoord.y*SourceSize.y), 2.0) > 0.5) {
            phase += 3.14159265358979323846; // add pi on odd lines
        }
    }

    // Apply small datasheet phase errors (ΔR, ΔB). The datasheet defines ΔR = eR - 90°,
    // but here we allow direct tuning offsets (additive).
    float phaseR = phase + deltaR;  // used with sin()
    float phaseB = phase + deltaB;  // used with cos()

    // Build modulated chroma (carrier)
    // We use sin for R-Y and cos for B-Y to represent quadrature (90° separation)
    float chroma = (RY * sin(phaseR) + BY * cos(phaseB)) * uChromaGain;

    // Compose outputs:
    // YO: luminance + sync (we don't generate sync here; if needed add negative pulses)
    // CO: chroma (modulated) - real hardware filters & levels would differ; normalize
    // VO: composite = Y + chroma (clip to 0..1)
    float composite = clamp(Y + chroma, 0.0, 1.0);

    // Optionally: overlay the burst reference (for debugging), gate it during burst region
    // (Not implemented: you'd need horizontal timing + burst window timings.)

    // Output as grayscale visualization of composite on all channels
    FragColor = vec4(vec3(composite), 1.0);
}

I had to correct some dumb errors from the AI, like using gl_FragCoord for the pixel index, but otherwise it did a pretty amazing job translating this to a shader.

1 Like