I think there are some misconceptions in this thread.
There is nothing intrinsic in the design of NTSC or PAL video signals that requires a reduction of saturation before encoding. In fact, neither the FCC nor the SMPTE standards for NTSC make any mention of safe colors or anything else regarding desaturation.
For PAL, phase errors in the signal, which are very common, can lead to saturation loss: the delay-line decoder averages the chroma of adjacent lines, which cancels the hue error but reduces the chroma amplitude. On NTSC, the same phase error results in a hue shift, not saturation loss.
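To illustrate the difference, here is a rough sketch of my own (purely illustrative, the function names are made up) of how a constant phase error decodes on each system, assuming an ideal PAL delay-line decoder:

```python
import numpy as np

def decode_ntsc(u, v, phase_err_deg):
    # NTSC: the whole chroma vector is simply rotated by the phase error,
    # so the hue shifts but the magnitude (saturation) is preserved.
    p = np.radians(phase_err_deg)
    return u * np.cos(p) - v * np.sin(p), u * np.sin(p) + v * np.cos(p)

def decode_pal_delay_line(u, v, phase_err_deg):
    # PAL: V is transmitted with alternating sign. After the decoder
    # re-inverts V on alternate lines, a constant phase error appears as
    # +p on one line and -p on the next. The delay line averages the two,
    # cancelling the hue error but shrinking the vector by cos(p).
    p = np.radians(phase_err_deg)
    uA, vA = u * np.cos(p) - v * np.sin(p), u * np.sin(p) + v * np.cos(p)
    uB, vB = u * np.cos(p) + v * np.sin(p), -u * np.sin(p) + v * np.cos(p)
    return (uA + uB) / 2, (vA + vB) / 2

u, v = 0.3, 0.2                                  # arbitrary chroma vector
for name, dec in (("NTSC", decode_ntsc), ("PAL", decode_pal_delay_line)):
    u2, v2 = dec(u, v, 20.0)                     # 20 degree phase error
    print(name,
          "saturation %.3f -> %.3f," % (np.hypot(u, v), np.hypot(u2, v2)),
          "hue %.1f -> %.1f deg" % (np.degrees(np.arctan2(v, u)),
                                    np.degrees(np.arctan2(v2, u2))))
```

With a 20 degree error, NTSC keeps the saturation but rotates the hue by 20 degrees, while PAL keeps the hue and scales the saturation by cos(20°) ≈ 0.94.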
Saturation loss on NTSC would require a band-limited loss of amplitude near the color subcarrier, because the AGC normalizes any reduction in amplitude that affects the entire signal (due to weather, for example).
A direct composite connection would not introduce any significant degradation of this kind.
Aggressive notch or comb filtering does not affect saturation either. These filters only reduce the resolution, or visual sharpness, of the chroma, a loss that is hard for us to perceive when the luminance detail is intact anyway.
Whether the Q channel is filtered to 1.3 MHz or 0.6 MHz is likewise inconsequential: a signal with symmetrically filtered U/V is nearly indistinguishable from one with asymmetrically filtered I/Q. In practice, the 0.6 MHz Q filters were no longer in use by the time video game consoles came around. None of these details affect saturation.
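As a quick illustration of why band-limiting the chroma leaves saturation alone, here is a toy scanline example of my own (the moving average is just a crude stand-in for a real notch/comb or Q filter):

```python
import numpy as np

def lowpass(x, taps=15):
    # crude moving-average low-pass, standing in for chroma band-limiting
    return np.convolve(x, np.ones(taps) / taps, mode="same")

# one scanline: left half one flat color, right half another
n = 256
u = np.where(np.arange(n) < n // 2, 0.30, -0.20)
v = np.where(np.arange(n) < n // 2, 0.15, 0.25)

u_f, v_f = lowpass(u), lowpass(v)
sat, sat_f = np.hypot(u, v), np.hypot(u_f, v_f)

# saturation deep inside each flat region is unchanged...
print(sat[64], sat_f[64], sat[192], sat_f[192])
# ...only the transition around the color edge is smeared
print(sat_f[n // 2 - 8 : n // 2 + 8].round(3))
```

The flat areas come out with exactly the saturation they went in with; all the filter does is smear the color edge over a few pixels.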
There is overwhelming evidence that the palettes commonly used in emulators for systems like the NES, Commodore 64, and Apple II correlate with what those systems look like on broadcast monitors and with what they theoretically should look like based on the standards. FirebrandX’s NES palettes, for example, show only minor variance between measurements of the NES composite output, his PVM monitor, and Nintendo’s official NES Classic palette.
In my own experience with multiple CRT displays, when adjusted to a reasonable calibration, the colors are not significantly different from today’s monitors.
One thing I will mention is that a great deal of digitization and innovation in the 80s greatly changed how TVs operated. Broadcasters continued to follow a great deal of tribal knowledge to ensure their broadcasts would look adequate on older TVs. Old TVs were very sensitive to ‘hot’ colors, so broadcasters filtered out ‘unsafe’ colors to avoid problems on those sets (and to avoid disrupting the audio subcarrier, which is irrelevant for a direct composite connection). This practice carried on into the DVD era, where masterers continued to filter out unsafe colors even though by that time there was almost no risk of disrupting anyone’s viewing experience.
Phosphors also changed rapidly through the 80s and by the 1990s the phosphors used in TVs were completely different from the ones before. They were more responsive and clear, but needed to be ‘juiced’ to achieve the same color gamut. These formulations were not consistent in the beginning, and cheaper sets could have poor color gamut, leading to some cheap tricks like boosting red to make up for it. Poor circuitry or tuning, or aging, could lead to color bleeding and localized saturation loss, but not significant color changes.
For NTSC, a bad hue setting was much more common than saturation issues. PAL, on the other hand, did have issues with saturation when phase shift occurred. However, with a direct connection there is no significant phase error.
Finally, even if some part of the signal chain degrades the saturation of an RGB image when it is converted to a composite signal, the user of the TV is expected to compensate for it with the set’s controls. That is, after all, one of the main purposes of color bars. So in my opinion, an NTSC-type shader should generally not degrade the saturation of its input, and a PAL-type shader should only do so if it is simulating degradation. An NTSC shader could, however, offer a Color control to simulate that setting on real TVs, but the default setting should not change the base saturation.
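For what it’s worth, this is the kind of Color control I have in mind, sketched as a hypothetical example (not taken from any particular shader): scale the chroma in YIQ space, with a default of 1.0 that leaves the input untouched.

```python
import numpy as np

# Standard FCC RGB -> YIQ matrix; the inverse is computed rather than typed.
RGB_TO_YIQ = np.array([[0.299,  0.587,  0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523,  0.312]])
YIQ_TO_RGB = np.linalg.inv(RGB_TO_YIQ)

def apply_color_control(rgb, color=1.0):
    """rgb: (..., 3) array in [0, 1]; color: the TV's Color knob."""
    yiq = rgb @ RGB_TO_YIQ.T
    yiq[..., 1:] *= color          # scale I and Q only; Y is untouched
    return np.clip(yiq @ YIQ_TO_RGB.T, 0.0, 1.0)

pixel = np.array([[0.8, 0.2, 0.3]])
print(apply_color_control(pixel))        # default: same as the input (within rounding)
print(apply_color_control(pixel, 0.7))   # turned-down Color: desaturated
```

The point is simply that the control is there for the user, like on a real set, and the neutral position passes the colors through unchanged.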