Does anyone know? I wasn’t able to find any information on this. Is this supposed to be raw composite, no notch filter or comb filter?
Without checking the code, I think it just converts to YIQ in “composite mode” and then blurs each channel separately; there’s no real composite modulation/demodulation.
Sorry, I should have been more specific. I mean, what do the default values for YIQ mean? What is the baseline? Are we seeing what composite looks like without a notch filter? Then for “notch filter” we’d increase to around
Y 780
I 250
Q 25
For “comb filter”, max them all out? (Theoretically, a comb filter can use the full signal.)
Or are the defaults just kind of arbitrary and eyeballed?
There’s no filtering involved, comb or notch, since it’s not actually doing any modulation/demodulation, like DariusG mentioned. It’s just converting to the YIQ colorspace and then applying the simulated bandwidth blurring to each channel according to the amount of bandwidth it would have had in a normal transmission.
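For anyone curious, here’s a minimal sketch of that pipeline in Python, assuming a standard FCC RGB-to-YIQ matrix and using a Gaussian as a stand-in for whatever kernel GTU actually uses (the function name and sigma values are placeholders, not from the shader):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Standard FCC NTSC RGB -> YIQ matrix (GTU's exact constants may differ).
RGB_TO_YIQ = np.array([
    [0.299,  0.587,  0.114],
    [0.596, -0.274, -0.322],
    [0.211, -0.523,  0.312],
])

def yiq_bandwidth_blur(rgb, sigmas=(0.5, 2.0, 4.0)):
    """Convert to YIQ, blur each channel horizontally by a different
    amount (wider blur = less bandwidth), then convert back.
    rgb: float array, shape (height, width, 3), values in [0, 1].
    sigmas: per-channel blur in pixels; these are illustrative only.
    """
    yiq = rgb @ RGB_TO_YIQ.T
    for channel, sigma in enumerate(sigmas):
        yiq[..., channel] = gaussian_filter1d(yiq[..., channel], sigma, axis=1)
    rgb_out = yiq @ np.linalg.inv(RGB_TO_YIQ).T
    return np.clip(rgb_out, 0.0, 1.0)
```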
Sorry, I get that. I was just trying to figure out whether the default values mean something like “max composite, the full signal bandwidth is used.” I didn’t explain it very well. But I answered my own question: the default values represent the full signal bandwidth.
Bandwidth Allocation
- Y (Luminance/Brightness): 4.2 MHz
- I (Orange-Cyan Color): Approximately 1.3 MHz
- Q (Green-Purple Color): Approximately 0.4 to 0.6 MHz
Which corresponds to the defaults of
Y 256
I 83
Q 25
So, that question’s answered.
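For what it’s worth, dividing each default by its nominal bandwidth gives roughly the same scale factor, about 61 to 64 parameter units per MHz, which fits the idea that the defaults directly encode those bandwidths. A quick back-of-envelope check (mine, not from the shader source):

```python
# Quick sanity check: default parameter value / nominal bandwidth in MHz.
defaults = {"Y": 256, "I": 83, "Q": 25}
bandwidth_mhz = {"Y": 4.2, "I": 1.3, "Q": 0.4}  # low end of Q's 0.4-0.6 range

for channel in defaults:
    scale = defaults[channel] / bandwidth_mhz[channel]
    print(f"{channel}: {scale:.1f} units per MHz")
# Y: 61.0, I: 63.8, Q: 62.5 -- close to one common scale factor
```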
My next question is: why are the ranges so wide if this is already the full signal bandwidth for composite?
I guess it’s so we can emulate S-video and component video?
We can also now do a notch filter adjustment to GTU (sort of) with the following (arithmetic sketched below):
Y: 195 (you lose about 1 MHz to the notch)
I: 59 (the default assumes full comb-filter bandwidth; a comb filter can recover about 40% more signal information than a notch)
Q: 25 (about the same between comb and notch)
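In case anyone wants to tweak these, here’s the arithmetic behind those numbers (my own estimates, so the exact figures are soft):

```python
# How the notch-filter numbers above were derived (estimates, not measured).
y_default, i_default, q_default = 256, 83, 25

# A notch filter costs roughly 1 MHz out of the 4.2 MHz luma bandwidth:
y_notch = round(y_default * (4.2 - 1.0) / 4.2)  # -> 195

# A comb filter recovers ~40% more chroma than a notch, so divide back out:
i_notch = round(i_default / 1.4)                # -> 59

# Q is narrow enough that notch vs. comb barely matters:
q_notch = q_default                             # -> 25
```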
Edit: eh, didn’t really pan out. Probably no way around just emulating the whole thing like Plain Old Pants, beans, etc. are doing.
One issue, if I remember correctly, is that these numbers will have to change whenever the input resolution changes. This is kind of related to the questions @Jobima was asking in the crt-beans thread, and the reason I use MHz instead of pixels for the low pass filter (the blur). So you need some range in the parameters just for dealing with different input resolutions.
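To make the resolution dependence concrete, here’s a rough sketch of the MHz-to-pixels conversion, assuming NTSC’s ~52.6 µs active line time (the constant and function are illustrative, not taken from GTU or crt-beans):

```python
# Rough MHz -> cycles-per-pixel conversion. Assumes NTSC's ~52.6 us
# active line time; illustrative only, not from GTU or crt-beans.
NTSC_ACTIVE_LINE_US = 52.6

def bandwidth_to_cycles_per_pixel(bandwidth_mhz, input_width_px):
    """The same bandwidth in MHz maps to very different pixel-space
    cutoffs depending on the horizontal input resolution."""
    cycles_per_line = bandwidth_mhz * NTSC_ACTIVE_LINE_US  # MHz * us = cycles
    return cycles_per_line / input_width_px

# 4.2 MHz luma is ~221 cycles/line: 0.86 cycles/px at 256px input width,
# but only 0.35 cycles/px at 640px, so a fixed pixel-space blur won't work.
print(bandwidth_to_cycles_per_pixel(4.2, 256))  # ~0.86
print(bandwidth_to_cycles_per_pixel(4.2, 640))  # ~0.35
```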
Also, I’d suggest keeping the I and Q bandwidths the same. The standard gives more bandwidth to I, but I didn’t think anyone actually did it that way. To be really picky, something like YUV was used instead of YIQ (again, the standard was disregarded), but YIQ works well enough there.
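For reference, YIQ is just YUV with the chroma plane rotated by 33°, which is also why equal I/Q bandwidths behave the same as equal U/V bandwidths: a rotation commutes with a blur applied equally to both channels. A small sketch of the standard relationship:

```python
import numpy as np

# Standard NTSC relationship: I/Q are the V/U chroma axes rotated by
# 33 degrees. If both chroma channels get the same blur, the rotation
# commutes with it, so equal-bandwidth YIQ matches equal-bandwidth YUV.
theta = np.radians(33.0)
ROT = np.array([
    [np.cos(theta), -np.sin(theta)],
    [np.sin(theta),  np.cos(theta)],
])

def uv_to_iq(u, v):
    """I = V*cos(33) - U*sin(33), Q = V*sin(33) + U*cos(33)."""
    i, q = ROT @ np.array([v, u])
    return i, q
```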
In any case, I think this approach is great for simulating S-Video (because you didn’t really lose any significant information by modulating and demodulating the chroma, so we can skip that step). You can simulate some of the extra blur in a composite connection by reducing the Y bandwidth, but this approach can’t truly simulate the artifacts of composite.
Aliaspider intended GTU to be sort of a swiss army knife for resampling. When you crank the bandwidth up really high, you can get a sharp sort of pixellate/bandlimit-pixel effect. Turn it down and you get nice, natural-looking blur.
I’m currently stacking it with guest-advanced-NTSC and letting guest-advanced-NTSC handle the artifacting stuff, but yeah it’s kind of a “good enough” solution until more accurate shaders arrive
Just trying to see how close we can get with the existing tools, basically. Learning a few things along the way.
I’m really stuck on the sharpening part: it’s pretty easy to achieve a sharp, Trinitron composite look. It’s more difficult to achieve something that looks like a notch filter. I guess that makes sense, since it’s even further from RGB.