Guys, forget about the term "information loss".
Let’s try to remain constructive. I’m well aware that the output of the emulator is a 480p image and that “unweaving” it yields an odd and an even field of 240p each (or, in the line-doubled case, 960p is unweaved into two 480p fields).
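To make sure we are talking about the same thing, here is a minimal sketch of what I mean by unweaving (plain GLSL with a hypothetical helper name, not code from the shader): each 240p field simply keeps every other scanline of the woven 480p frame.

```glsl
// Hypothetical illustration: pick one 240p field out of a woven 480p frame.
vec3 unweave_field(sampler2D woven480, vec2 uv, float field_id)
{
    // field_id = 0.0 keeps the even scanlines, 1.0 the odd ones
    float src_line = 2.0 * floor(uv.y * 240.0) + field_id;  // scanline index in the 480p frame
    vec2  src_uv   = vec2(uv.x, (src_line + 0.5) / 480.0);  // sample at that scanline's center
    return texture(woven480, src_uv).rgb;
}
```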
When I say “information loss”, I mean that I’m seeing less sharpness in interlaced mode 2.0 than in the progressive output of mode 4.0. This should not happen, as 2.0 is the unweaved version of 4.0. So please forget the term “information loss” and re-read it as the following:
With yesterday’s release, mode 3.0 (out of 5) showed the same information in interlaced mode as the current mode 4.0. However, with the latest mode 2.0 from today’s release, I see instances where the interlaced mode is less sharp than mode 3.0 (out of 5) from yesterday’s release.
So for me this code in linearize-ntsc.slang:
current mode 2.0:

```glsl
if (interm == 2.0) { res = clamp( res1 * max(ii, 0.5) - res1*min(1.0-ii, 0.5), 0.50*min(res1,res2), 1.0.xxx); }
```
is losing some detail in some instances versus the old mode 3.0:
```glsl
if (interm == 3.0) { res = res1 * max(ii, 0.5); }
```
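To make concrete where I think the detail goes, here is my reading of the two lines side by side (a sketch in plain GLSL, with slang’s `1.0.xxx` written as `vec3(1.0)`; `res1`, `res2` and `ii` are the shader’s own variables):

```glsl
// Both blends side by side:
vec3 res_mode3 = res1 * max(ii, 0.5);
vec3 res_mode2 = clamp(res1 * max(ii, 0.5) - res1 * min(1.0 - ii, 0.5),
                       0.5 * min(res1, res2), vec3(1.0));
// Worked case at ii = 0.0 (a pixel on the opposite field's scanline):
//   res_mode3 = 0.5 * res1
//   res_mode2 = clamp(0.5*res1 - 0.5*res1, 0.5*min(res1,res2), vec3(1.0))
//             = 0.5 * min(res1, res2)
// So wherever res2 < res1, mode 2.0 replaces res1's detail with res2's,
// which would explain the softness I'm describing.
```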
Since I cannot capture what I’m seeing (because of the alternating fields), I’m trying to come up with a test case that illustrates it. However, I keep getting the feeling of not being understood, and that this issue will remain in the current mode 2.0. Again, I’m not criticizing; I just would like to end up with a satisfactory outcome for the interlaced mode.
As said, `res = res1 * max(ii, 0.5);` is working very well for me. Maybe I should try to show the difference between the outcome of that and the current mode 2.0?
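Something like this hypothetical debug output (reusing `res_mode2` and `res_mode3` from the sketch above, so purely an illustration, not part of the shader) might make the divergence visible without having to capture alternating fields:

```glsl
// Hypothetical debug view: highlight pixels where the two modes disagree.
vec3 diff = abs(res_mode2 - res_mode3);
FragColor = vec4(10.0 * diff, 1.0);  // amplified, so any lost detail lights up
```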