Adding more needed built-in uniform variables

As said in this post and in this one:

I suggest adding these:

  • “FieldOrder” [0=none (progressive), 1=bottom field first, 2=top field first]
  • “FieldSeparate” [0=frame-based (use FieldOrder to know whether it is progressive or not), 1=it’s the bottom field, 2=it’s the top field]

– both will help a lot in interlaced cases

  • “GameConsole” will help in cases where the resolution alone is not enough to identify the system, e.g. if the console has a special form of composite signal
  • “GameRegion” will help when the game is PAL, or even a special form of PAL like PAL-N, or to distinguish NTSC-J from NTSC-M, etc.
  • “OriginalResolution” will help when the core renders at more than 1x (like 4x or 8x), so the shaders can use it to give correct output and work faster

If this takes an integer value (say the NES gets a value of “9”), then “-9” could be used when the core outputs raw video.

Edit: I forgot one:

  • “OriginalOverscan” will help if RetroArch or the core doesn’t crop the overscan and leaves it to the shader

Edit #2: maybe OriginalOverscan could take a negative value to request pure composite grayscale output (with the colorburst and other signal components), just like the composite/S-Video signal from a real console, and a negative value of GameRegion could request S-Video output instead of composite.

To calculate correct artifact colors we need to reconstruct the dot pattern of the signal. To do this we need to know the exact horizontal frequency, the color subcarrier frequency, and the number of lines and pixels including blanking. Some examples:

NES/SNES: get the framerate from OriginalFPS, multiply by 262 to get the H. freq., multiply by 341 (derived through a parameter) and divide by 1.5 to get the SC freq.

Genesis: subcarrier is exact to standard; get the framerate from OriginalFPS, multiply by 262 to get the H. freq. Guesstimate the nonactive horizontal width because it depends on the mode (usually 100 px for 320 mode).

N64: OriginalFPS is wrong, use a hardcoded value…
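For concreteness, the NES/SNES derivation above works out like this (a Python sketch; the 21.477272 MHz master clock is an assumed value from common hardware documentation, not from this thread):

```python
# NES/SNES (NTSC): derive H. and subcarrier frequencies from OriginalFPS
# as described above. 21477272 Hz is the commonly documented master clock.
master = 21_477_272.0
fps = master / 4 / (341 * 262)   # what OriginalFPS should report, ~60.0985
h_freq = fps * 262               # horizontal frequency, ~15745.8 Hz
sc_freq = h_freq * 341 / 1.5     # subcarrier, ~3579545.3 Hz (= master / 6)
```

Note the 341 × 262 dot grid cancels out, so the subcarrier lands exactly on master / 6, which is how the console actually generates it.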

Even if cores report horizontal frequency or region, it isn’t enough to know the right way to get the correct composite artifacts. I had to implement different modes for different situations because the consoles all work in different ways. In NTSC, sometimes the line count is 262 and sometimes it’s 263 (and for PC Engine it can switch between fields). In PAL it can be 312 or 313.

Probably the most natural values for emulator authors would be the pixel clock frequency and the current number of lines including v. blanking, as I believe emulators need to be aware of both for timing to work accurately. Those two values combined with OriginalFPS can be used to derive a lot of the other timings.
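A sketch of that derivation (hypothetical function name; the NES numbers in the usage note are only there to check the math):

```python
# Hypothetical sketch: derive per-line timings from a pixel clock and a
# total line count (both including blanking), plus OriginalFPS.
def derive_timings(pixel_clock_hz, total_lines, fps):
    dots_per_frame = pixel_clock_hz / fps        # includes blanking
    dots_per_line = dots_per_frame / total_lines
    h_freq_hz = fps * total_lines                # lines per second
    return dots_per_line, h_freq_hz
```

For the NTSC NES (pixel clock 21477272/4 Hz, 262 lines), this recovers the 341 dots per line used in the example above.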


So that means:

  • “HorizontalFrequency”

  • “SubcarrierFrequency”

Maybe the solution is to give the horizontal frequencies as an array, where the array has one horizontal frequency for every line?

Or go full analog style (meaning use horizontal lines just like analog TV), maybe with a negative value of FieldSeparate.

Looking more deeply into this, I came up with a more workable solution:

Most systems use a very stable oscillator for the colorburst generator. This is because the colorburst needs to be very close to standard or the TV won’t treat it as a color signal and will display it in black and white. The Genesis and Mega Drive derive their subcarriers by dividing their master clocks. The result is a little off-standard but not significantly so. PAL-M (Brazil) consoles use a different subcarrier from PAL and NTSC, and it is different enough to cause issues if not set correctly.

Solution: cores can report either their region or the subcarrier frequency directly

Master clock is usually related to the subcarrier on older consoles. However, systems like the PS1 and Neo Geo have separate oscillators. There are three cases I’ve found:

  1. Master clock is a multiple of the subcarrier frequency (SNES)
  2. Master clock is defined, subcarrier is derived by dividing (Genesis)
  3. Master clock is defined, subcarrier is defined or derived independently (Neo Geo)

Solution: if cores express their master clocks, we don’t have to worry about the different cases. But the values have to be very precise.
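A sketch of the first two cases with commonly documented NTSC clock values (the constants are assumptions from public hardware docs, not from this thread):

```python
# NTSC standard subcarrier is defined as 315/88 MHz.
NTSC_SC = 315_000_000.0 / 88          # ~3579545.45 Hz

# Case 1: master clock is a multiple of the subcarrier (SNES: x6).
snes_master = NTSC_SC * 6             # ~21477272.7 Hz

# Case 2: master clock is defined; subcarrier derived by division
# (Genesis: /15). The result is slightly off-standard.
genesis_master = 53_693_175.0
genesis_sc = genesis_master / 15      # 3579545.0 Hz

# Case 3 (Neo Geo): separate oscillators, so both values must be reported.
```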

AFAIK master clock cycles are fixed per line within the same video mode, but the Genesis can have different cycle counts in certain video modes. However, I believe those video modes were not used in games (I could be wrong). The H. frequency is found by dividing the master clock by the cycles per line.

Cores could report the clock cycles per line, but I’m not sure if it is something easy to determine. Cores could also report horizontal frequency directly, but I don’t want to recommend this because I think it could be too easily rounded or guesstimated and that will cause issues. The clock cycles per line is a more stable and workable value.
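For example, with the Genesis’s commonly cited NTSC values (the 3420 cycles/line figure is an assumption here, not from this thread):

```python
# H. frequency from master clock cycles per line (Genesis, NTSC).
genesis_master = 53_693_175.0
cycles_per_line = 3420                      # commonly cited value; assumed
h_freq = genesis_master / cycles_per_line   # ~15699.76 Hz
fps = h_freq / 262                          # ~59.92 Hz for a 262-line frame
```

Note how the precise ratio survives the division; a rounded horizontal frequency reported by a core would not.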

With some fuzzy math we can then figure out the inactive video length in pixels from the core source size and the number of inactive scanlines. The current OriginalFPS parameter can be used and it doesn’t need to be precise.

If OriginalFPS is wrong, we alternatively need to specify the number of lines per field (for NTSC this is usually 262 or 263). We can then correct OriginalFPS.

So in most situations, it seems region, Master Clock rate, Master Clock cycles per line, and either OriginalFPS or number of lines for the current field will be enough to reconstruct composite signal dot pattern (at least in the ideal sense).

Trying to work backwards from V. Freq and number of scanlines to get H. Freq. doesn’t work as well because OriginalFPS is not precise enough. Also that number is often irrational anyway. Going from subcarrier/Master Clock lets us work with precise ratios.

I’m not sure it makes sense to add all of that stuff to every core and then add to RetroArch a way to pass that information to the shader backend vs just having a database of those values in the shader that can be personalized through parameters/presets to match the desired core, like this:

That doesn’t keep up with changing modes on its own, but I think that’s also something that can be handled in console-specific code without talking directly to the core, for the most part.


That’s essentially what I’m doing now, except I’m bypassing the database and just putting the values in as parameters directly (one of the design goals of my shader is to avoid per-system exceptions in the shaders as much as possible). Since we can do the calculations in the vertex shader, it’s not expensive to derive the values we need from the core parameters, of which there are really only four: Subcarrier, Master Clock, Master Clock cycles per line, and OriginalFPS/lines per field.

There are additional controls I needed to add to change how those parameters are defined, but that stuff wouldn’t be necessary if the cores reported those values.

The issues for me are that it has to be maintained across all shaders that want to use that logic, and the potential for video modes changing. I’m not sure how much of an issue that is in practice. For example, I was able to wrangle all the mode possibilities for the SNES and get the right output, even with different source sizing, interlacing, etc. For the Genesis I’m not sure; it depends on whether those uncommon modes were ever used.

If you are curious about the exact way I derive the time bases you can find it here in the compute_timebase function:

https://github.com/anikom15/scanline-classic/blob/dev/shaders/modulation.inc

But you know, I’m very happy with how far I’ve been able to get with this parameter-based system.

Also, I don’t know how important Field Order actually is. I don’t implement it in the timing code. If someone knows the correct values for different systems, I would like to know how it can affect things.


The problem is that even if you know the correct value for the system, it will change based on how you interact with RetroArch, and it’s kind of random. You can test with the PS2 Silent Hill 3 intro and see: you will get interlace aliasing unless you use a deinterlacing shader, and even with a deinterlacing shader you will sometimes still see interlace aliasing because the core is sometimes TFF and sometimes BFF. There is a hack for it: pressing “P” to pause and then “P” again sometimes helps align the field order of both the core and the shader.

You will also need this hack with my https://www.mediafire.com/file/t2hqpgn0vrfh86d/Motion-Adaptive-Deinterlacing-a.rar/file shader, which does interpolation [point sampling (nearest neighbour)] instead of blending. Sometimes it will work and sometimes it won’t, unless you use the [press “P”] hack, and that’s even in PS1 games.

And from my (and beans’) point of view, outputting separate fields would be the best in many cases, see here

having the output field would be ideal, but the other thing you can do is: if there’s a BFF/TFF parameter (easy enough to add if not), as long as you don’t pause the game while in the menu, you can toggle it without any guessing/trial-and-error involved.


And to do that you need to use “F1”, which kind of acts like pressing “P” :), and that will make the hack even harder to get right (the probability will be like 1 in 4 instead of 1 in 2)!


Still kind of a hack, but I think that will work. Also, as I said, the core randomly alternates between TFF and BFF. For now I think [pressing “P” to pause and then “P” again until the field order of both the core and the shader is aligned] is easier, especially since not all shaders have a BFF/TFF parameter.

So if we can’t have the output field itself, then at least we need “FieldOrder” [0=none (progressive), 1=bottom field first, 2=top field first].

I think I know what you mean. When I bring up the menu, the shaders still cycle through as if FrameCount is increasing. To me that seems like a bug.


That’s why I added FrametimeDelta too; it could give better results if averaged through time.

I apologize in advance for this. This post is a bit of a stream-of-consciousness list of issues I’ve encountered that could be solved with more information from the core.

I use the “database” style so that users don’t need to change parameters for each new game (or change parameters when a game changes modes, in some cases). They just need to pick the system. There are still some issues, though.

For one thing, you need at least three values to derive the chroma phase for a given position:

  1. Chroma subcarrier cycles per dot (chroma subcarrier clock / dot clock).
  2. Chroma subcarrier cycles per line.
  3. Number of lines per frame/field.

(Or some variation of those. For example, you can use the dot clock frequency, hsync frequency, and vsync frequency. Or you can use the active line time, total line time, and frame time. In the end, you need to know information about the position in the line, the line, and the frame/field.)
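A sketch of what those three values buy you (Python for illustration; the function name and signature are made up):

```python
import math

# Hypothetical sketch: chroma subcarrier phase at a given dot, line, and
# field, from the three values listed above.
def chroma_phase(dot, line, field, sc_per_dot, sc_per_line, lines_per_field):
    # Total subcarrier cycles elapsed since an arbitrary zero point.
    cycles = (dot * sc_per_dot
              + line * sc_per_line
              + field * lines_per_field * sc_per_line)
    # Wrap to [0, 2*pi); only the fractional cycle matters.
    return (2.0 * math.pi * cycles) % (2.0 * math.pi)
```

The arbitrary zero point is exactly the relative-vs-absolute phase caveat: this only tracks how the phase shifts, not where the zeroes land at power-on.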

Even with three values, we’re only really getting the relative phase correct. The phase is shifting by the correct amount but the zeroes are not necessarily in the correct place. Consoles may not even be consistent about this. For example, the NES seems to have 6 (I think?) different initialization states and you don’t know which you’ll get when you turn it on. The relative phase might be good enough, though.

I detect which mode (e.g. 256px or 320px for Genesis) is being used by the width of the original input. Requiring the user to manually select the mode isn’t ideal, first because it is cumbersome to change settings with each game, and second because some games change modes on the fly (especially on consoles like the PSX). Some cores optionally display borders around the image (for Genesis, for example), which has to be accounted for. This works for the most part, but upscaling breaks it. If the core provided the chroma information, this could work even with upscaling. Some systems can switch modes in the middle of a frame or maybe even scanline, which probably isn’t possible to deal with even if we get more info from the cores (although I don’t have an example of this actually occurring).
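A sketch of that width-based detection (hypothetical cutoffs and names; real code would also have to account for the optional borders mentioned above):

```python
# Hypothetical sketch: guess the Genesis dot mode from the reported source
# width. Integer prescales (2x, 4x, ...) still match; arbitrary upscaling
# or borders break the heuristic, as noted above.
def genesis_dot_mode(source_width):
    if source_width % 320 == 0:
        return "H40"     # 320-dot mode
    if source_width % 256 == 0:
        return "H32"     # 256-dot mode
    return "unknown"     # borders enabled or non-integer upscaling
```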

The NES/SNES are slightly more complicated because of the one skipped dot in one line every odd frame. And even this isn’t always the case. Battletoads on NES, for example, doesn’t do this. There’s no way for us to know which behavior is correct without input from the core or explicit configuration by the user.
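A sketch of how that skipped dot could be accounted for (a deliberate simplification: on real hardware the skip happens on one specific line with rendering enabled, which this toy version glosses over):

```python
# Hypothetical sketch: dots elapsed before a given line, with one dot
# skipped per odd frame (NES-style; games like Battletoads disable it,
# hence the flag).
def dots_before_line(line, frame, skip_enabled=True):
    dots = line * 341
    if skip_enabled and frame % 2 == 1 and line > 0:
        dots -= 1   # one earlier line in this frame was 340 dots, not 341
    return dots
```

Feeding this dot count into the phase calculation shifts the artifact pattern by one dot on alternate frames, which is what makes the two behaviors visibly different.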

The PC Engine is another issue, because developers could choose whether the frame should be 262 lines or 263, and this leads to different artifacts. 263 looks much better. Again, there’s no way for us to know which is correct without input from the core or explicit configuration by the user.

Regarding interlacing, the pause trick can work but is cumbersome for games that switch back and forth (some games use interlacing for the menus, for example). It would be nice to know if we are supposed to be displaying an odd field or an even field.

Ultimately, I don’t expect to actually get these parameters. It seems like it would be quite a bit of work and would require buy-in from all the relevant core maintainers. Still, it is good to know what our limitations are.


Absolute phase doesn’t matter. As long as the modulator and demodulator are locked, you’ll get the right colors, artifacts and all. What’s more relevant is the possibility of differential phase distortion which has been reported on the NES but I don’t believe has been explored anywhere else.