An input lag investigation

Thanks for the reply. It was done with a simple test method.
After pausing with P, hold the action button and press K to advance to the next frame, until the action is reflected on the screen.
As a result, I found that the Wii titles have 1 additional frame of input lag compared to their stand-alone versions.

Gradius ReBirth was 0 frames stand-alone and 1 frame with the RA core. Muramasa went from 2-3 frames to 3-4 frames. (This title was tested on the title screen and the difficulty selection screen, since the Vanillaware Ltd. developers added intentional extra input lag for actions inside the game.)

But the GC title had exactly the same input lag (2 frames) as the stand-alone version.
It’s too bad I do not have a camera that can record at 120 or 240 fps, so I cannot do a more concrete test.

Was the +1 frame of additional delay in certain libretro cores (like mame) when pausing and stepping from RetroArch also confirmed to be present when recording with a high-speed camera, without pausing?

I’m asking since I got these results with the arcade version of Shinobi, using the jump button, with mame_libretro and native mame0209:

  1. RetroArch + mame core, pause/step with RetroArch keys: 3 frames of delay
  2. RetroArch + mame core, pause/step with mame core keys: 2 frames of delay
  3. mame0209 native: 2 frames of delay

So when the mame core is paused with its own keys, its delay in RetroArch matches native mame’s delay. That led me to wonder whether the additional +1 frame of delay when pausing/stepping with RetroArch’s buttons is some artifact of how RetroArch unpauses, steps, and pauses again in combination with certain cores.

I noticed that too.
When you use MAME’s pause key, RA is not blocked and still runs its logic and polling regardless.
It would be interesting to confirm with a high-speed camera recording, but I think there’s an additional frame of input lag here.

The new bsnes “hd” core has 1 extra frame of lag once again.

It’s there when using the “next frame step” test method in stand-alone bsnes too, so I tried to open an issue upstream:

I tried to change it myself, but the code isn’t really the same and I couldn’t get anywhere.
This was the patch for the older bsnes core:

Also, run-ahead isn’t working properly: even 1 frame of reduction makes the core crawl, so there must be something wrong with the way states are handled.
Second-instance mode is jumping around.


@mdrejhon @rafan

It’s possible. The runahead algorithm just needs to run on a frameslice basis instead of once per frame.

f = number of scanlines per frameslice
r = runahead value (in scanlines)

for each frameslice:
    run the emulator for f scanlines; save state
    run the emulator for r scanlines; beamsync the last f scanlines
    load state

All else equal, a full frame of runahead per frameslice multiplies the workload by roughly the number of frameslices per frame, with some overhead for (de)serialization. So if the frame is divided into four frameslices, you have to do about four times the work. This isn’t as bad as it seems if we’re using sub-frame runahead values: each slice emulates f + r scanlines, so the total work is (1 + r/f) times the baseline. For example, if r = f, the workload is approximately equivalent to a runahead of 1 today (2x), r = 2f is equivalent to 2 frames of runahead (3x), and so on.
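For concreteness, here is a minimal C sketch of that loop. All of the hooks (emu_run_scanlines, emu_save_state, emu_load_state, present_slice) are hypothetical stand-ins rather than real libretro APIs, and the frame geometry is just an example:

    /* Sketch of frameslice-based runahead. The emu_* and present_slice
     * functions are hypothetical stand-ins for core/frontend hooks. */
    #define SCANLINES_PER_FRAME 240
    #define FRAMESLICES         4

    extern void emu_run_scanlines(int n);             /* emulate n scanlines  */
    extern void emu_save_state(void *buf);            /* serialize core state */
    extern void emu_load_state(const void *buf);      /* restore core state   */
    extern void present_slice(int first_line, int n); /* beamsync n scanlines */

    void run_frame_with_slice_runahead(void *state_buf, int r /* scanlines */)
    {
        const int f = SCANLINES_PER_FRAME / FRAMESLICES; /* scanlines/slice */

        for (int slice = 0; slice < FRAMESLICES; slice++) {
            emu_run_scanlines(f);      /* advance real time by one slice */
            emu_save_state(state_buf); /* checkpoint before speculating  */
            emu_run_scanlines(r);      /* speculate r scanlines ahead    */
            /* Show the last f scanlines emulated; on screen they sit r
             * scanlines ahead of real emulated time. */
            present_slice((slice * f + r) % SCANLINES_PER_FRAME, f);
            emu_load_state(state_buf); /* rewind to real time */
        }
    }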

One caveat: since we’re no longer presenting discrete frames, setting the runahead value too high (above the game’s “internal lag”) produces more than the usual “frame skipping” effect you normally get when runahead is too high. You might also see occasional screen tearing, not unlike when vsync is turned off, except it will only happen when the image changes across a frameslice boundary in response to a change in input, which doesn’t happen as frequently as you might imagine.

This is fundamentally unavoidable. You can’t have intra-frame responses without tearing unless the game was designed with it in mind (which obviously won’t be the case if we are using runahead to achieve it).

However, provided you don’t set the runahead value above that of the “internal lag”, you can still reduce input lag without producing visual disturbances.

This makes it more useful for removing (small) amounts of host lag (e.g. driver, display, polling lag) up to the limit of the internal lag, while still maintaining faithful system latency.

A few more thoughts.

To my knowledge, most games poll for input during the vertical blanking period. This limits the value of scanline syncing somewhat, because we can brute-force similar latency reduction with sheer CPU power by just sleeping as long as possible (frame delay in RetroArch), compressing the entire emulated frame into a much smaller window.
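As a rough sketch of what frame delay does (the timing constant and the helper functions are illustrative, not RetroArch’s actual internals):

    /* Illustrative frame-delay loop; sleep_us, wait_for_vsync and
     * emulate_and_present are hypothetical helpers, not RetroArch code. */
    #include <stdint.h>

    #define FRAME_TIME_US 16667  /* one 60 Hz frame */

    extern void sleep_us(uint64_t us);     /* precise microsecond sleep   */
    extern void wait_for_vsync(void);      /* block until the next vblank */
    extern void emulate_and_present(void); /* poll input, run core, flip  */

    void frame_loop(uint64_t frame_delay_us)
    {
        for (;;) {
            sleep_us(frame_delay_us); /* idle through most of the frame */
            /* Input is polled as late as possible, so the emulated frame
             * reacts to fresher input. The core must now finish in the
             * remaining FRAME_TIME_US - frame_delay_us window or we miss
             * vsync, hence the high CPU requirements. */
            emulate_and_present();
            wait_for_vsync();
        }
    }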

A game that polled for input during the last visible scanline is where beamracing would truly shine: in that case, it would shave off almost a full frame of input lag. The same is true of cores that exit at the end of the vertical blanking period instead of the beginning (e.g. SNES cores before the Brunnis lagfix patches). The lagfix changes would be superseded by scanline syncing.

Even for the most performant cores, scanline syncing would let us eke out a millisecond or two. And having frame delay-like latency reduction without the high CPU requirements would be nice.

Finally, here are some crude ASCII diagrams which illustrate the differences across various lag-reduction methods.

I’ve attached a screenshot below in case the forum’s limited width forces you to scroll back and forth.

Legend

| = host v-blank interval
* = input lag
p = game polls input
o = earliest possible point at which a visible reaction could occur

snes9x pre-lagfix

 |                                          |                                          |
 <emulation><<<<<<<<blocking_on_video>>>>>>>><emulation><<<<<<<blocking_on_video>>>>>>>
          p*****************************************************************************o

snes9x post-lagfix

 |                                          |                                          |
 <emulation><<<<<<<<blocking_on_video>>>>>>>><emulation><<<<<<<blocking_on_video>>>>>>>
  p******************************************o

snes9x pre-lagfix with frame_delay

 |                                          |                                          |
 <<<<<<<<<<<<sleeping>>>>>>>>>>>><emulation><<<<<<<<<<<<sleeping>>>>>>>>>>>><emulation>
                                  p*****************************************************o

snes9x post-lagfix with frame_delay

 |                                          |                                          |
 <<<<<<<<<<<<sleeping>>>>>>>>>>>><emulation><<<<<<<<<<<<sleeping>>>>>>>>>>>><emulation>
                                  p**********o

snes9x post-lagfix with frame_delay, game polls input on last scanline

 |                                          |                                          |
 <<<<<<<<<<<<sleeping>>>>>>>>>>>><emulation><<<<<<<<<<<<sleeping>>>>>>>>>>>><emulation>
                                           p********************************************o

scanline sync

 |                                          |                                          |
<emulation><emulation><emulation><emulation><emulation><emulation><emulation><emulation>
                                  p**********o

scanline sync, game polls input on last scanline

 |                                          |                                          |
<emulation><emulation><emulation><emulation><emulation><emulation><emulation><emulation>
                                           p**********o

Screenshot

I hope that isn’t too difficult to follow. The asterisk trails give you a quick visual guide. Shorter = less lag.

Note: these diagrams consider only the lag introduced by the syncing method. I’m disregarding USB polling, display, and driver lag, etc., because they’re independent of the sync method. Most games also have at least one frame of internal lag, but that’s out of scope too. In short, the asterisks represent the smallest theoretical time in which you could see a reaction to your input under ideal (unrealistic) conditions.

The diagrams really don’t do scanline sync justice, because I could fit only four <emulation>'s (frameslices) in the space I gave myself. If you double the frameslice count from 4 to 8, which is quite doable, you halve the input lag (at 60 Hz, the worst-case sync lag drops from about a quarter of a 16.7 ms frame, roughly 4.2 ms, to roughly 2.1 ms). Nevertheless, it still illustrates some of its benefits, namely fixed input lag irrespective of polling time, and input latency comparable to a large frame delay without the high CPU requirements.


This is one of the topics that most interests me, as I really enjoy playing shoot ’em ups and side-scrollers. I’m always trying to bring the latency down.

My guess is that this topic should be a priority for future builds.

I’ve recently upgraded from Win7 to Win10 (same exact hardware) and noticed RetroArch was choppy. It turns out my frame delay settings, which were flawless on Win7, need to be lowered by 1-2 ms on Win10. Has anyone experienced that or figured out a way to mitigate it?

i5-3570k + GTX 1070, latest drivers.

Hi everyone! Long time, no see. Just thought I’d provide a very quick update regarding Raspberry Pi 4 input lag with RetroArch. I’ve made a few quick tests with my trusty old LED-rigged controller. I used the development branch of RetroPie for these tests. My results so far are:

  • As opposed to my previous tests of the Pi 3, I could not measure worse input lag with threaded video enabled on the Pi 4.
  • The default OpenGL driver on the Pi 4 matches the Pi 3 and earlier using the Dispmanx video driver, in terms of input lag.
  • The Max swapchain images setting works as expected. A setting of 2 reduces input lag by one frame.

This is good news (particularly the input lag performance of the new open-source GL driver), as it means the Pi 4 now behaves the same as RetroArch on PCs in terms of input lag. The Pi 4 is obviously still slower than a PC, so which input-lag-reducing settings can be used depends on how well the particular game and emulator run. As always.

I ran some tests with Super Mario World 2: Yoshi’s Island using snes9x2010. It appears the following settings work fine (tested both the spinning island scene and some gameplay; see the retroarch.cfg sketch after the list):

  • Threaded video off
  • Max swapchain images = 2
  • Frame delay = 6
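To the best of my knowledge, these correspond to the following retroarch.cfg entries (assuming the usual key names):

    video_threaded = "false"
    video_max_swapchain_images = "2"
    video_frame_delay = "6"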

I also forced 1000 Hz polling for USB gamepads (add usbhid.jspoll=1 at the end of the line in /boot/cmdline.txt in Raspbian). With these settings and a good gamepad, you’ll be approximately 0.8 frames (13 ms) behind a real NES or SNES, not including your display, of course. Given that most NES and SNES games on an original console took 33-50 ms on average from button press to a visible reaction on screen, being just 13 ms behind is of course very good.
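For reference, usbhid.jspoll=1 is appended to the existing single line in /boot/cmdline.txt. Everything before it in this example is just a typical Raspbian default; your line will differ:

    console=serial0,115200 console=tty1 root=PARTUUID=xxxxxxxx-02 rootfstype=ext4 fsck.repair=yes rootwait usbhid.jspoll=1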

Obviously, for more demanding emulators, you’ll have to scale back the latency reducing settings accordingly.

It’s also worth mentioning that the video driver for the Pi 4 is very much a work in progress. I observed occasional distracting tearing during my tests. It mostly worked fine, though. Still, I’d consider these first tests very much preliminary.

EDIT: To avoid any confusion: These tests were run in a DRM/KMS context, so not under X.


Whoa, that’s pretty surprising, but in a good way. Pretty great news all around 🙂 Thanks for your testing and reporting, as always!


@Brunnis Have you had a look at RetroFlag’s Classic USB Controller-J/U?

No ghost inputs, a great d-pad and buttons, and as far as I can tell, no added latency.

I also forced 1000 Hz polling for USB gamepads (add usbhid.jspoll=1 at end of line in /boot/cmdline.txt in Raspbian).

Interesting! Could you measure any performance degradation with this option? I wonder if it would be a ‘safe’ default in RetroPie (do you know what RetroPie currently defaults to?).

Note that this is not guaranteed to work. For Xbox controllers, for example, when using the kernel’s xpad driver, you need xpad.cpoll=1 for a 1-millisecond poll interval (1000 Hz).

And you need to verify by running the evhz tool:

I found this information on the MiSTer wiki, though, so I’m not sure whether it only works there or in general. I need to test.

Update:
Nope, no effect with XInput gamepads. xpad.cpoll is a custom patch in the MiSTer kernel.


Yeah, I bought one a good while ago. It’s really nice in most ways (look, feel, 250 Hz USB polling by default), except for one important aspect: d-pad sensitivity. I noticed immediately when playing Street Fighter II that rocking your thumb left and right very often produces an involuntary jump or crouch. This phenomenon is not nearly as likely to occur on my 8BitDo controllers or my original SNES Mini controllers.

I’ve not noticed any performance degradation, but I’ve not run any formal tests on it. I would guess that if there is any measurable performance impact, it would only be seen while a button or stick is being pressed. There might also be some risk that certain devices don’t like being polled at 1 kHz. It would be nice if this could become a new default for RetroPie, but it certainly needs thorough testing.

Good info. Thanks.

@Brunnis

I have not noticed any d-pad sensitivity issues so far.

I recently completed Super Castlevania for the SNES.

Yeah, it could of course be my sample that is particularly sensitive.


This could be the same problem you’re describing that Level1online mentions in his review of both the US version and the Japanese Famicom version.

I myself have two J versions and don’t have this problem.

So is there any chance waterbox save states could be implemented in the MAME core to eliminate all input lag?


Hello! I’m doing some measurements right now (RetroArch, Windows 10, LCD…) with a custom LED-rigged SNES Classic controller + Raphnet adapter, my NES test ROM, and Xperia 960 fps HD video. I will give you my conclusions later (translated from French with Google Translate, sorry).
