An input lag investigation

With the Pause method:

mGBA Super Mario Advance = 2 frames
Nestopia Chip 'n Dale = 2 frames

No problem there it seems.

Nestopia Batman = 2 frames before he gets ready to jump; by the 9th frame he is in the air. :slight_smile:

But that’s just the way it is. I can finish it in less than half an hour anyway; it’s one of my all-time favourites.

To be clear for anyone confused, the input lag you feel is something like:

game code / design choices + emulator (core) processing time (what the Pause method measures here) + RetroArch dealing with all these operating-system-dependent timings + your joypad’s response time + your monitor/TV’s response time
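To make that chain concrete, here is a small sketch that sums illustrative per-stage figures. All the numbers below are placeholder assumptions for the sake of example, not measurements from this thread:

```cpp
#include <cassert>

// Illustrative only: rough per-stage latency figures (in ms) for the
// chain described above. The values are placeholders, not measurements.
struct LatencyChain {
    double game_logic;   // game code / design choices (often 1-2 frames)
    double core;         // emulator (core) processing time (Pause method)
    double frontend;     // RetroArch / OS-dependent timing
    double joypad;       // controller poll interval (e.g. 8 ms at 125 Hz USB)
    double display;      // monitor/TV response time

    double total() const {
        return game_logic + core + frontend + joypad + display;
    }
};

// Example: a game with 1 frame of built-in lag, a core adding 1 frame,
// half a frame of frontend buffering, a 125 Hz pad and a fast monitor.
double example_total_ms() {
    LatencyChain chain{16.67, 16.67, 8.33, 8.0, 5.0};
    return chain.total();
}
```

Even with optimistic numbers, the stages stack up to several frames, which is why shaving a single frame anywhere in the chain is noticeable.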

Brunnis’ fix is at the emulator/core level. Beyond that, you can hope for Vulkan, faster peripheral standards, or RetroArch magic.

Yeah, my mGBA results seem indeed consistent with yours, Tatsuya. It’s actually one of my favourite emulators out there: lightweight and responsive, with extremely low audio latency, great accuracy while not too taxing on the CPU. :slight_smile:

I think Brunnis’ fix has brought the SNES core back in line with the others as far as processing time is concerned. Now I wonder if it’s possible to further reduce emulator latency and get to the point of seeing a reaction time of just one frame.

It is just pure speculation on my part and I don’t have any programming competency either, but the fact that the lowest latency we’re seeing on all cores (even the fastest ones) is 2 frames makes me wonder whether something at a RetroArch-wide level could decrease it further.

The reason I’m making this assumption is that there is another libretro frontend that actually appears to be even more reactive than what we have measured so far. I’m talking about Alcaro’s ZMZ, which couples the old ZSNES interface with the SNES libretro cores. You can find it here: http://www.smwcentral.net/?p=section&a=details&id=5681 Here is its github page: https://github.com/Alcaro/ZMZ

Back when I made the other thread on SNES latency, hunterk advised me to try it and I immediately noticed a huge improvement, although the timings were not as buttery-smooth as in Retroarch. Hunterk references this difference with actual measurements on his blog page: http://filthypants.blogspot.it/2015/06/latency-testing.html

I have made a quick test by using the latest Brunnis-fixed cores within ZMZ and it seems even more responsive, to the point of starting to really resemble actual SNES hardware. I wouldn’t really know where to start honestly, but maybe Brunnis might find something relevant by comparing the two programs. :slight_smile:

Another interesting resource, mentioned also by hunterk, might be Calamity’s GroovyMAME, available here: http://forum.arcadecontrols.com/index.php/board,52.0.html It’s a special distribution of MAME that diverges from baseline with several features aimed at CRT usage (though it also works on common LCD screens) and at minimizing latency. As far as I can understand, they have implemented two things that make it great from an input-lag perspective: a D3D9Ex backend, which supposedly skips all sorts of driver overhead, and a “frame_delay” function. They claim to achieve next-frame latency on Windows x64, so once again I wonder if we can use it as a reference.

I tested those programs before, but ZMZ seemed very hacky and I couldn’t see myself using it regularly. With GroovyMAME I didn’t get much out of it; I don’t know if I missed some settings. ShmupMAME looked interesting to me too, but it can’t do shaders and is a bit outdated. I don’t remember if I tested it in the end.

The latency tests hunterk made seem strange to me, particularly the shader results within RetroArch. I use CRT-geom all the time and it simply can’t be doubling the response time versus stock.

I can switch between the two with the N & M keys while making Mario jump in bsnes: the difference is hard to tell. It can’t be 86 vs 148 ms when I can feel the 16 ms difference Brunnis’ fix makes. I chalk it up to some kind of problem, AMD drivers with Cg shaders or something.

And the “RetroArch, fullscreen, Hard GPU Sync = 0, bsnes-compatibility core, xbr-lvl4-multipass” case at 80 ms average vs. ZMZ fullscreen at 78 ms looks strange too, compared to the other cases.

With Brunnis’ fix and using frame delay of 4, everything feels instant to me (w/ 1 frame of LCD lag). Have you tried fiddling with that setting?

Yes, I actually pushed it all the way to 7 and I agree, with the Brunnis-fixed cores it feels almost instant, at least subjectively.

Don’t get me wrong, the results Brunnis has achieved with the SNES cores are outstanding and make things way more playable and responsive than before. Once again, I can’t be thankful enough. :slight_smile:

I’m just asking myself whether it is possible to theoretically shave one further frame off in the actual emulator processing code, the one we’re measuring through the pause-method, and achieve a minimum latency of just 1 frame there. ZMZ and other solutions out there are surely way more hack-ish and inaccurate than Retroarch as far as timings go, but maybe there’s something useful in there.

I’m not sure about that frame delay setting.

I tested it with bsnes-mercury balanced, with Hard GPU Sync ON @ 0.

Mario World accepts a frame delay of 5 maximum; is it really faster? I can’t tell for sure. Super Aleste can go up to a frame delay of 10. Sometimes it feels faster, sometimes the same; I really can’t tell.

In the end I turned it OFF, as I’m afraid it could cause frame skipping / stuttering in other games / cores.

What about the input polling option in Settings > Input called “Poll Type Behavior”? Might that help when set to “late” vs. “early” or “normal”? Did anyone get any numbers for that? I think the fix is great, but I still think there is a need for a nice spreadsheet detailing all the various latency-related settings and how many frames they save (or add!).

Does V-Sync still add one frame of latency? I remember testing mGBA against real hardware (a GBA SP and an NDS): I scrolled through the settings in the Pokemon and Mario main menus with the d-pad and recorded with a 60 fps camera. I got a 4-frame response when scrolling on those systems, and the same in RetroArch without vsync. With vsync on, whether Hard GPU Sync was off or on, one frame was added, and this happened on both AMD and Nvidia cards.

However, I get better response with N64 in RA than with other emulators, especially when testing Doom 64 and Quake.

Also, it’s nice to keep track of the SNES cores getting better response times.

Thanks everyone for testing out the code change in bsnes-mercury!

Thanks a lot for this! I will spend some time tomorrow looking at what possible side effects this change could have.

Unless I’ve misunderstood it (and I haven’t spent much time on it), this setting is not all that interesting in most cases. Setting it to “late” allows input to be polled when the emulator requests it. As far as I can see, the libretro implementation must be specifically written to utilize this; if input is simply polled before calling the emulator main loop, the “late” setting makes no difference. The reasons this setting isn’t very interesting are:

  • In most cases, the emulator loop is constructed so that input is read almost immediately after entering the loop. This is what snes9x-next and bsnes-mercury with my fix do. Whether we poll input from the system just before or just after entering the emulator main loop will not make any meaningful difference in this case.
  • Even for emulators that poll late (such as snes9x-next without my fix), “Poll type behavior” set to “late” is not of much use unless the emulator executes rather slowly. If the emulator loop runs quickly, like in a millisecond or two, polling early or late during the loop obviously won’t make much difference (i.e. only a couple of milliseconds at most).

So, in summary, you’d need a combination of pretty slow emulation and an emulator that polls late to reap any benefits from this setting.
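As a sanity check of that reasoning, here is a toy model (my own illustrative sketch, not RetroArch code) of how stale the input is under early vs. late polling:

```cpp
#include <cassert>

// Toy model of the argument above. Times are in ms within one 16.67 ms
// host frame. "Input age" = how stale the input is at the moment the
// emulated game reads it. All figures are illustrative assumptions.
//
// poll_late = false: frontend polls once, just before calling the core.
// poll_late = true:  core requests a poll exactly when the game reads input.
double input_age_ms(bool poll_late, double emu_runtime_ms,
                    double read_offset_frac) {
    // Point within the core's run at which the game reads input:
    // 0.0 = right at loop entry, 0.9 = near the end of the loop.
    double read_time = emu_runtime_ms * read_offset_frac;
    // With late polling the input is fresh when read; with early polling
    // it was captured read_time ms earlier.
    return poll_late ? 0.0 : read_time;
}
```

With a fast core (~2 ms per frame) the early-vs-late difference is under 2 ms even if the game reads input late in the loop; only a slow core that also polls late sees a meaningful gain.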

I’ll try this with camera recording on my rig. I believe my 6700K should allow for a quite high frame delay setting.

I honestly don’t think there is much more to do in the emulator(s). I believe the 2-3 frames it now takes with snes9x-next and bsnes-mercury are inherent to the actual console/games.

On that same theme, I’ve spent a lot of time today looking at the Nestopia source code. Long story short (yeah, it took a while to sift through the sources…): I don’t think Nestopia suffers from the 1 frame additional input lag, like the SNES emulators. Although the source code was a little hard to follow, it looks like Nestopia does the right thing, i.e.:

  1. Polls input.
  2. Kicks off emulator right at the start of VBLANK.
  3. Exits the emulator loop right when the frame has been generated.

I haven’t used the frame advance method on many games with Nestopia, but I have tested Mega Man 2 and it showed 2 frames of emulator input lag. This matches what the SNES emulators can do with my fix and my current hypothesis is that the NES games also have some inherent lag that we probably can’t get rid of.
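That frame counting can be sketched as a toy model (purely illustrative, not emulator code): the earliest possible visible reaction with the frame-advance method is 1 frame, and a misaligned core main loop or in-game input buffering each add one more:

```cpp
#include <cassert>

// Toy model of the frame-advance (pause) method: press a button while
// paused, advance one frame at a time, and count frames until the sprite
// visibly reacts. Illustrative sketch only.
//
// core_extra_lag: 1 if the core's main loop is misaligned (like
//   snes9x-next before the fix), 0 if it is aligned correctly (Nestopia).
// game_internal_lag: frames the game itself buffers input (design choice).
int frames_until_reaction(int core_extra_lag, int game_internal_lag) {
    // Best case: press, advance one frame, reaction already visible.
    int base = 1;
    return base + core_extra_lag + game_internal_lag;
}
```

With an aligned core and a game that buffers input one frame internally, this lands on the 2 frames measured for Mega Man 2 gameplay.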

Hi Brunnis,

Thanks again for all the research you’re doing.

Could you possibly explain again why your bsnes-mercury fix is working? With some of the explanation you gave I’m still a bit puzzled how and why it works. But I’ll try my best to word it like I see it now. Please comment if this is the correct view.

If I would take the perspective of a real SNES (for simplicity take only 240 line case), if I understand you correctly then a real SNES:

  1. polls input at first line when entering vblank (this is line 240)
  2. does all game logic in vertical blank (timewise equivalent of about 22 lines, consisting of say 2 lines frontblanking, 3 lines vsync, rest back blanking)
  3. scans out a raster that displays 240 lines (counting from 0-239) sequentially

Steps 1-3 are what we call a frame, and a real SNES takes about 16.67 ms for each frame. It repeats this 1-3 cycle over and over until the player turns off the console.

Now I make the step to emulation. The emulation of Step 1 to 3 above on a fairly beefy PC would take about 2 ms for each SNES frame.

The host PC emulating the SNES does this each frame:

  1. get input state
  2. run emulation (steps 1-3 above)
  3. get into a waiting loop (of about 14ms when emulation took 2ms) until host PC screen vertical blank / vsync is reached

This is what we call a frame in the host PC context. It does this frame generation cycle over and over until user quits emulation.

If I understand you correctly, your fix for bsnes is purely focused on Step 2 of the host PC context. Taking that Step 2 apart, we get the 1-3 cycle mentioned for the real SNES. (The only difference is that it takes ~2 ms for each 1-3 cycle instead of the original ~16.67 ms.)

Byuu decided to program his emulation such that it gets the host PC input state at line 241, one line after where a real SNES gets the input state. Getting the input state at line 241 rather than 240 means that the game logic for that frame doesn’t take into account the host PC input state read at step 1. It takes until the next run of the game logic (one host PC frame later) for this input to be considered, causing the extra frame of delay in bsnes. Your fix moves the input polling event to before line 240, making sure the game logic always gets fed the most current input state (read at step 1 of the current host PC frame cycle).
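That timeline can be turned into a toy model (an illustrative sketch using the simplified 240-line numbers from this description, not code from the bsnes source):

```cpp
#include <cassert>

// Illustrative assumption: the frontend polls input immediately before
// each core run, and the emulated game latches input at vblank start
// (line 240 in the simplified 240-line case).
//
// Returns how many host frames pass before a just-polled input can affect
// the displayed picture.
int host_frames_of_core_lag(int loop_entry_line) {
    const int latch_line = 240; // vblank start: game reads the pad here
    if (loop_entry_line <= latch_line) {
        // Loop enters at (or before) the latch: fresh input is latched
        // right away, game logic runs in vblank, and the visible lines
        // rendered later in this same run already reflect it.
        return 1;
    }
    // Loop enters just past the latch (the original line 241): the picture
    // produced by this run was computed from last frame's logic; the fresh
    // input is only latched near the end of the run and shows up one host
    // frame later.
    return 2;
}
```

The one-line difference in the loop entry point is exactly one host frame of added latency in this model.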

Please let me know whether this is the correct understanding of how your fix shaves off one frame of delay in BSNES-Mercury.

P.S. Thanks again for all the informative posts you made, it really helps in getting a better understanding of how these things are related to one another.

[QUOTE=rafan;41731]Please let me know whether this is the correct understanding of how your fix shaves off one frame of delay in BSNES-Mercury.[/QUOTE] Yes, that’s pretty much exactly what the fix does. Just to be clear, though, I just moved the loop entry/exit point to line 240 or 225 (depending on the overscan setting) instead of 241. The emulator itself reads/polls the input at exactly the same place within the SNES frame as before.

Two new important discoveries:

No. 1

In the menus of Mega Man 2, Nestopia achieves single frame latency. This more or less proves two things:

  • That Nestopia is implemented correctly like I thought, i.e. it doesn’t suffer from additional input lag due to a misaligned main loop.
  • That two frames of latency during actual gameplay, such as in Mega Man 2, is a result of how the game is written and would be there on a real console as well.

No. 2

I just tried the frame-advance method on fceumm. I was interested in this, since I tested this emulator on RetroPie with my old camera test setup and seemed to get higher latency than Nestopia. Guess what? fceumm does indeed have one frame higher input lag than Nestopia! In the menus of Mega Man 2 it has 2 frames lag (compared to 1 with Nestopia) and in actual gameplay it has 3 frames lag (compared to 2 with Nestopia).

To be honest, I’m not particularly keen on digging into the fceumm source code as well. However, I have created an issue report (https://github.com/libretro/libretro-fceumm/issues/45) and I’m now hoping that someone else will pick this up and fix it.

That’s great work in finding the causes of latency in emulation. Also, I compared mednafen’s version of bsnes source code to the recent Brunnis fix to bsnes-mercury-libretro. Here is the Brunnis fix first to src/system/system.cpp:

#if LAGFIX
    if(cpu.vcounter() == (ppu.overscan() == false ? 225 : 240)) scheduler.exit(Scheduler::ExitReason::FrameEvent);
#else
    if(cpu.vcounter() == 241) scheduler.exit(Scheduler::ExitReason::FrameEvent);
#endif

And that from mednafen:

exit_line_counter++;

// if(cpu.vcounter() == 241) scheduler.exit(Scheduler::FrameEvent);
if((cpu.vcounter() == 241 && exit_line_counter > 100) || (!ppu.overscan() && cpu.vcounter() == 226)) // Input latency reduction fun.
{
    //printf("Exit: %u ", cpu.vcounter());
    scheduler.exit(Scheduler::FrameEvent);
}

It appears that mednafen also used overscan to determine where to call scheduler.exit, but the two cpu.vcounter values should be decremented by 1 to mirror the Brunnis fix (save for the condition that exit_line_counter is greater than 100). It may be worthwhile to confirm whether the current mednafen change is adequate to fully decrease input latency, and to check the effect of exit_line_counter on the SNES demos (or a similar use of exit_line_counter to test for increased compatibility).
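Put side by side as plain functions (a sketch of the two conditions, not a patch), the one-line offset is easy to see:

```cpp
#include <cassert>

// Sketch contrasting the two exit conditions quoted above: mednafen exits
// at vcounter 241 (overscan) / 226 (no overscan), while the Brunnis fix
// exits at 240 / 225, i.e. both values decremented by 1.
int mednafen_exit_line(bool overscan) { return overscan ? 241 : 226; }
int brunnis_exit_line(bool overscan)  { return overscan ? 240 : 225; }
```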

Okay, so I’ve run a camera test on the “Frame Delay” setting. With Nestopia, I could run Mega Man 2 with a frame delay setting of 12 ms on my Core i7-6700K. If everything works as expected, input lag should reduce by 12/16.67 = 0.72 frames. And the test results are as expected (within tolerances):

Without frame delay:

Average: 4.3
Min: 3.25
Max: 5.25

With frame delay set to 12:

Average: 3.4
Min: 2.5
Max: 4.5
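The expectation can be double-checked with a couple of lines (a sketch, not the measurement code):

```cpp
#include <cassert>
#include <cmath>

// A frame delay of 12 ms should shave 12 / 16.67 = 0.72 frames off the
// measured average at 60 Hz.
double expected_improvement_frames(double frame_delay_ms) {
    const double frame_ms = 1000.0 / 60.0; // ~16.67 ms per 60 Hz frame
    return frame_delay_ms / frame_ms;
}
```

The measured improvement (4.3 − 3.4 = 0.9 frames) is within tolerance of the predicted 0.72.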

This obviously feels great when playing. To understand exactly how good this is and to understand how much room there actually is for improvement, let’s make a simple calculation. We’ll start with the average result (in milliseconds) and remove all the known quantities:

3.4 × 16.67 ms = 56.67 ms
−4 ms (average time until USB poll)
−8.33 ms (average time until emulator runs)
−4.67 ms (time until the emulator finishes the first loop and receives vsync; this would be 16.67 ms with Frame Delay at 0, but setting it to 12 removes 12 ms)
−16.67 ms (time until the emulator finishes the second loop and receives vsync)
−11 ms (time for scanning the display from the top left until reaching the Mega Man character)

Time left unaccounted for: 12 ms

Although the USB polling time could be decreased slightly by increasing the polling rate, there really isn’t that much to do about the other known quantities listed above. The remaining time could come from other small delays within the system (perhaps specifically the GPU driver/hardware). We also haven’t accounted for any delay within the HP Z24i display I’m using. Even if it’s fast, we can probably expect a couple of milliseconds between receiving a signal at the display’s input and getting detectable change of the corresponding pixels.

What about an actual NES on a CRT?

If we go by the hypothesis that the actual NES hardware also has 2 frames of delay in certain cases (such as during Mega Man 2 gameplay) and that it reads input at the beginning of VBLANK, we arrive at:

−8.33 ms (average time until input is actually read)
−16.67 ms (time until the NES has finished one frame)
−12 ms (time to run through vblank again and scan out the lines until reaching the Mega Man character at the bottom of the screen)

Expected average input lag for Mega Man 2 on a real NES and CRT: [B]2.2 frames[/B]

If the above calculations hold true, our emulated case using an LCD monitor is only 1.2 frames behind the real NES on a CRT. 1.2 frames translates to 20 ms. That’s actually very, very good. :slight_smile:
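Both budgets can be reproduced numerically (same component estimates as in the post, rounded as shown):

```cpp
#include <cassert>
#include <cmath>

const double FRAME_MS = 1000.0 / 60.0; // ~16.67 ms

// Emulated case (Nestopia + frame delay 12, measured 3.4 frames average):
// subtract the known quantities and see what is left unaccounted for.
double unaccounted_ms() {
    double measured    = 3.4 * FRAME_MS;   // ~56.67 ms
    double usb_poll    = 4.0;              // avg time until USB poll
    double until_emu   = 8.33;             // avg time until emulator runs
    double first_loop  = FRAME_MS - 12.0;  // ~4.67 ms with frame delay 12
    double second_loop = FRAME_MS;         // 16.67 ms
    double scanout     = 11.0;             // scan down to the Mega Man sprite
    return measured - usb_poll - until_emu - first_loop - second_loop - scanout;
}

// Real NES + CRT hypothesis: input read + one frame + scanout.
double real_nes_frames() {
    double ms = 8.33 + FRAME_MS + 12.0;
    return ms / FRAME_MS; // ~2.2 frames
}
```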

[QUOTE=Sam33;41747]It may be worthwhile to confirm that the current mednafen changes are not adequate to fully decrease input latency and the effect of exit_line_counter on the snes demos (or a similar use of exit_line_counter to test for increased compatibility).[/QUOTE] Thanks Sam33! I’ll see if I can have a look at that during the day.

[QUOTE=Brunnis;41746]fceumm does indeed have one frame higher input lag than Nestopia![/QUOTE] Interesting! Currently fceumm is the default in RetroPie. A bit off topic, but can you think of any reason why we shouldn’t just switch to Nestopia as the default? I presume they both work fine on the Pi, but if Nestopia has this advantage…

The only reason I can think of is that Nestopia is slower. My tests (on the i7) indicate that fceumm runs 15-20 percent faster. This is not going to be an issue on the Pi 2 & 3, but it may cause issues with the Pi 1. Would you mind asking the question on the RetroPie forum (or as a GitHub issue) to see if any of the devs would care to comment?

The advantages of fceumm are: a little faster/lighter, better support for a handful of weird Chinese pirate mappers, and better determinism (for netplay, so not really an issue here). In short: if Nestopia runs at full speed on the RPi 1/0, it’s probably the better choice.

Brunnis keeps going with great findings! Are some devs already involved into this?

[QUOTE=xadox;41800]Brunnis keeps going with great findings! Are some devs already involved into this?[/QUOTE] Thanks! Here’s another one: I believe I just found and fixed the lag issue in fceumm. Pull request is here: https://github.com/libretro/libretro-fceumm/pull/46

Guess I can be counted as a dev now… :stuck_out_tongue:

I’ve tested the fix and it performs as expected, i.e. it removes a full frame of lag and brings fceumm up to the same level of input lag performance as Nestopia. Talk about a small fix (moving one line of code up one line…).

EDIT: Repo with the fix can be found here: https://github.com/Brunnis/libretro-fceumm

EDIT: I can see that twinaphex just merged the fix into the fceumm master. Yay!

I’ve spent the better part of the day looking at bsnes-mercury and the viability of my first fix. The problem with that one was that it could break compatibility if a game were to change the overscan setting mid-frame. Apparently, no commercial software does, but still… So I went in again and devised what I believe is a much better solution. For the details, please see this pull request:

I’d really appreciate some feedback. The code is available in this repository:

Below are downloads to all core variants (accuracy, balanced, performance) for Win x64. I would very much appreciate if you helped test these out. If you do, please use the frame advance method to confirm the improvement.

Accuracy Balanced Performance

Cheers!