A visible change in pixel color happens much faster than the quoted response time. The full transition takes longer, but a noticeable change starts much sooner. This is why pixel response doesn’t have much to do with input latency. It does affect motion ghosting, though.
Why people use stinky Android to emulate, and at the same time expect to have low input lag, is something that defies logic.
I agree so much with you. I just got a Steam Deck and the latency is night and day compared to my old GPD XD Plus running Android.
Can you please elaborate on this?
What makes you think VRR makes frame delay irrelevant?
Frame delay supposedly only reduces Vsync lag. When your framerate is in VRR range (anything below your max refresh rate) Vsync isn’t active, so frame delay should be irrelevant. Since most emulated games are 60 FPS or below (Wonderswan is like 75) you should always be in VRR range on a 120hz or higher display.
That is true for most standard games, but not for RetroArch.
While in a normal game the frame is processed and then displayed as fast as possible, in RetroArch you poll input and process the frame right after the previous frame is displayed, and then the finished frame has to wait for the next Vsync. This wait is the input lag that can be reduced by using frame delay.
Even if VRR is active, RetroArch is artificially limiting the framerate to the emulated game’s framerate. This means that (with frame delay disabled) the input is polled as soon as the previous frame is scanned out, so the input lag is theoretically the same as without VRR.
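To illustrate, here’s how I picture the classic fixed-refresh loop, as a rough sketch with made-up function names (not RetroArch’s actual code):

```c
/* Hypothetical fixed-refresh (Vsync) emulator loop with frame delay. */
for (;;) {
    wait_for_vblank();          /* previous frame is scanned out here  */
    sleep_ms(frame_delay_ms);   /* frame delay: postpone input polling */
    poll_input();               /* sampled closer to the next vblank   */
    run_core_one_frame();       /* must finish before the next vblank  */
    present_frame();            /* queued; shown at the next vblank    */
}
```

With frame_delay_ms at 0, input is sampled a full refresh interval before it can appear on screen; every millisecond of frame delay shaves that wait down.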
EDIT: Apparently I was wrong. @RealNC explains it on the posts below.
At least that’s how I think things happen on RetroArch. But please correct me if I’m wrong.
Has anyone measured this before? I did measurements here, but I have yet to test with VRR on:
The frame limiting happens before input polling, so frame delay does indeed nothing useful with VRR.
Frame delay is a method to bring input polling closer to the vblank interval. You can’t move the vblank, so you have to move the input polling.
With VRR, you do move the vblank. It’s not necessary to move the input polling. RetroArch waits (the frame limit requested by the core), then polls input, then runs the core loop. When the core presents the frame, that’s when the vblank happens. Frame delay has no place at all here.
So you mean the sequence is the following?
- scan-out previous frame
- wait for next frame time
- poll input and process frame
- immediately scan-out current frame
If that’s the case, won’t this result in uneven frame pacing? Step 3 will not always take the same amount of time to finish. Is this the reason why I observe slightly uneven panning in the 240p test suite with VRR enabled?
EDIT: Sorry, I made the sequence before your edit. I believe we’re describing the same sequence, yeah.
Yes, that’s the sequence. But it’s important to understand that the “wait for next frame time” step takes into account how long the previous frame took. On a 10ms interval (100FPS), if the core loop took 3ms to present a frame, RetroArch will wait for 7ms afterwards, not 10. Otherwise, you would indeed get uneven pacing (and FPS would be off by a lot.) So when the core’s frame times vary, you still get correct frame pacing.
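As a sketch, with simplified, hypothetical function names (this is not the actual RetroArch source):

```c
/* VRR run loop with a deadline-based frame limiter.
 * frame_interval = 1.0 / core_fps, e.g. 10 ms at 100 FPS. */
double next_deadline = now() + frame_interval;
for (;;) {
    sleep_until(next_deadline);       /* sleeps interval MINUS core time: */
                                      /* if the core took 3 ms, this only */
                                      /* waits 7 ms                       */
    next_deadline += frame_interval;  /* fixed schedule, so no drift      */
    poll_input();
    run_core_one_frame();
    present_frame();                  /* with VRR, vblank happens now     */
}
```

Because the deadline advances by a fixed interval instead of “now + interval”, variations in the core’s frame times don’t accumulate.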
This is very interesting! I had no idea the VRR cycle on RetroArch was like this.
Yeah, it certainly must take into account the time taken to process the previous frame. What you say makes sense.
But still, frame pacing can’t be as perfect this way compared to standard Vsync. The processing time will never be exactly the same from one frame to the next; there will always be some variation in frame times, even if very small.
I believe this is observable in the grid scroll test of the 240p test suite.
But I also believe this could have a relatively simple fix: why don’t we pad the processing time with a few elastic microseconds so that every frame takes the exact same time to be ready? This would give us perfect frame pacing for a negligible amount of input lag.
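Something like this, continuing the hypothetical sketch from earlier in the thread:

```c
/* The padding idea: present at a fixed offset into each frame slot
 * instead of as soon as the core happens to finish. */
sleep_until(next_deadline);
poll_input();
run_core_one_frame();              /* variable: e.g. anywhere from 2 to 4 ms */
sleep_until(next_deadline + pad);  /* pad >= worst-case core frame time      */
present_frame();                   /* always presented at slot start + pad   */
next_deadline += frame_interval;   /* added lag per frame: pad - core time   */
```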
I noticed that the frame delay settings are still available and still have an effect even when VRR is turned on in RetroArch. When I set the frame delay too high there is still slowdown, and if the Auto frame delay setting is enabled, it still adjusts the value.
Shouldn’t these settings be completely ignored if VRR is being used?
VRR is an external thing. RA has no control over it and doesn’t even know whether it’s active or not.
I meant when VRR is turned on in the RetroArch settings. RetroArch is at least aware of the user’s intention to use VRR.
In this situation, what do the frame delay settings do in the VRR loop?
Nice discussion here. It got into territory I don’t have enough knowledge to understand, but it’s still nice to have it here for future reading.
I would like to know if someone could write up the best settings to enable/disable/modify under Latency, and how powerful the hardware must be to run it at its full glory.
@JulianoFdeS I just make sure I’m running in exclusive fullscreen mode and set max swapchain images to 2. Then I check in the console log that the 2 swapchain images are in fact in use (some graphics drivers will give you more than 2 regardless). Oh, and make sure threaded video is turned off (but that’s already the default on PC).
Other than that, you can try to increase frame delay to as high as your computer allows. The faster your hardware, the faster the frame gets ready and the higher you’ll be able to push frame delay. And don’t fret about 2 or 3ms minuscule differences.
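For reference, these are roughly the relevant entries in retroarch.cfg (key names from memory, so double-check them against your own config file):

```
video_fullscreen = "true"
video_windowed_fullscreen = "false"   # exclusive fullscreen
video_max_swapchain_images = "2"
video_threaded = "false"
video_frame_delay = "8"               # raise until you get slowdown, then back off
```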
Lastly, you can use run-ahead, but that’s a different topic because you’re actually running ahead of real hardware.
Wait, this means that a powerful hardware can use this feature to run a game with less latency than the real console itself?
yep. it uses savestate trickery to make it happen, but when done correctly, it’s transparent to the user.
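roughly, the single-instance variant does something like this for each displayed frame (simplified sketch with made-up names, not the actual RetroArch implementation):

```c
/* N = number of internal latency frames to skip. Each displayed frame: */
poll_input();
run_core_one_frame(AV_OFF);   /* advance the real timeline, output hidden */
save_state(buf);              /* snapshot the real position               */
for (int i = 0; i < N; i++)   /* speculate N frames using current input,  */
    run_core_one_frame(i == N - 1 ? AV_ON : AV_OFF);  /* show only the last */
load_state(buf);              /* rewind to the real timeline              */
```

note that the core runs N+1 times per displayed frame, which is why it gets performance-hungry fast.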
Does it make some games that are intended to be hard easier? Would this be considered cheating? Do speedrunners use this feature?
If artificial latency is part of the difficulty design, yeah. I think a notable shmup has something like 8 frames of latency and plays very differently if you shave off, say, 4 of those frames.
AFAIK, speedrunning rules are pretty strict about this sort of thing and encourage using original hardware for “official” runs. It’s ultimately not really any different from existing tool-assisted cheating (such as playing back a recording and just moving your hands in time, etc.)
However, people who want to practice on an emulator (with all of its inherent conveniences, such as savestates) while keeping a real-hardware-like latency experience will sometimes use runahead to tune their setup to match a level of latency that is otherwise unattainable in software.
Learned a lot from that! I never imagined some games would have artificial latency making them harder. I guess genres like FPS and racing, where milliseconds are everything, can extract the most benefit from this. So the best setup would be a 1 ms monitor, a mechanical keyboard/wired gamepad, and powerful hardware compatible with Vulkan drivers, so you can turn the latency settings up to the max without getting any stutter. It can be expensive, but it can probably enhance the experience of some hard games and make them less frustrating. The only sad part is that this isn’t something a portable emulation handheld like my PowKiddy can do. But now I know that I don’t suck at some hard games that are only harder because I’m not playing them on real hardware, so I can try those games again later once I get a better hardware setup in my hands.
I wouldn’t say the developers put the latency there intentionally. I believe it was related to technical limitations.
And yes, playing a game with lower latency than the original hardware will make it easier. You can consider that cheating, I guess, as you’re essentially altering the original game somewhat. But that depends on how strict you wanna be about it.
I personally don’t use it because I want to feel the same difficulty as when I originally played the game, but using runahead to offset the software latency and get closer to real hardware might be a valid usage, as hunterk said.
Runahead is amazing and was a groundbreaking addition to RetroArch, but it doesn’t come without some caveats.
Not all games have the same amount of frames of latency, so you need to configure them individually. And to make matters worse, the same game might have a different number of latency frames in different areas (such as gameplay vs. menus), which might introduce runahead stutters.
Also, depending on the core in use, it might be very performance intensive.