Can't achieve ~0.1% fps deviation rate for the life of me, need help.

Greetings, long-time RetroArch user here.

I’m opening this thread as a last-ditch effort. I’m not a code-savvy user, unfortunately, but through extensive testing and research I’ve managed to get a pretty good idea of how the different settings work and affect each other, as well as the basic principles to respect in order to get audio/visual/input synchronization as close to perfect as possible.

Unfortunately, I feel like I’ve hit a wall that I can’t get through without some help. I’ve scoured Google and these forums but haven’t been able to fix the problem, assuming it’s even fixable with my current setup. Maybe it’s just a matter of disabling some obscure AMD driver or Windows 10 setting, or maybe it’s simply impossible with the display I’m currently using. I’d honestly just like a third party to either confirm that it’s impossible or point me in some obscure direction to troubleshoot this issue.

Not long ago, I purchased a heavily upgraded gaming rig. On my old desktop (HD 7750 with 1 GB of VRAM, i5-3470, 8 GB of RAM, Windows 7) I managed to achieve a refresh rate deviation of 0.2–0.3% using a nightly 1.7.0 release (1.7.0 Git d387cfbfb according to the log file).

However, since upgrading to an RX 570 with 8 GB of VRAM (the most noticeable performance upgrade), a Ryzen 5 2400G (3.9 GHz quad-core), and 16 GB of RAM (with a much better clock, and DDR4 as opposed to DDR3), I haven’t been able to get the deviation below 0.5%.

I’ve tried the release I mentioned above (1.7.0 Git d387cfbfb), the latest stable release of RetroArch for 64-bit Windows (1.7.6 as of this post), and a specific nightly build from two months ago (1.7.6 Git 9750719074). The first build, the one that has performed best overall, still never goes below 0.5%.

I’ve already disabled most of the overlays and compositing features that come bundled with Windows 10, disabled Windows’ fullscreen optimizations, and turned off every AMD driver setting I figured could interfere with RetroArch’s access to the hardware. No matter which version of RetroArch I use, which audio/video/menu drivers, or which settings, I haven’t been able to get back to the 0.2% deviation I found perfectly acceptable on my old rig.
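
For reference, this is roughly the sync-related chunk of retroarch.cfg I’ve been going back and forth on. The values below are just illustrative, not a dump of my actual config:

```
# Illustrative sync-related settings, not my exact config
video_driver = "gl"
video_fullscreen = "true"
# exclusive fullscreen, to keep the desktop compositor out of the way
video_windowed_fullscreen = "false"
video_vsync = "true"
# whatever the estimated screen framerate reads after sampling
video_refresh_rate = "59.950000"
# GL only
video_hard_sync = "true"
video_hard_sync_frames = "0"
video_frame_delay = "0"
audio_sync = "true"
audio_max_timing_skew = "0.05"
```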

Hell, even third-party software has given erratic results at best. Testing with the tools linked by user James-F (thank you a million times for your synchronization guide, dude) in this post has yielded the following results.

I made a Gist trying to log as much system information as possible. Again, sorry if it seems a bit overbearing or unnecessary; please tell me if the post is lacking information, or if some of it should be trimmed down for the sake of convenience/readability:

If you need me to post an HWMonitor or CrystalDiskInfo log, I’ll do so in a jiffy. I thought about adding it from the get-go, but the log seemed gargantuan, and the Gist is already a wall of text as it is, lol.

Most of the testing was done by letting the menu itself run for quite a bit more than the suggested 4096 frames. For in-game testing, I ran Castlevania III: Dracula’s Curse in Mesen. It didn’t seem worth logging, however: if even the idle menu can’t hold a stable enough framerate, in-game won’t be any better no matter what I do…

This is the current display I’ve been using. It’s the same monitor on both computers, the old one and the new one. The old connection was a DVI-I (24+5) to VGA converter (an analog signal, afaik). Currently I’m using a straight DVI-D (24+1) cable, with no conversion, from the GPU to the monitor. Maybe it’s the different kind of signal that adds “static” to the monitor’s measured refresh rate?

Again, sorry for asking here for help. I’ve been butting my head against the wall for a couple of months now, and I’m desperate to find the solution, or at the very least to be told there isn’t one. I just need closure on this issue. :frowning: If you guys require anything from me to help, I’m all yours. And if you’re reading this, and got all the way down here, and are even willing to help out: thank you so fucking much :heart_eyes:.

To my knowledge, it’s mostly up to the drivers, and GL is basically the worst (especially when it comes to AMD+Windows). Do you get any better results with vulkan or d3d11?
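
For example, you can switch it from the driver settings in the menu or directly in retroarch.cfg (just an example; try one at a time and re-check the estimated deviation):

```
# try each of these one at a time and compare the reported deviation
video_driver = "d3d11"
# video_driver = "vulkan"
# video_driver = "gl"
```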


Tried them both, but to no avail. The deviation just sits at 0.5% no matter which driver I use; switching between them in quick succession (opengl, vulkan, and d3d11) yields the same results. Vulkan also seems to crash whenever I set the “max_swapchain_images” setting to “1”, no matter which version I’m using.
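
In case anyone wants to reproduce the crash, this is the combination that triggers it on my end. My guess, and it’s only a guess, is that Vulkan simply won’t accept a single-image swapchain on this driver, so 2 is effectively the floor:

```
video_driver = "vulkan"
# setting this to "1" crashes on my setup; "2" and "3" work fine
video_max_swapchain_images = "1"
```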

At this point I’m fairly certain it’s some display + Windows setting / AMD driver shenanigans. I tried Lakka from a bootable USB and it hits 0.1% no problem (at least with an idle menu, that is). Though, to be fair, that one seems to have its own slew of problems, since the Ribbon menu shader appears glitchy and doesn’t render properly.

Should I give up trying to get Windows and AMD to cooperate with my current monitor, and just try to get a new display ASAP? Or will that not do anything?

P.S.: I’ve checked extensively for any kind of virus tomfoolery with Malwarebytes, HitmanPro, AdwCleaner, etc. The PC seems to be clean, so that shouldn’t be a factor.

I’m getting like 10% deviation in Ubuntu. 0.5% sounds pretty darned good to me /shrug