Will you take a look at other cores such as Nestopia and Snes9x to see if their audio glitches are caused by the same issue as in QuickNES?
If you need it, I’ll try to help out with testing this and also doing camera tests. Unfortunately, I have far too little time to help out with the coding (kids… ).
Fascinating, but I’ll be the first to say that this input lag “fast-forwarding” is objectively cheating. Not that I care, but it’s an “overclock” to get information sooner than designed. Were these games developed on digital computers synchronizing output to X Hz? Then yes, speeding up the game is objectively cheating.
I won’t be using this because I care about smooth motion and audio, but great for speed runners! Very cool.
The visuals and audio are actually perfectly smooth with this when emulating simple systems, once the bugs are knocked out of the cores. I can’t hear any audio irregularities or missing portions of sound effects, and I can’t see any visual irregularities, like moving one pixel too far and then snapping back.
It’s removing a lag frame, and there are no mispredicted frames.
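For the curious, here’s a minimal sketch of what that loop can look like per host frame, assuming a libretro-style core. retro_run(), retro_serialize_size(), retro_serialize() and retro_unserialize() are actual libretro calls; poll_input() and set_av_output() are hypothetical helpers standing in for however the frontend reads the pad and gates video/audio output:

```c
#include <stdbool.h>
#include <stdlib.h>
#include "libretro.h"

/* Hypothetical frontend helpers (not part of libretro): */
extern void poll_input(void);
extern void set_av_output(bool video, bool audio);

void run_host_frame(int runahead_frames)
{
    /* 1. Advance the true state by one frame. Keep its audio so the
     *    sound stays continuous, but don't display its video. */
    poll_input();
    set_av_output(false /* video */, true /* audio */);
    retro_run();

    /* 2. Snapshot the true state. */
    size_t size  = retro_serialize_size();
    void  *state = malloc(size);
    if (!state)
        return;
    retro_serialize(state, size);

    /* 3. Run ahead N frames on the same input; only the last frame's
     *    video is shown. As long as the game ignores input during its
     *    internal lag frames, nothing here is ever mispredicted. */
    for (int i = 0; i < runahead_frames; i++) {
        bool last = (i == runahead_frames - 1);
        set_av_output(last /* video */, false /* audio */);
        retro_run();
    }

    /* 4. Restore the true state; the next host frame repeats from here. */
    retro_unserialize(state, size);
    free(state);
}
```

Since the displayed frame is always N frames ahead of the state we keep, your latest input shows its effect N frames sooner than it otherwise would.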
If it’s being used to reduce latency that’s intentionally added for difficulty, sure, you can make a case that it’s cheating, especially if you’re already in a close-to-original-latency setup. However, the reason I started thinking about it is that most modern displays have >=1 frame of latency even in the best of circumstances, so you’re getting a latency double-dip: X unnecessary frames from the game plus Y frames from the TV (e.g., two frames from the game plus one from the TV is ~50 ms at 60 Hz). Reducing that total to just X (cancelling out the display’s share) or even to (X-1)+Y isn’t really cheating, IMO, though it is taking the latency from an area that can’t be touched on original hardware.
Yeah, I guess the video and audio will both be smooth. (Yay, a new enhancement for me!) It’s just that the video is compensating for latency in the display and/or in the game logic. Couldn’t this also make some parts of the game more difficult, like giving you less time (a few frames) to react to projectiles? Of course, that’s offset by the fact that your input also takes effect equally sooner.
Actually, depending on how many frames you configure to compensate for a game’s delayed logic, you’re creating a bigger window of opportunity for an unsolvable state. Like saving a state right before an inescapable death, but exacerbated.
I don’t think what I said was backwards. At the start of emulation (i.e., right after loading a state, as normal), you won’t be able to see the current actual state. This is offset by having your input take effect “immediately” once you do see the future state. Not being able to see the current state at the start is a very brief and slight handicap.
So technically, it is “harder,” momentarily, at the start of emulation. Yes, pedantic. Don’t care.
well, as long as this will be an “optional” setting then i don’t mind the feature… since this changes the general “timing” of how the actual console is supposed to react (assuming, e.g., that the core already has the most accurate timing of its input emulation).
Using this function to compensate for a single frame of latency, you’d hardly have to worry about “cheating”. Even if you have a zero-latency display, a controller polled at 1 kHz, and frame delay maxed out, you’d only be looking at ~10 ms faster response than the real thing. Also, if you use this feature to compensate for a single frame of latency but don’t also use frame delay, it’s impossible to get lower latency than the real thing.
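To put rough numbers on that (all values are assumptions for illustration, not measurements):

```c
#include <stdio.h>

int main(void)
{
    const double frame_ms    = 1000.0 / 60.0; /* one 60 Hz frame, ~16.7 ms, removed by the feature */
    const double usb_poll_ms = 1.0;           /* controller polled at 1 kHz */
    const double residual_ms = 6.0;           /* assumed slack left between input read and vsync,
                                                 even with frame delay maxed out */

    /* Best-case advantage over real hardware: the removed frame,
     * minus the latencies the emulator setup still pays. */
    printf("advantage ~= %.1f ms\n", frame_ms - usb_poll_ms - residual_ms);
    return 0;
}
```

That prints `advantage ~= 9.7 ms`, i.e. roughly the ~10 ms figure above.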
It’s sort of hilarious that we’re already worrying about cheating because it’s too fast, when traditionally it’s all been about how emulation sucks because of input lag.
And what about all the arcade game conversions to home consoles/computers? In some cases the original arcade game has 2 frames of internal lag whereas the home port has 3 frames of internal lag.
Okay, sounds good. If you provide an x64 Windows build when you think you’ve found and fixed the issue, I’ll be happy to test it and perhaps also do a camera test.
glad to see work being done on this, this is the same method i described in the input lag thread on this forum a few years back:
i’ve toyed with the idea as well since i wrote that post, with results and issues similar to this. however, i gave up before coming up with a way to implement the idea cleanly and/or efficiently, let alone address the problems. what really discouraged me, other than a severe lack of time in general, was the fights i knew i was going to have in order to get anything relating to the hack merged anywhere.
with that said, let me offer you this advice: if you’re going to pursue this, make sure you implement it as cleanly as possible. while people like us may think this is a cool idea, it’s still kind of a nasty hack, and judging by past arguments i’ve had with various emulator authors/maintainers, most are going to be reluctant to merge any changes having to do with it. the less code you touch and the nicer it is, the better the odds that various maintainers will adopt it. i think your best bet would be to focus solely on libretro curated/downstream projects to start with, as the libretro devs are the ones most open to these kinds of enhancements.
anyway, good luck! wish i could offer more than just words of encouragement, but if you ever need someone to back you up on the merits of this when trying to convince maintainers to adopt your changes let me know.
on a fast enough, well-supported, and properly configured machine it can accomplish just that. (not by much, though… but still cool to brag about to the “purists”)