Graph showing how much compute power is needed to emulate a system

Hi Everyone,

I am trying to make a graph to show how much compute power you would need to “perfectly” emulate a system (per core), based on the PassMark CPU Mark, assuming a basic GPU for now. There would be one graph per CPU architecture, though for now it would just be x86_64 and maybe ARM.

For each system (per emulation core), the graph will show the power needed to “perfectly” emulate the easiest games, the most demanding games, and the average power needed to emulate ~80% of the titles under RetroArch.

By “perfectly”, I mean the best possible result for each emulation core, WITHOUT shader overhead, at the console’s native resolution. A different graph would be used for upscaled resolutions, since it might also need to list the required GPU power as well.

The biggest questions are: 1.) does such a graph / info already exist? 2.) what would be the best way to test this? Is there something built into RetroArch I can use?

No such graph/info exists, no.

If you disable vsync and audio sync (i.e., the same as hitting fast-forward), it will run as fast as it can. On very, very fast cores, this will bottleneck at blitting the video (on my machine, it’s somewhere around 1100 fps), but as long as it’s bottlenecking at the actual emulation, it should be fairly linear. That is, if you get 600 fps, 1/10 of that power should get you 60 fps (though you probably want to leave some headroom for background tasks, power fluctuations, etc.).
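
For what it’s worth, here’s a minimal sketch of that arithmetic in Python, assuming the linear scaling described above actually holds and that single-thread PassMark score scales with it (both assumptions, not guarantees):

```python
def required_cpu_fraction(uncapped_fps: float, target_fps: float = 60.0,
                          headroom: float = 1.25) -> float:
    """Estimate the fraction of the test machine's per-core power needed
    to hit target_fps, assuming emulation speed scales linearly with
    single-core performance. headroom pads for background tasks, etc."""
    return (target_fps / uncapped_fps) * headroom

def required_passmark(uncapped_fps: float, test_machine_score: float,
                      target_fps: float = 60.0, headroom: float = 1.25) -> float:
    """Translate that fraction into an estimated single-thread PassMark score."""
    return required_cpu_fraction(uncapped_fps, target_fps, headroom) * test_machine_score

# Example: a core runs uncapped at 600 fps on a CPU with a
# single-thread PassMark of 3000; estimate the score needed for 60 fps.
print(required_passmark(600, 3000))  # -> 375.0, including 25% headroom
```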

I would just pick a few games per system you want to test, script it to run those games, and then take an average. There’s a command-line option to run for X frames, take a screenshot, and exit, so if you have the FPS counter enabled, it should be included in the shot, AFAIK.
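
Here’s a rough Python sketch of that scripting approach. The `--max-frames` / `--max-frames-ss` / `--max-frames-ss-path` flags are the “run for X frames, screenshot, exit” option mentioned above (check `retroarch --help` on your build to confirm they’re available); the core and ROM paths are placeholders:

```python
import subprocess
from pathlib import Path

RETROARCH = "retroarch"                      # or a full path to the binary
CORE = "/path/to/cores/snes9x_libretro.so"   # placeholder core
GAMES = ["/path/to/roms/game1.sfc",          # placeholder ROMs to average over
         "/path/to/roms/game2.sfc"]
FRAMES = 3600                                # ~60 seconds of emulated time at 60 fps

def run_benchmark(rom: str, shot_dir: Path) -> Path:
    """Run one game for FRAMES frames, then screenshot and exit.
    With vsync/audio sync disabled and the FPS counter enabled,
    the uncapped fps should be readable from the screenshot."""
    shot = shot_dir / (Path(rom).stem + ".png")
    subprocess.run([
        RETROARCH, "-L", CORE, rom,
        f"--max-frames={FRAMES}",
        "--max-frames-ss",
        f"--max-frames-ss-path={shot}",
    ], check=True)
    return shot

if __name__ == "__main__":
    out = Path("shots")
    out.mkdir(exist_ok=True)
    for rom in GAMES:
        print("saved", run_benchmark(rom, out))
```

From there you could read the fps numbers off the screenshots (manually or with OCR) and feed them into the scaling estimate above.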