Building new emulator PC, go with 3060Ti or 6700XT?

My new project is building a two player arcade cabinet that I am going to stick a PC into and use LaunchBox+RetroArch+all the other emu cores. While the main goal of this arcade will be to use some light guns and play various shooters on PS1 and PS2, I do plan on being able to use the joysticks and buttons to play some SNES, N64, and Gamecube games as well.

I am trying to figure out what GPU to get for this emulator PC. I will be upscaling every game I can up to 1440p resolution, so that plus filters will require a bit of GPU grunt. It has been a long time since I have had an AMD card, but looking at availability, price, and performance today, I see that a 6700XT performs a bit better on average in modern games than a 3060Ti does, uses a little less power, and is cheaper as well. But the real question is: how do AMD cards perform in the various emulators I plan on using? Does anyone have experience using 6000 series AMD cards while upscaling in things like PS2 and Gamecube emulators? Do they run into any weird graphical issues?

I cannot say for certain, but it appears from recent conversations that AMD's OpenGL drivers are lacking a bit compared to Nvidia's. There have been reports of cores that force glcore failing to run.
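
If the glcore situation is a concern, one mitigation worth knowing about: RetroArch lets you set the video driver globally, so you can prefer Vulkan and sidestep the GL path on cores that support it. A minimal retroarch.cfg sketch (standard RetroArch option; cores that hard-require glcore will still trigger a driver switch):

```
# prefer Vulkan over gl/glcore where the core supports it
video_driver = "vulkan"
```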

Legend has it you’ll need at least a 4x RTX 4090 crossfire setup :stuck_out_tongue_winking_eye: The 4090 only has the price tag of a new car these days :thinking:

No seriously, any GPU will do just fine… Except when you start using additional → Shaders :flushed: to polish the in-game graphics somewhat…

Then, it’s a different story…

I saw my GPU usage peak from between 5-10% max to 40-75%, depending on which shader preset (passes ←) you use!!
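
That jump makes sense: every pass in a preset is a full-screen shader stage the GPU runs each frame, so multi-pass CRT presets multiply the load. A minimal two-pass .slangp sketch (the file paths are illustrative, not exact paths from the slang-shaders repo):

```
# each shaderN line is one GPU pass per frame
shaders = "2"
# pass 0: CRT effect, scaled to the viewport
shader0 = "crt/crt-geom.slang"
scale_type0 = "viewport"
# pass 1: final color/adjustment pass
shader1 = "misc/image-adjustment.slang"
```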

And I’m talking about a new RX 6600 PCIe 4.0 card I recently purchased…

Take care of yourselves & each other…

cheers TD

A 3060 is overkill.

I have a 1060 and still have a lot of breathing room for more advanced emulators like Cemu and Yuzu.

For PS2 and Gamecube/Wii upscaling even a 1030 should be more than enough.

And then there is the PS3 project, which is still in its infancy! But it looks very promising, I’m running many games at full speed and upscaled x3 in RPCS3 with my card (RX 6600)… Good value card, price/performance wise… :kissing_smiling_eyes: :ok_hand:

cheers TD

I guess I should have just asked AMD or Nvidia so this didn’t turn into a discussion on whether the card choice is overkill or not. lol. But I thought a generic “AMD vs Nvidia” would turn into a war too. Sure, the cards might be a bit overpowered, but I do want a very smooth, constant 60fps even when upscaling the resolution and using shaders, so I figured better too powerful than not enough.

So we have one person saying AMD’s OpenGL drivers have been having some issues lately, and another saying an RX 6600 is working well. Hmm. I would like to go with the 6700 XT, as it is the same price but offers better performance and lower power draw. I was just worried about whether there was any weirdness with their emulator support, as I have often seen that even though AMD has the performance there in games, it would only run at like half the speed it should in various other tasks like encoding or GPU compute applications. I guess that was just a dumb question on my part, because emulator game rendering is still just rendering graphics.

But that’s the point. If you plan to emulate the PS2/Gamecube with shaders + upscaling, a 1060 is still more than good enough with plenty of breathing room left.

In most cases, the bottleneck with emulation is the CPU, not the GPU. So you could save some money on the GPU and get a faster CPU, like an i5 12400, which should be the most efficient choice (6 very fast cores, which is all you need for emulation, and no “efficient cores” nonsense).

If I were to build a PC just for emulation, I would go with a 12400 CPU and a 1660 GPU (the fastest non-RTX card Nvidia has), which should be perfect for upscaled emulation and higher FPS for systems up to 360/PS3 and Switch.

You can’t go wrong with an even faster GPU, but I doubt it will ever reach the usage you expect.

It’s your money though.

As for your on-topic question, go for Nvidia for its better GL performance.

And one last thing: I assume you plan to emulate those systems in RetroArch? Because if you do, you are not going to have a good time. The Dolphin core is based on a 2+ year old version, and even then a few things don’t function as they should and there are additional graphics bugs.

The PCSX2 equivalent in RetroArch is also in an alpha stage with various issues (like how it gets stuck in the background after quitting), and nobody knows when a proper core will be made.

So maybe you have to stick to the standalones for the time being. Thankfully, they both have some sort of shader support, though not as extensive as RetroArch’s.

I definitely remember AMD being more troublesome with emulators and having inferior OpenGL, but this was years ago. Still, if you suspect it may cause problems, I’d just go with Nvidia. Upscaling to just 1440p is underutilizing the RTX 3060, but perhaps you want to connect or upgrade to a 4K display in the future, or use downsampling. Otherwise, you might as well just stick to something like an RTX 3050.



Why worry about OpenGL problems :sweat_smile: ? More and more new console emulators, and new revisions of old emulators, are switching to → Vulkan :smirk: :ok_hand: … e.g. PSP, PCSX2, RPCS3, the list goes on…

Planet Vulkan is the future, the needs of the many outweigh the needs of the few :vulcan_salute: :grin:

And from my experience, I have noticed that using Vulkan requires fewer CPU resources.
Which is not to say that CPU is suddenly unimportant!

I recommended the 6000 series GPU also because it has the RSR feature, something that other AMD GPUs lack (not supported in the driver for non-RDNA GPUs). RSR (Radeon Super Resolution) is a technique to upscale, for example, 1080p to 1440p in PC games, to maintain or even improve FPS at higher resolutions.

And it’s not a heavily priced card at that, too…

I don’t know if Nvidia cards have a similar upscaling technique? Not talking about IN-GAME settings like FSR, DLSS, etc., but pure GPU driver based…

Because in-game there are settings like DLSS (Nvidia) that do a similar thing to improve FPS, in a nutshell. But developers need to implement that in their games for Nvidia cards to make use of it…

Whereas AMD can apply RSR to ANY game with the flip of a switch :yum: :ok_hand:

So, choose wisely :woozy_face:

Cheers, TD

The 6700XT should be fine and would have a lot of extra headroom for running at 4K with the most advanced shaders currently on the planet. You might also be able to get by with a 6600, but you may not have the extra headroom if you want to stream at 4K and high bitrates using the GPU at the same time. I’m not even sure having a faster 6700XT would help in this regard, because I’m not 100% sure how encoding works on AMD cards. What I do know for certain is that their encoding quality and efficiency are currently worse than both Intel’s and nVIDIA’s, and they have a lot of catching up to do.

If you go with an Intel non-F CPU, you can take advantage of the dedicated encoding block in the IGP to more or less negate this shortcoming. If you went with an nVIDIA GPU, you could easily use an AMD CPU and not have to worry about encoding performance.

Then again, in general nVIDIA GPUs are more expensive for the rasterization performance they offer and have higher driver overhead, so AMD GPUs tend to perform relatively better when paired with older/slower CPUs.

So those are most of the pros and cons here. You don’t have to pay more for a 3060Ti in order to have the ultimate emulation experience; you can step down to a cheaper nVIDIA RTX 2060 (12GB or 6GB), 1660 Super or Ti, RTX 3060 or RTX 3050 and be just fine.

As for the CPU, you don’t need much. If you go with Intel, don’t leave out the IGP, because it can come in handy. Anything in the XX400 range from the last few generations should be fine. With AMD, the X600 range would be excellent from the 3600 onwards.

So just consider all things once more and make an informed decision but don’t overspend unless you know you’re going to be doing other things with the system besides emulation.

You can take a look at my shader preset pack to see a good example of what can be achieved with the different performance tiers of probably the most advanced Shaders/Shader combinations in the world - HSM Mega Bezel Reflection Shader.

I don’t generally use curvature, but certain types of curvature need some serious GPU grunt if you’re also using the highest Mega Bezel performance tiers. Do note that the highest performance tiers are not a requirement, since the lower ones can look very close in terms of the quality of the CRT effects.

FYI, when you’re talking about video encoding… For the best quality, encoding has to be done by the CPU, not the GPU… Although a GPU+CPU combo can speed up the process quite a bit… in my experience, the results aren’t that great compared to CPU → ONLY encodes…

i.e. artifacts, color banding, compression noise → filesize, etc… etc…

Just saying…

I have 2 videos here:

One was encoded using an nVIDIA GeForce GTX 1070 and the other, an AMD Ryzen 5 5600X.

One had an average encoding speed of 3.66 fps while the other had an average encoding speed of 60 fps.

Can you tell which one is the GPU encoded file and which one is the CPU encoded file?
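
Whichever is which, those two speeds make a huge practical difference over a full-length encode. A quick back-of-the-envelope sketch (the 2-hour, 24 fps movie is my assumption, not from the posts above; only the fps figures come from the thread):

```python
FRAMES = 2 * 60 * 60 * 24  # assumed 2-hour movie at 24 fps = 172,800 frames

def encode_hours(speed_fps: float) -> float:
    """Wall-clock hours to encode the whole movie at a given average encoder speed."""
    return FRAMES / speed_fps / 3600

print(f"{encode_hours(3.66):.1f} h")  # ~13.1 hours at 3.66 fps
print(f"{encode_hours(60):.2f} h")    # 0.80 hours at 60 fps
```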

First off, the “flicks” look like animated GIFs, a bunch of still images stitched together LoL!! 2nd, the color palette of a game is much different from that of cartoons or live action movies!!

Really, the best way to compare GPU vs CPU is to use VCEEnc/NVEnc (depending on which card you have) on live action, motion-captured content at 1080p res…

Not to mention the DIFFERENCE in →→ FILEsize ←← !! I noticed at least a 100MB gap between your samples up there…!!!

And very SHORT samples at that, too; extrapolate these samples to the duration of a movie!!! The difference in filesize will be huge!!!
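
For what it’s worth, the filesize side of this is just arithmetic: at a constant average bitrate, size scales linearly with duration, so any gap in a short sample balloons over a full movie. A rough sketch (the 2500 kbit/s figure is just an example; container overhead is ignored):

```python
def size_mb(bitrate_kbps: float, seconds: float) -> float:
    """Approximate file size in MB for a constant average bitrate
    (kbit -> kByte -> MB; container overhead ignored)."""
    return bitrate_kbps * seconds / 8 / 1000

# a 60-second sample vs a 2-hour movie at the same 2500 kbit/s average
print(size_mb(2500, 60))        # 18.75 MB
print(size_mb(2500, 2 * 3600))  # 2250.0 MB
```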

Compression + quality = CPU encode, buddy…

Use CloneBD or Hybrid, among others that support GPU encoding… Create a test sample using a → very low-to-medium bitrate (2500 - 3500 kbit/s avg, VBR), and upload it to me for comparison…

Test samples of at least 1080p res, because at that bitrate you’ll notice the difference more easily…

cheers, TD

All I see in that comment is a bunch of “noise” and deflection from my simple question at the end of the post. The source doesn’t matter and you seem to be missing the point.

You still haven’t told me which one you think is the GPU encoded file and which one is the CPU encoded file and why.

Anyway, the point is that while there might be a benefit in terms of achieving the absolute best possible encoding quality and the most extreme small file sizes by using CPU based encoding, this can come at a great cost in time. If a user already has a system with a decent GPU (it doesn’t have to be spectacular), they can achieve a more than acceptable level of quality, compression ratio and, very importantly, encoding time using whatever resources they have at their disposal.

Good or acceptable quality and file sizes are all subjective but they’re not things that are exclusive to CPU based encoding for all users.

I like to repurpose older systems with fewer, slower cores for use as emulation and streaming machines. The CPUs are more than up to the task of emulation + streaming (but not all of that plus encoding), and when paired with a modern GPU, the user has the option of offloading encoding duties to the GPU, or to the Intel IGP if using an Intel CPU.

This is why I gave my original response comparing GPU encoding resources based on the different GPU choices the OP had to make showing pros and cons of going with either vendor and how it was possible to negate some of the cons depending on which CPU choice the OP went with.

So please don’t take my original response out of context and attempt to turn it into a pissing contest between CPU (software) and GPU (hardware accelerated) encoding choices. The original post didn’t even mention streaming. That was just a side note.

Very easy: the smaller flick should be the CPU encode… if it was encoded with the very same preset/settings (bitrate, fps, maybe filters)!!

Well, true craft takes time… AND usually → money :smirk: :ok_hand:

Yes and NO!! You can’t have BOTH, especially when using GPU only (my point exactly)!! However, GPU encodes are just great for streaming/broadcasting, yes… But for your favorite media content that has to last decades, e.g. favorite movies, TV series, family recordings, etc… you want the best of the best out of it, REALLY… → quality as CLOSE as possible to the ORIGINAL… but with the smallest filesize possible… You’ll never achieve BOTH with current GPU tech… NO WAY…

It was not diz duck’s intention to flush your response down my toilet…
Even though my bowl wouldn’t have the slightest difficulty doing JUST THAT :point_up: :smile: :pinching_hand:

Indeed, just like you were trying to make a point, diz duck merely stated the facts of the difference in low bitrate encodes between GPU & CPU. The differences are just huge in quality and in size when it comes down to very low bitrates.

And believe me, I have experimented a lot… Even using FILTERS to clean up the source right BEFORE the encode process won’t favor the GPU encode in any way (macro blocks, compression noise, etc…)!!

Therefore, GPU encoding is a No-GO-SHOW… oh nose :lying_face:

cheers, TD

I would argue that because of this issue, you are only trying GPU encoding in low budget software, which isn’t a valid sample.

You get exactly what you pay for. Trust me when I say that GPU encoding in high end software is more than acceptable.