Which GPU is best for the Mega Bezel?

There is a reason why NVIDIA puts less VRAM on certain configurations: it depends on the memory bus.

The memory bus of the 4070 and 4070 Ti cannot support 16 GB; it can only be configured for 6, 12, 18 GB and so on, while the 4060 Ti's bus can also be built as 16 GB, so they offer that as an option.
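To make that concrete, here is a minimal illustrative sketch (Python; it assumes each GDDR6/GDDR6X chip sits on a 32-bit slice of the bus and comes in 1 GB or 2 GB densities, which is an assumption rather than vendor data) of why a 192-bit bus naturally lands on 6 or 12 GB while a 128-bit bus can also be built as 16 GB in clamshell mode:

```python
# Rough sketch (not vendor data): how memory bus width constrains VRAM options.
# Assumes each GDDR6/GDDR6X chip occupies a 32-bit slice of the bus and comes
# in 1 GB or 2 GB densities; "clamshell" mode puts two chips on one 32-bit slice.

BITS_PER_CHIP = 32
CHIP_SIZES_GB = (1, 2)

def vram_options(bus_width_bits, clamshell=False):
    """List the VRAM capacities (in GB) a given bus width can be built with."""
    chips = bus_width_bits // BITS_PER_CHIP
    if clamshell:
        chips *= 2
    return [chips * size for size in CHIP_SIZES_GB]

print(vram_options(192))                                      # 192-bit bus: [6, 12]
print(vram_options(128), vram_options(128, clamshell=True))   # 128-bit bus: [4, 8] [8, 16]
```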

Also, the 4000 series has a large amount of cache compared to the previous generation, which plays a role in this.

NVIDIA is also working on neural texture compression, which will help games keep fitting into a few GB of VRAM.

https://hothardware.com/news/nvidia-neural-texture-compression

This is also because large amounts of VRAM are especially valuable on server and workstation cards, and if gaming GPUs got all that VRAM it would undercut those products, which NVIDIA doesn't want.


The problem is that they chose to cut down the bus width on certain tiers of GPUs that previously had wider buses. A 70-class card was traditionally a card with a 256-bit bus for under US$400. Now it's 128-bit, and only 4 GB more VRAM after three long generations of graphics cards?

So they continue with the shrinkflation while increasing prices. Again, look at the performance gap between the 4090 and the rest of the lineup.

More GPU cache isn't going to prevent performance from tanking when there simply isn't enough VRAM to hold all of those high-resolution textures and more in present and future poorly optimized games.

Let me tell you, probably none of you here are bigger nVIDIA fanboys than me, but I refuse to support the nonsense. They've constantly been driving the prices of their products higher and higher. Scalping, the pandemic and the crypto boom were significant drivers of the exorbitant prices of graphics cards; those three things have all subsided, so why haven't the prices of new-generation GPUs come back down to normal?

But hey, we could keep this discussion going ad infinitum; let's not derail @HyperspaceMadness's Mega Bezel Reflection Shader topic.

nVIDIA has done an awesome job of commanding the market and entrenching loyalty in the minds of its die-hard customers. It got plenty of help from AMD's half-baked approach to many aspects of competition, its follower approach to many innovations, its general uncompetitiveness in key areas, and several missteps along the way.

If you adjust for inflation, we basically got the same prices as the 3000 series MSRPs.
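As a rough back-of-the-envelope check (all figures below are assumptions for illustration, not official numbers):

```python
# Illustrative inflation adjustment; the MSRP and CPI figures are assumptions.
launch_msrp_3070 = 499        # assumed 2020 USD launch MSRP of the RTX 3070
cumulative_inflation = 0.16   # assumed ~16% cumulative US inflation, 2020 -> 2023

adjusted_2023 = launch_msrp_3070 * (1 + cumulative_inflation)
print(f"RTX 3070 MSRP in 2023 dollars: ~${adjusted_2023:.0f} vs. $599 RTX 4070 MSRP")
```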

Then there's the bus width: on the new process node the die size is much smaller, so the bus width inevitably had to shrink as well. And because this generation runs more cache and higher frequencies, they needed to do this to favor card stability. This is, of course, progress in terms of performance per watt, because a smaller die ultimately consumes less and costs less to produce while inflation and supply-chain costs keep rising.

It's like on the CPU side: Intel doesn't shrink its chips and still manages great performance, but at absurd power consumption, while Ryzen shrinks down and keeps good performance at much better power consumption. On the GPU side it feels like NVIDIA is the Ryzen and Radeon is the Intel; in fact, Radeon decided to shrink only a small part of the die while keeping most of it at 5 nm and 6 nm, which is why we still see absurd power consumption on Radeon graphics cards compared to NVIDIA.

GPU cache does help a lot, in fact. When I tested RE4 on a 4070 vs. a 3080 12 GB, with the same in-game VRAM bar showing 13.5 GB needed for the game (so both should be out of VRAM), Afterburner reported the 4070 allocating 11 GB of VRAM while the 3080 was allocating 11.8 GB. On the same run, the 3080 12 GB crashed out of VRAM, while the 4070 at the same point read 11.3 GB. This test is repeatable, and in other better-optimized games too it feels like the added cache gives you around 1 GB more effective VRAM than before. Also, on 4000 series GPUs VRAM monitoring isn't really reliable: MSI Afterburner reads allocated VRAM, but that no longer means the GPU is actually using all of it the way it did on previous generations.

Then of course, at some point, if you run out of VRAM you have to adjust settings. But if you reach that point, why suggest Radeon 6000 series GPUs when they can't handle ray-traced games? It's like saying: go for a 16 GB 6000 series GPU but don't play with ray tracing, instead of buying a more than decent 12 GB 4000 series card that can do it all, even if in some games at 4K you might be forced to use DLSS Performance. Tweaking is part of the fun, but I would give more priority to a more balanced GPU. And it's not 16 GB or nothing: 12 GB is a great amount of VRAM and should be no cause for concern on an upper-midrange graphics card.


PCPartPicker is good for checking the prices of different PC parts across multiple sites. GPUCheck shows the performance of GPU and CPU combinations.

https://pcpartpicker.com/

Edit: Removed them. I didn’t know their reputation was so bad.

Man, for real, UserBenchmark is one of the worst tech sites ever made.

No one should ever mention it.


No, the reason we are seeing higher relative power consumption on the Radeon cards is that we're comparing last-gen AMD to current-gen nVIDIA. Last-gen nVIDIA cards, built on Samsung's 8 nm process, were actually more power hungry and less efficient than the Radeon RX 6000 series built on the superior TSMC 7 nm process.

Why do you think nVIDIA had to design such an exotic cooling solution for the RTX 3000 FE graphics cards?

Who says 6000 series GPUs cannot do RayTracing? That's not true. RayTracing performance per tier is lower on AMD, but a higher-tier RX 6000 graphics card like the Radeon RX 6950 XT can match a lower-tier nVIDIA graphics card in RayTracing.

In the Hardware Unboxed review I shared above, you can skip straight to the RayTracing results to see that you can do RayTracing on an AMD 6000 series GPU.

@1080p the RTX 4070 was 12% faster on average using RT.

@1440p it was 11% faster, while @4K it was 2% faster with RT enabled in the titles tested.

In that same video, if you switch to the 4K non-RT results you'll see that the RTX 4070 is 17% slower on average at 4K than the Radeon RX 6950 XT. Doesn't the user want to use the graphics card for 4K gaming?

At 1080p in non-RT titles, the Radeon RX 6950 XT's lead shrank to 12% on average, while at 1440p it increased to 15%. Notice that as the resolution increased, the performance gap between the RTX 4070 and the Radeon RX 6950 XT kept widening in the latter's favour. I wonder if that extra 4 GB of VRAM could have anything to do with that?

Here is another comparison video.

We got the same price for what tier of SKU, or what level of gen-on-gen performance increase?

I'd say we're getting less of a performance increase for most SKUs gen-on-gen, at much higher prices.

The 3000 series only looked better based on MSRP, or rather less bad, because of how horrible the value of the 2000 series preceding it was. In reality, however, what percentage of gamers were able to purchase any 3000 series GPU at anywhere close to MSRP?

Also, while the 3070/Ti matched the previous flagship 2080 Ti's performance at lower resolutions, nVIDIA chopped off 3 GB of VRAM, leaving 4K performance worse.

Ehm, I think you misunderstood: Lumen in Fortnite is not ray tracing; it is software based.

This generation is a sidegrade for sure, much like the 2000 series compared to the 1000 series, where you got little more in terms of performance but lower consumption, while leveraging features like DLSS and RT.

Same thing now compared to the 3000 series, where you get DLSS 3 as a bonus. Also, Shader Execution Reordering from the Ada Lovelace architecture is not widely implemented yet, and when it is, the gap in RT workloads will be even bigger.

The 4070 is even 40% faster in RT-heavy workloads compared to the 6950 XT, which also consumes twice the power.

I haven't misunderstood anything. Did you specify that you were speaking about a specific, cherry-picked scenario? I don't think you mentioned Lumen in our discussion. Was it mentioned in one of your external links?

I don't think "liking" is a factor here. It's about getting something for your money now that is most suitable for your needs and provides the best possible experience for what you spend.

They could never be “worthless” because they are all products that one can actually purchase right now and they have the potential to perform the tasks that the user is looking to perform. So they are viable and possibly very attractive options.

The release date of a product is actually irrelevant once it can perform the tasks that are required.

This might be true on paper, but from a practical perspective all one needs to consider is whether there is enough case cooling, space and power supply capacity to support said GPU. Hardly a universal deal-breaker to me.

Higher power consumption does not automatically equate to worse acoustics. That depends on the quality, effectiveness or focus of the card's cooling system.

You can have a card with lower power consumption with a crap cooling solution that is louder and a card with a noise optimized, higher quality or more robust cooler that is quieter even though the card might have higher power consumption.

You linked to the video from Daniel, with timestamps at Fortnite, when you were saying that the 6950 XT is better than or on par with a 4070 in a ray tracing workload; that's where I stepped in to correct the argument, since Lumen is not an RT workload.

By "if you like AMD" I mean: if you prefer VRAM over everything else, that's fine, but I would recommend waiting for the new generation from AMD, assuming the person asking for help is willing to wait.

About new gen vs. old gen: the next few months will see new graphics cards being announced and released. Again, if you consider a last-gen card only for its 16 GB of VRAM at a low price, you are limiting yourself a lot in terms of versatility. A GPU is not made of VRAM alone, and 12 GB is not a concern the way 8 GB is on an upper-midrange GPU, which is also the only new upper-midrange option available until AMD launches mid-range GPUs with 16 GB of VRAM.

On the budget given by the user, power consumption also matters, since you can go a grade lower on the PSU, case, fans and even the graphics card itself while getting similar or better performance across vastly different scenarios.

In fact, lower consumption does tend to mean better cooling, because the coolers are reused from higher-tier models, so they tend to perform better in cooling and acoustics (coil whine aside), since they provide more cooling capacity than the card actually needs.

The recommended 4070 shines especially when you watch the 4060 Ti review videos, so that's even more of a plus for it.

Also, the upcoming 7600 from AMD will have 8 GB and will have to make do with it. So if you can wait, it is better to have the options laid out; otherwise, for the stated budget, the 4070 is the best new card available.

Talking about the used market on the internet, where every region has its own pricing and local vendors may or may not have a given deal, is a bit limiting, as we don't have all the options in front of us to get the full picture.


I didn't say the 6950 XT was better than or on par with a 4070 in a RayTracing workload.

This is what you said:

I would admit that I ignored the first part of your sentence but this was actually my response:

I never compared the 6950XT’s RT performance to an RTX 4070 in that statement.

I posted several videos at that point, all showing RT performance in some part of the video. I should have edited my introduction to the videos to pluralize it.

I don’t think I mentioned the used market specifically at all. I wasn’t even considering used graphics cards as a recommendation in this discussion.

Here I was pointing out that the 6950 XT is not an RT performer in RT-focused titles that lean less on rasterization. Saying "RT was enabled" can be a gimmick, because enabling only RT shadows, or only reflections, and so on doesn't depict the GPU's power in real RT workloads. In 2023, building a PC that cannot sustain RT-focused workloads is a bit pointless, and if you only care about competitive play with RT off for minimum latency, then you are probably bottlenecked first by your monitor's refresh rate anyway, because every GPU can handle more than what your OLED TV is capable of in those scenarios.

New GPUs from official vendors can never be discounted that heavily, so when you talk about great price-for-performance on last-gen GPUs, that obviously means used cards. New cards from the previous generation at official vendors often stay priced even higher than newly released cards.

I don't think this is wholly accurate, and I already stated that I was never considering the used market in my last-gen value comparisons.

If someone is on a moderate budget and wants a nice all-rounder, then some previous-gen cards seem like a good idea, starting with the RTX 3060. Of course the RTX 4070 is a great card, but at double the price.


I thought this was the "best GPU for the Mega Bezel" thread! :thinking:

Which GPU do I need to run the Mega Bezel at 4K under RetroArch on Linux? I always test my stuff with Raspberry Pis, iGPUs or in VirtualBox, so the GPU shouldn't be too expensive.


Listen, it started off that way, but the original poster said he wanted to use his system for 4K PC gaming as well, then…

This should answer your question:

Also, the original poster asked about being able to run the most demanding Mega Bezel Reflection Shader presets at 4K. The Mega Bezel Reflection Shader also caters to multiple lower performance tiers.


The user asked for options in the 4070 Ti class; that's why, by comparison, a 4070 feels like a better deal than the 4070 Ti. A 3060 is too cut down for the budget in question.

@gizmo98 a 4 GB card cannot handle the reflection and smoothing presets, or even a normal preset with full scaling (i.e. with the side graphics), so I would say 8 GB as a minimum, and most likely 10 GB if you also do heavy upscaling on PS1 or PS2 cores just to run them.

As far as performance goes, the more the merrier, especially with shader presets, which are heavy on certain systems. For 4K, maxed-out presets and heavy system cores, I would personally suggest 3070-class performance, although that card only has 8 GB of VRAM. From what I was able to test, the AMD 6800's performance at 4K is perfect for maxed 4K shaders and heavy systems, so if you make emulation the primary gaming focus of your build, that is probably the ultimate card so far.

You can get away with an 8 GB card if you cut some corners in the shader, i.e. max presets but without the background graphics included in the scaling, as that is very resource heavy. And of course, if you are not planning on doing heavy emulation at 4K, you can also get away with far less performance, even a 3060 12 GB for instance, or an AMD 6700 XT, depending on their current prices.
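If it helps anyone tuning for VRAM, the usual way to drop to a lighter Mega Bezel tier is a small user preset that points at one of the lower-cost preset families (STD, STD-NO-REFLECT, POTATO) via a #reference line. Here is a minimal sketch; the path and file name shown are assumptions and vary between Mega Bezel versions and installs:

```
# hypothetical user preset, e.g. my_lighter_preset.slangp
# (adjust the path to match your shaders_slang install and Mega Bezel version)
#reference "shaders_slang/bezel/Mega_Bezel/Presets/MBZ__3__STD.slangp"
```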

Hello to all and thank you all for your work,

Personally, for retrogaming I mainly use an i5-10400F coupled with an RTX 3050.

I see no difference compared to my other config (an i5-11600K and an RTX 3060 Ti) when it comes to the Mega Bezel, even with demanding 4K settings.

Why would you want more hardware power, other than to consume more electricity? For retrogaming you just need the right hardware for the experience you want, without overpaying.

I'm really blown away by the performance of the koko-aio shader, with a visual result equivalent to the Mega Bezel shader.

I mainly use Sonkun's presets because I'm not a fan of reflections on the edges. I should mention that I have a Sony Trinitron KV-29C5B and have never seen this type of reflection around its frame; maybe BVMs or PVMs with a deeper frame show this kind of reflection.

Mind you, this is not a criticism of any particular shader; there is something for every taste.


OK, maybe I will try the Mega Bezel in the future. A 4 GB card and a dGPU over 400€ are not an option at the moment. My Braswell NUC with 8 GB of RAM fits quite nicely behind a TV and can run some shaders at 4K (and if not, at 1080p). A Pi 4 is slow at 4K even without shaders.


This is what the original poster asked:

I don't know how to quote, so I post images. However, mate, you seem a bit disconnected from the chat… you always reply forgetting what has already happened earlier in the thread.