Using the Beetle PSX HW core, on PS1 2D games, when I increase the resolution the characters become more pixelated. Why? Shouldn't it be the opposite?
If the characters are sprites, they won't be scaled up. Er, they will, but they won't get any new pixels, if that makes sense. Increasing the internal resolution only adds detail to 3D geometry; 2D sprites are pre-drawn bitmaps, so scaling them just makes each original pixel bigger.
The same thing happens if you take a PNG image and scale it up 10x. It will look all pixelated since there’s no extra data to fill in the image. This is why there are various image filters that blend/blur the pixels.
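Here's a minimal Pillow sketch of exactly that (the sprite.png filename is just a placeholder for any small image): nearest-neighbor scaling makes the image bigger without adding any information, and a blending filter like bilinear only trades the blockiness for blur.

```python
from PIL import Image

sprite = Image.open("sprite.png")          # e.g. a 32x32 sprite
w, h = sprite.size

# Nearest-neighbor: every source pixel becomes a 10x10 block of the
# same color -- the image is bigger, but carries zero new information.
blocky = sprite.resize((w * 10, h * 10), Image.NEAREST)

# Bilinear: blends neighboring pixels, trading blockiness for blur.
# Still no new detail, just a smoother presentation of the same data.
smooth = sprite.resize((w * 10, h * 10), Image.BILINEAR)

blocky.save("sprite_blocky.png")
smooth.save("sprite_smooth.png")
```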
What would be interesting would be to use the Tensor Cores in Nvidia's Turing GPUs to apply AI inference to this.
Neural networks have achieved incredible results in upscaling graphics, and they're still improving constantly. Now, I'm not an expert, but the problem could be that making them usable for emulation purposes, i.e. running in real time, would need more processing power than is practical for now. Pre-upscaling the original material extracted from the game data, then overriding the textures during emulation, is the way to go.
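As a rough sketch of that offline workflow (the folder names are hypothetical, and Pillow's LANCZOS filter is just a stand-in for a real AI upscaler like ESRGAN, which you'd run on each texture instead):

```python
from pathlib import Path
from PIL import Image

DUMP_DIR = Path("texture-dump")            # textures dumped by the emulator
REPLACE_DIR = Path("texture-replacements")  # folder the emulator loads overrides from
SCALE = 4

REPLACE_DIR.mkdir(exist_ok=True)

for tex_path in DUMP_DIR.glob("*.png"):
    tex = Image.open(tex_path).convert("RGBA")
    # Stand-in for the AI upscaling step, done ahead of time rather
    # than in real time during emulation.
    big = tex.resize((tex.width * SCALE, tex.height * SCALE), Image.LANCZOS)
    # Keep the original filename so the emulator can match the texture
    # and override it at load time.
    big.save(REPLACE_DIR / tex_path.name)
```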
The NNEDI3 shader is apparently based on a neural net thingie, just with the weights hardcoded. That’s why if you look at the shader source, it’s just line after line of nonsense numbers. I believe MadVR has some others like it that are specialized for anime, as well.
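Just to illustrate the baked-in weights idea (this is not NNEDI3's actual network, and the numbers below are made up; in the real shader, those constants are its trained weights):

```python
import numpy as np

# Hardcoded weights for a toy predictor that guesses a new pixel from a
# 2x2 neighborhood (4 inputs -> 1 output). In a real shader these would
# be the "line after line of nonsense numbers".
W = np.array([0.31, 0.19, 0.27, 0.23])   # made-up trained weights
B = 0.005                                 # made-up bias

def predict_pixel(neighborhood):
    """Interpolate a new pixel from 4 neighbors using fixed weights."""
    return float(np.dot(W, neighborhood) + B)

# Example: predict a pixel sitting between four gray values.
print(predict_pixel(np.array([0.2, 0.4, 0.4, 0.6])))
```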
Ultimately, the neural net stuff isn't all that different from, say, xBR or ScaleFx; they just use a machine to come up with the patterns and rules instead of a person painstakingly figuring them out.
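To make the hand-crafted side concrete, here's Scale2x/EPX in Python, a much simpler ancestor of xBR but the same spirit: a person stared at pixel neighborhoods and wrote explicit rules for when to round an edge off.

```python
def scale2x(img):
    """img: 2D list of pixel values; returns a 2x-scaled 2D list (EPX)."""
    h, w = len(img), len(img[0])
    out = [[0] * (w * 2) for _ in range(h * 2)]
    for y in range(h):
        for x in range(w):
            p = img[y][x]
            # Neighbors, clamped at the borders.
            a = img[max(y - 1, 0)][x]          # above
            b = img[y][min(x + 1, w - 1)]      # right
            c = img[y][max(x - 1, 0)]          # left
            d = img[min(y + 1, h - 1)][x]      # below
            # Hand-written edge rules: copy a neighbor into a corner of
            # the 2x2 output block only when it looks like a diagonal
            # edge; otherwise just repeat the source pixel.
            tl = a if (c == a and c != d and a != b) else p
            tr = b if (a == b and a != c and b != d) else p
            bl = c if (d == c and d != b and c != a) else p
            br = d if (b == d and b != a and d != c) else p
            out[y * 2][x * 2], out[y * 2][x * 2 + 1] = tl, tr
            out[y * 2 + 1][x * 2], out[y * 2 + 1][x * 2 + 1] = bl, br
    return out
```

xBR works the same way, just with a much larger hand-tuned rule set over bigger neighborhoods; the neural upscalers effectively learn that rule set from training data instead.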
What sorcery is this?
Some of the AI-generated HQ textures look like texture replacements to me.
It would be interesting to see how fast this could be done on Nvidia's Turing GPUs with their AI-driven Tensor Cores.
Yes, those are texture replacements, just like this guy is doing:
Nvidia’s neural upscaler doesn’t seem to be trained on very low-res images, let alone pixel art, so I don’t think it would do a great job on retro games. It already has issues with making realistic artwork look more cartoony.
The big difference between the popular neural upscalers like ESRGAN and the shaders we have is that they invent detail out of whole cloth; that is, detail that was never there to begin with. It can look cool, but it can also look out of place.
Here’s a gif comparing the shaders we have with ESRGAN on the right:
The main difference to me is those sharp, single-pixel details on the ESRGAN side, which our shaders don’t create.