How difficult would it be to write an AI that, fed a series of reference images of a CRT screen, adjusted the shader settings to match those images as closely as possible?
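For what it's worth, the settings-matching idea doesn't even need "AI" in the neural-net sense: it's a black-box optimization problem. Render a frame through the shader at some parameter values, compare against the reference photo, and search for the parameters that minimize the difference. Here's a toy sketch with a completely made-up one-parameter "shader" (scanline darkening strength); a real version would optimize dozens of parameters with something like Nelder-Mead or CMA-ES across many reference photos:

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_shader(frame, scanline_strength):
    """Toy stand-in for a CRT shader: darken every other row."""
    out = frame.astype(float).copy()
    out[1::2, :] *= (1.0 - scanline_strength)
    return out

# Pretend reference: a frame "photographed" off a CRT whose true
# scanline strength happens to be 0.35 (hypothetical ground truth).
frame = rng.random((64, 64))
reference = fake_shader(frame, 0.35)

def loss(strength):
    """Mean squared error between shader output and the reference photo."""
    return float(np.mean((fake_shader(frame, strength) - reference) ** 2))

# Crude random search over the parameter space.
best_s, best_l = 0.0, loss(0.0)
for s in rng.random(500):
    l = loss(s)
    if l < best_l:
        best_s, best_l = s, l

print(round(best_s, 2))  # lands near the true value, 0.35
```

The hard part in practice isn't the search loop, it's the comparison: a phone photo of a CRT has geometry distortion, moiré, and camera color response baked in, so a raw per-pixel MSE against a framebuffer render would mostly be measuring the camera, not the shader.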
I had the same thought just now. Would be really nice.
In most cases you can replace “AI” with “infinite monkeys with typewriters” and I think we’re pretty close to that level already with our own forum-monkeys
Srsly tho, I think humans are as good as or better than an AI at tweaking settings in an existing shader, and you'd have a hard time with the training phase. However, you may have success training a CRT-ification filter/shader similar to existing AI upscalers; it would just have a chunky/glowy signature instead of the smooth/scratchy/painterly signatures that existing AI upscalers tend to have.
AI applications work for situations where you have a lot of known inputs and a lot of known outputs. You train the AI on that (extremely large) training set, then feed it unknown inputs, and it applies the transformation it inferred from the training set to produce a new output.
So, if you give it a training set containing shitloads (that is, tens of thousands) of raw-pixel frames from different games generated via emulators, paired with those same frames as displayed on a CRT, you may end up with something that can take raw-pixel input and spit out something that looks like a CRT.
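Mechanically, that paired training is just "fit a transform that maps raw frames to CRT frames." A toy version, where the fake "CRT" transform is a simple gain-and-lift and the "model" is two numbers fit by gradient descent (a real CRT-ification filter would need a conv net and actual photographed frames, everything here is synthetic for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "training set": raw frames plus fake CRT versions of them.
# A real target would have scanlines, mask, bloom, etc.
def fake_crt(frame):
    return 0.8 * frame + 0.05

train_x = rng.random((1000, 8, 8))  # "tens of thousands", in spirit
train_y = fake_crt(train_x)

# Tiny "model": y_hat = a*x + b, fit by gradient descent on MSE.
a, b, lr = 1.0, 0.0, 0.5
for _ in range(200):
    err = a * train_x + b - train_y
    a -= lr * 2 * np.mean(err * train_x)
    b -= lr * 2 * np.mean(err)

# Apply the learned transform to an unseen frame.
test_x = rng.random((8, 8))
test_err = np.mean((a * test_x + b - fake_crt(test_x)) ** 2)
print(round(a, 2), round(b, 2))  # recovers roughly 0.8 and 0.05
```

The point of the toy: the model never sees the transform itself, only input/output pairs, and it still generalizes to frames it wasn't trained on. That's also why the result "probably won't look like any actual CRT": it converges to whatever average transform best explains the training pairs.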
…except it probably won’t look like any actual CRT.
Maybe it’s going to look like one of the shaders that already exist in RetroArch.
Hey, that sounds like me.