My guess is that we live in an era where creativity is too rare, hence we see remakes or even remakes of remakes, and not only in games, but in movies too.
On topic, I second @guest.r.
Also, such things technically cannot be called “shaders”, since a shader always operates on single pixels (a bit of a simplification here), while “generative AI” works on the “context” of the whole image.
While this may sound like a nitpick, it is not: the bigger the context, the more the generated/shaded content is bound to it, of course.
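To make the distinction concrete, here is a minimal toy sketch (hypothetical names, not any real shader or model API): a per-pixel “shader” whose output at each position depends only on that one pixel, versus a context-bound transform whose output at every position depends on a whole-image statistic (standing in for a generative model's context).

```python
def shader(image, f):
    # Per-pixel: the result at (x, y) depends only on image[y][x].
    return [[f(px) for px in row] for row in image]

def context_transform(image):
    # Context-bound: every output pixel depends on a whole-image statistic,
    # here the global mean brightness (a crude stand-in for "context").
    flat = [px for row in image for px in row]
    mean = sum(flat) / len(flat)
    return [[(px + mean) / 2 for px in row] for row in image]

img = [[0, 255], [128, 64]]
print(shader(img, lambda px: 255 - px))  # invert: pixels are independent
print(context_transform(img))            # changes if ANY other pixel changes
```

Note that changing a single pixel of `img` leaves every other output of `shader` untouched, but shifts every output of `context_transform`, which is exactly why context-based generation has no built-in frame-to-frame stability.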
As a result, working on a single image's context cannot guarantee that context is preserved across frames; consequently, the “AI” would need to be pre-trained on the whole game, or you risk seeing characters or objects morphing as you play.
Pre-training, on the other hand, defeats the purpose of using it the way you would use a shader.
At best, I can imagine better edge-smoothing/dedithering algorithms with fewer false positives, at least in the near future.
Off topic again: my hope is to see brand-new (and I mean it, new) games; AI models would certainly only make things worse.