I’ve made a shader that tries to unblend the output of any shader combination. Say, for example, you upscale a game in RetroArch with a chain of shaders and the result is too blurred for your taste. If you put unblend.cg after the last shader, the output is unblended and the final palette is, more or less, the original one.
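In preset terms, that just means one extra pass at the end. A minimal sketch of a two-pass .cgp (the first pass stands in for whatever chain you already use; the shader name and scales here are assumptions, not from my actual presets):

```
shaders = 2

# Pass 0: whatever upscaler you already use (placeholder name).
shader0 = some_upscaler.cg
filter_linear0 = false
scale_type0 = source
scale0 = 2.0

# Pass 1: unblend as the very last pass. Its input must be
# sampled with linear filtering, and it stretches to the screen.
shader1 = unblend.cg
filter_linear1 = true
scale_type1 = viewport
scale1 = 1.0
```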
It works surprisingly well with some shaders. For example, it can turn ddt.cg output into something similar to xBR:
ddt standard output:

ddt at 2x and unblend.cg as a second shader to upscale to any size with linear filtering:

Even xBR can benefit from this. Running xBR at only 2x (for speed) with unblend.cg as a second shader turns the output into something very similar to a much higher-level xBR (5x, for example).
Output of xBR at 2x only:

xBR at 2x with unblend.cg as a second shader to upscale to any size with linear filtering:

It doesn’t work, though, with shaders that change the original source too much. For example, it doesn’t work with CRT shaders, because the scanlines aren’t in the original source, so they can’t be unblended. It works well with shaders that try to interpolate the source: bicubic, Lanczos, ddt, bilinear, jinc2, advancedAA, etc.
The unblend.cg shader is very simple and only works as the last shader with linear filtering enabled: http://pastebin.com/AHxyrUFN
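In case the pastebin goes away, here’s the idea in sketch form. This is NOT the exact pastebin code: the ORIG struct follows the RetroArch Cg spec, and the 2x2 neighborhood fetch is simplified. Each output pixel gets snapped back to whichever original low-res color the blended value is closest to:

```
/* unblend-style pass (illustrative sketch): snap each interpolated
   pixel to the nearest color in its low-res source neighborhood. */

struct orig
{
    sampler2D texture;     // original low-res frame
    float2 texture_size;
};

void main_vertex(float4 position : POSITION,
                 out float4 oPosition : POSITION,
                 uniform float4x4 modelViewProj,
                 float2 tex : TEXCOORD,
                 out float2 oTex : TEXCOORD)
{
    oPosition = mul(modelViewProj, position);
    oTex = tex;
}

float4 main_fragment(float2 tex : TEXCOORD0,
                     uniform sampler2D decal : TEXUNIT0, // previous pass, linear-filtered
                     uniform orig ORIG) : COLOR
{
    float3 blended = tex2D(decal, tex).rgb;

    // Fetch the 2x2 block of original texels around this position.
    float2 ps   = 1.0 / ORIG.texture_size;
    float2 base = tex - 0.5 * ps;
    float3 c0 = tex2D(ORIG.texture, base).rgb;
    float3 c1 = tex2D(ORIG.texture, base + float2(ps.x, 0.0)).rgb;
    float3 c2 = tex2D(ORIG.texture, base + float2(0.0, ps.y)).rgb;
    float3 c3 = tex2D(ORIG.texture, base + ps).rgb;

    // "Alpha test" among the candidate colors: keep the one the
    // blended value is closest to.
    float3 best = c0;
    float  dmin = dot(blended - c0, blended - c0);

    float d1 = dot(blended - c1, blended - c1);
    if (d1 < dmin) { dmin = d1; best = c1; }
    float d2 = dot(blended - c2, blended - c2);
    if (d2 < dmin) { dmin = d2; best = c2; }
    float d3 = dot(blended - c3, blended - c3);
    if (d3 < dmin) { dmin = d3; best = c3; }

    return float4(best, 1.0);
}
```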
EDIT: The ddt I’ve used in the first shots is an updated one. The current ddt in the repo doesn’t work well with unblend.
EDIT2: The way unblend works is like a FAKE signed distance field (SDF) method. It treats the input as an SDF map and does a kind of alpha test: instead of choosing between two colors (black or white), it decides among a bunch of colors in the region of the low-res input. It’s a bit prone to errors, though. I think I’ll try a Gaussian filter to see how well it fakes an SDF compared to ddt.
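For contrast, this is the classic two-color SDF alpha test (the black-or-white case mentioned above) in Cg; sdf_map, fg_color and bg_color are placeholder names. Unblend generalizes this same hard decision to the handful of colors present in the low-res neighborhood:

```
// Classic two-color SDF alpha test: the field value picks between
// exactly two colors with a hard threshold, no blending.
float3 sdf_alpha_test(sampler2D sdf_map, float2 tex,
                      float3 fg_color, float3 bg_color)
{
    float d = tex2D(sdf_map, tex).a;        // distance stored in alpha
    return (d > 0.5) ? fg_color : bg_color; // hard cut at the 0.5 isoline
}
```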