Mdapt & gdapt - dithering treatment [Updated 06/06/14]

@hunterk: could this blending filter be converted to GLSL for use on the Pi as a 1-pass blending filter, for example?

I don’t think the second pass will do much on its own, so it would have to be the full 2 passes. Do they not get full speed on rpi currently? (I’m assuming not, since they do all of that texture sampling…)

@hunterk: I made a very interesting discovery. Loading ONLY the second pass of gdapt.glslp (gdapt-pass1.glsl, that is) does treat the dithering! No need for multipass, so we can have dithering treatment on the Pi! Pseudo-transparencies work in MegaDrive games, and ZX Spectrum games that rely on adjacent pixels creating new colors on old CRT TVs over composite and RF work too. The “bad” part is that the image is too “blurry”. Maybe that’s how things looked back then, I don’t know.
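For reference, a minimal single-pass preset (.glslp) that loads just that file would look something like this (the path and the filtering/scale values here are just assumptions, adjust them to your setup):

shaders = 1
shader0 = gdapt-pass1.glsl
filter_linear0 = false
scale_type0 = source
scale0 = 1.0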

Yeah, if you don’t include the first pass, it can’t detect the dithering patterns and blur only them. Instead, it blurs the whole image, which we can already do more cheaply, though it looks bad.

Isn’t that what analog RF/composite did? I don’t have a CRT setup here to compare with anymore, but since those setups couldn’t do “multiple passes” or detect patterns, I guess they simply blurred the whole image, didn’t they?

The problem is that the result of replicating the effect looks strange on a digital TV these days. Or is there more to it than that?

RF/composite indeed blurred the image, but the way it was blurred was much more complex and related to signal interference, which makes it harder to reproduce.

[QUOTE=hunterk;40963]Yeah, if you don’t include the first pass, it can’t detect the dithering patterns and blur only them. Instead, it blurs the whole image, which we can already do more cheaply, though it looks bad.[/QUOTE]I would actually be interested in an adjustable linear-light blur if that were possible. While MDAPT/GDAPT do a very good job of de-dithering, there are some situations where they fail and the best option is to just blur the whole image, sometimes horizontally only, sometimes vertically only, and sometimes in both directions.

GTU is probably your best bet there. It’s super-customizable and does very high-quality blurring in either/both directions.

@hunterk: Isn’t it possible to make a less aggressive (i.e. less “blurry”) 1-pass gdapt by altering gdapt-pass1.glsl? Which variables or parameters should I try to adjust, if that’s possible at all?

@hunterk yeah, looks good. but I thought the result would end up a lot closer to the unmodified version. don’t update it yet please, I plan to go over the code again and maybe add a feature here and there.

[QUOTE=papermanzero;40738]An optimisation would be to a) select the repeat level of a pattern (i.e. how many times a dot or a vertical/horizontal line must appear to count as a dither pattern) and b) select the length of lines for which the pattern treatment should be applied (I guess the VO and HI options in mdapt can provide this, but I am not sure - moreover, I was not able to get certain patterns to be ignored).[/QUOTE] I appreciate your suggestions. dithering detection is really not a trivial task; balancing between missing parts and false detections is difficult if the shader only looks locally and can’t differentiate between objects, patterns and background like a neural network could. a) is already in gdapt as the Error Prevention LVL, which requires that this number of “vertical pixels” appears in a row. b) is a good idea. I’m certain that I also thought about it, but I don’t know why I haven’t implemented it. this would probably require a third pass, though. I probably wanted gdapt to be a very easy and simple solution, I can’t remember. but maybe it’s too error-prone in its current state. I’ll try it out.
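to make the run-length idea concrete, here is a toy Cg sketch (this is not gdapt’s actual detection code; MIN_RUN, similar() and is_dithered() are made-up names and the colour threshold is arbitrary):

#define MIN_RUN 4                           // made-up minimum run length

// arbitrary colour-similarity test, purely for illustration
bool similar(float3 a, float3 b)
{
    return all(abs(a - b) < float3(0.05, 0.05, 0.05));
}

// decal: source texture, coord: current texel, dx: one-texel horizontal step
bool is_dithered(sampler2D decal, float2 coord, float2 dx)
{
    float3 here = tex2D(decal, coord).rgb;
    float3 next = tex2D(decal, coord + dx).rgb;
    if (similar(here, next))
        return false;                       // no alternation at this pixel at all
    int run = 2;                            // "here" and "next" already form a run of 2
    for (int i = 2; i < MIN_RUN; i++)
    {
        // an ABAB pattern repeats "here" at even offsets and "next" at odd ones
        float3 expected = (i % 2 == 0) ? here : next;
        if (!similar(tex2D(decal, coord + (float)i * dx).rgb, expected))
            break;
        run++;
    }
    return run >= MIN_RUN;
}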

if you have any good sample screenshots for testing, please post’em. thanks!

@Sp00kyFox No problem. Take your time. I won’t commit anything without your go-ahead, since it’s your shader, after all :slight_smile:

@valfanel I’m not sure how you would do it with gdapt. You could try reducing some of the texCoord offsets.

@hunterk: I have tried doing

_ret_0._texCoord1 = TexCoord.xy - 1

and also

TEX0.xy = _ret_0._texCoord1 - 1

Both make the shader effect disappear; it’s as if no shader were loaded. Which offset do you mean? I can’t understand shader code, to be honest, so I’m blind here.

I haven’t looked at the converted GLSL code, but I meant the dx and dy offsets in this line: tex2D(decal, VAR.texCoord+float2((dx),(dy))*VAR.t1). Those are what gets added to VAR.texCoord. You’ll probably have an easier time working on the Cg version and then using the script to convert to GLSL, since the converted GLSL is spaghetti code.

@hunterk: I played with dx and dy, but all they do is move the image along the x and y axes, so those are not the parameters I am looking for :stuck_out_tongue:

@Sp00kyFox: Since you wrote this shader, can you give me any ideas here, please?

The next place to try would be these:

float4 C = TEX( 0, 0);
float4 L = TEX(-1, 0);
float4 R = TEX( 1, 0);

Try TEX(+/-0.05, 0);
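Put differently, the sampling macro and those taps look roughly like this, so you can pull the factor out into its own define (SMEAR_WIDTH is a made-up name here, not an existing gdapt parameter):

#define SMEAR_WIDTH 0.5    // 1.0 = original behaviour, smaller values = tighter blend

#define TEX(dx,dy) tex2D(decal, VAR.texCoord + float2((dx),(dy)) * SMEAR_WIDTH * VAR.t1)

float4 C = TEX( 0, 0);     // centre pixel
float4 L = TEX(-1, 0);     // left neighbour, pulled closer by SMEAR_WIDTH
float4 R = TEX( 1, 0);     // right neighbour, pulled closer by SMEAR_WIDTH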

@hunterk: These seem more like it, yes! However, pseudo-transparencies don’t look right at any value below 1.0. In Comix Zone, for example, Gravis’s shadow is still made of bars at 0.9 and doesn’t look solid and correct until 1.0, which in turn makes the game a bit too blurry.

[QUOTE=Sp00kyFox;40992]I appreciate your suggestions. dithering detection is really not a trivial task; balancing between missing parts and false detections is difficult if the shader only looks locally and can’t differentiate between objects, patterns and background like a neural network could. a) is already in gdapt as the Error Prevention LVL, which requires that this number of “vertical pixels” appears in a row. b) is a good idea. I’m certain that I also thought about it, but I don’t know why I haven’t implemented it. this would probably require a third pass, though. I probably wanted gdapt to be a very easy and simple solution, I can’t remember. but maybe it’s too error-prone in its current state. I’ll try it out.

if you have any good sample screenshots for testing, please post’em. thanks![/QUOTE]

Thanks a lot. :slight_smile: I already tried to look into how to improve the dithering detection myself, and I strongly agree that dithering detection is difficult.

So currently I am looking for patterns that can be used for tests. I am looking for two kinds of patterns: real patterns (like Sonic the Hedgehog, Kirby’s Dream Land 3, …) and false patterns (like Secret of Mana, A Link to the Past). With these, a comparison can be made to see whether improvements to the algorithm are heading in the right direction.

[QUOTE=hunterk;40974]GTU is probably your best bet there. It’s super-customizable and does very high-quality blurring in either/both directions.[/QUOTE]Thanks, I gave it a try, but 1) it blurs in gamma light, not linear light, and 2) the result is so blurry that I can’t use it. Combining it with some sort of CRT shader - perhaps one that adds a mask to the image - might reduce the apparent blur enough to make it usable, but I’m not sure. It really makes me appreciate these de-dithering shaders, though.

As usual, it’s pretty easy to get it calculating in linear light: [screenshots: original (gamma light) vs. linear light]. You can also use the dotmask.cg shader to sharpen it up a little, with some caveats: [screenshots: default setting, which suffers from clipping; mask type = 0].
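If you’d rather have a standalone adjustable blur that works in linear light, a minimal Cg fragment sketch might look like the following (this is not an existing common-shaders file; BLUR_RADIUS and GAMMA are made-up knobs, and the usual main_vertex boilerplate is omitted):

struct input
{
    float2 video_size;
    float2 texture_size;
    float2 output_size;
};

#define BLUR_RADIUS 1.0    // horizontal radius in source pixels
#define GAMMA 2.2

float4 main_fragment(float2 texCoord : TEXCOORD0,
                     uniform sampler2D decal : TEXUNIT0,
                     uniform input IN) : COLOR
{
    float2 dx = float2(BLUR_RADIUS / IN.texture_size.x, 0.0);
    // convert the three taps to linear light before averaging
    float3 l = pow(tex2D(decal, texCoord - dx).rgb, float3(GAMMA, GAMMA, GAMMA));
    float3 c = pow(tex2D(decal, texCoord     ).rgb, float3(GAMMA, GAMMA, GAMMA));
    float3 r = pow(tex2D(decal, texCoord + dx).rgb, float3(GAMMA, GAMMA, GAMMA));
    float3 blurred = (l + c + r) / 3.0;
    // back to gamma light for display
    return float4(pow(blurred, float3(1.0 / GAMMA, 1.0 / GAMMA, 1.0 / GAMMA)), 1.0);
}

Swap dx for a vertical step (or add a second set of taps) if you want the vertical-only or 2D variant.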

[QUOTE=Sp00kyFox;37355]this is what gdapt does with the different error levels. but again, like hunterk said, if you set it to just the right amount - in the case of the letter “W”, to 4 - so that those letters aren’t smeared anymore, then all dithering patterns smaller than 4 are no longer detected. with this approach you’ll always have this problem, since you cannot clearly differentiate between dithering and regular structures. the only thing I could imagine would be to train a neural network on this problem, but of course this isn’t trivial either.

regarding your edits… there are some snes games that do their dithering by using the snes high resolution mode (Jurassic Park, Kirby’s Dream Land 3, …). in these cases there’s a very easy solution. just put this as the first pass of your shader preset (cgp file):

shader0 = stock.cg
filter_linear0 = true
scale_type_x0 = source
scale_x0 = 0.5

and you’re done (you probably need to correct the path to the stock.cg file). this cuts the horizontal resolution in half by blending horizontal pixels pairwise together which results in a picture with the standard snes resolution of 256x224 instead of 512x224 (hires).[/QUOTE]

I finally found some time to try that. Using this method in Kirby’s Dream Land 3 makes the foreground completely invisible, which means the “stock” method does not work for all games.

So the proposed optimisations for the mdapt shader really are necessary.