Mdapt & gdapt - dithering treatment [Updated 06/06/14]

Would it be possible to update this shader to support linear light blending, similar to how the pixellate shader was recently updated?

  1. Source image
  2. Blurred in linear light
  3. Blurred in gamma light
  4. MDAPT (results similar to the blur in gamma light)

As you can hopefully tell, blending in gamma light darkens the image compared to the source while blending in linear light matches the original brightness.
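For intuition, here is a minimal sketch in plain Python (assuming a simple gamma-2.2 approximation of the sRGB curve; this is for illustration only, not part of the shader) of why averaging in gamma space darkens the result:

```python
# Blend two gray values (0..1) both ways and compare the results.
# Gamma 2.2 is a common approximation of the sRGB transfer curve.

def to_linear(c, gamma=2.2):
    return c ** gamma

def to_gamma(c, gamma=2.2):
    return c ** (1.0 / gamma)

a, b = 0.0, 1.0  # black and white pixels, e.g. a checkerboard dither

gamma_blend = (a + b) / 2.0  # naive average in gamma space: 0.5
linear_blend = to_gamma((to_linear(a) + to_linear(b)) / 2.0)  # ~0.73

print(gamma_blend)   # 0.5
print(linear_blend)  # noticeably brighter, closer to the perceived average
```

The linear-light blend comes out around 0.73 in gamma space, which is why naive gamma-space blending looks darker than the source.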

Yeah, this is actually pretty easy to do, since Sp00ky Fox used a handy macro for all of the texture sampling, which we can wrap in a pow() in the first pass and then use float framebuffers for the rest of the passes. Seems to work as intended: gdapt original: gdapt linearized: If Sp00ky’s okay with it, I can push linearized versions of mdapt and gdapt to the repo.

I don’t think so. The problem is that the metric I use is meant for gamma light, as far as I understand (check it out: http://www.compuphase.com/cmetric.htm). So transforming the color values to linear light in pass0 would give you messed-up metric values; this is why you got almost no blending in your screenshot, and why the big difference is not a result of the now gamma-aware blending. The correct way should be to leave pass0 alone: in pass1, only apply the linearization to the xyz components of the sampled colors (the w component is the strength) and do the gamma correction on the return line.
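For reference, the low-cost color metric from that article ("redmean") can be sketched in Python roughly like this; note that it expects 8-bit gamma-encoded RGB, which is why feeding it linearized values would skew the distances:

```python
import math

def redmean_distance(c1, c2):
    """Weighted Euclidean color distance from compuphase.com/cmetric.htm.
    Expects 8-bit gamma-encoded (not linearized) RGB tuples."""
    r1, g1, b1 = c1
    r2, g2, b2 = c2
    rmean = (r1 + r2) / 2.0
    dr, dg, db = r1 - r2, g1 - g2, b1 - b2
    return math.sqrt(
        (2 + rmean / 256.0) * dr * dr
        + 4 * dg * dg
        + (2 + (255 - rmean) / 256.0) * db * db
    )
```

The weights shift with the mean red level to roughly track perceptual sensitivity across the gamut, which is what makes the metric cheap compared to a full Lab conversion.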

This is exactly why I wanted to hear your input before pushing anything up :smiley:

I’ll try your suggestion and see if that helps and run those results by you, as well, before making any changes.

I played around with this filter for weeks. But honestly, the filter causes too many failures. gdapt is too aggressive and mdapt is missing options to avoid failures. The most common issues: three dots (e.g. in text), the letters W and M, and sprite patterns.

An optimisation would be to a) select the repeat level of a pattern (i.e. how many times a dot, or a vertical or horizontal line, must appear to count as a dither pattern); and b) select the length of lines for which a pattern should be applied (I guess the VO and HI options in mdapt can provide this, but I am not sure. Moreover, I was not able to get certain patterns ignored).

@Sp00ky Fox Like this?

/*
   Genesis Dithering and Pseudo Transparency Shader v1.3 - Pass 1
   by Sp00kyFox, 2014

   Blends pixels based on detected dithering patterns.

*/


#pragma parameter STEPS "GDAPT Error Prevention LVL"	1.0 0.0 5.0 1.0
#pragma parameter DEBUG "GDAPT Adjust View"		0.0 0.0 1.0 1.0

#ifdef PARAMETER_UNIFORM
	uniform float STEPS, DEBUG;
#else
	#define STEPS 1.0
	#define DEBUG 0.0
#endif


#define TEX(dx,dy) tex2D(decal, VAR.texCoord+float2((dx),(dy))*VAR.t1)

struct input
{
	float2 video_size;
	float2 texture_size;
	float2 output_size;
};

struct out_vertex {
        float4 position : POSITION;
        float2 texCoord : TEXCOORD0;
        float2 t1;
};


/*    VERTEX_SHADER    */
out_vertex main_vertex
(
	float4 position	: POSITION,
	float2 texCoord : TEXCOORD0,

   	uniform float4x4 modelViewProj,
	uniform input IN
)
{
        out_vertex OUT;

        OUT.position = mul(modelViewProj, position);

        OUT.texCoord = texCoord;
	OUT.t1       = 1.0/IN.texture_size;

        return OUT;
}

/*    FRAGMENT SHADER    */
float3 main_fragment(in out_vertex VAR, uniform sampler2D decal : TEXUNIT0, uniform input IN) : COLOR
{

	float4 C = TEX( 0, 0);
	float4 L = TEX(-1, 0);
	float4 R = TEX( 1, 0);

	// linearize the color channels; w holds the detection strength from pass0
	C.xyz = pow(C.xyz, 2.2);
	L.xyz = pow(L.xyz, 2.2);
	R.xyz = pow(R.xyz, 2.2);


	float str = 0.0;

	if(STEPS == 0.0){
		str = C.w;
	}
	else if(STEPS == 1.0){
		str = min(max(L.w, R.w), C.w);
	}
	else if(STEPS == 2.0){
		str = min(max(min(max(TEX(-2,0).w, R.w), L.w), min(R.w, TEX(2,0).w)), C.w);				
	}
	else if(STEPS == 3.0){
		float tmp = min(R.w, TEX(2,0).w);
		str = min(max(min(max(min(max(TEX(-3,0).w, R.w), TEX(-2,0).w), tmp), L.w), min(tmp, TEX(3,0).w)), C.w);
	}
	else if(STEPS == 4.0){
		float tmp1 = min(R.w, TEX(2,0).w);
		float tmp2 = min(tmp1, TEX(3,0).w);
		str = min(max(min(max(min(max(min(max(TEX(-4,0).w, R.w), TEX(-3,0).w), tmp1), TEX(-2,0).w), tmp2), L.w), min(tmp2, TEX(4,0).w)), C.w);
	}
	else{
		float tmp1 = min(R.w, TEX(2,0).w);
		float tmp2 = min(tmp1, TEX(3,0).w);
		float tmp3 = min(tmp2, TEX(4,0).w);
		str = min(max(min(max(min(max(min(max(min(max(TEX(-5,0).w, R.w), TEX(-4,0).w), tmp1), TEX(-3,0).w), tmp2), TEX(-2,0).w), tmp3), L.w), min(tmp3, TEX(5,0).w)), C.w);
	}


	if(DEBUG)
		return float3(str);

	float sum  = L.w + R.w;
	float wght = max(L.w, R.w);
	      wght = (wght == 0.0) ? 1.0 : sum/wght;

	return pow(lerp(C.xyz, (wght*C.xyz + L.w*L.xyz + R.w*R.xyz)/(wght + sum), str), 1.0 / 2.2);	
}
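(Aside: if my reading of the nested min/max chains is right, each STEPS level is equivalent to a sliding-window run-length test on the detection strength. A Python sketch of that equivalent rule, with hypothetical names, for illustration only:)

```python
def strength(w, i, steps):
    """Equivalent formulation of the shader's nested min/max chain:
    the detection strength at pixel i survives only if some window of
    `steps` consecutive neighbors (center excluded) is fully detected."""
    if steps == 0:
        return w[i]
    # neighbor offsets -steps..steps without 0, as sliding windows of length `steps`
    offsets = [d for d in range(-steps, steps + 1) if d != 0]
    windows = [offsets[j:j + steps] for j in range(len(offsets) - steps + 1)]
    best = max(min(w[i + d] for d in win) for win in windows)
    return min(w[i], best)

# An isolated detection is suppressed at STEPS >= 1; a run of three survives:
print(strength([0.0, 0.0, 1.0, 0.0, 0.0], 2, 1))  # 0.0
print(strength([0.0, 1.0, 1.0, 1.0, 0.0], 2, 1))  # 1.0
```

I checked this against the STEPS 1–3 expressions by distributing min over max; the shader just flattens these windows into a branch-free chain of scalar min/max calls.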

Here’s the result:

@hunterk: could this blending filter be converted to GLSL for usage on the Pi as a 1-pass blending filter, for example?

I don’t think the second pass will do much on its own, so it would have to be the full 2 passes. Do they not get full speed on rpi currently? (I’m assuming not, since it has all of those texture sampling actions…)

@hunterk: I made a very interesting discovery. Loading ONLY the second pass in gdapt.glslp (gdapt-pass1.glsl, actually) does blend the dithering! No need for multipass, so we can have de-dithering on the Pi! Pseudo-transparencies work in MegaDrive games, and ZX Spectrum games that rely on adjacent pixels to create new colors on old CRT TVs over composite and RF work too. The “bad” part is that the image is too “blurry”. Maybe that’s how things looked back then, I don’t know.

Yeah, if you don’t include the first pass, it can’t detect the dithering patterns and blur only those. Instead, it blurs the whole image, which we can already do more cheaply, though it looks bad.
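A whole-image blur in linear light really is cheap. As a rough illustration (plain Python over a single row of gray values, not actual shader code; the function name and weights are made up for the example), a 3-tap horizontal blur looks like this:

```python
def to_linear(c, gamma=2.2):
    return c ** gamma

def to_gamma(c, gamma=2.2):
    return c ** (1.0 / gamma)

def blur_row(row, weight=0.25):
    """3-tap [w, 1-2w, w] horizontal blur in linear light; edges are clamped."""
    lin = [to_linear(c) for c in row]
    out = []
    for i in range(len(lin)):
        left = lin[max(i - 1, 0)]
        right = lin[min(i + 1, len(lin) - 1)]
        out.append(to_gamma(weight * left + (1 - 2 * weight) * lin[i] + weight * right))
    return out

# A checkerboard row gets pulled toward mid gray, a flat row is unchanged:
print(blur_row([0.0, 1.0, 0.0, 1.0]))
print(blur_row([0.5, 0.5, 0.5, 0.5]))
```

On the GPU this is the same three texture taps per fragment, which is why a plain blur is so much cheaper than pattern detection.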

Isn’t that what analog RF/composite did? I don’t have a CRT setup here to compare anymore, but since those setups couldn’t do “multiple passes” or detect patterns, I guess they simply blurred the whole image, didn’t they?

The problem is that the result of replicating the effect looks strange on a digital TV these days. Or is there anything else besides those considerations?

RF/composite indeed blurred the image but the way it was blurred was much more complex and related to signal interference, which makes it harder to reproduce.

[QUOTE=hunterk;40963]Yeah, if you don’t include the first pass, it can’t detect the dithering patterns and blur only those. Instead, it blurs the whole image, which we can already do more cheaply, though it looks bad.[/QUOTE]I would actually be interested in an adjustable linear-light blur, if that would be possible. While MDAPT/GDAPT do a very good job de-dithering, there are some situations where they fail and the best option is to just blur the whole image. Sometimes it’s best as a horizontal-only blur, a vertical-only blur, or a blur in both directions.

GTU is probably your best bet there. It’s super-customizable and does very high-quality blurring in either/both directions.

@hunterk: Isn’t it possible to make a less aggressive (i.e. less “blurry”) 1-pass gdapt by altering gdapt-pass1.glsl? Which variables or parameters should I try to adjust, if it’s possible at all?

@hunterk yeah, looks good. But I thought the result would look a lot closer to the unmodified version. Don’t update it yet, please; I plan to go over the code again and maybe add a feature here and there.

[QUOTE=papermanzero;40738]An optimisation would be to a) select the repeat level of a pattern (i.e. how many times a dot, or a vertical or horizontal line, must appear to count as a dither pattern); and b) select the length of lines for which a pattern should be applied (I guess the VO and HI options in mdapt can provide this, but I am not sure. Moreover, I was not able to get certain patterns ignored).[/QUOTE] I appreciate your suggestions. Dithering detection is really not a trivial task; balancing between missed parts and false detections is difficult when the shader only looks locally and can’t differentiate between objects, patterns and background the way a neural network could. a) is already in gdapt as the Error Prevention LVL, which requires that this number of detected pixels appear in a row. b) is a good idea. I’m certain I also thought about it, but I don’t know why I haven’t implemented it; it would probably require a third pass, though. I probably wanted gdapt to be a very easy and simple solution, I can’t remember. But maybe it’s too error-prone in its current state. I’ll try it out.

if you have any good sample screenshots for testing, please post’em. thanks!

@Sp00kyFox No problem. Take your time. I won’t commit anything without your go-ahead, since it’s your shader, after all :slight_smile:

@valfanel I’m not sure how you would do it with gdapt. You could try reducing some of the texCoord offsets.

@hunterk: I have tried doing

_ret_0._texCoord1 = TexCoord.xy - 1

and also

TEX0.xy = _ret_0._texCoord1 - 1

Both make the shader effect disappear; it’s as if no shader was loaded. What offset do you mean? I can’t understand shader code, to be honest, so I’m blind here.

I haven’t looked at the converted GLSL code, but I meant: tex2D(decal, VAR.texCoord+float2((dx),(dy))*VAR.t1) the dx and dy offsets, which are being added to VAR.texCoord. You’ll probably have an easier time of it working on the Cg version and then using the script to convert to GLSL, since the converted GLSL is spaghetti code.

@hunterk: I played with dx and dy, but all they do is move the image around on the x and y axes, so those are not the parameters I am looking for :stuck_out_tongue:

@Sp00kyFox: Since you made this shader, can you give me any ideas here, please?