ok here we go. I finally took some time to explain mdapt.
I'll go through the different phases and explain in words what the algorithm does, but not every line in detail. in the end the code is not very complex and I didn't use any advanced tricks, so it should be pretty easy to understand and modify.
let’s start with the notation:
where C is the “central” pixel we’re currently looking at and L, R, U and D stand for left, right, up and down.
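to make the notation concrete, here is a minimal python sketch (the function name and the img[y][x] indexing are my own illustration, not taken from the shader):

```python
# minimal sketch of the pixel notation, assuming an image stored as a
# 2D list indexed as img[y][x]; names are illustrative only
def neighbors(img, x, y):
    C = img[y][x]        # central pixel
    L = img[y][x - 1]    # left
    R = img[y][x + 1]    # right
    U = img[y - 1][x]    # up
    D = img[y + 1][x]    # down
    return C, L, R, U, D

img = [[0, 1, 2],
       [3, 4, 5],
       [6, 7, 8]]
print(neighbors(img, 1, 1))  # (4, 3, 5, 1, 7)
```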
pass1 - pattern detection:
in this pass the algorithm goes through every pixel and checks if it’s the center of one of these patterns. here they are:
now to the colors. the black pixel with the white square in it is the central pixel. all black pixels must have the same color, while every magenta pixel can have an arbitrary color, but they must differ from the “black color”. the green pixels are not considered part of the pattern itself but serve as an additional check; they also must differ from the black ones.
and there is an extra case for the one-line patterns (activated with “hori” and “vert” in pass1). red is the counterpart to black: all the red pixels must be equal and they must differ from black.
these are basically partial checkerboard patterns from bayer dithering. but I think I’ll probably have to enlarge the patterns since they still lead to some annoying errors here and there. anyways, if a pattern is detected the pixel gets tagged with a specific number in the alpha channel to pass the information on to the other passes.
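as a rough idea of what such a check looks like, here is a hedged sketch of the one-line horizontal case, stripped down to just the immediate neighbors (the real mdapt patterns are larger and also include the green extra checks):

```python
# simplified sketch of a one-line horizontal dither check:
# C is "black", its left/right neighbors play the "red" counterpart --
# the reds must be equal to each other and must differ from C
def is_hori_pattern(img, x, y):
    C = img[y][x]
    L1, R1 = img[y][x - 1], img[y][x + 1]
    return L1 == R1 and L1 != C

row = [[5, 9, 5, 9, 5]]
print(is_hori_pattern(row, 2, 0))  # True: C (5) sits between two equal 9s
```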
pass2 - isolation termination:
you could probably get rid of this pass if the patterns in pass1 were bigger. the idea is that if there is only one tagged pixel in an area it’s probably a false detection due to letters, text or hud elements. so the algorithm counts the tagged pixels in the 3x3 square centered on C. if the number is smaller than three, C gets untagged (alpha = 0), otherwise it won’t get touched.
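a cpu-side sketch of that idea, with the tags kept in a separate 2D array instead of the alpha channel (my own simplification, borders skipped for brevity):

```python
# hedged sketch of pass2: count tagged pixels in the 3x3 square around
# each pixel; 1 = tagged, 0 = untagged
def isolation_pass(tags):
    h, w = len(tags), len(tags[0])
    out = [row[:] for row in tags]   # copy so counting isn't disturbed
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            count = sum(tags[y + dy][x + dx]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            if count < 3:
                out[y][x] = 0        # too isolated -> false detection
    return out
```

a lone tagged pixel gets cleared while a small cluster survives, which is exactly the letters/hud filtering described above.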
pass3 - pattern completion:
we need to tag the rest of the patterns we detected in pass1. since we gave every pattern a specific number we can identify them. so the algorithm again goes through every pixel and looks if it’s part of a pattern by checking if there is a tagged one in the surrounding area. the perspective in the code is kinda upside down in comparison to pass1, but it’s easy to follow if you look at the pixel scheme.
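to illustrate the “upside down” perspective, here is a hedged 1D sketch of the horizontal case only: instead of a center tagging its neighbors, every pixel asks whether a neighbor was tagged as a center. the tag values are made up for the example (0 = none, 1 = hori center, 2 = completed):

```python
# simplified sketch of pass3 (horizontal case, 1D): a pixel becomes part
# of a pattern if one of its neighbors was tagged as a hori center
def completion_pass(tags):
    w = len(tags)
    out = tags[:]
    for x in range(w):
        left  = tags[x - 1] if x > 0 else 0
        right = tags[x + 1] if x < w - 1 else 0
        if tags[x] == 0 and (left == 1 or right == 1):
            out[x] = 2   # neighbor is a tagged center -> complete it
    return out

print(completion_pass([0, 1, 0, 0]))  # [2, 1, 2, 0]
```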
pass4 - blending:
so this is the final step. since the patterns are two-dimensional and the shader should also work with pseudo transparency effects like the ones in the sf3a screenshots, I couldn’t just blend the tagged pixels in one direction. there are two different cases: one-line patterns and the checkerboard.
the first one is of course pretty straightforward:
C_new = 0.5*C + 0.25*(L1+R1) //horizontal
C_new = 0.5*C + 0.25*(U1+D1) //vertical
for the checkerboard pattern I thought that “black” and “white” should have the same weight and the (black) central pixel should get the biggest factor, which got me the following formula:
C_new = 0.25*C + 0.0625*(UL + UR + DR + DL) + 0.125*(L1 + R1 + D1 + U1)
I played around a little bit with the factors but this seems to be the best combination.
in the case that not every neighbor pixel is tagged, the weights are rescaled accordingly so they still sum to one (check out the merge methods in pass4).
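the rescaling can be sketched like this (a simplification under my own naming, not the actual merge methods from pass4):

```python
# hedged sketch of the checkerboard blend with weight renormalization:
# nominal weights are 0.25 for C, 0.0625 per diagonal, 0.125 per axis
# neighbor; untagged neighbors are skipped and the sum is rescaled
def blend_checkerboard(C, diag, axis, diag_tagged, axis_tagged):
    acc, w_total = 0.25 * C, 0.25
    for v, t in zip(diag, diag_tagged):      # UL, UR, DR, DL
        if t:
            acc += 0.0625 * v
            w_total += 0.0625
    for v, t in zip(axis, axis_tagged):      # L1, R1, U1, D1
        if t:
            acc += 0.125 * v
            w_total += 0.125
    return acc / w_total   # used weights rescaled to sum to one
```

with all neighbors tagged this reduces exactly to the formula above; with none tagged it leaves C unchanged.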
improvements:
like I already said I think the algorithm could benefit from bigger and more patterns which also could make pass2 obsolete.
another thing is that so far mdapt is very strict with the colors and doesn’t detect patterns if there is only a small difference. so one would need to work with thresholds. the problem is that this would increase the cross checks between the pixels, because equality is an equivalence relation (in particular transitive) but that’s not true for similarity.
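a tiny example of why that matters, with an epsilon-based similarity (epsilon value is arbitrary):

```python
# similarity via a threshold is not transitive: a ~ b and b ~ c
# does not imply a ~ c, so pairwise checks can't be chained
def similar(a, b, eps=0.1):
    return abs(a - b) <= eps

a, b, c = 0.0, 0.09, 0.18
print(similar(a, b), similar(b, c), similar(a, c))  # True True False
```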