Mdapt & gdapt - dithering treatment [Updated 06/06/14]

it depends on the threshold value which is set in pass1. basically, if the color difference of the underlying structure is too high, the pixel won’t get detected (but a high threshold value also increases the number of errors). since the algorithm at the moment only looks at horizontal neighbors but not at the other surrounding ones, there is room for improvement to detect these easy patterns. btw, is that screenshot from retroarch? which core supports the arcade version of mortal kombat? oO
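to give a rough idea of what the detection does, the horizontal test boils down to something like this (just an illustration of the principle, not the actual pass1 code; the names and the exact comparison are simplified):

// a pixel counts as horizontally dithered when its left and right neighbors
// agree with each other but differ from the center by less than the threshold
bool detect_horizontal(float3 L, float3 C, float3 R, float thresh)
{
    float3 d_LR = abs(L - R);
    float3 d_LC = abs(L - C);
    float  m_LR = max(d_LR.r, max(d_LR.g, d_LR.b));
    float  m_LC = max(d_LC.r, max(d_LC.g, d_LC.b));
    return (m_LR < thresh) && (m_LC > 0.0) && (m_LC < thresh);
}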

of course, I’ll do it tomorrow.

@Squarepusher I don’t have one yet. it would be nice if you could update the repo with my new version as long as I’m not registered, but I’ll look into it. are there any tools I need, or can I update files with a web frontend (I’m a windows user)?

[quote=“Sp00kyFox”]are there any tools I need, or can I update files with a web frontend (I’m a windows user)?[/quote]

Yeah, you can download ‘Github for Windows’ from the Github site itself; it’s an easy, self-contained installation EXE that contains everything you need to start ‘cloning’ repos and ‘committing’ to them.

I’m sure Hyllian could get you through the initial setup hurdles in case you experience any beginner problems with Github.
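If you prefer the command line over the GUI, the whole cycle is just clone, copy your files in, commit, push (the repo URL and file paths here are only examples):

git clone https://github.com/libretro/common-shaders.git
cd common-shaders
git add mdapt_pass1.cg mdapt_pass2.cg
git commit -m "mdapt: new version"
git push origin master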

regarding pseudo-hires transparency (the snes one, not the flickering transparency in sf3), the easiest workaround would be to simply scale down the input width to 256 with bilinear filtering and then feed it to your shader (or any other sharpening shader for that matter). it tends to work extremely well in most cases; the only games where this approach would break are those that use high resolution text @512, but I don’t really know of any snes game that uses both hires transparency and hires text at the same time.
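as a cgp, that pre-pass could look roughly like this (only a sketch; stock.cg stands for any simple pass-through shader, and the paths have to match your setup):

shaders = 3
shader0 = stock.cg
filter_linear0 = true
scale_type_x0 = absolute
scale_x0 = 256
scale_type_y0 = source
scale_y0 = 1.0
shader1 = shaders/mdapt_pass1.cg
filter_linear1 = false
scale_type1 = source
scale1 = 1.0
shader2 = shaders/mdapt_pass2.cg
filter_linear2 = false
scale_type2 = source
scale2 = 1.0

pass0 samples the frame with bilinear and writes it out at 256 width, so the later passes never see the 512 mode.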

another approach is to split the input texture into two solid 256-width textures, one containing the transparency layer and one without it. you then feed them both to your shader, then to sharpening shaders, then blend the two outputs in a final pass. this approach gives far better results at the cost of much higher performance requirements.

I have made a small collection of cgp’s that use combinations of these ideas so you can compare the results directly: Download

you need to copy “2xBR-Hybrid-v5-gamma.cg” to the same directory where you extracted the zip, and put both “mdapt_pass1.cg” and “mdapt_pass2.cg” in the “/shaders” subfolder.

Edit: made some comparison screenshots: http://imgur.com/a/DUx8M#8

This is a screenshot from mdapt v1.2; you can see the horizontal pattern I’m talking about here: https://dl.dropboxusercontent.com/u/8913813/retroarch/mdapt_v1_2_horizontal_lines_pattern.png

The last pic is amazing!

How does pseudo-hires work on snes? How do you split it?

No. No core supports it yet.

I have my own rudimentary offline test app that works on png pics. I’ve got that pic somewhere on the web. I’d like to get your algorithm idea, insert it into my app and do some tests.

[quote=“Hyllian”]No. No core supports it yet.[/quote]

There already is a core that supports the arcade version of Mortal Kombat actually (and MK2/MK3/UMK3) - you’ll have to use it on PC for now though.

Added aliaspider to the commit list (common-shaders) at the request of Hyllian.

a quick question: if you’ve looked at my code you’ve probably seen that I use the alpha value in the first pass to tag the detected pixels. now I wanted to use different tags for other cases and patterns, but it seems the alpha value is binary or boolean. well, pass2 doesn’t work correctly except for when the alpha value is 0.0 or 1.0. so is the alpha value binary (it should be a float, so that’s kinda weird)? and if so, any ideas how I can pass over different “pixel tags” (more than two) from one pass to another?

it could be that the FBO is in RGBA5551 format, or that you are using values outside the range (0.0, 1.0), which would get clamped. you can add this to each pass declaration in the cgp file: float_framebufferN = true. that should allow you to pass the alpha value as a 32-bit float (or 16-bit, not sure) between shaders.
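in a two-pass preset the relevant lines would look like this (a sketch; only pass0 needs the flag, since that is the buffer carrying the alpha tags from pass1 into pass2):

shaders = 2
shader0 = mdapt_pass1.cg
filter_linear0 = false
scale_type0 = source
scale0 = 1.0
float_framebuffer0 = true
shader1 = mdapt_pass2.cg
filter_linear1 = false
scale_type1 = source
scale1 = 1.0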

thank you for the quick response aliaspider. the float_framebuffer command resolved the issue :wink:

pseudo-hires transparency on snes is basically two solid 256x240 images interleaved in alternating vertical columns. I’m not sure how the hardware does it exactly, but I imagine something along the lines of: draw a 256x240 image, copy it, draw on the copy, and get the final output by alternating reads between the original buffer and the copy.

so to split it back into 2 different solid images, you resize the input to 256 width with nearest neighbor and sample with a tiny negative/positive offset to get the first/second image. that way, if the input width is 512, you get 2 different images, and if the input width is 256, you get 2 images identical to the input.
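in shader terms that’s just two fetches with a sub-texel offset in opposite directions, roughly like this (a sketch; assumes the pass is scaled to 256 width with its input sampled point/nearest, the sampler name is a placeholder, and in a real shader you’d use IN.texture_size instead of the hard-coded 512):

float2 off = float2(0.25 / 512.0, 0.0);         // less than half a texel of a 512-wide input
float3 base  = tex2D(s_p, texCoord - off).rgb;  // even columns -> first image
float3 layer = tex2D(s_p, texCoord + off).rgb;  // odd columns -> second image
// with a 256-wide input both fetches snap to the same texel, so base == layer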

you can also get a blended 256-width image directly by resizing down with bilinear, and the result is perfectly sharp. if the input is P0, P1, P2, P3, … the blended output is (P0+P1)/2, (P2+P3)/2, … and since odd/even pixels are either equal or come from the original/transparency layer, no blurring occurs. some shaders will get confused by the added transparency though, and both layers can even have their own independent dithering, as you can see in the kirby screenshot.

high resolution text is a different story (example: seiken densetsu 3), but you can get a good result by blending the 2 final images with a 0.5/256.0 offset (normalized texCoord) instead of exactly on top of each other (this is also how I got the last picture that you liked).

it is also better to do the blending in linear gamma, even when just resizing down with bilinear.
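concretely, with the two layers from above that blend is just (a sketch; 2.2 as a stand-in for the actual display gamma):

float3 lin_base  = pow(base,  2.2);
float3 lin_layer = pow(layer, 2.2);
float3 blended   = pow((lin_base + lin_layer) / 2.0, 1.0 / 2.2); // average in linear light, then back to gamma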

splitting the image this way or resizing down to a single 256 texture with bilinear solves the issue most pattern detection shaders have with the 512-resolution mode of snes, since it’s basically a pixel doubled 256 texture with only some parts being high res, or two alternating 256 textures. in seiken densetsu 3 for example, the sprites and background look different each time a text-box appears when using such a shader.

nice :smiley:

just to let you know, I’m still working on it. right now I’m perfecting the checkerboard pattern detection. here is a little teaser:

Some progress on my “CBOD” (conditional blending of dither) shader: https://dl.dropboxusercontent.com/u/8913813/retroarch/coastkid_filter_v02_001.png https://dl.dropboxusercontent.com/u/8913813/retroarch/coastkid_filter_v02_002.png https://dl.dropboxusercontent.com/u/8913813/retroarch/coastkid_filter_v02_003.png https://dl.dropboxusercontent.com/u/8913813/retroarch/coastkid_filter_v02_004.png

@Sp00kyFox: if you want, you can improve the quality of the blending by adding gamma correction. you just need to add one line of code at the end of each cg file, just before “return C;”. mdapt_pass1.cg:

C=pow(C,2.2); // gamma to linear, so the blending in pass2 happens in linear light

mdapt_pass2.cg:

C=pow(C,(1.0/2.2)); // back to gamma space after the blending is done

this is how it would then look with the current mdapt:

Very good samples. Is it something in the same category as Sp00kyFox’s mdapt?

Hey CoastKid, are you interested in contributing to the common-shaders repo? If you have a github account, then we can add you.

would you mind sharing your ideas, coastkid? that looks pretty good. I’m still working on the pattern recognition but having a hard time coming up with rules that keep closed areas sharp at the border while at the same time keeping smooth transitions between neighboring dithered areas.

@Hyllian sry to keep you waiting but I’d like to finish this version before explaining the code in detail, since it will be a huge improvement in detection and picture quality (at least I hope so ^^).

Tried it in Silent Hill. That game makes heavy usage of dithering. It correctly blended the dithering used in the fog.

I’m done with my first version of the CBOD shader; you can download it from here: https://dl.dropboxusercontent.com/u/8913813/retroarch/cbod_shader_v1.zip

There are two passes, horizontal and vertical. Recommended settings for the passes: Pass #0 Filter: Nearest, Pass #0 Scale: 1x; Pass #1 Filter: Nearest, Pass #1 Scale: 1x.
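In cgp form that would be (a sketch; the file names are placeholders, use the names of the two cg files from the zip):

shaders = 2
shader0 = cbod_pass1_horizontal.cg
filter_linear0 = false
scale_type0 = source
scale0 = 1.0
shader1 = cbod_pass2_vertical.cg
filter_linear1 = false
scale_type1 = source
scale1 = 1.0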

I don’t have a Github account yet, but thanks for the offer.

ok, checkerboard detection and handling is complete. but the one-line dithering is not included in this version (so it won’t do much on genesis’ lion king or aladdin). I need to rethink how I can combine both methods. you can of course just use the old version (or combine them in a cgp). anyway, here is the new version: https://anonfiles.com/file/7d8e2992fe5f27b56a06e273f468e76c

and a gallery with comparison shots:

especially arcade games with this kind of effect look pretty good with mdapt.
