pseudo-hires transparency on the SNES is basically two solid 256x240 images interleaved column by column (alternating vertical stripes) into one 512-wide image. I'm not sure how the hardware does it exactly, but I imagine something along these lines: draw a 256x240 image, copy it, draw on the copy, then get the final output by alternating reads between the original buffer and the copy.
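a toy sketch of that composition idea in Python (the buffer layout and function name here are just illustrative, not how the PPU actually works):

```python
def interleave(main_row, sub_row):
    """Build one 512-wide row by alternating reads between the
    main-screen buffer and the edited copy (subscreen)."""
    out = []
    for m, s in zip(main_row, sub_row):
        out.append(m)  # even column: original buffer
        out.append(s)  # odd column: the copy drawn over
    return out

print(interleave([10, 20, 30], [11, 21, 31]))  # [10, 11, 20, 21, 30, 31]
```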
so to split it back into 2 different solid images, you resize the input to 256 width with nearest neighbor, and sample with a tiny negative/positive offset to get the first/second image. that way, if the input width is 512, you get 2 different images, and if the input width is 256, you get 2 images identical to the input.
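in Python terms the splitting trick might look like this (a sketch: the 0.25-texel offset is my choice of "tiny", and the output width is hard-coded to 256):

```python
def split_layers(row):
    """Split a row back into two solid 256-wide layers by nearest-neighbor
    resizing with a small negative/positive sampling offset.
    For a 512-wide row the offsets land on the even/odd columns; for a
    256-wide row both samples hit the same pixel, so both layers equal
    the input."""
    w = len(row)        # assumed to be 256 or 512
    out_w = 256
    eps = 0.25 / out_w  # tiny: well under half an output texel
    first, second = [], []
    for x in range(out_w):
        center = (x + 0.5) / out_w  # normalized texel-center coordinate
        first.append(row[int((center - eps) * w)])
        second.append(row[int((center + eps) * w)])
    return first, second
```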
you can also get a blended 256-wide image directly by resizing down with bilinear, and the result is perfectly sharp. if the input is P0,P1,P2,P3…, the blended output is (P0+P1)/2, (P2+P3)/2, …. and since each even/odd pixel pair is either identical or one pixel from the original layer plus one from the transparency layer, no blurring occurs.
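with the sample points landing exactly between each even/odd pair, the bilinear downscale reduces to a plain pairwise average; a minimal sketch:

```python
def blend_down(row):
    """512 -> 256 bilinear downscale: each output pixel is the average
    of one even/odd pair, (P0+P1)/2, (P2+P3)/2, ..."""
    return [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]
```

if the 512-wide input is really just a pixel-doubled 256 image, every pair is equal and the output is identical to the original, which is why nothing gets blurred.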
some shaders will get confused by the added transparency though, and both layers can even have their own independent dithering, as you can see in the kirby screenshot.
high-resolution text is a different story (example: seiken densetsu 3), but you can get a good result by blending the 2 final images with a 0.5/256.0 offset (in normalized texCoords) instead of exactly on top of each other. (this is also how I got the last picture that you liked)
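a rough 1-D sketch of that offset blend (my assumptions: the 0.5/256 offset is split as ± half between the two images, and `sample_linear` stands in for the GPU's bilinear fetch with clamped edges):

```python
import math

def sample_linear(row, u):
    """Linearly filtered sample of a 1-D 'texture' at normalized
    coordinate u, edges clamped."""
    x = u * len(row) - 0.5          # texel-space position
    i0 = int(math.floor(x))
    t = x - i0
    i1 = min(max(i0 + 1, 0), len(row) - 1)
    i0 = min(max(i0, 0), len(row) - 1)
    return row[i0] * (1 - t) + row[i1] * t

def blend_offset(img_a, img_b, offset=0.5 / 256.0):
    """Blend two 256-wide rows shifted apart by `offset` in normalized
    coords, instead of exactly on top of each other."""
    w = len(img_a)
    out = []
    for x in range(w):
        u = (x + 0.5) / w
        out.append((sample_linear(img_a, u - offset / 2)
                    + sample_linear(img_b, u + offset / 2)) / 2)
    return out
```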
it is also better to do the blending in linear gamma, even when just resizing down with bilinear.
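for the linear-gamma point, here is what "blend in linear" means per channel (standard sRGB transfer functions, values in 0..1):

```python
def srgb_to_linear(c):
    """sRGB -> linear light (piecewise sRGB EOTF)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """linear light -> sRGB."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def blend_linear(a, b):
    """Average two sRGB channel values in linear light instead of
    directly in gamma space."""
    return linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2)
```

averaging 0.0 and 1.0 this way gives roughly 0.735 rather than the naive 0.5, which matches how the mixed light actually looks.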
splitting the image this way, or resizing down to a single 256-wide texture with bilinear, solves the issue most pattern detection shaders have with the 512-resolution mode of the SNES, since that mode is basically a pixel-doubled 256 texture with only some parts being high res, or two alternating 256 textures. in seiken densetsu 3 for example, the sprites and background look different each time a text box appears when using such a shader.
nice