Basically, the core outputs an image and the touchscreen surface is mapped to that image. The shader then scales and moves the image, but the touch area stays where it was, so it no longer matches the picture you actually see.
To fix this, the shader would have to report the new window coordinates of the scaled and moved screen back to the frontend, and the framework simply doesn't support that.
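To make the mismatch concrete, here's a rough Python sketch (the function names, scale and offset values are all made up for illustration, nothing from RetroArch's actual code). The forward function stands in for what the shader does to the picture; the inverse function is the remap the frontend would need to apply to touches, but never can, because the transform never leaves the shader:

```python
# Toy illustration of the touch/image mismatch -- the names and the
# scale/offset numbers are hypothetical, not any real RetroArch API.

SCALE = (0.8, 0.8)    # hypothetical shader scale
OFFSET = (0.1, 0.1)   # hypothetical shader offset

def shader_transform(uv):
    """Forward transform the shader applies: where a point of the
    core's image ends up in the window (normalized 0..1 coords)."""
    return (uv[0] * SCALE[0] + OFFSET[0],
            uv[1] * SCALE[1] + OFFSET[1])

def touch_to_image(touch):
    """Inverse transform the frontend would need to map a touch back
    into the core's image space -- exactly the information the shader
    can't send back out to the framework."""
    return ((touch[0] - OFFSET[0]) / SCALE[0],
            (touch[1] - OFFSET[1]) / SCALE[1])

# A point a quarter of the way into the image gets drawn near (0.3, 0.3)...
drawn_at = shader_transform((0.25, 0.25))
print(drawn_at)                  # roughly (0.3, 0.3)

# ...but the frontend still reads a tap there as image point (0.3, 0.3),
# not (0.25, 0.25), because it never learns about the shader's transform.
print(touch_to_image(drawn_at))  # roughly (0.25, 0.25) -- the remap it can't do
```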
So if you really need touchscreen interaction, you'll have to use a standard overlay like Duimon suggests. I don't think there is any way around this at the moment.