Hi, so, OK: what are you using, Windows 10? Does your graphics card support HDR (and what is your graphics card)? Has HDR worked for you in the past with RetroArch? Has it worked for you in any game? D3D11 and D3D12 have both had it for ages (since 1.9.9, I believe) and they are more robust, as the D3D HDR implementation is a bit more fully featured. Also, just try the stock shader: all you're interested in at the moment is getting HDR working, i.e. it being bright. It definitely sounds like the HDR pipeline is not working for you; you will know when it is!
I found that this shader works best with games that have a darker color palette. I'm playing through MGS with it and the codec looks great.
This screenshot is really off: sharpness, brightness, granularity, everything. I don't know if the shader is being used as intended here.
I’m still waiting on those close-ups; let’s see those RGB phosphors! That’s what this shader is all about, right?
The problem is going to be in properly capturing it. If you get a camera right up against it, it’s just going to adjust its aperture as if you were taking a picture outdoors and it’ll just look like any ol’ LCD at that point.
We actually want the blowouts in this case, and cameras try very hard to avoid that.
Yep, it’s very tricky. Slowly moving the camera while taking the shot will prevent it from completely focusing, but only about one shot in ten looks good.
Yes, the picture didn’t come out right. I don’t think it’s possible for the camera to properly capture the color and brightness.
Hi @Arviel, that shouldn’t be the case (that it works best with darker colour palettes, that is; see the screenshots of SFA3 right at the top, for instance). Could you post me a screenshot with a bit of colour in it? It may very well be that the contrast is too low and you need to raise it and lower your paper white luminance. Can you post your shader values? Remember, you ALWAYS have to change the three HDR values to match your monitor: peak luminance, paper white luminance and contrast.
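For anyone wondering what those values actually do, here is a rough Python sketch of an HDR10-style mapping (illustrative only, not RetroArch's actual code; the PQ constants are the standard SMPTE ST 2084 ones, but the `contrast` handling here is a made-up stand-in for the shader's control):

```python
import math

# Standard SMPTE ST 2084 (PQ) constants.
M1 = 2610.0 / 16384.0
M2 = 2523.0 / 4096.0 * 128.0
C1 = 3424.0 / 4096.0
C2 = 2413.0 / 4096.0 * 32.0
C3 = 2392.0 / 4096.0 * 32.0

def pq_encode(nits):
    """Encode an absolute luminance (0..10000 nits) to a PQ signal in 0..1."""
    y = max(0.0, min(1.0, nits / 10000.0))
    yp = y ** M1
    return ((C1 + C2 * yp) / (1.0 + C3 * yp)) ** M2

def sdr_to_hdr_signal(sdr, paper_white_nits=200.0, contrast=1.0):
    """Map a linear SDR value (0..1) so that SDR white lands on the chosen
    paper-white luminance, then PQ-encode it. 'contrast' here is just a
    gamma-style exponent, an illustrative stand-in for the real control."""
    nits = (sdr ** contrast) * paper_white_nits
    return pq_encode(nits)
```

The point is that paper white decides how bright "SDR white" ends up on your particular panel, which is why the values have to match your monitor: a 100-nit signal lands at roughly the middle of the PQ range, and raising paper white pushes everything up.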
You do also have to sit far enough away with this shader for the phosphor/pixel grid to start disappearing - just like a real PVM.
I have found a ‘bug’ in my horizontal falloff, i.e. how phosphors are lit up when they straddle boundaries between source pixels. I say bug; it’s not really a bug per se, it’s just not accurately capturing what my PVM does, which is to bias bright source pixels: lighter source pixels illuminate neighbouring (transitioning) phosphors disproportionately to darker source pixels. In other words, from light to dark there is a slow drop-off (release) in the beam, and from dark to light there is a fast rise (attack). This results in light pixels looking fatter and dark pixels looking narrower: a kind of bloom brought on by the beam transitioning between values (I think). I can see this in Link’s eyes in The Legend of Zelda: A Link to the Past on the SNES.
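Roughly, the idea is an asymmetric interpolation across the pixel boundary (just an illustrative Python sketch, not the actual shader code; the exponents are made-up values):

```python
ATTACK = 0.5   # exponent < 1: fast rise going dark -> light
RELEASE = 2.0  # exponent > 1: slow drop-off going light -> dark

def beam_falloff(left, right, x):
    """Interpolate beam intensity across the boundary between two source
    pixels. x is the position in [0, 1] across the transition; the power
    curve is biased so the brighter pixel spreads into its neighbour."""
    k = ATTACK if right > left else RELEASE
    t = x ** k
    return left + (right - left) * t
```

At the midpoint of the transition, both directions stay biased towards the brighter pixel, which is exactly why bright pixels end up looking fatter than dark ones.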
There are also a few other things I’ve noticed that aren’t right, and I’ve implemented convergence too. I’m still trying to keep this shader simple: no blurs or blooms, and minimal noise generation apart from maybe simulating the cable. I will probably add curvature too. My PVM only seems to curve in the x axis, and it looks like most PVMs only curve in that direction.
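For anyone curious what convergence does, here is a toy 1-D illustration (not the shader's actual implementation): each colour channel is sampled at a slightly offset position, which produces the coloured fringing a misconverged CRT shows at edges.

```python
def apply_convergence(scanline, shift):
    """Shift the red image right and the blue image left by 'shift' texels
    by sampling those channels at offset positions.
    'scanline' is a list of (r, g, b) tuples."""
    n = len(scanline)
    out = []
    for i in range(n):
        r = scanline[max(0, min(n - 1, i - shift))][0]
        g = scanline[i][1]
        b = scanline[max(0, min(n - 1, i + shift))][2]
        out.append((r, g, b))
    return out
```

A single white pixel comes out with a blue fringe on one side and a red fringe on the other, which is the look you see at the edges of a real tube.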
yeah, that’s part of the deal for aperture grilles.
Ah, fantastic! Good to know there isn’t some slight y curvature I’m not seeing, or some odd model, etc.
Are you sure it curves in the x axis and not the z axis? This is a 3 dimensional object. It might be across the x axis but the direction of the curve is into the z axis, not so?
What I meant by this is that the sharpness of the scanlines doesn’t bother me as much with a darker color palette. In games where the scanlines are more apparent, my eyes start to bother me after a while, even if I lower the brightness. This has happened with other shaders that have very sharp scanlines (e.g. kurozumi); it’s not something exclusive to this one.
Well, you could argue it’s the y axis, but yes, you’re right: x is definitely the wrong axis. I knew what I meant in my head, even if no one else did!
You’ll be glad to hear I’m currently working on that, and the next version will have scanlines that are more accurate to my PVM.
You see, I think everyone understands what we’re talking about, but it’s time we start addressing this concept of proper perspective-correct curvature and usher in the next generation of curvature techniques. The current ones, with their fishbowl effect, still leave a bit to be desired, and I haven’t really heard anyone else say what I’ve been saying about curvature of the in-game image as opposed to curvature of the screen. Thankfully, @HyperspaceMadness has begun to address this in his awesome HSM Mega Bezel Reflection Shader with his implementation of independent image and CRT bezel curvature, which is a great step in the right direction, but the next generation might hopefully go as far as attempting to simulate proper, realistic geometry distortion. Sometimes, instead of curved lines near the borders, people’s sets ended up with wavy lines at the edges as they tried their best to get the geometry dialed in correctly. So we would need pincushion and trapezoidal controls similar to those on computer monitors.
What we have now is just people slapping on curvature, adjusting the strength of their bends and saying, "Oh, what a good boy/girl am I! I just made a CRT." Then they proceed to play games using those bent-out-of-shape settings that warp and magnify the image in ways that old TVs and monitors never did (at least not by that amount).
If you look at a CRT with an image on it from the bottom or the top, then depending on whether you’re higher or lower, either the top or the bottom will appear flatter or more curved than the other. What we have now are static bends in the x and y axes, which is technically incorrect: the screen only appears that way when you’re not looking at it head on. The curve doesn’t change with respect to the actual viewing angle and perspective of the viewer.
Edit: I’ll rephrase:
These two are the most accurate curvature implementations I’ve seen, done by examining real-world hardware:
Shader is here: shaders_slang\anti-aliasing\shaders\ewa_curvature.slang
The shader is here: shaders_slang/crt/crt-yo6-KV-M1420B.slangp
Royale does a 3D projection, which looks good and can respond to curvature, viewing distance and tilt, but it is slow.
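A minimal version of that idea (my own toy Python sketch, not Royale's code) is to cast a ray from the viewer's eye to a screen curved only in x, i.e. a cylinder, so that the apparent curvature changes with the viewing position instead of being a static 2D warp:

```python
import math

def project_to_cylinder(eye, px, py, radius):
    """Cast a ray from 'eye' (x, y, z) towards the point (px, py) on the
    flat z = 0 plane, intersect it with a vertical cylinder of the given
    radius (axis at z = radius, so the surface touches z = 0 at x = 0),
    and return (u, v): arc length across the tube and height on it."""
    ex, ey, ez = eye
    dx, dy, dz = px - ex, py - ey, 0.0 - ez
    # Quadratic for the ray/cylinder intersection in the xz plane.
    a = dx * dx + dz * dz
    b = 2.0 * (ex * dx + (ez - radius) * dz)
    c = ex * ex + (ez - radius) ** 2 - radius ** 2
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None  # ray misses the tube entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)  # nearer intersection
    x = ex + t * dx
    y = ey + t * dy
    u = radius * math.asin(max(-1.0, min(1.0, x / radius)))
    return (u, y)
```

Moving the eye changes which (u, v) the same target point maps to, which is exactly the perspective-dependent curvature a flat 2D warp can never give you; the cost is an intersection solve per fragment, which is why this style of projection is slower.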
Yes, my old 2730s (I luckily have two of them) have an absolute ton of pots for the things you mention and more, but I have little idea how they are implemented in practice. I can see what their end results are, but is that enough?
In terms of what you’re saying, isn’t a 3D projection of a curved surface enough?
I’ve implemented basic convergence in my shader now, as I believe that’s pretty crucial to the 2730’s look: they ALWAYS have convergence issues, especially at the edges. They always have geometry issues too; you’re just trying to minimise them.
Ah, thanks for these links, I’ll have a read. What are the alternatives to 3D projection? I would have thought that was the only way to do this.
Maybe. The issue I mainly have is that most examples of curvature I see are really examples of what uncorrected, severe pincushion distortion would look like.
That was never my experience playing games on a CRT. Even on a curved CRT, my horizontal and vertical lines looked relatively straight going across the screen. Sideways scrolling didn’t look like things were moving around a bulb or bulge the way they do with most of the curvature presets I see at their default settings.
If you view a curved screen from below, the bottom appears less curved than the top, and vice versa.
So to summarize, the examples and implementations of curvature that I commonly see are exaggerated. They look a bit off in terms of perspective distortion as well.
I just can’t use them without getting a headache or motion sickness, probably because they feel very wrong to my brain.
So I feel as though there might be some room for improvement, certainly in the geometric distortion correction department: manufacturers would surely have worked to minimize geometric distortion before shipping their products, and consumers were given the tools to do so as well, at least on monitors.
I do acknowledge that some (mainly very old) CRT TVs might have featured highly curved, bent, pincushioned, fishbowl-like images, but those would’ve been well before my time. The size of the TV might also have played a part in how bad this distortion due to the curvature of the image looked.
Implementing the behaviour of these types of controls should probably be the next step towards improving curvature emulation, in conjunction with independent screen and image curvature controls. The curvature of the screen would remain static, while the distortion it imposes on the image wouldn’t be 1:1, because these controls let you minimize and correct that distortion in the image itself.
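As a toy sketch (mine, not any particular shader's code), a pincushion control plus a trapezoid control might look like this, with an image-side correction nearly cancelling the screen-side distortion:

```python
def apply_geometry(u, v, pincushion, trapezoid):
    """Warp normalized coordinates (u, v each in -1..1).
    pincushion > 0 bows lines outward away from the center; < 0 pulls them
    back in, like the pincushion pot on a monitor.
    trapezoid skews the horizontal scale linearly with vertical position,
    like a keystone control."""
    r2 = u * u + v * v
    scale = 1.0 + pincushion * r2
    u2 = u * scale * (1.0 + trapezoid * v)
    v2 = v * scale
    return u2, v2
```

Chaining a screen-side distortion of +0.1 with an image-side correction of -0.1 brings a point back to within about 1% of its undistorted position, i.e. lines that look essentially straight despite the curved tube, which is the separation of screen curvature from image curvature being argued for here.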
Take this random image, which I borrowed from elsewhere in the forum, for example. You can see the pincushion effect in all four corners; of course, it also affects the center as well as every horizontal (and vertical) line in the image. This is not the way I remember things from my extensive CRT experience. Those corner distortions could have been easily fixed. The end result may not have been a perfectly straight, parallel horizontal (or vertical) line, but I hope you understand my point: the adjustment would have affected the entire image.
Thus, screen curvature should not necessarily equal image curvature. After accounting for geometric distortion correction, image curvature could be a lot less than (or somewhat different from) screen curvature.
Once this is implemented correctly, all of the other anomalies created by the current implementations should fall into place.
The curvature of the screen can still be “suggested” by using things like @HyperspaceMadness’s Tube Diffuse Layer because light, shadows and reflections hitting the outside of the curved tube would be completely affected by its curved nature while being completely unaffected by the set’s geometry adjustments.
I think some people might be (incorrectly) using image curvature and pincushion distortion to “suggest” that they’re not playing on a perfectly flat display, because a perfectly flat image would clearly be an inaccurate representation of a CRT’s output in their minds.
So, in conclusion, there’s nothing wrong with measuring and modeling the outside curvature of a CRT screen and implementing it in a shader. What’s wrong is assuming that the final image projection is supposed to conform 1:1 to the shape and dimensions of that physical screen glass model.
That’s where things need to be improved.