Thanks for this link - I will try some of these mitigation techniques later tonight! The moire I am getting is minimal - maybe one or two of these tricks will help!
Gave some of these a try last night finally. Noise is definitely a way to fully eliminate the moire, but I didn’t like the look of the noise.
I think for me the solution is going to be to give up curvature and then pick whichever shader gives me the most even/natural looking scanlines. The contenders are Guest Advanced, Hyllian, Easymode and Zfast in that order.
The biggest remaining issue I’m tinkering with is trying to remove what I think people mean by “uneven scanlines” - basically the very subtle appearance of horizontal “stripes” created by 3 or 4 scanlines in close proximity seeming to band together, or maybe just appearing slightly closer to one another or more prominent, which is visually distracting. I am relatively confident I can eliminate this by playing with the scanline settings inside Guest Advanced to just slightly soften the scanlines.
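If it helps to picture the cause, here’s a rough sketch (plain Python; the function name and setup are my own invention, not anything from Guest Advanced) of why non-integer scaling tends to produce that banding: each source line covers a fractional number of device pixels, so after rounding, some lines end up 3 px tall and others 4 px tall, and the taller ones cluster into visible stripes.

```python
# Illustrative only: show how rounding a fractional scale factor
# makes some scanlines taller than others (the "banding" effect).
def line_heights(src_lines, viewport_height):
    """Height in device pixels of each scaled source line."""
    heights = []
    prev_edge = 0
    for i in range(1, src_lines + 1):
        edge = round(i * viewport_height / src_lines)
        heights.append(edge - prev_edge)
        prev_edge = edge
    return heights

# 240 source lines into an 800 px tall viewport (a 3.33x scale)
# yields a mix of 3 px and 4 px lines rather than a uniform height.
```

Softening the scanlines blurs the boundary between the 3 px and 4 px lines, which is presumably why it hides the stripes.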
To be clear, lightening up the scanlines and/or mask will help with your moire issues (somewhat). And you’re right, noise helps, but that opens up a whole other can of worms - namely, you usually need a distracting amount of noise to “remove” the moire, which kind of defeats the purpose imho, since you’re just swapping one distracting thing for another.
Oh interesting, I hadn’t thought about how lightening the scanlines and lowering the mask strength would reduce moire, but that does make sense since it is the lines themselves that appear to warp with the moire. I am probably about 95% of the way to where I want to be just by using any of the shaders I mentioned, without curvature. So if I can get one of them to even out the striping/clumping/banding/unevenness to a place where I am satisfied, through a little lessening of those two parameters you mentioned, then I will toss some curvature on just to check whether that is resolved as well. If yes, awesome. If no, then I’ll just give up on curvature.
This is an excerpt from another post I’ve made concerning moire mitigation. If using Non-Integer Scale, try to match it to an Integer Scale percentage that produces the least or no moire. You can fine-tune the noise resolution and amount so that it’s barely noticeable but still helps hide/diffuse the moire.
Have you tried my preset pack by chance? There are some curved presets using MBZ__3__STD that should look pretty decent. I also started making a preset using GDV-Mini but there was a bug in the shader or in Mega Bezel that stopped me from proceeding further. That bug seems to have been resolved now though.
I observed the same Castlevania: SOTN startup loop for a bit, and I had earlier noticed that when the screen fit the bezel the Non-Integer Scale was around 66%, while when it exceeded the bezel it was around 77%.
This can also be seen in the screenshots I posted.
So I thought I’d use the Non-Integer Scale instead of the Integer Scale along with (HSM_INT_SCALE_BORDER_MIN_HEIGHT) to sort of lock the scale factor to a setting that matched the setting that was set when I used Integer Scale Mode 0 (which didn’t have the moire issue).
Guess what? It actually worked! I was now using Non-Integer Scale (Int. Scale Mode 0) with my rich scanlines with no moire pattern in 640 x 480 res mode!
For Castlevania: SOTN the perfect setting was 66.40! 66.60 was also acceptable, but 66.40 had zero horizontal moire! Any setting above or below that resulted in noticeable moire, in either a convex or concave pattern depending on whether I was above or below the ideal setting. The further away I went, the worse the moire pattern got.
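For anyone who wants to hunt for these sweet spots with less trial and error, here’s a rough sketch (plain Python; the function name and the idea of scoring by distance from an integer pixel pitch are my own assumptions, not how Mega Bezel computes anything): scale percentages where each source scanline covers a near-whole number of device pixels should be the ones with the least moire.

```python
# Hypothetical helper: rank non-integer scale percentages by how close
# each source scanline's height lands to a whole number of device pixels.
def near_integer_scales(src_height, viewport_height, step=0.05):
    candidates = []
    n_steps = int((100.0 - 50.0) / step) + 1
    for n in range(n_steps):
        pct = 50.0 + n * step
        scaled_height = viewport_height * pct / 100.0
        pixels_per_line = scaled_height / src_height
        # Distance from an integer pitch; smaller should mean less moire.
        error = abs(pixels_per_line - round(pixels_per_line))
        candidates.append((round(pct, 2), round(error, 4)))
    candidates.sort(key=lambda c: c[1])
    return candidates[:5]

# e.g. near_integer_scales(480, 2160) for a 480-line mode on a 4K screen
```

This won’t reproduce 66.40 exactly for every setup, since the shader’s own geometry settings shift things, but it narrows the search to a handful of candidates.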
I’ve been experimenting with stronger masks lately, suppressing my autistic obsession with color retention just enough to get some results… until I can’t.
On one hand it’s understandable that laying stripes of transparent color on top of an image… may affect the color of said image.
On the other hand, I am absolutely baffled that it’s always a greenish tint, which for some reason even appears with the black-and-white mask 7, on both displays I’ve used the past two years, at all resolutions, with scaling set to 100%… but not mask 8. Sadly, mask 8 is just ugly.
Despite not having a 2K display (I use 4K), I get some pretty decent results from masks 11 and 12. This gives me a theory; let me get straight to the point.
I’m requesting a CMY (maybe CMYB?) mask. Despite our fundamentally additive displays, perhaps applying a mask over an image is a subtractive affair that calls for the subtractive color model?
I don’t know if this has been brought up before; I only ran a rough check with the search bar.
Do you still get this greenish tint when you flip the mask layout?
What mask size are you using and what colour format are you using on your computer? Is it 4:2:0 Limited or RGB 4:4:4 full? Have you tried Mask 6 size 0 or 2 with layout 0 or 1?
Have you adjusted any other color settings? Do you use Grade? What are your Gamma Settings like? What white point are you using or NTSC Phosphor Gamut? Is your display calibrated to sRGB or DCI? What colourspace are you using in the shader?
Are you using Deconvergence? If so, have you manually adjusted the settings?
These are all things that you can look into first that might affect the colour output.
Feel free to post some screenshots, including some close-ups and you can also take some photos of the screen with the following settings: ISO 100 or 200, White Point 6500K and Shutter Speed 1/60 or 1/50 corresponding to 60Hz or 50Hz refresh rates.
You can also post your shader settings in the forum.
We can call it what we want but you can’t get subtractive color on a backlit display. Not without using a physical mask.
What would using multiply blend with a mask look like though?
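As a toy model of what multiply blend with a CMY mask does to the math (plain Python with illustrative 0.0–1.0 values; this is my own sketch, not any shader’s actual code): each stripe zeroes one channel and passes the other two, so on average 2/3 of each channel survives, versus 1/3 for a pure RGB stripe mask.

```python
# Toy multiply-blend of a repeating CMY stripe mask over one scanline.
CMY_STRIPES = [
    (0.0, 1.0, 1.0),  # cyan: blocks red, passes green and blue
    (1.0, 0.0, 1.0),  # magenta: blocks green
    (1.0, 1.0, 0.0),  # yellow: blocks blue
]

def multiply_mask(row):
    """Multiply each pixel in a scanline by the repeating CMY stripe."""
    out = []
    for x, (r, g, b) in enumerate(row):
        mr, mg, mb = CMY_STRIPES[x % 3]
        out.append((r * mr, g * mg, b * mb))
    return out

# A white scanline keeps 2/3 of its light under this CMY mask,
# where an RGB stripe mask would keep only 1/3.
```

That brightness difference is presumably why CMY masks are attractive in the first place, though it says nothing about how the stripes land on the physical subpixels.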
Unfortunately, the computer I’m tinkering with is not strong enough for your preset, but I appreciate the suggestions and will try adjusting the noise resolution.
Yes. I tested my monitor’s subpixel layout; it’s RGB.
However we theorize it, that shouldn’t stop anyone from laying stripes of cyan, magenta, and yellow on top of an image - an experiment I still want to see the results of.
@Cyber This has always been a problem with nearly all masks (the exceptions being 8 and -1, for obvious reasons), well before I started using deconvergence, across two computers and monitors, and with all the high-strength images posted to this site. The same problem exists with PowerSlave Exhumed’s CRT shader options.
It’s worth noting it’s not exactly a “tint” per se. There was one time where I think I accidentally set one of the shaders to scale the image; when that happened, the “tint” became unevenly distributed, and in larger images took on a kind of “swirly” form that reminds me of spatial aliasing. This disappears completely when “mask strength” and “low strength” are set to 0.50 max. Changing the mask size can also solve the problem in some specific cases.
I always just called it color loss. I think it’s just an inherent problem with using masks. Masks 11 and 12, which mix RGB and CMY colors, however, deliver “different” results.
So I want to see what happens with a CMY mask. I don’t expect it to be a smoking gun against something I think is intrinsic to masks. I do, however, think it would be fun to configure.
The easiest way to play around with them is with CRT-Royale, which uses PNG images for the masks, which you can modify in Photoshop/GIMP/whatever to CMY instead of RGB.
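For what it’s worth, turning an RGB mask into a CMY one is just a per-channel complement (“Invert colors” in GIMP/Photoshop does the whole thing in one step). Here’s a minimal sketch in plain Python, in case anyone prefers scripting it over the raw pixel data; the function names are mine, and a real conversion would read/write the PNG with an image library:

```python
def rgb_to_cmy(pixel):
    """Complement each 8-bit channel: a red stripe (255, 0, 0)
    becomes a cyan stripe (0, 255, 255), and so on."""
    r, g, b = pixel
    return (255 - r, 255 - g, 255 - b)

def convert_mask(pixels):
    """Apply the complement across a whole mask tile (list of RGB tuples)."""
    return [rgb_to_cmy(p) for p in pixels]
```

Because the complement is its own inverse, running it twice gets the original RGB mask back, so it’s easy to flip between the two for comparison.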
I use a CMY mask on my setup (or rather YMC to be exact since it’s a BGR display). It results in a much-needed boost in brightness. However, it’s a bit of a mess at the subpixel level. Only reason it sort of works for my setup is because I output 1080p to a 4K display, so it’s more like YYMMCC, which appears to mitigate the subpixel issues somewhat.
In my past experience, if I were to switch from mask 10, 6, or 13 to 11 or 12, I’d have to recalibrate the brightness setting to be a tad dimmer. I expect I’d find this to be even more the case with a CMY mask.
Color loss when using masks is easily mitigated by increasing scanline saturation, I’ve found.
Colors are well-saturated here, right? (view at original size)
Saturation by its very nature reduces the color range.
Compare that image to the raw one and you’re bound to find something that isn’t quite meant to be so saturated.
It was color loss (or perhaps better worded, “change”) in skin tones that originally drove me to create an account on this site. Many techniques that compensate for masks can easily send skin tones straight into the uncanny valley.
What masks do to colors isn’t desaturation, so increasing saturation doesn’t really help.
However, I have actually warmed up to scanline saturation lately, but that’s because of what it does to the beam shape, not the colors.
Effectively replicating the aesthetic of CRT displays while keeping the colors of the raw image is my holy grail.
Sorry, this is a bit of a rushed “face palm” post.
The colors of the raw image aren’t what gets displayed on a CRT, though. I would never use the raw colors as a target.
Honestly I’m kinda baffled as to what you’re even talking about.
Why are “skin tones” even an issue when talking about 240p video games?
I don’t have a CRT lying around and my memories are mainly from childhood, so I disable shaders to reference the raw image displayed on the LCD screen.
In “Panorama Cotton”, Cotton’s small “corner face” can easily turn too red. In many RPGs, characters’ faces can go from a healthy beige to yellow jaundice. What seems to work great for scenery may ruin a portrait.
I don’t see these issues in pictures and video footage of actual CRTs, or on the CRTs at the local arcade bar.
Most of these issues are brightness/saturation related, though. The mask’s effect is something different (though it does contribute to the jaundice). This is my last day of work before my days off; later I’ll post some screenshots showing exactly what my beef with the masks is, along with hunterk’s suggested Royale test (if complications with that PNG file don’t stop me).
My 2c worth of observations, from my own personal perspective; feel free to disagree with everything I say, guys.
Looking at CRT shader screenshots in this and other threads, it seems to me that many people are playing games on big-ass TV sets from several meters away, in which case you need heavier scanlines and masks and your eyes will do much of the “blending”. Conversely, if you’re playing on a normal PC setup (24-27" monitor at arm’s length), you really need to tone the strength of these effects back, you want more “blending” in the shader, so to speak — especially if you’re not even using fullscreen, but something around 960x720 or 1067x800 like myself. Less strong masks == much less colouration side effects.
The other contributing factor why these look nothing like you remember is the relatively recent Sony PVM/BVM hype train. These were broadcast reference monitors costing several thousand or tens of thousands of dollars; I guarantee you that no one used these monitors with personal computers or consoles pre-2000 (and I’m generous there, probably the cutoff is around 2010-2015). It’s just that 5-10 years ago professional facilities were literally throwing them away or selling them off for $5 or something, and the hype started… They have more vertical resolution and sharpness than any regular small TV set or monitor people used with these machines in the 80s and 90s. Also keep in mind that they were not made to produce a pleasant-looking picture, but an accurate one, so errors can be spotted easily in a professional studio environment. Well, it’s the same as with audio – I have a $1500 pair of “budget” pro audio monitor speakers sitting on my desk, and while they are super accurate, anything that’s not perfectly mixed will sound a bit like shit on them, making 90% of records unpleasant to listen to. But that’s exactly what they’re supposed to do — underline all your errors with a big fat red felt pen marker!
My point is, no one who drew the artwork for these games had a PVM/BVM or similar quality monitor, and no one who played them in the last century had one either. The artists definitely took the lower-quality but pleasant blending, blooming and glow artifacts into account. Personally, I find the PVM/BVM look quite ugly and it’s nothing like the TVs/monitors we used back in the day; the scanlines are waaaaaaay too thick and prominent for a start, and there’s very little pleasant blooming and blending going on. Again, they’re precise but don’t have the qualities that someone who actually grew up in the 80s would want from a CRT emulation shader.
You don’t really see scanlines on 200/240p content on a 15kHz 14" monitor (13" viewable area). They’re a bit more prominent on 200-line NTSC modes, so my guess is the cause for the PVM/BVM craze is twofold: 1) many of the console folks are from the US, hence used to the more prominent scanlines on NTSC displays, 2) US people had no affordable small TVs with SCART input like we had in Europe, therefore they had to resort to composite output with most consoles. Many of these console-only gamers don’t really have any other point of reference than composite, so they don’t know what a proper 80s RGB monitor like the Commodore 1084S would look like. Hence I would bet good money that many of them only know composite, high-res CRTs post-1995-2000, and modern LCDs. To people who only know the modern LCD look, the cleanliness of the PVM/BVM shaders might look appealing (quite similar to the simplistic “blank every second line” scanline emulation of early emulators), and proper authentic CRT shaders might look “wrong” or “broken” to them.
Okay, flamesuit on…
PS: This is in no way criticism of any shader authors; most of their shaders can be tweaked to taste anyway.