I’ve seen some instances where Windows Full Screen Optimizations and DPI scaling settings were causing uneven scanlines in some of my Steam games that use scanline/CRT effects. Disabling Full Screen Optimizations and setting “Override high-DPI scaling behavior” to “Scaling performed by: Application” resolved these issues.
I just tried your suggestions, but to no avail in my case. Thanks for the suggestions though.
It now turns out that when I change the screen mode from 240Hz (G-Sync mode) to 60Hz, the issue is gone. Maybe some form of signal compression is going on in 240Hz mode? I don’t have any issues with regular PC games in 240Hz/G-Sync mode. Anyway, I’ll investigate further, or I’ll have to live with it.
@guest.r The issue seems not to be shader related. Latest improvements look stunning btw, love the realism of the deconvergence and the more fluid scanlines. Scanline type 2 is looking great.
Yeah, modern displays play a part in CRT emulation. I’ve focused on this a bit recently…
Like me and the Vulkan driver. It seems like it works with other people’s AMD adapters; hard to tell why it’s broken on my setup.
True, but they don’t look that good. I think that to implement a good-looking shader like this, it would need a separate computing chip to handle the graphical load. Maybe they’ll develop something in the future; that would be a good alternative for people who don’t want emulation.
New Test Version (2021-06-21-r1):
Notable changes:
- Small LUT code fix.
- Scanline spike removal code improvement with greater parameter values.
- Color code overhaul.
Download link:
https://mega.nz/file/EhJElJZI#JwAVABUguL7NzLCLgGugdL-MHvkP4EEUSAwIy4nnIfE
I guess some testing is needed with the new color options, mostly with additional shaders added to the preset. Feedback is welcome.
What I’ve seen so far looks great; it seems easier to get things nice and bright without clipping anything.
Can we have an option to make the deconvergence go under the mask? This could look better with black and white masks, maybe. You can see in the highlights that the green kinda forms a solid line over the mask, which looks slightly awkward.
Unfortunately, this is very hard to do, especially with vertical deconvergence. The best option is to use deconvergence offsets equal to mask sizes, i.e. 3.0 with the modified 7.0 mask.
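For illustration, here’s a minimal sketch of that idea as a .slangp preset fragment — the parameter names (shadowMask, deconrx, deconbx) are assumptions based on recent builds and may not match this version, so verify the exact identifiers in the shader’s parameter list:

```
# Hypothetical preset fragment -- parameter names are assumptions and
# may differ between shader versions; check the shader menu for them.
shadowMask = "7.0"   # the modified 7.0 mask, which repeats over 3 texels
deconrx = "-3.0"     # red horizontal offset: one full mask width
deconbx = "3.0"      # blue horizontal offset, mirrored to the other side
```

The idea is simply that offsets equal to the mask width land the color fringes back on matching mask stripes instead of between them.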
Are existing presets broken for everyone with this new version, or is it just me?
It’s a test version and probably only temporary, so it’s a good idea to use the regular version with existing presets. There’s a good chance the next release version will retain compatibility with them.
@guest.r With past versions of the shader, a custom color space could be added by just replacing one of the existing color space matrices. But how does this work now, with the recent changes? I only see the sRGB matrix left; can that be replaced by a custom one so that everything will be OK?
Maybe, for improved configuration possibilities, it would be a nice idea to leave a slot open ("-1"?) where people can add their own custom color space? Since the color spaces now seem to be saved and configured via LUTs, it would be nice to know which steps are needed to add a custom color space.
Out of curiosity, did you use a custom or a publicly available LUT generator for the color spaces in the new lut folder?
As I understand from past discussions with @Dogway, a monitor’s real color space is seldom the theoretical one, even if it’s advertised as 100% sRGB coverage, for example. So more advanced users would want the ability to add their own custom, measured color space.
A more obvious example would be the monitors advertised nowadays with “110% sRGB coverage” or “95% DCI-P3 coverage”. For that kind of monitor, one would probably, from an accuracy perspective, want to use the exact measured monitor color space instead of the preconfigured theoretical “100%” DCI-P3 space in the shader.
The next release version will probably go back to the old model, but the test version is also interesting for users who are willing to do some custom color tweaks on their own. The shaders are quite mature by now. It’s almost entirely LUT-based, with some standard color spaces and CRT specs.
With the test version it’s enough to replace one …to_srgb LUT with a custom one retrieved from personal display calibration (the placeholder LUT is modern-to-srgb.png). It’s quite simply done with DisplayCAL, where it’s convenient that the target color space is sRGB, since it’s a good reference point for further adaptations (like sRGB to SMPTE-C…) and you can get quite accurate colors.
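If you’d rather not overwrite the stock PNG, the preset can also be pointed at your own file — a hedged sketch, where the texture identifier (SamplerLUT1) and the paths are illustrative and should be matched to the names used in the preset you’re editing:

```
# Hedged .slangp sketch -- texture name and paths are illustrative;
# real presets usually declare several textures in this list.
textures = "SamplerLUT1"
SamplerLUT1 = "lut/my-display-to-srgb.png"  # DisplayCAL output, same dimensions as modern-to-srgb.png
SamplerLUT1_linear = "true"                 # keep whatever sampling mode the stock preset uses
```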
Dang it, I actually like the new options quite a lot. Is it possible to retain them while keeping compatibility with older versions?
Also, what’s the purpose of the NTSC LUT? I get nice results combining it with the Adobe color space in the grade pre-shader, but using the Modern (which I assume is Rec. 709?) color space in the main shader (a weird combination, I know, but it makes colors look “right” to me). By itself it looks kinda odd, though. Is there some trick to using it properly?
Are both methods comparable in quality and performance? Just curious; personally, I would prefer the method that delivers the most accurate output, even if it comes at a slight performance hit.
If I use a personal monitor/DisplayCAL calibration LUT in the shader, resulting in an accurate color space for the monitor used, what value for “Scanline saturation” would then be preferable? Is a setting of 1.0 then “most accurate”?
Thanks for the continued development on the shader!
Then you can go with the ‘test’ version, since it’s a bit easier to create your custom calibration LUT compared with a transformation matrix. That said, the difference isn’t that drastic; it’s mostly the way a user can add custom stuff that differs.
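For comparison, the transformation-matrix route in the old model amounted to swapping a 3×3 gamut matrix in the shader source. A minimal slang/GLSL-style sketch with illustrative values — the actual matrix name and layout in the shader differ:

```
// Illustrative sketch only -- the name and row/column convention are
// assumptions, not the shader's actual code.
// Identity = "display gamut equals sRGB"; a measured display would
// supply its own primaries here instead.
const mat3 custom_to_srgb = mat3(
    1.0, 0.0, 0.0,
    0.0, 1.0, 0.0,
    0.0, 0.0, 1.0
);

vec3 to_srgb(vec3 c) { return custom_to_srgb * c; }
```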
It really depends on the CRT type and masks used, not to forget the available settings for brightness, contrast and saturation. Sometimes I decrease the scanline saturation with some setups because something ‘annoying’ is happening on screen; it can be compensated with gamma and saturation settings. If you use mild scanlines, the scanline saturation effect is also mitigated, etc. But it’s really hard to say in general, since an accurate answer would require measuring the user’s display and a corresponding CRT.
I found something to nitpick: some details are becoming way too thin with NTSC; it doesn’t look natural, because brighter areas should bloom into darker areas. Check out the text here:
Now look at Blargg’s NTSC filter:
This looks a lot more natural and is easier to look at, IMO.
Is there anything that can be done to improve this? I assume it’s just different interpolation or whatever; maybe there could be an option to switch methods?
So far this is intentional, because of some scanline artifacts that happen in many scenarios, mostly at surface edges, where the range of the NTSC filter becomes very apparent. Like, you get some pixels of stronger scanlines and then a too-harsh transition to ‘normal’ conditions.
New Release Version (2021-06-23-r1):
Notable changes:
- Small LUT code fix.
- LUT options overhaul; the new LUT selection is much better.
- Scanline spike removal code improvement with greater parameter values.
- DCI-P3 white point set to D65 to match most calibrations.
- New gamut preset added (modern) to cover an ‘average’ 90% DCI-P3 consumer display.
- CRT profile options now labeled.
- Compatibility with previous version presets is maintained.
Download link:
https://mega.nz/file/IthDiY4K#GnD5VC1LapNOb3CFkcgevA_jDryb29V5WhOs9kcwhgE
What does convert-ntsc.slang do? It seems to sharpen the blur effect on the dithering in Sonic 1, turning it back into vertical lines.
You can play with horizontal downsampling + the Prescale-X Factor to achieve the old effect, since the default look has changed (see the sketch below). Or the older version of the NTSC preset can be used; it’s available in the first post of the thread. As I mentioned, the focus of the new NTSC preset is nice filtering options for many other games.
Edit: An additional blend mode will probably be added that honors traditional dithering handling.
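A hedged sketch of that kind of tweak as a .slangp fragment — prescalex is a guess at the “Prescale-X Factor” identifier and may not match this release, and the horizontal downsampling parameter’s name varies between versions, so set that one from the shader menu:

```
# Hypothetical preset fragment -- identifiers are guesses and may
# differ in this release; the quoted label is what the menu shows.
prescalex = "2.0"   # "Prescale-X Factor": re-sharpens horizontally after the NTSC blur
# Raise the horizontal downsampling parameter from its default as well,
# so adjacent dither columns blend together instead of resolving as
# distinct vertical lines.
```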
It works just as you said. This thing is amazingly versatile.