PlainOldPants's Shader Presets

Just regarding the guest-advanced part: the shader chain mostly uses float framebuffers, and it's well thought through and tested in various testing environments. I tried @PlainOldPants's presets with the -hd version and they look as expected, although there is some default brightness loss as one starts, due to the mask and scanline effects.

To mention it, the ntsc presets can also be followed by simple scanline and dotmask presets, though the adjustment options will probably be limited.
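For example, an appended chain could look something like this in the slangp (the shader paths below are just placeholders for whatever ntsc and scanline/dotmask passes are actually used; newer RetroArch versions also have an Append Preset option in the shader menu):

shaders = "2"

shader0 = "my-ntsc-pass.slang"
filter_linear0 = "false"
float_framebuffer0 = "true"
scale_type0 = "source"
scale0 = "1.0"

shader1 = "my-scanline-dotmask.slang"
filter_linear1 = "false"
scale_type1 = "viewport"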

2 Likes

Quick sanity check. Does this look intended? (mask removed). Maybe it’s just much darker than I expect because I’m used to consumer-grade calibrations/settings?

2 Likes

The output seems a bit darker on the ramp.

“Input”:

“Output”:

2 Likes

You do it in the slangp, like this:

shader0 = "src/scanline-luma.slang"
filter_linear0 = "false"
float_framebuffer0 = "true"
scale_type0 = "source"
scale0 = "1.0"

That screenshot you posted doesn’t look right.

3 Likes

Just to mention it, with RetroArch there is an additional final "shader pass" after the shader chain, most probably in the default texture format (UNORM8 per channel with SDR), which maps the displayable area (through aspect ratio, etc.) onto the display/window size. So if the last ntsc pass uses a float framebuffer, it probably won't matter as long as no further shader chain is appended, and the presets from Patchy were probably tested without appended shader chains.

2 Likes

I just added float framebuffers to every pass in the chain; same result.

3 Likes

Looks like two things are happening:

  • Compression leading to highlight crush for red and blue; green is unaffected
  • Overall gamma is correct, but the picture looks dark

Maybe there's a chroma error in the NTSC shader; chroma may need to be multiplied by 2x or so.

3 Likes

Yes, the averaged chroma after lowpassing (without additional multiplication) comes out very close to 0.5 of the "original" because of the mod/demod method.
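For anyone following along, here's a quick numerical illustration of that 0.5 factor, as a generic quadrature mod/demod sketch (not code from any of the shaders discussed here): multiplying the modulated signal by the carrier and lowpassing leaves half the original chroma amplitude, which is why a 2x gain is needed somewhere after demodulation.

import numpy as np

fsc = 3.579545e6                              # NTSC color subcarrier (Hz)
fs = 8 * fsc                                  # sample rate, just needs to be well above 2*fsc
t = np.arange(20000) / fs

I, Q = 0.3, 0.2                               # constant test chroma
cos_c = np.cos(2 * np.pi * fsc * t)
sin_c = np.sin(2 * np.pi * fsc * t)
composite = I * cos_c + Q * sin_c             # modulated chroma (luma left out)

# Demodulate by multiplying with the carrier, then lowpass (here just a mean),
# which drops the 2*fsc terms and leaves I/2 and Q/2.
print(np.mean(composite * cos_c))             # ~0.15, i.e. I * 0.5
print(np.mean(composite * sin_c))             # ~0.10, i.e. Q * 0.5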

2 Likes

@anikom15, @guest.r For those of us not as well versed in coding, like myself: is this something I should check for and add to all the passes in my shader stack in order to get some sort of quality improvement?

1 Like

I’m a layman, but I just find the fringing on the letters a bit exaggerated.

1 Like

The brightness loss in my presets is caused by the NTSC color emulation. Change the video decoder/region setting to 0 to get unmodified colors. Otherwise, the default setting of 6 is based on Japanese NTSC, that is, NTSC primaries at D93 (9300K+8MPCD, x=0.283, y=0.298). Most US sets were probably close to decoder 2 (C), 3 (9300K+27MPCD, x=0.281, y=0.311), or 4 (8000K) when connected over RF, composite, or S-Video, and decoder 0 (plain sRGB D65) over component.
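To give a rough idea of why that D93-ish white point alone costs brightness, here's a quick back-of-the-envelope check using standard colorimetry (just an illustration, not the preset's actual math): converting the quoted white (x=0.283, y=0.298) to linear sRGB gives a blue value above 1.0, so the whole picture has to be scaled down to keep white in gamut.

import numpy as np

# Back-of-the-envelope check: a 9300K-ish white point rendered on an sRGB (D65)
# display pushes blue out of range, so everything gets scaled down and the
# picture loses brightness. Not code from the presets.

def xy_to_XYZ(x, y, Y=1.0):
    return np.array([x / y * Y, Y, (1.0 - x - y) / y * Y])

# Standard XYZ (D65) -> linear sRGB matrix
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

white_d93 = xy_to_XYZ(0.283, 0.298)   # the D93 (9300K+8MPCD) white quoted above
rgb = XYZ_TO_SRGB @ white_d93
print(rgb)                            # roughly [0.84, 1.01, 1.34] -> blue clips
print(rgb / rgb.max())                # scaled to fit: overall level drops by ~25%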

3 Likes

To be on the safe side, do it for all passes and see if you have any performance issues or any other problems. Most modern video cards are optimized to work on floats as well as integers now.

2 Likes

That helps a lot, thanks!

Also, I’m assuming the default is RF? What’s the best way to adjust the artifacting? I’ve been playing with the comb filters a bit, but any tips?

1 Like

My shader only does composite for now. I removed the RF lowpass because it was working badly. I am experimenting with fast noise now, though. (Edit: Come to think of it, S-Video would take maybe a minute for me to add.)

There is a lot of reading I still need to do about adaptive comb filtering. What I have implemented is only an educated guess, made by trying several random ideas and seeing what sticks.

The code currently does a mix of notch and comb filtering. The idea is that the lowest frequencies of the signal contain very little chroma, so we can just notch filter that and get near-perfect luma there. The rest of the signal is then adaptively comb filtered, and it always tries to comb filter without falling back to a plain notch filter.
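In rough NumPy terms, the split looks something like this; this is just a simplified illustration of the idea (assuming 4x-subcarrier sampling and a 180-degree chroma phase flip between adjacent lines), not my actual shader code, and the adaptive line-comparison part is left out entirely:

import numpy as np

FSC = 3.579545e6            # NTSC color subcarrier (Hz)
FS = 4 * FSC                # assumed sample rate: 4x subcarrier

def fir_lowpass(row, cutoff_hz, taps=31):
    # simple windowed-sinc FIR lowpass, applied along one scanline
    n = np.arange(taps) - (taps - 1) / 2
    h = np.sinc(2.0 * cutoff_hz / FS * n) * np.hamming(taps)
    return np.convolve(row, h / h.sum(), mode="same")

def split_luma_chroma(field, low_cutoff=1.0e6):
    # field: float 2D array of composite samples, shape (scanlines, samples)
    luma = np.zeros_like(field)
    chroma = np.zeros_like(field)
    for y in range(field.shape[0]):
        cur = field[y]
        prev = field[y - 1] if y > 0 else cur
        low = fir_lowpass(cur, low_cutoff)            # low band: nearly chroma-free luma
        high = cur - low                              # high band: luma and chroma overlap
        high_prev = prev - fir_lowpass(prev, low_cutoff)
        # Two-line comb: chroma phase inverts per line, so the sum cancels
        # chroma (leaving high-frequency luma) and the difference cancels luma.
        luma[y] = low + 0.5 * (high + high_prev)
        chroma[y] = 0.5 * (high - high_prev)
    return luma, chroma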

For my settings, “Sharpness” affects the strength of that luma notch filter. “Comb filter sharpness” directly multiplies the comb-filtered region by a factor. There are some other modes in there that I experimented with too, if you want to try them out, but I like the defaults that I set already. But, whatever you do, don’t demodulate at the narrowband bandwidth, as the baseband bandwidth is needed for the adaptive filter to compare lines effectively. (Edit: One of the modes allows you to compare the comb-filtered region of luma instead of the demodulated/lowpassed chroma, allowing you to use the narrowband bandwidth while still comb filtering kind of decently, but I didn’t like how it looked.)

From my experience, the way to figure things out is from chip datasheets, CRT service manuals, schematic diagrams, and, most importantly, actually getting real CRTs and consoles to try for myself. I have a Panasonic CT-36D30B from 2000 with an adaptive comb filter, plus a Genesis, NES, PS1, Wii, and a cheap composite-to-HDMI converter for testing out the comb filter. (Edit: Occasionally, patents can be helpful. There are also some research papers that include measurements of CRTs.)

1 Like

If we’re gonna do RF and are modelling a CRT TV, there’s gotta be an AFT circuit.

1 Like

There are likely a hundred different ways to implement an adaptive comb filter. I’ve seen filters that look at the lines, look horizontally, look across fields, look across frames, detect correlation, lock onto the peak luminance frequency, lock onto some other frequency, mix with a notch, use no notch at all, use different notch designs, differ on whether to bandlimit the output luminance, use different weights, use differential feedback, bypass comb filtering entirely on certain signals, and so on. There are plenty of patents, with little information about what was actually implemented.

You may be better off either implementing one thing very well that you have enough information for, or focusing on the technique you feel gives the best result for the application at hand (mainly 240p 2D games).

2 Likes

Pictures of my Panasonic CT-36D30B, which has a comb filter, with games on the Genesis, NES, and PS1. The test patterns are from the Genesis 240p test suite. https://drive.google.com/drive/folders/1balfWcYBL6im2w12awlIAT2biAaXSP2K

6 Likes

Can you please share more pictures of the PS1 BIOS? Also videos, if you don’t mind.

1 Like

Fantastic idea. That screen is like an absolute explosion of dot crawl artifacts, even on video capture. I’ll get it by tomorrow, I hope.

Edit: Here’s a video capture if you’re curious: https://youtu.be/warPE1-WLGU (the PS1 BIOS is at the end).

Just beware, none of my consoles are confirmed to have had their capacitors replaced (which can affect things like black/white level, saturation, and the amount of Y/C blurriness and interference), and my CRT itself is showing signs of aging and might benefit from being serviced.

1 Like

Good point, but my main intention was to see how interlacing appears, especially to find out whether there is a big phosphor-persistence effect that helps with it.

Anyway, thanks in advance :slight_smile:

1 Like