Interesting. Is it that you’re not trying to emulate phosphors down to the subpixel level but are capping it at the pixel level, so one emulated phosphor colour can be represented by one entire display pixel?
Yes, we can’t use the subpixels as phosphor dots because of the screen curvature and because the display’s subpixels are a slightly different color than the phosphors.
The original Scanline Classic shader directly transforms color coordinates to sRGB and clamps the result. The new version will have a chromatic adaptation transform and gamut compression available. First, the chromatic adaptation. The test input is SMPTE C bars simulating an NTSC-J monitor (NTSC-J primaries, D93 white point). Zebra stripes mark out-of-gamut colors (this is one of the many new debug tools available).
# Test NTSC-J
R_X = "0.618"
R_Y = "0.350"
G_X = "0.280"
G_Y = "0.605"
B_X = "0.152"
B_Y = "0.063"
R_WEIGHT = "0.2243"
G_WEIGHT = "0.6742"
B_WEIGHT = "0.1015"
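As a sanity check, the R/G/B weights above can be derived from the primaries and white point. Here is a small Python sketch of that derivation (the shader itself is not Python, and the D93 chromaticity x=0.2831, y=0.2971 is an assumed approximation):

```python
import numpy as np

def luma_weights(r_xy, g_xy, b_xy, white_xy):
    """Solve for the relative luminances of the three primaries such that
    equal-energy RGB (1, 1, 1) lands on the given white point; these are
    the R/G/B_WEIGHT values in the preset."""
    def col(x, y):
        # Chromaticity (x, y) -> XYZ column with Y = 1
        return np.array([x / y, 1.0, (1.0 - x - y) / y])
    P = np.column_stack([col(*r_xy), col(*g_xy), col(*b_xy)])
    return np.linalg.solve(P, col(*white_xy))

# NTSC-J primaries with an (assumed) D93 white point:
w = luma_weights((0.618, 0.350), (0.280, 0.605), (0.152, 0.063),
                 (0.2831, 0.2971))
print(np.round(w, 4))  # close to the 0.2243 / 0.6742 / 0.1015 weights above
```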
No chromatic adaptation:
In this test pattern, red is the only strongly blown-out color (cyan and magenta are slightly oversaturated, which you can see as faint zebra stripes). That’s good. The drawback of a direct conversion is a loss of dynamic range. We are also fighting against the inherent nonlinearity of the display: the closer we are to the white point, the more linear and accurate our display is. Chromatic adaptation allows us to estimate the ‘adapted’ response, where our eyes compensate for the difference in white point. We get back dynamic range and linearity.
Bradford Transform
The traditional Bradford transform includes a nonlinear step on the blue channel. I have retained this because tests show a significant enough difference between the nonlinear and linear versions of Bradford. We can clearly see that a chromatic adaptation transform does not fix our out-of-gamut colors and even introduces yellow and green blowouts; a separate gamut-mapping process is required to fix this. The white point is successfully transformed, but blue and cyan are darkened.
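For reference, the linear variant of the Bradford transform can be sketched in a few lines of Python (the shader itself is not Python; the D93 chromaticity is an assumed approximation, and the traditional nonlinear blue step is omitted here for brevity):

```python
import numpy as np

# Standard Bradford cone-response matrix (linear variant)
M_BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def xy_to_XYZ(x, y, Y=1.0):
    """Chromaticity (x, y) -> XYZ with luminance Y."""
    return np.array([x * Y / y, Y, (1.0 - x - y) * Y / y])

def bradford_cat(src_white_xy, dst_white_xy):
    """3x3 matrix adapting XYZ colors from the source white to the
    destination white: cone space, per-channel von Kries scaling, back."""
    src = M_BRADFORD @ xy_to_XYZ(*src_white_xy)
    dst = M_BRADFORD @ xy_to_XYZ(*dst_white_xy)
    scale = np.diag(dst / src)
    return np.linalg.inv(M_BRADFORD) @ scale @ M_BRADFORD

# Assumed chromaticities: D93 (approx.) and D65.
D93 = (0.2831, 0.2971)
D65 = (0.3127, 0.3290)

cat = bradford_cat(D93, D65)
# By construction, the D93 white maps exactly onto the D65 white.
print(cat @ xy_to_XYZ(*D93))
```

The CAT16 variant used by the Zhai-Li method has the same structure with a different cone-response matrix.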
Zhai-Li CAT16 Method
This uses a more up-to-date color adaptation model. I don’t know whether it’s more accurate, but the blue channel looks brighter.
Color correction results
Gamut compression scales down luminance when component values exceed 1.0 and scales down chrominance when components go negative.
Uncorrected, Rec. 709 primaries, D65
Corrected, NTSC-J primaries, D93 white point ‘absolute colorimetric’
Corrected, NTSC-J primaries, D93 transformed to D65 with gamut compression ‘perceptual’
How are you performing the gamut compression? One tool that I’ve used is this LUT generator https://github.com/ChthonVII/gamutthingy and I’ve heard of another LUT tool from ReShade.
The simple method scales colors by the maximum component value when at least one component exceeds 1.0.
The advanced method converts to CIELuv space, scales L down until all RGB components are less than or equal to 1.0, then scales u and v until all components are greater than or equal to 0.
It’s not really compression, it’s controlled clipping. This method allows us to convert any arbitrary RGB colorspace (or RG, monochrome) to sRGB.
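A minimal Python sketch of the simple method, assuming the scale-by-max behavior described above (the CIELuv method follows the same idea, but scales L and then u/v separately):

```python
def clip_preserve_hue(rgb):
    """Simple controlled clipping: if any component exceeds 1.0, scale the
    whole color down by the maximum component. This preserves the ratios
    between channels (the hue/chroma direction) at the cost of luminance."""
    m = max(rgb)
    if m > 1.0:
        return tuple(c / m for c in rgb)
    return tuple(rgb)

print(clip_preserve_hue((1.25, 0.5, 0.25)))  # -> (1.0, 0.4, 0.2)
print(clip_preserve_hue((0.5, 0.5, 0.5)))    # in-gamut colors pass through
```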
EDIT: It seems that gamut thingy tool is addressing a different use case. The color correction shown in these screenshots is mapping the phosphor gamut to sRGB space. It’s not simulating analog conversion circuitry. That will be addressed in a separate stage.
Gamutthingy includes both the phosphor gamut and nonstandard B-Y/R-Y demodulation wrapped together. You can perform one, the other, or both. The compression algorithm is for the phosphor gamut conversion, but it takes the B-Y/R-Y demodulation into account too to avoid compressing colors that won’t be output by the circuits.
I don’t see why a generic LUT shader couldn’t be used in place of the color output stage, so you could use your gamut thingy LUTs that way.
Well this is interesting … I may have accidentally discovered a way to easily make up lost brightness while maintaining gamma.
Can you provide details? I’m curious what this technique is.
I basically blend in the unmasked scanlines starting when the mask is fully saturated.
The mask is made from a series of Gaussian functions. Normally you’re supposed to normalize a Gaussian by its integral so that energy is conserved across the function, but I leave it unnormalized, so when you increase the sigma of these functions the mask dots blend into each other.
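A rough illustration of the idea in Python (parameter names are hypothetical, not the shader’s): because each dot keeps a peak of 1.0 instead of being divided by its integral, widening sigma fills in the gaps between dots rather than dimming them:

```python
import numpy as np

def dot_mask(width, pitch=3.0, sigma=0.5):
    """1-D slice of an unnormalized Gaussian dot mask.
    Each dot is exp(-d^2 / (2 sigma^2)) with peak 1.0 regardless of sigma,
    so widening sigma blends dots together instead of conserving energy."""
    x = np.arange(width)
    centers = np.arange(-pitch, width + pitch, pitch)  # one dot per pitch
    d = x[:, None] - centers[None, :]
    return np.exp(-d**2 / (2.0 * sigma**2)).sum(axis=1)

narrow = dot_mask(64, sigma=0.5)
wide = dot_mask(64, sigma=2.0)
# With a large sigma the valleys between dots fill in: the minimum of the
# mask rises toward (and past) 1.0, where a normalized Gaussian would dim.
print(narrow.min(), wide.min())
```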
I then changed the User Picture and Brightness settings to operate on (virtual) luminance directly instead of voltage (which gets capped by the distortion modeler) so that I could push values greater than 1.0 (to test my gamut compression algorithm). These two things essentially turned the CRT half of the pipeline into an HDR renderer. The zebra-stripe mode I implemented let me quickly scale the input without needing to measure. A tone mapper can give a little extra range, but if it’s pushed too hard the gamma starts to break down.
I then spent some time on the math (I am not good at math). As long as you are working in a linear space and avoid exponentiating your input, it should be possible to blend the image back up to the original input level (at the expense of mask sharpness or whatever else is cutting signal level):
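A toy example of the recovery step, under the assumption that both images are in linear light (the names are illustrative, not the shader’s):

```python
def recover_brightness(masked, unmasked, t):
    """Linear blend between the masked image and the unmasked scanline.
    Because the blend is a linear operation on linear-light values, it
    raises the mean signal level without bending the effective gamma
    (which it would do if applied to gamma-encoded values)."""
    return [(1.0 - t) * m + t * u for m, u in zip(masked, unmasked)]

unmasked = [0.2, 0.5, 1.0]            # linear-light scanline
masked = [v * 0.6 for v in unmasked]  # mask cuts the signal to 60%
# Blending 50% of the unmasked image back recovers 80% of the level:
print(recover_brightness(masked, unmasked, 0.5))
```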
There is a new beta available:
https://github.com/anikom15/scanline-classic/releases/tag/v6.0.0-beta2
You will find two kinds of presets: professional and consumer. I have still focused on SNES only, but I also added an N64 preset as a bonus. I hope comparing the two helps in understanding the settings. If the moiré from the slot mask bothers you, you can try shadow or aperture instead. I tried to organize the settings as best I could. Any feedback is welcome.
The next update will have RF and the remaining 16-bit consoles.
- New parameter system
- Sharpener circuit
- Improved color correction
- New tone mapper and gamut compressor
- New masks
- 1 wide color gamut preset included
- Full HDRR pipeline (SDR output only; HDR output TBA)
- Optimizations
cool, SECAM is next? 
There are no true SECAM consoles, but I would like to do it as a hypothetical. However, the color carrier is FM-modulated, which is non-trivial compared to how NTSC and PAL encode color, so I’m not sure how accurate it will come out.
Attempt at RF … still a work in progress. I don’t know why the color is so dull, hopefully just a factor of 2 I need to add somewhere.
This is getting closer. The noise model is maybe too aggressive. Hard to say because I don’t have any actual reference to compare to. But I included a lot of parameters to tweak it.
In Composite and S-Video/RGB situations if you can easily see the noise and not just “feel” it, it’s probably too high.
The Neo Geo AES over composite. I was expecting it to look worse since Neo Geo is really meant for RGB. The AES is different from the other 16-bit consoles because it doesn’t derive its master clock from the subcarrier (or vice versa). Instead it has a completely separate master clock and subcarrier oscillator.
The result is a nice blend with no flicker, assuming my derivation’s accurate.
The SNES presets are now in the Libretro shader pack. I will be adding other systems over time, hopefully without needing to change any of the shader code at this point. Before I do that, I think I will make a few generic presets that are stable for any input size and framerate, so they can be used with cores I haven’t made a preset for.
I am considering trimming down the number of shaders in the shader pack because there are a lot of variants for each system. I have already decided to limit consumer presets to one per system per region (usually composite). People can still download the full pack themselves.