New CRT shader - Any interest?

I haven’t done this in a long time, but the difference can be seen if you have alternating vertical blue bars with constant Y. If you lowpass before modulating, the red gets evened out more nicely. If you bandpass after modulating, there’s a rippling effect that becomes a little more noticeable. I forget how important this difference is, so it’s worth testing out first to see if it’s worth it.

CRT manufacturers had some trouble with this, too. By the 90s, it became common for CRTs to have an adjustable sharpness setting to control this. I need to look at schematics, but as I said in my first post, I’m only assuming that the most common way they did it was by doing an inductor/capacitor notch filter first, followed by a controllable sharpening filter inside of the decoder chip.

My shader includes a sharpness setting too, but it doesn’t work like that: I instead manually remade the notch filter 29 different times with varying widths and hardcoded all of them. In my Desmos graph, you can adjust the variable called “i4” to switch between the notch filters.

Edit: I forgot to address why my filter isn’t as deep as yours. I could have gone deeper, but I chose to stop at 20 dB on purpose. That number came from the specification for the I subcarrier, which is supposed to be attenuated by at least 20 dB at 3.6 MHz. 20 dB also looked very good in experimentation, as it seemed to be right at the point where the carrier visually disappeared.
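This isn’t how my shader actually builds them (mine were made by hand in Desmos and hardcoded), but for anyone who wants to experiment, here’s a rough scipy sketch of generating a notch with an adjustable width and roughly a 20 dB floor at the subcarrier. The sample rate, tap count, and width value are placeholder numbers, not anything from my shader.

import numpy as np
from scipy import signal

fs = 14.318e6               # assumed sample rate (4x subcarrier)
fsc = 3.579545e6
width = 1.0e6               # half-width of the notch, the knob to play with
floor = 10 ** (-20 / 20)    # -20 dB target at the subcarrier

freqs = [0.0, fsc - width, fsc, fsc + width, fs / 2]
gains = [1.0, 1.0, floor, 1.0, 1.0]
taps = signal.firwin2(63, freqs, gains, fs=fs)

# check the attenuation actually achieved at the subcarrier
w, h = signal.freqz(taps, worN=2048, fs=fs)
print(20 * np.log10(abs(h[np.argmin(abs(w - fsc))])))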

To make the decoder notch more hardware-accurate, I suggest looking at the datasheet for the Sony CXA2025AS, found in the Sony KV-27S22 from 1997. The chip itself contains both a notch filter and a sharpening filter, and the datasheet has line graphs for the frequency responses of both. Consider matching up against this.

Both I and Chthon (the author of the LUT generator gamutthingy, which I linked before) have made our own efforts to figure out what’s happening with retro game colors.

Because of the problem with gamut compression, my goal has been to reverse-engineer one (or preferably several) CRT’s color correction with respect to its own primaries, so that I can re-implement that same kind of correction directly into sRGB primaries. I believe this is 100% doable, in a way that will nicely emulate what a good number of real CRTs did while still mapping perfectly into sRGB.

The method that manufacturers generally agreed on was to mess with the B-Y/R-Y (and G-Y) demodulation. The result is a 3x3 matrix operation applied directly to R’G’B’, without accounting for gamma. Very simple and fast for a shader.
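To illustrate what I mean (this is just a sketch of mine, not code from either of our shaders), here is one way to build that matrix from a chip’s demodulation angles and gains. I’m assuming a convention where angles are measured from the +U (B-Y) axis and gains are absolute; datasheets usually give angles relative to the burst and gains relative to (R-Y) or (B-Y), so real values need converting first.

import numpy as np

def demod_matrix(angles_deg, gains):
    # Standard NTSC encoder: R'G'B' -> Y', U, V
    enc = np.array([
        [0.299,          0.587,          0.114],
        [0.492 * -0.299, 0.492 * -0.587, 0.492 *  0.886],
        [0.877 *  0.701, 0.877 * -0.587, 0.877 * -0.114],
    ])
    # Decoder: each output channel is Y' plus its demodulated color difference
    dec = np.empty((3, 3))
    for row, (a, g) in enumerate(zip(np.radians(angles_deg), gains)):
        dec[row] = [1.0, g * np.cos(a), g * np.sin(a)]
    return dec @ enc

# Standard axes/gains (R-Y: 1.14 @ 90, G-Y: 0.70 @ 235.8, B-Y: 2.03 @ 0)
# recover approximately the identity matrix; a chip's nonstandard values
# give the color correction matrix instead.
print(demod_matrix([90.0, 235.8, 0.0], [1.14, 0.70, 2.03]).round(3))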

The most difficult problem has been getting a “full set”, to be able to fully replicate a specific known CRT’s colors: The CRT’s phosphors, EOTF (a.k.a. gamma, may as well assume 2.2 for this), whitepoint, nonstandard demodulation settings, and default tint and color settings. Because of the lack of data on the internet, we only have a handful of these.
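For the phosphors-plus-whitepoint part of a set, the math is just standard colorimetry. A minimal sketch (the primaries here are placeholders, and the whitepoint is the 9300K+27MPCD value I get into below):

import numpy as np

def xy_to_XYZ(x, y, Y=1.0):
    return np.array([x * Y / y, Y, (1.0 - x - y) * Y / y])

def rgb_to_xyz(primaries_xy, white_xy):
    cols = np.column_stack([xy_to_XYZ(x, y) for x, y in primaries_xy])
    scale = np.linalg.solve(cols, xy_to_XYZ(*white_xy))  # make R=G=B=1 land on the whitepoint
    return cols * scale

M = rgb_to_xyz([(0.63, 0.34), (0.31, 0.595), (0.155, 0.07)],  # placeholder phosphors
               (0.281, 0.311))                                # 9300K+27MPCD
print(M.round(4))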

Other than that, we have separate known decoder chips (without known default settings), known phosphors, and known whitepoints, without knowing what goes with what.

For something consistent, I have one strong answer to this. I’ve found two CRTs, manufactured over a decade apart from each other by different brands with different phosphors, that I believe both share the same exact kind of NTSC color correction. I just haven’t yet fully figured out how they made it for their own specific phosphors.

See this document from 1966: https://ieeexplore.ieee.org/document/4179914 . The idea is this: You want the CRT’s white point to be at x=0.281, y=0.311 (a.k.a. 9300K+27MPCD), but you want the colors near red and green to look normal, without accounting for chromatic adaptation, and without caring much about the rest of the colors. The paper says that you can pick two points that you want exactly correct, and build your 3x3 matrix around that. The resulting color errors look like this:

In February this year, I got ahold of a Toshiba FST Blackstripe, model CF2005, manufactured in 1985. I used an X-Rite i1Display 2 colorimeter to get the primaries, and I looked up the datasheet for the TA7644BP to get the demodulation offsets and gains. (In the forum post, I used manually sampled offsets/gains, but the datasheet’s values give better results.) Since the TV’s whitepoint was not intact (due to capacitor aging), I had to guess and check the whitepoint. With a whitepoint of x=0.281, y=0.311 (the exact same as that 1966 paper), I got a graph that looked a lot like the one in that 1966 paper:

About a year ago, Chthon made some efforts to find a “full set” for a Sony TV. There were a few sources with sampled Sony CRT primaries that differed slightly from each other, but one in particular claimed to have official values provided by Sony. For demodulation axes/gains, there are several different chips. Assuming that CRTs with the same tube or chip had the same phosphors, it’s possible to apply these same “official” Sony phosphors to many different TV models, tubes, and chips. You can see the results of that here https://github.com/ChthonVII/gamutthingy/blob/master/src/constants.h

On CRT Database, there’s the Sony KV-27S22 https://crtdatabase.com/crts/sony/sony-kv-27s22 , manufactured in 1997, which has the Sony CXA2025AS. Thanks to Chthon, there’s a decent chance this has those same official Sony phosphors.

As with the Toshiba Blackstripe, the Sony KV-27S22 only has known phosphors and demodulation axes/gains, but not a known whitepoint. Sure enough, if I guess a whitepoint of 9300K+27MPCD with no chromatic adaptation, I get that same thing again:

When I messed with this, I tried several other whitepoints, different chromatic adaptation matrices, and SMPTE C primaries instead of NTSC primaries. Nothing else made sense. This has to be the answer for these TVs. It’s good evidence that this kind of color correction was common during the 80s and 90s, even from brands that were supposed to be good.

Here’s an unfinished, updated version of that NTSC shader that I’d posted in June: https://www.mediafire.com/file/pg7vevzuq4iia5i/p68k-fast-multipass-2025-09-22.zip/file Part of this contains an attempt at re-implementing the 1985 Blackstripe based on the points that they picked, but when I graphed it later, it looked too different from the original (and slightly worse, too). It’s decent for now if you want to include it in your shader, but it’s not quite finished, as I have to figure out how exactly they picked their perfect points and selectively darkened them.

There’s also a messy attempt at adaptive comb filtering in there, if you are interested. If you set the chroma demodulation to use baseband filtering, and you set the adaptive comb filter type to “4”, the result looks pretty good for PS1 games.

That said, this is definitely not what every single CRT brand did. I also happen to own a working RCA ColorTrak Remote E13169GM-F02 from 1989, which likewise does not take chromatic adaptation into account, but has a whitepoint closer to x=0.30, y=0.31. My Panasonic CT-36D30B from 2000 has a switchable whitepoint between roughly 7000K, 9500K, and 14000K, and it looks like it’s meant to do a correction based on D65 no matter which whitepoint you pick. I have not gotten service manuals or chip datasheets for either of these two CRTs yet, so I only have rough approximations for their demodulation offsets/gains. As for why these CRTs are different, it might be because the ColorTrak Remote is a 13-inch portable TV the size of a computer monitor (giving a different viewing environment), and because the Panasonic has a YPbPr component input. The point is, not all CRTs did this bizarre 9300K correction, but it must have been a thing for a long time.

Just like that, I’ve covered every single “full set” that I have looked at. That’s four total.

As for when this 9300K correction started disappearing, I have a decent guess. I once had a Toshiba CE20D10 from 1995, which had a TA8867AN chip. I don’t have phosphors for that CRT, but based on the TA8867AN’s demodulation axes/gains, it must have been a similar idea, using a higher color temperature than standard and hacking the red/green area back into place. There is also a very similar chip called the TA8867BN (note the one-letter difference), whose behavior looks more like the D65 correction in my Panasonic CRT from 2000. I would guess, then, that 1995 was around the time this 9300K correction was just starting to disappear. It may have to do with HDTV arriving soon after, in 1998, along with the adoption of YPbPr component.

There’s also one aspect of that Sony KV-27S22 that isn’t understood: Sony TVs had a feature called “Dynamic Color” which could be toggled on or off, and we don’t know what this feature is doing. Personally, I’m betting that Sony’s Dynamic Color is just a switch to another whitepoint with another nonstandard set of demodulation offsets/gains; in other words, my guess is that it’s the same thing all the other brands were doing, with no special circuit at all. The reason is that the datasheet for the Sony CXA1465AS chip describes the Dynamic Color circuit as detecting whites and flesh tones and adjusting white to be bluer without affecting the flesh tones; in other words, exactly what the normal demodulation offsets/gains are already doing. The only way to know for sure what happens when you toggle Dynamic Color on or off is to buy the chip on eBay and set it up on a breadboard for testing.

Brands other than Sony just stuck to nonstandard B-Y/R-Y demodulation, as far as I’m aware. Even Sony PVMs did this.

3 Likes

That’s a rabbit hole you’ve entered lol. Creating a generic NTSC shader is somewhat easy, but an accurate one is tricky. I’ve probably been at this around 2 years, on and off.

3 Likes

I was looking for a new fast shader and tested this, and it seems to work very well!

Is it possible to add an adaptive Cutoff (RGB/Y bandwidth) based on horizontal resolution?

since the horizontal resolution changes between video games, i.e. 2-phase and 3-phase,

and the horizontal resolution can change even within the same game (many PS1 games do this, for example)

maybe the same for ICutoff and QCutoff

it would also be nice if there were a coloring option (via the rainbow effect) in 2-phase

1 Like

I was looking for a new fast shader and tested this, and it seems to work very well!

Thank you!

Is it possible to add an adaptive Cutoff (RGB/Y bandwidth) based on horizontal resolution?

Right now my goal is actually the opposite: a consistent cutoff regardless of the resolution. The actual bandwidth constraints of the analog output of the PS1, for example, didn’t change when the output resolution changed.
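As a sketch of what that means in practice (assumed numbers, not the shader’s actual code), a fixed analog bandwidth in MHz maps to a different normalized cutoff depending on how many pixels the core puts in the active line:

ACTIVE_LINE_US = 52.6555   # approximate NTSC active line duration

def normalized_cutoff(cutoff_mhz, active_width_px):
    # cutoff as a fraction of the Nyquist frequency of the core's output
    pixel_rate_mhz = active_width_px / ACTIVE_LINE_US
    return min(1.0, cutoff_mhz / (pixel_rate_mhz / 2.0))

# the same 1.3 MHz chroma bandwidth at different core resolutions
for width in (256, 320, 512, 640):
    print(width, round(normalized_cutoff(1.3, width), 3))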

it would also be nice if there were a coloring option (via the rainbow effect) in 2-phase

The current version doesn’t have true NTSC decoding implemented, but that is what I’m working on. That will hopefully include the different phase behaviors of each system.

1 Like

No problem if it leads to a similar result :slight_smile: , because I noticed that to merge the dither I need a value of about {Cutoff = “1.700000”}, but this value is too strong in the case of 3-phase or 480!

1 Like

I plan to add system-specific behaviour to the NTSC implementation. That might be more like what you are looking for.

1 Like

I think I’ve managed to simulate the luma trap and chroma filters from the NTSC Genesis 1. I may have someone I know check my work just to be sure. The chroma filter in particular looks kind of… bad? But maybe that’s actually how the Genesis was!

I can generate FIR filters to use in the shader from these frequency response curves. I may give that a try this weekend.
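In case anyone wants to try the same thing, this is roughly how that step looks with scipy; the response points and sample rate below are placeholders, not the actual Genesis curves:

import numpy as np
from scipy import signal

fs = 13.5e6                                   # assumed sample rate of the simulation
freq_hz = [0, 1e6, 2e6, 3e6, 3.58e6, 4.5e6, fs / 2]
gain = [1.0, 1.0, 0.9, 0.5, 0.1, 0.05, 0.0]   # sampled from the response curve

taps = signal.firwin2(31, freq_hz, gain, fs=fs)
print(np.round(taps, 4))                      # paste into the shader as constants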

[Frequency response plots: genesis1-luma, genesis1-chroma]

7 Likes

Whatever you’re doing, don’t forget the TurboGrafx16, PC-Engine, TurboDuo, and SuperGrafx, which use more resolutions than any of those other systems, even in-game, and which benefit greatly from proper NTSC effects like checkerboard dither blending to produce even more colours.

2 Likes

TurboGrafx16/PCEngine is difficult because games can set whether a frame is 262 or 263 lines, and we can’t tell which just from the core output. This affects how the chroma phase shifts between frames. I don’t know what to do about that.
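For the curious, the arithmetic behind why it matters (227.5 subcarrier cycles per line is the broadcast NTSC value and only an assumption for the PC Engine here):

CYCLES_PER_LINE = 227.5

def frame_phase_step(lines_per_frame):
    # fraction of a subcarrier cycle the phase advances from frame to frame
    return (lines_per_frame * CYCLES_PER_LINE) % 1.0

print(frame_phase_step(262))   # 0.0 -> the dot/rainbow pattern repeats every frame
print(frame_phase_step(263))   # 0.5 -> the pattern inverts from frame to frame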

5 Likes

maybe ask libretro to output more metadata?

If you still have interlacing woes, you can look at the XGA2 preset in Scanline Classic: https://github.com/anikom15/scanline-classic/blob/master/xga2.slangp

The relevant code is in the pixel function here: https://github.com/anikom15/scanline-classic/blob/master/src/scanline-advanced.slang

I used 86Box and Windows 95 to debug the interlacer because I wanted something that could line-double, handle interlacing, and handle progressive scan all within the same shader. The SVGA cards we had in the 90s would line-double the 640x480 mode, and we usually set our desktops to 800x600 or 1024x768, so the shader was made to handle line doubling at low resolutions and switch to single lines when it detects a high enough resolution. 86Box only supports GLSL, so I actually used the GLSL shader, but I backported everything to the slang shader.

The HDTV preset is another one you can look at, as it line-doubles everything below 540 lines (actual CRT HDTVs did this). Not many CRT HDTVs handled 720p, but apparently at least one existed.
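Both presets really boil down to a threshold test on the input’s line count; a trivial sketch (illustrative thresholds, not the presets’ exact code):

def lines_drawn_per_input_line(input_height, threshold):
    # double-scan anything below the threshold, single-scan the rest
    return 2 if input_height < threshold else 1

print(lines_drawn_per_input_line(480, 540))   # HDTV preset: 480 -> line-doubled
print(lines_drawn_per_input_line(600, 540))   # 800x600 desktop -> single-scanned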

1 Like

There are several issues with interlacing that I think can only really be fixed with more metadata from the cores. Currently most cores output both fields (e.g. 480 vertical pixels) to indicate that the content is interlaced. We have no way of knowing which field is the current field and which is the previous. As a consequence:

  • We might display the wrong one and basically add a frame of lag.
  • We might get the NTSC phase offsets wrong. I’ve been struggling with how to do this. Even in standard NTSC the phase cycle is four fields long when interlacing. Some systems might be even more complicated. We kind of need to know where we’re at to make it work. We can make a guess and maybe it will be close enough, though. Unless this is what you’ve figured out?

crt-beans currently supports interlacing by either:

  • Showing one field at a time (properly offset and with the proper scanline sizes). There is a toggle for the “phase,” i.e. which field is current depending on whether the frame count is odd or even. This can cause issues on some LCD panels due to charge accumulation. You can basically get a weird flickering that persists even after the interlacing is gone.
  • Rendering both fields and blending them together. This works great for systems with 480p output to give them that 480i feel. If the two fields are different, you’ll get combing artifacts, though.
  • VGA line doubling (default on the VGA preset) that will line double the lower resolution VGA modes. I don’t really deal with anything above VGA resolutions.
2 Likes

finally found someone who cares about CRT interlacing!

true, I noted this before

unfortunately, interlacing seems so underrated, and RF is underrated too

2 Likes

The main thing I was concerned about with the Multiscan support in Scanline Classic was automatically enabling the line doubler at a certain point and adjusting the spacing between active lines based on resolution (the scanlines disappear at a high enough resolution). This support is relevant for computers. If you don’t care about computers or a rare type of HD CRT, it’s not relevant.

I think the field order is deterministic based on core for many systems and the behavior is documented. A simple macro system that can make defines based on the emulated system would be good enough, e.g.

#ifdef __SYSTEM_SNES
#define EVEN_FIELD_FIRST 0
#endif
1 Like

@beans lots of interest indeed! I just updated my shaders and found yours. Trying it out at the moment, and I like what I’m seeing a lot, it is next gen GTU. Excellent work :smiley:

1 Like

This is a cool shader, I like the automatic brightness mitigation idea, but it’s too bright on my HDR1000 display. Some additional mask controls (to make the mask darker) would go a long way towards making this shader compatible with a wider range of displays. Maybe an “override brightness mitigation with mask strength” option?

2 Likes

I want to add a mask strength parameter along with the NTSC simulation. I’m out of town and won’t be able to finish this up until next week at the earliest.

1 Like

@beans Your Mega Drive filters seem reasonable, assuming low output impedances and high input impedances (as it appears from the data sheet I have, though no specific values are given). The luma filter should start to roll off at around 8 MHz. That roll off does not become strong until about 17-18 MHz. This is because the notch filter is cascaded with a lowpass (it’s hard to determine where the cutoff is exactly because the capacitance (C34) is illegible, but it’s in video range). You can filter the two stages separately as the Y input has a high impedance, so the lowpass part does not significantly load the notch part. The existence of the lowpass is baffling to me and I don’t know why it’s there. Perhaps it was added to reduce some noise discovered late in design.
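If it helps, this is how I would model the cascade numerically: multiply the two responses, which is fine since the lowpass barely loads the notch. Every component value below is a placeholder, not something read off the schematic.

import numpy as np

f = np.linspace(1e5, 20e6, 4000)
w = 2 * np.pi * f

# series LC to ground, driven through a resistor: a notch near 3.58 MHz
R1, L1, C1 = 1e3, 27e-6, 73e-12
z_lc = 1j * w * L1 + 1.0 / (1j * w * C1)
h_notch = z_lc / (R1 + z_lc)

# RC lowpass (the stage with the illegible cap)
R2, C2 = 680.0, 100e-12
h_lp = 1.0 / (1.0 + 1j * w * R2 * C2)

h_db = 20 * np.log10(np.abs(h_notch * h_lp))
print(h_db[np.argmin(np.abs(f - 3.58e6))])   # combined attenuation at the subcarrier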

The CXA-1145 chip is supposed to have a 180 ns ‘delay line’ where these filters are installed. Now a notch filter certainly isn’t a delay line, but I was not able to find any info about what part would be used for a ‘delay line’ in this case and how that would affect the signal. I am guessing it would provide a better composite image than the notch filter by slightly misaligning the luma and chroma to reduce overlapping frequency peaks (changes in luma generally correspond to changes in chroma), but this is totally speculation.

The CXA-1145 has a minimum RGB output frequency response of 5.0 MHz. The data sheet shows a much higher range in a lab setup. The Y and C frequency responses are the same.

The chroma filter is a bandpass centered a little off subcarrier frequency. The design overlaps the two filters so that the result ends up broader than I expected. It’s highly dependent on the input and output impedance, but no less than 2.8 MHz or so. That equates to a baseband chroma bandwidth of 1.4 MHz, close to the 1.3 MHz SMPTE encoding standard.

Conclusions: Between the SNES and Genesis we see that chroma is filtered for both, but the SNES allows a wider bandwidth. Only the Genesis employs a notch filter, and it does so in place of a delay line, putting its use of its video chip out of spec. The Genesis also imposes an arbitrary lowpass filter on its luma; combined with the notch, this results in a very poor luma frequency response. The SNES does not filter its luma at all, in either composite or S-Video. Neither system filters its RGB.

Edit: I realized I posted this in the wrong thread, but I guess you can figure out what I’m talking about anyway.

The Mega Drive’s lack of delay line makes sense actually. It’s exactly why the dot pattern doesn’t shift over lines and we get the strong rainbow effects.

2 Likes

I think this is better anyway. I don’t want to take over @PlainOldPants’ thread.

The Mega Drive’s lack of delay line makes sense actually. It’s exactly why the dot pattern doesn’t shift over lines and we get the strong rainbow effects.

I think that is actually because the Mega Drive’s line length is 228 chroma cycles long, instead of the standard 227.5.
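The per-line arithmetic, for anyone following along:

# subcarrier phase at the start of successive lines, as a fraction of a cycle
for cycles_per_line in (227.5, 228.0):
    print(cycles_per_line, [(line * cycles_per_line) % 1.0 for line in range(4)])
# 227.5 -> [0.0, 0.5, 0.0, 0.5]  the dot pattern flips every line and tends to cancel out
# 228.0 -> [0.0, 0.0, 0.0, 0.0]  the pattern stacks vertically, so rainbows stand out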

I had assumed that the delay line was supposed to compensate for delay from the chroma filter, assuming that luma wouldn’t be filtered and would therefore be ahead of the chroma. As it is, I don’t know if the filters would have the same delay, so the chroma and luma might be slightly misaligned.

The CXA-1145 has a minimum RGB output frequency response of 5.0 MHz. The data sheet shows a much higher range in a lab setup. The Y and C frequency responses are the same.

I suspect that the RGB signal is, in effect, filtered by the response of the DAC and buffers. The SNES, in particular the 2-chip varieties, has a notoriously soft RGB signal. The SNES behavior is complex, though, and would be hard to simulate.

The luma filter should start to roll off at around 8 MHz. That roll off does not become strong until about 17-18 MHz.

Are these numbers correct? That seems pretty high. TVs should be filtering out anything in this range at the inputs anyway.

it’s hard to determine where the cutoff is exactly because the capacitance (C34) is illegible, but it’s in video range

I found a forum post with the values. I think I linked it in the other thread, but I can pull it up later.

Thank you for your analysis! I’m out of town and away from my computer for now, but I want to resume looking at this later and I think I’ll have some questions. I think we can get pretty close to accurate composite simulations for at least a few consoles.

Well, this is the sort of thing where it’s not easy to understand why it’s happening. Like, I wouldn’t expect transistor frequency response degradation to have that nonlinear effect we see in the 2-chip version. And this is just one example, but the video quality can be very different even between the same generation of systems, so we should be cautious about trying to simulate the exact response of something based on a single set of measurements. Here is a further discussion, and the conclusion is that the issue may be due to poor component tolerances: https://www.retrorgb.com/snesversioncompare.html

They are correct. I am only looking at this filter, not the system frequency response. But keep in mind that at 8 MHz, the luma is already attenuated. That is, while the notch part is sloping back up until around 8 MHz, the overall response is attenuating starting at about 1.8 MHz. I am assuming a very low output impedance and high input impedance. As the impedances close in, the filter’s slopes soften.

There’s also the question of dynamic range. I don’t know what dynamic range analog TVs have, probably less than 8 effective bits. Standards treat -30 dB as enough attenuation to be considered blank. What we can conclude here is that the rolloff from the lowpass part outweighs the effect of the notch. Ignoring the notch part and just filtering based on the lowpass response should already give you a result very close to the actual console, especially if you use a notch on the TV simulation side.

I did find other schematics for other variations, and they show a 110 pF cap that is consistently used across versions. The frequency response graph you see uses that new 110 pF value.

Sure. When I started looking into this stuff years ago, I found that it’s better to let good be the enemy of perfect. There are so many variables involved that it’s better to either do one thing in great detail or do many things in a more abstract, practical manner.