There is a new version available here (direct link here).
I fixed a bug in the new VGA line doubling that led to incorrect scanline drawing.
Beans, I’ve been testing your shader. I’d like to give some proper feedback, but I don’t like to rush these things; I need to get a feel for your work first. Just know that I appreciate your efforts and think they’re worthwhile. It’s another good shader for the collection.
I can say two quick things:
The fast version is not bad. Given what you wrote, I thought it would be much worse, but it’s actually satisfactory. It’s understandable that you may dislike it at first, as it’s just a stripped-down version of your main work, but it also gives modest hardware a taste of your skills. Think of it as another way to express your personal view of how a CRT shader can look good.
Do you think there could be a way to set proper gamma controls? Normally, when you mess with beam width / spot size, you can end up with a brighter or darker picture, and you could compensate for that with gamma control. Right now, sRGB gamma looks nice on default values, but a tad darker than similar shaders (and messing with the spot size may darken the visuals too much). The 2.2 gamma washes out the picture quite a bit, it doesn’t necessarily brighten things up.
All in all, still testing. I like the approach you took with your masks; it’s very malleable. It’s not easy to make this kind of shader look fine at 1080p.
Thank you for your testing!
I could add a gamma parameter. The reason that I haven’t is that the current settings are basically “correct” from a theoretical point of view.
I am using 2.4 as the CRT gamma, which is basically standard. This does result in a slight darkening of the image as compared to the input without a shader, but it should accurately capture the gamma of an average CRT. CRTs varied a lot and people changed their own settings, so if I added a gamma parameter it would probably be for this.
The output gamma is pretty standard, and it should be set to match your screen. Most screens will be a simple 2.2 gamma, but some are actually calibrated to sRGB (with the linear portion in the shadows). They look very similar except for the dark shadows. If this isn’t set to approximately match your screen, you can get weird effects. The tonality of the image will not be accurate, the dither may not work correctly, and the scanline shapes may even be altered.
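To picture how different those two transfer functions really are, here’s a quick standalone sketch (my own illustration, not shader code): the sRGB EOTF and a pure 2.2 power law nearly coincide in the midtones but diverge strongly in the deep shadows.

```python
# Comparing the sRGB EOTF (with its linear segment in the shadows)
# against a pure 2.2 power law. Illustrative only, not shader code.

def srgb_eotf(v):
    # Piecewise sRGB decoding: linear below the breakpoint.
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

def power_eotf(v, gamma=2.2):
    return v ** gamma

# Midtones: the two curves nearly coincide.
midtone_gap = abs(srgb_eotf(0.5) - power_eotf(0.5))

# Deep shadows: the sRGB linear segment is several times brighter,
# which is where a mismatched output gamma becomes visible.
shadow_ratio = srgb_eotf(0.02) / power_eotf(0.02)
```

The midtone gap is tiny, while a 2% gray comes out several times brighter under sRGB than under a pure power law, which is exactly where the weird effects show up.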
Regarding the brightness of the shader, if you set the maximum spot size to 1.0 it should be as bright as the original image if it was corrected for the difference between a CRT and modern screen gamma. The scanlines themselves would not actually darken the image at a maximum spot size of 1.0. Any value below that will decrease the brightness proportionally. That’s pretty much the only parameter that currently affects the brightness as a whole. The glow settings can sort of redistribute brightness, but they should have a minor effect on the average brightness of the image. The current mask blending method maintains the brightness of the pre-mask image on average (although, again, it sort of redistributes it). The minimum spot size also will not brighten or dim the image.
The low pass filter in the rgb and svideo presets can also dim small, bright detail. If you turn it up to 6.0MHz or use the monitor preset (which has no low pass filter), you may notice a difference. But this is faithful to the way CRTs worked as well. See @cgwg’s old post here:
Video amplifier. This is somewhat connected with the previous step, since the problem can be treated as an issue of signal processing. Since this applies before the gamma ramp, it can cause darkening of high-contrast areas — this is why gamma charts traditionally use horizontal lines.
The low pass filter is designed to mimic the limited bandwidth of the video amplifier (and other components in the signal path).
One thing that I have been careful about in this shader is not affecting the tonality of the image. The highlights don’t get rolled off and the shadows don’t get crushed or brightened. For example, here’s the output from one of my tests. I generate full screen grayscale images at varying brightness levels (corrected for CRT gamma) and run them through the shader. The x axis is the input brightness and the y axis is the output brightness. The solid, light blue line represents no change in brightness.
In this case, I used a maximum spot size of 0.9, so the actual values are shifted down slightly, representing a slight overall darkening. However, the curve stays linear. Changing the gamma will make this nonlinear, and this would change the tonality of the image.
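As a simplified model of that test (not the actual shader code; just decode with CRT gamma, scale linear light by the maximum spot size, and re-encode for the display), the output measured in display-linear light stays exactly proportional to the input:

```python
# Simplified model of the linearity test. Illustrative only.

CRT_GAMMA = 2.4
OUT_GAMMA = 2.2
MAX_SPOT = 0.9

def shader_model(level):
    linear = level ** CRT_GAMMA       # decode the CRT-gamma input
    linear *= MAX_SPOT                # overall scanline brightness
    return linear ** (1 / OUT_GAMMA)  # encode for the display

# In display-linear light, output/input is a constant 0.9 at every
# level: a straight line, shifted down, with no tonality change.
levels = [i / 10 for i in range(1, 11)]
ratios = [shader_model(v) ** OUT_GAMMA / v ** CRT_GAMMA for v in levels]
```

Adding a gamma adjustment on top of this would bend that line, which is the tonality change I want to avoid.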
All this is to say that, basically, it is complicated. The brightness of the shader is always at the maximum level that it can be without affecting the tonality of the image, and the current options guarantee linearity. Adding a simple gamma adjustment to brighten the output will cause details to be lost in the bright areas compared to an actual CRT, as it does in other shaders with such a feature. I don’t mean to rule it out as a future option, but that’s why I haven’t included it so far.
I’ve realized that I previously uploaded the wrong images to demonstrate the dither. Here’s a better demonstration anyway. There is no dither on the left side and dither on the right side. The image has been enlarged 2x and brightened 3x to make the difference more noticeable without having to be in a dark room. Hopefully you can see the banding on the left side and no banding on the right. You may have to view at 100% or zoom in.
ah, yeah, nice illustration. it’s very obvious on that shot.
This is some serious theory there. Why not put it to the ultimate test and compare some game screenshots against a real CRT (or several)?
I really appreciate your approach and ideas. I vote for leaving out the Gamma control if it goes against the main premise of the shader and allow the shader to be unique in that regard.
Getting this gamma setting correct is one of the most difficult aspects of CRT shader preset tweaking; for years I’ve longed for something like this, which auto-compensates for brightness without clipping or losing shadow detail.
If you look deep into the history of my forum posts you might see a few asking about that somewhere.
The sad thing about this is that it’s another shader to be added to the now large multitude. What would it do better than my current favourite shaders and what would it do worse?
You ask if there is “any interest?”
I say, change that last part to “if I build it, they will come, eventually…hopefully” and sort of decouple yourself, your work and your expectations from a particular outcome.
Once you like what you’re doing and know what you’re doing is important to you and it passes the litmus test later on of going against a real CRT then eventually you can try to showcase it and garner some media coverage.
If it’s truly game changing and makes it easier to get an accurate output then I’m sure your efforts will be greatly appreciated but not necessarily rewarded.
By the way, the shader operates within certain constraints. Any plans to go the extra mile and incorporate some of the concepts of the Sony Megatron Color Video Monitor Shader when it comes to accuracy and maximizing brightness compensation by using the brightness of the display and HDR where possible?
The development of Sony Megatron Color Video Monitor seems to have stagnated, so if you could combine the best of your work with the best of the concepts contained within Sony Megatron Color Video Monitor, then going forward your shader might evolve to become one of the top choices for anyone who’s interested in accurate and eye-catching CRT shader emulation.
You don’t necessarily have to take the exact Megatron approach but you could also test and verify if there is any benefit to using RetroArch’s built-in Global HDR controls with your shader.
I believe the HDR stuff in the megatron shaders is mostly separate from the mask/scanline stuff, so it’s probably doable to integrate the other 2 shaders as pre-passes for this one. IIRC, the main issue is just that the last pass output needs to be a specific format to trigger the HDR signal.
crt-beans is now in the libretro slang-shaders repository, and should be available to everyone who has recently updated their shaders through the online updater! (Thanks @hunterk!)
There are 4 presets. In the “crt” directory are:
In the “presets/crt-plus-signal” directory is:
A description of the available parameters is here in the original repository.
I have some ideas for things I want to improve or add, but I haven’t had a lot of time recently and it’s been slow going. Still, I think the shader is usable enough as-is.
I have been playing around with NTSC simulation. None of the existing NTSC shaders worked quite the way I wanted for crt-beans. I’ve created a new NTSC shader that:
Maister’s NTSC shaders and ntsc-adaptive don’t fully encode and decode NTSC. They essentially keep luma and chroma separate, introduce some amount of crosstalk, and then low-pass the result. This obviously works pretty well, but this new NTSC shader should be a bit more faithful to the way NTSC functioned, although with less flexibility in how the artifacts appear. It should be more robust to different resolutions as well.
@DariusG’s shaders do actually encode and decode NTSC, but I wanted something I could add more features to, which works better with crt-beans (which requires resampling to a larger horizontal size), and which low-passes the components before encoding to NTSC (which is generally what would have been done).
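As a rough sketch of the “full encode/decode” idea (the constants and sampling here are illustrative, not crt-beans code): luma and quadrature-modulated chroma are summed into one composite signal, and product detection plus a low-pass recovers them.

```python
import math

FSC = 3.579545e6  # NTSC color subcarrier frequency in Hz

def encode(y, i, q, t):
    # Composite = luma plus chroma quadrature-modulated on the carrier.
    w = 2.0 * math.pi * FSC
    return y + i * math.cos(w * t) + q * math.sin(w * t)

def decode(samples, times):
    # Product detection over a whole number of carrier cycles; the
    # averaging here stands in for the post-demodulation low-pass.
    w = 2.0 * math.pi * FSC
    n = len(samples)
    y = sum(samples) / n
    i = sum(2.0 * s * math.cos(w * t) for s, t in zip(samples, times)) / n
    q = sum(2.0 * s * math.sin(w * t) for s, t in zip(samples, times)) / n
    return y, i, q

# Four carrier cycles at 8 samples per cycle (an exact whole number
# of cycles, so the carrier terms average out cleanly).
times = [k / (8.0 * FSC) for k in range(32)]
samples = [encode(0.5, 0.2, -0.1, t) for t in times]
recovered = decode(samples, times)
```

In a real decoder the averaging is replaced by notch/low-pass filters and the Y, I, Q values vary along the line, which is where all the crosstalk artifacts come from.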
This is still experimental and I have only implemented the Mega Drive/Genesis style of color carrier phase behavior so far (which should cover several other systems, too). It should be fairly simple to add other consoles’ behaviors.
Currently the rainbow artifacts are a bit too strong. I think this is probably because the actual Mega Drive had a luma trap filter (although not a very good one). That is something I plan to implement as well.
Screenshot here:
Compare to:
It’s so nice to see another person trying at this kind of NTSC simulation. I don’t like the other options so much either.
Here are my suggestions for this:
(According to the actual standard, I is filtered to be less than 2 dB down at 1.3 MHz and more than 20 dB down at 3.6 MHz (really 3.58 MHz, the chroma carrier) with minimal ringing and overshoot, and Q is filtered to be less than 2 dB down at 0.4 MHz, less than 6 dB down at 0.5 MHz, and more than 6 dB down at 0.6 MHz. For RF modulation, no values are listed, but there’s a diagram showing that up to 4.2 MHz should be recoverable, and everything should disappear by 4.5 MHz.)
Those are the most important points. After that, things get complicated.
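Those numbers are straightforward to sanity-check against any candidate filter. Here’s a rough sketch using a generic Hamming-windowed sinc (the tap count and cutoff are illustrative, not anyone’s actual shader filter) and evaluating its response at the spec frequencies:

```python
import math

def windowed_sinc_lowpass(num_taps, cutoff_hz, fs_hz):
    # Generic Hamming-windowed sinc low-pass, normalized to 0 dB at DC.
    m = num_taps - 1
    taps = []
    for n in range(num_taps):
        x = n - m / 2.0
        if x == 0.0:
            ideal = 2.0 * cutoff_hz / fs_hz
        else:
            ideal = math.sin(2.0 * math.pi * cutoff_hz * x / fs_hz) / (math.pi * x)
        window = 0.54 - 0.46 * math.cos(2.0 * math.pi * n / m)
        taps.append(ideal * window)
    total = sum(taps)
    return [t / total for t in taps]

def response_db(taps, freq_hz, fs_hz):
    # Magnitude of the filter's frequency response at one frequency.
    w = 2.0 * math.pi * freq_hz / fs_hz
    re = sum(t * math.cos(w * n) for n, t in enumerate(taps))
    im = sum(t * math.sin(w * n) for n, t in enumerate(taps))
    return 20.0 * math.log10(max(math.hypot(re, im), 1e-12))

# Example: a low-pass sampled at 4x the subcarrier. Check how far
# down it is at 1.3 MHz and at ~3.58 MHz (the I spec frequencies).
FS = 4.0 * 3.579545e6
taps = windowed_sinc_lowpass(23, 1.8e6, FS)
```

This particular toy filter doesn’t necessarily hit the I spec exactly; the point is that checking a design against the spec frequencies takes only a few lines.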
The original matlab files are included in the Cg shader repo, so you can see what they’re doing under the hood: https://github.com/libretro/common-shaders/blob/master/ntsc/shaders/filter_3phase.m
I’ve been vibe-coding/cheating my way through an NTSC de/modulation pipeline:
but I think the filtering is bad, and chatGPT seems to have run out of ideas lol.
Thanks @PlainOldPants, and great post.
Don’t bother with the separate I and Q bandwidths. Real hardware used B-Y/R-Y both when encoding and when decoding, which forced using the same bandwidth for both. As long as your filters are implemented as simple integrals (like what you’ve been doing so far, and like 99% of LibRetro shaders do), you’ll be fine using YIQ instead of B-Y/R-Y, since the algebra works out nicely here.
I have been filtering both I and Q the same. I’ve retained an option to filter them to different bandwidths but I’m considering removing that. When I said “at different bandwidths” I meant that I was filtering Y at a higher bandwidth than I and Q. I’ve also considered using B-Y/R-Y instead, I just did YIQ first.
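A quick numerical check of the “algebra works out” point: since (I, Q) is a fixed linear transform of (B-Y, R-Y), applying the same FIR to both channels commutes with the transform. Sketch below (the coefficients are the usual YIQ definitions; the taps and signal values are arbitrary):

```python
# Filtering in YIQ or in B-Y/R-Y gives the same result when both
# channels share one FIR, because convolution is linear.

def to_iq(by, ry):
    # Standard YIQ definitions in terms of the color-difference signals.
    i = -0.27 * by + 0.74 * ry
    q = 0.41 * by + 0.48 * ry
    return i, q

def fir(signal, taps):
    # Simple causal FIR convolution, zero-padded at the start.
    return [sum(t * signal[n - k] for k, t in enumerate(taps) if n - k >= 0)
            for n in range(len(signal))]

taps = [0.25, 0.5, 0.25]
by = [0.0, 0.3, 0.6, 0.2, -0.1, 0.0]
ry = [0.1, -0.2, 0.4, 0.5, 0.0, -0.3]

# Path A: transform to I/Q first, then filter each channel.
i_sig = [to_iq(b, r)[0] for b, r in zip(by, ry)]
q_sig = [to_iq(b, r)[1] for b, r in zip(by, ry)]
path_a = (fir(i_sig, taps), fir(q_sig, taps))

# Path B: filter B-Y and R-Y first, then transform.
fby, fry = fir(by, taps), fir(ry, taps)
path_b = ([to_iq(b, r)[0] for b, r in zip(fby, fry)],
          [to_iq(b, r)[1] for b, r in zip(fby, fry)])
```

The equivalence breaks only if I and Q get different filters, which is exactly why real hardware’s shared bandwidth makes YIQ a safe working basis.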
For this first low pass filter, I use a simple convolution with a Hann function. This is a continuous-time FIR filter. I originally did this for analog RGB simulation, which is also a feature of crt-beans. By treating the pixel data like a piecewise constant signal and then convolving the Hann function, I aimed to simulate the sample-and-hold behavior of the DAC combined with the limited bandwidth of the DAC and amplifier. So I basically get a rounded over square wave, and then sample from that.
It’s probably not quite accurate to do the same for YIQ, given that most consoles didn’t generate YIQ directly but converted from RGB (though what was going on in the later, more integrated chips, I have no idea). It is convenient for now, though!
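The piecewise-constant-plus-Hann idea can be sketched in isolation like this (a simplified standalone model, not the shader source). The key convenience is that the Hann kernel integrates in closed form, so each pixel’s weight is just the kernel mass overlapping its interval:

```python
import math

def hann_cdf(x, radius):
    # Integral of a unit-area Hann kernel from -radius up to x.
    if x <= -radius:
        return 0.0
    if x >= radius:
        return 1.0
    return 0.5 + (x + (radius / math.pi) * math.sin(math.pi * x / radius)) / (2.0 * radius)

def sample_filtered(pixels, t, radius):
    # Convolve a piecewise-constant "scanline" (pixel p spans [p, p+1))
    # with the Hann kernel, evaluated at continuous position t.
    acc = 0.0
    for p, value in enumerate(pixels):
        # Weight = kernel mass overlapping this pixel's interval.
        w = hann_cdf(p + 1 - t, radius) - hann_cdf(p - t, radius)
        acc += value * w
    return acc
```

Flat regions pass through unchanged (the weights telescope to 1), while edges turn into the rounded-over square wave described above.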
A lot of real hardware doesn’t lowpass B-Y/R-Y before modulating, and instead bandpasses the subcarrier. The Sega Genesis and Sony PlayStation both do this, but the SNES does not. Doing this causes slightly different artifacts.
That’s a good point, and I may test this out to see if there is any noticeable visual difference. Mostly I just don’t want a million variations for different platforms. For some of the later video chips, the filters seem to be internal and I can’t find any information on the frequency response or whether they filter before modulating, so I’m not really sure what to do there.
The NES did the signal very differently. I suggest looking in gtu-famicom for an example, but it’s missing more up-to-date voltage values and info that you can find in the nesdev wiki. Row-skew in the NES color palette matters too.
I know that the NES outputs composite video directly and I’ve mostly been ignoring that for now. My primary reference so far has been the Mega Drive/Genesis, simply because it is the most convenient.
Everyone keeps talking about lowpassing under some bandwidth, or notching out some frequency, but that’s not right at all. It’s better to pay attention to the overall frequency response, like I did in this desmos graph https://www.desmos.com/calculator/tg2yrsqqqv based on this article https://www.mathworks.com/help/signal/ug/fir-filter-design.html . (Maister used MATLAB, but I’m not sure exactly how.) I’ve already done this, and you can find my code for this in my latest post in my thread, except that the RF lowpass keeps causing the chroma saturation to decrease, unfortunately.
(According to the actual standard, I is filtered to be less than 2 dB down at 1.3 MHz and more than 20 dB down at 3.6 MHz (really 3.58 MHz, the chroma carrier) with minimal ringing and overshoot, and Q is filtered to be less than 2 dB down at 0.4 MHz, less than 6 dB down at 0.5 MHz, and more than 6 dB down at 0.6 MHz. For RF modulation, no values are listed, but there’s a diagram showing that up to 4.2 MHz should be recoverable, and everything should disappear by 4.5 MHz.)
I have been looking at the frequency response as well. The continuous-time FIR filters I mentioned above for YIQ don’t quite meet the requirements for Q (they are close, and they meet the requirements for I). They result in a -18 dB/octave roll-off. The benefit is that there is no ringing or overshoot.
For both the notch filter used in decoding and the low pass after chroma demodulation, I am using FIR filters designed with SciPy (using the signal package). I just precompute a static array of coefficients, similar to Maister’s approach. One difference is that I don’t need a different set of coefficients for each input resolution, because I resample the signal to a fixed resolution after the initial YIQ filters. That makes things much easier and more consistent for different input resolutions.
This is the frequency response of the notch filter I’m currently using (a Blackman-windowed sinc function). I am probably being too aggressive and it is resulting in some ringing. I noticed that your filter is wider and not as deep. I have struggled with deciding how steep the response should be.
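For reference, here’s a rough standalone sketch of a windowed-sinc band-stop built the textbook way, as a low-pass plus a spectrally inverted low-pass (Blackman window; the tap count and band edges are illustrative, not my actual values):

```python
import math

def lowpass(num_taps, cutoff, fs):
    # Blackman-windowed sinc low-pass, unity DC gain, odd num_taps.
    m = num_taps - 1
    taps = []
    for n in range(num_taps):
        x = n - m / 2.0
        if x == 0.0:
            ideal = 2.0 * cutoff / fs
        else:
            ideal = math.sin(2.0 * math.pi * cutoff * x / fs) / (math.pi * x)
        w = (0.42 - 0.5 * math.cos(2.0 * math.pi * n / m)
             + 0.08 * math.cos(4.0 * math.pi * n / m))
        taps.append(ideal * w)
    total = sum(taps)
    return [t / total for t in taps]

def notch(num_taps, low, high, fs):
    # Band-stop = low-pass below `low` plus a spectrally inverted
    # low-pass at `high` (a high-pass). Both are linear phase with
    # the same group delay, so the taps simply add.
    lp = lowpass(num_taps, low, fs)
    hp = [-t for t in lowpass(num_taps, high, fs)]
    hp[num_taps // 2] += 1.0
    return [a + b for a, b in zip(lp, hp)]

def response_db(taps, freq, fs):
    w = 2.0 * math.pi * freq / fs
    re = sum(t * math.cos(w * n) for n, t in enumerate(taps))
    im = sum(t * math.sin(w * n) for n, t in enumerate(taps))
    return 20.0 * math.log10(max(math.hypot(re, im), 1e-12))

FS = 4.0 * 3.579545e6
notch_taps = notch(41, 2.2e6, 5.0e6, FS)
```

Widening the band edges makes the notch shallower and shorter (less ringing); narrowing them deepens it at the cost of a longer impulse response, which is exactly the steepness trade-off I’ve been struggling with.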
And here are the frequency responses for Maister’s 2-phase filters, just for fun:
The filters from the actual NTSC standard aren’t quite right. This is a thing I haven’t done yet: Original consoles and TVs typically did these filters using so-called LC circuits. You can look up circuit schematics of consoles and TVs to see this. Consoles generally did all their filtering with just the LC circuits, but some TVs had filters built into the decoder chip instead. I haven’t looked around enough to decide what’s the most common, but I currently assume the most common setup in 1990s TVs was to first bandpass and notch with inductors and capacitors to separate luma/chroma, then use the decoder chip to sharpen luma. For 1980s, I assume it’s the same except that the sharpness is often skipped. I’ve tried using circuit simulator programs like ngspice to help with this, but it’s too much for me to figure out. I’ve 80% given up on ever doing this.
I have also been looking at the circuit diagrams and was hoping to simulate the filters for the Mega Drive/Genesis, at least. I will admit that it’s been over 15 years since I’ve simulated an analog filter, so I guess we’ll see if I can figure it out again. I have some friends who may be able to help if I can manage to enlist them.
One extra thing to consider here is the phase response of the filters. With digital filters, it is straightforward to design something with a linear phase response, but many analog filters had non-linear phase responses. I assume the filter designers tried to keep the phase response as linear as possible (using Bessel filters or something), but who knows? Phase errors in the chroma could alter the colors.
Consumer TVs in the US did a color correction based on 1953 NTSC primaries and illuminant C when using RF, composite, or S-Video, but not when using YPbPr component or RGB. This caused most games’ colors to get messed up and over-saturated. Neither SMPTE C nor SMPTE 170M caused this to change; it continued into the 2000s probably until HDTV. In Japan, the correction was different, probably because of 9300K. US models, even as late as 1997 with the Sony KV-27S22 (which had the CXA2025AS, as in the NES palette), did another trick where they would make the area near red and green look okay, but drag the white point from C to 9300K+27MPCD at the expense of other colors, like this https://forums.nesdev.org/viewtopic.php?t=26093 . I won’t overload you with all the difficult details on this, but the best way to do this currently is to take the settings from my nesdev post, and put them into this LUT generator by ChthonVII https://github.com/ChthonVII/gamutthingy which also happens to have a mode specifically made to be used along with NTSC emulation.
I’m mostly ignoring color correction and treating everything as if it was sRGB/709 primaries with a D65 white point. There are a few reasons for this:
I will read though your nesdev post, though. I had just assumed that, given the original primaries were wildly unrealistic, manufacturers decided to do whatever they felt like doing instead. If there is something consistent then I will look into it.
I appreciate the amount of research you’ve done and I feel like I should probably look through your NTSC shader as well!
I haven’t done this in a long time, but the difference can be seen if you have alternating vertical blue bars with constant Y. If you lowpass before modulating, the red gets evened out more nicely. If you bandpass after modulating, there’s a rippling effect that becomes a little more noticeable. I forget how important this difference is, so it’s worth testing out first to see if it’s worth it.
CRT manufacturers had some trouble with this, too. By the 90s, it became common for CRTs to have an adjustable sharpness setting to control this. I need to look at schematics, but as I said in my first post, I’m only assuming that the most common way they did it was by doing an inductor/capacitor notch filter first, followed by a controllable sharpening filter inside of the decoder chip.
My shader includes a sharpness setting too, but it doesn’t work like that: I instead manually remade the notch filter 29 different times with varying widths and hardcoded all of them. In my desmos graph, you can adjust the variable called “i4” to switch between each notch filter.
Edit: I forgot to address why my filter isn’t as deep as yours. I could have gone deeper, but I chose to stop at 20 dB on purpose. That number came from the specification for the I component, which is filtered to be more than 20 dB down at 3.6 MHz. 20 dB also looked very good in experimentation, as it seemed to be right at the point where the carrier visually disappeared.
To make the decoder notch more hardware-accurate, I suggest looking at the datasheet for the Sony CXA2025AS, found in the Sony KV-27S22 from 1997. The chip itself contains both a notch filter and a sharpening filter, and the datasheet has line graphs for the frequency responses of both. Consider matching up against this.
Both I and Chthon (the author of the LUT generator gamutthingy, which I linked before) have made our own efforts to figure out what’s happening with retro game colors.
Because of the problem with gamut compression, my goal has been to reverse-engineer one (or preferably several) CRT’s color correction with respect to its own primaries so that I can re-implement that same kind of correction directly in sRGB primaries. I believe this is 100% doable, in a way that will nicely emulate what a good number of real CRTs did, while still mapping perfectly into sRGB.
The method that manufacturers generally agreed on was to mess with the B-Y/R-Y (and G-Y) demodulation. The result is a 3x3 matrix operation on R’G’B’ that doesn’t account for gamma. Very simple and fast for a shader.
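A sketch of that reduction (the gain values below are invented for illustration, not from any real chip): scale the standard B-Y/R-Y by nonstandard gains, rebuild G-Y from them, and read the resulting 3x3 matrix off the R’G’B’ basis vectors.

```python
def demod_matrix(gain_by, gain_ry):
    # Hypothetical nonstandard gains on the B-Y and R-Y demodulators;
    # G-Y is rebuilt from the (scaled) color-difference signals using
    # the standard (rounded) coefficients.
    def apply(r, g, b):
        y = 0.299 * r + 0.587 * g + 0.114 * b
        by = (b - y) * gain_by
        ry = (r - y) * gain_ry
        gy = -0.509 * ry - 0.194 * by
        return (y + ry, y + gy, y + by)
    # Read the matrix off the basis vectors (everything is linear).
    cols = [apply(*basis) for basis in ((1, 0, 0), (0, 1, 0), (0, 0, 1))]
    return [[cols[j][i] for j in range(3)] for i in range(3)]

# Gains of 1.0 reproduce (approximately) the identity matrix, and any
# gains leave white untouched, since each row still sums to 1.
identity_ish = demod_matrix(1.0, 1.0)
```

Real chips also rotate the demodulation axes, not just scale them, but that still collapses into the same kind of 3x3 matrix.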
The most difficult problem has been getting a “full set”, to be able to fully replicate a specific known CRT’s colors: The CRT’s phosphors, EOTF (a.k.a. gamma, may as well assume 2.2 for this), whitepoint, nonstandard demodulation settings, and default tint and color settings. Because of the lack of data on the internet, we only have a handful of these.
Other than that, we have separate known decoder chips (without known default settings), known phosphors, and known whitepoints, without knowing what goes with what.
For something consistent, I have one strong answer to this. I’ve found two CRTs, manufactured over a decade apart from each other by different brands with different phosphors, that I believe both share the same exact kind of NTSC color correction. I just haven’t yet fully figured out how they made it for their own specific phosphors.
See this document from 1966: https://ieeexplore.ieee.org/document/4179914 . The idea is this: You want the CRT’s white point to be at x=0.281, y=0.311 (a.k.a. 9300K+27MPCD), but you want the colors near red and green to look normal, without accounting for chromatic adaptation, and without caring much about the rest of the colors. The paper says that you can pick two points that you want exactly correct, and build your 3x3 matrix around that. The resulting color errors look like this:
In February this year, I got ahold of a Toshiba FST Blackstripe, model CF2005, manufactured in 1985. I used an X-Rite i1Display 2 colorimeter to get the primaries, and I looked up the datasheet for the TA7644BP to get the demodulation offsets and gains. (In the forum post, I used manually sampled offsets/gains, but the datasheet’s values give better results.) Since the TV’s whitepoint was not intact (due to capacitor aging), I had to guess and check the whitepoint. With a whitepoint of x=0.281, y=0.311 (the exact same as that 1966 paper), I got a graph that looked a lot like what was in that 1966 paper:
About a year ago, Chthon made some efforts to find a “full set” for a Sony TV. There were a few sources that had sampled Sony CRT primaries that were slightly off from each other, but one in particular claimed to have official values provided by Sony. For demodulation axes/gains, there are several different chips. Assuming that CRTs with the same tube or chip had the same phosphors, it’s possible to apply these same “official” Sony phosphors to many different TV models, tubes, and chips. You can see the results of that here https://github.com/ChthonVII/gamutthingy/blob/master/src/constants.h
On CRT Database, there’s the Sony KV-27S22 https://crtdatabase.com/crts/sony/sony-kv-27s22 , manufactured in 1997, which has the Sony CXA2025AS. Thanks to Chthon, there’s a decent chance this has those same official Sony phosphors.
As with the Toshiba Blackstripe, the Sony KV-27S22 only has known phosphors and demodulation axis/gains, but not a known whitepoint. Sure enough, if I guess a whitepoint of 9300K+27MPCD with no chromatic adaptation, I get that same thing again:
When I messed with this, I tried several other whitepoints, different chromatic adaptation matrices, and SMPTE C primaries instead of NTSC primaries. Nothing else made sense. This has to be the answer for these TVs. It’s good evidence that this kind of color correction was common during the 80s and 90s, even from brands that were supposed to be good.
Here’s an unfinished, updated version of that NTSC shader that I’d posted in June: https://www.mediafire.com/file/pg7vevzuq4iia5i/p68k-fast-multipass-2025-09-22.zip/file Part of this contains an attempt at re-implementing the 1985 Blackstripe based on the points that they picked, but when I graphed it later, it looked too different from the original (and slightly worse, too). It’s decent for now if you want to include it in your shader, but it’s not quite finished, as I have to figure out how exactly they picked their perfect points and selectively darkened them.
There’s also a messy attempt at adaptive comb filtering in there, if you are interested. If you set the chroma demodulation to use baseband filtering, and you set the adaptive comb filter type to “4”, the result looks pretty good for PS1 games.
That said, this is definitely not what every single CRT brand did. I also happen to own a working RCA ColorTrak Remote E13169GM-F02 from 1989, which also does not take chromatic adaptation into account, but has a whitepoint closer to x=0.30, y=0.31. My Panasonic CT-36D30B from 2000 has a switchable whitepoint between roughly 7000K, 9500K, and 14000K, and it looks like it’s meant to do a correction based on D65, no matter which whitepoint you pick. I have not gotten service manuals or chip datasheets for either of these two CRTs yet, so I only have rough approximations for their demodulation offsets/gains. As for why these CRTs are different, it might be because of the ColorTrak Remote being a 13-inch portable TV that’s the size of a computer monitor (giving a different viewing environment), and because of the Panasonic one having a YPbPr component input. The point is, not all CRTs did this bizarre 9300K correction, but it must have been a thing for a long time.
Just like that, I’ve covered every single “full set” that I have looked at. That’s four total.
As for when this 9300K correction started disappearing, I have a decent guess. I once had a Toshiba CE20D10 from 1995, which had a TA8867AN chip. I don’t have phosphors for that CRT, but based on the TA8867AN’s demodulation axes/gains, it must have been a similar idea, using a higher color temperature than standard and hacking the red/green area back into place. There also exists a very similar chip called the TA8867BN (notice that it’s a one letter difference), whose behavior looks more like the D65 correction in my Panasonic CRT from 2000. I would assume then that 1995 was around the time that this 9300K correction was just starting to disappear. It may have to do with HDTV beginning around that time, in 1998, along with the adoption of YPbPr Component.
There’s also one aspect of that Sony KV-27S22 that isn’t understood: Sony TVs had a feature called “Dynamic Color” which could be toggled on or off, and we don’t know what this feature is doing. However, I’m personally betting that Sony’s Dynamic Color is just a switch to another whitepoint with another nonstandard set of demodulation offsets/gains; in other words, my guess is that it’s the same thing all the other brands were doing, with no special circuit at all. The reason is that the datasheet for the Sony CXA1465AS chip describes the Dynamic Color circuit as detecting whites and flesh tones and adjusting white to be bluer without affecting the flesh tones; in other words, exactly what the normal demodulation offsets/gains already do. The only way to know for sure what happens when you toggle Dynamic Color on or off is to buy the chip on eBay and set it up on a breadboard for testing.
Brands other than Sony just stuck to nonstandard B-Y/R-Y demodulation, as far as I’m aware. Even Sony PVMs did this.
That’s a rabbit hole you entered lol. Creating a generic NTSC is somewhat easy, but accurate is tricky. I am probably around 2 years on and off on this.
I was looking for a new fast shader and tested this one; it seems to work very well!
Is it possible to add an adaptive Cutoff (RGB/Y bandwidth) based on the horizontal res?
since the horizontal res changes between video games (aka 2-phase and 3-phase),
and the horizontal res can change even within the same game (many PS1 games do this, for example)
maybe the same for ICutoff and QCutoff
Also, it would be nice if there were a coloring (rainbow effect) option in 2-phase.
I was looking for a new fast shader and tested this one; it seems to work very well!
Thank you!
Is it possible to add an adaptive Cutoff (RGB/Y bandwidth) based on the horizontal res?
Right now my goal is actually the opposite: a consistent cutoff regardless of the resolution. The actual bandwidth constraints of the analog output of the PS1, for example, didn’t change when the output resolution changed.
Also, it would be nice if there were a coloring (rainbow effect) option in 2-phase.
The current version doesn’t have true NTSC decoding implemented, but that is what I’m working on. That will hopefully include the different phase behaviors of each system.
No problem if it leads to a similar result, because I noticed that to merge the dither I need a value of about {Cutoff = “1.700000”}, but this value is too strong in the case of 3-phase or 480!
I plan to add system-specific behaviour to the NTSC implementation. That might be more like what you are looking for.
I think I’ve managed to simulate the luma trap and chroma filters from the NTSC Genesis 1. I may have someone I know check my work just to be sure. The chroma filter in particular looks kind of… bad? But maybe that’s actually how the Genesis was!
I can generate FIR filters to use in the shader from these frequency response curves. I may give that a try this weekend.
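One simple way to do that is frequency sampling (a generic sketch, not necessarily what I’ll end up using): sample the target magnitude response at evenly spaced frequencies and inverse-transform into a symmetric, linear-phase FIR. The design interpolates the targets exactly at those frequencies.

```python
import math

def fir_from_response(desired, num_taps):
    # Frequency-sampling design: `desired` holds the target magnitude
    # at num_taps//2 + 1 evenly spaced frequencies from DC to Nyquist.
    # num_taps must be odd; the result is symmetric (linear phase).
    m = num_taps // 2
    assert num_taps % 2 == 1 and len(desired) == m + 1
    taps = []
    for n in range(num_taps):
        x = n - m  # offset from the center tap
        acc = desired[0] + 2.0 * sum(
            desired[k] * math.cos(2.0 * math.pi * k * x / num_taps)
            for k in range(1, m + 1))
        taps.append(acc / num_taps)
    return taps

def magnitude_at(taps, k, n):
    # |H| at normalized frequency k/n (a fraction of the sample rate).
    w = 2.0 * math.pi * k / n
    re = sum(t * math.cos(w * i) for i, t in enumerate(taps))
    im = sum(t * math.sin(w * i) for i, t in enumerate(taps))
    return math.hypot(re, im)

# Example target: flat to a point, one transition sample, then zero.
N = 31
TARGET = [1.0] * 6 + [0.5] + [0.0] * 9  # 16 points: DC..Nyquist
taps31 = fir_from_response(TARGET, N)
```

Between the sample points the response can ripple, so in practice the measured curve would be sampled finely enough (or smoothed with transition samples like the 0.5 above) to keep that under control.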
Whatever you’re doing, don’t forget the TurboGrafx-16, PC Engine, TurboDuo, and SuperGrafx, which use more resolutions than any of those other systems (even in-game) and which benefit greatly from proper NTSC effects like checkerboard dither blending to produce even more colours.