That’s a rabbit hole you’ve entered lol. Creating a generic NTSC shader is somewhat easy, but an accurate one is tricky. I’m probably around two years into this, on and off.
I was looking for a new fast shader and tested this one, and it seems to work very well!
Is it possible to add an adaptive Cutoff (RGB/Y bandwidth) based on horizontal resolution? The horizontal resolution changes between video games (i.e. 2-phase vs. 3-phase), and it can even change within the same game (many PS1 games do this, for example).
Maybe the same for ICutoff and QCutoff.
It would also be nice to have a coloring option (for the rainbow effect) in 2-phase.
> I was looking for a new fast shader and tested this one, and it seems to work very well!
Thank you!
> Is it possible to add an adaptive Cutoff (RGB/Y bandwidth) based on horizontal resolution?
Right now my goal is actually the opposite: a consistent cutoff regardless of the resolution. The actual bandwidth constraints of the analog output of the PS1, for example, didn’t change when the output resolution changed.
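Roughly the idea, as a sketch (not the shader’s actual code; the names and constants here are illustrative):

```glsl
// Sketch: hold the analog bandwidth fixed and derive the per-pixel cutoff
// from the horizontal sample rate of the current frame. Names and values
// are illustrative.
const float ACTIVE_LINE_US = 52.6;     // NTSC active line time, approx. µs
const float LUMA_BANDWIDTH_MHZ = 4.2;  // fixed analog luma bandwidth

// Normalized cutoff (cycles per pixel) for a frame `width` pixels wide.
float cutoff_per_pixel(float width)
{
    float sample_rate_mhz = width / ACTIVE_LINE_US; // pixels per microsecond
    return LUMA_BANDWIDTH_MHZ / sample_rate_mhz;
}
```

At 256 pixels per line that works out to a much higher per-pixel cutoff than at 640, which is exactly why a single fixed Cutoff value ends up too strong at some resolutions and too weak at others.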
> It would also be nice to have a coloring option (for the rainbow effect) in 2-phase.
The current version doesn’t have true NTSC decoding implemented, but that is what I’m working on. That will hopefully include the different phase behaviors of each system.
No problem if it leads to a similar result, because I noticed that to merge the dither I need a value of about {Cutoff = “1.700000”}, but this value is too strong in the case of 3-phase or 480!
I plan to add system-specific behavior to the NTSC implementation. That might be more like what you’re looking for.
I think I’ve managed to simulate the luma trap and chroma filters from the NTSC Genesis 1. I may have someone I know check my work just to be sure. The chroma filter in particular looks kind of… bad? But maybe that’s actually how the Genesis was!
I can generate FIR filters to use in the shader from these frequency response curves. I may give that a try this weekend.
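For the curious, the generation itself is basically a windowed sinc. A rough sketch (illustrative only; the real taps would be precomputed from the measured curves rather than from an ideal lowpass):

```glsl
// Sketch: Hann-windowed sinc taps for a lowpass with normalized cutoff
// `cutoff` in cycles per sample (0..0.5). NTAPS assumed odd. The taps
// should afterwards be normalized so they sum to 1.
const int NTAPS = 15;

float sinc(float x)
{
    return x == 0.0 ? 1.0 : sin(3.14159265 * x) / (3.14159265 * x);
}

float tap_weight(int i, float cutoff)
{
    float n = float(i - NTAPS / 2); // index centered on the middle tap
    float hann = 0.5 + 0.5 * cos(3.14159265 * n / float(NTAPS / 2));
    return 2.0 * cutoff * sinc(2.0 * cutoff * n) * hann;
}
```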

Whatever you’re doing, don’t forget the TurboGrafx16, PC-Engine, TurboDuo, and SuperGrafx, which use more resolutions than any of those other systems (even in-game) and which benefit greatly from proper NTSC effects like checkerboard-dither blending to produce even more colours.
TurboGrafx16/PCEngine is difficult because games can set whether a frame is 262 or 263 lines. We can’t tell just from the core output, though. This affects how the chroma phase shifts between frames. I don’t know what to do about that.
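For reference, the arithmetic behind why the line count matters, assuming standard NTSC timing of 227.5 subcarrier cycles per line (systems like the Mega Drive differ):

```glsl
// 262 lines * 227.5 cycles/line = 59605.0 cycles: an integer, so the
// chroma phase repeats exactly from frame to frame.
// 263 lines * 227.5 cycles/line = 59832.5 cycles: half a cycle left over,
// so the phase flips 180 degrees every frame.
float next_frame_phase(float phase, float lines_per_frame)
{
    return fract(phase + lines_per_frame * 227.5); // phase in cycles, 0..1
}
```

Without knowing which line count the game picked, we can’t know whether to flip the phase between frames.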
If you still have interlacing woes, you can look at the XGA2 preset in Scanline Classic: https://github.com/anikom15/scanline-classic/blob/master/xga2.slangp
The relevant code is in the pixel function here: https://github.com/anikom15/scanline-classic/blob/master/src/scanline-advanced.slang
I used 86Box and Windows 95 to debug the interlacer because I wanted something that could line-double, handle interlacing, and handle progressive scan all within the same shader. The SVGA cards we had in the 90s would line-double the 640x480 mode. We usually set our desktops to 800x600 or 1024x768, so the shader was made to handle line doubling at a low resolution and switch to single lines when it detects a high enough resolution. 86Box only supports GLSL, so I actually used the GLSL shader, but I backported everything to the slang shader.
The HDTV preset is another one you can look at as it line doubles everything below 540 (actual CRT HDTVs did this). Not many CRT HDTVs handled 720p, but apparently at least one existed.
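The switching logic itself is simple. A sketch of the idea (the threshold and names here are illustrative, not the preset’s exact values):

```glsl
// Sketch: line-double low resolutions, draw single lines above a threshold.
// 540 matches the HDTV behaviour described above.
const float DOUBLING_THRESHOLD = 540.0;

float output_lines(float source_lines)
{
    return source_lines < DOUBLING_THRESHOLD ? source_lines * 2.0
                                             : source_lines;
}
```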
There are several issues with interlacing that I think can only really be fixed with more metadata from the cores. Currently most cores output both fields (e.g. 480 vertical pixels) to indicate that the content is interlaced. We have no way of knowing which field is the current field and which is the previous. As a consequence:
- We might display the wrong one and basically add a frame of lag.
- We might get the NTSC phase offsets wrong. I’ve been struggling with how to do this. Even in standard NTSC the phase cycle is four fields long when interlacing. Some systems might be even more complicated. We kind of need to know where we’re at to make it work. We can make a guess and maybe it will be close enough, though. Unless this is what you’ve figured out?
crt-beans currently supports interlacing in one of three ways:
- Showing one field at a time (properly offset and with the proper scanline sizes). There is a toggle for the “phase,” i.e. which field is current depending on whether the frame count is odd or even (see the sketch after this list). This can cause issues on some LCD panels due to charge accumulation. You can basically get a weird flickering that persists even after the interlacing is gone.
- Rendering both fields and blending them together. This works great for systems with 480p output to give them that 480i feel. If the two fields are different, you’ll get combing artifacts, though.
- VGA line doubling (default on the VGA preset) that will line double the lower resolution VGA modes. I don’t really deal with anything above VGA resolutions.
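The field selection behind the first option looks roughly like this (a sketch with hypothetical names; in the real shader the toggle is a user parameter):

```glsl
// Sketch: guess the current field from the frame counter, with a user
// toggle to flip the guess since the true field order is unknown.
uint frame_count;   // frontend-provided frame counter
float field_phase;  // user parameter: 0.0 or 1.0

bool current_field_is_odd()
{
    return ((frame_count + uint(field_phase)) & 1u) == 1u;
}
```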
Finally found someone who cares about CRT interlacing!
True, I noted this before.
Unfortunately, interlacing seems so underrated; RF is underrated too.
The main thing I was concerned about with the Multiscan support in Scanline Classic was automatically enabling the line doubler at a certain point and adjusting the spacing between active lines based on resolution (the scanlines disappear at a high enough resolution). This support is relevant for computers. If you don’t care about computers or a rare type of HD CRT, it’s not relevant.
I think the field order is deterministic based on core for many systems and the behavior is documented. A simple macro system that can make defines based on the emulated system would be good enough, e.g.
```glsl
#ifdef __SYSTEM_SNES
#define EVEN_FIELD_FIRST 0
#endif
```
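And a sketch of how a shader might consume such a define (again hypothetical; no frontend provides this today):

```glsl
// Fall back to a guess when the emulated system is unknown.
#ifndef EVEN_FIELD_FIRST
#define EVEN_FIELD_FIRST 1
#endif

// Pick the current field from the frame counter using the known order.
bool is_even_field(uint frame_count)
{
    return ((frame_count & 1u) == 0u) == (EVEN_FIELD_FIRST == 1);
}
```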
@beans Lots of interest indeed! I just updated my shaders and found yours. Trying it out at the moment, and I like what I’m seeing a lot; it’s next-gen GTU. Excellent work!
This is a cool shader, I like the automatic brightness mitigation idea, but it’s too bright on my HDR1000 display. Some additional mask controls (to make the mask darker) would go a long way towards making this shader compatible with a wider range of displays. Maybe an “override brightness mitigation with mask strength” option?
I want to add a mask strength parameter along with the NTSC simulation. I’m out of town and won’t be able to finish this up until next week at the earliest.
@beans Your Mega Drive filters seem reasonable, assuming low output impedances and high input impedances (as it appears from the data sheet I have, though no specific values are given). The luma filter should start to roll off at around 8 MHz. That roll off does not become strong until about 17-18 MHz. This is because the notch filter is cascaded with a lowpass (it’s hard to determine where the cutoff is exactly because the capacitance (C34) is illegible, but it’s in video range). You can filter the two stages separately as the Y input has a high impedance, so the lowpass part does not significantly load the notch part. The existence of the lowpass is baffling to me and I don’t know why it’s there. Perhaps it was added to reduce some noise discovered late in design.
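For anyone who wants to play with the curves: treating the two stages as a generic second-order notch cascaded with a first-order RC lowpass reproduces the shape I’m describing. A sketch (the Q and cutoff values here are illustrative, not derived from the schematic):

```glsl
// |H| of a second-order notch: (s^2 + w0^2) / (s^2 + (w0/Q)s + w0^2).
float notch_mag(float f, float f0, float q)
{
    float d = f0 * f0 - f * f;
    return abs(d) / sqrt(d * d + (f * f0 / q) * (f * f0 / q));
}

// |H| of a first-order RC lowpass.
float lowpass_mag(float f, float fc)
{
    float r = f / fc;
    return inversesqrt(1.0 + r * r);
}

// Combined luma response, frequencies in MHz (values illustrative).
float luma_filter_mag(float f_mhz)
{
    return notch_mag(f_mhz, 3.58, 1.5) * lowpass_mag(f_mhz, 1.8);
}
```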
The CXA-1145 chip is supposed to have a 180 ns ‘delay line’ where these filters are installed. Now a notch filter certainly isn’t a delay line, but I was not able to find any info about what part would be used for a ‘delay line’ in this case and how that would affect the signal. I am guessing it would provide a better composite image than the notch filter by slightly misaligning the luma and chroma to reduce overlapping frequency peaks (changes in luma generally correspond to changes in chroma), but this is totally speculation.
The CXA-1145 has a minimum RGB output frequency response of 5.0 MHz. The data sheet shows a much higher range in a lab setup. The Y and C frequency responses are the same.
The chroma filter is a bandpass centered a little off subcarrier frequency. The design overlaps the two filters so that the result ends up broader than I expected. It’s highly dependent on the input and output impedance, but no less than 2.8 MHz or so. That equates to a baseband chroma bandwidth of 1.4 MHz, close to the 1.3 MHz SMPTE encoding standard.
Conclusions: Between the SNES and Genesis we see that chroma is filtered on both, but the SNES allows a wider bandwidth. Only the Genesis employs a notch filter, and it does so in place of a delay line, putting its use of its video chip out of spec. The Genesis also imposes an arbitrary lowpass filter on its luma; combined with the notch, this results in a very poor luma frequency response. The SNES does not filter its luma at all, in either composite or S-Video. Neither system filters its RGB.
Edit: I realized I posted this in the wrong thread, but I guess you can figure out what I’m talking about anyway.
The Mega Drive’s lack of delay line makes sense actually. It’s exactly why the dot pattern doesn’t shift over lines and we get the strong rainbow effects.
I think this is better anyway. I don’t want to take over @PlainOldPants’ thread.
> The Mega Drive’s lack of delay line makes sense actually. It’s exactly why the dot pattern doesn’t shift over lines and we get the strong rainbow effects.
I think that is actually because the Mega Drive’s line length is 228 chroma cycles long, instead of the standard 227.5. With 227.5 cycles per line, the subcarrier phase flips 180° on each successive line, so the dot pattern alternates from line to line; with an integer 228 cycles, the phase is the same on every line, so the dots stay put and the rainbow artifacts reinforce instead of cancelling.
I had assumed that the delay line was supposed to compensate for delay from the chroma filter, assuming that luma wouldn’t be filtered and would therefore be ahead of the chroma. As it is, I don’t know if the filters would have the same delay, so the chroma and luma might be slightly misaligned.
> The CXA-1145 has a minimum RGB output frequency response of 5.0 MHz. The data sheet shows a much higher range in a lab setup. The Y and C frequency responses are the same.
I suspect that the RGB signal is, in effect, filtered by the response of the DAC and buffers. The SNES, in particular the 2-chip varieties, has a notoriously soft RGB signal. The SNES behavior is complex, though, and would be hard to simulate.
> The luma filter should start to roll off at around 8 MHz. That roll off does not become strong until about 17-18 MHz.
Are these numbers correct? That seems pretty high. TVs should be filtering out anything in this range at the inputs anyway.
> it’s hard to determine where the cutoff is exactly because the capacitance (C34) is illegible, but it’s in video range
I found a forum post with the values. I think I linked it in the other thread, but I can pull it up later.
Thank you for your analysis! I’m out of town and away from my computer for now, but I want to resume looking at this later and I think I’ll have some questions. I think we can get pretty close to accurate composite simulations for at least a few consoles.
Well, this is the sort of thing where it’s not easy to understand why it’s happening. I wouldn’t expect transistor frequency-response degradation to have the nonlinear effect we see in the 2-chip version. And this is just one example; the video quality can be very different even between units of the same generation, so we should be cautious about trying to simulate the exact response of something based on a single set of measurements. Here is a further discussion, whose conclusion is that the issue may be due to poor component tolerances: https://www.retrorgb.com/snesversioncompare.html
They are correct. I am only looking at this filter, not the system frequency response. But keep in mind that at 8 MHz, the luma is already attenuated. That is, while the notch part is sloping back up until around 8 MHz, the overall response is attenuating starting at about 1.8 MHz. I am assuming a very low output impedance and high input impedance. As the impedances close in, the filter’s slopes soften.
There’s also the question of dynamic range. I don’t know what dynamic range analog TVs have, probably less than 8 effective bits. Standards treat -30 dB as enough attenuation to be considered blank. What we can conclude here is that the rolloff from the lowpass part outweighs the effect of the notch. Ignoring the notch part, and just filtering based off of the lowpass response should already give you a result very close to the actual console, especially if you use a notch in the TV simulation side.
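In code terms, the -30 dB rule is just a threshold on the linear magnitude (a trivial sketch):

```glsl
// -30 dB corresponds to a linear magnitude of 10^(-30/20) ≈ 0.0316.
bool effectively_blank(float magnitude)
{
    float db = 20.0 * log(magnitude) / log(10.0); // log10 via natural log
    return db <= -30.0;
}
```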
I did find schematics for other variations, and they show a 110 pF cap that is consistently used across versions. The frequency response graph you see uses that 110 pF value.
Sure. When I started looking into this stuff years ago I found that it’s better to let good be the enemy of perfect. There are so many variables involved that it’s better to either do one thing very detailed or do many things in a more abstract, practical manner.
Well, old CRTs indeed have longer phosphor decay times, but I don’t think we can take the “Slow Mo Guys” clip as a reference, because it’s been edited in terms of lighting, and video compression (in editing and on YouTube) kills detail.
Anyway, I’ve asked for some shots so we can know more.
But for now we have https://www.youtube.com/watch?v=FOs8LfPifoA, and there does seem to be some phosphor persistence from the previous field.