Maybe slightly offtopic, I hope it’s not. @guest.r, is advanced not compatible with color mangler? No matter how I try to add it to the chain, I always get an error, fails to apply. What am I doing wrong?
It works well with the fast and fastest versions though. On a side note, what’s the difference between HD, regular and fast/fastest? Where are the trade-offs?
Both masks should be fine, but the new mask option is denser, finer. So I would say you should choose whichever fits your setup better. One thing to note is that mask 12 is more "neutral" when switching between RGB and BGR layouts.
Both shaders declare a parameter named "sat" for saturation, which causes a mismatch, and the shader chain can't compile. In guest-advanced it's linked to afterglow, which isn't included in the fast or fastest versions, so color mangler is compatible with those.
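A minimal sketch of that kind of clash, assuming both passes declare the parameter roughly like this (the names, labels, and ranges below are illustrative, not the actual shader sources):

```glsl
// guest-advanced (afterglow section) declares something like:
#pragma parameter sat "Saturation" 1.0 0.0 2.0 0.05

// color mangler declares its own "sat" with different metadata:
#pragma parameter sat "CM Saturation" 1.0 0.0 3.0 0.01
// -> same parameter name, mismatched definitions,
//    so the combined preset fails to load
```

Renaming the parameter (and every place it's referenced) in one of the two shader files, e.g. `sat` to `cm_sat`, is the usual way to resolve such a collision.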
The difference is in features. The regular version has afterglow, TATE mode, screen raster bloom, magic glow, smart horizontal filtering, and extra glow passes, all of which the fast versions lack. So to say, the expensive effects.
The HD version lacks TATE and raster bloom, but has better horizontal filtering plus a bonus vertical filter (see the Internal Resolution parameter), which is very nice for games that start with a higher input resolution right from the beginning.
Internal Resolution definitely seems to help me capture the look of composite Dreamcast's half horizontal resolution. The look of the health bar in MvC2, the softening of the sprites, the look of Crazy Taxi's bottom bar on its title menu… and the AA-ish look of Ecco's/Sonic's models in their games.
I don’t know if people will like it compared to the more aliased and clean VGA look of the console, but I like that it’s doable because composite Dreamcast looks so ahead of its time with the softer AA look!
I don’t understand what TATE or Raster Bloom are in my ignorance of missing the first ?? years of CRT shader development.
Most arcade machines had their display turned 90 degrees, so the scanline flow was vertical. You can enforce this effect with TATE mode. It's an option for vertical arcade games, mostly with MAME, since FBNeo already does the rotation on its own with default settings.
Raster Bloom is an effect where the entire screen enlarges in bright scenes and shrinks when the image is darker. It mimics the behaviour of some CRT TVs with worn voltage regulators. It's authentic; I could observe it first-hand on my device.
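As an illustration of the idea (not guest.r's actual implementation), a raster-bloom-style scale factor can be driven by the average luminance of the frame; the `strength` value and the 0.5 pivot below are made-up numbers:

```python
def raster_bloom_scale(avg_luma, strength=0.05):
    """Screen scale factor: >1 on bright frames (image grows),
    <1 on dark frames (image shrinks), mimicking a CRT with a
    sagging voltage regulator. avg_luma is in [0, 1]."""
    return 1.0 + strength * (avg_luma - 0.5)

bright = raster_bloom_scale(0.9)  # bright scene -> slightly enlarged
dark = raster_bloom_scale(0.1)    # dark scene -> slightly shrunk
```

A real shader would smooth `avg_luma` over time so the raster doesn't jump between scene cuts.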
Greetings @guest.r just asking out of curiosity, is it possible to produce a Dot Mask Subpixel Layout CRT Shader Mask compatible with QD-OLED’s triangular subpixel layout?
Secondly, my presets using your NTSC section plus Sony Megatron Colour Video Monitor and the rest of my shader stack always end up looking the best to me, even after I experiment with other shaders. I must also add that I was able to essentially match what I did with the Megatron/Guest-NTSC combo almost verbatim using CRT-Guest-Advanced-NTSC plus some other shaders, so that says a lot about the potential of CRT-Guest-Advanced-NTSC.
However, I was recently trying out @DariusG's CRT-Consumer-1w-NTSC-XL, in which he has modeled different NTSC clocks and settings on a per-console basis. I can't currently verify the accuracy of his implementation, but it does allow full blending of PC-Engine/TurboGrafx-16/Turbo Duo dithering with NTSC colour and fringing artifacts, without introducing diagonal artifacts in the dedithered areas, via decreasing the strength of the comb filter.
His implementation simulates all NTSC artifacts all the time except when in S-Video mode and one must decrease the Comb Filter Strength to increase the strength of the artifacts. He has also implemented 2 PCE clock modes PCE256 and PCE320.
Any chance we're going to see further evolution of CRT-Guest-Advanced's NTSC module/implementation, now that we seem to be in a period of renaissance where that is concerned once again?
P.S. I tried out mixed mode to try to improve the "m" in the Turbo Duo BIOS screen, but I ended up going around in circles, as Mixed Phase is inherently blurrier than 3 Phase, and I think I have achieved the sweet spot for maximal blending of PCE dithering, producing new colours, and sharpness.
However, after trying out CRT-Consumer-1W-NTSC-XL and then going back to my regular stack, the "m" didn't look so bad after all. It's legible, and it might be that I'm aiming for a sort of idealised look with perfect colour blending plus sharpness, at the expense of sharpness in certain niche cases in order to facilitate this perfect blending.
I would really need someone with the mentioned subpixel display to test a couple of things, but my intuition says that first tries should just use the RGB layout. Physical subpixel layouts can be hard to "emulate", even harder by just following intuition. It should do, but close-up photos will suffer a bit from, ahem, the target display's hard-wired properties.
You are somewhat right about some features of the NTSC shaders. Over the recent period of enhancements I didn't focus too much on the good ole' backbone. I might incorporate some ideas from my recent PAL shaders, and also add an option or two. You know, this restraint also allowed very nice backwards preset compatibility, and even small changes could require preset re-tweaking afterwards. But I guess it's plausible to take some steps.
Thanks for considering backwards preset compatibility in all of your endeavours but sometimes things just need to move forward for the greater good I guess.
While you’re in the lab, you might want to consider what these folks seem to think about a good adjustable notch filter vs comb filters.
I think @NESGuy recently said something similar, or came to a similar conclusion, that for older systems notch filters seemed to be better than comb filters. Kinda similar to how dot mask TVs seem to work just as well, if not subjectively better, than aperture grille and slot masks for older consoles.
I don’t think there’s anything wrong with trying to push CRT-Guest-Advanced beyond the subjective and objective limits of what was visually possible on even the best CRTs you know. With Shaders we do have that flexibility to be accurate if we want to but also to be better than accurate while not spoiling the presentation.
That’s what I try to aim for in my presets and is one of the reasons I enjoy doing this so much. Sometimes I wish I could stop though (and just play the games) but seeing these games come to life in ever improving forms is just as pleasant as listening to music that I love.
Yes, it's authentic, but it seems something is missing: on a real CRT the enlarging and shrinking is not uniform (at least horizontally, and in very bad cases).
I harnessed the potential of Gemini to go through the relevant internet sources about PCE and phase shifts. I'm just posting here what it found out:
The horizontal resolution (256px vs. 320px) does not change the 180° line-to-line phase shift.
In both modes, the PC Engine's video encoder (the HuC6260 chip) still outputs a standard, NTSC-compliant composite signal. That 180° shift per line is a fundamental part of the NTSC standard, and the PCE adheres to it.
So, what does change?
You are correct that the PCE has different dot clocks (pixel clocks) to create these different horizontal resolutions:
256px mode (e.g., Bonk's Adventure): Uses a dot clock of 5.37 MHz (the same as the NES/SNES).
320px mode (e.g., Street Fighter II'): Uses a higher dot clock of 7.16 MHz.
While this changes how many pixels are "drawn" during the active part of the scanline, it doesn't change the timing or phase of the NTSC color subcarrier (which is always ≈3.58 MHz) or the 180° shift that occurs at the start of each new line (the H-sync).
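The numbers line up as simple multiples of the subcarrier; a quick sanity check:

```python
# NTSC color subcarrier, defined as 315/88 MHz (~3.579545 MHz)
FSC = 315e6 / 88

clk_256 = 1.5 * FSC  # PCE 256px dot clock: ~5.37 MHz (same rate as NES/SNES)
clk_320 = 2.0 * FSC  # PCE 320px dot clock: ~7.16 MHz

print(round(clk_256 / 1e6, 2), round(clk_320 / 1e6, 2))  # 5.37 7.16
```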
The Real PC Engine Phase-Shift Trick
Interestingly, while the horizontal resolution doesn’t affect the phase shift, the PC Engine has a different, more advanced trick up its sleeve for managing composite artifacts.
The video chip has a register bit that can change the total number of scanlines in a frame from 262 lines (even) to 263 lines (odd).
This is the real “artifact correction” mechanism:
Standard 180° Shift (per line): Like any NTSC console, the phase is inverted 180° on every new line.
Even Frame (262 lines): The console draws 262 lines. Let's say the frame ends with the phase on "normal."
Odd Frame (263 lines): The console draws 263 lines. Because it draws one extra line, the 180° inversion pattern is flipped. The frame ends with the phase on "inverted."
Result: The phase of the color subcarrier is now inverted relative to the entire previous frame.
By flipping the phase of the entire picture every other frame, any static composite artifacts (like dot crawl or rainbowing) are also inverted. To the human eye, these opposing artifacts blend together and “average out,” resulting in a cleaner, more stable image with less shimmering.
So, you’re right to be curious about its video tricks, but the magic wasn’t in the horizontal resolution—it was in this clever 262/263-line switching, a feature far more advanced than its 8-bit and 16-bit rivals.
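The mechanism above can be sketched in a few lines: with a 180° inversion on every scanline, an odd frame length (263) flips the starting phase of the next frame, while an even length (262) leaves it unchanged. A toy model, ignoring the fractional subcarrier cycles per line:

```python
def start_phase(frame, lines_per_frame):
    """Subcarrier phase (0 or 180 degrees) at the first line of `frame`,
    given a 180-degree inversion on every scanline."""
    return (frame * lines_per_frame) % 2 * 180

# 262 lines/frame (even): every frame starts at the same phase,
# so composite artifacts sit still on screen.
static = [start_phase(f, 262) for f in range(4)]       # [0, 0, 0, 0]

# 263 lines/frame (odd): the phase alternates each frame,
# so artifacts invert every other frame and average out to the eye.
alternating = [start_phase(f, 263) for f in range(4)]  # [0, 180, 0, 180]
```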
So I'm not rushing; it's just one evaluation of the situation, but it's still an evaluation.
Isn't this what all NTSC consoles did on interlace? I mean, since the total NTSC line count is 525, one field has to be 262 and the other (next frame) 263; one field renders that one extra line. Since the console picks one field to render on (only one field is used in double-strike mode), it has to be either 262 or 263, AFAIK.
Maybe the (PCE-mode-agnostic) encoder used on the PCE has that register for interlaced mode?
------- line 1 drawn on frame 0
------- line 2 drawn on frame 1
------- line 3 drawn on frame 0
------- line 4 drawn on frame 1
....
------- line 525 drawn on frame 0
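The field split sketched in the diagram above, as a quick check:

```python
# NTSC's 525 total lines split across two interlaced fields:
lines = list(range(1, 526))
field_a = lines[0::2]  # lines 1, 3, 5, ..., 525 -> 263 lines
field_b = lines[1::2]  # lines 2, 4, 6, ..., 524 -> 262 lines

print(len(field_a), len(field_b))  # 263 262
```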
Blending on composite occurs because of the chroma bandwidth; check these revealing screenshots (that's an NTSC shader mockup I did, I do these every now and then to practice lol).
Check how colors blend where only chroma is at play, but wherever luma detail exists they won't, because of the higher luma bandwidth; e.g. check the forehead here. That's why composite blends but still manages to be sharp.
Of course the shader takes into account the different bandwidths and applies them, like 7 passes of 0.5*SourceSize.z for chroma. If we do the calculation, that's 3.5 times less than the original horizontal size, e.g. 256.0/3.5 is ~73 color samples for IQ (chroma), near to what it is supposed to be.
1.2 MHz / 4.2 MHz is ~1/3, a bit more, so 3.5.
Luma will do something like 2 or 3 passes of 0.5*SourceSize.z to simulate that small detail loss, or do the full 7 passes and adjust the gaussian weight (or whatever is used) to be sharper.
If we extend our math: chroma lives around 3.579 MHz with a range of 1.2 MHz symmetrically, meaning it occupies roughly the 3.0 to 4.2 MHz range. So if we cut those frequencies to erase chroma from luma, we get 3.0/4.2, around 1/1.4, and there we get our 3 passes of 0.5 for luma.
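The bandwidth arithmetic above, spelled out (the 7-pass count follows the poster's own figure; actual filter design in the shader may differ):

```python
LUMA_BW = 4.2e6    # NTSC luma bandwidth
CHROMA_BW = 1.2e6  # approximate I/Q chroma bandwidth

ratio = LUMA_BW / CHROMA_BW   # 3.5: chroma carries ~3.5x less horizontal detail
chroma_samples = 256 / ratio  # ~73 effective chroma samples on a 256px line

# Erasing the chroma band (3.0-4.2 MHz) from luma only costs the top slice,
# so luma needs proportionally fewer blur passes than chroma's 7:
luma_ratio = LUMA_BW / 3.0e6                 # 1.4
luma_passes = round(7 * luma_ratio / ratio)  # ~3 of the 7 passes

print(round(ratio, 2), round(chroma_samples, 1), luma_passes)
```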
Most systems didn’t let the developers choose, as far as I know. The Mega Drive and SNES always had 262 lines per frame (unless interlacing, then they had 262.5; the half line is what triggers interlacing). What’s interesting about the PCE is that the game devs can choose. They might even change in the middle of the game, as far as I can tell.
263 lines per frame will result in fewer visible artifacts, and maybe that’s what most used. That might be a good default.
I guess this “default” makes the integration of PCE much smoother. It’s a big deal regarding gameplay, but code changes are more cosmetic. Still need to test some things though.
That should be covered by the 7 MHz clock already (same with the Amiga, which can do 352 on the same clock). About that 262/263 register on the PCE chip: I did some research, and it's a switch to force interlace mode, but no PC Engine game ever used it, AFAIK.