PlainOldPants's Shader Presets

This is an incomplete, ongoing hobby project of mine to make these consoles’ video signals and colors look more accurate in RetroArch. Its main unique features are its customizable video signal and its emulation of the color corrections found in real jungle chips’ datasheets. The US presets aim to emulate the colors of the Sony KV-20M10 and KV-20S11, without their Dynamic Color feature.

Previous version (2024-10-16)

Latest version (2024-12-16) (You might like the previous version better.)

My latest releases have been using my own new NTSC shader called Patchy NTSC, which combines the first three of these four features. (FYI, my username is going to be changed to “Patchy” in the near future, so please try not to get into the habit of calling my shader Patchy.)

  1. NTSC video signal artifacts and RF noise. Genesis/MegaDrive, SNES/SFC, and NES/FC are supported, for RF, Composite, S-Video, and Component video signals. The shader carefully encodes the signal, optionally adds RF noise, and carefully decodes the signal to get close to the same artifacts as original hardware, albeit at a serious performance cost, especially if using RF. Advanced users are encouraged to use the shader’s settings to try to replicate more consoles’ signals. Settings to adjust sharpness, noise severity, and the noise RNG seed are included as well. NES signal emulation requires you to set Mesen’s color palette to “Raw”, which looks like just red and green. SNES and Genesis presets automatically detect when the game changes resolution. To get the intended artifacts, make sure you set your emulator to crop equal amounts of overscan on both sides. PS1 support might be coming soon.
  2. Color alteration caused by the jungle chip. If you’ve seen the CXA2025AS NES palette, that’s one of the jungle chips you can choose (number 7 in the settings), except you can use it on Genesis now, and you can adjust the settings for contrast, brightness, color, and tint.
  3. Gamma and phosphor gamut. Gamma is handled using the BT.1886 EOTF function. The phosphor gamut is done using Chthon’s precomputed lookup tables with (what is claimed to be) state-of-the-art gamut compression, resulting in good phosphor gamut emulation without much performance cost. Afterglow is simulated using code from crt-guest-advanced, even if you choose a different CRT shader like Royale or Hyllian.
  4. The CRT display itself. A few different “sanitized” CRT shader presets are included, which have settings that disable color and signal alteration to the best extent I can. The purpose of that is to give my shaders full control of the NTSC signal and color emulation, without the CRT display emulation interfering with that. That said, those CRT shaders are still being worked on separately by their own authors, and if you update them, you might have to update my sanitized settings for them. To use this, you can pick the “SignalOnly” preset that corresponds to your emulator and append a sanitized version of your favorite CRT shader to it. If you customize one of these sanitized presets, be careful not to change anything that could alter the output colors or signal, such as Gamma, LUTs, or some of Hyllian’s features.
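As an aside, the BT.1886 EOTF mentioned in item 3 is compact enough to sketch directly. Here is a minimal Python version; the function and parameter names are my own, and it assumes the display white luminance is above the black luminance:

```python
# Minimal sketch of the BT.1886 EOTF: maps a non-linear signal
# value V in 0..1 to display luminance, given the display's white
# luminance Lw and black luminance Lb (assumes Lw > Lb).

def bt1886_eotf(V, Lw=1.0, Lb=0.0, gamma=2.4):
    root_w = Lw ** (1.0 / gamma)
    root_b = Lb ** (1.0 / gamma)
    a = (root_w - root_b) ** gamma   # overall gain
    b = root_b / (root_w - root_b)   # black lift
    return a * max(V + b, 0.0) ** gamma
```

With Lw=1 and Lb=0 this reduces to a plain 2.4 power law; a non-zero Lb lifts the blacks the way a real display would.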

Here are the major credits for this project (though, not everything listed here is in my latest version):

  • aliaspider - GTU-famicom, an NES video signal emulator, which helped me understand how the NES’s video signal works. My earlier posts used this for lowpass filtering, and it used to be copied directly into Patchy NTSC as one of its lowpass filter options. The code was difficult for me to understand at the time because of how it was written, but I have since realized that it takes the integral of a basic raised-cosine window function (a “Hann window”).
  • dannyld - Source of the settings in ntsc-md-rainbows, a preset for mame-hlsl’s NTSC passes that approximates the Genesis’s NTSC color carrier timings, although better presets, which perform more filtering through gtu and crt-guest-advanced, can be found in their own thread. Because of this, my patchy-ntsc shaders were initially made to be a more versatile, more extensible way to simulate NTSC artifacts with customizable color carrier timings and frequency filters.
  • Dogway - Grade, a large shader that performs many steps to simulate CRT colors. The shader does have many features, but at the time of writing this, only its EOTF (a.k.a. gamma) function and SMS blue lift have made it into patchy-ntsc.
  • ChthonVII - gamutthingy, a program for generating LUTs based on existing CRTs’ phosphor gamuts with state-of-the-art gamut mapping, and one shader file called trilinearLUT.slang that samples these LUTs in the correct way. This is far superior to the gamut compression found in Dogway’s grading shader, though that’s not entirely Dogway’s fault.
  • Guest - crt-guest-advanced, in my opinion the best-looking CRT shader in RetroArch. Also a gaussian blur shader, which I used to use for lowpassing. I have included two different personal presets for crt-guest-advanced, but I haven’t tested them on the latest version in some time.
  • lidnariq - Posted measured voltages of their NES onto nesdev, including attenuated voltages.
  • I forget who, but someone posted estimated NES hue error amounts on nesdev. The 2C02G amount of 5 degrees appears to be a close match to a video capture of my real NES.
  • EMMIR (LMP88959) - Created NTSC-CRT, written in C. In my source code, you can see my failed attempt at directly porting the 3-band equalizer from NTSC-CRT. Other than this, I don’t have much to say about it.

Amazing work. Any plans to make presets for further systems?


As to why I made the presets in this way

The things that affect the image the most for me, in order from most to least important, are the resulting colors, the signal artifacts, and sitting at a comfortable distance from the CRT so that my brain filters all of that out. The NES/FC and Genesis/MegaDrive were the two primary supported consoles specifically because they both have unusual, unique-looking composite video signals. Apart from that third point, the realistic CRT look isn’t a priority for me, so I’ve emulated the at-a-distance look simply by adding noise and a specific mask, which I’ve only tested so far on my laptop’s 2560x1600 screen.

In fact, if no one minds, I’m thinking about splitting my presets into one folder for the console’s video signal, another for the consumer TV colors, and (with permission, of course) a third for other people’s shader presets with their video signal and color changes removed. That way, you could easily use RetroArch’s “Append” button to combine those three things into a complete shader, and you wouldn’t have to put up with my personal bias against highly detailed, realistic CRT shaders.

The same day I posted my presets, I searched online and discovered a project called NTSC-CRT by LMP88959 (EMMIR) which has very similar goals to my shader presets, but it is 1) more complete feature-wise, 2) written entirely in C (meaning it’s not a shader; it’s more like blargg’s and bisqwit’s NTSC filters built into your emulator), and 3) missing the entire consumer color emulation step. This is likely the next place I’ll be copying code from, because it emulates some things that I really want in my shader presets. I’ve also discovered another palette generator that I should’ve found a long time ago, called palgen-persune, so I will be checking out that code a little bit.

I deliberately picked gtu-famicom and ntsc-md-rainbows (the latter being a mame-hlsl ntsc preset) to emulate the NES’s and Genesis’s signals closely (though of course I still want to keep improving them in certain ways, as they’re still quite different from the real look). To add more consoles, I insist that they be emulated at that level of closeness. I admit I’m having second thoughts about that approach to signal artifacts now, as some shaders have looked better with non-“real” signal emulation, or with mystery video signals that belong to no specific console.

Let me give an example of the kind of things I think about when adding support for a new console. To add, for instance, the SNES/SFC, which I’ve skipped because its composite is relatively clean, I might make a mame-hlsl ntsc preset similar to ntsc-md-rainbows, because the SNES uses a mostly ordinary composite video encoder. One problem is that there are multiple revisions of the SNES video encoding hardware, with the later ones being too clean to be worth emulating at all (in my mind, at least). Another is that mame-hlsl’s ntsc implementation always starts the color signal at the same timing at the start of every frame, which might or might not match the real SNES/SFC. Thankfully, both problems are solvable. As for color output, that should be easy, because the SNES doesn’t contain anything that majorly alters the color output, unlike the Genesis and NES: the Genesis has a unique quasi gamma correction to increase the precision of its brighter colors, and the NES basically has no RGB colors at all and smashes together its bizarre hue-level-emphasis square wave bullshit so that your TV can figure out what to do with it.

About the Master System and Atari consoles, which I mentioned as unsupported in my original post: the Master System is probably the next console to implement, because its non-linear blue quasi gamma correction is already emulated, and it gets its distinct look from the Sony V7040 or CXA1145 depending on the revision, both of which can be emulated using mame-hlsl ntsc like ntsc-md-rainbows. For Atari consoles, I just don’t have much interest in playing them, so they can wait.



This is awesome. Color reproduction is one of the things I spend the most time thinking about. I’m actually still on the lookout for the 13" JVC VCR/TV Combo I had from about 1990 for this very purpose. I played everything on that set from Intellivision all the way to Xbox 360. … I just have no idea what the model was so it’s a crapshoot :frowning_face:

In any case, I spend way too much time on every shader I play with to try and recreate colors how I remember, since that’s as close as I can get. So the idea of this is just really cool.


Gee, I guess I’d better write up some of the things I’ve been meaning to for you, hadn’t I?

About CAT16 instead of Bradford:
Literally just replace xyzToLms with
{{0.401288, 0.650173, -0.051461},
{-0.250268, 1.204414, 0.045854},
{-0.002079, 0.048952, 0.953127}}
and replace the inverse matrix too.
(Note, that’s row major; you’ll need to flip it for GLSL/slang.)

About Demodulators and Clipping:
It looks like these demodulators are multiplying red by about 1.3. (That sounds reasonable as far as color correction goes.) It looks like you’re preventing that from clipping by turning the saturation knob down to 75%ish. Is that correct? Can we confirm with actual hardware that (255, 0, 0) red was clipping with the saturation knob at 100%?
(Television broadcasts probably wouldn’t have been clipping with a 1.3 multiplier and the saturation knob at 100% because they were supposed to be sticking to 75% color bars and all that. I think?.. Video game consoles were a hack that didn’t always follow the rules.)
I bet there’s a straightforward math solution to find where to set the saturation knob to just barely avoid clipping.

About Modulators:
It took me a long time to figure out where the 0.2 was coming from in your CXA1145 emulation. If I understand correctly, the idea is that the top of the color burst is supposed to be at 20 IRE, relative to white’s 100 IRE, hence multiplying those chroma/burst ratios by 0.2 gets them relative to white at 1.0. Buuut that’s not quite right. The datasheet says white’s Y level is 0.71v, which we can assume really means 5/7v, or exactly 100 IRE. But it says burst is 0.29vpp. Divide by 2 to get the peak at 0.145v, or 20.3 IRE, which is not quite to spec. So the multiplier should be 0.203 rather than 0.2. The successor chip CXA1645 has the same angles and ratios, but a lower burst amplitude, resulting in a slightly different multiplier.

Which consoles had which modulators? I’ve got some partial information here: According to one of your posts, SNES had CXA1145. (Do you have a pdf of the sheet for that?) According to this, first generation Genesis mostly had CXA1145, but some had Fujitsu MB3514; second generation Genesis had four possibilities; and third generation Genesis all had CXA1645. (More info on Genesis.) Playstation 1 had CXA1645. (I had a better source for this, but I lost it. Here’s one though.) Neo-geo had CXA1145.

We can make a R’G’B’ to R’G’B’ matrix out of a modulator by multiplying the inverse of an idealized modulator with the actual modulator.

Back to Demodulators:
(Yeah, I’m bouncing around since my thoughts aren’t very organized tonight.)
If we assume overlapping tubes or overlapping demodulators implies the same phosphors, we have (at least) this cluster of Trinitrons:

| Year | Model | Tube | Demod | Notes |
|---|---|---|---|---|
| 1994 | KV-20M10 | A51LDG50X | CXA1465AS | |
| 1994 | KV-20S11 | A51LDG50X | CXA1465AS | |
| 1996 | KV-20V60 | A51LDG50X | CXA1870S | |
| 1997 | KV-20M40 | A51LDG50X | CXA2061S | |
| 1999 | KV-20M42 | A51LDG50X | CXA2061S | |
| 1995 | KV-13M10 | A34JBU10X | CXA1465AS | CRT Database lists demod as “CXA1465AS, CXA1865S” and tube as “A34JBU10X, A34JBU70X.” A34JBU70X is described as a back-compatible update to A34JBU10X. Cannot find datasheet for CXA1865S. Service manual has CXA1465AS and A34JBU10X together. |
| ??? | KV-1396R | A34JBU10X | ??? | Looks 80s. CRT Database has no service manual. |
| ??? | PVM-1380 | A34JBU10X | CX20192 | Described as “late 80s/early 90s.” Can’t find datasheet for CX20192. |
| 1996 | KV-13M20 | A34JBU10X | CXA1870S | CRT Database lists tube as “A34JBU10X, A34JBU70X.” A34JBU70X is described as a back-compatible update to A34JBU10X. |
| 1986 | KV-1367 | A34JBU10X | ??? | CRT Database has no service manual. |
| 1999 | KV-13M42 | A34JBU70X | CXA2061S | |
| 1990 | KV-13TR27 | A34JBU10X | CXA1013AS | CRT Database has tube wrong (A34JBU10X, not the later A34JBU70X). Can’t find datasheet for CXA1013AS. |
| 1993 | KV-13TR28 | A34JBU10X | CXA1465AS | CRT Database has tube wrong (A34JBU10X, not the later A34JBU70X). |
| 1993 | KV-13TR29 | A34JBU10X | CXA1465AS | CRT Database has tube wrong (A34JBU10X, not the later A34JBU70X). |

I’m guessing the US model demodulators changed dramatically in 1987 when SMPTE-C was adopted. I’d really like to find a spec sheet for a 1985ish US demodulator to see how early US-made NES games were supposed to look.

You mentioned something about PAL axes. My understanding is that they are ALL straight 90 degrees.

Something else to emulate for Trinitrons is their “Dynamic Color” feature that was enabled by default. There’s partial information in the CXA1465AS datasheet: “The new dynamic color circuit detects flesh and white colors from the amplitude ratio of R, G and B primary color signals and changes the ratio of the R, G and B outputs so that the color temperature will be higher as the color is closer to white without changing the color temperature of the flesh colored portion.” In the testing section it indicates that red should be 97% and blue should be 106% when “Dynamic Color” is engaged. (Page 14 of this old brochure shows what it looks like.) No idea what qualified as a “flesh color” or what the function was for turning this on harder “closer to white.” My first guess would be that how strongly “dynamic color” is applied scales linearly with the inverse of the largest of R-Y, G-Y, B-Y, down to zero at whatever size R-Y counts as “fleshy.”
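To make that first guess concrete, here’s a toy Python sketch. Everything in it is speculation layered on the two datasheet facts (R at 97% and B at 106% when engaged): the flesh threshold, the linear ramp, and using the largest color-difference magnitude as “how colorful” are all my guesses.

```python
# Speculative model of Trinitron "Dynamic Color": full effect
# (R*0.97, B*1.06 per the datasheet's test section) near white,
# fading linearly to no effect as the largest color-difference
# signal grows past a guessed "flesh/colored" threshold.

LUMA = (0.299, 0.587, 0.114)
FLESHY = 0.2   # hypothetical chroma level treated as flesh/colored

def dynamic_color(r, g, b):
    y = LUMA[0]*r + LUMA[1]*g + LUMA[2]*b
    chroma = max(abs(r - y), abs(g - y), abs(b - y))
    t = max(0.0, 1.0 - chroma / FLESHY)   # 1 at white, 0 when colorful
    r_gain = 1.0 + t * (0.97 - 1.0)
    b_gain = 1.0 + t * (1.06 - 1.0)
    return (r * r_gain, g, b * b_gain)
```

Whites get pushed cooler while saturated colors pass through untouched, which at least matches the datasheet’s verbal description.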

About Gamut Mapping/Compression:
It’s pretty much impossible to implement a “real” gamut compression algorithm in a shader. Figuring out accurate gamut boundaries is actually a really hard problem, and the first “good” solution to it was published in 2020. And it is still very compute heavy, and very space hungry, and pretty much demands computing both gamut boundary descriptors in their entirety once during initialization, storing them, and then accessing them as needed for processing each input color. This is a very CPU-friendly paradigm, and a very GPU-unfriendly one. Soooo… the best answer is to precompute your gamut conversion + compression and make a LUT out of it, then use the LUT in your shader. Annnd… that’s what gamutthingy is for. I need to push some commits and then it ought to be able to handle Trinitron P22 (NTSC-U, NTSC-J, or SMPTE-C whitepoint) to sRGB with fancy-pants gamut compression. I’ll try to upload some LUTs tomorrow or the next day. (And, yes, the readme is out-of-date in places. Both it terms of not reflecting recent commits, and not reflecting that demodulator chip datasheets are a great source of information on how color correction was performed in the 80-90s.)

Aside: I highly recommend reading everything in the “references” section of gamutthingy’s readme.

Aside: Grade’s “gamut compression algorithm” is trash because it’s operating in very-very-not-perceptually-uniform RGB space and it’s just guessing at the source gamut boundary. (Or, rather, it makes the user guess by way of a parameter.) Its only virtue is being fast enough to implement in a shader.

Something I still need to figure out how to implement: If we’re turning down the saturation knob to prevent red from clipping, then there are likely colors around the other primaries and secondaries that are within the Trinitron P22 gamut, but that we could never possibly output. So, if we do gamut compression using the gamut boundary descriptor for Trinitron P22, then, in many directions, we’re going to end up compressing more than we need to in order to make space for colors we can’t possibly have. What we need is to redefine the gamut boundary descriptor in terms of “the set of all possible outputs from the chain of modulator -> demodulator -> saturation knob -> (‘Dynamic Color’ emulation??) -> CRT gamma emulation given the domain of 0-1 R’G’B’ inputs.” I think I’ve got a rough idea how to implement that. But I’ve got a bad feeling that it unavoidably requires baking that whole chain into the LUT.

What if you really, really want to do gamut compression in the shader instead of using a LUT? If we decide to go with a relatively unsophisticated “desaturate only” compression algorithm, then we can “cheat” and reduce gamut boundary finding to binary search, and maaaaaybe that could be implemented fast enough for a shader. Maybe. No promises. It would look like this:

  1. (Do chromatic adaptation of the input color if necessary.)
  2. Convert the input color to LCh. (LCh is not quite perceptually uniform, but it’s close enough and a lot faster than something like JzCzhz.)
  3. Call our input color’s C value Ci.
  4. Keeping L and h constant, look for another C value, Cd, that marks the edge of the destination gamut in the +C direction along the vector away from the achromatic axis through our input color. How? Binary search!
    1. We can precompute and hardcode an unreachable Cbig value for the initial upper bound on our binary search. Check all 6 primaries and secondaries and take ~4/3 the largest C value of the 6. (4/3 should help us converge faster.)
    2. How do we test each candidate C value? Pair it with our constant L and h, and convert the resulting LCh value to linear RGB in the destination gamut. If any of the RGB values are below 0 or above 1, then it’s outside the gamut; otherwise it’s inside.
    3. Keep going until we find an “inside” C and an “outside” C that are arbitrarily close together, then average them and call it Cd.
  5. Compute Ct = 0.9 * Cd. (Alias so I don’t have to type it out a bunch.)
  6. If Ci <= Ct, then no compression is needed. Don’t change C, and just carry on converting the input to the destination gamut.
  7. Otherwise, we need to find the source gamut boundary; call it Cs. Same deal, except our test converts to linear RGB in the source gamut. (If you did a chromatic adaptation, you will need to run it backwards here!)
  8. If Cd > Cs, then no compression is needed. Don’t change C, and just carry on converting the input to the destination gamut.
  9. Otherwise, compute the new C value as Ct + ((Ci - Ct) / (Cs - Ct)) * (Cd - Ct), replace Ci with that, then carry on converting the input to the destination gamut.
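In toy Python form, that sketch looks like this. The `in_gamut` predicates here are stand-ins for “convert (L, C, h) to linear RGB in this gamut and check everything is in 0..1” as described in the binary-search step; the toy boundaries below exist only so the logic can be exercised:

```python
# Toy sketch of desaturate-only gamut compression with
# binary-searched gamut boundaries, per the numbered steps above.

def boundary_chroma(L, h, in_gamut, c_big, eps=1e-4):
    lo, hi = 0.0, c_big              # lo stays inside, hi outside
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        if in_gamut(L, mid, h):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def compress_chroma(L, Ci, h, in_dest, in_src, c_big):
    Cd = boundary_chroma(L, h, in_dest, c_big)   # dest boundary
    Ct = 0.9 * Cd                                # start of knee
    if Ci <= Ct:
        return Ci                                # no compression needed
    Cs = boundary_chroma(L, h, in_src, c_big)    # source boundary
    if Cd > Cs:
        return Ci                                # source fits in dest here
    # Linearly remap [Ct, Cs] onto [Ct, Cd].
    return Ct + (Ci - Ct) * (Cd - Ct) / (Cs - Ct)
```

With a destination boundary at C=50 and a source boundary at C=80, an input at C=70 lands at about 48.6, just inside the destination, while anything under the knee at 45 passes through unchanged.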

Do I recommend this? NO, I don’t. I’d much rather use a LUT generated by gamutthingy. But if you really, really want to do gamut compression in a shader, the above is passable quality that’s maybe fast enough to work as a shader.


I have always known that these numbers were not the right ones; they were guesstimates from experimentation and my limited knowledge as someone new to this subject, so I left a note about this in my original post. 0.75 is close to the mathematically correct amount (about 0.74, if I remember right) to prevent the CXA2025AS’s red from clipping, and 0.2 for the CXA1145 is in fact from 20 IRE. The other saturation constants were just personal preference. The end of that obsolete shader downscales the color to make everything (or almost everything) fit… by running all the extremes through the same process, which multiplies the entire computation time by 8. I’m sorry about the confusion.

In my shader in this post, those constants are still found in the code because I copied the same sequence of if-statements, but the code after that returns their saturation back to 1.0 (notice that they’re multiplied by Uupscale / b_max). The hardcoded decreased saturations are replaced with a universal option to “Normalize Saturation to Red instead of Blue”, which uses a very basic formula and ends up looking under-saturated. Having the saturation somewhere between the red-normalized and blue-normalized defaults is a must.

I’m considering renaming the demodulator presets to ones that are easier to remember and understand, instead of having arbitrary numbers that are US, JP, or “PAL”, where “PAL” is confusingly both SMPTE and PAL.

Although I haven’t read so much about PAL, that is my understanding as well, that they’re all almost the same. However, I noticed that the Toshiba TA8867AN’s PAL mode has G-Y at a higher amplitude than the USA’s SMPTE matrix. That is demodulator preset 7 in my current shader. Preset 1 is the SMPTE matrix, but I’ve incorrectly labeled both as PAL to avoid overcomplicating things from a user’s perspective. My labelling of those demodulator presets needs an entire overhaul to be more user-friendly and less confusing.

I don’t recall posting that. I must have said that the Genesis had the CXA1145 and several other chips that all had approximately the same constants. The SNES has the BA6592F. All the Genesis ones’ sheets can be found with a quick web search, but the ones on SNES are more obscure and can be found here, on the same source that claims the SNES used them.

You understand this math well. One concern from me is that converting through both the console’s modulator and the CRT’s demodulator in this way involves having an unrelated RGB matrix in between them, but I assume that this is not a real problem. Also, now that we’re going R’G’B’->R’G’B’->R’G’B’, to remove that matrix inverse operation for better speed, is it not an option to do an R’G’B’->Y’UV->R’G’B’ instead, like I was roughly trying to do in my old, unoptimized rewrite-customsignal-colorimetry.slang?

I agree. The gamut is a very big hurdle that I can’t make “real” on my own. To be clear, the reason I called this “real consumer color” was to emphasize that the entire color adjustment is based 99% on real data and formulas (it doesn’t include every single little step, but it covers what I believe to be the most important points). I had a similar goal to the gamut compression algorithm that Grade currently uses: doing some crude adjustments that look at least passable. Since you’ve been at this for longer than I have, you seem to have a much higher standard for this than me, but that’s justified given that this shader has “real color” in its name.

You can see I’ve made my own fast yet still trash attempts at compression based entirely on the linear-light RGB->RGB matrix. My idea was to always desaturate the resulting RGB (still in RGB) by a different amount depending on its hue, with those amounts aligned to the input RGB space’s “corners”, but to oversaturate a little bit to stay closer to the original image. The blue corner was especially prone to being desaturated a lot despite its (x, y) coordinates being so consistent, so I set its oversaturation to 1.0. For the rest, I picked the oversaturation amounts using the test patterns and the clamp-reveal feature. The result has uneven changes everywhere, and the entire gamut except grays is affected, but it’s fast.

For me, I think the gamut mapping process needs to not just bring the out-of-bounds colors back in-bounds, but also keep the gamut smooth, by continuing to move the in-bounds colors a bit.

By the sound of your post, using a LUT somewhere in the process (though not for the whole gamut-adjusting process) sounds like a necessity for future versions of this shader to make it more “real”. It is certainly possible to design a more mixed approach that can still adjust gamut settings with only a single LUT or a few small LUTs. On another note, who says the LUTs have to be only 3-dimensional? And uncompressed? Anyway, for now, using gamutthingy will be a big step up from what consumer-color.slang does right now, so I will start with that.

Thinking back over our whole process, the bigger problem that needs to be tackled isn’t the gamut fixing itself; it’s what to do about gamma, because that’s the single part that turns our otherwise one-step process into a two-step process.

Absolutely, I need to read into this more. You said the readme is outdated, so are there some more sources to add to that?

This word alone makes me 200% more tempted to try this. My shader packs already turn my laptop into a stove, so let’s see if it gets even hotter with this.

I forgot about this. For this, some people might have to order real chips somehow (or extract them from wrecked CRTs, which is less preferable because the chip might be damaged), set them up on a breadboard or something without destroying them by accident, sample a bunch of their output, and reverse engineer the formulas… or just write an Arduino script to lazily generate a LUT from one. I don’t have this experience myself, but I know someone who might possibly be able to help me if I cross my fingers and superglue myself that way.

I just noticed something. Those numbers have been rounded to only 2 significant figures, so they are both ±0.005. The result of (0.29±0.005)÷2÷(0.71∓0.005) is between 0.1993006993 and 0.2092198582, and the result of (0.29±0.005)÷2÷(5/7) is between 0.1995 and 0.2065. So the true multiplier is more likely to be over 0.2 than under, but 0.2 is still as close as those numbers can get.

It seems we’re right back at the “how did CRTs deal with out-of-bounds values?” question. I can think of a few possibilities:

  1. We’re fundamentally misunderstanding something.
  2. They clipped 'em. We just never noticed because broadcasts never got anywhere near (255, 0, 0), and game consoles rarely used gradients of bright reds or bright red-on-red highlights that would clip into each other.
  3. The analog hardware tolerated out-of-bounds values to some extent, and any clipping happened further up where we didn’t notice.
  4. They went and implemented some automatic chroma gain control akin to US Patent 4,167,750 without telling us in the datasheet.

Since #1 mayn’t be the case in the first place (I don’t think we’re misunderstanding something big), and isn’t something we could fix at will, and #2 is not something we want to live with (is it?), and thinking about how to possibly implement #3 makes my head spin, that leaves #4, automatic chroma gain control. Maybe this should be implemented as an unprincipled kludge to resolve the clipping-or-desaturation conundrum in a way that looks appealing, at the expense of being unfaithful to the hardware?

To spare you the pain of parsing through that patent, it works like this:

  • If B-Y is positive, B-Y is reduced.
  • If G-Y is positive, G-Y is reduced.
  • If R-Y, normalized to a -0.5 to 0.5 scale is greater than 0.133, R-Y is reduced. (It assumes an angle of 123 degrees; not sure if that’s worth correcting for when the red angle is different.) This funny business is flesh color protection.
  • It doesn’t say how much the reduction is, but for our purposes it should probably be enough to hit 1.0 output for each of the primary color inputs at 1.0.
  • For our purposes, we probably want to interpolate between no reduction and full reduction as chroma increases.
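Those bullets could be sketched like this in Python. To be clear, the patent only gives the conditions; the reduction amount (`full_gain`, chosen here to tame a hypothetical 1.3 red demod gain) and the chroma ramp are my own guesses, so this is a toy model, not an implementation of US 4,167,750:

```python
# Toy model of the patent's chroma-reduction conditions. Signals
# are color-difference values on a -0.5..0.5 scale; full_gain and
# the ramp are guesses, as the patent doesn't specify them.

def chroma_agc(r_y, g_y, b_y, full_gain=1.0 / 1.3, flesh=0.133):
    def reduced(v):
        # Fade the reduction in as the signal approaches full swing.
        t = min(1.0, max(0.0, v / 0.5))
        return v * (1.0 + t * (full_gain - 1.0))
    out_by = reduced(b_y) if b_y > 0.0 else b_y
    out_gy = reduced(g_y) if g_y > 0.0 else g_y
    out_ry = reduced(r_y) if r_y > flesh else r_y   # flesh-tone guard
    return out_ry, out_gy, out_by
```

Negative signals and small (fleshy) R−Y values pass through untouched, while a full-swing positive signal gets the full reduction.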

My apologies. I think I misread or misremembered what you wrote. That link is an absolute treasure trove. Thank you!

Whether you’re multiplying by a R’G’B’ to Y’UV matrix, then multiplying by a Y’UV to R’G’B’ matrix, or multiplying by an R’G’B to R’G’B’ matrix, and then multiplying by another R’G’B to R’G’B’ matrix, either way you’re doing two matrix multiplication operations. So the performance is the same either way. As for the math, so long as you use the same constants for white balance and UV scaling, the idealized Y’UV to R’G’B’ baked into in the first matrix exactly cancels out the idealized R’G’B’ to Y’UV baked into the second matrix. However, a reason you might want to stick with a Y’UV implementation would be if you wanted to try implementing some sort of automatic gain control like above.

Something I failed to say before, but it’s too late to edit my prior post now: If you want to make a R’G’B’ to R’G’B’ matrix out of a modulator, multiply an idealized Y’UV to R’G’B’ matrix by the modulator’s R’G’B’ to Y’UV matrix, and then normalize the rows to fix up the errors that crept in due to only having two decimal places of precision in the data sheet.

Also, on the topic of low precision in the datasheet numbers: The correct amplitude for a 20 IRE burst is 0.2857…vpp. It could very well be that the CXA1145 datasheet rounded off an exactly correct 20 IRE to 0.29vpp. In which case the correct multiplier would indeed be 0.2. (But not for the CXA1645.)

More on the topic of low precision in the datasheet numbers: The CXA1145 is surprisingly close to an idealized modulator, at least according to its datasheet. The R’G’B’ to R’G’B’ matrix is quite close to the identity matrix. The margin of error due to low precision values in the datasheet and tolerances on analog components is probably at least an order of magnitude bigger than the difference between an idealized encoder and an emulated CXA1145 based on the datasheet. So, I think it’s probably acceptable to skip emulating the modulator (at least if it’s a CXA1145).

It was not my intent to bag on your gamut compression algorithm. So far as I know, there are only two published retroarch shaders that even attempt gamut compression, and yours does the better job at it.

Every academic expert to ever consider the question agrees with you. The standard approach is in the bottom steps of the “maybe this will work in a shader” algorithm I sketched for you: Linearly scale everything from 90% of the destination gamut boundary to 100% of the source gamut boundary to fit within 90% to 100% of the destination gamut boundary. You can do a soft knee if you like, but it’s usually meaningless when quantizing to 8 bits. One paper proposes defining the size of the remapping zone inside the destination gamut based on how much the source gamut boundary exceeds the destination gamut boundary in that direction. Gamutthingy also offers a hybrid mode that uses a fixed X% or that paper’s method, whichever leads to the smaller remapping zone. Whether this actually improves anything over the simple 90% rule seems to vary by algorithm.

Maybe a generic description of “state-of-the-art” gamut mapping would be useful? Illuminating? Fun? First, note that algorithms for gamut compression in video and in still images/printing have diverged. The state of the art for still images/printing is presently all about adaptive algorithms that find the minimum compression necessary for same-direction-ish colors in a local neighborhood, for increasingly clever definitions of “local neighborhood.” We can’t do that with video because it would flicker. So video requires a one-size-fits-all solution. The generic shape of most state-of-the-art algorithms looks like this:

  1. If you need to do chromatic adaptation, do it.
  2. Do luminosity scaling so that black and white are in the same places in both gamuts. Several algorithms involve interesting/complex scaling methods. But I think linear scaling is best for our use case for the reasons explained in the gamutthingy readme. (I don’t believe you have to do step #1 and #2 in this order; it’s just customary.)
  3. Get into a LCh-like polar color space that’s perceptually uniform. JzCzhz is the “new hotness.”
  4. Pick a “center of gravity” point on the achromatic axis. Different algorithms pick this in different ways: Simple desaturation uses the same luminosity as the input color. GCUSP and its descendants use the luminosity of the cusp for the input color’s hue. (The cusp is the highest chroma point for a given hue. To the extent that a color gamut is sorta kinda shaped like a cube standing up on one corner, the cusps represent points along the ring of 6 edges that don’t run to the top or bottom corners.) (At one point Windows’ photo gallery thingy used a descendant of GCUSP.) VP has two steps – the first uses black as the center of gravity, with a modified gamut boundary, and the second is the same as simple desaturation. VPR is my fix for a subtle bug in VP that’s only a little more complicated than reversing the order of steps in VP. (See the gamutthingy readme for more details.)
  5. Take the vector going away from the “center of gravity” through the input color. Find where that vector intercepts the source gamut boundary and where it intercepts the destination gamut boundary. (Like I said before, the problem of mapping out gamut boundaries is a hard one that had its first good solution published only recently. That paper’s key insight is that, if the process for getting your input into your perceptually uniform polar color space is invertible, then you can sample arbitrary points by running them backwards to see if they were a valid input within the source gamut. (Or forwards for the destination gamut). With the ability to sample, you can find an arbitrarily detailed set of points on the boundary, and assume the boundary line between points is a straight line. See #9 in the references in gamutthingy’s readme for a better description.)
  6. If the source gamut exceeds the destination gamut in the vector’s direction, and if the input color is outside the destination gamut or within the zone near the edge of the destination gamut, then remap it backwards along the vector. The standard approach to remapping is the 90% thing described above.
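Putting steps 4–6 together, here’s a minimal sketch using the simple-desaturation center of gravity. The boundary lookups are stubbed out as hypothetical callables, since mapping the boundaries is the genuinely hard part.

```python
def map_color(L, C, h, src_chroma_at, dst_chroma_at, knee=0.9):
    """Steps 4-6 with the simple-desaturation center of gravity
    (same lightness L, chroma 0), in an LCh-like space.

    src_chroma_at / dst_chroma_at are HYPOTHETICAL callables giving
    the max chroma on each gamut's boundary at (L, h).
    """
    src_c = src_chroma_at(L, h)
    dst_c = dst_chroma_at(L, h)
    zone = knee * dst_c
    if src_c <= dst_c or C <= zone:
        return (L, C, h)  # nothing to compress, or in the safe zone
    t = (C - zone) / (src_c - zone)
    return (L, zone + t * (dst_c - zone), h)
```

GCUSP-style algorithms would replace `L` in the return value too, since their mapping vector is not horizontal; this sketch only desaturates.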

Yes, but they’re all datasheets and such that you already know about. I wasted a lot of time looking at patents when I should have been looking at CRT repair manuals and datasheets for the chips listed in them.

It’s also out-of-date in that I’ve added some stuff without putting it in the readme. The only important currently undocumented things are Trinitron P22 gamuts and “Spiral CARISMA.” (Spiral CARISMA is my attempt to update the old CARISMA algorithm. It works as an extra preprocessor step that sometimes does hue rotation before compression. Unlike the old CARISMA algorithm, it actually checks if rotation leads to a better result (instead of applying rotation blindly), and it scales the amount of rotation according to chroma (hence the “spiral” name).)

I would be endlessly amused if you could pull it off.

We can’t roll gamma into matrix multiplication because it’s exponentiation, not multiplication. Gamma’s always going to have to be its own step. The only way to squash it down would be to precompute it as part of a LUT. (Which sounds like something you’re not keen on.)
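A quick numeric demonstration of why: a power law doesn’t distribute over the weighted sums inside a matrix multiply.

```python
# (a*x + b*y) ** g  !=  a * x**g + b * y**g  in general, so gamma
# cannot be folded into the matrix coefficients.
g = 2.2
lhs = (0.3 * 0.5 + 0.7 * 0.9) ** g      # gamma applied after the mix
rhs = 0.3 * 0.5 ** g + 0.7 * 0.9 ** g   # gamma "folded into" the weights
```

The two differ by a few percent even for this mild example, which is far above the precision we care about.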

On the topic of gamma, do you understand what’s going on in the last stanza of Grade’s gamma function? I asked in the Grade thread, but Dogway has not responded. Clearly a lot of thought went into that stanza, but I cannot figure out what it’s trying to do or where those constants came from.

I still owe you some LUTs, which I hope to make and upload in a few hours.

I’ll respond to your full post soon. I was just coming back to quickly edit my post to rephrase this part, because it suddenly dawned on me how baffling that phrasing was. What I meant was just in the context of approximating the gamut boundaries in a shader. Of course we can’t make the gamma linear when actually reproducing a color; the whole purpose of gamma is that it’s not linear. If we can somehow approximately semi-gamma-adjust the R’G’B’ matrix (over-estimating, of course), it can be combined into the phosphor gamut part for approximating limits. That would help with the concern that the entire demodulator conversion, with contrast/brightness/color/tint, might otherwise have to live in the gamut LUT.

About not using LUTs, I’m starting to consider them more seriously now. They might even be the better way to simulate some modulators, especially the CXAs with their hardly documented Dynamic Color. My first thoughts in my post above are to try to get more creative with how the shader’s algorithm uses them, instead of having the entire algorithm be implemented by the LUT itself, which would reduce the user’s ability to customize it.

Of course, now that you’ve shown this part, it looks like the gamma-corrected conversions and linear-light conversions will absolutely have to be kept separate for most purposes. Also on this section, I didn’t understand the context of it. Is this by any chance referring to Dynamic Color, or is it everything else?

I’ll properly read and respond to all this tomorrow, or “soon”. Your research is a great help for me.

No, it’s a 1979 patent granted to Matsushita Electric Industrial Co., Ltd., so it’s not something Sony could have done exactly like this (unless they licensed the patent). But they might have done something similar to stop that 130% gain from going wild.

Also, it’s kinda the reverse of Dynamic Color. Dynamic Color turns on when saturation is low (and makes near-white things more blue), but automatic chroma gain control would turn on when saturation is high (and make saturated things less saturated).

But… it seems like a reasonable guess that “flesh color” for purposes of Dynamic Color is probably similar to “flesh color” for purposes of automatic chroma gain control. I.e., if we can’t find more info on Dynamic Color, we might use Matsushita’s definition of “flesh color” for a guess-lementation.

The endpoint of that would be just doing a state-of-the-art conversion from the spec gamut to the destination gamut, skipping over the phosphors and CRT stuff. That looks pretty good for some games, but it fails to capture all the ways that CRT color correction was hacky and wrong – like working in gamma instead of linear.

Oooooooooh, I get it now. You want to postpone dealing with the out-of-bounds values the demodulator produces until the gamut compression stage.

Conceptually, I guess that’s sort of akin to assuming the CRT hardware either tolerated it through analog magic or attenuated without hard clipping.

I have to confess that I’ve gone down this avenue before and given up. But, this time things went differently.

In order to do what you want to do, you’d need a gamma function with three properties:

  1. Unlike most, it can handle inputs above 1.0.
  2. Unlike most, it can handle inputs below 0.
  3. Like most, it’s invertible.

Well, it turns out there do exist some gamma functions that are defined for inputs outside 0-1. IEC 61966-2-4 (a.k.a. xvYCC)'s gamma function is defined for, apparently, -infinity to infinity. And ITU-R BT.1361 is defined for -0.25 to 1.33. For inputs greater than 1, they both just apply the same exponential formula they use normally. I guess that makes sense, since it runs hard into diminishing returns just like I imagine a CRT would. For inputs below zero, both flip the sign on the input, then run it through, then flip the sign on the output. BT.1361 then divides by 4. (IEC 61966-2-4 does not.) I wish I knew what the rationale for that was, but I cannot find it in the document.
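A sign-mirrored power law along those lines satisfies all three properties above. Here’s a minimal sketch using a plain power law, without BT.1361’s extra scaling of the negative branch (the function names and `g=2.4` default are mine):

```python
import math

def extended_gamma(v, g=2.4):
    """Power law defined for all reals: same formula above 1.0,
    sign-mirrored below 0 (xvYCC-style)."""
    return math.copysign(abs(v) ** g, v)

def extended_gamma_inverse(v, g=2.4):
    """Exact inverse of extended_gamma, also defined for all reals."""
    return math.copysign(abs(v) ** (1.0 / g), v)
```

Because `abs(v) ** g` is strictly increasing and the sign is carried through unchanged, the pair is invertible everywhere, including above 1.0 and below 0.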

If we strip the control knobs off the ITU-R BT.1886, Appendix 1 gamma function, it seems like we could define it from -infinity to infinity in the same way. The control knobs would need to be refactored into a preprocessing step, and possibly a post-processing step too. I haven’t thought too hard about this yet, but it sounds doable. (Also, I could totally live without the knobs.)

With that sorted, it should be possible to run backwards from LCh to linear RGB, through the inverse gamma function to R’G’B’, through the inverse of the demodulator’s color correction matrix, through the inverse of the modulator’s matrix, all the way back to the original R’G’B’ input, and then say whether an arbitrary LCh value was inside or outside the set of possible original input values. And with that, you can find the source gamut boundary for “the gamut of things this chain of modulator, demodulator, etc. could output given valid input.” So, yeah, you could postpone dealing with those out of bounds values until the final gamut compression.
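As a sketch, that inside-the-source-gamut test might be structured like this. `inv_gamma`, `inv_demod`, and `inv_mod` are hypothetical stand-ins for the actual inverse chain, not real shader code.

```python
import numpy as np

def in_source_gamut(rgb_linear, inv_gamma, inv_demod, inv_mod, eps=1e-6):
    """Run a candidate linear-RGB color backwards through the chain.
    It lies in the source gamut iff the recovered original R'G'B'
    is a valid input, i.e. every channel is within [0, 1].

    inv_gamma: inverse gamma function (callable on an array)
    inv_demod: inverse of the demodulator's color correction matrix
    inv_mod:   inverse of the modulator's matrix
    """
    rgb_prime = inv_gamma(np.asarray(rgb_linear, dtype=float))
    original = inv_mod @ (inv_demod @ rgb_prime)
    return bool(np.all(original >= -eps) and np.all(original <= 1.0 + eps))
```

With a sampler like this, you can trace out the boundary of “the gamut of things this chain could output given valid input” point by point, per the boundary-mapping paper.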

One bit of caution: This sort of modified gamma function will be happy to produce outputs with out-of-bounds luminosities. You’ll need to check for those, and do something about them, because a “desaturate only” compression direction will never intersect the destination gamut. Based on the couple of demodulators I did matrices for, I don’t think any valid inputs will get you there, but I’m a little worried about yellow.

Honestly, I barely remember what I even meant by the whole gamma thing anymore. Part of what I meant there was that gamma was a very non-linear step between two otherwise 95% linear parts of the shader.

At least, I know that what you said here is serious:

I’m thinking of this like, the “right” way to do this (but not exactly the way we’ll do it in practice) involves two separate passes from the beginning to the end of the entire process: First, a pass to get good overestimates of the limits in key steps and in the final output, and second, a pass to get the actual resulting color. The limits would then be used for downscaling and desaturating the final color as needed. For the actual resulting color, we should handle clamping and such things like the real hardware, but for the limits, we need an overestimate that includes everything at least, so clamping and gamma need to be handled in such a way where we don’t mess up the limits. I still need to read your sources and gamutthingy’s code to understand how this whole process works, but my understanding right now is that clamping and gamma are overcomplicating the process of over-estimating the limits.

So, you were close. I want to postpone dealing with out-of-bounds values until the gamut compression stage when calculating the gamut’s limits for gamut compression, not when truly processing the color.

I’ve noticed that the oversaturated red needs to get clamped for some games to look good, at least with consumer-color’s current process. For NES, it’s the same situation, but with blue instead. Postponing dealing with out-of-bounds values until gamut compression made the games look gross and dark. My current presets have 0.9 contrast, where 1.0 is “fit max white”, resulting in a slight clamp on over-saturated colors before applying gamma and the phosphor gamut.

Sorry for the delay. Here are some linear RGB to linear RGB LUTs for going from Trinitron phosphors with various white points to sRGB with fancy-pants gamut compression.

(I’ve also included a sample trilinear LUT shader because the reshade one in the repo is a bit fucky.)

I intend to implement using “the gamut of things this chain of modulator, demodulator, etc. could output given valid input” as the source gamut boundary in place of the phosphor gamut. But that will likely take a while.

Would you please reupload that on a different site for me? I can’t get my computer to stop blocking that site for “malware” even though it’s no more dangerous than the hundreds of other file sharing sites that are not blocked.

I will get to working on this again very soon, but for the time being, I have other priorities. There are several things in this thread that are all ready to implement, but there are also several things that have to get measured from real hardware, such as Dynamic Color and the behavior of maximum saturated red. I most likely will not be able to set up one of these chips for testing by myself (though I surely will try), so my current opinion is that 100% saturated red is prone to getting clamped and badly dot-crawled. One thing not brought up is a “black stretch”, “black expansion”, or “dynamic picture”, which I assume all mean the same thing, but if I remember right, Dogway’s Grade is at least trying to do this.

I literally just picked that site at random from a page of search results. Let’s try this one.


This is a very rushed preview of what I’ve been working on. Soon, I’m going to clean this all up and reupload it.

https://www.mediafire.com/file/1ioxjlc2j0fmxgo/PlainOldPants_2024_07_28.zip/file

To install, simply put the “PlainOldPants” folder into your “shaders” folder. The “shaders” folder will then contain folders called “shaders_slang”, “shaders_glsl”, and “PlainOldPants”.

Please read these instructions before using

To use it for Genesis/MegaDrive games, use the emulator Genesis Plus GX or BlastEm, and use the genesis-milestone-grade preset located in the PlainOldPants/2024_07_28 folder. If you chose BlastEm, you have to go into the shader settings and change the “Your Emulator” setting to “BlastEm” (0). If you chose Genesis Plus GX, make sure Genesis Plus GX’s “Borders” setting is set to “Off” (that’s in Genesis Plus GX’s settings, not the shader settings). In the shader’s settings, you can adjust the “Artifacting Reduction Hack” setting to change how many rainbows you get. Some Genesis consoles had more rainbows than others, and many later consoles had hardly any rainbows at all. Feel free to change the settings “Artifacting Reduction Hack”, “CRT Saturation”, “Demodulator”, and “CRT Brightness”.

To use it for NES/FC games, use the emulator Mesen (Don’t use FCEUmm; its “Raw” palette is different), and use the nes-milestone-grade preset located in the PlainOldPants/2024_07_28 folder. If you’re playing Battletoads or Battletoads & Double Dragon, you should use the nes-battletoads-milestone-grade preset instead. In Mesen’s settings, change the palette to “Raw”. In the shader’s settings, feel free to change these settings to your liking: “Signal Res Y”, “Artifacting Rate”, “CRT Saturation”, “Demodulator”, “CRT Brightness”.

All the presets are for a certain US color option for now, but there’s a workaround: You can change the setting called “Demodulator” to one of the other regional options.

Details and updates

One quick thing I need to do is take back what I said up here. At the very least, the colors are now emulated with proper mathematical formulas that model a real CRT’s colors, including the demodulator simulation that was missing from Grade. There is still a long way to go, but the demodulator simulation got the result much closer. My presets’ main features are the color and the video signal; the video signal is equally as important as the color.

The main update that’s happened to Genesis is the improvement of the composite signal effect. The code for it works much differently now, and the effect is much less intense, which is a lot closer to how it looks on my real hardware, though still not perfect. The rainbow effect on completely solid backgrounds is still missing.

The main update to NES is that it now internally uses colors sampled from a video capture of my real NES. Despite this, you still have to select the “Raw” color palette in Mesen. Even with this change, you still will get very realistic NES video signal artifacts thanks to GTU-famicom by aliaspider, and you will still be able to simulate several different demodulators. The correct fringing amount on NES is 1.0, so if you want to reduce fringing on NES, you should decrease the Y signal resolution instead. The artifacting amount varies depending on how your TV decodes the signal, and the default 0.75 is just my personal preference.

Color simulation is changed, but it’s inaccurate because I rushed it for this release. I’ve thrown together a version of Grade that includes the demodulator simulation, and I’ve appended Chthon’s gamutthingy lookup table after it. The correct way to do this would have been to insert the lookup table into a certain part of Grade. This incorrect setup is temporary and will be fixed soon. Once Chthon’s lookup tables and Grade’s many features are used correctly, the result will look closer to real hardware.

Soon (hopefully tomorrow or the day after), I’m going to put together a cleaned-up version of all this. I’ll also eventually put in pull requests for some things.

PlainOldPants’s preset pack - 2024-08-01 release

Note: As of August 3, I’ve just fixed a problem with the NES decoding. Please redownload this.

https://www.mediafire.com/file/u149kltb2ipid6j/PlainOldPants_2024_08_01_nesfix.zip/file (Installation instructions and settings descriptions are included.)

My cringy introduction

POV: You just bought this on Facebook Marketplace from some plain old pants guy for over $200, and when you got home after 3 more hours of driving and finally set it up… You just got f***ing scammed.

For the record, I don’t truly know what causes this smear, and I doubt it’s a common problem, because almost no one online talks about it. From what I’ve gathered, it’s a combination of the CRT wearing out over the years, your contrast and/or saturation being too high, the jungle chip decoding R-Y at a very high saturation, and your resulting red, green, and/or blue not getting clamped low enough. Real life examples: https://www.reddit.com/r/crtgaming/comments/ihes5m https://web.archive.org/web/20220826091650/https://i.imgur.com/At0tRng.jpg https://www.reddit.com/r/explainlikeimfive/comments/7xk1ll/eli5_why_do_some_crt_tvs_have_color_bleed/

NES raw palette decoded based on the Sony CXA1465AS data sheet. Modified GTU-famicom for the signal.

If you use the ntsc-md-rainbows that’s currently in RetroArch, the result is much more intense than it is on my real Genesis. I ended up replacing a bunch of code in mame-hlsl’s (ntsc-md-rainbows’s) NTSC implementation to make it look like this.

Sony CXA1465AS

Rec. 709

Make sure to read the Readme for details on what the settings do.

Download link https://www.mediafire.com/file/boul3d0bpv17wge/PlainOldPants_2024_08_01.zip/file

This is more of an NTSC shader preset pack than a CRT shader preset pack. There are four main steps that this preset pack does:

  1. NTSC video signal artifacts. Genesis/MegaDrive and NES/FC are supported, using modified versions of ntsc-md-rainbows (mame-hlsl) and GTU-famicom respectively. Parts of these shaders have been rewritten and improved. NES signal emulation requires you to set Mesen’s color palette to “Raw”, which looks like just red and green. Genesis signal emulation requires you to pick either Genesis Plus GX or BlastEm, and if you’ve picked Genesis Plus GX, you have to crop overscan to make the signal look right.
  2. Color alteration caused by the jungle chip. If you’ve seen the CXA2025AS NES palette, that’s one of the jungle chips you can choose, except you can use it on Genesis now, and you can properly adjust contrast, brightness, color, and tint. (Sorry, but sharpness is stuck at 0.) Optionally, you can make bright colors smear over to the right, but I doubt this happened commonly, nor would anyone want it in their CRT shader.
  3. Gamma and phosphor gamut. Gamma is handled using Grade’s EOTF function that no one seems to understand. The phosphor gamut is done using lookup tables, for the best looking colors and the best speed. Afterglow is simulated using code from crt-guest-advanced, even if you’re not using crt-guest-advanced.
  4. The CRT display itself. A few different “sanitized” CRT shader presets are included, which have settings that disable color and signal alteration. You can pick the “SignalOnly” preset that corresponds to your emulator and append a sanitized version of your favorite CRT shader to it.

This preset pack is almost entirely made using the work of various other people. Some work is used almost as-is; other work has pieces copied and pasted into different places. Here are the major credits I can remember:

  • aliaspider - GTU-famicom, an NES video signal emulator.
  • dannyld - Source of the settings in ntsc-md-rainbows, a preset for mame-hlsl’s NTSC passes that approximates the Genesis’s NTSC artifacts.
  • Dogway - Grade, a large shader that performs many steps to process CRT colors.
  • Chthon - gamutthingy, a program for generating lookup tables, and trilinearLUT.slang, which correctly samples the lookup tables.
  • Guest - crt-guest-advanced, in my opinion the best CRT shader in RetroArch. Also, a gaussian blur shader.
  • lidnariq - Posted measured voltages of their NES onto nesdev, including de-emphasis amounts.
  • I forget who, but someone posted estimated NES hue error amounts on nesdev. It’s a close match to my real NES.

You don’t have to read the rest of this post. The zip file above contains everything that most people need.

Detailed changelog

Changes to NTSC signal artifact simulation

For NES, added an option to use colors that I got from video capturing my NES. They are usable both with or without GTU-famicom’s NES signal emulation. Also, I’ve adjusted the GTU-famicom settings a little, by changing the artifacting value back to 1.0 (like in the vanilla version) and decreasing the chroma resolution to make the artifacts less jarring.

  • To be able to use GTU-famicom with my video-captured NES palette, scaleX is changed to use the palette like a lookup table to convert from YIQ to RGB. This was already a feature in my previous upload, but I’m now using it seriously.
  • Unfortunately, I wasn’t able to capture the colors with the de-emphasis bits. All the de-emphasized colors are off, at least for now.
  • I used a Dazzle DVC100 and OBS to capture the video.
  • To prevent clamping, I captured each NES color with the saturation and contrast set to 25%. The shader undoes this and, in doing so, preserves RGB values that are less than 0 or greater than 1. The drawback is that the colors are 25% as precise as a normal capture. Looking at the resulting numbers, maybe even 50% contrast/saturation would have worked. Even with this setup, it seems like color 0D got clamped anyway.
  • If you want to take my unclamped NES colors and use them for another project (which you don’t even need to ask to do), please understand that my setup used the BT.709 matrix to decode the color. What you probably want is the BT.470 matrix. To correct this, simply convert into YUV using BT.709, and convert back to RGB using BT.470. Both matrices are found in my shader.
  • Thanks to my real capture, I discovered that, when computing NES colors, the PPU hue rotation error was going in the wrong direction. I also discovered that the result was off by half a PPU cycle, or 15 degrees. I have fixed both these issues.
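For anyone who wants to do that BT.709-to-BT.470 correction described above, here’s a sketch. It assumes Y'PbPr-normalized matrices built from the luma weights; the exact U/V scale factors in the shader may differ, but matching scale factors cancel out of the round trip.

```python
import numpy as np

def rgb_to_ypbpr(kr, kb):
    """Y'PbPr analysis matrix from luma weights (Pb/Pr normalized
    to +/-0.5)."""
    kg = 1.0 - kr - kb
    y = np.array([kr, kg, kb])
    pb = (np.array([0.0, 0.0, 1.0]) - y) / (2.0 * (1.0 - kb))
    pr = (np.array([1.0, 0.0, 0.0]) - y) / (2.0 * (1.0 - kr))
    return np.vstack([y, pb, pr])

m709 = rgb_to_ypbpr(0.2126, 0.0722)  # BT.709 luma weights
m470 = rgb_to_ypbpr(0.299, 0.114)    # BT.470/601 luma weights

# Captured R'G'B' was decoded with the BT.709 matrix; re-encode with
# BT.709 and decode with BT.470 to recover the intended colors.
fix = np.linalg.inv(m470) @ m709
corrected = fix @ np.array([0.5, 0.25, 0.75])
```

Grays pass through unchanged (both matrices send gray to Pb = Pr = 0), so the correction only touches chroma, as you’d expect.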

Overhauled the MegaDrive/Genesis NTSC simulation. The whole thing is matched by eye, not using real data. It looks closer to real hardware now, but still not quite there.

  • The input is now nearest-neighbor instead of bilinear to remove inaccurate, uncontrollable low-passing.
  • Before luma and chroma are combined, luma is low-passed in a more controlled way, using GTU’s scaleX code by default, with an option to instead use Guest’s gaussian blur. Don’t ask me how scaleX works; I don’t understand it either. This gives the Genesis its blurry signal, and it reduces artifacts to an amount that’s more like real hardware.
  • The signal is decoded using a different method than before, now using GTU’s scaleX code with some hacks to improve sharpness. As a result, that excessive, inaccurate ringing-like effect is now removed, and the bandwidth for chroma is lower to make the artifacts flicker less, more like real hardware.
  • To further refine the artifacts, I’ve added “Artifacting” and “Fringing” controls that work similarly to other NTSC shaders. I ended up leaving them at 100%, but if you want to reduce the rainbows, this is the easiest way to do it.
  • Something that’s not emulated is how, on real Genesis hardware, even a solid color or solid gray will have rainbow banding. I don’t know what causes this effect.
  • “My” code for this whole Genesis NTSC process is a complete mess. My sources for mass copy/pasting were mame-hlsl’s NTSC passes (used by ntsc-md-rainbows), Guest.r’s gaussian blur, and GTU-famicom’s scaleX phase.
  • I have no idea what I’m doing, but I’ve looked up the schematic for the VA3 Genesis and the CXA1145’s data sheet and seen that, unlike the CXA1145 which only wants you to put a delay line on luma before combining with chroma, the Genesis puts a capacitor and an inductor in parallel. I have no idea how electricity works, but this seems weird.

Changes to consumer color simulation

  • The demodulator presets are now sorted by the green-to-brown convolution amount. As a result, they are also sorted by region.
  • The TA8867AN/BN PAL matrix has been removed. I’m now assuming that the different green amplitude was an error in the data sheet that didn’t appear in the real chip, and that all PAL CRTs had the BT.470 matrix.
  • I’ve added the BT.709 matrix for decoding. I don’t know whether this was ever commonly used or not, but I assume it wouldn’t have become common until about the 2000s, and I’m guessing it also would have coincided with YPbPr component.
  • Added an optional, hardware-inaccurate effect where bright colors will smear over to the right. After searching around the internet, I’ve found very little information about this effect, but it seems like it’s caused by the CRT wearing out over years of use (or lack of use) and the user’s contrast and/or saturation being very, very high, which isn’t helped by the fact that many CRTs set their default contrast to the maximum possible. The reason I call my implementation “hardware-inaccurate” is that it just uses an arbitrary clamping level to smear at (1.0 by default), not based on anything legitimate. Here’s an extreme, severe example in real life: https://www.reddit.com/r/crtgaming/comments/ihes5m/does_this_color_issue_mean_my_crt_is_going_out/ https://web.archive.org/web/20220826091650/https://i.imgur.com/At0tRng.jpg https://www.reddit.com/r/explainlikeimfive/comments/7xk1ll/eli5_why_do_some_crt_tvs_have_color_bleed/
  • Composite video demodulator simulation is now a separate phase from phosphor gamut simulation. That makes it easy to swap out the gamut simulation with anything you like.
  • I’ve included a modded version of Grade with demodulator simulation, for those who are interested in that. My original plan was to eventually base all my code on Grade, but I’ve changed my mind.
  • I copied and modified crt-guest-advanced’s afterglow and made it work with my setup. Previously, if you used the afterglow in crt-guest-advanced, the glow would be based on the output direct from your emulator instead of the color that’s actually being displayed.
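The smear effect mentioned in the list above could be sketched roughly like this. The clamp level is arbitrary, just like in my implementation, and the decay factor is a made-up parameter for the sketch, not something from the shader.

```python
def smear_right(row, clamp_level=1.0, decay=0.5):
    """Clamp each pixel at clamp_level and carry the excess into the
    pixels to its right, fading it out by `decay` per pixel.
    Hardware-inaccurate by design."""
    out = []
    carry = 0.0
    for v in row:
        v += carry
        excess = max(0.0, v - clamp_level)
        out.append(v - excess)      # hard-clamped pixel value
        carry = excess * decay      # leftover energy smears rightward
    return out
```

A single over-bright pixel thus leaves a decaying trail to its right, while in-range scanlines pass through untouched.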

For phosphor gamut simulation, you now have four main options as of now:

  • Chthon’s lookup tables, generated using Chthon’s program called gamutthingy. Chthon gave three of these to me. I have chosen to use this for my presets. I’ve copied Grade’s gamma function into this for better results.
  • The lookup tables in crt-guest-advanced. I assume it won’t take a lot of work to substitute these into Chthon’s trilinearLUT-gammacorrect.slang.
  • Dogway’s grading shader. This shader combines many different features in it. This one emulates the phosphor colors in the shader itself instead of using a lookup table. It handles out-of-bounds colors either by clamping or using a less serious gamut compression function, but on the bright side, it includes a better gamma function with something it calls “black lift compensation”, which is not present in any of the other options.
  • My consumer-color-phosphor.slang. This one’s only improvements over Grade are better (but still not great) handling of out-of-bounds colors and the ability to input an sRGB color for your white point. Other than that, I don’t recommend consumer-color-phosphor as of today.

The actual CRT effect itself

The presets are now split in half: First, you select a “SignalOnly” preset that corresponds to your emulator, and second, you pick a CRT preset of your liking from the “CRT_Sanitized” folder.

My pack’s entire process is set up in such a way where the simulation of the video signal, jungle chip, and phosphors all happens before doing the CRT effect itself. In order for that to work, I needed to “sanitize” a CRT shader, by making a preset for it that does not do any of the aforementioned things. In the “CRT_Sanitized” folder, you can find sanitized presets of the shaders made by Guest (crt-guest-advanced), Hyllian (crt-hyllian), lottes (crt-lottes), TroggleMonkey (crt-royale), cgwg (crt-cgwg-fast), and aliaspider (GTU pass3). If you want to use one of those, you should first take a “SignalOnly” preset for your specific emulator, and then append your preferred sanitized CRT shader. Otherwise, you’ll get Guest’s CRT, which I believe to be the best one.

Blah

  • Finally added GPL 3 headers to my consumer-color shaders.
  • Changed all the settings’ names to start with eztvcol3 instead of eztvcol2. This means all the old presets are forced to be incompatible on purpose with these updated shaders.
  • Removed unneeded mame-hlsl parameters from my modded ntsc-md-rainbows.
  • Changed the headers in the settings so that they no longer crash RetroArch when you click on them.
  • To make the settings more accessible, I’ve added a dummy pass that moves all the end user’s settings to the top, followed by a line stating that the other settings are for “advanced users only”.

Pants, IDK why I am getting this when I am in battle.

Crap. I messed up the de-emphasis bits, apparently. I swear I tested this.

Try scrolling down the shader settings and changing the NES from “Capture” to “Formulas”, or whatever it was called. Or, if you’re on FCEUmm, change to Mesen.

If neither of those work, then I think I know what piece of code I broke.
