Dogway's grading shader (slang)

Here they are, as good as it gets on a cellphone camera. Actual image is more saturated in-game.

1 Like

Thank you very much for the empirical NTSC-J phosphor values. I’d given up hope of ever finding them.

Food for thought: By using a LUT, you can get access to more sophisticated gamut compression algorithms (and gamut boundary descriptors) that won’t run in real time. For instance, here’s my toy (which just happily ingested your NTSC-J phosphor values) that wraps up all the steps from linear RGB back to linear RGB into a LUT.
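
In case it helps picture the runtime side, the lookup itself is trivial in a slang pass. A minimal sketch, assuming a hypothetical 64×64×64 cube flattened into a 4096×64 strip texture (blue picks the tile, red/green index within it); the name and layout here are placeholders, not what my repo or Grade actually use:

vec3 sample_gamut_lut(sampler2D lut, vec3 c)
{
    const float size = 64.0;                            // LUT resolution per axis
    float b      = c.b * (size - 1.0);
    float slice0 = floor(b);                            // lower blue tile
    float slice1 = min(slice0 + 1.0, size - 1.0);       // upper blue tile
    vec2  uv     = vec2((c.r * (size - 1.0) + 0.5) / (size * size),
                        (c.g * (size - 1.0) + 0.5) / size);
    vec3  lo     = texture(lut, uv + vec2(slice0 / size, 0.0)).rgb;
    vec3  hi     = texture(lut, uv + vec2(slice1 / size, 0.0)).rgb;
    return mix(lo, hi, b - slice0);                     // trilinear via two bilinear fetches
}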

3 Likes

@Dogway No idea if this is of potential use to you in your efforts to refine/verify the NTSC-J phosphor values, but I stumbled upon this 1980 paper and it seemed worth sending your way just in case:

3 Likes

@Azurfel Thanks for the links, the paper looks familiar though. It mentions D65 and phosphor chemicals mostly based on the period of the 60s and 70s. I don’t know how reliable that can be despite coming from a Japanese study.

My procedure was to find several similar phosphor measurements known to be D93; their characteristic is a kind of orange-y red phosphor, so based on that I know we are talking about the same kind of phosphors across the three sets. After removing outliers I average them to reach a common ground; I do this for each phosphor, using the IQM estimator. Once I have my primaries, I look for near-matching phosphor chromaticities in the literature and, IIRC, I replaced mine with those.

Given that no CRT unit is perfect this is kinda the best approach I could find.
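
For reference, the IQM (interquartile mean) just averages the middle half of the sorted samples. A toy sketch (GLSL only to match the rest of the thread; the actual averaging is of course done offline, not in a shader), assuming eight already-sorted measurements:

float iqm8(float v[8])
{
    // drop the lowest and highest quarter, average the middle four samples
    return (v[2] + v[3] + v[4] + v[5]) * 0.25;
}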

@Chthon Thanks for the compliments; I just explained my procedure to Azurfel. I had a look at your repo and found it very interesting, with several gamut compression methods I had always wanted to try besides CUSP, like the JzCzhz model I had pending. The only difference is that Bradford, while well known, might not be the best LMS transform for chromatic adaptation. In my tests I found CAT16 to work better.

I’m quite busy these days so I will give it a look when I get some time.

3 Likes

@Dogway

I implemented and tried CAT16. Not sure what I think about it. On the one hand, it’s probably a lot more accurate. On the other hand, it leads to more desaturated blues, which is aesthetically displeasing.
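
For anyone following along, the CAT16 path is just a von Kries scaling in the CAT16 LMS space. A minimal sketch, using the published CAM16 matrix with white points passed in as XYZ (the names are mine, not from either repo):

vec3 adapt_cat16(vec3 XYZ, vec3 white_src, vec3 white_dst)
{
    // CAT16 matrix, rows written out (GLSL mat3 is column-major, hence transpose()).
    mat3 M16 = transpose(mat3(
         0.401288,  0.650173, -0.051461,
        -0.250268,  1.204414,  0.045854,
        -0.002079,  0.048952,  0.953127));
    vec3 gain = (M16 * white_dst) / (M16 * white_src);  // von Kries gain per cone-like channel
    return inverse(M16) * (gain * (M16 * XYZ));
}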

As for gamut compression methods, I’ve found that VP (and my VPR variant) is the best at preserving details outside the destination gamut. I think it lives up to its authors’ lofty claims.

There’s something I’d like to ask you about: color correction circuits. I saw a reference to these a while back, but the significance didn’t register. I saw one again while looking into P22 phosphors, and I had a “crap, I think I’m doing this wrong” moment.

Wikipedia claims that…

"[t]o ensure more uniform color reproduction, receivers started to incorporate color correction circuits that converted the received signal — encoded for the colorimetric values [of the NTSC primaries] — into signals encoded for the phosphors actually used within the monitor… As with home receivers, it was further recommended that studio monitors incorporate similar color correction circuits so that broadcasters would transmit pictures encoded for the original 1953 colorimetric values…

…and it cites to ITU-R 470-6, annex 2, which does indeed say that.

A brief search finds several quite different implementations of color correction circuits:

  • A post in this forum thread describes late-70s color correction circuits as “bodges” and “abhorrent correction devices [that] distorted the demodulation axes to force colors near the flesh tone region to be flesh tone atg [sic] the expense of distorting other colours.”
  • This 1970 American patent might be what the forum post is referring to.
  • This 1978 Japanese paper is waaay beyond my reading level. From what I can gather, it’s working on gamma-space(?) RGB voltages, and results in everything moving towards the red point.
  • This 1995 American patent sounds like some kind of LUT in a LCh-like polar color space that could be calibrated by hooking up a computer. (Maybe that part was to be done in the factory? Not clear.)
  • This 1999 American patent to Samsung sounds pretty close to “doing it right.” It’s using linear RGB as an index into a LUT, and it sounds like the LUT was computed using a Von Kries transform of some sort. Since they couldn’t manage a 256x256x256 LUT, they used a tree structure to use finer sub-LUTs for “important” colors like skin tones, and coarser ones for everything else.

Do you happen to have any insights into…

  • Did 80s- and 90s-era Japanese televisions have color correction circuits?
  • How did those particular color correction circuits work?
  • Were those circuits even on the pathway for input from a game console (as opposed to broadcast)?
  • Do any of Grade’s knobs simulate the effects of a color correction circuit? (I don’t think they do, but maybe I’m just dense.)

One thing that occurs to me is that we could simulate an “idealized” color correction circuit by first doing a gamut conversion from the NTSCJ receiver specifications (93k+27MPCD & NTSC1953 primaries) to the actual TV hardware capabilities (93k+27MPCD & P22 phosphors) before doing the gamut conversion from the actual TV hardware capabilities to sRGB. I tried this and it was at least visually pleasing. Of course it’s going to be inaccurate at simulating a color correction circuit to the extent that modern methods are more accurate at doing the gamut conversion right than the circuits were. Perhaps what’s called for is a “shittiness of color correction circuit” knob that blends the output of the gamut conversion with the uncorrected input…
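
In shader terms, that idea could look something like the sketch below; the matrix is a placeholder to be precomputed offline from the 1953 spec primaries and the measured P22 phosphors (same 93k+27MPCD white on both sides, so no adaptation step is needed), and the knob name is made up:

// Placeholder: precompute offline as (XYZ -> P22 RGB) * (NTSC 1953 RGB -> XYZ), both at D93.
const mat3 SPEC_TO_PHOSPHOR = mat3(1.0);

vec3 correction_circuit(vec3 rgb_linear, float circuit_quality)
{
    vec3 corrected = SPEC_TO_PHOSPHOR * rgb_linear;      // spec gamut -> phosphor gamut
    return mix(rgb_linear, corrected, circuit_quality);  // 0 = no circuit, 1 = idealized circuit
}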

1 Like

Definitely some circuit inside the TV changes the colors. It’s not just “calculate this matrix, multiply, and you are done”: the colors simply are not the same. Similar, yes, but not the same. RGB LCD defaults then are not to be used for any CRT-era gaming; they are simply way off, and no pixel artist would use those colors for any game. That’s why I think you can only put them side by side and tweak the colors manually until you are there (I did that on crt-sines glsl), just an approximation, around 90% accurate. That’s better than the 30-40-50% accuracy of matrices.

1 Like
  • A post in this forum thread describes late-70s color correction circuits as “bodges” and “abhorrent correction devices [that] distorted the demodulation axes to force colors near the flesh tone region to be flesh tone atg [sic] the expense of distorting other colours.”

Yeah, if you look at a real Trinitron the game skin tones are close to real skin color, while on RGB they are orange and still off even if you use matrices.

Funny thing: if you have an Android phone that has an “sRGB” gamut option, the colors are EXACTLY the same as on my Trinitron CRT, at least in my case. There is absolutely no difference anywhere. Same hue, tone, etc. Colors are very different if I use my PC. And I tested that with a real Amiga feeding the CRT, no emulators used on the CRT.

1 Like

@DariusG

I think I could have done a better job at clarity in my previous post. Please let me try again. There are TWO gamut conversion muddles going on.

The more obscure one has to do with color correction circuits. This might be more clear in narrative mode: The 1953 American NTSC spec defines the white, red, green, and blue points to be used for broadcasts and receivers. (And the Japanese spec uses the same red/green/blue points with a different white point.) But the phosphors corresponding to these points were really dim, especially the red one. And chemists never figured out how to make brighter phosphors in those exact colors. But they could make brighter phosphors in other colors. And it turned out that consumers preferred brightness over accurate color. So, by 1960, every manufacturer was using brighter-but-wrong-color phosphors. And this trend continued until the death of the CRT. However, manufacturers were aware that doing this borked the colors. So they added “color correction circuits” to compensate. Generically, these circuits were performing a gamut conversion from the gamut in the broadcast spec they were supposed to be implementing to the gamut they were actually implementing by way of their choice of phosphors. Well, that was the goal at least. Judging from the grab bag of patents I found, it sounds like these circuits did a really shitty job at this in 1970, but were pretty close to right by 2000. Along the way, in 1987, the Americans said “screw it,” and changed their broadcast spec to match the phosphors commonly in use, obviating the need for these circuits in American TVs. But the Japanese kept their old broadcast spec until the advent of HD. I assume they got better and better color correction circuits in the meantime.

So far as I am aware, Grade isn’t doing anything to simulate a color correction circuit. Neither is anything else that I know of. Until a couple days ago, I wasn’t even aware it was something that would need to be accounted for. I’m still not 100% sure whether no one is accounting for this because it’s obscure, or because I’m fundamentally misunderstanding something.

The more well-known gamut conversion problem is that the gamut implemented by any CRT television set is different from the gamut implemented by modern computer monitors, so a gamut conversion is required. Grade accounts for this and does the required conversion better than any other shader I’m aware of. In fact, Grade’s repo is the only place I was able to find authoritative values for the phosphors used in Japanese CRT TVs in the 80s/90s. Like I said in an earlier comment, the only improvement I can suggest is using a precomputed LUT so that the gamut compression step can employ more sophisticated algorithms that won’t run in real time. (Aside from its speed, I don’t really like the current compression algorithm.)

(FYI, general overview of gamut conversions: A basic gamut conversion is just converting linear RGB to XYZ using one set of primaries, then back to linear RGB using the other set of primaries. But two wrinkles may arise complicating matters. The first wrinkle is that the two gamuts might not share the same white point. In this instance, you need to convert from XYZ to a color space defined by the response intensities of cone cells in the human eye and map each color based on its relative distance and direction to the white point, and then back to XYZ again. Fortunately, once the math is figured out, this can be reduced to a single matrix multiplication. The second wrinkle is that the source gamut will likely contain colors that are simply outside the destination gamut. In this case, you want to be slightly inaccurate so that you can squeeze those colors into the destination gamut without clipping. Because if you clip, then any details drawn with those colors will be lost. For a few weeks the animated details on the waterfall behind Aerith’s house in FF7 were the bane of my existence, because they’re mostly out-of-gamut colors when converting from NTSCJ (spec primaries, because I didn’t have phosphor values yet) to sRGB. Solutions for this second wrinkle are called “gamut compression.” It’s a very hard problem in several respects.)
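
To make the basic case concrete, here is a minimal sketch of the no-wrinkle path: linear RGB to XYZ through a matrix built from the source primaries (left as a parameter here), then XYZ to linear sRGB through the standard D65 matrix. With mismatched white points you sandwich an adaptation matrix in between, and the whole chain still collapses to a single matrix:

vec3 basic_gamut_convert(vec3 rgb_linear, mat3 SRC_TO_XYZ)
{
    // Standard XYZ -> linear sRGB (D65) matrix, rows written out (mat3 is column-major).
    mat3 XYZ_TO_SRGB = transpose(mat3(
         3.2406, -1.5372, -0.4986,
        -0.9689,  1.8758,  0.0415,
         0.0557, -0.2040,  1.0570));
    return XYZ_TO_SRGB * (SRC_TO_XYZ * rgb_linear);
}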

2 Likes

Yeah, if you do a matrix conversion from a wide gamut to sRGB, you get a color like e.g. 240, 50, 10 in the wide gamut and something like 225, 40, -5 in sRGB, and that negative value cannot be represented in sRGB. So the tone is different in the end. (These are just random numbers, not a real conversion.) Android, on the other hand, is wide gamut and covers all those colors directly. That compression you mention should already be happening inside sRGB: without any modifications/shaders you get a miniature model of NTSC inside sRGB, and that’s why the colors are pale and desaturated. At least if I understand correctly what’s going on.

That compression you mention should already be happening inside sRGB, without any modifications/shaders

I’m afraid that’s not the case. Unless you expressly implement some sort of gamut compression, what you get is clipping. To take your example, the -5 gets clipped to 0. And so does -1. And so does -10. The problem is what happens when the original image has, for example, a gradient drawn with 15 consecutive colors that work out to -10 to 4 after conversion. Two thirds of the gradient is lost to clipping and the output looks obviously wrong.

What you want to do is set aside some space just inside the boundary, then compress everything that originally fell in that space, plus everything in that “direction” beyond the boundary out to the source gamut’s boundary, down into that space. To continue with your example, those 15 consecutive colors might now map to 8 colors, 0 to 7. So the end result still looks like a gradient. We’ve lost some detail, and lost some colorimetric fidelity, but we no longer have an unacceptable loss on either axis.
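
A minimal per-channel sketch of that idea (illustration only; as discussed below, real gamut compression works per hue in a perceptual space, not per RGB channel): values up to a threshold pass through untouched, and everything from the threshold out to a chosen source-gamut limit is squeezed into the remaining headroom.

float compress_channel(float x, float threshold, float limit)
{
    if (x <= threshold) return x;                         // protected zone: left untouched
    float t = (x - threshold) / (limit - threshold);      // 0..1 across the compressed region
    return threshold + (1.0 - threshold) * min(t, 1.0);   // squeeze [threshold, limit] into [threshold, 1]
}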

Here’s an example with pictures. The first image is uncorrected, NTSCJ displayed as sRGB. The second image is a gamut conversion with clipping. The third image is gamut conversion with compression.

[images: uncorrected / clipped / compressed]

Like I said before, this is a hard problem.

The first major obstacle is figuring out which “direction” to compress in. All experts agree that hue should be kept constant, while reducing some combination of luminosity and saturation, but there’s no consensus on the details of that second part. A bunch of competing algorithms reduce luminosity and saturation in different ways. (For my money, VPR, implemented here, is the best method.) With one wild exception (reference #4 in my readme), keeping hue constant means working in an LCh-like polar space. So an additional problem with just clipping in RGB space, beyond the detail loss described above, is that hue isn’t kept constant.
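
Concretely, “working in an LCh-like polar space” means something like the sketch below: the two opponent axes become chroma and hue, hue is held fixed, and only chroma (and possibly lightness) gets reduced. The conversion into and out of the chosen space is omitted, and the clamp is deliberately naive:

vec3 compress_chroma(vec3 lab, float max_chroma)     // lab.x = lightness, lab.yz = opponent axes
{
    float C  = length(lab.yz);                       // chroma
    float h  = atan(lab.z, lab.y);                   // hue angle, held constant
    float Cc = min(C, max_chroma);                   // naive clamp; real methods roll off smoothly
    return vec3(lab.x, Cc * cos(h), Cc * sin(h));
}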

The second major obstacle is that it’s really hard to figure out what the maximum representable color is in a given direction in a LCh-like space, or indeed any space other than RGB. In fact, the first “good” solution to this problem was just published last year (reference #9 in my readme). And that’s something you need to know to define the endpoints of what’s going to get compressed.

3 Likes

@Chthon About the color correction circuit, what I could read from the forum post is that it was a correction applied at demodulation. I don’t think it’s related to CRT phosphors. If anything, it should be dealt with in an NTSC shader, I guess.

The brightness problem was with the red phosphors, but later on rare-earth reds were found and the brightness issue was no more. Then “tinted” reds were used (ionized Fe2O3) and a purer red could be achieved along with brightness.

I think that after HUE, it’s luminance which should be kept. Out-of-gamut colors are mostly highly saturated; if the saturation is tied to a high luminance then a mix of both is probably a better approach.

There are good Color Appearance Models for perceptually uniform spaces, such as ITP for SDR or JzAzBz for HDR. The problem is that for gamut compression you have to be aware that the path to white is not linear, or, as you said, HUE changes can arise along the path to white.

1 Like

@Dogway

About the color correction circuit, what I could read from the forum post is that it was a correction applied at demodulation. I don’t think it’s related to CRT phosphors.

Another post in that same thread called out that part for being wrong. ITU-R 470-6 specifically talks about phosphors not matching the spec as the rationale for color correcting circuits. And so do the patents. For instance, the 1999 Samsung patent says:

However, the received color signal may be distorted for various reasons. One of the main reasons for such distortion is caused by the color signal processing in a color TV receiver. In particular, a color reproducibility difference between an input color and a CRT output color is generated due to the difference between R, G, B phosphor characteristics of a CRT and those of a predetermined broadcasting standard.

It really does sound like they were aiming for a gamut conversion from the primaries in the standard to the phosphors actually used, to the extent they were able to do it.

If anything, it should be dealt with in an NTSC shader, I guess.

Anything that impacts the intensity of the simulated phosphors needs to be simulated before the “phosphor gamut to sRGB gamut” conversion.

Looking more closely at the “Background” section of the 1999 patent, it sounds like a state-of-the-art late 90s CRT might implement a single matrix multiplication for color correction. So, probably just a simple “linear RGB -> XYZ -> linear RGB” conversion with no effort to prevent clipping.

[edit: Or not. It looks like Samsung failed at prior art search. The Japanese had basically the same thing patented in 1987. The partitioning based on hue and saturation to apply different “correction factors” probably accomplished some form of gamut compression.]

1 Like

@Dogway Have you perchance come across any info regarding when pro-level Trinitron computer monitors switched from the original/standard 1987 Trinitron computer monitor “Apple RGB” phosphors to the SMPTE phosphors used by the FW900 and its siblings?

No sorry, I’m not much into pro monitors. I always kinda thought they were all EBU based?

At least according to A Review of RGB Color Spaces by Danny Pascale (written circa 2002-2003), SGI badged Trinitrons used the same “Apple RGB” P22 phosphors/approximate primaries as any other desktop Trinitron monitor.

Come to think of it, your “Sony Trinitron KV-20M20” gamut from Grade 2020 used identical primaries to Apple RGB, so if you still consider that to represent valid/accurate information, there may be reason to suspect that all Trinitrons from no later than 1987 on used either Apple RGB, EBU, or SMPTE phosphors, depending on the product tier and era.

I was reading a bit on the matter (I am not a pro on the subject, just fooling around), and I read that the CRTs we know used SMPTE-C with the NTSC luminance weights (0.30, 0.59, 0.11) instead of its actual luminances (0.21, 0.70, 0.09), and that sRGB almost fully covers SMPTE-C and is actually about 13% larger (the green and red primaries are a bit outside the sRGB triangle). So it depends on whether you have a very capable, quality monitor covering more of the sRGB space.

My laptop, which covers around 70% of sRGB (the specs say so), shows colors closer to the CRT than an old Dell monitor I have, which shows super-saturated colors in some places. The laptop screen looks like it needs a colder temperature setting to be closer to the CRT. In any case, on both monitors Grade’s “PAL” or “P22-80s-90s” and “rec 709” settings are very close (leaving the temperature setting as it is, 8500K). That setting also fixes the super-saturated colors on that old Dell monitor.

An old paper I found about color gamuts; notice what happens if you project a larger gamut onto a narrower gamut (saturation loss, etc.). This is old, like 1967, still using the NTSC primaries. Notice how important the green primary is, as it defines the flesh-tone saturation.

1 Like

Continuing from the previous post: if we assume a picture was made for SMPTE-C (the black triangle), it should have more saturated red, less green, and a bit more blue compared to the sRGB our PC monitors use.

A lame and loose translation, with the gamma fixed too:

void main()
{
    vec3 res = COMPAT_TEXTURE(Source, vTexCoord).rgb;
    res = pow(res, vec3(2.45));                    // decode with a CRT-like gamma
    res = pow(res, vec3(0.45));                    // re-encode for the PC display
    float l = dot(vec3(0.30, 0.59, 0.11), res);    // NTSC luma
    res.r = mix(l, res.r, 1.2);                    // push red away from luma (more saturated red)
    res.g = mix(l, res.g, 0.9);                    // pull green toward luma (less green)
    res.b = mix(l, res.b, 1.1);                    // slight blue boost
    FragColor = vec4(res, 1.0);
}

From: [image]

To: [image]

We can see the sky is brighter and the water is more blue-ish; probably that’s what was intended originally. Blue should probably get a multiplier boost too.

Difference shot, with blue multiplied by 1.1 here too. As I said, just a lame conversion from eyeballing the spaces.

2 Likes

@Dogway So I just released my latest guest shader pack a few days ago, and I tried out a new setting within Grade for the first time, called “crt-hue”. I set it to a value of negative 3 and I like how it changes up the image a bit; it also got rid of this greenish “tint” that seemed to plague my presets with that setting at its default.

My question is: does that setting help bring skin tones/colors closer to how they should naturally look? I love testing out different settings within Grade just to see what they do.