Arbitrarily high resolution/PPI obsoletes CRT shaders

@MajorPainTheCactus I almost forgot

You don’t seem to understand the whole PPI thing

For VR applications, something like 64K resolution on a 6-inch screen (which translates to roughly 12,000+ PPI) is needed to not see any pixels whatsoever, no matter how close your eyes are to the screen

There are plenty of companies researching high PPI for this reason

When you get pixels this small you can also, as a side effect, upscale things perfectly using nearest-neighbour scaling without the need for specialized shaders (though this of course still won’t eliminate the need for composite/RF filters)

There’s even talk of pixel densities as high as 28,000 PPI

Hard to tell if these kinds of pixel densities will ever make their way to TVs or monitors, but we know they are coming for VR/AR applications. And of course the resolutions would skyrocket in the case of 60+ inch TVs. But when we get to these pixel densities we can upscale anything, like 240p, 360p, 480i, 540p, 720p, 1080i/p, 1440p, 4K, 5K, 8K etc., with zero quality loss

Basically, at these pixel densities you don’t need AI upscalers like DLSS or AMD’s equivalent FSR etc.

Presumably, at these pixel densities you wouldn’t even need anti-aliasing techniques anymore.


The CRT shaders provide scanlines and masks, why should there be no need for that anymore?


Arbitrarily high resolution and/or pixel densities won’t have any effect on the need for CRT shaders, only the fidelity they can produce and the techniques used to produce them (e.g., being able to draw masks directly instead of using subpixel tricks).

I’m also not following how nearest-neighbor scaling at high resolutions comes into eliminating the need for CRT shaders. However, it will eliminate the need for interpolation shaders like sharp-bilinear, since the visibility of a non-integer/rounding error of 1 pixel will approach zero as the res/PPI increases.
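A quick way to see that shrinking rounding error (a hypothetical sketch, not taken from any shader): under nearest-neighbour scaling each source row becomes either floor(scale) or ceil(scale) output rows tall, and the 1-row difference shrinks relative to the average row height as the scale factor grows.

```python
def row_heights(src_rows, dst_rows):
    # Output height of each source row under nearest-neighbour scaling:
    # source row i covers output rows [i*dst//src, (i+1)*dst//src).
    edges = [(i * dst_rows) // src_rows for i in range(src_rows + 1)]
    return [edges[i + 1] - edges[i] for i in range(src_rows)]

for dst in (1000, 100000):
    h = row_heights(240, dst)
    unevenness = (max(h) - min(h)) / (sum(h) / len(h))
    print(f"240p -> {dst}p: rows are {min(h)}-{max(h)} px tall, "
          f"worst-case unevenness {unevenness:.1%}")
```

At 240p → 1000p the rows alternate between 4 and 5 pixels tall (a ~24% size difference between neighbouring rows); at 240p → 100000p they are 416 or 417 pixels tall, a difference of about a quarter of a percent.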


If you want a low-TVL look you don’t want scanline gaps to be visible. Nearest-neighbour scaling is technically perfect, but at today’s conventional pixel densities it still yields noticeable jaggies because the pixels are still too big, so it ends up blocky (i.e. what might have been meant to be a small dot in the original 240p image gets interpreted as a square or rectangular block at today’s conventional PPI, but the situation starts to reverse at >10000 PPI)
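The dot-becomes-a-block effect is easy to demonstrate (a toy sketch with a made-up 3×3 "image", not from any emulator):

```python
def nn_upscale(img, factor):
    # Integer nearest-neighbour upscale: every source pixel becomes
    # a factor x factor block of identical output pixels.
    return [[img[y // factor][x // factor]
             for x in range(len(img[0]) * factor)]
            for y in range(len(img) * factor)]

dot = [[0, 0, 0],
       [0, 1, 0],
       [0, 0, 0]]  # a single lit pixel: "meant" to be a small round dot

big = nn_upscale(dot, 4)
# The dot is now a hard-edged 4x4 square. At today's PPI that square is
# visibly blocky; at >10000 PPI it would be physically tiny enough to
# read as a dot again.
print(len(big), len(big[0]), sum(map(sum, big)))  # -> 12 12 16
```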

@hunterk I’ll have to disagree with that one. When I play a 240p game integer-scaled to max resolution on my 600+ PPI phone and I stretch my hand as far as I can from my eyes, I can barely notice any jaggies and the image almost looks perfect. From regular playing distance I can still see some blockiness of course, which suggests 600 PPI isn’t enough without shaders.

But we’ll see in about 20 years I guess when we finally get >10000PPI displays

Of course, if you still want a CRT look, like a BVM look with thick scanlines or whatever, you’ll still be able to do that. But there will come a day when you’ll be able to just hook up a Super Nintendo to a modern 20000 PPI TV or monitor and get a pretty much perfect image that is neither blocky nor has any jaggies (within the limitations of the 240p resolution and the particular game, of course)


This would only be true if there were infinite information and our view were limited by resampling to a lower, finite amount of information.

With pixel art, we already have all of the information.

For example, here’s a pixel art ‘o’ rendered at 240p: [image]

Now here’s that same pixel art rendered at 168,000p:


That “o” is 7x7

That 700×700 image will look as small as the 7×7 image when displayed on, let’s say, a 320000×180000 30-inch display (which is about 12 to 14 thousand PPI IIRC)
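For what it’s worth, the arithmetic checks out (a quick sketch; the 16:9 "64K" panel size of 61440×34560 is my assumption):

```python
import math

def ppi(width_px, height_px, diagonal_in):
    # Pixels along the diagonal divided by the diagonal length in inches.
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(320000, 180000, 30)))  # ~12238: within "12 to 14 thousand"
print(round(ppi(61440, 34560, 6)))     # ~11749: the 64K-on-6-inch VR figure
```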

I suggest trying the experiment I did if you have a 600+ PPI phone. A good game to test this with is Taxman’s Sonic 1 and 2 remasters for phones, since they have attract screens. Just take your phone in one hand, stretch it as far as you can from your eyes and look at it, and you’ll understand what I mean.


I understand what you’re saying, I just don’t think it applies here. Nevertheless, I guess we’ll find out in 20 yrs or whatever :slight_smile:

We’re getting far afield from this topic, so I’m going to break these last few posts off into their own thread.


It applies to every image. I too used to think that CRTs or CRT shaders are the only way to properly display retro games or old TV shows that were taped in 480i or whatever, and I used to think that upscaling can never be perfect etc.

But after I did some reading over time I began to understand more about how images work and how upscaling works, and why integer/nearest-neighbour scaling yields blocky images when it’s supposed to be mathematically perfect etc., and then I learned about pixel fill factor and all that stuff.

So needless to say I’m convinced that NN scaling just needs a lot more PPI than we ever thought. Although of course it’s a brute-force method compared to shaders.

As said before, this of course won’t recreate the composite/RF-based dithering and the colors generated from those signals, as that is a separate thing.

Anyways, that’s all. See ya guys!


Pixel density does not matter when you display a 240p game if the physical size is the same. Mario’s sprite is going to look the same: it’s made out of “squares”. When you talk about a phone screen at arm’s-length distance you are not getting around the “look”, you are just in eyesight-limits territory.

At the same physical size, let’s say 3 cm tall, Mario’s sprite is going to look the same on a 720p display, on 4K, and on a 48K display.


I can say with the greatest certainty that existing CRT shaders would work very well on high-PPI displays and resolutions, starting with 8K. Some minor overhauls might be needed to increase some parameter ranges and enhance certain functionality like CRT mask size etc. Most CRT shaders are coded this way, able to handle a variety of input and output resolutions.

Even if subpixel technology is replaced by a single RGB-spectrum-pixel approach, that would only benefit most situations. Some masks would have to be replaced (i.e. magenta/green) and that’s it.

Unfortunately or fortunately, one can’t hope for a CRT look using only display characteristics and stock shaders :grin: (unless you plug in an actual CRT display).


@damn1 so you’ve lost me a bit here (I’ve read the entire thread, just to add) so bear with me.

You mention that with low TVL you don’t want scanline gaps to be visible. This isn’t true, as TVL has nothing to do with the scanline gaps: it’s a horizontal-resolution thing, whereas the gaps come from vertical resolution.

Basically you get rid of scanline gaps by increasing the vertical width of the scanlines with respect to the size of the screen. Typically smaller screens will have larger scanlines in comparison and thus no space for black gaps to appear, but big TVs can also just have big fat scanlines and likewise show no black.

TVL just relates to the number of phosphor triad strips across the screen.

As for the high pixel densities, I don’t think anything beyond 4K will really benefit us, for aperture grille at least, as an RGBX mask is basically as good as you’re going to need, and having 10 pixels per scanline in the vertical direction seems to be enough to simulate the scanline curvature.

Much more important for CRT simulation are brightness and update speed, i.e. simulating the scanning electron beam, which will give far better motion clarity. You need brightness because you want to turn the screen off for as long a time between frames as possible while fooling the eye that there is an image there.
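That brightness requirement can be put in numbers (a back-of-the-envelope sketch, assuming perceived brightness scales linearly with the fraction of the frame the display is lit):

```python
def required_peak_nits(target_nits, duty_cycle):
    # A strobed/rolling-scan display lit only for duty_cycle of each frame
    # must hit target/duty_cycle nits at peak to look as bright as a
    # sample-and-hold display showing target_nits continuously.
    if not 0.0 < duty_cycle <= 1.0:
        raise ValueError("duty_cycle must be in (0, 1]")
    return target_nits / duty_cycle

# e.g. lighting each area of the screen for 10% of the frame, CRT-style:
print(required_peak_nits(100, 0.10))  # -> 1000.0
```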

As for high PPI, as I say, the more pixels you have the more processing power you need to fill them, and for most situations, including VR, 4K is approaching diminishing returns; we’re starting to just fill in the pixels in between using AI, as in DLSS, or hand-coded upscalers like FSR, from motion vectors and the like.


CRT shaders are not about perfect upscaling. They’re about making the image look like it’s being shown on a CRT display.


Well, I didn’t expect my post to be split into a different thread. I think this was unnecessary.

And the title is misleading. I didn’t say “arbitrarily high resolution”. I said maybe 10000-12000 PPI. My figure was based on reading somewhere that 64K resolution is required on a 6-inch screen for VR before human eyes can no longer see the pixels when wearing VR goggles/headsets.

tl;dr What I was saying was that once we get screens with a certain pixel density (>10000 PPI or >15000 PPI), we will most likely no longer need shaders at all, since at that point the pixels become so small that the upscaled 240p image looks basically like 240p on a CRT without its drawbacks (scanline gaps or screen-door effect). And yes, I’m aware that for some those aren’t drawbacks and they may prefer those artifacts.

I hate to repeat myself, but this is why I gave you the phone experiment as an example. Pick any game you like and emulate it on your high-PPI (600+ PPI, but 300+ PPI also works) phone. Just use integer/nearest-neighbour scaling, pick a game with an attract screen that you know very well (the reason I recommended the Sonic 1/2 remasters is that they’re easily available), then hold your phone in one hand, stretch it as far away from your eyes as you can and watch the attract screen. You will notice that the typical NN jaggies are barely noticeable at all from that distance, but as you bring the phone to regular playing distance you’ll start to notice the jaggies and the blockiness. This indicates that 600 PPI on a typical 5-6 inch phone screen is nowhere near enough at regular distance, but it also demonstrates that if we increase the pixel density by about 20- or 30-fold, it will be enough.
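The phone experiment lines up with the usual ~1 arcminute figure for 20/20 acuity (a rough sketch; the viewing distances are my guesses):

```python
import math

def pixel_arcmin(ppi, distance_m):
    # Angular size of one pixel in arcminutes at a given viewing distance.
    pitch_m = 0.0254 / ppi  # pixel pitch: one inch divided by PPI, in metres
    return math.degrees(pitch_m / distance_m) * 60

print(round(pixel_arcmin(600, 0.7), 2))  # arm's length (~0.7 m): ~0.21 arcmin
print(round(pixel_arcmin(600, 0.3), 2))  # normal viewing (~0.3 m): ~0.49 arcmin
```

Both values are under 1 arcminute, yet jaggies are still visible at normal distance: aliased edges can be detected well below single-pixel acuity (vernier acuity is on the order of arcseconds), which is one reason the PPI needed to fully hide NN scaling is so much higher than the bare acuity figure suggests.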

@MajorPainTheCactus The higher the TVL (horizontal resolution) + the bigger the screen = the more obvious the scanline gap appears at lower resolutions.

That’s why a 900TVL BVM like the 20F1U has very noticeable scanline gaps on 240p and even 480i. That’s why a lot of PC CRT Monitors have scanline gaps even when displaying resolutions like 800x600.

For the high resolutions/densities, refer to the first paragraph I wrote in this post.

Anyways, this was more of a future prediction and I don’t think it necessitates its own thread, so feel free to close it or merge it back into the previous thread.

Thank you for your time.

As I mentioned in my previous post, the “effect” you are describing has nothing to do with the picture itself but with the limited “resolving power” of our eyes. After a certain threshold our eyes cannot resolve any more detail. That’s why I said that a sprite at the same physical size (3 cm tall) from 240p content is going to look exactly the same at any pixel density at a usable size, unless you are willing to play on a postage-stamp screen or a phone at 2 m…


Yeah, OK, thinking about it there probably is some correlation between TVL and the size of the gap, but that isn’t directly to do with TVL. It’s just that you want the scanning beam to be as focused as possible to take advantage of the higher resolution and be more responsive. I’d imagine that on the later 16:9 D-series BVMs the beams would become fatter again (vs. same-TVL 4:3 BVMs, if they exist), as they don’t need to be as focused in the horizontal direction: the phosphor triads are less dense horizontally, so the beam can become fatter.

As for the VR figure, it’s an interesting one - I actually wrote the fixed foveated rendering for the biggest-selling VR game of that year, several years ago now (I’m still employed by the developer and so can’t go into more detail).

The interesting thing is how small your fovea actually is. This shadertoy should show you how little you see in detail at any one time, the rest being filled in largely by your brain: (how large an area do you see moving?)

If we had proper eye tracking then we could have super-high-resolution displays, as we’d only have to fill in a fraction of them with detail; the rest could just be rendered at low resolution and blocked in. Until then we still have to calculate it and then effectively throw it away on super-high-resolution displays.

I’m still not sure what you’re after from displaying 240p content on a 12000 DPI screen, though, I’m afraid - a bilinear filter done by the high-resolution display, perhaps?


I would think that high-TVL CRTs also excel in other areas, beyond what your typical TV does, that affect the visibility of scanlines, e.g. they bloom less, are better calibrated etc. On my 14" TV, for example, scanlines are mostly very subtle, except in very dark areas. If I adjusted brightness/contrast to the point where they became noticeable regardless of color, I wouldn’t have increased the TVL, would I :grinning: On the contrary, my TV would now be unusable, because I could barely see anything :wink:

As for the PPI theory, my understanding is that at some point the differences between various resolutions on a screen would become unnoticeable (if it were true…). However, presumably that would still result in something of a unique look, I suppose. I mean, you don’t even need to consider CRTs; what about low-res handheld LCDs, for example? A DS 256x192 pixel screen isn’t “jaggie free”. You can already make out the grid on a DSi screen, and I would imagine on the XL it’s a good deal more noticeable. On these super-high-PPI screens, it would look “better”?


Is that really true, that they ‘bloom’ less? Bloom technically being light scattering off the source emitter and the various media the light passes through until it reaches the back of your eye.

From what I’ve seen, the higher-TVL TVs are brighter because they do tend to have larger gaps between the scanlines and need to make up for that, and brighter is generally better.

As such we’re talking about differences in glass refraction (not much) and phosphor emission focus vs. the sheer quantity of photons being emitted. I’d guess the BVMs have more bloom simply because they are brighter, no?

I’m not sure about the terminology here; looking around, I see people referring to “bloom” also as “glow”, plus as another CRT defect related to geometry.

What I was thinking about was actually how the scanlines “bleed” or “merge” with each other. What do you call that effect? I was thinking that focus played a role, because my main TV showed almost no scanline gaps before adjusting it, but on the other hand, the smaller TV I did it with first showed those gaps despite being rather blurry at first.


I’m not sure there is a term for it - it’s just when the beam excites the phosphor such that the excited parts touch the previous/next scanline’s excited parts. (I’m beginning to wish I’d never written that sentence.)


Haven’t been here in a while, but we’re basically agreeing. The higher the PPI / the smaller the pixels, the closer the picture will look to the intended target (a small dot will look like a small dot, due to the extremely small pixel size, rather than a square).

That said, the resolution required is a lot. For example for a 30 inch monitor you would need about 300K+ resolution. For a 60 inch TV double that. So it’s far off, maybe 20-30 years from now and that’s an optimistic view. I just hope the luddites won’t keep us down if we can someday achieve this kind of pixel densities in traditional displays, because this would benefit everything and not just retro 240p games but also 1080p material or 480i material or 720p material etc. This would be a truly resolution independent display.