Perfect output on consumer CRTs - CRTSwitchRes pointless?

Sorry for the headline, which might sound a little provocative, but after the tests I did over the last few days, I am not sure whether I have the right understanding or whether the basic concept behind CRTSwitchRes is wrong.


First of all: My goal is to get an analog RGB output that is scaled the same way as the original consoles when connected to a consumer TV - not to a PC monitor, a professional Sony PVM, or an HDMI flat screen. I first played around with a Raspi 2, but after I learned about the pixel clock limitations on the Raspi 1-3 and wanted to emulate nearly all SD consoles, I built a PC from spare parts and spent some money on a used Radeon R9 280X - the fastest AMD card with a DVI-I connector - and an Ultimate SCART VGA-to-SCART adapter.

The result after activating CRTSwitchRes in RetroArch is disappointing and, in my opinion, does not seem right when using a consumer CRT, even though there are already a lot of demo videos and photos on the web showing games on CRTs. Some examples:

  • The speedometer in Gran Turismo 2 (European = PAL version) is cut off at the right edge of the screen, and the text in the upper left area also seems too close to the borders of the visible CRT screen
  • In some Mega Drive / Genesis games, text and icons are either cut off at the left and right borders or too close to them. As consoles were meant to be plug & play and developers had to account for the overscan area of CRT TVs during development, I don't think this is the right output, especially as only few consumer TVs offered image shifting and scaling without using a hidden service menu (see also my bug/feature request at https://github.com/libretro/RetroArch/issues/10475).

Therefore, I currently think that switching the resolution to the console's internal resolution via CRTSwitchRes is wrong - while adjusting the framerate and perhaps switching between half and full vertical resolution might be perfectly right. In most documentation and discussions I found on the web, the NTSC or PAL video standards, or at least the vertical resolutions of 480/240 or 576/288 pixels, are mentioned as the basis for the output, and it is often said that not all lines or the full width are used.

A good example is the NES and the documentation at https://wiki.nesdev.com/w/index.php/Overscan:

"Multiplying the pixel rate by the scanline length gives 39,375,000 × 6/4/11 × 640/(135,000,000/11) = 280 pixels per scanline. The PPU puts signal in 256 of these and a border at the left and right sides." Even more interesting: "There are two ways to emulate the pixel aspect ratio of the NES: scale before padding and pad before scaling. The NES PPU hardware performs the padding first, adding 24 pixels of border to form a 280x240 pixel picture that can be resized to 320x240, 640x480, or 960x720 square pixels, or to 352x240 or 704x480 if your SDTV output circuit produces non-square pixels at 13.5 MHz (Rec. 601/DVD dot clock, 132/35 × colorburst) or 13.423 MHz (PlayStation dot clock, 15/4 × colorburst). […]"
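The formula looks garbled but actually checks out; here is the same arithmetic in Python, using only the constants from the quote:

```python
from fractions import Fraction

colorburst = Fraction(39_375_000, 11)        # NTSC colorburst ≈ 3.5795 MHz
pixel_rate = colorburst * Fraction(6, 4)     # NES PPU dot clock ≈ 5.3693 MHz

# Active picture time: 640 square pixels at the 135,000,000/11 Hz
# (≈ 12.27 MHz) square-pixel dot clock
active_time = Fraction(640) / Fraction(135_000_000, 11)

print(pixel_rate * active_time)  # exactly 280 pixels per scanline
```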

What CRTSwitchRes does is change the TV resolution to the native 256x240. From my understanding, this means that my consumer TV fills the whole screen with this resolution, not taking the padding on the left and right into account, so the resulting image is wider than it should be, even if you don't notice this at first sight. This might of course be fixed with the aspect ratio setting - but you get the point… The same goes e.g. for the N64, which always outputs 640x480 but, at least on my TV, fills the whole horizontal width, hiding pixels within the overscan area.

Another example is the PlayStation, as mentioned before: If I start Gran Turismo 2 (Arcade Disc), xrandr tells me that the resolution is switched 6 times before the racing starts: 640x478, 640x480, 320x256, 368x480, 320x240 and 320x256, the last one clearly having cut-off borders. Regarding the vertical resolution, I found this at http://www.psxdev.net/forum/viewtopic.php?t=3513#p18324:

If you specify less lines, simply more of the screen will be black. If you specify too many lines, some of your graphics will end up in overscan, and not be visible on a CRT. You can never make the GPU output more or less actual lines.

My TV seems to stay at a certain vertical resolution, so there are at least (correct?) black borders at the top and the bottom, e.g. at 480 and 256 pixels vertical resolution, the latter apparently scaled to 512 pixels - but once again, the horizontal stretching is certainly wrong.


So finally, as a summary: wouldn't the correct behaviour be to switch to the correct TV standard (640x480 or 768x576 when using square pixels (or superres)), use integer scaling to fill the screen based on the vertical resolution as well as possible (i.e. 224px becomes 448px, 448px would stay as-is), center the image, and add appropriate borders? Besides, this would solve any shifting problems we currently might have with CRTSwitchRes in certain modes.
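Just to illustrate what I mean, a rough sketch in Python (my illustration of the idea only, not how CRTSwitchRes actually works):

```python
def fit_with_borders(core_w, core_h, out_w=640, out_h=480):
    """Integer-scale a core's framebuffer into a fixed TV mode based on
    the vertical resolution, center it, and report the border thickness.
    (Sketch of the proposed behaviour, not actual RetroArch code.)"""
    scale = max(1, out_h // core_h)          # e.g. 480 // 224 -> 2
    scaled_w, scaled_h = core_w * scale, core_h * scale
    border_x = (out_w - scaled_w) // 2       # left/right padding
    border_y = (out_h - scaled_h) // 2       # top/bottom padding
    return scaled_w, scaled_h, border_x, border_y

print(fit_with_borders(256, 224))  # (512, 448, 64, 16)
print(fit_with_borders(320, 240))  # (640, 480, 0, 0)
```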

You might say this could be achieved by simply disabling CRTSwitchRes, but then the automatic framerate adjustment is also disabled, and thus the ~60.1 Hz refresh rate (instead of 59.94 Hz), e.g. on the NES, can only be achieved with additional startup scripts like those used on RetroPie before CRTSwitchRes was available. Automatic NTSC-PAL resolution switching as well as switching between 240p and 480i (or 288p and 576i) is also not possible either.
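For reference, the NES's odd refresh rate falls straight out of its PPU timing: 341 dots per scanline and 262 scanlines per frame at the ~5.37 MHz dot clock (I'm ignoring the single dot the PPU skips on alternate frames, which nudges the commonly quoted value slightly):

```python
# NES PPU dot clock = NTSC master clock (≈ 21.477 MHz) / 4
dot_clock = 21_477_272.727 / 4
dots_per_frame = 341 * 262          # dots per scanline x scanlines per frame

refresh = dot_clock / dots_per_frame
print(round(refresh, 2))            # ≈ 60.1 Hz, not the standard 59.94
```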

So, what is your opinion on this? Or was CRTSwitchRes never intended to be used for console emulation, but for arcade games instead and/or for PC monitors without the consumer TVs' big overscan area?

It should work fine with consoles, for the most part, as long as you are providing it with the full frame of content. That is, no overscan cropping, etc. In Genesis Plus GX, for example, this means you need to enable the borders in the core options.

Thank you very much for this hint.

Indeed, the option in Genesis Plus GX looks good. And I also found an appropriate option in Snes9x. FCEUmm also has this option, but continues delivering a horizontal width of only 256 instead of 280 (according to xrandr, tested with Super Mario Bros (US)). In PCSX-ReARMed, I haven't found a border option (yet).

So I still wonder why this non-standard TV resolution switching is necessary at all, and why it is now up to the cores to deliver an overscan option. To me, this currently looks more complicated than it needs to be.

I think Beetle-PSX has the borders from the start. You might have better luck with it.

RetroArch doesn’t know anything about what the cores should be outputting. The core has to tell it. Likewise, the cores are responsible for ensuring that they provide the right data within the frame.

As for the necessity of it, sure, you can just skip it and do a single modeline. It’s what I usually do, but this won’t handle, for example, switching to interlaced resolutions, etc.

As mentioned, I wonder why RetroArch has to know it at all. My assumption is that the original consoles don't use special modelines at all, for compatibility reasons with (older) TV sets. I also want to check whether I can use a capture card to record these non-standard resolutions. If this doesn't work, I would take it as another proof that using the internal resolution as the output resolution is wrong.

Maybe it could be a feature request (as long as it is not clear whether this resolution switching is crucial e.g. for arcade emulation or PC monitor output) to allow changing only the refresh rate and interlace/non-interlace settings (i.e. switching between 480i, 240p / 576i, 288p and ~60Hz/~50Hz) without switching to the internal resolution.

Furthermore, I encounter the bug that after exiting RetroArch, the resolution is set to 700×480 59.94Hz - it might be set in gfx/display_servers/dispserv_x11.c, starting at line 104 - but this is another story…

I mentioned this because it looks to me like CRTSwitchRes is going to be revised to some degree anyway…

This is not a correct assumption. S/NES, for example, run at a weird, not-standard-NTSC-timing speed, around 60.08.

CRT TVs don’t have any concept of modelines, they only have a timing range that they can function within to avoid being damaged. Consoles and arcade games produce signals anywhere within those safe ranges as they see fit (sometimes changing on the fly).

I know about this ~60.1Hz and other slightly different refresh rates, as written above, but I have doubts about those low-res modes, especially regarding the vertical lines, because they feel like major timing adjustments. Especially since the explanations linked in my first post all describe the standard resolutions as the base, where some lines are simply left blank (but still transmitted), while the columns are filled “dynamically” (like the mentioned 280px on the NES - as we also do with superres).

I guess I don’t understand the argument, then.

Either they all use the same resolution/modeline and SwitchRes is pointless, or they don't and it's not. If the former were true, RetroArch could just have a hardcoded 15 kHz modeline and be done with it, which would be great (and is what I've done for years, essentially), but it doesn't encompass resolution and/or refresh rate changes (either across cores or within a single core), which is the main purpose of the feature.

As written in the previous post, I think changing between the PAL and NTSC standard resolutions and refresh rates depending on the ROM is fine, as is switching between interlaced and progressive in case you prefer 240p over 480i (or 288p over 576i) to keep the scanline effect. And those minor refresh rate adjustments, like for the NES and apparently also the Genesis, are perfectly fine too. But switching to non-TV-standard resolutions might be wrong, at least for console emulation - see my Gran Turismo 2 problem especially. Of course, the problem might then be shifted to the core developers, who would have to implement proper border handling, but this might be unnecessary if it is automatically taken care of by integer scaling when using standard TV resolutions.

However, since arcades are some kind of “custom” builds in my opinion, the additional custom resolution switching might still make sense, but proof is also needed here that they don't also use the standard resolutions as a base.

Unfortunately, as you might guess, I don't have any of the old SD consoles here to check their original output for comparison.

The best way to see why this obviously wouldn’t lead to correct results is to set up some modelines by yourself and then use RA manually and see what affects what.

I would give a detailed explanation, but have some knowledge gaps myself, so I'd rather not raise more questions. For starters, it's not entirely clear what is to blame on the cores, CRTSwitchRes, or the PC hardware, for example.

I have created a PAL [email protected] modeline as well as [email protected] myself, turned off CRTSwitchRes with integer scaling enabled and started Gran Turismo 2 (EU), and the borders were not cut anymore. Tried the same with other cores. While my TV seemed to react to lower vertical resolution settings by adding borders, the image often was too wide and borders were cut with CRTSwitchRes enabled.
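For what it's worth, the refresh rate of such a modeline is just pixel clock divided by total pixels per frame. A quick check in Python - the VESA 640x480 numbers and the 944/625 PAL-style totals are common textbook examples, not my exact modelines:

```python
def modeline_rates(pixel_clock_mhz, h_total, v_total, interlaced=False):
    """Derive horizontal (kHz) and vertical (Hz) scan rates from
    modeline totals: refresh = pixel clock / (h_total * v_total)."""
    h_khz = pixel_clock_mhz * 1000.0 / h_total
    v_hz = h_khz * 1000.0 / v_total
    if interlaced:
        v_hz *= 2.0            # v_total counts a full frame = two fields
    return h_khz, v_hz

# Standard VESA 640x480: 25.175 MHz pixel clock, 800x525 total
print(modeline_rates(25.175, 800, 525))       # ≈ (31.47 kHz, 59.94 Hz)

# A 15 kHz PAL-style interlaced mode: 944 total pixels, 625 total lines
print(modeline_rates(14.75, 944, 625, True))  # ≈ (15.63 kHz, 50.0 Hz)
```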

And it's wrong with the NES (FCEUmm) anyway when using the internal resolution, because the real internal resolution is 280px and not the 256px currently set via the modeline (see the link in my first post). On the other hand, 256px is perfectly fine for windowed mode or devices without an overscan area, because that 24px border is definitely not needed there.

So I am really interested in why, in your opinion, my approach of sticking with standard TV resolutions obviously leads to incorrect results (apart from the minor frequency adjustments needed for the NES etc.). If you have any further links regarding this modeline concept (rather than the actual development), I would be happy to read them. I know that people have been using CRTSwitchRes or appropriate onstart and onstop scripts for years…

I might have found a reason for all of this - thus a new post.

I wondered why my Gran Turismo 2 showed borders on my TV where a YouTube video, apparently captured from an original device, did not. So I searched the web again in a different direction.

http://retroactive.be/forum/viewtopic.php?f=8&t=7&start=260#p469 says:

Another facet is the PAL ports themselves - I’ve mentioned before how most ports are lazy - they don’t take advantage of the extra vertical resolution that PAL offers.

  1. They add black bars on top and bottom (can be removed with a special PALFIX option) or
  2. They make the N64 stretch the rendered data to fit the new space - creating a lot of unrecoverable blur

In another thread at http://retroactive.be/forum/viewtopic.php?t=36#p932, related to the scaling of Rare games from 240p to the PAL resolution of 288p, it is said:

So the N64 was ahead of it’s time in this too, imitating modern console games’ trend of upscaling of sub-native framebuffer resolutions. TAKE THAT PLAYSTATION AND SATURN!!!

From my understanding, quite a lot of the older consoles did not scale sub-native framebuffer resolutions to fullscreen, while the PS1 and Saturn do.


So, to get rid of borders (introduced by integer scaling) or visible artifacts even on CRTs (when scaling to the full TV standard resolution is done), CRTSwitchRes is a very good solution, as it could be seen as a kind of “clean” analog scaling - as long as overscan/border settings are available.

For older consoles, either the cores must support borders, or options must be introduced to scale only to the standard PAL/NTSC resolutions (interlaced/progressive) instead of the internal resolutions, adjusting the refresh rate accordingly if needed.

I understand that the first solution could become the plug-and-play solution, as you would not have to research whether the emulated console does full-screen scaling. However, the second solution would also support cores that don't offer the overscan/border settings (yet).

"I have created a PAL [email protected] modeline as well as [email protected] myself, turned off CRTSwitchRes with integer scaling enabled and started Gran Turismo 2 (EU), and the borders were not cut anymore. Tried the same with other cores. While my TV seemed to react to lower vertical resolution settings by adding borders, the image often was too wide and borders were cut with CRTSwitchRes enabled."

It obviously can’t be more correct than what you had before to turn a low-res progressive signal into an interlaced one.

It will also not be flexible enough with respect to horizontal values, since you’re stuck with values like 512, 560, 640, once doubled with integer scale.

“From my understanding, quite a lot of the older consoles did not scale sub-native framebuffer resolutions to the fullscreen, while the PS1 and Saturn do.”

The link you posted just said the opposite.

Sorry, I still don't understand you fully. I am also fine with switching to half vertical resolution (240p or 288p); that's what I mean by switching between interlaced and non-interlaced resolutions. And indeed, I was wrong regarding the PS1 and Saturn - I obviously wrote the post too late in the evening…

Nevertheless, I still don't get the disadvantages you talk about in your posts. Why do I have to be flexible in resolutions for consoles that produce borders? Do you want to get rid of them even though the original consoles did not?

PS1 and Saturn cores do not output the correct resolutions. They have their vertical height hard set. You can see this when you go into the core settings. The consoles did not have locked vertical heights so the resolutions would have looked different. This is where the problem lies.

You can find a patched version of the Saturn core here. However, you will need to compile it. The patch allows for all the extra resolutions.

Currently, there is no fix for PSX.

Not all CRTs are the same. As they get older, caps fail and the picture shifts. Some people adjust their image to compensate for this. After the caps are changed, the picture is no longer perfectly calibrated. This can be very frustrating, but it does need to be considered by all users. In short, your CRT may not be factory calibrated.

@JK1974 you may find the super-res options more to your liking. The vertical resolutions fluctuated less than the horizontal resolutions, and the super-res works sort of the same insofar as it’s a one-size-fits-all horizontal resolution.

Your approach is basically just a variant of the dynamic/super-res approach. One advantage of higher resolutions is actually that you can get finer control of the picture there. For example, I have a modeline with 768 active pixels and 944 total. Subtracting or adding 8 pixels has proportionally less of an effect on the doubled values, i.e. 1536/1888, than on the originals. As far as I know, using 8-pixel steps and multiples is also a requirement for modern GPUs, something original hardware did not have in its signals.
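In numbers, using the 944/1888 totals as the example - an 8-pixel step is half the fraction of a doubled total, so each adjustment nudges the geometry half as much:

```python
# Fraction of the scanline that one 8-pixel adjustment represents
step = 8
for h_total in (944, 1888):
    print(f"{step} px out of {h_total} total = {step / h_total:.4%}")
```

This prints roughly 0.85% for the 944 total and 0.42% for the 1888 total.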

Why does the PS1 core (in my case PCSX-ReARMed; I still have to check Beetle) have the vertical height hard set? Before the racing in GT2 starts, xrandr switches the resolution 6 times: 640x478, 640x480, 320x256, 368x480, 320x240 and 320x256px.

And no, mine is a “normal” CRT TV, fully working without any problems (yet). Even if it might not be perfectly calibrated (i.e. centered), seeing only 2/3 of the speedometer is obviously wrong. I might post a photo later on, also comparing it to the [email protected] output with integer scaling.

I tested superres, and it works all the same, whether with native, 1920, 2560 or 3840 pixels (apart perhaps from minor rounding errors that I haven't noticed at first sight, possibly also because there is a little flashing every time I change this setting, so a direct 1:1 comparison is not possible). There is only one exception: with the Stella core (Atari 2600), there is a small difference in scaling between native and superres, and a bigger visual difference when bilinear filtering is enabled - maybe because the native resolution is really low (was it 160x240px?).

After finding https://forums.nesdev.com/viewtopic.php?t=15879 and http://pineight.com/mw/index.php?title=Dot_clock_rates, I learned that the pixel aspect ratio is neither fixed nor square, and even differs between consoles and resolutions.

Therefore, “emulating” those different pixel aspect ratios with CRTSwitchRes and superres makes sense from this perspective. My assumption was that we nearly always deal with square pixels - even though I wrote in my first post that the NES has 280 pixels per line and thus would not get an integer divisor when scaled to 640/768px.
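As a concrete example of those non-square pixels: at 240 active lines, a 4:3 picture is 320 square pixels wide, and working from the dot clocks in the linked pages gives the NES's well-known 8:7 pixel aspect ratio:

```python
from fractions import Fraction

colorburst = Fraction(39_375_000, 11)           # NTSC colorburst, Hz
nes_dot_clock = colorburst * Fraction(6, 4)     # ≈ 5.3693 MHz

# Square-pixel dot clock for a 320-wide picture is half the 640-wide one
square_320 = Fraction(135_000_000, 11) / 2      # ≈ 6.136 MHz

# PAR = how much wider one console pixel is than one square pixel
par = square_320 / nes_dot_clock
print(par)  # 8/7
```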

So, I hope to see more border implementations in the cores in the future, an option to add an overscan area on my own in case they don't exist (to fix the PSX problem), or the mentioned possibility to use at least standard resolutions with interlace/progressive and framerate switching for all cores that don't support borders.
