Let’s say I have a 4:3 17" CRT TV, and a 4:3 640x480 17" LCD.
I also have a 240p image of a video game plumber; since we are in the 4:3 realm here, he is obviously 320 pixels wide, i.e. 320x240. For simplicity I will also assume that 17" is the viewable area (no bezels/edges).
If I display that image on the LCD without any scaling, it takes up exactly one quarter of the screen area (half the width and half the height). He measures 8.5" diagonally.
If I display that image on the CRT, provided I'm sending a 240p NTSC signal into it, that plumber will take up the entire 17" screen, meaning he will physically look bigger than he does on the LCD. He measures 17" diagonally.
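For anyone who wants to check those numbers, here's a quick sketch of the arithmetic (the region_size helper is just made up for illustration):

```python
import math

# A 4:3 diagonal splits on a 3-4-5 triangle, so 17" -> 13.6" x 10.2" viewable.
def region_size(diag_in, res_w, res_h, img_w, img_h):
    unit = diag_in / 5.0                        # 17 / 5 = 3.4" per aspect unit
    width_in, height_in = 4 * unit, 3 * unit    # 13.6" x 10.2"
    w = img_w * (width_in / res_w)              # image width in inches
    h = img_h * (height_in / res_h)             # image height in inches
    return w, h, math.hypot(w, h)

# LCD: 320x240 shown 1:1 inside the 640x480 grid.
print(region_size(17, 640, 480, 320, 240))     # (6.8, 5.1, 8.5) -> 8.5" diagonal

# CRT: the 240p raster is stretched across the whole tube, so the grid
# the image lands on is effectively 320x240 itself.
print(region_size(17, 320, 240, 320, 240))     # (13.6, 10.2, 17.0) -> 17" diagonal
```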
On top of the size difference, he will also have the scanline effect, where every second line is dark: a 240p signal makes the TV draw the same 240 lines every field, so the alternate lines that would carry the other field of a 480i picture are simply never lit (I am also not talking about TVL/beam thickness/mask added "effects").
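If it helps to visualize that, here's a minimal simulation of the effect on a frame buffer (assuming numpy, and treating the dark lines as fully black, which real phosphor gaps aren't exactly):

```python
import numpy as np

# Fake 240p frame: 240 lines of 320 RGB pixels.
frame_240p = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)

# Line-double to 480 lines, then black out the duplicated lines; those
# dark lines sit roughly where the undrawn field would be on the CRT.
crt_like = np.repeat(frame_240p, 2, axis=0)   # 240 -> 480 lines
crt_like[1::2] = 0                            # every second line dark

assert crt_like.shape[0] == 480               # 240 bright + 240 dark lines
```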
He will also (almost) not flicker, because every frame he is drawn on the same exact lines of that TV (a progressive image on an otherwise interlaced display).
So on the LCD side, I have an 8.5" diagonally sized plumber in full pixels, and on the TV side I have a 17" diagonally sized plumber with dark lines going "through" him - he is definitely bigger on the TV. On the LCD he looks "complete" (made of whole rectangular pixels that touch each other), while on the TV he looks like he has horizontal zebra stripes.
Basically, I see people asking about "lost information" and "half sizes" and whatnot, and I walk them through these steps - is what I've written all correct?
PS: off-topic question, but isn't applying scanline filters/shaders to a 240p image on an LCD "wrong"? On a TV it's 240 bright + 240 dark scanlines = 480 lines, so darkening every second line of a native 240-line image on an LCD leaves only 120 bright lines - effectively 120p?
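A quick way to sanity-check that reasoning (again just a numpy sketch; a real shader would dim lines rather than black them out):

```python
import numpy as np

src_rows = np.arange(240)                  # one entry per source scanline

# Scanlines applied at native 240 lines: only even source lines survive.
visible_native = src_rows[::2]
print(len(visible_native))                 # 120 -> effectively 120p

# Scanlines applied after 2x integer scaling to 480 lines: every source
# line still has one bright copy, like on the CRT.
doubled = np.repeat(src_rows, 2)           # 240 -> 480 lines
visible_doubled = np.unique(doubled[::2])
print(len(visible_doubled))                # 240 -> full 240p preserved
```

Which suggests the scanline pass should run after integer-scaling to at least 480 lines, not on the native 240-line image.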