Hi, I’ve had an OLED C1 for over 2 years. I’m careful with it, sometimes not, but when playing with Cyber’s bezels and shaders I always leave the brightness at 50. Now I’m using it without bezels and in full screen. But what is this BFI?
BFI, or Black Frame Insertion, reduces the motion blur caused by the sample-and-hold, progressive-scan method of displaying an image on the screen, as opposed to the interlaced method. The blur occurs because of the persistence-of-vision phenomenon in our eyes and brain: when the next frame is displayed, our cones still “remember” the image of the previous frame, and that overlap causes ghosting/motion blur, or rather reduced motion clarity, especially when compared to CRTs.
BFI inserts a black frame between the image frames, reducing the time each frame spends “burning” into the rods and cones of our retinas, so that less of the image is “remembered” when the next frame is displayed.
So it’s basically wiping the slate clean between frames so your eyes spend less time remembering the old frames while the new frames are being displayed.
CRTs don’t have that problem because the entire screen is being blanked out except for the single line of image data that is being drawn at any given point in time.
If you don’t notice any issue with motion blur, I’d say don’t go looking for problems because ignorance can be bliss and much more economical.
I’d say wait until the tech is there, or until you have hardware that can truly take advantage of it, before “spoiling” your gaming experience, because you can’t “unsee” something once you’ve had a better experience with it.
BFI can be enabled on either the display side or via RetroArch itself.
It absolutely ruins brightness. So there’s that as well.
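A minimal sketch of that tradeoff, assuming simple BFI where one black frame is inserted after every image frame on a fixed-refresh panel: perceived brightness scales roughly with the fraction of time the image is actually on screen, and the hold time (and therefore the sample-and-hold blur) shrinks by the same factor.

```python
# Sketch: why BFI trades brightness for motion clarity.
# Assumes one image frame followed by N black frames per content frame.

def bfi_stats(refresh_hz, black_frames_per_image):
    """Return (duty cycle, hold time per image, total time per content frame) in ms."""
    frame_ms = 1000.0 / refresh_hz
    shown_ms = frame_ms                                   # image held one refresh
    total_ms = frame_ms * (1 + black_frames_per_image)    # refreshes per content frame
    duty = shown_ms / total_ms                            # fraction of time lit
    return duty, shown_ms, total_ms

# 120 Hz panel, one black frame after every image frame (60 fps content):
duty, shown, total = bfi_stats(120, 1)
print(duty)   # 0.5 -> roughly half the brightness
print(shown)  # ~8.3 ms of hold per image, versus ~16.7 ms without BFI
```

So at 120 Hz with alternating black frames you halve both the persistence blur and the light output, which is exactly why it hits OLED brightness so hard.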
So I tried this last night. With the 8-pixel-high slot mask, 6X and 8X worked well, while 5X and 7X showed moire patterns, with the scanlines and slot masks intersecting at irregular offsets.
6X looked really beautiful but was a little too large for 4K at 2688 pixels high. A 4K 16:10 3840 x 2400 or 5K 16:9 5120 x 2880 or maybe even a double height 16:18 2560 x 2880 display might be able to pull it off!
I went all the way down to 1X where it was really hard to tell what was going on with the scanlines vs the mask. The mask pattern seemed unchanged though so the TVL just got lower as I scaled the resolution down and vice versa.
I also went as high as 12X and 16X, and things seemed pretty evenly and regularly intersected. At those large scales, producing moire patterns might be a little more difficult because it’s easier for the mask patterns to fit within the scanlines. Not sure if that’s all you wanted to see or find out, but feel free to ask for more information or tests.
So this just reiterated something that I had learned when making my CyberLab Special Edition Presets which were the first to incorporate in house graphics.
I learned that it was easy to eliminate moire by using mask size (TVL) that was optimized for the effective resolution available to render the mask pattern.
So if one were to lower the available pixels to around 1080p by using an overlay or bezel, then they might get better results using a 1080p-optimized CRT shader preset than a 4K one when it comes to things like moire patterns.
It’s the same thing playing out here, it’s just that we’re running into it while using the maximum 4K resolution of our screens or rather 2160p vertical resolution.
So some masks are just too big/tall to render properly/evenly with Scanlines at those resolutions.
So the solution is to choose a shorter mask that can fit or be resolved evenly, or to forgo the scanlines altogether for those bigger, taller masks if you’re limited by vertical resolution.
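One way to reason about which integer scales line up cleanly, sketched under a simplifying assumption: treat the scanline period as the scale factor and the mask tile as 8 pixels tall, and check how many pixels pass before the two patterns realign (their least common multiple). Short repeats read as regular intersections; long repeats read as moire. This is just an illustrative model, not the shader’s actual math.

```python
# Sketch: combined scanline/mask repeat length for an 8-pixel-high slot mask.
# Assumption: scanline period == integer scale factor; moire correlates
# with how long the combined pattern takes to repeat vertically.
from math import lcm

MASK_HEIGHT = 8  # the 8-pixel-high slot mask from the test above

def pattern_repeat(scale, mask_height=MASK_HEIGHT):
    """Pixels before the scanline and mask patterns realign."""
    return lcm(scale, mask_height)

for scale in (5, 6, 7, 8, 12, 16):
    print(scale, pattern_repeat(scale))
# 5 -> 40 and 7 -> 56 (long repeats: irregular offsets, moire)
# 6 -> 24, 8 -> 8, 12 -> 24, 16 -> 16 (short repeats: even intersections)
```

The output lines up with the observations above: 6X, 8X, 12X, and 16X all realign quickly, while 5X and 7X take 40 and 56 pixels to repeat, which reads as irregular intersection offsets.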
Enjoy the beauty of 6X Integer Scale:
I need to have another play around with BFI in RetroArch. My TV has it built in, which does backlight strobing (I believe), so I haven’t had to use the RetroArch version in a while. I’d be interested to see what the change HunterK was referring to above has done. What I really want is BFI on my phone: it’s got a 120 Hz AMOLED display, and I’d love to see what that does to its motion blur (which is the best of all the screens I’ve seen outside of a CRT).
A future OLED generation that can do high brightness + BFI will be pretty fantastic. I’m happy with my S95B OLED for a good while, and while I’m quite happy with the state of emulation right now, the future is looking pretty glorious.
I tried adding in some really slight tex_coord noise and I like the result. On all of my CRTs, if I get super-close to them, there’s some subtle movement in the scanlines and it looks pretty much like this (just a couple of lines added to the base megatron shader file): https://pastebin.com/Xvd3CL7r
I’m not going to bother with a screenshot because it wouldn’t really show anything.
This is interesting. I used to add subtle noise to all Mega Bezel Presets for the same reason.
Is it possible for you to add this as a separate module/shader that can be appended and have it behave in exactly the same way?
I wasn’t clear where the noise code started so I would have to compare the existing shader code with the code you posted. Perhaps a comment might assist the uninitiated like myself.
Right now I’m really satisfied and happy with the refinement in my latest Megatron Slot Mask presets. Something like this might just add some further polish or rather some icing to the cake.
I forget, is there a reason why this shader doesn’t use the built-in HDR setting? I wanted to use Cyber’s presets but I need to edit every single preset individually
You can try Notepad++ or even WinMerge to append the new lines of code to the existing shader.
Also, did you replace the actual Sony Megatron Color Video Monitor base shader in the Shaders/Shaders_Slang/HDR folder?
Unfortunately, no. The noise has to happen where the scanlines are sampled, which is what makes it so subtle and mimics really tiny perturbations in the electron beam. Luckily, the way megatron is structured, we can apply this to the scanline sampling without touching the mask, which should be rock-solid.
The noise functions are lines 229-242:
float rand(vec2 n) {
    return fract(sin(dot(n, vec2(12.9898, 4.1414))) * 43758.5453);
}

float noise(vec2 p) {
    vec2 ip = floor(p);
    vec2 u = fract(p);
    u = u * u * (3.0 - 2.0 * u);
    float res = mix(
        mix(rand(ip), rand(ip + vec2(1.0, 0.0)), u.x),
        mix(rand(ip + vec2(0.0, 1.0)), rand(ip + vec2(1.0, 1.0)), u.x),
        u.y);
    return res * res;
}
And the sampling offset happens on lines 1627-1630:
float offset_x = noise(sin(gl_FragCoord.xy) * float(mod(global.FrameCount, 361.)));
float offset_y = noise(cos(gl_FragCoord.yx) * float(mod(global.FrameCount, 873.)));
tex_coord = tex_coord + vec2(offset_x, offset_y) * 0.0005;
EDIT: whoops, forgot to include the ‘rand()’ function. Added it just now.
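For anyone who wants to see what those functions actually produce without firing up the shader, here’s a rough Python port of the same value noise. The structure mirrors the GLSL above, though floating-point details will differ slightly from what runs on the GPU.

```python
# Rough Python port of the GLSL rand()/noise() pair above,
# just to illustrate the values the shader feeds into the offset.
import math

def rand(x, y):
    # GLSL: fract(sin(dot(n, vec2(12.9898, 4.1414))) * 43758.5453)
    v = math.sin(x * 12.9898 + y * 4.1414) * 43758.5453
    return v - math.floor(v)  # fract -> [0, 1)

def noise(x, y):
    # Smoothed 2D value noise, same structure as the GLSL version.
    ix, iy = math.floor(x), math.floor(y)
    ux, uy = x - ix, y - iy
    ux = ux * ux * (3.0 - 2.0 * ux)  # smoothstep fade curve
    uy = uy * uy * (3.0 - 2.0 * uy)
    def mix(a, b, t):
        return a * (1.0 - t) + b * t
    res = mix(mix(rand(ix, iy), rand(ix + 1, iy), ux),
              mix(rand(ix, iy + 1), rand(ix + 1, iy + 1), ux), uy)
    return res * res  # squaring biases the result toward small values

n = noise(0.3, 0.7)
print(0.0 <= n < 1.0)  # True; scaled by 0.0005 in the shader, the offset stays tiny
```

The squaring at the end plus the 0.0005 scale in the shader is what keeps the perturbation subtle, well under a pixel.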
I should clarify that everything is working correctly! Great presets, BTW
What I meant was I needed to edit each preset to match my display’s peak white and paper white: 800 and 350 nits respectively.
That’s par for the course with Sony Megatron Color Video Monitor. You can use Notepad++ Find in Files (and Replace) function to do this as a batch operation.
Also, feel free to share your Peak and Paper White Luminance settings so that others who have the same model display may benefit.
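If you’d rather script it than use Notepad++, here’s a sketch of the same batch operation in Python. The parameter names below are assumptions from memory, not confirmed against the preset files, so open one preset first and check what your Megatron presets actually call them.

```python
# Sketch: batch-edit peak / paper-white values across .slangp presets.
# WARNING: the parameter names here are assumed -- verify them in a real
# preset file before running, and work on a backup copy of the folder.
from pathlib import Path

NEW_VALUES = {
    "hcrt_max_nits": "800.000000",          # display peak white (assumed name)
    "hcrt_paper_white_nits": "350.000000",  # paper white (assumed name)
}

def patch_preset(text, values):
    """Rewrite any 'key = "value"' line whose key is in `values`."""
    out = []
    for line in text.splitlines():
        key = line.split("=")[0].strip().strip('"')
        if key in values:
            line = f'{key} = "{values[key]}"'
        out.append(line)
    return "\n".join(out) + "\n"

def patch_tree(root):
    for path in Path(root).rglob("*.slangp"):
        path.write_text(patch_preset(path.read_text(), NEW_VALUES))

# patch_tree("shaders/Megatron")  # hypothetical path; point it at your presets
```

This does essentially what Find in Files + Replace does, just in one pass over the whole preset tree.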
Thanks, I’m really loving what I’m seeing. Too bad I don’t have all the free time in the world to just sit down and play while also trying to make better presets. Lol
Unfortunately this and “game watching” has become the game for me.
I’ll have to try this out! Yeah, even I have to admit there’s a subtle softness you get from phosphors that you don’t get from pixels. I think it’s back to the whole continuous vs discrete thing we were talking about above. I guess 8K is the solution here, but then again, when using the NTSC signal filter the image is very soft, so maybe I’m wrong and it’s more of a pre-process thing. Let’s give this a whirl!
Yes, it tries to output HDR values (R10G10B10A2) directly from linear space (RGBA16) instead of essentially going to SDR (R8G8B8A8) and then out to an HDR backbuffer. But how much of a difference that actually makes, I’m not too sure. Maybe you can set the shader to SDR mode and change the format in the main Megatron shader file by commenting out the #pragma format line. You should then be able to use this the same as other shaders when turning on HDR in RetroArch.
Ooo, I see what you’re trying to mimic. I like that idea. Maybe we can add it as an option. The only downside might be the added processing: I want this shader to run on the Pi 5 when Lakka eventually supports Vulkan for it.
Ah, I see. You should be able to change the master settings for that. There are includes where you should be able to change HDR settings globally. Hmm, is it sony_megatron_hdr.slangp or something? Can’t quite remember off the top of my head.
Is this for use on an LG OLED TV using the HDR Game picture setting?
Yes, this is on my LG C9. BTW, mask gamma has a tremendous effect on how punchy the HDR is.
Yeah, it’s pretty expensive. Looks like somewhere between 15% and 20% additional cost on my system, and you can claw back around half of that while it’s disabled by using a branch.
I’m going to wait until I can test and profile on the Pi 5 before I add any more to the shader. Who knows when the Pi 5 gets Vulkan support, though.