technically possible, yes.
Release v1.9.85 is up on GitHub and should be available via the online updater soon.
Changelog since 1.9.75:
New
- Implemented halation effect
- Implement a soft limiter for bloom colors; this allows pushing dark-to-mid-range colors while mitigating bright clipping (see the sketch after this list)
- New “crt-regale” preset
- Add Dots-sharp preset
- Antiburn: use #define ANTIBURN_COMPLETE to apply it to the whole screen.
- Add Atari Lynx presets
- Game Boy DMG now has a specific base preset to deal with the black frame around the dot-matrix grid.
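To illustrate the idea behind the bloom soft limiter above, here is a minimal hypothetical sketch (not koko-aio’s actual code): values below a knee pass through unchanged, while anything above is compressed asymptotically toward white, so dark-to-mid tones can be boosted without hard-clipping the highlights.

// Hypothetical soft limiter sketch; `knee` marks where compression starts.
vec3 bloom_soft_limit(vec3 c, float knee) {
    vec3 base = min(c, vec3(knee));              // dark/mid range passes through
    vec3 over = max(c - knee, vec3(0.0));        // energy above the knee
    float head = 1.0 - knee;                     // available headroom
    return base + head * over / (over + head);   // rational roll-off into [knee, 1)
}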
Removed
- Bloom: Remove eye adaptation strength delay
- Remove mask helper stuff and maxmask presets.
Change
- Dehive: use “powed” phosphor height because it is more accurate.
- Tune presets
- Bloom: Improvement: Modulate bloom over bright area at pixel level
- More convincing, and probably accurate, dot crawl.
- Reordered parameters list
- Improve the adaptive black feature, now called “black crush”.
- Allow X/Y shifting when using integer scaling
- Allow diorama offset up to ±2.0
- Phosphor persistence: allow setting different decay times for R, G, B (sketch below)
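As a rough sketch of what the per-channel decay above enables (hypothetical code, the actual shader is more involved), each channel of the previous frame can fade at its own rate, mimicking phosphors whose red, green and blue components decay at different speeds:

// Hypothetical per-channel persistence sketch.
vec3 phosphor_persistence(vec3 current, vec3 previous, vec3 decay) {
    return max(current, previous * decay);  // per channel: keep the brighter of the two
}
// e.g. decay = vec3(0.50, 0.55, 0.70) gives a blue-ish trail.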
FIX
- Increase bezel frame LoD when using TILT to avoid aliasing on straight bright lines.
- Vignette: Apply tilt perspective (pillowing is still buggy, do not use pillow shape with tilt)
- Fake transparencies (/waterfalls): always use blur regardless of the glow-to-blur bias setting
- Fix PSP “fast” preset
- Correctly handle rotated content while dealing with foreground and background images alignment
- Doublescan: treat content as low resolution when “Consider Hi-Resolution above # lines” is 0.0
- Move pixel persistence after real interlacing
- Fix wrong integer scaling logic
- Phosphor persistence: scale it by the input FPS (see the sketch after this list).
- CVBS bandwidth-limited chroma and artifacts: use correct mat3 multiplication order (D’oh!)
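About the FPS scaling fix above: a fixed per-frame decay fades faster at higher frame rates, so the per-frame factor has to be renormalized. A hedged sketch, assuming the decay was tuned at a 60 fps reference (not necessarily the exact formula used):

// After t seconds: decay_tuned^(60*t) == fps_scaled_decay(...)^(fps*t),
// so the trail lasts the same wall-clock time at any content frame rate.
float fps_scaled_decay(float decay_tuned, float fps) {
    return pow(decay_tuned, 60.0 / fps);
}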
Performance
- Slightly optimize CVBS chroma bleed
- Dot crawl: use just one colorspace
- Small speedup to warpedglow
This issue may not be koko-aio’s fault, but I’m getting glitches with phosphor persistence enabled on ParaLLEl N64 core:
I tested both glcore and vulkan (both glitch out). However, this issue does not happen with Mupen64Plus-Next.
Indeed, it seems to be a core-specific issue; not much to be done, unfortunately.
The latest version seems to crash on Windows 11 with the Final Burn Neo core and the glcore video driver. I’ve tried various presets like tv_pal_my_old and the same thing happens.
I don’t use Windows 11 and hopefully never will. Could you identify the offending commit or provide meaningful logs?
Hi kokoko3k,
any news?
I think there has been a misunderstanding.
1. Actual case: Core -> 4x -> Some shaders -> avglum pass -> black crush.
2. Proposed: Core -> 4x -> Some shaders -> stock -> avglum pass (0.1x) -> black crush (10x).
3. Better: Core -> 4x -> Some shaders(*) -> avglum pass (0.1x) -> black crush.
2 is worse than 1 because it not only has the same number of passes running at 4x, but also adds one more pass at 0.1x needed by avglum.
3 is better because it spares GPU memory by scaling the avglum pass down.
However, for 3 to work, “black crush” needs to know the alias of Some shaders(*), since sampling from its own source (the avglum pass) would produce a scaled-down result.
For that very same reason (I don’t know the alias), I CANNOT implement 3.
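For reference, here is a minimal preset sketch of case 3, with all pass names and aliases made up for the example: the avglum pass can be scaled down freely, but “black crush” would have to sample the upstream pass by its alias to get a full-resolution image, and that alias is chosen by whoever wrote the upstream shaders, so koko-aio cannot know it in advance.

// Hypothetical .slangp fragment; names are illustrative only.
shaders = 3
// "Some shaders(*)": alias picked by its author, unknown to koko-aio
shader0 = some_user_shader.slang
alias0 = SomeShadersOut
scale_type0 = source
scale0 = 1.0
// avglum math runs at a tenth of the resolution to spare GPU memory
shader1 = avglum.slang
scale_type1 = source
scale1 = 0.1
// black crush would need to sample SomeShadersOut, not its own 0.1x Source
shader2 = black_crush.slang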
I’ve opened a report on GitHub. I’d appreciate it if you could take a look when you have a moment. I’ll also try to test all the cores I have. So far, only the arcade cores are crashing, which is a bit strange.
I’ve had the chance to try it on my parents’ notebook with Win11 plus the latest slang shaders and glcore; all working flawlessly…
Maybe my log will show what the problem is.
I meant “doing the math part of avglum at a lower resolution” via an option that can be changed within “Shader Parameters”, which should not need a new stock pass nor an alias. Is that not possible?
What would that option be supposed to do?
“Doing the math part of avglum at a lower resolution” is possible even without an option, but as explained it isn’t going to help anyway.
If you mean lowering a pass resolution and/or selecting a source pass whose name/alias is unknown, then no; it is beyond the scope of runtime parameters.
Internally scale down the resolution by dividing it by that option’s value, to temporarily undo the core’s scaled render; then do the avglum math, and then multiply by the same value to scale back up to the core’s scaled render (I think that is needed in the RetroArch shader system with ‘scale_type = “source”’).
That is just not possible. The shader has no control over its input or output framebuffer resolution; that is managed by the preset, which is static. Only the core can change resolution.
You can fake a lower input/output resolution, e.g. by downsampling the input texture, or emulate a different internal resolution, but this is neither free nor desirable as a way to lower the GPU load; it would increase it (see the sketch below).
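To make the point concrete, here is a hypothetical sketch of “faking” a lower internal resolution in-shader: the sampling coordinate is snapped to a coarser grid, but every output pixel is still shaded at full resolution, so this adds GPU work rather than saving it.

// Hypothetical: emulate a source at (source_size / div) by quantizing
// texture coordinates; the pass itself still runs at full resolution.
vec3 sample_fake_lowres(sampler2D tex, vec2 uv, vec2 source_size, float div) {
    vec2 lowres = source_size / div;
    vec2 snapped = (floor(uv * lowres) + 0.5) / lowres;  // texel centers
    return texture(tex, snapped).rgb;
}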
The cause is still obscure, because it is very specific and likely related to bad Nvidia OpenGL drivers, but this is for the record in case someone faces the very same problem (RetroArch crash with glcore, MAME or FBNeo cores on Windows and Nvidia).
Edit:
The cause has been identified (some cores emit bogus zero sized frames) and the workaround has been applied.
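Roughly, the idea is to refuse zero dimensions before they can feed a division by zero (simplified sketch; the actual workaround in the code differs):

// Purely illustrative guess at a zero-sized-frame guard.
vec2 safe_source_size(vec2 reported_size) {
    return max(reported_size, vec2(1.0));  // never allow a zero dimension
}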
@Starman99x, could you please verify if the issue you had with persistence is present even if you load the shader after the core has started?
Unfortunately the issue is still present in both cases.
Hello everyone, I’m having a problem with the Duimon koko-aio Game Boy shader. It’s scaling the image incorrectly. How can I fix this?
Replace Duimon-koko-aio-main/res/base/Nintendo_Game_Boy/base.params with:
base.params
// DERIVED FROM DUIMON MEGA BEZEL GRAPHICS AND PRESETS | https://duimon.github.io/Gallery-Guides/ | [email protected]
// SOME RIGHTS RESERVED - RELEASED UNDER CC BY NC ND LICENSE https://creativecommons.org/licenses/by-nc-nd/4.0/deed
// ----------------------------------------------------------------------------------------------------------------
// BASE PARAMETERS
// Parameters shared across sets
// ----------------------------------------------------------------------------------------------------------------
// BASE
// ----------------------------------------------------------------------------------------------------------------
DO_SPOT = "0.000000"
MIN_LINES_INTERLACED = "1024.000000"
DO_BG_IMAGE = "1.000000"
BG_IMAGE_OFFX = "-0.003000"
BG_IMAGE_OFFY = "0.000500"
BG_IMAGE_ZOOM = "1.000"
DO_GLOBAL_SHZO = "1.000000"
GLOBAL_OFFY = "-0.086500"
GLOBAL_ZOOM = "0.58840"
ASPECT_X = "140.000000"
ASPECT_Y = "126.000000"
@Duimon: I just added “BG_IMAGE_*” params to that file.
I’ll take a look at what’s going on. I’ve been setting up a Retrobat install with my koko-aio presets and once in a while I run into something. I’ll have to do an update soon.
Edit: I don’t have an issue with a wider aspect. (Other than it mirroring.)
Maybe something to do with scaling settings in the OP’s shot?