PlainOldPants's Shader Presets

I noticed that too after uploading this. Here’s a better performing one. (Actually, never mind, maybe it’s not fixed.) https://www.mediafire.com/file/1pn4wlvbdvalahf/patchy-performance-fix-2024-09-09.zip/file

And yeah, it can be in an independent folder now because I removed the #include reference to shaders_slang/include/colorspace-tools.h

I am still working on fixing the performance. Apparently, the noise isn’t the only thing at fault, but it’s a big part.


Here’s another fix. I moved the noise into a separate pass this time. The key to making sure this performs well is to keep your screen resolution low. Increases in vertical resolution are mostly okay (for instance, interlaced mode on Genesis), but increases in horizontal resolution can cause bad slowdown. If you’re playing a console with a big horizontal resolution, like the N64 which can get as high as 640, you’ll want to go into the slangp file and change the resolution multiplier from 8.00000 to 4.00000 or something like that. https://www.mediafire.com/file/ejc2xdctliwbll2/patchy-second-performance-fix-2024-09-09.zip/file
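For anyone who hasn’t edited a .slangp before, the change is just the horizontal scale factor on the relevant pass. It looks something like this (the pass index `x2` here is hypothetical; look for the pass that has the 8.000000 horizontal multiplier in Patchy’s preset):

```
scale_type_x2 = source
scale_x2 = "4.000000"
```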

2 Likes

The GPGX preset is not playable for me now. It hovers around 51-53 fps.

This did not happen 2 updates ago. My GPU is going at full capacity.

In any case, I’m not asking for a cheap, low-end set of presets for low-performance computers like mine.

I’m not asking you to cut extra features from your presets if they’re what makes them as accurate as possible to either your CRT or the real consoles.

This behavior didn’t happen 2 updates ago, though. Back then, I could still use about 50% of my GPU.

I will keep checking how the noise accuracy works with the current update, as long as GPU usage doesn’t hit 100%.

1 Like

That’s three updates ago. I was getting crappy performance too on my laptop’s GTX 1650 (which my laptop can’t even fully utilize; turned out it’s just an advertising point) until I did one more change today. Since I just edited it onto my previous post instead of making a new post, I’m sorry if you missed it.

My latest one should perform even better than the update that originally added noise, because noise is an entirely separate pass now.

Another thing contributing to the performance drop was that I’d doubled the horizontal resolution factor from 8 to 16. I did that to make my newly added comb filters work better. I’ve reverted it back to 8 now.

2 Likes
#reference "nes_raw_palette/patchy-mesen-raw-palette.slangp"
pn_color_screen_offset_modulo = "2.000000"
pn_connection_type = "-1.000000"
pn_noise_rand_offset = "449.000000"
pn_noise_min_amp = "1.000000"
pn_noise_counter = "100.000000"
pn_noise_severity = "0.225000"
pn_knob_contrast = "0.700000"
pn_knob_brightness = "-0.100000"
pn_knob_saturation = "0.900000"

Core: Mesen; preset for NES games.

Same menu options as in:

2 Likes

Have you gotten better results with the minimum amplitude lower or higher? My intuition has been that we get closer to real hardware if we have min amp at exactly 0 and min rate at something lower, like about 20. But even if we do that, my implementation isn’t based in real physics at all, so I don’t know.

1 Like

I can only see the artifacts at their best on screen at night, so I’ll have to wait at least 5 more hours.

I’ve been trying to replicate those small, almost-transparent diagonal lines on screen, each with a black dot and a white dot at its top and bottom edges, without success.

On the other hand, I’ve had success using one of your noise values to produce some sort of diagonal red/green lines (rainbow?) around the screen. Even though they’re tiny, I can see them, and they’re more visible on screen at night. These moving lines are independent from the static rainbow colors that already affect the screen.

These 2 artifacts combined are what I feel RF noise produces. But there’s a ton of area to play with. This is the type of noise settings I was looking for, and I just wish they could be expanded. There’s lots of room to play with the different types of interference people can get through RF alone. It gives way more realism, like real TVs, whether CRT or flat screen.

Hey, just wanted to say that this is great stuff you’re working on here and although I’ve invested a lot of time and effort and have gotten a lot out of using CRT-Guest-Advanced-NTSC’s implementation, I can’t wait to try your shader, especially if it’s a more accurate approach.

I really loved Blargg’s implementation. Have you taken a close look at it?

Anyway, for a long time I’ve felt that shaders needed to not only emulate different CRT characteristics and input types, but also take into account the differences in different consoles’ video output circuitry.

I’m looking forward to you cleaning up the code as well, making things nice and efficient and meticulously put together. It doesn’t sound like it’s quite there yet where that is concerned.

Anyway, keep up the good work!

1 Like

Wow. I step out for one minute and look at all you’ve accomplished. This is pretty impressive.

I feel like I owe an explanation and apology for suddenly disappearing in the middle of the conversation. Shortly after we talked about it, I got a proof of concept working for “use the set of colors the demodulator chip could output for the source gamut (instead of the whole P22 phosphor gamut), and let out-of-bounds values ride until the gamut compression step.” A couple days later, my PC died. Then I had a two-week trip to visit family abroad, which delayed replacing the PC. Then it took some time to get most of my stuff copied from backup and my software environment mostly rebuilt. (Aside, Retroarch’s Linux documentation is out-of-date, which makes sorting out the build dependencies harder than it needs to be.) Anyway, I’m back up and running now.

Demodulator-Based-Source-Gamut LUTs

I guess I’ll start with the results of “use the set of colors the demodulator chip could output for the source gamut (instead of the whole P22 phosphor gamut), and let out-of-bounds values ride until the gamut compression step.” I think the results look really good. At least right now, I’m feeling like this is the best direction to be going for gamut conversion when there’s a “color correcting” demodulator involved. Here’s a link to a couple demo LUTs. Note that these LUTs start with TV-gamma-space R’G’B’ input and give you sRGB-gamma-space R’G’B’ output. If you want to do anything else in linear RGB, you’ll need to apply the sRGB linearization function. The advantages here are:

  1. It’s really performant.
  2. We’re not needlessly compressing anything we don’t have to be, which gives us better brightness and saturation overall.
  3. Out-of-bounds values end up smoothly compressed, rather than hard clipped.
  4. We avoid the hue distortion that we’d get with clipping.

The disadvantages are:

  1. Since everything is baked into the LUT, we can’t change settings without making a new LUT. And, if we want to do something in between the demodulate and gamut conversion steps, we have to go implement it in gamutthingy.
  2. I don’t really like outputting sRGB-gamma-space R’G’B’. Unfortunately, avoiding this is going to be a pain (see below). I wanted to output linear RGB, but due to the next issue, doing that leads to some banding in dark blues.
  3. We’re doing trilinear interpolation in one gamma space, using color values from another gamma space. This is totally, totally wrong. Unfortunately, there’s no simple way around it. To compensate, I’ve increased the LUT size to 128x128x128 so that, hopefully, we never get a big enough error to matter when quantizing to RGB8 in the end.
  4. It might be less accurate. Maybe hard clipping and hue distortion are more true to what CRTs actually did. (Interesting note: I tried some very generous clamping (compression, really) on the demodulator output before gamut conversion. The result was to turn most yellows really, really orange. I thought that looked terrible, and reverted it. Since you’ve got some real CRTs to test out, do you get really, really orange yellows?)
  5. It can’t presently be incorporated into Patchy because Patchy needs to do the demodulation itself for all the artifact stuff. I can think of 3 possible solutions, each of which would be a pain in the ass: (A) Modify gamutthingy to make linear-RGB-to-linear-RGB LUTs that cover a range of out-of-bounds values. (E.g., clip the input to -0.5 to 1.5, then scale that to 0 to 1, then use a LUT with the same range and scale internally.) Patchy would be responsible for making sure that the demodulator and gamma function match what was used for the LUT’s source gamut. (PITA.) (B) Patchy does the demodulation, then multiplies the inverse of the R’G’B’-to-R’G’B’ color correction matrix derived from that demodulator with the output, (then clips if necessary,) then feeds that into the LUT. (Easy, but inefficient.) ( C ) Patchy calculates the perturbations to the signal independently of the signal, and then adds them just prior to feeding into the LUT. (This would be exceptionally ugly.)
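For reference, the sRGB linearization function mentioned above is just the standard piecewise sRGB decode. A minimal sketch:

```cpp
#include <cmath>

// Standard sRGB decode: gamma-encoded value in [0, 1] -> linear light
double srgb_to_linear(double c) {
    return (c <= 0.04045) ? c / 12.92
                          : std::pow((c + 0.055) / 1.055, 2.4);
}
```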

Noise Generation

I think I have an idea for improving your noise shader. Martin Roberts’ quasirandom sequences are a really remarkably elegant way to generate good noise. Here’s a sample (C++) implementation in gamutthingy using it for dithering. Since dithering is just noise generation scaled small relative to signal, you could easily adapt this to generating noise at any level you want. To get noise changing over time, add a frame-dependent value to the x and y inputs. (Also, always add 1 to the x and y inputs to keep the first row and column fully random.)
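To make the suggestion concrete, here’s a minimal sketch of Roberts’ 2D (“R2”) quasirandom sequence (my own illustration, not the gamutthingy code):

```cpp
#include <cmath>

// Plastic constant: the real root of x^3 = x + 1
const double G  = 1.32471795724474602596;
const double A1 = 1.0 / G;
const double A2 = 1.0 / (G * G);

// Quasirandom value in [0, 1) for pixel (x, y).
// Add a frame-dependent offset to x and y to animate the noise;
// the +1 keeps the first row and column non-degenerate.
double r2_noise(int x, int y) {
    double v = A1 * (x + 1) + A2 * (y + 1);
    return v - std::floor(v);  // fract()
}
```

Scaled small relative to the signal, this is a dither; scaled up, it’s noise at whatever level you want.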

Gamma

I’m hesitant to say that Dogway got it wrong without hearing his explanation for those magic numbers. But, since he doesn’t seem to be interested in explaining, I’m going to have to go out on a limb and say I think he got it wrong.

I believe this is a correct implementation of BT1886 Appendix1. Calculating b is probably unworkable for a shader, but gamutthingy will screen-poop the constants it got by brute forcing b so you can just copy and hard-code them into a shader implementation. (Or maybe you could make it brute force a bunch of b’s for various black levels, then regress those to make a function for approximating b?.. (Dunno if that would work.)) A few things in my implementation that aren’t quite by the book:

  1. I’m calibrating b so an input of 0 yields the specified black level, rather than 0.0183 (a value that’s only meaningful in the context of HDR) yielding a physical measurement I can’t make because I don’t have the physical CRT.
  2. I’ve added support for input domain -infinity to infinity the same way that IEC 61966-2-4 does it.
  3. I’ve added a post-processing step to chop off the black lift and renormalize to 0-1. (BT1886’s output is in units of cd/m^2, which is only useful in the context of HDR.)
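For anyone following along, here’s a minimal sketch of my reading of the BT.1886 Annex 1 EOTF with the brute-forced b (the constants γ1 = 2.6, γ2 = 3.0, Vc = 0.35 are from the spec as I understand it; this isn’t the actual gamutthingy code):

```cpp
#include <cmath>

// BT.1886 Annex 1 ("alternative EOTF") constants, per my reading of the spec
const double G1 = 2.6, G2 = 3.0, VC = 0.35;

// Luminance in cd/m^2 for normalized signal V, given gain k and black lift b
double bt1886a1_eotf(double V, double k, double b) {
    double x = std::max(V + b, 0.0);
    if (x >= VC) return k * std::pow(x, G1);
    // Vc^(G1 - G2) factor keeps the two pieces continuous at x == VC
    return k * std::pow(VC, G1 - G2) * std::pow(x, G2);
}

// Brute-force b by bisection so that an input of 0 yields the black level Lb,
// with k pinned so that an input of 1 yields the white level Lw
void bt1886a1_solve(double Lw, double Lb, double& k, double& b) {
    double lo = 0.0, hi = 1.0;
    for (int i = 0; i < 60; ++i) {
        b = 0.5 * (lo + hi);
        k = Lw / std::pow(1.0 + b, G1);
        if (bt1886a1_eotf(0.0, k, b) < Lb) lo = b; else hi = b;
    }
}
```

Chopping off the black lift and renormalizing to 0-1 is then just `(L - Lb) / (Lw - Lb)`.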

(Aside: Some best guesses about black and white points: A Sony PVM-20L5 was measured to have a 0.01 cd/m^2 black point (the same value BT1886 Appendix1 cites as an example of “a lower black level”) and a 176 cd/m^2 white point. A Sony GDM-17SE1 was measured with a white point of 171 cd/m^2. (I should probably change gamutthingy’s defaults to match these.))

Going back to Grade, I can find a few hints that maybe shed light on what’s going on with this magic number business:

  1. There’s a comment on one of the constants used for that function that says, “CRT EOTF. To Display Referred Linear: Undo developer baked CRT gamma (from 2.40 at default 0.1 CRT black level, to 2.60 at 0.0 CRT black level).” So I think maybe Dogway is thinking that emulator core developers are baking gamma into their outputs? If that’s the case, then I think it’s misguided for a few reasons: (A) By and large, emulator core developers aren’t doing that. Byuu/Near did some of that back when bsnes only existed as a standalone program, but that was all stripped out of the retroarch cores long ago. I believe we can mostly trust that cores are giving us raw R’G’B’ like they should be. (Except NES, which is a special case.) (B) If an emulator core is futzing with gamma, then the correct solution is to fix the core so that it doesn’t. ( C ) If a core is futzing with gamma, and you can’t fix the core, then the best solution is to do the inverse of whatever the core did during a pre-processing step, on a per-core basis; not… whatever this is.
  2. Here is the conversation around when Dogway implemented the “black lift.” I don’t find it illuminating, but maybe you will.
  3. Recalculating b is no fun, and probably not feasible in a shader. So these magic numbers might be an attempt to approximate that. Though, even under that assumption, some of the operations still make no sense to me.

About BT709

I’ve added the BT.709 matrix for decoding. I don’t know whether this was ever commonly used or not, but I assume it wouldn’t have become common until about the 2000s, and I’m guessing it also would have coincided with YPbPr component.

BT709 was never used for standard definition television. It’s part of the HDTV spec. I could be wrong, but I don’t think HD over composite/s-video output was ever a thing.

About Modulators

I’ve looked at the NTSC chips that appeared in the SEGA Genesis and SNES. (SNES ones can be found here. Genesis ones can be found with a few google searches.) They all use about the correct YUV matrix, albeit with some slight variation, probably not intentional. I’ve chosen to skip this slight difference and use the standard YUV matrix instead.

I think this is a totally reasonable decision.

About Out-Of-Bounds Values and Clamping

My current implementation assumes that R-Y and B-Y never reach their clamping levels in practice as long as the video signal is sane; in other words, they’re never clamped at all, and no clamping happens until the final RGB output. This made the shader easier to implement. The main problem is that the NES’s video signal is not sane. Determining where R-Y and B-Y get clamped on the NES will be a hassle, since different chips have different clamping levels. I don’t even know how the jungle chip interprets the NES’s square-wave colorburst. Patchy NTSC doesn’t clamp anything until the final RGB.

Yes, clamping does seem to be the insoluble headache of CRT emulation, doesn’t it?

After thinking about it until the insoluble headache was located between my ears…

One big thing, as you noted, is that clamping is not consistent across jungle chips. Nor is it always documented well, or at all. Nor is it always clear whether the datasheet author means “the jungle chip clamps this,” or “the board designer is expected to clamp this before/after the jungle chip,” or “because the input was clamped elsewhere, the output range is guaranteed to be this.”

Another, more fundamental thing, is that we’re probably misconceiving the relationship between a CRT’s output range and the gamut defined by its phosphors. We tend to treat these as coextensive, but that’s rarely (never?) correct. However you happen to have a CRT calibrated, there’s always(?) going to be at least one knob you can turn so that it starts displaying new colors that were out-of-bounds a minute ago (or not displaying some colors that were in-bounds a minute ago, or both.) So we have some “out-of-bounds, but displayable” colors whose out-of-bounds-ness is really an artifact of clinging to the fiction that the output range and phosphor gamut are coextensive. And we also have some colors that are truly out-of-bounds and get clamped. And distinguishing between them looks like a hard problem; and cleaning up this misconception looks like a prerequisite to even attempting that.

Which brings me back around to the LUTs at the top of the post. “Let out-of-bounds values ride until gamut compression” is a pretty good way to sidestep that problem. Having a max value in a particular direction that’s further out than a real CRT would have permitted is not going to make much difference in the compressed result. At worst, you might have some colors that should have been compressed into the high 90s instead compressed into the mid/low 90s to make room for the bigger extreme. (“High 90s” or “mid/low 90s” in terms of distance from the center of gravity to the destination gamut edge in a particular direction; not RGB values.) The other thing you might lose is if clamping causes noticeable hue distortion and you want the distorted hue.

About Datasheets

One more problem is that I’m assuming these data sheets aren’t hiding any of the chip’s features. They give the impression that things only get clamped when they’re input into a pin on the chip, but are there things happening inside the chip that aren’t detailed in the sheet?

Indeed. The datasheets sometimes appear to be more aspirational than descriptive, especially for the modulators.

2 Likes

I’m busy with things today, but I have a few things I can say here right now. I’ll post more later.

I have two working CRTs right now: In my dorm room, a 13-inch 1989 RCA Colortrak Remote with only RF, and at my home about 1.5-2 hours away, a 36-inch 2000 Panasonic CT-36D30B with RF, three composite inputs, one S-Video, and one YPbPr Component, all 240p/480i. I have a Dazzle DVC100 video capture that I can easily take from place to place.

My consoles at my dorm are an NES (front-loader, about 5 degrees skew per palette row) and a model 1 Genesis with rainbow banding, whereas at home I have a PlayStation 1 SCPH-900something with TonyHax and a roughly 2008-09 Wii. All of them either are modded or have everdrives.

I also have a 20-inch 1995 Toshiba CE20D10 with RF and composite, but it’s so broken and shitty with worn-out capacitors that I’ve been too afraid to power it on for several months. I did peek through the vents and see a TA8867AN in it, and my emulation of it looks convincing to me. I’ll have to go off of my memory for this CRT.

I’ve heard that RCA ColorTrak TVs had Toshiba jungle chips, but this one’s color is not like the TA8867AN. I’ll have to look closer, but this 1989 one reminds me of Sony’s JP axis.

Neither of these pre-2000 CRTs seems to be clamping off bright red at all. These CRTs can also get very bright before topping out. It’s as if there are no universal “1.0” or “255” RGB values in analog TV, and the video signal’s standard black and white IRE levels are just there to keep you from having to write down different CRT settings for each broadcast channel and video game console. My 1995 CRT made bright red and blue bleed over to the right if they ever got too high for the CRT to handle, and I would turn down my contrast and saturation to fix that. As for the 1989 one, I have to turn up my contrast and saturation almost to their maximums to start seeing this bleed, but it’s 100% there. This is why I think we shouldn’t clamp bright values at all, but we should still pull negative numbers up to 0.

Yellow definitely does not become orange on my CRTs. The only place so far where I’ve seen yellow turn orange is in Sony’s US axis chips. (Edit: I forgot to mention, these demodulators need to have their hue rotation and saturation (tint and color) settings sync’ed up. I have some quick code to match to color bars, but it’s not exactly right because it’s not in a perceptually uniform space. The shader has a built in test pattern to sync it by eye. One more thing to say, I have started using white C without chromatic adaptation, and I like the result much better.)

Last time I looked, I thought my 2000 CRT had Rec. 709 color over composite. If not, then it at least had a much less intense color correction compared to these older CRTs. It did seem to have different color over RF compared to composite, but I don’t know how to describe it.

Dogway has been inactive for some time now. We might have to contact them on another site. Still, about the BT1886, they didn’t define what happens if a value goes over Lw (or did they? I’ll have to look again), so I am just using the standard BT1886 function against the user’s brightness and contrast settings, while just tolerating out-of-bounds results. Not ideal at all.

About composite signal artifacts

My 1989 CRT has no comb filter, only the typical frequency filtering setup, with similar rippling artifacts to what’s in patchy-ntsc, whereas my 2000 CRT has a (probably adaptive) comb filter. Patchy currently implements lowpass and bandpass filters. I want to look into the adaptive comb filter that’s in cgwg-famicom-geom and consider incorporating it into Patchy.

I looked at a schematic of the VA3 model 1 Genesis and saw the part where the composite signal is being filtered. The thing is, I don’t know how electricity works. What I can gather is this. The circuit is relatively simple to use fewer components, but it’s trying its best to do a notch filter on Y and a bandpass filter on C. The CXA1145’s data sheet asks for a delay line on Y, but that’s missing on the Model 1 VA3. The resulting signal has a rainbow artifact pattern, even on solid greys and solid colors, not just on dithered patterns, and the rainbow artifacts can be reduced if not totally removed by replacing capacitors. Do you have any thoughts on these screenshots?

Speaking of real hardware, the filters I have are simple FIRs that go equally to the left and right. They should have a smear to the right instead, like an IIR filter, to match hardware better.

I haven’t looked at an SNES schematic, nor do I have a real SNES, but judging by a couple YouTube videos of video captures, I believe the SNES has no filters on its composite. I did see in a video from RetroRGB that the 2-chip SNES has blurry RGB, so I implemented a simple bilinear blur for that.

I can imagine many CRTs used inductors and capacitors for their filters. I might look up some schematics and see if there’s any info on this. One mystery remaining is the sharpness setting in the jungle chip, which seems to take an already-filtered luma signal (or even the Y component of YPbPr in my 2000 CRT) and somehow sharpen it to restore detail that had been blurred out.

I’ll look at your noise, but I’m not too sure about it. A lot of noise in RetroArch so far is just random numbers. I think noise normally follows sine wave patterns in RF. The part I don’t know is what exactly those sine waves are and what distribution they follow. Is the noise really only from outside interference, or is there also noise coming from inside the console itself? This is just what I’ve heard while browsing the web, but the NES’s RF signal is known to consistently have jailbars, to the point that many people at the time complained about the RF-only toploader model’s poor video quality, so much so that Nintendo started installing modchips for composite video, and even made some rare ones with composite already in the console.

2 Likes

Quick update for better noise, better color accuracy, and slightly better Genesis rainbows. https://www.mediafire.com/file/5n2yt04y9hytdtp/patchy-better-noise-2024-09-15.zip/file

Just this night, I hooked up my Genesis to my Dazzle DVC100 to match up the rainbow artifacts better. Then I found out the rainbow looks like this:

I’m serious. There’s noise in the composite signal, but only whenever the whole screen is solid blue. Even worse, the jailbar artifacts aren’t an exact sine wave pattern, but they’re a more complex pattern that I don’t understand. All I could do was intensify the rainbow that I already had implemented, since it looks decent already. A possibly better idea I just came up with is to use this image as a texture in the shader, and randomly pick rows from it to make the jailbar pattern with noise, but that would be very sad if I did that.

I’ve also changed the noise generation to make random numbers in real time from a seed instead of having a hardcoded array of 1000 random numbers, so the seeds are now much more different from each other. For best results, you should always have the minimum noise rate at about 20 or lower.

Still, this noise generation is very basic and rushed. It has to run through the same loop a few hundred times to work, and how accurately it replicates real RF noise just depends on which seed you pick. I saw someone ask Guest to add this kind of noise, but Guest (and most other people here) has much higher standards for performance, efficiency, and clean code in his shaders, so I would expect Guest to figure out a way to use fewer sine waves for a better result.
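As an aside, a common cheap way to get repeatable per-pixel random numbers from a seed, without any loop at all, is an integer hash over the seed and coordinates. This is purely illustrative (not what Patchy does, which sums sine waves):

```cpp
#include <cstdint>

// Integer hash turning (seed, x, y) into a repeatable value in [0, 1).
// The multipliers are arbitrary large odd constants; any decent mixing
// function works. One hash per pixel, no loop.
double hash_noise(uint32_t seed, uint32_t x, uint32_t y) {
    uint32_t h = seed ^ (x * 374761393u) ^ (y * 668265263u);
    h = (h ^ (h >> 13)) * 1274126177u;
    h ^= h >> 16;
    return (h & 0xFFFFFFu) / 16777216.0;
}
```

White-noise hashes like this won’t reproduce the sine-wave structure of real RF interference, but they’re the usual starting point when performance matters.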

About my note about better color accuracy, what I did this time was manually pick hue rotations and saturations for each jungle chip by eye, unlike before where I averaged the offsets and saturations of the color bars.

3 Likes

I’m the one who asked about noise.

And please don’t stop improving the noise features. The only ones that exist, in presets like CRT Guest’s, are quite limited in how the noise can be tweaked. This is a core feature for the RF experience.

And thank you so much for the options already available. I hope they can be added to other presets/shaders.

1 Like

Update 2024-09-25.

I’ve been focusing mostly on my coursework lately, so I haven’t had much time to think about the things Chthon posted two weeks ago. Still, in my spare time, I’ve made a few useful updates, if anyone’s interested in it.

  • Made the signal decoding much sharper, and added a setting to easily adjust sharpness. High sharpness results in ripple artifacts, while low sharpness is just blurry. To get the previous blurry look, change the “Easy Sharpness” setting to 2.15.
  • Updated the R-Y/B-Y lowpass to have a distinct smear to the right, making the signal look more realistic. This is optional. To adjust how much smear, use the setting called “B-Y R-Y (2) Leaky Integrator Rate”. Currently, you have to manually set a different smear amount for every different console and every video resolution of that console, but I’ll fix that soon enough.
  • Made the NES more saturated by default.
  • Matched the demodulators by eye again, but did a better job this time.
  • Brought back the “Auto Contrast” setting that automatically sets your contrast to the brightest amount that won’t clip.
  • Slightly updated the RF noise. It uses much higher frequencies now. Although there are fewer settings, you’re getting about the same result as before.

This zip includes a couple basic CRT-advanced presets in the “Patchy_Presets” folder, which only work if you copy my updated files into your shaders_slang folder. Those presets also are set up to rearrange the shader settings so that all the easy beginner settings are on the top of the list.

The NES presets require you to use Mesen and set the color palette to “Raw”.

The Genesis presets are set up for BlastEm, but if you’re using a different core like Genesis Plus GX or PicoDrive, you can scroll down in my shader’s settings and change “Genesis Plus GX Color Fix” to 1.0.

Edit: For the SNES presets, the random seed I put for the noise wasn’t great. Try changing it in your settings.

Alternatively, changing the signal from RF to Composite (which just disables noise) gives a huge performance boost.

https://www.mediafire.com/file/9milu5hnp0ubnie/patchy-userfriendly-2024-09-25.zip/file

UPDATE: Fixed both the R-Y/B-Y lowpass and the noise behaving differently depending on the console’s resolution. That’s why the SNES noise looked so bad. https://www.mediafire.com/file/zd71nc23mb2xecc/patchy-userfriendly-fix1-2024-09-25.zip/file

3 Likes

I’m sorry to keep repeatedly bumping this thread today, but I’m just very happy with how this is turning out. This update adds a “Toshiba 1995” preset for the Genesis, NES, and SNES. Just by tweaking the settings of my previous upload, I was able to make the artifacts, noise, and especially this Toshiba TA8867AN NES palette look very convincing to me.

Reminder that all of this is made for 1440p. For bigger displays, I suggest changing the CRT-Advanced mask type from 3 to 1 and increasing the mask size from 1 to 2. As always, for the NES presets, use the Mesen emulator and change your color palette to Raw.

https://www.mediafire.com/file/4r8aw2ggptn6dx1/patchy-userfriendly-fix2-2024-09-25.zip/file

Quick update to decrease saturation to 0.8. I thought I’d sync’d this up already, but apparently I did not. https://www.mediafire.com/file/cmdscvpev3aeyrl/patchy-userfriendly-fix3-2024-09-25.zip/file

There are some rippling artifacts because I had my sharpness set high, at 2.75. Also, I really hate this JPEG compression that keeps ruining my images.

2 Likes

Shader’s looking good!

Maybe this might help.

1 Like

Just fixed a glitch in my noise implementation, and suddenly the noise looks very different from before. I added one more experimental noise setting too.

https://www.mediafire.com/file/5vf6djqhjyb9057/patchy-noisefix-2024-09-27.zip/file

2 Likes

Where do I put the folder? In shaders_slang?

I was hoping the file names would help give it away, but just paste all the contents of my shaders_slang folder into your shaders_slang folder. It will overwrite my previous version with the latest version.

This snapshot version adjusts the Sony color settings a tiny bit and makes a few other tiny fixes. All my changes have been very rushed because I have other things going on in my life. If the shader lags really badly for you, switch the connection from RF to Composite; if not, try increasing the RF noise severity and changing the noise random seed.

https://www.mediafire.com/file/zvq68x8ncrkk7re/patchy-snapshot-2024-09-29.zip/file

1 Like

@ynnad4 Please let me know what you think of this new noise behavior. It looks very different now.

Pants, will you do one for the SMS, PSX or N64?