Adding raytraced rendering to 3D emulators

Hi everybody.

This is my first post, however I’ve been using emulators for almost 30 years now.

I was just thinking about how much computing power is needed for raytracing and thought: why not try it with some simpler graphics, like those on older 3D systems such as the PlayStation 1 or N64?

Do you think a raytracing plugin would be doable for the PSX, Saturn, N64, etc.?

It seems to me like the kind of enhancement/feature libretro likes to include :slight_smile:

Cheers!

It should be possible, but it would require writing a renderer for it from scratch, I think. I’m not very familiar with how raytracing libraries/SDKs work, though.

What benefits would it bring to the emulation?

To emulation itself, none. It won’t be more faithful to the real thing.

It would just be an interesting enhancement, like filters or shaders, higher resolutions, etc.

To be honest, I would prefer the ability to mod meshes and textures by redirecting asset lookups to custom replacements. Raytracing is so fundamentally different from what early 3D games do that I find it really hard to see how it could be implemented.
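For reference, the usual trick those re-texturing projects use is roughly this (a hypothetical sketch; the hashing scheme, paths, and names are all made up):

```python
# hypothetical sketch of texture replacement: hash the texture data the
# game uploads and, if a custom file with that hash exists, load that
# instead of the original. real emulators differ in the details.
import hashlib
import os

def resolve_texture(raw_bytes, pack_dir="texture_pack"):
    digest = hashlib.sha1(raw_bytes).hexdigest()
    candidate = os.path.join(pack_dir, digest + ".png")
    if os.path.exists(candidate):
        return candidate    # artist-provided replacement wins
    return raw_bytes        # otherwise fall back to the game's own data
```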

  • n64: i think so
  • psx: probably not
  • saturn: no

the main issue is that the gpus in the saturn and psx are entirely 2d: they support affine transformations only, so they have no concept of depth. hacks exist for the psx (pgxp) to restore the missing info needed for perspective-correct rendering; it works mainly by monitoring vertex data as it passes through a central point in the psx (the gte) before it reaches the gpu (this all assumes common usage patterns by software, so it won’t always work). but even with this, i’m not sure the psx graphics pipeline knows the lighting and material params necessary to get any real benefit from a raytraced rendering model.

the saturn has no equivalent to the gte, so trying to recover the lost info for the data its gpu receives would be far more difficult. i haven’t given it much thought, but ottomh i think it’s pretty impractical. don’t expect to see perspective-correct rendering in a saturn emu anytime soon, if ever, so raytracing is a non-starter.
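to make the depth issue concrete, here’s a rough python sketch (made-up numbers, my own illustration, not pgxp’s actual code) of affine vs perspective-correct interpolation, which is exactly what you can’t do without the per-vertex depth pgxp recovers:

```python
# rough sketch: why the missing depth (w) matters for texture mapping.
# the psx gpu interpolates attributes straight across the screen (affine);
# perspective-correct rendering needs the per-vertex w that pgxp tries
# to recover from the gte. all values below are made up.

def affine_lerp(a0, a1, t):
    # what the psx does: plain screen-space interpolation
    return a0 + (a1 - a0) * t

def perspective_lerp(a0, a1, w0, w1, t):
    # what recovered depth enables: interpolate a/w and 1/w, then divide
    num = affine_lerp(a0 / w0, a1 / w1, t)
    den = affine_lerp(1.0 / w0, 1.0 / w1, t)
    return num / den

# halfway along an edge whose endpoints sit at depths w=1 and w=4:
u0, u1 = 0.0, 1.0
print(affine_lerp(u0, u1, 0.5))                 # 0.5 (psx-style, warps)
print(perspective_lerp(u0, u1, 1.0, 4.0, 0.5))  # 0.2 (perspective-correct)
```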

@e-tank

Are you sure about perspective-correct rendering in Saturn emulation?

I could be wrong, which I often am, but that post seems to be talking about using tessellation to try to solve that issue.

no, i was not aware of this. looks cool and interesting, but i’m not sure what i’m seeing, or whether it actually recovers the full info necessary for perspective-correct rendering. i tried to read the blog post, but it’s not clear and borders on gibberish (to me, anyway); i imagine english must not be this person’s primary language. idk, it’s possible it’s just a method of using a modern graphics api to do more accurate saturn-style quad rendering, i can’t tell…

i took a quick peek at the source code, but after 10 mins i still couldn’t make much progress figuring out what it’s doing. when i have time i’ll definitely check it out further, but that probably won’t be for a long time… anyone else up for taking a look at it and explaining in plain (clear) english whether this really is perspective-correct rendering, and how it recovers said data?

edit: thought about it some more and remembered that 4 points in projected space and 4 in source space are enough to recover the projective transformation, so i imagine that’s what this is doing (which, much like pgxp, i’m not sure always gives correct results). i’m fuzzy on the details, but with regard to raytracing i think this would at most put the saturn on equal footing with what i first said about the psx, i.e. upgraded from “no” to “probably not”.
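for the curious, recovering that projective map boils down to solving a small linear system, two equations per point pair. a rough python sketch (my own illustration, not anything from uoyabause’s source):

```python
import numpy as np

def homography_from_quad(src, dst):
    # solve for the 3x3 projective map sending 4 src points to 4 dst
    # points (2 linear equations per pair, 8 unknowns, h33 fixed to 1).
    # a sketch only: degenerate quads make the system singular.
    rows, rhs = [], []
    for (x, y), (X, Y) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -X * x, -X * y]); rhs.append(X)
        rows.append([0, 0, 0, x, y, 1, -Y * x, -Y * y]); rhs.append(Y)
    h = np.linalg.solve(np.array(rows, float), np.array(rhs, float))
    return np.append(h, 1.0).reshape(3, 3)

# unit square -> an on-screen quad (made-up numbers)
H = homography_from_quad([(0, 0), (1, 0), (1, 1), (0, 1)],
                         [(10, 10), (90, 20), (80, 70), (20, 60)])
p = H @ np.array([0.5, 0.5, 1.0])   # map the quad's center
print(p[:2] / p[2])                 # divide by w -> perspective-correct point
```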

Then maybe the only feasible consoles, without the extra work of figuring out how to convert data into “GPU language”, would be from the N64 onwards? (Basically, those that have a GPU that manages the required data.)

Don’t some ePSXe rendering plugins, for instance, already do something similar to take advantage of computer GPUs?

@pakoman that’s essentially correct, yes, so you’d be looking at the n64 onward for consoles and the nds onward for handhelds. even then, i only said i think it’s possible; to what extent, idk. games handle stuff like shadows differently, and i’m not sure you could account for things like that just by looking at the commands submitted via the platform’s graphics api.

re psx plugins: as i already stated, while i’m not 100% on this, i believe both the psx and saturn do not track certain lighting and material properties that would be needed to get any kind of benefit. thus, even if you did raytrace the scene, i don’t think it would look much different from what gpu rasterization gives, just much slower (and/or you would need to make assumptions about things, and it probably wouldn’t look right).

@fivefeet8 again, thanks for the info on uoyabause, i found it very interesting. it reminded me of an old graphics paper i once read, which i managed to find again; here’s a link for the interested: http://vcg.isti.cnr.it/publications/papers/quadrendering.pdf

i believe that paper describes exactly what the author of uoyabause is doing. in fact, i think the tessellation is not for perspective rendering; it’s used to get a bilinear mapping similar to what the saturn actually produces (but much nicer). the paper describes a different method of bilinear quad rendering that is more efficient than cpu-based tessellation; however, uoyabause appears to have since moved its tessellation onto the gpu, with (obviously) huge gains. how the paper’s method would compare to that, idk.
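to make the contrast concrete, here’s a rough sketch of a bilinear quad mapping (my own illustration): the corners get blended with straight lerps and there’s no divide by depth anywhere, so it’s cheap but not perspective-correct.

```python
# rough sketch of a bilinear quad mapping: each (u, v) in [0,1]^2 gets
# blended between the quad's corners with straight lerps, the kind of
# mapping saturn-style quad rendering effectively produces.

def bilinear(p00, p10, p11, p01, u, v):
    def lerp(a, b, t):
        return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)
    top = lerp(p00, p10, u)        # along one edge
    bottom = lerp(p01, p11, u)     # along the opposite edge
    return lerp(top, bottom, v)    # then between the two

# center of the same made-up quad as in the earlier projective sketch:
# bilinear gives (50, 40); the projective map puts it near (49, 43.4)
print(bilinear((10, 10), (90, 20), (80, 70), (20, 60), 0.5, 0.5))
```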

as i said in my last post, 4 points in projected space plus 4 in source space are enough to determine a projective map, which is what uoyabause is able to do. and while this will work for quads that actually went through a projective transform (mostly; i think there are some edge cases where it won’t), there’s no way of knowing whether that’s the case, hence for certain games it will produce incorrect results. so in theory, for games that work well with it, i think the perspective option in uoyabause is capable of producing slightly better visuals, but if your machine is capable of running with tessellation it’s probably not worth the hassle.

I can see three serious problems with this:

1. The PSX and Saturn are fundamentally 2D in terms of rendering, so their emulators do not get any depth information about the polygons they draw and cannot place them in 3D space.

2. There is a lot of fake lighting in older games. Take Conker’s Bad Fur Day, for example. You see shadows, and they look nice; the emulator sees dark polygons. There is no light source the emulator can identify, even if it could somehow deduce that one should exist.

A common trick is to bake lighting into textures, and again the emulator has no information about where the lights are, or even whether the textures have lighting baked into them.
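To make that concrete, here is (hypothetically) everything an emulator might receive for a “shadowed” polygon; the field names and values are invented:

```python
# hypothetical illustration: all the emulator sees for a "shadowed"
# triangle is screen positions and vertex colours. no normals, no
# material, no light source appears anywhere in the command stream,
# so there is nothing a raytracer could use to rebuild the lighting.
shadow_triangle = {
    "vertices": [(120, 80), (160, 80), (140, 120)],          # screen x, y
    "colors":   [(40, 40, 40), (40, 40, 40), (40, 40, 40)],  # just "dark"
    "texture":  None,
}
# a lit triangle from the same mesh looks structurally identical, only
# with brighter colour values: the "light" exists solely in numbers the
# game already computed, or that an artist baked into a texture.
```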

3. Raytracing a scene involves more than what you can see directly: everything a ray can hit changes the final image in some way. The issue here is that if you cannot see something, the game probably won’t try to draw it, and the emulator cannot tell that it is supposed to be there.

For example: You confront Cyrano de Bergerac. You see the shadow of his oversized nose against a wall. He turns his back to you. The nose shadow disappears.

You might think this wouldn’t matter, and if you were rendering still images you would probably be right, most of the time. However, polygons will appear and disappear frequently, changing the lighting in subtle and not-so-subtle ways, in scenes where the lighting shouldn’t really change. My expectation is that this would be conspicuous enough to be annoying.

I think point 1 is already sorted out by some emulators that increase rendering resolution, add perspective correction, and similar features for the PSX and Saturn.

Agree with point 2: lighting would be a problem that may require specific per-game patches, like those re-texturing projects out there. Adding some ambient/sun light by default would help avoid a dark scene, the way current architectural design programs do (using modelled lights, exterior daylight, interior daylight, night + interior lighting…).

Point 3: Is that what PowerVR did at the time to save processing resources, just ignoring everything the camera wouldn’t see?

It might be very unstable on older 3D systems and need at least a DC or a GameCube to be reliable enough, but it would still be an interesting feature to mess with!

You’re right about point 1 it seems. It was a common argument against fancy 3D enhancements a few years ago, but I have clearly been living under a rock. Hats off to the people that made it happen.

I’m thinking about two basic optimizations found in almost every rasterized 3D engine:

  • Clipping. If it’s outside your field of view, don’t draw it.
  • Back face culling. If a polygon is facing away from the camera, don’t draw it.

There are also a lot of fancy schemes to avoid drawing things that are behind other things, but those two basic optimizations remove most of the polygons around you, and when polygons appear and disappear in a ray traced scene you are likely to notice, even if they aren’t directly visible.
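For concreteness, here are rough sketches of those two tests (my own illustration: 2D screen-space vertices, a particular winding convention; real pipelines work in clip space):

```python
# minimal sketches of the two optimizations, assuming 2d screen-space
# vertices after projection. conventions here are made up.

def backface_culled(v0, v1, v2):
    # signed area of the projected triangle: negative means the winding
    # is clockwise, i.e. the polygon faces away from the camera
    area2 = ((v1[0] - v0[0]) * (v2[1] - v0[1])
             - (v2[0] - v0[0]) * (v1[1] - v0[1]))
    return area2 <= 0   # True -> skipped, never submitted for drawing

def offscreen(verts, width, height):
    # trivial rejection: every vertex past the same screen edge
    return (all(x < 0 for x, _ in verts) or all(x > width for x, _ in verts)
            or all(y < 0 for _, y in verts) or all(y > height for _, y in verts))

# either test returning True means the polygon never reaches the
# renderer, even though a ray bouncing off a wall could still "see" it.
```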

I don’t really know how most console hardware works, but I suppose it might be possible to do something similar to PGXP and catch these polygons before they get culled, so they can be included in the ray-traced render.

However, it is also common to test whether the bounding box of a mesh is within the field of view before drawing it, and it will be harder to detect when whole meshes are removed this way, because that is a high-level optimization, not close to the metal.
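Something like this hypothetical sketch, running in game code on the CPU, so a culled mesh never produces any draw commands the emulator could intercept:

```python
# hypothetical sketch of the high-level test: the game rejects a whole
# mesh by its bounding box before any draw command is generated, so the
# emulator never even learns that the mesh exists.

def mesh_visible(aabb_min, aabb_max, planes):
    # planes: view-frustum planes as (nx, ny, nz, d), normals pointing in.
    # for each plane, test the box corner furthest along the normal; if
    # even that corner is behind the plane, the whole box is outside.
    for nx, ny, nz, d in planes:
        px = aabb_max[0] if nx > 0 else aabb_min[0]
        py = aabb_max[1] if ny > 0 else aabb_min[1]
        pz = aabb_max[2] if nz > 0 else aabb_min[2]
        if nx * px + ny * py + nz * pz + d < 0:
            return False    # culled on the cpu: nothing ever gets drawn
    return True
```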

I think this will indeed require game-specific patches in order to have the required information.