Shader Parameters support

There have been strong requests lately to be able to tweak shader parameters in a more convenient and uniform way than we’re currently doing. Right now, we have to tweak shaders directly, which is not an ideal situation.

These settings are generally floating point numbers.

I’m proposing a scheme which can work with any of the shader backends. It’s also important that we can use these shaders without having to use presets.

There are formats which support these things already (CgFX, FX), but we cannot rely on such solutions as they cannot be supported everywhere.

For example, we can expose a setting which sets brightness in a trivial shader:


#pragma parameter BRIGHTNESS "Brightness" 1.0 0.25 2.0 0.1 // default, minimum, maximum, optional step
#ifdef PARAMETER_UNIFORM // If the shader implementation understands #pragma parameters, this is defined.
uniform float BRIGHTNESS;
// More parameters here
#else
// Fallbacks if parameters are not supported.
#define BRIGHTNESS 1.0 // Default
#endif

// Shader code
OUT.color = BRIGHTNESS * tex2D(s0, tex);

#pragmas which are unknown are ignored by the compiler. It is trivial for us to parse all lines in the shader and find all such #pragmas, but they cannot be hidden behind #includes. After parsing, we know all the uniforms we are allowed to tweak, along with a human readable description as well as defaults, min/max and step sizes.
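As a sketch of what that parsing involves, here is a minimal Python version. The regex and function name are illustrative only, not the actual RetroArch code:

```python
import re

# Matches: #pragma parameter NAME "Description" default min max [step]
PRAGMA_RE = re.compile(
    r'#pragma\s+parameter\s+(\w+)\s+"([^"]+)"\s+'
    r'([-\d.]+)\s+([-\d.]+)\s+([-\d.]+)(?:\s+([-\d.]+))?'
)

def parse_parameters(shader_source):
    """Scan shader source line by line for #pragma parameter declarations."""
    params = []
    for line in shader_source.splitlines():
        m = PRAGMA_RE.search(line)
        if m:
            name, desc, default, lo, hi, step = m.groups()
            params.append({
                "name": name,
                "description": desc,
                "default": float(default),
                "min": float(lo),
                "max": float(hi),
                # The step is optional; fall back to a small default.
                "step": float(step) if step else 0.01,
            })
    return params

src = '#pragma parameter BRIGHTNESS "Brightness" 1.0 0.25 2.0 0.1'
print(parse_parameters(src))
```

Trailing comments on the pragma line are harmless, since the regex only consumes up to the last number it needs.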

By embedding which parameters are supported inside the shader file, we allow a mix-and-match model that automatically detects all parameters and exposes them to RGUI. In a shader preset, we can then store parameters. For example:


parameters = BRIGHTNESS;FOO
BRIGHTNESS = 1.2
FOO = 0.5

Using uniforms, we allow RGUI or other UIs to provide sliders to on-the-fly tweak shader parameters. This makes it very easy to tune shaders to your liking.
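A rough sketch of how a frontend could read those overrides back out of a preset (Python, illustrative only; the real preset parser handles quoting and much more):

```python
def parse_preset_parameters(preset_text):
    """Read the 'parameters' name list and per-parameter overrides
    from preset text of the form shown above."""
    values = {}
    names = []
    for line in preset_text.splitlines():
        if "=" not in line:
            continue
        key, _, value = (part.strip() for part in line.partition("="))
        if key == "parameters":
            # Semicolon-separated list of parameter names.
            names = [n.strip() for n in value.split(";")]
        else:
            values[key] = value
    # Only keep overrides that were declared in the parameters list.
    return {name: float(values[name]) for name in names if name in values}

preset = "parameters = BRIGHTNESS;FOO\nBRIGHTNESS = 1.2\nFOO = 0.5\n"
print(parse_preset_parameters(preset))  # {'BRIGHTNESS': 1.2, 'FOO': 0.5}
```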

In addition to that, I’ve looked at shader pass aliases. With current multi-pass shaders, it can get complicated when shaders refer to each other. You can use the PASS%u convention, where you reference passes by absolute index, or PASSPREV, where you do the same with negative relative indexing. To make this more maintainable for shader writers, you can now give a pass a name, and shaders can refer directly to that pass. This also allows shaders to be reused across multiple presets where such absolute/relative references might not be possible without tons of copy-pasted shaders.


// Preset
shader0 = foo.cg
alias0 = FOO

shader5 = resolve.cg // resolve.cg can refer to FOO, i.e. the output of shader0 (PASS1 in this preset).


// Shader code
OUT.color = tex2D(FOO.texture, tex);

Again, to be able to function without a preset, all aliases which are recognized will add certain #defines to the shader.


#ifndef FOO_ALIAS // The FOO pass is not known
#define FOO PASS1 // Fallback
#endif
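In other words, when the frontend recognizes an alias from the preset, it can simply prepend a define before compiling so the fallback branch is skipped. A minimal sketch (Python, illustrative only):

```python
def inject_alias_defines(shader_source, known_aliases):
    """Prepend a NAME_ALIAS #define for each pass alias declared in the
    preset, so the shader's #ifndef fallback branch is not taken and the
    backend-bound alias is used instead."""
    defines = "".join(f"#define {name}_ALIAS\n" for name in known_aliases)
    return defines + shader_source

patched = inject_alias_defines("// shader body\n", ["FOO"])
print(patched)
```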

The shader-parameter branch on GitHub implements this as well as RGUI support.

Exciting stuff! We should be able to clean up a number of shader flavors in the repo using these features, and the RGUI editing will be great for end-users.

So, am I correct in thinking that the fallback ifdefs will be inserted automatically for anything covered in the pragmas?

Also, I was not aware of the PASSPREV options. That was something I found very handy when porting certain shaders to byuu’s GLSL format and didn’t realize we’d had it here all along. o_o

No, PASSPREV was mostly an experimental feature (I found some critical bugs in it when I made CRT-glow), but that’s fixed now obviously :stuck_out_tongue_winking_eye: I’ll have to rewrite the Cg/GLSL shader spec in a more proper format soon, I guess, when I have the motivation to do it. The fact that GLSL shaders are also available should be pushed a bit more as well, because most people don’t even know we have a fully fleshed-out GLSL shader spec that is basically identical to the Cg spec.

Yeah, that’s a great point. I’ve been hand-porting a few stubborn ones lately for use with KMS, so it would be nice to have a place for ones that don’t convert programmatically, whether a subdirectory in common-shaders or somewhere else.

That’s a great addition and will help to organize the repository.

I don’t see why these should be limited to floating-point numbers when there is a rich tradition of, and a clear need for, other parameter types. I get not worrying about the other types yet, but the proposal here isn’t even extensible. I think this is a big mistake.

Look at the FX file UIWidget annotations for an example. Here’s a demonstration of how that could be worked into your pragma-based system, using key-value pairs:


#ifdef PARAMETER_UNIFORM 
// The pragma could go here, so it doesn't have to be declared again, if we so wish
#pragma parameter (type=float, name=BRIGHTNESS, default=1.0, UIName="Brightness", UIWidget="Slider", UIMin=0.25, UIMax=2.0, UIStep=0.1)
#pragma parameter (type=float4, name=TINTCOLOR, default = "1.0, 1.0, 0.0, 1.0", UIWidget="Color", UIName="TintColor")

#else
#define BRIGHTNESS 1.0 // Default
#define TINTCOLOR float4(1,1,0,1)
#endif

Cg doesn’t support int uniforms anyways, at least not in the lower profiles, so float only makes sense there.

As for vector parameters, well. For one, it would increase complexity since simple left/right stepping mechanics wouldn’t work, and vectors can always be treated as multiple floats (which any slider based approach would have to do anyways). Adding things like UIWidget to let you pick colors makes little sense when you don’t have a widget UI to rely on. Unless you have something obvious like RGBA “color”, you also need to attach semantic information to all vector elements.

Anyways, if we do add vector parameter support, it could always be #pragma parameter_vector or something. It’s not like this feature is completely frozen yet.

Exciting news. I’m looking forward to modifying my shaders accordingly.

If the option is to choose between one code portion and another, is there an elegant way to make that an option parameter? Or should I define a float parameter and use its values to choose among the different code portions? (That seems a bit weird, imo.)

For example, in xBR I have four implementations of corner detection, each of which is just a single line of code. How can I parameterize that in RA?

That is best done with compile-time #defines. One workaround, of course, is to create four shaders which each #define the correct approach and #include the main file. I’m not certain how compile-time #defines are best exposed; it’s a complicated feature due to offline shader cross-compilation.
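The wrapper idea could look roughly like this (the file names and the CORNER_MODE macro are made up for illustration, and #include support depends on the backend):

```
// xbr-corner-a.cg -- one of four tiny wrapper shaders
#define CORNER_MODE 1   // select corner-detection variant A at compile time
#include "xbr-main.inc" // the shared implementation reads CORNER_MODE
```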

Ok, so I can’t reduce the number of shaders to only one.

Not for now, no. However, you could reduce it to four tiny shaders and one actual shader. I will likely add support for “code path parameters” or something similar eventually.

OK, sure, so that means you go ahead and design a system that can’t ever support things in higher profiles?

This is lame. Unless the shader labels parameters with enough information that a frontend which does have a widget UI can actually use them, there can never be a sophisticated widget UI. If you’re trying to design a system which has a straightforward implementation solely via hotkeys (is that the reason for your concern about stepping mechanics?), then there are some solutions. For instance, anywhere you would present a user interface to simply step a float left/right, you can present the vector parameter as four component-wise float parameters in the same way. Colors can be done this way by just shrugging and implicitly using a (0, 255, 1) min/max/step range.

Maybe, maybe not. FX doesn’t need this. You’re not obliged to go that extra mile. Just get within hailing distance of the solution devised by people who already solved this. There are only a few small things needed to unlock a lot of convenience and power.

Alright, I retract my claim that your design isn’t scalable, supposing the type is all that’s important, which I don’t think it is.

Well, I might have been wrong about Cg not supporting int. While there is no cgGLSetParameter1i (but cgGLSetParameter1f), there is for some unholy whack reason cgSetParameter1i (wtf). In my defense, I checked this 3 years ago. I guess this allows int, but I’ll have to verify with arbvp1/arbfp1 first (unless you have nVidia cards, CGC compiles to this, yaaaay >_<).

int or float parameters can be supported then simply by deducing the type (glGetActiveUniform and cgGetParameterType). Either that, or #pragma parameter_int. #pragma parameter_int is probably safer.

Smashing a full vector definition into one line gets kinda nasty imo. Maybe a simpler approach is to have some definitions which logically group other parameters together to form one vector. This allows easy per-vector naming of components (for simple UIs), and one name for the vector as a whole (for the fancier UIs). Also allows some kind of fixed “semantic” on how to treat the vector. E.g. “color” for color pickers.


#pragma parameter FOO_RED "Foo - Red" 0.5 0.0 1.0 0.01
#pragma parameter FOO_GREEN "Foo - Green" 0.5 0.0 1.0 0.01
#pragma parameter FOO_BLUE "Foo - Blue" 0.5 0.0 1.0 0.01
#pragma parameter_vector FOO "Foo - Color" COLOR (something semantic, dunno) FOO_RED FOO_GREEN FOO_BLUE // Create a vector FOO based on FOO_RED, FOO_GREEN and FOO_BLUE.
uniform float3 FOO;

I can’t say for sure if the PS3 GL subset supports cgSetParameter1i or not - I’ve only ever seen it in a header file (Cg/NV/Cg.h), and that was just a function prototype.

https://bitbucket.org/MichelPaulissen/d … 2f1519f42d

This define ‘DEX_CG_NO_INTEGER_PARAMETERS’ leads me to believe that integer parameters can’t be assumed to be there by default for the Cg spec. Anyway, guess we should look into this further.

This seems like it could be a rough outline of something workable.

One small oddity with this approach is that an advanced frontend would need to decide to hide some individual parameters, in favor of the more semantically elaborated parameters. That’s weird, but not hard.

But one related problem with this approach is that it allows certain nonsensical scenarios. For instance, one could define two parameter_vectors which drive some of the same components, and then it would not be clear to the frontend which color picker should drive which components. This isn’t a fatal problem.

To improve this, I recommend that the specification be written so that when the vector is declared, all the components it employs are allocated to that vector grouping and are no longer available for use in another vector. Furthermore, to be clear, the interval settings on the individual components should be discarded, and new interval settings specified or implied by the semantic information. If no semantic is specified on the vector, then the individual components’ interval settings could be kept.

A small unfortunate feature of this approach is that it doesn’t require people to enter the semantically rich definition, so you’re less likely to get a library of shaders built that way, but I can live with that.

I had assumed that long, annoying one-line definition could be broken up into several lines à la FX, but I realize now that going through a #pragma makes that infeasible. However, allow me to recast my original proposal as a multi-line approach, with each key-value pair on its own line:


#pragma parameter (type=float, name=BRIGHTNESS, default=1.0, UIName="Brightness", UIWidget="Slider", UIMin=0.25, UIMax=2.0, UIStep=0.1)

->


#pragma parameter BRIGHTNESS type float
#pragma parameter BRIGHTNESS default 1.0
#pragma parameter BRIGHTNESS UIName "Brightness"
#pragma parameter BRIGHTNESS UIWidget "Slider", UIMin 0.25, UIMax 2.0, UIStep 0.1

#pragma parameter NAME type <float/int> looks alright. It’s either that or #pragma parameter_int. Since the description always starts with a ", arbitrary keywords can be added after NAME if needed.

As for #pragma parameter_vector, it should indeed “consume” the parameters so they cannot be used for other vectors. The benefit of splitting up the definition is to allow simpler UIs to use only the single components without having to serialize the vector and “invent” new descriptions for each vector element. The shader backend can treat single components as single components, and vectors as indices into those single-component parameters.
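That “consume” rule is easy to enforce while grouping. A sketch in Python (the data layout is illustrative, not any real implementation):

```python
def group_vectors(parameters, vector_specs):
    """Group scalar parameters into vectors. Each component is 'consumed':
    it may belong to at most one vector."""
    consumed = set()
    vectors = []
    by_name = {p["name"]: i for i, p in enumerate(parameters)}
    for vec in vector_specs:
        indices = []
        for comp in vec["components"]:
            if comp in consumed:
                raise ValueError(f"{comp} already used by another vector")
            consumed.add(comp)
            indices.append(by_name[comp])  # index into the scalar list
        vectors.append({"name": vec["name"], "indices": indices})
    return vectors

params = [{"name": n} for n in ("FOO_RED", "FOO_GREEN", "FOO_BLUE")]
vecs = group_vectors(params, [{"name": "FOO",
                               "components": ["FOO_RED", "FOO_GREEN", "FOO_BLUE"]}])
print(vecs)  # [{'name': 'FOO', 'indices': [0, 1, 2]}]
```

Simple UIs can ignore the vector list entirely and present only the scalar components; fancier UIs use the indices to build a combined widget.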

parameter_vector might benefit from more lines, maybe something like


#pragma parameter_vector FOO "Foo - Color"
#pragma parameter_vector FOO components FOO_RED FOO_GREEN FOO_BLUE
#pragma parameter_vector FOO semantic "COLOR"

I’m not sure if I like semantics like COLOR overriding the valid range of the components. In some cases, you might want to have a larger dynamic range per-component and the min/max of the individual components should reflect this. Step-sizes, etc are of course only suggestions, even without semantics. It’s not sensible to enforce those for widget UIs.

Anyways, the internal representation of such a parameter set can be quite simple:


struct Parameter
{
    string name;
    string description;
    float current, initial, min, max; // "initial" since "default" is a keyword. Tagged union here for int, w/e.
    bool vector_component; // True if this is part of a vector.
};

struct ParameterVector
{
    // Type here
    string name;
    string description;
    string semantic;
    unsigned size;
    unsigned indices[4]; // Index into the parameters.
};

struct ShaderParameter
{
    vector<Parameter> parameters; // Simple UIs can care only about this.
    vector<ParameterVector> parameter_vectors; // More complex UIs can use this as well.
};

Nobody’s widgets for the COLOR semantic are going to be able to cope with component ranges other than [0,1]. Nobody coding something using these semantics is going to want to worry about what it means to take a color which is actually going to be [0,1] and adapt it to the range constraints on the individual vector elements. So the first time someone makes a shader that does this, it’s going to turn into an undefined morass.

So, I strongly disagree. A semantic strictly corresponds to an intended widget and component constraints. If someone needs a color with a higher dynamic range, then they can define a new semantic, and emulators that don’t support it yet (they can’t; nobody will even anticipate this bizarre scenario) will discover an unknown semantic and fallback to presenting sliders. This seems cool to me.

Under this scheme, the component constraints are still needed for implementations without semantics support. Implementations with semantics support should, as a nice debugging aid, print a warning when the choice of semantic mismatches the constraints on the individually defined components, since this is a mistake. Otherwise someone might think they can define rgb to go [0,2], call it a COLOR, and think it works fine because they didn’t test it on an implementation with widgets; but nobody’s widgets understand what the hell that means.

Regarding “inventing” new descriptions for each vector element, I don’t mind that. The specification for the semantic would define how it is to be split. However, this would mean that an emulator encountering an unknown semantic wouldn’t even know how to “invent” the components… so… you win this one.

If you think an extended color dynamic range is an actually useful scenario, we could write good specifications for that semantic early on. If it was just a thought experiment, then… let’s pick a better thought experiment.

Well, being able to go beyond 1.0 for a color component is useful. Some of the CRT shaders have tints where the final color is multiplied by (1.0, 1.0, 1.05). That’s one of the few shaders where I’ve seen a use case for a COLOR semantic. Another is crt/glow, where I’d like a color boost beyond 1.0. If the color is multiplicative, a larger range is going to be useful.

A color picker could perhaps just rescale the color value within the dynamic range of something. These colors don’t represent a direct color to be represented on screen anyways. If colors beyond 1.0 are really that odd, then COLOR_HDR or something.

Then there’s the question of linear vs. gamma-space colors. A recent trend for CRT shaders is using sRGB FBOs to do all computation in linear space without having to dick around with gamma correction back and forth in every pass. Multiplying colors in either space changes the result significantly. Maybe color pickers don’t really care about this case, though, since they will likely just display the colors in gamma space. A shader assuming linear colors would just have to apply gamma manually, I guess.
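To illustrate why the space matters, a quick numeric check (Python; a pure power-law gamma of 2.2 is assumed here for illustration, not the exact sRGB transfer function):

```python
GAMMA = 2.2

def to_linear(c):
    # Gamma-space value -> linear-space value (power-law approximation).
    return c ** GAMMA

def to_gamma(c):
    # Linear-space value -> gamma-space value.
    return c ** (1.0 / GAMMA)

c, tint = 0.5, 1.05

gamma_space = c * tint                        # multiply in gamma space
linear_space = to_gamma(to_linear(c) * tint)  # multiply in linear space

print(round(gamma_space, 4), round(linear_space, 4))  # 0.525 0.5112
```

The same tint produces visibly different outputs depending on which space the multiply happens in.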

I’m not sure how a user would make sense of a color picker which was intended to select values like (1, 1, 1.05). I’ve never seen an HDR color picker before. If I intended to have values like that in a shader, I wouldn’t use the COLOR semantic.

However, if we suggest that implementations allow each semantic vector’s values to be specified manually in addition to through the obvious widget, then normally users could enjoy a color picker for the COLOR semantic and then enter numbers manually when they were specifying an HDR-type value of 1.05. This way, nobody has to figure out how an HDR color picker should work. It’s not a lot different from how the alpha channel will have to be specified: an independent widget, since a color picker is really only 3 dimensional.

With this in mind, I suppose we could go ahead and define the COLOR semantic as an HDR-capable thing, even if nobody can make a widget for it.