You’d need to run the cg2glsl script.
Would you be able to convert the shader posted by commiebits for me? I’d like to try it on my Android box, but the conversion keeps failing for me.
C:\Users\User\AppData\Local\Programs\Python\Python35\python.exe "C:\Script\cg2glsl.py" C:\Shaders2\Test.cg C:\Shaders\Test.gls
I keep getting a bunch of errors.
C:\Shaders2\Test.cg(1) : error C0000: syntax error, unexpected '<' at token "<"
C:\Shaders2\Test.cg(5) : error C0130: invalid character literal
C:\Shaders2\Test.cg(9) : error C0000: syntax error, unexpected '/', expecting "::" at token "/"
C:\Shaders2\Test.cg(11) : error C0000: syntax error, unexpected integer constant, expecting "::" at token "<int-const>"
C:\Shaders2\Test.cg(16) : error C0000: syntax error, unexpected '.', expecting "::" at token "."
C:\Shaders2\Test.cg(26) : error C0104: Unknown pre-processor directive #version
C:\Shaders2\Test.cg(31) : error C5060: out can't be used with non-varying vTexCoord
C:\Shaders2\Test.cg(35) : error C5059: stdlib "gl_" variables are not accessible
C:\Shaders2\Test.cg(38) : error C0000: syntax error, unexpected ']' at token "]"
C:\Shaders2\Test.cg(41) : error C0104: Unknown pre-processor directive #version
C:\Shaders2\Test.cg(62) : error C1059: non constant expression in initialization
C:\Shaders2\Test.cg(133) : error C0000: syntax error, unexpected type identifier, expecting '{' at token "vec2"
C:\Shaders2\Test.cg(134) : error C0000: syntax error, unexpected type identifier, expecting '{' at token "vec2"
C:\Shaders2\Test.cg(135) : error C5060: out can't be used with non-varying fragmentColour
C:\Shaders2\Test.cg(137) : error C1013: function "main" is already defined at C:\Shaders2\Test.cg(33)
C:\Shaders2\Test.cg(271) : error C0000: syntax error, unexpected ']' at token "]"
Vertex compilation failed …
I just updated Super-xBR shaders.
Now the user can tweak the filter weights with just one knob. It’s a param that can vary from 0.0 to 2.0:
0.0 means bilinear filtering.
0.65 is equivalent to cubic filtering.
1.29 is sinc filtering (the default value). This is the biggest sane value.
Above 1.29 is insanely sharp, which can introduce other artifacts, though it may please some users.
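For reference, libretro cg shaders expose this kind of knob through a #pragma parameter line of the form NAME "description" default min max step. A minimal sketch with a hypothetical parameter name (check the repo for the actual one):

#pragma parameter XBR_WEIGHT "Super-xBR filter weight" 1.29 0.0 2.0 0.05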
I developed a fast variant of super-xbr and released it on common-shaders: https://github.com/libretro/common-shaders/tree/master/xbr/super-xbr
It’s like twice as fast.
Hey Hyllian! I’ve created an account to ask for a bit of help regarding the cg shaders. I’m a beginner when it comes to shader programming and I wanted to use the xBR (x2 the native resolution) shader in my Unity game. A very kind member of the community helped me export (actually he did the whole thing, I was just watching) the cg shader into a shader compatible with Unity; here is a link to the post: http://forum.unity3d.com/threads/how-to-copy-paste-a-cg-shader-found-on-the-internet-in-unity.334772/#post-2181555 The shader compiles with no more errors thanks to Lanre, but I have no idea what to fill these values with: half2 video_size; float2 texture_size; half2 output_size; Lanre said they’ll need to be filled with a Vector4 value (4 floats: X, Y, Z, and W), but the screen is always white regardless of the values I set from within the script that controls the shader. I was also doubting whether the cg shader that was exported works to begin with (link in my Unity post). HELP! I must say my game looks gorgeous with the xBR filter applied to it (through an image scaler that uses the algorithm).
[QUOTE=Sonoshee;24291]…I have no idea what to fill these values with: half2 video_size; float2 texture_size; half2 output_size…[/QUOTE]
These three are uniform variables used by Retroarch.
video_size is the size of the frame being filtered. So, if it is a snes game, it’s the native res of that system, which is 256x224.
texture_size is the size of the texture the video is copied into. In Retroarch, texture_size is always a power of two in both dimensions, so it can be 256x256, 512x256, 512x512, 1024x1024 and so on. You should know the size of the texture you use for your game.
output_size is the size of the filtered output. So, if you had a snes game and upscaled it by 4x, it would be 1024x896.
So, in other words, this is what happens when you run a snes game in Retroarch with xBR and want to upscale by 2x:
1- You set up Retroarch and run a snes game, which has a native res of 256x224.
2- Retroarch allocates a power-of-two texture to accommodate the snes frame, so it allocates a 256x256 texture. (This is what actually matters for the xBR calculations.)
3- To upscale by 2x, the final output res should be 512x448, which means output_size is 512x448.
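To make that concrete, here is a minimal sketch of those three uniforms with the values from this SNES 2x example filled in as comments (declared as float2 here; the Unity post above uses half2 for two of them):

uniform float2 video_size;    // 256x224: the native SNES frame
uniform float2 texture_size;  // 256x256: the power-of-two texture holding the frame
uniform float2 output_size;   // 512x448: the 2x-upscaled output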
Hello! I’ve been importing some of the xBR cg shaders on GitHub to Unity but I have some issues: 1 - the output sometimes gets xBR-scaled, sometimes not (normal scaling), and sometimes there are some spiky pixel edges:
Also, the center sprite inside the game (the aircraft) is kind of distorted:
This is how the game view looks without the xBR shader applied (this is only part of the screen):
2 - You mentioned that video_size and output_size should be set by a controller (in your case, Retroarch; in my case, a Unity script that changes the values). But I read through the shader and saw that it doesn’t use video_size or output_size anywhere in the algorithm; it only uses texture_size. Correct me if I’m wrong, because I’m only setting texture_size in the controller script to my native resolution (which is 512x300); the other two values are left unset ((0,0) for both).
3 - I’m not sure, but maybe the issues are due to the logic inside the shaders. I imported the following shaders:
[ul]
[li]2xbr-hybrid-v4b.cg : https://github.com/libretro/common-shaders/blob/master/xbr/shaders/xbr-hybrid/2xbr-hybrid-v4b.cg[/li]
[li]2xbr-hybrid-v4.cg : https://github.com/libretro/common-shaders/blob/master/xbr/shaders/xbr-hybrid/2xbr-hybrid-v4.cg[/li]
[li]2xbr-hybrid.cg : https://github.com/libretro/common-shaders/blob/master/xbr/shaders/xbr-hybrid/2xbr-hybrid.cg[/li]
[li]2xBR-v3.5a.cg : http://www.mediafire.com/view/n3fmr2gybq291x2/2xBR-v3.5a.cg[/li]
[/ul]
All of them give the same artifacts mentioned above. Are they guaranteed to perform xBR scaling? (Have you guys tested them in Retroarch?)
Have you read my last post? It answers your questions.
And yes, all xbr shaders work in Retroarch.
BTW, you should set POINT sampling when using xBR shaders.
Yes, I did read your answer; that’s why I asked what the point of setting output_size and video_size is, since they’re not used inside the shader. Also, I tried another shader (5xbr-v4.0-noblend.cg: https://github.com/libretro/common-shaders/blob/master/xbr/shaders/legacy/5xbr-v4.0-noblend.cg), and there is a slight change as expected (no antialiasing), but the artifacts are still there. I’ve been at this for the past 3 days; I guess I’ll just stick with normal pixel-perfect scaling if the spiky pixels problem persists.
They aren’t used in this particular shader, but could be used in others.
I don’t know about Unity framework. I only know about libretro’s.
I think it can be one of these:
1- You’re not setting the correct texture_size your game is using (Unity is probably using a texture_size bigger than your game’s native resolution);
2- You’re not using POINT sampling.
Try setting texture_size to 512x512 and see if that fixes it.
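For what it’s worth, the reason a wrong texture_size breaks the output is that the xBR shaders derive their texel offsets from it when sampling the pixel neighborhood. A simplified sketch of the pattern used in those shaders:

float2 ps = 1.0 / texture_size;              // size of one texel in texture space
float2 left = texCoord + float2(-ps.x, 0.0); // neighbor one texel to the left
float2 up   = texCoord + float2(0.0, -ps.y); // neighbor one texel above

If texture_size doesn’t match the actual texture, these offsets land between texels and the edge detection falls apart.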
I tried different combinations for texture_size (swapping between 256, 512, 1024, 2048); the one that got me an accurate result is 1024x512, but it’s not as accurate as you would expect from an xBR filter:
At least now there are no more artifacts on the aircraft, so as you said it’s probably a matter of power-of-two values for texture_size. I double-checked, and all the assets in my game use point sampling, so that’s probably not the source of the problem.
Yes, that’s definitely incorrect.
I recommend using this xBR version: https://github.com/libretro/common-shaders/blob/master/xbr/shaders/xbr-lv2.cg
The ones you’re using are old.
I think I hit a sweet spot in corner detection for super-xbr. I updated the repo with new versions with updated params. On SNES games, the HUD and fonts now remind me of regular xBR. On RPGs, like FF, the fonts are top notch!
At last, after almost a year without improvements, I now have a new xBR shader with new corner detection and a new color difference algorithm.
I’ve used the color difference algorithm from Sp00kyFox’s ScaleNx. I tweaked it a bit, because the standard one was giving me some weird results in my tests, so I had to change some params.
The repo is already updated: https://github.com/libretro/common-shaders/blob/master/xbr/xbr-lv2-accuracy-multipass.cgp
With the new color diff algorithm, some artifacts (red/blue spike artifacts) are fixed. These artifacts are very apparent in Final Fight 3. The difference between the standard xbr shader and the new one can be seen here:
http://screenshotcomparison.com/comparison/164239/picture:0
The only drawback of the new color diff algo is speed; it’s heavy code. For now, it only runs at full speed in multipass. So this new shader runs in two passes and is called xbr-lv2-accuracy-multipass. I think I can update the single-pass shaders with at least the new corner detection, but not the color diff algo. So, for now, the new multipass shader is the best one in IQ.
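For anyone unfamiliar with multipass presets: a .cgp file chains the passes and sets the scale and filtering for each one. A minimal sketch of what a two-pass preset like this looks like (the pass filenames here are hypothetical; see the linked .cgp for the real ones):

shaders = 2
shader0 = shaders/xbr-lv2-accuracy-pass0.cg
filter_linear0 = false
scale_type0 = source
scale0 = 1.0
shader1 = shaders/xbr-lv2-accuracy-pass1.cg
filter_linear1 = false
scale_type1 = source
scale1 = 2.0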
Standard xBR:
New xBR (color diff and corner detection improvements):
Quite nice. When you want super clean image quality with minimal detail loss, this seems like the way to go.
For games without anti-aliasing, which is most 8-bit games, you should give ScaleFx-3x a try too; it does an amazing job with stairs up to level 6 (as long as they don’t have AA)! For now, xBR can only get up to lvl 4 (with AA).
I fixed a small bug in the smart-blur shader that was causing some black halo artifacts, and added a new preset to use smart-blur with xbr-accuracy.
A comparison with/without smart-blur: http://screenshotcomparison.com/comparison/164360
It smooths some outlines.
Another comparison: http://screenshotcomparison.com/comparison/164365
Ah, that looks really nice with the smoothed transitions along the color bands, and it makes the pattern in the Chrono Trigger background more mellow. Good stuff all around!
@Hyllian nice to see that the metric helps you too! As I was starting with the development of ScaleFX, I tested several fast methods for color metrics, and this one was surely the best. Yeah, it costs more, but it’s still more performant than going the full way and doing a Lab-colorspace conversion. Feel free to use my implementation of it. It just looks a little convoluted since I implemented it in a way that does 4 metric calculations at once, which should be faster than doing them individually; I didn’t change the formula itself apart from inverting the result and scaling it to [0,1]. You can also get rid of the square root by squaring your threshold value, which you only need to do once if it is constant. I couldn’t do that in my case, since I needed to use the metric result as the output of a shader pass, and without the square root there is a noticeable precision loss due to the frame buffer. Here is the “clean” float4-version if you have a use for it:
float4 df(float4x3 A, float4x3 B)
{
    // per-channel differences for the four color pairs (rows = pairs, columns = RGB)
    float4x3 diff = A - B;
    // average red of each pair; _m00_m10_m20_m30 selects column 0 (the red channel)
    float4 ravg = 0.5 * (A._m00_m10_m20_m30 + B._m00_m10_m20_m30);
    // weight the squared differences per channel: (2 + ravg) for R, 4 for G, (3 - ravg) for B
    diff *= diff * transpose(float3x4(2.0 + ravg, float4(4), 3.0 - ravg));
    // sum the weighted squares across RGB, giving four squared distances at once
    return mul(diff, float3(1));
}
The result of df(A,B) is the squared difference of (A.x, B.x), (A.y, B.y), (A.z, B.z) and (A.w, B.w). Maybe you have another idea to speed it up?
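If you only need the metric for a single pair of colors, the same weighting reduces to a scalar version; a minimal sketch, with a hypothetical function name and colors assumed to be in [0,1]:

float df1(float3 a, float3 b)
{
    float3 d = a - b;                               // per-channel difference
    float ravg = 0.5 * (a.r + b.r);                 // average red of the pair
    float3 w = float3(2.0 + ravg, 4.0, 3.0 - ravg); // same channel weights as above
    return dot(w * d, d);                           // squared distance; apply sqrt() if needed
}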
[QUOTE=Sp00kyFox;35348]…feel free to use my implementation for it… maybe you have another idea to speed it up?[/QUOTE]
I didn’t use your function because my cg compiler was complaining about using too many registers. For some reason, if you pass matrices to functions, the compiler uses more registers than if you just use vectors.
I had to modify the original params (2, 4, 3) too. For some reason they were giving wrong results on my pixel-art test images, so I tweaked them and found some good triplets: (17, 20, 3) or (3, 6, 1). The default is now (17, 20, 3).
I’m using the sqrt return because I need to compare accumulated distances. For example, if I had 4 color distances d1, d2, d3 and d4:
d1 = 1, d2 = 6, d3 = 4, d4 = 4
d1^2 = 1, d2^2 = 36, d3^2 = 16, d4^2 = 16
so, if I use the sqrt, then (d1 + d2) < (d3 + d4). On the other hand, if I don’t use the square root, then (d1^2 + d2^2) > (d3^2 + d4^2).