Problems porting a GLSL Shadertoy shader to Unity

I'm currently trying to port a shader from shadertoy.com (atmospheric scattering example, interactive code demo) to Unity. The shader is written in GLSL, and I have to start the editor with C:\Program Files\Unity\Editor>Unity.exe -force-opengl so that it can render the shader (otherwise I get the error "This shader cannot run on this GPU"), but that's not the problem right now. The problem is porting the shader to Unity.

The scattering functions etc. are all identical and "runnable" in my ported shader; the only remaining issue is that the mainImage() function controls the camera, the light direction, and the ray directions. This of course has to be changed, so that the Unity camera position, the view direction, and the light sources and directions are used instead.

The main function of the original is as follows:

 void mainImage( out vec4 fragColor, in vec2 fragCoord ) {
     // default ray dir
     vec3 dir = ray_dir( 45.0, iResolution.xy, fragCoord.xy );

     // default ray origin
     vec3 eye = vec3( 0.0, 0.0, 2.4 );

     // rotate camera
     mat3 rot = rot3xy( vec2( 0.0, iGlobalTime * 0.5 ) );
     dir = rot * dir;
     eye = rot * eye;

     // sun light dir
     vec3 l = vec3( 0, 0, 1 );

     vec2 e = ray_vs_sphere( eye, dir, R );
     if ( e.x > e.y ) {
         discard;
     }

     vec2 f = ray_vs_sphere( eye, dir, R_INNER );
     e.y = min( e.y, f.x );

     vec3 I = in_scatter( eye, dir, e, l );

     fragColor = vec4( I, 1.0 );
 }

I read the documentation about this function and how it works at https://www.shadertoy.com/howto :

Image shaders implement the mainImage() function to generate procedural images by computing a color for each pixel. This function is expected to be called once per pixel, and it is the responsibility of the host application to feed it the right inputs, get the output color back, and assign it to the screen pixel. Prototype:

void mainImage( out vec4 fragColor, in vec2 fragCoord );

where fragCoord contains the pixel coordinates for which the shader needs to compute a color. The coordinates are in pixel units, ranging from 0.5 to resolution-0.5 over the rendering surface, where the resolution is passed to the shader through the iResolution uniform (see below).

The resulting color is gathered in fragColor as a four-component vector, the last component of which is ignored by the client. The result is gathered as an "out" variable in anticipation of the future addition of multiple render targets.

So, in this function there are references to iGlobalTime, so the camera rotates over time, and to iResolution for the resolution. I pasted the shader into a Unity shader and tried to fix and hook up the dir, eye and l variables so that it works with Unity, but I'm completely stuck. I get some kind of picture that looks "related" to the original shader: (top is the original, bottom the current state in Unity)

shader unity comparison

I am not a shader professional; I only know some of the basics of OpenGL and mostly write game logic in C#, so all I could do was look at other shader examples and see how to get data about the camera, light sources etc. into this code, but as you can see, nothing really works out.

I copied the skeleton code for the shader from https://en.wikibooks.org/wiki/GLSL_Programming/Unity/Specular_Highlights and some vectors from http://forum.unity3d.com/threads/glsl-shader.39629/ .

I hope someone can point me in some direction on how to fix this shader / port it to Unity correctly. Below is the current shader code. All you need to do to reproduce it: create a new shader in an empty project, copy this code into it, create a new material, assign the shader to that material, then add a sphere, put the material on it, and add a directional light.

 Shader "Unlit/AtmoFragShader" { Properties{ _MainTex("Base (RGB)", 2D) = "white" {} _LC("LC", Color) = (1,0,0,0) /* stuff from the testing shader, now really used */ _LP("LP", Vector) = (1,1,1,1) } SubShader{ Tags{ "Queue" = "Geometry" } //Is this even the right queue? Pass{ //Tags{ "LightMode" = "ForwardBase" } GLSLPROGRAM /* begin port by copying in the constants */ // math const const float PI = 3.14159265359; const float DEG_TO_RAD = PI / 180.0; const float MAX = 10000.0; // scatter const const float K_R = 0.166; const float K_M = 0.0025; const float E = 14.3; // light intensity const vec3 C_R = vec3(0.3, 0.7, 1.0); // 1 / wavelength ^ 4 const float G_M = -0.85; // Mie g const float R = 1.0; /* this is the radius of the spehere? this should be set from the geometry or something.. */ const float R_INNER = 0.7; const float SCALE_H = 4.0 / (R - R_INNER); const float SCALE_L = 1.0 / (R - R_INNER); const int NUM_OUT_SCATTER = 10; const float FNUM_OUT_SCATTER = 10.0; const int NUM_IN_SCATTER = 10; const float FNUM_IN_SCATTER = 10.0; /* begin functions. These are out of the defines because they should be accesible to anyone. */ // angle : pitch, yaw mat3 rot3xy(vec2 angle) { vec2 c = cos(angle); vec2 s = sin(angle); return mat3( cy, 0.0, -sy, sy * sx, cx, cy * sx, sy * cx, -sx, cy * cx ); } // ray direction vec3 ray_dir(float fov, vec2 size, vec2 pos) { vec2 xy = pos - size * 0.5; float cot_half_fov = tan((90.0 - fov * 0.5) * DEG_TO_RAD); float z = size.y * 0.5 * cot_half_fov; return normalize(vec3(xy, -z)); } // ray intersects sphere // e = -b +/- sqrt( b^2 - c ) vec2 ray_vs_sphere(vec3 p, vec3 dir, float r) { float b = dot(p, dir); float c = dot(p, p) - r * r; float d = b * b - c; if (d < 0.0) { return vec2(MAX, -MAX); } d = sqrt(d); return vec2(-b - d, -b + d); } // Mie // g : ( -0.75, -0.999 ) // 3 * ( 1 - g^2 ) 1 + c^2 // F = ----------------- * ------------------------------- // 2 * ( 2 + g^2 ) ( 1 + g^2 - 2 * g * c )^(3/2) float phase_mie(float g, float c, float cc) { float gg = g * g; float a = (1.0 - gg) * (1.0 + cc); float b = 1.0 + gg - 2.0 * g * c; b *= sqrt(b); b *= 2.0 + gg; return 1.5 * a / b; } // Reyleigh // g : 0 // F = 3/4 * ( 1 + c^2 ) float phase_reyleigh(float cc) { return 0.75 * (1.0 + cc); } float density(vec3 p) { return exp(-(length(p) - R_INNER) * SCALE_H); } float optic(vec3 p, vec3 q) { vec3 step = (q - p) / FNUM_OUT_SCATTER; vec3 v = p + step * 0.5; float sum = 0.0; for (int i = 0; i < NUM_OUT_SCATTER; i++) { sum += density(v); v += step; } sum *= length(step) * SCALE_L; return sum; } vec3 in_scatter(vec3 o, vec3 dir, vec2 e, vec3 l) { float len = (ey - ex) / FNUM_IN_SCATTER; vec3 step = dir * len; vec3 p = o + dir * ex; vec3 v = p + dir * (len * 0.5); vec3 sum = vec3(0.0); for (int i = 0; i < NUM_IN_SCATTER; i++) { vec2 f = ray_vs_sphere(v, l, R); vec3 u = v + l * fy; float n = (optic(p, v) + optic(v, u)) * (PI * 4.0); sum += density(v) * exp(-n * (K_R * C_R + K_M)); v += step; } sum *= len * SCALE_L; float c = dot(dir, -l); float cc = c * c; return sum * (K_R * C_R * phase_reyleigh(cc) + K_M * phase_mie(G_M, c, cc)) * E; } /* end functions */ /* vertex shader begins here*/ #ifdef VERTEX const float SpecularContribution = 0.3; const float DiffuseContribution = 1.0 - SpecularContribution; uniform vec4 _LP; varying vec2 TextureCoordinate; varying float LightIntensity; varying vec4 someOutput; /* transient stuff */ varying vec3 eyeOutput; varying vec3 dirOutput; varying vec3 lOutput; varying vec2 eOutput; /* lighting stuff */ // ie one could #include 
"UnityCG.glslinc" uniform vec3 _WorldSpaceCameraPos; // camera position in world space uniform mat4 _Object2World; // model matrix uniform mat4 _World2Object; // inverse model matrix uniform vec4 _WorldSpaceLightPos0; // direction to or position of light source uniform vec4 _LightColor0; // color of light source (from "Lighting.cginc") void main() { /* code from that example shader */ gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; vec3 ecPosition = vec3(gl_ModelViewMatrix * gl_Vertex); vec3 tnorm = normalize(gl_NormalMatrix * gl_Normal); vec3 lightVec = normalize(_LP.xyz - ecPosition); vec3 reflectVec = reflect(-lightVec, tnorm); vec3 viewVec = normalize(-ecPosition); /* copied from https://en.wikibooks.org/wiki/GLSL_Programming/Unity/Specular_Highlights for testing stuff */ //I have no idea what I'm doing, but hopefully this computes some vectors which I need mat4 modelMatrix = _Object2World; mat4 modelMatrixInverse = _World2Object; // unity_Scale.w // is unnecessary because we normalize vectors vec3 normalDirection = normalize(vec3( vec4(gl_Normal, 0.0) * modelMatrixInverse)); vec3 viewDirection = normalize(vec3( vec4(_WorldSpaceCameraPos, 1.0) - modelMatrix * gl_Vertex)); vec3 lightDirection; float attenuation; if (0.0 == _WorldSpaceLightPos0.w) // directional light? { attenuation = 1.0; // no attenuation lightDirection = normalize(vec3(_WorldSpaceLightPos0)); } else // point or spot light { vec3 vertexToLightSource = vec3(_WorldSpaceLightPos0 - modelMatrix * gl_Vertex); float distance = length(vertexToLightSource); attenuation = 1.0 / distance; // linear attenuation lightDirection = normalize(vertexToLightSource); } /* test port */ // default ray dir //That the direction of the camera here? vec3 dir = viewDirection; //normalDirection;//viewDirection;// tnorm;//lightVec;//lightDirection;//normalDirection; //lightVec;//tnorm;//ray_dir(45.0, iResolution.xy, fragCoord.xy); // default ray origin //I think they mean the position of the camera here? vec3 eye = vec3(_WorldSpaceCameraPos); //vec3(_WorldSpaceLightPos0); //// vec3(0.0, 0.0, 0.0); //_WorldSpaceCameraPos;//ecPosition; //vec3(0.0, 0.0, 2.4); // rotate camera not needed, remove it // sun light dir //I think they mean the direciton of our directional light? vec3 l = lightDirection;//_LightColor0.xyz; //lightDirection; //normalDirection;//normalize(vec3(_WorldSpaceLightPos0));//lightVec;// vec3(0, 0, 1); /* this computes the intersection of the ray and the sphere.. is this really needed?*/ vec2 e = ray_vs_sphere(eye, dir, R); /* copy stuff sothat we can use it on the fragment shader, "discard" is only allowed in fragment shader, so the rest has to be computed in fragment shader */ eOutput = e; eyeOutput = eye; dirOutput = dir; lOutput = dir; } #endif #ifdef FRAGMENT uniform sampler2D _MainTex; varying vec2 TextureCoordinate; uniform vec4 _LC; varying float LightIntensity; /* transient port */ varying vec3 eyeOutput; varying vec3 dirOutput; varying vec3 lOutput; varying vec2 eOutput; void main() { /* real fragment */ if (eOutput.x > eOutput.y) { //discard; } vec2 f = ray_vs_sphere(eyeOutput, dirOutput, R_INNER); vec2 e = eOutput; ey = min(ey, fx); vec3 I = in_scatter(eyeOutput, dirOutput, eOutput, lOutput); gl_FragColor = vec4(I, 1.0); /*vec4 c2; c2.x = 1.0; c2.y = 1.0; c2.z = 0.0; c2.w = 1.0f; gl_FragColor = c2;*/ //gl_FragColor = c; } #endif ENDGLSL } } } 

Any help is appreciated, sorry for the long post and explanation.

Edit: I just found out that the scale of the sphere has an effect on the material; a sphere scaled by 2.0 in each direction gives a much better result. However, the image is still completely independent of the camera's viewing angle and of any lights, and it is nowhere near the Shadertoy version.

status2

1 answer

It looks like you are trying to apply a 2D texture over a sphere, which is a different approach. For what you are trying to do, I would apply the shader over a plane crossed by the sphere.

As a general guide, take a look at this article, which shows how to convert a Shadertoy shader to Unity3D.

Here are a few of the steps (a minimal converted skeleton is sketched after the list):

  • Replace the iGlobalTime shader input ("shader playback time in seconds") with _Time.y
  • Replace iResolution.xy ("resolution in pixels") with _ScreenParams.xy
  • Replace vec2 types with float2, mat2 with float2x2, etc.
  • Replace vec3(1) shortcut constructors, in which all elements have the same value, with explicit float3(1,1,1)
  • Replace texture2D() with tex2D()
  • Replace atan(x,y) with atan2(y,x) <- note the parameter order!
  • Replace mix() with lerp()
  • Replace *= with mul()
  • Remove the third (bias) parameter from texture2D lookups
  • mainImage(out vec4 fragColor, in vec2 fragCoord) is the fragment shader function, equivalent to float4 mainImage(float2 fragCoord : SV_POSITION) : SV_Target
  • UV coordinates in GLSL have 0 at the top and increase downwards; in HLSL 0 is at the bottom and increases upwards, so you may need to use uv.y = 1 - uv.y at some point.
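
To make the mapping concrete, here is a minimal sketch of what a converted skeleton can look like in Unity's CG/HLSL. This is not the atmosphere shader itself; the shader name and the simple animated test pattern are made up for illustration, and vert_img/v2f_img are the stock helpers from UnityCG.cginc:

 Shader "Unlit/ShadertoySkeleton" {
     SubShader {
         Pass {
             CGPROGRAM
             // vert_img / v2f_img come from UnityCG.cginc: a pass-through
             // vertex shader that forwards the UVs to the fragment shader.
             #pragma vertex vert_img
             #pragma fragment frag
             #include "UnityCG.cginc"

             // Equivalent of: void mainImage( out vec4 fragColor, in vec2 fragCoord )
             float4 frag(v2f_img i) : SV_Target {
                 // Rebuild Shadertoy-style pixel coordinates from the UVs.
                 float2 fragCoord = i.uv * _ScreenParams.xy; // iResolution.xy -> _ScreenParams.xy
                 float2 uv = fragCoord / _ScreenParams.xy;
                 // uv.y = 1.0 - uv.y;                       // flip if the image comes out upside down

                 // iGlobalTime -> _Time.y; just an animated test pattern here.
                 float3 col = float3(uv, 0.5 + 0.5 * sin(_Time.y));
                 return float4(col, 1.0);
             }
             ENDCG
         }
     }
 }

Put this on a quad (or drive it with Graphics.Blit) and you can then apply the same substitutions from the list to the actual scattering functions.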

About this question:

 Tags{ "Queue" = "Geometry" } //Is this even the right queue? 

The queue refers to the order in which things are rendered; Geometry is one of the first. If you want the shader to run over everything, you could use Overlay, for example. This topic is covered here. (A small example of setting the queue follows the list below.)

  • Background - this render queue is rendered before any others. It is used for skyboxes and the like.
  • Geometry (default) - this is used for most objects. Opaque geometry uses this queue.
  • AlphaTest - alpha-tested geometry uses this queue. It is a separate queue from Geometry because it is more efficient to render alpha-tested objects after all solid ones are drawn.
  • Transparent - this render queue is rendered after Geometry and AlphaTest, in back-to-front order. Anything alpha-blended (i.e. shaders that don't write to the depth buffer) should go here (glass, particle effects).
  • Overlay - this render queue is meant for overlay effects. Anything rendered last should go here (e.g. lens flares).
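
As an illustration (the Transparent queue here is just an example, not a recommendation for this particular shader), changing the queue only requires editing the Tags block of the SubShader:

 SubShader {
     // Render after opaque geometry and alpha-tested objects have been drawn.
     Tags { "Queue" = "Transparent" }

     Pass {
         // shader program goes here, unchanged
     }
 }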