Convert ShaderToy to Fragment Shader

I came across several ShaderToy shaders, but I haven't managed to convert them to a format that can be used on a mobile device, for example a .fsh fragment shader.

I have this shader, and I want to be able to use it on a mobile device.

I know that I need to replace the iXXXX variables and change mainImage to main().

Does anyone know how to do this? I can't find any resources on it, and I have never done it myself.

    float noise(vec2 p)
    {
        float sample = texture2D(iChannel1, vec2(1., 2.*cos(iGlobalTime))*iGlobalTime*8. + p*1.).x;
        sample *= sample;
        return sample;
    }

    float onOff(float a, float b, float c)
    {
        return step(c, sin(iGlobalTime + a*cos(iGlobalTime*b)));
    }

    float ramp(float y, float start, float end)
    {
        float inside = step(start, y) - step(end, y);
        float fact = (y - start)/(end - start)*inside;
        return (1. - fact) * inside;
    }

    float stripes(vec2 uv)
    {
        float noi = noise(uv*vec2(0.5, 1.) + vec2(1., 3.));
        return ramp(mod(uv.y*4. + iGlobalTime/2. + sin(iGlobalTime + sin(iGlobalTime*0.63)), 1.), 0.5, 0.6)*noi;
    }

    vec3 getVideo(vec2 uv)
    {
        vec2 look = uv;
        float window = 1./(1. + 20.*(look.y - mod(iGlobalTime/4., 1.))*(look.y - mod(iGlobalTime/4., 1.)));
        look.x = look.x + sin(look.y*10. + iGlobalTime)/50.*onOff(4., 4., .3)*(1. + cos(iGlobalTime*80.))*window;
        float vShift = 0.4*onOff(2., 3., .9)*(sin(iGlobalTime)*sin(iGlobalTime*20.) + (0.5 + 0.1*sin(iGlobalTime*200.)*cos(iGlobalTime)));
        look.y = mod(look.y + vShift, 1.);
        vec3 video = vec3(texture2D(iChannel0, look));
        return video;
    }

    vec2 screenDistort(vec2 uv)
    {
        uv -= vec2(.5, .5);
        uv = uv*1.2*(1./1.2 + 2.*uv.x*uv.x*uv.y*uv.y);
        uv += vec2(.5, .5);
        return uv;
    }

    void mainImage(out vec4 fragColor, in vec2 fragCoord)
    {
        vec2 uv = fragCoord.xy / iResolution.xy;
        uv = screenDistort(uv);
        vec3 video = getVideo(uv);
        float vigAmt = 3. + .3*sin(iGlobalTime + 5.*cos(iGlobalTime*5.));
        float vignette = (1. - vigAmt*(uv.y - .5)*(uv.y - .5))*(1. - vigAmt*(uv.x - .5)*(uv.x - .5));
        video += stripes(uv);
        video += noise(uv*2.)/2.;
        video *= vignette;
        video *= (12. + mod(uv.y*30. + iGlobalTime, 1.))/13.;
        fragColor = vec4(video, 1.0);
    }
2 answers

I wrote main() below and listed the SpriteKit equivalents of the ShaderToy variables at the bottom of my answer.

Setup

To apply a shader to your node, you put the shader code in a .fsh file and tell SpriteKit to bind that shader to the SKSpriteNode.

  1. Create an empty text file ending in .fsh and put the shader code in it.


shader1.fsh

    void main()
    {
        // u_texture and v_tex_coord are SpriteKit built-ins; u_gradient is a
        // custom SKUniform texture (see the Swift sketch below).
        vec4 val = texture2D(u_texture, v_tex_coord);
        vec4 grad = texture2D(u_gradient, v_tex_coord);
        if (val.a < 0.1 && grad.r < 1.0 && grad.a > 0.8) {
            vec2 uv = gl_FragCoord.xy / u_sprite_size.xy;
            uv = screenDistort(uv);
            vec3 video = getVideo(uv);
            float vigAmt = 3. + .3*sin(u_time + 5.*cos(u_time*5.));
            float vignette = (1. - vigAmt*(uv.y - .5)*(uv.y - .5))*(1. - vigAmt*(uv.x - .5)*(uv.x - .5));
            video += stripes(uv);
            video += noise(uv*2.)/2.;
            video *= vignette;
            video *= (12. + mod(uv.y*30. + u_time, 1.))/13.;
            gl_FragColor = vec4(video, 1.0);
        } else {
            gl_FragColor = val;
        }
    } // end of main()
  2. Then add the shader to the sprite in SpriteKit.

shader1.swift

    let sprite = self.childNodeWithName("targetSprite") as! SKSpriteNode
    let shader = SKShader(fileNamed: "shader1.fsh")
    sprite.shader = shader
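Note that main() above samples u_gradient, and a converted ShaderToy shader will sample its iChannelX textures; these are custom uniforms, so they must be supplied via SKUniform before the node is drawn. A minimal sketch, with hypothetical asset names:

    // Bind the custom samplers the shader reads. The asset names
    // ("gradient", "video", "noise") are placeholders for your own textures.
    shader.uniforms = [
        SKUniform(name: "u_gradient", texture: SKTexture(imageNamed: "gradient")),
        SKUniform(name: "iChannel0", texture: SKTexture(imageNamed: "video")),
        SKUniform(name: "iChannel1", texture: SKTexture(imageNamed: "noise"))
    ]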

Description

  • The shader turns each pixel's color into an effect color (here via screenDistort(uv) and the other helpers).
  • main() is the entry point.
  • gl_FragColor is the return value.
  • This code runs once for every pixel in the image.
  • Each run sets the color of its pixel to the effect color. The vec4() call takes the values r, g, b, a (see the minimal example below).
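As a minimal, self-contained illustration of those points (a stand-alone .fsh, not part of the effect above):

    void main()
    {
        // Runs once per pixel; vec4(r, g, b, a) makes every pixel opaque red.
        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
    }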

ShaderToy variable names → SpriteKit variable names

iGlobalTime β†’ u_time

iResolution β†’ u_sprite_size

fragCoord.xy β†’ gl_FragCoord.xy

iChannelX → an SKUniform named "iChannelX" containing an SKTexture (bound as in the Swift sketch above)

fragColor β†’ gl_FragColor

Since you now have the SpriteKit equivalents, you can easily convert the remaining functions, which sit above main() (a converted sketch follows the list):

float noise {}

float onOff {}

float ramp {}

float stripes {}

vec3 getVideo {}

vec2 screenDistort {}
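Applying the table above to the question's helper functions gives something like the following sketch. It assumes iChannel0 and iChannel1 are bound as SKUniform textures with those names (as in the Swift snippet earlier); SpriteKit generates the uniform declarations for SKUniform-backed names, so none appear in the source:

    // The question's helpers with ShaderToy names swapped for SpriteKit ones.
    float noise(vec2 p)
    {
        // iChannel1 -> SKUniform texture; iGlobalTime -> u_time
        float s = texture2D(iChannel1, vec2(1., 2.*cos(u_time))*u_time*8. + p).x;
        return s * s;
    }

    float onOff(float a, float b, float c)
    {
        return step(c, sin(u_time + a*cos(u_time*b)));
    }

    float ramp(float y, float start, float end)
    {
        float inside = step(start, y) - step(end, y);
        float fact = (y - start)/(end - start)*inside;
        return (1. - fact) * inside;
    }

    float stripes(vec2 uv)
    {
        float noi = noise(uv*vec2(0.5, 1.) + vec2(1., 3.));
        return ramp(mod(uv.y*4. + u_time/2. + sin(u_time + sin(u_time*0.63)), 1.), 0.5, 0.6)*noi;
    }

    vec3 getVideo(vec2 uv)
    {
        vec2 look = uv;
        float window = 1./(1. + 20.*(look.y - mod(u_time/4., 1.))*(look.y - mod(u_time/4., 1.)));
        look.x += sin(look.y*10. + u_time)/50.*onOff(4., 4., .3)*(1. + cos(u_time*80.))*window;
        float vShift = 0.4*onOff(2., 3., .9)*(sin(u_time)*sin(u_time*20.) + (0.5 + 0.1*sin(u_time*200.)*cos(u_time)));
        look.y = mod(look.y + vShift, 1.);
        return vec3(texture2D(iChannel0, look));
    }

    vec2 screenDistort(vec2 uv)
    {
        uv -= vec2(.5, .5);
        uv = uv*1.2*(1./1.2 + 2.*uv.x*uv.x*uv.y*uv.y);
        uv += vec2(.5, .5);
        return uv;
    }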

Theory

Q. Why does main() reference texture2D, u_gradient, and v_tex_coord?

A. Because SpriteKit works with textures and UV coordinates: the shader samples a texture at the UV coordinate of the current fragment.
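The simplest possible case, a pass-through shader that just draws the node's own texture, shows the pattern (u_texture and v_tex_coord are SpriteKit built-ins):

    void main()
    {
        // Sample the node's texture at this fragment's UV coordinate.
        gl_FragColor = texture2D(u_texture, v_tex_coord);
    }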

UV mapping

UV mapping is the 3D-modeling process of projecting a 2D image onto the surface of a 3D model in order to texture it.

UV coordinates

When texturing a mesh, you need to tell OpenGL which part of the image should be used for each triangle. This is done with UV coordinates: each vertex can carry, in addition to its position, a pair of floats, U and V. These coordinates are used to sample the texture.

SKShader class reference

OpenGL ES for iOS

Shader Recommendations

WWDC Session 606 - What's New in SpriteKit - Shaders, Lighting, Shadows


This works for me in the Unity3D engine.

    // Upgrade NOTE: replaced 'mul(UNITY_MATRIX_MVP,*)' with 'UnityObjectToClipPos(*)'
    Shader "ShaderMan/Clip" {
        Properties {
            _MainTex("MainTex", 2D) = "white" {}
            _SecondTex("_SecondTex", 2D) = "white" {}
        }
        SubShader {
            Pass {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #pragma fragmentoption ARB_precision_hint_fastest
                #include "UnityCG.cginc"

                struct appdata {
                    float4 vertex : POSITION;
                    float2 uv : TEXCOORD0;
                };

                uniform sampler2D _MainTex;
                uniform fixed4 fragColor;
                uniform fixed iChannelTime[4];        // channel playback time (in seconds)
                uniform fixed3 iChannelResolution[4]; // channel resolution (in pixels)
                uniform fixed4 iMouse;                // mouse pixel coords. xy: current (if MLB down), zw: click
                uniform fixed4 iDate;                 // (year, month, day, time in seconds)
                uniform fixed iSampleRate;            // sound sample rate (ie, 44100)
                sampler2D _SecondTex;

                struct v2f {
                    float2 uv : TEXCOORD0;
                    float4 vertex : SV_POSITION;
                    float4 screenCoord : TEXCOORD1;
                };

                v2f vert(appdata v)
                {
                    v2f o;
                    o.vertex = UnityObjectToClipPos(v.vertex);
                    o.uv = v.uv;
                    o.screenCoord.xy = ComputeScreenPos(o.vertex);
                    return o;
                }

                fixed noise(fixed2 p)
                {
                    fixed sample = tex2D(_SecondTex, fixed2(1., 2.*cos(_Time.y))*_Time.y*8. + p*1.).x;
                    sample = mul(sample, sample);
                    return sample;
                }

                fixed onOff(fixed a, fixed b, fixed c)
                {
                    return step(c, sin(_Time.y + a*cos(_Time.y*b)));
                }

                fixed ramp(fixed y, fixed start, fixed end)
                {
                    fixed inside = step(start, y) - step(end, y);
                    fixed fact = (y - start)/(end - start)*inside;
                    return (1. - fact) * inside;
                }

                fixed stripes(fixed2 uv)
                {
                    fixed noi = noise(uv*fixed2(0.5, 1.) + fixed2(1., 3.));
                    return ramp(fmod(uv.y*4. + _Time.y/2. + sin(_Time.y + sin(_Time.y*0.63)), 1.), 0.5, 0.6)*noi;
                }

                fixed3 getVideo(fixed2 uv)
                {
                    fixed2 look = uv;
                    fixed window = 1./(1. + 20.*(look.y - fmod(_Time.y/4., 1.))*(look.y - fmod(_Time.y/4., 1.)));
                    look.x = look.x + sin(look.y*10. + _Time.y)/50.*onOff(4., 4., .3)*(1. + cos(_Time.y*80.))*window;
                    fixed vShift = 0.4*onOff(2., 3., .9)*(sin(_Time.y)*sin(_Time.y*20.) + (0.5 + 0.1*sin(_Time.y*200.)*cos(_Time.y)));
                    look.y = fmod(look.y + vShift, 1.);
                    fixed3 video = fixed3(tex2D(_MainTex, look).xyz);
                    return video;
                }

                fixed2 screenDistort(fixed2 uv)
                {
                    uv -= fixed2(.5, .5);
                    uv = uv*1.2*(1./1.2 + 2.*uv.x*uv.x*uv.y*uv.y);
                    uv += fixed2(.5, .5);
                    return uv;
                }

                fixed4 frag(v2f i) : SV_Target
                {
                    fixed2 uv = i.uv;
                    uv = screenDistort(uv);
                    fixed3 video = getVideo(uv);
                    fixed vigAmt = 3. + .3*sin(_Time.y + 5.*cos(_Time.y*5.));
                    fixed vignette = (1. - vigAmt*(uv.y - .5)*(uv.y - .5))*(1. - vigAmt*(uv.x - .5)*(uv.x - .5));
                    video += stripes(uv);
                    video += noise(uv*2.)/2.;
                    video = mul(video, vignette);
                    video = mul(video, (12. + fmod(uv.y*30. + _Time.y, 1.))/13.);
                    return fixed4(video, 1.0);
                }
                ENDCG
            }
        }
    }
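To try it, create a Material from the "ShaderMan/Clip" shader, assign the video frame to _MainTex and a noise texture to _SecondTex (the two properties declared in the Properties block), and apply the material to a quad or other mesh.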