Efficient particle system in JavaScript? (WebGL)

I am trying to write a program that does some basic simulation of particle gravity physics. I originally wrote the program using standard JavaScript graphics (a 2D canvas context), and I could get about 25 fps with 10,000 particles that way. I rewrote the program in WebGL under the assumption that I could achieve better results. I am also using the glMatrix library for vector math. However, with this implementation I only get about 15 fps with 10,000 particles.

I am currently an EECS student and have a reasonable amount of programming experience, but never with graphics, and I know little about optimizing JavaScript code. I don't really understand how WebGL and JavaScript interact under the hood. What are the key factors that affect performance with these technologies? Is there a more efficient data structure for managing my particles (I currently just use a simple array)? What could explain the drop in performance with WebGL? Perhaps the latency between the GPU and JavaScript?

Any suggestions, explanations or assistance in general are welcome.

I will try to include only the main areas of my code for reference.

Here is my setup code:

    gl = null;
    try {
        // Try to grab the standard context. If it fails, fall back to experimental.
        gl = canvas.getContext("webgl") || canvas.getContext("experimental-webgl");
        gl.viewportWidth = canvas.width;
        gl.viewportHeight = canvas.height;
    } catch (e) {}

    if (gl) {
        gl.clearColor(0.0, 0.0, 0.0, 1.0);
        gl.clearDepth(1.0);          // Clear everything
        gl.enable(gl.DEPTH_TEST);    // Enable depth testing
        gl.depthFunc(gl.LEQUAL);     // Near things obscure far things
        // Initialize the shaders; this is where all the lighting for the
        // vertices and so forth is established.
        initShaders();
        // Here's where we call the routine that builds all the objects
        // we'll be drawing.
        initBuffers();
    } else {
        alert("WebGL unable to initialize");
    }

    /* Initialize actors */
    for (var i = 0; i < NUM_SQS; i++) {
        sqs.push(new Square(canvas.width * Math.random(), canvas.height * Math.random(), 1, 1));
    }

    /* Begin animation loop by referencing the drawFrame() method */
    gl.bindBuffer(gl.ARRAY_BUFFER, squareVerticesBuffer);
    gl.vertexAttribPointer(vertexPositionAttribute, 2, gl.FLOAT, false, 0, 0);
    requestAnimationFrame(drawFrame, canvas);

Draw loop:

    function drawFrame() {
        // Clear the canvas before we start drawing on it.
        gl.clear(gl.COLOR_BUFFER_BIT);
        //mvTranslate([-0.0, 0.0, -6.0]);
        for (var i = 0; i < NUM_SQS; i++) {
            sqs[i].accelerate();
            /* Translate current buffer (?) */
            gl.uniform2fv(translationLocation, sqs[i].posVec);
            /* Draw current buffer (?) */
            gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
        }
        window.requestAnimationFrame(drawFrame, canvas);
    }

Here is the class that Square inherits from:

    function PhysicsObject(startX, startY, size, mass) {
        /* Class instance variables */
        this.posVec = vec2.fromValues(startX, startY);
        this.velVec = vec2.fromValues(0.0, 0.0);
        this.accelVec = vec2.fromValues(0.0, 0.0);
        this.mass = mass;
        this.size = size;

        this.accelerate = function() {
            var r2 = vec2.sqrDist(GRAV_VEC, this.posVec) + EARTH_RADIUS;
            var dirVec = vec2.create();
            vec2.set(this.accelVec, G_CONST_X / r2, G_CONST_Y / r2);
            /* Make dirVec a unit vector in the direction of the gravitational acceleration */
            vec2.sub(dirVec, GRAV_VEC, this.posVec);
            vec2.normalize(dirVec, dirVec);
            /* Point the acceleration vector in the direction of dirVec */
            vec2.multiply(this.accelVec, this.accelVec, dirVec); //vec2.fromValues(canvas.width*.5-this.posVec[0],canvas.height*.5-this.posVec[1])));
            vec2.add(this.velVec, this.velVec, this.accelVec);
            vec2.add(this.posVec, this.posVec, this.velVec);
        };
    }

These are the shaders that I use:

    <script id="shader-fs" type="x-shader/x-fragment">
        void main(void) {
            gl_FragColor = vec4(0.7, 0.8, 1.0, 1.0);
        }
    </script>

    <!-- Vertex shader program -->
    <script id="shader-vs" type="x-shader/x-vertex">
        attribute vec2 a_position;
        uniform vec2 u_resolution;
        uniform vec2 u_translation;

        void main() {
            // Add in the translation.
            vec2 position = a_position + u_translation;

            // convert the rectangle from pixels to 0.0 to 1.0
            vec2 zeroToOne = position / u_resolution;

            // convert from 0->1 to 0->2
            vec2 zeroToTwo = zeroToOne * 2.0;

            // convert from 0->2 to -1->+1 (clipspace)
            vec2 clipSpace = zeroToTwo - 1.0;

            gl_Position = vec4(clipSpace * vec2(1, -1), 0, 1);
        }
    </script>

I apologize for being long-winded. Again, any suggestions or nudges in the right direction would be hugely appreciated.

+7

2 answers

You should never draw primitives individually. Draw them all at once whenever possible. Create an ArrayBuffer that contains the positions and any other needed attributes of all the particles, and then draw the entire buffer with a single call to gl.drawArrays. I can't give exact instructions because I am on a mobile phone, but searching for VBOs, interleaved arrays, and particles in OpenGL will undoubtedly turn up examples and other useful resources.
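
A minimal sketch of what that setup could look like for the code in the question (particleBuffer and positions are illustrative names; vertexPositionAttribute is the attribute location already used in the setup code):

    // One shared buffer holding every particle's position (x, y per particle).
    var positions = new Float32Array(NUM_SQS * 2);
    var particleBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, particleBuffer);
    // DYNAMIC_DRAW hints that the contents will be re-uploaded every frame.
    gl.bufferData(gl.ARRAY_BUFFER, positions, gl.DYNAMIC_DRAW);
    gl.vertexAttribPointer(vertexPositionAttribute, 2, gl.FLOAT, false, 0, 0);
    gl.enableVertexAttribArray(vertexPositionAttribute);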

I render 5 million static points at about 10 fps. Dynamic points will be slower, since you have to keep sending the updated data to the graphics card every frame, but it will still be far faster than 15 fps for 10,000 points.
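
Continuing the sketch above, the question's drawFrame() could then update the CPU-side array, upload it once, and issue a single draw call for all particles (gl.POINTS is used here, per the edit below; with per-vertex positions the u_translation uniform is no longer needed):

    function drawFrame() {
        gl.clear(gl.COLOR_BUFFER_BIT);
        // Update every particle and copy its position into the shared array.
        for (var i = 0; i < NUM_SQS; i++) {
            sqs[i].accelerate();
            positions[i * 2]     = sqs[i].posVec[0];
            positions[i * 2 + 1] = sqs[i].posVec[1];
        }
        // One upload and one draw call instead of 10,000 of each.
        gl.bindBuffer(gl.ARRAY_BUFFER, particleBuffer);
        gl.bufferSubData(gl.ARRAY_BUFFER, 0, positions);
        gl.drawArrays(gl.POINTS, 0, NUM_SQS);
        window.requestAnimationFrame(drawFrame);
    }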

Edit:

You might want to use gl.POINTS instead of TRIANGLE_STRIP. That way you only have to specify a position per particle, plus gl_PointSize in the vertex shader, for each square. gl.POINTS are rasterized as squares!
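
For example, a point-sprite version of the question's vertex shader could look like this (only a sketch; the script id is arbitrary and the pixel-to-clip-space math is taken from the original shader):

    <script id="shader-vs-points" type="x-shader/x-vertex">
        attribute vec2 a_position;      // per-particle position, read straight from the buffer
        uniform vec2 u_resolution;

        void main() {
            // Same pixel-to-clip-space conversion as before, minus the per-square u_translation.
            vec2 clipSpace = (a_position / u_resolution) * 2.0 - 1.0;
            gl_Position = vec4(clipSpace * vec2(1, -1), 0, 1);
            gl_PointSize = 1.0;         // each particle rasterizes as a 1x1-pixel square
        }
    </script>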

You can take a look at the source of these two point renderers:

https://github.com/asalga/XB-PointStream
http://potree.org/wp/download/ (the following files might help: WeightedPointSizeMaterial.js, pointSize.vs, colorredPoint.fs)
+6

It depends on what you are trying to do. When you say "gravity", do you mean some kind of physics simulation with collisions, or do you just mean velocity += acceleration; position += velocity?

If the latter, then you can do all the math in the shader. Example here

https://www.khronos.org/registry/webgl/sdk/demos/google/particles/index.html

Those particles run entirely in the shader. The only input after setup is time. Each "particle" consists of 4 vertices. Each vertex contains (one way to lay these out in a buffer is sketched after the list):

  • local_position (corner of a unit quad)
  • texture_coord
  • lifetime
  • starting_position
  • starting_time
  • velocity
  • acceleration
  • start_size
  • end_size
  • orientation (quaternion)
  • color factor
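
To make that concrete, here is a rough sketch of how a few of those attributes could be interleaved in a single buffer (the attribute locations such as startPositionLoc are hypothetical names, and this layout is illustrative rather than the demo's actual one):

    // Illustrative layout per vertex: [startX, startY, startZ, velX, velY, velZ, startTime, lifetime]
    var FLOATS_PER_VERTEX = 8;
    var STRIDE = FLOATS_PER_VERTEX * 4;   // stride in bytes (4 bytes per float)

    gl.bindBuffer(gl.ARRAY_BUFFER, particleDataBuffer);
    gl.vertexAttribPointer(startPositionLoc, 3, gl.FLOAT, false, STRIDE, 0);
    gl.vertexAttribPointer(velocityLoc,      3, gl.FLOAT, false, STRIDE, 3 * 4);
    gl.vertexAttribPointer(startTimeLoc,     1, gl.FLOAT, false, STRIDE, 6 * 4);
    gl.vertexAttribPointer(lifetimeLoc,      1, gl.FLOAT, false, STRIDE, 7 * 4);
    // (each of these locations also needs gl.enableVertexAttribArray(location) once)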

At a given time, you can calculate the local particle time (the time since it started)

  local_time = time - starting_time; 

Then you can calculate the position with

  base_position = start_position + velocity * local_time + acceleration * local_time * local_time; 

The last term is acceleration * time². Then you add local_position to that base_position to get the position needed to render the quad.

You can also compute a 0-to-1 lerp value over the particle's lifetime:

  lerp = local_time / lifetime; 

That gives you a value you can use to lerp all the other values:

  size = mix(start_size, end_size, lerp); 

You give the particle a size of 0 if it is outside its lifetime:

  if (lerp < 0.0 || lerp > 1.0) { size = 0.0; } 

This will cause the GPU to draw nothing.

Using a ramp texture (1xN pixel texture), you can easily change the colors of the particles over time.

  color = texture2D(rampTexture, vec2(lerp, 0.5)); 

etc...
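
Putting the pieces above together, the vertex-shader side of such a particle might look roughly like the sketch below. It is only an illustration of the math described above, not the demo's actual shader: the demo uses 4-vertex quads with a local_position offset and billboarding, while this version collapses each particle to a single point for brevity, and the attribute and uniform names are just placeholders.

    <script id="shader-vs-particle" type="x-shader/x-vertex">
        attribute vec3 starting_position;
        attribute vec3 velocity;
        attribute vec3 acceleration;
        attribute float starting_time;
        attribute float lifetime;
        attribute float start_size;
        attribute float end_size;

        uniform float time;               // the only per-frame input
        uniform mat4 u_viewProjection;

        varying float v_lerp;             // passed to the fragment shader for the ramp lookup

        void main() {
            float local_time = time - starting_time;
            float lerp = local_time / lifetime;

            vec3 base_position = starting_position
                               + velocity * local_time
                               + acceleration * local_time * local_time;

            float size = mix(start_size, end_size, lerp);
            if (lerp < 0.0 || lerp > 1.0) {
                size = 0.0;               // dead or not-yet-born particles draw nothing
            }

            v_lerp = lerp;
            gl_Position = u_viewProjection * vec4(base_position, 1.0);
            gl_PointSize = size;
        }
    </script>

The fragment shader can then perform the color = texture2D(rampTexture, vec2(v_lerp, 0.5)) lookup described above.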

If you follow the shaders, you'll see other things handled similarly, including particle rotation (which would be harder with point sprites), texture animation across frames, and both 2D- and 3D-oriented particles. 2D particles are fine for smoke, exhaust, fire, and explosions. 3D particles are good for ripples, possibly tire tracks, and can be combined with 2D particles for ground puffs to hide some of the z-issues of 2D-only particles, etc.

There are also examples of one-shots (explosions, puffs) as well as trails. Press 'P' for a puff. Hold 'T' to see a trail.

AFAIK these are pretty efficient particles, because JavaScript is doing almost nothing per frame.

+3
