Building the projection, model and view transformations for the vertex shader yourself

I have looked around and never found, nailed down in one place, exactly what each matrix does and what operations build it (that is, the actual function calls). That is what I am looking for, or at least a description of the process and a couple of examples with real functions so I can see how it is done. Anyway, here are some details in case they are useful:

I am making a top-down game (so the camera points down, but can rotate and move in the XY plane). Since I will have some 3D elements (along with things that are strictly 2D), I think a perspective projection will work fine, but I also wonder what calls are needed to build an orthographic projection...

I gather the view is built by translating the camera coordinates to the origin, rotating by the camera's rotation, translating back to where they were, and then scaling to zoom? But exactly which functions and objects are involved, I am not sure.

And for storing the rotation of any given object, a quaternion is apparently the best choice, so that is what will drive the model matrix? If my rotations reduce to the two-dimensional case of a single angle, are quaternions wasteful?

And do all these matrices need to be regenerated every frame? Or can they be modified in place to match the new data?

I would rather use Eigen for this than a hand-rolled library, but I need something working so I can figure out exactly what is happening... I have all the GLSL plumbing and the uniform matrices feeding into the rendering of my VAO, I just need to understand and build the matrices themselves.

Edit:
My vertex shader uses the standard setup with three uniform matrices multiplied against the vertex position:

gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(in_Position, 1.0); 

Could mat3s and vec2s be used to get better performance in the purely 2D cases?

+8
c++ game-engine opengl eigen
2 answers

Here is an example of lookAt and setPerspective functions, which build the view and projection matrices from simple inputs:

 void Camera::lookAt(const Eigen::Vector3f& position, const Eigen::Vector3f& target, const Eigen::Vector3f& up)
 {
     // Build the camera's rotation: columns are the camera's right, up, and back axes.
     Eigen::Matrix3f R;
     R.col(2) = (position - target).normalized();   // back (the camera looks along -Z)
     R.col(0) = up.cross(R.col(2)).normalized();    // right
     R.col(1) = R.col(2).cross(R.col(0));           // true up

     // The view matrix is the inverse of the camera's world transform.
     mViewMatrix.topLeftCorner<3,3>() = R.transpose();
     mViewMatrix.topRightCorner<3,1>() = -R.transpose() * position;
     mViewMatrix(3,3) = 1.0f;                       // bottom row stays (0, 0, 0, 1)
 }

 void Camera::setPerspective(float fovY, float aspect, float near, float far)
 {
     float theta  = fovY * 0.5f;
     float range  = far - near;
     float invtan = 1.0f / tan(theta);

     mProjectionMatrix(0,0) = invtan / aspect;
     mProjectionMatrix(1,1) = invtan;
     mProjectionMatrix(2,2) = -(near + far) / range;
     mProjectionMatrix(3,2) = -1.0f;
     mProjectionMatrix(2,3) = -2.0f * near * far / range;
     mProjectionMatrix(3,3) = 0.0f;
 }
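For example, with a top-down camera you might call them like this (the camera position, window size, and clip planes here are only illustrative values, not taken from the code above):

 // Hypothetical usage: camera 10 units above the origin, looking straight down,
 // with world +Y as the on-screen "up" direction.
 mCamera.lookAt(Eigen::Vector3f(0.0f, 0.0f, 10.0f),   // camera position
                Eigen::Vector3f(0.0f, 0.0f,  0.0f),   // target on the XY plane
                Eigen::Vector3f(0.0f, 1.0f,  0.0f));  // up vector
 mCamera.setPerspective(float(M_PI) / 4.0f,            // 45 degree vertical FOV, in radians
                        width / float(height),         // viewport aspect ratio
                        0.1f, 100.0f);                 // near and far planes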

Then you can upload the matrices to GL:

 glUniformMatrix4fv(glGetUniformLocation(mProgram.id(), "mat_view"), 1, GL_FALSE, mCamera.viewMatrix().data());
 glUniformMatrix4fv(glGetUniformLocation(mProgram.id(), "mat_proj"), 1, GL_FALSE, mCamera.projectionMatrix().data());

For the model transform (it is better to keep the view and the model separate), you can use Eigen's Geometry module with the Scaling, Translation, and Quaternion classes to assemble an Affine3f object.
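A minimal sketch of that, assuming each object stores a position, a quaternion orientation, and a uniform scale (the function and variable names are illustrative, not from the question):

 #include <Eigen/Geometry>

 Eigen::Matrix4f makeModelMatrix(const Eigen::Vector3f& position,
                                 const Eigen::Quaternionf& orientation,
                                 float scale)
 {
     // Compose translation * rotation * scale into one affine transform.
     Eigen::Affine3f model = Eigen::Translation3f(position)
                           * orientation.normalized()
                           * Eigen::Scaling(scale);
     return model.matrix();   // 4x4, column-major, ready for glUniformMatrix4fv
 }

The result can be uploaded exactly like the view and projection matrices above, e.g. to a "mat_model" uniform.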

+11

Shaders are executed for every vertex submitted to the rendering pipeline. To get maximum performance, you usually do the per-draw work on the CPU, pass the result to every shader invocation through a uniform, and leave only the per-vertex work in the shader.

In the example you posted, it is better to have the shader compute a single mat4 * vec4 product instead of mat4 * mat4 * mat4 * vec4, that is:

gl_Position = modelviewprojectionMatrix * vec4(in_Position, 1.0);

where modelviewprojectionMatrix is the result of projectionMatrix * viewMatrix * modelMatrix. That matrix product is computed on the CPU side, once for each set of vertices (each object) to be rendered.
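On the CPU side that could look like this (assuming the Eigen-based camera from the other answer and an illustrative "mat_mvp" uniform name):

 // Compute the combined matrix once per object on the CPU...
 Eigen::Matrix4f mvp = mCamera.projectionMatrix() * mCamera.viewMatrix() * modelMatrix;
 glUniformMatrix4fv(glGetUniformLocation(mProgram.id(), "mat_mvp"),
                    1, GL_FALSE, mvp.data());
 // ...so the vertex shader does a single mat4 * vec4 per vertex:
 //     gl_Position = mat_mvp * vec4(in_Position, 1.0);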

How you organize the data needed to produce those model-view-projection matrices is up to your requirements. The actual performance depends on the scene graph being rendered; for example, if your objects only translate (perhaps only in the XY plane), it is enough to store a translation vector per object and build the matrices only when they are needed.
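For instance, if an object only ever translates in the XY plane, you could store just a 2D position and build the full matrix on demand (a sketch, not code from the question):

 // Build a model matrix for an object that only translates in the XY plane.
 Eigen::Matrix4f modelFromXY(const Eigen::Vector2f& posXY)
 {
     Eigen::Affine3f m(Eigen::Translation3f(posXY.x(), posXY.y(), 0.0f));
     return m.matrix();
 }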

Matrices are combined with the standard algebraic matrix product. This holds for model matrices and projection matrices alike: two transformations are composed by multiplying their matrices.

0
