Help fixing line scaling in Android OpenGL ES 2.0 with QCAR

I am working with the QCAR AR SDK on Android, which uses OpenGL ES 2.0, and I am new to 2.0. The QCAR SDK is for computer-vision-based AR applications and uses OpenGL to render content on top of the camera images.

I simply want to draw a small X in the center of the screen, using the code below. But instead of being drawn at the intended coordinates, the X extends all the way to the edges of the screen. This happens no matter what values I assign to the vertices. I can't figure out whether this is a scaling issue or some confusion about the coordinate system I'm using.

Any ideas why these lines are not drawn correctly? I know this would be easier in OpenGL ES 1.1, but I have to use 2.0.

Thnx

// Clear color and depth buffer
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);

GLfloat diagVertices[12];
diagVertices[0] = -10;  diagVertices[1]  = -10;  diagVertices[2]  = 0.0f;
diagVertices[3] =  10;  diagVertices[4]  =  10;  diagVertices[5]  = 0.0f;
diagVertices[6] = -10;  diagVertices[7]  =  10;  diagVertices[8]  = 0.0f;
diagVertices[9] =  10;  diagVertices[10] = -10;  diagVertices[11] = 0.0f;

glUseProgram(diagonalShaderProgramID);

// map the border vertices
glVertexAttribPointer(diagVertexHandle, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*) &diagVertices[0]);

// draw it
glEnableVertexAttribArray(diagVertexHandle);
glLineWidth(3.0f);
glDrawArrays(GL_LINES, 0, 4);
glDisableVertexAttribArray(diagVertexHandle);

Here are the shaders I use.

static const char* diagLineMeshVertexShader = " \
  \
  attribute vec4 vertexPosition; \
  \
  void main() \
  { \
      gl_Position = vertexPosition; \
  } \
";

static const char* diagLineFragmentShader = " \
  \
  precision mediump float; \
  \
  void main() \
  { \
      gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); \
  } \
";

Update:

So, I set up a desktop build environment on Windows 7 (64-bit) using Eclipse and Cygwin and tested the same approach of drawing vertex attribute arrays. The codebase is derived from a simple lighthouse3D sample demonstrating GLSL. I compiled and ran the sample and confirmed that it renders as expected. Then I added the vertex arrays as above, and I see exactly the same problem: the lines extend to the edges of the window regardless of the vertex values.

This is on GL_VERSION 2.1.2. The way I set up the vertex attribute arrays and render them seems identical to the other examples I found in the reference material.

Here is the code. I have commented out the sections of the lighthouse3D code that I modified.

#define WIN32

#include <stdio.h>
#include <stdlib.h>

#include <GL/Glee.h>
#include <GL/glut.h>

#include "textfile.h"

GLuint v, f, f2, p;

float lpos[4] = {1, 0.5, 1, 0};

GLfloat crossVertices[12];
GLint lineVertexHandle = 0;

void changeSize(int w, int h) {

    // Prevent a divide by zero, when window is too short
    // (you cant make a window of zero width).
    if(h == 0)
        h = 1;

    float ratio = 1.0 * w / h;

    // Reset the coordinate system before modifying
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();

    // Set the viewport to be the entire window
    glViewport(0, 0, w, h);

    // Set the correct perspective.
    //gluPerspective(45,ratio,1,1000);
    gluPerspective(45, ratio, 1, 10);
    glMatrixMode(GL_MODELVIEW);
}

void renderScene(void) {

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glLoadIdentity();
    gluLookAt(0.0, 0.0, 5.0,
              0.0, 0.0, 0.0,
              0.0f, 1.0f, 0.0f);

    glLightfv(GL_LIGHT0, GL_POSITION, lpos);
    //glutSolidTeapot(1);

    // map the border vertices
    glVertexAttribPointer(lineVertexHandle, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*) &crossVertices[0]);
    glEnableVertexAttribArray(lineVertexHandle);
    glLineWidth(1.0f);
    glDrawArrays(GL_LINES, 0, 4);
    glDisableVertexAttribArray(lineVertexHandle);

    glutSwapBuffers();
}

void processNormalKeys(unsigned char key, int x, int y) {

    if (key == 27)
        exit(0);
}

void setShaders() {

    char *vs = NULL, *fs = NULL, *fs2 = NULL;

    v = glCreateShader(GL_VERTEX_SHADER);
    f = glCreateShader(GL_FRAGMENT_SHADER);
    f2 = glCreateShader(GL_FRAGMENT_SHADER);

    vs = textFileRead("toon.vert");
    fs = textFileRead("toon.frag");
    fs2 = textFileRead("toon2.frag");

    const char * ff = fs;
    const char * ff2 = fs2;
    const char * vv = vs;

    glShaderSource(v, 1, &vv, NULL);
    glShaderSource(f, 1, &ff, NULL);
    glShaderSource(f2, 1, &ff2, NULL);

    free(vs); free(fs);

    glCompileShader(v);
    glCompileShader(f);
    glCompileShader(f2);

    p = glCreateProgram();
    glAttachShader(p, f);
    glAttachShader(p, f2);
    glAttachShader(p, v);

    glLinkProgram(p);
    glUseProgram(p);
}

void defineVertices() {
    crossVertices[0] =  10.0f;      crossVertices[1]  = 0.0f;         crossVertices[2]  = 0.0f;
    crossVertices[3] = -1 * 10.0f;  crossVertices[4]  = 0.0f;         crossVertices[5]  = 0.0f;
    crossVertices[6] =  0.0f;       crossVertices[7]  = 10.0f;        crossVertices[8]  = 0.0f;
    crossVertices[9] =  0.0f;       crossVertices[10] = -1 * 10.0f;   crossVertices[11] = 0.0f;
}

int main(int argc, char **argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
    glutInitWindowPosition(100, 100);
    glutInitWindowSize(320, 320);
    glutCreateWindow("MM 2004-05");

    glutDisplayFunc(renderScene);
    glutIdleFunc(renderScene);
    glutReshapeFunc(changeSize);
    glutKeyboardFunc(processNormalKeys);

    glEnable(GL_DEPTH_TEST);
    glClearColor(1.0, 1.0, 1.0, 1.0);
    glEnable(GL_CULL_FACE);

    /*
    glewInit();
    if (glewIsSupported("GL_VERSION_2_0"))
        printf("Ready for OpenGL 2.0\n");
    else {
        printf("OpenGL 2.0 not supported\n");
        exit(1);
    }
    */

    setShaders();
    defineVertices();

    glutMainLoop();

    // just for compatibiliy purposes
    return 0;
}

And here is the vertex shader, taken from the lighthouse3D example:

varying vec3 normal, lightDir;

void main()
{
    lightDir = normalize(vec3(gl_LightSource[0].position));
    normal = normalize(gl_NormalMatrix * gl_Normal);

    gl_Position = ftransform();
}

Any ideas on what could be causing this?

1 answer

In your vertex shader, you just pass the vertex positions straight through to the rasterizer, without transforming them by any modelview or projection matrix. While that is perfectly valid, you still have to be aware of the range your coordinates end up in.

After the vertex processing stage, your coordinates have to lie inside the [-1,1] cube; everything outside of it gets clipped away. That cube is then mapped to screen space, e.g. [0,w] x [0,h] x [0,1], by the viewport transformation. Your coordinates range from -10 to 10, so your line is actually about 10 times the size of the screen. If your values are meant to be pixels, you should scale your x,y values from [-w/2,w/2] x [-h/2,h/2] down to [-1,1] in the vertex shader.
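To make the clipping concrete, here is a rough sketch (not taken from either of the posted projects) of what the viewport transformation does to a single x coordinate, assuming glViewport(0, 0, w, h):

/* NDC x in [-1, 1] maps linearly to window x in [0, w];
 * the analogous formula with h applies to y */
float ndc_to_window_x(float ndc_x, int w)
{
    return (ndc_x + 1.0f) * 0.5f * (float)w;
}
/* With the posted vertices, x = 10 would map to 5.5 * w, far outside the
 * window, so all but the small middle piece of the line is clipped away and
 * what remains appears to run from edge to edge. */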

The same problem occurs in your desktop GL project. There you call ftransform in the shader, but your projection matrix is a simple perspective matrix that does not scale your coordinates that way. So in that project, replace the gluPerspective call with glOrtho(-0.5*w, 0.5*w, -0.5*h, 0.5*h, -1.0, 1.0) if you want the line coordinates to be in pixels.
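A sketch of how the reshape callback from your desktop code might look with that change (same variable names as the posted code, but treat it as an untested illustration):

void changeSize(int w, int h)
{
    if (h == 0)
        h = 1;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();

    /* keep the viewport covering the whole window */
    glViewport(0, 0, w, h);

    /* orthographic projection in which one unit corresponds to one pixel,
     * with the origin at the center of the window */
    glOrtho(-0.5 * w, 0.5 * w, -0.5 * h, 0.5 * h, -1.0, 1.0);

    glMatrixMode(GL_MODELVIEW);
}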

Also keep in mind that in OpenGL the y axis points from bottom to top by default. If you want it the other way around (as many image processing frameworks do), you have to negate your y coordinate in the vertex shader (and likewise swap the third and fourth arguments of the glOrtho call in the desktop project). But be aware that this flips the winding order of any triangles you render, if any.
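For the desktop project, that swap would look like this, again only as an illustration using the same w and h as the reshape callback:

/* same pixel-sized orthographic projection, but with bottom and top swapped,
 * so y grows downward on screen */
glOrtho(-0.5 * w, 0.5 * w, 0.5 * h, -0.5 * h, -1.0, 1.0);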

So, for example, in your vertex shader, just do something like:

uniform vec2 screenSize;        // contains the screen size in pixels
attribute vec2 vertexPosition;  // why take 4 if you only need 2?

void main()
{
    gl_Position = vec4(2.0 * vertexPosition / screenSize, 0.0, 1.0);
}

This gives you a coordinate system in pixels, with the origin at the center and the y axis going from bottom to top. If that is not what you need, feel free to adjust the conversion (it can also be optimized by precomputing 2.0/screenSize on the CPU). But always keep in mind that the space after the vertex shader is the [-1,1] cube, which is then mapped to the actual screen space in pixels by the viewport transformation (i.e. whatever values you gave to glViewport).
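On the application side, the screenSize uniform from the shader above has to be filled in once the program is in use. A minimal sketch, where shaderProgramID, screenWidth and screenHeight are placeholder names for whatever your code uses for the linked program and the viewport size:

/* look up the uniform in the linked program and set it to the viewport size
 * in pixels (the program must be active via glUseProgram when glUniform2f is called) */
GLint screenSizeHandle = glGetUniformLocation(shaderProgramID, "screenSize");
glUniform2f(screenSizeHandle, (GLfloat)screenWidth, (GLfloat)screenHeight);

/* alternatively, precompute the division on the CPU and pass 2.0/width and
 * 2.0/height instead, then multiply by them in the vertex shader */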
