Segmentation fault using glGetString() with pthreads under Linux

I am trying to load textures in a background thread to speed up my application.

The stack is C/C++ on Linux, compiled with gcc. We use OpenGL, GLUT and GLEW, and libSOIL to load textures.

Ultimately, starting a texture load with libSOIL fails because it reaches a glGetString() call, which segfaults. Trying to narrow down the problem, I wrote a very simple OpenGL application that reproduces the behavior. The code example below should not do anything useful, but it should not segfault either. If I knew why this happens, I could in theory rework libSOIL to behave well in a pthreaded environment.

    void *glPthreadTest( void* arg )
    {
        glGetString( GL_EXTENSIONS ); // SIGSEGV
        return NULL;
    }

    int main( int argc, char **argv )
    {
        glutInit( &argc, argv );
        glutInitDisplayMode( GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH );
        glewInit();

        glGetString( GL_EXTENSIONS ); // Does not cause SIGSEGV

        pthread_t id;
        if ( pthread_create( &id, NULL, glPthreadTest, (void*)NULL ) != 0 )
            fprintf( stderr, "phtread_create glPthreadTest failed.\n" );

        glutMainLoop();
        return EXIT_SUCCESS;
    }

An example stacktrace for this application from gdb is as follows:

    #0  0x00000038492f86e9 in glGetString () from /usr/lib64/nvidia/libGL.so.1
    No symbol table info available.
    #1  0x0000000000404425 in glPthreadTest (arg=0x0) at sf.cpp:168
    No locals.
    #2  0x0000003148e07d15 in start_thread (arg=0x7ffff7b36700) at pthread_create.c:308
            __res = <optimized out>
            pd = 0x7ffff7b36700
            now = <optimized out>
            unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140737349117696, -5802871742031723458, 1, 211665686528, 140737349117696, 0, 5802854601940796478, -5829171783283899330}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
            not_first_call = 0
            pagesize_m1 = <optimized out>
            sp = <optimized out>
            freesize = <optimized out>
    #3  0x00000031486f246d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:114
    No locals.

You will notice that I am using the NVIDIA libGL implementation, but the same thing happens with Mesa's libGL, which Ubuntu uses for Intel HD graphics.

Any tips on what might be going wrong, or how to investigate further to find out what is going on?

Edit: here are the #includes and the compile line for my test example:

    #include <SOIL.h>
    #include <GL/glew.h>
    #include <GL/freeglut.h>
    #include <GL/freeglut_ext.h>
    #include <signal.h>
    #include <pthread.h>
    #include <cstdio>

g++ -Wall -pedantic -I/usr/include/SOIL -O0 -ggdb -o sf sf.cpp -lSOIL -pthread -lGL -lGLU -lGLEW -lglut -lX11

1 answer

For any OpenGL call to work correctly, it needs an OpenGL context. Contexts are created with a window-system binding call (for example, wglCreateContext or the like). Once the context is created, it needs to be "made current", which means associating the context with the calling thread of execution. This is done with another window-system call (wglMakeCurrent on Microsoft Windows, glXMakeCurrent on the X Window System). GLUT abstracts all of this complexity away from you, performing these operations when you call glutCreateWindow.
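To make that concrete, here is a rough sketch of the GLX calls that glutCreateWindow performs for you on X11. The helper name and attribute list are illustrative assumptions, not GLUT's actual internals:

    /* Sketch of context creation with raw GLX/Xlib, assuming a simple
     * RGBA double-buffered visual. Error checking is omitted. */
    #include <GL/glx.h>
    #include <X11/Xlib.h>

    GLXContext create_context_sketch( Display **out_dpy, Window *out_win )
    {
        Display *dpy = XOpenDisplay( NULL );          /* connect to the X server */
        static int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 24, None };
        XVisualInfo *vi = glXChooseVisual( dpy, DefaultScreen( dpy ), attribs );

        /* Create a window whose visual matches the GLX visual. */
        XSetWindowAttributes swa;
        swa.colormap = XCreateColormap( dpy, RootWindow( dpy, vi->screen ),
                                        vi->visual, AllocNone );
        swa.event_mask = ExposureMask;
        Window win = XCreateWindow( dpy, RootWindow( dpy, vi->screen ), 0, 0, 640, 480, 0,
                                    vi->depth, InputOutput, vi->visual,
                                    CWColormap | CWEventMask, &swa );
        XMapWindow( dpy, win );

        /* Create the OpenGL context and bind it to *this* thread. */
        GLXContext ctx = glXCreateContext( dpy, vi, NULL, True );
        glXMakeCurrent( dpy, win, ctx );              /* GL calls are legal here from now on */

        *out_dpy = dpy;
        *out_win = win;
        return ctx;
    }

In your test program, only the main thread has gone through this make-current step (via GLUT), which is why glGetString works there and crashes in the freshly created pthread.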

Now, it is important to know that only one OpenGL context can be current for a thread of execution at any given time. So in the OP's original example, if they were able to make the context current in the pthread they created, the context would be lost in the main thread. The only way to keep all of this consistent is to use a single context in a single thread. (It is possible for OpenGL contexts to share data, but that is not exposed by GLUT and cannot be done without using the window-system context creation calls.)

In your case, GLUT most likely does not expose what you really need (namely the OpenGL context), so you cannot make it current in another thread. You will need to create and manage the OpenGL contexts yourself.
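As a sketch of what that could look like with GLX (not something GLUT or libSOIL provides: the struct, helper names, and the way the Display, visual and main context are obtained are all assumptions), the worker thread gets its own context, created with the main context as its share list, and makes it current before touching GL:

    /* Sketch only: assumes the Display*, XVisualInfo* and the main context are
     * available, e.g. saved when the window was created without GLUT, or read
     * in the main thread via glXGetCurrentDisplay()/glXGetCurrentContext(). */
    #include <GL/glx.h>
    #include <pthread.h>

    struct worker_args {
        Display    *dpy;
        GLXDrawable drawable;   /* the window, or a small pbuffer */
        GLXContext  worker_ctx; /* created with the main context as share list */
    };

    void *texture_loader( void *arg )
    {
        struct worker_args *wa = (struct worker_args *)arg;

        /* Bind the worker's own context to this thread; only now are GL calls
         * (glGetString, libSOIL texture loads, ...) legal here. */
        if ( !glXMakeCurrent( wa->dpy, wa->drawable, wa->worker_ctx ) )
            return NULL;

        const GLubyte *ext = glGetString( GL_EXTENSIONS ); /* no longer segfaults */
        (void)ext;
        /* ... load textures with libSOIL here; the resulting texture objects are
         * visible to the main context because the two contexts share objects ... */

        glXMakeCurrent( wa->dpy, None, NULL );             /* unbind before exiting */
        return NULL;
    }

    /* In the main thread, after the main context exists: */
    void spawn_loader( Display *dpy, GLXDrawable drawable, XVisualInfo *vi,
                       GLXContext main_ctx, struct worker_args *wa )
    {
        wa->dpy = dpy;
        wa->drawable = drawable;
        /* Third argument is the share list: the worker context shares textures
         * and other objects with main_ctx. */
        wa->worker_ctx = glXCreateContext( dpy, vi, main_ctx, True );

        pthread_t tid;
        pthread_create( &tid, NULL, texture_loader, wa );  /* join or detach elsewhere */
    }

Because the two contexts share objects, texture names created by the worker can be used from the main thread once loading has finished; you still need your own synchronization (for example, have the worker call glFinish and then set a flag) before the main thread starts rendering with them.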

