Android: render movie with FFmpeg and OpenGL ES

I am trying to render video through the NDK, to add some features that are simply not supported in the SDK. I am using FFmpeg to decode the video; it compiles via the NDK, and I used this as a starting point. I modified that example so that, instead of painting a texture with glDrawTexiOES, it sets up some vertices and renders the texture on top of them (the OpenGL ES way of rendering a quad).

The following is what I do to render each frame, but creating the texture with glTexImage2D is slow. I want to know if there is a way to speed it up, or to make it appear faster, for example by preparing upcoming textures on a background thread and displaying textures prepared earlier. Or is there any other way to draw video frames to the screen faster on Android? Currently I get about 12 frames per second.

    glClear(GL_COLOR_BUFFER_BIT);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glBindTexture(GL_TEXTURE_2D, textureConverted);

    // this is slow
    glTexImage2D(GL_TEXTURE_2D,     /* target */
                 0,                 /* level */
                 GL_RGBA,           /* internal format */
                 textureWidth,      /* width */
                 textureHeight,     /* height */
                 0,                 /* border */
                 GL_RGBA,           /* format */
                 GL_UNSIGNED_BYTE,  /* type */
                 pFrameConverted->data[0]);

    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, indices);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);

EDIT: I changed my code to call glTexImage2D only once at initialization, and to update the texture with glTexSubImage2D afterwards; it did not improve the frame rate.

Then I changed the code to fill my own Bitmap object from the NDK. With this approach I have a background thread that runs ahead and fills the Bitmap object with the next frames. I think this has potential, but I need to speed up the conversion of FFmpeg's AVFrame object into a native Bitmap. Below is what I currently use to convert; it is a brute-force approach. Is there any way to speed up or optimize this conversion?

    static void fill_bitmap(AndroidBitmapInfo* info, void *pixels, AVFrame *pFrame)
    {
        uint8_t *frameLine;
        int yy;
        for (yy = 0; yy < info->height; yy++) {
            uint8_t *line = (uint8_t *)pixels;
            frameLine = (uint8_t *)pFrame->data[0] + (yy * pFrame->linesize[0]);

            int xx;
            for (xx = 0; xx < info->width; xx++) {
                int out_offset = xx * 4;
                int in_offset = xx * 3;
                line[out_offset]     = frameLine[in_offset];
                line[out_offset + 1] = frameLine[in_offset + 1];
                line[out_offset + 2] = frameLine[in_offset + 2];
                line[out_offset + 3] = 0;
            }
            pixels = (char *)pixels + info->stride;
        }
    }
4 answers

Yes, creating textures (and buffers, shaders, and framebuffers) is slow.

That is why you should create the texture only once. After creating it, you can update its contents by calling glTexSubImage2D.

And to speed up the upload of texture data, create two textures. While one is used for display, upload the next frame's data from FFmpeg into the second. When the second is shown, upload into the first, and keep alternating.

Even so, I don't think it will be particularly fast. You could try the jnigraphics library, which lets you access the pixels of a Bitmap object from the NDK. After that, you simply draw this bitmap to the screen from the Java side.


A few minor additions may solve your problem: first convert the AVFrame to RGB with swscale, then upload it directly to the texture, i.e.:

    AVPicture *pFrameConverted;
    struct SwsContext *img_convert_ctx;

    void init() {
        pFrameConverted = (AVPicture *)avcodec_alloc_frame();
        avpicture_alloc(pFrameConverted, AV_PIX_FMT_RGB565, videoWidth, videoHeight);
        img_convert_ctx = sws_getCachedContext(NULL,
                videoWidth, videoHeight, pCodecCtx->pix_fmt,
                videoWidth, videoHeight, AV_PIX_FMT_RGB565,
                SWS_FAST_BILINEAR, NULL, NULL, NULL);
    }

    void render(AVFrame *pFrame) {
        sws_scale(img_convert_ctx,
                  (uint8_t const * const *)pFrame->data, pFrame->linesize,
                  0, pFrame->height,
                  pFrameConverted->data, pFrameConverted->linesize);
        glClear(GL_COLOR_BUFFER_BIT);
        // format/type must match the RGB565 buffer allocated above
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, videoWidth, videoHeight,
                        GL_RGB, GL_UNSIGNED_SHORT_5_6_5,
                        pFrameConverted->data[0]);
        glDrawTexiOES(0, 0, 0, videoWidth, videoHeight);
    }

Maybe you can use jnigraphics as in https://github.com/havlenapetr/FFMpeg/commits/debug. But if you get YUV data after decoding the frame, you have to convert it to RGB565, which is too slow. Using the Android media player is a better idea.


Yes, you can optimize this code:

    static void fill_bitmap(AndroidBitmapInfo* info, void *pixels, AVFrame *pFrame)
    {
        uint8_t *frameLine;
        int yy;
        for (yy = 0; yy < info->height; yy++) {
            uint8_t *line = (uint8_t *)pixels;
            frameLine = (uint8_t *)pFrame->data[0] + (yy * pFrame->linesize[0]);

            int xx;
            for (xx = 0; xx < info->width; xx++) {
                int out_offset = xx * 4;
                int in_offset = xx * 3;
                line[out_offset]     = frameLine[in_offset];
                line[out_offset + 1] = frameLine[in_offset + 1];
                line[out_offset + 2] = frameLine[in_offset + 2];
                line[out_offset + 3] = 0;
            }
            pixels = (char *)pixels + info->stride;
        }
    }

to be something like:

    static void fill_bitmap(AndroidBitmapInfo* info, void *pixels, AVFrame *pFrame)
    {
        uint8_t *frameLine = (uint8_t *)pFrame->data[0];
        int yy;
        for (yy = 0; yy < info->height; yy++) {
            uint8_t *line = (uint8_t *)pixels;

            int xx;
            int out_offset = 0;
            int in_offset = 0;
            for (xx = 0; xx < info->width; xx++) {
                line[out_offset]     = frameLine[in_offset];
                line[out_offset + 1] = frameLine[in_offset + 1];
                line[out_offset + 2] = frameLine[in_offset + 2];
                line[out_offset + 3] = 0;
                out_offset += 4;
                in_offset += 3;
            }
            pixels = (char *)pixels + info->stride;
            frameLine += pFrame->linesize[0];
        }
    }

This will save you a few cycles.
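Going one step further, the per-pixel offset arithmetic can be dropped entirely by walking two pointers across each row. A self-contained sketch (the function name and strides are illustrative; note it writes 0xFF to the alpha byte, since the 0 used above would make an ARGB_8888 bitmap fully transparent):

```c
#include <stdint.h>
#include <stddef.h>

/* RGB24 -> RGBA conversion using moving pointers instead of per-pixel
 * offset math; src_stride and dst_stride play the roles of
 * pFrame->linesize[0] and info->stride respectively. */
static void rgb24_to_rgba(const uint8_t *src, int src_stride,
                          uint8_t *dst, int dst_stride,
                          int width, int height)
{
    int y, x;
    for (y = 0; y < height; y++) {
        const uint8_t *in = src;
        uint8_t *out = dst;
        for (x = 0; x < width; x++) {
            out[0] = in[0];
            out[1] = in[1];
            out[2] = in[2];
            out[3] = 0xFF;   /* opaque alpha */
            in += 3;
            out += 4;
        }
        src += src_stride;   /* strides may be wider than width*bpp */
        dst += dst_stride;
    }
}
```

In practice, though, libswscale's sws_scale already performs this conversion with hand-optimized (often SIMD) code paths, so it is usually faster than any hand-rolled C loop.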

