Convert Raw Data to Displayed Video for iOS

I have an interesting problem I need to solve for my research, involving a very low-level video stream.

Does anyone have experience converting a raw byte stream (organized as per-pixel information, but not in a standard video format) into a low-resolution video stream? I believe I can map the data to an RGB value per pixel, since the color value corresponding to each value in the source data is determined by us. What I'm not sure about is where to go from there, or what the per-pixel RGB format would need to be.

I looked at FFmpeg, but its documentation is massive and I don't know where to start.

The specific questions I have: is it possible to create a CVPixelBuffer from this pixel data? If I did that, what per-pixel data format would I need to convert to?
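Something along these lines is what I imagine (an untested sketch in Swift; `rawBytes`, `width`, and `height` are placeholders for whatever my stream provides):

```swift
import CoreVideo

// Untested sketch: wrap an existing buffer of 32-bit BGRA pixels in a
// CVPixelBuffer. `rawBytes` must stay alive for as long as the buffer is used.
func makePixelBuffer(from rawBytes: UnsafeMutableRawPointer,
                     width: Int, height: Int) -> CVPixelBuffer? {
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreateWithBytes(
        nil,                         // default allocator
        width,
        height,
        kCVPixelFormatType_32BGRA,   // 8 bits per channel, BGRA byte order
        rawBytes,
        width * 4,                   // bytes per row: 4 bytes per pixel
        nil,                         // release callback
        nil,                         // release context
        nil,                         // pixel buffer attributes
        &pixelBuffer)
    return status == kCVReturnSuccess ? pixelBuffer : nil
}
```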

Also, should I go deeper into OpenGL, and if so, where is the best place to look for information on this topic?

What about CGBitmapContextCreate? For example, if I went with something like that, what would a typical pixel's bytes look like? Would it be fast enough to keep the frame rate above 20 frames per second?
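For example, this is roughly what I picture for the bitmap-context route (untested sketch; 4 bytes per pixel in R-G-B-A order, with `rawBytes` standing in for one decoded frame):

```swift
import CoreGraphics

// Untested sketch: build a CGImage around a frame of raw RGBA bytes.
// Each pixel occupies 4 bytes (red, green, blue, alpha) in this layout.
func makeImage(from rawBytes: UnsafeMutableRawPointer,
               width: Int, height: Int) -> CGImage? {
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    guard let context = CGContext(
        data: rawBytes,
        width: width,
        height: height,
        bitsPerComponent: 8,        // one byte per channel
        bytesPerRow: width * 4,     // 4 channels per pixel
        space: colorSpace,
        bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }
    return context.makeImage()      // snapshot of the current bytes
}
```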

EDIT

I think that, with the excellent help from you two and some more research of my own, I've put together a plan: create the raw RGBA data, build a CGImage from that data, and in turn create a CVPixelBuffer from that CGImage, along the lines of CVPixelBuffer from CGImage.
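As I understand the linked approach, the CGImage-to-CVPixelBuffer step would look something like this (untested sketch; a new buffer is created and the image is drawn into its memory):

```swift
import CoreVideo
import CoreGraphics

// Untested sketch: create a BGRA CVPixelBuffer and render a CGImage into it.
func pixelBuffer(from image: CGImage) -> CVPixelBuffer? {
    var buffer: CVPixelBuffer?
    CVPixelBufferCreate(nil, image.width, image.height,
                        kCVPixelFormatType_32BGRA, nil, &buffer)
    guard let pixelBuffer = buffer else { return nil }

    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    let context = CGContext(
        data: CVPixelBufferGetBaseAddress(pixelBuffer),
        width: image.width,
        height: image.height,
        bitsPerComponent: 8,
        bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
            | CGBitmapInfo.byteOrder32Little.rawValue)   // BGRA layout
    context?.draw(image, in: CGRect(x: 0, y: 0,
                                    width: image.width, height: image.height))
    CVPixelBufferUnlockBaseAddress(pixelBuffer, [])
    return pixelBuffer
}
```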

However, to play this back in real time as the data arrives, I'm not sure what kind of FPS I'll be looking at. Do I draw the frames onto a CALayer, or is there some class similar to AVAssetWriter that I could use for playback as I append CVPixelBuffers? My only experience is using AVAssetWriter to export constructed CoreAnimation hierarchies to video, so the videos are always fully built before they start playing, rather than being displayed as live video.
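If the CALayer route is viable, I picture something like this for the display side (untested sketch; `videoLayer` is just a layer I would add to my view hierarchy):

```swift
import Foundation
import QuartzCore

// Untested sketch: push each newly built CGImage into a CALayer as it arrives.
func display(_ frame: CGImage, on videoLayer: CALayer) {
    DispatchQueue.main.async {           // layer updates belong on the main thread
        videoLayer.contents = frame      // CALayer.contents accepts a CGImage
    }
}
```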

2 answers

I've done this before, and I know you recently found my GPUImage project. As I replied on the issue there, GPUImageRawDataInput is what you want for this, because it quickly uploads RGBA, BGRA, or RGB data directly into an OpenGL ES texture. From there, the frame data can be filtered, displayed to the screen, or recorded to a movie file.
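In rough terms, the pipeline looks like the following. This is only a hedged sketch of the idea: it assumes GPUImage's Objective-C interface (initWithBytes:size:, updateDataFromBytes:size:, processData, addTarget:) as bridged into Swift, and the exact bridged spellings may differ between GPUImage and Swift versions; `frameBytes` is a placeholder for one frame of your raw data.

```swift
import GPUImage
import UIKit

// Hedged sketch, not verbatim GPUImage sample code.
// `frameBytes` (UnsafeMutablePointer<GLubyte>) stands in for one RGBA frame
// from the incoming stream; the dimensions below are just example values.
let frameSize = CGSize(width: 480, height: 320)
let rawInput = GPUImageRawDataInput(bytes: frameBytes, size: frameSize)

// Display target: a GPUImageView added to the view hierarchy.
let previewView = GPUImageView(frame: UIScreen.main.bounds)
rawInput.addTarget(previewView)

// For every new frame that arrives from the stream:
rawInput.updateDataFromBytes(frameBytes, size: frameSize)
rawInput.processData()   // pushes the frame through the filter/display chain
```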

In my personal experience, your suggested path of going from a CGImage to a CVPixelBuffer will not give very good performance. There is too much overhead in passing through Core Graphics for real-time video. You want to go directly to OpenGL ES for the fastest display speed here.

I might even be able to improve my code to make it faster than it is now. I currently use glTexImage2D() to update texture data from local bytes, but it would probably be even faster to use the texture caches introduced in iOS 5.0 to speed up refreshing the data in a texture that keeps its size. There is some overhead in setting up the caches, which makes them a little slower for one-off uploads, but rapid data updates should be faster with them.
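For reference, the plain glTexImage2D() upload path mentioned above looks roughly like this (a sketch, assuming a current EAGLContext; `frameBytes`, `width`, and `height` are placeholders describing one RGBA frame):

```swift
import OpenGLES

// Sketch of a straight glTexImage2D upload; `frameBytes` (UnsafeRawPointer),
// `width`, and `height` are placeholders for one RGBA frame from the stream.
var texture: GLuint = 0
glGenTextures(1, &texture)
glBindTexture(GLenum(GL_TEXTURE_2D), texture)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)

// Re-run this upload for every new frame.
glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA,
             GLsizei(width), GLsizei(height), 0,
             GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), frameBytes)
```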


My 2 cents:

I made an OpenGL game that lets the user record a 3D scene. Playback was done by replaying the scene (rather than playing back a video), since real-time encoding did not give a usable FPS.

There is a technique that could help; unfortunately, I did not have time to implement it: http://allmybrain.com/2011/12/08/rendering-to-a-texture-with-ios-5-texture-cache-api/

This technique should reduce the time it takes to get pixels back from OpenGL, so you may be able to reach an acceptable video encoding rate.
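For contrast, the straightforward readback that usually limits the encoding frame rate looks like this (a sketch; the dimensions are example values for the framebuffer being read):

```swift
import OpenGLES

// Sketch of the slow path the texture-cache technique is meant to replace:
// glReadPixels copies the rendered frame back to CPU memory every frame.
let width = 480, height = 320                     // example framebuffer size
var frameBytes = [GLubyte](repeating: 0, count: width * height * 4)
glReadPixels(0, 0, GLsizei(width), GLsizei(height),
             GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), &frameBytes)
// frameBytes can then be wrapped in a CVPixelBuffer and handed to
// AVAssetWriterInputPixelBufferAdaptor for encoding.
```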

