I have an interesting problem I need to research, related to a very low-level video stream.
Does anyone have experience converting a raw byte stream (carrying per-pixel information, but not in any standard video format) into a low-resolution video stream? I believe I can map the data to an RGB value per pixel, since the color value corresponding to each value in the source data will be determined by us. I'm not sure where to go from there, though, or what the RGB format per pixel would need to be.
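To make that concrete, something like the sketch below is what I have in mind for the mapping step (the palette values here are just placeholders; the real colors would be whatever we decide each source value means):

```swift
import Foundation

// Hypothetical lookup table: each source byte (0...255) maps to a 4-byte RGBA pixel.
// The colors below are placeholders -- the real palette would be defined by us.
let palette: [UInt8] = (0...255).flatMap { v -> [UInt8] in
    [UInt8(v), UInt8(255 - v), 128, 255]   // R, G, B, A
}

// Expand one frame of raw source bytes into an RGBA buffer (4 bytes per pixel).
func rgbaFrame(from source: [UInt8]) -> [UInt8] {
    var out = [UInt8]()
    out.reserveCapacity(source.count * 4)
    for v in source {
        let base = Int(v) * 4
        out.append(contentsOf: palette[base..<base + 4])
    }
    return out
}
```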
I looked at FFmpeg, but the documentation is massive and I don't know where to start.
The specific questions I have are: is it possible to create a CVPixelBuffer from this pixel data? If I did that, what format would the per-pixel data need to be converted to?
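From poking around the Core Video headers, I think the call I'd be reaching for is something like the following, but the 32BGRA format and the bytes-per-row math are just my guesses:

```swift
import CoreVideo

// Untested sketch: wrap an existing buffer of 32BGRA bytes in a CVPixelBuffer.
// The width, height and byte layout are assumptions on my part.
func makePixelBuffer(from bgraBytes: UnsafeMutableRawPointer,
                     width: Int, height: Int) -> CVPixelBuffer? {
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreateWithBytes(
        kCFAllocatorDefault,
        width,
        height,
        kCVPixelFormatType_32BGRA,   // 4 bytes per pixel: B, G, R, A
        bgraBytes,
        width * 4,                   // bytes per row
        nil,                         // release callback
        nil,                         // release context
        nil,                         // pixel buffer attributes
        &pixelBuffer)
    return status == kCVReturnSuccess ? pixelBuffer : nil
}
```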
Also, should I dig deeper into OpenGL, and if so, where is the best place to look for information on that topic?
What about CGBitmapContextCreate? For example, if I went with something like that, what would a typical pixel byte need to look like? Would it be fast enough to keep up with frame rates above 20 frames per second?
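If I read the docs correctly, the kind of context I'd be creating is the one below, where a "typical pixel" would be four bytes (R, G, B, A at 8 bits each). This is only a sketch of what I think the call looks like, not something I've run:

```swift
import CoreGraphics

// Sketch of a bitmap context: 8 bits per component, 4 bytes per pixel,
// premultiplied-alpha RGBA, so each pixel in memory is [R, G, B, A].
func makeBitmapContext(width: Int, height: Int) -> CGContext? {
    return CGContext(
        data: nil,                         // let Core Graphics allocate the backing store
        width: width,
        height: height,
        bitsPerComponent: 8,
        bytesPerRow: width * 4,
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
}
```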
EDIT
I think that with the excellent help from you two, and some more research of my own, I've put together a plan: create the raw RGBA data, then build a CGImage from that data, and in turn create a CVPixelBuffer from that CGImage, as described here: CVPixelBuffer from CGImage.
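In code, the plan roughly looks like the sketch below (untested; the RGBA buffer, width and height come from my own data, and the 32ARGB/noneSkipFirst combination for the pixel buffer is just my guess):

```swift
import CoreGraphics
import CoreVideo

// Rough, untested sketch of the plan: raw RGBA bytes -> CGImage -> CVPixelBuffer.
// `rgba` is assumed to hold width * height * 4 bytes.
func pixelBuffer(fromRGBA rgba: [UInt8], width: Int, height: Int) -> CVPixelBuffer? {
    // 1. Wrap the raw bytes in a CGImage.
    guard let provider = CGDataProvider(data: Data(rgba) as CFData),
          let image = CGImage(width: width, height: height,
                              bitsPerComponent: 8, bitsPerPixel: 32,
                              bytesPerRow: width * 4,
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue),
                              provider: provider, decode: nil,
                              shouldInterpolate: false,
                              intent: .defaultIntent)
    else { return nil }

    // 2. Create an empty CVPixelBuffer and draw the CGImage into it.
    var pb: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                        kCVPixelFormatType_32ARGB, nil, &pb)
    guard let buffer = pb else { return nil }

    CVPixelBufferLockBaseAddress(buffer, [])
    defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

    guard let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                                  width: width, height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
    else { return nil }
    context.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))
    return buffer
}
```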
However, to then play this back in real time as the data arrives, I'm not sure what FPS I'd be looking at. Do I draw the CVPixelBuffers to a CALayer, or is there some class similar to AVAssetWriter that I could use for playback as I append CVPixelBuffers? The only experience I have is using AVAssetWriter to export constructed CoreAnimation hierarchies to video, so the videos are always fully built before they start playing rather than displayed as real-time video.
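If the CALayer route is the right one, the shape I'm imagining is roughly the following (assuming iOS and a CADisplayLink; `latestFrame` is just a placeholder for wherever the incoming frames end up), though I don't know whether this can keep up in real time:

```swift
import UIKit

// Minimal sketch of the CALayer idea: a CADisplayLink drives redraws, and each
// tick swaps the newest CGImage into layer.contents. Everything here is a
// placeholder for the real data path.
final class RawStreamDisplay: NSObject {
    let layer = CALayer()
    var latestFrame: CGImage?          // updated elsewhere as frames arrive

    private var displayLink: CADisplayLink?

    func start() {
        let link = CADisplayLink(target: self, selector: #selector(tick))
        link.add(to: .main, forMode: .common)
        displayLink = link
    }

    @objc private func tick() {
        if let image = latestFrame {
            layer.contents = image     // CALayer accepts a CGImage directly
        }
    }
}
```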