Okay, I'm mainly working on a simple video player, and I will probably ask another question later about audio/video synchronization delay, but for now I have a problem with the audio itself. I managed to decode all of the video's audio frames, append them to a vector buffer, and then play the audio from that buffer using OpenAL.
This is inefficient and dangerous memory-wise, so I need to stream the audio instead, using what I believe is called a rotating (ring) buffer. I've run into problems, one of which is that there isn't much information about streaming with OpenAL, let alone the proper way to decode audio with FFmpeg and feed it to OpenAL. I'm also not comfortable using a vector for my buffer, because I honestly don't have a great grasp of how vectors work in C++, but somehow I managed to hack something together that works.
I currently have a video class that looks like this:
class Video {
public:
    Video(string MOV);
    ~Video();
    bool HasError();
    string GetError();
    void UpdateVideo();
    void RenderToQuad(float Width, float Height);
    void CleanTexture();
private:
    string FileName;
    bool Error;
    int videoStream, audioStream, FrameFinished, ErrorLevel;
    AVPacket packet;
    AVFormatContext *pFormatCtx;
    AVCodecContext *pCodecCtx, *aCodecCtx;
    AVCodec *pCodec, *aCodec;
    AVFrame *pFrame, *pFrameRGB, *aFrame;
    GLuint VideoTexture;
    struct SwsContext *swsContext;
    ALint state;
    ALuint bufferID, sourceID;
    ALenum format;
    ALsizei freq;
    vector<uint8_t> bufferData;
};
The lower private variables are the relevant ones here. I currently decode the audio in the class constructor into an AVFrame and append the data to bufferData as follows:
av_init_packet(&packet);
alGenBuffers(1, &bufferID);
alGenSources(1, &sourceID);
alListener3f(AL_POSITION, 0.0f, 0.0f, 0.0f);

int GotFrame = 0;
freq = aCodecCtx->sample_rate;
if (aCodecCtx->channels == 1)
    format = AL_FORMAT_MONO16;
else
    format = AL_FORMAT_STEREO16;

while (av_read_frame(pFormatCtx, &packet) >= 0) {
    if (packet.stream_index == audioStream) {
        avcodec_decode_audio4(aCodecCtx, aFrame, &GotFrame, &packet);
        bufferData.insert(bufferData.end(), aFrame->data[0],
                          aFrame->data[0] + aFrame->linesize[0]);
        av_free_packet(&packet);
    }
}
av_seek_frame(pFormatCtx, audioStream, 0, AVSEEK_FLAG_BACKWARD);

alBufferData(bufferID, format, &bufferData[0],
             static_cast<ALsizei>(bufferData.size()), freq);
alSourcei(sourceID, AL_BUFFER, bufferID);
In my UpdateVideo(), where I decode the video into an OpenGL texture as I read the stream, I assume it would make sense to also decode and stream my audio:
void Video::UpdateVideo() {
    alGetSourcei(sourceID, AL_SOURCE_STATE, &state);
    if (state != AL_PLAYING)
        alSourcePlay(sourceID);

    if (av_read_frame(pFormatCtx, &packet) >= 0) {
        if (packet.stream_index == videoStream) {
            avcodec_decode_video2(pCodecCtx, pFrame, &FrameFinished, &packet);
            if (FrameFinished) {
                sws_scale(swsContext, pFrame->data, pFrame->linesize, 0,
                          pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);
                av_free_packet(&packet);
            }
        } else if (packet.stream_index == audioStream) {
            // audio packets are currently ignored here
        }

        glGenTextures(1, &VideoTexture);
        glBindTexture(GL_TEXTURE_2D, VideoTexture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, 3, pCodecCtx->width, pCodecCtx->height,
                     0, GL_RGB, GL_UNSIGNED_BYTE, pFrameRGB->data[0]);
    } else {
        av_seek_frame(pFormatCtx, videoStream, 0, AVSEEK_FLAG_BACKWARD);
    }
}
So, I think the big question is: how do I do this? I have no idea. Any help is appreciated, thanks!