How to write a Live555 FramedSource that will allow me to stream H.264 live

I am trying to write a class derived from FramedSource in Live555 that will let me stream the live output of my D3D9 application to MP4 or similar.

What I do each frame is grab the backbuffer into system memory as a texture, convert it from RGB to YUV420p, encode it with x264, and then ideally hand the resulting NAL packets over to Live555. I created a class called H264FramedSource, derived from FramedSource, essentially by copying the DeviceSource file. Instead of the input being a file, I made it a NAL packet that I update every frame.

I am new to codecs and streaming, so I could be doing everything completely wrong. In each doGetNextFrame(), should I be grabbing the NAL packet and doing something like

    memcpy(fTo, nal->p_payload, nal->i_payload);

I assume that the payload is my frame data in bytes? If anyone has an example of a class they derived from FramedSource that is at least close to what I'm trying to do, I would really like to see it. This is all new to me and it's a little difficult to figure out what's going on. Live555's documentation is pretty much the code itself, which doesn't make it easy for me to understand.
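Roughly, I imagine doGetNextFrame() would end up looking something like the sketch below, where getNextEncodedNal() is just a hypothetical helper standing in for wherever I keep the encoded NALs (I don't know if this is right, hence the question):

    void H264FramedSource::doGetNextFrame()
    {
        // Hypothetical helper: fetch the next encoded NAL unit, if any
        x264_nal_t* nal = getNextEncodedNal();
        if (nal == NULL) return;

        unsigned newFrameSize = nal->i_payload;
        if (newFrameSize > fMaxSize) {
            // Live555 only gives us fMaxSize bytes per delivery
            fFrameSize = fMaxSize;
            fNumTruncatedBytes = newFrameSize - fMaxSize;
        } else {
            fFrameSize = newFrameSize;
        }

        gettimeofday(&fPresentationTime, NULL);
        memcpy(fTo, nal->p_payload, fFrameSize);

        // Tell Live555 the frame is ready
        FramedSource::afterGetting(this);
    }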

2 answers

Well, I finally got some time to spend on this and got it working! I'm sure there are others who will want to know how to do it, so here it is.

You will need your own FramedSource class to take each frame, encode it, and prepare it for streaming. I will provide some of the source code for this below.

Essentially plug your FramedSource into an H264VideoStreamDiscreteFramer, then plug that into an H264RTPSink. Something like this:

    scheduler = BasicTaskScheduler::createNew();
    env = BasicUsageEnvironment::createNew(*scheduler);

    framedSource = H264FramedSource::createNew(*env, 0, 0);
    h264VideoStreamDiscreteFramer
        = H264VideoStreamDiscreteFramer::createNew(*env, framedSource);

    // initialise the RTP Sink stuff here, look at
    // testH264VideoStreamer.cpp to find out how

    videoSink->startPlaying(*h264VideoStreamDiscreteFramer, NULL, videoSink);

    env->taskScheduler().doEventLoop();
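For reference, the sink setup glossed over above looks roughly like this in testH264VideoStreamer.cpp (the SSM multicast address is chosen at random, the ports are just the ones the test program uses, and newer Live555 releases handle the address types slightly differently):

    struct in_addr destinationAddress;
    destinationAddress.s_addr = chooseRandomIPv4SSMAddress(*env);

    const unsigned short rtpPortNum = 18888;
    const unsigned short rtcpPortNum = rtpPortNum + 1;
    const unsigned char ttl = 255;

    Groupsock rtpGroupsock(*env, destinationAddress, Port(rtpPortNum), ttl);
    Groupsock rtcpGroupsock(*env, destinationAddress, Port(rtcpPortNum), ttl);

    OutPacketBuffer::maxSize = 100000; // room for large NAL units

    RTPSink* videoSink = H264VideoRTPSink::createNew(*env, &rtpGroupsock, 96);

    // Create (and start) an RTCP instance for this RTP sink, as in the test program:
    const unsigned estimatedSessionBandwidth = 500; // kbps
    const unsigned maxCNAMElen = 100;
    unsigned char CNAME[maxCNAMElen + 1];
    gethostname((char*)CNAME, maxCNAMElen);
    CNAME[maxCNAMElen] = '\0';
    RTCPInstance::createNew(*env, &rtcpGroupsock, estimatedSessionBandwidth,
                            CNAME, videoSink, NULL /* we're a server */,
                            True /* we're an SSM source */);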

Now, in your main render loop, hand the backbuffer that you saved into system memory over to your FramedSource so that it can be encoded, etc. For more information on how to set up the encoding, check this answer: How to encode a series of images into H264 using the x264 C API?
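I'm not posting my actual capture code, but the per-frame hand-off amounts to something like the sketch below. It assumes a 720x960 X8R8G8B8 backbuffer and a sysmemSurface created once in D3DPOOL_SYSTEMMEM with the same size and format; converting the 32-bit pixels to the packed RGB24 that my sws_getContext call expects (or switching the sws source format to a 32-bit one) is left out:

    // Sketch only: copy the backbuffer to system memory and pass it on.
    IDirect3DSurface9* backBuffer = NULL;
    device->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &backBuffer);
    device->GetRenderTargetData(backBuffer, sysmemSurface); // GPU -> system memory
    backBuffer->Release();

    D3DLOCKED_RECT locked;
    if (SUCCEEDED(sysmemSurface->LockRect(&locked, NULL, D3DLOCK_READONLY)))
    {
        // locked.pBits is 32-bit XRGB here; either convert it to packed RGB24
        // before calling AddToBuffer, or change the sws_getContext source
        // format to a 32-bit one and pass the 4-byte stride instead.
        framedSource->AddToBuffer((uint8_t*)locked.pBits, locked.Pitch * 960);
        sysmemSurface->UnlockRect();
    }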

My implementation is very hacky and not at all optimised yet; my d3d application runs at about 15 frames per second because of the encoding, so I will have to look into that. But for all intents and purposes this answers the question, since I was mainly after how to stream it. I hope this helps other people.

As for my FramedSource, it looks a little something like this:

    concurrent_queue<x264_nal_t> m_queue;
    SwsContext* convertCtx;
    x264_param_t param;
    x264_t* encoder;
    x264_picture_t pic_in, pic_out;

    EventTriggerId H264FramedSource::eventTriggerId = 0;
    unsigned H264FramedSource::FrameSize = 0;
    unsigned H264FramedSource::referenceCount = 0;

    int W = 720;
    int H = 960;

    H264FramedSource* H264FramedSource::createNew(UsageEnvironment& env,
                                                   unsigned preferredFrameSize,
                                                   unsigned playTimePerFrame)
    {
        return new H264FramedSource(env, preferredFrameSize, playTimePerFrame);
    }

    H264FramedSource::H264FramedSource(UsageEnvironment& env,
                                       unsigned preferredFrameSize,
                                       unsigned playTimePerFrame)
        : FramedSource(env),
          fPreferredFrameSize(fMaxSize),
          fPlayTimePerFrame(playTimePerFrame),
          fLastPlayTime(0),
          fCurIndex(0)
    {
        if (referenceCount == 0)
        {
        }
        ++referenceCount;

        // Set up the x264 encoder:
        x264_param_default_preset(&param, "veryfast", "zerolatency");
        param.i_threads = 1;
        param.i_width = 720;
        param.i_height = 960;
        param.i_fps_num = 60;
        param.i_fps_den = 1;
        // Intra refresh:
        param.i_keyint_max = 60;
        param.b_intra_refresh = 1;
        // Rate control:
        param.rc.i_rc_method = X264_RC_CRF;
        param.rc.f_rf_constant = 25;
        param.rc.f_rf_constant_max = 35;
        param.i_sps_id = 7;
        // For streaming:
        param.b_repeat_headers = 1;
        param.b_annexb = 1;
        x264_param_apply_profile(&param, "baseline");

        encoder = x264_encoder_open(&param);
        pic_in.i_type = X264_TYPE_AUTO;
        pic_in.i_qpplus1 = 0;
        pic_in.img.i_csp = X264_CSP_I420;
        pic_in.img.i_plane = 3;
        x264_picture_alloc(&pic_in, X264_CSP_I420, 720, 960);

        // RGB24 -> YUV420P conversion context:
        convertCtx = sws_getContext(720, 960, PIX_FMT_RGB24,
                                    720, 960, PIX_FMT_YUV420P,
                                    SWS_FAST_BILINEAR, NULL, NULL, NULL);

        if (eventTriggerId == 0)
        {
            eventTriggerId = envir().taskScheduler().createEventTrigger(deliverFrame0);
        }
    }

    H264FramedSource::~H264FramedSource()
    {
        --referenceCount;
        if (referenceCount == 0)
        {
            // Reclaim our 'event trigger'
            envir().taskScheduler().deleteEventTrigger(eventTriggerId);
            eventTriggerId = 0;
        }
    }

    void H264FramedSource::AddToBuffer(uint8_t* buf, int surfaceSizeInBytes)
    {
        uint8_t* surfaceData = new uint8_t[surfaceSizeInBytes];
        memcpy(surfaceData, buf, surfaceSizeInBytes);

        // Convert the RGB frame to YUV420P and encode it:
        int srcstride = W * 3;
        sws_scale(convertCtx, &surfaceData, &srcstride, 0, H,
                  pic_in.img.plane, pic_in.img.i_stride);

        x264_nal_t* nals = NULL;
        int i_nals = 0;
        int frame_size = x264_encoder_encode(encoder, &nals, &i_nals, &pic_in, &pic_out);

        if (frame_size >= 0)
        {
            static bool alreadydone = false;
            if (!alreadydone)
            {
                x264_encoder_headers(encoder, &nals, &i_nals);
                alreadydone = true;
            }
            for (int i = 0; i < i_nals; ++i)
            {
                m_queue.push(nals[i]);
            }
        }

        delete [] surfaceData;
        surfaceData = NULL;

        // Wake up the Live555 event loop so it can deliver the new NALs:
        envir().taskScheduler().triggerEvent(eventTriggerId, this);
    }

    void H264FramedSource::doGetNextFrame()
    {
        deliverFrame();
    }

    void H264FramedSource::deliverFrame0(void* clientData)
    {
        ((H264FramedSource*)clientData)->deliverFrame();
    }

    void H264FramedSource::deliverFrame()
    {
        x264_nal_t nalToDeliver;

        if (fPlayTimePerFrame > 0 && fPreferredFrameSize > 0)
        {
            if (fPresentationTime.tv_sec == 0 && fPresentationTime.tv_usec == 0)
            {
                // This is the first frame, so use the current time:
                gettimeofday(&fPresentationTime, NULL);
            }
            else
            {
                // Increment by the play time of the previous data:
                unsigned uSeconds = fPresentationTime.tv_usec + fLastPlayTime;
                fPresentationTime.tv_sec += uSeconds / 1000000;
                fPresentationTime.tv_usec = uSeconds % 1000000;
            }

            // Remember the play time of this data:
            fLastPlayTime = (fPlayTimePerFrame * fFrameSize) / fPreferredFrameSize;
            fDurationInMicroseconds = fLastPlayTime;
        }
        else
        {
            // We don't know a specific play time duration for this data,
            // so just record the current time as being the 'presentation time':
            gettimeofday(&fPresentationTime, NULL);
        }

        if (!m_queue.empty())
        {
            m_queue.wait_and_pop(nalToDeliver);

            uint8_t* newFrameDataStart = (uint8_t*)nalToDeliver.p_payload;
            unsigned newFrameSize = nalToDeliver.i_payload;

            // Deliver the data here, truncating if it won't fit in fTo:
            if (newFrameSize > fMaxSize)
            {
                fFrameSize = fMaxSize;
                fNumTruncatedBytes = newFrameSize - fMaxSize;
            }
            else
            {
                fFrameSize = newFrameSize;
            }

            memcpy(fTo, newFrameDataStart, fFrameSize);

            FramedSource::afterGetting(this);
        }
    }

Oh, and for those who want to know what my concurrent queue is, here it is, and it works brilliantly: http://www.justsoftwaresolutions.co.uk/threading/implementing-a-thread-safe-queue-using-condition-variables.html
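If you would rather not pull that code in wholesale, a minimal C++11 version with just the three members my class uses (push, empty, wait_and_pop) would look something like this (the linked article itself uses Boost primitives):

    #include <queue>
    #include <mutex>
    #include <condition_variable>

    // Minimal thread-safe queue along the lines of the linked article.
    template <typename T>
    class concurrent_queue
    {
    public:
        void push(const T& value)
        {
            {
                std::lock_guard<std::mutex> lock(m_mutex);
                m_data.push(value);
            }
            m_cond.notify_one();
        }

        bool empty() const
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            return m_data.empty();
        }

        void wait_and_pop(T& value)
        {
            std::unique_lock<std::mutex> lock(m_mutex);
            m_cond.wait(lock, [this] { return !m_data.empty(); });
            value = m_data.front();
            m_data.pop();
        }

    private:
        mutable std::mutex m_mutex;
        std::condition_variable m_cond;
        std::queue<T> m_data;
    };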

Enjoy and good luck!


The deliverFrame method is missing the following check at its start:

    if (!isCurrentlyAwaitingData()) return;

See DeviceSource.cpp in the LIVE555 source code.
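In other words, it should be the very first thing deliverFrame() does, as in DeviceSource.cpp:

    void H264FramedSource::deliverFrame()
    {
        if (!isCurrentlyAwaitingData()) return; // we're not ready for the data yet

        // ... rest of the delivery code from the answer above ...
    }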

