Well, I finally had the time to spend on this and got it working! Since I'm sure there are others who will want to know how to do this, here it is.
You will need your own FramedSource to receive each frame, encode it, and prepare it for streaming; some of that source code is provided further down.
Essentially, drop your FramedSource into an H264VideoStreamDiscreteFramer, and then feed that into an H264RTPSink. Something like this:
    scheduler = BasicTaskScheduler::createNew();
    env = BasicUsageEnvironment::createNew(*scheduler);

    framedSource = H264FramedSource::createNew(*env, 0, 0);
    h264VideoStreamDiscreteFramer = H264VideoStreamDiscreteFramer::createNew(*env, framedSource);
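The snippet above stops at the framer; the sink side might look roughly like this (a minimal unicast sketch using the classic in_addr-based Groupsock API; the destination address, port, and RTP payload type 96 are placeholders, not values from my setup):

    // Hypothetical sink-side wiring; address, port and payload type are placeholders
    struct in_addr destinationAddress;
    destinationAddress.s_addr = our_inet_addr("192.168.0.10");

    Groupsock rtpGroupsock(*env, destinationAddress, Port(18888), 255 /*TTL*/);
    RTPSink* videoSink = H264VideoRTPSink::createNew(*env, &rtpGroupsock, 96);

    // Start streaming the NAL units produced by the framer and run the event loop
    videoSink->startPlaying(*h264VideoStreamDiscreteFramer, NULL, NULL);
    env->taskScheduler().doEventLoop(); // does not return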
Now, in your main render loop, hand the back buffer that you saved to system memory over to your FramedSource so that it can be encoded, etc. For more information on how to configure the encoding, check this answer: How to encode a series of images in H264 using the x264 C API?
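That hand-off happens once per frame. A rough sketch of what it can look like with D3D9 (all names here, such as PushFrameToStreamer and pSysMemSurface, are illustrative rather than my actual code; note that the FramedSource below expects tightly packed RGB24 pixels):

    #include <d3d9.h>

    // Illustrative helper: copy the current render target into a lockable
    // system-memory surface, then hand the pixels to the FramedSource.
    void PushFrameToStreamer(IDirect3DDevice9* pDevice,
                             IDirect3DSurface9* pRenderTarget,
                             IDirect3DSurface9* pSysMemSurface, // offscreen plain, D3DPOOL_SYSTEMMEM
                             H264FramedSource*  framedSource)
    {
        if (FAILED(pDevice->GetRenderTargetData(pRenderTarget, pSysMemSurface)))
            return;

        D3DLOCKED_RECT lockedRect;
        if (SUCCEEDED(pSysMemSurface->LockRect(&lockedRect, NULL, D3DLOCK_READONLY)))
        {
            // 720x960 frame; the FramedSource converts it to I420 and encodes it
            framedSource->AddToBuffer((uint8_t*)lockedRect.pBits,
                                      lockedRect.Pitch * 960);
            pSysMemSurface->UnlockRect();
        }
    }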
My implementation is very hacky and not yet optimised at all; my D3D application runs at about 15 frames per second because of the encoding, so I will need to look into that. But for all intents and purposes this question is answered, since getting it working was mainly what I was after. I hope this helps other people.
As for my FramedSource, it looks a little something like this:
    // Implementation file. The class declaration (an "H264FramedSource.hh" header,
    // name assumed) and the concurrent_queue header are not shown here; see the
    // sketch and the link further down.
    #include "H264FramedSource.hh"
    #include <GroupsockHelper.hh> // for gettimeofday()

    extern "C"
    {
    #include <x264.h>
    #include <libswscale/swscale.h>
    }

    concurrent_queue<x264_nal_t> m_queue; // thread-safe queue of encoded NAL units
    SwsContext* convertCtx;               // RGB24 -> I420 colour-space converter
    x264_param_t param;
    x264_t* encoder;
    x264_picture_t pic_in, pic_out;

    EventTriggerId H264FramedSource::eventTriggerId = 0;
    unsigned H264FramedSource::FrameSize = 0;
    unsigned H264FramedSource::referenceCount = 0;

    int W = 720;
    int H = 960;

    H264FramedSource* H264FramedSource::createNew(UsageEnvironment& env,
                                                   unsigned preferredFrameSize,
                                                   unsigned playTimePerFrame)
    {
        return new H264FramedSource(env, preferredFrameSize, playTimePerFrame);
    }

    H264FramedSource::H264FramedSource(UsageEnvironment& env,
                                       unsigned preferredFrameSize,
                                       unsigned playTimePerFrame)
        : FramedSource(env),
          fPreferredFrameSize(preferredFrameSize),
          fPlayTimePerFrame(playTimePerFrame),
          fLastPlayTime(0),
          fCurIndex(0)
    {
        if (referenceCount == 0)
        {
            // Any global initialisation of the capture device would go here.
        }
        ++referenceCount;

        // Encoder setup: low-latency baseline H.264 at 720x960, 60 fps
        x264_param_default_preset(&param, "veryfast", "zerolatency");
        param.i_threads = 1;
        param.i_width = 720;
        param.i_height = 960;
        param.i_fps_num = 60;
        param.i_fps_den = 1;
        // Intra refresh:
        param.i_keyint_max = 60;
        param.b_intra_refresh = 1;
        // Rate control:
        param.rc.i_rc_method = X264_RC_CRF;
        param.rc.f_rf_constant = 25;
        param.rc.f_rf_constant_max = 35;
        param.i_sps_id = 7;
        // For streaming:
        param.b_repeat_headers = 1;
        param.b_annexb = 1;
        x264_param_apply_profile(&param, "baseline");

        encoder = x264_encoder_open(&param);

        pic_in.i_type = X264_TYPE_AUTO;
        pic_in.i_qpplus1 = 0;
        pic_in.img.i_csp = X264_CSP_I420;
        pic_in.img.i_plane = 3;
        x264_picture_alloc(&pic_in, X264_CSP_I420, 720, 960);

        // Converts the packed RGB24 frames coming from the application into I420
        convertCtx = sws_getContext(720, 960, PIX_FMT_RGB24,
                                    720, 960, PIX_FMT_YUV420P,
                                    SWS_FAST_BILINEAR, NULL, NULL, NULL);

        if (eventTriggerId == 0)
        {
            eventTriggerId = envir().taskScheduler().createEventTrigger(deliverFrame0);
        }
    }

    H264FramedSource::~H264FramedSource()
    {
        --referenceCount;
        if (referenceCount == 0)
        {
            // Reclaim our 'event trigger'
            envir().taskScheduler().deleteEventTrigger(eventTriggerId);
            eventTriggerId = 0;
        }
    }

    // Called from the render loop with a raw RGB frame: convert, encode and
    // queue the resulting NAL units, then wake up the live555 event loop.
    void H264FramedSource::AddToBuffer(uint8_t* buf, int surfaceSizeInBytes)
    {
        uint8_t* surfaceData = new uint8_t[surfaceSizeInBytes];
        memcpy(surfaceData, buf, surfaceSizeInBytes);

        int srcstride = W * 3; // packed RGB24
        sws_scale(convertCtx, &surfaceData, &srcstride, 0, H,
                  pic_in.img.plane, pic_in.img.i_stride);

        x264_nal_t* nals = NULL;
        int i_nals = 0;
        int frame_size = x264_encoder_encode(encoder, &nals, &i_nals, &pic_in, &pic_out);

        if (frame_size >= 0)
        {
            static bool alreadydone = false;
            if (!alreadydone)
            {
                // Emit the SPS/PPS headers once up front
                x264_encoder_headers(encoder, &nals, &i_nals);
                alreadydone = true;
            }
            for (int i = 0; i < i_nals; ++i)
            {
                m_queue.push(nals[i]);
            }
        }

        delete[] surfaceData;
        surfaceData = NULL;

        // Tell live555 that new NAL units are ready to be delivered
        envir().taskScheduler().triggerEvent(eventTriggerId, this);
    }

    void H264FramedSource::doGetNextFrame()
    {
        deliverFrame();
    }

    void H264FramedSource::deliverFrame0(void* clientData)
    {
        ((H264FramedSource*)clientData)->deliverFrame();
    }

    void H264FramedSource::deliverFrame()
    {
        x264_nal_t nalToDeliver;

        if (fPlayTimePerFrame > 0 && fPreferredFrameSize > 0)
        {
            if (fPresentationTime.tv_sec == 0 && fPresentationTime.tv_usec == 0)
            {
                // This is the first frame, so use the current time:
                gettimeofday(&fPresentationTime, NULL);
            }
            else
            {
                // Increment by the play time of the previous data:
                unsigned uSeconds = fPresentationTime.tv_usec + fLastPlayTime;
                fPresentationTime.tv_sec += uSeconds / 1000000;
                fPresentationTime.tv_usec = uSeconds % 1000000;
            }

            // Remember the play time of this data:
            fLastPlayTime = (fPlayTimePerFrame * fFrameSize) / fPreferredFrameSize;
            fDurationInMicroseconds = fLastPlayTime;
        }
        else
        {
            // We don't know a specific play time duration for this data,
            // so just record the current time as being the 'presentation time':
            gettimeofday(&fPresentationTime, NULL);
        }

        if (!m_queue.empty())
        {
            m_queue.wait_and_pop(nalToDeliver);

            unsigned newFrameSize = nalToDeliver.i_payload;

            // Deliver the data here, truncating it if it doesn't fit:
            if (newFrameSize > fMaxSize)
            {
                fFrameSize = fMaxSize;
                fNumTruncatedBytes = newFrameSize - fMaxSize;
            }
            else
            {
                fFrameSize = newFrameSize;
            }

            memcpy(fTo, nalToDeliver.p_payload, fFrameSize);

            FramedSource::afterGetting(this);
        }
    }
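The class declaration isn't shown above. For reference, a plausible header, reconstructed from the members the .cpp file uses and live555's DeviceSource template, could look like this (my reconstruction, not the original H264FramedSource.hh):

    #ifndef H264_FRAMED_SOURCE_HH
    #define H264_FRAMED_SOURCE_HH

    #include <FramedSource.hh>
    #include <stdint.h>

    class H264FramedSource : public FramedSource
    {
    public:
        static H264FramedSource* createNew(UsageEnvironment& env,
                                           unsigned preferredFrameSize,
                                           unsigned playTimePerFrame);

        // Called from the render loop with a raw RGB frame to encode and queue
        void AddToBuffer(uint8_t* buf, int surfaceSizeInBytes);

        static EventTriggerId eventTriggerId;
        static unsigned FrameSize;
        static unsigned referenceCount;

    protected:
        H264FramedSource(UsageEnvironment& env,
                         unsigned preferredFrameSize,
                         unsigned playTimePerFrame);
        virtual ~H264FramedSource();

    private:
        // FramedSource interface
        virtual void doGetNextFrame();

        static void deliverFrame0(void* clientData);
        void deliverFrame();

        unsigned fPreferredFrameSize;
        unsigned fPlayTimePerFrame;
        unsigned fLastPlayTime;
        unsigned fCurIndex;
    };

    #endif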
Oh, and for those who want to know what my concurrent queue is, here it is, and it works brilliantly: http://www.justsoftwaresolutions.co.uk/threading/implementing-a-thread-safe-queue-using-condition-variables.html
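The interface the code above relies on is just push / empty / wait_and_pop. A minimal sketch of such a queue, modelled on that article but using std:: primitives instead of Boost (my paraphrase, not a verbatim copy):

    #include <queue>
    #include <mutex>
    #include <condition_variable>

    template <typename T>
    class concurrent_queue
    {
    public:
        void push(T const& value)
        {
            {
                std::lock_guard<std::mutex> lock(m_mutex);
                m_queue.push(value);
            }
            m_cond.notify_one(); // wake up a waiting consumer
        }

        bool empty() const
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            return m_queue.empty();
        }

        void wait_and_pop(T& value)
        {
            std::unique_lock<std::mutex> lock(m_mutex);
            m_cond.wait(lock, [this] { return !m_queue.empty(); });
            value = m_queue.front();
            m_queue.pop();
        }

    private:
        std::queue<T> m_queue;
        mutable std::mutex m_mutex;
        std::condition_variable m_cond;
    };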
Enjoy and good luck!