Decoding H.264 video using DXVA

I am working on a project that needs to decode H.264 video using DXVA 2.0. I am writing the code according to the documentation at http://msdn.microsoft.com/en-us/library/windows/desktop/aa965245%28v=vs.85%29.aspx . That is, I create the IDirectXVideoDecoder interface and then call the DXVA APIs BeginFrame, Execute, and EndFrame.

Here is the problem: when I run my program on an Intel Core i5 processor (with the Intel HD Graphics GPU integrated into the processor), everything works fine. But when I run it on an Intel Atom processor (with Intel GMA 3000 graphics hardware), I do not get the correct result: some video frames are decoded correctly, while others come out completely garbled.

The data I use is sent from another computer, and it can be used to fill the DXVA buffers directly. For H.264, these are the picture parameters, bitstream, inverse quantization matrix, and slice control buffers. So there is no need to use ffmpeg or ffdshow (and ffdshow is GPL, so I cannot use it anyway).

DXVA Checker tells me that the decoder device on the Intel Core i5 is "ModeH264_VLD_NoFGT_ClearVideo", while on the Intel Atom it is "ModeH264_VLD_NoFGT". I want to know the difference between these two modes. Can I use "ModeH264_VLD_NoFGT" on the Intel Atom's graphics hardware to decode the video?
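
For reference, here is a minimal sketch of the per-frame call sequence I am describing (decoder creation is omitted). The H.264 structures (DXVA_PicParams_H264, DXVA_Qmatrix_H264, DXVA_Slice_H264_Short) come from dxva.h; the function name and its parameters are just placeholders for objects and data my own code has already prepared, and error/size checks are left out for brevity:

```cpp
#include <d3d9.h>
#include <dxva2api.h>
#include <dxva.h>

// One decode iteration. pDecoder is an already-created IDirectXVideoDecoder,
// pSurface is the uncompressed target surface for this frame, and the H.264
// structures have already been filled from the data received over the network.
HRESULT DecodeOneFrame(IDirectXVideoDecoder        *pDecoder,
                       IDirect3DSurface9           *pSurface,
                       const DXVA_PicParams_H264   &picParams,
                       const DXVA_Qmatrix_H264     &qMatrix,
                       const DXVA_Slice_H264_Short *slices, UINT numSlices,
                       const BYTE *pBitstream, UINT bitstreamSize)
{
    HRESULT hr = pDecoder->BeginFrame(pSurface, NULL);
    if (FAILED(hr)) return hr;

    DXVA2_DecodeBufferDesc desc[4] = {};
    void *pBuf = NULL;
    UINT  bufSize = 0;

    // Picture parameters buffer
    pDecoder->GetBuffer(DXVA2_PictureParametersBufferType, &pBuf, &bufSize);
    memcpy(pBuf, &picParams, sizeof(picParams));
    pDecoder->ReleaseBuffer(DXVA2_PictureParametersBufferType);
    desc[0].CompressedBufferType = DXVA2_PictureParametersBufferType;
    desc[0].DataSize = sizeof(picParams);

    // Inverse quantization matrix buffer
    pDecoder->GetBuffer(DXVA2_InverseQuantizationMatrixBufferType, &pBuf, &bufSize);
    memcpy(pBuf, &qMatrix, sizeof(qMatrix));
    pDecoder->ReleaseBuffer(DXVA2_InverseQuantizationMatrixBufferType);
    desc[1].CompressedBufferType = DXVA2_InverseQuantizationMatrixBufferType;
    desc[1].DataSize = sizeof(qMatrix);

    // Slice control buffer (short format here; long format is also possible)
    pDecoder->GetBuffer(DXVA2_SliceControlBufferType, &pBuf, &bufSize);
    memcpy(pBuf, slices, numSlices * sizeof(DXVA_Slice_H264_Short));
    pDecoder->ReleaseBuffer(DXVA2_SliceControlBufferType);
    desc[2].CompressedBufferType = DXVA2_SliceControlBufferType;
    desc[2].DataSize = numSlices * sizeof(DXVA_Slice_H264_Short);

    // Bitstream buffer (slice NAL units with start codes)
    pDecoder->GetBuffer(DXVA2_BitStreamDateBufferType, &pBuf, &bufSize);
    memcpy(pBuf, pBitstream, bitstreamSize);
    pDecoder->ReleaseBuffer(DXVA2_BitStreamDateBufferType);
    desc[3].CompressedBufferType = DXVA2_BitStreamDateBufferType;
    desc[3].DataSize = bitstreamSize;

    // Submit all four buffers to the driver, then close the frame
    DXVA2_DecodeExecuteParams exec = {};
    exec.NumCompBuffers     = 4;
    exec.pCompressedBuffers = desc;
    hr = pDecoder->Execute(&exec);

    pDecoder->EndFrame(NULL);
    return hr;
}
```

This sequence works on the Core i5 but produces the garbled frames on the Atom, so I suspect the difference is in the decoder device GUID rather than in the call order.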
