AVFoundation vs. VideoToolbox - Hardware Decoding

This is a more theoretical question/discussion, as I could not find a clear answer by reading other posts on SO and elsewhere on the Internet. There seem to be many options:

- Brad Larson on AVFoundation

- Accelerated video decoding

- Video Toolbox

If I want to perform hardware decoding of H.264 (.mov) files on iOS, can I just use AVFoundation and AVAsset, or do I need VideoToolbox (or some other framework)? And whichever approach I use, how can I profile and evaluate decoding performance on a device when starting a project? Is it just a matter of watching CPU usage in Xcode's Debug Navigator?

In short: do AVFoundation and AVAsset perform hardware-accelerated decoding or not? Are they sufficient, and how do I evaluate actual performance?

Thanks!


If you want to decode a local file that is already on your iOS device, I would use AVFoundation.
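For a local file, a minimal sketch of hardware-backed decoding with AVFoundation might look like the following (the file path is a placeholder, and error handling is reduced to the essentials):

```swift
import AVFoundation

// Sketch: decode a local H.264 .mov with AVAssetReader.
// Requesting a pixel-buffer output format forces actual decoding; on iOS
// devices the system's H.264 decoder is hardware-backed.
func decodeFrames(at url: URL) throws {
    let asset = AVAsset(url: url)
    guard let track = asset.tracks(withMediaType: .video).first else {
        return // no video track in this file
    }

    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ])
    reader.add(output)
    reader.startReading()

    // Pull decoded frames one by one.
    while let sampleBuffer = output.copyNextSampleBuffer() {
        if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
            // Each decoded frame arrives here as a CVPixelBuffer.
            _ = pixelBuffer
        }
    }
}
```

AVAssetReader handles demuxing the container and feeding the decoder for you, which is why it is the simpler choice for files already on the device.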

If you want to decode a network stream (RTP or RTMP), use Video Toolbox, because in that case you have to demux and feed the compressed video stream to the decoder yourself.
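For the streaming case, the core Video Toolbox object is a decompression session. A hedged sketch of creating one is shown below; building the `CMVideoFormatDescription` from the stream's SPS/PPS parameter sets (e.g. with `CMVideoFormatDescriptionCreateFromH264ParameterSets`) is assumed to happen elsewhere:

```swift
import VideoToolbox

// Sketch: create a VTDecompressionSession for an H.264 elementary stream.
// `formatDescription` must already describe the stream (built from SPS/PPS).
func makeDecompressionSession(
    formatDescription: CMVideoFormatDescription
) -> VTDecompressionSession? {
    var session: VTDecompressionSession?
    let status = VTDecompressionSessionCreate(
        allocator: kCFAllocatorDefault,
        formatDescription: formatDescription,
        decoderSpecification: nil,   // nil lets the system pick the (hardware) decoder
        imageBufferAttributes: nil,
        outputCallback: nil,         // decode with the output-handler variant instead
        decompressionSessionOut: &session
    )
    return status == noErr ? session : nil
}
```

You would then push each NAL unit, wrapped in a `CMSampleBuffer`, through `VTDecompressionSessionDecodeFrame` and receive decoded `CVPixelBuffer`s in the output handler.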

With either AVFoundation or Video Toolbox you get hardware decoding.
