This is more of a theoretical question / discussion, since I could not find a clear answer in other posts and SO sources around the Internet. There seem to be many options, for example:
Brad Larson about AVFoundation
Accelerate video decoding
VideoToolbox
If I want to perform hardware-accelerated decoding of H.264 (.mov) files on iOS, can I just use AVFoundation and AVAsset, or do I need VideoToolbox (or some other framework)? And when using these frameworks, how can I profile / evaluate the hardware's performance when starting a project? Is it just a matter of watching CPU usage in Xcode's "Debug Navigator"?
In short, I am basically asking: does AVFoundation / AVAsset perform hardware decoding or not? Is it sufficient on its own, and how do I evaluate its actual performance?
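For context, this is roughly the kind of pipeline I have in mind — a minimal, untested sketch assuming a local "movie.mov" file (a hypothetical path). My understanding is that on iOS an AVAssetReader pipeline like this decodes through the hardware H.264 decoder under the hood, with no software path exposed:

```swift
import AVFoundation
import CoreMedia

// Sketch only: decode H.264 frames from a .mov file with AVAssetReader.
let url = URL(fileURLWithPath: "movie.mov")  // hypothetical input path
let asset = AVAsset(url: url)

guard let track = asset.tracks(withMediaType: .video).first,
      let reader = try? AVAssetReader(asset: asset) else {
    fatalError("Could not open the video track")
}

// Ask for decoded BGRA pixel buffers; the actual decode happens
// inside the reader (hardware-backed on iOS, as far as I can tell).
let settings: [String: Any] = [
    kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
]
let output = AVAssetReaderTrackOutput(track: track, outputSettings: settings)
reader.add(output)
reader.startReading()

var frameCount = 0
while let sample = output.copyNextSampleBuffer() {
    // Each CMSampleBuffer here wraps an already-decoded CVPixelBuffer.
    frameCount += 1
    CMSampleBufferInvalidate(sample)
}
print("Decoded \(frameCount) frames; reader status = \(reader.status.rawValue)")
```

Is a setup like this already using the hardware decoder, or would dropping down to VideoToolbox's VTDecompressionSession buy me anything for plain file playback/decoding?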
Thanks!
royherma