I have just successfully used the standalone WebRTC aecm module on Android, and here are some tips:
1. The most important thing is the parameter called |delay|; you can find its definition in:
..\src\modules\audio_processing\include\audio_processing.h
quote:
Sets the |delay| in ms between AnalyzeReverseStream() receiving a far-end frame and ProcessStream() receiving a near-end frame containing the corresponding echo. On the client side this can be expressed as
delay = (t_render - t_analyze) + (t_process - t_capture)
where
- t_analyze is the time a frame is passed to AnalyzeReverseStream() and t_render is the time the first sample of the same frame is rendered by the audio hardware.
- t_capture is the time the first sample of a frame is captured by the audio hardware and t_pull is the time the same frame is passed to ProcessStream().
If you want to use the aecm module offline, make sure you follow this documentation strictly.
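To make the formula concrete, here is a minimal sketch of how you might compute the delay value from the four timestamps described above. The class and method names are my own invention for illustration; only the arithmetic comes from the documentation quoted above.

```java
public class AecmDelay {
    // Computes the |delay| (in ms) that the aecm module expects, following
    // delay = (t_render - t_analyze) + (t_process - t_capture).
    // All timestamps must come from the same monotonic clock, in ms.
    static long streamDelayMs(long tAnalyze, long tRender,
                              long tCapture, long tProcess) {
        return (tRender - tAnalyze) + (tProcess - tCapture);
    }

    public static void main(String[] args) {
        // Example: far-end frame passed to AnalyzeReverseStream() at t=0 and
        // rendered 40 ms later; near-end frame captured at t=100 and passed
        // to ProcessStream() 20 ms later.
        System.out.println(streamDelayMs(0, 40, 100, 120)); // prints 60
    }
}
```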
2. AudioRecord and AudioTrack sometimes block (because the buffer size is minimized), so when you calculate the delay, do not forget to add the blocking time to it.
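One way to account for that blocking time is simply to measure how long the read/write call takes and fold it into the delay estimate. The sketch below uses a plain `Runnable` and `Thread.sleep` to stand in for a blocking `AudioTrack.write()` or `AudioRecord.read()` call; the helper name is hypothetical.

```java
public class DelayWithBlocking {
    // Measures the wall-clock duration (ms) of a blocking call, e.g.
    // AudioTrack.write() or AudioRecord.read() on a real device.
    static long timeBlockingCallMs(Runnable blockingCall) {
        long start = System.nanoTime();
        blockingCall.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        long baseDelayMs = 60; // from the formula in tip 1
        // Simulate a write that blocks for ~15 ms.
        long blockMs = timeBlockingCallMs(() -> {
            try { Thread.sleep(15); } catch (InterruptedException e) { }
        });
        long totalDelayMs = baseDelayMs + blockMs;
        System.out.println("total delay ms: " + totalDelayMs);
    }
}
```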
3. If you do not know how to compile the aecm module, first learn the Android NDK. The module source is at:
..\src\modules\audio_processing\aecm
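As a starting point for an NDK build, an `Android.mk` along these lines can work; note that the source file names, include paths, and utility dependencies vary by WebRTC revision, so treat this as a hypothetical sketch and check the aecm directory for the actual file list.

```makefile
# Hypothetical Android.mk sketch for building aecm as a static library.
# File names depend on your WebRTC revision; verify against the aecm dir.
LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)
LOCAL_MODULE    := webrtc_aecm
LOCAL_SRC_FILES := echo_control_mobile.c aecm_core.c
LOCAL_C_INCLUDES := $(LOCAL_PATH)/interface $(LOCAL_PATH)/../utility
include $(BUILD_STATIC_LIBRARY)
```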
By the way, these blog posts helped a lot with native development and debugging:
http://mhandroid.wordpress.com/2011/01/23/using-eclipse-for-android-cc-development/
http://mhandroid.wordpress.com/2011/01/23/using-eclipse-for-android-cc-debugging/
Hope this can help you.