Latency when recording and playing audio with AudioRecord and AudioTrack

I am trying to develop an application similar to iRig for Android, so the first step is to capture the microphone input and play it back at the same time.

I have it working, but the problem is that there is enough delay to make it unusable, and I am afraid that once I start processing the buffer it will get even worse.

I use AudioRecord and AudioTrack as follows:

    new Thread(new Runnable() {
        public void run() {
            while (mRunning) {
                int read = mRecorder.read(mBuffer, 0, mBufferSize);
                // TODO: apply filters to the buffer here, then play the modified audio
                mPlayer.write(mBuffer, 0, read);
                //Log.v("MY AMP", "ARA");
            }
        }
    }).start();

And the initialization:

    // ==================== INITIALIZE ========================= //
    public void initialize() {
        mBufferSize = AudioRecord.getMinBufferSize(mHz,
                AudioFormat.CHANNEL_CONFIGURATION_MONO,
                AudioFormat.ENCODING_PCM_16BIT);
        mBufferSize2 = AudioTrack.getMinBufferSize(mHz,
                AudioFormat.CHANNEL_CONFIGURATION_MONO,
                AudioFormat.ENCODING_PCM_16BIT);
        mBuffer = new byte[mBufferSize];
        Log.v("MY AMP", "Buffer size: " + mBufferSize);
        mRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC, mHz,
                AudioFormat.CHANNEL_CONFIGURATION_MONO,
                AudioFormat.ENCODING_PCM_16BIT, mBufferSize);
        mPlayer = new AudioTrack(AudioManager.STREAM_MUSIC, mHz,
                AudioFormat.CHANNEL_CONFIGURATION_MONO,
                AudioFormat.ENCODING_PCM_16BIT, mBufferSize2,
                AudioTrack.MODE_STREAM);
    }

Do you know how to get a faster response? Thanks!

4 answers

Android's AudioTrack/AudioRecord classes have high latency because of their minimum buffer sizes. According to Google, the reason for those buffer sizes is to minimize dropouts when GC pauses occur (which, in my opinion, is the wrong decision; you can optimize your own memory management).
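As a quick sanity check of where that latency comes from, you can log the minimum buffer sizes the framework will accept. This is only a sketch; 44100 Hz mono 16-bit PCM is assumed for illustration:

    // Query the minimum buffer sizes to see the latency floor on this device
    int recMin = AudioRecord.getMinBufferSize(44100,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
    int playMin = AudioTrack.getMinBufferSize(44100,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
    // Each 16-bit mono sample is 2 bytes, so the lower bound on round-trip
    // delay is roughly (recMin + playMin) / 2 / 44100 seconds.
    Log.v("LATENCY", "record min=" + recMin + " play min=" + playMin);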

What you want to do is use OpenSL ES, which is available from Android 2.3. It provides native APIs for streaming audio. Here are some docs: http://mobilepearls.com/labs/native-android-api/opensles/index.html
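OpenSL ES is a C API, so from Java you reach it through JNI. Purely as a sketch of the shape this takes (every name below is hypothetical; the actual OpenSL recorder/player setup would live in NDK C code that is not shown here):

    // Hypothetical JNI bridge to a native OpenSL ES engine. None of these
    // names come from the Android SDK; the real work happens in C code that
    // opens an OpenSL recorder queue and player queue and shuttles buffers.
    public class NativeAudioEngine {
        static {
            System.loadLibrary("audioengine"); // hypothetical .so built with the NDK
        }

        // Start a native loop that reads mic buffers and enqueues them for playback.
        public native boolean startLoopback(int sampleRate, int framesPerBuffer);

        // Stop the loop and release the OpenSL objects.
        public native void stopLoopback();
    }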


Just a thought, but shouldn't you read fewer than mBufferSize bytes at a time?


My first instinct was to suggest putting the AudioTrack into static mode rather than streaming mode, since static mode has noticeably lower latency. However, static mode is better suited to short sounds that fit entirely in memory than to audio captured from somewhere else. But as a wild guess: what if you set the AudioTrack to static mode and feed it discrete chunks of your input audio?
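Roughly what that wild guess would look like for a single chunk (a sketch only; it assumes chunk already holds one captured PCM buffer, and mHz is 44100):

    // Play one pre-captured chunk with a MODE_STATIC AudioTrack.
    // In static mode the whole buffer must be written before play().
    AudioTrack staticTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
            chunk.length, AudioTrack.MODE_STATIC);
    staticTrack.write(chunk, 0, chunk.length); // load the entire buffer up front
    staticTrack.play();                        // playback starts immediately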

If you need tighter control over the sound, I recommend taking a look at OpenSL ES for Android. The learning curve is a bit steeper, but you get much finer-grained control and lower latency.


As mSparks noted, streaming should be done using a smaller read size: you do not need to read the full buffer to transfer data!

    int read = mRecorder.read(mBuffer, 0, 256); // or any other magic number
    if (read > 0) {
        mPlayer.write(mBuffer, 0, read);
    }
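Integrated into the loop from the question, this might look as follows (a sketch; the 256-byte chunk size is the same assumption as above, and mRecorder, mPlayer, mBuffer, and mRunning are the asker's fields):

    // Capture/playback loop moving small chunks instead of a full
    // minimum-size buffer; smaller chunks mean lower latency.
    final int CHUNK = 256; // bytes per transfer; tune for latency vs. dropouts
    new Thread(new Runnable() {
        public void run() {
            while (mRunning) {
                int read = mRecorder.read(mBuffer, 0, CHUNK);
                if (read > 0) {
                    mPlayer.write(mBuffer, 0, read);
                }
            }
        }
    }).start();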

This will significantly reduce your delay. If mHz is 44100 and you are in the MONO configuration, then with 256 samples your latency will be at least 1000 * 256 / 44100 ms, or about 5.8 ms (256 / 44100 converts samples to seconds, so multiplying by 1000 gives milliseconds). The remaining problem is the player's internal implementation, which you have no control over from Java. Hope this helps someone :)

