iOS - generating and playing an indefinite, simple sound (sine wave)

I want to create an extremely simple iOS app with one button that starts and stops a beep. The signal will be a sine wave, and during playback it will check my model (an instance variable for the volume) and change its volume accordingly.

My difficulty relates to the indefinite nature of the task. I understand how to create tables, fill them with data, respond to button taps, and so on; but when it comes to something that simply goes on endlessly (in this case, the sound), I'm a little stuck! Any pointers would be awesome!

Thanks for reading.

1 answer

Here's a barebones application that plays a generated frequency on demand. You didn't specify whether you need iOS or OS X, so I went with OS X since it's a bit simpler (no messing with audio session categories). If you need iOS, you'll be able to fill in the missing bits by looking into the audio session basics and swapping the Default Output audio unit for the RemoteIO audio unit.
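For reference, a rough sketch of the iOS-specific changes (untested, and assuming the AVAudioSession API; everything else below stays the same):

#import <AVFoundation/AVFoundation.h>

// 1. Activate an audio session before starting the output unit.
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback error:NULL];
[[AVAudioSession sharedInstance] setActive:YES error:NULL];

// 2. Describe the RemoteIO unit instead of the Default Output unit.
AudioComponentDescription outputUnitDescription = {
    .componentType         = kAudioUnitType_Output,
    .componentSubType      = kAudioUnitSubType_RemoteIO, // was kAudioUnitSubType_DefaultOutput
    .componentManufacturer = kAudioUnitManufacturer_Apple
};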

Please note that the intention of this is just to demonstrate the very basics of Core Audio / Audio Units. You'll probably want to look into the AUGraph API if you want to build anything more complicated than this. (Also, in the interest of providing a clean example, I'm not doing any error checking. Always do error checking when dealing with Core Audio.)
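Most Core Audio calls return an OSStatus, so a minimal checking pattern (a helper of my own, not part of the sample below) could look like this:

#include <stdio.h>
#include <stdlib.h>

static void CheckStatus(OSStatus status, const char *operation)
{
    if (status != noErr) {
        fprintf(stderr, "Error %d during %s\n", (int)status, operation);
        exit(1);
    }
}

// e.g. CheckStatus(AudioUnitInitialize(outputUnit), "AudioUnitInitialize");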

To use this code, you need to add the AudioToolbox and AudioUnit frameworks to your project.

#import <AudioToolbox/AudioToolbox.h>

@interface SWAppDelegate : NSObject <NSApplicationDelegate>
{
    AudioUnit outputUnit;
    double renderPhase;
}
@end

@implementation SWAppDelegate

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
//  First, we need to establish which Audio Unit we want.
//  We start with its description, which is:
    AudioComponentDescription outputUnitDescription = {
        .componentType         = kAudioUnitType_Output,
        .componentSubType      = kAudioUnitSubType_DefaultOutput,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };

//  Next, we get the first (and only) component corresponding to that description
    AudioComponent outputComponent = AudioComponentFindNext(NULL, &outputUnitDescription);

//  Now we can create an instance of that component, which will create an
//  instance of the Audio Unit we're looking for (the default output)
    AudioComponentInstanceNew(outputComponent, &outputUnit);
    AudioUnitInitialize(outputUnit);

//  Next we'll tell the output unit what format our generated audio will
//  be in. Generally speaking, you'll want to stick to sane formats, since
//  the output unit won't accept every single possible stream format.
//  Here, we're specifying floating point samples with a sample rate of
//  44100 Hz in mono (i.e. 1 channel)
    AudioStreamBasicDescription ASBD = {
        .mSampleRate       = 44100,
        .mFormatID         = kAudioFormatLinearPCM,
        .mFormatFlags      = kAudioFormatFlagsNativeFloatPacked,
        .mChannelsPerFrame = 1,
        .mFramesPerPacket  = 1,
        .mBitsPerChannel   = sizeof(Float32) * 8,
        .mBytesPerPacket   = sizeof(Float32),
        .mBytesPerFrame    = sizeof(Float32)
    };

    AudioUnitSetProperty(outputUnit,
                         kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Input,
                         0,
                         &ASBD,
                         sizeof(ASBD));

//  Next step is to tell our output unit which function we'd like it
//  to call to get audio samples. We'll also pass in a context pointer,
//  which can be a pointer to anything you need to maintain state between
//  render callbacks. We only need to point to a double which represents
//  the current phase of the sine wave we're creating.
    AURenderCallbackStruct callbackInfo = {
        .inputProc       = SineWaveRenderCallback,
        .inputProcRefCon = &renderPhase
    };

    AudioUnitSetProperty(outputUnit,
                         kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Global,
                         0,
                         &callbackInfo,
                         sizeof(callbackInfo));

//  Here we're telling the output unit to start requesting audio samples
//  from our render callback. This is the line of code that starts actually
//  sending audio to your speakers.
    AudioOutputUnitStart(outputUnit);
}

// This is our render callback. It will be called very frequently for short
// buffers of audio (512 samples per call on my machine).
OSStatus SineWaveRenderCallback(void * inRefCon,
                                AudioUnitRenderActionFlags * ioActionFlags,
                                const AudioTimeStamp * inTimeStamp,
                                UInt32 inBusNumber,
                                UInt32 inNumberFrames,
                                AudioBufferList * ioData)
{
    // inRefCon is the context pointer we passed in earlier when setting the render callback
    double currentPhase = *((double *)inRefCon);
    // ioData is where we're supposed to put the audio samples we've created
    Float32 * outputBuffer = (Float32 *)ioData->mBuffers[0].mData;
    const double frequency = 440.;
    const double phaseStep = (frequency / 44100.) * (M_PI * 2.);

    for(int i = 0; i < inNumberFrames; i++) {
        outputBuffer[i] = sin(currentPhase);
        currentPhase += phaseStep;
    }

    // If we were doing stereo (or more), this would copy our sine wave samples
    // to all of the remaining channels
    for(int i = 1; i < ioData->mNumberBuffers; i++) {
        memcpy(ioData->mBuffers[i].mData, outputBuffer, ioData->mBuffers[i].mDataByteSize);
    }

    // writing the current phase back to inRefCon so we can use it on the next call
    *((double *)inRefCon) = currentPhase;

    return noErr;
}

- (void)applicationWillTerminate:(NSNotification *)notification
{
    AudioOutputUnitStop(outputUnit);
    AudioUnitUninitialize(outputUnit);
    AudioComponentInstanceDispose(outputUnit);
}

@end

You can call AudioOutputUnitStart() and AudioOutputUnitStop() at will to start and stop playback. If you want to change the frequency dynamically, you can pass in a pointer to a struct containing both the renderPhase double and another double representing the frequency you want.
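A sketch of how that could look (the struct and field names are my own invention, not part of the sample above; the same trick handles the volume variable from your model):

typedef struct {
    double phase;     // only touched by the render callback
    double frequency; // written by your UI code, read by the render callback
    double volume;    // likewise; your model's volume variable goes here
} SineWaveState;

// Pass &state as inputProcRefCon instead of &renderPhase. Then, in the callback:
SineWaveState *state = (SineWaveState *)inRefCon;
const double phaseStep = (state->frequency / 44100.) * (M_PI * 2.);

for(int i = 0; i < inNumberFrames; i++) {
    outputBuffer[i] = state->volume * sin(state->phase);
    state->phase += phaseStep;
}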

Be careful with the render callback: it is invoked from a realtime thread (not the thread your main run loop runs on). Render callbacks are subject to some fairly stringent time requirements, which means there are many things you should not do in your callback (one safe way to get values into it is sketched after this list), for example:

  • Allocate memory
  • Wait on a mutex
  • Read from a file on disk
  • Objective-C messaging (yes, seriously)
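Given those restrictions, one lock-free way to get your UI's values into the callback is to make the shared fields of the (hypothetical) SineWaveState struct above atomic and update them with plain atomic stores. A sketch assuming C11 atomics; for anything more elaborate, look into lock-free ring buffers:

#include <stdatomic.h>

typedef struct {
    double phase;             // only ever touched by the render thread
    _Atomic double frequency; // written by the UI thread, read by the render thread
    _Atomic double volume;
} SineWaveState;

// UI thread, e.g. in a slider's action method:
atomic_store(&state.frequency, 880.);
atomic_store(&state.volume, 0.5);

// Render callback, once per buffer:
double frequency = atomic_load(&state->frequency);
double volume    = atomic_load(&state->volume);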

Please note that this isn't the only way to do it; I've only demonstrated it this way since you tagged this question core-audio. If you don't need to change the frequency, you can just use AVAudioPlayer with a pre-made sound file containing your sine wave.
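That route is only a few lines (a sketch; the file name sine440.caf is made up and would be something you bundle with the app):

#import <AVFoundation/AVFoundation.h>

NSURL *url = [[NSBundle mainBundle] URLForResource:@"sine440" withExtension:@"caf"];
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:NULL];
player.numberOfLoops = -1; // loop indefinitely
player.volume = 0.5;       // drive this from your model's volume variable
[player play];             // and call [player stop] from your button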

There's also Novocaine, which hides a lot of this verbosity from you. You could also look into the Audio Queue API, which works fairly similarly to the Core Audio sample I wrote, but insulates you from the hardware a little more (i.e. it's less strict about how you behave in your render callback).
