How do I play a sound that ignores the ringer volume?

I have a simple sound system with several different categories of sounds. I use AudioToolbox.framework (all of my audio files are .wav files, generally less than 10 seconds long), but I am confused about how to make the application follow the media ("Volume") level rather than the "Ringer" level.

If my device is set to Silent, the sound from my buttons will not play even though the device's volume is turned up. However, as soon as I flip the Ringer switch (on the side of my device) back on, the sound plays at the ringer volume.

I searched the Internet and found some sources that said to set the AVAudioSession category to AVAudioSessionCategoryPlayback, so I inserted this

 AVAudioSession *audioSession = [AVAudioSession sharedInstance];

NSError *setCategoryError = nil;
BOOL success = [audioSession setCategory:AVAudioSessionCategoryPlayback error:&setCategoryError];
if (!success) { /* handle the error condition */ }

NSError *activationError = nil;
success = [audioSession setActive:YES error:&activationError];
if (!success) { /* handle the error condition */ }

in my viewDidLoad. However, the same problem remains. I found other suggestions on the Internet, but the explanations leave me confused about what I should actually do. I am relatively new to Objective-C and coding, so please be specific and clear in your explanation if you know the answer.

I appreciate any help you can provide.

Edit One: I followed Pau Senabra's suggestion and did not notice any change; there is still no sound while the Ringer is silent. Current code with Pau's changes:

 AVAudioSession *audioSession = [AVAudioSession sharedInstance];

NSError *sessionError = NULL;
BOOL success = [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback
                                                withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker
                                                      error:&sessionError];
if(!success) {
    NSLog(@"Error setting category Audio Session: %@", [sessionError localizedDescription]);
}

NSError *activationError = nil;
success = [audioSession setActive:YES error:&activationError];
if (!success) { /* handle the error condition */ }

Any other suggestions?

Edit Two: As suggested, I am setting the AVAudioSession category to Playback:

[[AVAudioSession sharedInstance] setCategory: AVAudioSessionCategoryPlayback error: nil];

but the sound still does not play while the device is silenced.

Could the problem be in how I create and play the sound?

This is how I create the sound:

NSURL *DariusSelectSound = [NSURL fileURLWithPath:[[NSBundle mainBundle]pathForResource:@"DariusSelect" ofType:@"wav"]];
    AudioServicesCreateSystemSoundID((__bridge CFURLRef)DariusSelectSound, &DariusSelectAudio);

and this is how I play it:

AudioServicesPlaySystemSound(DariusSelectAudio);

The sound itself plays fine, but only while the Ringer is on; with the device set to Silent there is still no sound at all.

Edit Three: Solved! Switching from AudioServices to AVAudioPlayer fixed it; the sound now plays regardless of the silent switch. Thank you to everyone who helped!


Set the AVAudioSession category to Playback, for example in your viewDidLoad:

// Play sound when silent mode on
[[AVAudioSession sharedInstance] setCategory: AVAudioSessionCategoryPlayback error: nil];

See the AVAudioSession class reference for more details.


If instead you want full control over the volume, independent of the system (ringer) volume, there are several options:

For the input side there is setInputGain on AVAudioSession:

/* A value defined over the range [0.0, 1.0], with 0.0 corresponding to the lowest analog
gain setting and 1.0 corresponding to the highest analog gain setting.  Attempting to set values
outside of the defined range will result in the value being "clamped" to a valid input.  This is
a global input gain setting that applies to the current input source for the entire system.
When no applications are using the input gain control, the system will restore the default input
gain setting for the input source.  Note that some audio accessories, such as USB devices, may
not have a default value.  This property is only valid if inputGainSettable
is true.  Note: inputGain is key-value observable */
- (BOOL)setInputGain:(float)gain  error:(NSError **)outError NS_AVAILABLE_IOS(6_0);
@property(readonly) float inputGain NS_AVAILABLE_IOS(6_0); /* value in range [0.0, 1.0] */

For the output side you can drop below AVAudioSession and use lower-level APIs such as AudioUnit, OpenAL, etc.

For example, with AudioUnit you can add a multichannel mixer node to an AUGraph and control its volume directly:

// MIXER unit component description
AudioComponentDescription MixerUnitDescription;
MixerUnitDescription.componentType          = kAudioUnitType_Mixer;
MixerUnitDescription.componentSubType       = kAudioUnitSubType_MultiChannelMixer;
MixerUnitDescription.componentManufacturer  = kAudioUnitManufacturer_Apple;
MixerUnitDescription.componentFlags         = 0;
MixerUnitDescription.componentFlagsMask     = 0;

///
/// NODE 6: MIXER NODE
///
err = AUGraphAddNode (processingGraph, &MixerUnitDescription, &mixerNode );
if (err) { NSLog(@"mixerNode err = %d", (int)err); return NO; }

// sets the overall mixer output volume
- (void)setOutputVolume:(AudioUnitParameterValue)value
{

    OSStatus result;

    result = AudioUnitSetParameter(mixerUnit, kMultiChannelMixerParam_Volume, kAudioUnitScope_Output, 0, value, 0);

    if (result) {
        NSLog(@"AudioUnitSetParameter kMultiChannelMixerParam_Volume Output result %d %08X %4.4s\n", (int)result, (unsigned int)result, (char*)&result);

        return;
    }
}

The AUGraph is typically built around the RemoteIO audio unit:

```
                          -------------------------
                          | i                   o |
 -- BUS 1 -- from mic --> | n    REMOTE I/O     u | -- BUS 1 -- to app -->
                          | p      AUDIO        t |
 -- BUS 0 -- from app --> | u       UNIT        p | -- BUS 0 -- to speaker -->
                          | t                   u |
                          |                     t |
                          -------------------------
```

You can then set the input volume of each mixer bus independently:

// sets the input volume for a specific bus
- (void)setInputVolume:(UInt32)inputBus value:(AudioUnitParameterValue)value
{

    micGainLevel = value;

    OSStatus result;

    result = AudioUnitSetParameter(mixerUnit, kMultiChannelMixerParam_Volume, kAudioUnitScope_Input, inputBus, value, 0);

    if (result) {
        NSLog(@"AudioUnitSetParameter kMultiChannelMixerParam_Volume Input result %d %08X %4.4s\n", (int)result, (unsigned int)result, (char*)&result);
    }
}

If you need even finer control, you can read the WAV samples yourself and process them one chunk at a time:

  WavInFile *inFile = new WavInFile( cString );
  // get some audio file info
  float samplerate = (float)(*inFile).getSampleRate();
  int nChannels = (int)(*inFile).getNumChannels();
  float nSamples = (*inFile).getNumSamples();
  float duration = (double)(*inFile).getLengthMS() / (double) 1000;

  while (inFile->eof() == 0) {

      int num, samples;

      // Read a chunk of samples from the input file
      num = inFile->read(shortBuffer, BUFF_SIZE);
      samples = num / nChannels;
      seconds = (double)(*inFile).getElapsedMS() / (double) 1000;

      float currentFrequency = 0.0f;
      SInt16ToDouble(shortBuffer, doubleBuffer, samples);
      currentFrequency = dywapitch_computepitch(&(pitchtrackerFile), doubleBuffer, 0, samples);
      currentFrequency = samplerate / 44100.0f * currentFrequency;

      // here you can change things like the pitch or the amplitude
      // (i.e. the volume) of this chunk; see amplifyPCMSInt16 below

  } // eof

short amplifyPCMSInt16(int value, int dbGain, bool clampValue) {
    /*To increase the gain of a sample by X db, multiply the PCM value by
     * pow( 2.0, X/6.014 ). i.e. gain +6dB means doubling the value of the sample, -6dB means halving it.
     */
    int newValue = (int) ( pow(2.0, ((double)dbGain)/6.014 )*value);

    if(clampValue){
        if(newValue>32767)
            newValue = 32767;
        else if(newValue < -32768 )
            newValue = -32768;
    }
    return (short) newValue;
}

Rather than reading the whole file to EOF, you can also do this processing in near real time: install a render callback via an AURenderCallbackStruct, and the audio unit will call it every few msec with a fresh buffer:

AURenderCallbackStruct lineInrCallbackStruct = {};
lineInrCallbackStruct.inputProc = &micLineInCallback;
lineInrCallbackStruct.inputProcRefCon = (void*)self;
err = AudioUnitSetProperty(vfxUnit,
                           kAudioUnitProperty_SetRenderCallback,
                           kAudioUnitScope_Global,
                           0,
                           &lineInrCallbackStruct,
                           sizeof(lineInrCallbackStruct));

static OSStatus micLineInCallback (void                         *inRefCon,
                                   AudioUnitRenderActionFlags   *ioActionFlags,
                                   const AudioTimeStamp         *inTimeStamp,
                                   UInt32                       inBusNumber,
                                   UInt32                       inNumberFrames,
                                   AudioBufferList              *ioData)
{
    // process the inNumberFrames samples in ioData here
    return noErr;
}

Inside the callback you can then pick up the sample buffers and process them:

    SInt16 *sampleBufferLeft = THIS.conversionBufferLeft;
    SInt16 *sampleBufferRight = THIS.conversionBufferRight;
    SInt16 *sampleBuffer;
    double *doubleBuffer = THIS.doubleBufferMono;

    // start the actual processing
    inSamplesLeft = (SInt32 *) ioData->mBuffers[0].mData; // left channel
    fixedPointToSInt16(inSamplesLeft, sampleBufferLeft, inNumberFrames);

    if(isStereo) {

        inSamplesRight = (SInt32 *) ioData->mBuffers[1].mData; // right channel

        fixedPointToSInt16(inSamplesRight, sampleBufferRight, inNumberFrames);

        for( i = 0; i < inNumberFrames; i++ ) { // combine left and right channels into left
            sampleBufferLeft[i] = (SInt16) ((.5 * (float) sampleBufferLeft[i]) + (.5 * (float) sampleBufferRight[i]));
        }
    }

After the (optional) L-R downmix, you can then adjust the amplitude of each sample, for example with amplifyPCMSInt16.


Try configuring the AVAudioSession with the Playback category and the AVAudioSessionCategoryOptionDefaultToSpeaker option:

    NSError *sessionError = NULL;
    BOOL success = [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback
                                                    withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker
                                                          error:&sessionError];
    if(!success) {
        NSLog(@"Error setting category Audio Session: %@", [sessionError localizedDescription]);
    }

Then create the RemoteIO audio unit from an AudioComponentDescription, set its AudioStreamBasicDescription, and wire everything up with AudioUnitSetProperty:

    // Create Audio Unit
    AudioComponentDescription cd = {
        .componentManufacturer = kAudioUnitManufacturer_Apple,
        .componentType = kAudioUnitType_Output,
        .componentSubType = kAudioUnitSubType_RemoteIO,
        .componentFlags = 0,
        .componentFlagsMask = 0
    };

    AudioComponent component = AudioComponentFindNext(NULL, &cd);
    OSStatus result = AudioComponentInstanceNew(component, &_ioUnit);
    NSCAssert2(
               result == noErr,
               @"AudioComponentInstanceNew failed. Error code: %d '%.4s'",
               (int)result,
               (const char *)(&result));

    AudioStreamBasicDescription asbd = {
        .mFormatID          = kAudioFormatLinearPCM,
        .mFormatFlags       =
        kAudioFormatFlagIsSignedInteger |
        kAudioFormatFlagIsPacked |
        kAudioFormatFlagsNativeEndian |
        kAudioFormatFlagIsNonInterleaved,
        .mChannelsPerFrame  = 2,
        .mBytesPerPacket    = sizeof(SInt16),
        .mFramesPerPacket   = 1,
        .mBytesPerFrame     = sizeof(SInt16),
        .mBitsPerChannel    = 8 * sizeof(SInt16),
        .mSampleRate        = 1 // NB: a placeholder; use the real hardware sample rate here, e.g. 44100.0
    };

    result = AudioUnitSetProperty(
                                  _ioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Input,
                                  0,
                                  &asbd,
                                  sizeof(asbd));
    NSCAssert2(
               result == noErr,
               @"Set Stream Format failed. Error code: %d '%.4s'",
               (int)result,
               (const char *)(&result));

    // Set Audio Callback
    AURenderCallbackStruct ioRemoteInput;
    ioRemoteInput.inputProc = audioCallback;

    result = AudioUnitSetProperty(
                                  _ioUnit,
                                  kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Input,
                                  0,
                                  &ioRemoteInput,
                                  sizeof(ioRemoteInput));
    NSCAssert2(
               result == noErr,
               @"Could not set Render Callback. Error code: %d '%.4s'",
               (int)result,
               (const char *)(&result));

    // Initialize Audio Unit
    result = AudioUnitInitialize(_ioUnit);
    NSCAssert2(
               result == noErr,
               @"Initializing Audio Unit failed. Error code: %d '%.4s'",
               (int)result,
               (const char *)(&result));