"Audio Recorder" is really a very simple tweak. The author tried to obscure its important part (which function gets hooked), but here is what I found.
The tweak hooks just one function, AudioConverterConvertComplexBuffer from AudioToolbox.framework, and is loaded into the mediaserverd daemon at startup.
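For context, a MobileSubstrate tweak is injected into a specific process by a filter plist that sits next to its dylib. A minimal sketch of such a filter (the tweak's real filename is unknown, so the name here is a placeholder; com.apple.mediaserverd is the usual bundle identifier used to target mediaserverd):

```plist
/* /Library/MobileSubstrate/DynamicLibraries/AudioRecorder.plist (hypothetical name) */
{
    Filter = {
        Bundles = ( "com.apple.mediaserverd" );
    };
}
```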
First, we need to find out when to start recording, because AudioConverterConvertComplexBuffer is also called when you are simply playing back ordinary audio files. To detect an active call, the tweak listens for the kCTCallStatusChangeNotification notification from CTTelephonyCenter.
Second, the AudioConverterConvertComplexBuffer hook itself. I have not finished it yet, so I will publish what I have. Here are some working pieces to get you started.
A helper class to track AudioConverterRef-ExtAudioFileRef pairs:
```objc
@interface ConverterFile : NSObject

@property (nonatomic, assign) AudioConverterRef converter;
@property (nonatomic, assign) ExtAudioFileRef file;
@property (nonatomic, assign) BOOL failedToOpenFile;

@end

@implementation ConverterFile
@end
```
A container for the ConverterFile objects:
```objc
NSMutableArray* callConvertersFiles = [[NSMutableArray alloc] init];
```
A pointer that will hold the original implementation of AudioConverterConvertComplexBuffer:
```objc
OSStatus(*AudioConverterConvertComplexBuffer_orig)(AudioConverterRef, UInt32, const AudioBufferList*, AudioBufferList*);
```
The declaration of the AudioConverterConvertComplexBuffer hook:
```objc
OSStatus AudioConverterConvertComplexBuffer_hook(AudioConverterRef inAudioConverter, UInt32 inNumberPCMFrames, const AudioBufferList *inInputData, AudioBufferList *outOutputData);
```
Installing the hook:
```objc
MSHookFunction(AudioConverterConvertComplexBuffer, AudioConverterConvertComplexBuffer_hook, &AudioConverterConvertComplexBuffer_orig);
```
The definition of the AudioConverterConvertComplexBuffer hook:
```objc
OSStatus AudioConverterConvertComplexBuffer_hook(AudioConverterRef inAudioConverter, UInt32 inNumberPCMFrames, const AudioBufferList *inInputData, AudioBufferList *outOutputData)
{
    //Searching for existing AudioConverterRef-ExtAudioFileRef pair
    __block ConverterFile* cf = nil;
    [callConvertersFiles enumerateObjectsUsingBlock:^(ConverterFile* obj, NSUInteger idx, BOOL *stop) {
        if (obj.converter == inAudioConverter) {
            cf = obj;
            *stop = YES;
        }
    }];

    //Inserting new AudioConverterRef
    if (!cf) {
        cf = [[[ConverterFile alloc] init] autorelease];
        cf.converter = inAudioConverter;
        [callConvertersFiles addObject:cf];
    }

    //Opening new audio file
    if (!cf.file && !cf.failedToOpenFile) {
        //Obtaining input audio format
        AudioStreamBasicDescription desc;
        UInt32 descSize = sizeof(desc);
        AudioConverterGetProperty(cf.converter, kAudioConverterCurrentInputStreamDescription, &descSize, &desc);

        //Opening audio file
        CFURLRef url = CFURLCreateWithFileSystemPath(NULL, (CFStringRef)[NSString stringWithFormat:@"/var/mobile/Media/DCIM/Call%u.caf", [callConvertersFiles indexOfObject:cf]], kCFURLPOSIXPathStyle, false);
        ExtAudioFileRef audioFile = NULL;
        OSStatus result = ExtAudioFileCreateWithURL(url, kAudioFileCAFType, &desc, NULL, kAudioFileFlags_EraseFile, &audioFile);
        if (result != 0) {
            cf.failedToOpenFile = YES;
            cf.file = NULL;
        } else {
            cf.failedToOpenFile = NO;
            cf.file = audioFile;

            //Writing audio format
            ExtAudioFileSetProperty(cf.file, kExtAudioFileProperty_ClientDataFormat, sizeof(desc), &desc);
        }
        CFRelease(url);
    }

    //Writing audio buffer
    if (cf.file) {
        ExtAudioFileWrite(cf.file, inNumberPCMFrames, inInputData);
    }

    return AudioConverterConvertComplexBuffer_orig(inAudioConverter, inNumberPCMFrames, inInputData, outOutputData);
}
```
This is approximately what the tweak does. Why is it structured this way? During a phone call, AudioConverterConvertComplexBuffer_hook is called continuously, but with different inAudioConverter arguments. I found that during a single phone call, more than nine distinct inAudioConverter objects can pass through our hook. They have different audio formats, so we cannot write everything to one file. That is why we keep an array of AudioConverterRef-ExtAudioFileRef pairs: to track which converter is written to which file. This code creates as many files as there are AudioConverterRef objects. Each file contains different audio: one or two hold the speaker output, the others the microphone. I tested this code on an iPhone 4S with iOS 6.1 and it works. Unfortunately, on the 4S calls can only be recorded when the speakerphone is on; the iPhone 5 has no such restriction. This matches the tweak's description.
It only remains to figure out how to identify the two specific inAudioConverter objects we need: one for the speaker and one for the microphone. Everything else is straightforward.
And lastly: the mediaserverd process is sandboxed, just like our tweak, so we cannot save files just anywhere. That is why I chose this particular file path: /var/mobile/Media/DCIM is writable even from inside the sandbox.
P.S. Even though I am the one posting this code, the credit should go to Elias Limneos; he did the work.