Recording phone calls on iOS: where to start?

I would like to record every call made through the MobilePhone application. My device is jailbroken, so App Store restrictions are not a problem.

Obviously, the standard frameworks won't give me anything here. I've also looked through the private frameworks, but didn't see anything useful.

I can record from the microphone right now, but as soon as a call starts, the Phone app takes the microphone in exclusive mode and my app stops receiving audio data.

Any guidance?

+4
ios jailbreak
Oct 14 '13 at 8:44
2 answers

The "Audio Recorder" tweak is actually a very simple one. The author tried to obfuscate the important part of his tweak (which function gets hooked), but here's what I found.

The tweak basically hooks only one function, AudioConverterConvertComplexBuffer from AudioToolbox.framework. The tweak is loaded into the mediaserverd daemon at startup.

First, we need to find out when to start recording, because AudioConverterConvertComplexBuffer is called even when you're just playing regular audio files. To achieve this, the tweak listens for the kCTCallStatusChangeNotification notification from CTTelephonyCenter.
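As a sketch of that trigger: CTTelephonyCenter is a private CoreTelephony API, so the prototypes below are assumptions taken from the usual class-dump style declarations, and this will only build and run on a jailbroken device. Registering for the call-status notification could look roughly like this:

```objectivec
#import <CoreFoundation/CoreFoundation.h>
#import <Foundation/Foundation.h>

// Private CoreTelephony symbols (not in the public SDK; declarations assumed)
extern CFNotificationCenterRef CTTelephonyCenterGetDefault(void);
extern void CTTelephonyCenterAddObserver(CFNotificationCenterRef center,
                                         void *observer,
                                         CFNotificationCallback callback,
                                         CFStringRef name,
                                         const void *object,
                                         CFNotificationSuspensionBehavior behavior);

static void callStatusChanged(CFNotificationCenterRef center, void *observer,
                              CFStringRef name, const void *object,
                              CFDictionaryRef userInfo) {
    // userInfo carries the call status; decide here whether to start
    // or stop writing audio files in the hook below.
    NSLog(@"Call status changed: %@", userInfo);
}

__attribute__((constructor))
static void registerCallObserver(void) {
    CTTelephonyCenterAddObserver(CTTelephonyCenterGetDefault(), NULL,
                                 callStatusChanged,
                                 CFSTR("kCTCallStatusChangeNotification"),
                                 NULL, CFNotificationSuspensionBehaviorHold);
}
```

The callback fires for every call-state transition, so a real tweak would inspect the userInfo dictionary to distinguish a call starting from a call ending.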

Second, the AudioConverterConvertComplexBuffer hook itself. I haven't finished it yet, so I will post what I have. Here is some working code to get you started.

Helper class to track AudioConverterRef-ExtAudioFileRef pairs:

    @interface ConverterFile : NSObject
    @property (nonatomic, assign) AudioConverterRef converter;
    @property (nonatomic, assign) ExtAudioFileRef file;
    @property (nonatomic, assign) BOOL failedToOpenFile;
    @end

    @implementation ConverterFile
    @end

Container for the ConverterFile objects:

 NSMutableArray* callConvertersFiles = [[NSMutableArray alloc] init]; 

Pointer to the original AudioConverterConvertComplexBuffer implementation:

 OSStatus(*AudioConverterConvertComplexBuffer_orig)(AudioConverterRef, UInt32, const AudioBufferList*, AudioBufferList*); 

Declaration of the AudioConverterConvertComplexBuffer hook:

    OSStatus AudioConverterConvertComplexBuffer_hook(AudioConverterRef inAudioConverter,
                                                     UInt32 inNumberPCMFrames,
                                                     const AudioBufferList *inInputData,
                                                     AudioBufferList *outOutputData);

Hooking:

 MSHookFunction(AudioConverterConvertComplexBuffer, AudioConverterConvertComplexBuffer_hook, &AudioConverterConvertComplexBuffer_orig); 
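For context, here is a minimal sketch of where that MSHookFunction call typically lives: the constructor of the tweak dylib. This assumes the MobileSubstrate substrate.h header, the _hook and _orig declarations shown above, and a tweak filter that injects the dylib into mediaserverd; it only builds with a jailbreak toolchain such as Theos:

```objectivec
#include <substrate.h>
#import <AudioToolbox/AudioToolbox.h>

__attribute__((constructor))
static void initializeTweak(void) {
    // Replace AudioConverterConvertComplexBuffer with our hook and keep a
    // pointer to the original so the hook can call through to it.
    MSHookFunction((void *)AudioConverterConvertComplexBuffer,
                   (void *)AudioConverterConvertComplexBuffer_hook,
                   (void **)&AudioConverterConvertComplexBuffer_orig);
}
```

The constructor runs as soon as the dylib is loaded into mediaserverd, so the hook is in place before any call audio flows through the converter.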

Definition of the AudioConverterConvertComplexBuffer hook:

    OSStatus AudioConverterConvertComplexBuffer_hook(AudioConverterRef inAudioConverter,
                                                     UInt32 inNumberPCMFrames,
                                                     const AudioBufferList *inInputData,
                                                     AudioBufferList *outOutputData) {
        // Search for an existing AudioConverterRef-ExtAudioFileRef pair
        __block ConverterFile* cf = nil;
        [callConvertersFiles enumerateObjectsUsingBlock:^(ConverterFile* obj, NSUInteger idx, BOOL *stop) {
            if (obj.converter == inAudioConverter) {
                cf = obj;
                *stop = YES;
            }
        }];
        // Insert a new AudioConverterRef
        if (!cf) {
            cf = [[[ConverterFile alloc] init] autorelease];
            cf.converter = inAudioConverter;
            [callConvertersFiles addObject:cf];
        }
        // Open a new audio file
        if (!cf.file && !cf.failedToOpenFile) {
            // Obtain the input audio format
            AudioStreamBasicDescription desc;
            UInt32 descSize = sizeof(desc);
            AudioConverterGetProperty(cf.converter, kAudioConverterCurrentInputStreamDescription, &descSize, &desc);
            // Open the audio file
            CFURLRef url = CFURLCreateWithFileSystemPath(NULL,
                (CFStringRef)[NSString stringWithFormat:@"/var/mobile/Media/DCIM/Call%u.caf",
                    [callConvertersFiles indexOfObject:cf]],
                kCFURLPOSIXPathStyle, false);
            ExtAudioFileRef audioFile = NULL;
            OSStatus result = ExtAudioFileCreateWithURL(url, kAudioFileCAFType, &desc, NULL,
                                                        kAudioFileFlags_EraseFile, &audioFile);
            if (result != 0) {
                cf.failedToOpenFile = YES;
                cf.file = NULL;
            } else {
                cf.failedToOpenFile = NO;
                cf.file = audioFile;
                // Write the audio format
                ExtAudioFileSetProperty(cf.file, kExtAudioFileProperty_ClientDataFormat, sizeof(desc), &desc);
            }
            CFRelease(url);
        }
        // Write the audio buffer
        if (cf.file) {
            ExtAudioFileWrite(cf.file, inNumberPCMFrames, inInputData);
        }
        return AudioConverterConvertComplexBuffer_orig(inAudioConverter, inNumberPCMFrames,
                                                       inInputData, outOutputData);
    }

That's roughly what the tweak does. Why is it built this way? During a phone call, AudioConverterConvertComplexBuffer_hook is called continuously, but the inAudioConverter argument varies. I found that during a single phone call more than nine different inAudioConverter objects can pass through our hook. They have different audio formats, so we can't write everything into one file. That's why we keep an array of AudioConverterRef-ExtAudioFileRef pairs: to track which converter is written to which file. This code creates as many files as there are AudioConverterRef objects. The files will contain different audio; one or two will hold the speaker sound, the others the microphone. I tested this code on an iPhone 4S with iOS 6.1 and it works. Unfortunately, on the 4S calls can only be recorded when the speakerphone is on. There is no such restriction on the iPhone 5; the tweak's description mentions the same limitation.

All that remains is to figure out how to identify the two specific inAudioConverter objects we actually need: one for the speaker and one for the microphone. Everything else is not a problem.

One last thing: the mediaserverd process is sandboxed, and so is our tweak. We can't save files just anywhere. That's why I chose that particular file path: it is writable even from inside the sandbox.

P.S. Even though I posted this code, the credit should go to Elias Limnos. It's his work.

+9
Oct 16 '13 at

Do you want to detect calls, or do you want to record the audio of those calls? The former is very simple and requires only one notification. For the latter I haven't found anything. I did some research and couldn't find an API that would let you record audio during a phone call. I don't know of anyone who has done it.

The only thing I can think of is CommCenter. This daemon talks to the baseband and probably passes the microphone audio stream to it. This is just a guess, but looking at the CommCenter disassembly I found hints that it is responsible for routing audio streams. Newer Qualcomm basebands and iOS talk to each other over USB using the proprietary QMI protocol. One of the things this protocol handles is audio streaming during phone calls; it is called the Core sound driver service. So all I can suggest is to disassemble CommCenter and find a way to route the audio streams through your own handler, where you can record them. That obviously requires extensive knowledge of reverse engineering, the QMI protocol, IOKit for talking to USB devices, and so on. I don't think there is an API that will do this for you, or that you can do it with simple hooking. We are talking about C++ code, which is much harder to reverse-engineer than Objective-C and much harder to hook.

+1
Oct 14 '13 at 18:18


