AudioConverter.FillComplexBuffer returns -50 and does not convert anything

I am closely following this Xamarin sample (based on this Apple sample) to convert a LinearPCM file to an AAC file.

The sample works fine, but when implemented in my project, the FillComplexBuffer method returns a -50 error and the InputData event is not fired even once, so nothing is converted.

The error appears only when testing on a device. When testing on the simulator, everything works fine and I get a properly encoded AAC file at the end.

I have tried a lot of things today, and I cannot see any difference between my code and the sample code. Do you have any idea where this may come from?

I don't know if this is related to Xamarin at all; it doesn't seem so, since the Xamarin sample works fine.

Here is the relevant part of my code:

    protected void Encode(string path)
    {
        // In class setup. File at TempWavFilePath has DecodedFormat as format.
        //
        // DecodedFormat = AudioStreamBasicDescription.CreateLinearPCM();
        // AudioStreamBasicDescription encodedFormat = new AudioStreamBasicDescription()
        // {
        //     Format = AudioFormatType.MPEG4AAC,
        //     SampleRate = DecodedFormat.SampleRate,
        //     ChannelsPerFrame = DecodedFormat.ChannelsPerFrame,
        // };
        // AudioStreamBasicDescription.GetFormatInfo(ref encodedFormat);
        // EncodedFormat = encodedFormat;

        // Setup converter
        AudioStreamBasicDescription inputFormat = DecodedFormat;
        AudioStreamBasicDescription outputFormat = EncodedFormat;

        AudioConverterError converterCreateError;
        AudioConverter converter = AudioConverter.Create(inputFormat, outputFormat, out converterCreateError);
        if (converterCreateError != AudioConverterError.None)
        {
            Console.WriteLine("Converter creation error: " + converterCreateError);
        }
        converter.EncodeBitRate = 192000; // AAC 192kbps

        // get the actual formats back from the Audio Converter
        inputFormat = converter.CurrentInputStreamDescription;
        outputFormat = converter.CurrentOutputStreamDescription;

        /*** INPUT ***/

        AudioFile inputFile = AudioFile.OpenRead(NSUrl.FromFilename(TempWavFilePath));

        // init buffer
        const int inputBufferBytesSize = 32768;
        IntPtr inputBufferPtr = Marshal.AllocHGlobal(inputBufferBytesSize);

        // calc number of packets per read
        int inputSizePerPacket = inputFormat.BytesPerPacket;
        int inputBufferPacketSize = inputBufferBytesSize / inputSizePerPacket;
        AudioStreamPacketDescription[] inputPacketDescriptions = null;

        // init position
        long inputFilePosition = 0;

        // define input delegate
        converter.InputData += delegate(ref int numberDataPackets, AudioBuffers data, ref AudioStreamPacketDescription[] dataPacketDescription)
        {
            // how much to read
            if (numberDataPackets > inputBufferPacketSize)
            {
                numberDataPackets = inputBufferPacketSize;
            }

            // read from the file
            int outNumBytes;
            AudioFileError readError = inputFile.ReadPackets(false, out outNumBytes, inputPacketDescriptions, inputFilePosition, ref numberDataPackets, inputBufferPtr);
            if (readError != 0)
            {
                Console.WriteLine("Read error: " + readError);
            }

            // advance input file packet position
            inputFilePosition += numberDataPackets;

            // put the data pointer into the buffer list
            data.SetData(0, inputBufferPtr, outNumBytes);

            // add packet descriptions if required
            if (dataPacketDescription != null)
            {
                if (inputPacketDescriptions != null)
                {
                    dataPacketDescription = inputPacketDescriptions;
                }
                else
                {
                    dataPacketDescription = null;
                }
            }

            return AudioConverterError.None;
        };

        /*** OUTPUT ***/

        // create the destination file
        var outputFile = AudioFile.Create(NSUrl.FromFilename(path), AudioFileType.M4A, outputFormat, AudioFileFlags.EraseFlags);

        // init buffer
        const int outputBufferBytesSize = 32768;
        IntPtr outputBufferPtr = Marshal.AllocHGlobal(outputBufferBytesSize);
        AudioBuffers buffers = new AudioBuffers(1);

        // calc number of packets per write
        int outputSizePerPacket = outputFormat.BytesPerPacket;
        AudioStreamPacketDescription[] outputPacketDescriptions = null;

        if (outputSizePerPacket == 0)
        {
            // if the destination format is VBR, we need to get max size per packet from the converter
            outputSizePerPacket = (int)converter.MaximumOutputPacketSize;

            // allocate memory for the PacketDescription structures describing the layout of each packet
            outputPacketDescriptions = new AudioStreamPacketDescription[outputBufferBytesSize / outputSizePerPacket];
        }
        int outputBufferPacketSize = outputBufferBytesSize / outputSizePerPacket;

        // init position
        long outputFilePosition = 0;

        long totalOutputFrames = 0; // used for debugging

        // write magic cookie if necessary
        if (converter.CompressionMagicCookie != null && converter.CompressionMagicCookie.Length != 0)
        {
            outputFile.MagicCookie = converter.CompressionMagicCookie;
        }

        // loop to convert data
        Console.WriteLine("Converting...");
        while (true)
        {
            // create buffer
            buffers[0] = new AudioBuffer()
            {
                NumberChannels = outputFormat.ChannelsPerFrame,
                DataByteSize = outputBufferBytesSize,
                Data = outputBufferPtr
            };

            int writtenPackets = outputBufferPacketSize;

            // LET'S CONVERT (it's about time...)
            AudioConverterError converterFillError = converter.FillComplexBuffer(ref writtenPackets, buffers, outputPacketDescriptions);
            if (converterFillError != AudioConverterError.None)
            {
                Console.WriteLine("FillComplexBuffer error: " + converterFillError);
            }

            if (writtenPackets == 0) // EOF
            {
                break;
            }

            // write to output file
            int inNumBytes = buffers[0].DataByteSize;

            AudioFileError writeError = outputFile.WritePackets(false, inNumBytes, outputPacketDescriptions, outputFilePosition, ref writtenPackets, outputBufferPtr);
            if (writeError != 0)
            {
                Console.WriteLine("WritePackets error: {0}", writeError);
            }

            // advance output file packet position
            outputFilePosition += writtenPackets;

            if (FlowFormat.FramesPerPacket != 0)
            {
                // the format has constant frames per packet
                totalOutputFrames += (writtenPackets * FlowFormat.FramesPerPacket);
            }
            else
            {
                // variable frames per packet require doing this for each packet (adding up the number of sample frames of data in each packet)
                for (var i = 0; i < writtenPackets; ++i)
                {
                    totalOutputFrames += outputPacketDescriptions[i].VariableFramesInPacket;
                }
            }
        }

        // write out any of the leading and trailing frames for compressed formats only
        if (outputFormat.BitsPerChannel == 0)
        {
            Console.WriteLine("Total number of output frames counted: {0}", totalOutputFrames);
            WritePacketTableInfo(converter, outputFile);
        }

        // write the cookie again - sometimes codecs will update cookies at the end of a conversion
        if (converter.CompressionMagicCookie != null && converter.CompressionMagicCookie.Length != 0)
        {
            outputFile.MagicCookie = converter.CompressionMagicCookie;
        }

        // Clean everything
        Marshal.FreeHGlobal(inputBufferPtr);
        Marshal.FreeHGlobal(outputBufferPtr);
        converter.Dispose();
        outputFile.Dispose();

        // Remove temp file
        File.Delete(TempWavFilePath);
    }
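For what it's worth, -50 is Core Audio's generic kAudio_ParamError, which usually means one of the arguments (often one of the stream descriptions) is considered invalid. To compare what the device sees against what the simulator sees, I have been logging both formats right before AudioConverter.Create with a small helper like this (DumpFormat is my own hypothetical name, not part of the sample):

    static void DumpFormat(string label, AudioStreamBasicDescription format)
    {
        // Log every field the converter validates, so device output
        // can be diffed against simulator output.
        Console.WriteLine("{0}: {1} Hz, format {2}, flags {3}, {4} ch, {5} bits/ch, {6} bytes/packet, {7} frames/packet, {8} bytes/frame",
            label,
            format.SampleRate,
            format.Format,
            format.FormatFlags,
            format.ChannelsPerFrame,
            format.BitsPerChannel,
            format.BytesPerPacket,
            format.FramesPerPacket,
            format.BytesPerFrame);
    }

    // Usage, just before creating the converter:
    // DumpFormat("input", inputFormat);
    // DumpFormat("output", outputFormat);

Both descriptions print identically on the device and in the simulator, so I don't think the formats themselves are the problem.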

I have already seen this SO question, but its non-detailed C++/Obj-C answer does not seem to match my problem.

Thanks!

ios core-audio
1 answer

Finally I found a solution!

I needed to set the AVAudioSession category before converting the file.

    AVAudioSession.SharedInstance().SetCategory(AVAudioSessionCategory.AudioProcessing);
    AVAudioSession.SharedInstance().SetActive(true);

Since I also use an AudioQueue for offline rendering (RenderOffline), I actually have to set the category to AVAudioSessionCategory.PlayAndRecord so that both the offline rendering and the audio conversion work.
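For reference, here is a minimal sketch of the session setup I ended up with. The NSError checks are optional and assume the NSError-returning overloads of SetCategory and SetActive available in Xamarin.iOS; adapt to whatever overloads your version exposes:

    var session = AVAudioSession.SharedInstance();

    // PlayAndRecord covers both the offline AudioQueue rendering and the conversion
    NSError categoryError = session.SetCategory(AVAudioSessionCategory.PlayAndRecord);
    if (categoryError != null)
        Console.WriteLine("SetCategory error: " + categoryError.LocalizedDescription);

    NSError activationError = session.SetActive(true);
    if (activationError != null)
        Console.WriteLine("SetActive error: " + activationError.LocalizedDescription);

Doing this once, before any Core Audio work starts, was enough to make FillComplexBuffer behave on the device.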

