It seems the samples alternate between the channels, starting with the left channel. With a signal on the left channel's input and silence on the right channel, I get:
    result = [0.2776, -0.0002, 0.2732, -0.0002, 0.2688, -0.0001, 0.2643, -0.0003, 0.2599, ...
So, to split the interleaved data back into separate channels, reshape it into a 2D array:
    result = np.frombuffer(in_data, dtype=np.float32)
    result = np.reshape(result, (frames_per_buffer, 2))
Now to access the left channel use result[:, 0], and for the right channel use result[:, 1].
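For example, here is a minimal self-contained sketch (the sample values and frames_per_buffer are made up for illustration) that builds an interleaved buffer and pulls the two channels back out:

    import numpy as np

    # Hypothetical buffer: 4 interleaved float32 frames, silence on the right channel
    in_data = np.array([0.28, 0.0, 0.27, 0.0, 0.27, 0.0, 0.26, 0.0],
                       dtype=np.float32).tobytes()
    frames_per_buffer = 4

    result = np.frombuffer(in_data, dtype=np.float32)
    result = np.reshape(result, (frames_per_buffer, 2))

    left = result[:, 0]   # array([0.28, 0.27, 0.27, 0.26], dtype=float32)
    right = result[:, 1]  # array([0., 0., 0., 0.], dtype=float32)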
    import numpy as np


    def decode(in_data, channels):
        """
        Convert a byte stream into a 2D numpy array with
        shape (chunk_size, channels).

        Samples are interleaved, so for a stereo stream with left channel
        of [L0, L1, L2, ...] and right channel of [R0, R1, R2, ...], the input
        byte stream is ordered as [L0, R0, L1, R1, ...]
        """
        # TODO: handle data type as parameter, convert between pyaudio/numpy types
        result = np.frombuffer(in_data, dtype=np.float32)

        chunk_length = len(result) // channels
        assert chunk_length * channels == len(result)

        result = np.reshape(result, (chunk_length, channels))
        return result


    def encode(signal):
        """
        Convert a 2D numpy array into a byte stream for PyAudio.

        Signal should be a numpy array with shape (chunk_size, channels)
        """
        interleaved = signal.flatten()

        # TODO: handle data type as parameter, convert between pyaudio/numpy types
        out_data = interleaved.astype(np.float32).tobytes()
        return out_data
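If it helps, here is a rough sketch of how these helpers might be wired into a full-duplex PyAudio callback. The stream parameters (rate, buffer size) and the left/right swap are just illustrative assumptions, not part of the question:

    import pyaudio

    CHANNELS = 2
    RATE = 44100
    FRAMES_PER_BUFFER = 1024


    def callback(in_data, frame_count, time_info, status):
        audio = decode(in_data, CHANNELS)   # (frames, channels) array
        processed = audio[:, ::-1]          # illustrative effect: swap left and right
        return encode(processed), pyaudio.paContinue


    p = pyaudio.PyAudio()
    stream = p.open(format=pyaudio.paFloat32,
                    channels=CHANNELS,
                    rate=RATE,
                    input=True,
                    output=True,
                    frames_per_buffer=FRAMES_PER_BUFFER,
                    stream_callback=callback)  # stream starts automatically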