CSCore ( https://github.com/filoe/cscore ) seems like a very good audio library for C#, but it lacks documentation and good examples.
I worked with Bass.Net for a long time, and CSCore's architecture is quite different from the Bass library's, so it is difficult to find the right way to perform some common tasks.
I'm trying to grab the microphone input from the WasapiCapture device and play the recorded data back through the WasapiOut device, but I have not succeeded.
Below is the code I found after googling, but it does not work.
MMDeviceEnumerator deviceEnum = new MMDeviceEnumerator();
MMDeviceCollection devices = deviceEnum.EnumAudioEndpoints(DataFlow.Capture, DeviceState.Active);

using (var capture = new WasapiCapture())
{
    capture.Device = deviceEnum.GetDefaultAudioEndpoint(DataFlow.Capture, Role.Multimedia);
    capture.Initialize();
    using (var source = new SoundInSource(capture))
    {
        using (var soundOut = new WasapiOut())
        {
            capture.Start();
            soundOut.Device = deviceEnum.GetDefaultAudioEndpoint(DataFlow.Render, Role.Multimedia);
            soundOut.Initialize(source);
            soundOut.Play();
        }
    }
}
What I'm trying to do is write an application like this: http://www.pitchtech.ch/PitchBox/
I have my own DSP functions that I want to apply to the recorded data.
Does anyone have an example of routing WasapiCapture into WasapiOut with a custom DSP in between?
EDIT:
I found a solution with the help of the creator of the CSCore library, Florian Rosmann (filoe).
Here is an example of a DSP class that applies a gain to the audio data passing through it.
class DSPGain : ISampleSource
{
    ISampleSource _source;

    public DSPGain(ISampleSource source)
    {
        if (source == null)
            throw new ArgumentNullException("source");
        _source = source;
    }

    public int Read(float[] buffer, int offset, int count)
    {
        // Convert the gain in dB to a linear amplification factor: 10^(dB/20).
        float gainAmplification = (float)(Math.Pow(10.0, GainDB / 20.0));
        int samples = _source.Read(buffer, offset, count);
        for (int i = offset; i < offset + samples; i++)
        {
            // Amplify each sample and clamp it to the valid [-1, 1] range.
            buffer[i] = Math.Max(Math.Min(buffer[i] * gainAmplification, 1), -1);
        }
        return samples;
    }

    public float GainDB { get; set; }

    public bool CanSeek { get { return _source.CanSeek; } }

    public WaveFormat WaveFormat { get { return _source.WaveFormat; } }

    public long Position
    {
        get { return _source.Position; }
        set { _source.Position = value; }
    }

    public long Length { get { return _source.Length; } }

    public void Dispose() { }
}
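As an aside, the core of that Read method is the standard decibel-to-linear conversion, amplification = 10^(dB/20), followed by clamping to the [-1, 1] sample range. Here is a minimal standalone sketch of just that arithmetic, independent of CSCore (the GainDemo class and its method names are mine, for illustration only):

```csharp
using System;

public class GainDemo
{
    // Convert a gain in decibels to a linear amplitude factor: 10^(dB/20).
    public static float DbToAmplitude(float gainDb)
    {
        return (float)Math.Pow(10.0, gainDb / 20.0);
    }

    // Amplify one sample and clamp it to the valid [-1, 1] range.
    public static float ApplyGain(float sample, float amplification)
    {
        return Math.Max(Math.Min(sample * amplification, 1f), -1f);
    }

    public static void Main()
    {
        float amp = DbToAmplitude(6f);            // ~1.995: +6 dB roughly doubles amplitude
        Console.WriteLine(ApplyGain(0.25f, amp)); // quiet sample, amplified but not clamped
        Console.WriteLine(ApplyGain(0.9f, amp));  // loud sample, clamped to 1
    }
}
```

Note that the hard clamp prevents wrap-around distortion when a sample exceeds full scale, at the cost of clipping loud peaks.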
You can use this class as in the example below:
WasapiCapture waveIn;
WasapiOut soundOut;
DSPGain gain;

private void StartFullDuplex()
{
    try
    {
        MMDeviceEnumerator deviceEnum = new MMDeviceEnumerator();
        MMDeviceCollection devices = deviceEnum.EnumAudioEndpoints(DataFlow.Capture, DeviceState.Active);

        // Capture from the default microphone in exclusive mode with 5 ms latency.
        waveIn = new WasapiCapture(false, AudioClientShareMode.Exclusive, 5);
        waveIn.Device = deviceEnum.GetDefaultAudioEndpoint(DataFlow.Capture, Role.Multimedia);
        waveIn.Initialize();
        waveIn.Start();

        // FillWithZeros keeps the output fed even when no new input data is available.
        var source = new SoundInSource(waveIn) { FillWithZeros = true };

        soundOut = new WasapiOut(false, AudioClientShareMode.Exclusive, 5);
        soundOut.Device = deviceEnum.GetDefaultAudioEndpoint(DataFlow.Render, Role.Multimedia);

        // Insert the DSP between capture and playback.
        gain = new DSPGain(source.ToSampleSource());
        gain.GainDB = 5;

        soundOut.Initialize(gain.ToWaveSource(16));
        soundOut.Play();
    }
    catch (Exception ex)
    {
        Debug.WriteLine("Exception in StartFullDuplex: " + ex.Message);
    }
}

private void StopFullDuplex()
{
    if (soundOut != null) soundOut.Dispose();
    if (waveIn != null) waveIn.Dispose();
}