Routing audio from input to speakers in hardware, not in software

A few years ago, I wrote an application for my company that was going to run on only one specific computer model. The application had to route the audio signal arriving at the microphone jack out to the speakers. Instead of processing the bytes coming in at the jack and pushing them to the speakers in software, I took advantage of knowing the exact hardware and wrote a function that told the built-in sound card to loop the sound from the input to the speakers. Here is that function (it was written in C, using nothing more than the mmsystem API in winmm.dll):

    int setMasterLevelsFromMicrophone(int volume, int mute)
    {
        MMRESULT error;

        // Open the mixer
        HMIXER mixerHandle;
        if ((error = mixerOpen(&mixerHandle, 0, 0, 0, 0)))
            return 1;

        // Get the microphone source information
        MIXERLINE mixerline;
        mixerline.cbStruct = sizeof(MIXERLINE);
        mixerline.dwDestination = 0;
        if ((error = mixerGetLineInfo((HMIXEROBJ)mixerHandle, &mixerline, MIXER_GETLINEINFOF_DESTINATION)))
            return 2;

        // Get the microphone source controls
        MIXERCONTROL mixerControlArray[2];
        MIXERLINECONTROLS mixerLineControls;
        mixerLineControls.cbStruct = sizeof(MIXERLINECONTROLS);
        mixerLineControls.cControls = 2;
        mixerLineControls.dwLineID = mixerline.dwLineID;
        mixerLineControls.pamxctrl = &mixerControlArray[0];
        mixerLineControls.cbmxctrl = sizeof(MIXERCONTROL);
        if ((error = mixerGetLineControls((HMIXEROBJ)mixerHandle, &mixerLineControls, MIXER_GETLINECONTROLSF_ALL)))
            return 3;

        // Set the microphone source volume
        MIXERCONTROLDETAILS_UNSIGNED value;
        MIXERCONTROLDETAILS mixerControlDetails;
        mixerControlDetails.cbStruct = sizeof(MIXERCONTROLDETAILS);
        mixerControlDetails.dwControlID = mixerControlArray[0].dwControlID;
        mixerControlDetails.cChannels = 1;
        mixerControlDetails.cMultipleItems = 0;
        mixerControlDetails.paDetails = &value;
        mixerControlDetails.cbDetails = sizeof(MIXERCONTROLDETAILS_UNSIGNED);
        value.dwValue = volume;
        if ((error = mixerSetControlDetails((HMIXEROBJ)mixerHandle, &mixerControlDetails, MIXER_SETCONTROLDETAILSF_VALUE)))
            return 4;

        // Set the microphone source mute
        mixerControlDetails.dwControlID = mixerControlArray[1].dwControlID;
        value.dwValue = mute;
        if ((error = mixerSetControlDetails((HMIXEROBJ)mixerHandle, &mixerControlDetails, MIXER_SETCONTROLDETAILSF_VALUE)))
            return 5;

        return 0;
    }

As you can see, this approach is very specific to the hardware I was using at the time, since I hardcoded the array indices used to reach specific mixer controls.

Now the question.

Several years have passed, and I now need an application that I am writing in C# WinForms to reproduce the same behavior. That is, I need the audio signal received from a microphone or line input to be routed directly to the speakers. The catch is that the hardware is no longer fixed: the application has to run on any computer with Windows XP or later.

I started working with the NAudio library to do this routing in software (without relying on the built-in sound card's hardware loopback). Here is the small toolbox class I created in C#:

    using System;
    using System.ComponentModel;
    using NAudio.Wave;

    namespace Media
    {
        public partial class AudioToolbox : Component
        {
            private WaveIn waveIn = null;
            private WaveOutEvent waveOut = null;

            public int SampleRate { get; set; }
            public int BitsPerSample { get; set; }
            public int Channels { get; set; }

            public AudioToolbox()
            {
                InitializeComponent();
                SampleRate = 22050;
                BitsPerSample = 16;
                Channels = 1;
            }

            public void BeginReading(int deviceNumber)
            {
                if (waveIn == null)
                {
                    waveIn = new WaveIn();
                    waveIn.DeviceNumber = deviceNumber;
                    waveIn.WaveFormat = new NAudio.Wave.WaveFormat(SampleRate, BitsPerSample, Channels);
                    waveIn.StartRecording();
                }
            }

            public void BeginLoopback()
            {
                if (waveIn != null && waveOut == null)
                {
                    WaveInProvider waveInProvider = new WaveInProvider(waveIn);
                    waveOut = new WaveOutEvent();
                    waveOut.DeviceNumber = -1; // Default output device
                    waveOut.DesiredLatency = 300;
                    waveOut.Init(waveInProvider);
                    waveOut.Play();
                }
            }

            public void EndReading()
            {
                if (waveIn != null)
                {
                    waveIn.StopRecording();
                    waveIn.Dispose();
                    waveIn = null;
                }
            }

            public void EndLoopback()
            {
                if (waveOut != null)
                {
                    waveOut.Stop();
                    waveOut.Dispose();
                    waveOut = null;
                }
            }
        }
    }

The problem I am facing is, I am guessing, one of resources. This code does loop the audio through to the speakers, but activity elsewhere in the system causes the sound to pop and skip. For example, if I open an application or rapidly minimize and maximize a folder window, the playback pops and skips.

Is there any way to tune my use of the NAudio library to avoid these pops and skips? Or would I be better off finding a general way to route the audio through the hardware, as I did years ago with my C application?

EDIT:

The application I use to test this audio toolbox is very simple: just the default WinForms application created by Visual Studio 2010. I added one button to the form, with the following handler attached to its Click event:

    private void button1_Click(object sender, EventArgs e)
    {
        AudioToolbox atr = new AudioToolbox();
        atr.BeginReading(0);
        atr.BeginLoopback();
    }

I also set the project to target .NET Framework 4, because that is the framework of the application this toolbox needs to integrate into. When I compile the application and press the button, I hear audio routed from my microphone jack to the speakers. Then I open Windows Explorer and repeatedly minimize and maximize it. That action causes the audio to skip.

I just posted this question on the NAudio forums. In case someone stumbles upon this page in the future, here is the link: Question posted on NAudio forums

2 answers

This is the best I have been able to achieve so far at minimizing the skipping. I am going to accept this as the answer so that anyone else who stumbles onto this page can see what I did, but if anyone comes up with a better solution, I will gladly accept their answer instead.

The first thing I had to do was abandon NAudio 1.5, the latest official release of NAudio. Instead, I grabbed the latest development build, a beta of NAudio 1.6. I did this because the 1.6 beta includes a new wave input class, WaveInEvent. WaveInEvent is beneficial because it uses an event-driven callback rather than window messages, so reading from the microphone jack no longer involves the GUI thread.

The second thing I did was switch from WaveOutEvent to DirectSoundOut. In my testing I had found that, when playing audio from a file, WaveOutEvent would skip depending on my processor usage but DirectSoundOut would not, so I figured the same would hold when playing audio from the microphone port. I therefore use DirectSoundOut to play the sound from the microphone.

Here is my new AudioInputToolbox:

    using System;
    using System.ComponentModel;
    using NAudio.Wave;

    namespace Media
    {
        public partial class AudioInputToolbox : Component
        {
            private WaveInEvent waveIn = null;
            private DirectSoundOut waveOut = null;

            public int SampleRate { get; set; }
            public int BitsPerSample { get; set; }
            public int Channels { get; set; }

            public AudioInputToolbox()
            {
                InitializeComponent();
                SampleRate = 22050;
                BitsPerSample = 16;
                Channels = 1;
            }

            public void BeginReading(int deviceNumber)
            {
                if (waveIn == null)
                {
                    waveIn = new WaveInEvent();
                    waveIn.DeviceNumber = deviceNumber;
                    waveIn.WaveFormat = new NAudio.Wave.WaveFormat(SampleRate, BitsPerSample, Channels);
                    waveIn.StartRecording();
                }
            }

            public void BeginLoopback()
            {
                if (waveIn != null && waveOut == null)
                {
                    waveOut = new DirectSoundOut(DirectSoundOut.DSDEVID_DefaultPlayback, 300);
                    waveOut.Init(new WaveInProvider(waveIn));
                    waveOut.Play();
                }
            }

            public void EndReading()
            {
                if (waveIn != null)
                {
                    waveIn.StopRecording();
                    waveIn.Dispose();
                    waveIn = null;
                }
            }

            public void EndLoopback()
            {
                if (waveOut != null)
                {
                    waveOut.Stop();
                    waveOut.Dispose();
                    waveOut = null;
                }
            }
        }
    }

And here is the code for my new test application. It is just a form with two buttons on it, each with a click handler: one is the start button, the other the stop button.

    using System;
    using System.Threading;
    using System.Windows.Forms;
    using Media;

    public partial class AITL : Form
    {
        AudioInputToolbox atr = new AudioInputToolbox();

        public AITL()
        {
            InitializeComponent();
        }

        private void startButton_Click(object sender, EventArgs e)
        {
            new Thread(() =>
            {
                atr.BeginReading(0);
                atr.BeginLoopback();
            }).Start();
        }

        private void stopButton_Click(object sender, EventArgs e)
        {
            atr.EndReading();
            atr.EndLoopback();
        }
    }

This approach does not completely solve my problem; it only makes the skipping somewhat less frequent and less severe.

Again, I will gladly accept a different answer from anyone who can eliminate the skipping entirely. To reiterate: I get skipping after I click the start button and then repeatedly minimize and maximize a window (any window; I use Windows Explorer). The full, fully functional application this audio component has to fit into does a lot of GUI-intensive work, so minimizing and maximizing a window is a good simulation of that load.


I think you just need to run your processing on a separate thread. You are doing all of your work on the UI thread, so whenever the UI does anything, it pauses your processing. I am guessing the audio is driven by events, and events are handled on the thread that dispatched them; in this case, that is your UI thread.

Try wrapping your code like this:

    AudioToolbox atr = new AudioToolbox();
    var audioThread = new Thread(() =>
    {
        atr.BeginReading(0);
        atr.BeginLoopback();
    });
    audioThread.Start();

Otherwise I see no reason why running external tasks would cause interruptions; I have done real-time audio and video processing on a single thread without problems on a variety of computers. What is probably happening is that, since everything runs on the UI thread, your audio processing is paused whenever the screen is redrawn. If that is the case, a dedicated thread will solve the problem.
