Help with dynamic range compression (audio)

I am writing a C# function to perform dynamic range compression (an audio effect that attenuates transient peaks and boosts everything else, producing an overall louder sound). I wrote a function that I think does this:

![alt text](http://www.freeimagehosting.net/uploads/feea390f84.jpg)

```csharp
public static void Compress(ref short[] input, double thresholdDb, double ratio)
{
    double maxDb = thresholdDb - (thresholdDb / ratio);
    double maxGain = Math.Pow(10, -maxDb / 20.0);

    for (int i = 0; i < input.Length; i += 2)
    {
        // convert sample values to ABS gain and store original signs
        int signL = input[i] < 0 ? -1 : 1;
        double valL = (double)input[i] / 32768.0;
        if (valL < 0.0)
        {
            valL = -valL;
        }

        int signR = input[i + 1] < 0 ? -1 : 1;
        double valR = (double)input[i + 1] / 32768.0;
        if (valR < 0.0)
        {
            valR = -valR;
        }

        // calculate mono value and compress
        double val = (valL + valR) * 0.5;
        double posDb = -Math.Log10(val) * 20.0;
        if (posDb < thresholdDb)
        {
            posDb = thresholdDb - ((thresholdDb - posDb) / ratio);
        }

        // measure L and R sample values relative to mono value
        double multL = valL / val;
        double multR = valR / val;

        // convert compressed db value to gain and amplify
        val = Math.Pow(10, -posDb / 20.0);
        val = val / maxGain;

        // re-calculate L and R gain values relative to compressed/amplified
        // mono value
        valL = val * multL;
        valR = val * multR;

        double lim = 1.5; // determined by experimentation, with the goal
                          // being that the lines below should never (or rarely) be hit
        if (valL > lim)
        {
            valL = lim;
        }
        if (valR > lim)
        {
            valR = lim;
        }

        double maxval = 32000.0 / lim;

        // convert gain values back to sample values
        input[i] = (short)(valL * maxval);
        input[i] *= (short)signL;
        input[i + 1] = (short)(valR * maxval);
        input[i + 1] *= (short)signR;
    }
}
```

I call it with threshold values between 10.0 dB and 30.0 dB and ratios between 1.5 and 4.0. This function definitely produces a louder overall sound, but with an unacceptable level of distortion, even at low thresholds and low ratios.

Can anyone see anything wrong with this function? Am I handling the stereo field correctly (the function assumes stereo input)? As I (vaguely) understand it, I don't want to compress the two channels independently, so my code tries to compress a "virtual" mono sample value and then apply the same amount of compression to the L and R sample values individually. I'm not sure I'm doing it right.
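As an aside on the channel-linking question: the code's averaged "virtual mono" detector is one option, but a common alternative is to link on the louder of the two channels so that a peak present in only one channel is still fully detected. A sketch of both detectors (the names are mine, not from the post):

```csharp
using System;

static class StereoLink
{
    // Detector in the spirit of the question's code: average of the
    // absolute channel values. A hard-panned full-scale peak reads as
    // only 0.5 here, so it can be under-compressed.
    public static double LinkedLevelAvg(double l, double r) =>
        (Math.Abs(l) + Math.Abs(r)) * 0.5;

    // Max-linked detector: the detected level is never below either
    // channel, so a single gain computed from it cannot push either
    // channel past full scale.
    public static double LinkedLevelMax(double l, double r) =>
        Math.Max(Math.Abs(l), Math.Abs(r));
}
```

Note that the multL/multR scheme in the question's code is algebraically the same as multiplying both channels by the common gain (compressed mono ÷ original mono), so the stereo balance is preserved either way; the overshoot that makes the `lim` clamp necessary comes from the averaged detector under-reading a hard-panned peak, which the max-linked detector avoids.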

I think part of the problem may be the "hard knee" of my function, which kicks compression in abruptly as soon as the threshold is crossed. I suspect I need a soft knee, like this:

![alt text](http://www.freeimagehosting.net/uploads/4c1040fda8.jpg)
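For reference, a common textbook way to get a curve like that is a quadratic segment of width `kneeDb`, centred on the threshold, blending the 1:1 line into the 1:ratio line. This sketch is a standard formulation, not taken from the post, and it uses the usual dBFS convention (0 dB = full scale, levels negative), which is the opposite sign of the `posDb` value in the code above, so it would need adapting:

```csharp
using System;

static class SoftKneeDemo
{
    // Static gain curve with a quadratic soft knee. inDb and the return
    // value are input/output levels in dBFS; kneeDb is the total knee
    // width in dB and is assumed > 0.
    public static double SoftKnee(double inDb, double thresholdDb,
                                  double ratio, double kneeDb)
    {
        double over = inDb - thresholdDb;
        if (2.0 * over < -kneeDb)
            return inDb;                        // well below threshold: unity
        if (2.0 * over > kneeDb)
            return thresholdDb + over / ratio;  // well above threshold: full ratio
        // inside the knee: quadratic interpolation between the two lines
        double t = over + kneeDb / 2.0;
        return inDb + (1.0 / ratio - 1.0) * t * t / (2.0 * kneeDb);
    }
}
```

The quadratic term is chosen so the curve meets both straight segments with matching value and slope at the knee edges, which is what removes the audible "grab" at the threshold.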

Can anyone suggest a modification to my function to create a soft-knee curve?

Tags: function, c#, audio
2 answers

I think your basic understanding of how compression works is off (sorry ;)). It's not about "squashing" individual sample values; that radically alters the waveform and causes severe harmonic distortion. You need to estimate the loudness of the input signal over many samples (I'd have to Google for the exact formula) and use that to apply a much more gradually changing gain multiplier to the input samples to produce the output.
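A minimal sketch of what this answer describes: a feed-forward compressor whose gain is driven by a smoothed envelope rather than by each sample. All concrete choices here (a one-pole follower, 10 ms / 100 ms attack and release, max-linked channels, a negative-dBFS threshold such as -20.0) are my assumptions for illustration, not details from the answer:

```csharp
using System;

public static class EnvelopeCompressor
{
    public static void Compress(short[] stereo, double sampleRate,
                                double thresholdDb, double ratio,
                                double attackMs = 10.0, double releaseMs = 100.0)
    {
        // One-pole smoothing coefficients for the envelope follower.
        double attack  = Math.Exp(-1.0 / (sampleRate * attackMs  / 1000.0));
        double release = Math.Exp(-1.0 / (sampleRate * releaseMs / 1000.0));
        double env = 0.0; // smoothed absolute level, 0..1

        for (int i = 0; i + 1 < stereo.Length; i += 2)
        {
            double l = stereo[i] / 32768.0;
            double r = stereo[i + 1] / 32768.0;

            // Link the channels: track the louder of |L| and |R|.
            double level = Math.Max(Math.Abs(l), Math.Abs(r));
            double coeff = level > env ? attack : release;
            env = coeff * env + (1.0 - coeff) * level;

            // Static hard-knee gain computer, in dBFS (threshold negative).
            double envDb = 20.0 * Math.Log10(Math.Max(env, 1e-9));
            double gainDb = 0.0;
            if (envDb > thresholdDb)
                gainDb = (thresholdDb + (envDb - thresholdDb) / ratio) - envDb;
            double gain = Math.Pow(10.0, gainDb / 20.0);

            // The same slowly varying gain is applied to both channels,
            // preserving the waveform shape and the stereo image.
            stereo[i]     = (short)Math.Max(-32768.0, Math.Min(32767.0, l * gain * 32767.0));
            stereo[i + 1] = (short)Math.Max(-32768.0, Math.Min(32767.0, r * gain * 32767.0));
        }
    }
}
```

Because the gain follows the envelope's time constants instead of jumping per sample, loud passages are turned down smoothly, which is what avoids the harmonic distortion the answer mentions; makeup gain for loudness would be a separate multiplier applied afterwards.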

The DSP forum at kvraudio.com/forum can point you in the right direction if you have trouble finding the standard techniques.


The open-source Skype Voice Changer project includes a C# port of a number of nice compressors written by Scott Stillwell, all with configurable parameters:

The first one looks like it can do a soft knee, although the parameter for it isn't exposed.

