How do you measure the difference between two sounds using the Web Audio API?

I am trying to measure the difference between two sounds using an AnalyserNode and getByteFrequencyData(). I thought that by summing the difference in each frequency bin, I could produce a single number that quantifies how different the two sounds are. Then I could change the sounds and measure again to see whether the new sound is more or less different than before.

Does getByteFrequencyData() fully represent the sound, or do I need to include other data to characterize it?

Here is the code I'm using:

var Spectrogram = (function() {
  function Spectrogram(ctx) {
    this.analyser = ctx.createAnalyser();
    this.analyser.fftSize = 2048;
    this.sampleRate = 512;
    this.scriptNode = ctx.createScriptProcessor(this.sampleRate, 1, 1);
    this.scriptNode.onaudioprocess = this.process.bind(this);
    this.analyser.connect(this.scriptNode);
    this.startNode = this.analyser;
    this.endNode = this.scriptNode;
    this.data = [];
  }

  // Capture one frame of frequency data per audio-processing callback and
  // pass the audio through unchanged.
  Spectrogram.prototype.process = function(e) {
    var d = new Uint8Array(this.analyser.frequencyBinCount);
    this.analyser.getByteFrequencyData(d);
    this.data.push(d);

    var inputBuffer = e.inputBuffer;
    var outputBuffer = e.outputBuffer;
    for (var channel = 0; channel < outputBuffer.numberOfChannels; channel++) {
      var inputData = inputBuffer.getChannelData(channel);
      var outputData = outputBuffer.getChannelData(channel);
      for (var sample = 0; sample < inputBuffer.length; sample++) {
        outputData[sample] = inputData[sample];
      }
    }
  };

  // Sum the absolute per-bin differences across all captured frames.
  Spectrogram.prototype.compare = function(other) {
    var fitness = 0;
    for (var i = 0; i < this.data.length; i++) {
      if (other.data[i]) {
        for (var k = 0; k < this.data[i].length; k++) {
          fitness += Math.abs(this.data[i][k] - other.data[i][k]);
        }
      }
    }
    return fitness;
  };

  return Spectrogram;
})();
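In case it helps, here is a minimal sketch of how I wire the class up. The OscillatorNode sources and the two-second timeout are just placeholders for illustration; any audio source connected to startNode would work the same way.

// Create two Spectrogram instances, feed each a different sound, then compare.
var ctx = new AudioContext();

var specA = new Spectrogram(ctx);
var oscA = ctx.createOscillator();
oscA.frequency.value = 440;
oscA.connect(specA.startNode);
specA.endNode.connect(ctx.destination);

var specB = new Spectrogram(ctx);
var oscB = ctx.createOscillator();
oscB.frequency.value = 660;
oscB.connect(specB.startNode);
specB.endNode.connect(ctx.destination);

oscA.start();
oscB.start();

// After both sounds have played for a while, compare the captured frames.
setTimeout(function() {
  oscA.stop();
  oscB.stop();
  console.log('difference:', specA.compare(specB));
}, 2000);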
1 answer

You can use the spectralFlux function provided by Meyda to compare two signals. The spectral flux, according to Wikipedia, "is usually calculated as the 2-norm (also known as the Euclidean distance) between two normalized spectra."
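For intuition, here is a rough sketch of that computation. This is not Meyda's implementation, just an illustration of the 2-norm between two normalized spectra; the function name and the sum-to-one normalization are my own choices.

// Spectral flux sketch: normalize both spectra, then take the Euclidean
// distance (2-norm) between them.
function spectralFluxSketch(spectrumA, spectrumB) {
  function normalize(spectrum) {
    var sum = 0;
    for (var i = 0; i < spectrum.length; i++) sum += spectrum[i];
    return Array.prototype.map.call(spectrum, function(v) {
      return sum > 0 ? v / sum : 0;
    });
  }
  var a = normalize(spectrumA);
  var b = normalize(spectrumB);
  var sumOfSquares = 0;
  for (var i = 0; i < a.length; i++) {
    var diff = a[i] - b[i];
    sumOfSquares += diff * diff;
  }
  return Math.sqrt(sumOfSquares); // the 2-norm of the difference
}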

After running npm install --save meyda, you would do something like:

const spectralFlux = require('meyda/src/extractors/spectralFlux');

const difference = spectralFlux({
  signal: [your first signal],
  previousSignal: [your second signal]
});

Feel free to simply copy the code from here so that you do not have to manage the dependency; the code base has an appropriate license.

It will return a coefficient indicating how "different" the two signals sound. You can do this either in the time domain or in the frequency domain; you will get different numbers, but both will correlate with how "different" the sounds are from each other.

But that "difference" may not describe the differences precisely enough for your use case. For example, you may care a lot about loudness differences and very little about timbral differences, and the spectral flux metric does not take that into account. You could instead run each signal through several feature extractors to gather other statistics about its properties, such as its perceived loudness, its brightness, and so on, and then take a weighted Euclidean distance between those features. That would give you a "difference" metric more tailored to your purpose, as sketched below.
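Here is a minimal sketch of that idea. The feature names, values, and weights are hypothetical; in practice you would fill them in from whichever extractors you run over each signal.

// Weighted Euclidean distance over per-signal feature values, so that e.g.
// loudness differences can count more than brightness differences.
function weightedDistance(featuresA, featuresB, weights) {
  var sumOfSquares = 0;
  for (var name in weights) {
    var diff = featuresA[name] - featuresB[name];
    sumOfSquares += weights[name] * diff * diff;
  }
  return Math.sqrt(sumOfSquares);
}

// Example usage with made-up feature values.
var difference = weightedDistance(
  { loudness: 0.8, brightness: 0.3 },
  { loudness: 0.5, brightness: 0.35 },
  { loudness: 2.0, brightness: 0.5 } // weight loudness more heavily
);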

Happy to think about this further, but this is already getting quite long for an SO answer.
