Web Audio: Karplus Strong String Synthesis

Edit: cleaned up the code and the player a bit (on GitHub) to make it easier to set the frequency

I am trying to synthesize a string using the Karplus-Strong string synthesis algorithm, but I can't get the string tuned correctly. Does anyone have any ideas?

As mentioned above, the code is on GitHub: https://github.com/achalddave/Audio-API-Frequency-Generator (the relevant bits are in strings.js).

Wikipedia has the following diagram:

Karplus Strong String Synthesis diagram

So I generate noise, which goes to the output and is simultaneously fed into a delay node. The delay node is connected to a low-pass filter, whose output is mixed back into the output and fed back into the delay. According to Wikipedia, the delay should be N samples, where N is the sample rate divided by the fundamental frequency (N = f_s/f_0).
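For concreteness, the delay-line length works out like this (a quick sketch; the 44100 Hz sample rate is an assumption, real code would read it from the AudioContext):

```javascript
// N = f_s / f_0: the delay line must hold one period of the fundamental.
function delaySamplesFor(sampleRate, frequency) {
  return Math.round(sampleRate / frequency);
}

var N = delaySamplesFor(44100, 440); // about 100 samples for A440
```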

Excerpts from my code:

Generating the noise (bufferSize is 2048, but that shouldn't matter much):

    var buffer = context.createBuffer(1, bufferSize, context.sampleRate);
    var bufferSource = context.createBufferSource();
    bufferSource.buffer = buffer;

    var bufferData = buffer.getChannelData(0);
    for (var i = 0; i < delaySamples + 1; i++) {
        bufferData[i] = 2 * (Math.random() - 0.5); // random noise from -1 to 1
    }
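(Note that only the first delaySamples + 1 samples of the buffer are filled; the rest stay silent, since the excitation burst only needs to be as long as the delay line. A self-contained sketch of just that burst, with delaySamples assumed to be precomputed as round(sampleRate / frequency):)

```javascript
// Fill the first delaySamples + 1 entries with white noise in [-1, 1);
// the remainder of the buffer is left at zero (silence).
function makeNoiseBurst(bufferSize, delaySamples) {
  var data = new Float32Array(bufferSize);
  for (var i = 0; i < delaySamples + 1 && i < bufferSize; i++) {
    data[i] = 2 * (Math.random() - 0.5);
  }
  return data;
}
```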

Creating the delay node:

    var delayNode = context.createDelayNode();

We need a delay of f_s/f_0 samples. However, the delay node takes its delay in seconds, so we divide by the sample rate (samples per second) and get (f_s/f_0) / f_s, which is just 1/f_0.

    var delaySeconds = 1/(frequency);
    delayNode.delayTime.value = delaySeconds;
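As a sanity check on the unit conversion (a standalone sketch; the 44100 Hz sample rate is assumed for illustration):

```javascript
// (f_s / f_0) samples, divided by f_s samples per second, is 1 / f_0 seconds.
var sampleRate = 44100;  // assumed; real code would read context.sampleRate
var frequency = 440;
var delaySeconds = 1 / frequency;
// Converting back to samples should recover f_s / f_0:
var delaySamples = delaySeconds * sampleRate;
```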

Creating a low-pass filter (as far as I can tell, the cutoff frequency shouldn't affect the pitch; it affects more whether the string sounds "natural"):

    var lowpassFilter = context.createBiquadFilter();
    lowpassFilter.type = lowpassFilter.LOWPASS; // explicitly set type
    lowpassFilter.frequency.value = 20000;      // make things sound better

Connecting the noise to the output and to the delay node (destination is context.destination and was defined earlier):

    bufferSource.connect(destination);
    bufferSource.connect(delayNode);

Connecting the delay to the low-pass filter:

    delayNode.connect(lowpassFilter);

Connecting the low-pass filter to the output and back into the delay node*:

    lowpassFilter.connect(destination);
    lowpassFilter.connect(delayNode);

Does anyone have any ideas? I can't tell whether the problem is in my code, my interpretation of the algorithm, my understanding of the API, or (least likely) the API itself.


* Note that on GitHub there is actually a gain node between the low-pass filter and the output, but that shouldn't matter much here.

+7
1 answer

Here's what I think the problem is: I don't think the DelayNode implementation is designed to handle feedback loops this tight. For a 441 Hz tone at a 44100 Hz sample rate, that's only 100 samples of delay, and a DelayNode implementation likely processes its input in blocks of 128 or more. (The delayTime attribute is "k-rate", meaning changes to it are only processed in blocks of 128 samples. That doesn't prove my point, but it hints at it.) So the feedback arrives late, or only partially, or something along those lines.

EDIT/UPDATE: As I said in a comment below, the actual problem is that a DelayNode in a feedback loop adds 128 sample frames between its output and input, so the observed delay is 128 / sampleRate seconds longer than requested.
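That diagnosis also explains the off-by-two-semitones observation below. A sketch of the arithmetic (assuming a 44100 Hz sample rate; observedFrequency is a name I made up for illustration):

```javascript
// With an extra 128 frames hidden in the feedback loop, the effective
// loop length is (f_s / f_0) + 128 samples, so the pitch actually heard is:
function observedFrequency(sampleRate, requestedFrequency) {
  var loopSamples = sampleRate / requestedFrequency + 128;
  return sampleRate / loopSamples;
}

// Requesting 440 Hz (A) yields roughly 193 Hz, close to G3 (196 Hz) --
// a G, as reported. Requesting 880 Hz yields roughly 248 Hz, close to
// B3 (247 Hz) -- four semitones above that G, also as reported.
var heardA = observedFrequency(44100, 440);
var heardDoubled = observedFrequency(44100, 880);
```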

My advice (and what I've started doing myself) is to implement all of Karplus-Strong, including your own delay line, in a JavaScriptAudioNode (now known as ScriptProcessorNode). It's not hard, and I'll post my code as soon as I track down an annoying bug that can't possibly exist, but somehow does.

By the way, the tone that you (and I) get with a delayTime of 1/440 (which should be an A) sounds like a G, two semitones below where it should be. Doubling the frequency raises it to a B, four semitones higher. (I could be off by an octave or two; that sort of thing is hard to tell.) You could probably figure out what's happening (mathematically) from a couple of extra data points like that, but I won't bother.

EDIT: Here is my code, now certified bug-free.

    var context = new webkitAudioContext();
    var frequency = 440;
    var impulse = 0.001 * context.sampleRate;

    var node = context.createJavaScriptNode(4096, 0, 1);
    var N = Math.round(context.sampleRate / frequency);
    var y = new Float32Array(N);
    var n = 0;
    node.onaudioprocess = function (e) {
        var output = e.outputBuffer.getChannelData(0);
        for (var i = 0; i < e.outputBuffer.length; ++i) {
            var xn = (--impulse >= 0) ? Math.random() - 0.5 : 0;
            output[i] = y[n] = xn + (y[n] + y[(n + 1) % N]) / 2;
            if (++n >= N) n = 0;
        }
    }
    node.connect(context.destination);
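Stripped of the Web Audio wrapper, the same inner loop can be run offline to check the synthesis (a sketch; renderKS is a name I made up, and the audio-callback structure above is flattened into one loop):

```javascript
// Render `total` samples of Karplus-Strong offline: a ~1 ms noise burst
// excites a ring buffer of length N, and each output sample averages the
// two oldest delayed samples (the one-zero low-pass filter in the loop).
function renderKS(sampleRate, frequency, total) {
  var impulse = Math.floor(0.001 * sampleRate); // ~1 ms noise burst
  var N = Math.round(sampleRate / frequency);   // delay line = one period
  var y = new Float32Array(N);
  var n = 0;
  var out = new Float32Array(total);
  for (var i = 0; i < total; i++) {
    var xn = (--impulse >= 0) ? Math.random() - 0.5 : 0;
    out[i] = y[n] = xn + (y[n] + y[(n + 1) % N]) / 2;
    if (++n >= N) n = 0;
  }
  return out;
}
```

Because the averaging filter has gain at most 1, the output stays bounded, and the fundamental decays slowly enough that the string still rings well after the burst ends.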
+6
