Polyphonic sound playback using node.js on a Raspberry Pi

I am trying to get polyphonic WAV playback working with node.js on a Raspberry Pi 3 running the latest version of Raspbian.

Is there something I'm missing here? I know I can easily do this in another programming language (I was able to do it in C++ with SDL and in Python with pygame), but the question is whether this is possible using node.js :)

Here is my current web-audio-api + node-speaker code:

    var AudioContext = require('web-audio-api').AudioContext;
    var Speaker = require('speaker');
    var fs = require('fs');

    var track1 = './tracks/1.wav';
    var track2 = './tracks/2.wav';

    var context = new AudioContext();
    context.outStream = new Speaker({
        channels: context.format.numberOfChannels,
        bitDepth: context.format.bitDepth,
        sampleRate: context.format.sampleRate
    });

    function play(audioBuffer) {
        if (!audioBuffer) {
            return;
        }
        var bufferSource = context.createBufferSource();
        bufferSource.connect(context.destination);
        bufferSource.buffer = audioBuffer;
        bufferSource.loop = false;
        bufferSource.start(0);
    }

    var audioData1 = fs.readFileSync(track1);
    var audioData2 = fs.readFileSync(track2);
    var audioBuffer1, audioBuffer2;

    context.decodeAudioData(audioData1, function(audioBuffer) {
        audioBuffer1 = audioBuffer;
        if (audioBuffer1 && audioBuffer2) {
            playBoth();
        }
    });

    context.decodeAudioData(audioData2, function(audioBuffer) {
        audioBuffer2 = audioBuffer;
        if (audioBuffer1 && audioBuffer2) {
            playBoth();
        }
    });

    function playBoth() {
        console.log('playing...');
        play(audioBuffer1);
        play(audioBuffer2);
    }
3 answers

The sound quality is very poor, with a lot of distortion

According to the Web Audio specification ( https://webaudio.imtqy.com/web-audio-api/#SummingJunction ):

No clipping is applied at the inputs or outputs of an AudioNode, to allow a maximum of dynamic range within the audio graph.

Now, if you play two audio streams at once, it is possible that their sum exceeds the allowable range, which sounds like distortion (clipping).
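To see the problem numerically, here is a minimal sketch (the sample values are made up for illustration): two samples that are each within the legal [-1, 1] range can sum past full scale, and the excess is hard-clipped at the output.

```javascript
// Two float samples, each within the legal [-1, 1] range.
var a = 0.8;
var b = 0.7;

// Summing them at a junction exceeds the representable range...
var sum = a + b; // ~1.5, out of range

// ...so the output stage hard-clips it, distorting the waveform.
var clipped = Math.max(-1, Math.min(1, sum)); // 1

// Scaling each stream by 0.5 first keeps the sum in range.
var safe = 0.5 * a + 0.5 * b; // ~0.75, no clipping

console.log(sum, clipped, safe);
```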

Try lowering the volume of each audio stream by first routing it through a GainNode, like this:

    function play(audioBuffer) {
        if (!audioBuffer) {
            return;
        }
        var bufferSource = context.createBufferSource();
        var gainNode = context.createGain();
        gainNode.gain.value = 0.5; // for instance; find a good value
        bufferSource.connect(gainNode);
        gainNode.connect(context.destination);
        bufferSource.buffer = audioBuffer;
        bufferSource.loop = false;
        bufferSource.start(0);
    }

Alternatively, you could use a DynamicsCompressorNode, but manually adjusting the gain gives you more control over the output.


This is not quite an answer, but I can't leave comments at the moment >_<

I had a similar problem with an application built using the JS audio API, and lowering the sound quality and changing the format turned out to be a fairly easy fix.

In your case, I would consider setting the sample rate and bit depth as low as possible without the listener noticing (for example, 44.1 kHz and 16-bit depth).

You could also try changing the format; WAV is uncompressed, so in theory it should not be processor-intensive, but there are other uncompressed formats as well (e.g. .aiff).

You could also try using several of the Pi's cores:

https://nodejs.org/api/cluster.html

This can be a little tricky, though; if you are running the audio stream in parallel with other unrelated processes, you could try moving the audio to a separate core.

You might also try running node with more RAM, although in your case I doubt that would help.

The biggest problem, however, may be in the code itself. Unfortunately, I have no experience with the modules you are using, so I can't give any real advice there (which is why I said this isn't a worthy answer :p).


You could spawn two aplay child processes from node, each playing one file. Use detached: true to allow node to continue.

