I am working on a simple audio visualization application that uses the Web Audio API analyzer to extract frequency data, as in this example . It is expected that the more visual elements I add to my canvases, the greater the latency between the sound and the resulting visual results.
Is there a standard approach to accounting for this delay? I can imagine a lookahead technique that buffers upcoming audio data. I could work with clock synchronization of JavaScript and web audio, but I am convinced that there is a more intuitive answer. Perhaps it is as simple as playing the sound out loud with a slight delay (although it is not so difficult).
the dancer.js library seems to have the same problem (always a very thin delay), while other applications seem to have solved the delay problem. I could not pinpoint the technical differences. SoundJS seems to have improved a bit, but it would be nice to build from scratch.
Any methodologies pointing me in the right direction are greatly appreciated.
I think you will find answers to the exact sound moments in this article: http://www.html5rocks.com/en/tutorials/audio/scheduling/
SoundJS , , - javascript . . SoundJS, , fft , . , .
, .
, , , . . requestAnimationFrame? ?
, , JS Web Audio - Web Audio - , , ( , ), - (ScriptProcessorNodes, ) , ).
, (.. "" ), . FFT , ; , node, .
, , "" Analyzer - .