In my application I subclass the javastreamingaudio class from the FreeTTS package and override the write method, which normally sends an array of bytes to the SourceDataLine to play the sound. Instead of writing to the data line, I write this and each subsequent byte array into a buffer, which I then hand to my own class for processing. My application works with sound as arrays of floats, so I convert the bytes to floats and try to process them, but all I ever get back is static.
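Roughly, the conversion I'm attempting looks like the sketch below. This assumes the stream is 16-bit signed little-endian PCM (which is what I believe FreeTTS hands to the SourceDataLine); the class and method names are just placeholders of mine.

```java
public class PcmToFloat {

    /**
     * Convert raw 16-bit signed little-endian PCM bytes into floats in [-1, 1).
     * Assumes mono audio, so bytesPerFrame == 2; this is my assumption about
     * the format, not something I have confirmed from FreeTTS itself.
     */
    public static float[] bytesToFloats(byte[] audioBytes, int bytesPerFrame) {
        float[] samples = new float[audioBytes.length / bytesPerFrame];
        for (int i = 0; i < samples.length; i++) {
            int lo = audioBytes[i * bytesPerFrame] & 0xFF;   // low byte, treated as unsigned
            int hi = audioBytes[i * bytesPerFrame + 1];      // high byte, keeps the sign
            int sample = (hi << 8) | lo;                     // reassemble the 16-bit sample
            samples[i] = sample / 32768f;                    // normalize to [-1, 1)
        }
        return samples;
    }
}
```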
I'm fairly sure this is the right approach, but I'm missing something. I know that sound is processed in frames, and that each frame is a group of bytes, so somewhere in my application I need to handle the bytes frame by frame. Am I looking at this correctly? Thanks in advance for any help.