The fix is to specify the size of the buffer in the open(AudioFormat, int) method. A delay of 10 ms to 100 ms is acceptable for real-time audio: very low latencies will not work on all computer systems, while 100 ms or more is likely to annoy your users. A good compromise is 50 ms. For your audio format, 8-bit mono at 44100 Hz, a good buffer size is 2200 bytes, which is almost 50 ms.
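As a rough sketch (not from the original code), the buffer size can also be derived from the AudioFormat itself instead of hard-coding 2200 bytes; the class name, the 50 ms target, and the variable names below are my own assumptions:

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.SourceDataLine;

public class BufferSizeSketch {
    public static void main(String[] args) throws LineUnavailableException {
        // 44100 Hz, 8-bit, mono, signed, little-endian -- the format discussed above
        AudioFormat audioFormat = new AudioFormat(44100f, 8, 1, true, false);
        // Aim for roughly 50 ms of audio per buffer (assumed latency target)
        double targetSeconds = 0.050;
        int bufferSize = (int) (audioFormat.getFrameRate() * targetSeconds)
                * audioFormat.getFrameSize();      // about 2205 bytes for this format
        SourceDataLine soundLine = AudioSystem.getSourceDataLine(audioFormat);
        soundLine.open(audioFormat, bufferSize);   // the buffer size goes into open(), not write()
        soundLine.start();
        // ... generate samples and write them to soundLine here ...
        soundLine.drain();
        soundLine.close();
    }
}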
Also note that different operating systems expose different sound capabilities to Java. On Windows and Linux you can work with fairly small buffer sizes, but OS X uses an older implementation with significantly greater latency.
In addition, writing individual bytes to the SourceDataLine is very inefficient (the buffer size set in open() is independent of how much data you pass to each write() call); as a rule of thumb, I always write one full buffer's worth of data to the SourceDataLine at a time.
After setting up the SourceDataLine, use this code:
final int bufferSize = 2200; // in bytes
soundLine.open(audioFormat, bufferSize);
soundLine.start();
byte counter = 0;
final byte[] buffer = new byte[bufferSize];
byte sign = 1;
while (frame.isVisible()) {
    // getFrameRate() returns a float, so cast the result to int
    int threshold = (int) (audioFormat.getFrameRate() / sliderValue);
    for (int i = 0; i < bufferSize; i++) {
        if (counter > threshold) {
            sign = (byte) -sign;
            counter = 0;
        }
        buffer[i] = (byte) (sign * 30);
        counter++;
    }
    // the next call blocks until the entire buffer is
    // sent to the SourceDataLine
    soundLine.write(buffer, 0, bufferSize);
}
Florian