Java live audio stream

I am implementing a live broadcast from a microphone to a Java server on another PC, but I hear only white noise.

Both the client and server programs connect successfully.

Client:

```java
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketException;
import java.net.UnknownHostException;

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.TargetDataLine;

public class Mic {

    public byte[] buffer;
    private int port;
    static AudioInputStream ais;

    public static void main(String[] args) {
        TargetDataLine line;
        DatagramPacket dgp;

        AudioFormat.Encoding encoding = AudioFormat.Encoding.PCM_SIGNED;
        float rate = 44100.0f;
        int channels = 2;
        int sampleSize = 16;
        boolean bigEndian = true;
        InetAddress addr;

        AudioFormat format = new AudioFormat(encoding, rate, sampleSize, channels,
                (sampleSize / 8) * channels, rate, bigEndian);
        DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
        if (!AudioSystem.isLineSupported(info)) {
            System.out.println("Line matching " + info + " not supported.");
            return;
        }

        try {
            line = (TargetDataLine) AudioSystem.getLine(info);
            int buffsize = line.getBufferSize() / 5;
            buffsize += 512;
            line.open(format);
            line.start();

            int numBytesRead;
            byte[] data = new byte[buffsize];
            addr = InetAddress.getByName("127.0.0.1");
            DatagramSocket socket = new DatagramSocket();

            while (true) {
                // Read the next chunk of data from the TargetDataLine.
                numBytesRead = line.read(data, 0, data.length);
                // Send this chunk of data.
                dgp = new DatagramPacket(data, data.length, addr, 50005);
                socket.send(dgp);
            }
        } catch (LineUnavailableException e) {
            e.printStackTrace();
        } catch (UnknownHostException e) {
            // TODO: handle exception
        } catch (SocketException e) {
            // TODO: handle exception
        } catch (IOException e2) {
            // TODO: handle exception
        }
    }
}
```

The server side is not the problem; it works great with an AudioRecord-based Android client.

Server:

```java
import java.io.ByteArrayInputStream;
import java.net.DatagramPacket;
import java.net.DatagramSocket;

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.SourceDataLine;

public class Server {

    AudioInputStream audioInputStream;
    static AudioInputStream ais;
    static AudioFormat format;
    static boolean status = true;
    static int port = 50005;
    static int sampleRate = 44100;
    static DataLine.Info dataLineInfo;
    static SourceDataLine sourceDataLine;

    public static void main(String args[]) throws Exception {
        System.out.println("Server started at port: " + port);
        DatagramSocket serverSocket = new DatagramSocket(port);

        /**
         * Formula for lag = (byte_size / sample_rate) * 2
         * Byte size 9728 will produce ~0.45 seconds of lag. Voice slightly broken.
         * Byte size 1400 will produce ~0.06 seconds of lag. Voice extremely broken.
         * Byte size 4000 will produce ~0.18 seconds of lag. Voice slightly more broken than 9728.
         */
        byte[] receiveData = new byte[4096];

        format = new AudioFormat(sampleRate, 16, 1, true, false);
        dataLineInfo = new DataLine.Info(SourceDataLine.class, format);
        sourceDataLine = (SourceDataLine) AudioSystem.getLine(dataLineInfo);
        sourceDataLine.open(format);
        sourceDataLine.start();

        //FloatControl volumeControl = (FloatControl) sourceDataLine.getControl(FloatControl.Type.MASTER_GAIN);
        //volumeControl.setValue(1.00f);

        DatagramPacket receivePacket = new DatagramPacket(receiveData, receiveData.length);
        ByteArrayInputStream baiss = new ByteArrayInputStream(receivePacket.getData());

        while (status == true) {
            serverSocket.receive(receivePacket);
            ais = new AudioInputStream(baiss, format, receivePacket.getLength());
            toSpeaker(receivePacket.getData());
        }

        sourceDataLine.drain();
        sourceDataLine.close();
    }

    public static void toSpeaker(byte soundbytes[]) {
        try {
            System.out.println("At the speaker");
            sourceDataLine.write(soundbytes, 0, soundbytes.length);
        } catch (Exception e) {
            System.out.println("Not working in speakers...");
            e.printStackTrace();
        }
    }
}
```
4 answers

So I fed the stream a sine wave (or something that loosely resembles a sine wave) in place of the microphone input, and your program works fine.

My specific changes were as follows:

```java
package audioclient;

import java.io.*;
import java.net.*;
import java.nio.ByteBuffer;
import javax.sound.sampled.*;

public class Mic {

    public byte[] buffer;
    private int port;
    static AudioInputStream ais;

    public static void main(String[] args) {
        TargetDataLine line;
        DatagramPacket dgp;

        AudioFormat.Encoding encoding = AudioFormat.Encoding.PCM_SIGNED;
        float rate = 44100.0f;
        int channels = 2;
        int sampleSize = 16;
        boolean bigEndian = true;
        InetAddress addr;

        AudioFormat format = new AudioFormat(encoding, rate, sampleSize, channels,
                (sampleSize / 8) * channels, rate, bigEndian);
        DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
        if (!AudioSystem.isLineSupported(info)) {
            System.out.println("Line matching " + info + " not supported.");
            return;
        }

        try {
            line = (TargetDataLine) AudioSystem.getLine(info);

            // TOTALLY missed this.
            int buffsize = line.getBufferSize() / 5;
            buffsize += 512;
            line.open(format);
            line.start();

            int numBytesRead;
            byte[] data = new byte[buffsize];

            /*
             * Mock injection: we have a buffer size of 512; it is best if the
             * frequency fits into it evenly (avoids skips, bumps, and pops).
             * At 44100 Hz with two channels and two bytes per sample, that's
             * four bytes per frame; divide 512 by that and you have 128 frames.
             *
             * 128 frames at 44100 per second gives a minimum of about 344 Hz,
             * or 172 Hz across the two channels. Well within hearing range;
             * slight skip from the uneven division. Maybe bump it up to 689 Hz.
             *
             * That's a sine wave of shorts, written twice for the two channels,
             * with a wavelength of 32 samples.
             *
             * Note: changed my mind, ignore the specific numbers above.
             */
            {
                final int λ = 16;
                ByteBuffer buffer = ByteBuffer.allocate(λ * 2 * 8);
                for (int j = 0; j < 2; j++) {
                    for (double i = 0.0; i < λ; i++) {
                        System.out.println(j + " " + i);
                        // once for each channel of the sample
                        buffer.putShort((short) (Math.sin(Math.PI * (λ / i)) * Short.MAX_VALUE));
                        buffer.putShort((short) (Math.sin(Math.PI * (λ / i)) * Short.MAX_VALUE));
                    }
                }
                data = buffer.array();
            }

            addr = InetAddress.getByName("127.0.0.1");
            try (DatagramSocket socket = new DatagramSocket()) {
                while (true) {
                    for (byte b : data) System.out.print(b + " ");

                    // Read the next chunk of data from the TargetDataLine.
                    // numBytesRead = line.read(data, 0, data.length);

                    for (int i = 0; i < 64; i++) {
                        byte b = data[i];
                        System.out.print(b + " ");
                    }
                    System.out.println();

                    // Send this chunk of data.
                    dgp = new DatagramPacket(data, data.length, addr, 50005);
                    for (int i = 0; i < 64; i++) {
                        byte b = dgp.getData()[i];
                        System.out.print(b + " ");
                    }
                    System.out.println();

                    socket.send(dgp);
                }
            }
        } catch (LineUnavailableException e) {
            e.printStackTrace();
        } catch (UnknownHostException e) {
            // TODO: handle exception
        } catch (SocketException e) {
            // TODO: handle exception
        } catch (IOException e2) {
            // TODO: handle exception
        }
    }
}
```

Obviously, I mis-sized it for a 512-byte chunk and distorted the sine wave, but the point is that it produced exactly the sound it was meant to produce: a piercing tone at a fixed pitch.

Because of this, I do not suspect that the problem is in your code. The first thing I would check is what device your system is actually using for audio input. Do you have multiple microphones connected? Maybe a webcam microphone? You can grab a utility such as PulseAudio Volume Control to verify. If you have not tested whether your microphone works at all, do that too; microphones have a limited life expectancy.

It is not unusual to scramble the bytes of an audio stream, and it is not hard to do, but I don't see anywhere in your code where that could happen.

One thought would be to change your program to play the captured sound locally before sending it to the server. That way, you can at least determine whether the problem occurs before or after the network transfer.
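A minimal sketch of that local check, wiring the capture line straight to a playback line (the class name `LocalLoopback` and the 4096-byte chunk size are my own choices, not from the original code):

```java
import javax.sound.sampled.*;

public class LocalLoopback {
    // Same capture format the client uses: 44100 Hz, 16-bit, stereo,
    // big-endian; frame size is (16 / 8) * 2 = 4 bytes.
    static final AudioFormat FORMAT = new AudioFormat(
            AudioFormat.Encoding.PCM_SIGNED, 44100.0f, 16, 2, 4, 44100.0f, true);

    public static void main(String[] args) throws Exception {
        TargetDataLine mic = AudioSystem.getTargetDataLine(FORMAT);
        SourceDataLine speaker = AudioSystem.getSourceDataLine(FORMAT);
        mic.open(FORMAT);
        speaker.open(FORMAT);
        mic.start();
        speaker.start();

        byte[] chunk = new byte[4096];
        while (true) {
            // Whatever the mic captures goes straight to the speakers.
            // If this already sounds like white noise, the problem is in
            // capture or device selection, not in the network code.
            int n = mic.read(chunk, 0, chunk.length);
            if (n > 0) speaker.write(chunk, 0, n);
        }
    }
}
```

Run it with headphones on to avoid a feedback loop between the speakers and the microphone.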


When the client and server use data buffers of different sizes, packets will be truncated, which can cause one or both ends to produce artifacts.

Your server's buffer size is fixed: `byte[] receiveData = new byte[4096];`

The client's buffer size, for some reason, is computed dynamically: `byte[] data = new byte[buffsize];`

Set the client buffer to a static 4096 to match the server: `byte[] data = new byte[4096];`

Or just make sure they are the same size ...
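One way to guarantee they stay the same is a shared constant that both programs reference; the class name `StreamConfig` here is hypothetical, not part of the original code:

```java
// Hypothetical shared configuration class. Both Mic and Server would use
// StreamConfig.BUFFER_SIZE instead of computing their own buffer sizes,
// so every DatagramPacket carries exactly one full buffer.
public class StreamConfig {
    // 4096 is a multiple of the 4-byte frame size (16-bit stereo),
    // so packets never split a frame across a boundary.
    public static final int BUFFER_SIZE = 4096;
}
```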


I suggest you first write the recorded audio to a file on the client. This lets you verify whether the capture itself is correct. You can convert raw PCM to WAV with utilities like sox.
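A small helper for that, using `AudioSystem.write` so the file already has a WAV header and plays in any audio player (the class name `DebugDump` is my own, not from the original code):

```java
import java.io.ByteArrayInputStream;
import java.io.File;
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;

public class DebugDump {
    // Wraps a captured PCM buffer in the capture format and writes it out
    // as a WAV file, so the raw recording can be auditioned directly.
    public static void dumpToWav(byte[] pcm, AudioFormat format, File out) throws Exception {
        AudioInputStream ais = new AudioInputStream(
                new ByteArrayInputStream(pcm), format,
                pcm.length / format.getFrameSize());
        AudioSystem.write(ais, AudioFileFormat.Type.WAVE, out);
    }
}
```

In the client loop you would collect a few seconds of `line.read` output into one array and pass it to `dumpToWav` along with the same `AudioFormat` used to open the line.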


It is important that the audio format matches on both the client and the server. For example, change the client to: `format = new AudioFormat(sampleRate, 16, 1, true, false);` You must also use the same buffer size in both programs.
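To see the mismatch concretely: the client in the question captures 16-bit stereo big-endian while the server plays 16-bit mono little-endian, and `AudioFormat.matches` can confirm the two disagree (the class name `FormatCheck` is my own for illustration):

```java
import javax.sound.sampled.AudioFormat;

public class FormatCheck {
    public static void main(String[] args) {
        // What the client currently captures: 44100 Hz, 16-bit, stereo, big-endian.
        AudioFormat client = new AudioFormat(44100.0f, 16, 2, true, true);
        // What the server currently plays: 44100 Hz, 16-bit, mono, little-endian.
        AudioFormat server = new AudioFormat(44100.0f, 16, 1, true, false);

        // Prints false: interpreting big-endian stereo frames as
        // little-endian mono turns every sample into noise.
        System.out.println("Formats match: " + client.matches(server));

        // The fix: build both sides from one agreed-upon format.
        AudioFormat shared = new AudioFormat(44100.0f, 16, 1, true, false);
        System.out.println("Shared vs server: " + shared.matches(server));
    }
}
```

The endianness difference alone is enough to produce white noise, since the high and low bytes of every 16-bit sample get swapped.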


Source: https://habr.com/ru/post/1211805/

