OutputStream OutOfMemoryError when sending a large file over HTTP

I am trying to publish a large video/image file from the local file system to an HTTP path, but after a while I run into an out-of-memory error...

Here is the code:

    public boolean publishFile(URI publishTo, String localPath) throws Exception {
        InputStream istream = null;
        OutputStream ostream = null;
        boolean isPublishSuccess = false;

        URL url = makeURL(publishTo.getHost(), this.port, publishTo.getPath());
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();

        if (conn != null) {
            try {
                conn.setDoOutput(true);
                conn.setDoInput(true);
                conn.setRequestMethod("PUT");

                istream = new FileInputStream(localPath);
                ostream = conn.getOutputStream();

                int n;
                byte[] buf = new byte[4096];
                while ((n = istream.read(buf, 0, buf.length)) > 0) {
                    ostream.write(buf, 0, n); //<--- ERROR happens on this line.......???
                }

                int rc = conn.getResponseCode();
                if (rc == 201) {
                    isPublishSuccess = true;
                }
            } catch (Exception ex) {
                log.error(ex);
            } finally {
                if (ostream != null) {
                    ostream.close();
                }
                if (istream != null) {
                    istream.close();
                }
            }
        }
        return isPublishSuccess;
    }

Here is the error I get...

    Exception in thread "Thread-8773" java.lang.OutOfMemoryError: Java heap space
        at java.util.Arrays.copyOf(Arrays.java:2786)
        at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
        at sun.net.www.http.PosterOutputStream.write(PosterOutputStream.java:61)
        at com.test.HTTPClient.publishFile(HTTPClient.java:110)
        at com.test.HttpFileTransport.put(HttpFileTransport.java:97)
+10
java out-of-memory
Jan 17 '10 at 18:18
6 answers

HttpURLConnection buffers the data so that it can set the Content-Length header (per the HTTP specification).

One option, if your target server supports it, is to use "chunked" transfer encoding. This buffers only a small portion of the data at a time. However, not all services support it (Amazon S3, for example).
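For illustration, a minimal sketch of what that change would look like in the OP's publishFile() method (the 64 KB chunk size is an arbitrary choice):

    conn.setDoOutput(true);
    conn.setRequestMethod("PUT");
    conn.setChunkedStreamingMode(64 * 1024); // must be called before getOutputStream()
    ostream = conn.getOutputStream();        // writes now go out in chunks instead of
                                             // accumulating in an in-memory byte[]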

Another alternative (and, IMO, a better one) is to use Jakarta HttpClient. You can set the request "entity" from a file, and the connection code will set the request headers correctly.




Edit: nos comments that the OP could call HttpURLConnection.setFixedLengthStreamingMode(int contentLength). I did not know about this method; it was added in 1.5, and I have not used this class since then.

However, I still suggest using Jakarta HttpClient, for the simple reason that it reduces the amount of code that the OP has to maintain. Code that is boilerplate but still has the potential for bugs:

  • The OP correctly handles the loop that copies between input and output. Usually, when I see an example of this, the poster either fails to check the returned read count properly or keeps re-allocating the buffer. Congratulations, but you now have to ensure that your successors take as much care.
  • The exception handling is not as good. Yes, the OP remembers to close the connections in a finally block, and again, congratulations on that. Except that either of the close() calls could throw an IOException, preventing the other from executing. And the method as a whole throws Exception, so the compiler is not going to help catch such errors.
  • I count 31 lines of code to set up and execute the request (excluding the response-code check and the URL computation, but including the try/catch/finally). With HttpClient, this would be somewhere in the range of half a dozen LOC (see the sketch below).

Even if the OP had written this code perfectly and refactored it into methods like those in Jakarta Commons IO, he/she should not have to. This code has already been written and tested by others. I know it is a waste of my time to rewrite it, and suspect that it is a waste of the OP's time as well.
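For comparison, a rough sketch of that half-dozen-line version with Commons HttpClient 3.x, reusing the OP's publishTo, localPath and isPublishSuccess; the "video/mp4" content type is a placeholder:

    HttpClient client = new HttpClient();
    PutMethod put = new PutMethod(publishTo.toString());
    put.setRequestEntity(new FileRequestEntity(new File(localPath), "video/mp4"));
    try {
        // the entity is streamed from the file, with Content-Length taken from its size
        isPublishSuccess = (client.executeMethod(put) == 201);
    } finally {
        put.releaseConnection();
    }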

+13
Jan 17 '10 at 18:25
    conn.setFixedLengthStreamingMode((int) new File(localPath).length());

And for buffering, you can wrap your streams in a BufferedOutputStream and a BufferedInputStream, as in the sketch below.
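A minimal sketch of both suggestions applied to the OP's copy loop (conn and localPath are the OP's variables; the 8 KB buffer sizes are arbitrary):

    File f = new File(localPath);
    conn.setFixedLengthStreamingMode((int) f.length()); // int overload: fine for files under 2 GB;
                                                         // must be called before getOutputStream()
    istream = new BufferedInputStream(new FileInputStream(f), 8 * 1024);
    ostream = new BufferedOutputStream(conn.getOutputStream(), 8 * 1024);

    byte[] buf = new byte[4096];
    int n;
    while ((n = istream.read(buf)) > 0) {
        ostream.write(buf, 0, n); // streamed with a fixed Content-Length, not buffered in memory
    }
    ostream.flush();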

You can find a good example of this kind of upload in the gdata-java-client.

+3
Oct 25 '12 at 17:15

The problem is that the HttpURLConnection class uses a byte array to store your data. Presumably the video you are pushing takes up more memory than is available. You have a few options here:

  • Increase the amount of memory available to your application. You can use the -Xmx1024m option to give it 1 GB of memory. This will increase the amount of data you can hold in memory.

  • If that is still not enough memory, you might want to try another library to push the video, one that does not hold all of the data in memory at once. Apache Commons HttpClient has such a feature. See this site for more information: http://hc.apache.org/httpclient-3.x/features.html. See this section for multipart uploads of large files: http://hc.apache.org/httpclient-3.x/methods/multipartpost.html (a sketch follows below).
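As a rough illustration of the multipart upload described on that page, a sketch assuming Commons HttpClient 3.x (the method name, URL parameter, and "file" part name are made up for the example; note this is a POST, unlike the OP's PUT):

    public static int postMultipart(String uploadUrl, File video) throws Exception {
        PostMethod post = new PostMethod(uploadUrl);
        Part[] parts = { new FilePart("file", video) }; // the file part is streamed from disk
        post.setRequestEntity(new MultipartRequestEntity(parts, post.getParams()));

        HttpClient client = new HttpClient();
        try {
            return client.executeMethod(post); // returns the HTTP status code
        } finally {
            post.releaseConnection();
        }
    }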

+2
Jan 17 '10 at 18:26

For anything other than basic GET operations, the built-in java.net HTTP support is not very good. Using Apache Commons HttpClient is recommended for this instead. It lets you write much more intuitive code, for example:

    PutMethod put = new PutMethod(url);
    put.setRequestEntity(new FileRequestEntity(localFile, contentType));

    // in HttpClient 3.x the method is executed by an HttpClient instance
    HttpClient client = new HttpClient();
    int responseCode = client.executeMethod(put);

which replaces a lot of your boilerplate code.

+2
Jan 17 '10 at 18:32

HttpsURLConnection#setChunkedStreamingMode:

    conn.setChunkedStreamingMode(1024 * 1024 * 10); // 10 MB chunks

This ensures that a file of any size is transmitted over the HTTPS connection without internal buffering. It should be used when the file size or content length is unknown.

+1
Apr 27 '15 at 16:15

Your problem is that you are trying to fit X bytes into X/N bytes of RAM, where N > 1.

You either need to read the video into a smaller buffer and write it out as you go, make the file smaller, or increase the memory available to your process.

Check the heap size. You can use -Xmx to increase it if you are running with the default.

-1
Jan 17 '10 at 18:25


