Is it possible to clear memory after FileReader?

FileReader seems to consume all available memory when it is reused to preload several blocks in sequence, and never frees it. Is there a known way to make it release that memory? Setting the FileReader object and its result property to null does not work.

UPDATE:

Here is example code (test it on large files, e.g. a movie, or you will not notice the effect in the task manager):

```html
<input id="file" type="file" onchange="sliceMe()" />
<script>
function sliceMe() {
    var file = document.getElementById('file').files[0],
        fr,
        chunkSize = 2097152,
        chunks = Math.ceil(file.size / chunkSize),
        chunk = 0;

    function loadNext() {
        var start, end,
            blobSlice = File.prototype.mozSlice || File.prototype.webkitSlice;

        start = chunk * chunkSize;
        end = start + chunkSize >= file.size ? file.size : start + chunkSize;

        fr = new FileReader;
        fr.onload = function() {
            if (++chunk < chunks) {
                // shortcut - in production, upload happens and then loadNext() is called
                loadNext();
            }
        };
        fr.readAsBinaryString(blobSlice.call(file, start, end));
    }
    loadNext();
}
</script>
```
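As an aside, the start/end arithmetic in the snippet can be factored into a small pure helper (the name `chunkRange` is illustrative, not part of the original code), which makes the chunk-boundary behavior easy to check outside a browser:

```javascript
// Hypothetical helper mirroring the arithmetic above: the [start, end)
// byte range of chunk i, where the last chunk may be shorter.
function chunkRange(fileSize, chunkSize, chunkIndex) {
    var start = chunkIndex * chunkSize;
    var end = start + chunkSize >= fileSize ? fileSize : start + chunkSize;
    return { start: start, end: end };
}

// A 5 MB file in 2 MiB chunks yields 3 chunks; the last one is short:
console.log(chunkRange(5242880, 2097152, 2)); // { start: 4194304, end: 5242880 }
```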

I tried creating a new FileReader instance each time, but the problem remains. I suspect it is caused by the circular nature of the pattern, but I am not sure what other pattern could be used in this case.

I tested this code in both Firefox and Chrome, and Chrome seems to handle it more gracefully - it frees the memory after each cycle and is very fast. The irony is that Chrome is not even meant to run this code. It is just an experiment to work around the Gecko 6 FormData + Blob bug (Bug 649150 - Blobs do not have a file name if sent via FormData).

2 answers

The bug was marked INVALID, since it turned out that I was not actually reusing the FileReader object.

Here is a pattern that does not hog memory and CPU (note that the FileReader is created once and reused for every chunk):

```javascript
function sliceMe() {
    var file = document.getElementById('file').files[0],
        fr = new FileReader,
        chunkSize = 2097152,
        chunks = Math.ceil(file.size / chunkSize),
        chunk = 0;

    function loadNext() {
        var start, end,
            blobSlice = File.prototype.mozSlice || File.prototype.webkitSlice;

        start = chunk * chunkSize;
        end = start + chunkSize >= file.size ? file.size : start + chunkSize;

        fr.onload = function() {
            if (++chunk < chunks) {
                //console.info(chunk);
                loadNext(); // shortcut here
            }
        };
        fr.readAsBinaryString(blobSlice.call(file, start, end));
    }
    loadNext();
}
```
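The control flow of this single-reader pattern can be sketched with the reader injected, so it can be exercised outside a browser (in the browser you would pass a real FileReader; `readInChunks` and the stub below are illustrative names, not part of the original code):

```javascript
// Drive `totalChunks` sequential reads through ONE reader object, as in
// the pattern above: the reader is allocated once and its onload handler
// is replaced per chunk, so no per-chunk reader instances pile up.
function readInChunks(fr, readChunk, totalChunks, onDone) {
    var chunk = 0;
    function loadNext() {
        fr.onload = function () {
            if (++chunk < totalChunks) {
                loadNext();
            } else {
                onDone(chunk);
            }
        };
        readChunk(fr, chunk);
    }
    loadNext();
}

// Node-runnable demo with a stub standing in for FileReader:
var reads = [];
readInChunks(
    {},                                              // single stub reader, reused throughout
    function (r, i) { reads.push(i); r.onload(); },  // pretend the read completed at once
    3,
    function (n) { console.log('done after ' + n + ' chunks'); }
);
console.log(reads); // [ 0, 1, 2 ]
```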

Another bug report was filed: https://bugzilla.mozilla.org/show_bug.cgi?id=681479 , which is related, but is not the culprit in this case.

Thanks to Kyle Huey for bringing this to my attention :)


Try this:

```javascript
function sliceMe() {
    var file = document.getElementById('file').files[0],
        fr = new FileReader,
        chunkSize = 2097152,
        chunks = Math.ceil(file.size / chunkSize),
        chunk = 0;

    function loadNext() {
        var start, end,
            blobSlice = File.prototype.mozSlice || File.prototype.webkitSlice;

        start = chunk * chunkSize;
        end = start + chunkSize >= file.size ? file.size : start + chunkSize;

        fr.onload = function() {
            if (++chunk < chunks) {
                //console.info(chunk);
            }
        };
        fr.onloadend = function(e) {
            if (chunk < chunks) { // guard so the last loadend does not start an empty read
                loadNext(); // shortcut here
            }
        };
        fr.readAsBinaryString(blobSlice.call(file, start, end));
    }
    loadNext();
}
```

Chaining the next read in onloadend keeps one read from stepping on the next... (Obviously, you could handle the increment a little more cleanly, but you get the idea...)
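The reason this ordering works: for a successful read, FileReader fires load first and loadend after it, so by the time onloadend runs, the per-chunk onload work has completely finished. A minimal stub (not the real FileReader, just a reproduction of its event order) makes the sequencing visible:

```javascript
// Stub reader reproducing only the event ORDER of FileReader for a
// successful read: load fires first, then loadend. Handler names mirror
// the DOM API; everything else here is simplified.
function StubReader() {}
StubReader.prototype.readAsBinaryString = function (data) {
    this.result = data;
    if (this.onload) this.onload({ target: this });
    if (this.onloadend) this.onloadend({ target: this });
};

var events = [];
var fr = new StubReader();
fr.onload = function () { events.push('load'); };
fr.onloadend = function () { events.push('loadend'); };
fr.readAsBinaryString('chunk-0');
console.log(events); // [ 'load', 'loadend' ]
```

Note that loadend also fires after error and abort, so chaining there (with an error check) covers failed reads as well.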

