NodeJS - file upload with a progress bar using Core NodeJS and the original Node solution

Ryan Dahl has said he created NodeJS to solve the file upload problem ( https://youtu.be/SAc0vQCC6UQ ). Using only the technology available in 2009 when Node was introduced (so before Express and the more advanced client-side JavaScript libraries that report progress updates automatically), how exactly did NodeJS solve this problem?

Trying to use only Core NodeJS for now, I understand that with the request stream I can look at the headers, get the total file size, and then get the size of each chunk of data as it arrives, which tells me the percentage complete. What I don't understand is how to send these progress updates back to the browser, since the browser does not seem to get updated until response.end() is called.

Again, I want to discuss how NodeJS originally solved this update problem. WebSockets did not exist yet, so you could not just open a WebSocket connection and push the progress updates to the browser. Was some other client-side JavaScript technique used?

Here is my attempt. Progress updates are logged to the console on the server side, but the browser is only updated after the response stream receives response.end().

    var http = require('http');
    var fs = require('fs');

    var server = http.createServer(function (request, response) {
      response.writeHead(200);
      if (request.method === 'GET') {
        fs.createReadStream('filechooser.html').pipe(response);
      } else if (request.method === 'POST') {
        var outputFile = fs.createWriteStream('output');
        var total = request.headers['content-length'];
        var progress = 0;

        request.on('data', function (chunk) {
          progress += chunk.length;
          var perc = parseInt((progress / total) * 100);
          console.log('percent complete: ' + perc + '%\n');
          response.write('percent complete: ' + perc + '%\n');
        });

        request.pipe(outputFile);

        request.on('end', function () {
          response.end('\nArchived File\n\n');
        });
      }
    });

    server.listen(8080, function () {
      console.log('Server is listening on 8080');
    });

filechooser.html:

    <!DOCTYPE html>
    <html>
    <body>
      <form id="uploadForm" enctype="multipart/form-data" action="/" method="post">
        <input type="file" id="upload" name="upload" />
        <input type="submit" value="Submit">
      </form>
    </body>
    </html>

The following is an updated attempt. The browser now displays updates as they happen, but I'm sure this is not the real solution Ryan Dahl originally came up with for the production scenario. Did he use long polling? What did that solution look like?

    var http = require('http');
    var fs = require('fs');

    var server = http.createServer(function (request, response) {
      response.setHeader('Content-Type', 'text/html; charset=UTF-8');
      response.writeHead(200);
      if (request.method === 'GET') {
        fs.createReadStream('filechooser.html').pipe(response);
      } else if (request.method === 'POST') {
        var outputFile = fs.createWriteStream('UPLOADED_FILE');
        var total = request.headers['content-length'];
        var progress = 0;

        response.write('STARTING UPLOAD');
        console.log('\nSTARTING UPLOAD\n');

        request.on('data', function (chunk) {
          fakeNetworkLatency(function () {
            outputFile.write(chunk);
            progress += chunk.length;
            var perc = parseInt((progress / total) * 100);
            console.log('percent complete: ' + perc + '%\n');
            response.write('<p>percent complete: ' + perc + '%');
          });
        });

        request.on('end', function () {
          fakeNetworkLatency(function () {
            outputFile.end();
            response.end('<p>FILE UPLOADED!');
            console.log('FILE UPLOADED\n');
          });
        });
      }
    });

    server.listen(8080, function () {
      console.log('Server is listening on 8080');
    });

    var delay = 100; // delay of 100 ms per chunk
    var count = 0;
    var fakeNetworkLatency = function (callback) {
      setTimeout(function () {
        callback();
      }, delay * count++);
    };
1 answer

First off, your code does indeed work; Node sends chunked responses, but the browser simply waits for more of them before bothering to display anything.

More information, from the Node documentation:

The first time response.write() is called, it will send the buffered header information and the first chunk of the body to the client. The second time response.write() is called, Node assumes you're going to be streaming data, and sends it separately. That is, the response is buffered up to the first chunk of the body.

If you set the content type to html with response.setHeader('Content-Type', 'text/html; charset=UTF-8');, it does get Chrome to render the content, but only sort of: it worked when I used a series of setTimeouts with calls to response.write inside, yet it still did not update the DOM when I tried it with your code, so I dug deeper...

The problem is that the browser only renders streamed content when it sees fit, so I set the code up to send AJAX requests to check the status instead:

First, I updated the server to simply keep its status in a global variable and to expose a "checkstatus" endpoint for reading it:

    var http = require('http');
    var fs = require('fs');

    var status = 0;

    var server = http.createServer(function (request, response) {
      response.writeHead(200);
      if (request.method === 'GET') {
        if (request.url === '/checkstatus') {
          response.end(status.toString());
          return;
        }
        fs.createReadStream('filechooser.html').pipe(response);
      } else if (request.method === 'POST') {
        status = 0;
        var outputFile = fs.createWriteStream('output');
        var total = request.headers['content-length'];
        var progress = 0;

        request.on('data', function (chunk) {
          progress += chunk.length;
          var perc = parseInt((progress / total) * 100);
          console.log('percent complete: ' + perc + '%\n');
          status = perc;
        });

        request.pipe(outputFile);

        request.on('end', function () {
          response.end('\nArchived File\n\n');
        });
      }
    });

    server.listen(8080, function () {
      console.log('Server is listening on 8080');
    });

Then I updated filechooser.html to poll that status with AJAX requests:

    <!DOCTYPE html>
    <html>
    <body>
      <form id="uploadForm" enctype="multipart/form-data" action="/" method="post">
        <input type="file" id="upload" name="upload"/>
        <input type="submit" value="Submit">
      </form>
      Percent Complete: <span id="status">0</span>%
    </body>
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js"></script>
    <script>
      var $status = $('#status');

      /**
       * When the form is submitted, begin checking the status periodically.
       * Note that this is NOT long polling--that's when the server waits to
       * respond until something has changed. In a production environment, I'd
       * recommend a websockets library with a long-polling fallback for older
       * browsers--socket.io is a fine choice.
       */
      $('form').on('submit', function () {
        var poll = setInterval(function () {
          $.get('/checkstatus').then(function (status) {
            $status.text(status);
            // when it's done, stop pestering the server
            if (parseInt(status) === 100) {
              clearInterval(poll);
            }
          });
        }, 500);
      });
    </script>
    </html>

Note that even while the server has not yet responded to the upload request, it is still able to handle the incoming status requests.

So, to answer your question, Dahl was demoing a flickr app that uploaded the file and long-polled to check its status. The reason it was impressive was that the server was able to handle those AJAX requests while it continued to work on the upload. It was multitasking. Watch him talk about it at exactly 14 minutes into that video; he even says "So here's how this works...". A few minutes later he mentions the iframe method, and also distinguishes long polling from simple AJAX polling. He explains that he wanted to write a server that was optimized for exactly these kinds of behavior.

In any case, that was uncommon in those days. Most web server software handled only one request at a time. If a request hit a database, called out to a web service, touched the file system, or anything like that, the process just sat there and waited for it to finish instead of handling other requests in the meantime.

If you wanted to handle several requests at the same time, you had to spin up another thread or add more servers behind a load balancer.

NodeJS, on the other hand, uses its single process very efficiently by performing non-blocking IO. Node was not the first to do this, but what sets it apart in the non-blocking IO space is that all of its methods are asynchronous by default, and you have to go out of your way and call a "Sync" method to do the wrong thing. This nudges users into doing the right thing.

It is also worth noting that the reason JavaScript was chosen is that it is a language that already runs in an event loop; it was built to handle asynchronous code. You have anonymous functions and closures, which make asynchronous actions much easier to work with.

I also want to mention that using a promise library makes writing asynchronous code easier still. For example, check out bluebirdjs: it has a handy "promisify" method that converts a function with a callback signature of function(error, result) into one that returns a promise instead.

