To add to @MikeC's excellent answer, here is some important information from the current docs (v8.4.0) for writable.write():
If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.
While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. It is recommended that once write() returns false, no more chunks be written until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required).
and from the guide on backpressuring in streams:
In any scenario where the data buffer has exceeded the highWaterMark or the write queue is currently busy, .write() will return false.
When false is returned, the backpressure system kicks in.
Once the data buffer is emptied, a .drain() event will be emitted, resuming the incoming data flow.
Once the queue is finished, backpressure will allow data to be sent again. The memory space that was being used will free itself up and prepare for the next batch of data.
```
     +-------------------+         +=================+
     |  Writable Stream  +--------->  .write(chunk)  |
     +-------------------+         +=======+=========+
                                           |
                  +------------------------v----+
   +-> if (!chunk)|   Is this chunk too big?    |
   |  emit .end();|  Is the queue busy?         |
   +-> else       +--------+---------------+----+
   |   emit .write();      |               |
   ^                    +--v---+       +---v---+
   ^--------------------<  No  |       |  Yes  |
                        +------+       +---+---+
                                           |
   +> emit .pause();    +=================+|
   ^--------------------+  return false;  <+--+
   |                    +=================+   |
   |                                          |
   |   when queue is empty  +============+    |
   ^------------------------<  Buffering |    |
   |                        |============|    |
   +> emit .drain();        |  ^Buffer^  |    |
   +> emit .resume();       +------------+    |
                            |  ^Buffer^  |    |
                            +------------+    |
                          add chunk to queue  |
                            <---^-------------+
                                +============+
```
Here are some visualizations, produced by running scripts with a 512 MB V8 heap limit (--max-old-space-size=512).
This visualization shows heap memory usage (red) and delta time (purple) for every 10,000 steps of i (the X axis shows i):
```
'use strict'
var fs = require('fs');
var wstream = fs.createWriteStream('myOutput.txt');
var latestTime = (new Date()).getTime();
var currentTime;
for (var i = 0; i < 10000000000; i++) {
    wstream.write(i + '\n');
    if (i % 10000 === 0) {
        currentTime = (new Date()).getTime();
        // log the step, delta time, and current heap usage for the chart
        console.log([i, currentTime - latestTime, process.memoryUsage().heapUsed].join(','));
        latestTime = currentTime;
    }
}
wstream.end();
```

The script runs slower and slower as memory usage approaches the 512 MB limit, until it finally crashes when the limit is reached.
This visualization uses v8.setFlagsFromString() with --trace_gc to show the current memory usage (red) and runtime (purple) of each garbage collection (the X axis shows the total elapsed time in seconds):
```
'use strict'
var fs = require('fs');
var v8 = require('v8');
var wstream = fs.createWriteStream('myOutput.txt');
v8.setFlagsFromString('--trace_gc');
for (var i = 0; i < 10000000000; i++) {
    wstream.write(i + '\n');
}
console.log('End!')
wstream.end();
```

Memory usage reaches about 80% in roughly 4 seconds, and the garbage collector gives up trying to Scavenge and is forced to use Mark-sweep (more than 10 times slower) - see this article for more details.
For comparison, here are the same visualizations for @MikeC's code, which waits for 'drain' when the write buffer is full:

