Node.js: piping PDFKit output to a memory stream

I use pdfkit ( https://github.com/devongovett/pdfkit ) on my Node server, usually to create PDF files and then upload them to S3. The problem is that the pdfkit examples pipe the PDF document into a Node write stream, which writes the file to disk. I followed the example and it worked correctly, but my requirement now is to pipe the PDF document into a memory stream instead of saving it to disk (I still upload it to S3 afterwards). I have tried several Node memory-stream approaches, but none of them seem to work when piping the PDF document; I could only write plain lines to the memory streams. So my question is: how do I pipe the output of a PDF document into a memory stream (or something similar) and then read it as an object to upload to S3?

var fsStream = fs.createWriteStream(outputPath + fileName);
doc.pipe(fsStream);

Thanks in advance.

2 answers

There is no need to use an intermediate memory stream¹ — just pipe the pdfkit output stream directly into the HTTP upload request.

In my experience, the AWS SDK is rubbish when it comes to working with streams, so I usually use the request module.

var upload = request({
  method: 'PUT',
  url: 'https://bucket.s3.amazonaws.com/doc.pdf',
  aws: { bucket: 'bucket', key: ..., secret: ... }
});
doc.pipe(upload);

¹ In fact, it is usually undesirable to use a memory stream at all, because it means buffering the entire object in RAM, which is exactly what streams are meant to avoid!
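If you would rather stay with the AWS SDK than pull in request, here is a minimal sketch of the same streaming idea, assuming aws-sdk v2 (whose s3.upload() accepts a readable stream as Body); the bucket name, key and document content below are placeholders:

var AWS = require('aws-sdk');
var PDFDocument = require('pdfkit');

var s3 = new AWS.S3();
var doc = new PDFDocument();

// s3.upload() handles streams of unknown length, so the PDF is streamed
// straight to S3 and never buffered on disk or fully in RAM
var uploading = s3.upload({
  Bucket: 'bucket',
  Key: 'doc.pdf',
  Body: doc,
  ContentType: 'application/pdf'
}).promise();

doc.text('Hello world'); // placeholder content
doc.end();               // finish the document so the stream can complete

uploading.then(function (result) {
  console.log('Uploaded to', result.Location);
});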


You can try something like this and upload it to S3 inside the end event.

var pdfkit = require('pdfkit');
var MemoryStream = require('memorystream');

var doc = new pdfkit();
// write-only memory stream: written chunks are kept in memStream.queue
var memStream = new MemoryStream(null, { readable: false });

doc.pipe(memStream);
doc.on('end', function () {
  var buffer = Buffer.concat(memStream.queue);
  // awsservice.putS3Object and reject come from the surrounding promise code
  awsservice.putS3Object(buffer, fileName, fileType, folder).then(function () { }, reject);
});

// ...add content to the document, then finish it so 'end' fires
doc.end();
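A variation on the same idea that avoids the memorystream dependency is to collect the document's 'data' chunks into a Buffer yourself; a sketch, assuming the same awsservice.putS3Object helper and fileName / fileType / folder variables as above:

var PDFDocument = require('pdfkit');

var doc = new PDFDocument();
var chunks = [];

// PDFDocument is a readable stream, so its output can be gathered chunk by chunk
doc.on('data', function (chunk) { chunks.push(chunk); });
doc.on('end', function () {
  var buffer = Buffer.concat(chunks);
  awsservice.putS3Object(buffer, fileName, fileType, folder);
});

doc.text('Hello world'); // placeholder content
doc.end();               // flush the document so 'end' fires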
