How do I upload to S3 if I'm already inside a Conduit?

In Haskell, I process some data through a conduit pipeline. During this processing, I want to conditionally save the data to S3. Are there any S3 libraries that will let me do this? Essentially, I want to "tee" the conduit pipeline and upload the data flowing through it to S3 while continuing to process it.

I found the aws library ( https://hackage.haskell.org/package/aws ), but functions like multipartUpload take a Source as an argument. Since I'm already inside the conduit, that doesn't look like something I can use.
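The "tee" part of the question can be done with conduit itself: ZipSink's Applicative instance feeds every chunk of the stream to two sinks at once, so one sink could be an S3 uploader and the other the rest of the pipeline. A minimal sketch (the demo below uses two pure sinks instead of a real S3 sink, since the uploader depends on which library you pick):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Conduit
import qualified Data.ByteString as BS

-- "Tee" one stream into two sinks: ZipSink's Applicative instance
-- delivers every upstream chunk to both sinks and pairs their results.
teeSink :: Monad m
        => ConduitT BS.ByteString Void m a
        -> ConduitT BS.ByteString Void m b
        -> ConduitT BS.ByteString Void m (a, b)
teeSink a b = getZipSink ((,) <$> ZipSink a <*> ZipSink b)

main :: IO ()
main = do
  -- Demo: count total bytes while also collecting the chunks.
  -- In the real use case, one of the two sinks would be the S3 upload.
  (len, chunks) <- runConduit $
    yieldMany ["foo", "bar" :: BS.ByteString] .| teeSink lengthCE sinkList
  print (len :: Int, chunks)
```

Any S3 sink with type `ConduitT ByteString Void m r` would slot directly into `teeSink` in place of the pure demo sinks.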

2 answers

This is not really an answer, just a hint: amazonka appears to expose the RequestBody from http-client, so in theory you can stream data from a conduit into it. However, it seems you need to know the length of the data in advance.

See also: Can I stream a file upload to S3 without a content-length header?
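To illustrate the hint above, here is a rough sketch of how a streaming upload might look with amazonka's chunked body. This is an assumption-laden sketch against the amazonka 1.x API (`newEnv`, `Chunked`, `ChunkedBody`, `defaultChunkSize`, `putObject`); note that `ChunkedBody` takes the total length up front, which is exactly the catch the answer mentions:

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Conduit
import Data.ByteString (ByteString)
import Network.AWS
import Network.AWS.S3

-- Hedged sketch, assuming the amazonka 1.x API: stream a conduit source
-- into S3 via a chunked request body. The bucket and key names are
-- placeholders; `len` must be the exact total size of the stream.
uploadConduit :: Integer
              -> ConduitM () ByteString (ResourceT IO) ()
              -> IO PutObjectResponse
uploadConduit len src = do
  env <- newEnv Discover  -- pick up credentials from the environment
  runResourceT . runAWS env $
    send $ putObject "my-bucket" "my-key"
                     (Chunked (ChunkedBody defaultChunkSize len src))
```

Because the length must be known before the request starts, this does not help with a stream of unknown size, which is why the linked question matters.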


There is now an amazonka-s3-streaming package that provides multipart upload to S3 as a conduit Sink.
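A sketch of how that package's `streamUpload` sink might be used, under the assumption that you are on a version where `streamUpload` takes an optional chunk size plus a `CreateMultipartUpload` request (the bucket, key, and file name are placeholders):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Conduit
import Network.AWS
import Network.AWS.S3
import Network.AWS.S3.StreamingUpload (streamUpload)

-- Hedged sketch: streamUpload is a Sink, so it can terminate any
-- ByteString conduit, including one arm of a ZipSink-style tee.
main :: IO ()
main = do
  env <- newEnv Discover
  res <- runResourceT . runAWS env . runConduit $
    sourceFile "input.dat"
      .| streamUpload Nothing  -- Nothing = default part size
           (createMultipartUpload "my-bucket" "my-key")
  print res
```

Because the multipart API uploads parts as they arrive, no total content length is needed up front, which resolves the limitation raised in the first answer.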

