Upload to S3 from HttpWebResponse.GetResponseStream() in C#

I am trying to upload from an HTTP stream directly to S3, without first saving to memory or to a file. I already do this HTTP-to-HTTP with Rackspace Cloud Files, but AWS authentication is beyond me, so I'm trying to use the SDK.

The problem is that the upload fails with this exception:

"This stream does not support seek operations."

I tried both PutObject and TransferUtility.Upload; both fail with the same error.

Is there a way to stream into S3 as the data arrives, rather than buffering it all into a MemoryStream or FileStream?

Or are there any good examples of performing the authentication on an S3 request using HttpWebRequest, so I can duplicate what I do with Cloud Files?

Edit: or is there a helper function in the AWSSDK for generating the Authorization header?

CODE:

This is the failing S3 part (both methods included for completeness):

    string uri = RSConnection.StorageUrl + "/" + container + "/" + file.SelectSingleNode("name").InnerText;
    var req = (HttpWebRequest)WebRequest.Create(uri);
    req.Headers.Add("X-Auth-Token", RSConnection.AuthToken);
    req.Method = "GET";

    using (var resp = req.GetResponse() as HttpWebResponse)
    {
        using (Stream stream = resp.GetResponseStream())
        {
            Amazon.S3.Transfer.TransferUtility trans = new Amazon.S3.Transfer.TransferUtility(S3Client);
            trans.Upload(stream, config.Element("root").Element("S3BackupBucket").Value, container + file.SelectSingleNode("name").InnerText);

            //Use EITHER the above OR the below
            PutObjectRequest putReq = new PutObjectRequest();
            putReq.WithBucketName(config.Element("root").Element("S3BackupBucket").Value);
            putReq.WithKey(container + file.SelectSingleNode("name").InnerText);
            putReq.WithInputStream(Amazon.S3.Util.AmazonS3Util.MakeStreamSeekable(stream));
            putReq.WithMetaData("content-length", file.SelectSingleNode("bytes").InnerText);
            using (S3Response putResp = S3Client.PutObject(putReq))
            {
            }
        }
    }

And here is how I do it successfully in the other direction, from S3 to Cloud Files:

    using (GetObjectResponse getResponse = S3Client.GetObject(new GetObjectRequest().WithBucketName(bucket.BucketName).WithKey(file.Key)))
    {
        using (Stream s = getResponse.ResponseStream)
        {
            //We can stream right from S3 to CF, no need to store in memory or filesystem.
            var req = (HttpWebRequest)WebRequest.Create(uri);
            req.Headers.Add("X-Auth-Token", RSConnection.AuthToken);
            req.Method = "PUT";
            req.AllowWriteStreamBuffering = false;
            if (req.ContentLength == -1L)
                req.SendChunked = true;

            using (Stream stream = req.GetRequestStream())
            {
                byte[] data = new byte[32768];
                int bytesRead = 0;
                while ((bytesRead = s.Read(data, 0, data.Length)) > 0)
                {
                    stream.Write(data, 0, bytesRead);
                }
                stream.Flush();
                stream.Close();
            }
            req.GetResponse().Close();
        }
    }
5 answers

Since no one answering seems to have done this, I spent the time working it out, based on hints from Steve's answer:

In answer to the question "are there any good examples of performing the authentication on an S3 request using HttpWebRequest, so I can duplicate what I do with Cloud Files?", here is how to build the auth header manually:

    string today = String.Format("{0:ddd,' 'dd' 'MMM' 'yyyy' 'HH':'mm':'ss' 'zz00}", DateTime.Now);

    string stringToSign = "PUT\n" +
        "\n" +
        file.SelectSingleNode("content_type").InnerText + "\n" +
        "\n" +
        "x-amz-date:" + today + "\n" +
        "/" + strBucketName + "/" + strKey;

    Encoding ae = new UTF8Encoding();
    HMACSHA1 signature = new HMACSHA1(ae.GetBytes(AWSSecret));
    string encodedCanonical = Convert.ToBase64String(signature.ComputeHash(ae.GetBytes(stringToSign)));
    string authHeader = "AWS " + AWSKey + ":" + encodedCanonical;

    string uriS3 = "https://" + strBucketName + ".s3.amazonaws.com/" + strKey;
    var reqS3 = (HttpWebRequest)WebRequest.Create(uriS3);
    reqS3.Headers.Add("Authorization", authHeader);
    reqS3.Headers.Add("x-amz-date", today);
    reqS3.ContentType = file.SelectSingleNode("content_type").InnerText;
    reqS3.ContentLength = Convert.ToInt32(file.SelectSingleNode("bytes").InnerText);
    reqS3.Method = "PUT";

Note the added x-amz-date header, since HttpWebRequest sends the Date header in a different format from what AWS expects.

From there, it was just a case of repeating what I was already doing.
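For completeness, here is a minimal sketch of that repetition, reusing the 32 KB copy loop from the Cloud Files example above (req is the source GET request and reqS3 the signed S3 request from the snippet just above):

    using (var resp = req.GetResponse() as HttpWebResponse)
    {
        using (Stream s = resp.GetResponseStream())
        using (Stream stream = reqS3.GetRequestStream())
        {
            //Same chunked copy as the Cloud Files version
            byte[] data = new byte[32768];
            int bytesRead = 0;
            while ((bytesRead = s.Read(data, 0, data.Length)) > 0)
            {
                stream.Write(data, 0, bytesRead);
            }
        }
        reqS3.GetResponse().Close();
    }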


See the Amazon S3 Authentication Tool for Curl. From that page:

Curl is a popular command-line tool for interacting with HTTP services. This Perl script computes the correct signature, then calls curl with the appropriate arguments.

Perhaps you could adapt it, or its output, for your use.


I think the problem is that, according to the AWS documentation, Content-Length is required, and you don't know what the length is until the stream has ended.

(I would guess that the Amazon.S3.Util.AmazonS3Util.MakeStreamSeekable routine reads the whole stream into memory to get around this problem, which makes it unsuitable for your scenario.)

What you can do is read the file in chunks and upload them using a multipart upload, as sketched below.
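A rough sketch of that approach, kept in the v1-era fluent style used elsewhere in this question (sourceStream, bucketName and key are placeholder names, and the exact setter names may differ by SDK version, so check them against yours):

    //Requires: using System.IO; using System.Collections.Generic; using Amazon.S3.Model;
    //Read the non-seekable stream in 5 MB chunks and upload each as one part;
    //only a single part is ever held in memory. On failure you should call
    //AbortMultipartUpload so the partial upload doesn't linger.
    const int partSize = 5 * 1024 * 1024; //S3's minimum part size (except the last part)
    byte[] buffer = new byte[partSize];
    List<PartETag> partETags = new List<PartETag>();

    InitiateMultipartUploadResponse initResp = S3Client.InitiateMultipartUpload(
        new InitiateMultipartUploadRequest()
            .WithBucketName(bucketName)
            .WithKey(key));

    int partNumber = 1;
    while (true)
    {
        //Fill the buffer completely (a single Read may return fewer bytes)
        int filled = 0, read;
        while (filled < partSize && (read = sourceStream.Read(buffer, filled, partSize - filled)) > 0)
            filled += read;
        if (filled == 0)
            break; //source exhausted

        using (MemoryStream partStream = new MemoryStream(buffer, 0, filled))
        {
            UploadPartRequest partReq = new UploadPartRequest()
                .WithBucketName(bucketName)
                .WithKey(key)
                .WithUploadId(initResp.UploadId)
                .WithPartNumber(partNumber)
                .WithPartSize(filled);
            partReq.InputStream = partStream;
            UploadPartResponse partResp = S3Client.UploadPart(partReq);
            partETags.Add(new PartETag(partNumber, partResp.ETag));
        }
        partNumber++;
    }

    S3Client.CompleteMultipartUpload(
        new CompleteMultipartUploadRequest()
            .WithBucketName(bucketName)
            .WithKey(key)
            .WithUploadId(initResp.UploadId)
            .WithPartETags(partETags));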

PS, I assume you know that the C# source of the AWS SDK for .NET is on GitHub.


This is a true hack (which will probably break with new AWSSDK releases), and it requires that you know the length of the file being requested, but it works if you wrap the response stream with a wrapper class (described after the code; the original answer linked to one) and use it as shown below:

 long length = fileLength; 

You can get the file length in several ways. I am downloading from a Dropbox link, so they give me the length along with the URL. Alternatively, you can perform a HEAD request and read the Content-Length header.
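For instance, a minimal sketch of that HEAD request (assuming the source server supports HEAD and returns Content-Length; uri is the source address):

    var headReq = (HttpWebRequest)WebRequest.Create(uri);
    headReq.Method = "HEAD";
    long length;
    using (var headResp = (HttpWebResponse)headReq.GetResponse())
    {
        length = headResp.ContentLength; //-1 if the server omitted the header
    }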

    string uri = RSConnection.StorageUrl + "/" + container + "/" + file.SelectSingleNode("name").InnerText;
    var req = (HttpWebRequest)WebRequest.Create(uri);
    req.Headers.Add("X-Auth-Token", RSConnection.AuthToken);
    req.Method = "GET";

    using (var resp = req.GetResponse() as HttpWebResponse)
    {
        using (Stream stream = resp.GetResponseStream())
        {
            //I haven't tested this path
            Amazon.S3.Transfer.TransferUtility trans = new Amazon.S3.Transfer.TransferUtility(S3Client);
            trans.Upload(new HttpResponseStream(stream, length), config.Element("root").Element("S3BackupBucket").Value, container + file.SelectSingleNode("name").InnerText);

            //Use EITHER the above OR the below
            //I have tested this with Dropbox data
            PutObjectRequest putReq = new PutObjectRequest();
            putReq.WithBucketName(config.Element("root").Element("S3BackupBucket").Value);
            putReq.WithKey(container + file.SelectSingleNode("name").InnerText);
            putReq.WithInputStream(new HttpResponseStream(stream, length));
            //These are necessary for really large files to work
            putReq.WithTimeout(System.Threading.Timeout.Infinite);
            putReq.WithReadWriteTimeout(System.Threading.Timeout.Infinite);
            using (S3Response putResp = S3Client.PutObject(putReq))
            {
            }
        }
    }

The hack is to override the Position and Length properties: Position {get} returns 0, Position {set} is a no-op, and Length returns the known length.
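The class itself did not survive the link, but a minimal reconstruction matching that description might look like the following (a sketch, not the original author's code; requires using System; and using System.IO;):

    //Forwards reads to the real response stream while pretending
    //to be seekable with a known length.
    public class HttpResponseStream : Stream
    {
        private readonly Stream inner; //the non-seekable response stream
        private readonly long length;  //the length learned out-of-band

        public HttpResponseStream(Stream inner, long length)
        {
            this.inner = inner;
            this.length = length;
        }

        public override bool CanRead { get { return true; } }
        public override bool CanSeek { get { return true; } } //the lie that satisfies the SDK
        public override bool CanWrite { get { return false; } }

        public override long Length { get { return length; } } //known length, not inner.Length
        public override long Position
        {
            get { return 0; } //always report the start
            set { }           //no-op
        }

        public override int Read(byte[] buffer, int offset, int count)
        {
            return inner.Read(buffer, offset, count);
        }

        public override long Seek(long offset, SeekOrigin origin) { return 0; }
        public override void Flush() { inner.Flush(); }
        public override void SetLength(long value) { throw new NotSupportedException(); }
        public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
    }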

I realize this may not work if you don't have the length, or if the server providing the source doesn't support HEAD requests and Content-Length headers. I also realize it may not work if the reported Content-Length doesn't match the actual length of the file.

In my testing I also supply the Content-Type to the PutObjectRequest, though that may not be required.

+1
source

As sgmoore said, the problem is that your content length can't be obtained from the HTTP response stream. However, HttpWebResponse does have a ContentLength property, so you can actually build the HTTP request to S3 yourself instead of using the Amazon library.
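A minimal sketch of that idea (srcReq is a hypothetical request for the source file, and reqS3 a hand-built S3 PUT like the one in the accepted answer):

    using (var srcResp = (HttpWebResponse)srcReq.GetResponse())
    {
        //The response carries the length even though its stream can't seek
        reqS3.ContentLength = srcResp.ContentLength;
        //...then copy srcResp.GetResponseStream() into reqS3.GetRequestStream()
    }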

See https://stackoverflow.com/a/16626838/2326323

