Does SQS really send multiple S3 PUT object records per message?

I have set up an S3 bucket to send PUT object event notifications to an SQS queue, and I process that queue with an Elastic Beanstalk worker tier environment.

The structure of the message delivered via SQS is documented here: http://docs.aws.amazon.com/AmazonS3/latest/dev/notification-content-structure.html

Records is an array, which implies that several records could be sent in a single POST to my worker's endpoint. Does that actually happen in practice, or will my worker only ever receive one record per message?
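
For reference, here is a minimal sketch (Python, with placeholder values) of the shape that the documented notification body takes; the only point being illustrated is that "Records" is a list:

```python
# Abridged shape of the notification body described on the page linked above.
# All values are placeholders; note that "Records" is a list.
notification = {
    "Records": [
        {
            "eventSource": "aws:s3",
            "eventName": "ObjectCreated:Put",
            "s3": {
                "bucket": {"name": "example-bucket"},
                "object": {
                    "key": "uploads/example.txt",
                    "sequencer": "0055AED6DCD90281E5",
                },
            },
        }
    ]
}

records = notification["Records"]
print(len(records))  # the question: can this ever be greater than 1?
```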

A worker can return only one response per message: 200 (the message was processed successfully) or non-200 (the message was not processed successfully and goes back onto the queue), regardless of how many records the message contains.

So if my worker receives several records in one message and processes some of them successfully (say, doing something with side effects, such as inserting into a database) but fails on one or more of the others, how do I handle that? If I return 200, the failed records will never be retried. But if I return non-200, the records that succeeded will be retried unnecessarily and possibly re-inserted. That means I would have to make my worker smart enough to retry only the failed records, which is logic I would rather not have to write.
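
To make the dilemma concrete, here is a rough sketch of such a worker endpoint. It assumes Flask, and process_record() is a hypothetical stand-in for the side-effecting work; the route and names are illustrative only:

```python
from flask import Flask, request

app = Flask(__name__)

def process_record(record):
    """Hypothetical side-effecting work, e.g. an insert into a database."""
    ...

@app.route("/", methods=["POST"])
def handle_sqs_message():
    # The Elastic Beanstalk worker daemon POSTs the SQS message body here.
    records = request.get_json()["Records"]
    failures = 0
    for record in records:
        try:
            process_record(record)
        except Exception:
            failures += 1
    # Only one verdict is possible for the whole message:
    #   200     -> SQS deletes it, so any failed records are never retried
    #   non-200 -> SQS redelivers it, so the successful records run again
    return ("", 200) if failures == 0 else ("processing failed", 500)
```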

All of this would be much simpler if only one record were ever sent per message. So if that is what happens in practice, even though Records is an array, I would really like to know!

+7
amazon-s3 amazon-web-services elastic-beanstalk
1 answer

To be clear, these are not the records that SQS sends. These are the records that S3 sends to SQS (or SNS or Lambda).

Currently, all S3 event notifications have a single event per notification message. We may include multiple records as we add new event types in the future. This is also a message format that is shared across other AWS services, and those services may include multiple records.

- https://forums.aws.amazon.com/thread.jspa?messageID=592264

So, at the moment, it appears there is only one record per message.

But... you would be mistaken to assume that your application does not need to be prepared to handle duplicate or redelivered messages. In any massive, distributed system like SQS, it is extremely difficult to guarantee that this will never happen, however unlikely it may be:

Q: How many times will I receive each message?

Amazon SQS is engineered to provide at-least-once delivery of all messages in its queues. Although most of the time each message will be delivered to your application exactly once, you should design your system so that processing a message more than once does not create any errors or inconsistencies.

- http://aws.amazon.com/sqs/faqs/
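
One common way to follow that advice is to make the side effect idempotent, for example by deduplicating on something unique to the event. Below is a minimal sketch of that idea; it assumes you key on the object key plus the record's sequencer field, and it uses sqlite3 purely for illustration:

```python
import sqlite3

# In-memory database purely for illustration; the real target would be
# whatever datastore the worker inserts into.
conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE IF NOT EXISTS processed_objects (
        object_key TEXT NOT NULL,
        sequencer  TEXT NOT NULL,
        PRIMARY KEY (object_key, sequencer)
    )
    """
)

def insert_once(record):
    """Dedupe on (key, sequencer) so a redelivered message becomes a no-op."""
    obj = record["s3"]["object"]
    conn.execute(
        "INSERT OR IGNORE INTO processed_objects (object_key, sequencer) "
        "VALUES (?, ?)",
        (obj["key"], obj["sequencer"]),
    )
    conn.commit()
```

With something like this in place, returning non-200 on a transient failure is safe: records that already succeeded simply become no-ops when the message is redelivered.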

Incidentally, on my platform, more than one record in the Records array is treated as an error, causing the message to be rejected and sent to the dead letter queue for review.
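
That guard might look something like the sketch below. It assumes the queue has a redrive policy with a maxReceiveCount configured so that repeatedly failing messages land in the dead letter queue, and it reuses the hypothetical process_record() from the earlier sketch:

```python
def handle(body):
    """Reject anything that is not exactly one record, per the policy above."""
    records = body.get("Records", [])
    if len(records) != 1:
        # Raising here (and therefore returning non-200 from the endpoint)
        # means the message will eventually be moved to the dead letter queue
        # by the queue's redrive policy after maxReceiveCount failed receives.
        raise ValueError(f"expected exactly 1 record, got {len(records)}")
    process_record(records[0])  # hypothetical handler from the earlier sketch
```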

+7