Is it possible to do conditional inserts with Azure table storage?

Can I do a conditional insert using the Windows Azure table storage service?

Basically, I would like to insert a new row/entity into a table storage partition if and only if nothing in that partition has changed since I last looked.

In case you're wondering, the scenario is Event Sourcing, but I think the question is more general than that.

Basically, I would like to read some or all of a partition and make a decision based on the contents of that data. To make sure nothing in the partition has changed since the data was loaded, the insert should behave like standard optimistic concurrency: it should only succeed if nothing in the partition has changed β€” no rows added, updated, or deleted.

Normally in a REST service I would expect to use ETags for concurrency control, but as far as I can tell there is no ETag for a whole partition.

The best solution I can come up with is to maintain a single timestamp/ETag row/entity per partition in the table, and then make every insert part of a batch consisting of the insert plus a conditional update of that "timestamp entity". But that seems a bit clunky and fragile.
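For what it's worth, that batch trick can be sketched in plain Python. The `PartitionStore` class below is a toy in-memory model (all names are mine, not Azure's) of a single partition whose batches either fully succeed or fully fail, mimicking Table Storage entity group transactions; the sentinel entity's ETag stands in for the per-partition "timestamp row":

```python
import uuid

class PreconditionFailed(Exception):
    pass

class PartitionStore:
    """Toy model of one table partition: a batch of operations either
    fully succeeds or fully fails, like an entity group transaction."""

    def __init__(self):
        self.rows = {}                           # RowKey -> entity dict
        self.sentinel_etag = str(uuid.uuid4())   # ETag of the sentinel row

    def read(self):
        """Read the partition plus the sentinel ETag to present later."""
        return dict(self.rows), self.sentinel_etag

    def insert_batch(self, new_rows, expected_etag):
        """Insert rows, conditional on the sentinel entity being unchanged.

        The conditional update of the sentinel fails if anyone inserted,
        updated, or deleted rows since `expected_etag` was read."""
        if expected_etag != self.sentinel_etag:
            raise PreconditionFailed("partition changed since last read")
        for key, entity in new_rows.items():
            if key in self.rows:
                raise PreconditionFailed(f"row {key} already exists")
        self.rows.update(new_rows)
        self.sentinel_etag = str(uuid.uuid4())   # every write bumps the ETag
```

Because every successful batch bumps the sentinel's ETag, a writer holding a stale ETag is rejected even if its own rows would not have collided β€” which is exactly the "nothing in the partition has changed" guarantee asked about above.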

Is this possible with the Azure table storage service?

2 answers

View from a thousand feet

May I share with you a little fairy tale ...

Once upon a time, someone wanted to store the events of an aggregate (in the Domain-Driven Design sense) in response to a given command. This person wanted to make sure the aggregate would only be created once, and that any optimistic concurrency conflict could be detected.

To solve the first problem β€” creating the aggregate only once β€” he used a transactional medium that would throw when a duplicate aggregate (or, more accurately, a duplicate primary key) was detected. What he inserted was the aggregate identifier as the primary key, plus a unique identifier for a changeset. By changeset, we mean here the collection of events produced by the aggregate while processing a command. If someone or something else beat him to it, he would consider the aggregate already created and leave it at that. The changeset itself would have been stored beforehand in a medium of his choosing. The only promise that medium needs to make is to return what was stored, as-is, when asked. Any failure to store the changeset would be considered a failure of the whole operation.
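A minimal sketch of that first step, with my own hypothetical names and a plain dict standing in for the transactional medium β€” the only thing that matters is that inserting a duplicate primary key throws:

```python
class DuplicateAggregate(Exception):
    pass

# Toy "transactional medium": maps aggregate id -> aggregate record.
aggregate_index = {}

def create_aggregate(aggregate_id, changeset_id):
    """Insert the aggregate record only if its primary key is unseen.

    A duplicate key means someone else created the aggregate first,
    so the caller can simply consider it already created."""
    if aggregate_id in aggregate_index:
        raise DuplicateAggregate(aggregate_id)
    aggregate_index[aggregate_id] = {"last_changeset": changeset_id}
```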

To solve the second problem β€” detecting optimistic concurrency conflicts over the aggregate's further life cycle β€” he would, after writing yet another changeset, update the aggregate record in the transactional medium, if and only if nobody had updated it behind his back (i.e. compared with what he last read before executing the command). The transactional medium would notify him if that happened, which would force him to restart the whole operation, rereading the aggregate (or its changesets) so that the command could succeed this time.
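The conditional update amounts to a compare-and-swap on the aggregate record's "last changeset" field. A sketch, again with hypothetical names; in DynamoDB terms this would be a conditional write, in Table Storage an If-Match update:

```python
class ConcurrencyConflict(Exception):
    pass

def commit_changeset(index, aggregate_id, expected_changeset, new_changeset):
    """Update the aggregate record only if nobody wrote behind our back.

    The stored 'last changeset' must still be the one we read before
    executing the command; otherwise the caller must reread and retry."""
    record = index[aggregate_id]
    if record["last_changeset"] != expected_changeset:
        raise ConcurrencyConflict("reread the aggregate and retry the command")
    record["last_changeset"] = new_changeset
```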

Of course, he had now solved the writing problems, but not the reading problems. How could one read all the changesets that make up an aggregate's history, given that the transactional medium only associated the last committed changeset with the aggregate identifier? So he decided to embed some metadata in each changeset. Among that metadata β€” which is not at all uncommon in changesets β€” would be the identifier of the previously committed changeset. That way he could "walk the line" of an aggregate's changesets, like a linked list, so to speak.

As an additional perk, he also stored the command message identifier as part of the changeset metadata. That way, when reading the changesets, he could know upfront whether the command he was about to execute against the aggregate was already part of its history.
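Both tricks β€” walking the linked list of changesets and the command-id idempotency check β€” can be sketched together. The metadata field names (`previous`, `command_id`) are mine, not from any particular store:

```python
def history(changesets, last_changeset_id):
    """Walk the changesets backwards via the 'previous' metadata,
    yielding the aggregate's history newest-first, like a linked list."""
    current = last_changeset_id
    while current is not None:
        changeset = changesets[current]
        yield changeset
        current = changeset["previous"]

def already_handled(changesets, last_changeset_id, command_id):
    """The command id stored in each changeset makes replays detectable."""
    return any(cs["command_id"] == command_id
               for cs in history(changesets, last_changeset_id))
```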

All's well that ends well...

PS
1. The transactional medium and the medium for storing changesets may be one and the same,
2. The changeset identifier MUST NOT be the command identifier,
3. Feel free to punch holes in this fairy tale :-),
4. Although this is not directly related to Azure table storage, I have successfully implemented this tale using AWS DynamoDB and AWS S3.


How about storing each event under a PartitionKey/RowKey built from AggregateId/AggregateVersion, where AggregateVersion is a sequence number based on how many events the aggregate already has?

This is very deterministic, so when adding a new event to an aggregate, you can be sure you were operating on the latest version, because otherwise you will get an error saying that the row for that partition and row key already exists. At that point you can either abort the current operation and retry, or try to work out whether the operation could still be merged anyway, if the new updates to the aggregate do not conflict with the operation you are performing.
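The scheme above can be sketched with a dict standing in for the table, where inserting an existing (PartitionKey, RowKey) pair fails just like a Table Storage insert would. The names and the zero-padded version format are my own choices:

```python
class VersionConflict(Exception):
    pass

def append_event(table, aggregate_id, expected_version, event):
    """Store the event under PartitionKey=AggregateId, RowKey=AggregateVersion.

    If another writer already appended that version, the insert collides,
    which is exactly the optimistic concurrency check we want."""
    key = (aggregate_id, f"{expected_version:010d}")  # zero-pad so RowKeys sort
    if key in table:
        raise VersionConflict(f"version {expected_version} already written")
    table[key] = event
```

Zero-padding the version in the RowKey matters because Table Storage row keys sort lexicographically, so padded versions keep events in order when you scan a partition.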

