View from a thousand feet
May I share with you a little fairy tale ...
Once upon a time, somebody wanted to persist events for an aggregate (in the Domain-Driven Design sense of the word) in reaction to a given command. He wanted to make sure the aggregate would be created only once, and that any optimistic concurrency conflict could be detected.
To solve the first problem - creating the aggregate only once - he used a transactional medium that balks when a duplicate aggregate (or, more precisely, its primary key) is detected. What he inserted was the aggregate identifier as the primary key, together with a unique identifier for a changeset. A changeset, here, means the collection of events produced by the aggregate while processing a command. If someone or something else beat him to it, he would consider the aggregate already created and leave it at that. The changeset itself would be stored beforehand in a medium of his choosing. The only promise this medium must make is to return what was stored, as-is, when asked. Any failure to store the changeset would be considered a failure of the whole operation.
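The "create only once" step can be sketched as a conditional insert. Here is a minimal sketch in Python, using an in-memory dict in place of the transactional medium; all names are hypothetical and do not come from the tale itself.

```python
# Transactional medium stand-in: aggregate id -> last committed changeset id.
records = {}

class DuplicateAggregate(Exception):
    """Raised when the aggregate record (primary key) already exists."""

def insert_aggregate(aggregate_id, changeset_id):
    """Insert the aggregate record, failing if the primary key exists."""
    if aggregate_id in records:
        # Someone beat us to it: consider the aggregate already created.
        raise DuplicateAggregate(aggregate_id)
    records[aggregate_id] = changeset_id

insert_aggregate("order-1", "cs-1")
try:
    insert_aggregate("order-1", "cs-2")  # duplicate primary key
except DuplicateAggregate:
    pass  # leave it at that
```

In a real store the membership check and the write would be a single atomic conditional insert (e.g. a conditional put on the primary key), not two separate steps.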
To solve the second problem - detecting optimistic concurrency conflicts over the rest of the aggregate's life cycle - he would, after having written yet another changeset, update the aggregate record in the transactional medium, if and only if nobody had updated it behind his back (i.e. compared to what he last read just before executing the command). The transactional medium would notify him if such a thing happened. This would force him to restart the whole operation, rereading the aggregate (or rather its changesets) so that the command would succeed this time.
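The optimistic concurrency check amounts to a compare-and-set on the aggregate record. A minimal sketch, again with an in-memory dict standing in for the transactional medium and hypothetical names throughout:

```python
# Transactional medium stand-in: aggregate id -> last committed changeset id.
records = {"order-1": "cs-1"}

class ConcurrencyConflict(Exception):
    """Raised when the record was updated behind our back."""

def update_aggregate(aggregate_id, expected_changeset_id, new_changeset_id):
    """Update the record only if it still points at the changeset we last read."""
    if records.get(aggregate_id) != expected_changeset_id:
        raise ConcurrencyConflict(aggregate_id)
    records[aggregate_id] = new_changeset_id

update_aggregate("order-1", "cs-1", "cs-2")  # our read was current: succeeds
try:
    update_aggregate("order-1", "cs-1", "cs-3")  # stale read: conflict
except ConcurrencyConflict:
    pass  # reread the aggregate and retry the command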
Of course, having solved the writing problems, along came the reading problems. How would one read all the changesets of an aggregate that make up its history? After all, he only had the last committed changeset associated with the aggregate identifier in that transactional medium. So he decided to embed some metadata as part of each changeset. Among the metadata - which is not so uncommon to have as part of a changeset - would be the identifier of the previous last committed changeset. This way he could "walk the line" of his aggregate's changesets, like a linked list, so to speak.
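"Walking the line" can be sketched as following the previous-changeset pointer backwards until it runs out, then replaying the events in order. A minimal sketch, with an in-memory dict as the changeset storage medium and all names hypothetical:

```python
# Changeset storage medium stand-in:
# changeset id -> (previous committed changeset id, events in that changeset).
store = {
    "cs-1": (None, ["Created"]),
    "cs-2": ("cs-1", ["Shipped"]),
    "cs-3": ("cs-2", ["Delivered"]),
}

def read_history(last_changeset_id):
    """Follow previous-changeset pointers back to the start, then
    return all events in chronological order."""
    events = []
    current = last_changeset_id
    while current is not None:
        previous, changeset_events = store[current]
        events = changeset_events + events  # prepend: we walk backwards
        current = previous
    return events

history = read_history("cs-3")
```

The only input needed is the last committed changeset identifier, which is exactly what the transactional medium holds against the aggregate identifier.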
As an additional perk, he also stored the command message identifier as part of each changeset's metadata. This way, when reading changesets, he could know in advance whether the command he was about to execute on the aggregate was already part of its history.
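This idempotency perk falls out of the same walk: check each changeset's stored command identifier against the command about to be executed. A minimal sketch under the same assumptions (in-memory store, hypothetical names):

```python
# Changeset storage medium stand-in, now with command ids in the metadata.
store = {
    "cs-1": {"previous": None, "command_id": "cmd-a", "events": ["Created"]},
    "cs-2": {"previous": "cs-1", "command_id": "cmd-b", "events": ["Shipped"]},
}

def already_handled(last_changeset_id, command_id):
    """Walk the changesets; True if this command is already part of the history."""
    current = last_changeset_id
    while current is not None:
        changeset = store[current]
        if changeset["command_id"] == command_id:
            return True
        current = changeset["previous"]
    return False

duplicate = already_handled("cs-2", "cmd-a")   # True: cmd-a produced cs-1
fresh = already_handled("cs-2", "cmd-c")       # False: safe to execute
```

Since the changesets are read anyway to rebuild the aggregate, the duplicate-command check adds no extra round trips.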
All's well that ends well ...
P.S.
1. The transactional medium and the changeset storage medium may be one and the same,
2. The change set identifier MUST NOT be the command identifier,
3. Feel free to punch holes in a fairy tale :-),
4. Although this is not directly related to Azure Table Storage, I have successfully implemented this tale using AWS DynamoDB and AWS S3.
Yves Reynhout