First, let's chat about why someone might want to commit or roll back writes across multiple data paths in the first place.
Do you need this?
As a rule, you do not need this if:
- you do not write with high concurrency (i.e., hundreds of write operations per minute to ONE record from DIFFERENT users)
- your dependencies are simple (e.g., B depends on A, and C depends on A, but A does not depend on B or C)
- your data can be consolidated into a single path
Developers tend to worry far too much about orphaned records appearing in their data. The likelihood that a network socket will fail between one write and the next is trivially small, probably somewhere on the order of collisions between timestamp-based IDs. That is not to say it is impossible, but it is usually low frequency, unlikely, and should not be your primary concern.
Additionally, orphans are extremely easy to clean up with a script, or even by typing a few lines of code into the JS console. So again, they are usually very low impact.
What can you do instead?
Put all the data that needs to be written atomically into a single path. You can then write it with a single set() or a transaction(), as appropriate.
Or, in the case where one record is the primary and the others depend on it, simply write the primary record first, then write the others in its callback. Add security rules to enforce this, so that the primary record must always exist before the dependent records are allowed to be written.
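As a sketch of that primary-first flow (everything here is illustrative: the `FakeRef` stand-in, the paths, and the field names are made up; real code would use `new Firebase(URL).child(path)` and its set() callback):

```javascript
// A tiny in-memory stand-in for a Firebase reference, so the flow is
// runnable here without a server. set(data, onComplete) mimics the
// real API's completion callback, which receives null on success.
var store = {};

function FakeRef(path) { this.path = path; }
FakeRef.prototype.set = function(data, onComplete) {
  store[this.path] = data;           // pretend the network write succeeded
  if (onComplete) onComplete(null);  // null error means success
};

var widget = new FakeRef('widgets/widget1');           // primary record
var widgetName = new FakeRef('widget_names/widget1');  // dependent index

// Write the primary first; only write the dependent record once the
// primary write has committed.
widget.set({ name: 'wrench', owner: 'kato' }, function(err) {
  if (err) return console.error(err);
  widgetName.set('wrench');
});
```

Security rules on `widget_names` would then require the matching `widgets` record to exist, so a dependent record can never be written before its primary.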
If you are denormalizing data solely to make iteration easy and fast (e.g., to get a list of names for your users), simply index that data in a separate path. Then you can keep the complete record in one path, and the names, emails, etc. in a fast, query-friendly/sorted list.
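A minimal sketch of that indexing pattern, with a plain object standing in for the database (the paths and field names are hypothetical):

```javascript
// Full records live under one path; a lightweight, sortable index of
// names lives under another. Iterating the index never has to load
// the heavy records.
var store = {};
function set(path, data) { store[path] = data; } // stand-in for ref.set()

// The complete record, in one path:
set('users/kato', { name: 'Kato', email: 'kato@example.com', bio: '...' });
// The query-friendly index, in a separate path:
set('user_names/kato', 'Kato');

// Listing names only touches the small index entries:
var names = Object.keys(store)
  .filter(function(k) { return k.indexOf('user_names/') === 0; })
  .map(function(k) { return store[k]; });
```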
When is this useful?
This is the appropriate tool to use if you have a denormalized set of records that:
- cannot practically be consolidated into a single path
- has complex dependencies (e.g., A depends on C, C depends on B, and B depends on A)
- is written with high concurrency (i.e., possibly hundreds of write operations per minute to the SAME record from DIFFERENT users)
How do you do this?
The idea is to use update counters to ensure that all paths stay at the same revision.
1) Create an update counter that is incremented by transactions:
```javascript
function updateCounter(counterRef, next) {
   counterRef.transaction(function(current_value) {
      return (current_value || 0) + 1;
   }, function(err, committed, ss) {
      if( err ) { console.error(err); }
      else if( committed ) { next(ss.val()); }
   }, false);
}
```
2) Give it some security rules
```json
"counters": {
   "$counter": {
      ".read": true,
      ".write": "newData.isNumber() && ( (!data.exists() && newData.val() === 1) || newData.val() === data.val() + 1 )"
   }
},
```
3) Give your records security rules that enforce the update_counter
```json
"$atomic_path": {
   ".read": true,
   // .validate allows these records to be deleted; use .write to prevent deletions
   ".validate": "newData.hasChildren(['update_counter', 'update_key']) && root.child('counters/'+newData.child('update_key').val()).val() === newData.child('update_counter').val()",
   "update_counter": {
      ".validate": "newData.isNumber()"
   },
   "update_key": {
      ".validate": "newData.isString()"
   }
}
```
4) Write data using the update_counter
Because of the security rules, the writes can only succeed if the counter has not moved. If it has moved, the records were superseded by a concurrent change, so they no longer matter (they are no longer the latest and greatest).
```javascript
var fb = new Firebase(URL);
updateCounter(fb.child('counters/myKey'), function(newCounter) {
   var data = { foo: 'bar', update_counter: newCounter, update_key: 'myKey' };
   fb.child('pathA').set(data);
   fb.child('pathB').set(data);
});
```
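To see why stale writes fail, here is a local simulation of what the `.validate` rule in step 3 enforces server-side. `validateWrite` is a hypothetical helper for illustration only, not client code you would actually write:

```javascript
// Simulates the server-side check: a write passes only if its
// update_counter matches the current value at counters/<update_key>
// and the required children have the right types.
var counters = { myKey: 2 }; // current revision of 'myKey'

function validateWrite(newData) {
  return typeof newData.update_counter === 'number' &&
         typeof newData.update_key === 'string' &&
         counters[newData.update_key] === newData.update_counter;
}

// A write carrying the current counter value is accepted:
var ok = validateWrite({ foo: 'bar', update_counter: 2, update_key: 'myKey' });
// A write carrying an old counter value (a concurrent writer already
// bumped the counter) is rejected:
var stale = validateWrite({ foo: 'bar', update_counter: 1, update_key: 'myKey' });
```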
5) Rollback
Rollbacks are a bit more involved, but you can build them on this principle:
- save the old values before calling set()
- monitor each set op for failures
- revert to the old values for any committed changes, but keep the new counter
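The three steps above can be sketched as follows. `FlakyRef`, `atomicSet`, and the failure flag are all hypothetical stand-ins for illustration; a real implementation would use Firebase references and their onComplete callbacks:

```javascript
// In-memory stand-in whose set() can be told to fail, so we can
// exercise the rollback path without a server.
var store = { pathA: 'oldA', pathB: 'oldB' };

function FlakyRef(path, shouldFail) {
  this.path = path;
  this.shouldFail = shouldFail;
}
FlakyRef.prototype.set = function(data, onComplete) {
  if (this.shouldFail) return onComplete(new Error('write failed'));
  store[this.path] = data;
  onComplete(null);
};

function atomicSet(refs, data, onDone) {
  // 1) save the old values before calling set()
  var oldValues = refs.map(function(ref) { return store[ref.path]; });
  var pending = refs.length, failed = false;
  refs.forEach(function(ref, i) {
    // 2) monitor each set op for failures
    ref.set(data, function(err) {
      if (err) failed = true;
      if (--pending === 0) {
        if (failed) {
          // 3) revert any committed changes to their old values. The
          // new counter is kept: it lives at its own counters/ path,
          // which is not among these refs and is not touched here.
          refs.forEach(function(r, j) { store[r.path] = oldValues[j]; });
        }
        onDone(failed);
      }
    });
  });
}

var rolledBack = null;
atomicSet(
  [ new FlakyRef('pathA', false), new FlakyRef('pathB', true) ],
  { foo: 'bar', update_counter: 2, update_key: 'myKey' },
  function(failed) { rolledBack = failed; }
);
```

Here pathA commits but pathB fails, so pathA is reverted to its old value and the whole multi-path write reports failure.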
Pre-built library
I wrote a lib today that does this and put it up on GitHub. Feel free to use it, but please make sure you are not overcomplicating your life by reading "Do you need this?" above.