I think I understand your question:
Each client maintains a disconnected dataset and periodically updates its local copy of the dataset.
You use a SQL-92 timestamp (i.e. a datetime — not to be confused with the SQL Server timestamp type, which is a binary row version) to track updates, so you can find the delta in the dataset and bring the clients up to date.
This approach runs into trouble because the timestamp is calculated before the transaction is fully committed. A subsequent transaction can calculate a newer timestamp yet still commit before the earlier one, so your "last timestamp" high-water mark skips some or all of those records.
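To make the race concrete with a hypothetical timeline: transaction A starts at 23:14:00 and stamps its rows with that time; transaction B starts at 23:14:05, stamps its rows, and commits immediately. A client that syncs at 23:14:06 records 23:14:05 as its "last timestamp". When A finally commits at 23:14:10, its rows still carry the 23:14:00 stamp, so a query for anything newer than 23:14:05 will never pick them up.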
So what can be done?
The fact that this is such a "hard nut to crack" is a good sign that it's an atypical design pattern, although I assume changing the design is out of the question.
Here are some solutions that might work.
Build a "margin of error" into your updates. If your last update was at 2011-08-06 23:14:05, subtract a few minutes (you'll need to work out empirically how large the error can be) and fetch everything since then. This is the quick band-aid solution. To refine it, use the SQL Server timestamp (the automatic binary row version), or calculate a checksum per row, store it in your local dataset, and compare rows during the update so you only re-apply what actually changed; see the sketch below.
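Something like this, as a minimal sketch — MyTable, UpdateDT, RowVer, and the five-minute margin are all placeholder names/values, not taken from your schema:

    -- Sketch of the "margin of error" fetch. Assumes a hypothetical dbo.MyTable
    -- with an UpdateDT datetime column and a RowVer rowversion column; the
    -- five-minute margin is a placeholder you would tune against the real error.
    DECLARE @LastSync datetime = '2011-08-06T23:14:05';

    SELECT Id, SomeColumn, UpdateDT, RowVer
    FROM dbo.MyTable
    WHERE UpdateDT >= DATEADD(MINUTE, -5, @LastSync);
    -- Client side: compare each returned row's RowVer (or checksum) against the
    -- copy already in the local dataset and apply only the rows that differ.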
Lazy refreshing. Again, use the SQL Server timestamp (rowversion) to version the rows: before a user is allowed to edit a record, check whether the timestamps still match, and refresh if they don't. Use the NOWAIT hint in that check to find out immediately whether the row is currently being edited by someone else. You would typically repeat the check when saving, to catch conflicts (http://www.mssqltips.com/tip.asp?tip=1501); this is the more conventional approach. There's a sketch below.
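Roughly like this — again a sketch against the same hypothetical dbo.MyTable with a RowVer rowversion column, where @Id, @NewValue, and @OriginalRowVer stand in for values the client would supply:

    DECLARE @Id int = 42,
            @NewValue nvarchar(100) = N'edited value',
            @OriginalRowVer binary(8) = 0x0000000000000FA3;

    -- Pre-edit check: NOWAIT makes the SELECT fail immediately (error 1222)
    -- instead of blocking if another session holds a lock on the row.
    SELECT RowVer
    FROM dbo.MyTable WITH (NOWAIT)
    WHERE Id = @Id;

    -- On save: the WHERE clause matches only if the row is unchanged since we
    -- read it; zero rows affected means someone else got there first.
    UPDATE dbo.MyTable
    SET SomeColumn = @NewValue
    WHERE Id = @Id
      AND RowVer = @OriginalRowVer;

    IF @@ROWCOUNT = 0
        RAISERROR('Row was changed by another user; refresh and retry.', 16, 1);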
Ultimately, you may not need to maintain the entire dataset on the client at all. SQL Server is very good at handling concurrency and multiple users; if you need to search local data, it might be better to run those searches on the server. If you're seeing blocking issues, focus on shortening transaction durations — above all, holding a transaction open while waiting for user input is a terrible idea. Your question hints this may be happening, but I don't want to assume anything.
From here the ideas/options only get worse/crazier (for example, you could install SQL Server instances (Express?) on the clients, use replication to push changes down, and use those instances as the local data store. But replication latency can be a serious problem, and I'm fairly sure this would cause more problems than it solves.)
There is another possibility: perhaps I'm misreading your question, and you calculate the update timestamp on the client and only later save it to the server. If so, instead of passing an explicit datetime to the server, switch to GETDATE(), which computes the date and time on the server at execution time. A sketch follows.
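For example (hypothetical table and parameter names again):

    DECLARE @Id int = 42, @NewValue nvarchar(100) = N'edited value';

    -- Before: UPDATE dbo.MyTable SET SomeColumn = @NewValue, UpdateDT = @ClientTime WHERE Id = @Id;
    UPDATE dbo.MyTable
    SET SomeColumn = @NewValue,
        UpdateDT   = GETDATE()   -- evaluated on the server at execution time
    WHERE Id = @Id;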
If that would mean refactoring too much code, you can also handle it with a trigger:
A simple example:
    -- Stamp UpdateDT with the server's time whenever rows are inserted or updated
    CREATE TRIGGER Trig_MyTable_Update ON dbo.MyTable
    FOR INSERT, UPDATE
    AS
        IF @@ROWCOUNT = 0 RETURN   -- nothing affected, nothing to stamp

        UPDATE U
        SET UpdateDT = GETDATE()
        FROM dbo.MyTable U
        INNER JOIN INSERTED I ON U.Id = I.Id
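One caveat with the trigger approach: by default the RECURSIVE_TRIGGERS database option is OFF, so the trigger's own UPDATE won't re-fire it; if you've enabled that option, you'll need to guard against recursion.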