I use AzureDirectory for full-text indexing on Azure, and I've had some mixed results ... but hopefully this answer will be useful to you ...
Firstly, the compound file option: from what I've read and figured out, the compound file is one large file with all the index data inside it. The alternative is a large number of small files (configured using IndexWriter's SetMaxMergeDocs(int) method) written to storage. The problem with this is that once you get to a large number of files (I stupidly ended up with about 5000), it takes ages to load the indexes (on the Azure server it takes about a minute; on my dev box ... well, it ran for 20 minutes and still hadn't finished ...).
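For reference, here is a minimal sketch of the relevant settings, assuming the Lucene.Net 2.9-era API this answer refers to (SetUseCompoundFile / SetMaxMergeDocs) and the AzureDirectory constructor that takes a storage account and a catalog name; the connection string, catalog name, and namespaces are placeholders and may differ across versions:

```csharp
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Index;
using Lucene.Net.Store.Azure;
using Microsoft.WindowsAzure;

// Open (or create) the index in blob storage via AzureDirectory.
// "MyCatalog" and the connection string are placeholders.
var account = CloudStorageAccount.Parse("YOUR_STORAGE_CONNECTION_STRING");
var azureDirectory = new AzureDirectory(account, "MyCatalog");

var writer = new IndexWriter(
    azureDirectory,
    new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_29),
    IndexWriter.MaxFieldLength.UNLIMITED);

// Pack the whole index into one large compound file ...
writer.SetUseCompoundFile(true);

// ... or, if you leave compound files off, cap the number of documents
// merged into a single segment so the file count stays sane; letting the
// index sprawl across thousands of files is what made loading so slow.
writer.SetMaxMergeDocs(1000);
```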
As for backing up indexes, I haven't come across this yet, but considering that we currently have about 5 million records and that number will grow, I'm also interested in it. If you use a single compound file, you could maybe download the index files to a worker role, zip them, and upload the zip with the current date in its name. If you have a smaller set of documents, you could get away with simply re-indexing the data if something goes wrong ... but again, it depends on the number ....
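As a rough sketch of that zip-and-upload idea (using the old Microsoft.WindowsAzure.StorageClient SDK for illustration; the container name, paths, and connection string are placeholders, and it assumes the index has already been synced to a local directory and that no IndexWriter is open during the backup):

```csharp
using System;
using System.IO;
using System.IO.Compression; // ZipFile needs .NET 4.5+; use any zip library otherwise
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Zip the locally cached index directory and upload it as a dated backup blob.
var account = CloudStorageAccount.Parse("YOUR_STORAGE_CONNECTION_STRING");
var client = account.CreateCloudBlobClient();
var container = client.GetContainerReference("index-backups");
container.CreateIfNotExist();

string localIndexPath = @"C:\Resources\LuceneCache"; // wherever the index is cached locally
string zipPath = Path.Combine(Path.GetTempPath(), "index-backup.zip");
ZipFile.CreateFromDirectory(localIndexPath, zipPath);

var blob = container.GetBlobReference(
    string.Format("index-backup-{0:yyyy-MM-dd}.zip", DateTime.UtcNow));
blob.UploadFile(zipPath);
```

Keeping the date in the blob name means you accumulate point-in-time snapshots you can roll back to, rather than overwriting a single backup.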