Assume I have a basic understanding of adding and searching documents.
What would be the best practice for managing instances of IndexWriter and IndexReader?
My application currently creates a single instance of IndexWriter. When I need to search, I just create an IndexSearcher from that writer like this:
var searcher = new IndexSearcher(writer.GetReader());
I do this because opening a new IndexReader for every search loads the index into memory each time and then leaves it to the GC to reclaim that memory, which was causing out-of-memory errors.
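For reference, here is a trimmed-down sketch of what I am doing today. It assumes the Lucene.Net 3.0.3-style API; the index path, analyzer, and class name are only illustrative.

using System.IO;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.Search;
using Lucene.Net.Store;
using Lucene.Net.Util;

public static class SearchIndex
{
    private static readonly FSDirectory Dir =
        FSDirectory.Open(new DirectoryInfo(@"C:\Inetpub\Wwwroot\htdocs_beta\App_Data\products3"));

    // One writer for the lifetime of the application; this is what keeps
    // write.lock on disk the whole time.
    private static readonly IndexWriter Writer =
        new IndexWriter(Dir, new StandardAnalyzer(Version.LUCENE_30),
                        IndexWriter.MaxFieldLength.UNLIMITED);

    public static TopDocs Search(Query query, int n)
    {
        // Near-real-time reader taken from the writer, so it also sees
        // documents that were added but not yet committed.
        using (var reader = Writer.GetReader())
        using (var searcher = new IndexSearcher(reader))
        {
            return searcher.Search(query, n);
        }
    }

    public static void Add(Document doc)
    {
        Writer.AddDocument(doc);
        Writer.Commit();
    }
}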
Is this implementation considered ideal? It solved the memory problem, but now the write.lock file always exists (since the IndexWriter is always instantiated and open). Here is the stack trace of the error I get in the application:
Lock obtain timed out: NativeFSLock@C:\Inetpub\Wwwroot\htdocs_beta\App_Data\products3\write.lock: System.IO.IOException: The process cannot access the file 'C:\Inetpub\Wwwroot\htdocs_beta\App_Data\products3\write.lock' because it is being used by another process.
   at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
   at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy, Boolean useLongPath)
   at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access)
   at Lucene.Net.Store.NativeFSLock.Obtain()
I am thinking it might be better to keep a singleton IndexSearcher and create an IndexWriter only when an update is needed. That way, the write.lock file would be created and then removed around each index update. The only problem I see with this is that the IndexSearcher instance becomes stale, so I would need to run a task that reopens the IndexSearcher whenever the index has been updated, roughly like the sketch below.
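This is only a rough sketch of the idea, again assuming the Lucene.Net 3.0.3 API. SearcherHolder, Current, ReopenIfChanged and Update are names I made up, and real code would also have to deal with searches still running against the old reader when it gets swapped out.

using System;
using System.IO;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Index;
using Lucene.Net.Search;
using Lucene.Net.Store;

public class SearcherHolder
{
    private readonly FSDirectory _dir;
    private readonly object _sync = new object();
    private IndexReader _reader;
    private IndexSearcher _searcher;

    public SearcherHolder(string indexPath)
    {
        _dir = FSDirectory.Open(new DirectoryInfo(indexPath));
        _reader = IndexReader.Open(_dir, true);          // read-only reader
        _searcher = new IndexSearcher(_reader);
    }

    // Searches always go through the current (possibly slightly stale) searcher.
    public IndexSearcher Current
    {
        get { lock (_sync) { return _searcher; } }
    }

    // Called by a background task, or right after an update, to pick up changes.
    public void ReopenIfChanged()
    {
        lock (_sync)
        {
            var newReader = _reader.Reopen();            // cheap no-op if nothing changed
            if (!ReferenceEquals(newReader, _reader))
            {
                _searcher.Dispose();
                _reader.Dispose();
                _reader = newReader;
                _searcher = new IndexSearcher(_reader);
            }
        }
    }

    // The writer exists only for the duration of the update,
    // so write.lock is created and removed around each update.
    public void Update(Action<IndexWriter> updateAction)
    {
        using (var writer = new IndexWriter(_dir,
            new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_30),
            IndexWriter.MaxFieldLength.UNLIMITED))
        {
            updateAction(writer);
            writer.Commit();
        }
        ReopenIfChanged();
    }
}

The background task would then only have to call ReopenIfChanged() every few seconds, or it could be called right after each Update as above.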
What do you think?
How do you handle a large index with real-time updates?