Problem:
I am working on a system where a database holds a huge list of files from the file system. If a group of files is deleted from the file system, the corresponding entries must be removed from the database.
An approach:
Fetch the list of files from the database and list the files on the file system. Then check whether each file from the database still appears in the file-system list, and delete it from the database if it is not found. To avoid re-scanning the whole list for each file, I plan to use sets in Python and the difference_update() method, as sketched below.
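For illustration, a minimal sketch of this approach, assuming the two path lists have already been fetched (the literal paths below are hypothetical placeholders, not data from the question):

```python
# Hypothetical inputs: paths recorded in the database and paths
# currently present on the file system.
db_paths = {"/data/a.txt", "/data/b.txt", "/data/c.txt"}
fs_paths = {"/data/a.txt", "/data/c.txt"}

# difference_update() mutates the set in place, removing every
# element that is also in fs_paths; what remains are the stale
# entries that should be deleted from the database.
stale = set(db_paths)          # work on a copy so db_paths stays intact
stale.difference_update(fs_paths)

print(stale)                   # {'/data/b.txt'}
```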
Question:
Internally, does difference_update() still have O(m × n) complexity, like the naive re-search approach, or is it optimized to reduce the complexity?