My application is backed by a potentially very large Core Data store (it can easily exceed 30 MB). I started noticing memory problems when using automatic migration (via addPersistentStoreWithType:configuration:URL:options:error:), so I began looking into ways to migrate the store in smaller chunks, to avoid the spike in live Core Data objects that occurs when you migrate everything at once.
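For reference, here's roughly how I'm adding the store (a sketch; storeURL and model are placeholders for my real store location and managed object model):

```
// Standard automatic-migration setup; storeURL and model are assumed
// to be defined elsewhere in the app.
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                         [NSNumber numberWithBool:YES],
                         NSMigratePersistentStoresAutomaticallyOption, nil];

NSPersistentStoreCoordinator *psc = [[NSPersistentStoreCoordinator alloc]
                                     initWithManagedObjectModel:model];
NSError *error = nil;

// With a 30+ MB store, this single call is where the memory spike shows up:
// the entire store is migrated in one shot.
if (![psc addPersistentStoreWithType:NSSQLiteStoreType
                       configuration:nil
                                 URL:storeURL
                             options:options
                               error:&error]) {
    NSLog(@"Error opening/migrating store: %@", error);
}
```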
This is discussed in the official documentation, in the section on migrating in multiple passes. However, their approach is to divide the migration by entity type, i.e. to create several mapping models, each of which migrates a subset of the entity types from the complete data model.
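As far as I can tell, that approach boils down to something like the following (a sketch based on my reading of the docs; the Step1/Step2 mapping model names and the model/URL variables are placeholders):

```
// One NSMigrationManager run per mapping model, where each mapping
// model covers a different subset of the entities.
NSMigrationManager *manager =
    [[NSMigrationManager alloc] initWithSourceModel:sourceModel
                                   destinationModel:destinationModel];

for (NSString *name in [NSArray arrayWithObjects:@"Step1", @"Step2", nil]) {
    NSURL *mappingURL = [[NSBundle mainBundle] URLForResource:name
                                                withExtension:@"cdm"];
    NSMappingModel *mapping =
        [[NSMappingModel alloc] initWithContentsOfURL:mappingURL];

    NSError *error = nil;
    BOOL ok = [manager migrateStoreFromURL:sourceURL
                                      type:NSSQLiteStoreType
                                   options:nil
                          withMappingModel:mapping
                          toDestinationURL:destinationURL
                           destinationType:NSSQLiteStoreType
                        destinationOptions:nil
                                     error:&error];
    if (!ok) {
        NSLog(@"Migration pass %@ failed: %@", name, error);
        break;
    }
}
```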
The only problem is: what if a single entity type makes up the majority of your data store? Under Apple's recommended approach, all instances of that entity type would still be migrated in one pass, and the memory problems would presumably persist.
Are there any techniques for migrating only a subset of the objects of a given entity type, so that you don't run out of memory while migrating them all?
Thanks in advance for your help.
EDIT: after doing some more digging, I've found that Apple's recommended division of the migration by entity type actually only works for entities that aren't related to one another (as discussed here), so it solves even fewer real-world problems than I thought when I originally wrote this post.
I'm starting to think that Core Data migrations performed through NSMigrationManager simply don't scale, and that you essentially can't have a store much larger than 20-30 MB if you want to be able to migrate it on the current generation of iOS devices. The only viable approach seems to be to bypass the NSMigrationManager / NSMappingModel machinery entirely and write the whole migration in code. If that's true, it seems like a huge oversight on Apple's part.
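In case it helps anyone thinking along the same lines, the hand-rolled migration I have in mind looks roughly like this (a sketch only; oldContext, newContext, the "BigEntity" name, and the batch size are all stand-ins, and relationships would need a second pass keyed on some stable identifier):

```
// Open the old and new stores side by side and copy one entity's objects
// across in batches, saving and resetting between batches so memory
// usage stays bounded.
static const NSUInteger kBatchSize = 100;

NSFetchRequest *request = [[NSFetchRequest alloc] init];
[request setEntity:[NSEntityDescription entityForName:@"BigEntity"
                               inManagedObjectContext:oldContext]];
[request setFetchLimit:kBatchSize];

NSUInteger offset = 0;
NSError *error = nil;
NSArray *batch = nil;

do {
    [request setFetchOffset:offset];
    batch = [oldContext executeFetchRequest:request error:&error];

    for (NSManagedObject *oldObject in batch) {
        NSManagedObject *newObject =
            [NSEntityDescription insertNewObjectForEntityForName:@"BigEntity"
                                          inManagedObjectContext:newContext];
        // Copy attribute values across; relationships would need a
        // separate pass once all objects exist on the new side.
        for (NSString *key in [[[oldObject entity] attributesByName] allKeys]) {
            [newObject setValue:[oldObject valueForKey:key] forKey:key];
        }
    }

    if (![newContext save:&error]) {
        NSLog(@"Save failed at offset %lu: %@", (unsigned long)offset, error);
        break;
    }

    // Drop the already-migrated objects so they can be deallocated.
    [oldContext reset];
    [newContext reset];
    offset += kBatchSize;
} while ([batch count] == kBatchSize);
```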
memory-management iphone migration core-data
glenc