Core Data migration strategies: moving attribute → modelled relationships

I have a rather large Core Data-based database schema (~20 entities, 140+ properties) that is undergoing a significant set of changes as it is ported from our 1.x code base over to our 2.x code base.

I am very familiar with performing lightweight migrations, but this particular migration has me a bit confused, because several entities used to store their related objects as transformable attributes on the entity itself, and I now want to move those into actual Core Data relationships to real entities.

This seems like a textbook example of when to use a heavyweight migration instead of a lightweight one, but I'm not thrilled about that either. I have no experience with heavyweight migrations; one of the entities that has this transformable array → modelled relationship change accounts for ~90% of the rows in the database; these databases can be larger than 200 MB; and I know a good portion of our customers are using iPad 1s. That, combined with the repeated warnings in Apple's documentation and the (excellent) Core Data book about the speed and memory consumption of heavyweight migrations, makes me very wary and looking for another way to handle this situation.

WWDC 2010 "Master Data Creation Wizard" session 118 ( slides here , requires logging in, from 9 to the last slide, with the heading "Migrate Transfer" - this is what I mean) mentions a way to sort the work around this - perform the migration and then use the repository metadata to determine whether the custom mail processing you want to perform has been performed. I think this may be the way to go, but it feels a bit hacked (due to lack of a better word) for me. In addition, I am worried about leaving attributes hanging around that are out of date in practice. ex. if I move the entity attribute foo barArray to the relationship between the foo entity and the entity string, and I noil out barArray, barArray still exists as an attribute that can be written and read. A potential way to solve this problem is to signal that these attributes are out of date by changing their names so that they are “out of date” in front, and also possibly redefined accessors for approval if they are used, but with KVO there is no guaranteed compilation that will not allow people use them, and I don’t want to leave “trap code” around, especially since the “trap code” should be around as long as I potentially have clients that still need to go from 1.0.

This turned into a bigger brain dump than I expected, so for clarity, my questions are:
1) Is heavyweight migration a particularly bad choice given the constraints I'm working under? (business-critical app, no experience with heavyweight migrations, databases larger than 200 MB, tens of thousands of rows, customers using iPad 1s on iOS 5+)
2) If so, is the migration post-processing approach described in session 118 the best option?
3) If so, how can I eventually (or immediately) get rid of these "deprecated" attributes so that they no longer pollute my code base?

1 answer

My recommendation is to stay away from heavyweight migration, full stop. It is too expensive on iOS and will most likely result in an unacceptable user experience.

In this situation I would do a lazy migration. Create a lightweight migration whose new model has the related entities and relationships in place.

Then migrate, but don't move the data yet.
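As a point of reference, a lightweight migration like this is just the standard automatic/inferred migration when the store is added. A minimal sketch, with `storeURL` and `model` as placeholders for the app's store location and merged model:

```swift
import CoreData

/// Adds the SQLite store with automatic lightweight migration, so the new
/// (empty) relationship is added to the schema without touching existing data.
func openStore(at storeURL: URL, with model: NSManagedObjectModel) throws -> NSPersistentStoreCoordinator {
    let coordinator = NSPersistentStoreCoordinator(managedObjectModel: model)
    let options: [AnyHashable: Any] = [
        NSMigratePersistentStoresAutomaticallyOption: true, // migrate the store in place
        NSInferMappingModelAutomaticallyOption: true        // infer the mapping = lightweight migration
    ]
    _ = try coordinator.addPersistentStore(ofType: NSSQLiteStoreType,
                                           configurationName: nil,
                                           at: storeURL,
                                           options: options)
    return coordinator
}
```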

Change the accessor for this new relationship so that it first checks the old transformable attribute: if the transformable is populated, it pulls the data out, copies it into the new relationship, and then nils out the transformable.

Doing this will cause the data to move as it is used.
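A minimal sketch of such an accessor, assuming a `Foo` entity with an old transformable attribute `barArray` (modelled here as an array of strings purely for illustration) and a new to-many relationship `bars` to a `Bar` entity with a `name` attribute and a to-one inverse `foo`. All of these names are hypothetical; adapt them to the real model:

```swift
import CoreData

class Bar: NSManagedObject {
    @NSManaged var name: String?
    @NSManaged var foo: Foo?
}

class Foo: NSManagedObject {

    /// Custom accessor for the new relationship: it first drains the old
    /// transformable attribute (if anything is still in it), then returns the
    /// relationship's contents.
    var bars: Set<Bar> {
        migrateBarArrayIfNeeded()
        return (value(forKey: "bars") as? Set<Bar>) ?? []
    }

    private func migrateBarArrayIfNeeded() {
        guard let context = managedObjectContext,
              let oldValues = value(forKey: "barArray") as? [String],
              !oldValues.isEmpty else { return }

        for item in oldValues {
            // Create a real Bar object for each element of the old array.
            let bar = NSEntityDescription.insertNewObject(forEntityName: "Bar",
                                                          into: context) as! Bar
            bar.name = item
            bar.foo = self               // the inverse keeps "bars" in sync
        }
        setValue(nil, forKey: "barArray") // nil out the old transformable
    }
}
```

The migrated objects and the cleared transformable are persisted the next time the context is saved.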

Now there are some problems with this design.

If you want to use predicates against these new objects, it is going to be ... iffy until all the data has moved over. You will need to do a two-pass fetch: that is, fetch with a predicate that does not hit the new entity, and then filter that subset once the objects are in memory, so that the transformables get moved.
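A sketch of that two-pass fetch, reusing the hypothetical `Foo`/`Bar` names from the earlier sketch (the `isArchived` attribute in the first pass is just an illustrative predicate that avoids the new relationship):

```swift
import CoreData

func foosMatching(barName: String, in context: NSManagedObjectContext) throws -> [Foo] {
    // Pass 1: fetch with a predicate that does NOT involve the new `bars`
    // relationship, since unmigrated rows would not match it at the store level.
    let request = NSFetchRequest<Foo>(entityName: "Foo")
    request.predicate = NSPredicate(format: "isArchived == NO")
    let candidates = try context.fetch(request)

    // Pass 2: touching `bars` triggers the lazy migration; filter in memory.
    return candidates.filter { foo in
        foo.bars.contains { $0.name == barName }
    }
}
```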
