The goal is to reduce CPU cost and response time for a piece of code that runs very often and has to db.get() several hundred keys each time.
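For context, the hot path looks roughly like this (a minimal sketch, not my real code; load_batch and the 'Thing' kind name are just stand-ins):

    from google.appengine.ext import db

    def load_batch(key_names):
        # Build the several hundred keys and fetch them in one batch call.
        keys = [db.Key.from_path('Thing', name) for name in key_names]
        return db.get(keys)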
Will this even help?
Can I expect the API time for db.get() with several hundred keys to drop roughly linearly as I shrink the size of each entity? Currently the entity has the following properties: 9 String, 9 Boolean, 8 Integer, 1 GeoPt, 2 DateTime, 1 Text (average size ~100 bytes FWIW), 1 Reference, 1 StringList (average size ~500 bytes). The plan is to move the vast majority of this data into other model classes so that the core fetch of the main model stays fast, along the lines of the sketch below.
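What I have in mind is roughly this kind of split (a sketch only; ThingCore, ThingExtras and the property names are made up to illustrate the idea, not my actual schema):

    from google.appengine.ext import db

    class ThingCore(db.Model):
        # Only the properties the hot path actually needs.
        num = db.IntegerProperty()
        flag = db.BooleanProperty()

    class ThingExtras(db.Model):
        # Everything else, stored under the same key_name so it can be
        # fetched separately and only when needed.
        text = db.TextProperty()
        strings = db.StringListProperty()
        location = db.GeoPtProperty()
        updated = db.DateTimeProperty()

    def load_cores(key_names):
        # The hot path touches only the small entities.
        keys = [db.Key.from_path('ThingCore', name) for name in key_names]
        return db.get(keys)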
If it does help, how do I actually do it?
After refactoring, will I still pay the high cost of fetching the existing entities? The documentation says that all of a model's properties are fetched together. Will the old, now-unneeded properties still be shipped over RPC on my dime while users wait? In other words: if I want to reduce the load time of my entities, do I need to migrate the old entities to ones with the new definition? If so, is it enough to re-put() the entity, or do I have to save it under a completely new key?
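If re-saving is the answer, I imagine the migration would look something like this (a hedged sketch; it assumes the original Thing model and the lean ThingCore model from the sketch above, and that copying under the same key_name is enough; the batch size is arbitrary):

    from google.appengine.ext import db

    def migrate(batch_size=100):
        # Thing is the original model; ThingCore is the lean model above.
        old_things = Thing.all().fetch(batch_size)
        new_things = [ThingCore(key_name=t.key().name(), num=t.num)
                      for t in old_things]
        # Write the lean copies in one batch put.
        db.put(new_things)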
Example
Consider:
    from google.appengine.ext import db

    class Thing(db.Model):
        text = db.TextProperty()
        strings = db.StringListProperty()
        num = db.IntegerProperty()

    thing = Thing(key_name='thing1',
                  text='x' * 10240,
                  strings=['y' * 500 for i in range(10)],
                  num=23)
    thing.put()
Now say I redefine Thing for optimization and deploy the new version:
    class Thing(db.Model):
        num = db.IntegerProperty()
And fetch it again:
    thing_again = Thing.get_by_key_name('thing1')
Have I reduced the fetch time for this entity?
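For what it's worth, this is how I plan to measure it (rough wall-clock timing only; I would average over many requests or use Appstats for anything trustworthy):

    import logging
    import time

    start = time.time()
    thing_again = Thing.get_by_key_name('thing1')
    logging.info('get_by_key_name took %.1f ms', (time.time() - start) * 1000.0)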