GAE backend: entities written with ndb.put_multi() not immediately visible to queries

I use a backend to write multiple entities with ndb.put_multi(list_of_entities) .

The problem I am facing is that if I run a query immediately afterwards, I get no results. If I add a sleep timer of, say, 1 second, I can read the entities I just wrote.

So for example:

    class Picture(ndb.Expando):
        pass

    class Favourite(ndb.Expando):
        user_id = ndb.StringProperty(required=True)

    # ...make lists with Picture and Favourite kinds
    entities = favourites
    entities[1:1] = pictures
    ndb.put_multi(entities)

    favourites = Favourite.query().filter(Favourite.user_id == user_id).fetch(99999, keys_only=True)
    logging.info(len(favourites))  # returns 0 in dev_appserver -- why?

At first I assumed the problem was related to caching. But:

From the NDB docs on operations for multiple keys or entities:

Note: these operations interact correctly with the context and caching; they do not correspond one-to-one to specific RPC calls.

And from the NDB docs on caching:

In-context cache

The in-context cache persists only for the duration of a single incoming HTTP request and is "visible" only to the code that handles that request. It is fast; this cache lives in memory. When an NDB function writes to the Datastore, it also writes to the in-context cache. When an NDB function reads an entity, it checks the in-context cache first. If the entity is found there, no Datastore interaction takes place.

Queries do not look up values in any cache. However, query results are written back to the in-context cache if the cache policy says so (but never to Memcache).
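The quoted behavior can be summed up with a toy simulation (plain Python, not the real NDB API; every name here is illustrative): writes land in the per-request cache and the committed store immediately, get() consults the cache, but queries only see what the index already contains.

```python
# Toy model of the quoted NDB behavior -- NOT the real API.
class FakeContext:
    def __init__(self):
        self.cache = {}      # in-context cache (per request)
        self.datastore = {}  # committed entities (Commit phase)
        self.index = {}      # query index, filled by the async Apply phase

    def put(self, key, entity):
        self.datastore[key] = entity  # Commit: durable immediately
        self.cache[key] = entity      # in-context cache updated immediately
        # self.index is NOT touched here: Apply happens asynchronously

    def get(self, key):
        # A get() by key checks the in-context cache first.
        return self.cache.get(key, self.datastore.get(key))

    def query(self, kind):
        # Queries skip the cache and read the (possibly stale) index.
        return [e for e in self.index.values() if e["kind"] == kind]

    def apply(self):
        # Stands in for the Apply phase finishing (what sleep(1) waits for).
        self.index = dict(self.datastore)

ctx = FakeContext()
ctx.put("fav1", {"kind": "Favourite", "user_id": "u1"})
print(ctx.get("fav1") is not None)   # True -- the cache sees the write
print(len(ctx.query("Favourite")))   # 0    -- the index has not caught up
ctx.apply()
print(len(ctx.query("Favourite")))   # 1    -- now the query sees it
```

This is only a mental model, but it reproduces the symptom from the question: get-by-key sees the write at once, while a query returns 0 until the "apply" step has run.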

That is where I am, and everything seems in order. Even querying from the admin console I get the correct count, but never from the same handler, no matter which function I use.

The only other thing I noticed is that if I add a time.sleep(1) wait, I get the correct results. So it seems ndb.put_multi may not be fully synchronous. Confused...

1 answer

A clear mind in the morning is always better than a dizzy mind at night.

Thanks everyone for the comments. The problem is solved; you led me right to the answer to my question:

I used ancestor queries to get correct results. The following is worth noting, from "Understanding NDB Writes: Commit, Invalidate Cache, and Apply":


An NDB function that writes data (for example, put()) returns after the cache is invalidated; the Apply phase happens asynchronously.

This means that when a write call returns, the Apply phase may not yet be complete.

And:

This behavior affects how and when data is visible. The change may not be fully applied to the underlying Datastore until a few hundred milliseconds or more after the NDB function returns. A non-ancestor query executed while a change is being applied may see an inconsistent state (that is, part, but not all, of the change). For more about the timing of writes and queries, see "Transaction Isolation in App Engine".
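Here is a sketch of the ancestor-query fix, using the classic google.appengine.ext.ndb API. The 'UserRoot' kind name and the helper functions are my own illustration, not from the original code: parenting each user's Favourite entities under one synthetic root key makes the query strongly consistent, so no sleep(1) is needed.

```python
from google.appengine.ext import ndb

class Favourite(ndb.Expando):
    user_id = ndb.StringProperty(required=True)

def save_favourites(user_id, favourites_data):
    # Parent all of a user's favourites under one synthetic root key.
    # ('UserRoot' is an illustrative kind name; no actual entity of
    # that kind needs to exist.)
    root = ndb.Key('UserRoot', user_id)
    entities = [Favourite(parent=root, user_id=user_id, **d)
                for d in favourites_data]
    ndb.put_multi(entities)

def load_favourites(user_id):
    root = ndb.Key('UserRoot', user_id)
    # An ancestor query is strongly consistent: it sees the put_multi()
    # above immediately, with no sleep() needed.
    return Favourite.query(ancestor=root).fetch(keys_only=True)
```

The trade-off: all of one user's favourites end up in a single entity group, and under HRD an entity group supported only limited sustained write throughput (roughly one commit per second per group), so this pattern suits per-user data rather than high-volume shared data.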

Also, some notes on consistency between reads and writes, taken from Google's documentation on retrieving data from the Datastore:

The Google App Engine High Replication Datastore (HRD) provides high read and write availability by storing data synchronously in multiple data centers. However, the delay between a write and the moment it becomes visible in all data centers means that queries across multiple entity groups (non-ancestor queries) can only guarantee eventually consistent results. Consequently, the results of such queries may sometimes fail to reflect recent changes to the underlying data. However, a direct fetch of an entity by its key is always strongly consistent.
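A sketch of that last point, again assuming the classic google.appengine.ext.ndb API: put_multi() returns the keys it wrote, and fetching those keys back is strongly consistent even while a non-ancestor query may still lag.

```python
from google.appengine.ext import ndb

class Favourite(ndb.Expando):
    user_id = ndb.StringProperty(required=True)

entities = [Favourite(user_id='u1'), Favourite(user_id='u1')]
keys = ndb.put_multi(entities)   # put_multi() returns the written keys

fresh = ndb.get_multi(keys)      # strongly consistent: fetch by key
# A non-ancestor query run here may still return fewer (or zero) results:
maybe_stale = Favourite.query(Favourite.user_id == 'u1').fetch()
```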

Thanks to @Paul C for the continued help, and to @dragonx and @sologoub for helping me understand.
