I would venture to guess that the problem is in how the documents are built before being sent to ElasticSearch, and that the --batch-size option will help you.
The update method in the ElasticSearch backend prepares documents for indexing from the entire queryset it is given, and then performs a single bulk insert for that queryset:
self.conn.bulk_index(self.index_name, 'modelresult', prepped_docs, id_field=ID)
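To make the behaviour concrete, here is a minimal, self-contained sketch of that pattern (illustrative only, not Haystack's actual source); prepare_document stands in for index.full_prepare(obj):

    def prepare_document(record):
        # Stand-in for Haystack's index.full_prepare(obj): turn one model
        # instance into the dict that will be sent to ElasticSearch.
        return {"id": record["id"], "text": record["text"]}

    def update(connection, records):
        # Every record is converted to a document up front...
        prepped_docs = [prepare_document(r) for r in records]
        # ...and then a single bulk call ships all of them at once, mirroring
        # conn.bulk_index(index_name, 'modelresult', prepped_docs, id_field=ID).
        connection.bulk_index("haystack", "modelresult", prepped_docs, id_field="id")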
So if you have a table with millions of records, running update_index on that indexed model means generating those millions of documents in memory and then indexing them all at once. I suspect that is where the problem lies. Setting a batch limit with the --batch-size parameter should cap the number of documents generated at a time by slicing the queryset into chunks of your batch size, as in the sketch below.
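Here is a hedged sketch of what batching changes (the names are illustrative, not Haystack's internals): the queryset is sliced into chunks of batch_size, and each chunk is prepared and bulk-indexed on its own, so only one chunk's worth of documents lives in memory at a time.

    def update_in_batches(connection, queryset, batch_size=1000):
        total = queryset.count()
        for start in range(0, total, batch_size):
            end = min(start + batch_size, total)
            batch = queryset[start:end]  # only this slice is loaded and prepared
            # Reusing prepare_document from the sketch above.
            prepped_docs = [prepare_document(obj) for obj in batch]
            connection.bulk_index("haystack", "modelresult",
                                  prepped_docs, id_field="id")

So running something like ./manage.py update_index --batch-size=1000 should keep each prepared chunk to 1000 documents instead of the whole table.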