Because the inline fields are dynamic, the best approach is to change your schema so that the translation field becomes an array of embedded documents. Below is an example of such a schema, showing the new structure:
"translation": [ { "lang": "en", "name" : "brown fox", "description" : "the quick brown fox jumps over a lazy dog" }, { "lang": "it", "name" : "brown fox ", "description" : " the quick brown fox jumps over a lazy dog" }, { "lang": "fr", "name" : "renard brun ", "description" : " le renard brun rapide saute par-dessus un chien paresseux" }, { "lang": "de", "name" : "brown fox ", "description" : " the quick brown fox jumps over a lazy dog" }, { "lang": "es", "name" : "brown fox ", "description" : " el rápido zorro marrón salta sobre un perro perezoso" } ]
With this schema, it is easy to create a text index on the name and description fields:
    db.collection.createIndex( { "translation.name": "text", "translation.description": "text" } )
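Once the index is built, text searches will match terms in any of the translations. As a quick illustration (the search term is just an assumption based on the sample data), the following query would return documents whose French translation contains "renard":

    db.collection.find({ "$text": { "$search": "renard" } })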
To carry out the schema change, you will need to update your collection in bulk, and the Bulk API does this for you. It gives better performance because you send write operations to the server in batches (say, of 1000) instead of sending each request individually; the server only receives one request per batch of 1000 operations.
This approach is shown below. The first example uses the Bulk API, available in MongoDB versions >= 2.6 and < 3.2. It updates all the documents in the collection, changing each translation field to an array:
    var bulk = db.collection.initializeUnorderedBulkOp(),
        counter = 0;

    // Only select documents where translation exists and is not already an array (BSON type 4)
    db.collection.find({
        "translation": { "$exists": true, "$not": { "$type": 4 } }
    }).snapshot().forEach(function (doc) {
        // Turn the { "<lang>": { ... } } sub-document into an array of { "lang": "<lang>", ... } documents
        var localization = Object.keys(doc.translation).map(function (key) {
            var obj = doc["translation"][key];
            obj["lang"] = key;
            return obj;
        });

        bulk.find({ "_id": doc._id }).updateOne({
            "$set": { "translation": localization }
        });

        counter++;
        if (counter % 1000 === 0) {
            // Send the batch to the server and re-initialize the bulk builder for the next batch
            bulk.execute();
            bulk = db.collection.initializeUnorderedBulkOp();
        }
    });

    // Flush any remaining queued operations
    if (counter % 1000 !== 0) { bulk.execute(); }
The following example applies to the newer MongoDB 3.2, which deprecated the Bulk API and provides a newer set of APIs using bulkWrite().
It uses the same cursor as above, but builds the array of bulk operations with the same forEach() cursor method, pushing each bulk update document into the array. Because write commands can accept no more than 1000 operations, you need to group your operations into batches of at most 1000 and re-initialize the array when the loop reaches 1000 iterations:
    var cursor = db.collection.find({
            "translation": { "$exists": true, "$not": { "$type": 4 } }
        }).snapshot(),
        bulkUpdateOps = [];

    cursor.forEach(function (doc) {
        var localization = Object.keys(doc.translation).map(function (key) {
            var obj = doc["translation"][key];
            obj["lang"] = key;
            return obj;
        });

        bulkUpdateOps.push({
            "updateOne": {
                "filter": { "_id": doc._id },
                "update": { "$set": { "translation": localization } }
            }
        });

        if (bulkUpdateOps.length === 1000) {
            db.collection.bulkWrite(bulkUpdateOps);
            bulkUpdateOps = [];
        }
    });

    if (bulkUpdateOps.length > 0) { db.collection.bulkWrite(bulkUpdateOps); }
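After the migration finishes, you may want to check that no documents were missed. As a rough sanity check (using the same filter as the migration queries above), the following count should return 0 once every translation field has been converted to an array:

    db.collection.count({
        "translation": { "$exists": true, "$not": { "$type": 4 } }
    })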