So, looking at this with fresh eyes, the answer stares me in the face. The key thing you have already stated is that you want to find the "intersection" of two queries in a single response.
Another way to look at this is that you want all of the points matched by the first query to become the "input" for the second query, and so on as required. That is essentially what an intersection does, but here the logic is applied literally.
So just use the aggregation pipeline to chain the matching queries. For a simple example, consider the following documents:
{ "loc" : { "type" : "Point", "coordinates" : [ 4, 4 ] } } { "loc" : { "type" : "Point", "coordinates" : [ 8, 8 ] } } { "loc" : { "type" : "Point", "coordinates" : [ 12, 12 ] } }
And the chained aggregation pipeline, just two queries:
db.geotest.aggregate([
    { "$match": {
        "loc": {
            "$geoWithin": {
                "$box": [ [0,0], [10,10] ]
            }
        }
    }},
    { "$match": {
        "loc": {
            "$geoWithin": {
                "$box": [ [5,5], [20,20] ]
            }
        }
    }}
])
So, logically, the first stage finds the points that fall within the bounds of the initial box, which is the first two documents. Those results are then acted on by the second query, and since the new box bounds begin at [5,5], the first point is excluded. The third point was already excluded by the first box, but if the order of the box restrictions were reversed, the result would be the same: only the middle document.
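To check that reasoning, here is a quick sketch with the two $match stages swapped, using the same collection and field names as above; the same single middle document should come back either way:

db.geotest.aggregate([
    { "$match": {
        "loc": { "$geoWithin": { "$box": [ [5,5], [20,20] ] } }
    }},
    { "$match": {
        "loc": { "$geoWithin": { "$box": [ [0,0], [10,10] ] } }
    }}
])

// Expected result either way:
// { "loc" : { "type" : "Point", "coordinates" : [ 8, 8 ] } }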
How this works is quite unique to the $geoWithin query operator in comparison to the other geo functions:
$geoWithin does not require a geospatial index. However, a geospatial index will improve query performance. Both the 2dsphere and 2d geospatial indexes support $geoWithin.
So the results are both good and bad. Good in that you can perform this type of operation without an index in place, but bad in that once the aggregation pipeline has altered the collection results past the first query operation, no further index can be used. Thus, any index performance advantage is lost when merging the "set" results of anything after the initial supported Polygon/MultiPolygon stage.
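To see this for yourself, here is a minimal sketch, assuming the collection and field names above and a MongoDB version that supports the explain option to aggregate. Whatever index is in place, only the first $match can make use of it, since later stages receive documents from the pipeline rather than from the collection:

// A 2dsphere index suits the GeoJSON points in "loc" and the "$geometry"
// form of $geoWithin used further below
db.geotest.createIndex({ "loc": "2dsphere" })

// Ask for the plan instead of the results; only the first $match stage
// is in a position to be resolved against the index
db.geotest.aggregate(
    [
        { "$match": { "loc": { "$geoWithin": { "$box": [ [0,0], [10,10] ] } } }},
        { "$match": { "loc": { "$geoWithin": { "$box": [ [5,5], [20,20] ] } } }}
    ],
    { "explain": true }
)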
For this reason, I would still recommend that you calculate the intersection bounds "outside" of the query issued to MongoDB. Even though the aggregation framework can do this due to the "chained" nature of the pipeline, and even though the resulting intersections will get smaller and smaller, your best performance is a single query with the correct bounds that can take full advantage of the index.
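For the two boxes above, the intersection is easy to compute by hand: it is the box from [5,5] to [10,10]. A single query with those bounds, shown here as a sketch in the GeoJSON "$geometry" form that a 2dsphere index can serve, returns the same middle point in one pass:

db.geotest.find({
    "loc": {
        "$geoWithin": {
            "$geometry": {
                "type": "Polygon",
                "coordinates": [[
                    [ 5, 5 ], [ 5, 10 ], [ 10, 10 ], [ 10, 5 ], [ 5, 5 ]
                ]]
            }
        }
    }
})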
There are various methods for doing that, but for reference here is an implementation using the JSTS library, which is a JavaScript port of the popular JTS library for Java. There may be other ports, or ports to other languages, but this one has simple GeoJSON parsing and built-in methods for getting such bounds:
var async = require('async'),
    util = require('util'),
    jsts = require('jsts'),
    mongo = require('mongodb'),
    MongoClient = mongo.MongoClient;

var parser = new jsts.io.GeoJSONParser();

var polys = [
    {
        type: 'Polygon',
        coordinates: [[
            [ 0, 0 ], [ 0, 10 ], [ 10, 10 ], [ 10, 0 ], [ 0, 0 ]
        ]]
    },
    {
        type: 'Polygon',
        coordinates: [[
            [ 5, 5 ], [ 5, 20 ], [ 20, 20 ], [ 20, 5 ], [ 5, 5 ]
        ]]
    }
];

var points = [
    { type: 'Point', coordinates: [ 4, 4 ] },
    { type: 'Point', coordinates: [ 8, 8 ] },
    { type: 'Point', coordinates: [ 12, 12 ] }
];

MongoClient.connect('mongodb://localhost/test', function(err, db) {

    db.collection('geotest', function(err, geo) {
        if (err) throw err;

        // The listing was truncated here; the remaining steps are a sketch:
        // clean the collection, insert the sample points, then compute the
        // intersection with JSTS and query with the resulting geometry.
        async.series(
            [
                function(callback) {
                    geo.remove({}, callback);
                },
                function(callback) {
                    geo.insert(points, callback);
                },
                function(callback) {
                    // Parse both polygons and intersect them
                    var geos = polys.map(function(poly) {
                        return parser.read(poly);
                    });
                    var intersection = parser.write(
                        geos[0].intersection(geos[1])
                    );

                    // One query, using the computed GeoJSON bounds
                    geo.find({
                        "loc": { "$geoWithin": { "$geometry": intersection } }
                    }).toArray(function(err, results) {
                        console.log(util.inspect(results, false, 10, true));
                        callback(err);
                    });
                }
            ],
            function(err) {
                if (err) throw err;
                db.close();
            }
        );
    });
});
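Run against a local test database, this sketch should print just the [ 8, 8 ] point, agreeing with the chained pipeline shown earlier, but as a single query whose $geometry bounds a 2dsphere index on loc can serve.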
This uses the full GeoJSON "Polygon" representations, since that translates to something JTS can understand and work with. Chances are, any data you would get for a real application would be in this format anyway, rather than using conveniences such as $box.
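If you do start from $box-style corner pairs, converting them to the GeoJSON form is trivial. Here is a hypothetical helper (boxToPolygon is my name, not a library function) that builds the same closed rings as the polys in the listing above:

// Hypothetical helper: turn a "$box" corner pair [bottomLeft, topRight]
// into the equivalent closed GeoJSON Polygon ring
function boxToPolygon(box) {
    var bl = box[0], tr = box[1];
    return {
        type: 'Polygon',
        coordinates: [[
            [ bl[0], bl[1] ], [ bl[0], tr[1] ],
            [ tr[0], tr[1] ], [ tr[0], bl[1] ],
            [ bl[0], bl[1] ]
        ]]
    };
}

// boxToPolygon([ [0,0], [10,10] ]) yields the first polygon in polys above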
So this can be done with the aggregation framework, or even with parallel queries whose "set" results you merge yourself. But while the aggregation framework may handle this better than merging result sets externally, the best results will always come from computing the bounds first.