I want to create a web application in which the user enters a set of identifiers, and a set of points corresponding to those identifiers appears on the map.
The problem is that the data can range from tens of points to hundreds of thousands, even up to a million. Given that range, I want to take a tiered approach. Below is my ideal aggregation behavior.
At low zoom levels, I want to aggregate the points into per-state counts (with the size/color of the symbol indicating higher counts, and the symbol placed at the state's centroid). At slightly higher zoom levels, they would be broken into counts over smaller polygons; at still higher zoom, even smaller polygons. Finally, once the unaggregated number of points visible on the map would fall below ~500, just draw the individual points.
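To make the per-state roll-up concrete, here is roughly what I mean (a sketch only: the data frame `pts`, the `state_centroids` lookup, and the column names `state_id`, `lat`, `lng`, `n` are placeholders for my actual data):

```r
library(dplyr)
library(leaflet)

# Count points per state, then attach each state's centroid coordinates.
state_counts <- pts %>%
  count(state_id) %>%                              # one row per state with count `n`
  left_join(state_centroids, by = "state_id")      # adds `lat`, `lng`

# Draw one circle per state, sized by the count.
leaflet(state_counts) %>%
  addTiles() %>%
  addCircleMarkers(lng = ~lng, lat = ~lat,
                   radius = ~sqrt(n),              # area roughly proportional to count
                   fillOpacity = 0.6)
```

The same pattern would apply at each tier, just grouping by a finer polygon identifier instead of `state_id`.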
The polygons themselves are already settled, and each point carries in its data the identifier of every polygon that contains it.
Since points are placed randomly within their corresponding polygons, the actual distribution of points inside a polygon carries no information. In fact, any aggregation that ignores which polygon a point was drawn in would likely destroy information. For this reason, I can't use Leaflet.markercluster (at least not with any of the options I've seen; if there is a simple tool that aggregates the way I'm describing, please let me know).
For various reasons (I am not a JavaScript programmer, I am an R programmer) I am working with the leaflet package in R. Is there a way I can change the level of aggregation depending on the zoom like this?
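The closest thing I have found in the R leaflet package is `groupOptions()`, which can show or hide a layer group depending on zoom level, so perhaps something along these lines could express the tiering (the group names, zoom cut-offs, and pre-aggregated data frames `state_counts`, `tract_counts`, `pts` are my guesses, not a tested solution):

```r
library(leaflet)

leaflet() %>%
  addTiles() %>%
  # One layer group per aggregation tier.
  addCircleMarkers(data = state_counts, lng = ~lng, lat = ~lat,
                   radius = ~sqrt(n), group = "states") %>%
  addCircleMarkers(data = tract_counts, lng = ~lng, lat = ~lat,
                   radius = ~sqrt(n), group = "tracts") %>%
  addCircleMarkers(data = pts, lng = ~lng, lat = ~lat,
                   radius = 3, group = "points") %>%
  # Each group is only visible within its own zoom range.
  groupOptions("states", zoomLevels = 1:6) %>%
  groupOptions("tracts", zoomLevels = 7:10) %>%
  groupOptions("points", zoomLevels = 11:18)
```

My worry with this approach is that all tiers are sent to the browser up front, which may not scale to a million points; I'd welcome any alternative.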
I have put together a toy data set containing a relatively small subset (1 entity, 3 states, ~10,000 observations), along with census tract and county centroids for those states.
http://s000.tinyupload.com/index.php?file_id=00048836337627834343