If your 23 independent variables are sampled on a regular hypergrid, you can partition the space into hypercubes and linearly interpolate the dependent value from the vertex closest to the origin, along the vectors defined from that vertex along the edges of the hypercube. In the general case, for such a partition, you project the interpolation point onto each of these vectors, which gives you a new "coordinate" in that local space; the interpolated value is then obtained by multiplying each coordinate by the corresponding difference in the dependent variable, summing the results, and adding the sum to the dependent value at the local origin. For hypercubes this projection is simple (you just subtract the vertex closest to the origin and divide by the edge lengths).
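A minimal sketch of that scheme, assuming a regular grid with known edge lengths; the function and argument names here are illustrative, not part of any particular library:

```python
import numpy as np

def edge_linear_interpolate(point, origin, spacing, f_origin, f_edges):
    """Linear interpolation from the hypercube vertex nearest the origin.

    point    : (d,) query point inside the hypercube cell
    origin   : (d,) the cell vertex closest to the grid origin
    spacing  : (d,) edge length of the cell along each axis
    f_origin : dependent value at `origin`
    f_edges  : (d,) dependent values at the d vertices one edge step
               away from `origin` along each axis
    """
    # Project onto each edge: subtract the local origin and divide by
    # the edge length, giving a local coordinate in [0, 1] per axis.
    t = (np.asarray(point, float) - np.asarray(origin, float)) / np.asarray(spacing, float)
    # Scale each per-axis difference in the dependent variable by the
    # local coordinate, sum, and add to the value at the local origin.
    return f_origin + np.dot(t, np.asarray(f_edges, float) - f_origin)
```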
If your samples are not evenly distributed, the problem is much harder, since you will need to choose an appropriate partition of the space if you want to perform linear interpolation. Delaunay triangulation does generalize to N dimensions, but it is not easy to compute, and the resulting geometric objects (simplices) are considerably harder to reason about and interpolate over than a simple hypercube.
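For scattered data in a modest number of dimensions, SciPy's LinearNDInterpolator will build the N-dimensional Delaunay triangulation (via Qhull) for you and interpolate linearly within each simplex; a small sketch with made-up 4-dimensional samples:

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Illustrative scattered samples: 200 points in 4 dimensions
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(200, 4))
vals = np.sin(pts).sum(axis=1)            # stand-in dependent variable

# Builds an N-d Delaunay triangulation internally and performs
# barycentric linear interpolation inside each simplex.
interp = LinearNDInterpolator(pts, vals)
print(interp([[0.5, 0.5, 0.5, 0.5]]))     # NaN if the query is outside the convex hull
```

The cost of the triangulation grows quickly with dimension, so this is only practical for a fairly small number of independent variables.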
One thing to consider is whether your dataset naturally lends itself to projection, so that you can reduce the number of dimensions. For example, if two of your independent variables dominate, you can reduce the problem to two dimensions, which is much easier to solve. Another thing to consider is assembling the sample points into a matrix, computing its SVD, and looking at the singular values. If there are a few dominant singular values, you can project onto the hyperplane defined by the corresponding basis vectors and reduce the dimensionality of your interpolation, as sketched below. Essentially, if your data varies mainly within a particular subset of dimensions, you can use those dominant dimensions to perform the interpolation, since you don't really have much information in the remaining dimensions anyway.
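A sketch of that idea using NumPy's SVD; the helper name and the choice of k are illustrative, and in practice you would look at the singular values before deciding how many directions to keep:

```python
import numpy as np

def dominant_subspace(samples, k):
    """Project samples onto the k dominant singular directions.

    samples : (n, d) matrix of independent-variable coordinates
    k       : number of dominant singular values to keep
    Returns the (n, k) reduced coordinates, the data center, and the
    basis, so query points can be mapped into the same reduced space.
    """
    X = np.asarray(samples, float)
    center = X.mean(axis=0)
    # SVD of the centered sample matrix; the singular values show how
    # much the data actually varies along each direction.
    U, s, Vt = np.linalg.svd(X - center, full_matrices=False)
    print("singular values:", s)          # look for a sharp drop-off
    basis = Vt[:k]                         # rows span the dominant hyperplane
    return (X - center) @ basis.T, center, basis

# A query point is reduced the same way before interpolating:
#   q_reduced = (q - center) @ basis.T
```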
I agree with the other commenters, however, that your premise may be flawed. Usually you do not want to interpolate in order to perform an analysis, because interpolation simply makes up data in one of many possible ways, and that choice of interpolation skews the analysis. It only makes sense if you have good reason to believe a particular interpolation is physically consistent with your data, and you just need additional points for a particular algorithm.
Dan Bryant