You can use a non-linear optimization approach by defining a "cost" function that accumulates the distance error from each of your observation points.
Denoting the unknown point as (x, y, z) and the set of N observation points as (xi, yi, zi, di), the total distance error can be characterized with the following function:
C(x,y,z) = sum( ((x - xi)^2 + (y - yi)^2 + (z - zi)^2 - di^2)^2 )    for all observation points i = 1 to N
This is the sum of the squared distance errors over all points in the set. (The error is expressed in terms of squared distances, so there are no square roots to deal with.)
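For concreteness, here is a minimal Python/NumPy sketch of this cost function. The obs array (one row per observation, columns xi, yi, zi, di) is just an assumed layout, not something prescribed by the method itself:

```python
import numpy as np

def cost(p, obs):
    """C(x, y, z): sum of squared range residuals.

    p   : candidate point (x, y, z)
    obs : assumed array of shape (N, 4), rows (xi, yi, zi, di)
    """
    diff = np.asarray(p, dtype=float) - obs[:, :3]   # (p - p_i) for each observation
    r = np.sum(diff**2, axis=1) - obs[:, 3]**2       # (x-xi)^2 + (y-yi)^2 + (z-zi)^2 - di^2
    return np.sum(r**2)
```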
When this function is at a minimum, the target point (x, y, z) is in its optimal position. If the minimum gives C(x, y, z) = 0, all observations are satisfied exactly.
One way to minimize an equation of this type is Newton's method. You have to specify a starting point for the iteration, perhaps the average of the observation points (if they surround the target (x, y, z)), or perhaps an initial value triangulated from any three of the observations; a small example of the averaged guess follows below.
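For example, the averaged starting point could be computed from the same hypothetical obs array:

```python
# Assumed starting guess for the iteration: centroid of the observation points
X0 = obs[:, :3].mean(axis=0)
```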
Edit: Newton's method is an iterative algorithm that can be used for optimization. A simple version works along the following lines: at each iteration k, solve the linear system
H(X(k)) * dX = G(X(k))

for the step dX, then update X(k+1) = X(k) - dX.
G(X(k)) denotes the gradient vector evaluated at X(k), in this case:
G(X(k)) = [dC/dx dC/dy dC/dz]
H(X(k)) denotes the Hessian matrix evaluated at X(k), in this case the 3x3 symmetric matrix:
H(X(k)) = [ d^2C/dx^2   d^2C/dxdy   d^2C/dxdz
            d^2C/dydx   d^2C/dy^2   d^2C/dydz
            d^2C/dzdx   d^2C/dzdy   d^2C/dz^2 ]
You should be able to differentiate the cost function analytically and therefore obtain analytic expressions for G and H.
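As one sketch of that route (again assuming the obs layout used above): writing r_i = (x-xi)^2 + (y-yi)^2 + (z-zi)^2 - di^2, differentiating gives G = 4 * sum( r_i * (p - p_i) ) and H = 4 * sum( r_i * I + 2 * (p - p_i)*(p - p_i)^T ), where p = (x, y, z) and p_i = (xi, yi, zi). In Python/NumPy, a Newton loop based on these expressions might look like:

```python
import numpy as np

def newton_trilaterate(obs, x0, iters=50, tol=1e-10):
    """Minimize C(x, y, z) with Newton's method using analytic G and H.

    obs : assumed array of shape (N, 4), rows (xi, yi, zi, di)
    x0  : starting point for the iteration
    """
    pts, d = obs[:, :3], obs[:, 3]
    p = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = p - pts                                   # (N, 3) of (p - p_i)
        r = np.sum(diff**2, axis=1) - d**2               # residuals r_i
        G = 4.0 * diff.T @ r                             # gradient [dC/dx, dC/dy, dC/dz]
        H = 4.0 * (np.sum(r) * np.eye(3) + 2.0 * diff.T @ diff)  # Hessian
        dX = np.linalg.solve(H, G)                       # solve H(X(k)) * dX = G(X(k))
        p = p - dX                                       # X(k+1) = X(k) - dX
        if np.linalg.norm(dX) < tol:
            break
    return p
```

A call such as newton_trilaterate(obs, X0), with X0 the averaged starting point from earlier, would return the estimated (x, y, z). Note that plain Newton steps can diverge from a poor starting point, so adding a damping or line-search safeguard may be worthwhile.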
Another approach, if you don't like taking derivatives, is to approximate G and H numerically using finite differences.
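If you prefer that route, a central-difference sketch could look like this, where C is any callable taking a 3-vector (e.g. a wrapper around the cost function above) and the step size h is an assumed tuning parameter:

```python
import numpy as np

def numeric_grad_hess(C, p, h=1e-5):
    """Central-difference estimates of the gradient G and Hessian H of C at p."""
    p = np.asarray(p, dtype=float)
    n = p.size
    e = np.eye(n)
    # Gradient: (C(p + h*ei) - C(p - h*ei)) / (2h)
    G = np.array([(C(p + h*e[i]) - C(p - h*e[i])) / (2*h) for i in range(n)])
    # Hessian: second-order central differences in each pair of directions
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            H[i, j] = (C(p + h*e[i] + h*e[j]) - C(p + h*e[i] - h*e[j])
                       - C(p - h*e[i] + h*e[j]) + C(p - h*e[i] - h*e[j])) / (4*h*h)
    return G, H
```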
Hope this helps.