For underdetermined systems such as yours (the rank is less than the number of variables), mldivide returns a basic solution with as many zero entries as possible. Which of the variables end up equal to zero is an arbitrary choice made by the algorithm.
By contrast, lstsq returns the minimum-norm solution in such cases: among the infinite family of exact solutions, it picks the one with the smallest sum of squares of the variables.
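As a concrete illustration (a minimal sketch using the same A and b as the code further down), np.linalg.lstsq returns an exact solution of this underdetermined system, namely the one with the smallest Euclidean norm:

```python
import numpy as np

# Underdetermined system: 2 equations, 3 unknowns (same A and b as below)
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 4.0, 3.0]])
b = np.array([8.0, 18.0])

# lstsq returns the exact solution of minimum Euclidean norm
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(x)                  # minimum-norm solution
print(np.linalg.norm(x))  # smallest 2-norm among all exact solutions
```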
So the "special" MATLAB solution is somewhat arbitrary: in this problem, any one of the three variables can be set to zero. The solution given by NumPy is actually more special: there is exactly one minimum-norm solution.
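One way to see that the minimum-norm solution is unique: it is given by the Moore-Penrose pseudoinverse, x = pinv(A) @ b, and NumPy's pinv agrees with lstsq (a quick check, using the same A and b as the code below):

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 4.0, 3.0]])
b = np.array([8.0, 18.0])

x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
x_pinv = np.linalg.pinv(A) @ b   # Moore-Penrose pseudoinverse solution

print(x_lstsq)  # both give the same unique minimum-norm solution
print(x_pinv)
```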
Which solution is best for your purposes depends on your goal; non-uniqueness of a solution is usually a reason to revisit how you set up the equations. But since you asked, here is NumPy code that produces MATLAB-style solutions.
```python
import numpy as np
from itertools import combinations

A = np.array([[1, 2, 0],
              [0, 4, 3]], dtype=float)
b = np.array([[8],
              [18]], dtype=float)

num_vars = A.shape[1]
rank = np.linalg.matrix_rank(A)
if rank == num_vars:
    sol = np.linalg.lstsq(A, b, rcond=None)[0]  # not underdetermined
else:
    for nz in combinations(range(num_vars), rank):  # the variables not set to zero
        try:
            sol = np.zeros((num_vars, 1))
            sol[nz, :] = np.linalg.solve(A[:, nz], b)
            print(sol)
        except np.linalg.LinAlgError:
            pass  # picked bad variables, can't solve
```
In your example, it outputs three "special" solutions, the last of which is what MATLAB chooses.
```
[[-1. ]
 [ 4.5]
 [ 0. ]]
[[ 8.]
 [ 0.]
 [ 6.]]
[[ 0.        ]
 [ 4.        ]
 [ 0.66666667]]
```
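As a sanity check (a small sketch hard-coding the three solutions printed above), one can verify that each of these basic solutions solves the system exactly, yet the minimum-norm solution from lstsq has a strictly smaller 2-norm than any of them:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 4.0, 3.0]])
b = np.array([8.0, 18.0])

# The three MATLAB-style basic solutions from the output above
basic = [np.array([-1.0, 4.5, 0.0]),
         np.array([8.0, 0.0, 6.0]),
         np.array([0.0, 4.0, 2.0 / 3.0])]

x_min, *_ = np.linalg.lstsq(A, b, rcond=None)

for x in basic:
    # each "special" solution is exact...
    print(A @ x)
    # ...but has a larger norm than the lstsq answer
    print(np.linalg.norm(x), ">", np.linalg.norm(x_min))
```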
user3717023