Invalid result from scipy.optimize.minimize on a Fortran function wrapped with ctypes

I currently have a Fortran function that I want to optimize using SciPy, wrapping it with ctypes. Is that possible, or did I do something wrong in my implementation? For example, suppose I have:

cost.f90

    module cost_fn
        use iso_c_binding, only: c_float
        implicit none
    contains
        function sin_2_cos(x, y) bind(c)
            real(c_float) :: x, y, sin_2_cos
            sin_2_cos = sin(x)**2 * cos(y)
        end function sin_2_cos
    end module cost_fn

which I compile with:

 gfortran -fPIC -shared -g -o cost.so cost.f90 

and then try to find the (local) minimum with:

cost.py

    #!/usr/bin/env python
    from ctypes import *

    import numpy as np
    import scipy.optimize as sopt

    cost = cdll.LoadLibrary('./cost.so')
    cost.sin_2_cos.argtypes = [POINTER(c_float), POINTER(c_float)]
    cost.sin_2_cos.restype = c_float

    def f2(x):
        return cost.sin_2_cos(c_float(x[0]), c_float(x[1]))
        # return np.sin(x[0])**2 * np.cos(x[1])

    # print(f2([1, 1]))
    # print(f2([0.5 * np.pi, np.pi]))

    print(sopt.minimize(f2, (1.0, 1.0), options={'disp': True}, tol=1e-8))

I expect a local minimum f2(pi/2, pi) = -1. When f2 returns the value from cost.sin_2_cos, the "minimum" reported is just the initial guess (1, 1). If f2 instead returns the pure-Python expression (the commented-out line), the optimization finds the correct minimum.

I tried rewriting sin_2_cos to take a single array(2) argument, but saw the same behavior. Perhaps I need to pass sin_2_cos to minimize directly (but then how would I specify c_float for the arguments)? Any thoughts appreciated!

Edit: regarding the comment below, note that the two commented-out print(f2(...)) lines produce the expected values, so I believe the Fortran function is being called correctly through the Python f2 wrapper.

1 answer

The Fortran code uses single-precision floating-point values, i.e. 32-bit floats. (In C terms, float is single precision and double is double precision.) The default method used by scipy.optimize.minimize() approximates the derivatives of the function with finite differences: to estimate the derivative f'(x0), it computes (f(x0+eps) - f(x0))/eps, where eps is the step size. The default step size is approximately 1.49e-08. Unfortunately, that value is smaller than the spacing between single-precision values around 1.0, so when the minimizer adds eps to 1, the result is still 1. The function is therefore evaluated twice at the same point, the finite difference comes out 0, and a zero gradient satisfies the condition for a minimum, so the solver decides it is done.
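The step-swallowing effect is easy to demonstrate with NumPy alone:

```python
import numpy as np

# SciPy's default finite-difference step:
eps = 1.49e-8

# In single precision, adding eps to 1.0 changes nothing...
print(np.float32(1.0) + np.float32(eps) == np.float32(1.0))  # True

# ...while in double precision the step survives:
print(1.0 + eps == 1.0)  # False
```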

The eps solver option sets the finite-difference step size. Set it to something larger than 1.19e-7, the single-precision machine epsilon. For example, with options={'disp': True, 'eps': 2e-6} I get a solution.
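To see the fix without the compiled library, here is a sketch that uses a pure-Python stand-in for the Fortran function, rounded to float32 to mimic its single-precision return value, and passes the larger eps:

```python
import numpy as np
import scipy.optimize as sopt

def f2(x):
    # stand-in for the Fortran function: round the result to float32
    # to mimic the single-precision return value
    return float(np.float32(np.sin(x[0])**2 * np.cos(x[1])))

# eps larger than the float32 machine epsilon (1.19e-7)
res = sopt.minimize(f2, (1.0, 1.0), tol=1e-8, options={'eps': 2e-6})
print(res.x)  # close to (pi/2, pi), where f2 is about -1
```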

By the way, you can find this value, 1.19e-7, using numpy.finfo():

    In [4]: np.finfo(np.float32(1.0)).eps
    Out[4]: 1.1920929e-07

You will also get a solution if you use method='nelder-mead' in the minimize() call. That method does not rely on finite differences.
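With the same float32-rounded stand-in for the Fortran function, Nelder-Mead reaches the minimum without any eps tuning:

```python
import numpy as np
import scipy.optimize as sopt

def f2(x):
    # float32-rounded stand-in for the Fortran cost function
    return float(np.float32(np.sin(x[0])**2 * np.cos(x[1])))

# Nelder-Mead is derivative-free, so the finite-difference
# step size never comes into play
res = sopt.minimize(f2, (1.0, 1.0), method='nelder-mead')
print(res.x)  # close to (pi/2, pi)
```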

Finally, you can convert the Fortran code to use double precision:

    module cost_fn
        use iso_c_binding, only: c_double
        implicit none
    contains
        function sin_2_cos(x, y) bind(c)
            real(c_double) :: x, y, sin_2_cos
            sin_2_cos = sin(x)**2 * cos(y)
        end function sin_2_cos
    end module cost_fn

Then change the Python code to use ctypes.c_double instead of ctypes.c_float.
