The Fortran code uses single-precision floating-point values (i.e., 32-bit floats). (In C, float is single precision and double is double precision.) The default method used by scipy.optimize.minimize() uses finite differences to approximate the derivatives of the function. That is, to estimate the derivative f'(x0), it computes (f(x0 + eps) - f(x0))/eps, where eps is the step size. The default step size it uses for the finite differences is approximately 1.49e-08. Unfortunately, this value is smaller than the spacing between single-precision values around 1. So when the minimizer adds eps to 1, the result is still 1. That means the function is evaluated at the same point, and the finite difference is 0. A zero derivative is the condition for a minimum, so the solver decides it is done.
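You can see the effect directly with NumPy's 32-bit floats:

    import numpy as np

    x0 = np.float32(1.0)
    eps = np.float32(1.49e-8)   # scipy's default finite-difference step

    print(x0 + eps == x0)   # True: eps is below the float32 resolution at 1.0
    print(np.spacing(x0))   # 1.1920929e-07, the gap to the next float32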
The eps solver option sets the size of the finite difference step. Set it to something larger than 1.19e-7. For example, if I use options={'disp': True, 'eps': 2e-6}, I get a solution.
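For illustration, here is a self-contained sketch of that call; the float32 casts stand in for the single-precision Fortran objective, and the starting point is made up:

    import numpy as np
    from scipy.optimize import minimize

    # Placeholder for the ctypes-wrapped objective: casting the inputs to
    # float32 reproduces the single-precision behavior of the Fortran code.
    def cost(v):
        x, y = np.float32(v[0]), np.float32(v[1])
        return float(np.sin(x)**2 * np.cos(y))

    # eps=2e-6 is comfortably above the float32 spacing near 1, so the
    # finite difference no longer evaluates the function at the same point.
    res = minimize(cost, x0=[1.0, 1.0], options={'disp': True, 'eps': 2e-6})
    print(res.x, res.fun)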
By the way, you can find this value, 1.19e-7, using numpy.finfo():

    In [4]: np.finfo(np.float32(1.0)).eps
    Out[4]: 1.1920929e-07
You will also get a solution if you use method='nelder-mead' in the minimize() call. That method does not rely on finite differences.
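Reusing the float32 cost sketch from above, that would look like:

    # Nelder-Mead never takes a finite-difference step, so the tiny
    # float32 spacing around 1.0 is harmless here.
    res = minimize(cost, x0=[1.0, 1.0], method='nelder-mead')
    print(res.x, res.fun)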
Finally, you can convert the Fortran code to use double precision:
    module cost_fn
        use iso_c_binding, only: c_double
        implicit none
    contains
        function sin_2_cos(x, y) bind(c)
            real(c_double) :: x, y, sin_2_cos
            sin_2_cos = sin(x)**2 * cos(y)
        end function sin_2_cos
    end module cost_fn
Then change the Python code to use ctypes.c_double instead of ctypes.c_float.
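A sketch of the matching Python side, assuming the compiled library is named cost_fn.so (your library name and starting point may differ):

    import ctypes
    from scipy.optimize import minimize

    lib = ctypes.CDLL('./cost_fn.so')   # assumed library name
    # The Fortran arguments have no 'value' attribute, so they are passed
    # by reference; the C signature is double sin_2_cos(double *x, double *y).
    lib.sin_2_cos.restype = ctypes.c_double
    lib.sin_2_cos.argtypes = [ctypes.POINTER(ctypes.c_double),
                              ctypes.POINTER(ctypes.c_double)]

    def cost(v):
        x = ctypes.c_double(v[0])
        y = ctypes.c_double(v[1])
        return lib.sin_2_cos(ctypes.byref(x), ctypes.byref(y))

    # With double precision, the default eps is now fine.
    res = minimize(cost, x0=[1.0, 1.0])
    print(res.x, res.fun)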