How to find all zeros of a function using numpy (and scipy)?

Suppose I have a function f(x) defined between a and b . This function can have many zeros, but also many asymptotes. I need to find all the zeros of this function. What is the best way to do this?

My current strategy is this:

  • I evaluate my function at a given number of points
  • I look for a change of sign between consecutive points
  • I find the zero between the points where the sign changes
  • I check whether the value found is really a zero, or an asymptote

    U = numpy.linspace(a, b, 100) # evaluate function at 100 different points
    c = f(U)
    s = numpy.sign(c)
    for i in range(100-1):
        if s[i] + s[i+1] == 0: # opposite signs
            u = scipy.optimize.brentq(f, U[i], U[i+1])
            z = f(u)
            if numpy.isnan(z) or abs(z) > 1e-3:
                continue
            print('found zero at {}'.format(u))

This algorithm works, but I see two potential problems:

  • It will not detect a zero where the function touches the x axis without crossing it (for example, a function like f(x) = x**2 ). However, I do not think this can happen with the function I am evaluating.
  • If the sampling points are too far apart, there may be more than one zero between them, and the algorithm may miss some of them.

Do you have a better (but still efficient) strategy for finding all the zeros of a function?


I do not think this is important for the question, but for the curious: I am dealing with the characteristic equations of wave propagation in an optical fiber. The function looks like this (where V and ell are defined earlier, and ell is a positive integer):

    def f(u):
        w = numpy.sqrt(V**2 - u**2)
        jl = scipy.special.jn(ell, u)
        jl1 = scipy.special.jn(ell-1, u)
        kl = scipy.special.kn(ell, w)
        kl1 = scipy.special.kn(ell-1, w)
        return jl / (u*jl1) + kl / (w*kl1)
2 answers

The main problem I see here is whether you can find all the roots - as mentioned in the comments, this is not always possible. If you are sure your function is not completely pathological ( sin(1/x) has already been mentioned), the next question is how tolerant you are of missing a root, or several of them. In other words, it comes down to how far you are willing to go to make sure you do not miss a single one - as far as I know, there is no general method that will isolate all the roots for you, so you have to do it yourself. What you show is already a reasonable first step. A few comments:

  • Brent's method is indeed a good choice.
  • First of all, deal with the divergences (the asymptotes). Since your function has Bessel functions in the denominators, you can start by finding their roots - it is better to look them up, for example in Abramowitz and Stegun (MathWorld link). This will serve you better than the uniform grid you are using now.
  • Once you have found two roots or divergences, x_1 and x_2 , repeat the search in the interval [x_1+epsilon, x_2-epsilon] . Continue until no more roots are found (Brent's method is guaranteed to converge to a root if one is bracketed). See the sketch after this list.
  • If you cannot enumerate all the divergences, you can be a little more careful when checking whether a candidate really is a divergence: for a given x , do not just check that f(x) is large; check, for example, that |f(x-epsilon/2)| > |f(x-epsilon)| for several values of epsilon (1e-8, 1e-9, 1e-10, something like that).
  • If you want to make sure you do not miss roots that only touch zero, find the extrema of the function and, for each extremum x_e , check the value of f(x_e) .
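Here is a minimal sketch of the interval-splitting and divergence-check ideas above, assuming f , a and b are defined as in the question; the helper names ( find_roots , is_divergence ) and the thresholds are my own choices, not part of any library:

    import numpy as np
    from scipy.optimize import brentq

    def is_divergence(f, x, epsilons=(1e-8, 1e-9, 1e-10)):
        # Heuristic: near an asymptote |f| keeps growing as we approach x
        return all(abs(f(x - eps/2)) > abs(f(x - eps)) for eps in epsilons)

    def find_roots(f, a, b, n=100, eps=1e-9):
        if b - a < 10*eps:
            return []
        roots = []
        U = np.linspace(a, b, n)
        s = np.sign(f(U))
        for left, right, s1, s2 in zip(U[:-1], U[1:], s[:-1], s[1:]):
            if s1 + s2 == 0:                  # sign change in [left, right]
                x = brentq(f, left, right)
                if not is_divergence(f, x):
                    roots.append(x)
                # keep searching on either side of the point just found
                roots += find_roots(f, left + eps, x - eps, n)
                roots += find_roots(f, x + eps, right - eps, n)
        return sorted(roots)

The is_divergence check mirrors the heuristic in the list above; for your particular f you may do better by excluding the known Bessel roots explicitly.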

Why limit yourself to numpy? Scipy has a package that does exactly what you want:

http://docs.scipy.org/doc/scipy/reference/optimize.nonlin.html
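For example (a minimal sketch, not from the original answer; the function g and the bracket are placeholders), the bracketed Brent solver in scipy.optimize can be called directly:

    import numpy as np
    from scipy import optimize

    def g(x):
        return np.cos(x) - x

    root = optimize.brentq(g, 0.0, 2.0)   # g changes sign on [0, 2]
    print(root)                           # about 0.739, where cos(x) = x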

One lesson I learned: numerical programming is difficult, so don't do this :)


In any case, if you are set on building the algorithm yourself, the scipy doc page I linked (which took forever to load, btw) gives you a list of algorithms to start with. One method I have used before is to discretize the function as finely as your problem requires. (That is, tune \delta x so that it is much smaller than the characteristic scale in your problem.) This lets you look for features of the function (for example, sign changes). And you can easily compute the derivative of each segment (rise over run, as you may remember from school), so your discretized function has a well-defined first derivative. Since you set dx much smaller than the typical scale, you are guaranteed not to miss any features of the function that matter for your problem. A sketch of this idea follows below.
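A minimal sketch of that idea, assuming f , a and b are already defined; the step dx and the slope cutoff below are arbitrary placeholder values:

    import numpy as np

    dx = 1e-3                      # assumed to be << the characteristic scale
    x = np.arange(a, b, dx)
    y = f(x)

    # finite-difference first derivative of each segment (rise over run)
    dydx = np.diff(y) / dx

    # segments where the sign of f changes
    sign_changes = np.where(np.sign(y[:-1]) * np.sign(y[1:]) < 0)[0]

    for i in sign_changes:
        # a very steep segment is more likely an asymptote than a zero crossing;
        # the 1e3 cutoff is an arbitrary assumption, tune it for your function
        if abs(dydx[i]) < 1e3:
            print('zero near x = {}'.format(0.5 * (x[i] + x[i+1])))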

If you are wondering what "characteristic size" means, look for a parameter of your function with units of length or 1/length. That is, for some function f(x), suppose x has units of length and f is dimensionless. Then look for the things that multiply x. For example, if you want to sample cos(\pi x) and x has units of length, the parameter multiplying x must have units of 1/length. So the characteristic size of cos(\pi x) is 1/\pi. If you make your sampling step much smaller than that, you will not have any problems. Of course, this trick will not always work, so you may need to fiddle a little.
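As a tiny illustration (my own example, not from the answer): for cos(\pi x) the characteristic size is 1/\pi, roughly 0.32, so a step of 0.01 is comfortably small and should catch every zero crossing:

    import numpy as np

    dx = 0.01                        # much smaller than 1/pi
    x = np.arange(0.0, 10.0, dx)
    y = np.cos(np.pi * x)

    crossings = np.where(np.sign(y[:-1]) * np.sign(y[1:]) < 0)[0]
    print(len(crossings))            # cos(pi*x) has 10 zeros in [0, 10)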

