I am not sure what the problem is.
The autocorrelation of a vector x at lag 0 should be exactly 1, since it is just the squared L2 norm divided by itself, i.e. dot(x, x) / dot(x, x) == 1.
More generally, for any lags i, j in Z with i != j, the normalized autocorrelation is dot(shift(x, i), shift(x, j)) / dot(x, x), where shift(y, n) is a function that shifts the vector y by n time points, and Z is the set of integers because we are talking about an implementation (in theory, lags can range over the real numbers).
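As a minimal sketch of that formula, here is a hypothetical helper `autocorr_at_lag` (not from the original post) that computes the normalized autocorrelation at a single integer lag, using a truncating shift so the vectors stay the same length:

```python
import numpy as np

def autocorr_at_lag(x, k):
    """Normalized autocorrelation of x at integer lag k (truncating shift)."""
    n = len(x)
    k = abs(k)
    if k >= n:
        return 0.0  # no overlap left after shifting
    # dot(shift(x, 0), shift(x, k)) / dot(x, x): overlap of x with a shifted copy
    return np.dot(x[:n - k], x[k:]) / np.dot(x, x)

x = np.random.randn(1000)
print(autocorr_at_lag(x, 0))  # exactly 1.0 at lag 0
```

At lag 0 the numerator and denominator are the same dot product, so the result is exactly 1.0; at nonzero lags the magnitude is strictly below 1 by the Cauchy-Schwarz inequality.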
I get 1.0 as the maximum with the following code (started from the command prompt as $ ipython --pylab), as expected:
    In [1]: n = 1000
    In [2]: x = randn(n)
    In [3]: xc = correlate(x, x, mode='full')
    In [4]: xc /= xc[xc.argmax()]
    In [5]: xchalf = xc[xc.size // 2:]
    In [6]: xchalf_max = xchalf.max()
    In [7]: print(xchalf_max)
    1.0
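For reference, here is the same check as a plain self-contained script, without the `--pylab` namespace (this just restates the session above with explicit `numpy` imports):

```python
import numpy as np

n = 1000
x = np.random.randn(n)

# Full cross-correlation of x with itself; the peak sits at zero lag.
xc = np.correlate(x, x, mode='full')
xc /= xc[xc.argmax()]       # normalize so the peak equals 1

xchalf = xc[xc.size // 2:]  # keep the non-negative lags (integer division)
print(xchalf.max())         # 1.0
```

Note that `mode='full'` returns 2n - 1 values, so `xc.size // 2` lands exactly on the zero-lag element.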
The only case where the lag-0 autocorrelation does not equal 1 is when x is the zero signal (all zeros), where the ratio becomes 0/0 and is undefined.
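To make the zero-signal edge case concrete, this short sketch shows that the normalization produces nan rather than 1 (the `np.errstate` context only suppresses the 0/0 warning):

```python
import numpy as np

z = np.zeros(8)
denom = np.dot(z, z)            # 0.0 for the all-zero signal
with np.errstate(invalid='ignore'):
    r0 = np.dot(z, z) / denom   # 0/0 -> nan, not 1
print(r0)
```

In practice this means code that normalizes by dot(x, x) should guard against an all-zero (or near-zero) input first.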
The answer to your question: no, there is no NumPy function that performs this normalization for you automatically.
Besides, even if there were, you would still have to check it against the expected result, and if you can say, "Yes, it performed the normalization correctly," then I would assume you already know how to implement it yourself.
My guess is that you may have implemented your algorithm incorrectly, although I cannot be sure, since I am not familiar with it.