Measure it. Run the operation with linearly increasing input sizes and plot the times on a chart. Remember to warm up the JVM (several runs) to get reliable benchmark results.
Whether the operation is linear O(n), quadratic O(n^2), polynomial, or exponential should then be obvious.
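A minimal sketch of such a measurement in Java (the class name, the operationUnderTest placeholder, and the chosen input sizes are my own illustration, not something from the original answer; a serious benchmark would more likely use a harness such as JMH, but the idea is the same): warm up first, then time linearly growing inputs and print pairs you can plot.

    import java.util.Arrays;
    import java.util.Random;

    public class TimingSketch {

        // Placeholder for the operation you actually want to measure.
        static void operationUnderTest(int[] data) {
            Arrays.sort(data.clone()); // stand-in workload, roughly O(n log n)
        }

        // Times a single run on a freshly generated input of size n.
        static long timeNanos(int n) {
            int[] data = new Random(42).ints(n).toArray();
            long start = System.nanoTime();
            operationUnderTest(data);
            return System.nanoTime() - start;
        }

        public static void main(String[] args) {
            // Warm-up runs so the JIT compiles the hot paths before we measure.
            for (int i = 0; i < 10; i++) {
                timeNanos(100_000);
            }
            // Linearly increasing sizes; print "n <tab> nanoseconds" pairs to plot.
            for (int n = 100_000; n <= 1_000_000; n += 100_000) {
                System.out.printf("%d\t%d%n", n, timeNanos(n));
            }
        }
    }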
EDIT: While you can give theoretical estimates for algorithms, they may not be very useful in practice. First of all, complexity does not give you the constant factor. Some linear or sub-quadratic algorithms are simply unusable because they eat so much time and resources that they are not adequate for the problem at hand (e.g. Coppersmith-Winograd matrix multiplication). Then your computation may have all kinds of quirks that you can only detect by experiment. There are preparatory algorithms that do nothing to solve the problem itself but only speed up the real solver (matrix conditioning). There are suboptimal implementations. At larger input sizes, your speed may drop sharply (cache misses, memory being moved around, etc.). So for practical purposes, I advise experimentation.
It is best to double the length of the input each time and compare the times. And yes, that way you can even tell whether an algorithm has n^1.5 or n^1.8 complexity. Quadruple the input length and an n^1.5 algorithm needs only half the time a quadratic one would (4^1.5 = 8 versus 4^2 = 16). The gap between n^1.8 and n^2 is smaller, so you need a bigger step: multiply the length by 256 and the two differ by roughly a factor of three (256^0.2 ≈ 3).
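A sketch of that doubling check (the work method is a deliberately quadratic dummy I made up for illustration; substitute the algorithm you want to classify): if the running time grows roughly like c*n^k, then doubling n multiplies the time by about 2^k, so the base-2 logarithm of the ratio of successive timings estimates the exponent k.

    public class ExponentEstimate {

        // Deliberately quadratic placeholder; replace with the algorithm to classify.
        static long work(int n) {
            long sum = 0;
            for (int i = 0; i < n; i++) {
                for (int j = 0; j < n; j++) {
                    sum += (long) i * j;
                }
            }
            return sum; // returning the result keeps the JIT from eliminating the loops
        }

        static long timeNanos(int n) {
            long start = System.nanoTime();
            long result = work(n);
            long elapsed = System.nanoTime() - start;
            if (result == 42) System.out.println(); // never true, just consumes the value
            return elapsed;
        }

        public static void main(String[] args) {
            timeNanos(2_000); // warm-up
            long previous = timeNanos(1_000);
            // Double n each step: if t(n) ~ n^k, then t(2n)/t(n) is about 2^k,
            // so log2 of the ratio of successive timings estimates the exponent k.
            for (int n = 2_000; n <= 16_000; n *= 2) {
                long current = timeNanos(n);
                double k = Math.log((double) current / previous) / Math.log(2);
                System.out.printf("n=%d  time=%.1fms  exponent~%.2f%n", n, current / 1e6, k);
                previous = current;
            }
        }
    }

For this quadratic dummy the printed exponent should settle near 2; an n^1.5 algorithm would settle near 1.5.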
Thorsten S.