Unlike the other answer, I will say no. However, you can get a pretty good guess (not even an estimate, since that word is not really appropriate here). That is probably what is meant by "often".
The thing is, you never know the constant factors. Big-O describes asymptotic behaviour (at infinity), so it is very useful to drop everything except the fastest-growing term. Therefore, mathematically, you cannot confirm your assumption from measurements.
First, there are many algorithms and use cases where the asymptotic behaviour is of no use in real applications, simply because the "typical use" input sizes fall short of the asymptotic regime. That is the more common case, and there you can still check / "estimate" the behaviour.
But there is also the case where the asymptotically best algorithm has constant factors so large that it is not usable on contemporary systems. The best example I know of is large-number multiplication algorithms.
Nevertheless, there are systems that "estimate" (better said: guess) the complexity class of an algorithm. I am not sure whether they measure actual runs or guess from code analysis, but they can do it: https://codility.com/public-report-detail/ .
What can be done is to run the algorithm over a range of input sizes, collect the timings, and fit the data to a model. That part is pretty straightforward. Then you can say that, for the tested input range, the algorithm behaves like O(class(n)). (This can have practical value, sometimes even more than the theoretical asymptotic complexity.)
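As a minimal sketch of this measure-and-fit idea (in Python, with a hypothetical dummy workload standing in for the real algorithm, and a simple log-log linear fit that only makes sense for polynomial-looking behaviour):

```python
# A minimal sketch of the measure-and-fit idea, assuming Python and numpy.
# `algorithm` is a placeholder workload (roughly O(n^2)) so the script is
# self-contained; swap in the code you actually want to test.
import time
import numpy as np

def algorithm(n):
    total = 0
    for i in range(n):        # dummy nested loop, ~n^2 work
        for j in range(n):
            total += i * j
    return total

sizes = [100, 200, 400, 800, 1600]
times = []
for n in sizes:
    start = time.perf_counter()
    algorithm(n)
    times.append(time.perf_counter() - start)

# Fit log(time) = k*log(n) + c; the slope k estimates the exponent, i.e.
# over this input range the data "behaves like" O(n^k).
k, c = np.polyfit(np.log(sizes), np.log(times), 1)
print("sizes:", sizes)
print("times:", ["%.4f s" % t for t in times])
print("estimated exponent k: %.2f (expect roughly 2 here)" % k)
```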
Note that selecting the test points is not trivial. Basically, if your algorithm behaves "fast", you need to bump the input size up to the next class. For example, with something like 100n + n!, going to n = {1, 10, 100} will expose the factorial because it kills the runtime, but sticking to n = {1, 2, 3, 4, 5, 6, 7} will not catch the n! part (OK, 7! is 5040, but telling that apart from n^2 over such a small range is much harder). A synthetic illustration follows below.
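To make the test-point problem concrete with synthetic numbers (no timing involved, just the 100n + n! cost function from the example above):

```python
# Synthetic illustration of the test-point problem: the cost function
# 100*n + n! looks like a modest polynomial over n = 1..7, so a naive
# log-log fit on that range completely misses the factorial term.
import math
import numpy as np

def cost(n):
    return 100 * n + math.factorial(n)

small_n = [1, 2, 3, 4, 5, 6, 7]
costs = [cost(n) for n in small_n]

slope, _ = np.polyfit(np.log(small_n), np.log(costs), 1)
print("apparent exponent over n=1..7: %.2f" % slope)   # ~1.7, looks polynomial

# One step into the next size class makes the factorial unmistakable:
print("cost(20) = %e" % cost(20))   # ~2.4e18 -- dominated entirely by 20!
```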
Bottom line: getting a decent guess is of course possible, but beyond the simplest cases it can be hard to do, and, unfortunately, it is also hard to tell whether your situation is one of the hard ones or not.
Also, this discussion is still purely theoretical and skips hardware effects. I have heard of cases where an O(n^2) algorithm behaved better than an O(n log n) one because the former was (very) cache-friendly, but don't take my word for it, I cannot remember the source.