EDIT: mra68's answer made me realize that for data with some noise (which is much more realistic) my original answer does not work well, so I updated it with a potential solution using the nls() function.
Original answer
As stated by Adam, I believe this can be solved by taking the logarithm of your variables and then fitting a linear model.
Your hypothesis is that y = time^n, and you want to estimate n. First take the logarithm of both sides: log(y) = log(time^n).
Then apply the logarithmic rule log(x^n) = n * log(x), which gives log(y) = n * log(time), a linear equation in log(time).
So, to be more specific:
# Simulate some data
time = 1:100
y = time^25

# Plot the log of both variables
plot(log(time), log(y))

# Fit the linear model
fit = lm(log(y) ~ log(time))
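The estimated exponent is then simply the slope of this fit:

# The slope of the log-log fit is the estimate of the exponent n
coef(fit)[2]
# With the noiseless data above this recovers 25 (up to floating point)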

Updated answer
When some noise is introduced, the above solution gives lower estimates than expected. The nls() function seems to overcome this problem somewhat, as shown below.
# Simulate some data with noise added
set.seed(10021)
time = 1:100
y2 = (time + rnorm(100, sd = 2))^25

# Plot both non-transformed and log-transformed data
par(mfrow = c(1, 2))
plot(time, y2)
plot(log(time), log(y2))
lines(log(time), log(time^25), col = "red")  # line when there is no noise

The graph on the right shows that for lower values of time the logarithmic transformation leads to large deviations from the underlying (in our case, known) model. This is further illustrated by the result of the lm() fit:
# Fit the model using the log-transformed variables
fit_lm = lm(log(y2) ~ log(time))
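Checking the slope confirms the downward bias. As a rough illustration of why (using the sd = 2 noise level from the simulation above): an additive perturbation of size 2 shifts log(time) by log(1 + 2/time), which is largest for small time:

# Slope of the log-log fit: noticeably below the true exponent of 25
coef(fit_lm)[2]

# A perturbation of +2 shifts log(time) by log(1 + 2/time):
# roughly 1.10 at time = 1 but only about 0.02 at time = 100
log1p(2 / c(1, 100))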

Using nls() seems to improve the estimation of the exponent.
# Fit using nls
fit_nls = nls(y2 ~ (time^b), start = c(b = 24), trace = T)

# The coefficient is much closer to the known value
coef(fit_nls)
#        b
# 25.04061

# Plot of the data and the two estimates
plot(time, y2)
lines(time, time^coef(fit_nls), col = "red")
lines(time, time^coef(fit_lm)[2], col = "green3")
legend("topleft", c("fit_lm", "fit_nls"), lwd = 2, col = c("green3", "red"))
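As a quick sanity check (a sketch reusing fit_lm and fit_nls from above): nls() minimizes the residual sum of squares on the original scale, while the log-log lm() minimizes it on the log scale, so comparing the two fits on the untransformed data should favour nls():

# Residual sums of squares on the original (untransformed) scale
rss_lm = sum((y2 - time^coef(fit_lm)[2])^2)
rss_nls = sum(resid(fit_nls)^2)
c(lm = rss_lm, nls = rss_nls)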
