I'm just reading this book for fun; this is not homework.
However, I'm already confused by the very first problem:
1-1 Runtime Comparison
For each function f(n) and time t in the following table, determine the largest size n of a problem that can be solved in time t, assuming that the algorithm takes f(n) microseconds to solve the problem.
What does this mean?
The table lists several times along one axis (1 second, 1 minute, 1 hour, etc.), and different functions f(n) along the other, such as lg n, sqrt(n), n, and so on.
I am not sure how to fill in this matrix, because I cannot parse the question. If f(n) = lg n, it asks for the largest n that can be solved in, say, 1 second, but what does "f(n) = lg n microseconds" have to do with solving the problem? I don't even know how to set up an equation or relation for this, because I cannot piece together what the question is asking.
What I'm stuck on is the phrase "assuming that the algorithm takes f(n) microseconds to solve the problem," because I don't know what it refers to. Does it mean that an algorithm solving a problem of size n takes f(n) microseconds? So if I call f(100), it takes lg 100 microseconds? Then I need to find the n for which f(n) = lg n microseconds = 1 second?
Does this mean that lg n microseconds = 1 second, that is, lg n microseconds = 10^6 microseconds, so n = 2^(10^6)?
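If that reading is right, here is a minimal Python sketch of how I would check my answer (this is just my own illustration, assuming a 1-second budget of 10^6 microseconds and a few of the functions from the table):

```python
# One second expressed in the problem's time unit (microseconds).
budget = 10**6  # 1 second = 10^6 microseconds

# f(n) = lg n: the largest solvable n satisfies lg n <= budget,
# so n = 2**budget (a number with about a million bits).
largest_for_lg = 2**budget
print(largest_for_lg.bit_length() - 1)  # prints 10**6, i.e. lg n == budget

# f(n) = sqrt(n): sqrt(n) <= budget implies n <= budget**2 = 10**12.
print(budget**2)

# f(n) = n: trivially n <= budget = 10**6.
print(budget)
```

So, if I understand correctly, each cell of the table is just the solution of f(n) = t, with t converted to microseconds, rounded down to an integer?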