I am trying to understand Big-O notation. Sorry if I am asking something too obvious, but I can't seem to wrap my head around this.
I have the following C# function that I am trying to work out the Big-O notation for.
int i, j;
for (i = 2; i < 100; i++)
{
    for (j = 2; j <= (i / j); j++)
        if ((i % j) == 0)
            break;            // if factor found, not prime

    if (j > (i / j))
        Console.WriteLine("{0} is prime", i);
}
Now, from what I have gathered so far, I believe that both if statements count as constant time, O(1), and are not taken into account when working out the complexity of this algorithm? And, if I have understood correctly, a single for loop
for(i = 0; i < 100; i++)
is O(n), since the number of iterations grows linearly with the bound, while a nested loop whose inner bound does not depend on the outer loop variable,
for (i = 0; i < 100; i++)
    for (j = 0; j < 100; j++)
is O(n^2)? But how do I work out the complexity of a function like the one at the top, where the inner loop depends on the outer loop's variable, so the growth is not a simple linear or quadratic pattern?
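In case it is relevant, here is a little test harness I put together (just my own sketch, the bounds and names are mine) that counts how many times the inner loop body actually runs for a few different upper limits, so I can compare the growth against n, n log n and n^2. I am just not sure how to go from those counts to a proper Big-O class.

using System;

class IterationCounter
{
    static void Main()
    {
        // Count the inner-loop iterations for several upper bounds so the
        // growth rate can be compared against n, n*log(n), n^2, etc.
        foreach (int n in new[] { 100, 1000, 10000, 100000 })
        {
            long count = 0;
            for (int i = 2; i < n; i++)
            {
                for (int j = 2; j <= (i / j); j++)
                {
                    count++;       // one unit of "work" per inner iteration
                    if ((i % j) == 0)
                        break;     // factor found, not prime
                }
            }
            Console.WriteLine("n = {0,7}: inner-loop iterations = {1}", n, count);
        }
    }
}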

I found a definition of linearithmic that says
A linearithmic algorithm scales well to huge problems. Whenever N doubles, the running time more (but not much more) than doubles.
That seems like a reasonable description of how this piece of code behaves, which would suggest it is O(N log N). If so, how would I go about deriving that?
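My own (possibly wrong) attempt at setting up the count: the inner-loop condition j <= (i / j) is the same as j * j <= i, i.e. j <= sqrt(i), so in the worst case (when i is prime and we never break) the inner loop runs roughly sqrt(i) times, which would make the total work something like

sum for i = 2 to N of sqrt(i)

but I don't know how to reduce that sum to a Big-O class, or whether it really matches the N log N description above.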
c# algorithm big-o
Karl-Henrik