ColdFusion Execution Accuracy

I found a very old thread from 2004 which reported that the execution times shown in ColdFusion's debugging output were only accurate to 16 ms. In other words, when you turn on debug output and look at an execution time, you are seeing an estimate rounded to a multiple of roughly 16 ms. I still see this today with ACF10: when refreshing a page, the reported times mostly jump between multiples of about 15-16 ms.

Here are the questions:

  • Starting from the bottom: when ColdFusion reports 0 ms or 16 ms, does that always mean the real time was somewhere between 0 and 16 ms, and never more than 16 ms?

  • When ColdFusion reports 32 ms, does that mean somewhere between 17 and 32 ms?

  • ColdFusion lists everything separately by default, rather than as an execution tree in which callers contain the functions they call. When it determines the cost of execution higher up the tree, does it sum the "inaccurate" times of the children, or does it report the real time that all the child calls actually took?

  • Can we use cftimer or getTickCount() to get exact times, or are those also estimates? (A usage sketch of both follows this list.)

  • Sometimes you will see that 3 functions took 4 ms each, for a total of 12 ms, or even a single call that took 7 ms. Why does it sometimes appear "accurate"?
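
For reference, here is how the two approaches from the previous bullet are typically used (a minimal sketch; the label text and the sleep(5) workload are placeholders I chose for illustration):

<!--- cftimer writes its result into the debug output (type="debug") or inline into the page (type="inline") --->
<cftimer label="placeholder workload" type="inline">
    <cfset sleep(5)>
</cftimer>

<!--- getTickCount() returns a millisecond counter you can diff yourself --->
<cfset start = getTickCount()>
<cfset sleep(5)>
<cfoutput>#getTickCount() - start# ms</cfoutput>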

Now I will offer my own guesses, but I need community support!

  • Yes

  • Yes

  • ColdFusion tracks and reports, to an accuracy of about 16 ms, the total time spent in the parent process itself, rather than summing the times of the child processes.

  • cftimer and getTickCount() are more accurate.

  • I have no idea.

2 answers

In Java, you have either System.currentTimeMillis() or System.nanoTime().

I assume getTickCount() simply returns System.currentTimeMillis(); it is also what ColdFusion uses to report debug execution times. You can find numerous StackOverflow questions complaining about the granularity of System.currentTimeMillis(), since its value comes from the operating system. On Windows the granularity can vary quite a bit; some say it can be as coarse as 50 ms. That does not mean it jumps around; it is, however, fast to call. Query times seem to be reported by the JDBC driver, the SQL engine, or some other mechanism, since they are usually accurate.
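
If you want to see this granularity directly, you can busy-wait until the reported clock value changes (a quick probe sketch of my own, not from any documentation); on Windows it typically steps by about 15-16 ms:

<cfset sys = createObject("java", "java.lang.System")>
<cfset t0 = sys.currentTimeMillis()>
<cfset t1 = t0>
<!--- spin until the operating system clock ticks over --->
<cfloop condition="t1 EQ t0">
    <cfset t1 = sys.currentTimeMillis()>
</cfloop>
<cfoutput>clock stepped by #t1 - t0# ms</cfoutput>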

Alternatively, if you really want better precision, you can use this:

<cfset currentTime = createObject("java", "java.lang.System").nanoTime()>

It is lower-level than currentTimeMillis(), but it is precise to the nanosecond. You can divide by 1,000 to get microseconds. You will want to wrap the expression in precisionEvaluate() if you are trying to convert to milliseconds by dividing by 1,000,000.
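
Putting that together, a timing block might look like this (a sketch under my assumptions; nanoTime() returns a Java long, and precisionEvaluate() does the division in BigDecimal arithmetic so it does not lose precision):

<cfset sys = createObject("java", "java.lang.System")>
<cfset startNs = sys.nanoTime()>
<!--- ... the code being timed ... --->
<cfset endNs = sys.nanoTime()>
<!--- divide by 1,000,000 to convert nanoseconds to milliseconds --->
<cfset elapsedMs = precisionEvaluate((endNs - startNs) / 1000000)>
<cfoutput>#elapsedMs# ms</cfoutput>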

Note that nanoTime() is not accurate to the nanosecond; it is merely precise to the nanosecond. Its accuracy is simply an improvement over currentTimeMillis().


This is more of a comment than an answer, but I cannot comment yet.

In my experience, the minimum reported query execution time is 0 ms or 16 ms; it is never 8 ms or 9 ms. For fun, you can try the following:

<cfset s = getTickCount()>
<cfset sleep(5)>
<cfset e = getTickCount() - s>
<cfoutput>#e#</cfoutput>

I tried it with different values; the expected output and the actual output always differ by somewhere between 0 ms and 16 ms, no matter what value is used. ColdFusion (Java) seems to be accurate only to within about 16 ms.
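
To make the pattern obvious, you can extend that snippet into a loop over different sleep durations (a quick sketch of my own); the measured value lags the requested value by anywhere from 0 ms to about 16 ms:

<cfloop from="1" to="20" index="ms">
    <cfset s = getTickCount()>
    <cfset sleep(ms)>
    <cfset measured = getTickCount() - s>
    <cfoutput>requested #ms# ms, measured #measured# ms<br></cfoutput>
</cfloop>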

