[lug] High resolution timers

D. Stimits stimits at attbi.com
Fri Feb 21 20:12:33 MST 2003


Jeffrey Siegal wrote:

> D. Stimits wrote:
>
> > I doubt that any non-RT extension will get you better than 10 milli
> > seconds, even if the code resolves to micro seconds. A question that
> > needs to be asked is not just whether it must resolve a micro second,
> > but whether it is allowed to have an error of +/- 50 ms (with a 100 Hz
> > IRQ rate) on that 1 micro second tick.
>
>
> Not exactly.  Most kernel builds use the CPU tick counter (TSC) to
> adjust the 100 Hz timer value.  It should be accurate to 1 us (or less,
> but there is no user mode interface for that).

I wasn't aware that gettimeofday() was capable of an accuracy that makes 
its microsecond resolution useful. Relatively speaking, if 
gettimeofday() returns with one value and then returns with another, you 
can be certain that the larger return value occurred after the smaller 
one (continuous), but the time spread between the two values is 
probably not meaningful unless the spread is at least 1 to 10 ms on an 
average system. Is gettimeofday() actually useful as a microsecond 
resolution clock on non-RT x86 linux? I'd suspect that it would be 
easier to just buy a PCI instrumentation type card that does RT, rather 
than fiddle with making non-RT linux do it. Even the earlier mentioned URL 
talks about how the scheduler itself can cause a 10 ms error simply by 
scheduling something else temporarily.

D. Stimits, stimits AT attbi DOT com




More information about the LUG mailing list