[lug] High resolution timers
Ed Hill
ed at eh3.com
Fri Feb 21 22:06:45 MST 2003
On Fri, 2003-02-21 at 20:12, D. Stimits wrote:
> Jeffrey Siegal wrote:
>
> > D. Stimits wrote:
> >
> > > I doubt that any non-RT extension will get you better than 10
> > > milliseconds, even if the code resolves to microseconds. A question that
> > > needs to be asked is not just whether it must resolve a microsecond,
> > > but whether it is allowed to have an error of +/- 50 ms (with a 100 Hz
> > > IRQ rate) on that 1 microsecond tick.
> >
> >
> > Not exactly. Most kernel builds use the CPU tick counter (TSC) to
> > adjust the 100 Hz timer value. It should be accurate to 1 us (or less,
> > but there is no user mode interface for that).
>
> I wasn't aware that gettimeofday() was capable of accuracy sufficient
> to make its microsecond resolution useful. Relatively speaking, if
> gettimeofday() returns with one value and then returns with another, you
> can be certain that the larger return value occurs after the smaller
> return value (monotonic), but the time spread between values is
> probably not useful unless the spread is at least 1 to 10 ms on an
> average system. Is gettimeofday() actually useful as a microsecond
> resolution clock on non-RT x86 Linux? I'd suspect that it would be
> easier to just buy a PCI instrumentation-type card that does RT, rather
> than fiddle with making non-RT Linux do the job. Even the earlier-mentioned URL
> talks about how the scheduler itself can cause a 10 ms error simply by
> scheduling something else temporarily.
A "10ms error"? Hardly, Dan. Go back and read the article again.
The function gettimeofday() is most certainly *precise* to 1us. And if
root doesn't run settimeofday() at some intervening point then it's also
non-decreasing. What the HOWTO:
http://www.ibiblio.org/pub/Linux/docs/HOWTO/mini/other-formats/html_single/IO-Port-Programming.html#s4
says is that the *wait* your user-space code may experience between any
two calls to gettimeofday() can be much larger than the time you'd
expect the code between those two calls to need. This is because all
user-space code is subject to time-sharing and, on a fairly stock
("non-RT") Linux kernel, it may take an arbitrarily long period before
the kernel scheduler makes any particular process active (running)
again.
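To see the two effects separately, here is a quick sketch of my own
(not from the HOWTO): it records the largest gap observed between
consecutive gettimeofday() calls. Back-to-back calls usually land
within a microsecond or two of each other, but whenever the scheduler
preempts the loop you'll see a gap of many milliseconds. That gap is a
property of time-sharing, not of the clock.

#include <stdio.h>
#include <sys/time.h>

int
main( int argc, char **argv ) {
  struct timeval tv, tvo;
  long delta, max_delta = 0;
  long i;

  gettimeofday(&tvo, NULL);
  for (i = 0; i < 10000000; i++) {
    gettimeofday(&tv, NULL);
    /* microseconds elapsed since the previous call */
    delta = (tv.tv_sec - tvo.tv_sec) * 1000000L
          + (tv.tv_usec - tvo.tv_usec);
    if (delta > max_delta)
      max_delta = delta;
    tvo = tv;
  }
  printf("largest gap between calls: %ld us\n", max_delta);
  return 0;
}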
If you'd actually bothered to compile, run, and tinker with the code
that I sent earlier, you'd have a much better appreciation for the
resolution of gettimeofday() and the time required by function calls
such as printf(), etc. For instance, on newer hardware, it is possible
to call gettimeofday() twice in succession and get the same answer. It
is also noticeably easier to accomplish this "feat" on, for instance, a
1.2 GHz Athlon workstation than on a 900 MHz PIII laptop. I've included a
code snippet again so you can get off your ass and go verify it for
yourself.
Ed
===
#include <stdio.h>
#include <sys/time.h>

int
main( int argc, char **argv ) {
  struct timeval tv, tvo;

  gettimeofday(&tvo, NULL);
  while (1) {
    gettimeofday(&tv, NULL);
    /* did two consecutive calls land in the same microsecond? */
    if ( (tv.tv_sec == tvo.tv_sec) &&
         (tv.tv_usec == tvo.tv_usec) )
      printf("same!");
    else
      printf(".");
    tvo = tv;  /* struct assignment copies both fields */
  }
  return 0;  /* never reached; interrupt with Ctrl-C */
}
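Compile it with something like "gcc -Wall -O2 foo.c -o foo" (use
whatever file name you like). The output shows up in bursts because
stdout is buffered, and on a fast machine the dots will be punctuated
by the occasional "same!".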
--
Edward H. Hill III, PhD
Post-Doctoral Researcher | Email: ed at eh3.com, ehill at mines.edu
Division of ESE | URLs: http://www.eh3.com
Colorado School of Mines | http://cesep.mines.edu/people/hill.htm
Golden, CO 80401 | Phones: 303-384-2094, 303-273-3483