[lug] filehandles in perl question...

arnie asherman1 at uswest.net
Sat Jul 29 09:47:46 MDT 2000


This might make more sense if I explained exactly what I am trying to do. At work
I support a GIS application in several environments, one of which is an offsite
training center. I frequently get calls about problems that can be traced to
various causes: database troubles, system or hardware issues, or user error. The
application creates a log file each time it starts up, and it also renames the
existing log files, so logfile becomes logfile1, the existing logfile1 becomes
logfile2, etc. Up to 10 log files are kept in the directory at any time.
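
Just to make the scheme concrete, here is a sketch of that rotation in shell. The
file names and the count of 10 are taken from the description above; everything
else (the function name, the directory argument) is my own invention:

```shell
# Sketch of the rotation the application appears to perform: the oldest
# log is dropped, each logfileN shifts to logfileN+1, and the current
# logfile becomes logfile1, so at most 10 logs remain.
rotate_logs() {
    dir=${1:-.}
    rm -f "$dir/logfile9"                  # oldest of the 10 kept logs is discarded
    i=8
    while [ "$i" -ge 1 ]; do
        next=`expr $i + 1`
        [ -f "$dir/logfile$i" ] && mv "$dir/logfile$i" "$dir/logfile$next"
        i=`expr $i - 1`
    done
    [ -f "$dir/logfile" ] && mv "$dir/logfile" "$dir/logfile1"
    return 0                               # succeed even if some files were absent
}
```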

There are currently 3 training rooms with a dozen or so workstations per room. At
times I get calls from the trainer that amount to little more than, "Something is
not working on about half of our workstations." I'd like to build a tool to
retrieve these log files, so I could have it do something like: go to
room two, get the last 3 log files for machines 1 through 6. I planned on catting
the files into a single text file w/ some formatting to separate and identify
them, then retrieve that one text file. My first step was just to see how to get
a single file from one machine, and add the loops to get multiple files after
that. I'm trying to avoid having to retrieve all of the logs by hand.
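
Roughly, the plan above might look like the following sketch. The "roomN-wsM"
hostname scheme and the scp fetch are assumptions on my part (the scp line is
commented out so the concatenation/formatting part stands on its own):

```shell
# Collect logfile1..logfileN from workstations first..last in a room and
# append them to one report, each preceded by a header naming its source.
# Hostnames like "room2-ws3" are an assumed naming scheme.
collect_logs() {
    room=$1; first=$2; last=$3; nlogs=$4; out=$5
    : > "$out"                             # start the report empty
    ws=$first
    while [ "$ws" -le "$last" ]; do
        host="room${room}-ws${ws}"
        n=1
        while [ "$n" -le "$nlogs" ]; do
            f="logfile$n.$host"
            # scp "$host:/path/to/logs/logfile$n" "$f"   # the real fetch step
            if [ -f "$f" ]; then
                echo "===== $host : logfile$n =====" >> "$out"
                cat "$f" >> "$out"
                echo "" >> "$out"          # blank line between logs
            fi
            n=`expr $n + 1`
        done
        ws=`expr $ws + 1`
    done
    return 0
}
```

That would make "room two, machines 1 through 6, last 3 logs" simply
`collect_logs 2 1 6 3 report.txt`.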

BTW, this is all being done on Solaris, not Linux. I often use Linux to prototype
tools at home, and find there is little I need to change when I move them to
work. Unfortunately, I don't have a network set up at home (single machine) that
would allow me to create a similar situation.

Tkil wrote:

> you *are* making sure that you only have one program adding to that
> log file at a time?

You do bring up an interesting point which I had overlooked: the log from the
current session is open for writing by the application. If it is not possible to
read this log file while it is open, I may have to reconsider how this tool will
work. Getting the log from the current session is important, though.
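
For what it's worth, on Unix a file that another process has open for writing can
normally still be read; a reader simply sees whatever has been written so far. A
quick sanity check, nothing here is specific to the application:

```shell
# Show that a file held open for writing can be read at the same time.
f=`mktemp`
exec 3>>"$f"                 # open the file for append on fd 3 and keep it open
echo "first log line" >&3    # the "application" writes a line
cat "$f"                     # a second reader sees it even though fd 3 is open
exec 3>&-                    # close the writer's descriptor
```

The caveat is that the application may buffer its output, so the tail end of the
current log could lag behind what the application thinks it has logged.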

> it might be easier to just transfer a file over
> (maybe using scp or similar) and have a process on the target machine
> do the concatenation in a controlled manner.

This is a good idea; I'll try it.
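
A minimal sketch of that "controlled" concatenation on the receiving machine,
using mkdir as a crude atomic lock so two incoming logs can't interleave (the
lock-directory convention is my assumption, not anything from the application):

```shell
# Append one file to a shared report under a simple lock. mkdir is atomic,
# so only one appender at a time can own "$dest.lock".
append_locked() {
    src=$1; dest=$2; lock="$dest.lock"
    until mkdir "$lock" 2>/dev/null; do
        sleep 1                            # someone else holds the lock; wait
    done
    cat "$src" >> "$dest"
    rmdir "$lock"                          # release the lock
}
```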

> or configure your logging mechanism to spit to a single machine as
> well as to local log.

Unfortunately, I do not have any influence over how the application logs the
activity, I have to live w/ it as it is.

> one way to try to make your current version work would be the
> following, assuming that $t is your telnet object.
>
>     $t->cmd("PS1='DONEWITHTEXT\$'");
>     $t->cmd("stty -echo");
>     $t->print("cat >> remote_log_file.txt");  # print(), not cmd(): cat gives no prompt until EOF
>
>     open LLF, $local_log_file
>       or die "couldn't open local log file \"$local_log_file\" for read: $!";
>     my $old_ors = $t->output_record_separator('');
>     while (<LLF>)
>     {
>         $t->print($_);
>     }
>     close LLF
>       or warn "closing local log file \"$local_log_file\" after read: $!";
>     $t->print("\x04");
>     $t->output_record_separator($old_ors);
>     $t->cmd("stty echo");
>
> but that's totally untested.  hopefully you can see what i'm trying to
> do.  the perldoc for Net::Telnet has a similar example, going the
> other way (retrieving file from remote machine) that is a little
> fancier about dodging shell interpretation.
>
> if you want to transfer files, transfer files, and use a protocol
> designed for this.  failing that, you might get away with rsync, which
> does this sort of thing over rsh or ssh tunnels.
>
> even better, use a logging daemon that is designed for this sort of
> thing, including concurrent access, etc.
>

Thanks for your suggestions. I think the biggest stumbling block might be getting
the log from the current open session. I'll see what I can do next week at work.
--
arnie sherman
frenomulax at bigfoot.com

"I'm only Bob Dylan when I have to be."
  - Bob Dylan
