[lug] ssh compression
D. Stimits
stimits at idcomm.com
Wed May 1 13:39:28 MDT 2002
Paul Walmsley wrote:
>
> On Wed, 1 May 2002, D. Stimits wrote:
>
> > "Sexton, George" wrote:
> > >
> > > The best solution I have found for things like this is to write a cron job
> > > that bzips the data, pipes the output through gpg, and then uses FTP to move
> > > the encrypted data. You could use ncftpget at the destination site to
> > > retrieve the file. I move a client's SQL database to my site once a week
> > > this way. 2.5GB compresses down to 350MB. I also do it early in the morning,
> > > so it doesn't use bandwidth during production hours.
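A rough sketch of that kind of cron job might look like the following; the
database name, paths, hostnames, and gpg recipient are only placeholders,
not George's actual setup:

    # On the source machine: dump, compress, and encrypt in one pipeline
    pg_dump clientdb | nice -n 19 bzip2 -9 \
        | gpg --encrypt --recipient backups@example.com \
        > /var/ftp/outgoing/clientdb.sql.bz2.gpg

    # At the destination: pull the file over FTP and unpack it
    ncftpget ftp://source.example.com/outgoing/clientdb.sql.bz2.gpg
    gpg --decrypt clientdb.sql.bz2.gpg | bunzip2 > clientdb.sql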
> >
> > One advantage here is that bzip2 gives better compression, and any ssh
> > tunneling should then have its own compression turned off. bzip2 with -9
> > (or even just -7) compresses very well, and a cron job can run it at a
> > nice level to cut down how much cpu it uses, whereas you might run into
> > problems if you drop ssh itself to a very low priority. Separating
> > compression from transmission is appealing for batch processes.
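To make that concrete: with the data already bzip2-compressed, the copy
itself can go over ssh with compression explicitly disabled (the hostname
and path here are just placeholders):

    scp -o Compression=no clientdb.sql.bz2.gpg user@dest.example.com:/backup/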
>
> the added compression from bzip2 comes at a price, though: it is much,
> much slower than gzip.
This is true at higher compression levels, usually no matter which
compression routine is used. That is why I suggested renicing the job to a
lower priority, and why this approach suits batch processes (versus
realtime). Maximum or high-compression bzip2 is not a good idea for
realtime use, but it is fairly harmless if it runs at a low nice level, or
if the file is not large. But I would guess that cpu time is far less
expensive than a saturated or incapacitated network.
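As a concrete illustration (the path and PID below are made up):

    # Start the compression job at the lowest scheduling priority
    nice -n 19 bzip2 -9 /var/spool/backup/clientdb.sql
    # Or lower the priority of a compression job that is already running
    renice 19 -p 12345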
D. Stimits, stimits at idcomm.com