
Subject: Re: Proposal: Compression of large packets: fragmentation
From: ewheeler@kaico.com
Date: Sun, 16 Dec 2001 21:25:22 +0100
In-reply-to: <Pine.LNX.4.31.0112150143570.18082-100000@josefsbox.cjb.net>

When we are talking about time to compress, we also need to keep
latency in mind.  A slow compression algorithm adds to the per-packet
transfer delay; I don't know whether that cost would land in the range
of microseconds or milliseconds.  I wonder how difficult it would be to
implement several compression algorithms (bzip2, gzip, lzo) side by
side; a rough timing sketch follows below.
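
As a rough way to answer the microseconds-vs-milliseconds question,
here is a minimal, self-contained timing sketch (my own illustration,
not CIPE code) that compresses one 1500-byte packet with zlib's
compress2() and prints the elapsed time:

    /* Time zlib compression of a single 1500-byte packet.
     * Illustration only, not CIPE code.  Build with: cc t.c -lz */
    #include <stdio.h>
    #include <string.h>
    #include <sys/time.h>
    #include <zlib.h>

    int main(void)
    {
        unsigned char pkt[1500], out[2000];  /* 2000 > worst case for 1500 in */
        uLongf out_len = sizeof(out);
        struct timeval t0, t1;
        long usec;

        memset(pkt, 'A', sizeof(pkt));       /* stand-in packet payload */

        gettimeofday(&t0, NULL);
        if (compress2(out, &out_len, pkt, sizeof(pkt),
                      Z_DEFAULT_COMPRESSION) != Z_OK)
            return 1;
        gettimeofday(&t1, NULL);

        usec = (t1.tv_sec - t0.tv_sec) * 1000000L
             + (t1.tv_usec - t0.tv_usec);
        printf("%u -> %lu bytes in %ld us\n",
               (unsigned)sizeof(pkt), (unsigned long)out_len, usec);
        return 0;
    }

Swapping lzo1x_1_compress() or BZ2_bzBuffToBuffCompress() in behind the
same buffer-to-buffer shape would give a first-order comparison of all
three candidates.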

--Eric

On Sat, 15 Dec 2001, Josef Drexler wrote:

> On Fri, 14 Dec 2001, Bryan-TheBS-Smith wrote:
> 
> > Josef Drexler wrote:
> > > This could be done by using zlib's compress() function, or by using
> > > a dictionary that we reset for each packet -- this might give better
> > > compression but needs coordination between both cipe peers.
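
A minimal sketch of that per-packet reset idea, assuming zlib's deflate
API; the preset dictionary here is only a placeholder that both cipe
peers would have to agree on, and the caller is assumed to have set the
stream up once with deflateInit():

    #include <zlib.h>

    /* Placeholder dictionary; real savings depend on choosing bytes
     * that actually occur in the traffic.  Assumption, not CIPE code. */
    static const Bytef dict[] = "common packet prefixes go here";

    /* Compress one packet with a fresh, dictionary-primed state.
     * Returns the compressed length, or -1 on error / no fit. */
    int compress_packet(z_stream *zs, const Bytef *pkt, uInt len,
                        Bytef *out, uInt out_cap)
    {
        deflateReset(zs);                     /* discard previous state */
        deflateSetDictionary(zs, dict, sizeof(dict) - 1);

        zs->next_in   = (Bytef *)pkt;
        zs->avail_in  = len;
        zs->next_out  = out;
        zs->avail_out = out_cap;

        if (deflate(zs, Z_FINISH) != Z_STREAM_END)
            return -1;
        return (int)(out_cap - zs->avail_out);
    }

The decompressing side needs the identical dictionary and has to call
inflateSetDictionary() when inflate() returns Z_NEED_DICT.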
> >
> > If you are interested in real-time compression at a small ratio loss, be
> > sure to check out Oberhumer's LZ variant (GPL):
> >    http://www.oberhumer.com/opensource/lzo/
> >
> > Using tar, it actually outputs faster than tar with no compression
> > (because there is less to write), at a 3-5x speed improvement over GZip,
> > with only a 10-15% loss in ratio (text-binary), and an 8-15x speed
> > improvement over BZip2, with only a 15-25% loss in ratio (text-binary).
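
Those numbers should be easy to sanity-check; a minimal buffer-to-buffer
LZO sketch (my illustration, untested; the header name has varied across
LZO versions) looks like this:

    /* Compress one stand-in packet with LZO1X-1.  Build: cc t.c -llzo */
    #include <stdio.h>
    #include <string.h>
    #include <lzo1x.h>   /* newer installs may use <lzo/lzo1x.h> */

    int main(void)
    {
        /* Output sized to LZO's documented worst case: n + n/16 + 64 + 3 */
        static unsigned char in[1500], out[1500 + 1500/16 + 64 + 3];
        static unsigned char wrkmem[LZO1X_1_MEM_COMPRESS];
        lzo_uint out_len;

        if (lzo_init() != LZO_E_OK)           /* must be called once */
            return 1;
        memset(in, 'A', sizeof(in));          /* stand-in packet payload */

        if (lzo1x_1_compress(in, sizeof(in), out, &out_len, wrkmem)
                != LZO_E_OK)
            return 1;
        printf("%u -> %lu bytes\n",
               (unsigned)sizeof(in), (unsigned long)out_len);
        return 0;
    }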
> 
> Well, as long as the compression is faster than the bandwidth of the
> link, it's fast enough.  For modem or ADSL links zlib should be better
> because of its better compression ratio.  But this is certainly
> something to keep in mind for high-bandwidth links.
> 
> 
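To put rough numbers on Josef's point: a 128 kbit/s ADSL upstream moves
at most 16 kbytes/s, while zlib on current hardware compresses on the
order of megabytes per second (a ballpark figure, not a measurement),
so the CPU is nowhere near the bottleneck there.  The trade-off only
starts to matter somewhere around 10-100 Mbit/s.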

-- 

Eric Wheeler
Network Administrator
KAICO
20417 SW 70th Ave.
Tualatin, OR 97062
www.kaico.com
Voice: 503.692.5268




