
Subject: Re: Proposal: Compression of large packets: fragmentation
From: Josef Drexler <jdrexler,AT,josefsbox,DOT,cjb,DOT,net>
Date: Sat, 15 Dec 2001 07:55:28 +0100
In-reply-to: <3C1AC88A.4531D158@ieee.org>

On Fri, 14 Dec 2001, Bryan-TheBS-Smith wrote:

> Josef Drexler wrote:
> > This could be done by using zlib's compress() function, or by using a
> > dictionary that is reset for each packet -- this might give better
> > compression but needs coordination between both cipe peers.
>
> If you are interested in real-time compression at a small ratio loss, be
> sure to check out Oberhumer's LZ variant (GPL):
>    http://www.oberhumer.com/opensource/lzo/
>
> Piped through tar, it actually writes output faster than tar with no
> compression (because there is less to write), with a 3-5x speed
> improvement over gzip at only a 10-15% loss in ratio (text/binary), and
> an 8-15x speed improvement over bzip2 at only a 15-25% loss in ratio
> (text/binary).

Well, as long as compression keeps up with the bandwidth of the link,
it's fast enough.  For modem or ADSL links zlib should be better because
of its better compression ratio.  But this is certainly something to keep
in mind for high-bandwidth links.
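To make the two zlib options concrete, here is a minimal sketch using
Python's zlib bindings (the same underlying library; actual CIPE code
would use the C API).  The sample packet and the dictionary contents are
invented for illustration -- in practice both peers would have to agree
on the preset dictionary out of band:

```python
import zlib

# Hypothetical packet payload; real CIPE packets carry IP datagrams.
packet = b"GET /index.html HTTP/1.0\r\nHost: example.org\r\n\r\n"

# Option 1: stateless -- compress each packet independently with
# compress().  No shared state, so packet loss costs nothing.
stateless = zlib.compress(packet)

# Option 2: preset dictionary -- both peers agree on a dictionary of
# likely byte sequences and reset the compressor for every packet, so
# no inter-packet state has to survive loss or reordering, but short
# packets still compress well.
shared_dict = b"HTTP/1.0\r\nHost: GET POST index.html"

comp = zlib.compressobj(zdict=shared_dict)      # fresh state per packet
with_dict = comp.compress(packet) + comp.flush()

# The decompressor must be seeded with the same dictionary.
decomp = zlib.decompressobj(zdict=shared_dict)
assert decomp.decompress(with_dict) == packet
```

The coordination cost mentioned above is exactly the `shared_dict`:
both ends must use byte-identical dictionaries, or decompression fails.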

-- 
   Josef Drexler                 |    http://publish.uwo.ca/~jdrexler/
---------------------------------+---------------------------------------
 Please help Conserve Gravity    | Email address is *valid*.
 Boycott multistory buildings.   | Don't remove the "nospam" part.
