
Subject: Re: CIPE Compared To PPP + SSH
From: Keith Smith <keith,AT,ksmith,DOT,com>
Date: Wed, 23 Jan 2002 05:55:06 +0100
In-reply-to: <BF967F2D50B0D511952A00B0D0208C3721E987@owa.dfa>

I have been using ppptcp for quite some time.  It is similar in concept to
ppp+ssh: it used an RSA public/private keyring and config file, and you simply
put a server out that tried decrypting the setup packet until it matched a key,
then opened a pseudo-tty with pppd and sat between.
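In rough terms the server side amounted to trying each key in the ring against
the incoming setup packet.  This Python sketch is purely illustrative -- the
names, the magic header, and the XOR stand-in for the real RSA step are all
mine, not ppptcp's actual code:

```python
# Hypothetical sketch of ppptcp-style key matching: try each key in the
# ring until the client's setup packet decrypts to a recognizable header.
MAGIC = b"PPPTCP1"

def decrypt(key: bytes, packet: bytes) -> bytes:
    # Stand-in for the real RSA decryption; repeating-key XOR keeps the
    # sketch self-contained and runnable.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(packet))

def match_key(keyring, setup_packet):
    """Return the first key that decrypts the setup packet, else None."""
    for key in keyring:
        if decrypt(key, setup_packet).startswith(MAGIC):
            return key  # found the peer; the real server now spawns pppd on a pty
    return None
```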

I've read the paper on the list, and while its points are valid, I simply did
not experience many problems given reasonable connectivity between the hosts.
I occasionally ran into the "spike" problem, but it was generally short lived.
At some point your output queue will drain.  A simple way around much of this
is to use a much smaller MTU on the ppp link; I would suggest 296 or 168.
Make sure pppd does NOT HAVE ACCESS TO COMPRESSION modules!  Myriad issues
disappear.  My ppptcp used a smallish transmission packet too.
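A pppd options fragment along those lines might look like this.  The option
names are standard pppd ones; treat the file path and the exact set as a
sketch rather than my actual config:

```
# /etc/ppp/options.tunnel -- small MTU, all compression disabled
mtu 296
mru 296
noccp        # refuse CCP negotiation entirely
nobsdcomp    # no BSD-Compress
nodeflate    # no Deflate
novj         # no Van Jacobson TCP header compression
```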

All that being said, I switched to CIPE.  It is superior technology.  The
UDP transport lets the error recovery be optimised and filtered, so you still
can get spikes, but I have noticed they are not as high or as long.  Also, the
interface is up as long as the daemon is running, which saves you from having
to use dummy interfaces and the like.  I've also found the actual connection
to be much more stable than the PPP one; it appears that it never exits.
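For comparison, a CIPE peer is described by a small options file read by the
daemon.  This is an illustrative sketch of the documented parameters from
memory (device, tunnel addresses, UDP sockets, static key) -- the values are
made up, not my production config:

```
# CIPE options file -- one static-key peer (illustrative values)
device    cipcb0
ipaddr    192.168.7.2        # our end of the tunnel
ptpaddr   192.168.7.1        # their end of the tunnel
me        0.0.0.0:6789       # local UDP socket
peer      203.0.113.5:6789   # remote UDP socket
key       3248fd23a9b3dd1fabcd0123456789ab   # shared 128-bit key
```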

Lastly, the overhead is lower and the flow is smoother.  This is CIPE,
over the tunnel & direct:
------------------------------------
PING msc-gw.terminix-triad.com (192.168.25.128): 56 octets data
64 octets from 192.168.25.128: icmp_seq=0 ttl=255 time=26.4 ms
64 octets from 192.168.25.128: icmp_seq=1 ttl=255 time=27.0 ms
64 octets from 192.168.25.128: icmp_seq=2 ttl=255 time=26.1 ms
64 octets from 192.168.25.128: icmp_seq=3 ttl=255 time=27.1 ms

--- msc-gw.terminix-triad.com ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 26.1/26.6/27.1 ms

PING terminix-triad.com (65.210.142.2): 56 octets data
64 octets from 65.210.142.2: icmp_seq=0 ttl=248 time=25.6 ms
64 octets from 65.210.142.2: icmp_seq=1 ttl=248 time=25.2 ms
64 octets from 65.210.142.2: icmp_seq=2 ttl=248 time=25.1 ms
64 octets from 65.210.142.2: icmp_seq=3 ttl=248 time=25.4 ms

--- terminix-triad.com ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 25.1/25.3/25.6 ms
------------------------------------

This is ppptcp (the other way round: direct first, then the tunnel :)):
------------------------------------
PING admin.pittsgrove.org (216.83.97.50): 56 octets data
64 octets from 216.83.97.50: icmp_seq=0 ttl=247 time=23.1 ms
64 octets from 216.83.97.50: icmp_seq=1 ttl=247 time=23.2 ms
64 octets from 216.83.97.50: icmp_seq=2 ttl=247 time=23.0 ms
64 octets from 216.83.97.50: icmp_seq=3 ttl=247 time=23.4 ms

--- admin.pittsgrove.org ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 23.0/23.1/23.4 ms

PING 192.168.7.1 (192.168.7.1): 56 octets data
64 octets from 192.168.7.1: icmp_seq=0 ttl=255 time=26.0 ms
64 octets from 192.168.7.1: icmp_seq=1 ttl=255 time=25.2 ms
64 octets from 192.168.7.1: icmp_seq=2 ttl=255 time=25.0 ms
64 octets from 192.168.7.1: icmp_seq=3 ttl=255 time=24.7 ms

--- 192.168.7.1 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 24.7/25.2/26.0 ms
------------------------------------

overhead:
CIPE:   1.3/25.3 = 5%
PPPTCP: 2.1/23.1 = 9%
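Those figures are just the extra round-trip time divided by the direct round
trip, using the average RTTs from the pings above.  A quick sketch of the
arithmetic:

```python
def tunnel_overhead(direct_ms: float, tunnel_ms: float) -> float:
    """Fractional RTT overhead the tunnel adds over the direct path."""
    return (tunnel_ms - direct_ms) / direct_ms

# average RTTs from the ping output above (direct, tunnel)
print(f"CIPE:   {tunnel_overhead(25.3, 26.6):.0%}")   # 1.3 ms extra on 25.3 ms
print(f"PPPTCP: {tunnel_overhead(23.1, 25.2):.0%}")   # 2.1 ms extra on 23.1 ms
```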

All of these hosts are on T-1s or better on the same backbone; in fact the
latter is on an OC-3.  Notice the way the ping times hop around.  I did a
much larger sample before switching.  The CIPE tunnels have much less
"spread" in the ping times.  This can get very significant if your hosts
start getting into the hundreds of ms.

Honestly, in the above environment, it's not significant enough to matter,
but then again UUnet guarantees 99%+ packet delivery within its backbone.
Start tossing away packets across an ISP boundary, and while they both go
to hell, CIPE recovers much more gracefully.

I would imagine hacking ppptcp to use an encrypted UDP stream between the
endpoints would probably resolve that, but frankly I've never been much of a
ppp fan.  SLIP always worked really well for me, but the Linux PPP for some
reason always goofs on me.

I do prefer the client/server approach, but CIPE is new, and I'm sure it
will mature quite a bit as more and more people use it; it seems to be
pretty thin.  pppd is a pig with all the auth and other crap they keep
piling in.

Good Luck!

-- 
Keith Smith                 keith,AT,ksmith,DOT,com
655 W Fremont Dr
Tempe AZ 85282              it's hot




