RE: Data integrity check in CIPE - Please explain the necessity or benefit of a larger checksum.
"Mark Smith" <mark.smith,AT,avcosystems,DOT,co,DOT,uk>
Mon, 29 Sep 2003 14:50:15 +0100
> If your database transaction application doesn't have any safeguard
> against this, you shouldn't probably use CIPE: it wasn't designed for
> that purpose. Somebody can do the same on your LAN.
An application almost certainly won't be able to detect that a new
connection is actually caused by packets identical to an earlier one.
Tunnel or no tunnel, I can't find any reason to accuse any application of
not being good enough for this reason.
My LAN security isn't what I'm worried about - these packets appear to be
coming from a secured network (the other end of the tunnel) and they're
actually being controlled by someone outside that I don't trust. Whatever
the application, this isn't secure. If someone within my own network
compromises our own server, that's an internal matter. However, if J. Random
Hacker decides to mess with my network traffic, I have to do my best to stop
them. I'm trying to work towards a solution to exactly that.
> Replay protection (using sequence/ack numbers) has some problems when
> UDP datagrams are used for transport: as outlined somewhere else
> (by P.G.), you have to restart your tunnel each time a UDP datagram
> is lost or duplicated...
I've been thinking this through a little, trying to figure out whether there's
something we can do about it, even within the scope of UDP. Loss or
duplication isn't as much of an issue as the effect of packets arriving out
of order. As it stands now, such packets are decrypted and passed on out
of order, which I think TCP sorts out when it happens to individual TCP
packets. If these events occur as a result of outside intervention, any
attacker would just be duplicating the effects of using UDP. If, however,
the same packets are replayed some small but unspecified time later, they
might be treated differently, and it's that effect I want to prevent.
If we protect against modified packets, then a replay attacker can only
resend a genuine earlier packet, which (with appropriate detection) we
should be able to drop and handle. A simple idea that takes a little
bit of memory is to keep track of the packets we _haven't_ seen and time
them out after a short interval, perhaps configurable with a default.
For example, packets may arrive in this order...
1 accept as initial
3 record that we've missed 2, expect 4 or higher
5 record that we've missed 4, expect 6 or higher
4 accept and remove record of 4
4 drop as we only expect 6 or higher now
then at some point remove the record of 2, as it's been too long for it to arrive.
With a working stream, this queue would keep itself to a minimum unless
there is extreme packet loss, and even then the records would time out.
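To make the bookkeeping concrete, here is a minimal sketch of the idea in Python. The class name, method names, and the default timeout are my own illustration, not anything from CIPE: track the next expected sequence number, record the gaps as "missing", let a late packet fill its gap exactly once, and expire stale gap records after a timeout.

```python
import time

class ReplayFilter:
    """Sketch of the scheme above: accept each sequence number at most
    once, remember the numbers we haven't seen, and time those records
    out after a short interval (configurable, with a default)."""

    def __init__(self, timeout=2.0):
        self.timeout = timeout     # seconds before a missing number is given up on
        self.next_expected = None  # lowest number accepted without a gap record
        self.missing = {}          # seq -> time we first noticed it was missing

    def accept(self, seq, now=None):
        """Return True to pass the packet on, False to drop it."""
        now = time.monotonic() if now is None else now

        # Expire gap records that have waited too long (e.g. packet 2 above).
        for s in [s for s, t in self.missing.items() if now - t > self.timeout]:
            del self.missing[s]

        if self.next_expected is None:       # "1  accept as initial"
            self.next_expected = seq + 1
            return True

        if seq >= self.next_expected:
            # Record every number we skipped over as missing.
            for s in range(self.next_expected, seq):
                self.missing[s] = now
            self.next_expected = seq + 1     # "expect 4 or higher", etc.
            return True

        if seq in self.missing:              # a late arrival fills its gap
            del self.missing[seq]
            return True

        return False                         # replay, or too late: drop
```

Run against the example sequence above, this accepts 1, 3, 5, then 4, and drops the second 4; once record 2 has timed out, a replayed 2 is dropped too.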
This is more a packet issue than a cryptography one, but I'm hoping it'll be
workable. However, it needs someone else to understand it and maybe pick
holes in it, so please feel free...
Mark Smith - Avco Systems Ltd
Tel: +44 (0)1784 430996 Fax: +44 (0)1784 431078