
Subject: Re: NAT through CIPE (not CIPE through NAT)
From: "E. Jay Berkenbilt" <ejb,AT,ql,DOT,org>
Date: Sun, 1 Jul 2001 15:51:50 +0200

SUMMARY: An iptables SNAT rule seems to cause the source address of
the encrypted public cipe packets themselves to be altered, not just
that of the packets routed through ciped.  This must be either a bug
in netfilter or a bug in the way cipe interacts with netfilter.  It
does not happen with ipchains, even on the 2.4.3 kernel.  Using
ipchains, NAT (masquerading) through CIPE works.

---------------------------------------------------------------------------

I have more information about the message I sent out last weekend,
which is attached below for reference.  I'm hoping someone with a
deep understanding of netfilter, or of cipe's interaction with it,
can chime in.  I believe that what we have here is either a bug in
the netfilter code or a bug in the manner in which cipe interacts
with it.  I have a lot of evidence to support this:

 1.  Even with the 2.4 kernel, if I make sure that all iptables
     modules are unloaded and use ipchains to set up masquerading,
     then NAT through cipe works.  In other words, if I issue this
     command:

     ipchains -A forward -d 192.168.0.0/16 -j MASQ

     then I'm in business.  However, I'd much rather use iptables than
     ipchains.  (Note: iptables with MASQUERADE target rather than
     SNAT target also fails.)
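
     For reference, the MASQUERADE variant was just the SNAT rule from
     my original message (attached below) with the target swapped,
     i.e. roughly:

     iptables -t nat -A POSTROUTING -d 192.168.0.0/16 -j MASQUERADE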

 2.  Using various LOG and other targets with iptables, it appears
     that the cipe packets are not passing through the tables as I
     would expect.  Maybe this is because my expectations are wrong.
     This does not happen with ipchains even with the 2.4 kernel.  For
     example, this ipchains command:

     ipchains -A output -p udp -d <public IP of site2-gw> 9999 -j REJECT

     stops cipe dead in its tracks exactly as expected.  However,
     these iptables commands:

     iptables -t filter -A OUTPUT -p udp -d <public IP of site2-gw> --dport 9999 -j DROP
     iptables -t filter -A FORWARD -p udp -d <public IP of site2-gw> --dport 9999 -j DROP

     have no impact.  (Recall that in ipchains, forwarded packets
     traverse both the forward and output chains, whereas in iptables,
     forward packets do not traverse the OUTPUT chain.)

     Also, if I use the LOG target in iptables to look at packets
     destined for site2-gw's public IP address on udp port 9999, I
     don't see any.  I can put these LOG targets in the nat table's
     OUTPUT or POSTROUTING chains and in the filter table's OUTPUT or
     FORWARD chains, and I see nothing.  Actually, if I unload all
     modules and start everything from scratch, I see exactly one UDP
     packet logged this way.  tcpdump shows the packets going out,
     though.
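
     For reference, the LOG rules were of this general form, one per
     chain mentioned above:

     iptables -t nat -A POSTROUTING -p udp -d <public IP of site2-gw> --dport 9999 -j LOG
     iptables -t filter -A OUTPUT -p udp -d <public IP of site2-gw> --dport 9999 -j LOG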

 3.  If I use "tcpdump icmp or udp port 9999" and ping through my
     forwarded connection with no NAT enabled, I see icmp packets from
     my site1 internal address to my site2 internal address (as
     expected) and udp port 9999 cipe packets from my site1 external
     address to my site2 external address (also as expected).
     However, once I enable SNAT, I see that the source address of the
     icmp packets is modified to the SNAT address as expected, but the
     source address of the UDP packets destined for site2's cipe
     daemon is modified as well!  This means that the system is
     sending cipe packets with the source address 192.168.14.2 and the
     destination address of site2-gw's public IP address.  There's no
     way this could ever work, as there is no public route to
     192.168.14.2.  In fact, tcpdump on each gateway shows that the
     packets are going out but not being received on the other end.
     They are probably being blocked by some intermediate router.  (If
     I adjust my firewall rules appropriately, I can see them going
     out at the border of site1's network, but I can't see them coming
     in at the border of site2's network.  Ordinarily I don't allow
     192.168.* out un-NAT-ted anyway, so the packets would normally
     never leave my network.)
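
     (For anyone reproducing this, the full tcpdump invocation was
     roughly

     tcpdump -n -i eth1 'icmp or udp port 9999'

     run on each gateway, eth1 being the public interface.)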

I've tried all of this with the latest released cipe (1.5.2).  (The
development snapshot link on the cipe web page is broken.)  I'm going
to investigate what changed between 2.4.3 (which I am now running
after installing the 2.4.3 kernel rpm from Red Hat's update area) and
2.4.5.  Any tips, including pointers to the right audience for
additional help, would be appreciated.  I think I've gone as far as I
can go without digging into the guts of ipip and netfilter in the
kernel (which will probably be my next step if no one swoops in with
an answer or a patch).





To: cipe-l,AT,inka,DOT,de
Subject: NAT through CIPE (not CIPE through NAT)
From: "E. Jay Berkenbilt" <ejb,AT,ql,DOT,org>
Date: Sun, 24 Jun 2001 19:03:22 -0400

SUMMARY: IP forwarding across a CIPE VPN is working, but NAT across
the same CIPE VPN is failing.  tcpdump shows packets only on one side
of the interface.

Note: this question pertains to running NAT over CIPE, not to running
CIPE over NAT.  In other words, I have a working CIPE VPN between two
specific machines.  Each machine is on a private network.  I'd like to
talk between the two private networks, but one side doesn't have a
route to the other.  I am successful in routing between the two
networks using the CIPE boxes as gateways if I establish all the
required routing, but not in doing NAT over the CIPE interface.

Here are the details:

site1-machine: eth0: 10.160.59.1/24

site1-gateway: eth0: 10.160.59.17/24
               cipcb0: 192.168.14.2/24
               eth1: (dynamic public address)

site2-gateway: eth0: 192.168.0.3/24
               cipcb0: 192.168.14.1/24
               eth1: (static public address)

site2-machine: eth0: 192.168.0.1/24

All machines are running Red Hat Linux 7.1 with cipe 1.4.6 as
distributed in Red Hat 7.1 and with the default Red Hat 7.1
2.4.2-based kernel.  I've looked at 1.5.2 but haven't installed it,
since none of its changes appear relevant to this problem.

site1-machine has a route for 192.168.0.0/16 to site1-gateway.
site1-gateway has a route to 192.168.0.0/16 through interface cipcb0.
site2-machine has site2-gateway as its default gateway.

site1-gateway has IP forwarding enabled and accepts forwarding from
10.160.59.0/24 to any destination.

site2-gateway has IP forwarding enabled and accepts forwarding from
192.168.0.0/16 to any destination.
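
Roughly, that routing and forwarding setup amounts to the following
(reconstructed from memory, so the exact commands may differ
slightly):

# on both gateways
echo 1 > /proc/sys/net/ipv4/ip_forward

# on site1-machine
route add -net 192.168.0.0 netmask 255.255.0.0 gw 10.160.59.17

# on site1-gateway
route add -net 192.168.0.0 netmask 255.255.0.0 dev cipcb0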

site1's options file:

ipaddr  192.168.14.2
ptpaddr 192.168.14.1
peer    (site2's public address):9999
key     (key)
dynip

site2's options file:

ipaddr  192.168.14.1
ptpaddr 192.168.14.2
peer    127.0.0.1:9999
me      (site 2's public address):9999
key     (key)
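
(Both daemons are started the usual way, something like

ciped-cb -o <options-file>

with the options files shown above.)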

What works:

site1-gateway and site2-gateway can ping each other.  site2-gateway
sees the source address as 192.168.14.2.  site1-gateway can ping
either 192.168.14.1 or 192.168.0.3.

site1-gateway and site2-machine can both ping each other since
site1-gateway knows that site2-machine is on the other side of the
CIPE VPN and site2-machine routes all non-local packets through
site2-gateway.  site2-machine can see 192.168.14.1 but not
10.160.59.17, which is fine.

In order to get site1-machine and site2-machine to see each other, I
should be able to tell site1-gateway to NAT any packets being
forwarded to 192.168.0.0/16 to source address 192.168.14.2.  This does
not work.  I know, however, that I can forward packets through this
VPN without NAT.  Here are the details:

If I teach site2-gateway about 10.160.59.0/24 with

route add -net 10.160.59.0/24 dev cipcb0
iptables -t nat -I POSTROUTING -d 10.160.59.0/24 -j ACCEPT
iptables -t filter -I FORWARD -d 10.160.59.0/24 -j ACCEPT

then site1-machine and site2-machine can ping each other.
Furthermore, if I run tcpdump -i cipcb0 on both site1-gateway and
site2-gateway, I can see both the echo request and echo reply packets,
and I can see 192.168.0.1 and 10.160.59.1 as the source/destination
addresses.  This is exactly as expected.  Everything works perfectly.
My two networks can talk to each other.

However, I don't want site2 to know about 10.160.59.0/24.  I want
site1-gateway to SNAT all its traffic to 192.168.14.2.  This should
be easy.  Given that the above setup works, I should simply need to
run the following on site1-gateway:

iptables -t nat -I POSTROUTING -d 192.168.0.0/16 -j SNAT --to-source 192.168.14.2

and everything should just work.  (Note that site2-machine can ping
192.168.14.2 fine.)  However, when I give this command, my tcpdump on
site1-gateway shows the echo requests with the source of 192.168.14.2
and the destination of 192.168.0.1 as expected, but site2-gateway's
tcpdump shows nothing!

In other words, CIPE does not appear to be forwarding the traffic at
all.  tcpdump on site1 shows the packets being sent, but tcpdump on
site2 does not show the packets being received.
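
If anyone wants me to check something specific, the obvious
diagnostics on site1-gateway are, roughly:

iptables -t nat -L POSTROUTING -n -v
tcpdump -n -i eth1 udp port 9999

The first shows per-rule packet counters (i.e., whether the SNAT rule
is matching at all); the second watches for the encrypted cipe
packets on the public interface.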

The thing that's baffling to me is that when I turn on SNAT to
site1's CIPE IP address, the cipe interface on site2 no longer
appears to be receiving packets, even though the interface on site1
appears to be sending them.  Running strace on the ciped-cb processes
is unenlightening.  Any further tips on diagnosing this would be
helpful.
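
(For the record, the strace invocation was along the lines of

strace -f -p <pid of ciped-cb>

and showed nothing obviously wrong.)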

I have administrative control of all machines in question, and I am
the only person using this VPN at the moment.  I have full freedom to
bring things up and down as required, so I can try experiments that
people may suggest.  One thing I have tried is to explicitly specify
both the peer: and me: parameters as static addresses (using the
address I happen to have now) on both sides.  This changes nothing --
I get exactly the same results.  When I try to NAT through the cipe
interface, tcpdump shows the packets on one side but not on the other.

For what it's worth, I used to use ppp over stunnel with otherwise
identical configurations.  NAT across that VPN worked fine.

--
E. Jay Berkenbilt <ejb,AT,ql,DOT,org>
http://www.ql.org/q/




