
Subject: Re: NAT through CIPE (not CIPE through NAT)
From: ewheeler,AT,kaico,DOT,com
Date: Mon, 2 Jul 2001 05:46:58 +0200
In-reply-to: <200107011326.f61DQBu03311@soup.in.ql.org>

Here's my CIPE setup; it sounds like it's similar to yours.  I am using
kernel 2.4.4 with CIPE 1.5.2 and it works great!

Site 1:
  (eth1)   external (public) ip: 1.2.3.4
  (eth0)   internal ip: 192.168.0.1/24
  (cipcb0) cipe addr: 192.168.0.1   P-t-P addr: 192.168.1.1

Site 2:
  (eth1)   external (public) ip: 4.3.2.1
  (eth0)   internal ip: 192.168.1.1/24
  (cipcb0) cipe addr: 192.168.1.1   P-t-P addr: 192.168.0.1
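
(For reference, the CIPE options files that would match this addressing
look roughly like the following.  This is just a sketch -- the file path,
the UDP port 9999, and the key are placeholders; use whatever your setup
already has.)

Site 1 (/etc/cipe/options):
  device  cipcb0
  ipaddr  192.168.0.1
  ptpaddr 192.168.1.1
  # local and remote public address and UDP port (port is an example)
  me      1.2.3.4:9999
  peer    4.3.2.1:9999
  key     <your-shared-secret-key>

Site 2 is the mirror image (ipaddr/ptpaddr and me/peer swapped).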

Here are the table rules that I am using in the nat table:

Site 1:
  iptables -t nat -A POSTROUTING -j RETURN -d 192.168.0.0/24
  iptables -t nat -A POSTROUTING -j RETURN -d 192.168.1.0/24
  iptables -t nat -A POSTROUTING -j SNAT --to 1.2.3.4 -s 192.168.0.0/24

Site 2: 
  iptables -t nat -A POSTROUTING -j RETURN -d 192.168.0.0/24
  iptables -t nat -A POSTROUTING -j RETURN -d 192.168.1.0/24
  iptables -t nat -A POSTROUTING -j SNAT --to 4.3.2.1 -s 192.168.1.0/24

Note that if your SNAT rule is already in place, you may wish to use '-I
POSTROUTING 1' for your RETURN rules.  It is important that the RETURN
rules come first.
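
For example, on Site 1 that would look something like this (I haven't
re-tested this exact sequence, but the idea is just to get the RETURN
rules in front of the existing SNAT rule):

  iptables -t nat -I POSTROUTING 1 -j RETURN -d 192.168.0.0/24
  iptables -t nat -I POSTROUTING 2 -j RETURN -d 192.168.1.0/24
  # the RETURN rules should now be listed before the SNAT rule
  iptables -t nat -L POSTROUTING -n --line-numbers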

Here are the routes that I have in place so CIPE routing works:

Site 1:
  route add -net 192.168.1.0 netmask 255.255.255.0 gw 192.168.1.1

Site 2:
  route add -net 192.168.0.0 netmask 255.255.255.0 gw 192.168.0.1

With this in place, 192.168.1.123 can ping 192.168.0.210 and vice versa
without a problem.  The 192.168.0.0 and 192.168.1.0 networks get SNAT'd
out to the internet only when the packet is not destined to either
192.168.0.0/24 or 192.168.1.0/24.  The RETURN rules are important:
otherwise, when 192.168.1.123 pings 192.168.0.210, SNAT at Site 2 rewrites
the source address so that 192.168.0.210 tries to send its ICMP reply to
4.3.2.1.  Instead, we want the Linux box to act as a simple router and
pass the packet through without any SNAT/MASQ involved.
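
If you want to convince yourself it's working, tcpdump on the Site 2
gateway shows it nicely (9999 here is just the UDP port ciped happens to
use in Jay's example -- substitute your own):

  # cleartext packets inside the tunnel keep their 192.168.1.x source
  tcpdump -i cipcb0 icmp
  # the encrypted carrier packets use only the public addresses
  # (4.3.2.1 -> 1.2.3.4), never a 192.168.x.x source
  tcpdump -i eth1 udp port 9999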

I think the reason ipchains works is that it is more forgiving.  What I
am going to say next is speculation, but I think it is pretty close to
accurate.  When ipchains applies a -j MASQ rule, it changes the source
address of the packet to the IP address of the interface it leaves on.  NAT
still occurs, but instead of changing the source to a public internet IP,
it changes it to its internal cipcb0 IP.  When the packet returns, NAT
un-mangles the address based on the port it comes in on.

(The following isn't speculation; I know this to be true.)
  iptables SNAT will always force the source address to whatever you tell
it to be, regardless of which interface the packet leaves on.  This is why
we add the RETURN rules: so SNAT doesn't affect packets traversing your
VPN WAN.  I don't know the MASQUERADE target as well since I've never had
to work with it, but I would guess it behaves similarly.  If you add the
RETURN lines above the MASQUERADE lines in the nat table, it should fix
the source-address mangling problem when the packet is destined to a
system on the other end of the WAN.
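
If you do want to try it, my guess (untested on my end) is that the
Site 1 rules with MASQUERADE instead of SNAT would look like this:

  # RETURN first so packets headed across the VPN are left untouched
  iptables -t nat -A POSTROUTING -j RETURN -d 192.168.0.0/24
  iptables -t nat -A POSTROUTING -j RETURN -d 192.168.1.0/24
  # ...then masquerade everything else leaving the public interface
  iptables -t nat -A POSTROUTING -o eth1 -s 192.168.0.0/24 -j MASQUERADE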

Hope this helps!  Let me know if I can explain/help more!

--Eric

On Sun, 1 Jul 2001, E. Jay Berkenbilt wrote:

> SUMMARY: iptables SNAT rule seems to cause source address of encrypted
> public cipe packets themselves to be altered; not just the packets
> routed through ciped.  This must be either a bug in netfilter or a bug
> in the way cipe interacts with netfilter.  This does not happen with
> ipchains, even with the 2.4.3 kernel.  Using ipchains, NAT
> (masquerading) through CIPE works.
> 
> ---------------------------------------------------------------------------
> 
> I have more information about the message I sent out last weekend,
> which is attached below for reference.  I'm hoping someone who
> understands netfilter or cipe's interaction with it deeply can kick
> in.  I believe that what we have here is either a bug in the netfilter
> code or in the manner in which cipe interacts with it.  I have a lot
> of evidence to support this:
> 
>  1.  Even with the 2.4 kernel, if I make sure that all iptables
>      modules are unloaded and use ipchains to set up masquerading,
>      then NAT through cipe works.  In other words, if I issue this
>      command:
> 
>      ipchains -A forward -d 192.168.0.0/16 -j MASQ
> 
>      then I'm in business.  However, I'd much rather use iptables than
>      ipchains.  (Note: iptables with MASQUERADE target rather than
>      SNAT target also fails.)
> 
>  2.  Using various LOG and other targets with iptables, it appears
>      that the cipe packets are not passing through the tables as I
>      would expect.  Maybe this is because my expectations are wrong.
>      This does not happen with ipchains even with the 2.4 kernel.  For
>      example, this ipchains command:
> 
>      ipchains -A output -p udp -d <public IP of site2-gw> 9999 -j REJECT
> 
>      stops cipe dead in its tracks exactly as expected.  However,
>      these iptables commands:
> 
>      iptables -t filter -A OUTPUT -p udp -d <public IP of site2-gw> --dport 9999 -j DROP
>      iptables -t filter -A FORWARD -p udp -d <public IP of site2-gw> --dport 9999 -j DROP
> 
>      have no impact.  (Recall that in ipchains, forwarded packets
>      traverse both the forward and output chains, whereas in iptables,
>      forward packets do not traverse the OUTPUT chain.)
> 
>      Also, if I use the LOG target in iptables to look at packets
>      destined for site2-gw's public IP address on udp port 9999, I
>      don't see any.  I can put these LOG targets in the nat table's
>      OUTPUT or POSTROUTING chains and in the filter table's OUTPUT or
>      FORWARD chains, and I see nothing.  Actually, if I unload all
>      modules and start everything from scratch, I get to see one
>      single UDP packet logged in this way.  tcpdump shows the packets
>      going out though.
> 
>  3.  If I use "tcpdump icmp or udp port 9999" and ping through my
>      forwarded connection with no NAT enabled, I see icmp packets from
>      my site1 internal address to my site2 internal address (as
>      expected) and udp port 9999 cipe packets from my site1 external
>      address to my site2 external address as expected.  However, once
>      I enable SNAT, I see that the source address of the icmp packets
>      are modified to the SNAT address as expected, but also, the
>      source address of the UDP packets that are destined to site2's
>      cipe daemon are also modified!  This means that the system is
>      sending cipe packets with the source address 192.168.14.2 and the
>      destination address of site2-gw's public IP address.  There's no
>      way this could ever work as there is no public route to
>      192.168.14.2.  In fact, tcpdump on each gateway shows that the
>      packets are going out but not being received on the other end.
>      They are probably being blocked by some intermediate router.  (If
>      I adjust my firewall rules appropriately, I can see them going
>      out at the border of site1's network, but I can't see them coming
>      in at the border of site2's network.  Ordinarily I don't allow
>      192.168.* out un-NAT-ted anyway, so the packets would ordinarily
>      never leave my network.)
> 
> I've tried all this stuff with the latest released cipe (1.5.2).  (The
> development snapshot link on the cipe webpage is broken.)  I'm going
> to investigate what changed between 2.4.3 (which I am now running
> after installing the 2.4.3 kernel rpm from redhat's update area) and
> 2.4.5.  Any tips, including how to reach the correct audience for
> additional help, would be helpful.  I think I've gone as far as I can
> go without digging into the guts of ipip and netfilter in the kernel
> (which will probably be my next step if no one swoops in with an
> answer or a patch).
> 
> 




