NoSwitchport.com

Internet-based DMVPN coming through your front-door (VRF, that is)

Posted in Networking by shaw38 on April 20, 2010

While working on the design for a 20+ site DMVPN migration, I realized something often overlooked in the documentation for an internet-based DMVPN deployment. To maintain a zero (or minimal) touch deployment model in an internet-based DMVPN, default routing is a must for dynamic tunnel establishment between hubs and spokes. The public addressing of spoke routers is typically at the mercy of one or more service providers, and even if you have been allocated a static address per the service contract, it still has a tendency to change for reasons outside the customer's control. This is especially true in teleworker-type deployments with a broadband service provider. To deal with this issue, an engineer has two options: maintain a list of static routes on every hub/spoke router covering every public and next-hop address in the DMVPN environment, or use a static default route pointing out the public interface.

Tough decision, huh? Not so fast.

What happens when you have a transparent proxy deployed in your network at the hub site? No problem, just have the spoke routers carry a default route advertised into the IGP from the hub site. Wait…we are already using a default route to handle DMVPN tunnel establishment between spoke routers. To resolve this issue, we need two default routes: one for clients within the VPN and one for establishing spoke-to-spoke tunnels. We could add two defaults to the same routing table with the same administrative distance, but load balancing is not the behavior we want, and our tunnels would throw a fuss due to route recursion. How about policy-based routing with the local policy command configured for router-generated traffic? Pretty ugly. Enter FVRF, or Front-door VRF.

Front-door VRF takes advantage of the VRF-aware features of IPSec. While the scant Cisco documentation touts it as a security feature that isolates your private routing table from your public address space, it also provides an ideal solution for maintaining separate routing topologies for DMVPN control-plane traffic and user data-plane traffic.

So how does all this work? Pretty simply, if you are familiar with the VRF concept. First, on your spoke routers, create a VRF to be used for resolving tunnel endpoints:

ip vrf FVRF
 description FRONT_DOOR_VRF_FOR_TUNNEL_MGMT
 rd 1:1

Add the publicly addressed or outside-facing interface to the VRF:

interface FastEthernet0/1
 ip vrf forwarding FVRF
 ip address 10.1.1.1 255.255.255.252

Now, we need to configure our ISAKMP/IPSec policy in a VRF-aware fashion:

crypto isakmp policy 1
 authentication pre-share
 group 2
 encr 3des
!
crypto keyring DMVPN vrf FVRF
 pre-shared-key address 0.0.0.0 0.0.0.0 key PR35H4R3D
!
crypto ipsec transform-set DMVPN esp-3des
 mode transport
!
crypto ipsec profile DMVPN
 set security-association lifetime seconds 1800
 set transform-set DMVPN 
 set pfs group2

Note that the only VRF-specific configuration is the crypto keyring statement. Both the ISAKMP policy and the IPSec transform-set configuration are no different than in a typical deployment. GET VPN could be used instead, if your security posture calls for it.

Next up–configuration of the mGRE interface:

interface Tunnel1
 ip address 10.2.2.1 255.255.255.0
 no ip redirects
 ip mtu 1400
 ip nhrp authentication DMVPN
 ip nhrp map multicast 2.2.2.2
 ip nhrp map 10.2.2.254 2.2.2.2
 ip nhrp network-id 1
 ip nhrp holdtime 450
 ip nhrp nhs 10.2.2.254
 ip nhrp shortcut
 ip nhrp redirect
 ip tcp adjust-mss 1360
 load-interval 30
 qos pre-classify
 tunnel source FastEthernet0/1
 tunnel mode gre multipoint
 tunnel key 1
 tunnel vrf FVRF
 tunnel protection ipsec profile DMVPN

Configuring the tunnel interface is standard fare except for the “tunnel vrf” argument. This command forces the far-side tunnel endpoint to be resolved in the VRF specified. By default, tunnel endpoint resolution takes place in the global table, which is obviously not the behavior we want. Also, notice the “ip nhrp shortcut” and “ip nhrp redirect” arguments. These two commands mean we are using DMVPN Phase 3 and its fancy CEF rewrite capability for spoke-to-spoke tunnel creation.

Last, let's add our default route within the VRF:

ip route vrf FVRF 0.0.0.0 0.0.0.0 10.1.1.2 name DEFAULT_FOR_FVRF

And we’re done! At this point, assuming your hub site configuration is correct, you should have a working DMVPN tunnel.
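For reference, since the post assumes the hub side is already in place, a hub tunnel interface under the same front-door VRF might look roughly like the sketch below. It reuses the addressing from above (hub NBMA address 2.2.2.2, hub tunnel address 10.2.2.254); the outside interface name is a placeholder, and the hub would also need its own ip vrf definition, VRF-aware keyring, and FVRF default route like the ones shown for the spoke:

interface Tunnel1
 ip address 10.2.2.254 255.255.255.0
 no ip redirects
 ip mtu 1400
 ip nhrp authentication DMVPN
 ip nhrp map multicast dynamic
 ip nhrp network-id 1
 ip nhrp holdtime 450
 ip nhrp redirect
 ip tcp adjust-mss 1360
 tunnel source FastEthernet0/1
 tunnel mode gre multipoint
 tunnel key 1
 tunnel vrf FVRF
 tunnel protection ipsec profile DMVPN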

In the output below, notice the “fvrf” and “ivrf” sections under tunnel interface 1. The concept of IVRF is the exact opposite of FVRF: tunnel control-plane traffic operates in the global routing table, and your private side operates in a VRF. IVRF can be tricky in that, if your spoke routers are managed over the tunnel, all management functionality (SNMP, SSH, etc.) must be VRF-aware. Recent IOS releases have been much better with VRF-aware features but YMMV:

Test-1841#sh crypto session detail 
Crypto session current status

Code: C - IKE Configuration mode, D - Dead Peer Detection     
K - Keepalives, N - NAT-traversal, T - cTCP encapsulation     
X - IKE Extended Authentication, F - IKE Fragmentation

Interface: Tunnel1
Uptime: 3d22h
Session status: UP-ACTIVE     
Peer: 2.2.2.2 port 500 fvrf: FVRF ivrf: (none)
      Phase1_id: 2.2.2.2
      Desc: (none)
  IKE SA: local 10.1.1.1/500 remote 2.2.2.2/500 Active 
          Capabilities:D connid:1048 lifetime:01:41:34
  IPSEC FLOW: permit 47 host 10.1.1.1 host 2.2.2.2
        Active SAs: 2, origin: crypto map
        Inbound:  #pkts dec'ed 114110 drop 0 life (KB/Sec) 4396354/1063
        Outbound: #pkts enc'ed 119898 drop 492 life (KB/Sec) 4396347/1063
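As an aside on the IVRF caveat above: if spokes are managed inside a VRF, the management plane has to be pointed at that VRF explicitly. A rough illustration of what VRF-aware management configuration looks like (the INSIDE VRF name and the 192.0.2.50 management station are made up for this example):

snmp-server host 192.0.2.50 vrf INSIDE version 2c MYCOMMUNITY
logging host 192.0.2.50 vrf INSIDE
ntp server vrf INSIDE 192.0.2.50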

You can now configure your favorite flavor of IGP as you normally would (globally, that is) without impacting DMVPN control-plane traffic. In this scenario, OSPF is used with the tunnel interfaces configured as a point-to-multipoint network type. The static default route in the FVRF table handles tunnel establishment while the dynamically-learned default via OSPF handles the user data plane within the VPN:

Test-1841#sh ip route vrf FVRF 0.0.0.0

Routing Table: FVRF
Routing entry for 0.0.0.0/0, supernet
  Known via "static", distance 1, metric 0, candidate default path
  Routing Descriptor Blocks:
  * 10.1.1.2
      Route metric is 0, traffic share count is 1
Test-1841#
Test-1841#
Test-1841#
Test-1841#sh ip route 0.0.0.0

Routing entry for 0.0.0.0/0, supernet
  Known via "ospf 100", distance 110, metric 101, candidate default path, type inter area
  Last update from 10.2.2.254 on Tunnel1, 3d23h ago
  Routing Descriptor Blocks:
  * 10.2.2.254, from 10.2.2.254, 3d23h ago, via Tunnel1
      Route metric is 101, traffic share count is 1
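
For completeness, here is a minimal sketch of the spoke-side OSPF configuration consistent with the outputs above. The process ID (100) and the point-to-multipoint network type come from the text; the area number and stub flag are assumptions (an inter-area candidate default like the one shown is what an ABR injects into a stub area):

interface Tunnel1
 ip ospf network point-to-multipoint
!
router ospf 100
 area 1 stub
 network 10.2.2.0 0.0.0.255 area 1

The hub's tunnel interface would need the matching point-to-multipoint network type.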

Front-door VRF works best when used on both hub and spoke routers. Why? Well, anytime a new spoke is to be provisioned, you have to do zero configuration on the hub site. Configure the spoke router, ship it out the door, and have the field plug it in at their convenience.


EIGRP Edge Routing w/ DMVPN

Posted in Networking by shaw38 on April 20, 2009

We have a ton of remote sites (payment centers, small offices, etc.) with a single router on-site, typically an 871 series hanging off a cable modem with a static IP. These routers are terminated via point-to-point GRE tunnels back to a pair of central hub sites. Because we run OSPF and are poorly summarized, these routers can carry up to 7500 prefixes depending on the area type of the market. All these routers really need is a default route back towards the two hubs in an active/standby model, with return traffic to the spokes preferring the active hub site for symmetry. Secondly, provisioning new remotes requires touching both the hub and spoke routers to build tunnel interfaces. The configs on the hub sites can be fairly long and annoying to troubleshoot, especially with the static IPs of remote sites changing over time and a lack of cleanup of old configs.

To address the size of the RIB on the spokes, we could use statics and redistribution on the hubs and floating statics on the spoke routers. That would reduce the RIB on the spokes, but it's fairly high maintenance on both of the hub sites. With a static for the loopback, voice, and data VLANs at each site, you're looking at 90 or more statics on some of the hub routers. That's not helping the config complexity problem. Distribute lists do not work with OSPF in the outbound direction, and the interface-level “ip ospf database-filter all out” won't help us leak a default to the spokes. We need distance vector. EIGRP stub flags and a 0.0.0.0/0 summary towards the spokes would be perfect.

To cut down on the tunnel interfaces, the obvious choice is DMVPN. There is little to no spoke-to-spoke traffic, so DMVPN will serve purely as a tool for configuration simplification.

Here's the DMVPN configuration for the two hub sites. First, notice there are no static unicast maps, multicast maps, or NHS configuration pointing to the opposite hub site. Basically, I don't want a tunnel, and ultimately an EIGRP neighbor relationship, built from Hub site 1 to Hub site 2. There is no reason to have this in place, and it would only cause issues since both hubs are advertising only a default route out their tunnel interfaces. Secondly, Hub site 1 has its tunnel interface delay set to 100 so all spokes will prefer the default route via Hub site 1 after calculating their feasible distance. Lastly, the default route being generated to the spokes is set with an administrative distance of 254. The reason for this is that when you manually summarize, a summary route pointing to Null0 is generated in the local routing table with an administrative distance of 5. While this is not necessarily a problem for CIDR blocks where more-specific prefixes exist in the RIB, it can cause traffic following a default route to be black-holed. We want to set this null route's distance above that of any IGP-learned default route so it is never preferred. Oh, and notice split-horizon and next-hop-self for EIGRP are not being disabled on the tunnel interface. We are not interested in spoke-to-spoke tunnels, nor are we interested in spokes carrying all routes within the DMVPN. Disabling split-horizon would allow the spoke prefixes to be advertised back out the tunnel interfaces to the other spokes. Disabling next-hop-self would allow these prefixes to be advertised with a next-hop of the advertising spoke router (which is where the NHRP query would come into play for a spoke-to-spoke tunnel).
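
As a quick sanity check on why the delay values drive the spokes' path selection: with default K values, the classic EIGRP composite metric reduces to

metric = 256 * (10^7 / minimum bandwidth in kbps + cumulative delay in tens of microseconds)

With equal bandwidths along both paths, the feasible distance through Hub site 1 (delay 100) works out lower than through Hub site 2 (delay 200), so the spokes install Hub site 1's default.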

Hub Site 1:
interface Tunnel0
 ip address 10.255.0.1 255.255.255.0
 no ip redirects
 ip nhrp authentication ccie
 ip nhrp map multicast dynamic
 ip nhrp network-id 10
 ip summary-address eigrp 10 0.0.0.0 0.0.0.0 254
 delay 100
 tunnel source FastEthernet1/0
 tunnel mode gre multipoint
 tunnel key 10
!
router eigrp 10
 network 10.255.0.0 0.0.0.255
 no auto-summary

Hub Site 2:
interface Tunnel0
 ip address 10.255.0.2 255.255.255.0
 no ip redirects
 ip nhrp authentication ccie
 ip nhrp map multicast dynamic
 ip nhrp network-id 10
 ip summary-address eigrp 10 0.0.0.0 0.0.0.0 254
 delay 200
 tunnel source FastEthernet1/0
 tunnel mode gre multipoint
 tunnel key 10
!
router eigrp 10
 network 10.255.0.0 0.0.0.255
 no auto-summary

 

Here's the DMVPN configuration for the spoke sites. It's pretty straightforward. I found the “ip nhrp registration timeout” command had to be added on the spokes. When testing failure of a hub site, there were issues getting EIGRP adjacencies to reform with the failed hub router when it came back online. Because we don't want Hub site 1 and Hub site 2 to be NHRP peers, the hub sites will not query one another for NHRP mappings. So instead, the spokes re-register with their NHSs every 5 seconds. When a hub comes back online, it will receive the registration message from the spoke, rebuild its mGRE tunnel, and reform its EIGRP adjacency. The spoke routers do not necessarily need to be configured as stubs, as they should never be queried, but it is good practice.

Spoke 1:
interface Tunnel0
 ip address 10.255.0.3 255.255.255.0
 no ip redirects
 ip nhrp authentication ccie
 ip nhrp map multicast dynamic
 ip nhrp map 10.255.0.1 10.1.1.1
 ip nhrp map multicast 10.1.1.1
 ip nhrp map multicast 10.1.1.2
 ip nhrp map 10.255.0.2 10.1.1.2
 ip nhrp network-id 10
 ip nhrp nhs 10.255.0.1
 ip nhrp nhs 10.255.0.2
 ip nhrp registration timeout 5
 tunnel source FastEthernet0/0
 tunnel mode gre multipoint
 tunnel key 10
!
router eigrp 10
 passive-interface default
 no passive-interface Tunnel0
 network 4.4.4.4 0.0.0.0
 network 10.255.0.0 0.0.0.255
 network 10.255.1.0 0.0.0.255
 no auto-summary
 eigrp stub connected

 

So that takes care of getting a dynamic default to the spokes, but now we need to advertise the routes from the spokes back into the rest of the network. Remember, we set the delay of the tunnel interface at Hub site 1 so all spokes would prefer its default route. When we redistribute EIGRP into OSPF, we will be injecting the spoke prefixes as E2s with a metric of 100 from Hub site 1 and a metric of 200 from Hub site 2. This should give us traffic symmetry. Also, we are only permitting specific blocks for redistribution. We don't want anyone routing any prefix they damn please.

Hub Site 1:
router ospf 100
 router-id 1.1.1.1
 log-adjacency-changes
 redistribute eigrp 10 subnets route-map EIGRP-->OSPF
!
ip prefix-list DMVPN_SPOKES seq 5 permit 10.255.0.0/16 le 32
!
route-map EIGRP-->OSPF permit 10
 match ip address prefix-list DMVPN_SPOKES
 set metric 100

Hub Site 2:
router ospf 100
 router-id 2.2.2.2
 log-adjacency-changes
 redistribute eigrp 10 subnets route-map EIGRP-->OSPF
!
ip prefix-list DMVPN_SPOKES seq 5 permit 10.255.0.0/16 le 32
!
route-map EIGRP-->OSPF permit 10
 match ip address prefix-list DMVPN_SPOKES
 set metric 200

 

With this configuration, provisioning should be zero touch on the hub site routers. I'll be adding the crypto configurations at a later date.
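
In the meantime, if you want encryption on these tunnels, a minimal tunnel-protection sketch along the lines of the front-door VRF post above (pre-shared keys, transport mode, no VRF-aware keyring needed here) would look roughly like the following on each hub and spoke; the key string is a placeholder:

crypto isakmp policy 1
 authentication pre-share
 group 2
 encr 3des
!
crypto isakmp key PR35H4R3D address 0.0.0.0 0.0.0.0
!
crypto ipsec transform-set DMVPN esp-3des
 mode transport
!
crypto ipsec profile DMVPN
 set transform-set DMVPN
!
interface Tunnel0
 tunnel protection ipsec profile DMVPN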
