A VMware@SoftLayer CookBook, v1.1, April 30, 2014
VMware NSX @SoftLayer

Authors & Contributors:
IBM: Shane B. Mcelligott, Dani Roisman
VMware: Merlin Glynn (mglynn@vmware.com), Chris Wall, Geoff Wing, Marcos Hernandez, Coby Litvinskey
VMware NSX @SoftLayer

I. Summary

The core objective of this series of VMware@SoftLayer CookBooks is to equip vSphere administrators with the key information needed to deploy VMware vSphere environments within SoftLayer. SoftLayer offers VMware administrators a unique capability: consuming Bare Metal instances and network, storage, and backup/recovery constructs from SoftLayer in a self-service cloud manner. These constructs can be used to deploy fully functional vSphere implementations, architected to extend or replace on-premises vSphere implementations (VMware@Home). VMware@SoftLayer enables VMware administrators to realize Hybrid Cloud characteristics rapidly and cost-effectively by deploying into SoftLayer's enterprise-grade global cloud. This is a key differentiator from other cloud providers such as Amazon Web Services: vSphere workloads and catalogs can be provisioned onto VMware vSphere environments within SoftLayer's global cloud datacenters without modification to VMware VMs or guests. A common vSphere hypervisor and management/orchestration platform makes this possible. vSphere implementations in SoftLayer also enable the use of other components of the VMware vCloud Suite, such as vCloud Automation Center, vCenter Operations Management Suite, VSAN, Site Recovery Manager, vCenter Orchestrator, and NSX.

This document focuses on leveraging VMware NSX to provide SDN constructs for VMware@SoftLayer deployments. It presents information in the following sections:

II. VMware NSX Overview
III. Key Design Concepts for NSX @SoftLayer
IV. NSX @SoftLayer Recipe (How To Deploy @ SoftLayer)
V. NSX @SoftLayer Advanced Use Cases
  - NSX Logical Switches (VXLAN) Across SoftLayer Pods
  - BYOIP (Bring Your Own IP): Routing Customer Subnets from VMware@Home
  - BYOIP (Bring Your Own IP): BCDR Recovery from VMware@Home
VI. Engaging VMware for NSX @SoftLayer

Note: This document is intended for experienced vSphere Administrators and assumes a basic understanding of the VMware@SoftLayer architecture, which is documented at http://knowledgelayer.softlayer.com/learning/deploy-vmwaresoftlayer. Some topics assume that the reader has basic deployment skills to install and configure vSphere & vCenter 5.x, and a fundamental understanding of Layer 2 and Layer 3 networking.

Note: This document is NOT intended to provide enablement on basic operating system tasks within VM guest operating systems.
II. VMware NSX Overview

VMware NSX is a software networking and security virtualization platform that delivers the operational model of a virtual machine for the network. Virtual networks reproduce the Layer 2 - Layer 7 network model in software, allowing complex multi-tier network topologies to be created and provisioned programmatically in seconds, without the need for additional SoftLayer Private Networks. NSX also provides a new model for network security: security profiles are distributed to and enforced by virtual ports, and they move with virtual machines.

NSX supports VMware's software-defined data center strategy. By extending the virtualization capabilities of abstraction, pooling, and automation across all data center resources and services, the software-defined data center architecture simplifies and speeds the provisioning and management of compute, storage, and networking resources through policy-driven automation. By virtualizing the network, NSX delivers a new operational model for networking that breaks through current physical network barriers and enables VMware@SoftLayer to achieve better speed and agility with reduced costs.

NSX includes a library of logical networking services: logical switches, logical routers, logical firewalls, logical load balancers, logical VPN, and distributed security. You can create custom combinations of these services in isolated software-based virtual networks that support existing applications without modification, or deliver unique requirements for new application workloads. Virtual networks are programmatically provisioned and managed independent of SoftLayer networking constructs. This decoupling from hardware introduces agility, speed, and operational efficiency that can transform datacenter operations. Benefits of NSX include:

- Datacenter automation
- Self-service networking services
- Rapid application deployment with automated network and service provisioning
- Isolation of dev, test, and production environments on the same SoftLayer Bare Metal infrastructure
- Multi-tenant clouds within a single SoftLayer account

NSX Network Services

NSX can be configured through the vSphere Web Client, a command line interface (CLI), and a REST API. The core network services offered by NSX are:

Logical Switches

A cloud deployment or a virtual data center such as VMware@SoftLayer may host a variety of applications across multiple tenants. These applications and tenants require isolation from each other for security, fault isolation, and avoidance of overlapping IP addressing. The NSX logical switch creates logical broadcast domains or segments (VXLAN vWires) to which an application or tenant virtual machine can be logically wired. This allows for flexibility and speed of deployment while still providing all the characteristics of a physical network's broadcast domains (VLANs), without physical Layer 2 sprawl at SoftLayer. Logical switches allow thousands of tenant networks to be provisioned on top of a single SoftLayer Private Network (VLAN).
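As a concrete illustration of driving logical switch creation through the REST API mentioned above, the Python sketch below (standard library only) builds the XML body that NSX Manager expects when creating a logical switch (virtual wire). The endpoint path and element names follow our reading of the NSX 6.x API and should be verified against your NSX API guide; Unicast mode is hard-wired as the default here because, as discussed later, it is the only mode supported at SoftLayer.

```python
import xml.etree.ElementTree as ET

def logical_switch_payload(name, description="", control_plane_mode="UNICAST_MODE"):
    """Build the XML body for creating an NSX logical switch (virtual wire).

    SoftLayer deployments require UNICAST_MODE because IGMP/multicast routing
    is not available on SoftLayer Private Networks.
    """
    spec = ET.Element("virtualWireCreateSpec")
    ET.SubElement(spec, "name").text = name
    ET.SubElement(spec, "description").text = description
    ET.SubElement(spec, "tenantId").text = "virtual wire tenant"
    ET.SubElement(spec, "controlPlaneMode").text = control_plane_mode
    return ET.tostring(spec, encoding="unicode")

# The payload would be POSTed to the NSX Manager, e.g. (assumed endpoint):
#   POST https://<nsx-manager>/api/2.0/vdn/scopes/<scope-id>/virtualwires
payload = logical_switch_payload("Tenant-A-Web-Tier")
```

The names `Tenant-A-Web-Tier` and `<scope-id>` are illustrative placeholders, not values from this document.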
A logical switch is distributed and can span arbitrarily large compute clusters, even across SoftLayer Pods within the same SoftLayer datacenter. This allows virtual machine mobility within the datacenter without the limitations of physical Layer 2 (VLAN) boundaries across SoftLayer Pods.

Logical Routers

Dynamic routing provides the necessary forwarding information between Layer 2 broadcast domains (VXLAN vWires / logical switches), allowing you to shrink Layer 2 broadcast domains and improve network efficiency and scale. NSX extends this intelligence to where the workloads reside, providing East-West routing functions. This allows more direct virtual machine to virtual machine communication without the cost or latency of additional hops. At the same time, NSX also provides North-South connectivity in and out of SoftLayer datacenters, enabling tenants to access public networks securely and efficiently.

Logical Firewall

The Logical Firewall provides security mechanisms for dynamic virtual data centers. The Distributed Firewall component of an NSX Logical Firewall allows you to segment virtual datacenter entities, such as virtual machines, based on VM names and attributes, user identity, and vCenter objects like datacenters and hosts, as well as traditional networking attributes like IP addresses and VLANs. The Edge Firewall component helps you achieve key perimeter security needs such as building DMZs based on IP/VLAN constructs, tenant-to-tenant isolation in multi-tenant virtual data centers, Network Address Translation (NAT), VPNs, and user-based SSL VPNs. Edge Firewalls can be leveraged in combination with, or in place of, Vyatta & Fortinet services from SoftLayer for perimeter protection. The Firewall Flow Monitoring feature displays network activity between virtual machines at the application protocol level. You can use this information to audit network traffic, define and refine firewall policies, and identify threats to your network.
Logical Virtual Private Networks (VPNs)

SSL VPN-Plus allows remote users to access private corporate applications. IPSec VPN offers site-to-site connectivity between an NSX Edge instance and remote sites (VMware@Home). L2 VPNs allow you to extend your datacenter by letting virtual machines retain network connectivity across geographical boundaries, across VMware@SoftLayer datacenters, and between VMware@SoftLayer and VMware@Home.

Logical Load Balancer

The NSX Edge load balancer enables network traffic to follow multiple paths to a specific destination. It distributes incoming service requests evenly among multiple servers in such a way that the load distribution is transparent to users. Load balancing thus helps achieve optimal resource utilization, maximize throughput, minimize response time, and avoid overload. NSX Edge provides load balancing up to Layer 7.

Service Composer

Service Composer helps you provision and assign network and security services to applications in a virtual infrastructure. These services can be mapped and applied to the virtual machines in security groups. Data Security provides visibility into sensitive data stored within your organization's virtualized and cloud environments, including VMware@SoftLayer. Based on the violations reported by NSX Data Security, you can ensure that sensitive data is adequately protected and assess compliance with regulations around the world.
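The "distributes incoming service requests evenly" behavior described above can be illustrated with the simplest balancing policy, round-robin. This is purely an illustrative sketch of the distribution idea, not NSX code, and the server names are hypothetical:

```python
from collections import Counter
from itertools import cycle

class RoundRobinBalancer:
    """Minimal round-robin scheduler: each pick hands the next request to the
    next server in the pool, so load spreads evenly over time."""

    def __init__(self, servers):
        self._pool = cycle(servers)

    def pick(self):
        return next(self._pool)

lb = RoundRobinBalancer(["web-01", "web-02", "web-03"])
picks = [lb.pick() for _ in range(6)]
spread = Counter(picks)  # each server receives the same number of requests
```

A production balancer such as NSX Edge layers health checks, persistence, and Layer 7 rules on top of the scheduling policy.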
NSX Extensibility

VMware partners can integrate their network service solutions with the NSX platform, giving customers an integrated experience across VMware products and partner solutions. Data center operators can provision complex, multi-tier virtual networks in seconds, independent of the underlying network topology or components from SoftLayer.

NSX Core Components

This section describes the core NSX components that would be deployed on VMware@SoftLayer. These components can be configured and managed through the vSphere Web Client, a command line interface (CLI), and a REST API. VMware NSX requires a functional VMware@SoftLayer environment with at least vSphere & vCenter version 5.5 deployed. All components described in this section are deployed as VMware appliance VMs running on VMware@SoftLayer. NSX components are not supported as SoftLayer CCIs. It is therefore recommended that VMware@SoftLayer guidance be followed to create a dedicated ESX Management Cluster; an Edge Services Cluster may also be required, as discussed further in this document.

Figure 1
NSX Manager

The NSX Manager is the centralized network management component of NSX and is installed as a virtual appliance on an ESX host in your vCenter Server environment. The VMware@SoftLayer architecture recommends this VM be deployed on a dedicated Management ESX Cluster. One NSX Manager maps to a single vCenter Server environment and to multiple NSX Edge, vShield Endpoint, and NSX Data Security instances.

NSX vSwitch

NSX vSwitch is the software that operates on VMware@SoftLayer ESX hosts to form a software abstraction layer between servers and the physical network. As the demands on datacenters continue to grow and accelerate, requirements related to speed and access to the data itself continue to grow as well. In most infrastructures, virtual machine access and mobility depend on the physical networking infrastructure and the physical networking environments in which they reside. This can force virtual workloads into less than ideal environments due to Layer 2 or Layer 3 boundaries, such as being tied to specific SoftLayer Private Networks (VLANs) in specific Pods. NSX vSwitch allows you to place these virtual workloads on any available infrastructure in the datacenter, regardless of the underlying physical network infrastructure. This allows not only increased flexibility and mobility, but also increased availability and resilience.

NSX Controller

NSX Controller is an advanced distributed state management system that controls virtual networks and VXLAN overlay transport tunnels. It is the central control point for all logical switches within a network and maintains information about all virtual machines, hosts, logical switches, and VXLANs. The controller supports three logical switch control plane modes: Multicast, Unicast, and Hybrid. These modes decouple NSX from the physical network. VMware@SoftLayer requires Unicast mode, as SoftLayer Private Networks (VLANs) do not offer IGMP services for Multicast or Hybrid mode.
The NSX Controllers use Unicast mode with VXLAN tunnel endpoints (VTEPs) to provide MAC learning and other functions that allow VXLAN Broadcast, Unknown unicast, and Multicast (BUM) traffic within a logical switch. Unicast mode replicates all BUM traffic locally on the host and requires no physical network configuration beyond Layer 3 connectivity between VTEPs. NSX Controllers are deployed by the NSX Manager as a minimum set of 3 controller nodes, along with various other nodes to support (distributed) Layer 3 routing services. All of the nodes are deployed as virtual machines and are managed by the NSX Manager on an ESX Management Cluster at VMware@SoftLayer.

NSX Edge

NSX Edge provides network edge security and gateway services to isolate a virtualized network. You can install an NSX Edge either as a logical (distributed) router or as a services gateway. The NSX Edge logical (distributed) router provides East-West distributed routing with tenant IP address space and data path isolation. Virtual machines or workloads that reside on the same host on different subnets can communicate with one another without having to traverse a traditional routing interface. The NSX Edge Gateway connects isolated stub networks to shared (uplink) networks by providing common gateway services such as DHCP, VPN, NAT, dynamic routing, and load balancing. Common deployments of NSX Edge include the DMZ, VPN extranets, and multi-tenant cloud environments where the NSX Edge creates virtual boundaries for each tenant.
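The encapsulation performed by the VTEPs described above can be made concrete. The sketch below packs the 8-byte VXLAN header defined by RFC 7348, which a VTEP prepends (after the outer Ethernet/IP/UDP headers) to every tunneled frame; the 24-bit VNI is what identifies the logical switch segment.

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP destination port for VXLAN (RFC 7348)

def vxlan_header(vni):
    """Build the 8-byte VXLAN header a VTEP prepends to an L2 frame.

    Flags byte 0x08 sets the 'I' bit, marking the VNI field as valid; the
    24-bit VNI identifies the logical switch (vWire) segment.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    flags_reserved = 0x08 << 24  # I-flag set, remaining reserved bits zero
    vni_reserved = vni << 8      # VNI occupies the upper 24 bits of word 2
    return struct.pack("!II", flags_reserved, vni_reserved)

hdr = vxlan_header(5001)  # e.g. a segment ID from a 5000-7000 pool
```

The 50 bytes of total outer-header overhead this encapsulation implies is why a larger transport MTU is needed, as discussed in section III.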
III. Key Design Concepts for NSX@SoftLayer

Planning for NSX@SoftLayer

This section presents some of the key planning concepts and constructs to consider when deploying NSX@SoftLayer. For full planning and installation documentation for NSX, please refer to: http://pubs.vmware.com/nsx-6/index.jsp and http://www.vmware.com/files/pdf/products/nsx/vmw-nsx-network-virtualization-design-guide.pdf

NSX@SoftLayer Logical Overview

Figure 2

1. Dedicated Management Cluster (From Figure 2): While not a strict requirement for VMware@SoftLayer in general, a dedicated management cluster is a strict requirement when NSX is used with VMware@SoftLayer. The management cluster is required to host the NSX Controller VMs and the NSX Manager virtual appliance. NSX Manager is delivered as an OVF that must be deployed on an ESX host (CCI and Bare Metal instances are not supported). NSX Manager will deploy a minimum of 3 NSX Controller VMs, which also require an ESX/vSphere cluster. A best practice is to dedicate a vSphere Management Cluster with adequate N+1 capacity to host all required management VMs and virtual appliances for your solution. NSX Manager and Controller cluster sizing information can be found here: http://pubs.vmware.com/nsx-6/topic/com.vmware.nsx.install.doc/GUID-311BBB9F-32CC-4633-9F91-26A39296381A.html
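The N+1 sizing exercise for the management cluster can be sketched as a small calculation. The three controller VMs come from the minimum set noted above; the per-VM vCPU/RAM footprints below are illustrative assumptions only, so check the VMware sizing guide linked above for authoritative numbers.

```python
import math

# Illustrative resource footprints (vCPU, GB RAM) for management VMs.
# These are ASSUMED example values, not figures from this document.
MGMT_VMS = {
    "vcenter":          (4, 16),
    "nsx_manager":      (4, 12),
    "nsx_controller_1": (4, 4),
    "nsx_controller_2": (4, 4),
    "nsx_controller_3": (4, 4),  # minimum of 3 controllers per the text
}

def hosts_needed(vms, host_vcpu, host_ram_gb, spare_hosts=1):
    """Size a management cluster with N+1 capacity for host failure."""
    total_vcpu = sum(c for c, _ in vms.values())
    total_ram = sum(r for _, r in vms.values())
    base = max(math.ceil(total_vcpu / host_vcpu),
               math.ceil(total_ram / host_ram_gb))
    return base + spare_hosts

# A hypothetical 24-core / 128 GB SoftLayer Bare Metal host:
n = hosts_needed(MGMT_VMS, host_vcpu=24, host_ram_gb=128)
```

With these assumed footprints the demand fits on one host, so N+1 yields a two-host minimum; real deployments should also budget for vCPU overcommit policy and failover headroom.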
2. Dedicated Edge Services Cluster (From Figure 2): Network traffic flows can be viewed at a high level as North/South and East/West patterns. North/South traffic is traditionally traffic that leaves a Layer 2 domain and is routed at Layer 3 to various public networks and/or other linked datacenters across VPN or SoftLayer Point of Presence connections. One or more NSX Edge VMs will typically provide these North/South services (in addition to the various other services mentioned in section II). It is highly recommended to dedicate an Edge Services Cluster. This allows a specific set of Edge ESX hosts to be connected to SoftLayer Public Networks for public/untrusted traffic, as well as dedicating ESX vmnics to traffic for North/South services. An Edge Services Cluster should be sized to host all of the expected NSX Edge virtual appliances (NSX Edge Compact: 1 vCPU & 512 MB, Large: 2 vCPU & 1 GB, Quad Large: 4 vCPU & 1 GB, X-Large: 6 vCPU & 8 GB).

Figure 3

It should be noted that it is possible to leverage a common Management & Edge Services Cluster, with one very important consideration. Management clusters in VMware@SoftLayer deployments are recommended to leverage vSphere Standard vSwitches for virtual networking. This ensures that management clusters hosting vCenter VMs have available virtual ports to start vCenter in a reboot/failover event. NSX requires that VTEPs (VXLAN Tunnel Endpoints) be deployed on a vSphere Distributed Virtual Switch, and ESX vmnics can only belong to one vSwitch (either a Standard or a Distributed switch, not both). For a common Management & Edge Services Cluster to be used at SoftLayer, each ESX host must therefore have no fewer than 4 private network NICs.
2 private NICs will be associated with a vSphere Standard vSwitch and standard port groups for vCenter and other management VMs; the remaining 2 private NICs should then be associated with a vSphere Distributed Virtual Switch, with a VXLAN VTEP deployed (Figure 3). This allows NSX Edges on the common cluster to place an interface on any logical switch for which they provide North/South or other services.
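The Edge appliance footprints listed in point 2 lend themselves to a quick sizing calculation for the Edge Services Cluster. The sketch below uses the vCPU/RAM figures from the text; the example deployment mix is hypothetical.

```python
# NSX Edge appliance footprints from the sizing note in point 2: (vCPU, GB RAM)
EDGE_SIZES = {
    "compact":    (1, 0.5),
    "large":      (2, 1.0),
    "quad-large": (4, 1.0),
    "x-large":    (6, 8.0),
}

def edge_cluster_demand(deployment):
    """Total vCPU/RAM an Edge Services Cluster must supply.

    `deployment` maps an appliance size name to the number of appliances of
    that size expected in the design.
    """
    vcpu = sum(EDGE_SIZES[size][0] * count for size, count in deployment.items())
    ram = sum(EDGE_SIZES[size][1] * count for size, count in deployment.items())
    return vcpu, ram

# Hypothetical mix: two Large Edge gateways plus one X-Large for a heavy tenant.
vcpu, ram = edge_cluster_demand({"large": 2, "x-large": 1})
```

Remember to add N+1 host headroom on top of the raw demand, as with the management cluster.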
3. Disable LACP for the Management Cluster Private Interfaces (From Figure 2): It is a best practice to have a dedicated ESX Management Cluster for VMware@SoftLayer to host vCenter and other management VMs & virtual appliances, and it is highly recommended to have multiple NICs on that cluster for redundancy and performance. SoftLayer deploys all paired NICs in an LACP bundle as the default configuration, but vSphere Standard switches DO NOT support the LACP protocol. A vSphere Standard switch is highly recommended to host vCenter vNICs. This is suggested to prevent outages if a vCenter instance fails: Distributed vSwitches require a control plane to enable a port connected to newly powered-on VMs and to forward packets on the connected ports of a DVS. By default, vCenter provides that control plane function; if a vCenter instance is shut down and recovered while its vNIC is attached to a port on a DVS it manages, there will be no control plane available to allow port forwarding to occur (a chicken-and-egg problem). For these reasons, a dedicated ESX Management Cluster cannot use both private NICs with a Standard vSwitch unless the LACP bundle is removed. This is also why a dedicated Edge Services Cluster is recommended, as a DVS is required for NSX VTEPs.

4. Enhanced LACP for the Capacity & Edge Services Cluster Interfaces (From Figure 2): As previously mentioned, SoftLayer places multiple NICs into an LACP bundle as the default configuration on all hosts. All ESX hosts in a Capacity or Edge Services Cluster should leverage dual NICs for redundancy and performance. NSX VTEPs do support LACP LAGs (Link Aggregation Groups) on a DVS. It is recommended to set DVS LAGs to Passive LACP mode with source and destination IP address load balancing at SoftLayer (Figure 4). There should be only a single VTEP vmk per Enhanced LACP LAG. Additionally, each DVS should have LLDP (Link Layer Discovery Protocol) enabled and set to Both.
LACPv1 for VTEPs is supported but not recommended for VMware@SoftLayer.

Figure 4

5. Capacity Clusters That Cross SoftLayer POD Boundaries (From Figure 2): A SoftLayer POD is a unit of approximately 5000 bare metal servers with a unique set of customer backend switch & router boundaries. Private Networks (VLANs) are unique to a POD, and a SoftLayer datacenter may have one or more PODs. VLANs with the same tag ID are still unique: if POD 1 in a datacenter has Private Network VLAN 1000 provisioned, and POD 2 has the same VLAN tag 1000 provisioned to the same tenant, they are still 2 separate Layer 2 broadcast domains. (Note: SoftLayer Private Network Spanning allows Private Networks to route between each other across PODs/datacenters; it does not extend a Layer 2 broadcast domain.) If your VMware@SoftLayer capacity spans PODs, and you
intend to build NSX logical switches across the POD boundary, you must ensure that the VTEPs from each capacity cluster can communicate with each other.

6. VXLAN On SoftLayer Private Networks (From Figure 2): A dedicated SoftLayer Private Network is required for VXLAN. VXLAN is the encapsulation protocol leveraged by NSX; it enables NSX logical switches to present a Layer 2 broadcast domain over a Layer 3 network (depicted by #7 in Figure 2). VXLAN and logical switches provide a key feature for VMware@SoftLayer by allowing VM networks to extend within a datacenter, regardless of the POD and/or VLAN placement of ESX host NICs. A few key points about SoftLayer networks must be understood for VXLAN. First, only the Unicast control plane mode is supported for NSX at SoftLayer (Figure 5). This is a limitation due to SoftLayer not supporting IGMP snooping and multicast routing over Private Networks. Although multicast broadcast packets are allowed within a SoftLayer Private Network, the multicast address range is limited to 224.0.0.115-224.0.0.250, and traffic will not be routed outside of a single Private Network/VLAN, making Unicast mode the more favorable option. Second, all SoftLayer Private Networks have jumbo frames enabled and therefore support the larger MTU required by VXLAN; 1600 is the recommended default value, but it can be tuned up to 9000 if the network traffic traversing the physical switches will benefit from jumbo frames.

Figure 5

It is also important to understand that a tenant is NOT guaranteed that all of their ESX hosts will share common Private Networks/VLANs on the SoftLayer physical backend switches as new hosts are provisioned. As depicted in Figure 2, ESX hosts that are deployed across SoftLayer datacenter PODs cannot share the same Private Networks/VLANs.
This scenario is likely to occur as ESX host capacity is gradually added over time, and new hosts can be limited to connectivity in a new POD or datacenter. It is critical that SoftLayer tenants allow the various IP subnets linked to the VXLAN-dedicated Private Networks/VLANs in each POD/datacenter the ability to route and communicate with each other: the VTEPs must be able to communicate freely. Although it is possible to ACL-filter VXLAN UDP traffic through a firewall, it is highly recommended that the VXLAN Private Networks/VLANs simply allow the VTEPs to communicate freely with each other.

7. NSX Logical Switches (VXLAN-based vWire Networks) On SoftLayer (From Figure 2): NSX logical switches (VXLAN) provide a key feature for VMware@SoftLayer by allowing VM networks to extend within a datacenter, regardless of the POD and/or VLAN placement of ESX host NICs (Figure 6). It is important to note that this capability to extend a Layer 2 broadcast domain across PODs can also allow a tenant to extend a Layer 2 network across SoftLayer datacenters. This may not be a desirable architectural decision, depending on the amount of East/West traffic generated by the workloads on the logical switch; also, Layer 3 routing services out of the network could still be pinned to a single datacenter, possibly causing undesirable network trombone effects.

Figure 6
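The VXLAN MTU requirement discussed in point 6 follows from simple arithmetic: encapsulation adds a fixed outer-header cost to every frame, so the transport network's MTU must exceed the guest MTU by at least that amount. A short calculation, assuming untagged outer frames and IPv4 transport:

```python
# Per-packet overhead a VTEP adds when encapsulating a frame in VXLAN:
OUTER_ETHERNET = 14  # outer MAC header (18 if the outer frame is 802.1Q tagged)
OUTER_IPV4 = 20
OUTER_UDP = 8
VXLAN = 8

VXLAN_OVERHEAD = OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN  # 50 bytes

def transport_mtu_ok(transport_mtu, guest_mtu=1500):
    """True if the transport network MTU leaves room for VXLAN overhead."""
    return transport_mtu - VXLAN_OVERHEAD >= guest_mtu

# The recommended 1600-byte transport MTU covers a standard 1500-byte guest frame:
ok_1600 = transport_mtu_ok(1600)
# ...while an untuned 1500-byte transport MTU would not:
ok_1500 = transport_mtu_ok(1500)
```

This is why 1600 is the recommended default on the VXLAN transport Private Network, with headroom to spare; tuning up to 9000 simply enlarges the guest payload that fits.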
8. NSX Edge (From Figure 2): An NSX Edge provides various services to logical switches, logical (distributed) routers, and other traditional vSphere port groups. As shown in Figure 7, a single NSX Edge appliance is providing first-hop services for Logical Switch A subnet 192.168.0.0/16, next-hop services over a transit network (172.16.100.0/24) for Logical Switch B subnet 10.1.1.0/16, and Edge Gateway services connecting all attached networks to the SoftLayer Public Network for external access (192.155.1.0/27). To provide this capability, the design point of a dedicated Edge Services Cluster for VMware@SoftLayer is beneficial.

Figure 7
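A basic sanity check when planning an Edge topology like the one in Figure 7 is that no two attached networks overlap, since an Edge cannot route between overlapping subnets without NAT. The sketch below models the four networks from the figure with Python's `ipaddress` module (`strict=False` tolerates the host-style notation used for Logical Switch B):

```python
from ipaddress import ip_network
from itertools import combinations

# The networks attached to the single NSX Edge in Figure 7.
networks = {
    "logical-switch-a": ip_network("192.168.0.0/16"),
    "transit":          ip_network("172.16.100.0/24"),
    "logical-switch-b": ip_network("10.1.1.0/16", strict=False),
    "softlayer-public": ip_network("192.155.1.0/27"),
}

def overlapping_pairs(nets):
    """Return name pairs whose subnets overlap; an Edge cannot route between
    overlapping networks without NAT."""
    return [(a, b)
            for (a, net_a), (b, net_b) in combinations(nets.items(), 2)
            if net_a.overlaps(net_b)]

conflicts = overlapping_pairs(networks)
```

For the Figure 7 addressing plan the check comes back clean, which is what allows the Edge to provide plain routed first-hop and next-hop services between them.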
9. NSX Logical Routers (From Figure 2): NSX Distributed Logical Routers (DLRs) run in the ESX kernel of each NSX-prepared host (although there is a control plane VM that is typically provisioned on the Edge Services Cluster). In addition to basic Layer 3 functionality (DLRs do not provide all of the Layer 3 and above services that an NSX Edge Gateway is capable of), the DLR can bridge SoftLayer Private or Public Networks. Bridging gives SoftLayer Bare Metal servers the ability to interface with VMs that are connected to VXLAN-based logical networks. As shown in Figure 8, Bare Metal database instances can communicate at Layer 2 with the NSX logical network and its associated subnet(s) (192.168.0.0/16). It should also be noted that using customer IP space in the 10.0.0.0/8 address range at SoftLayer may have certain design requirements when routing across SoftLayer BCRs (Backend Customer Routers). The 10.0.0.0/8 range can easily be isolated and routed within NSX logical switches, DLRs, and NSX Edge devices, but if that IP space must route over a SoftLayer BCR, then VPN or NAT technologies may be required. It is recommended to work with SoftLayer Sales Engineering for any scenario where 10.0.0.0/8 customer address ranges are to be routed over SoftLayer Private Networks.

Figure 8
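The 10.0.0.0/8 caveat above reduces to a containment test: a tenant subnet only triggers the VPN/NAT requirement when it collides with SoftLayer's own private space and must cross a BCR. A minimal check with the `ipaddress` module (the example subnets are illustrative):

```python
from ipaddress import ip_network

SOFTLAYER_PRIVATE = ip_network("10.0.0.0/8")  # SoftLayer's own private space

def needs_nat_or_vpn(tenant_subnet):
    """True if a tenant subnet collides with SoftLayer's 10.0.0.0/8 space and
    therefore needs NAT or a VPN when routed across a SoftLayer BCR.

    Entirely inside NSX (logical switches, DLRs, Edges) the range can be used
    as-is, since that routing never touches the BCR.
    """
    return ip_network(tenant_subnet).overlaps(SOFTLAYER_PRIVATE)

flag_10 = needs_nat_or_vpn("10.20.0.0/16")     # collides with SoftLayer space
flag_192 = needs_nat_or_vpn("192.168.0.0/16")  # safe: distinct RFC 1918 range
```

In a design review, running every planned tenant range through a check like this quickly flags which ones require the SoftLayer Sales Engineering conversation.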
IV. NSX@SoftLayer Recipe (Simple Single Site/Single POD)

Before deploying NSX, it is highly recommended to contact a VMware NSX Sales Engineer to assist with the architecture of your NSX deployment at SoftLayer. Please refer to http://www.vmware.com/files/pdf/products/nsx/vmw-nsx-network-virtualization-design-guide.pdf for more guidance on design considerations. Full NSX installation documentation can be located here:
http://pubs.vmware.com/nsx-6/topic/com.vmware.nsx.install.doc/GUID-B28B1167-B46C-460B-B4A8-0B7CCE453F5F.html and
http://pubs.vmware.com/nsx-6/topic/com.vmware.nsx.install.doc/GUID-8FEE494F-8D3E-45B3-BFC6-4BE41F87607B.html

1. Obtain the NSX Manager OVA File: This will require interaction with your VMware NSX Sales Representative, as SoftLayer DOES NOT PROVIDE NSX licenses or code.

2. Install the NSX Manager Virtual Appliance: Install the NSX Manager virtual appliance on an ESX host in the dedicated VMware@SoftLayer Management Cluster. The Management Cluster must have DRS enabled.

3. Register vCenter Server with NSX Manager: Open a web browser to the IP address you assigned to NSX Manager in the previous step. Log into the appliance with the default credentials (admin/default). Under Appliance Management, click Manage Appliance Settings. From the left panel, select NSX Management Service and click Configure next to vCenter Server. Type the IP address of the vCenter Server, and the vCenter Server user name and password. Type the IP address and port number of the NSX Management service. Click OK.

4. Assign the NSX for vSphere License: Log in to the vSphere Web Client. Click Administration and then click Licenses. Click the Solutions tab. From the drop-down menu at the top, select Assign a new license key. Type the license key (provided by VMware) and an optional label for the new key. Click Decode. Click OK.

5. Add NSX Controllers: Log in to the vSphere Web Client. Click Networking & Security and then click Installation.
Ensure that the Management tab is selected. In the NSX Controller nodes section, click the Add Node icon. In the Add Controller dialog box, select the datacenter on which you are adding the node. Select the dedicated VMware@SoftLayer Management Cluster or a resource pool on that cluster, select an appropriate datastore, and select the logical switch, port group, or distributed port group to which the controller nodes are to be connected. Note: The IP address of the controller(s) must be reachable from the NSX Manager and from the management network of the vSphere hosts communicating with the controller. If you have followed the
VMware@SoftLayer architectural guidance at http://knowledgelayer.softlayer.com/learning/deploy-vmwaresoftlayer, you should select your Management Network Standard vSwitch port group. Click OK.

6. Install Network Virtualization Components on ESX Hosts: Log in to the vSphere Web Client. Click Networking & Security and then click Installation. Click the Host Preparation tab. For each Edge Services & Capacity VMware@SoftLayer cluster, click Install in the Installation Status column. Monitor the installation until the Installation Status column displays a green check mark. If the Installation Status column displays a red warning icon and says Not Ready, click Resolve. If the installation is still not successful, click the warning icon; all errors are displayed. Take the required action and click Resolve again. When the installation is complete, the Installation Status column displays 6.0 and the Firewall column displays Enabled, both with a green check mark. If you see Resolve in the Installation Status column, click Resolve and then refresh your browser window.

7. Assign a Segment ID Pool to NSX Manager: Log in to the vSphere Web Client. Click Networking & Security and then click Installation. Click the Logical Network Preparation tab and then click Segment ID. Click the Edit icon. Type a range for segment IDs, for example 5000-7000. Click OK.

8. Configure VXLAN Transport Parameters: Refer to http://pubs.vmware.com/nsx-6/topic/com.vmware.nsx.install.doc/GUID-2FA9D4DE-56C0-40A4-A085-2FCE502A87B9.html for specific details on VTEPs. For VMware@SoftLayer you will be required to use NSX IP Pools as the VTEP IP address assignment method. The IP addresses should be provided by SoftLayer Portable IP subnets associated with your VXLAN transport Private Networks. Additionally, those IP addresses must be routable/ACL-allowed if setting up VTEPs across SoftLayer datacenters or SoftLayer datacenter PODs.
You must ensure you choose the appropriate teaming policy when deploying your VTEPs. It is highly recommended to leverage Enhanced LACP on the Distributed Virtual Switches where the VTEPs will be deployed, setting the VMKNic Teaming Policy to Enhanced LACP (Figure 9). NSX supports a maximum of 1 VTEP vmk virtual NIC per Enhanced LACP LAG.

After the aforementioned steps are completed, a functional NSX deployment with prepared clusters should now exist at VMware@SoftLayer. Consult http://www.vmware.com/files/pdf/products/nsx/vmw-nsx-network-virtualization-design-guide.pdf for more information on proper configuration and deployment of other NSX constructs:

- Transport Zones
- NSX Edge
- NSX Distributed Logical Router(s)
- Firewalls
- SpoofGuard
- NSX Services
- Flow Monitoring
- Integrated Partner Solutions

Figure 9
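For teams automating the recipe above, step 7's segment ID pool can also be assigned through the NSX Manager REST API rather than the Web Client. The sketch below builds the request body only; the endpoint path and element names reflect our reading of the NSX 6.x API reference and should be verified against your version. The lower bound of 5000 reflects the example range in step 7 and NSX's reservation of low VNI values.

```python
import xml.etree.ElementTree as ET

def segment_pool_payload(begin, end, name="SoftLayer-Segments"):
    """XML body for assigning a VNI segment ID pool to NSX Manager (step 7).

    VNIs are 24-bit values; NSX reserves the low range, so pools start at 5000.
    """
    if not (5000 <= begin <= end <= 16777215):
        raise ValueError("segment IDs must be valid 24-bit VNIs >= 5000")
    rng = ET.Element("segmentRange")
    ET.SubElement(rng, "name").text = name
    ET.SubElement(rng, "begin").text = str(begin)
    ET.SubElement(rng, "end").text = str(end)
    return ET.tostring(rng, encoding="unicode")

# POSTed to the NSX Manager REST API, e.g. (assumed endpoint):
#   POST https://<nsx-manager>/api/2.0/vdn/config/segments
payload = segment_pool_payload(5000, 7000)
```

The pool name `SoftLayer-Segments` is an illustrative placeholder, not a value from this document.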
V. NSX@SoftLayer Advanced Use Cases

A: NSX Logical Switches (VXLAN) Across SoftLayer Pods

Figure 9

As discussed in section III of this document, VXLAN-based logical switches are a core feature of NSX. Logical switches allow VMware@SoftLayer tenants to present a consistent Layer 2 broadcast domain/network across SoftLayer Pods. As shown in Figure 9, consider 2 ESX hosts provisioned in separate SoftLayer datacenter Pods. Each host will have a different VLAN or set of VLANs associated with it: the host in Pod 1 has VLAN 1001 presented for VM connectivity traffic, while the host in Pod 2 uses VLAN 2200. NSX logical switches allow hundreds or even thousands of VXLAN-based networks to be presented to VMs over these separate transport VLANs (for a primer on VXLAN, please visit http://www.vmware.com/files/pdf/techpaper/virtual-network-design-guide.pdf). The VTEPs (VXLAN Tunnel Endpoints) in Figure 9 encapsulate and forward traffic across Layer 3 routed networks to other VTEPs. When traffic arrives at a destination VTEP, it is de-encapsulated and presented to the destination MAC addresses at Layer 2. In this way, logical switches abstract the various physical SoftLayer Private Networks that may make up a VMware@SoftLayer deployment and allow logical Layer 2 networks to be presented to the VMs. As mentioned earlier in this document, this capability is critical when VMware@SoftLayer capacity hosts must cross Pod or backend router boundaries at SoftLayer. This design also offers the capability to stretch logical switches across SoftLayer datacenter boundaries, but those scenarios must be carefully planned to ensure proper East/West traffic patterns and that adequate latency and bandwidth exist between SoftLayer sites for proper performance and throughput.
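Because SoftLayer requires Unicast mode, broadcast, unknown unicast, and multicast (BUM) traffic on a cross-Pod logical switch is handled by head-end replication: the source host sends one unicast copy to every other VTEP on the segment instead of relying on physical-network multicast. A minimal simulation of that fan-out (VTEP IPs are illustrative):

```python
def replicate_bum_frame(source_vtep, vtep_table, frame):
    """Head-end replication in Unicast mode: the source host emits one unicast
    copy of a BUM frame per remote VTEP on the logical switch, instead of one
    multicast packet (multicast routing is unavailable at SoftLayer)."""
    return [(source_vtep, dest, frame)
            for dest in vtep_table
            if dest != source_vtep]

# Three hosts across two PODs share one logical switch (illustrative VTEP IPs):
vteps = ["10.100.1.10", "10.100.1.11", "10.200.1.10"]
copies = replicate_bum_frame("10.100.1.10", vteps, b"arp-request")
```

The fan-out grows linearly with the number of VTEPs on the segment, which is one reason stretching logical switches across many hosts and sites deserves the careful East/West traffic planning noted above.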
B: BYOIP (Bring Your Own IP): Routing Customer Subnets from VMware@Home
Figure 10
A common need for tenants leveraging hybrid cloud scenarios with SoftLayer is BYOIP (Bring Your Own IP): the ability to push and route tenant IP address blocks over SoftLayer networks. As of the writing of this document, SoftLayer only routes its own 10.0.0.0/8 private IP address space over its Private Network infrastructure. This can present problems for customers who utilize other private RFC 1918 address ranges, or who utilize 10.0.0.0/8 CIDR blocks that have already been assigned to other SoftLayer customers. A common solution that allows tenants to extend their IP address ranges onto SoftLayer's Private Network is to utilize VPN/NAT technologies and have the tenant handle routing their IP address space over the SoftLayer Private Network infrastructure. NSX Edge Gateways can provide this function. Consider Figure 10 above:
1. A VMware@Home deployment leverages the 192.168.0.0/16 and 10.0.0.0/8 private RFC 1918 ranges in the tenant's own production datacenter. The intent is to acquire capacity in two SoftLayer datacenters and supernet specific private ranges from VMware@Home into the VMware@SoftLayer datacenters. In Figure 10, NSX can deploy one or more NSX Edge devices with private interfaces in those VMware@Home supernets (192.168.100.1 and 10.100.1.1). VPNs will be established between the VMware@Home location and the two VMware@SoftLayer datacenters.
2. There are two primary methods by which a tenant can interconnect their VMware@Home deployments with VMware@SoftLayer. The most performant and secure method is to interconnect through a SoftLayer PoP (Point of Presence; see http://www.softlayer.com/network). This allows for a private, high-bandwidth (up to 10 Gbps) connection into SoftLayer's backend network. SoftLayer will then negotiate a valid IP address in their 10.0.0.0/8 address
range to route tenant traffic through to the tenant's SoftLayer Private Networks. In Figure 10, an NSX Edge from VMware@Home has an MPLS-provider-assigned IP address of 10.10.0.1, and the VMware@SoftLayer deployment in Dallas has an NSX Edge with a SoftLayer-assigned private IP address of 10.100.0.1. An IPSEC VPN SA and NAT tables are built between the two NSX Edge appliances, and a tunnel is created to route the desired VMware@Home IP address ranges through the tunnel, over the SoftLayer IP ranges. For more information on customer PoP connections, contact sales@softlayer.com.
3. The other method by which a tenant can interconnect their VMware@Home deployments with VMware@SoftLayer is to utilize a VPN over the public Internet. This method is similar to the PoP method, but connectivity performance is dependent on public transport, and it is less secure than a private connection. In Figure 10, an NSX Edge from VMware@Home has a telco-assigned public IP address of 128.66.1.1. The VMware@SoftLayer deployment in Washington DC has an NSX Edge with a SoftLayer-assigned public IP address of 128.66.250.100. An IPSEC VPN SA is built between the two NSX Edge appliances, and a tunnel is created to route the desired VMware@Home IP address ranges through the tunnel, over the public Internet.
4. The two VPN tunnels abstract the routing and NAT layer away from the public or SoftLayer/telco-provided IP address ranges. It is important to note that bandwidth fees will apply to both of these methods. In the case of the PoP connection, the tenant's telco will bill a monthly fee for consumed bandwidth, and SoftLayer will charge monthly cross-connect and backend switch-port fees. In the public VPN scenario, SoftLayer will charge for bandwidth consumed over SoftLayer Public Networks.
In Figure 10, the 192.168.200.0/24 and 10.200.0.0/16 address ranges are being pushed to the SoftLayer Dallas datacenter, and the 192.168.210.0/24 and 10.210.0.0/16 address ranges are being pushed to the SoftLayer Washington D.C. datacenter through the respective VPNs.
5. In the scenario presented in Figure 10, a third IPSEC SA can be established between the two VMware@SoftLayer datacenters to allow 192.168.200.0/24 and 10.200.0.0/16 to route to 192.168.210.0/24 and 10.210.0.0/16. In this scenario, that tunnel could also be leveraged as a secondary route for VMware@Home to continue to access both SoftLayer datacenters in the event of a PoP or public VPN failure to one of the VMware@SoftLayer sites. It is also important to note that, as of this writing, traffic traversing SoftLayer's Private Network backbone is not metered or billed to a tenant.
The high-level approach outlined above in Figure 10 demonstrates how NSX Edge can give a tenant the capability to extend their own IP address space into VMware@SoftLayer.
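The PoP scenario in steps 2 and 3 can be sketched as the IPsec site definition an administrator would push to the Dallas NSX Edge. This is a hedged sketch: the endpoint addresses come from the Figure 10 discussion, but the pre-shared key and the exact subnet selection are placeholders, and the payload would be PUT to `/api/4.0/edges/{edge-id}/ipsec/config` on the NSX Manager in a real deployment.

```python
# Sketch: NSX Edge IPsec site for the Figure 10 PoP scenario (Dallas Edge
# 10.100.0.1 peering with the VMware@Home Edge at 10.10.0.1). The PSK and
# subnet lists are illustrative placeholders.
import xml.etree.ElementTree as ET

def ipsec_site_payload(local_ip, peer_ip, local_subnets, peer_subnets, psk):
    ipsec = ET.Element("ipsec")
    ET.SubElement(ipsec, "enabled").text = "true"
    sites = ET.SubElement(ipsec, "sites")
    site = ET.SubElement(sites, "site")
    ET.SubElement(site, "localIp").text = local_ip
    ET.SubElement(site, "peerIp").text = peer_ip
    # Subnets reachable behind each end of the tunnel.
    local = ET.SubElement(site, "localSubnets")
    for s in local_subnets:
        ET.SubElement(local, "subnet").text = s
    peer = ET.SubElement(site, "peerSubnets")
    for s in peer_subnets:
        ET.SubElement(peer, "subnet").text = s
    ET.SubElement(site, "psk").text = psk
    return ET.tostring(ipsec, encoding="unicode")

payload = ipsec_site_payload(
    "10.100.0.1",              # SoftLayer-assigned Edge IP, Dallas
    "10.10.0.1",               # MPLS-assigned Edge IP, VMware@Home
    ["192.168.200.0/24"],      # range pushed into Dallas per Figure 10
    ["192.168.0.0/16"],        # VMware@Home range reachable via the tunnel
    "example-preshared-key")   # placeholder PSK
```

A mirror-image site definition on the VMware@Home Edge (local and peer values swapped) completes the SA; the Washington D.C. and Edge-to-Edge tunnels follow the same pattern with their respective addresses.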
C: BYOIP (Bring Your Own IP): BCDR Recovery from VMware@Home
Figure 11
Another BYOIP requirement for VMware@SoftLayer is the ability to replicate a private network topology for BCDR purposes. In this scenario, private IP address space is not required to be routed to a VMware@SoftLayer deployment; the intent is to allow a VM to recover into VMware@SoftLayer without requiring IP address modification. An example of this is depicted above in Figure 11. In this scenario, a VM has been hosting an application from VMware@Home with a private IP address of 192.168.100.101. It has been replicated to a VMware@SoftLayer SDDC (Software-Defined DataCenter), where an NSX Logical Network has been created and an NSX Edge router has been deployed to act as the recovery gateway (192.168.100.1). This in effect replicates the core Layer 2 and Layer 3 services of the VM's application network at VMware@Home. In an SRM-managed recovery, the VM boots in the VMware@SoftLayer recovery site with no changes to the guest or its IP address. The application is still reachable via DNAT of a public IP address at each site, with a GSLB load balancer service directing requests to the recovery site in the event that the primary site is unavailable. This type of recovery scenario is adequate for most web-tier applications and highlights the core function of NSX in replicating the private network components in the recovery site.
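The DNAT piece of this recovery design can be sketched as the NAT rule an administrator would add to the recovery-site NSX Edge. This is a hedged sketch: the public address, vnic index, and port are hypothetical placeholders (the text does not specify them), while the translated address is the recovered VM's unchanged private IP from Figure 11. The payload would be POSTed to `/api/4.0/edges/{edge-id}/nat/config/rules` on the NSX Manager.

```python
# Sketch: DNAT rule translating a recovery-site public IP to the recovered
# VM's unchanged private address. Public IP, vnic, and port are placeholders.
import xml.etree.ElementTree as ET

def dnat_rule_payload(public_ip, private_ip, port="443", protocol="tcp"):
    rules = ET.Element("natRules")
    rule = ET.SubElement(rules, "natRule")
    ET.SubElement(rule, "action").text = "dnat"
    ET.SubElement(rule, "vnic").text = "0"  # assumed uplink interface index
    ET.SubElement(rule, "originalAddress").text = public_ip
    ET.SubElement(rule, "translatedAddress").text = private_ip
    ET.SubElement(rule, "protocol").text = protocol
    ET.SubElement(rule, "originalPort").text = port
    ET.SubElement(rule, "translatedPort").text = port
    ET.SubElement(rule, "enabled").text = "true"
    return ET.tostring(rules, encoding="unicode")

# 203.0.113.10 is a documentation-range placeholder for the recovery-site
# public IP; 192.168.100.101 is the VM address from Figure 11.
payload = dnat_rule_payload("203.0.113.10", "192.168.100.101")
```

With an equivalent rule at the primary site, the GSLB service simply steers clients to whichever site's public IP is healthy; the private addressing behind each Edge never changes.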
VI. Engaging VMware for NSX@SoftLayer
To engage VMware for NSX@SoftLayer sales and support, please email nsx-ibm@vmware.com to learn more about the VMware NSX solution, including training and lab resources, documentation, product capabilities, professional certifications, roadmap information, and pre-sales guidance. Additionally, for questions regarding NSX@SoftLayer that involve SoftLayer network architecture, please contact SoftLayer Sales Engineering at sales@softlayer.com.