SOLUTION GUIDE

F5 Application Delivery in a Virtual Network: Automating Server Load Balancing with Big Virtual Switch
This solution guide describes how to simplify application delivery and scale-out with F5 application delivery controllers and the Big Virtual Switch application. By deploying automated application delivery, you can reduce the complexity of data center configuration, avoid repetitive manual configuration changes, and become more productive by automating the tasks required to roll out new applications or to scale out existing deployments. The solution leverages the programmability available in the F5 BIG-IP platforms and in Big Network Controller, the network application platform from Big Switch Networks, to make your data center network programmable: unified, flexible, and more cost effective.

Table of Contents

ADCs in Virtual Networks
Virtual Compute in Traditional Networks
Virtual ADC Considerations
Automating Application Delivery with Big Virtual Switch
  Synchronization of server pools and virtual network segments
  Big Switch Open SDN and F5 icontrol
  Big Virtual Switch APIs
  icontrol APIs
  ECM Cloud Manager and BVS
About Big Switch Networks
ADCs in Virtual Networks

In concept, load balancing applications in virtual data centers is much the same as in a traditional environment. Compute virtualization, network virtualization, and server load balancers and application delivery controllers (ADCs) delivered as virtual appliances, however, introduce some differences.

In a virtual network, the ADC still presents a single termination point for end-user requests (the application endpoint). This manifests as a virtual IP address (VIP), and the VIP is resolved via DNS. This VIP, as with traditional ADC services, is the front end to a pool of resources comprising the application instances. In large deployments, the ADC service might be clustered and represent more than one ADC element (a cluster of ADCs, or simply a device with a backup). In traditional environments, these devices are deployed logically in line. (Even when off-path approaches are used, such as WCCP from a switch, they remain logically in line.) In a traditional environment, the request arrives at the ADC and is dispatched to an application instance, which is hosted on a single server per instance. By adding instances to the application, it can scale horizontally. By adding additional ADCs, load balancing, too, can scale horizontally.

Figure 1: Traditional Load Balancing

When applications were hosted on a single server, adding capacity or adding a new application meant adding a new instance on another physical server. High availability required the use of backup nodes: another physical server that might operate in active mode and handle requests, or that might stand by and operate passively unless there was a failure on the primary application server. Moving applications to virtual servers, of course, reduces the number of physical servers required for scale-out, and it can simplify planning for high availability.
But the speed with which applications can be deployed or scaled on virtual compute introduces some problems for traditional networks and for traditional load balancing environments. Automation tools can ensure that there is sufficient application capacity and compute capacity, but if the network is not configured to use that new capacity, then it is not available to service end-user requests. And if the process of provisioning the network is manual, slow, and error prone, then responding to new application requests or to the need for additional capacity can be a slow process, which hampers productivity.

When we replace the physical servers with virtual servers, we have much the same system. There still exists a pool of resources that comprise the application; the load balancing service still mediates for the end user. There are still enough application instances in the pool to compensate for failure, thus ensuring availability of the application. Network virtualization can speed the response of the network to application requests and simplify application delivery in a virtual data center.
Virtual Compute in Traditional Networks

To protect against problems with hardware or hypervisors, it remains prudent to deploy applications on at least two physical servers. On these platforms, multiple application nodes can run, enabling both scale-out of the application and protection against an availability fault with the application. As such, virtual compute enables applications to scale out up to the limits of the physical hardware and hypervisor, and it enables several application nodes to be deployed on just a couple of physical servers.

Figure 2: Application nodes within hypervisors pooled for use by ADCs

In this model, a physical server problem or a failure within the hypervisor will still cause a potential outage, and, in fact, such an outage will impact more nodes, depending on the virtual machine density. A physical network problem or a network misconfiguration in this scenario could impact every instance of an application on a physical server. In the APP3 (red) example in the figure above, such a failure could leave three quarters of the capacity unreachable. Even with physical network redundancy, there remains the risk of a misconfiguration as network operators manually change settings to roll out new instances to add capacity to existing applications.

The need to always place application instances on different physical servers can complicate planning and rollout, but it is required, because failing over to a backup instance on the same server is riskier than failing over to an instance on another server. The same is true of performance when multiple application instances run within a hypervisor: network I/O and storage are ultimately on the same physical systems, so applications can only scale out on a single system as far as the physical network and storage can scale up.
By using network virtualization in combination with ADCs and by automating application rollout and scale-out, network administrators can ensure improved uptime, and they can avoid the manual planning and configuration processes that have historically been associated with application rollouts and scale-out projects. In short, they can get more done in less time while reducing the risk of an outage. To achieve this goal, the Virtual Network Segments (VNS) defined and programmed within Big Virtual Switch must be associated with the server pools used by the ADC.

Figure 3: Virtual Network Segments isolate traffic within both physical and virtual systems. (Virtual systems shown here.) In this example, segments have been defined for each of three applications.

A VNS is the slice of the network that is created for an application or workload within Big Virtual Switch, the data center network virtualization application that runs on Big Network Controller. Big Virtual Switch provisions Virtual Network Segments and enables automation. Instead of being constrained by the static configuration steps required with traditional approaches, Big Virtual Switch and the use of VNSs enable traffic to be managed dynamically, according to programmed policies and real-time changes in workload definitions.

By associating ADC pools with Virtual Network Segments, the complexity that comes with delivering scalable and highly available applications from a virtual infrastructure is reduced. Network administrators don't have to lock down network configurations after defining settings within the switching and load balancing devices. And administrators can simplify the process of ensuring that application instances are running on more than just a
single hypervisor (which risks an outage). And they no longer have to manually define and deploy each server instance on each hypervisor and on each switch port. Deployment and the tasks associated with adding capacity can be automated.

Figure 4: Automated server pool and VNS integration using Big Virtual Switch and Big Network Controller

In a deployment where Big Virtual Switch has programmed the pools, there is no need to change the configuration on the F5 BIG-IP systems when a server instance is added or removed. The application changes that are completed through Big Virtual Switch, or through cloud orchestration tools that work with Big Virtual Switch, are programmed into the network fabric: into both physical and virtual switch elements, and into the F5 BIG-IP LTM. Using a VNS, time-consuming tasks, such as adding new applications or adding capacity to existing applications, can be completed almost instantly, even in multi-data center or globally load balanced environments. The promise of auto-scaling in a cloud environment can actually be achieved with the automated integration of F5 BIG-IP server pools and Big Virtual Switch VNSs. If an element within a VNS is added, removed, or becomes unreachable, that state is programmed into the network and into the server pools, so that no manual configuration is required to make a new resource available to handle requests, and no requests are sent to a resource that has gone offline.

Virtual ADC Considerations

The BIG-IP LTM Virtual Edition enables ADC functionality in a virtualized environment, which adds flexibility to data center architectures and can address specific, application-by-application needs. Many applications, especially those that require SSL offload and other hardware-based application delivery features, will continue to require a physical BIG-IP device.
Big Virtual Switch supports both physical devices and virtual ADCs. The decision to use BIG-IP LTM Virtual Edition doesn't change the functionality in automating server load balancing, and the considerations related to using BIG-IP LTM VE make little difference in the context of this solution.
Automating Application Delivery with Big Virtual Switch

Big Switch Networks has worked closely with its partner F5 to support multi-tier application deployment. Big Switch has developed a set of reference tools and configurations for using Big Virtual Switch with F5 BIG-IP LTM.

Synchronization of server pools and virtual network segments

While virtual network membership in BVS can be automated through popular cloud orchestration platforms, in traditional systems users must separately manage the mapping of new servers into pools for load balancing services. This solution, built on the BIG-IP icontrol APIs, supports the automatic creation and bi-directional synchronization of server pools with the members of a Big Virtual Switch Virtual Network Segment. Specifically, this integration takes three forms:

1. Server pools can be created from BVS virtual network segments.
2. Virtual network segments can be created from existing server pools in a BIG-IP system.
3. New hosts added to virtual network segments can be automatically added to a server pool.

This solution decreases deployment time for new applications and enables a zero-touch method of adding capacity to existing applications. Big Switch Networks provides a set of reference tools to implement this functionality upon request.

Big Switch Open SDN and F5 icontrol

Big Network Controller from Big Switch Networks and the BIG-IP ADC from F5 are both prime examples of programmable systems. The Big Network Controller platform publishes APIs that allow for configuration and management of the platform. Applications that run on the Big Network Controller, as well as external orchestration or management systems, use these APIs to interact with and program the network through the controller. The applications on the controller can themselves also expose APIs that can be used by external systems.
This allows for the entire network, across the data, control, and management planes (the controller and application planes, in modern Open SDN terms), to be completely programmable. Most of the APIs published by the Big Network Controller and its SDN applications are REST-based APIs. SDKs for various programming languages have been developed to ease integration with existing orchestration and management systems.

External systems can program F5 ADCs via multiple programming interfaces: the icontrol interface, one example of the programming interfaces on the ADC, can be used to manage the configuration of the F5 BIG-IP LTM. The F5 icontrol API is an open API that enables applications to work in concert with the underlying application delivery network and helps F5 customers realize new levels of automation and configuration management efficiency.

The integration explained in this document uses both icontrol and BigPy, a Python-based library that integrates with the REST APIs of Big Network Controller and its applications. The Python binding for icontrol, pycontrol, is used to integrate with F5's BIG-IP icontrol management API.
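Before turning to the specific APIs, the core of the bi-directional pool/VNS synchronization can be sketched in a few lines of plain Python. This is an illustrative sketch only, not the Big Switch reference tooling; the function name and data shapes are assumptions made for the example.

```python
# Illustrative sketch of the synchronization logic: compute the pool
# membership changes needed to match a VNS. Not the actual reference
# tools; names and data shapes are invented for illustration.

def sync_pool_with_vns(vns_hosts, pool_members, port=80):
    """Return (to_add, to_remove) so that an ADC server pool matches
    the hosts currently attached to a Virtual Network Segment.

    vns_hosts    -- set of IP addresses currently in the VNS
    pool_members -- set of (ip, port) tuples currently in the pool
    """
    desired = {(ip, port) for ip in vns_hosts}
    to_add = desired - pool_members        # hosts new to the VNS
    to_remove = pool_members - desired     # members that left the VNS
    return to_add, to_remove

# Example: one host joined the VNS, one pool member left it.
vns = {"10.0.0.1", "10.0.0.2"}
pool = {("10.0.0.1", 80), ("10.0.0.4", 80)}
to_add, to_remove = sync_pool_with_vns(vns, pool)
# to_add    -> {("10.0.0.2", 80)}
# to_remove -> {("10.0.0.4", 80)}
```

In the real integration, the `to_add` and `to_remove` results would drive calls into the BigPy and icontrol APIs described in the next sections.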
Big Virtual Switch APIs

The Big Virtual Switch APIs used for the integration are described below.

BVS interface

The BVS interface allows you to create Virtual Network Segments (VNS) on the Big Network Controller. Each VNS is an isolation domain, and all communications within a VNS are isolated from other segments on the network.

def bvs_create(self, bvsname): Creates a BVS with the specified name.
def bvs_delete(self, bvsname): Deletes the BVS with the specified name.
def bvs_defs_get(self): Gets the list of all BVSes defined in the system.
def bvs_devices_get(self, bvsname): Gets the list of devices that are part of the specified BVS.

Host interface

The host interface allows you to create host entities on the controller and manage those host entities.

def host_create(self, hostmac, ipaddress): Creates a host entity on the controller with the host's MAC address, its IP address, or both. Returns host_id, a unique identifier for the host.
def host_delete(self, hostmac, ipaddress): Deletes the host entity on the controller.

Tag interface

The tag interface allows for the creation of a meta-tag that can be associated with a group of hosts. The meta-tag can then be associated with a BVS.

def tag_create(self, name): Creates a tag for associating hosts to BVSes.
def tag_delete(self, name): Deletes the specified tag.
def bvs_tagmapping_create(self, tag, bvsname): Associates a tag with a BVS.
def bvs_tagmapping_delete(self, tagid, bvsname): Removes a BVS-to-tag association.
def host_tagmapping_create(self, tag, host_id): Associates the host with a tag, thereby associating the host with the BVS. Multiple hosts can be associated with a single tag.
def host_tagmapping_delete(self, tagid, host_id): Deletes the host-to-tag mapping.

icontrol APIs

The following icontrol::LocalLB APIs are used in the integration:

Pool: The Pool interface enables you to work with the attributes and statistics of pools.
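To make the tag workflow concrete, the sketch below wires the calls together against an in-memory stand-in. FakeController merely mirrors the BVS, host, and tag signatures listed above; its internals are invented for illustration and it performs no controller communication.

```python
class FakeController:
    """In-memory stand-in mirroring the BigPy BVS/host/tag signatures.
    Illustrative only; a real deployment talks to Big Network Controller."""

    def __init__(self):
        self.bvses = {}        # bvsname -> set of host_ids
        self.tags = {}         # tag name -> set of host_ids
        self.tag_to_bvs = {}   # tag name -> bvsname
        self.hosts = {}        # host_id -> (mac, ip)

    def bvs_create(self, bvsname):
        self.bvses[bvsname] = set()

    def host_create(self, hostmac, ipaddress):
        host_id = len(self.hosts) + 1      # unique identifier for the host
        self.hosts[host_id] = (hostmac, ipaddress)
        return host_id

    def tag_create(self, name):
        self.tags[name] = set()

    def bvs_tagmapping_create(self, tag, bvsname):
        self.tag_to_bvs[tag] = bvsname

    def host_tagmapping_create(self, tag, host_id):
        # Associating the host with the tag places it in the mapped BVS.
        self.tags[tag].add(host_id)
        self.bvses[self.tag_to_bvs[tag]].add(host_id)

    def bvs_devices_get(self, bvsname):
        return [self.hosts[h] for h in sorted(self.bvses[bvsname])]

# Call sequence: create a BVS and a tag, map the tag to the BVS,
# then attach hosts to the tag (and therefore to the BVS).
ctrl = FakeController()
ctrl.bvs_create("app1-vns")
ctrl.tag_create("app1-web")
ctrl.bvs_tagmapping_create("app1-web", "app1-vns")
h1 = ctrl.host_create("00:00:00:00:00:01", "10.0.0.1")
h2 = ctrl.host_create("00:00:00:00:00:02", "10.0.0.2")
ctrl.host_tagmapping_create("app1-web", h1)
ctrl.host_tagmapping_create("app1-web", h2)
# bvs_devices_get("app1-vns") now reports both hosts
```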
You can also use this interface to create pools, add members to a pool, delete members from a pool, find out the load balancing mode for a pool, and set the load balancing mode for a pool.

NodeAddressV2: The NodeAddressV2 interface enables you to work with the states, statistics, limits, availability, ratios, application response deltas, and monitors of a local load balancer's node addresses. This updated interface is required to support the switch from accessing node addresses via their IP addresses to accessing them via their names.

VirtualServer: The VirtualServer interface enables you to work with the states, statistics, limits, availability, and settings of a local load balancer's virtual servers. For example, you can use the VirtualServer interface to create a virtual server from a specified pool or rule, or to delete a virtual server from a specified pool.

VirtualAddressV2: The VirtualAddressV2 interface enables you to work with the states, statistics, limits, availability, and settings of a local load balancer's virtual addresses. The second version was created to handle the shift from using the IP address to using a name to reference a virtual address.
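As a quick mental model of how these LocalLB objects relate, the plain-Python sketch below (illustrative only, not icontrol code) shows a virtual server acting as the VIP-facing termination point while its pool supplies members according to a round-robin load balancing method.

```python
# Illustrative model of the VIP -> pool -> member relationship.
# Class and attribute names are invented for the example.
from itertools import cycle

class Pool:
    def __init__(self, name, members):
        self.name = name
        self.members = list(members)    # (address, port) tuples
        self._rr = cycle(self.members)  # round-robin lb method

    def pick(self):
        return next(self._rr)

class VirtualServer:
    def __init__(self, vip, pool):
        self.vip = vip                  # single termination point (VIP)
        self.pool = pool

    def dispatch(self):
        # Each request arriving at the VIP is dispatched to a pool
        # member chosen by the pool's load balancing method.
        return self.pool.pick()

p = Pool("app1-pool", [("10.0.0.1", 80), ("10.0.0.2", 80)])
vs = VirtualServer("192.0.2.10", p)
# successive vs.dispatch() calls alternate between the two members
```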
The following code snippet illustrates the simplicity of programming the BVS application and the BIG-IP ADC with these APIs. The snippet does two things:

1. Creates server pools on the BIG-IP corresponding to BVS instances on the Big Network Controller.
2. Adds hosts that are part of a BVS on the Big Network Controller as servers in the corresponding BIG-IP server pool.

    # Connect to the Big Network Controller
    self.ctrl = bsc.controller(server, port, bnc_user, bnc_password)

    # Connect to the BIG-IP ADC
    bigip = pc.BIGIP(bigip_address, bigip_user, bigip_password,
                     fromurl=True, wsdls=['LocalLB.Pool'])

    # Get the list of BVSes from the Big Network Controller
    bvsdefs = self.bvs_defs_get()
    for bvs in bvsdefs:
        bvsname = bvs['id']

        # Placeholder for BVS members that will be part of the pool
        mem_sequence = bigip.LocalLB.Pool.typefactory.create(
            'Common.IPPortDefinitionSequence')
        mem_sequence.item = []

        # Create a pool corresponding to the BVS
        bigip.LocalLB.Pool.create(pool_names=[bvsname],
                                  lb_methods=[lbmeth.LB_METHOD_ROUND_ROBIN],
                                  members=[mem_sequence])

        # Add hosts that are part of the Virtual Network Segment
        # as servers in the server pool
        bvsdevs = self.bvs_devices_get(bvsname)
        for dev in bvsdevs:
            mem = bigip.LocalLB.Pool.typefactory.create(
                'Common.IPPortDefinition')
            mem.address = dev['ipv4_addr']
            mem.port = HTTP_PORT
            mem_sequence.item = [mem]
            bigip.LocalLB.Pool.add_member_v2(pool_names=[bvsname],
                                             members=[mem_sequence])

With open APIs available from both Big Switch Networks and F5, it is possible to programmatically stand up networks along with the associated application delivery services. The fact that these APIs are simple and are available as libraries for various programming environments reduces the effort needed to integrate them into the management and orchestration systems of your choice.
About Big Switch Networks

Big Switch Networks is the leader in open source Software-Defined Networking (SDN) products, delivering unmatched network agility, automated network provisioning, and dramatic reductions in the cost of network operations. The company's Open SDN platform offers an OpenFlow switch fabric that can run on bare metal switches and hypervisor virtual switches, and it enables a wide variety of SDN network applications, including data center network virtualization and network monitoring. For more information, visit www.bigswitch.com.

Headquarters: 110 West Evelyn Street, Suite 110, Mountain View, CA 94041, USA
Phone: +1.650.322.6510 or +1.800.653.0565
bigswitch.com

Copyright 2013 Big Switch Networks, Inc. All rights reserved. Big Switch Networks, Big Network Controller, Big Tap, Big Virtual Switch, Switch Light, Floodlight, and Open SDN are trademarks or registered trademarks of Big Switch Networks, Inc. All other trademarks, service marks, registered marks, or registered service marks are the property of their respective owners. Big Switch Networks assumes no responsibility for any inaccuracies in this document. Big Switch Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.

SG01-03 July 2013