HP ProCurve Application Integration Guide




Delivering HP ProCurve Switching and Threat Management Services (TMS) Scalability and Availability with F5 Networks BIG-IP Local Traffic Manager (LTM)

Contents

Introduction
Solution architecture
    Description
    Typical topologies
    Solution components
Solution configuration
    Test configuration
    Test topology
    Configuration steps
Solution test results
    Test objectives
    Test cases
    Test results
Conclusion
Design considerations
    Best practices
References
    Solution information
    Product documentation
    Technical training courses
    Support
Appendix A: configuration information
    ProCurve switch VLAN and route configuration
    ProCurve Threat Management Services zl Module installation and configuration
    BIG-IP LTM platform configuration
    BIG-IP LTM network configuration
    BIG-IP LTM local traffic management configuration

Introduction

Application Delivery Controllers (ADCs) offer a set of critical network and application services that help reduce equipment and support costs by optimizing the utilization and performance of applications, servers, firewalls, and other inline networking equipment. The ADC solutions jointly offered by HP ProCurve and F5 enable large enterprises to improve application performance, availability, and scalability. Two key use models are detailed:

- A typical Enterprise Data Center server scalability and availability configuration using load balancing, SSL acceleration, connection mirroring, and session persistence.
- A ProCurve Threat Management Services (TMS) Sandwich designed to increase overall TMS scalability and availability while ensuring Data Center security.

A wide range of tests was performed to assure solution interoperability and key functionality. The test cases, results, and best practices are discussed below.

Solution architecture

Description

The solution consists of ProCurve Data Center focused switching products, the ProCurve Threat Management Services (TMS) module, and F5 BIG-IP Local Traffic Manager (LTM) Application Delivery Controllers in both single-tier and multi-tier (sandwich) arrangements. The combination of ProCurve and F5 Networks switching, TMS, and ADC components enables some unique capabilities compared to competitive options:

- Business continuity and resiliency for critical network systems and applications.
- Application fluency: network-speed full-payload inspection and programmable, event-based traffic management to understand and act upon application flows.
- Reduced server and bandwidth cost: triples server capacity through a rich set of infrastructure optimization capabilities, and reduces bandwidth significantly through intelligent HTTP compression, bandwidth management, and more.
- Industry-leading performance: delivers an industry-leading traffic management solution to secure, deliver, and optimize application performance. BIG-IP LTM delivers best-in-class SSL TPS, bulk encryption, and one of the highest levels of concurrent SSL connections.

Typical topologies

This diagram represents a common use model in which two layers of F5 ADCs manage both inbound and outbound network traffic through multiple in-line ProCurve Threat Management Services (TMS) Modules. This TMS Sandwich provides scalability, availability, and virtualization for multiple TMS Modules using advanced load balancing features.

This diagram also illustrates high resiliency within the data center through the use of redundant switches and ADCs in active-standby mode.
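As a rough mental model of the sandwich, both ADC tiers must send a given traffic flow through a consistent TMS Module. The sketch below is a conceptual illustration only; the module names and the hashing scheme are assumptions made for the sketch, not BIG-IP configuration:

```python
from zlib import crc32

# Hypothetical pool of in-line TMS zl Modules between the two ADC tiers.
TMS_POOL = ["tms-module-1", "tms-module-2", "tms-module-3"]

def pick_module(flow_id, pool=TMS_POOL):
    """Deterministically map a flow identifier to one TMS module.

    If both ADC tiers compute the same function, a given flow traverses
    the same module in both directions. This hash is only one simple,
    illustrative way to get that bi-directional consistency.
    """
    return pool[crc32(flow_id.encode()) % len(pool)]

flow = "125.1.40.10:51515->125.1.10.11:80"
outer_choice = pick_module(flow)   # client-side tier
inner_choice = pick_module(flow)   # server-side tier
print(outer_choice == inner_choice)
```

In the tested solution this consistency is provided by the BIG-IP LTM load balancing features themselves; the hash above is only a conceptual stand-in.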

This next diagram shows another high-value basic use model for the F5 BIG-IP LTM ADC and HP ProCurve switching in a consolidated Enterprise Data Center. This solution topology depicts F5 BIG-IP LTM providing server scalability, availability, and virtualization using advanced server load balancing (SLB) features. The solution also reduces server processing load with BIG-IP LTM's SSL offload capabilities by centralizing SSL encryption and certificate management on the F5 device rather than on each server. The purpose of including this second solution is to demonstrate the ease of adding this application server solution to an existing TMS firewall solution, or vice versa.
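The SSL offload pattern works like this: the ADC terminates the client's HTTPS session, forwards a plain HTTP request to a pool member, and re-encrypts the response on the way back, so certificates live only on the ADC. A minimal conceptual sketch follows; the pool addresses and request dictionaries are invented for illustration and are not an F5 API:

```python
import itertools

# Hypothetical pool of plain-HTTP servers behind the HTTPS virtual server.
HTTP_POOL = ["125.1.10.11:80", "125.1.10.12:80", "125.1.10.13:80"]
_rr = itertools.cycle(HTTP_POOL)

def offload(request):
    """Rewrite a terminated HTTPS request into a backend HTTP request.

    The ADC has already decrypted the payload, so the server never
    sees TLS; the response is re-encrypted before reaching the client.
    """
    assert request["scheme"] == "https"     # client-side connection is SSL
    backend = next(_rr)                     # simple round-robin member pick
    return {"scheme": "http",               # plain HTTP toward the pool
            "backend": backend,
            "path": request["path"]}

out = offload({"scheme": "https", "path": "/index.html"})
print(out["scheme"], out["backend"])
```

The saving described in the text falls out directly: only the ADC performs SSL handshakes and holds certificates, while every pool member serves plain HTTP.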

Solution components

The following products could be considered when deploying such a solution. The specific test hardware is detailed in the Test configuration section and represents a subset of the potential set of equipment.

ProCurve switches:
- 8212 core Data Center switch: should be equipped with full hardware redundancy, including power supplies, fabric modules, fans, and management modules. Other third-party core switches could also be used here if full hitless redundancy is required.
- 5406 distribution layer switch: for larger systems this could be an 8212 or 5412. Selection of the 5400 versus the 8200 will depend on the customer's tolerance for switch failure.
- 6600-24G-4XG top-of-rack switches if servers have Gigabit NICs; for 10G NICs, use the 6600-24XG.

The other switches and workstations shown in the above blueprints are not relevant to these specific solutions, so any industry-standard, enterprise-class products could be used.

ProCurve Threat Management Services zl Module (TMS):
- ProCurve TMS module for 5400 or 8200 chassis switches, 10G throughput.

F5 BIG-IP LTM ADC platforms (the recommended platform will depend on the required performance of each device, such as maximum throughput, maximum SSL acceleration, add-on software modules, and maximum L4 and L7 connections):
- Appliance platforms: BIG-IP 1600, 3600, 6900, or 8900
- Chassis platforms: VIPRION with 1 to 4 blades

Servers:
- Any industry-grade server: towers, rack-mounted (such as the ProLiant family), or bladed chassis (such as the HP c-Class). The servers can be either physical or virtual.
Solution configuration

Test configuration

Equipment                 | Software/Firmware Version                                                                            | Model Number       | Comments
HP ProCurve Switch 5412zl | K.14.09                                                                                              | J8698A             |
HP ProCurve TMS zl Module | Services Module Agent: B.01.04.01; BIOS: HP01R1O1; EEPROM: 0001; OPTROM: A.01.06; TMS: ST.1.0.090213 | J9156A             |
F5 Networks BIG-IP 3600   | BIG-IP 9.4.6 Build 401.0 Final                                                                       | BIG-IP 3600 Series |
F5 Networks BIG-IP 6900   | BIG-IP 9.4.6 Build 401.0 Final                                                                       | BIG-IP 6900 Series |
HP ProLiant DL320 G5p     | Microsoft Windows 2003 with SP2+                                                                     | DL320              | Clients and Servers
HP ProCurve Switch 6600   | K.14.09                                                                                              | J9263A             |
HP ProCurve Switch 2626   | H.08.05                                                                                              | J4900B             | Management Switch

Test topology

This diagram details the test configuration, designed to represent the TMS Sandwich use model. It is a superset of the basic server scalability and availability use case and therefore represents both use cases.

Configuration steps

Configuring the TMS Sandwich is a task for technical consultants possessing moderate to strong experience with the technologies employed. Configuration of the tested solution required the following steps:

- ProCurve switch VLAN and route configuration
- ProCurve Threat Management Services zl Module installation and configuration
- BIG-IP platform configuration
- BIG-IP network configuration
- Multiple Spanning Tree configuration
- BIG-IP Local Traffic Management configuration, BIG-IP 3600: Nodes, Pools, Health Monitors, Default Gateway, Wildcard Virtual Server, Virtual Servers, Redundancy, and ConfigSync
- BIG-IP Local Traffic Management configuration, BIG-IP 6900: Nodes, Pools, Health Monitors, Default Gateway, Wildcard Virtual Server, Virtual Servers, Redundancy, and ConfigSync

For more configuration details, refer to Appendix A.

Solution test results

Test objectives

The goal of this set of tests was to assure functionality and interoperability of the ProCurve and F5 devices in both the TMS Sandwich and the server scalability and availability topologies, and to provide a guide for those designing similar systems.

Test cases

- BIG-IP LTM connection persistence, HTTP cookie insert: The client requests a web page. The BIG-IP Virtual Server persistence is set to Cookie Insert persistence. BIG-IP inserts a BIG-IP-specific cookie in the server response to the client, along with the requested web page. The cookie sent to the client is valid for a predetermined period of time; while the cookie is valid, the client is directed to the server that was filling requests when the cookie was issued.

- BIG-IP LTM connection persistence, source address: The client requests a web page. The BIG-IP Virtual Server persistence is set to Source Address persistence. For the period of time specified in the Source Address Persistence Profile, the client is directed to the server that filled the first request. When the time expires, a different server is possible, but the client is then glued (persisted) to the new server for the Source Address Persistence time period.

- BIG-IP LTM HTTPS to HTTP redirection, client-side SSL acceleration: The client makes an HTTPS request for a web page to a BIG-IP Virtual Server configured for client-side SSL acceleration. BIG-IP terminates the SSL connection and redirects the request to a pool of HTTP nodes instead of a pool of HTTPS nodes. When the request is filled by the server, BIG-IP re-encrypts the response and sends it to the client via SSL.

- ProCurve switch/F5 BIG-IP interoperability, copper and optical link tests: Link tests of SX optical and copper connections between the BIG-IP platform and ProCurve switches. Create a trunk between ProCurve and BIG-IP comprised of copper and fibre links with LACP; reduce and increase the number of links; test a single SX optical and a single copper link under load.

- ProCurve switch/F5 BIG-IP interoperability, LACP trunk tests: The intent of this test is to prove that ProCurve switch and BIG-IP trunks function correctly. The tests include support for LACP (active and passive) and include both copper and fibre connections. Individual links comprising the trunks are enabled or disabled (or link pulls can be used if preferred).

- Load balance across application servers, FTP: Use FTP as the application to load balance. IXIA IxLoad makes hundreds of FTP requests directed to BIG-IP's FTP Virtual Server. Load must be spread across the FTP pool members associated with the virtual server.

- Load balance across Threat Management Services zl Modules: Test load balancing in both directions: from client to server, and from server to client.

- Load balance across web servers, HTTP: Use HTTP as the web server protocol to load balance. IXIA IxLoad makes hundreds of HTTP requests directed to BIG-IP's HTTP Virtual Server. Load must be spread across the HTTP pool members associated with that virtual server.

- Load balance across web servers, HTTPS (SSL): Use HTTPS as the web server protocol to load balance. IXIA IxLoad makes hundreds of HTTPS requests directed to BIG-IP's HTTPS Virtual Server. Load must be spread across the HTTPS pool members associated with that virtual server.

- Connection mirroring between a BIG-IP redundant pair, FTP: Start an FTP file transfer and fail over to the standby BIG-IP. The file transfer was observed to continue through the newly active BIG-IP.

- BIG-IP redundant pair device failover: Test BIG-IP failover: serial cable failover, network failover, mirrored connection failover, and VLAN Failsafe.

- Dual power supply equipped BIG-IP platform with single power supply failure: A loaded BIG-IP loses a single power supply; users do not lose connectivity, as evidenced by continuous throughput.

- Layer failures: Solution layer fail/recovery. If two components in the same layer fail, no traffic should get through. This test confirms that there are no unexpected data paths in the solution.

- MSTP path failures: Assure that the solution is resilient to link failures that force MSTP to adapt, keeping a full or partial bandwidth path open for ingress and egress. The test is not intended to guarantee correct operation of MSTP with BIG-IP, but to make sure that solution paths do not lock into a condition that will not recover or that limits accessibility.

- Solution multi-unit failures: Validate that clients always have access to web servers and application servers when experiencing a combination of multi-unit failures.

- Solution single-unit failures: Assure that users always have access to web servers and application servers during a single-unit failure of components.

- TMS failures: Threat Management Services zl Module failures; the solution reacts as expected.

- TMS signature detection, no load: Ensure the operation of the IPS in detecting attacks generated by Karalon Traffic IQ Pro.

- TMS signature detection with load: TMS under a 50% load is still able to detect attacks generated by Karalon Traffic IQ Pro.

- TMS signature download: Test signature updates in different modes (manual, automatic).

- TMS inbound traffic allow: The DUT must allow access requests from EXT->INT for allowed protocols (HTTP, HTTPS, FTP, and Telnet), then limit the allows to occur only from specified clients.

- TMS inbound traffic deny: The DUT must deny access requests from EXT->INT for the following protocols: HTTP, HTTPS, FTP, and Telnet.

- TMS outbound traffic allow: The DUT must support access requests from INT->EXT for the following protocols: HTTP, HTTPS, FTP, and Telnet.

- TMS outbound traffic deny: The DUT must deny access requests from INT->EXT for the following protocols: HTTP, HTTPS, FTP, and Telnet.

Test results

All of the test cases achieved the expected results and therefore passed; no exceptions were required for this solution. BIG-IP failover testing had some impressive results:

- Serial cable failover immediately initiated failover to the standby; initiation took under a second.
- The standby became active in our test setup in seconds.
- VLAN Failsafe recovered in an average of 6 seconds in our tests. VLAN Failsafe forces a failover event if the VLAN it is monitoring goes silent for a configurable period of time.

Conclusion

The ProCurve-F5 Networks solutions were tested for compatibility and interoperability using ProCurve 5400 Series switches and F5 Networks BIG-IP Local Traffic Manager platforms 3600 and 6900. The TMS Sandwich solution comprised multiple Threat Management Services zl Modules and ProCurve switches interoperating, without error, with BIG-IP platforms running LTM version 9.4.6 software. The bi-directional load balancing of multiple TMS Modules provided by the BIG-IP LTM platforms executed flawlessly. Additional TMS zl Modules were added during testing by hot-plugging a module into the ProCurve 5400 switch; configuration of the new module did not interrupt operation of the other TMS Modules. Once the new module was added to the pools on the BIG-IP platforms, load was distributed to it. Adding a new TMS zl Module is easily accomplished.

Availability was proven using high-stress loads; in every case, failover scenarios completed correctly. One test case involved BIG-IP LTM's Connection Mirroring feature, implemented in BIG-IP LTM to keep stateful protocol and application connections alive during device failover events. The feature was impressive to see: FTP gets completed after the standby BIG-IP became active. It should be noted that the system overhead of enabling BIG-IP LTM Connection Mirroring may reduce BIG-IP LTM platform performance; if the feature is required for a customer, the size of the BIG-IP platforms involved may need to be increased. We noted only brief delays (~3 to 5 seconds) when running FTP gets of large files during failover events with Connection Mirroring enabled.
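Connection Mirroring can be pictured with a toy model: the active unit replicates per-connection state to its standby peer, so when failover occurs the newly active unit already holds the state of an in-flight FTP transfer. The sketch below is purely conceptual; the class and field names are assumptions for illustration and do not reflect BIG-IP internals:

```python
class LTMUnit:
    """Toy model of one BIG-IP in an active-standby pair."""
    def __init__(self, name):
        self.name = name
        self.active = False
        self.connections = {}          # conn_id -> state, e.g. transfer offset

class MirroredPair:
    """The active unit pushes every state change to its standby peer."""
    def __init__(self, a, b):
        self.units = (a, b)
        a.active = True

    def track(self, conn_id, state):
        for u in self.units:           # mirroring: both units hold the state
            u.connections[conn_id] = state

    def failover(self):
        a, b = self.units              # swap the active role
        a.active, b.active = b.active, a.active

pair = MirroredPair(LTMUnit("bigip-6900-1"), LTMUnit("bigip-6900-2"))
pair.track("ftp-client42", {"bytes_sent": 1_048_576})   # mid-transfer state
pair.failover()
newly_active = next(u for u in pair.units if u.active)
print(newly_active.name, newly_active.connections["ftp-client42"]["bytes_sent"])
```

Because the standby already holds the connection entry when it becomes active, the transfer continues rather than resetting; the cost, as noted above, is the overhead of replicating every state change.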

BIG-IP LTM client-side SSL acceleration allows clients to make requests via SSL without requiring SSL processing by the servers. The BIG-IP terminates the HTTPS client session and initiates an HTTP request to the BIG-IP HTTP server pool. The redirection allows the server filling the request to simply provide the data as required; the server does not have to handle the SSL handshake with the client. This feature, if implemented, reduces server load, saving money, and significantly reduces the number of certificates to be purchased and maintained.

Design considerations

Best practices

- BIG-IP backup/restore: Optimizing the BIG-IP LTM platforms in this solution may involve a series of trial configurations. Always make a configuration backup using the BIG-IP System Archives process before and after any change is made to the configuration. Even though BIG-IP has a configuration synchronization utility (ConfigSync) for redundant pair deployments, the utility will not overwrite fundamental BIG-IP platform configuration that is unique to a platform. As a result, it is necessary to back up each BIG-IP platform to preserve the data unique to that platform. Local Traffic Management configuration (Nodes, Pools, Monitors, Virtual Servers, etc.) can be restored on one BIG-IP platform and, using ConfigSync, replicated to its peer in a redundant pair deployment.

- BIG-IP hardware platform configuration: Each BIG-IP platform plays a different network role in the solution, so the network-related configuration found in the Network configuration tab (Interfaces, Routes, Self IPs, Spanning Tree, Trunks, and VLANs) must be completed independently on each platform.

- BIG-IP Local Traffic Manager redundant pair configuration synchronization: Local Traffic Manager configuration (Virtual Servers, Profiles, Pools, Nodes, Monitors, SSL Certificates, etc.) should be completed on one of the BIG-IPs in each redundant pair. Once the configuration has been completed and verified on one of the BIG-IPs, use System -> High Availability -> ConfigSync to configure the other BIG-IP in the pairing. When using ConfigSync, exercise care when selecting the direction (Synchronize TO Peer or Synchronize FROM Peer); getting the direction of the sync wrong is a definite time sink. See BIG-IP backup/restore above to help avoid or recover from this situation.

- BIG-IP device failover: BIG-IP devices should always be deployed in a redundant pair running in active-standby mode. Either serial or network failover must be configured between each pair.

- BIG-IP power supply failover: Each BIG-IP should be equipped with dual power supplies.

- BIG-IP link failover: Configure several methods of link failover rather than one. For example, adding VLAN Failsafe forces failover when the configured VLAN goes silent, and trunking multiple interfaces allows network traffic to flow as long as one interface in the trunk survives.

- TMS expansion limits: A maximum of four TMS zl Modules is supported in each ProCurve switch in this solution. The tested TMS Sandwich employed two ProCurve switches containing TMS zl Modules. If one of the two switches fails, the other switch must contain a sufficient number of TMS zl Modules to support the cumulative TMS maximum load. If more TMS zl Modules are required, additional switches can be added in pairs to host them. TMS zl Modules can be configured in a high availability mode; this mode is not supported in this solution. All the TMS zl Modules in this solution perform independently and are not aware of other modules' presence or state.

- TMS QoS/ToS: At present, TMS zl Modules do not support QoS/ToS principles. If a TMS zl Module is overloaded, it drops packets without regard to priority. If the TMS firewall solution must support QoS/ToS prioritized packet traffic, a diligent effort should be made to scale the firewall to minimize the possibility of overloading the solution.

- TMS signatures: The Threat Management Services zl Module should be updated with threat signatures on a regular basis. The frequency of updates can be set using the TMS zl Module GUI.

- TMS modules require ProCurve zl switches: The Threat Management Services zl Modules must be installed in ProCurve zl-model switches. Older ProCurve switches may be available but cannot be used with TMS zl Modules.

- BIG-IP training: BIG-IP LTM offers many features that can be used to optimize network and application traffic. It is highly recommended that product training be obtained before attempting to optimize this solution for a customer.

- BIG-IP help: Use the BIG-IP LTM Web Configuration Utility Help tab when selecting options or configuring BIG-IP. Reading the help screens adds insight into the terminology used and the meaning of screen choices.

References

Please refer to the following resources for additional information on the joint HP ProCurve and F5 BIG-IP LTM solution.

Solution information

ProCurve - F5 BIG-IP LTM solution brief: http://procurve.com/docs/one/f5-big-ip-solution-brief.pdf
ProCurve ONE: www.procurve.com/one

Product documentation

HP ProCurve product documentation can be found at:
http://www.procurve.com/customercare/support/manuals/index.htm

F5 Networks product documentation can be found at:
https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/pg_1600_3600.html
https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/pg_6900.html
https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/cli_guide_943.html
https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/bigip_ilu_setup_943.html
https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/bigip_nsm_guide_943.html
https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/ltm_config_guide_943.html
https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/ltm_sol_guide_943.html

Technical training courses

ProCurve technical certification programs: http://www.procurve.com/network-training/certifications/technical.htm
F5 technical certification programs: http://www.f5.com/training-support/certification

Support

For technical support on HP products, consult the support pages at: http://www.procurve.com/customercare/index.htm
For technical support on F5 Networks products, please visit: http://www.f5.com/training-support/customer-support/

Appendix A: configuration information

ProCurve switch VLAN and route configuration

5412zl-2# config
5412zl-2(config)# vlan 20
5412zl-2(vlan-20)# ip address 125.1.20.1/24
5412zl-2(vlan-20)# tagged A1, A11
5412zl-2(vlan-20)# exit
5412zl-2(config)# vlan 30
5412zl-2(vlan-30)# ip address 125.1.30.1/24
5412zl-2(vlan-30)# tagged A4, A8
5412zl-2(vlan-30)# exit
5412zl-2(config)# ip route 125.1.40.0 255.255.255.0 125.1.30.105
5412zl-2(config)# write memory

ProCurve Threat Management Services zl Module installation and configuration

For details regarding the installation and management of the HP ProCurve Threat Management Services zl Modules (TMS zl Modules) used in this solution, please refer to the following documents:

Installation and getting started guide:
http://cdn.procurve.com/training/manuals/tmszlmodule-gettingstarted-050109-59925504.pdf
Management and configuration guide:
http://cdn.procurve.com/training/manuals/tmszlmodule-mgmtcfg-050109-59900224.pdf

For this solution, TMS zl Module configuration is broken into five steps.

Step 1: Add each TMS zl Module to VLAN 20 and VLAN 30

5412zl-2# config
5412zl-2(config)# show vlan 1
5412zl-2(config)# vlan 20
5412zl-2(vlan-20)# tagged D1
5412zl-2(vlan-20)# exit
5412zl-2(config)# vlan 30
5412zl-2(vlan-30)# tagged D1
5412zl-2(vlan-30)# exit
5412zl-2(config)# write memory

Step 2: Create a management zone for the TMS zl Module's GUI interface by creating an Internal zone and assigning it a virtual IP address

5412zl-2# services D 2
5412zl-2(tms-module-D)# configure terminal
5412zl-2(tms-module-D:config)# management zone internal
5412zl-2(tms-module-D:config)# vlan 30 zone internal
5412zl-2(tms-module-D:config)# vlan 30 ip address 125.1.30.150 255.255.255.0

Step 3: Create an External zone and assign it a virtual IP address 5412zl-2(tms-module-D:config)# vlan 20 zone external 5412zl-2(tms-module-D:config)# vlan 20 ip address 125.1.20.150 255.255.255.0 Step 4: Configure three static routes for each TMS 5412zl-2(tms-module-D:config)# ip route 125.1.10.0/24 125.1.20.105 5412zl-2(tms-module-D:config)# ip route 125.1.40.0/24 125.1.30.105 5412zl-2(tms-module-D:config)# ip route 0.0.0.0/0 125.1.20.105 5412zl-2(tms-module-D:config)# write memory Step 5: Using a supported internet browser, log into the TMS zl Module s GUI interface and configure the Threat Management Services as required for the customer. Access the management interface using a supported web browser: https://125.1.xx.150 BIG-IP LTM platform configuration For details relating to installation and configuration of the BIG-IP 3600 & 6900 platforms, refer to the appropriate Platform Guide for the BIG-IP Platform(s) and BIG-IP LTM version in place. For the platforms and version 9.4.6 used in this document, these can be found at: https://support.f5.com/kb/en-us/products/big-ip_ltm/versions.9_4_6.html Set-up the BIG-IP management interface for each BIG-IP platform Once the BIG-IP Platform has powered up, use the six round menu buttons to navigate the front panel menu. Press the red X button Navigate to System > Management > Mgmt IP Enter the IP address of the Management Server (125.1.XX.XX) using the four arrow buttons Press the green check-symbol button In like manner set up the Mgmt Mask (255.255.255.0) and Mgmt Gateway (125.1.XX.1) as appropriate Press the down-arrow button to select commit Press the green check-symbol button twice to commit the change Use the red X-symbol button to return to the main menu Connect to the BIG-IP management interface for the BIG-IP platform to be configured NOTE: BIG-IP Platforms offer both a SSH or serial console command line interface (CLI) and a Web based graphical user interface (GUI). 
For this document, the GUI-based BIG-IP Configuration Utility will be used. From the management server, connect to the BIG-IP platform's management IP address set above and launch the BIG-IP Configuration Utility:
https://125.1.xx.xx
The default user name/password is admin/admin.

Note: You may instead connect a laptop to the MGMT RJ-45 connector on the left side of the BIG-IP platform to perform configuration functions using the Mgmt IP configured earlier.

Configure the platform settings for BIG-IP platforms

See specific product documentation.

BIG-IP LTM network configuration

Network configuration includes:

Creating VLANs
Adding interfaces to the VLANs

Network configuration must be performed on each BIG-IP platform. See specific product documentation.

Configure Self-IPs for BIG-IP platforms

Self-IP addresses are IP addresses that connect the BIG-IP platform interfaces to the networks. For this solution, they consist of interface addresses and floating Self-IP addresses (a floating IP address that always points to the active BIG-IP platform in a redundant pair). See specific product documentation.

Configure a route for each BIG-IP platform

See specific product documentation.

Multiple Spanning Tree Protocol (MSTP) configuration

Two MSTP instances were created on each TMS Sandwich component (BIG-IP 3600-1, BIG-IP 3600-2, ProCurve 5412zl-2, ProCurve 5412zl-3, BIG-IP 6900-1, and BIG-IP 6900-2):

Instance 1 included all of VLAN 20 and its interfaces
Instance 2 included all of VLAN 30 and its interfaces

See specific product documentation.

Steps to configure MSTP on ProCurve 5400 series switches

5412zl-2# config
5412zl-2(config)# spanning-tree
5412zl-2(config)# spanning-tree instance 1 vlan 20
5412zl-2(config)# spanning-tree instance 2 vlan 30
5412zl-2(config)# span config-name firewall-san
5412zl-2(config)# span config-revision 0
5412zl-2(config)# spanning-tree instance 1 priority 0
5412zl-2(config)# spanning-tree instance 2 priority 0
5412zl-2(config)# spanning-tree priority 0
5412zl-2(config)# spanning-tree A1 priority 4
5412zl-2(config)# spanning-tree A4 priority 4
5412zl-2(config)# span force-version mstp-operation
5412zl-2(config)# write memory

BIG-IP LTM local traffic management configuration

Node configuration for BIG-IP platforms

BIG-IP LTM load balances objects in designated Pool(s). Pools consist of Nodes. For this solution, a Node represents an individual TMS zl Module or an individual Web or Application Server used to service incoming or outgoing client requests. Nodes for the BIG-IP 3600 platforms consist of the TMS zl Modules. See specific product documentation.
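The floating Self-IP behavior described above can be sketched as follows. This is an illustrative model only, not F5 code; the per-unit addresses are hypothetical, and 125.1.20.105 is assumed to be a floating Self-IP because it appears as a next hop in the static routes earlier, though the guide does not state the per-unit addresses:

```python
# Sketch: a floating self-IP in a redundant pair always resolves to the
# currently active unit, so neighbors keep using one gateway address
# across a failover.

class RedundantPair:
    def __init__(self, unit_a_ip: str, unit_b_ip: str, floating_ip: str):
        self.units = {"A": unit_a_ip, "B": unit_b_ip}  # per-unit self-IPs
        self.floating_ip = floating_ip
        self.active = "A"  # unit A starts as the active member

    def resolve(self, ip: str) -> str:
        """Return the self-IP of the unit that answers for `ip`."""
        if ip == self.floating_ip:
            return self.units[self.active]  # floating IP tracks the active unit
        return ip

    def failover(self) -> None:
        """Standby unit takes over; the floating IP moves with it."""
        self.active = "B" if self.active == "A" else "A"

# Hypothetical per-unit addresses on VLAN 20 (for illustration only)
pair = RedundantPair("125.1.20.101", "125.1.20.102", "125.1.20.105")
print(pair.resolve("125.1.20.105"))  # -> 125.1.20.101 (unit A active)
pair.failover()
print(pair.resolve("125.1.20.105"))  # -> 125.1.20.102 (unit B after failover)
```

This is why the TMS zl Modules and switches can point static routes at a single address: the address survives a unit failure even though the serving hardware changes.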
Health monitor, pools, gateway, virtual server, redundancy, and ConfigSync configuration for BIG-IP platforms

For details relating to the installation and configuration of the BIG-IP platforms, refer to version-specific documentation at:
https://support.f5.com/kb/en-us/products/big-ip_ltm.html?product=big-ip_ltm

For this solution, the configuration process proceeded as follows:

Configure a Pool with Health Monitors
Configure a Default Gateway
Configure a Virtual Server
Configure Redundancy
Configure ConfigSync
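The interaction between pools and health monitors outlined above can be illustrated with a minimal sketch (assumed behavior, not F5 code): a monitor marks each node up or down, and the pool distributes new connections round-robin across only the nodes currently marked up. The member addresses below are hypothetical TMS zl Module self-IPs:

```python
from itertools import cycle

class Pool:
    """Minimal round-robin pool: traffic goes only to nodes marked 'up'."""
    def __init__(self, nodes):
        self.health = {node: True for node in nodes}  # monitor state per node
        self._order = cycle(nodes)

    def mark(self, node, up: bool):
        # A health monitor would set this based on its probe results
        self.health[node] = up

    def pick(self):
        """Return the next healthy node in round-robin order."""
        for _ in range(len(self.health)):
            node = next(self._order)
            if self.health[node]:
                return node
        raise RuntimeError("no healthy nodes in pool")

# Hypothetical TMS zl Module addresses serving as pool members
pool = Pool(["125.1.30.150", "125.1.30.151"])
print(pool.pick())  # 125.1.30.150
print(pool.pick())  # 125.1.30.151
pool.mark("125.1.30.151", up=False)  # monitor detects a failed module
print(pool.pick())  # 125.1.30.150 -- the down node is skipped
```

This captures the design intent of the TMS Sandwich: when a health monitor takes a TMS zl Module out of the pool, the remaining modules absorb its traffic without any change to the virtual server address clients use.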

Technology for better business outcomes To learn more, visit: www.hp.com/go/procurve www.f5.com Copyright 2009 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. 4AA2-9421ENW, September 2009