
TEST METHODOLOGY Network Firewall Data Center v1.0

Table of Contents

1 Introduction
    1.1 The Need for Firewalls In The Data Center
    1.2 About This Test Methodology and Report
    1.3 Inclusion Criteria
2 Product Guidance
    2.1 Recommended
    2.2 Neutral
    2.3 Caution
3 Security Effectiveness
    3.1 Firewall Policy Enforcement
        3.1.1 Baseline Policy
        3.1.2 Simple Policies
        3.1.3 Complex Policies
        3.1.4 Static NAT (Network Address Translation)
        3.1.5 SYN Flood Protection
        3.1.6 IP Address Spoofing
        3.1.7 TCP Split Handshake Spoof
4 Performance
    4.1 Raw Packet Processing Performance (UDP Traffic)
        4.1.1 64 Byte Packets
        4.1.2 128 Byte Packets
        4.1.3 256 Byte Packets
        4.1.4 512 Byte Packets
        4.1.5 1024 Byte Packets
        4.1.6 1514 Byte Packets
    4.2 Latency
        4.2.1 64 Byte Frames
        4.2.2 128 Byte Frames
        4.2.3 256 Byte Frames
        4.2.4 512 Byte Frames
        4.2.5 1024 Byte Frames
        4.2.6 1514 Byte Frames
    4.3 Maximum Capacity
        4.3.1 Theoretical Maximum Concurrent TCP Connections
        4.3.2 Theoretical Maximum Concurrent TCP Connections With Data
        4.3.3 Maximum TCP Connections Per Second
        4.3.4 Maximum HTTP Connections Per Second
        4.3.5 Maximum HTTP Transactions Per Second
    4.4 HTTP Capacity With No Transaction Delays
        4.4.1 44KB HTTP response size - 2,500 Connections Per Second
        4.4.2 21KB HTTP response size - 5,000 Connections Per Second
        4.4.3 10KB HTTP response size - 10,000 Connections Per Second
        4.4.4 4.5KB HTTP response size - 20,000 Connections Per Second
        4.4.5 1.7KB HTTP response size - 40,000 Connections Per Second
    4.5 Application Average Response Time: HTTP
    4.6 HTTP Connections per Second and Capacity (With Delays)
    4.7 Real-World Traffic
        4.7.1 Real-World Protocol Mix (Data Center - Financial)
        4.7.2 Real-World Protocol Mix (Data Center - Virtualization Hub)
        4.7.3 Real-World Protocol Mix (Data Center - Mobile Users and Applications)
        4.7.4 Real-World Protocol Mix (Data Center - Web-Based Applications and Services)
        4.7.5 Real-World Protocol Mix (Data Center - Internet Service Provider (ISP) Mix)
5 Stability & Reliability
    5.1 Blocking Under Extended Attack
    5.2 Passing Legitimate Traffic Under Extended Attack
    5.3 Protocol Fuzzing & Mutation
    5.4 Power Fail
    5.5 Redundancy
    5.6 Persistence Of Data
    5.7 High Availability (HA)
        5.7.1 Failover - Legitimate Traffic
        5.7.2 Time To Failover
        5.7.3 Stateful Operation
        5.7.4 Active-Active Configuration
6 Management & Configuration
7 Total Cost of Ownership & Value
Appendix A: Test Environment
Contact Information

1 Introduction

1.1 The Need for Firewalls In The Data Center

Firewall technology is one of the largest and most mature security markets. Firewalls have undergone several stages of development, from early packet filtering and circuit relay firewalls to application layer (proxy-based) and dynamic packet filtering firewalls. Throughout their history, however, the goal has been to enforce an access control policy between two networks, and thus firewalls should be viewed as an implementation of policy. A firewall is a mechanism used to protect a trusted network from an untrusted network, while allowing authorized communications to pass from one side to the other.

When considering firewalls for the data center rather than for the network perimeter, there are several key metrics that need to be adjusted. Performance metrics, while important in any firewall, become more critical in a device intended for data center deployment. The volume of traffic will be significantly higher than for a firewall intended to enforce policy for end users accessing the Internet through the corporate network perimeter. Data center firewalls need to support much higher data rates as they handle traffic for potentially hundreds of thousands of users accessing large applications in a server farm inside the network perimeter. Connection rate and concurrent connection capacity are also metrics that become even more critical in data center firewalls.

Traffic mix will alter significantly between a corporate network perimeter and a data center, and this can put additional load on the firewall inspection process. Stateless UDP traffic (such as you would see in a Network File System (NFS)) and long-lived TCP connections (such as you would see in an iSCSI Storage Area Network (SAN), or a backup application) are common in many data center networks. These types of applications present continued and heavy load to the network.

In the data center, application traffic puts a very different load on the network than does file system traffic. Client-server communications between users and servers, and server-server communications between application, database, and directory servers have very different profiles. Application traffic is connection intensive, with connections constantly being set up and torn down. Firewalls that include any form of application awareness capabilities will find particular challenges in data center deployments. Latency is also a critical concern, because if the firewall introduces delays, applications will be adversely affected.

1.2 About This Test Methodology and Report

NSS Labs test reports are designed to address the challenges faced by information technology (IT) professionals in selecting and managing security products. The scope of this particular report includes:

- Security effectiveness
- Performance and stability
- Management
- Total Cost of Ownership (TCO)

In order to establish a secure perimeter, a basic network firewall must provide granular control based upon the source and destination IP addresses and ports. As firewalls will be deployed at critical points in the network, the stability and reliability of a firewall is imperative.

In addition, it must not degrade network performance or it will never be installed. Any new firewall must be as stable, as reliable, as fast, and as flexible as the firewall it is replacing. The following capabilities are considered essential as part of a firewall:

- Basic packet filtering
- Stateful packet inspection
- Network Address Translation (NAT)
- Highly stable
- Ability to operate at Layer 3 (IPv4)

1.3 Inclusion Criteria

In order to encourage the greatest participation, and allay any potential concerns of bias, NSS invites all leading firewall vendors to submit their products at no cost. Vendors with major market share, as well as challengers with new technology, will be included.

To be considered a data center device, any firewall submitted to this test should be capable of a minimum of 40 Gbps throughput (as claimed by the vendor). The firewall should be supplied as a single appliance where possible (cluster controller solutions are acceptable), with the appropriate number of physical interfaces capable of achieving the required level of connectivity and performance (minimum of one in-line segment per physical medium unit of throughput).

Firewall products should be implemented as in-line Layer 3 (routing) devices. Multiple separate connections will be made from the external to internal switches via the device under test (DUT), subject to a minimum of one in-line port pair per gigabit of throughput. Thus, an 80 Gbps device with only four 10 Gb port pairs will be limited to 40 Gbps. The minimum number of port pairs will be connected to support the claimed maximum bandwidth of the DUT.

Once installed in the test lab, the DUT will be configured for the use case appropriate to the target deployment (corporate data center). The DUT should also be configured to block all traffic when resources are exhausted or when traffic cannot be analyzed for any reason.
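As a back-of-the-envelope illustration of the port-pair rule above (this sketch is not part of the NSS methodology itself), the throughput that can actually be exercised during testing is bounded by the aggregate capacity of the connected in-line port pairs, whatever the vendor claims. A minimal Python sketch with hypothetical figures:

    # Illustrative only: the bandwidth that can be exercised in the test is
    # limited by the connected in-line port pairs, regardless of the claim.
    def testable_throughput_gbps(claimed_gbps, port_pairs, pair_speed_gbps):
        """Return the throughput (Gbps) the test harness can actually drive."""
        return min(claimed_gbps, port_pairs * pair_speed_gbps)

    # The example from section 1.3: an 80 Gbps device with four 10 Gb port
    # pairs can only be exercised at 40 Gbps.
    print(testable_throughput_gbps(claimed_gbps=80, port_pairs=4, pair_speed_gbps=10))  # 40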

2 Product Guidance

NSS Labs issues summary product guidance based on evaluation criteria that are important to information security professionals. The evaluation criteria are weighted as follows:

1. Security effectiveness - The primary reason for buying a firewall is to separate internal trusted networks from external untrusted networks, while allowing select controlled traffic to flow between trusted and untrusted networks.
2. Resistance to evasion - Failure in any evasion class permits attackers to circumvent protection.
3. Stability - Long-term stability is particularly important for an in-line device, where failure can produce network outages.
4. Performance - Correct sizing of a firewall is essential.
5. Management - In particular, how difficult is it to configure the highest degree of protection across multiple devices?
6. Value - Customers should seek low TCO and high effectiveness and performance rankings.

Products are listed in rank order according to their guidance rating.

2.1 Recommended

A Recommended rating from NSS indicates that a product has performed well and deserves strong consideration. Only the top technical products earn a Recommended rating from NSS, regardless of market share, company size, or brand recognition.

2.2 Neutral

A Neutral rating from NSS indicates that a product has performed reasonably well and should continue to be used if it is the incumbent within an organization. Products that earn a Neutral rating from NSS deserve consideration during the purchasing process.

2.3 Caution

A Caution rating from NSS indicates that a product has performed poorly. Organizations using one of these products should review their security posture and other threat mitigation factors, including possible alternative configurations and replacement. Products that earn a Caution rating from NSS should not be short-listed or renewed.

3 Security Effectiveness

This section verifies that the DUT is capable of enforcing a specified security policy effectively. The NSS firewall analysis is conducted by incrementally building upon a baseline configuration (simple routing with no policy restrictions and no content inspection) to a complex, real-world, multiple-zone configuration supporting many addressing modes, policies, applications, and inspection engines. At each level of complexity, test traffic is passed across the firewall to ensure that only specified traffic is allowed and the rest is denied, and that appropriate log entries are recorded.

The firewall must support stateful firewalling, either by managing state tables to prevent traffic leakage, or as a stateful proxy. The ability to manage firewall policy across multiple interfaces/zones is a required function. At a minimum, the firewall must provide a trusted internal interface, an untrusted external/Internet interface, and (optionally) one or more DMZ interfaces. In addition, a dedicated management interface (virtual or otherwise) is preferred.

3.1 Firewall Policy Enforcement

Policies are rules that are configured on a firewall to permit or deny access from one network resource to another, based on identifying criteria such as source, destination, and service. A term typically used to define the demarcation point of a network where policy is applied is a demilitarized zone (DMZ). Policies are typically written to permit or deny network traffic from one or more of the following zones:

- Untrusted - This is typically an external network and is considered to be unknown and non-secure. An example of an untrusted network would be the Internet.
- DMZ - This is a network that is being isolated by the firewall, restricting network traffic to and from hosts contained within the isolated network.
- Trusted - This is typically an internal network; a network that is considered secure and protected.

The NSS firewall tests verify performance and the ability to enforce policy between the following:

- Trusted to Untrusted
- Untrusted to DMZ
- Trusted to DMZ

Note: Firewalls must provide at a minimum one DMZ interface in order to provide a DMZ or transition point between untrusted and trusted networks.

3.1.1 Baseline Policy

Routed configuration with an allow all policy.

3.1.2 Simple Policies

Simple outbound and inbound policies allowing basic browsing and e-mail access for external clients and no other external access.
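To make the notion of zone-based permit/deny rules concrete, the sketch below shows, in Python, how a simple policy of the kind described in sections 3.1.1 and 3.1.2 can be represented and evaluated with first-match semantics and a default deny. It illustrates the concept only; it is not the configuration syntax of any tested product, and the zone names, ports, and rules are hypothetical.

    # Illustrative only: a toy first-match rule table for zone-to-zone policy.
    # Not the policy language of any DUT; rules and zones are hypothetical.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Rule:
        src_zone: str              # e.g. "trusted", "untrusted", "dmz"
        dst_zone: str
        dst_port: Optional[int]    # None matches any destination port
        action: str                # "permit" or "deny"

    # A "simple policy": internal clients may browse and send mail; nothing else.
    POLICY = [
        Rule("trusted", "untrusted", 80,  "permit"),   # HTTP
        Rule("trusted", "untrusted", 443, "permit"),   # HTTPS
        Rule("trusted", "untrusted", 25,  "permit"),   # SMTP
        Rule("untrusted", "dmz",     25,  "permit"),   # inbound mail to a DMZ relay
    ]
    DEFAULT_ACTION = "deny"        # anything not explicitly permitted is dropped

    def evaluate(src_zone: str, dst_zone: str, dst_port: int) -> str:
        """Return the action of the first matching rule, else the default deny."""
        for rule in POLICY:
            if (rule.src_zone == src_zone and rule.dst_zone == dst_zone
                    and rule.dst_port in (None, dst_port)):
                return rule.action
        return DEFAULT_ACTION

    print(evaluate("trusted", "untrusted", 443))   # permit
    print(evaluate("untrusted", "trusted", 3389))  # deny (default)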

3.1.3 Complex Policies

Complex outbound and inbound policies consisting of many rules, objects, and services.

3.1.4 Static NAT (Network Address Translation)

Inbound network address translation (NAT) to the DMZ using fixed IP address translation with one-to-one mapping.

3.1.5 SYN Flood Protection

The basis of a SYN flood attack is to fail to complete the three-way handshake necessary to establish a legitimate session. The objective of SYN flooding is to disable one side of the TCP connection, which will result in one or more of the following:

- The server is unable to accept new connections.
- The server crashes or becomes inoperative.
- Authorization between servers is impaired.

The DUT is expected to protect against SYN floods in both normal and distributed denial-of-service (DDoS) situations.

3.1.6 IP Address Spoofing

This test attempts to confuse the firewall into allowing traffic to pass from one network segment to another. By forging the IP header to contain a different source address from where the packet was actually transmitted, an attacker can make it appear that the packet was sent from a different (trusted) machine. The endpoint that receives successfully spoofed packets will respond to the forged source address (the attacker). The DUT is expected to protect against IP address spoofing.

3.1.7 TCP Split Handshake Spoof

This test attempts to confuse the firewall into allowing traffic to pass from one network segment to another. The TCP split handshake blends features of both the three-way handshake and the simultaneous-open connection. The result is a TCP spoof attack that allows an attacker to bypass the firewall by instructing the target to initiate the session back to the attacker. Popular TCP/IP networking stacks respect this handshaking method, including Microsoft, Apple, and Linux stacks, with no modification. [1] The DUT is expected to protect against TCP split handshake spoofing.

[1] The TCP Split Handshake: Practical Effects on Modern Network Equipment, Tod Alien Beardsley & Jin Qian, http://www.macrothink.org/journal/index.php/npa/article/view/285
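For readers unfamiliar with how a SYN flood is generated in a lab, the following minimal Python/Scapy sketch illustrates the half-open connection behavior the DUT must absorb in section 3.1.5. It is an illustration only, not the NSS test harness (NSS uses commercial traffic generators); the target address and packet count are hypothetical, and it should only ever be run against equipment in an isolated test network.

    # Illustrative lab sketch of a SYN flood (section 3.1.5), not the NSS harness.
    # The target address is a hypothetical host in the TEST-NET-1 range.
    from scapy.all import IP, TCP, RandShort, send

    TARGET_IP = "192.0.2.10"
    TARGET_PORT = 80

    def syn_flood(count=10_000):
        # Each packet is a bare SYN from a random source port; the handshake is
        # never completed, so half-open connections accumulate on the server
        # side unless the firewall intervenes (e.g., SYN proxying or SYN cookies).
        pkt = IP(dst=TARGET_IP) / TCP(sport=RandShort(), dport=TARGET_PORT, flags="S")
        send(pkt, count=count, verbose=False)

    if __name__ == "__main__":
        syn_flood()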

4 Performance

This section measures the performance of the firewall under various traffic conditions that provide metrics for real-world performance. Individual implementations will vary based on usage; however, these quantitative metrics provide a gauge as to whether a particular DUT is appropriate for a given environment.

4.1 Raw Packet Processing Performance (UDP Traffic)

This test uses UDP packets of varying sizes generated by the BreakingPoint Systems traffic generation tool. A constant stream of the appropriate packet size, with variable source and destination IP addresses transmitting from a fixed source port to a fixed destination port, is transmitted bi-directionally through each port pair of the DUT.

Each packet contains dummy data and is targeted at a valid port on a valid IP address on the target subnet. The percentage load and frames per second (fps) figures across each in-line port pair are verified by network monitoring tools before each test begins. Multiple tests are run and averages are taken where necessary.

This traffic does not attempt to simulate any form of real-world network condition. No TCP sessions are created during this test, and there is very little for the state engine to do. The aim of this test is purely to determine the raw packet processing capability of each in-line port pair of the DUT and its effectiveness at forwarding packets quickly, in order to provide the highest level of network performance and lowest latency.

4.1.1 64 Byte Packets

Maximum 1,488,000 frames per second per gigabit of traffic. This test determines the ability of a device to process packets from the wire under the most challenging packet processing conditions.

4.1.2 128 Byte Packets

Maximum 844,000 frames per second per gigabit of traffic.

4.1.3 256 Byte Packets

Maximum 452,000 frames per second per gigabit of traffic.

4.1.4 512 Byte Packets

Maximum 234,000 frames per second per gigabit of traffic. This test provides a reasonable indication of the ability of a device to process packets from the wire on an average network.

4.1.5 1024 Byte Packets

Maximum 119,000 frames per second per gigabit of traffic.

4.1.6 1514 Byte Packets

Maximum 81,000 frames per second per gigabit of traffic. This test has been included mainly to demonstrate how easy it is to achieve good results using large packets. Readers should use caution when considering test results that only quote performance figures using similarly large packet sizes.
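The per-gigabit frame-rate ceilings quoted above follow directly from Ethernet line-rate arithmetic: in addition to its nominal size, each frame occupies 20 bytes of per-frame overhead on the wire (12-byte inter-frame gap, 7-byte preamble, 1-byte start-of-frame delimiter). The short Python sketch below, included purely for illustration, reproduces the figures:

    # Sanity check of the frame rates in sections 4.1.1-4.1.6 (illustrative only).
    # Theoretical line rate = link bit rate / bits per frame on the wire, where
    # each frame adds 20 bytes of overhead (inter-frame gap + preamble + SFD).
    LINK_BPS = 1_000_000_000      # one gigabit per second
    WIRE_OVERHEAD_BYTES = 20      # 12 IFG + 7 preamble + 1 SFD

    for frame_bytes in (64, 128, 256, 512, 1024, 1514):
        fps = LINK_BPS / ((frame_bytes + WIRE_OVERHEAD_BYTES) * 8)
        print(f"{frame_bytes:>4}-byte frames: {fps:,.0f} fps per gigabit")

    # Output (rounded): 1,488,095 / 844,594 / 452,898 / 234,962 / 119,731 / 81,486,
    # matching the quoted maxima of 1,488,000 / 844,000 / 452,000 / 234,000 /
    # 119,000 / 81,000 frames per second.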

4.2 Latency

The aim of the latency and user response time tests is to determine the effect the firewall has on the traffic passing through it under various load conditions. Test traffic is passed across the infrastructure switches and through all in-line port pairs of the DUT simultaneously (the latency of the basic infrastructure is known and is constant throughout the tests).

The packet loss and average latency (µs) are recorded for each frame size (64, 128, 256, 512, 1024, and 1514 bytes) at a load level of 90 per cent of the maximum throughput with zero packet loss, as previously determined in test 4.1 (Raw Packet Processing Performance (UDP Traffic)).

4.2.1 64 Byte Frames

Maximum 1,488,000 frames per second per gigabit of traffic.

4.2.2 128 Byte Frames

Maximum 844,000 frames per second per gigabit of traffic.

4.2.3 256 Byte Frames

Maximum 452,000 frames per second per gigabit of traffic.

4.2.4 512 Byte Frames

Maximum 234,000 frames per second per gigabit of traffic.

4.2.5 1024 Byte Frames

Maximum 119,000 frames per second per gigabit of traffic.

4.2.6 1514 Byte Frames

Maximum 81,000 frames per second per gigabit of traffic.

4.3 Maximum Capacity

The use of IXIA BreakingPoint appliances allows NSS engineers to create true real-world traffic at multi-gigabit speeds as a background load for the tests. The aim of these tests is to stress the inspection engine and determine how it handles high volumes of TCP connections per second, application layer transactions per second, and concurrent open connections. All packets contain valid payload and address data, and these tests provide an excellent representation of a live network at various connection/transaction rates.

Note that in all tests, the following critical breaking points (where the final measurements are taken) are used:

- Excessive concurrent TCP connections - Unacceptable increase in open connections on the server side.
- Excessive response time for HTTP transactions - Excessive delays and increased response time to the client.
- Unsuccessful HTTP transactions - Normally, there should be zero unsuccessful transactions. Their occurrence indicates that excessive latency is causing connections to time out.

4.3.1 Theoretical Maximum Concurrent TCP Connections

This test is designed to determine the maximum concurrent TCP connections of the DUT with no data passing across the connections. This type of traffic would not typically be found on a normal network, but it provides the means to determine the maximum possible concurrent connections figure.

An increasing number of Layer 4 TCP sessions are opened through the device. Each session is opened normally and then held open for the duration of the test as additional sessions are added, up to the maximum possible. Load is increased until no more connections can be established, and this number is recorded.

4.3.2 Theoretical Maximum Concurrent TCP Connections With Data

This test is identical to 4.3.1, except that once a connection has been established, 21KB of data is transmitted (in 21KB segments). This ensures that the DUT is capable of passing data across the connections once they have been established.

4.3.3 Maximum TCP Connections Per Second

This test is designed to determine the maximum TCP connection rate of the DUT with one byte of data passing across the connections. This type of traffic would not typically be found on a normal network, but it provides the means to determine the maximum possible TCP connection rate.

An increasing number of new sessions are established through the DUT and ramped slowly to determine the exact point of failure. Each session is opened normally, one byte of data is passed to the host, and then the session is closed immediately. Load is increased until one or more of the breaking points defined earlier is reached.

4.3.4 Maximum HTTP Connections Per Second

This test is designed to determine the maximum TCP connection rate of the DUT with a 1 byte HTTP response size. The response size defines the number of bytes contained in the body, excluding any bytes associated with the HTTP header. A 1 byte response size is designed to provide a theoretical maximum HTTP connections per second rate.

Client and server use HTTP 1.0 without keep-alive, and the client will open a TCP connection, send one HTTP request, and close the connection. This ensures that all TCP connections are closed immediately once the request is satisfied, and thus any concurrent TCP connections will be caused purely as a result of the latency of the DUT. Load is increased until one or more of the breaking points defined earlier is reached.

4.3.5 Maximum HTTP Transactions Per Second

This test is designed to determine the maximum HTTP transaction rate of the DUT with a 1 byte HTTP response size. The response size defines the number of bytes contained in the body, excluding any bytes associated with the HTTP header. A 1 byte response size is designed to provide a theoretical maximum transactions per second rate.

Client and server use HTTP 1.1 with persistence, and the client will open a TCP connection, send ten HTTP requests, and close the connection. This ensures that TCP connections remain open until all ten HTTP transactions are complete, thus eliminating the maximum connections per second rate as a bottleneck (one TCP connection = 10 HTTP transactions). Load is increased until one or more of the breaking points defined earlier is reached.
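To make the difference between the connection-per-second test (4.3.4) and the transaction-per-second test (4.3.5) concrete, the following Python sketch reproduces the two client behaviors described above using only the standard library. It is purely illustrative of the traffic pattern, not the BreakingPoint configuration used by NSS; the host name and URL are hypothetical placeholders.

    # Illustrative traffic patterns for tests 4.3.4 and 4.3.5 (not the NSS harness).
    # The lab server name and path below are hypothetical placeholders.
    import http.client

    HOST = "server.lab.example"

    def connections_per_second_pattern():
        # 4.3.4: HTTP 1.0 style, no keep-alive. One TCP connection carries exactly
        # one GET and then closes, so the measured limit is connection setup/teardown.
        conn = http.client.HTTPConnection(HOST, 80)
        conn.request("GET", "/1byte.html", headers={"Connection": "close"})
        conn.getresponse().read()
        conn.close()

    def transactions_per_second_pattern():
        # 4.3.5: HTTP 1.1 with persistence. Ten GETs reuse one TCP connection, so
        # the transaction rate is no longer bounded by the connection rate
        # (one TCP connection = 10 HTTP transactions).
        conn = http.client.HTTPConnection(HOST, 80)
        for _ in range(10):
            conn.request("GET", "/1byte.html")
            conn.getresponse().read()
        conn.close()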

4.4 HTTP Capacity With No Transaction Delays

The aim of these tests is to stress the HTTP detection engine and determine how the DUT copes with network loads of varying average packet size and varying connections per second. By creating genuine, session-based traffic with varying session lengths, the DUT is forced to track valid TCP sessions, thus ensuring a higher workload than for simple packet-based background traffic. This provides a test environment that is as close to real world as it is possible to achieve in a lab environment, while ensuring absolute accuracy and repeatability.

Each transaction consists of a single HTTP GET request and there are no transaction delays (i.e., the web server responds immediately to all requests). All packets contain valid payload (a mix of binary and ASCII objects) and address data, and this test provides an excellent representation of a live network (albeit one biased towards HTTP traffic) at various network loads.

                            44KB      21KB      10KB      4.5KB     1.7KB
                            Response  Response  Response  Response  Response
  Connections per second    2,500     5,000     10,000    20,000    40,000
  Mbps                      1,000     1,000     1,000     1,000     1,000

Figure 1 - HTTP Capacity

4.4.1 44KB HTTP response size - 2,500 Connections Per Second

Maximum 2,500 new connections per second per gigabit of traffic with a 44KB HTTP response size - average packet size 900 bytes - maximum 140,000 packets per second per gigabit of traffic. With relatively low connection rates and large packet sizes, all hosts should be capable of performing well throughout this test.

4.4.2 21KB HTTP response size - 5,000 Connections Per Second

Maximum 5,000 new connections per second per gigabit of traffic with a 21KB HTTP response size - average packet size 670 bytes - maximum 185,000 packets per second per gigabit of traffic. With average connection rates and average packet sizes, this is a good approximation of a real-world production network, and all hosts should be capable of performing well throughout this test.

4.4.3 10KB HTTP response size - 10,000 Connections Per Second

Maximum 10,000 new connections per second per gigabit of traffic with a 10KB HTTP response size - average packet size 550 bytes - maximum 225,000 packets per second per gigabit of traffic. With smaller packet sizes coupled with high connection rates, this represents a very heavily used production network.

4.4.4 4.5KB HTTP response size - 20,000 Connections Per Second

Maximum 20,000 new connections per second per gigabit of traffic with a 4.5KB HTTP response size - average packet size 420 bytes - maximum 300,000 packets per second per gigabit of traffic. With small packet sizes and extremely high connection rates, this is an extreme test for any host.

4.4.5 1.7KB HTTP response size - 40,000 Connections Per Second

Maximum 40,000 new connections per second per gigabit of traffic with a 1.7KB HTTP response size - average packet size 270 bytes - maximum 445,000 packets per second per gigabit of traffic. With small packet sizes and extremely high connection rates, this is an extreme test for any host.

4.5 Application Average Response Time: HTTP

Test traffic is passed across the infrastructure switches and through all in-line port pairs of the DUT simultaneously (the latency of the basic infrastructure is known and is constant throughout the tests). The results are recorded at each response size (44KB, 21KB, 10KB, 4.5KB, and 1.7KB HTTP responses) at a load level of 90% of the maximum throughput with zero packet loss, as previously determined in test 4.4 (HTTP Capacity With No Transaction Delays).

4.6 HTTP Connections per Second and Capacity (With Delays)

Typical user behavior introduces delays between requests and responses, for example "think time," as users read web pages and decide which links to click next. This group of tests is identical to the previous group, except that these include a 5 second delay in the server response for each transaction. This has the effect of maintaining a high number of open connections throughout the test, thus forcing the firewall to utilize additional resources to track those connections.

4.7 Real-World Traffic

Where previous tests provide a pure HTTP environment with varying connection rates and average packet sizes, the goal of this test is to simulate a real-world environment by introducing additional protocols and real content, while still maintaining a precisely repeatable and consistent background traffic load. The result is a background traffic load that is closer to what may be found on a heavily utilized production network.

4.7.1 Real-World Protocol Mix (Data Center - Financial)

Traffic is generated across the DUT comprising a protocol mix typical of that seen in a large financial institution data center.

4.7.2 Real-World Protocol Mix (Data Center - Virtualization Hub)

Traffic is generated across the DUT comprising a protocol mix typical of that seen in a large data center focusing on virtualization traffic (vMotion, Hyper-V migration, etc.).

4.7.3 Real-World Protocol Mix (Data Center - Mobile Users and Applications)

Traffic is generated across the DUT comprising a protocol mix typical of that seen in a large mobile carrier.

4.7.4 Real-World Protocol Mix (Data Center - Web-Based Applications and Services)

Traffic is generated across the DUT comprising a protocol mix typical of that seen in a web hosting data center.

4.7.5 Real-World Protocol Mix (Data Center - Internet Service Provider (ISP) Mix)

Traffic is generated across the DUT comprising a protocol mix typical of that seen in an ISP installation, covering all types of traffic.

5 Stability & Reliability

Long-term stability is particularly important for an in-line device, where failure can produce network outages. These tests verify the stability of the DUT along with its ability to maintain security effectiveness while under normal load and while passing malicious traffic. Products that crash, or that are not able to sustain legitimate traffic while under hostile attack, will not pass.

The DUT is required to remain operational and stable throughout these tests, and to block 100 per cent of previously blocked traffic, raising an alert for each. If any prohibited traffic passes successfully, as a result of either the volume of traffic or the DUT failing open for any reason, this will result in a FAIL.

5.1 Blocking Under Extended Attack

The DUT is exposed to a constant stream of security policy violations over an extended period of time. The device is configured to block and alert, and thus this test provides an indication of the effectiveness of both the blocking and alert handling mechanisms.

A continuous stream of security policy violations mixed with legitimate traffic is transmitted through the device at a maximum of 100 Mbps (maximum 50,000 packets per second, average packet sizes in the range of 120-350 bytes) for eight hours with no additional background traffic. This is not intended as a stress test in terms of traffic load (that is covered in the previous section), but is merely a reliability test in terms of consistency of blocking performance.

The device is expected to remain operational and stable throughout this test, and to block 100 per cent of recognizable violations, raising an alert for each. If any recognizable policy violations are passed, as a result of either the volume of traffic or the sensor failing open for any reason, this will result in a FAIL.

5.2 Passing Legitimate Traffic Under Extended Attack

This test is identical to 5.1, where the external interface of the device is exposed to a constant stream of security policy violations over an extended period of time. The device is expected to remain operational and stable throughout this test, and to pass most/all of the legitimate traffic. If an excessive amount of legitimate traffic is blocked throughout this test, as a result of either the volume of traffic or the DUT failing for any reason, this will result in a FAIL.

5.3 Protocol Fuzzing & Mutation

This test stresses the protocol stacks of the DUT by exposing it to traffic from various protocol randomizer and mutation tools. Several of the tools in this category are based on the ISIC test suite and the BreakingPoint Stack Scrambler component. Traffic load is a maximum of 350 Mbps and 60,000 packets per second (average packet size is 690 bytes). Results are presented as a simple PASS/FAIL. The device is expected to remain operational and capable of detecting and blocking exploits throughout the test.

5.4 Power Fail

Power to the DUT is cut whilst it is passing a mixture of legitimate and disallowed traffic. Firewalls should always be configured to fail closed: no traffic should be passed once power has been cut.

5.5 Redundancy

Does the DUT include multiple redundant critical components (fans, power supplies, hard drives, etc.)? (YES/NO/OPTION)

5.6 Persistence Of Data

The DUT should retain all configuration data, policy data, and locally logged data once restored to operation following power failure.

5.7 High Availability (HA)

High availability (HA) is important to many enterprise customers, and this test is designed to evaluate the effectiveness of available HA options. If no HA offering is available, all results in this section will be marked as N/A.

5.7.1 Failover - Legitimate Traffic

Two identical devices will be configured in an active-passive configuration, and legitimate traffic will be passed through the DUT at 50 percent of the maximum rated load as determined in test 4.4.2 (21KB HTTP response size). Switch connectivity to the primary device will be terminated, and the device will be expected to fail over seamlessly with zero loss of legitimate traffic (some retransmissions are acceptable).

5.7.2 Time To Failover

The time to fail over to the standby device will be recorded.

5.7.3 Stateful Operation

Is full state maintained across all connections throughout the period of failover?

5.7.4 Active-Active Configuration

Is an active-active configuration available? (YES/NO)

6 Management & Configuration

Security devices are complicated to deploy; essential systems such as centralized management console options, log aggregation, and event correlation/management systems further complicate the purchasing decision. Understanding key comparison points will allow customers to model the overall impact on network service level agreements (SLAs), estimate operational resource requirements to maintain and manage the systems, and better evaluate the required skills/competencies of staff.

As part of this test, NSS will perform in-depth technical evaluations of all the main features and capabilities of the centralized enterprise management systems offered by each vendor, covering the following key areas:

- General Management and Configuration - How easy is it to install and configure devices, and deploy multiple devices throughout a large enterprise network?
- Policy Handling - How easy is it to create, edit, and deploy complicated security policies across an enterprise?
- Alert Handling - How accurate and timely is the alerting, and how easy is it to drill down to locate critical information needed to remediate a security problem?
- Reporting - How effective and customizable is the reporting capability?

For additional information concerning enterprise management testing, refer to the separate management questionnaire document.

7 Total Cost of Ownership & Value

Organizations should be concerned with the ongoing, amortized cost of operating security products. This section evaluates the costs associated with the purchase, installation, and ongoing management of the firewall:

- Product Purchase - The cost of acquisition.
- Product Maintenance - The fees paid to the vendor (including software and hardware support, maintenance, and updates).
- Installation - The time required to take the device out of the box, configure it, put it into the network, apply updates and patches, perform initial tuning, and set up desired logging and reporting.
- Upkeep - The time required to apply periodic updates and patches from vendors, including hardware, software, and firmware updates.
- Management - Day-to-day management tasks, including device configuration, policy updates, policy deployment, alert handling, and the like.

Appendix A: Test Environment

The aim of this procedure is to provide a thorough test of all the main components of a routed firewall device in a controlled and repeatable manner, and in the most real-world environment that can be simulated in a test lab.

The Test Environment

The NSS Labs test network is a multi-gigabit infrastructure that can accommodate both gigabit copper and 10 gigabit fiber interfaces. The firewall is configured for the use case according to the test methodology. Traffic generation equipment, such as the hosts generating exploits and the BreakingPoint transmit ports, is connected to the external network, while the receiving equipment, such as the vulnerable hosts for the exploits and the BreakingPoint receive ports, is connected to the internal network. The firewall is connected between two gateway switches, one at the edge of the external network and one at the edge of the internal network.

[Figure 2 - Test Environment: external hosts and clients on one side of the device under test (DUT), the data center on the other.]

All normal network traffic, background load traffic, and exploit traffic is transmitted through the firewall, from external to internal (responses flow in the opposite direction). The same traffic is mirrored to multiple SPAN ports of the external gateway switch, to which network monitoring devices are connected. The network monitoring devices ensure that the total amount of traffic per port pair reflects the amount being sent and received by the BreakingPoint.

The management interface is used to connect the appliance to the management console on a private subnet. This ensures that the firewall and console can communicate even when the target subnet is subjected to heavy loads, in addition to preventing attacks on the console itself.

Contact Information

NSS Labs, Inc.
206 Wild Basin Rd, Building A, Suite 200
Austin, TX 78746 USA
+1 (512) 961-5300
info@nsslabs.com
www.nsslabs.com

This and other related documents are available at www.nsslabs.com. To receive a licensed copy or report misuse, please contact NSS Labs at +1 (512) 961-5300 or sales@nsslabs.com.

2013 NSS Labs, Inc. All rights reserved. No part of this publication may be reproduced, photocopied, stored on a retrieval system, or transmitted without the express written consent of the authors.

Please note that access to or use of this document is conditional on the following:

1. NSS Labs reserves the right to modify any part of the methodology before, or during, a test, or to amend the configuration of a device under test (DUT) where specific characteristics of the DUT or its configuration interfere with the normal operation of any of the tests, or where the results obtained from those tests would, in the good faith opinion of NSS Labs engineers, misrepresent the true capabilities of the DUT. Every effort will be made to ensure the optimal combination of security effectiveness and performance, as would be the aim of a typical customer deploying the DUT in a live network environment.

2. The information in this document is believed by NSS Labs to be accurate and reliable at the time of publication, but is not guaranteed. All use of and reliance on this document are at the reader's sole risk. NSS Labs is not liable or responsible for any damages, losses, or expenses arising from any error or omission in this document.

3. NO WARRANTIES, EXPRESS OR IMPLIED, ARE GIVEN BY NSS LABS. ALL IMPLIED WARRANTIES, INCLUDING IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT ARE DISCLAIMED AND EXCLUDED BY NSS LABS. IN NO EVENT SHALL NSS LABS BE LIABLE FOR ANY CONSEQUENTIAL, INCIDENTAL OR INDIRECT DAMAGES, OR FOR ANY LOSS OF PROFIT, REVENUE, DATA, COMPUTER PROGRAMS, OR OTHER ASSETS, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.

4. This document does not constitute an endorsement, recommendation, or guarantee of any of the products (hardware or software) tested or the hardware and software used in testing the products. The testing does not guarantee that there are no errors or defects in the products or that the products will meet the reader's expectations, requirements, needs, or specifications, or that they will operate without interruption.

5. This document does not imply any endorsement, sponsorship, affiliation, or verification by or with any organizations mentioned in this report.

6. All trademarks, service marks, and trade names used in this document are the trademarks, service marks, and trade names of their respective owners.