TEST METHODOLOGY

Secure Web Gateway (SWG)

v1.5.1
Table of Contents

1 Introduction
  1.1 The Need for Secure Web Gateways
  1.2 About This Test Methodology
  1.3 Inclusion Criteria
  1.4 Deployment
2 Product Guidance
  2.1 Recommended
  2.2 Neutral
  2.3 Caution
3 Security Effectiveness
  3.1 False Positive Testing
  3.2 Exploit Testing
  3.3 Live Stack Testing
  3.4 Application Awareness
  3.5 User/Group ID Awareness
    3.5.1 Users Defined via DUT Integration with Active Directory
    3.5.2 Users Defined in DUT Database
  3.6 URL Filtering
    3.6.1 URL Obfuscation
  3.7 Evasion
    3.7.1 HTML Obfuscation
    3.7.2 Payload Encoding
    3.7.3 Binary Obfuscation
    3.7.4 Layered Evasions
4 Performance
  4.1 Maximum Capacity
    4.1.1 Maximum HTTP Connections per Second
    4.1.2 Maximum HTTP Transactions per Second
  4.2 HTTP Capacity with No Transaction Delays
    4.2.1 44 KB HTTP Response Size, 2,500 Connections per Second
    4.2.2 21 KB HTTP Response Size, 5,000 Connections per Second
    4.2.3 10 KB HTTP Response Size, 10,000 Connections per Second
    4.2.4 4.5 KB HTTP Response Size, 20,000 Connections per Second
    4.2.5 1.7 KB HTTP Response Size, 40,000 Connections per Second
  4.3 HTTP Connections per Second and Capacity (with Delays)
    4.3.1 21 KB HTTP Response with Delay
    4.3.2 10 KB HTTP Response with Delay
  4.4 Application Average Response Time: HTTP
5 Stability and Reliability
  5.1 Passing Legitimate Traffic under Extended Attack
  5.2 Protocol Fuzzing and Mutation
  5.3 Persistence of Data
6 Total Cost of Ownership and Value
Appendix A: Change Log
Contact Information
1 Introduction

1.1 The Need for Secure Web Gateways

Modern cyber campaigns frequently focus on attacking users through the most common application protocols and the most widely used applications. HTTP (including HTTPS) and web browsers have expanded the threat landscape, allowing cybercriminals to exploit more enterprises via socially engineered malware, exploits, and social exploits. These malicious attacks leverage not only flaws in the web browser, but also flaws in common business applications such as Java, Adobe Reader, and Microsoft Office. In turn, enterprises must evolve their network defenses to provide enhanced protection. Secure web gateways (SWGs) are designed to detect and block attacks that target vulnerable systems and applications by monitoring web traffic for malicious content.

1.2 About This Test Methodology

NSS Labs test reports are designed to address the challenges faced by IT professionals in selecting and managing security products. The scope of this particular report includes:

- Security effectiveness
- Performance
- Stability and reliability
- Total cost of ownership (TCO)

Based on the needs identified in NSS research, the following capabilities are considered essential in any SWG device:

- Detection and blocking of web-based (HTTP and HTTPS) attacks, including malware and exploits
- Anti-malware capabilities (the ability to detect and prevent drive-by exploits as well as traditional malware downloads). Techniques can include, but are not limited to, signature matching, heuristics, URL/site reputation, file reputation, and real-time code and/or activity analysis.
- Application awareness (the ability to identify and restrict common applications such as Facebook, Skype, or web browsers, even when operating over common ports and protocols such as HTTP/HTTPS)
- User/group awareness (the ability to identify specific users and/or groups and apply security policy appropriately)
- Reputation awareness
- URL categorization and blocking
- Resistance to known evasion techniques
- Adequate performance
- Resilience and stability under all traffic conditions
- Centralized enterprise management capabilities

The detection of outbound malicious traffic, such as command-and-control traffic from infected hosts, is an increasingly common feature in SWG devices. NSS will not be testing this capability, but vendors should consider the inclusion of such capabilities in their products.
1.3 Inclusion Criteria

In order to encourage the greatest participation and to allay any potential concerns of bias, NSS invites all security vendors claiming SWG capabilities to submit their products at no cost. Vendors with major market share as well as challengers with new technology will be included.

1.4 Deployment

SWG products should be deployed either in-line or via the web cache communication protocol (WCCP). Products should be supplied with the appropriate number of physical interfaces capable of achieving the required level of connectivity and performance (i.e., a minimum of one port pair per Gigabit of throughput, or one port pair per 10 Gbps of throughput). Products are subject to a minimum of one port pair per Gigabit of throughput; thus, an 8 Gbps device with only four port pairs will be limited to 4 Gbps. The minimum number of port pairs will be connected to support the claimed maximum bandwidth of the solution being tested (thus, an 8 Gbps SWG with ten port pairs will have eight 1 Gbps connections tested).

At this time, only devices that offer in-line deployment scenarios or WCCP will be tested. Future revisions will include devices that support explicit proxy deployment (i.e., requiring browser proxy settings to be configured).
2 Product Guidance

NSS issues summary product guidance based on evaluation criteria that are important to information security professionals. Key evaluation criteria are as follows:

- Security effectiveness
- Resistance to evasion
- Stability
- Performance
- Value

Products are listed in rank order according to their guidance ratings.

2.1 Recommended

A Recommended rating from NSS indicates that a product has performed well and deserves strong consideration. Only the top technical products earn a Recommended rating from NSS, regardless of market share, company size, or brand recognition.

2.2 Neutral

A Neutral rating from NSS indicates that a product has performed reasonably well and should continue to be used if it is the incumbent within an organization. Products that earn a Neutral rating from NSS deserve consideration during the purchasing process.

2.3 Caution

A Caution rating from NSS indicates that a product has performed poorly. Organizations using one of these products should review their security postures and other threat prevention factors, including possible alternative configurations and replacement. Products that earn a Caution rating from NSS should not be shortlisted or renewed.
3 Security Effectiveness

This section verifies that the SWG product can detect and block web-based attacks effectively. The device under test (DUT) will be configured with the most suitable security policy for a typical enterprise deployment, as determined by the vendor's software engineers. The security policy may incorporate any or all of the available protection mechanisms offered by the SWG, including reputation. Once testing begins, the product version and configuration will be frozen to preserve the integrity of the test.

Security effectiveness testing is conducted using a combination of the extensive NSS exploit library and NSS' unique live network stack test environment. Throughout security effectiveness testing, NSS engineers validate that legitimate traffic is still allowed and is not inadvertently blocked by the SWG while it is providing protection against malware and exploits. The objective of the network stack test is to determine the amount of protection provided by each SWG against socially engineered malware, exploits, and social exploits.

3.1 False Positive Testing

The ability of the DUT to identify and allow legitimate traffic while maintaining protection against threats and exploits is as important as its ability to provide protection against malicious content. This test will include a varied sample of legitimate live application traffic that should be correctly identified and permitted to pass through the device.

After completion of the false positive test and prior to security effectiveness testing, all signatures that are found to cause false positive alerts are disabled within the security policy. Note that the false positive test will be repeated at random intervals throughout the test in order to ensure that subsequent updates to the security policy on the DUT do not adversely affect legitimate traffic.
3.2 Exploit Testing

The latest signature pack is acquired from the vendor (where applicable), and all signatures used must be available to the general public at the time of testing. The vendor is permitted to tune the DUT to deploy the most appropriate security policy, as would be expected in a typical enterprise deployment scenario.

This test leverages multiple commercial, open-source, and proprietary tools, and focuses exclusively on client-side (target-initiated) attacks, since this is the threat that the SWG is designed to protect against. With thousands of live exploits, this is the industry's most comprehensive test to date. All of the live exploits and payloads in the NSS exploit test have been validated in the lab such that one or more of the following holds:

- A reverse shell is returned
- A bind shell is opened on the target, allowing the attacker to execute arbitrary commands
- A malicious payload is installed
- A system is rendered unresponsive
3.3 Live Stack Testing

NSS has developed a unique test harness[1] that is designed specifically to determine the efficacy of the DUT while protecting a given stack (typically a combination of operating system, browser, and standard business applications such as Java or Adobe Reader). This test utilizes real threats and attack methods that exist in the wild and are actually being used by cybercriminals and other threat actors, based on attacks collected from the NSS global threat intelligence network. Using this test harness, NSS is able to determine the protection offered by the DUT as a whole, as well as by its subcomponents. Key features include:

- All devices are tested in a way that does not introduce bias. For example, multiple stacks are tested simultaneously; there is verification that the attacks are delivered; and measures are taken to ensure that threat actors do not blacklist the test network.
- Malware and exploits are validated for efficacy.
- Features of the DUT are tested as they are used in the real world.
- In order to provide the most actionable information, testing utilizes actual, live attacks from genuine cybercriminals and other threat actors.
- Scoring is based upon observed results: attack success or failure. For example, was the attacker able to obtain code execution and/or a remote shell? Or, was malware successfully installed on the system?

This monitoring infrastructure, currently spanning 37 countries around the globe (with more countries being added as available), harvests live malware and exploits from the Internet in order to test the efficacy of security products in real time. This sophisticated research and test infrastructure allows NSS to monitor, track, and participate in actual cybercrime campaigns run by threat actors. This is a live environment, not a facsimile; the DUT is deployed in front of real victim stacks connected to a live Internet feed.
Each DUT will be exposed to a constant stream of live (real-time) exploits (drive-by exploits, socially engineered malware, and embedded attacks) to determine its effectiveness in detecting and blocking attacks, and in preventing infection of the protected endpoints.

3.4 Application Awareness

This test verifies that the DUT is capable of determining the correct application via deep packet inspection (regardless of the port/protocol used) and taking the appropriate action. Note that WCCP deployments require applications to run over port 80; this capability will be tested. Typical applications include, but are not limited to:

- Popular social networking web sites (web applications)
- Instant messaging (IM)
- Web browsers
- Torrents
- Online gaming

[1] For more information on the NSS Labs Live Testing harness and methodology, please refer to the latest Security Stack: Test Methodology at www.nsslabs.com
3.5 User/Group ID Awareness

This test verifies that the DUT is capable of determining the correct user/group ID and taking the appropriate action.

3.5.1 Users Defined via DUT Integration with Active Directory

The DUT should be able to integrate with Microsoft Active Directory (AD) through a direct connection to the Active Directory server or a vendor-supplied application installed on the AD server. This integration should allow the import of AD users and groups, as well as allow the DUT to log and take action on AD events.

3.5.2 Users Defined in DUT Database

Should integration with Active Directory be unavailable, the DUT is expected to allow local creation of users/groups. In addition, the ability to import thousands of users and dozens of groups is highly desirable.

3.6 URL Filtering

This test will attempt to access multiple URLs through the DUT, including both known-good and known-bad sites. The aim of this test is purely to determine that a URL filtering capability exists within the product and is enabled as part of the security policy.

3.6.1 URL Obfuscation

Although this evasion technique typically is utilized for server-side attacks, it is important for the DUT to be resistant to URL obfuscation in order to prevent users from bypassing its URL blocking capabilities. Random URL encoding techniques are employed to transform simple URLs, which are often used in pattern-matching signatures, into apparently meaningless strings of escape sequences and expanded path characters using a combination of the following techniques:

- Escape encoding (% encoding)
- Microsoft %u encoding
- Path character transformations and expansions ("/./", "//", "\")

These techniques are combined in various ways for each URL tested, ranging from minimal transformation to extreme (every character transformed). All transformed URLs are verified to ensure they still function as expected after transformation.
Additional URL obfuscation techniques tested include:

- URL encoding, levels 1 through 8 (minimal to extreme)
- Premature URL ending
- Long URL
- Fake parameter
- TAB separation
- Case sensitivity
- Windows "\" delimiter
- Session splicing
- Any combination of the above methods
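As an illustrative sketch (hypothetical helper names, not part of the NSS test harness), the escape-encoding and path-expansion transformations described above can be reproduced in a few lines of Python. A compliant web server normalizes both forms back to the same resource, which is what lets obfuscated URLs slip past naive pattern-matching signatures:

```python
import random
from urllib.parse import unquote

def percent_encode(path: str, fraction: float = 1.0, seed: int = 0) -> str:
    """Escape-encode a random fraction of the path's characters.
    fraction=0.1 approximates a "minimal" level; fraction=1.0 (every
    character transformed) corresponds to the "extreme" level."""
    rng = random.Random(seed)
    return "".join(
        "%{:02X}".format(ord(ch)) if ch not in "/%" and rng.random() < fraction else ch
        for ch in path
    )

def expand_path(path: str) -> str:
    """Insert redundant "/./" self-references between path segments."""
    return "/" + "/./".join(path.strip("/").split("/"))

url = "/cgi-bin/test.cgi"
obfuscated = expand_path(percent_encode(url, fraction=1.0))
# e.g. "/%63%67%69%2D%62%69%6E/./%74%65%73%74%2E%63%67%69"
```

A signature written against the literal string "/cgi-bin/test.cgi" matches neither form, yet the server serves the same resource for all three.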
3.7 Evasion

Attackers can modify basic attacks to evade detection in a number of ways. If a DUT fails to detect a single form of evasion, any exploit can pass through the device, rendering it ineffective. NSS has selected multiple older/common exploits to be used as a baseline for testing; these baseline exploits will be provided to vendors prior to testing to ensure that protection is available within the DUT. The exploits are initially executed across the DUT to ensure that they are detected in their unmodified state. NSS then verifies that the DUT is capable of detecting and blocking the same exploits subjected to common evasion techniques. Wherever possible, the DUT is expected to successfully decode the obfuscated traffic and provide an accurate alert relating to the original exploit, rather than alerting purely on anomalous traffic detected as a result of the evasion technique itself.

None of the exploits that were used in section 3.2 will be used as evasion baselines. This ensures that vendors are not provided with information on the content of any part of the main NSS exploit library in advance of the test. It is a requirement of the test that the DUT submitted have all evasion detection options enabled by default in the shipping product.

3.7.1 HTML Obfuscation

Recognizing malicious HTML documents is important in order to protect the enterprise effectively. Malicious HTML documents exploit flaws in common web browsers, browser plug-ins, and add-ons to gain control of the client system and silently install malware such as Trojans, rootkits, and key loggers. Many security products use simple pattern-matching systems with very little semantic or syntactic understanding of the data they are analyzing. This leaves them vulnerable to evasion through the use of redundant, but equivalent, alternative representations of malicious documents. This test suite uses a number of malicious HTML documents that are transferred from server to client through the DUT.
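The obfuscations enumerated below (charset re-encodings, chunked transfer coding, compression, base-64) are all mechanical transformations that leave the rendered document unchanged. A hedged Python sketch of three of them (illustrative only; the chunked() helper is a hypothetical name, not a real test tool):

```python
import base64
import gzip
import zlib

html = b"<html><body><script>payload()</script></body></html>"

# Content-Encoding: gzip / deflate -- the browser transparently
# decompresses, but a gateway matching signatures against the
# plaintext sees only compressed bytes.
gzipped = gzip.compress(html)
deflated = zlib.compress(html)

# charset=utf-16le -- every ASCII byte gains a NUL neighbor,
# breaking naive byte-pattern matches while the browser still
# renders the page when the charset is declared.
utf16 = html.decode("ascii").encode("utf-16-le")

def chunked(body: bytes, size: int = 8) -> bytes:
    """Serialize body with HTTP/1.1 chunked transfer coding using a
    fixed chunk size (hex length line, CRLF-delimited chunks)."""
    out = bytearray()
    for i in range(0, len(body), size):
        chunk = body[i:i + size]
        out += b"%X\r\n" % len(chunk) + chunk + b"\r\n"
    return bytes(out + b"0\r\n\r\n")

chunked_html = chunked(html)
b64_html = base64.b64encode(html)  # e.g. carried inside a data: URI
```

In each case the byte stream on the wire shares no contiguous signature-length substring with the original document, which is why the methodology requires the DUT to decode these layers before matching.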
Each malicious HTML document is served with a different form of obfuscation, as follows:

- UTF-16 and UTF-32 character set encoding (big-endian)
- UTF-16 and UTF-32 character set encoding (little-endian)
- UTF-7 character set encoding
- Chunked encoding (random chunk size, fixed 8-byte chunk size, or chaffing/arbitrary numbers inserted between chunks)
- Compression (deflate)
- Compression (gzip)
- Base-64 character set encoding, with or without bit shifting or chaffing
- Any combination of the above methods

For each of the above, it is verified that a standard web browser (such as Internet Explorer) is capable of rendering the results of the evasion.

3.7.2 Payload Encoding

This test attempts to confuse the DUT into allowing an otherwise blocked exploit to pass, using various encoding options, including but not limited to:
- x86/call4_dword_xor: implements a Call+4 Dword XOR encoder.
- x86/countdown: uses the length of the payload as a position-dependent encoder key to produce a small decoder stub.
- x86/fnstenv_mov: uses a variable-length mov-equivalent instruction with fnstenv for getip.
- x86/jmp_call_additive: implements a Jump/Call XOR additive feedback encoder.
- x86/shikata_ga_nai: implements a polymorphic XOR additive feedback encoder. The decoder stub is generated based on dynamic instruction substitution and dynamic block ordering. Registers are also selected dynamically.

3.7.3 Binary Obfuscation

Malware authors use the following techniques to evade detection:

- Packers/compressors/crypters
- Encoding
- Polymorphism
- Metamorphism
- File type manipulation

3.7.4 Layered Evasions

This test attempts to bypass the DUT by performing any legitimate combination of the evasion techniques specified in section 3.7. It will be verified that the target machine's standard network stack is capable of decoding the evasion correctly while maintaining exploit viability.
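To see why the XOR additive feedback family of encoders listed in section 3.7.2 defeats static signatures, consider a byte-wide toy version of the scheme (a simplified illustration of the idea behind x86/shikata_ga_nai, which actually operates on dwords with a polymorphic decoder stub; this sketch is not Metasploit code):

```python
def xor_feedback_encode(payload: bytes, key: int) -> bytes:
    """Toy XOR additive-feedback encoder: each byte is XORed with a
    running key, and the key is then updated by adding the ciphertext
    byte, so every position sees a different effective key."""
    out = bytearray()
    k = key & 0xFF
    for b in payload:
        c = b ^ k
        out.append(c)
        k = (k + c) & 0xFF
    return bytes(out)

def xor_feedback_decode(encoded: bytes, key: int) -> bytes:
    """Mirror of the encoder; in a real exploit this logic lives in a
    small decoder stub prepended to the encoded payload."""
    out = bytearray()
    k = key & 0xFF
    for c in encoded:
        out.append(c ^ k)
        k = (k + c) & 0xFF
    return bytes(out)

shellcode = b"\x90\x90\xcc"  # placeholder bytes, not a real payload
# Different keys yield entirely different ciphertexts for the same
# payload, which is what breaks simple pattern matching.
enc_a = xor_feedback_encode(shellcode, 0x21)
enc_b = xor_feedback_encode(shellcode, 0x42)
```

Because the key evolves with the ciphertext, no fixed byte sequence survives across encodings, so the DUT must either emulate the decoder stub or detect the stub itself.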
4 Performance

This section measures the performance of the DUT using traffic conditions that provide metrics for real-world performance. These quantitative metrics provide an indication of the suitability of a solution for a given environment. Note that performance will be tested both with and without caching enabled.

4.1 Maximum Capacity

The use of traffic generation tools allows NSS engineers to create true real-world traffic at multi-gigabit speeds as a background load for the tests. The aim of these tests is to stress the inspection engine and determine how it handles high volumes of HTTP connections per second, application layer transactions per second, and concurrent open connections. All packets contain valid payload and address data, and these tests provide an excellent representation of a live network at various connection/transaction rates.

Note that in all tests, the following critical breaking points, where the final measurements are taken, are used:

- Excessive concurrent HTTP connections
- Unacceptable increase in open connections on the server side
- Excessive response time for HTTP transactions
- Excessive delays and increased response time to the client
- Unsuccessful HTTP transactions. Normally, there should be zero unsuccessful transactions; once these appear, it is an indication that excessive latency is causing connections to time out.

4.1.1 Maximum HTTP Connections per Second

This test is designed to determine the maximum TCP connection rate of the DUT with a 1-byte HTTP response size. The response size defines the number of bytes contained in the body, excluding any bytes associated with the HTTP header. A 1-byte response size is designed to provide a theoretical maximum HTTP connections per second rate. Client and server use HTTP 1.0 without keep-alive, and the client will open a TCP connection, send one HTTP request, and close the connection.
This ensures that all TCP connections are closed immediately upon the request being satisfied; thus, any concurrent TCP connections will be caused strictly by the latency of the DUT. Load is increased until one or more of the breaking points defined earlier is reached.

4.1.2 Maximum HTTP Transactions per Second

This test is designed to determine the maximum HTTP transaction rate of the DUT with a 1-byte HTTP response size. The response size defines the number of bytes contained in the body, excluding any bytes associated with the HTTP header. A 1-byte response size is designed to provide a theoretical maximum HTTP transactions per second rate. Client and server use HTTP 1.1 with keep-alive, and the client will open a TCP connection, send 10 HTTP requests, and close the connection. This ensures that TCP connections remain open until all 10 HTTP transactions are complete, thus eliminating the maximum connections per second rate as a bottleneck (1 TCP connection = 10 HTTP transactions). Load is increased until one or more of the breaking points defined earlier is reached.
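The two client behaviors described in sections 4.1.1 and 4.1.2 can be sketched with a minimal Python client against a throwaway local server. This is only an illustration of the traffic shape (one transaction per connection vs. ten transactions per keep-alive connection), not the NSS load generator:

```python
import socket
import threading
from http.client import HTTPConnection
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Throwaway local server returning the 1-byte body used in the
# theoretical-maximum tests (a stand-in for the real test harness).
class OneByteHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep-alive capable

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "1")
        self.end_headers()
        self.wfile.write(b"x")

    def log_message(self, *args):  # keep output quiet
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), OneByteHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

def http10_transaction() -> bytes:
    """4.1.1 behavior: HTTP/1.0 without keep-alive -- one connection
    buys exactly one transaction, so transaction rate == connection rate."""
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(b"GET / HTTP/1.0\r\nHost: test\r\n\r\n")
        data = b""
        while chunk := s.recv(4096):  # server closes after responding
            data += chunk
        return data

def http11_transactions(n: int = 10) -> int:
    """4.1.2 behavior: HTTP/1.1 keep-alive -- n transactions reuse one
    TCP connection, so transaction rate = n x connection rate."""
    conn = HTTPConnection("127.0.0.1", port)
    done = 0
    for _ in range(n):
        conn.request("GET", "/")
        if conn.getresponse().read() == b"x":
            done += 1
    conn.close()
    return done

resp = http10_transaction()
completed = http11_transactions(10)
server.shutdown()
```

Scaled up by the load generator, the second pattern removes the TCP handshake as the bottleneck, which is why 4.1.2 measures the inspection engine's transaction rate rather than its connection rate.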
4.2 HTTP Capacity with No Transaction Delays

The aim of these tests is to stress the HTTP detection engine and determine how the DUT copes with network loads of varying average packet size and varying connections per second. By creating genuine session-based traffic with varying session lengths, the DUT is forced to track valid TCP sessions, thus resulting in a higher workload than for simple packet-based background traffic. Each transaction consists of a single HTTP GET request, and there are no transaction delays (i.e., the web server responds immediately to all requests). All packets contain valid payload (a mix of binary and ASCII objects) and address data, and this test provides an excellent representation of a live network (albeit one biased towards HTTP traffic) at various network loads.

  HTTP response size            44 KB    21 KB    10 KB    4.5 KB   1.7 KB
  Connections per second (CPS)  2,500    5,000    10,000   20,000   40,000
  Throughput (Mbps)             1,000    1,000    1,000    1,000    1,000

4.2.1 44 KB HTTP Response Size, 2,500 Connections per Second

- Maximum 2,500 new connections per second per Gigabit of traffic
- 44 KB HTTP response size
- Average packet size: 900 bytes
- Maximum 140,000 packets per second per Gigabit of traffic

With relatively low connection rates and large packet sizes, all DUTs should be capable of performing well throughout this test.

4.2.2 21 KB HTTP Response Size, 5,000 Connections per Second

- Maximum 5,000 new connections per second per Gigabit of traffic
- 21 KB HTTP response size
- Average packet size: 670 bytes
- Maximum 185,000 packets per second per Gigabit of traffic

With average connection rates and average packet sizes, this is a good approximation of a real-world production network, and all DUTs should be capable of performing well throughout this test.
4.2.3 10 KB HTTP Response Size, 10,000 Connections per Second

- Maximum 10,000 new connections per second per Gigabit of traffic
- 10 KB HTTP response size
- Average packet size: 550 bytes
- Maximum 225,000 packets per second per Gigabit of traffic

With smaller packet sizes coupled with high connection rates, this represents a highly utilized production network.

4.2.4 4.5 KB HTTP Response Size, 20,000 Connections per Second

- Maximum 20,000 new connections per second per Gigabit of traffic
- 4.5 KB HTTP response size
- Average packet size: 420 bytes
- Maximum 300,000 packets per second per Gigabit of traffic

With small packet sizes and extremely high connection rates, this is an extreme test for any DUT.

4.2.5 1.7 KB HTTP Response Size, 40,000 Connections per Second

- Maximum 40,000 new connections per second per Gigabit of traffic
- 1.7 KB HTTP response size
- Average packet size: 270 bytes
- Maximum 445,000 packets per second per Gigabit of traffic

With small packet sizes and extremely high connection rates, this is an extreme test for any DUT.
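Each of the five load profiles above pairs its connection rate and packet rate so that the wire load stays near 1 Gbps (packet rate times average packet size times 8 bits). The relationship can be checked with a few lines of Python, using the figures from sections 4.2.1 through 4.2.5:

```python
# (response KB, connections/s per Gbps, avg packet bytes, max packets/s per Gbps)
profiles = [
    (44.0, 2_500, 900, 140_000),
    (21.0, 5_000, 670, 185_000),
    (10.0, 10_000, 550, 225_000),
    (4.5, 20_000, 420, 300_000),
    (1.7, 40_000, 270, 445_000),
]

for resp_kb, cps, avg_pkt, pps in profiles:
    # wire load = packets/s x average packet size x 8 bits per byte
    mbps = pps * avg_pkt * 8 / 1_000_000
    print(f"{resp_kb:>4} KB @ {cps:>6,} conn/s -> ~{mbps:,.0f} Mbps")
```

Every profile lands within a few percent of 1,000 Mbps, confirming that the tests vary packet size and connection rate while holding the inspected bandwidth roughly constant.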
4.3 HTTP Connections per Second and Capacity (with Delays)

Typical user behavior introduces delays between requests and responses (e.g., "think time") as users read web pages and decide which links to click next. This group of tests is identical to the previous group except that these include a 5-second delay in the client request for each transaction. This has the effect of maintaining a high number of open connections throughout the test, thus forcing the DUT to utilize additional resources to track those concurrent connections.

4.3.1 21 KB HTTP Response with Delay

- Maximum 5,000 new connections per second per Gigabit of traffic
- 21 KB HTTP response size
- Average packet size: 670 bytes
- Maximum 185,000 packets per second per Gigabit of traffic
- 5-second transaction delay, resulting in an additional 50,000 open connections per Gigabit over the test described in section 4.2.2

With average connection rates and average packet sizes, this is a good approximation of a real-world production network, and all DUTs should be capable of performing well throughout this test.

4.3.2 10 KB HTTP Response with Delay

- Maximum 10,000 new connections per second per Gigabit of traffic
- 10 KB HTTP response size
- Average packet size: 550 bytes
- Maximum 225,000 packets per second per Gigabit of traffic
- 5-second transaction delay, resulting in an additional 100,000 open connections over the test described in section 4.2.3

With large average packet sizes coupled with high connection rates, this represents a heavily used production network and is a strenuous test for any DUT.

4.4 Application Average Response Time: HTTP

Test traffic is passed across the infrastructure switches and through all port pairs of the DUT simultaneously (the latency of the basic infrastructure is known and is constant throughout the tests).
The response time for each response size (44 KB, 21 KB, 10 KB, 4.5 KB, and 1.7 KB HTTP responses) is recorded at a load level of 90% of the maximum throughput as previously determined in section 4.2.
5 Stability and Reliability

Long-term stability is particularly important for an in-line device, where failure can produce network outages. These tests verify the stability of the DUT along with its ability to maintain security effectiveness while under normal load and while passing malicious traffic. DUTs that are not able to sustain legitimate traffic (or that crash) while under hostile attack will not pass.

The DUT is required to remain operational and stable throughout these tests and to block 100% of previously blocked traffic, raising an alert for each occurrence. If any prohibited traffic passes successfully, whether caused by the volume of traffic or by the DUT failing open for any reason, this will result in a FAIL.

5.1 Passing Legitimate Traffic under Extended Attack

The external interface of the device is exposed to a constant stream of exploits over an extended period of time. The DUT is expected to remain operational and stable throughout this test, and to pass most/all of the legitimate traffic. If an excessive amount of legitimate traffic is blocked throughout this test, whether caused by the volume of traffic or by the DUT failing for any reason, this will result in a FAIL.

5.2 Protocol Fuzzing and Mutation

This test stresses the protocol stacks of the DUT by exposing it to traffic from various protocol randomizer and mutation tools. Several of the tools in this category are based on the ISIC test suite and other well-known test tools/suites. Traffic load is a maximum of 350 Mbps and 60,000 packets per second (average packet size is 690 bytes). Results are presented as a simple PASS/FAIL; the DUT is expected to remain operational and capable of detecting and logging exploits throughout the test. Note that this test does not apply to WCCP devices, since they only scan traffic on port 80.

5.3 Persistence of Data

The DUT should retain all configuration data, policy data, and locally logged data once restored to operation following a power failure.
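The ISIC-style mutation described in section 5.2 can be approximated by flipping random bits in otherwise valid packets, producing traffic that is mostly well-formed but violates the protocol in unpredictable ways. A toy, pure-Python illustration operating on a fake header buffer rather than live traffic (the mutate() helper is a hypothetical name, not one of the actual test tools):

```python
import random

def mutate(packet: bytes, flips: int, seed: int = 0) -> bytes:
    """Flip `flips` random bits in a packet, ISIC-style: the result
    keeps the original length and most of its structure, but breaks
    the protocol at unpredictable offsets."""
    rng = random.Random(seed)
    buf = bytearray(packet)
    for _ in range(flips):
        i = rng.randrange(len(buf))
        buf[i] ^= 1 << rng.randrange(8)
    return bytes(buf)

# A fake 20-byte header standing in for a captured IP header.
base = bytes(range(20))
mutants = [mutate(base, flips=3, seed=s) for s in range(5)]
```

Feeding a high rate of such mutants at a protocol stack exercises error paths that well-formed traffic never reaches, which is where parser crashes and fail-open states tend to hide.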
6 Total Cost of Ownership and Value

Implementation of security solutions can be complex, with several factors affecting the overall cost of deployment, maintenance, and upkeep. All of these should be considered over the course of the useful life of the solution:

- Product purchase: the cost of acquisition
- Product maintenance: the fees paid to the vendor (including software and hardware support, maintenance, and updates)
- Installation: the time required to take the device out of the box, configure it, deploy it in the network, apply updates and patches, perform initial tuning, and set up desired logging and reporting
- Upkeep: the time required to apply periodic updates and patches from vendors, including hardware, software, and firmware updates
Appendix A: Change Log

Version 1.0 (07 March 2014)
- Original document

Version 1.5 (19 May 2014)
- Added static exploit testing
- Added SSL
- Added application awareness
- Added user awareness
- Added URL categorization and blocking

Version 1.5.1 (22 October 2015)
- Removed SSL tests
- Added WCCP
- Removed some sections under Stability and Reliability
Contact Information

NSS Labs, Inc.
206 Wild Basin Road
Building A, Suite 200
Austin, TX 78746 USA
info@nsslabs.com
www.nsslabs.com

This and other related documents are available at: www.nsslabs.com. To receive a licensed copy or report misuse, please contact NSS Labs.

© 2015 NSS Labs, Inc. All rights reserved. No part of this publication may be reproduced, copied/scanned, stored on a retrieval system, e-mailed or otherwise disseminated or transmitted without the express written consent of NSS Labs, Inc. ("us" or "we"). Please read the disclaimer in this box because it contains important information that binds you. If you do not agree to these conditions, you should not read the rest of this report but should instead return the report immediately to us. "You" or "your" means the person who accesses this report and any entity on whose behalf he/she has obtained this report.

1. The information in this report is subject to change by us without notice, and we disclaim any obligation to update it.
2. The information in this report is believed by us to be accurate and reliable at the time of publication, but is not guaranteed. All use of and reliance on this report are at your sole risk. We are not liable or responsible for any damages, losses, or expenses of any nature whatsoever arising from any error or omission in this report.
3. NO WARRANTIES, EXPRESS OR IMPLIED, ARE GIVEN BY US. ALL IMPLIED WARRANTIES, INCLUDING IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT, ARE HEREBY DISCLAIMED AND EXCLUDED BY US. IN NO EVENT SHALL WE BE LIABLE FOR ANY DIRECT, CONSEQUENTIAL, INCIDENTAL, PUNITIVE, EXEMPLARY, OR INDIRECT DAMAGES, OR FOR ANY LOSS OF PROFIT, REVENUE, DATA, COMPUTER PROGRAMS, OR OTHER ASSETS, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
4. This report does not constitute an endorsement, recommendation, or guarantee of any of the products (hardware or software) tested or the hardware and/or software used in testing the products.
The testing does not guarantee that there are no errors or defects in the products or that the products will meet your expectations, requirements, needs, or specifications, or that they will operate without interruption.
5. This report does not imply any endorsement, sponsorship, affiliation, or verification by or with any organizations mentioned in this report.
6. All trademarks, service marks, and trade names used in this report are the trademarks, service marks, and trade names of their respective owners.