FireEye Email Threat Prevention Cloud Evaluation



Evaluation Prepared for FireEye
June 9, 2015

Tested by ICSA Labs
1000 Bent Creek Blvd., Suite 200
Mechanicsburg, PA 17050
www.icsalabs.com

Table of Contents

Executive Summary .............................................. 1
Introduction ................................................... 1
About ICSA Labs ................................................ 1
FireEye Email Threat Prevention Cloud Overview ................. 1
Test Environment ............................................... 1
Description of Messages in Each Test Set ....................... 2
Test Procedure ................................................. 3
Results and Discussion ......................................... 4
Conclusions .................................................... 6
Appendix A ..................................................... 7
Test Facility Information ...................................... 9
Test Location .................................................. 9
Lab Report Date ................................................ 9

Page i of i
June 9, 2015
© 2015 ICSA Labs. All rights reserved.

Executive Summary

ICSA Labs conducted a test of the FireEye Email Threat Prevention Cloud (ETP) service to evaluate its effectiveness in detecting malware and SPAM messages in email. The testing was conducted over ten consecutive weekdays, beginning March 23, 2015 and ending on April 3, 2015. ICSA Labs used a cloud-based mail relay to forward recently in-the-wild SPAM messages and malcode samples extracted from a live SPAM corpus in order to simulate the deployed use of the product to protect an enterprise.

During the 20-hour testing period, a total of 15,347 messages were delivered to the ETP (13,845 SPAM messages, 1,424 legitimate email messages, and 78 messages with a malcode attachment). The ETP delivered only 76 SPAM messages, for a SPAM detection rate of 99.45%, and only one message with a malcode sample attached (1.28%). Although 9 legitimate messages were blocked (0.63%), none of those were personal email messages, nor was any email message with a legitimate attachment blocked. Every legitimate email message that was blocked was either a newsletter or a mailing list notification.

Introduction

About ICSA Labs

ICSA Labs, an independent division of Verizon, has been providing credible, independent, third-party product assurance for end users and enterprises since 1989. ICSA Labs provides third-party testing and certification of security and health IT products, as well as network-connected devices, to measure product compliance, reliability, and performance for most of the world's top technology vendors.

FireEye Email Threat Prevention Cloud Overview

The FireEye Email Threat Prevention Cloud (ETP) platform provides real-time, dynamic threat prevention without the use of signatures to protect an organization across primary threat vectors, including web, email, and files, and across different stages of an attack lifecycle.
The core of the FireEye platform is a virtual execution engine, complemented by dynamic threat intelligence, to identify and block cyber-attacks in real time.

Test Environment

The test environment for the evaluation consisted of a cloud-based virtual machine running CentOS 6.6 with a custom-built mail relay that connected using ESMTP to the ETP Cloud system IP address provided by FireEye. The CentOS server communicated with a server running on-site at ICSA Labs over a VPN tunnel, enabling real-time access to the SPAM corpus and to the MySQL database used for tracking the delivery and receipt status of every message during each test. From the perspective of ICSA Labs, FireEye provided an ESMTP service listening on port 25 on a specified host, along with HTTPS access to an administrative interface for monitoring the ETP status and reports.

Note that due to the configuration of the test environment, two of the many methods used by the ETP to assist in spam detection could not be leveraged. First, enforcement of Sender Policy Framework (SPF) was disabled because it relies on knowing the original source IP address of an incoming message. The ICSA Labs mail relay forwards each message to the ETP, so the source IP address that the ETP observes is that of the ICSA Labs system, not that of the original sender.

Second, knowledge of the set of valid recipients for the domain being protected could not be obtained by the ETP, because the ICSA Labs next-hop MTA that was receiving messages does not implement recipient validation. FireEye reports that the ETP is designed to learn the addresses of valid users and factor that information into its classification decisions.

Description of Messages in Each Test Set

Each test set is comprised of a mix of the four types of messages described below. Note that prior to its inclusion in the test set, each message is classified as one and only one type.

SPAM (type 1): Unsolicited messages arriving at the ICSA Labs honeypot. Messages of this type should NOT be delivered.

HAM Person-to-Person (type 2): Legitimate person-to-person(s) messages. May or may not have one or more attachments. All attachments for this type are known to be free of malcode. Messages of this type should be delivered.

HAM Subscription (type 3): Legitimate mailing list messages to which the recipient has subscribed. To be included in the test set, the message must be addressed to the recipient, be delivered by a server with the same IP address as has been used for previous mailing list messages, and have the same top-level domain (e.g., govdelivery.com). If a marketing campaign message met these requirements, it would be included in the test set as HAM Subscription. Messages of this type should be delivered.

Malcode (type 4): Legitimate person-to-person(s) messages with a malcode sample attached. Even though the message body is harmless, messages of this type should NOT be delivered because the attachment is malicious.

In terms of the SPAM used, ICSA Labs performs automated analysis on each SPAM message received in our honeypot. All of the SPAM messages are stored as files, and the identifying characteristics (metadata) for each stored SPAM message are kept in a database. The same is done for all mailing list e-mail messages received into the honeypot.
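The one-type-per-message classification and its expected outcome can be sketched as a small lookup; the type names and function below are our own, not taken from the test tooling:

```python
# Sketch of the four test-set message types and the expected disposition
# for each; names are illustrative, not from the ICSA Labs harness.
EXPECTED_DISPOSITION = {
    "SPAM": "block",                    # type 1: unsolicited honeypot mail
    "HAM_PERSON_TO_PERSON": "deliver",  # type 2: legitimate personal mail
    "HAM_SUBSCRIPTION": "deliver",      # type 3: subscribed mailing-list mail
    "MALCODE": "block",                 # type 4: clean body, malicious attachment
}

def should_deliver(message_type: str) -> bool:
    """True when a correctly behaving filter should pass the message through."""
    return EXPECTED_DISPOSITION[message_type] == "deliver"
```

In these terms, a delivered type-1 message counts against the SPAM detection rate, while a blocked type-2 or type-3 message counts toward the false-positive rate.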
Unlike legitimate mailing list and spam messages, legitimate person-to-person(s) messages are created when and as needed for testing, as explained later. ICSA Labs controlled the makeup and composition of the messages in each test set as shown below. The values were chosen to reflect conditions encountered by deployed enterprise anti-spam solutions.

SPAM (type 1) to non-SPAM (types 2, 3, or 4): 90:10

Of the 10% non-SPAM, Person-to-Person(s) (type 2) vs. Mailing List (type 3) vs. Malcode (type 4): 80:15:5

For the HAM Person-to-Person(s) messages (type 2):
- 25% had a clean attachment
- 40% Text, 40% HTML, 20% Text+HTML
- 80% were to 1 recipient; 20% were to between 2 and 10 recipients
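As a sketch, the 90:10 and 80:15:5 ratios above translate into per-type weights for drawing a test set; the function and type names are assumptions of ours, not the controller program's actual code:

```python
import random

# Per-type weights implied by the ratios above: 90% SPAM, and the 10%
# non-SPAM split 80:15:5 across person-to-person, subscription, malcode.
WEIGHTS = {
    "SPAM": 0.90,
    "HAM_PERSON_TO_PERSON": 0.10 * 0.80,  # 8% of all messages
    "HAM_SUBSCRIPTION": 0.10 * 0.15,      # 1.5% of all messages
    "MALCODE": 0.10 * 0.05,               # 0.5% of all messages
}

def build_message_order(n: int, rng: random.Random) -> list:
    """Draw an ordered list of n message types matching the target mix."""
    types = list(WEIGHTS)
    return rng.choices(types, weights=[WEIGHTS[t] for t in types], k=n)

order = build_message_order(10_000, random.Random(42))
spam_share = order.count("SPAM") / len(order)
```

Over a large draw, the SPAM share converges on the configured 90%, while any short prefix of the list can look like the "seven SPAM, then one HAM" example given in the Test Procedure section.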

When a spam message or legitimate mailing list message was needed, the controller program contacted the ICSA Labs SPAM and HAM e-mail database and requested the most recent message that arrived in the ICSA Labs honeypot matching the desired type. By the time a SPAM or legitimate mailing list message is sent to the ETP, the e-mail is typically less than one second old.

Legitimate person-to-person(s) messages were assembled from a corpus of component parts: sender, addressee(s), subject, message type (HTML, text, or both), body, attachment(s), and closing. In constructing a message, the controller program chose the parts at random to eliminate repeats within that test run while following the ratios and percentages presented in the previous table. When the controller program determined that a legitimate person-to-person(s) message was next to be sent, the message was created and sent. A Received header with an appropriate time stamp was created, making it appear as though the message had just arrived in the ICSA Labs honeypot.

When a malcode sample was needed, the controller program first assembled a legitimate person-to-person(s) message and then added an attachment from the ICSA Labs collection of malicious email attachments. The collection consisted of attachments that had been extracted from recently arrived messages in the ICSA Labs SPAM corpus. The file name and MD5 digest for each attached sample were provided to FireEye privately along with this report.

Test Procedure

A total of twenty hours of testing was split into two 10-hour blocks to enable comparison of the two different ETP configurations described in the following section. Testing was conducted over ten consecutive weekdays as described below.
Week 1: March 23-27, 2015
Week 2: March 30 - April 3, 2015

Weekday     Test Start   Test End    (all times GMT -0400)
Monday      10:00        12:00
Tuesday     12:00        14:00
Wednesday   14:00        16:00       (week 2: 18:00-20:00)
Thursday    18:00        20:00       (week 2: 14:00-16:00)
Friday      08:00        10:00

Note that the test sessions were intended to run during the same time window each week for a given weekday in order to reduce the differences introduced by the continually changing composition of SPAM messages on the Internet. However, due to an unexpected issue, the times for the test sessions on Wednesday and Thursday in the second week were swapped.

For each 2-hour session, the control program began by verifying that messages were arriving at the honeypot and that the connection to the backend server was available. The control program then generated an ordered list of the types of messages for the current test to match the configured SPAM-to-HAM-to-malcode ratio. For example, the list might begin with seven SPAM, then one person-to-person HAM, then five more SPAM, then a message with a malcode attachment, and so on. The program obtained the message with the most recent arrival time that matched the type prescribed by the first entry in the list and relayed the message to the ETP using ESMTP. Once delivery to the ETP was confirmed, the program documented the result in the SPAM and HAM e-mail database and moved on to the next message type from the ordered list. This process was repeated until the time limit set for the test had been reached.

Independent of the sending control program, an MTA was listening on port 2525 to receive whatever messages the ETP attempted to deliver. The ETP delivered each message using standard ESMTP.

The receiving MTA looked up each message in the SPAM and HAM database to determine its classification and updated its delivery status. If a message was supposed to have been blocked, for example because it was SPAM or malcode, a copy of the delivered message was saved to the results directory to assist with any subsequent analysis. The ETP administrative interface was reviewed to determine how it classified the messages in the test session (i.e., Spam, Advanced Threat, or Virus). The Policy Violation functionality was not evaluated. Two of the many console screenshots taken during the audit are included in Appendix A below.

Results and Discussion

The table below summarizes the results of each two-hour testing session separately, along with the combined results for each week of testing and for the overall engagement. The definitions for the headings SD and HFP appear below the table.

Date    Total   SPAM    SPAM    SD*      HAM    HAM    HFP*   Malcode  Malcode
        msgs    sent    dlvd    (%)      sent   blkd   (%)    sent     dlvd
3/23    1519    1365    8       99.41    146    4      2.74   8        0
3/24    1544    1419    4       99.72    121    0      0.00   4        0
3/25    1331    1196    3       99.75    130    0      0.00   5        0
3/26    1580    1406    9       99.36    168    3      1.79   6        1
3/27    1183    1052    8       99.24    123    1      0.81   8        0
Wk 1    7157    6438    32      99.50    688    8      1.16   31       1
3/30    1811    1634    5       99.69    166    0      0.00   11       0
3/31    1569    1418    5       99.65    145    0      0.00   6        0
4/1     1818    1661    19      98.86    144    1      0.69   13       0
4/2     1696    1522    15      99.01    163    0      0.00   11       0
4/3     1296    1172    0       100.00   118    0      0.00   6        0
Wk 2    8190    7407    44      99.41    736    1      0.14   47       0
Total   15347   13845   76      99.45    1424   9      0.63   78       1

*SPAM Detection (SD): the percentage of SPAM e-mail messages that the ETP blocked (i.e., did not attempt to deliver).
*HAM False Positive (HFP): the percentage of legitimate e-mail messages blocked, dropped, or otherwise not delivered by the ETP.

During the first week of testing, the SPAM detection rate was 99.50%, eight HAM messages were blocked by the ETP, and one message with a malcode sample attached was delivered. During the second week of testing, the SPAM detection rate was 99.41%, and only one HAM message was blocked by the ETP.
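The SD and HFP definitions reduce to two one-line calculations; applying them to the overall counts reproduces the headline figures:

```python
def spam_detection(spam_sent: int, spam_delivered: int) -> float:
    """SD: percentage of SPAM that the ETP blocked (did not deliver)."""
    return 100.0 * (spam_sent - spam_delivered) / spam_sent

def ham_false_positive(ham_sent: int, ham_blocked: int) -> float:
    """HFP: percentage of legitimate mail blocked or otherwise not delivered."""
    return 100.0 * ham_blocked / ham_sent

# Overall engagement counts: 13,845 SPAM sent with 76 delivered,
# and 1,424 HAM sent with 9 blocked.
sd = round(spam_detection(13845, 76), 2)     # 99.45
hfp = round(ham_false_positive(1424, 9), 2)  # 0.63
```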
Note that FireEye reported that the only configuration change between the first and second weeks of testing was that a bulk email flag was turned off. The bulk email flag tunes how aggressively the ETP marks some messages as SPAM, particularly marketing mailing list messages that a user may not have knowingly opted in to receive. These messages may be seen as SPAM by some users but not others, so FireEye provides a flag to help organizations tune this detection. In this test, the bulk email flag was on for the first week (more aggressive SPAM tuning) and turned off for the second week (more relaxed SPAM tuning).

During the two-week testing period, none of the blocked HAM email was a person-to-person(s) message. In addition, no legitimate email message with a legitimate attachment was blocked. In other words, every personal email in the test set, with or without a clean attachment, was properly classified as not SPAM and promptly delivered to its intended recipient(s). Each blocked message was either a notification containing a post to a mailing list (e.g., nanog or samba) or a Google Alert message (https://www.google.com/alerts).

The ETP detected and blocked 77 out of 78 malcode samples. Each sample was relayed to the ETP as an attachment to an email message. The MD5 digest of the single malcode sample not detected by the ETP was provided privately to FireEye for analysis.

Subsequent analysis of the SPAM corpus revealed that a SPAM email campaign lasting over 150 minutes had begun near the end of the last test session on April 3, 2015. Each of the 1261 email messages received by ICSA Labs had the same Subject and an attachment with a fixed-length filename made up of random alphanumeric characters and a *.zip extension. As shown in the image below, the ETP changed the categorization of the messages associated with the campaign from "SPAM and Virus" to "Advanced Threat" after receiving just 11 messages in a little over 2 minutes. Both the number of messages and the time to elevate the risk category were very small compared to the respective values observed for the overall campaign. In other words, the ETP recognized the campaign quickly, after analyzing only a small fraction of the total number of messages. Note that categorizing the messages as "Advanced Threat" resulted in email alerts being sent by the ETP to configured administrative users.
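The campaign behavior described above can be illustrated, purely schematically, by grouping messages on the shared traits the report mentions (same Subject, fixed-length random *.zip attachment name). This is a toy sketch of the idea, not FireEye's detection logic: the 8-character filename length and all names below are our own assumptions, with only the 11-message elevation point taken from the observed behavior.

```python
import re
from collections import defaultdict

CAMPAIGN_THRESHOLD = 11  # messages seen before the category was elevated
ZIP_NAME = re.compile(r"^[A-Za-z0-9]{8}\.zip$")  # fixed-length random name (length assumed)

def classify_stream(messages):
    """Label (subject, attachment) pairs, elevating a repeated campaign
    from 'SPAM and Virus' to 'Advanced Threat' once the threshold is hit."""
    seen = defaultdict(int)
    labels = []
    for subject, attachment in messages:
        looks_like_campaign = bool(ZIP_NAME.match(attachment))
        key = (subject, looks_like_campaign)
        seen[key] += 1
        if looks_like_campaign and seen[key] >= CAMPAIGN_THRESHOLD:
            labels.append("Advanced Threat")
        else:
            labels.append("SPAM and Virus")
    return labels

# Twelve look-alike messages: the 11th and later cross the threshold.
stream = [("Invoice overdue", "a1b2c3d4.zip")] * 12
labels = classify_stream(stream)
```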

Conclusions

The FireEye Email Threat Prevention Cloud (ETP) service was very effective in detecting malware and SPAM messages in a live feed relayed from the ICSA Labs corpus. SPAM detection effectiveness was measured at 99.45%, and, with the exception of a few newsletters, no false positives were observed.

Appendix A

Screenshot of the Quarantined messages tab in the Email Threat Prevention Cloud administrative interface. A filter has been applied so that only messages classified as Virus or Advanced Threat are listed.

Screenshot of the Dashboard in the Email Threat Prevention Cloud administrative interface, summarizing the current day's activity.

Test Facility Information

This report is issued by the authority of the Managing Director, ICSA Labs.

Test Location

ICSA Labs
1000 Bent Creek Blvd
Mechanicsburg, PA 17050

Lab Report Date

June 9, 2015