Can Data Leakage Prevention Prevent Data Leakage?


Can Data Leakage Prevention Prevent Data Leakage?

Bachelor Thesis
Matthias Luft

First Examiner: Professor Dr. Felix C. Freiling
Second Examiner: Dr. Thorsten Holz
Advisor: Dr. Thorsten Holz

University of Mannheim
Laboratory for Dependable Distributed Systems


Statutory Declaration

I hereby declare that I have written this thesis without the assistance of third parties and using only the cited sources and aids. All passages taken from the sources have been marked as such. This thesis has not been submitted to any examination authority in the same or a similar form.

Mannheim, May 27, 2009
Matthias Luft


Abstract

Data Leakage Prevention (DLP) is the general term for a new approach to avoiding data breaches. To achieve this aim, all currently available implementations perform analyses of intercepted data. These analyses are based on defined policies which describe valuable data. There are different ways both to define these content policies and to intercept data so that the analysis becomes possible. This thesis examines exemplary DLP software to reveal security vulnerabilities in these solutions. Such vulnerabilities can have different impacts: data breaches may still be possible, or new leakage vectors may even be introduced. This review process is an essential step in the life cycle of any new software product or concept: there should be a continuous cycle of test phases and examinations before a solution can be regarded as dependable. The results of the performed tests allow general conclusions about the maturity of current products and reveal basic problems of the concept of Data Leakage Prevention.

Zusammenfassung

Data Leakage Prevention ist der allgemeine Ausdruck für ein neues Konzept, das die Offenlegung von Daten verhindern soll. Um dieses Ziel zu erreichen, führen alle aktuell verfügbaren Implementierungen verschiedene inhaltsbasierte Untersuchungen auf abgefangenen Daten durch. Diese Analysen basieren auf Policies, die schützenswerte Daten beschreiben. Es existieren verschiedene Möglichkeiten, diese Policies zu definieren und Daten abzufangen, um die Analyse zu ermöglichen. Diese Arbeit untersucht exemplarische Lösungen, um enthaltene Sicherheitslücken aufzudecken. Diese Sicherheitslücken können verschiedene Auswirkungen nach sich ziehen. So kann beispielsweise die Offenlegung von Daten nicht wirksam verhindert werden oder es können sogar neue Möglichkeiten für das Auftreten von Datenverlust hinzukommen. Dieser Prüfungsprozess ist ein essentieller Schritt im Lebenszyklus jeder neuen Software oder jedes neuen Konzepts. Bevor eine Lösung als zuverlässig betrachtet werden kann, sollte ein kontinuierlicher Zyklus an Testphasen und Untersuchungen existieren. Das Ergebnis der durchgeführten Tests erlaubt allgemeine Rückschlüsse auf den Reifegrad aktueller Produkte und zeigt prinzipielle Probleme des Konzepts Data Leakage Prevention auf.


Contents

1 Introduction
  1.1 Tasks
  1.2 Outline
  1.3 Results
  1.4 Acknowledgments
2 Definitions And Concepts
  2.1 Data Leakage
  2.2 Safeguarding Confidentiality
    2.2.1 Multilevel Security
    2.2.2 Access Control Lists And Capabilities
    2.2.3 Information Flow
    2.2.4 Hippocratic Databases
    2.2.5 Email Leak Prevention/Bayesian Filter
  2.3 Summary
3 Current DLP Approaches
  3.1 Outline
  3.2 Identify: How To Find Valuable Content
  3.3 Monitor: Controlling Every Channel
  3.4 React: Handling Policy Breaches
  3.5 Current DLP Products
  3.6 Summary
4 Evaluation
  4.1 Testcases
  4.2 McAfee Host Data Loss Prevention
    4.2.1 Environment And Specifications
    4.2.2 Testcases
    4.2.3 Results
  4.3 Websense Data Security Suite
    4.3.1 Environment And Specifications
    4.3.2 Testcases
    4.3.3 Results
  4.4 Summary
5 Abstraction To General Problems And Conclusions
  5.1 General Problems
  5.2 Alternative Solution Statements
  5.3 Conclusions

A Security Goals
B Security Software Vulnerabilities
C Technical documentation
  C.1 Filesystem Forensics
  C.2 Portscanner Results
  C.3 Attacks on SSL
  C.4 Fuzzing
  C.5 Man In The Middle Attacks

List of Figures

1. Architecture of McAfee Host Data Loss Prevention
2. Structure of the test environment
3. Text pattern for matching the string secret
4. The complete reaction rule using the text pattern
5. Copying the secret file to a USB stick
6. The secret file is correctly blocked
7. The blocked file is stored at the ePO repository
8. PDF files with removed leading line are not recognized
9. NTFS alternate data stream is monitored correctly
10. Unmonitored partition
11. Monitored partition
12. Information contained in file names is not monitored
13. Files containing sensitive data are transmitted via SMB in plain text
14. Architecture of the Websense Data Security Suite
15. The file containing the filtered string is blocked
16. A PDF document without its first line is not recognized
17. EXIF comments in images are not recognized
18. There are zero lines containing anything other than zero
19. Additional search for the string SECRET on the empty disc
20. Successful search after the valuable file was deleted by DLP
21. The management server supported insufficient key sizes
22. API hooks without (left) and with (right) installed DLP agent


List of Tables

1. DLP testcases
2. Testcases for McAfee DLP
3. Performed testcases for McAfee Host Data Loss Prevention and Websense Data Security Suite
4. Vulnerabilities in security software


1 Introduction

Data Leakage Prevention (DLP) is the general term for a new technology that focuses on the problem of information leakage. Many of the problems which emerge from security holes concern unauthorized access to data. The impact of such incidents ranges from identity theft to lawsuits over data breach laws. Data Leakage Prevention provides an approach which is intended to prevent data leakage. There are already several products on the market which fulfill the requirements to be called a DLP suite [QP08]. The market for these tools is very new and not yet mature. Every new technology should be reviewed and tested several times before it can be regarded as a dependable solution for the addressed problem.

1.1 Tasks

This thesis covers basic parts of this review process by examining DLP solutions with respect to three main questions:

Is accidental leakage still possible?
Is it possible to subvert the leakage prevention?
Are there any vulnerabilities in the software?

To answer these questions, a set of tests had to be developed. These tests examine which forms of data leakage can be avoided by deploying a DLP suite. Possible tools for this test suite are encryption, obfuscation, and the embedding of information in different data types. To assess the vulnerabilities of the software itself, it is examined whether standardized technologies are used and, where possible, how the application reacts to modern fuzzing techniques. The two exemplary DLP solutions examined are the recently released McAfee Host Data Loss Prevention and the Websense Data Security Suite, one of the leading products. As the examination shows, both solutions contain several flaws which make them fail in important areas. These results allow general problems of the DLP concept to be identified and its capabilities to be assessed.
1.2 Outline

To go through the review process in a structured way, Section 2.1 defines the term data leakage and motivates the need for protection against it. To distinguish DLP from other technologies which deal with access control and related problems, similar approaches are listed and illustrated in Section 2.2. Since the whole area of access control is a very broad field, the range of covered developments is limited to those that can be alternatives to DLP.

This theoretical groundwork allows the comparison of several key characteristics of DLP. These characteristics are represented in different stages during the prevention of data leakage and form the framework for most of the DLP solutions on the market. It is therefore necessary to understand these principles in order to determine whether the evaluated solutions follow these best practices. Section 3 provides an overview of these concepts. The evaluation of the two DLP solutions is based on a set of test cases and requirements which are explained in Section 4.1. A structure is also provided for specifying the test results of the McAfee Host Data Loss Prevention (Section 4.2) and the Websense Data Security Suite (Section 4.3). These results make it possible to answer the initial questions (Section 1.1) about the capabilities and limitations of the DLP concept. Section 5 sums up the gathered results and draws general conclusions on what DLP can be, what its problems are, and what can be done instead.

1.3 Results

As the title suggests, the main topic of this thesis is the question of whether DLP can avoid data breaches. The results of the examination of the DLP solutions allow general conclusions on this question. Both solutions contain severe flaws which either restrict their ability to avoid data loss or even create new possibilities for attackers to retrieve confidential data. For example, the McAfee DLP solution did not properly delete confidential files that were copied to USB devices, and the encryption of the Websense Data Security Suite was not sufficient.

1.4 Acknowledgments

First of all, I would like to thank Professor Dr. Felix C. Freiling and Dr. Thorsten Holz, who gave me the opportunity to write this thesis. Special thanks go to Dr. Thorsten Holz, since he was willing to be both my advisor and second examiner. For the initial idea for this thesis and a lot of helpful contacts, I would like to thank Enno Rey and the ERNW GmbH.
Thanks go also to Michael Thumann from ERNW, who provided the evaluation version of the McAfee DLP solution. For the evaluation version of the Websense Data Security Suite, I want to thank Jörg Kortmann from Websense as well as Marcel Sohn and Rainer Kunze from ConMedias, who provided it in a very fast and unbureaucratic way. Again, thanks go to Dr. Thorsten Holz, Enno Rey, and Phoebe Huber for providing lots of feedback on the structure, expression, and composition of my thesis.

2 Definitions And Concepts

Data leakage is a very general term which is used with a variety of meanings. In the context of DLP, it means a certain loss of data or, more precisely, the loss of confidentiality for data. To examine solutions which prevent data leakage, it is necessary to define the term data leakage and to derive from this understanding which threats exist and must be controlled. Section 2.1 provides a definition of leakage which is used in the remainder of the document. This definition covers several characteristics of data leakage and is derived from examples of popular data breach incidents which occurred within the last year. Even if this selection is very limited, it represents different kinds of data leakage. The provided characteristics capture several typical attributes of data leakage and are summed up in a single definition. This basic groundwork is necessary to understand which threats arise from different types of data leakage and, even more importantly, which controls can be applied to mitigate them. As DLP is only one of these controls, Section 2.2 lists different approaches which also concern the area of access control and can be alternatives to DLP.

2.1 Data Leakage

There are many well-known examples that represent different kinds of data leakage. One of the most popular incidents in 2008 was the sale of a camera on Ebay [Gua08]. This camera contained pictures of terror suspects as well as internal classified documents of MI6, the British intelligence service. This kind of leakage causes a loss of trust in the abilities and trustworthiness of an organization, but does not directly affect anyone outside the organization. Another British organization, the General Teaching Council of England, lost a CD containing the personal data of teachers [BBC08]. The CD was sent to another office via a courier service, but it never arrived. Fortunately, all information was encrypted, so nobody could use the lost data.
At this point, it is necessary to distinguish between data and information. The remainder of this thesis uses the term data when the pure content of any kind of medium or communication channel is meant. In contrast, data becomes information when it can be interpreted to convey some kind of message. Thus, the loss of an encrypted CD means the loss of data only: the encrypted data cannot be interpreted to obtain the information stored on the CD as well. Other examples are even worse because personal data is disclosed. These kinds of incidents can result in identity theft for thousands of people. One of the biggest leakages of this kind in 2008 happened to the German T-Mobile: 17 million customer data sets were stolen through the exploitation of security vulnerabilities in different systems and databases [Spi08]. These three examples are only a vanishingly small fraction of the data breaches that

happen every day. The website privacyrights.org [Cle] focuses on data breaches which affect individual persons. It provides a database of leakage incidents which have disclosed more than 251 million data records of U.S. residents since January 2005, and it only covers incidents inside the USA. Articles like the yearly published 2008 Data Breach Investigations Report [BHV08] sum up the most popular incidents and provide statistics about them. Based on this exemplary selection, some of the characteristics of data leakage can be pointed out. These characteristics are independent of the kind of leakage and of how it occurred. The following items describe how data leakage manifests itself and what effects it has:

unintentional: Data leakage obviously never happens on behalf of the affected organization or person, though it can happen through misuse, accidental mistakes, or malicious activities.

independent of the leakage vector: Information can leak on every channel over which it can be transmitted, so leakage is not restricted to technical channels. Incidental data loss, like the camera sold on Ebay, is mostly based on inconsiderate user behavior which leads to physical leakage.

independent of the leaked information: Since the terms data and information were distinguished above, the term data leakage is more general than information leakage. As the case of the lost CD shows, even the loss of encrypted data, where no valuable information is disclosed, affects the reputation of an organization.

impact: Data leakage always affects the confidentiality of data and thus leads to a loss of confidentiality. As the previous item shows, even the loss of data which cannot be interpreted can lead to a loss of reputation for the affected company. Once data is disclosed, there is no way to undo the breach [JD07]. Nobody can determine how many persons have read the data or even distributed it.
This restricts the response process to controls like changing passwords, informing affected users, and closing security holes. Most of these characteristics concern the confidentiality of data. To classify controls and threats which concern data, there are three classical, primary security goals. The following definitions of these goals are given by the ISO/IEC standard and are also listed in Appendix A.

Confidentiality: The property that information is not made available or disclosed to unauthorized individuals, entities, or processes.

Integrity: The property of safeguarding the accuracy and completeness of assets.

Availability: The property of being accessible and usable upon demand by an authorized entity.

Based on these presumptions and examples, the following very general definition of data leakage can be derived:

Data Leakage is, from the owner's point of view, the unintentional loss of confidentiality for any kind of data.

Using this definition, it is necessary to define the term unintentional in more detail. As the definition states, data leakage is unintentional from the owner's point of view. From the perspective of an industrial spy, data leakage can of course be intentional. The following items therefore define different ways in which leakage can occur. Still, the characteristics explained above apply to every leakage incident, no matter which of the following scenarios happened:

unconscious: There are many situations in which people do not recognize that their behavior can lead to data leakage. Most people do not know that files on any kind of data storage are easily recoverable when they are merely deleted using standard commands. A file which is accidentally copied to a USB stick can thus lead to leakage even when the user recognizes his mistake and deletes it using the wrong tools. The person is not aware of the possible leakage; he rather believes he is acting absolutely correctly and avoiding the leakage.

unintentional: Unintentional leakage is quite similar to unconscious leakage. The main difference is that the leakage is recognized, but there is no possibility to stop or revoke the action, e.g., an email sent to an unintended recipient due to a typo.

malicious: Attackers and malware are still two of the biggest problems in the digital world. Since most pieces of malware are spread to gather more or less well-defined pieces of information, e.g.
every kind of private data such as online banking accounts [HEF08], it is obvious that any kind of malicious activity can lead to data leakage.

third party: Data leakage can also happen without any mistake by the information owner. The example of the lost CD shows that even the

mistakes of business partners or service providers can lead to a loss of data and thus of reputation. In such cases, the blame would include the careless choice of partners.

careless: Even if policies are deployed which describe the correct handling of sensitive information, users will disregard these rules. For example, in one third of the internally caused incidents in the 2009 Data Breach Investigations Report, careless end-users were the culprits [BHV09]. If a policy specifies that valuable information must be encrypted when sent to an external business partner, users will send it unencrypted when they are in a big hurry. Another point is a lack of awareness: data leakage has never happened to them, so surely nothing will ever occur.

The remainder of this thesis will use the defined characteristics as well as the different ways in which data leakage can occur. They will also influence the development of appropriate test cases for DLP solutions in Section 4.1.

2.2 Safeguarding Confidentiality

The loss of confidentiality is a common problem when dealing with information security. There are many approaches which follow a variety of ways to safeguard the confidentiality of data. Since they all use different technical controls and basic ideas, an overview is needed to differentiate them from the concept of DLP.

2.2.1 Multilevel Security

Multilevel Security (MLS) systems were first developed in the seventies to map military access controls to digital systems. The oldest concepts are the Bell-LaPadula model [BL76] and the Biba model [Bib77], which protect confidentiality and integrity, respectively. MLS systems must be distinguished from other approaches which also deal with access control. For example, the more general concept of Mandatory Access Control (MAC) provides a framework for enforcing access rights at the operating system level without users being able to change any permissions; MLS can thus be implemented on top of MAC.
Another related concept is the Need-To-Know principle, which restricts access in a horizontal way: it defines in which areas any access is possible at all. A simple example is given by the organizational units of a company: employees from different departments may have the security clearance to access files from other departments, but they simply do not need to know them to get their work done. This principle was also developed to fulfill the military need for shared knowledge: every single unit should only know the part of the overall strategy which it really needs to fulfill its particular function.

MLS systems assign security labels to each subject and object. These security labels are used to prevent unauthorized access to data and allow the implementation of more granular access controls. The labels define the security clearance of subjects and objects and set MLS apart from other approaches. The two classical variants safeguard either confidentiality (the Bell-LaPadula model) or integrity (the Biba model). By now, there are several other approaches which ensure both security goals, like the Lattice model [Den76] or the Chinese Wall model [BN89]. Since DLP addresses the protection of confidentiality, the Bell-LaPadula model is an adequate example to explain the basic principles of MLS. Like every MLS system, it assigns security labels to each subject and object. These labels usually represent the security clearances unclassified, confidential, secret, and top secret. Based on these security clearances, every access is controlled and the MLS system decides whether it is granted or not. To enforce such a preventive approach, which ensures that data is only accessed if the check of the security levels succeeds, an integrated data classification is necessary. Based on these requirements, the Bell-LaPadula model enforces two basic rules:

No Read Up: A subject can read an object if its security clearance is higher than or equal to the object's.

No Write Down: A subject can write an object if its security clearance is lower than or equal to the object's.

While the first rule is obvious, the following example explains why the second rule ensures the confidentiality of data: if a user with security level top secret composes a document, he must not be allowed to save it with the security level secret. If he could do so, anyone with security level secret would be able to read the document, which would be a violation of the confidentiality requirements. In contrast to DLP, this system depends on the constant use of a security classification scheme.
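As a minimal illustration of these two rules, the following sketch encodes the usual linear ordering of clearances. It is a toy model for this thesis' explanation, not an excerpt from any real MLS implementation:

```python
# Toy sketch of the Bell-LaPadula rules ("No Read Up", "No Write Down").
# The levels and their ordering follow the clearances named in the text.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def may_read(subject_level: str, object_level: str) -> bool:
    """No Read Up: reading is allowed only at or below the subject's clearance."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def may_write(subject_level: str, object_level: str) -> bool:
    """No Write Down: writing is allowed only at or above the subject's clearance."""
    return LEVELS[subject_level] <= LEVELS[object_level]

# The example from the text: a "top secret" subject may read a "secret"
# document, but must not save content at the lower "secret" level.
assert may_read("top secret", "secret")
assert not may_write("top secret", "secret")
assert may_write("confidential", "secret")
```

Note how the second rule blocks exactly the scenario described above: the top-secret user cannot relabel his document downward and thereby expose its content to secret-cleared readers.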
It grants access based on meta information, while DLP needs to analyze the content itself.

2.2.2 Access Control Lists And Capabilities

Access Control Lists (ACLs) and capabilities are two more concepts of access control. Neither approach uses access levels or security clearances; instead, they define exactly what a subject or object is allowed to do. An ACL can be considered a table [And01] which is assigned to an object; this table contains every user and his access rights for this particular object. In contrast, capabilities define exactly what a subject is allowed to do, e.g., access the file file.txt for reading and writing. These approaches allow very fine-grained access rules, since every interaction between a subject and an object can be controlled. Obviously this is not practicable, so ACLs which contain groups of users are more common. Another problem is the enumeration and modification of access rights and capabilities: using ACLs,

it is hard to find all the files to which a user has access, and also hard to remove those access rights when a user is removed from the system. Conversely, capabilities complicate the enumeration of all users who have access to a certain object. This is even more complicated since users or processes can share capabilities and hand them over to other processes, so it is really hard to revoke a capability once it has been granted.

2.2.3 Information Flow

One of the problems of MLS systems is their focus on the access of subjects to objects. If an access to an object is regarded as valid, it is, for example, not further controlled whether the data is passed on to a process which can be read by other users. Information Flow models [Den76] address this gap by monitoring the complete path of data across transitive processes or objects. For example, the Bell-LaPadula model would allow the write access of user A on process P. There would be no further control of whether user B has read access on P, so there would be a transitive information flow A -> P -> B and thus A -> B. Therefore, Information Flow models also assign a security level to process P which is derived from the information it handles and its clearance level. Based on this additional label, it is possible to hand over the initial security clearance of the information for further protection. This additional labeling must be supported by all programs which process the data. Additional data mark mechanisms, combined with updating instructions inserted by the compiler, can ensure the proper handling of the labeling information. Since there are some problems in applying Information Flow models to real-world applications, there are similar approaches which enhance the concept. A common problem of pure Information Flow models is the fact that two subjects with different security clearances cannot communicate with each other; only the flow from the lower security level to the higher one is possible.
To resolve this problem, there are Access Flow models [Sto81] which use both kinds of labeling: labels for general access and labels for potential access depending on further use. The labels for general access implement traditional MLS security levels and control the access of subjects to objects. Which labels for potential access are assigned depends on the function used to derive these labels from the general access labels. This function f, which computes the potential access label p from the general access labels g1, g2, ..., gn, can be implemented according to the environment and its specific requirements. Basically, it determines what can be done with the contained information, e.g., copying it to documents with other security levels.
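The derivation of the potential access label can be sketched as follows. For illustration, it is assumed here that f is simply the least upper bound (maximum) of all general access labels that flowed into a process; the text above notes that f is environment-specific, so this is only one possible choice:

```python
# Sketch of label propagation in an Access Flow model. The derivation
# function f(g1, ..., gn) is environment-specific; we assume the least
# upper bound (maximum) of all labels a process has seen.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def derive_potential_label(general_labels):
    """The process is tainted with the highest level of data it handled."""
    return max(general_labels, key=LEVELS.get)

# Process P read a confidential and a secret object; its output must
# therefore be handled as secret from now on.
p_label = derive_potential_label(["confidential", "secret"])
assert p_label == "secret"
```

This captures the transitive-flow problem from above: whatever P writes carries the label of the most sensitive input, so a later read by a lower-cleared user B can be denied.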

Based on these flow models, a system for taint analysis was implemented by Yin et al. [YSE+07]. The system monitored data flow through an operating system to determine whether sensitive information was accessed by processes other than the intended ones. This approach was used to detect malware, but it shows that information flow models are capable of detecting data leakage, too.

2.2.4 Hippocratic Databases

Hippocratic Databases aim at transferring the principles of the Hippocratic oath to database systems [AKSX02]. In particular, this means that more granular access controls exist to ensure that only the owner or an authorized user obtains information from the database system. They extend the principle of statistical databases, which are able to provide statistical information without compromising sensitive information (consider, for example, queries on a database containing only a small number of rows, which would allow conclusions about the contained personal information). The ten characteristic principles of Hippocratic Databases [AKSX02] should be achieved by attaching so-called attributes, e.g. authorized-users or retention-period, to all stored information. These attributes allow fine-grained access control. Another important requirement is the absence of side channels: it must be ensured that executed queries do not provide additional information beyond the intended statistics, e.g. statistical data based on only a small number of data sets, which would allow further interpretation.

2.2.5 Email Leak Prevention/Bayesian Filter

Email has become one of the most important communication media. The consequence of the resulting mass of sent messages is the rising threat of email leakage. Emails containing confidential data sent to wrong recipients, e.g. due to misspelling or wrong use of the auto-completion feature of modern mail clients, which completes addresses after the first few letters are typed, are widely known and obvious examples.
There are approaches which apply a kind of inverse spam filter to face this threat. One of them uses machine learning methods to determine whether a recipient was an intended one for a certain content or not. There are different stages [CC07] in applying learning strategies to email leakage. In a first step, recipient-message pairs are built and indexed using basic textual content analysis. These pairs are compared recipient-wise to discover which pair is most different from the others. In addition to this baseline method, a kind of social network analysis is used, e.g. a statistical analysis of how often two recipients are addressed in the same email. Based on a real-world training set of emails [Coh], this approach was able to detect real leakage with a success rate of up to 90%. Given the limited set of training data, it seems possible to reach a consistent recognition rate of 90% if this technique were trained in a mail client with a higher, real-world email rate.
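A toy version of the baseline method can be sketched as follows: an outgoing message is compared with each recipient's mail history via bag-of-words similarity, and the least similar recipient is flagged. The recipients and texts are invented for illustration, and real systems add the social network features mentioned above:

```python
# Toy sketch of recipient-message comparison for email leak detection.
# Each recipient's past mails form a bag-of-words profile; the recipient
# whose profile is least similar to the outgoing message is flagged.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def least_likely_recipient(message: str, history: dict) -> str:
    """history maps recipient -> concatenated text of past mails to them."""
    msg = Counter(message.lower().split())
    scores = {r: cosine(msg, Counter(text.lower().split()))
              for r, text in history.items()}
    return min(scores, key=scores.get)  # lowest similarity = most suspicious

history = {
    "alice": "quarterly revenue forecast budget figures",
    "bob": "lunch friday football tickets",
}
# A financial report addressed to bob looks like a likely mistake.
assert least_likely_recipient("revenue forecast attached", history) == "bob"
```

A real implementation would raise a warning for the flagged recipient before sending, rather than returning a name.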

2.3 Summary

The developed definition of data leakage allows an understanding of the basic leakage threats. Additionally, several approaches for mitigating these threats were presented. Based on this groundwork, it is possible to show what DLP actually is and for what purposes it can be used.

3 Current DLP Approaches

This chapter explains the current approaches that are implemented in DLP solutions. The following definition of DLP solutions [Mog07] is a good starting point for understanding all important aspects:

Products that, based on central policies, identify, monitor, and protect data at rest, in motion, and in use, through deep content analysis.

Based on this definition, it is possible to derive the three main capabilities of DLP solutions: identify, monitor, and react. Each of these steps in leakage prevention has to deal with the requirement to handle data at rest, in motion, and in use. There are several products which address single requirements, such as scanning for certain content. Complete DLP solutions must cover all of these tasks that can also be fulfilled by single programs; this allows the central management of all components and thus the sharing of results between the individual steps. The remainder of this chapter explains in depth how the different work stages are handled by complete DLP suites, which allows a more fine-grained analysis of the single capabilities.

3.1 Outline

The cited definition of DLP provides a basis for the structure of the remaining section. Since the single phases of DLP have very different requirements regarding both the analysis processes and their examination, an understanding of these phases is necessary for a structured evaluation. Section 3.2 explains how valuable content can be defined. Section 3.3 lists the different channels which can transport this valuable data and describes common practices for intercepting these channels. Finally, Section 3.4 describes how detected data breaches should be handled and reported. On this basis, Section 3.5 examines which requirements must be fulfilled by a product to be called a complete DLP suite.
3.2 Identify: How To Find Valuable Content

If sensitive data is to be protected, every kind of control mechanism needs to know what the valuable data looks like. So, in a first step, methods of defining

data and scanning for it are needed. It is not practicable to insert every piece of information that is worthy of protection into, for example, a database. A central management is needed, since the policies must be consistent and manageable; this is not given when policies are spread across different places or tools. It is also necessary to provide generic methods to define data both as generally and as specifically as needed. The following approaches provide capabilities to discover data in various ways and thereby also define the methods used to describe data.

Rule-Based: Regular expressions are the most common technique for defining data via an abstract pattern. At the same time, this is the biggest constraint, since this approach produces a high rate of false positives due to its limited scope and missing context awareness. For example, the term confidential can be used in various, even non-confidential, contexts. Due to the fast processing of regular expressions on huge amounts of text, they can be used as a first filter to reduce the amount of data for further processing with more sophisticated methods.

Database Fingerprinting: If it is possible to identify a database that holds a lot of sensitive data, this database can be used to perform exact data matching. The data gets extracted via live connections or nightly database dumps to check whether intercepted data matches the database content. This method produces very few false positives, but can have a clear impact on database performance and on the hardware requirements of the DLP server.

Exact File Matching: Similar to the extraction from a database, existing amounts of data, e.g. on a file server, can be hashed and indexed. Using these footprints, matching can be performed on any kind of file type with a low rate of false positives. But again, this heavily increases the hardware and bandwidth requirements.
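The rule-based first-pass filtering can be sketched as follows; the patterns are purely illustrative and would be far more elaborate in a real policy:

```python
# Regular expressions as a cheap first filter: fast on large amounts of
# text, but context-blind, so "confidential" also matches harmless uses.
# Both patterns are illustrative, not taken from any product's policy.
import re

PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),
    re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"),  # card-number-like digits
]

def first_pass_hit(text: str) -> bool:
    return any(p.search(text) for p in PATTERNS)

assert first_pass_hit("This document is CONFIDENTIAL.")
# False positive: the same word in a perfectly harmless context.
assert first_pass_hit("The talks were held in a confidential atmosphere.")
assert not first_pass_hit("Meeting minutes, nothing special.")
```

Only texts that pass this filter would then be handed to the slower, more precise techniques below.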
Partial Document Matching
This technique processes complete documents in order to match parts of them appearing in other documents. Thus every part of a sensitive document can be matched, even if it is only partially included in another document or just copied and pasted into an e-mail. To scan large numbers of documents efficiently, a special process called cyclic hashing is used: the first hash value indexes the first N characters of the document, the next value covers the following part, which also includes an overlapping section of the first one. Thereby, the resulting index contains an overlapping map of the document. If suspicious documents are to be examined, the same algorithm can be used to determine whether sensitive data is included. This method produces a low rate of false positives if standard phrases are excluded, but again causes a high CPU load due to the extensive hashing.

Statistical Analysis
Some DLP solutions use machine learning techniques, either by processing a set of training data or by learning continuously during operation. Exactly as with learning spam filters, this approach leads to false positives and false negatives. Still, the resulting evaluation can be used to calculate an overall leakage score.

Conceptual
If a dictionary of sensitive terms can be defined, the DLP solution can judge content based on the contained words. In this way, unstructured but sensitive or unwanted data can be assessed and scored. This extends the concept of regular expressions, since a complete dictionary is used for comparison: if more than N of the defined terms are found in the intercepted data, the data is judged to be confidential, whereas a regular expression matches immediately on every occurrence.

Categories
Many organizations use information classification based on categories like secret, internal, or public. If documents include an area or label representing this category, the DLP solution can recognize and process this information.

All of these approaches analyze the content of data. The solution must thus be able to understand many different file types, like Word documents, zip archives, or images. This deep content analysis enables the further processing using the methods mentioned above.

3.3 Monitor: Controlling Every Channel

In a second step, data must be accessible to allow the application of any kind of control. While the last section focused on content analysis, this section regards the context of data. The context of data is again related to the different states of data: in motion, at rest, and in use.
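The cyclic hashing scheme described above for partial document matching can be illustrated with a short sketch. Window size, step, and hash function are arbitrary choices for this illustration, not taken from any product:

```python
import hashlib

def _h(chunk: str) -> str:
    return hashlib.sha256(chunk.encode()).hexdigest()

def cyclic_hashes(text: str, window: int = 32, step: int = 16) -> set:
    """Index overlapping windows: each window shares window - step
    characters with its predecessor, giving an overlapping map of the
    document (here: 32-character windows overlapping by 16)."""
    return {_h(text[i:i + window])
            for i in range(0, max(len(text) - window, 0) + 1, step)}

def contains_fragment(document: str, fingerprint: set, window: int = 32) -> bool:
    """Slide one character at a time over the suspicious document so a
    copied fragment is found regardless of its position."""
    return any(_h(document[i:i + window]) in fingerprint
               for i in range(max(len(document) - window, 0) + 1))
```

A sensitive document is fingerprinted once with cyclic_hashes; an intercepted e-mail that embeds a long enough excerpt will then contain at least one indexed window and be flagged by contains_fragment.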
Data in motion can have the context of an e-mail or an FTP file transmission. This means a complete solution needs as many monitoring approaches as there are ways of transmitting data.

In Motion
Data in motion basically means all data that is transferred over the network, so there are several monitoring points at which traffic can be intercepted. This includes both integration into web proxies or mail servers and passive network monitoring using a special network port or similar controls.

At Rest
The scanning of data at rest is one of the most important use cases for a DLP solution, as it lets an organization recognize where its sensitive data has spread to. Most of the scanning can be done via network sharing methods, but in some cases, such as the assessment of endpoint systems or application servers, a software agent is needed to access all data.

In Use
Since every kind of user behavior is a different use case for leakage prevention, it is not possible to monitor data in use remotely. To control every action a user may take, an endpoint agent is needed. This agent hooks important operating system functions in order to recognize all actions a user takes, like copying to the clipboard.

3.4 React: Handling Policy Breaches

There are several controls for handling the different kinds of detected data loss. Depending on the state of the data, an appropriate reaction policy is necessary. It is not appropriate to delete sensitive documents found on a public file server: this would radically decrease the availability of data, even though it would prevent any leakage. There must be fine-grained possibilities to determine which controls should be applied. In the case of the file server, the file should be moved to a secure place, leaving behind a "moved to" message. Encryption is also a way to secure discovered data: by providing a challenge code, the password for decryption can be requested. The data then has to be moved to another place to avoid further policy breaches. It can also be necessary to go through an initial phase of reconnaissance to build and improve a basic rule set. Thus it must also be possible to define policies which merely notify about possible data leakage instead of protecting the data from leaking. Similar techniques can protect the content of intercepted e-mails or HTTP communication. These channels can be intercepted by proxies or are even designed to pass through mail gateways.
All complete DLP suites provide plugins or services which integrate into the proxy and mail gateway landscape. This can be implemented as a plugin that extends the proxy functionality or as a dedicated service which acts as a proxy itself and forwards the received data to the real proxy. It is more difficult to recognize sensitive data in an arbitrary network stream which is usually not intercepted by any station. Even if such data can be monitored using a special network port, preventing the leak is hard. One possibility would be to terminate the connection with TCP reset packets [Inf81] using the sniffed sequence number. Since this way of breaking connections is not practicable due to several further factors, it is necessary to integrate the DLP solution into firewalls and routers or to deploy endpoint agents. With access to the network infrastructure, it would be possible

to deploy dynamic blocking rules which drop connections that transport sensitive data. Endpoint agents can be installed on every endpoint system and communicate with the central policy server. Once they are installed and the latest policy is deployed, they can monitor all actions that take place on that particular system. It is even possible to monitor applications whose network streams cannot be analyzed by intercepting the traffic, e.g. due to proprietary protocols or encryption on a non-network layer. Additionally, the transfer of data to removable storage media has to be monitored and, if a policy breach is detected, blocked.

3.5 Current DLP Products

In 2008, many DLP solutions were released on the market. These new products as well as new directions in DLP are summarized every year by Gartner [QP08]. The report provides an overview of the key features of current DLP solutions. It also defines which minimal requirements have to be fulfilled for a product to be called a DLP suite. Appropriate products must provide, e.g., complete monitoring of all network data and either monitoring of data at rest or in use. A central management interface is necessary to provide a way to control all functions of the solution effectively. These requirements distinguish complete solutions from products which focus on only one part of the complete DLP approach, like content discovery. In 2008, the report listed 16 products which met these requirements, an increase of almost 100% over the 9 solutions mentioned in 2007 [EQM07]. Despite this heavy growth, the market is still called adolescent and evolving. The more competitive market resulted in more extensive capabilities of the solutions in 2008: in 2007, it was adequate to provide filtering or discovery on the network layer; in 2008, in line with the evolving market, all products supported scanning of data in at least two of the three states.
The integration of endpoint agents into the overall solution is thereby the most important change in product capabilities.

3.6 Summary

The DLP-related definitions given in this section build the groundwork for the understanding of DLP solutions. Without this understanding, it would not be possible to develop appropriate testcases which cover all capabilities of the solutions and also all possible leakage vectors. Additionally, the listed characteristics of DLP suites will show why the two examined DLP solutions are representative examples of this class of products.


4 Evaluation

Since the DLP market is still adolescent and evolving, it is important to evaluate current products before trusting them. Appendix B also shows that even security technologies frequently contain critical security vulnerabilities. New software or, more generally, new approaches need to be analyzed and reviewed to reveal any flaws. The following analysis therefore examines whether the two DLP solutions McAfee Host Data Loss Prevention [McA08] and Websense Data Security Suite [Web08] are able to protect the confidentiality of data in typical use cases. Since the implementation of DLP components on endpoint systems was one of the main changes in 2008 [QP08], the endpoint agents of the two suites were of particular interest. To examine these components, adequate testcases for endpoint agents are chosen from the developed general testcases (Section 4.1). Sections 4.2 and 4.3 explain the system design of the solutions and list the gathered security-related findings.

4.1 Testcases

When evaluating any piece of software, the testcases derive from its application and specification. The test scenarios for a DLP endpoint solution must therefore cover all use cases which represent typical user behavior that could affect the confidentiality of data. In doing so, both intentional and unintentional leakage of data must be handled properly. Intentional data leakage primarily covers malicious activities, but also includes an employee who is restricted by the DLP solution and wants to get his work done, e.g. by sending an e-mail containing important information to a colleague. These use cases lead to concrete technical checks which are summarized in Table 1. The sectioning once more derives from the different stages of a DLP process. The discovery functionality must implement all the different methods of defining data properly: if regular expressions are one possibility to describe a certain kind of data, these regular expressions must be applied properly.
Further tests must use different file types to check whether the solutions parse known file types properly and also recognize special features like embedded data structures. A compressed archive of a Word document containing Excel sheets and images would be a first test. If this processing works, it must be examined whether real deep content analysis is performed. This includes the processing of unknown file formats, even binary files, and the processing of known data which has been tampered with to slightly obfuscate the mime type, for example by removing the first line of a PDF document. Another special case is the handling of encrypted data: information should be encrypted when sent over untrustworthy channels, but attackers can use encryption to protect their communication, too. If there are corporate encryption tools which possibly use special mime types or other characteristics, it must be possible to add these signatures to the DLP solution.
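The mime-type obfuscation mentioned above can be reproduced with a few lines. The file contents below are a minimal stand-in, not a real PDF:

```python
# Reproduce the mime-type obfuscation: remove the leading magic-byte line
# ("%PDF-1.4") so naive file-type detection no longer sees a PDF. The file
# names and contents are placeholders for this sketch.
with open("secret.pdf", "wb") as f:
    f.write(b"%PDF-1.4\n1 0 obj ... SECRET ... endobj\n%%EOF\n")

with open("secret.pdf", "rb") as f:
    data = f.read()

with open("secret-nomime.pdf", "wb") as f:
    f.write(data.split(b"\n", 1)[1])  # drop everything up to the first newline
```

The resulting file still carries the sensitive payload but no longer starts with the PDF magic bytes, which is exactly the case a deep content analysis has to handle.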

As Section 3.5 shows, it becomes more and more important to monitor all possible channels for data transmission. So the different monitoring modules must be evaluated as to whether they reliably intercept data. The first tests should check whether sensitive data is recognized during transfers to USB sticks, which must contain multiple partitions and different filesystems, and over HTTP connections. Further steps need to get more sophisticated: different network protocols, like FTP or more specialized protocols like SNMP, must be checked. The DLP solution must monitor all media, also covering special cases like alternate data streams of the NTFS filesystem, which attach additional data to a file in a second data stream, and unknown blocks of network protocols. Even unknown network protocols should be examined using at least textual analysis.

The last stage of DLP, the reaction policies, must take care of proper response processes. Depending on the policy, it must be ensured that every reaction, like the blocking of file transfers and the transmission of notifications, is performed correctly. To ensure this, the actions should block the operation in advance instead of reacting after the fact. This differentiation matters in the following scenario: if a file transfer has already been performed, it is hard to eliminate the possibility of so-called race conditions. If data is copied, e.g. to a USB stick, the stick may be removed before any reaction can be performed. Similarly, all reporting messages must be sent over secure channels: the event collector service must authenticate itself and ensure the message integrity as well as the client's identity.

Since a lot of valuable information is needed to operate a DLP solution (policy descriptions, stored files from incidents), the DLP server is a worthwhile target for attackers. It is therefore necessary that the DLP solution was developed following secure coding principles and testing methodologies.
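The integrity requirement for reporting messages can be illustrated with a keyed MAC over the report; the message fields, key, and scheme are invented for this sketch and do not describe any product's actual protocol:

```python
import hashlib
import hmac
import json

# Sketch of an authenticated incident report: the agent signs the message
# with a key shared with the event collector, which rejects tampered or
# forged reports. Field names and the key are invented for this example.
KEY = b"shared-agent-collector-key"

def sign_report(report: dict) -> dict:
    payload = json.dumps(report, sort_keys=True).encode()
    mac = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return {"report": report, "mac": mac}

def verify_report(signed: dict) -> bool:
    payload = json.dumps(signed["report"], sort_keys=True).encode()
    expected = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["mac"])
```

A collector using such a check would drop any report whose content was modified in transit; in practice this would be combined with transport encryption and mutual authentication.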
If a solution regularly exhibits vulnerabilities, this might not have been the case. Further examination can include the use of fuzzing methods to search for new vulnerabilities. Independent of existing vulnerabilities, all access to both the management console and the clients must be encrypted and authenticated. Only authorized users should be able to read and change policies according to their access rights. The security of the system can be confirmed by a security certification like an EAL of the Common Criteria; at least such a certification is a sign that the product was developed with security in mind.

4.2 McAfee Host Data Loss Prevention

One of the new DLP solutions which appeared in 2008 was the McAfee Host Data Loss Prevention suite. It is currently available in version 2.2 and exists both as a full DLP suite, including network monitoring and data discovery, and as an endpoint monitoring solution. The central management is realized via a plugin which can be integrated into the McAfee management console ePolicy Orchestrator (epo). This management console is the central point for administering all kinds of McAfee security software, including the anti-virus components and endpoint agents. So the central maintenance of DLP policies can be easily integrated into the workflows of

Identify
- Are all methods to match data properly working?
- Are all file types handled properly?
- Are all file extensions handled properly?
- Are unknown data structures handled properly?
- Is encrypted data handled properly?

Monitor
- Are all removable devices (USB, floppy disc, CD) monitored properly?
- Are all file systems monitored properly, including all special functionalities?
- Are all network protocols (Layer 2/3/4) handled properly?
- Are all intercepting network devices monitored properly?
- Is there a possibility to decrypt files using an enterprise encryption system?

React
- Is sensitive data blocked?
- Are all incidents reported properly?
- Are there reaction or blocking rules?
- Do reaction rules allow race conditions?
- Is there a firewall/proxy integration to block network connections?

System Security
- Is all sensitive traffic encrypted?
- Do any publicly known vulnerabilities exist?
- Can vulnerabilities easily be found using simple vulnerability assessment methods?
- Are all access rights set properly?
- Is there a security certification like a Common Criteria level?

Table 1: DLP testcases

anti-virus administration. The solution basically provides the monitoring of removable storage media like USB sticks or floppy discs. All content which is about to be written to these media is monitored and analyzed based on the central policies. There are several methods of content monitoring, which are explained in Section 4.2.1. Based on these specifications and characteristics, custom testcases are listed in Section 4.2.2. The evaluation of these testcases and the resulting findings are listed in Section 4.2.3; a consolidated discussion follows later in the thesis.

4.2.1 Environment And Specifications

The McAfee DLP solution consists of several modules:

- epo Plugin
- epo Reporting Plugin
- EventCollectorService
- Endpoint Agent
- Database Instance

Figure 1 shows the basic interaction of the individual components. The central tool for administration is the policy server. Using epo 4.0, the DLP management plugin integrates into the existing anti-virus console. This plugin allows defining DLP policies, reaction rules, and global settings for the endpoint agents, and assigning these policies to the managed systems. For storing the settings, a Microsoft SQL Server installation is needed. A new DLP database is created during the installation and reserved for this purpose. Both the policy data from the management console and the reporting and notification events from the EventCollectorService are stored in this database. The EventCollectorService listens for messages from the endpoint agents, which report successful update processes or policy breaches. When the service receives data, it writes it back to the database. Finally, the reporting plugin can fetch this data and perform further processing. Since these components are all detached services, it is possible to install them on different machines. This allows the performance to be scaled according to the number of clients.
In the test environment that was built, all services, including the database and the domain controller, ran on a single virtual machine, as depicted in Figure 2. Due to the small number of two clients, this led to no performance issues. Using this environment, it is possible to monitor every removable storage device which is connected to either of the two client systems. There are several ways to define sensitive data: regular expressions, mime type recognition, and manual tagging. The defined regular expressions are applied to all data which is to be

Figure 1: Architecture of McAfee Host Data Loss Prevention

read or written from removable media and which meets the specification of the policy rule. This rule specification can contain restrictions to certain mime types or file extensions. Additionally, it is possible to tag files manually and thus mark special files which do not match any category. If a breach of policy is detected, a notification is shown to the user, the data is blocked, and an alert message is sent to the event collector service.

4.2.2 Testcases

As mentioned in Section 4.1, the concrete evaluation of an application derives from its capabilities. The examination of an endpoint agent needs only a subset of all testcases. Since only the endpoint agent for removable media monitoring is examined, no network monitoring is performed; Table 2 therefore lists the necessary evaluation items.

4.2.3 Results

To examine the McAfee DLP solution, specific tests were performed to determine whether it can fulfill the requirements listed in Table 2. To evaluate the capabilities of the system, a simple policy was created: every file that contains the string SECRET should be blocked from being written to any removable media.

Figure 2: Structure of the test environment

Discover
- Are regular expressions matched and is data blocked?
- Are all file types handled properly?
- Are all file extensions handled properly?

Monitor
- Are all devices monitored?
- Are all file systems monitored properly?

React
- Is sensitive data blocked?
- Are all incidents reported properly?

System Security
- Is all sensitive traffic encrypted?
- Do any obvious vulnerabilities exist?

Table 2: Testcases for McAfee DLP

Figure 3: Text pattern for matching the string SECRET

Figure 4: The complete reaction rule using the text pattern

Additionally, a notification message should show up, and the blocked data should be stored for further analysis. This policy could be implemented using the text pattern shown in Figure 3 and the reaction rule shown in Figure 4. The examination of this basic system is again divided into three parts which derive from the single stages identify, monitor, and react.

Identify
The first simple test was the copying of a text file containing the text SECRET to a USB stick. As expected, the text file was blocked (Figures 5 and 6) and stored in the repository of the epo server (Figure 7). After this check of the basic functionality of the system, further tests regarding the file type recognition were performed. Every DLP solution supports a multitude of file

Figure 5: Copying the secret file to a USB stick

Figure 6: The secret file is correctly blocked

Figure 7: The blocked file is stored at the epo repository

types that can be understood and parsed. So the following combinations of more sophisticated file types were generated:

- a PDF document containing the string SECRET
- a zip archive containing this PDF document
- a Word document with an embedded Excel sheet containing the string SECRET
- a zip archive containing this Word document

Since all of these file types are officially supported, all four files were detected as valuable content and thus blocked. To check whether a real deep content analysis is performed, the test was repeated using the same PDF file, but with its first line, which usually contains hints on the file type, removed. The following listing shows that the first line %PDF-1.4 is missing in the second file secret-nomime.pdf:

$ diff -a -u secret.pdf secret-nomime.pdf
--- secret.pdf
+++ secret-nomime.pdf
@@ -1,4 +1,3 @@
-%PDF-1.4

This slightly modified file was not detected by the scan engine, as Figure 8 shows. In general, this means that data with unknown mime types is not monitored. Similarly, a PNG image which contained the EXIF comment SECRET (EXIF is a standard for embedded image metadata) was copied to a USB stick. This test examines whether recognized document formats are parsed completely and correctly. As the file can be copied to the USB stick, this is another shortcoming of the solution.

Monitor
The next step is to examine whether all possible channels are monitored properly to ensure that all data can be discovered and analyzed. First, it was tested whether the monitoring engine is restricted to certain file systems. A file containing an NTFS alternate data stream was prepared and copied to the USB stick. This special extension of the NTFS file system was monitored correctly (Figure 9). A completely different file system, the Linux ext3 file system, which was made available by installing a special driver, was also monitored: a file containing the secret string was detected and blocked.
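Nested test files like those used for the identify tests above can be generated with a few lines; file names and contents are placeholders for this sketch:

```python
import zipfile

# Recreate the nested test files: a stand-in "document" containing the
# marker string, wrapped in a zip archive. A DLP engine has to unpack the
# archive to find the marker. Names and contents are placeholders.
with open("secret.doc", "wb") as f:
    f.write(b"report draft ... SECRET ... do not distribute")

with zipfile.ZipFile("secret.zip", "w", compression=zipfile.ZIP_DEFLATED) as z:
    z.write("secret.doc")
```

Because the archive is compressed, the marker string is no longer visible in the raw bytes of secret.zip; a solution that only scans raw content without unpacking would miss it.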

Figure 8: PDF files with removed leading line are not recognized

Figure 9: NTFS alternate data stream is monitored correctly

Figure 10: Unmonitored partition

Figure 11: Monitored partition

In contrast, it is a severe restriction that devices containing more than one partition are not properly monitored. Using a USB hard drive containing three partitions, only the last mounted partition was monitored during several attempts; the first two were not. Thus it was possible to write arbitrary data containing the secret string to the first two partitions (Figures 10 and 11). Another side channel is the possibility of copying a file named SECRET.txt to a USB stick: even though the file name contains the valuable information SECRET, the file is not blocked (Figure 12). To assess the stability of the endpoint agent, it is important to analyze the behavior of the system when a big file must be examined. Since the McAfee Host Data Loss Prevention does not analyze files bigger than 500 MB, it was not possible to run a meaningful test using a 5 GB file containing random data.

Figure 12: Information contained in file names is not monitored

React
While the last two paragraphs established which data is recognized and monitored, the reaction methods need to be examined in detail. As Figure 6 showed, data is deleted if valuable information is discovered. To check whether valuable data is really deleted, a completely empty USB stick was prepared: every sector of the flash memory was overwritten with zeros (more technical descriptions of all further steps reside in Appendix C, the documentation for this paragraph in C.1). When this USB stick was inserted into the client system, it had to be formatted and could then be mounted. A file containing SECRET several times was written to the stick and, as expected, was blocked and deleted. After this process, the stick was examined on a very low level without using any filesystem structures (the standard forensic methods used are described in Appendix C.1). The resulting data still contained the secret string SECRET. Thus it is possible to bypass the DLP solution by recovering the deleted files. To eliminate the side effects of wear leveling techniques [Cha07], the test was repeated with a floppy disc, ending up with the same result. In the last reaction step, the blocked data is stored in a central repository. This central repository is available via SMB on a provided network share. Monitoring this transmission revealed that the file is simply delivered via SMB without further encryption. Since SMB is a plain text protocol, it was no problem to intercept the traffic and extract the sensitive information (Figure 13).
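The low-level examination described above can be illustrated by searching a raw image for the marker string without interpreting any filesystem structures; the image name is a placeholder, and in the actual test the data came from the device itself (see Appendix C.1):

```python
# Search a raw device image for the marker string, ignoring filesystem
# structures: deleted files whose blocks were not overwritten still show
# up. "usb.img" is a placeholder name for this sketch.
MARKER = b"SECRET"

def find_marker_offsets(image_path: str) -> list:
    with open(image_path, "rb") as f:
        data = f.read()
    offsets, pos = [], data.find(MARKER)
    while pos != -1:
        offsets.append(pos)
        pos = data.find(MARKER, pos + 1)
    return offsets
```

Any non-empty result means the "deleted" sensitive content is still recoverable from the medium, which is exactly the bypass found in the test.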

Figure 13: Files containing sensitive data are transmitted via SMB in plain text

System Security
It is obvious that the DLP architecture is a worthwhile target for attackers: a lot of valuable data is stored in the incident repository, descriptions of all secret patterns are available, and no further searching would be necessary. Thus the overall security of the DLP system has to be appropriate to protect this mass of confidential data. The first phase of reconnaissance revealed several exposed network services which could be discovered using port scanning techniques (results are listed in Appendix C.2). The most interesting port was TCP port 8443 of the epo server, since it is used for logging in to the management console. The use of the encrypted HTTPS protocol for all traffic is definitely necessary, but the server supported insecure encryption algorithms with insufficient key lengths (Appendix C.3), which would allow the decryption of intercepted traffic. Usually most clients support strong cipher algorithms, but if some kind of mobile or embedded device is used to access the web service, it is possible that only a weak cipher is negotiated. Another web interface on the central server can be reached via TCP port 80. It provides remote access to functions like DeleteAllEvents, which purges all events from the reporting system. Since the unencrypted HTTP protocol is used, the NTLM handshake used for authentication can be sniffed. If weak passwords [Ril06] are used, the password can be brute-forced and an attacker could log in to delete all events. To mitigate these threats, strong SSL ciphers with appropriate authentication should be used to ensure that no communication can be sniffed. To find programming flaws in the endpoint agent, all interfaces must be enumerated. In a first step, a port scan (Appendix C.2) revealed a listening network

service on TCP port 8081 on the client side. Since this interface allows an attacker to send arbitrary data to the service, it is important that no vulnerabilities exist in this exposed network interface; otherwise it could be possible to compromise the complete system by exploiting the management port of the DLP solution. The second step is the enumeration of local functions which are intercepted by the DLP solution. All software solutions which monitor activities on the system, like anti-virus tools or personal firewalls, must intercept certain system interfaces to see whether they are accessed. It is, for example, possible to hook the operating system function that creates new files. Based on this interception, a DLP solution can read files that are about to be created and block these calls if they would violate any policies. Local attackers or malicious users could also inject code into these functions. This could, for example, disable the DLP solution or help the attacker gain the higher privileges of the DLP solution. These so-called API hooks were enumerated using memory analysis techniques (Appendix C.4). Using this knowledge about hooked functions and network ports, it was possible to analyze the behavior of these interfaces using so-called fuzzing techniques [SGA07]. This black box analysis method reveals major programming flaws which can also lead to security vulnerabilities. In general, it gives an overview of whether the application was properly tested and developed following secure coding best practices. The performed tests interacted with both the network socket and the hooked API functions and are described in detail in Appendix C.4. This basic evaluation revealed no crashes, neither of the DLP client nor of the operating system.
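A minimal version of such black-box fuzzing against the network interface can be sketched as follows. Host, port, payload sizes, and function names are illustrative; the payload generator is separated from the network code so it can be inspected offline:

```python
import random
import socket

def gen_payloads(seed: int, count: int = 100, max_len: int = 4096):
    """Deterministically generate random byte strings so that a run which
    triggers a crash can be replayed with the same seed."""
    rng = random.Random(seed)
    for _ in range(count):
        length = rng.randint(1, max_len)
        yield bytes(rng.getrandbits(8) for _ in range(length))

def fuzz(host: str, port: int, seed: int = 0) -> None:
    """Send each payload to the target service; a crash typically shows
    up as the service no longer accepting connections."""
    for payload in gen_payloads(seed):
        try:
            with socket.create_connection((host, port), timeout=2) as s:
                s.sendall(payload)
        except OSError:
            print("connection failed - target may have crashed")
            return

# Example invocation (not executed here): fuzz("192.168.1.10", 8081)
```

Real fuzzing frameworks additionally mutate protocol-aware templates and monitor the target process, but even this naive random-byte approach can expose basic robustness problems in an exposed service.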
4.3 Websense Data Security Suite

According to the DLP market overview by Gartner [EQM07], the Websense Data Security Suite was one of the leading DLP solutions already in 2007. Nevertheless, it provided only network monitoring and discovery functionalities at that time. Since an endpoint agent was added in 2008, it is a suitable complement to the DLP newcomer McAfee. Central management of both policies and client administration is realized by standalone or web applications. Like the McAfee Host Data Loss Prevention, the Data Security Suite monitors all removable storage media, so that the same set of testcases can be applied.

4.3.1 Environment And Specifications

Regarding only the functionality relevant to endpoint protection, the Websense Data Security Suite includes the following components:

- DSS Server
- Endpoint Agent

Figure 14: Architecture of the Websense Data Security Suite

The DSS Server is the central management and analysis component. Additionally, it provides log files, statistics, and stored incidents. To ensure the availability of this core system, it is possible to operate multiple instances of it; in this case, one of them must be the master DSS Server. Figure 14 shows the basic interaction between the single components. In contrast to the McAfee Host Data Loss Prevention, the Websense solution installs its own database server by default. It is possible to use an existing database server instead, but since there is no need for a dedicated installation, it is not listed in Figure 14. The DSS Server itself is divided into three tools:

- DSS Manager
- Management Console
- Policy Wizard

The DSS Manager provides a frontend to access both analysis data and global configuration settings. There is detailed data on each incident, as well as statistics and time lines. Additionally, it is possible to configure endpoint and server agents at this single point. The policies which define valuable data and adequate reaction rules can be created using the Management Console, where users and roles can be administered as well. There are also a lot of predefined policies which protect classes of data that are valuable in a country-specific sense. For example, there is

Figure 15: The file containing the filtered string is blocked.

a policy which protects the social security numbers used in the USA. These policies can be accessed via the Policy Manager.

4.3.2 Testcases

To make a statement about the security of the two DLP solutions, a direct comparison is necessary. Since the two solutions address the same leakage vector and use similar protection methods, the same testcases as for the McAfee solution were applied (Table 2). This comparison allows a more substantiated conclusion on the overall security and maturity of DLP solutions.

4.3.3 Results

The performed tests resulted in different findings, which are listed in the remainder of this section. Again, the same challenge as for the McAfee Host Data Loss Prevention was used: a policy that should prevent the copying of files containing the string SECRET to USB media was deployed to an endpoint agent. To verify the basic functionality, a PDF document containing the defined string was copied to a USB stick. As Figure 15 shows and the policy dictates, the file was blocked. The tests mentioned before were performed using this test environment. The results are listed below, divided into the categories identify, monitor, react, and system security.

Figure 16: A PDF document without its first line is not recognized.

Identify To ensure the correct processing of different file types and file operations, a PDF document containing the string SECRET was copied to a USB stick; the detection worked correctly: The file was blocked. Again, the next test was copying the same file after its first line had been removed. This small change was enough to circumvent the DLP solution, and the file was copied successfully to the USB stick, as Figure 16 shows. This test checks whether files are inspected based on their mime type. Since the McAfee DLP solution also did not recognize this file without the mime type information, the same PNG file containing the EXIF comment SECRET was copied to the USB stick. This time as well, the Websense Data Security Suite did not block the copying, as Figure 17 shows.

Monitor The system monitors all tested channels correctly. For example, both NTFS alternate data streams and the Linux ext3 filesystem were monitored and every information breach was detected. But a test of the stability of the application failed: While copying a 5 GB file filled with random data, which also contained the string SECRET, to a USB hard disk, the complete operating system froze in all three test runs. Without the endpoint agent running, this process completed without any problems. In the worst case, this system freeze could be a hint that some kind of overrun exists in the agent software. Depending on the kind of overrun, this vulnerability could be exploitable by an attacker; otherwise it only affects the availability of the system.
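The metadata bypass observed in the identify tests can be illustrated without any image tooling: a scanner that only inspects the visible document text misses a string hidden in a metadata field. The sketch below uses a simplified flat key/value model of a file, not a real EXIF or PNG parser:

```python
# Sketch of the metadata evasion: a naive content filter that only scans the
# "document text" misses a confidential string stored in a metadata field.
# Simplified illustrative model, not a real EXIF/PNG parser.

POLICY_STRING = b"SECRET"

def naive_filter(document):
    """Block only if the policy string appears in the visible text."""
    return POLICY_STRING in document["text"]

doc = {
    "text": b"quarterly report, nothing to see here",
    "metadata": {b"Comment": b"SECRET figures attached"},  # EXIF-style comment
}

print(naive_filter(doc))  # False: the file passes the filter
# A complete check would have to cover every metadata field as well:
print(any(POLICY_STRING in v for v in doc["metadata"].values()))  # True
```

A thorough DLP engine therefore has to parse every container format deeply enough to reach all embedded metadata, which is exactly what the tested products failed to do here.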

Figure 17: EXIF comments in images are not recognized.

React By default, the communication between the endpoint agents and the DSS Server is handled using HTTPS, which implicitly means that every communication is encrypted. This encryption can be turned off, which made it possible to analyze the communication protocol. The following listing shows the plain text communication (without its HTTP header information, to improve readability) that happens when the endpoint agent registers at the server after each start:

Request (client to server):
CPS_CLIENT xp-template N/A N/A.K...

Response (server to client):
CPS_CLIENT XaO......M.e.s.s.a.g.e..w.a.s..h.a.n.d.l.e.d..s.u.c.c.e.s.s.f.u.l.l.y...

This protocol is vulnerable to at least one attack. An attacker is able to intercept the traffic from the client to the server and vice versa (Appendix C.5). If this happens, the attacker can drop the requests from the client to the server and reply with arbitrary answers to the client. The complete response of the server is predictable:

CPS_CLIENT: Can be extracted from the client request
78: Answer code which stands for Message was handled successfully. There are also other message codes, like Incident was handled successfully

Message body: Derives from the answer code.

Thus an attacker can intercept the reporting of an incident, drop the request and send the answer Incident was handled successfully to the client. The report of the incident would never reach the server and thus would never be registered. It might also be possible for an attacker to inject faked messages into the incident reporting system. It was not possible to replay an initial registration request of the client. But since there is no additional client verification using, for example, certificates, the session id (in the example above: CPS_CLIENT) is generated on the client side only. This means that the server has no possibility to prove the identity of the client. If an attacker gets access to the data on the client system, he has access to all data the endpoint agent uses to generate the session id following a certain algorithm. An attacker could explore this algorithm, since all data and program code for it resides on the client side, and would then be able to generate valid session ids.

System Security As mentioned in Section 4.3.3, the communication between the endpoint agent and the DSS Server is encrypted due to the use of HTTPS. HTTPS is based on SSL, which in turn uses certificates for the authentication of the two stations. Since the Websense Data Security Suite uses a certificate only on the server side, the client is not authenticated. Additionally, the client does not verify whether the server's certificate is valid. Thus an attacker is able to perform an SSL man in the middle attack, which is described in Appendix C.5. This results in the decryption of the protocol and this in turn in the disclosure of sensitive data. Since only data that is judged to match the policy, which is the valuable data, is sent, this actually adds an additional vector for data leakage.
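Because the server's answer is fully predictable, the response forgery described in the react paragraph above can be sketched in a few lines. The message layout below is a simplified assumption derived from the captured plain text listing, not the exact wire format:

```python
# Sketch of the response forgery enabled by the predictable protocol: a man in
# the middle drops the client's incident report and fabricates the success
# answer itself. Field layout is an illustrative assumption.

SUCCESS_CODE = "78"  # observed code for "Message was handled successfully"

def forge_response(client_request):
    """Build the predictable server answer from the client's own request."""
    # The session identifier can simply be copied from the request.
    session_id = client_request.split()[0]
    return f"{session_id} {SUCCESS_CODE} Message was handled successfully"

def mitm_relay(client_request, server):
    """Drop the report instead of relaying it and answer on the server's behalf."""
    # The server is never contacted; the incident is silently discarded.
    return forge_response(client_request)

request = "CPS_CLIENT_1234 incident copy-to-usb SECRET.pdf"  # hypothetical report
print(mitm_relay(request, server=None))
# → CPS_CLIENT_1234 78 Message was handled successfully
```

Mutual authentication, e.g. client certificates, would deny the attacker the ability to speak on the server's behalf and would close this gap.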
4.4 Summary

The findings from the two preceding sections show that the DLP solutions are not yet mature, even considering the fact that the Websense Data Security Suite is one of the leading solutions in this field. There were far too many possibilities for even accidental leakage (e.g. the copying of data to one of the unmonitored partitions of a USB hard drive) in the McAfee Host Data Leakage Prevention for it to be safe to rely on the system as a part of the security concept. And even if the scan engine of the Websense solution may be able to avoid accidental leakage, it introduces an additional leakage vector to the network due to the lack of mutual authentication. These findings are summarized in Table 3 to provide a fast overview of the capabilities of the solutions. If a solution passed a test, a check mark (✓) is used; otherwise an X takes its place.

Test                                         McAfee   Websense
Text file containing SECRET                    ✓         ✓
Text file named SECRET                         ✓         X
PDF document containing SECRET                 ✓         ✓
Word file with embedded Excel table            ✓         ✓
Zipped Word file with embedded Excel table     ✓         ✓
PDF document without mime type information     X         X
EXIF comment                                   X         X
NTFS alternate data streams                    ✓         ✓
Third party filesystem (ext3)                  ✓         ✓
Multiple partitions on USB hard drive          X         ✓
Blocking of valuable data / Secure deletion    X         ✓
Proper encryption of management communication  X         X
Proper encryption of reported incidents        X         X
Fuzzing                                        ✓         ✓
Handling of big files                          X         X

Table 3: Performed testcases for McAfee Host Data Leakage Prevention and Websense Data Security Suite


5 Abstraction To General Problems And Conclusions

The results of the evaluation allow several conclusions to be drawn. In a first step, it is possible to abstract the problems found in the DLP solutions to general problems of the DLP approach, which are summarized in Section 5.1. The next step is the discussion, in Section 5.2, of alternative solutions which can also prevent data breaches. These solutions point out advantages and disadvantages of other approaches. Section 5.2 also discusses typical security measures like patch management and data classification which are not yet deployed in most environments.

5.1 General Problems

The different discovered vulnerabilities show that even software which is supposed to improve security needs a lot of administration and development effort before it can be regarded as secure. It is obvious that the costs of operating many security solutions, especially in a secure way, can easily exceed the available manpower. At the same time, this manpower is missing at other tasks which are necessary to operate networks and systems in a secure way, which includes far more assignments than administering security software. These circumstances can also lead to a loss of availability if the newly deployed DLP solution is misunderstood due to a lack of time. During the test phase of the McAfee DLP solution, a check mark was set at the wrong place due to similarly sounding configuration options. This led to the complete blocking of USB devices on all clients instead of blocking only the data which matched the policy. If this had happened in a productive environment, it would have meant a huge loss of availability of either information or access to USB storage. So it is essential to realize that the protection of one security goal may affect the security level of another requirement. In many cases this can be avoided by acting carefully or using other controls, but often implications arise, like the blocking of false positives in the DLP context.
These implications result from the high level of complexity of DLP solutions. To fulfill the claim of blocking all confidential data in every possible state and medium, a lot of new software needs to be deployed: endpoint agents, event collector services, plugins in every network node and so on. This mass of necessary changes affects the overall complexity of the network in a risky way. Since complexity is the biggest enemy of realistic and long-run security [Sch00], the deployment of such an extensive solution must be planned and tested extremely carefully. To use any technology in a safe way, it is necessary to understand the solution. Again, the high level of complexity may prevent a deeper understanding of the different controls and concepts. Any operational problem can be reduced if the product is understood, but the big codebase needed for the claimed features remains a problem. Every line of

code raises the probability of the existence of software bugs [Ber07], bugs in general and security flaws in particular, which can affect both the safety and the security of the system. This is just as true for software which is supposed to ensure security and so should be built with security in mind. Appendix B provides many examples of security tools which contain just as many bugs as all other software, so the initial statement keeps its validity. There is a high chance that further investigation will reveal many vulnerabilities, which can even lead to system compromises, in most of the DLP solutions [MM07]. And even if no security holes are discovered, the reporting of incidents can lead to a kind of denial of service attack. If an attacker starts to generate notifications by accessing certain files, it is possible to trick the administrators into re-adjusting the notification policy. This can lead to decreasing attention concerning the DLP reports. If there are many notifications reminding the user that he is accessing valuable data, the user might get frustrated and start to disregard the notifications. The resulting non-acceptance might also allow scenarios like printing mistakes. If a user prints a sensitive document by mistake and throws it into the paper basket, the information is not covered by any DLP solution anymore, due to non-acceptance or habit. Even if a DLP solution could help by showing a warning message, it is questionable whether awareness courses would not have been more efficient. Similar scenarios arise from insider threats. The Data Breach Investigations Report [BHV08] shows that 50% of all information leakage caused by insiders is conducted by administrators. These people have access to almost all data, since they have to administer it. Additionally, they know how a deployed DLP solution works and are able to disable or bypass it.
By disregarding the reports, due to a lack of time, too many reports or carelessness, the DLP solution loses much of its benefit. Thus the overall system security can be decreased by implementing an arbitrary DLP solution just to feel more secure. This is actually a kind of reactive approach: Regarding popular examples of data breaches, there are still many incidents which occur due to classical software vulnerabilities, inappropriate configuration or weak passwords. So the deployment of a DLP solution should be thoroughly reviewed if the reason for the deployment is a particular data breach. As always, the reasons and circumstances of the leakage must be considered to define a reasonable lesson learned. Based on this procedure, the further steps for closing the gap can be determined without blindly applying arbitrary security controls without understanding them.

5.2 Alternative Solution Statements

As Section 2.2 points out, there are many approaches which aim at the improvement of confidentiality. To understand which of them can be alternative solutions to the problem of leakage, it is necessary to examine the different kinds of leakage as listed in Section 2.1. DLP can probably protect best against the occurrence of unintentional or unconscious leakage. If an e-mail is sent to a wrong recipient due

to a typing error or wrong use of the auto completion feature, there is no way to get it back. Auto completion in this context means the feature of modern mail agents to complete a typed character B to a list of known recipients whose names start with a B. This may seduce users into just confirming the first mail address and sending the e-mail, which can easily lead to data leakage if there are two similar names. This scenario happened to the pharma company Eli Lilly when an employee sent an e-mail to the wrong recipient [Por08]. An additional check whether the e-mail should really be sent when confidential data is detected could help to prevent this kind of leakage. Maybe this would also prevent heedless people from continuing to act carelessly if the action were monitored every time. In contrast, data loss through a third party like a service provider is hard to handle using DLP. Even if a CD is entirely encrypted, which is definitely the right way when sending data, its loss would still result in press items about data loss. This kind of leakage should not result in a loss of reputation if the affected organization can present an integrated corporate encryption architecture which is used throughout. The last and probably worst case is malicious activity on the network. DLP may help to detect such activity because of peaks in leakage events. But as the results show, there were several ways to bypass the DLP solutions, so even a slightly skilled attacker would find a way to transfer data out of the network. Since malware starts to use encrypted communication channels more frequently, even the weak encryption algorithm of the Storm Worm [HSD + 08] would be enough to confuse DLP solutions; even automated attacks can still result in leakage incidents. A risk analysis should be performed to determine whether the existing systems and networks are secure enough that an additional DLP solution would bring clear advantages.
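The additional check proposed above could be as simple as intercepting the send action and asking for confirmation whenever detected confidential content is about to leave the organization. A minimal sketch, in which the policy marker, the domain and the rule itself are made-up assumptions:

```python
# Sketch of the proposed send-time check: before an e-mail leaves, warn the
# user if confidential content is about to go to an external recipient.
# Marker string and domain are illustrative assumptions.

INTERNAL_DOMAIN = "example-corp.com"

def needs_confirmation(recipient, body):
    """Ask the user to confirm when confidential data leaves the organization."""
    external = not recipient.endswith("@" + INTERNAL_DOMAIN)
    confidential = "SECRET" in body
    return external and confidential

# Auto-completion picked the wrong, external "B." recipient:
print(needs_confirmation("b.smith@gmail.com", "the SECRET roadmap"))         # True
print(needs_confirmation("b.smith@example-corp.com", "the SECRET roadmap"))  # False
```

Such a confirmation dialog would not stop a determined insider, but it directly addresses the accidental-recipient scenario described here.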
Such a risk analysis assumes that the administrators have very detailed knowledge of the condition of all systems and networks. If this is not the case, improving the knowledge of the systems and networks should clearly be prioritized. If this general improvement is implemented and still more data security is needed, the threat which should be mitigated must be identified. It is possible to use the defined kinds of leakage to assess possible threat scenarios. If system compromise through malware or human attackers is still the most important threat, the use of a Multilevel Security system or access control lists can help to restrict access to data. Access to data can then be restricted in advance, based on the knowledge of who actually needs it. Another possibility would be the use of server based computing. The use of thin clients which just connect to terminal servers reduces the attack surface to a few terminal servers. The secure operation of just a few systems is considerably easier than securing a large number of clients which all possibly run vulnerable systems or services. Many clients also increase the number of possible places where data is stored, so leakage becomes more likely. In this scenario, DLP solutions can help with their ability to discover content. This way, an organization learns where its data is spread. This also reveals the maturity

level of the infrastructure: Are there well known and obeyed processes to store and manage data? When sensitive data is stored on every endpoint system, it is strongly necessary to provide possibilities to store data in a secure way, e.g. using cryptography or central file servers. Using this knowledge, further controls can be defined: If the valuable files are mainly stored on file servers, the overall system security becomes even more important. If many findings reside in outgoing mailboxes, the education of users could help and even decrease all types of leakage. The unconscious leakage of data shows that users can act to the best of their knowledge and leakage occurs anyhow. So any kind of awareness training can teach the users how to handle sensitive data regarding both proper encryption and accurate deletion. Adequate examples of the impact of leakage may also convince careless or resigned users to reconsider their behavior. Of course it is most necessary to provide tools to the users which allow them to get their work done with the same comfort. This is even more important since certain kinds of user related leakage cannot be prevented by technical controls, as the example of the sold camera shows. This camera would never have been covered by a DLP solution.

5.3 Conclusions

The initial question whether Data Leakage Prevention can prevent data leakage can be answered, at least for the examined implementations: It cannot. The results of the examination show clearly which flaws exist in the software. The interpretation of the DLP concept is questionable, too. DLP can be a tool to get a survey of the data which is spread over the network. But since the solutions claim to provide protection instead of reconnaissance, this might lead to a completely wrong use without the implementation of a global data classification concept. The findings also highlighted the inability to prevent maliciously motivated leakage.
For even a slightly motivated attacker, it is possible to carry away interesting data. If the purpose of a DLP solution then comes back to the field of awareness and content discovery, it must be carefully evaluated whether the possible raise of awareness for confidential data justifies the highly increased complexity of the network. The discussion of this risk evaluation in Section 5.1 must be continued in more detail. The first implementations of DLP in live environments will certainly give a lot of feedback on these considerations. It is also necessary to examine the DLP solutions more in depth in terms of software vulnerabilities. Even if the security of the software is not related to the approach of DLP, it is necessary to assure that the software does not contain any vulnerabilities. Otherwise, it is not possible to avoid leakage. As mentioned in Section 4.3.3, it must be proven that fake messages really can be injected into the Websense solution. Since this thesis could only cover two DLP solutions out of 16, the most important work is to examine more DLP solutions. As mentioned in the introduction, every

software should be evaluated in terms of security. This evaluation process must go on for the other DLP solutions to get an overall picture of the security of these products.


A Security Goals

According to the ISO/IEC standard [Int04], the security goals confidentiality, integrity and availability are defined as follows:

Confidentiality: The property that information is not made available or disclosed to unauthorized individuals, entities, or processes

Integrity: The property of safeguarding the accuracy and completeness of assets

Availability: The property of being accessible and usable upon demand by an authorized entity


B Security Software Vulnerabilities

Table 4 lists several vulnerabilities of security solutions. All of these holes affected the security of many systems and had an impact on secure operations. The Common Vulnerabilities and Exposures standard [MIT] is used for clear references to single advisories.

Remote arbitrary Code Execution in McAfee epolicy Orchestrator via crafted UDP packet
Remote arbitrary Code Execution in Sophos Anti Virus due to buffer overflows
Arbitrary Code Execution in Kaspersky On-Demand Scanner via crafted archives
Remote arbitrary Code Execution in Norton Personal Desktop Firewall and Internet Security Suite
Possible Code Execution in Norton Personal Desktop Firewall
Command Execution in Kaspersky Anti Virus due to integer overflow
User assisted remote Code Execution in McAfee Enterprise Virus Scan via long unicode file name
Remote arbitrary Code Execution in McAfee Security Center via crafted argument
Remote arbitrary Code Execution in Norton AV solutions due to input validation errors
Arbitrary Code Execution in Avira Anti Virus via crafted archives
Remote arbitrary Code Execution in Kaspersky Online Scanner due to format string vulnerabilities
Arbitrary Code Execution in NOD32 Antivirus via crafted archives
Denial of Service in Outpost Desktop Firewall due to improper input validation
Remote arbitrary Code Execution in Norton AV solutions via long file names
Arbitrary Code Execution in Comodo Anti Virus
XSS Vulnerabilities in the Sophos Security Webfrontend
Arbitrary Code Execution in McAfee Common Management Agent when high Debug Level enabled
Local privilege escalation in Kaspersky Anti Virus

Table 4: Vulnerabilities in security software


C Technical Documentation

This section contains further technical documentation which explains several methods of the examination. Additionally, screenshots are provided as a proof of concept.

C.1 Filesystem Forensics

The following steps were performed to check whether data is properly deleted by the DLP solution:

Cleaning the medium by overwriting every sector with zeros:
# dd if=/dev/zero of=/dev/sdb
Checking whether the wiping was successful (Figure 18)
Inserting the storage medium into the client system and formatting it
Searching for the string SECRET on the formatted medium (Figure 19)
Copying the file to the USB stick and waiting until it is deleted
Searching again for the secret string, now successfully (Figure 20)

This result is attributed to improper deletion techniques. A filesystem usually contains several structures which are used for managing, e.g., creation, access and deletion of files. The used FAT filesystem contains two important areas: the so-called File Allocation Table (FAT) area and the data area. The FAT contains pointers to the data area where the content of a file is stored. When a file is deleted, the corresponding entry in the FAT is set to free, but the data itself is not touched. So it is possible to recover data from storage media if just the FAT entry was modified instead of overwriting the data in the data area [Car05]. These principles are applicable to most other filesystems.

C.2 Portscanner Results

The following listings show the results of the portscanner nmap [Lyo09]. They provide an overview which services of the DLP solution listen on which TCP ports. Comments describing the provided output are embedded on a single line with a leading #.
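The deletion behavior described above can be reproduced with a small, self-contained sketch. It is a toy model of FAT-style deletion, not a real filesystem driver; all names are made up for illustration:

```python
# Toy model of FAT-style deletion: "deleting" a file only frees its allocation
# entries while the data area stays untouched, so a raw search over the medium
# (as in the forensic test of Figure 20) still finds the secret string.

SECTOR = 512

def format_medium(num_sectors):
    """Return a zeroed 'disk': an allocation table plus a raw data area."""
    return {"fat": [0] * num_sectors, "data": bytearray(num_sectors * SECTOR)}

def write_file(disk, start, content):
    """Store content and mark the used sectors as allocated (1)."""
    disk["data"][start * SECTOR:start * SECTOR + len(content)] = content
    sectors = range(start, start + (len(content) + SECTOR - 1) // SECTOR)
    for s in sectors:
        disk["fat"][s] = 1
    return list(sectors)

def delete_file(disk, sectors):
    """Naive deletion: only the FAT entries are freed, data stays on disk."""
    for s in sectors:
        disk["fat"][s] = 0

disk = format_medium(16)
sectors = write_file(disk, 2, b"report: SECRET figures Q4")
delete_file(disk, sectors)

# The filesystem considers every sector free ...
print(all(entry == 0 for entry in disk["fat"]))  # True
# ... but a raw scan of the data area still recovers the confidential string:
print(b"SECRET" in disk["data"])                 # True
```

Secure deletion would additionally overwrite the data area, which is exactly the step the tested DLP solution skipped.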

Figure 18: There are zero lines containing anything other than zero.

Figure 19: Additional search for the string SECRET on the empty disc

Figure 20: Successful search after the valuable file was deleted by DLP

# Portscan of the McAfee DLP policy server
$ nmap -sSV -p- -T5
# -sS: TCP Syn scan
# -sV: Version detection enabled
# -p-: all TCP ports are scanned
# -T5: no pause between connection attempts, highest timing
Starting Nmap 4.76 ( ) at :14 CET
Warning: Giving up on port early because retransmission cap hit.
Interesting ports on :
Not shown: closed ports
PORT STATE SERVICE VERSION
53/tcp open domain Microsoft DNS
80/tcp open http Microsoft IIS webserver
/tcp open http Apache httpd
# Apache Webserver of the DLP solution
88/tcp open kerberos-sec Microsoft Windows kerberos-sec
135/tcp open msrpc Microsoft Windows RPC

139/tcp open netbios-ssn
389/tcp open ldap
445/tcp open microsoft-ds Microsoft Windows 2003 microsoft-ds
464/tcp open kpasswd5?
593/tcp open ncacn_http Microsoft Windows RPC over HTTP
/tcp open tcpwrapped
1025/tcp open msrpc Microsoft Windows RPC
1027/tcp open ncacn_http Microsoft Windows RPC over HTTP
/tcp open msrpc Microsoft Windows RPC
1055/tcp open msrpc Microsoft Windows RPC
1433/tcp open ms-sql-s Microsoft SQL Server ; RTM
# MS SQL Server needed for McAfee DLP
2383/tcp open unknown?
3268/tcp open ldap
3269/tcp open tcpwrapped
8081/tcp open http Network Associates epolicy Orchestrator (Computername: SWMTAPQ4NIFUN6U Version: )
# Agent listener. Provides policies and updates for the
# endpoint agents
8443/tcp open ssl/unknown
# McAfee epo
8444/tcp open ssl/unknown
8445/tcp open ssl/unknown
32467/tcp filtered unknown
43000/tcp open unknown?
# EventCollectorService
MAC Address: 00:0C:29:5D:15:50 (VMware)
Service Info: OS: Windows

Service detection performed. Please report any incorrect results at

Nmap done: 1 IP address (1 host up) scanned in seconds

# Portscan of the McAfee DLP client system
$ nmap -sSV -p- -T5
# -sS: TCP Syn scan
# -sV: Version detection enabled
# -p-: all TCP ports are scanned
# -T5: no pause between connection attempts, highest timing
Starting Nmap 4.76 ( ) at :19 CET
Warning: Giving up on port early because retransmission cap hit.
Interesting ports on :
Not shown: closed ports, 72 filtered ports
PORT STATE SERVICE VERSION
135/tcp open msrpc Microsoft Windows RPC
139/tcp open netbios-ssn
445/tcp open microsoft-ds Microsoft Windows XP microsoft-ds
8081/tcp open http Network Associates epolicy Orchestrator (Computername: MCAFEE-DLP-CLIE Version: )
# Agent listener, accepts e.g. wake up calls from the policy server
MAC Address: 00:0C:29:20:DF:BA (VMware)
Service Info: OS: Windows

Service detection performed. Please report any incorrect results at
Nmap done: 1 IP address (1 host up) scanned in seconds

C.3 Attacks on SSL

SSL key sizes of less than 128 bit can be easily decrypted. Since DES, which uses a 64 bit key size, was first broken in 1998 by a cluster of ASICs [Fou98], decryption has become less and less complex and is by now possible using standard hardware. So key sizes of 64 bit are inappropriate to protect any valuable information. Figure 21 shows that the DLP server supports many insecure encryption algorithms with 64 bit key sizes. All key sizes of 64 bit and less belong to this category, since, depending on the algorithm, some of these bits are checksum bits and the effective key size is even lower.

Figure 21: The management server supported insufficient key sizes
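The criterion behind the finding in Figure 21 can be expressed as a small filter over cipher suite descriptions. The sketch below uses the cipher metadata exposed by Python's ssl module plus a hypothetical scan result; it illustrates the check, it is not the tool used for the examination:

```python
# Sketch: flag cipher suites whose effective key size is below 128 bits, the
# criterion used for the finding in Figure 21. Works on the cipher description
# dictionaries Python's ssl module exposes; the scan result below is made up.
import ssl

def weak_ciphers(ciphers, min_bits=128):
    """Return the names of all suites weaker than min_bits."""
    return [c["name"] for c in ciphers if c["strength_bits"] < min_bits]

# What the local TLS stack would offer by default:
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
print(weak_ciphers(ctx.get_ciphers()))

# The same helper applied to a hypothetical list of server-supported suites:
scanned = [{"name": "DES-CBC-SHA", "strength_bits": 56},
           {"name": "AES256-SHA", "strength_bits": 256}]
print(weak_ciphers(scanned))  # ['DES-CBC-SHA']
```

A management server whose supported suite list yields any entries here, as in Figure 21, cannot protect the transported policy and incident data adequately.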

C.4 Fuzzing

Fuzzing is a testing approach for software. The main idea is to feed all kinds of input values to all interfaces of the software. This work can be done automatically once the interfaces and possible input values are enumerated. The observed reactions show how the code behaves in situations which were not intended by the author. The interfaces of the McAfee DLP solution were enumerated using a portscanner (Section C.2), which revealed a listening network socket, and the tool Memoryze [Sec08]. Memoryze dumps the memory of a running system and performs further analysis on this data, like enumerating running processes or API hooks. Using these interfaces, two kinds of fuzzing were performed. The first test consisted of fuzzing the TCP socket 8081 on the client system. To perform a structured analysis of the service, it is necessary to explore possible interaction methods. In case of this network socket, the client responded to several HTTP requests:

# nc
GET / HTTP/1.0

Another request could be captured when sniffing the traffic of the so-called agent wakeup call:

POST /spipe/pkg?agentguid=eaf6316b-b38e-405d-9d9d-1b8847e903fb\
&Source=Server_3.6.0 HTTP/1.0
Accept: application/octet-stream
Accept-Language: en-us
User-Agent: Mozilla/4.0 (compatible; SPIPE/2.0; Windows)
Host: DLP-Server
Content-Length: 367
Content-Type: application/octet-stream

These requests were used to prepare fuzzing points using the tool The Art of Fuzzing [the06]. For example, the transmitted values of Host: and Content-Length: carried various values into the client software. These values were marked in the fuzzing tool and, during the fuzzing, automatically replaced by many different combinations of bytes which may cause buffer overflows, format string exceptions or integer overflows. These tests tried all of these combinations and therefore ran for about 40 minutes.
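The header fuzzing just described can be sketched as a simple generator: take the captured request as a template, mark the fuzzing points and emit one mutated request per payload. The payload set below is a tiny illustrative sample of what a real fuzzer iterates over, not the actual test corpus:

```python
# Minimal sketch of the header fuzzing idea: mutate the marked fuzzing points
# (Host and Content-Length) of a captured request template with payloads that
# probe for buffer overflows, format string bugs and integer overflows.

TEMPLATE = ("POST /spipe/pkg HTTP/1.0\r\n"
            "Host: {host}\r\n"
            "Content-Length: {length}\r\n\r\n")

PAYLOADS = ["A" * 4096,   # long string, probing for buffer overflows
            "%s%s%s%n",   # format string tokens
            str(2 ** 32), # integer overflow candidate
            "-1"]         # negative length, sign-handling bugs

def mutated_requests():
    """Yield one request per (fuzzing point, payload) combination."""
    for payload in PAYLOADS:
        yield TEMPLATE.format(host=payload, length="367")
        yield TEMPLATE.format(host="DLP-Server", length=payload)

reqs = list(mutated_requests())
print(len(reqs))  # 8 requests: 2 fuzzing points x 4 payloads
```

Each generated request would then be sent to the agent's TCP port 8081 while the client is monitored for crashes or hangs.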

Figure 22: API hooks without (left) and with (right) installed DLP agent

Using the tool Memoryze, two memory dumps of the same system were created: the first one before the DLP endpoint agent was installed and the second one after its installation. The two generated reports differed in several hooked API functions (Figure 22), which means that the installed solution adds API hooks, which were explained before, to the system to intercept system calls. These functions were fuzzed using the tool BSODhook [Mat07]. This tool performs fuzzing attacks on all kinds of API calls automatically. Using the calls enumerated by the tool Memoryze, all hooked API functions, i.e. interfaces to the DLP solution, were tested by BSODhook. These tests consist of many different values which were passed to the API functions and met the requirements of the expected format. But even if the input data has the format the application expects, there is a wide range of possible values within this format which may force the application to crash.

C.5 Man In The Middle Attacks

A man in the middle attack (mitm attack) is an attack where the attacker is able to intercept messages between his two victims. This principle can be applied to network based attacks as well as to attacks on different programs. In case of the network based attack, there are many possibilities to perform this attack, for example:


More information

AB 1149 Compliance: Data Security Best Practices

AB 1149 Compliance: Data Security Best Practices AB 1149 Compliance: Data Security Best Practices 1 Table of Contents Executive Summary & Overview 3 Data Security Best Practices 4 About Aurora 10 2 Executive Summary & Overview: AB 1149 is a new California

More information

Defending Against Cyber Attacks with SessionLevel Network Security

Defending Against Cyber Attacks with SessionLevel Network Security Defending Against Cyber Attacks with SessionLevel Network Security May 2010 PAGE 1 PAGE 1 Executive Summary Threat actors are determinedly focused on the theft / exfiltration of protected or sensitive

More information

Data Loss Prevention in the Enterprise

Data Loss Prevention in the Enterprise Data Loss Prevention in the Enterprise ISYM 525 Information Security Final Paper Written by Keneth R. Rhodes 12-01-09 In today s world data loss happens multiple times a day. Statistics show that there

More information

2. From a control perspective, the PRIMARY objective of classifying information assets is to:

2. From a control perspective, the PRIMARY objective of classifying information assets is to: MIS5206 Week 13 Your Name Date 1. When conducting a penetration test of an organization's internal network, which of the following approaches would BEST enable the conductor of the test to remain undetected

More information

7 Network Security. 7.1 Introduction 7.2 Improving the Security 7.3 Internet Security Framework. 7.5 Absolute Security?

7 Network Security. 7.1 Introduction 7.2 Improving the Security 7.3 Internet Security Framework. 7.5 Absolute Security? 7 Network Security 7.1 Introduction 7.2 Improving the Security 7.3 Internet Security Framework 7.4 Firewalls 7.5 Absolute Security? 7.1 Introduction Security of Communications data transport e.g. risk

More information

Analyzing HTTP/HTTPS Traffic Logs

Analyzing HTTP/HTTPS Traffic Logs Advanced Threat Protection Automatic Traffic Log Analysis APTs, advanced malware and zero-day attacks are designed to evade conventional perimeter security defenses. Today, there is wide agreement that

More information

Guideline on Auditing and Log Management

Guideline on Auditing and Log Management CMSGu2012-05 Mauritian Computer Emergency Response Team CERT-MU SECURITY GUIDELINE 2011-02 Enhancing Cyber Security in Mauritius Guideline on Auditing and Log Management National Computer Board Mauritius

More information

Web DLP Quick Start. To get started with your Web DLP policy

Web DLP Quick Start. To get started with your Web DLP policy 1 Web DLP Quick Start Websense Data Security enables you to control how and where users upload or post sensitive data over HTTP or HTTPS connections. The Web Security manager is automatically configured

More information

Fight fire with fire when protecting sensitive data

Fight fire with fire when protecting sensitive data Fight fire with fire when protecting sensitive data White paper by Yaniv Avidan published: January 2016 In an era when both routine and non-routine tasks are automated such as having a diagnostic capsule

More information

INSTANT MESSAGING SECURITY

INSTANT MESSAGING SECURITY INSTANT MESSAGING SECURITY February 2008 The Government of the Hong Kong Special Administrative Region The contents of this document remain the property of, and may not be reproduced in whole or in part

More information

Information Technology Policy

Information Technology Policy Information Technology Policy Security Information and Event Management Policy ITP Number Effective Date ITP-SEC021 October 10, 2006 Category Supersedes Recommended Policy Contact Scheduled Review RA-ITCentral@pa.gov

More information

How to Secure Your Environment

How to Secure Your Environment End Point Security How to Secure Your Environment Learning Objectives Define Endpoint Security Describe most common endpoints of data leakage Identify most common security gaps Preview solutions to bridge

More information

Application Security in the Software Development Lifecycle

Application Security in the Software Development Lifecycle Application Security in the Software Development Lifecycle Issues, Challenges and Solutions www.quotium.com 1/15 Table of Contents EXECUTIVE SUMMARY... 3 INTRODUCTION... 4 IMPACT OF SECURITY BREACHES TO

More information

Driving Company Security is Challenging. Centralized Management Makes it Simple.

Driving Company Security is Challenging. Centralized Management Makes it Simple. Driving Company Security is Challenging. Centralized Management Makes it Simple. Overview - P3 Security Threats, Downtime and High Costs - P3 Threats to Company Security and Profitability - P4 A Revolutionary

More information

HTTPS Inspection with Cisco CWS

HTTPS Inspection with Cisco CWS White Paper HTTPS Inspection with Cisco CWS What is HTTPS? Hyper Text Transfer Protocol Secure (HTTPS) is a secure version of the Hyper Text Transfer Protocol (HTTP). It is a combination of HTTP and a

More information

Data Security Incident Response Plan. [Insert Organization Name]

Data Security Incident Response Plan. [Insert Organization Name] Data Security Incident Response Plan Dated: [Month] & [Year] [Insert Organization Name] 1 Introduction Purpose This data security incident response plan provides the framework to respond to a security

More information

From Network Security To Content Filtering

From Network Security To Content Filtering Computer Fraud & Security, May 2007 page 1/10 From Network Security To Content Filtering Network security has evolved dramatically in the last few years not only for what concerns the tools at our disposals

More information

WHITE PAPER. Managed File Transfer: When Data Loss Prevention Is Not Enough Moving Beyond Stopping Leaks and Protecting Email

WHITE PAPER. Managed File Transfer: When Data Loss Prevention Is Not Enough Moving Beyond Stopping Leaks and Protecting Email WHITE PAPER Managed File Transfer: When Data Loss Prevention Is Not Enough Moving Beyond Stopping Leaks and Protecting Email EXECUTIVE SUMMARY Data Loss Prevention (DLP) monitoring products have greatly

More information

Larry Wilson Version 1.0 November, 2013. University Cyber-security Program Critical Asset Mapping

Larry Wilson Version 1.0 November, 2013. University Cyber-security Program Critical Asset Mapping Larry Wilson Version 1.0 November, 2013 University Cyber-security Program Critical Asset Mapping Part 3 - Cyber-Security Controls Mapping Cyber-security Controls mapped to Critical Asset Groups CSC Control

More information

Data Protection McAfee s Endpoint and Network Data Loss Prevention

Data Protection McAfee s Endpoint and Network Data Loss Prevention Data Protection McAfee s Endpoint and Network Data Loss Prevention Dipl.-Inform. Rolf Haas Principal Security Engineer, S+, CISSP rolf@mcafee.com January 22, 2013 for ANSWER SA Event, Geneva Position Features

More information

REPORT ON AUDIT OF LOCAL AREA NETWORK OF C-STAR LAB

REPORT ON AUDIT OF LOCAL AREA NETWORK OF C-STAR LAB REPORT ON AUDIT OF LOCAL AREA NETWORK OF C-STAR LAB Conducted: 29 th March 5 th April 2007 Prepared By: Pankaj Kohli (200607011) Chandan Kumar (200607003) Aamil Farooq (200505001) Network Audit Table of

More information

The Information Leak Detection & Prevention Guide

The Information Leak Detection & Prevention Guide The Information Leak Detection & Prevention Guide Essential Requirements for a Comprehensive Data Leak Prevention System April 2007 GTB Technologies 4685 MacArthur Court Newport Beach, CA 92660 WWW.GTTB.COM

More information

Data Loss Prevention Program

Data Loss Prevention Program Data Loss Prevention Program Safeguarding Intellectual Property Author: Powell Hamilton Senior Managing Consultant Foundstone Professional Services One of the major challenges for today s IT security professional

More information

Data Leakage: What You Need to Know

Data Leakage: What You Need to Know Data Leakage: What You Need to Know by Faith M. Heikkila, Pivot Group Information Security Consultant Data leakage is a silent type of threat. Your employee as an insider can intentionally or accidentally

More information

Database Security Guideline. Version 2.0 February 1, 2009 Database Security Consortium Security Guideline WG

Database Security Guideline. Version 2.0 February 1, 2009 Database Security Consortium Security Guideline WG Database Security Guideline Version 2.0 February 1, 2009 Database Security Consortium Security Guideline WG Table of Contents Chapter 1 Introduction... 4 1.1 Objective... 4 1.2 Prerequisites of this Guideline...

More information

Cisco IPS Tuning Overview

Cisco IPS Tuning Overview Cisco IPS Tuning Overview Overview Increasingly sophisticated attacks on business networks can impede business productivity, obstruct access to applications and resources, and significantly disrupt communications.

More information

Enterprise Cybersecurity Best Practices Part Number MAN-00363 Revision 006

Enterprise Cybersecurity Best Practices Part Number MAN-00363 Revision 006 Enterprise Cybersecurity Best Practices Part Number MAN-00363 Revision 006 April 2013 Hologic and the Hologic Logo are trademarks or registered trademarks of Hologic, Inc. Microsoft, Active Directory,

More information

ICTN 4040. Enterprise Database Security Issues and Solutions

ICTN 4040. Enterprise Database Security Issues and Solutions Huff 1 ICTN 4040 Section 001 Enterprise Information Security Enterprise Database Security Issues and Solutions Roger Brenton Huff East Carolina University Huff 2 Abstract This paper will review some of

More information

Technology Blueprint. Protect Your Email. Get strong security despite increasing email volumes, threats, and green requirements

Technology Blueprint. Protect Your Email. Get strong security despite increasing email volumes, threats, and green requirements Technology Blueprint Protect Your Email Get strong security despite increasing email volumes, threats, and green requirements LEVEL 1 2 3 4 5 SECURITY CONNECTED REFERENCE ARCHITECTURE LEVEL 1 2 4 5 3 Security

More information

Websense Data Security Solutions

Websense Data Security Solutions Data Security Suite Data Discover Data Monitor Data Protect Data Endpoint Data Security Solutions What is your confidential data and where is it stored? Who is using your confidential data and how? Protecting

More information

THE EXECUTIVE GUIDE TO DATA LOSS PREVENTION. Technology Overview, Business Justification, and Resource Requirements

THE EXECUTIVE GUIDE TO DATA LOSS PREVENTION. Technology Overview, Business Justification, and Resource Requirements THE EXECUTIVE GUIDE TO DATA LOSS PREVENTION Technology Overview, Business Justification, and Resource Requirements Introduction to Data Loss Prevention Intelligent Protection for Digital Assets Although

More information

Vs Encryption Suites

Vs Encryption Suites Vs Encryption Suites Introduction Data at Rest The phrase "Data at Rest" refers to any type of data, stored in the form of electronic documents (spreadsheets, text documents, etc.) and located on laptops,

More information

Achieving PCI Compliance Using F5 Products

Achieving PCI Compliance Using F5 Products Achieving PCI Compliance Using F5 Products Overview In April 2000, Visa launched its Cardholder Information Security Program (CISP) -- a set of mandates designed to protect its cardholders from identity

More information

Chapter 23. Database Security. Security Issues. Database Security

Chapter 23. Database Security. Security Issues. Database Security Chapter 23 Database Security Security Issues Legal and ethical issues Policy issues System-related issues The need to identify multiple security levels 2 Database Security A DBMS typically includes a database

More information

Security Services. 30 years of experience in IT business

Security Services. 30 years of experience in IT business Security Services 30 years of experience in IT business Table of Contents 1 Security Audit services!...!3 1.1 Audit of processes!...!3 1.1.1 Information security audit...3 1.1.2 Internal audit support...3

More information

Beyond the Hype: Advanced Persistent Threats

Beyond the Hype: Advanced Persistent Threats Advanced Persistent Threats and Real-Time Threat Management The Essentials Series Beyond the Hype: Advanced Persistent Threats sponsored by Dan Sullivan Introduction to Realtime Publishers by Don Jones,

More information

Secure Email Inside the Corporate Network: INDEX 1 INTRODUCTION 2. Encryption at the Internal Desktop 2 CURRENT TECHNIQUES FOR DESKTOP ENCRYPTION 3

Secure Email Inside the Corporate Network: INDEX 1 INTRODUCTION 2. Encryption at the Internal Desktop 2 CURRENT TECHNIQUES FOR DESKTOP ENCRYPTION 3 A Tumbleweed Whitepaper Secure Email Inside the Corporate Network: Providing Encryption at the Internal Desktop INDEX INDEX 1 INTRODUCTION 2 Encryption at the Internal Desktop 2 CURRENT TECHNIQUES FOR

More information

HANDBOOK 8 NETWORK SECURITY Version 1.0

HANDBOOK 8 NETWORK SECURITY Version 1.0 Australian Communications-Electronic Security Instruction 33 (ACSI 33) Point of Contact: Customer Services Team Phone: 02 6265 0197 Email: assist@dsd.gov.au HANDBOOK 8 NETWORK SECURITY Version 1.0 Objectives

More information

Uncover security risks on your enterprise network

Uncover security risks on your enterprise network Uncover security risks on your enterprise network Sign up for Check Point s on-site Security Checkup. About this presentation: The key message of this presentation is that organizations should sign up

More information

Network Security Policy

Network Security Policy Network Security Policy I. PURPOSE Attacks and security incidents constitute a risk to the University's academic mission. The loss or corruption of data or unauthorized disclosure of information on campus

More information

WHITE PAPER. FortiWeb and the OWASP Top 10 Mitigating the most dangerous application security threats

WHITE PAPER. FortiWeb and the OWASP Top 10 Mitigating the most dangerous application security threats WHITE PAPER FortiWeb and the OWASP Top 10 PAGE 2 Introduction The Open Web Application Security project (OWASP) Top Ten provides a powerful awareness document for web application security. The OWASP Top

More information

White paper. Why Encrypt? Securing email without compromising communications

White paper. Why Encrypt? Securing email without compromising communications White paper Why Encrypt? Securing email without compromising communications Why Encrypt? There s an old saying that a ship is safe in the harbour, but that s not what ships are for. The same can be said

More information

Security Management. Keeping the IT Security Administrator Busy

Security Management. Keeping the IT Security Administrator Busy Security Management Keeping the IT Security Administrator Busy Dr. Jane LeClair Chief Operating Officer National Cybersecurity Institute, Excelsior College James L. Antonakos SUNY Distinguished Teaching

More information

CA Technologies Data Protection

CA Technologies Data Protection CA Technologies Data Protection can you protect and control information? Johan Van Hove Senior Solutions Strategist Security Johan.VanHove@CA.com CA Technologies Content-Aware IAM strategy CA Technologies

More information

quick documentation Die Parameter der Installation sind in diesem Artikel zu finden:

quick documentation Die Parameter der Installation sind in diesem Artikel zu finden: quick documentation TO: FROM: SUBJECT: ARND.SPIERING@AS-INFORMATIK.NET ASTARO FIREWALL SCAN MIT NESSUS AUS BACKTRACK 5 R1 DATE: 24.11.2011 Inhalt Dieses Dokument beschreibt einen Nessus Scan einer Astaro

More information

anomaly, thus reported to our central servers.

anomaly, thus reported to our central servers. Cloud Email Firewall Maximum email availability and protection against phishing and advanced threats. If the company email is not protected then the information is not safe Cloud Email Firewall is a solution

More information

Network Management and Monitoring Software

Network Management and Monitoring Software Page 1 of 7 Network Management and Monitoring Software Many products on the market today provide analytical information to those who are responsible for the management of networked systems or what the

More information

Architecture. The DMZ is a portion of a network that separates a purely internal network from an external network.

Architecture. The DMZ is a portion of a network that separates a purely internal network from an external network. Architecture The policy discussed suggests that the network be partitioned into several parts with guards between the various parts to prevent information from leaking from one part to another. One part

More information

Advanced File Integrity Monitoring for IT Security, Integrity and Compliance: What you need to know

Advanced File Integrity Monitoring for IT Security, Integrity and Compliance: What you need to know Whitepaper Advanced File Integrity Monitoring for IT Security, Integrity and Compliance: What you need to know Phone (0) 161 914 7798 www.distology.com info@distology.com detecting the unknown Integrity

More information

Network Security. Tampere Seminar 23rd October 2008. Overview Switch Security Firewalls Conclusion

Network Security. Tampere Seminar 23rd October 2008. Overview Switch Security Firewalls Conclusion Network Security Tampere Seminar 23rd October 2008 1 Copyright 2008 Hirschmann 2008 Hirschmann Automation and and Control GmbH. Contents Overview Switch Security Firewalls Conclusion 2 Copyright 2008 Hirschmann

More information

SANS Top 20 Critical Controls for Effective Cyber Defense

SANS Top 20 Critical Controls for Effective Cyber Defense WHITEPAPER SANS Top 20 Critical Controls for Cyber Defense SANS Top 20 Critical Controls for Effective Cyber Defense JANUARY 2014 SANS Top 20 Critical Controls for Effective Cyber Defense Summary In a

More information

Defending Against. Phishing Attacks

Defending Against. Phishing Attacks Defending Against Today s Targeted Phishing Attacks DeFending Against today s targeted phishing attacks 2 Introduction Is this email a phish or is it legitimate? That s the question that employees and

More information

Data Management Policies. Sage ERP Online

Data Management Policies. Sage ERP Online Sage ERP Online Sage ERP Online Table of Contents 1.0 Server Backup and Restore Policy... 3 1.1 Objectives... 3 1.2 Scope... 3 1.3 Responsibilities... 3 1.4 Policy... 4 1.5 Policy Violation... 5 1.6 Communication...

More information

Network Security: 30 Questions Every Manager Should Ask. Author: Dr. Eric Cole Chief Security Strategist Secure Anchor Consulting

Network Security: 30 Questions Every Manager Should Ask. Author: Dr. Eric Cole Chief Security Strategist Secure Anchor Consulting Network Security: 30 Questions Every Manager Should Ask Author: Dr. Eric Cole Chief Security Strategist Secure Anchor Consulting Network Security: 30 Questions Every Manager/Executive Must Answer in Order

More information

Austin Peay State University

Austin Peay State University 1 Austin Peay State University Identity Theft Operating Standards (APSUITOS) I. PROGRAM ADOPTION Austin Peay State University establishes Identity Theft Operating Standards pursuant to the Federal Trade

More information

ensure prompt restart of critical applications and business activities in a timely manner following an emergency or disaster

ensure prompt restart of critical applications and business activities in a timely manner following an emergency or disaster Security Standards Symantec shall maintain administrative, technical, and physical safeguards for the Symantec Network designed to (i) protect the security and integrity of the Symantec Network, and (ii)

More information

External Vulnerability Assessment. -Technical Summary- ABC ORGANIZATION

External Vulnerability Assessment. -Technical Summary- ABC ORGANIZATION External Vulnerability Assessment -Technical Summary- Prepared for: ABC ORGANIZATI On March 9, 2008 Prepared by: AOS Security Solutions 1 of 13 Table of Contents Executive Summary... 3 Discovered Security

More information

Firewalls, Tunnels, and Network Intrusion Detection

Firewalls, Tunnels, and Network Intrusion Detection Firewalls, Tunnels, and Network Intrusion Detection 1 Part 1: Firewall as a Technique to create a virtual security wall separating your organization from the wild west of the public internet 2 1 Firewalls

More information

Bendigo and Adelaide Bank Ltd Security Incident Response Procedure

Bendigo and Adelaide Bank Ltd Security Incident Response Procedure Bendigo and Adelaide Bank Ltd Security Incident Response Procedure Table of Contents 1 Introduction...1 2 Incident Definition...2 3 Incident Classification...2 4 How to Respond to a Security Incident...4

More information

McAfee Data Protection Solutions

McAfee Data Protection Solutions McAfee Data Protection Solutions Tamas Barna System Engineer CISSP, Security+ Eastern Europe The Solution: McAfee Data Protection McAfee Data Loss Prevention Full control and absolute visibility over user

More information

BlackBerry Enterprise Service 10. Secure Work Space for ios and Android Version: 10.1.1. Security Note

BlackBerry Enterprise Service 10. Secure Work Space for ios and Android Version: 10.1.1. Security Note BlackBerry Enterprise Service 10 Secure Work Space for ios and Android Version: 10.1.1 Security Note Published: 2013-06-21 SWD-20130621110651069 Contents 1 About this guide...4 2 What is BlackBerry Enterprise

More information

Understanding and Selecting a DLP Solution. Rich Mogull Securosis

Understanding and Selecting a DLP Solution. Rich Mogull Securosis Understanding and Selecting a DLP Solution Rich Mogull Securosis No Wonder We re Confused Data Loss Prevention Data Leak Prevention Data Loss Protection Information Leak Prevention Extrusion Prevention

More information

10 Building Blocks for Securing File Data

10 Building Blocks for Securing File Data hite Paper 10 Building Blocks for Securing File Data Introduction Securing file data has never been more important or more challenging for organizations. Files dominate the data center, with analyst firm

More information

Securing Endpoints without a Security Expert

Securing Endpoints without a Security Expert How to Protect Your Business from Malware, Phishing, and Cybercrime The SMB Security Series Securing Endpoints without a Security Expert sponsored by Introduction to Realtime Publishers by Don Jones, Series

More information

A Review of Anomaly Detection Techniques in Network Intrusion Detection System

A Review of Anomaly Detection Techniques in Network Intrusion Detection System A Review of Anomaly Detection Techniques in Network Intrusion Detection System Dr.D.V.S.S.Subrahmanyam Professor, Dept. of CSE, Sreyas Institute of Engineering & Technology, Hyderabad, India ABSTRACT:In

More information

Trend Micro Data Protection

Trend Micro Data Protection Trend Micro Data Protection Solutions for privacy, disclosure and encryption A Trend Micro White Paper I. INTRODUCTION Enterprises are faced with addressing several common compliance requirements across

More information

McAfee Network Security Platform

McAfee Network Security Platform McAfee Network Security Platform Next Generation Network Security Youssef AGHARMINE, Network Security, McAfee Network is THE Security Battleground Who is behind the data breaches? 81% some form of hacking

More information

Email DLP Quick Start

Email DLP Quick Start 1 Email DLP Quick Start TRITON - Email Security is automatically configured to work with TRITON - Data Security. The Email Security module registers with the Data Security Management Server when you install

More information

How To Protect A Network From Attack From A Hacker (Hbss)

How To Protect A Network From Attack From A Hacker (Hbss) Leveraging Network Vulnerability Assessment with Incident Response Processes and Procedures DAVID COLE, DIRECTOR IS AUDITS, U.S. HOUSE OF REPRESENTATIVES Assessment Planning Assessment Execution Assessment

More information

Cyber Security. BDS PhantomWorks. Boeing Energy. Copyright 2011 Boeing. All rights reserved.

Cyber Security. BDS PhantomWorks. Boeing Energy. Copyright 2011 Boeing. All rights reserved. Cyber Security Automation of energy systems provides attack surfaces that previously did not exist Cyber attacks have matured from teenage hackers to organized crime to nation states Centralized control

More information

10 Potential Risk Facing Your IT Department: Multi-layered Security & Network Protection. September 2011

10 Potential Risk Facing Your IT Department: Multi-layered Security & Network Protection. September 2011 10 Potential Risk Facing Your IT Department: Multi-layered Security & Network Protection September 2011 10 Potential Risks Facing Your IT Department: Multi-layered Security & Network Protection 2 It s

More information

Best Practices Top 10: Keep your e-marketing safe from threats

Best Practices Top 10: Keep your e-marketing safe from threats Best Practices Top 10: Keep your e-marketing safe from threats Months of work on a marketing campaign can go down the drain in a matter of minutes thanks to an unforeseen vulnerability on your campaign

More information

IDS or IPS? Pocket E-Guide

IDS or IPS? Pocket E-Guide Pocket E-Guide IDS or IPS? Differences and benefits of intrusion detection and prevention systems Deciding between intrusion detection systems (IDS) and intrusion prevention systems (IPS) is a particularly

More information

Websense Web Security Gateway: Integrating the Content Gateway component with Third Party Data Loss Prevention Applications

Websense Web Security Gateway: Integrating the Content Gateway component with Third Party Data Loss Prevention Applications Websense Web Security Gateway: Integrating the Content Gateway component with Third Party Data Loss Prevention Applications November, 2010 2010 Websense, Inc. All rights reserved. Websense is a registered

More information

WHITE PAPER Cloud-Based, Automated Breach Detection. The Seculert Platform

WHITE PAPER Cloud-Based, Automated Breach Detection. The Seculert Platform WHITE PAPER Cloud-Based, Automated Breach Detection The Seculert Platform Table of Contents Introduction 3 Automatic Traffic Log Analysis 4 Elastic Sandbox 5 Botnet Interception 7 Speed and Precision 9

More information

Comprehensive Malware Detection with SecurityCenter Continuous View and Nessus. February 3, 2015 (Revision 4)

Comprehensive Malware Detection with SecurityCenter Continuous View and Nessus. February 3, 2015 (Revision 4) Comprehensive Malware Detection with SecurityCenter Continuous View and Nessus February 3, 2015 (Revision 4) Table of Contents Overview... 3 Malware, Botnet Detection, and Anti-Virus Auditing... 3 Malware

More information

European developer & provider ensuring data protection User console: Simile Fingerprint Filter Policies and content filtering rules

European developer & provider ensuring data protection User console: Simile Fingerprint Filter Policies and content filtering rules Cloud Email Firewall Maximum email availability and protection against phishing and advanced threats. If the company email is not protected then the information is not safe Cloud Email Firewall is a solution

More information

Why Leaks Matter. Leak Detection and Mitigation as a Critical Element of Network Assurance. A publication of Lumeta Corporation www.lumeta.

Why Leaks Matter. Leak Detection and Mitigation as a Critical Element of Network Assurance. A publication of Lumeta Corporation www.lumeta. Why Leaks Matter Leak Detection and Mitigation as a Critical Element of Network Assurance A publication of Lumeta Corporation www.lumeta.com Table of Contents Executive Summary Defining a Leak How Leaks

More information

Firewall Testing Methodology W H I T E P A P E R

Firewall Testing Methodology W H I T E P A P E R Firewall ing W H I T E P A P E R Introduction With the deployment of application-aware firewalls, UTMs, and DPI engines, the network is becoming more intelligent at the application level With this awareness

More information

McAfee Data Loss Prevention 9.3.0

McAfee Data Loss Prevention 9.3.0 Product Guide Revision E McAfee Data Loss Prevention 9.3.0 For use with epolicy Orchestrator 4.5, 4.6, 5.0 Software COPYRIGHT Copyright 2014 McAfee, Inc. Do not copy without permission. TRADEMARK ATTRIBUTIONS

More information

Symantec Cyber Threat Analysis Program Program Overview. Symantec Cyber Threat Analysis Program Team

Symantec Cyber Threat Analysis Program Program Overview. Symantec Cyber Threat Analysis Program Team Symantec Cyber Threat Analysis Program Symantec Cyber Threat Analysis Program Team White Paper: Symantec Security Intelligence Services Symantec Cyber Threat Analysis Program Contents Overview...............................................................................................

More information

FBLA Cyber Security aligned with Common Core 6.14. FBLA: Cyber Security RST.9-10.4 RST.11-12.4 RST.9-10.4 RST.11-12.4 WHST.9-10.4 WHST.11-12.

FBLA Cyber Security aligned with Common Core 6.14. FBLA: Cyber Security RST.9-10.4 RST.11-12.4 RST.9-10.4 RST.11-12.4 WHST.9-10.4 WHST.11-12. Competency: Defend and Attack (virus, spam, spyware, Trojans, hijackers, worms) 1. Identify basic security risks and issues to computer hardware, software, and data. 2. Define the various virus types and

More information

Radware s Behavioral Server Cracking Protection

Radware s Behavioral Server Cracking Protection Radware s Behavioral Server Cracking Protection A DefensePro Whitepaper By Renaud Bidou Senior Security Specialist,Radware October 2007 www.radware.com Page - 2 - Table of Contents Abstract...3 Information

More information

Choose Your Own - Fighting the Battle Against Zero Day Virus Threats

Choose Your Own - Fighting the Battle Against Zero Day Virus Threats Choose Your Weapon: Fighting the Battle against Zero-Day Virus Threats 1 of 2 November, 2004 Choose Your Weapon: Fighting the Battle against Zero-Day Virus Threats Choose Your Weapon: Fighting the Battle

More information

ESET Security Solutions for Your Business

ESET Security Solutions for Your Business ESET Security Solutions for Your Business It Is Our Business Protecting Yours For over 20 years, companies large and small have relied on ESET to safeguard their mission-critical infrastructure and keep

More information

Basics of Internet Security

Basics of Internet Security Basics of Internet Security Premraj Jeyaprakash About Technowave, Inc. Technowave is a strategic and technical consulting group focused on bringing processes and technology into line with organizational

More information