Remote Fingerprinting and Multisensor Data Fusion
Samuel Oswald Hunter, Dept. Computer Science, Rhodes University, Grahamstown, South Africa, [email protected]
Barry Irwin, Dept. Computer Science, Rhodes University, Grahamstown, South Africa, [email protected]
Etienne Stalmans, Dept. Computer Science, Rhodes University, Grahamstown, South Africa, [email protected]
John Richter, Dept. Computer Science, Rhodes University, Grahamstown, South Africa, [email protected]

Abstract: Network fingerprinting is the technique by which a device or service is enumerated in order to determine the hardware, software or application characteristics of a targeted attribute. Although fingerprinting can be achieved by a variety of means, the most common technique is the extraction of characteristics from an entity and the correlation thereof against known signatures for verification. In this paper we identify multiple host-defining metrics and propose a process of unique host tracking through the use of two novel fingerprinting techniques. We then illustrate the application of host fingerprinting and tracking for increasing situational awareness of potentially malicious hosts. In order to achieve this we provide an outline of an adapted multisensor data fusion model, with the goal of increasing situational awareness through the observation of unsolicited network traffic.

Index Terms: remote fingerprinting, data fusion, situational awareness

I. INTRODUCTION

Remote host fingerprinting involves the inferred identification of software, services and hardware of devices based on observed characteristics that match known signatures. The characteristics of the device could be observed passively over a network; however, in most cases probes are sent to the device in order to elicit a response.
While host fingerprinting is most commonly used to map attack vectors during the reconnaissance phase of an attack, we propose the application of fingerprinting techniques towards hosts on the Internet that have malicious intent. This allows one not only to learn more about the characteristics of compromised hosts, but also to track these nefarious hosts in dynamic IP address space. When a previously observed host is re-identified, it allows us to construct a history of past discrepancies, which provides insight into the characteristics and behavior of these hosts.

With the exception of traffic anomalies such as those caused by misconfigured hardware or software services [1], unsolicited network traffic may be considered as either potentially malicious (PM), truly malicious (TM) or as the result of malicious (RoM) activity. We classify PM activity as traffic consisting of probes from host discovery scans, port scans or vulnerability scanning; these types of scans are often produced by tools such as Nmap 1 and Nessus 2. TM activity consists of exploitation attempts against a system, such as an entity attempting to exploit the MS vulnerability [18] on a host. Truly malicious traffic is often caused by worms exploiting a system or a malicious hacker using a tool such as Metasploit 3. RoM activity, also known as backscatter, consists of traffic received as the result of certain types of Distributed Denial of Service (DDoS) attacks [12]. When compromised hosts (bots) are used in a DDoS attack, they create spoofed source IP addresses for the duration of their attack. In certain cases the spoofed IP address will overlap with the address range of our monitoring sensors, enabling the detection of backscatter or RoM traffic from the attack. The vast majority of unsolicited traffic can thus be seen as originating from nefarious hosts that have some form of malicious intent. By monitoring this traffic we are able to extract existing attack methods and detect new techniques [1][15].
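The three-way classification above can be made concrete with a small sketch. The heuristics below (bare SYNs treated as scan probes, unsolicited SYN-ACK/RST responses as backscatter, payload-bearing traffic as exploitation attempts) are illustrative simplifications of the PM/TM/RoM categories, not the exact rules used in this work, and the packet-summary format is our own.

```python
# Illustrative PM / TM / RoM classification of an unsolicited packet.
# The heuristics and the packet-summary dict format are assumptions
# for the sake of the example, not the authors' actual rules.

def classify_unsolicited(pkt):
    """Classify a packet summary as 'PM', 'TM' or 'RoM'.

    pkt: {'flags': set of TCP flag names, 'dst_port': int, 'payload': bytes}
    """
    # An unsolicited SYN-ACK or RST is characteristic of DDoS backscatter:
    # a spoofed victim address fell inside our sensor range.
    if pkt['flags'] & {'SYN-ACK', 'RST'}:
        return 'RoM'
    # A bare SYN with no payload looks like a discovery or port scan probe.
    if pkt['flags'] == {'SYN'} and not pkt['payload']:
        return 'PM'
    # Anything carrying a payload is treated as an exploitation attempt.
    return 'TM'
```

For example, a bare SYN to port 445 would be labelled PM, while an unsolicited SYN-ACK would be labelled RoM backscatter.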
This type of traffic is, for the most part, obfuscated by legitimate traffic on production systems. There are, however, two techniques that are used to detect and capture unsolicited traffic: network telescopes and honeypots. This paper introduces a normative method of discovering hosts that have potential for producing PM and TM traffic, as well as instigating RoM traffic, through the enumeration of bots associated with a Fast-Flux domain in Section IV-B. This work proposes four categories to better define remote host fingerprinting: software, physical, associative and behavioral. Data observed by unsolicited traffic monitoring sensors undergoes a data fusion process which is defined by our adaptation of the multisensor data fusion model initially described by Waltz [20]. This model was later adapted for use with cyber situational awareness [4] and distributed intrusion detection systems [3].

1 Nmap - Port Scanner, 2 Nessus - Vulnerability Scanner, 3 Metasploit - Penetration Testing Software,

By creating our own adaptation of a multisensor data fusion model we show how it can be used to gain
situational awareness with regard to potentially malicious hosts on the Internet. Two new techniques, latency multilateration and associative fingerprinting, are introduced in this work. It was found that the latency-based technique, as a form of physical fingerprinting, showed promising results as a supporting metric; however, due to some sporadic results it cannot be used reliably on its own. Associative fingerprinting assumes a membership-based relationship between observed hosts and some physical or logical entity. One thousand compromised hosts from the Kelihos/Hlux botnet [14] were enumerated; the characteristics of these hosts were analysed, and this research shows how they provided insight into unique attributes that can be used to associate hosts with a given botnet.

The topic of host tracking, excluding application-level tracking, has seen very little research. Host tracking is a best-effort attempt relying on probabilities based on repeatedly observed characteristics. It is, however, of great interest to monitor the behavior of hosts in dynamic IP address space over time. Our research on host tracking may also have far-reaching implications in fields such as computer forensics and information warfare. In this paper we provide several host-defining metrics which, while used as separate fingerprints, might not provide a unique representation of a host, but will, when used as a collective, fingerprint multiple, inherently different attributes of a device. This increases the probability that the device can be uniquely identified. By making use of multiple host-defining metrics, obtained through remote fingerprinting and the adapted data fusion model for monitoring unsolicited network traffic, we create situational awareness that can provide deeper insight into the representation of malicious hosts on the Internet.
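The intuition that combining inherently different metrics raises the probability of unique identification can be illustrated numerically: if the metrics were independent, the chance of an unrelated host matching on all of them at once would shrink multiplicatively. The probabilities below are invented purely for illustration, not measured values from this study.

```python
import math

# Illustrative only: assumed per-metric "collision" probabilities, i.e.
# the chance an unrelated host happens to match on that single metric.
# All three numbers are made up for the sake of the example.

def collision_probability(per_metric_collision):
    """Chance an unrelated host matches on every metric simultaneously,
    under an (idealised) independence assumption."""
    return math.prod(per_metric_collision)

# e.g. OS/services profile shared by 1 in 50 hosts, latency position by
# 1 in 200, botnet association by 1 in 1000:
p = collision_probability([1 / 50, 1 / 200, 1 / 1000])
```

Under these invented figures the joint collision probability is one in ten million, which is the sense in which several weak metrics can jointly approach a unique fingerprint.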
The observation of their behavior and the ability to keep track of each of these hosts over time may allow us to identify the most prolific sources of malicious behaviour on the Internet and possibly infer their intentions against our own networks. Section II discusses our adapted multisensor data fusion model and the various data sensors of which it is comprised. In Section III we define the four categories of remote host fingerprinting which can provide metrics for host tracking. Section IV provides a discussion and the results of the two fingerprinting techniques: latency-based multilateration and associative fingerprinting.

II. MULTISENSOR DATA FUSION

Multisensor fusion techniques combine data from multiple sensors and related information from data stores to achieve greater confidence and accuracy in results, and to produce inferences that would not have been as reliable had they been produced from a single sensor's observations [6][10][20]. Data fusion involves the hierarchical transformation of data from multiple, and often heterogeneous, sources. The transformation process is coupled with decision making and inference-based classification of the data characteristics. The context of the observed environment and the relationships between entities therein are also of importance during the fusion process. Traditional military command and control (C2) systems would make use of multiple field sensors to observe and measure data primitives such as radiation, acoustic and thermal energy, nuclear particles and other observable signals. The sensors would then be used in a data fusion process to correlate, aggregate and associate the data in order to assist an automated decision-making process or support system.
While in principle the process of multisensor data fusion provides significant advantages over the observations of a single data source, Hall [7] states that, besides the statistical advantage gained by combining same-source data, the use of multiple types of sensors might, in practice, produce worse results than could be obtained by making use of a single, more appropriate sensor for the application. Hall further explains that these sub-optimal results are caused by an attempt to combine accurate data with inaccurate or biased data. We propose an adaptation of the generic data fusion model by Waltz [20], while noting the work done by Bass [3][4], in which the generic model was adapted and a framework for next-generation distributed intrusion detection systems was outlined. The model is adapted by including active response mechanisms to events, handling unsolicited network traffic as input and generating situational awareness as output. Active reconnaissance of objects modeled in the data fusion process allows for a more detailed and effective representation of those objects. This active reconnaissance forms part of remote host fingerprinting. The remainder of this section discusses the adapted fusion model in greater detail and introduces a variety of data sensors for the capture of unsolicited traffic.

A. Data Sensors

Network telescopes [12], also known as darknets [5], are, in their simplest form, a service operating over an unused but publicly accessible IP range. The service's sole purpose is to log all traffic destined towards it. The use of network telescopes in the assessment of potentially malicious traffic resulting from unsolicited communication has gained popularity over the last decade for its ease of deployment and low-cost investment. A honeypot, by contrast, would typically emulate one or more vulnerable services in an attempt to lure malicious entities into interacting with it.
Traffic observed by network telescope and honeypot sensors is disseminated and entered into the data fusion process through the use of a messaging framework [9]. The messaging framework was implemented using the Advanced Message Queuing Protocol (AMQP) with a RabbitMQ 4 broker. A publish/subscribe model was chosen as it allows for scalable data exposure and simplified distribution of work amongst fingerprinting modules. Modular data exposure is achieved through the use of JSON encoding. The two main advantages of using a message queuing service with a publish/subscribe model are the near real-time data exposure and the scalability inherent in a publish/subscribe data distribution model.

4 RabbitMQ - Messaging Framework,
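A minimal in-memory sketch of this publish/subscribe dissemination is shown below. A deployment would publish the same JSON bodies to a RabbitMQ fanout exchange via an AMQP client rather than an in-process broker; the class, topic and field names here are illustrative.

```python
import json
from collections import defaultdict

# In-memory stand-in for the publish/subscribe messaging framework.
# Events are JSON-encoded on publish, so subscribing fingerprinting
# modules stay decoupled from the sensor implementations.

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, event):
        body = json.dumps(event)              # JSON keeps modules decoupled
        for callback in self.subscribers[topic]:
            callback(json.loads(body))

broker = Broker()
seen = []
# A fingerprinting module registers interest in telescope observations.
broker.subscribe('telescope', seen.append)
broker.publish('telescope', {'src_ip': '203.0.113.7', 'dst_port': 445,
                             'protocol': 'TCP', 'flags': 'S'})
```

The fanout pattern mirrors the scalability argument above: adding another fingerprinting module is just another `subscribe` call, with no change to the sensors.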
The Fast-Flux Botnet Enumeration and Online Scraping tools that were developed for this research are responsible for producing support-based information, as opposed to the primitive data observations of the network telescopes and honeypots. This information assists in decision making and threat analysis at later stages of the multisensor data fusion process.

1) Network Telescopes: Previous research concerned with the analysis of network telescope traffic has been successful in the observation, identification and tracking of Distributed Denial of Service attacks [12], worm propagation [8][16] and network scanning [2]. The function of network telescopes, to capture traffic destined for unused address space, is particularly attractive as no legitimate traffic should exist on an unused range. By capturing only unsolicited traffic, network telescopes show that they are well suited to data capture, assisting in understanding the state of illegitimate and potentially malicious traffic on the Internet. As a result, network telescopes were chosen as one of the data sensors to be incorporated in the multisensor data fusion. Network telescopes are used to provide primitive observations. The important metrics from this traffic are listed in Table I.

- Timestamp
- Source IP / Destination IP
- Source Port / Destination Port
- Protocol
- Flags
- Sequence Number
- UDP Payload (in some cases)

Table I. Attributes of interest from network telescope sensor traffic.

2) Honeypots: A honeypot is a device or service that operates in a network, designed to detect various forms of malicious interaction from entities that initiate connections with it. In doing so, honeypots can serve as a defensive mechanism by acting as decoys for an attacker, but also as a valuable resource when observing nefarious traffic. A Dionaea 5 honeypot was used as one of the sensors. Data observed by the Dionaea honeypot was exposed through the XMPP protocol for real-time dissemination. The honeypot provided primitive observations.
Metrics from the honeypot data are listed in Table II.

- Timestamp
- Source IP / Destination IP
- Attributes
- Attack Description

Table II. Attributes of interest from honeypot sensor data.

5 Dionaea - Honeypot,

3) Fast-Flux Botnet Enumeration: Fast-Flux is a technique used to rapidly update DNS information. While it can be used for legitimate purposes such as load sharing, it has become popular with resilient botnets [13]. It has been shown that bot herders make use of Fast-Flux DNS techniques to host content within a botnet. This allows address mapping to constantly shift between different bots, making it very difficult to shut down [13]. Bots within the botnet relay content back to a central server, also known as the mothership [13]. Our own research into the Kelihos/Hlux botnet confirms this through the detection of an Nginx 6 web server on each of the bots found. This web server can act as a reverse proxy to forward content from a central server. In order to analyse the characteristics of a compromised host (bot) belonging to a botnet, a tool was developed to harvest the IP addresses of hosts belonging to a Fast-Flux botnet. The data sample was obtained by harvesting 1000 unique IP addresses from the Kelihos/Hlux botnet during March. Figure 1 shows that the majority of IP addresses were returned between 5 and 22 times, while the most returned IP address was recorded on 77 different occasions.

Figure 1. Frequency of IP addresses that were returned during the Kelihos/Hlux botnet enumeration. The x-axis shows the number of times a given IP was returned during the enumeration process for discovering 1000 unique IP addresses.

As mentioned earlier, the Fast-Flux Botnet Enumeration tool developed for this research does not act as a primitive data sensor: instead it provides supporting information to be used in association with entities observed by the network telescope and honeypot sensors with botnet or bot-like characteristics.
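The harvesting loop of such an enumeration tool can be sketched as follows. Here `resolve()` is a stub standing in for the repeated DNS A-record lookups against the Fast-Flux domain, the address pool is simulated, the port-scan and HTTP GET steps are omitted, and all counts and names are illustrative.

```python
import random

# Sketch of a Fast-Flux harvesting loop: repeatedly resolve the domain,
# record how often each IP is returned, and stop once the target number
# of unique hosts is found or too many consecutive lookups yield nothing
# new. resolve() is a stub for a real DNS A-record query.

def resolve(domain):
    """Stub: one A record per query, drawn from a small simulated pool."""
    return '198.51.100.%d' % random.randrange(1, 40)

def enumerate_flux(domain, target=30, max_misses=500):
    hosts, frequency, misses = set(), {}, 0
    while len(hosts) < target and misses < max_misses:
        ip = resolve(domain)
        frequency[ip] = frequency.get(ip, 0) + 1
        if ip in hosts:
            misses += 1          # this query returned nothing new
        else:
            hosts.add(ip)
            misses = 0
    return hosts, frequency

hosts, freq = enumerate_flux('flux.example.com')
```

The `frequency` dictionary is what produces a distribution like Figure 1, and the unusually frequent entries are the outlying hosts discussed in Section IV-B.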
The tool takes as input a Fast-Flux domain and a target number of hosts to enumerate. The Fast-Flux Botnet Enumeration tool then finds, through multiple DNS requests, the requested number of hosts, stopping once a certain number of attempts at finding a new host has been reached. Each discovered host is then enumerated by performing a basic port scan and an HTTP GET request to port 80. Results are shown in Section IV-B.

4) Online Sources: While, strictly speaking, the Online Scraping Tools and the Fast-Flux Botnet Enumeration tool do not constitute data sensors in the adapted multisensor data fusion model, they are listed in this section for brevity. They form part of Level 4 (Resource Management).

6 Nginx - HTTP and reverse proxy server,

There are numerous websites on the Internet that maintain databases of information concerning malicious activity on the Internet, as well as data regarding the sources of malicious activity. By making use of these sources it is possible to produce rich datasets containing information such as past
discrepancies of hosts, the time at which malicious activity was first observed and the kind of malicious traffic the host produced. These sources of information, while useful as supporting data, can also be used as a data source for finding malicious hosts to analyse. These sites contain large quantities of information on historical malicious activity and are updated on a regular basis, ranging from continuously to daily. We made use of projecthoneypot.org to identify the nature of malicious activity from IP addresses that cross-referenced with their database.

B. Fusion Model

Inputs to the data fusion process include the sensor data outlined in Section II-A, a priori data, and supporting data collected in real time through reconnaissance techniques such as port scans and Operating System identification. Remote fingerprinting support data is where the adaptation of the generic multisensor data fusion model becomes most apparent. The process of multisensor fusion is concerned with the adaptation, correlation and analysis of data from different sources in order to evaluate a situation or event and then assist in the decision-making process or initiate direct action. Figure 2 illustrates the adapted model for monitoring unsolicited network traffic to produce situational awareness. Unsolicited traffic sensors provide raw data, in the form of network packets from network telescopes, and processed data, in the form of events registered by honeypots. This data forms the unsolicited traffic input into the data fusion model. The multisensor data fusion model provides an abstract representation of data aggregation, alignment and analysis through the use of different levels of processing. These levels form a hierarchy of processing that data undergoes in order to create knowledge. Level 0 data refinement was originally concerned with the calibration of equipment, sensor adjustment and the filtering of data [4], such as outlier removal.
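A source-address validity check of the kind performed at this level can be sketched with Python's `ipaddress` module; the bogon list below is truncated and illustrative, whereas a deployment would load a maintained full bogon feed.

```python
import ipaddress

# Level 0 sketch: reject source addresses that fall inside bogon space.
# This network list is a truncated illustration, not a complete feed.

BOGONS = [ipaddress.ip_network(n) for n in (
    '0.0.0.0/8',        # "this" network
    '10.0.0.0/8',       # RFC 1918 private
    '100.64.0.0/10',    # carrier-grade NAT
    '127.0.0.0/8',      # loopback
    '169.254.0.0/16',   # link-local
    '172.16.0.0/12',    # RFC 1918 private
    '192.168.0.0/16',   # RFC 1918 private
    '224.0.0.0/4',      # multicast
    '240.0.0.0/4',      # reserved
)]

def is_bogon(addr):
    """True when addr cannot be a legitimate public source address."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BOGONS)
```

Packets whose source fails this check would be discarded before Level 1 object refinement.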
Level 0 (Data Refinement) consists of determining the validity of the source IP address by checking whether the address falls in bogon address space. Level 1 (Object Refinement) is the processing of received data: aligning the data to a common frame of reference such as timestamp, sequence numbers and the location of an IP address through its ASN. This includes the correlation of data, such as packets with the same source IP address. Data is also classified at this level as potentially malicious traffic, truly malicious traffic or the response of malicious traffic. This level is concerned with the creation of an object entity and the association of characteristics with that entity in order to create an Object Base. Level 2 (Situational Refinement) provides situational knowledge regarding the Object Base after it has been aligned, correlated and analysed. At this level, aggregated sets of objects may be detected by examining their co-ordinated behavior. An example of this could include the observation of backscatter traffic from a DDoS attack, or the traffic from an actual DDoS attack on some portion of the sensors. The Situational Refinement process takes into consideration the various sensors involved, as well as the information provided by Level 4 (Resource Management), which constitutes any additional real-time and a priori data retrieval, such as the use of fingerprinting techniques and searching for past discrepancies associated with characteristics of the Object Base. Characteristics of objects that are inspected at this level include: common points of origin (source IP or address range), protocols, common targets and attack rates. It is important to note that, unlike the traditional application of multisensor data fusion, data will often only be observed by a single sensor or, alternatively, be observed by multiple sensors but at disjoint points in time. As an illustration, consider a host infected with an SSH worm.
The worm scans network ranges in order to find further hosts to infect, and might scan a honeypot first and a day later reach one of the network telescope sensor ranges. The Level 4 (Resource Management) process resolves this disjoint information. This part of the model is used to collect additional information and refer back to older data in order to correlate characteristics. Continuing with this example: Level 3 (Threat Assessment) could make use of the time difference between detection of the SSH worm scanning the honeypot and the network telescope to infer the rate at which this host is scanning network ranges. The Threat Assessment process is concerned with the possible threats that unsolicited traffic might pose, as well as the implications of these threats. Data from this level is combined with information from Level 2 to construct a Situation Base. The Situation Base represents aggregated information from a single entity or multiple entities, combining information from all the levels in the multisensor data fusion. This represents an event that constitutes a small subset of the growing situational awareness that ultimately corresponds to the available intelligence on a malicious host demographic.

III. REMOTE FINGERPRINTING

The process of fingerprinting involves the enumeration of attributes from devices or services and the correlation of these attributes with a list of known signatures of potential devices or services. One of the objectives of this research is to establish, with some certainty, a unique fingerprint of a host so that we would be able to identify the host again at different points in time and maintain an account of its activity. As not all hosts connected to the Internet make use of static IP addresses, this is a non-trivial process: the vast majority of residential and mobile connectivity is achieved through the leasing of dynamic IP addresses from an Internet Service Provider.
While it is impossible to conclusively discover the identity of a host without physical access to that device, we attempt to fingerprint and identify a host with as much certainty as possible. The ability to keep track of a host in dynamic IP address space is applicable in the fields of computer forensics and information warfare. Where previously
this could only be effectively achieved through the deployment of call-home code on that host; this is now possible, to a degree, through a holistic fingerprinting process, where multiple heterogeneous remote fingerprinting techniques are combined to create a logical and unique representation of the host.

Figure 2. Unsolicited network traffic fusion for advanced situational awareness, adapted from [20][3].

The remainder of this section outlines four unique categories of remote fingerprinting. It should be noted that it is impossible to remotely fingerprint a host with complete confidence. Fingerprinting relies on traffic from a host, captured either passively or as a result of probing. While it might be unlikely that the host is attempting to interfere with, or is even aware of, the fingerprinting process, research [17] has been performed and applications have been developed for just this purpose. It is assumed that the vast majority of hosts on the Internet are not attempting to subvert our remote fingerprinting attempts.

A. Four Categories of Fingerprinting

In order to uniquely identify a host, a combination of multiple fingerprinting techniques that measure, observe and determine heterogeneous characteristics of a host is required. The four categories of fingerprinting that have been identified are software characteristics, physical characteristics, association/affiliation and behavioral patterns. These four categories represent host-defining metrics that, when measured correctly, are capable of creating a unique logical representation of a host that can be used for unique identification. There exist two distinct types of fingerprinting: active and passive. Passive fingerprinting is undetectable but less reliable. It is achieved by inspecting packet streams, either through a Man-in-the-Middle attack, through reflected traffic such as backscatter, or through incoming unsolicited traffic. A popular passive fingerprinting tool is p0f 7.
p0f inspects packets and compares their characteristics to known signatures of different Operating Systems. The process of active fingerprinting uses probes that are sent to a target in order to elicit a response. This response is then analysed to determine a subset of the characteristics representing the device.

7 p0f - Purely passive fingerprinting,

1) Software Characteristics: Software characteristics include metrics such as the Operating System and OS version, open ports, the services running on those ports and the versions of those services, as well as unique configurations. The three most popular tools used for fingerprinting within this category are nmap, p0f and hping 8. This research made extensive use of nmap and p0f. Enumerating the software characteristics of a host may provide insight into the purpose and intent of that host. It also maps an attack vector when targeting that host with offensive capabilities, as it provides information concerning the possible Operating System and services that may be vulnerable to attack. In certain instances it may also provide a unique value used to identify the host. For example, it may obtain the public SSH key of the host if it is running an SSH service.

2) Physical Characteristics: The physical characteristics of a host are represented by attributes such as hardware and geographic location in the world. A previous study [11] into clock skews for remote fingerprinting of hardware has shown promise. In order to fingerprint a device, the study exploits microscopic deviations in device hardware, allowing the authors to uniquely identify devices even when behind a NAT or firewall [11]. Another physical characteristic of a host is the host's MAC address, which represents the unique hardware address of a Network Interface Card (NIC). There are, however, two challenges in obtaining the MAC address of a host: the MAC address cannot be obtained across subnets, and it is trivial to spoof a MAC address.
The third metric for physical attributes is geographic location. This can be determined from the IP address of a host; however, this is subject to potential inaccuracy. For our research we have built further on the concept of geographic locality to fingerprint a host by implementing latency-based measurement, discussed in Section IV-A.

8 Hping - Network Scanner,

3) Association: We define associative fingerprinting as the identification of a relationship, association or affiliation between a host and some physical or logical entity. While this approach is normative, to our knowledge it has not been formally defined or used as a fingerprinting technique. An Internet Service Provider is an example of a host/entity relationship: in its simplest form, an ISP is responsible for providing Internet connectivity and an IP address to a device. This relationship is, however, not absolutely unique and can thus not be used as a fingerprint that defines a unique characteristic of a host; it is, in fact, only slightly more distinctive than the dynamic IP address that has been leased. Botnets provide another example of a host/entity relationship: the host has a logical association with an entity, the botnet. This can be logically extended by associating the host with a specific botnet. The method by which we identify a host as a bot and then associate that host with a specific
botnet is discussed in Section IV-B.

4) Behavioral: We consider behavioral attributes of a host to include time spent online or offline and the average congestion or bandwidth available to that host. The behavioral characteristics of a host are the most challenging to monitor and collect remotely. Without authorization to run monitoring tools on the host, the metric becomes a best-effort attempt coupled with inferences. The following methods will be explored and expanded on in future work. Connection-time monitoring requires one of two prerequisites: a static IP address or a resolvable domain name. If the host is reachable through either of these, it is possible to send periodic probes and, from the responses, construct a behavioral pattern for the time that host spends connected to the Internet. Bandwidth availability and load can be determined through probes such as ICMP ping requests: monitoring the response times provides an indication of network activity. If, for example, the responses take longer from 17:00 to 18:00 GMT every day, we could infer that the host is performing backups to an off-site location. This would constitute a valid behavioral attribute that, together with other fingerprinting techniques, could be used to uniquely represent a host.

IV. LATENCY-BASED AND ASSOCIATIVE FINGERPRINTING

The ability to track (or, in this research, correctly re-identify) a host over a period of time is a valuable capability, applicable in fields such as network forensics and information warfare. This section illustrates two novel fingerprinting techniques and shows how they can be used to identify and re-identify hosts on the Internet in order to construct a profile for each of those hosts in dynamic IP space. Successful host tracking requires multiple fingerprinting techniques to be used in concert to create a holistic and detailed fingerprint. Four categories were defined in Section III-A, representing the different types of fingerprinting metrics required for successful host tracking. In this section we elaborate on, and show results for, the latency multilateration and associative fingerprinting techniques.

A. Latency Multilateration

Multilateration is a technique used to determine the location of an object by measuring the difference in distance from multiple stations with known locations that broadcast signals at known intervals [19]. Unlike triangulation, which is concerned with measurements of absolute distance and angles, multilateration uses only timing and distance ranges to plot multiple intersecting hyperbolic curves. This reveals a small number of potential locations, thus producing a fix [19]. Adapting the technique of multilateration to network-based fingerprinting involves the measurement of latency (the time it takes for a packet to reach a destination) from multiple base stations. For the initial investigation into latency-based multilateration, a proof-of-concept application was developed that made use of three free web-based ping services, situated in geographically separate locations.

The process of latency multilateration starts with numerous ping probes that are sent from the three base stations towards a target IP address. The timings of the ping responses are recorded, outliers are removed and the average time is calculated. This produces a 3-tuple representing the average time taken by each set of probes. Comparing these results with others to determine the likelihood of two hosts being the same is a non-trivial problem, especially with large datasets. To overcome this, the 3-tuple was mapped to Euclidean 3-space, representing each of the three timings as a value on the x, y and z axes.

Figure 3. 3D scatter plot of raw data without outlier removal from 10 ping scans to a single host.
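The averaging, Euclidean mapping and threshold comparison used in latency multilateration can be sketched as follows; the trimming scheme, the timing values and the threshold are illustrative assumptions rather than the parameters used in the study.

```python
import math
import statistics

# Sketch of latency multilateration comparison: trim outliers from each
# base station's ping timings, average them into a 3-tuple (a point in
# Euclidean 3-space), and call two measurement sets the same host when
# the points lie within a distance threshold. All values illustrative.

def fingerprint(timings_a, timings_b, timings_c, trim=1):
    """timings_*: ping round-trip times (ms) from each base station."""
    def avg(ts):
        ts = sorted(ts)[trim:-trim] if len(ts) > 2 * trim else ts
        return statistics.mean(ts)
    return (avg(timings_a), avg(timings_b), avg(timings_c))

def same_host(fp1, fp2, threshold=15.0):
    # Euclidean distance between the two points (Algorithm 1).
    return math.dist(fp1, fp2) <= threshold

fp1 = fingerprint([102, 99, 101, 250], [48, 51, 50, 49], [180, 178, 182, 181])
fp2 = fingerprint([100, 103, 98, 101], [52, 50, 47, 51], [179, 183, 177, 182])
```

Note how the single 250 ms outlier in the first measurement set is discarded by the trim step, which is what keeps the two fingerprints close in 3-space.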
This results in a series of points in space which allows for easier comparison between hosts, as the distance between two points in space is a trivial calculation, shown in Algorithm 1.

Algorithm 1. Calculate the distance between two points in space:

d = sqrt((x2 - x1)^2 + (y2 - y1)^2 + (z2 - z1)^2)

A 3D scatter plot of raw time measurements is shown in Figure 3. This shows two distinct groupings of values and a small set of extreme outliers. While the dense groupings illustrate similar results over time, the presence of two separated groups also shows the sporadic nature of network congestion, which is the biggest concern for latency-based multilateration. By applying a distance threshold to determine whether two sets of results represent the same host, we are able to create a comparable logical representation of devices that are geographically separated, thus remotely fingerprinting a physical characteristic of devices on the Internet. During our testing of latency-based multilateration fingerprinting, tests were performed against various static IP addresses over a period of time. Figure 4 shows 10 scans that were run against a host with a one-hour interval between each scan. The results show definite groupings of latency from each of the three base
stations used. Due to the unpredictable nature of congestion and physical interference, however, the spread of latency at the base station with the highest time difference was not ideal.

Figure 4. Latency results from three base stations targeting a specific host.

The box plots of latency results in Figure 5 show an anomaly that was encountered during testing: all three base stations revealed results similar to those shown in Figure 5. The target IP address being tested in this instance was under the control of these researchers, and it is thus known that, while these measurements were being taken, the host experienced considerable load over a nine hour period, resulting in a visible deviation from the expected, normal latency. This result demonstrates that while latency based multilateration holds merit for fingerprinting, it cannot reliably be used as a single metric for fingerprinting a host.

Figure 5. Timing results from a ping base station showing unreliability in latency based measurements.

For the initial investigation, three free online ping tools were used to perform the testing. By making use of these publicly accessible sites it is acknowledged that the results achieved, while relatively accurate, may be improved upon by the use of dedicated ping base stations.

B. Associative Fingerprinting

As discussed in Section III-A3, we define associative fingerprinting as the identification of an affiliation of infected hosts with a given botnet. Associative fingerprinting is only applicable if we are able to identify the host as belonging to a botnet; this is done by inspecting characteristics of a host and comparing them to known characteristics of botnets. The script created to enumerate Fast-Flux botnets continually queries a known Fast-Flux domain to record the IP addresses returned, noting the frequency with which this occurs. While harvesting IP addresses, a port scan is also performed on each host.
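The enumeration loop just described might be sketched as follows; the resolver is injected as a callable, and the domain name, addresses and outlier factor are invented for illustration.

```python
from collections import Counter

def enumerate_flux_ips(resolve, domain, rounds):
    """Repeatedly resolve a Fast-Flux domain, counting how often each
    returned IP address appears across all rounds."""
    seen = Counter()
    for _ in range(rounds):
        for ip in resolve(domain):
            seen[ip] += 1
    return seen

def frequency_outliers(seen, factor=3):
    """Flag hosts returned far more often than is typical (more than
    `factor` times the median frequency) for closer inspection."""
    typical = sorted(seen.values())[len(seen) // 2]  # median frequency
    return {ip for ip, count in seen.items() if count > factor * typical}

# A stand-in resolver: one address is handed out on every round while the
# remaining pool rotates, mimicking Fast-Flux DNS behaviour.
pool = ["198.51.100.2", "198.51.100.3", "198.51.100.4",
        "198.51.100.5", "198.51.100.6"]
calls = iter(range(15))
def fake_resolve(domain):
    return ["203.0.113.1", pool[next(calls) % len(pool)]]

counts = enumerate_flux_ips(fake_resolve, "flux.example.com", rounds=15)
suspects = frequency_outliers(counts)
```

In this setting a frequency outlier such as `203.0.113.1` corresponds to the kind of disproportionately returned host discussed below.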
In addition to the port scan, an HTTP GET request is sent, and a record of the response from each of the hosts is kept. As mentioned in Section II-A3, these bots act as a content relay to increase the persistence of the botnet by creating thousands of hosts that hide the location of malicious content such as phishing websites. Figure 1 showed the frequency of returned IP addresses during the process of enumerating 1000 bots. This data is interesting, as the outliers were returned more than three times as frequently as the majority of hosts. These outlying hosts might be used by the bot herders for specific tasks, such as propagating updates or hiding a second layer of Fast-Flux nameservers. These hosts might also represent the bots with the most available bandwidth, and could become likely targets during a botnet take-down procedure. The results of the HTTP GET request to port 80 showed that, of the 756 hosts that replied, all shared the following two characteristics:
- The web server reported was: Apache - Nginx/
- The last modified date in the header was set to: Sun, 11 Mar :20:42 GMT.
It is inferred from this that the bots in the Kelihos/Hlux botnet were all configured using the same configuration script and that all of the bots were running an Nginx web server, which could act as a reverse proxy. Using these characteristics, a signature to identify hosts belonging to this specific botnet has been determined. While this may not be valid in the future, it will remain useful as long as the botnet is functioning in its present form. The bots maintained the same HTTP responses for three weeks after this data was initially gathered. Any phishing payloads present on these bots are only accessible through a specific URL; as the bot acts as a reverse proxy, forwarding data from the mothership, HTTP GET requests to the root directory on the web server are likely to return only details regarding the initial configuration of the bot.
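A minimal sketch of checking a harvested HTTP response against such a signature follows. The header values below are placeholders invented for the example, not the exact strings observed in the study.

```python
def matches_botnet_signature(headers, signature):
    """Return True when every header named in the signature is present in
    the response with an exactly matching value (header names compared
    case-insensitively, as HTTP header names are)."""
    lowered = {name.lower(): value for name, value in headers.items()}
    return all(lowered.get(name.lower()) == value
               for name, value in signature.items())

# Placeholder signature in the spirit of the shared Server and
# Last-Modified values reported above; both values are invented.
FLUX_BOT_SIGNATURE = {
    "Server": "ExampleRelay/1.0",
    "Last-Modified": "Mon, 01 Jan 2001 00:00:00 GMT",
}

bot_headers = {"server": "ExampleRelay/1.0",
               "last-modified": "Mon, 01 Jan 2001 00:00:00 GMT",
               "content-type": "text/html"}
legit_headers = {"server": "Apache/2.4.41"}
```

Because the bots were apparently configured from one script, an exact-match comparison of a handful of headers is enough to separate them from ordinary web servers; a softer match (subsets of headers, version-number prefixes) would trade precision for resilience against small configuration changes.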
A basic port scan was also run against each of these hosts; however, due to the time requirements of a port scan and the dynamic nature of bots connecting to and disconnecting from the botnet over time, only 361 of the port scans returned results. The top seven open ports are shown in Table III. The Microsoft Windows RPC Service has, over many years, been the subject of a multitude of reported vulnerabilities, and it is expected that the vast majority of bots connected to a botnet would have open ports for this service, as it is likely the cause of the initial compromise. From the HTTP GET responses it can be seen that all 1000 hosts were running an Nginx HTTP web server.
Table III. Top 7 TCP ports from port scans of the Kelihos/Hlux botnet. In descending order of frequency, the services observed were: HTTP; Windows RPC Service; Windows RPC Service; Windows RPC/SMB Service; HTTPS; Windows RPC Service; SMTP.

The port scans, however, were only able to detect 361 of these hosts. Of the 361 hosts scanned, 26 were running a mail server; these hosts were likely used to send spam email and propagate phishing attacks. This data was obtained through the reconnaissance of the Kelihos/Hlux botnet to construct a signature of the hosts that belong to this botnet and, potentially, other similar botnets. By using this signature it is possible to differentiate between a subset of legitimate and potentially malicious hosts on the Internet. These characteristics would also assist in the fingerprinting and tracking of hosts detected over a period of time.

V. CONCLUSION

Through the application of remote host fingerprinting techniques and an adapted multisensor data fusion model, this research has shown how to generate situational awareness regarding nefarious hosts on the Internet. This was achieved by identifying four categories of remote fingerprinting: software, physical, behavioral and associative. These categories provide a holistic representation of heterogeneous device characteristics. Not only do these techniques provide information regarding the physical and logical characteristics of a device but, when used as a collective, they could enable unique identification and re-identification of hosts in dynamic IP space. We have created an abstract model to represent the multisensor data fusion of unsolicited network traffic; this formal method can be applied to the creation of frameworks for generating situational awareness based on unsolicited traffic. This research introduces two novel fingerprinting techniques: latency multilateration and associative fingerprinting.
While it has been shown that latency-based multilateration is susceptible to network congestion and anomalies, the majority of results obtained during testing were consistent enough to warrant further investigation into this technique as a logical representation of physical locality on the Internet. Associative or affiliation based fingerprinting shows how a relationship between a host and some entity can be exploited to fingerprint that host, using the relationship as a supporting metric in addition to other fingerprinting techniques. It is hoped that, through the application of the techniques outlined in this paper, sufficient situational awareness could be generated to provide insight into the climate of malicious hosts on the Internet, as well as to support the development of defensive measures to protect our networks.

ACKNOWLEDGMENT

The authors would like to thank the CSIR DPSS for funding and support of this research.

REFERENCES

[1] Michael Bailey, Evan Cooke, Farnam Jahanian, Andrew Myrick, and Sushant Sinha. Practical darknet measurement.
[2] Richard J Barnett and Barry Irwin. Towards a taxonomy of network scanning techniques. In Proceedings of the 2008 annual research conference of the South African Institute of Computer Scientists and Information Technologists on IT research in developing countries: riding the wave of technology, SAICSIT 08, pages 1-7, New York, NY, USA. ACM.
[3] Tim Bass. Multisensor data fusion for next generation distributed intrusion detection systems. In Proceedings of the IRIS National Symposium on Sensor and Data Fusion, pages 24-27.
[4] Tim Bass. Intrusion detection systems & multisensor data fusion: Creating cyberspace situational awareness.
[5] Team CYMRU. The darknet project. Online, June.
[6] David Lee Hall. Mathematical Techniques in Multisensor Data Fusion. Artech House, Inc., Norwood, MA, USA.
[7] D.L. Hall and J. Llinas. An introduction to multisensor data fusion. Proceedings of the IEEE, 85(1):6-23.
[8] Uli Harder, Matt W. Johnson, Jeremy T. Bradley, and William J. Knottenbelt. Observing internet worm and virus attacks with a small network telescope. Electron. Notes Theor. Comput. Sci., 151(3):47-59, June.
[9] Samuel O. Hunter and Barry Irwin. Tartarus: A honeypot based malware tracking and mitigation framework. In ISSA. ISSA, Pretoria, South Africa.
[10] Lawrence A. Klein. Sensor and Data Fusion Concepts and Applications. Society of Photo-Optical Instrumentation Engineers (SPIE), Bellingham, WA, USA, 2nd edition.
[11] T Kohno, A Broido, and K C Claffy. Remote physical device fingerprinting.
[12] David Moore, Colleen Shannon, Douglas J. Brown, Geoffrey M. Voelker, and Stefan Savage. Inferring internet denial-of-service activity. ACM Trans. Comput. Syst., 24(2), May.
[13] Jose Nazario and Thorsten Holz. As the net churns: Fast-flux botnet observations. In 3rd International Conference on Malicious and Unwanted Software (MALWARE). IEEE, October.
[14] Stefan Ortloff. Faq: Disabling the new hlux/kelihos botnet. Online, March. _Hlux_Kelihos_Botnet.
[15] Niels Provos and Thorsten Holz. Virtual Honeypots: From Botnet Tracking to Intrusion Detection. Addison-Wesley Professional, first edition.
[16] Colleen Shannon and David Moore. The spread of the witty worm. IEEE Security and Privacy, 2(4):46-50, July.
[17] Matthew Smart, G. Robert Malan, and Farnam Jahanian. Defeating tcp/ip stack fingerprinting. In Proceedings of the 9th conference on USENIX Security Symposium - Volume 9, SSYM 00, pages 17-17, Berkeley, CA, USA. USENIX Association.
[18] Security TechCenter. Microsoft security bulletin MS critical. Online, October.
[19] Alexander ter Kuile. Multilateration - mlat in action. Online, December.
[20] Edward L. Waltz and James Llinas. Multisensor Data Fusion. Artech House, Inc., Norwood, MA, USA, 1990.
Intrusion Detection and Cyber Security Monitoring of SCADA and DCS Networks
Intrusion Detection and Cyber Security Monitoring of SCADA and DCS Networks Dale Peterson Director, Network Security Practice Digital Bond, Inc. 1580 Sawgrass Corporate Parkway, Suite 130 Sunrise, FL 33323
Arbor s Solution for ISP
Arbor s Solution for ISP Recent Attack Cases DDoS is an Exploding & Evolving Trend More Attack Motivations Geopolitical Burma taken offline by DDOS attack Protests Extortion Visa, PayPal, and MasterCard
Deep Security Vulnerability Protection Summary
Deep Security Vulnerability Protection Summary Trend Micro, Incorporated This documents outlines the process behind rules creation and answers common questions about vulnerability coverage for Deep Security
Security Threat Kill Chain What log data would you need to identify an APT and perform forensic analysis?
Security Threat Kill Chain What log data would you need to identify an APT and perform forensic analysis? This paper presents a scenario in which an attacker attempts to hack into the internal network
Comparison of Firewall, Intrusion Prevention and Antivirus Technologies
White Paper Comparison of Firewall, Intrusion Prevention and Antivirus Technologies How each protects the network Juan Pablo Pereira Technical Marketing Manager Juniper Networks, Inc. 1194 North Mathilda
SANS Top 20 Critical Controls for Effective Cyber Defense
WHITEPAPER SANS Top 20 Critical Controls for Cyber Defense SANS Top 20 Critical Controls for Effective Cyber Defense JANUARY 2014 SANS Top 20 Critical Controls for Effective Cyber Defense Summary In a
Penetration Testing Report Client: Business Solutions June 15 th 2015
Penetration Testing Report Client: Business Solutions June 15 th 2015 Acumen Innovations 80 S.W 8 th St Suite 2000 Miami, FL 33130 United States of America Tel: 1-888-995-7803 Email: [email protected]
