How To Write An E-mail Forensics Framework For Cyber Crimes
digital investigation 5 (2009), available at journal homepage:

Towards an integrated e-mail forensic analysis framework

Rachid Hadjidj, Mourad Debbabi*, Hakim Lounis, Farkhund Iqbal, Adam Szporer, Djamel Benredjem
Computer Security Laboratory, Concordia University, 1455 de Maisonneuve West, EV 7-642, Montreal, Quebec, Canada H3G 1M8

article info
Article history: Received 29 October 2007; Received in revised form 6 January 2009; Accepted 14 January 2009
Keywords: Cyber crimes; E-mail forensics; E-mail social networks; Classification; Clustering; Statistical analysis

abstract
Due to its simple and inherently vulnerable nature, e-mail communication is abused for numerous illegitimate purposes. Spamming, phishing, drug trafficking, cyber bullying, racial vilification, child pornography, and sexual harassment are some common e-mail mediated cyber crimes. Presently, there is no adequate proactive mechanism for securing e-mail systems. In this context, forensic analysis plays a major role by examining suspected e-mail accounts to gather evidence to prosecute criminals in a court of law. To accomplish this task, a forensic investigator needs efficient automated tools and techniques to perform a multi-staged analysis of e-mail ensembles with a high degree of accuracy, and in a timely fashion. In this article, we present our e-mail forensic analysis software tool, developed by integrating existing state-of-the-art statistical and machine-learning techniques complemented with social networking techniques. In this framework we incorporate our two proposed authorship attribution approaches; one is presented for the first time in this article. © 2009 Elsevier Ltd. All rights reserved.

1. Motivations and background
In the majority of e-mail mediated cyber crimes, the victimization tactics used vary from simple anonymity to identity theft and impersonation. Due to two inherent limitations, e-mail communication is exposed to such illegitimate uses. One, there is no mechanism for message encryption at the sender end and/or an integrity check at the recipient end.
Two, the widely used e-mail protocol, Simple Mail Transfer Protocol (SMTP), lacks a source authentication mechanism. In fact, the metadata in the header of an e-mail, containing information about the sender and the path along which the message has traveled, can easily be forged or anonymized. Installing anti-virus software, e-mail filters, firewalls, and e-mail scanners is insufficient to secure e-mail communication (Teng et al., 2004). In this context, cyber forensic investigation (also called digital investigation) is employed to collect credible evidence by analyzing e-mail collections to prosecute criminals in a court of law. The scope of e-mail analysis ranges from simple keyword searching to authorship attribution of anonymous e-mails. For instance, an investigator may want to get an overview of an e-mail collection by computing simple statistics such as the distribution of e-mails per sender/recipient domains. In some situations an investigator may try to narrow down the scope of investigation by separating the (usually few) malicious e-mails from the regular ones. For this purpose, content-based clustering is usually applied to divide e-mails into different groups on the basis of the subject matter of the e-mails (Li et al., 2006). The conceived subject matter could be the type of crime, such as pornography, hacking, or terrorism, in which the e-mails were instrumental (Kulkarni and Pedersen, 2005). E-mails can also be clustered on the basis of stylometric features to determine the writing styles of the different individuals contained in an e-mail collection

* Corresponding author. E-mail address: [email protected] (M. Debbabi). © 2009 Elsevier Ltd. All rights reserved. doi: /j.diin
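As noted above, an e-mail's header metadata is supplied by the sending software and is not authenticated by SMTP, so fields such as From can be trivially forged. A minimal sketch of reading these unauthenticated fields with Python's standard-library email parser (the message and the addresses in it are hypothetical):

```python
# Sketch: reading the (easily forged) sender metadata from a raw message
# with Python's standard-library email parser. Header values such as From
# and Received are chosen by the sender's software, so nothing read here
# is authenticated -- which is the weakness described in the text.
from email import message_from_string

raw = """\
From: [email protected]
To: [email protected]
Subject: quarterly report
Received: from mail.example.com (unverified)

Please find the report attached.
"""

msg = message_from_string(raw)
print(msg["From"])     # whatever the sender chose to put there
print(msg["Subject"])
```

Any forensic conclusion drawn from such fields therefore needs corroboration, which motivates the content- and style-based analyses that follow.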
(Holmes, 1998). Wei et al. (2008) have proposed a clustering algorithm for detecting relationships among different spam e-mails in order to identify relationships between spam campaigns. The features they used are extracted and derived from the headers and attachments of spam e-mails. An investigator may be interested in detecting similarity between certain e-mails in cases of plagiarism detection and authorship analysis (de Vel et al., 2001). E-mail social network analysis techniques are used to study the communication patterns of individuals at the account level, without analyzing the actual contents of e-mails (Stolfo et al., 2006). The need exists to develop an integrated e-mail analysis tool employing the above-mentioned innovative techniques. This will help forensic experts to efficiently analyze e-mail collections (which are usually huge) within a limited time frame. The E-mail Mining Toolkit (EMT) (Stolfo et al., 2006) is one such framework; it computes the behavior profiles of users based on their e-mail accounts. These profiles are then employed to detect anomalous behavior of those users. This toolkit is useful for generating reports by summarizing e-mail archives. However, the toolkit does not address the issue of authorship attribution and similarity detection, as addressed separately by Abbasi and Chen (2008). Zheng et al. (2006), on the other hand, proposed a stylometry-based framework that is used for authorship identification only. As described in this paper, we have designed and implemented a comprehensive software toolkit called the Integrated E-mail Forensic Analysis Framework (IEFAF) to fill this investigative niche. The framework assists investigators in gathering clues and evidence during investigations in which e-mail communications are relevant. Major functionalities of IEFAF include:
- The ability to investigate e-mail archives and compute the required statistical distributions to give the investigator an overview of an entire e-mail collection.
- The plotting of results using different visualization techniques, for the purpose of clarity and understanding.
- Compatibility with a variety of e-mail data formats coming from different databases.
- The capability of keyword searching using SQL-like queries.
- The development of data mining models to help classify e-mails into different categories, or cluster them according to some undiscovered relationships.
- The detection of anomalous behaviors by matching the observed e-mail communication with the pre-recorded normal communication model of e-mail users. The usual communication patterns of a user within his/her cliques are collected through social network analysis techniques.
- The performance of authorship analysis, on the basis of stylometric features, to help identify the most plausible authors of anonymous e-mails.
- The capability to map selected IP addresses by applying our geographical localization technique to determine the physical location of a particular IP.

Apart from developing the framework (IEFAF), we propose a new approach of mining style variation to address the authorship attribution problem. In traditional authorship attribution techniques, writing style features are extracted from the entire e-mail collection of a person, irrespective of to whom the e-mails are written. It is usually assumed that the stylometric features found in one's documents remain consistent and are not controlled (consciously or unconsciously) by the writers. However, the fact is that a substantial variation in the style of an individual can be seen in both the contents as well as the stylometric features, depending on the recipient and the context. In this paper we propose techniques for capturing the style variation of a person across his/her entire e-mail communication. A detailed description of our proposed technique is given in Section 2.3. Moreover, in IEFAF we have incorporated a novel authorship attribution approach, published in DFRWS 2008 (Iqbal et al., 2008).
This technique of mining write-prints is based on the concept of frequent itemsets (Agrawal et al., 1993), borrowed from the data mining domain. It helps to capture the combinations of features that occur frequently in a person's e-mails. The rest of the paper is organized as follows: Section 2 describes our proposed approach, Section 3 elaborates on the different modules of our framework, and Section 4 contains concluding remarks and future directions.

2. Proposed approach
The theoretical foundation of our framework is based on different well-established techniques of statistical analysis, text mining (classification and clustering), and stylometric features analysis, together with behavioral modeling achieved using social networking techniques. Stylometry is the statistical study of five different types of writing style features (lexical, syntactic, structural, domain-specific and idiosyncratic; see Section 2.3.1). E-mail social network analysis is complemented by statistical analysis to develop more precise behavioral profiles at the account level of an e-mail user. Stylometric features analysis is applied to learn about a user's writing behavior at the content level. These two types of models, together with machine-learning techniques, are employed to analyze anonymous e-mails in the authorship attribution problem. In forensic investigation, it is imperative to localize individuals and their resources in order to collect more concrete evidence. Therefore, we complement our framework with the capability of geographic localization. A detailed description of how the aforesaid techniques are helpful in the context of e-mail forensic analysis, and how they are incorporated in our framework, is given in the next subsections.

2.1. E-mail statistic analysis
Statistical analysis of e-mail accounts by observing their communication patterns manifests a great deal of information.
For instance, to view a brief summary of an e-mail corpus, simple statistics like the number of e-mails per sender, per recipient, per sender domain, per recipient domain, per class and per cluster (classes and clusters are determined by applying classification and clustering, respectively) are calculated (see Fig. 5).
Moreover, computing similar statistics, including e-mailing frequency during different parts of the day, average e-mail size, and average attachment size (if any) of an e-mail user, helps reveal some non-trivial information. For instance, an e-mail user may, on average, send most of his/her e-mails to co-workers during day time, with far fewer sent at night. Similarly, the average mail size of a user may be 2-5 KB, with usually short attachments, if any. If the same account suddenly transmits hundreds of large-sized e-mails with heavy attachments towards certain unknown recipients, this reveals the possibility of suspicious behavior. This may help investigators to narrow down the investigation scope by shortlisting the number of suspects. More explicitly, accounts that show some kind of unusual behavior are selected for further investigation. Determining the total number of users (senders/recipients) within an e-mail collection, finding all the recipients of each user, and determining whether an e-mail has been replied to or not, also helps during investigation. Statistical distributions can be computed over a certain period of time and for a specific set of e-mails. Additional statistics can be computed dynamically by sending appropriate SQL queries to the database. A more advanced use of statistical distributions can help compute users' profiles that can be used for authorship identification (Mendenhall, 1887; Farringdon, 2001). To compute statistics on an e-mail corpus, each e-mail is first loaded from its raw e-mail files, and relevant fields, such as the sender, recipient, subject, and message body, are extracted. The extracted information 2 is stored in database tables.

2.2. E-mail mining
Data mining is a mathematical process designed to explore large amounts of data by capturing consistent patterns and relationships between data objects. By employing mathematical models, the knowledge acquired from interesting patterns is applied to make predictions about an unseen dataset.
The application of data mining techniques to an e-mail dataset has been very successful in cyber crime investigation. Several studies (Abbasi and Chen, 2008; de Vel, 2000) signify the importance of e-mail mining for resolving issues of identity theft and plagiarism in forensic investigation. In our framework, we have used classification to identify the topic and/or the author of e-mails. Clustering, on the other hand, is used to cluster e-mails on the basis of e-mail contents and stylometric features.

2.2.1. E-mail classification
In general, the process of classification starts with data cleaning, followed by feature extraction. The extracted features are bifurcated into two groups, the training and testing sets. Each instance of the training data has a definite category, called a class label. The training set is given as input to a classification function (classifier) to develop a model. Common classifiers include decision trees (Quinlan, 1986), neural networks (Lippmann, 1987), and the Support Vector Machine (SVM) (Joachims, 1998). The developed model is tested with the testing set by assuming that the class labels are not known. The validated model is then employed for classification of unseen data. Usually, the larger the training set, the better the accuracy of the model.

2 In the current version of our framework, we do not handle attachments.

In the context of e-mail classification, the body and subject of an e-mail are converted to a vector of metrics called features. The feature set that we used in our experiments is described in a later section. Usually, each e-mail (subject and body) is converted into a stream of characters. Using the Java tokenizer API, each character stream is converted into distinct tokens or words. Some of the words may appear in different forms (for instance, verb, noun, and adjective) or different tenses (such as present, past, and future). Such words are stemmed to their common root. For instance, finance, financial, and financing may all be converted to finance.
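The tokenization and stemming steps can be sketched with a naive suffix-stripper; this is an illustrative stand-in only, not the actual Porter algorithm the framework relies on, and the suffix list is an assumption made for the example:

```python
# Sketch: tokenize a message and reduce word forms to a common root with a
# crude suffix-stripper. A real system would use a proper stemmer (e.g.
# Porter); this toy version only illustrates the idea.
import re

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def crude_stem(word):
    for suffix in ("ing", "ial", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

tokens = [crude_stem(t) for t in tokenize("Financing financial plans, financed.")]
print(tokens)  # ['financ', 'financ', 'plan', 'financ']
```

Note how the three "finance" forms collapse to one token, which is exactly what keeps the feature dimensionality manageable.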
Porter2 is a common stemming algorithm used by the data mining community. Syntactic features, also known as style markers (punctuation and all-purpose short words called stop-words), are treated differently in different data mining applications. For example, they are dropped in topic-based classification but kept in author-based classification, due to their significant discriminating capability in identifying authors based on their writings. In our experiments, we used more than 300 function words (as listed in Zheng et al., 2006). Certain word sequences, like United States of America and United Arab Emirates, often appear together, which may increase feature dimensionality. Therefore, we developed a module to automatically scan for those sequences and treat them as single tokens. Using the vector space model representation, each e-mail E_i is converted into an n-dimensional vector of features E_i = {f_1, ..., f_n}. Once all the e-mails are converted into feature vectors, normalization is applied to the columns as needed. The purpose of normalization is to limit all the values of a certain feature to a specific range and avoid overweighting some attributes over others. Each selected column is scanned for its maximum value, and then all the cells in the column are divided by that number. In our framework, we apply classification for two purposes: one, to classify new e-mails on the basis of topic, and two, to identify the true author of an anonymous e-mail. A detailed description of the two types of application is given below.

2.2.2. Topic-based classification
Most spam filtering and e-mail scanning techniques are based on topic or content-based classification. Analogously, in forensic analysis, e-mails are classified as malicious if their contents match a particular cyber criminal taxonomy. In contrast to traditional keyword searching, which is inefficient and error prone, classification techniques are more precise and robust to noise and dimensionality.
For instance, to identify the e-mails (usually from a huge collection) that promote drug trafficking, one can perform a simple search with the word drug or other related keywords. However, the criminal community often uses special expressions and encrypted messages to communicate covertly. Most culprits use different names and speech artifacts to hide information. Classifiers, on the other hand, are not limited to a few keywords and instead are trained on multidimensional data, and thus do not suffer from such information hiding.
In our framework, topic classification is achieved using a classical text mining approach (Forsyth and Holmes, 1996). After the pre-treatment phase (discussed in Section 2.2.1), the given set of e-mails is divided into a training set (2/3 of the total e-mails) and a testing set (1/3 of the total e-mails). Each instance from the data set carries one of two or more class labels, depending on the number of target groups/categories. The investigator, for instance, may want to classify an e-mail as malicious or non-malicious (normal), or he may wish to classify e-mails into more than two categories, such as pornography, spamming and terrorism. The class label in this case is the crime type/group. It is worth mentioning that in topic-based classification, the context-independent words, called stop words (function words and punctuation), are removed and only the content-specific features are retained. The frequency of each token is calculated. The resultant frequencies are normalized to a value between 0 and 1. As a result, each e-mail E_i is represented as {f_1, ..., f_n}, where each feature f_i is the normalized frequency of a word w_i. The next step is to apply a classification model to the set of feature vectors. For this purpose, we use the data mining software Weka. 3 The feature vectors are converted into the Weka-compatible Attribute-Relation File Format (ARFF). To evaluate our implementation, we performed experiments on the Enron e-mail corpus made available by MIT. We considered a subset containing around 300 messages classified manually into two classes: those dealing with company business (official) and those that were personal. Each class contained 150 e-mails. A training set was constructed by randomly selecting 100 e-mails from each class, while the remaining 50 e-mails were used for testing. The same process was repeated 10 times to construct 10 different training and testing sets. We employed several different classifiers.
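The feature construction just described, stop-word removal, token counting, and normalization of the counts to [0, 1], can be sketched as follows; the stop-word list and vocabulary are tiny illustrative samples, not the ~300 function words used in the framework:

```python
# Sketch of the topic-classification feature vectors: stop words are
# removed, token frequencies are counted, and counts are normalized to
# [0, 1] by dividing by the most frequent token's count.
from collections import Counter

STOP_WORDS = {"the", "is", "a", "of"}  # tiny sample, for illustration only

def feature_vector(tokens, vocabulary):
    counts = Counter(t for t in tokens if t not in STOP_WORDS)
    top = max(counts.values(), default=1)
    return [counts[w] / top for w in vocabulary]

tokens = ["the", "meeting", "is", "a", "meeting", "of", "salary", "reviews"]
vocab = ["meeting", "salary", "reviews", "trip"]
print(feature_vector(tokens, vocab))  # [1.0, 0.5, 0.5, 0.0]
```

Vectors of this shape, one per e-mail plus a class label, are what gets exported to Weka's ARFF format for model building.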
The precision of the classifiers varied between 76% and 89%, with an average of about 81%. The classifier precision, computed as the percentage of true positives (e-mails correctly classified), is used to measure the model's accuracy.

2.2.3. Author-based classification
The second application of classification in our framework is to identify the author of an anonymous e-mail. The class label used for this purpose is the author or sender of an e-mail. This section is given here for the purpose of completeness; a detailed description of authorship identification is given in Section 2.3.

2.2.4. E-mail clustering
Clustering is the process of grouping data into semantically similar sets to achieve simplification by modeling data by its clusters (Gunopulos et al., 1998). In the case of e-mail mining, we used clustering to group e-mails on the basis of discussion topic and authorship. To cluster e-mails by discussion topic, e-mails are processed for feature extraction in the same way as discussed above. The only difference is that instead of computing the frequency of a word in each document, we compute the perceived importance of a word in all the documents. For this purpose, we employ the commonly used tf_idf function (Joachims, 1998):

tf_idf(j,i) = tf(j,i) * idf(j,i)

where tf_idf(j,i) is the perceived importance of a word w_j in e-mail E_i, tf(j,i) is the frequency of word w_j in E_i, and idf(j,i) = log(N/df_j) is the inverse document frequency, with N the total number of e-mails and df_j the number of e-mails in which the word w_j appears.

3 Weka is available online.

We have used the three most commonly used clustering algorithms: Expectation Maximization (EM), K-Means, and bisecting K-Means. Once the clusters are obtained, each cluster is tagged with the most and the least frequent words/phrases found in the respective cluster. Tagging clusters with the least frequent words helps in finding the inter-cluster relationships. In addition to identifying the subject matter of a group of e-mails, clustering can also be employed to speed up query-based keyword searching.
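A minimal sketch of this tf-idf weighting, over a toy corpus of already-tokenized e-mails (the tokens are hypothetical):

```python
# Sketch of tf_idf(j,i) = tf(j,i) * log(N / df_j) over a toy corpus of
# tokenized e-mails. Words that appear in few e-mails get higher weight;
# words absent from an e-mail get weight 0.
import math
from collections import Counter

emails = [
    ["meeting", "salary", "meeting"],
    ["trip", "jokes"],
    ["meeting", "trip"],
]

N = len(emails)
df = Counter(w for e in emails for w in set(e))  # document frequency

def tf_idf(word, email):
    return email.count(word) * math.log(N / df[word])

print(round(tf_idf("salary", emails[0]), 3))  # rare word, high weight
print(tf_idf("meeting", emails[1]))           # absent word, weight 0.0
```

Note that with this plain formulation a word appearing in every e-mail gets weight 0, since log(N/N) = 0, which is the intended behavior for topic clustering.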
Instead of scanning each e-mail for a keyword, all the e-mails are first clustered, and each cluster is tagged with its most frequent words, which are then matched with the keyword in question. The matched clusters are retrieved in order of relevance to the search criterion (query contents). Another application of clustering is to identify the most plausible author of an anonymous e-mail. In this case, the stylometric features are not discarded but are used to differentiate between the writings of different suspects. The rest of the preprocessing is analogous to that discussed above. Clustering is applied to anonymous e-mails, as well as to e-mails with known authors. The resulting clusters are tagged with the most frequent senders. Since clustering is performed on the basis of writing style features, the e-mails within a cluster would ideally belong to one particular individual. If the anonymous e-mail appears in a cluster in which a specific sender is the most frequent, then that particular sender is declared to be the most probable author of the disputed anonymous e-mail, because that specific sender is the one who has the most e-mails similar to the disputed one.

2.3. Authorship attribution
Anonymity in e-mail communication is one of the main issues exploited by terrorists, pedophiles, and scammers. Falsifying the sender name, e-mail address, and the path along which an e-mail travels is generally termed spoofing and forging, which can be done even by a novice user. In this context, forensic analysis of e-mails, with a special focus on authorship attribution, can help prosecute the offender of e-mail misuse by means of law (Teng et al., 2004). Traditionally, fingerprints are used to uniquely identify individuals during criminal investigations within courts of law. Analogously, word-prints or write-prints, constituted by the writing style features of an author, can be used to discriminate his/her writings from those of others. The goal is to determine the likelihood that a specific individual is the author of an anonymous e-mail by examining his/her previously written e-mails.
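The clustering-based attribution step described above, tagging each cluster with its most frequent sender and attributing an anonymous e-mail to the dominant sender of its cluster, can be sketched as follows; the cluster assignments here are given, not computed:

```python
# Sketch: tag each cluster with its most frequent sender; an anonymous
# e-mail that lands in a cluster is attributed to that cluster's dominant
# sender. Cluster ids and sender names are illustrative.
from collections import Counter

# (cluster_id, sender) pairs for e-mails of known authorship
clustered = [(0, "S1"), (0, "S1"), (0, "S2"), (1, "S2"), (1, "S2")]

tags = {}
for cid in {c for c, _ in clustered}:
    senders = Counter(s for c, s in clustered if c == cid)
    tags[cid] = senders.most_common(1)[0][0]

anonymous_cluster = 1  # cluster the disputed e-mail was assigned to
print(tags[anonymous_cluster])  # most probable author: S2
```

In practice the attribution is only as good as the style-based clustering itself, which is why the paper treats this as one technique among several rather than conclusive evidence.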
The problem of authorship identification in the context of e-mail forensics is distinct from traditional authorship problems in two ways. First, by assumption, the true author should certainly be one of the suspects. Second, e-mails, though short in size, usually contain rich information, as an e-mail normally consists of a header, subject, body and attachments. More formally, a cyber forensic investigator attempts to determine the author of a disputed anonymous e-mail, who has to be one of the suspects {S_1, ..., S_n}. The main issue here is to precisely identify the most plausible author from the suspects {S_1, ..., S_n} and present the findings in a court of law. In the current literature (Teng et al., 2004; Zheng et al., 2006), authorship identification is considered a text classification problem. The process starts by extracting the writing style features from the previously known e-mails of a person. Using these features, a classifier is trained; the developed model is then applied to the anonymous e-mail to identify its conceivable author. The authorship attribution technique has been successful in resolving ownership disputes over literary and historic documents. However, due to the special characteristics of an e-mail dataset, its application to e-mail is more challenging. The commonly used features in the field of authorship analysis (Corney et al., 2002; Zheng et al., 2006) are lexical, syntactical, structural, content-specific and idiosyncratic features (see Section 2.3.1). In most of the previous studies, stylometric features are extracted from the entire ensemble of a suspect's e-mails, disregarding the topic, time, and recipients of the e-mails. However, the fact is that the writing style of an individual varies from recipient to recipient and evolves with time and context (de Vel et al., 2001). This change may occur in both the contents as well as the style markers. For instance, the e-mails that a person writes to his/her job colleagues are more formal than those written to family members and friends. Co-workers of a financial company may talk more about meetings, promotion schemes, customer problems and solutions, salaries and bonuses, etc.
E-mails exchanged among friends may discuss trips, visits, funny stories and jokes. The writing style features, including the selection and distribution of function words and punctuation, may differ in different contexts. Moreover, a person may be more formal and careful in using structural features, such as greeting and farewell comments, in e-mails written to his/her boss. One may prefer to put a complete signature, including designation and contact information, in job-related communications. More importantly, malicious e-mails are mostly anonymous and will be devoid of such traceable information. Though some research (de Vel, 2000; de Vel et al., 2001) recognizes that some style variations exist with respect to different recipients, most choose to ignore such variations and focus on obtaining the so-called permanent writing traits of a suspect. However, with this approach, the contents and writing styles found in malicious e-mails may be overshadowed by regular e-mails, because malicious e-mails are usually much fewer in number than regular ones. As a result, the classifier built from all the e-mails would capture the writing styles of the regular e-mails but may not be able to capture all the variations in the writing style of the same suspect. The classifier may be very accurate for classifying regular e-mails, but fail to accurately classify malicious e-mails, which, ironically, is the objective of building the classifier. The need is to investigate the impact of a suspect's style variation on authorship attribution. In this study, we propose a novel approach of mining style variations to precisely extract the more representative writing style features of a suspect (described in detail below). The major advantages of our proposed approach are: Model representativeness: the different writing styles of a suspect are captured separately without intermingling them. The developed classification model is a reasonably faithful representative.
Increased accuracy: the developed model will be able to precisely match the disputed e-mail with the malicious behavior (as learnt from the malicious e-mails) of a potential suspect. Generic application: our experimental results indicate that the proposed approach can be applied to increase the accuracy of authorship identification when the dataset contains e-mails written on diversified topics. It can be a first step towards solving the authorship identification problem in a more generic and natural way.

2.3.1. Stylometric features
Writing styles are defined in terms of stylometric features. Writing patterns are usually characteristics of word usage, word sequence, composition and layout, common spelling and grammatical mistakes, vocabulary richness, hyphenation, and punctuation. However, there is no single feature set that is optimal and equally applicable in all domains. The commonly used features found in various authorship analysis studies (Baayen et al., 1996; Iqbal et al., 2008; Zheng et al., 2003) comprise lexical, syntactical, structural and content-specific attributes. Recently, Abbasi and Chen (2008) presented a more comprehensive list of stylistic features by including idiosyncratic characteristics of writing styles. A brief description of the relative discriminating capability of each of these feature types is given below. Token-based features are collected either in terms of characters or words. In terms of characters, for instance, the frequency of letters, the frequency of capital letters, the total number of characters per token, and the character count per sentence are the most relevant metrics. These indicate the preference of an individual for certain special characters or symbols, or the preferred representation of certain units. Word-based lexical features may include word length distribution, words per sentence, and vocabulary richness. Syntactic features: Baayen et al.
(1996) were the first to discover that punctuation and function words are context-independent and thus can be applied to identify writers based on their written works. Structural features are used to measure the overall appearance and layout of a document. For instance, average paragraph length, number of paragraphs per document, and the presence of greetings and their position within an e-mail are common structural features. Content-specific features are collections of certain keywords commonly found in a specific domain, and may vary from context to context even for the same author. Zheng et al. (2003, 2006) used around 11 keywords (such as obo and sexy) from the cyber crime taxonomy in authorship analysis experimentation. Idiosyncratic features include common spelling mistakes, such as transcribing f instead of ph, say in phishing, and grammatical mistakes, such as sentences containing incorrect forms of verbs. The list
of such characteristics varies from person to person and is difficult to control.

2.3.2. Proposed attribution approach
An investigator is provided with e-mails previously written by the potential suspects. The available e-mails could be in different formats, written in different languages, and may contain images, video clips, and/or HTML/XML tags. Our framework supports most of the common e-mail formats. Presently, we consider e-mails written in English only. In other words, we extract the textual part of the body, written in English, and drop all other parts of an e-mail message. The proposed approach consists of two major steps: grouping or categorization, followed by classification. As shown in Fig. 1, first the entire e-mail collection E_i of a suspect S_i, where S_i is one of {S_1, ..., S_n}, is divided into distinct groups {S_iG_1, ..., S_iG_k}. We have used both the header information and the e-mail body for grouping e-mails. For instance, grouping is performed on the basis of recipient, sender, time stamp, and combinations of them. In the case of the e-mail body, the known data mining technique called clustering is applied to detect similarity among e-mails based on their contents. Clustering is performed on e-mail contents and on stylometric features. Next, using sender-recipient, sender-time stamp, and cluster tag as class labels, a classifier is built, as depicted in Fig. 1.

[Fig. 1. Mining style variation of S_i: the e-mails of each suspect are grouped/clustered into distinct groups, each class label representing a distinct writing style; n-dimensional feature vectors with class labels form the training and testing sets; the classification model is generated, validated, and matched against the features of the anonymous e-mail to output the conceivable author.]

The classifier thus built captures the isolated and distinct styles without being misled by the overlapping behavior of an author. The anonymous e-mail is parsed and its features are extracted.
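Feature extraction of this kind can be sketched as follows; the metrics below are a tiny illustrative subset of the lexical, syntactic, and structural features described earlier, not the ~400 actually used, and the function-word list is a sample assumption:

```python
# Sketch: a few lexical, syntactic, and structural stylometric metrics
# computed for one e-mail body. Illustrative subset only.
import re

FUNCTION_WORDS = {"the", "of", "and", "to", "a", "in"}  # sample of ~300

def style_features(body):
    words = re.findall(r"[A-Za-z']+", body)
    sentences = [s for s in re.split(r"[.!?]+", body) if s.strip()]
    return {
        "words_per_sentence": len(words) / max(len(sentences), 1),
        "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),
        "function_word_ratio": sum(w.lower() in FUNCTION_WORDS for w in words)
                               / max(len(words), 1),
        "has_greeting": body.lstrip().lower().startswith(("hi", "dear", "hello")),
    }

f = style_features("Dear Bob. The report is in the annex.")
print(f["has_greeting"], f["words_per_sentence"])
```

Each e-mail thus becomes one row of numeric and Boolean features, with the group/cluster tag from the categorization phase attached as its class label.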
The extracted features are applied to the developed classification model to identify the true author. In this case, the number of matching paths within the classifier is increased, thus increasing the chances that the anonymous e-mail is precisely attributed to its true author. Prior to describing our proposed approach in detail, we need to explain the stylometric features that we used in our experiments.

Stylometric features used. There are more than 1000 stylometric features (Abbasi and Chen, 2008) that are commonly used. In our experiments, we used around 400 features, including lexical, syntactic, and structural features. Most of these features, including 303 function words, are listed and explained by Zheng et al. (2006) and de Vel et al. (2001).

Categorization phase: mining class labels. Grouping the e-mails of a suspect is done on the basis of the e-mail body, as well as header information. To perform the first type of grouping, we employ the clustering technique. Clustering is done on the basis of either e-mail contents or writing style features. The latter type of grouping is straightforward and is done using the sender, recipient, and time stamp. At the end of the grouping phase, each e-mail of a group is tagged with the respective group label. These labels are later used as class labels during the process of classification.

Categorization based on e-mail body. In this section we study how to capture style variations by applying clustering. There are two types of clustering: content-based and stylometry-based. Content-based clustering is used to determine the topic of discussion within an e-mail dataset (Li et al., 2006). Stylometry-based clustering, on the other hand, is used to identify the different writing styles contained within a data collection (Baayen et al., 1996). The process of applying clustering in both cases is the same; the only difference is in the preprocessing step. In content-based clustering the common type of preprocessing is performed.
More explicitly, once each email is converted into a bag-of-words, the style markers (function words and punctuation) are dropped, and the remaining tokens are processed in the same way as described above. Unlike content-based clustering, in which style markers are dropped, stylometry-based clustering retains the syntactic features; the rest of the pre-treatment is performed analogously. Once all the emails of each author are converted into feature vectors, clustering is applied. As discussed in Section 2.2.4, we used the three most commonly used clustering algorithms: Expectation Maximization (EM), K-Means, and bisecting K-Means. Clustering is applied to the emails of each author independently. The resultant clusters of an author, for example S_1, are labeled as {S_1C_1, S_1C_2, ..., S_1C_k}. Similarly, the emails of another author, S_2, are clustered separately, and the resultant clusters are labeled as {S_2C_1, S_2C_2, ..., S_2C_k}. The cluster labels are used as class labels during the classification phase.
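As a minimal, pure-Python sketch of this clustering step (plain K-Means with naive initialization; the paper's experiments also use EM and bisecting K-Means, which are not shown here):

```python
import math

def kmeans(points, k, iters=20):
    """Cluster feature vectors into k groups; returns one label per point."""
    centroids = [list(p) for p in points[:k]]  # naive init: first k points
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centroid by Euclidean distance
        labels = [min(range(k), key=lambda c: math.dist(p, centroids[c]))
                  for p in points]
        # update step: move each centroid to the mean of its members
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = [sum(d) / len(members) for d in zip(*members)]
    return labels

# two obvious groups of "emails" in a toy 2-D feature space
pts = [(0.1, 0.2), (5.0, 5.1), (0.2, 0.1), (5.2, 4.9)]
labels = kmeans(pts, 2)
```

Each resulting cluster index plays the role of a cluster tag S_iC_j, later used as a class label.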
Categorization based on email header

In the traditional classification approach to authorship identification, the email sender is used as the class label. In our study, however, we divide the emails of the same author into different groups, based on recipient and time stamp, to differentiate the different writing styles of the same user. The intuition behind using the time stamp for grouping is that some researchers, such as Stolfo et al. (2006), believe that people behave differently at different times of day. People usually communicate with different categories of people at different times. For instance, most of the emails that a person writes during the daytime are exchanged with his/her co-workers. Similarly, emails written in the evening may be exchanged with family members and friends, while very few of the emails exchanged at midnight are written to job colleagues. For simplicity, we divide the 24-hour day into three time brackets: morning, evening, and night. The emails of a sender are therefore divided into three categories: emails sent in the morning are tagged SM, emails sent in the evening are tagged SE, and those sent at night are tagged SN, where S represents the sender. SM, SE, and SN are used as class labels during the classification phase.

Classification phase

Once the emails of all the senders are divided into distinct groups and the respective class labels for each group are determined, the next phase is to apply classification. This phase consists of feature extraction, model generation, and model application (see Fig. 1). A brief description of each of these steps is given below.

Feature extraction

Each email body is converted into an n-dimensional vector of features. A feature could be a word frequency, a ratio of two quantities, or a boolean value. All the feature types that we used in our framework, and the feature extraction process itself, are described above.

Model generation and validation
Prior to the application of the classification algorithms, each group is first divided into training and testing sets (see Section 2.2.1). At the end of the feature extraction phase, we thus have two sets of feature vectors (training and testing) for each suspect. Using the training set, selected classifiers are employed to generate a model; using the testing set, the generated models are validated prior to their actual use. The validity (effectiveness) of a model is a function of its power to correctly classify the test emails.

Author identification

If the error approximation is below a certain acceptable threshold, the model is employed. The disputed anonymous email is processed and converted into a feature vector in a manner similar to that adopted for the known emails. Using the developed classification model, the conceivable class label of the unseen email is identified; this class label indicates the author of that email.

Experimental evaluation

To evaluate our approach, we used emails from the Enron corpus. We considered a selection of 63 emails from 3 different senders; for each sender we selected 3 different recipients, with 7 emails sent to each of them. We constructed six different groups of training and testing sets. Each group is derived from the email set by randomly selecting 2/3 of the emails as the training set and using the remaining emails as the testing set (see Section 2.2.2). In our experiments we used two common classifiers, SVM and C4.5 (Decision Tree); the Weka data mining software package has its own version of C4.5, called J48. To check the effect of class labels on the accuracy of the classifiers, we performed classification experiments for three class labels: sender, sender-recipient, and sender-cluster. Setting the class label to sender represents the traditional approach, while sender-recipient and sender-cluster represent our proposed technique. We ignored the time stamp because the initial results that we obtained were similar to those of the traditional approach.
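The 2/3 random split and the validation step described above can be sketched as follows. The majority-label "classifier" here is only a placeholder for SVM or C4.5, and the synthetic email list is an assumption for illustration.

```python
import random
from collections import Counter

def split_train_test(samples, seed=42):
    """Randomly keep 2/3 of the emails for training, the rest for testing."""
    shuffled = samples[:]
    random.Random(seed).shuffle(shuffled)
    cut = len(shuffled) * 2 // 3
    return shuffled[:cut], shuffled[cut:]

def accuracy(predicted_label, test_set):
    """Fraction of test emails whose class label matches the prediction."""
    return sum(1 for _, label in test_set if label == predicted_label) / len(test_set)

# synthetic stand-in for the 63 Enron emails, labeled with two style clusters
emails = [("email%d" % i, "S1-C1" if i % 3 else "S1-C2") for i in range(63)]
train, test = split_train_test(emails)
# placeholder "model": predict the majority class label seen in training
majority = Counter(label for _, label in train).most_common(1)[0][0]
acc = accuracy(majority, test)
```

Repeating the split with different seeds gives the six training/testing groups used in the evaluation.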
[Fig. 2 – Accuracy vs. class labels (sender, sender-recipient, sender-cluster) for (a) SVM and (b) C4.5, shown per selection and on average.]
The reason could be that the dataset used was not representative. The same set of experiments was repeated for both classifiers (SVM and C4.5) on all six groups of training and testing sets. The experimental results are depicted in Fig. 2, where plot (a) shows the SVM results and plot (b) shows the C4.5 results. Employing the SVM classifier, we obtained an average accuracy of 71% for the classical approach (classification by sender), and 69% and 83% for the proposed approach (for the sender-recipient and sender-cluster classes, respectively). Using C4.5, the results followed a similar trend, with average accuracies of 77%, 73%, and 83%, respectively, for sender, sender-recipient, and sender-cluster based classification. As shown, the accuracy obtained for classification by sender-cluster is very encouraging: it shows a noticeable gain in accuracy (10% for SVM and 6% for C4.5) compared with the classical approach (classification by sender). This suggests the relevance of considering author style variation in authorship attribution. On the other hand, the results of sender-recipient based classification show a slight decrease in accuracy (particularly for SVM) compared to the classical approach. The rationale behind this can be explained in two ways. First, considering each sender-recipient pair as a different class creates too many classes, which is difficult for the classifier to handle; classification studies indicate that SVM is more sensitive than Decision Tree to the number of classes, and this observation is supported by our experimental results (Fig. 2). Second, it is not true that the writing style of an author changes for each of his correspondents. The number of class labels in the sender-recipient approach can be reduced by merging recipients that belong to the same common domain.
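As a hedged illustration of that last suggestion, recipients sharing a mail domain could be folded into a single class label; the label format here is an assumption for illustration, not the paper's implementation.

```python
def merged_class_label(sender, recipient):
    """Collapse sender-recipient labels to sender-domain labels."""
    domain = recipient.split("@")[-1]   # text after the last '@'
    return sender + "->" + domain

# three recipients, two of them on the same domain -> two classes instead of three
recipients = ["bob@corp.com", "eve@corp.com", "dan@home.net"]
labels = {merged_class_label("S1", r) for r in recipients}
```

With such merging, the classifier handles one class per sender-domain pair rather than one per sender-recipient pair.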
A deeper analysis of these results indicates that accuracy can be improved further, provided the emails cover diversified topics and are written to different groups of recipients.

Email social networks

Social network analysis is the study of communication links between people. Email social network analysis allows the modeling of email flows and user activities in order to analyze relationships and detect misuses that manifest abnormal behaviors (Bhattacharyya et al., 2002). An explicit form of social network for an email corpus can be depicted as a graph whose nodes are senders and receivers and whose edges represent email traffic. However, other, less explicit forms of social networks can be inferred from different measures, such as authorship and content proximity. The structure of a person's social network reveals a great deal of information about his/her behavior and about the people (friends, colleagues, family members, etc.) with whom he/she interacts. For instance, one can learn how often a person maintains distinct relationships with groups of people, and for how long. One can also infer whether these people have close friends or regular interactions, whether these interactions can be distinguished based on roles (such as work, friendship, or family), and what type of views a particular group of people exchanges. During the course of an investigation, social networks can be used to discover interesting information about potential suspects: for instance, who their collaborators are, which of their emails are malicious, or when their periods of suspicious activity occur. Our framework provides several information rendering and exploration capabilities for email social networks. Social networks are labeled with simple statistics computed about users, domains, and email flows. We use three types of graphs to depict social networks. The first type, the temporal model, is the user network augmented with time information about emails, laid out to show how email flow evolves over time, as shown in Fig. 3 (left side view).

[Fig. 3 – Email social network: temporal model (left), spring-mass model (right).]
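The explicit graph form of the email social network described above (nodes as senders and receivers, edges weighted by traffic) can be sketched as follows; the log format is an assumption for illustration.

```python
from collections import defaultdict

def build_user_network(email_log):
    """email_log: iterable of (sender, recipient) address pairs."""
    nodes = set()
    edges = defaultdict(int)
    for sender, recipient in email_log:
        nodes.update((sender, recipient))
        edges[(sender, recipient)] += 1  # directed traffic count
    return nodes, dict(edges)

log = [("alice@a.com", "bob@b.com"),
       ("alice@a.com", "bob@b.com"),
       ("bob@b.com", "carol@c.com")]
nodes, edges = build_user_network(log)
```

The edge weights correspond to the flow intensities that the viewers later render as link thickness.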
[Fig. 4 – Email details editor.]

From this network (Fig. 3, left side view) it is easy to identify causality effects between emails, such as a situation in which an email is received by a user who, in turn, sends another email at a later time. If both emails are classified as discussing the same topic (drugs, for instance), then by following the chain of these emails one can identify potential collaborators. In the second graph, called the user network, nodes represent users and edges represent email traffic (see Fig. 6). The flow of emails between accounts can be filtered according to the different classes and clusters of emails computed during the last classification and clustering, and within a specific period of time. In the third graph, the domain network, nodes are email domains (servers) and edges represent email traffic. The email flow between domains can also be filtered according to the different classes and clusters, and within a specific period of time. Email social networks are visualized using several techniques. One of the most interesting is based on a spring-mass model, in which nodes are considered small masses with positive charges and links are springs connecting them. Since the nodes have positive charges, they tend to push each other apart (gravitational forces are neglected), but those connected by springs tend to stay agglomerated. By adjusting the strength of the springs according to the intensity of email flow, we can exhibit very interesting structural patterns and community structures in a social network, as shown in Fig. 3 (right side view). Nodes are laid out in an iterative process in which the force on each node, resulting from the repulsion of all other nodes, the friction of the environment, and the action of all springs connected to it, is computed. The position of each node is then recomputed according to the force applied on it and a pre-established time step, using a standard Newtonian equation of motion.
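As a one-dimensional sketch of this Newtonian update (a single node and a scalar force; values are illustrative), following the position update x_T = x_i + v_i·T + (1/2)(F_i/m_i)·T² used by the layout, with the speed taken as the displacement over the last step:

```python
def step_position(x_i, x_prev, force, mass, T=0.1):
    """One Newtonian update of a node position along one axis."""
    v_i = x_i - x_prev                      # speed taken as last displacement
    return x_i + v_i * T + 0.5 * (force / mass) * T ** 2

x_new = step_position(x_i=1.0, x_prev=0.9, force=2.0, mass=1.0)
```

A full layout would apply this update per axis and per node, recomputing repulsive, spring, and friction forces at every iteration.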
So, for node n_i, if F_i is the total force acting on n_i, m_i is the mass of n_i, T is the time step, x'_i is the previous position of n_i, x_i is the current position of n_i, and v_i = (x_i − x'_i) is the current speed of n_i, then the new position of n_i is:

x_T = x_i + v_i·T + (1/2)·(F_i/m_i)·T²

Another interesting layout capability is the combination of social network rendering with the localization capability described in Section 2.5. Social networks are rendered directly on real maps to illustrate a geographical dimension. If the address of a user or an email server is known, its corresponding node is displayed at its geographic location. Some statistical information computed on a social network is rendered graphically using the graphical features of nodes and links: size, shape, color, thickness, etc. For instance, user and domain centrality values are shown by node sizes: the bigger the centrality of a node, the bigger its size. The intensity of email flow is reflected by the thickness of a link. User and email classes or clusters are shown by node and link colors and shapes. Nodes associated with users can be replaced with their photos to provide a more intuitive and elegant representation.

[Fig. 5 – Statistics viewer.]

To identify community structures in a social network, we use Newman's approach (Newman, 2003), which relies on a metric Q to evaluate the community structure within a social network. Q evaluates the difference between the fraction of links that fall within communities and the expected value of the same quantity if the links fell at random, with no regard to the community structure. Therefore, the value of Q approaches zero for a network with no community structure, and takes higher values as community structure increases. If e_{i,j} is the fraction of links in the network between communities c_i and c_j, and a_i = Σ_j e_{i,j}, then the metric is:

Q = Σ_i (e_{i,i} − a_i²)

The algorithm is a repetitive process in which Q is optimized using a hierarchical agglomerative clustering approach, starting from the initial configuration in which each node is a community by itself. Communities are greedily merged to achieve the highest increase, or minimal decrease, in Q.

Geographic localization

In most investigations, localization of resources and individuals is imperative. An investigator needs to understand the geographical scope of his/her investigation. This helps him/her correlate facts, identify potential suspects, and target locations for collecting clues and evidence. We added a geographic visualization capability, called the interactive map viewer, to our framework to view and explore geographic sites of relevance in an email forensic investigation. This capability can also be used to localize information related to potential suspects, email servers, and email flow. An email is rendered on the map as an arrow between the geographic locations of the sender and receiver accounts. If the physical addresses of the sender and receiver are known, an arrow between these two locations is drawn.
Other information, such as the email flow between users, can be rendered directly on the map by labeling the arrows that connect them. This process renders social networks directly on the map viewer. Although it is not difficult to forge an email header to hide an author's identity, not all users have the required skills to do so, or even think about doing so. An easier alternative is to acquire an account on a public email server under a fake identity. To detect this kind of forgery, localizing an email server during an investigation can have a great impact on the results. For instance, localizing an email server that hosts a suspicious account can trigger the decision to confiscate that account in order to collect further clues and evidence. If, after an examination of the contents of the confiscated account using the authorship identification capabilities, the identity of the account holder is not compatible with the suspect, this could suggest that the suspect is masquerading as a different user. If the email server does not exist or
does not host the user account, this would suggest that the user has forged the email address. In both cases authorship analysis could help in identifying the true identity of the user.

[Fig. 6 – Social networks viewer.]

3. Our framework (IEFAF)

IEFAF is an integrated email analysis platform in which a security analyst can perform a variety of tasks related to email analysis. IEFAF is programmed in Java using several Java technologies, such as Java Swing, the JavaMail API, and JDBC. Swing is used to build the graphical interface and for information rendering in different visual formats (tree, list, picture, etc.). The JavaMail API is used to parse emails in several file formats and extract relevant information. JDBC allows us to connect to and navigate any JDBC-compliant database system and store email information. IEFAF is composed of five sub-modules that can be used separately or jointly to build and explore decision support models. These modules are:

- Inter-database browser
- Statistics explorer
- Data mining explorer
- Weka submodule
- Email explorer

We start by giving a brief description of the first four modules, followed by a detailed description of the email explorer and the functionalities it provides to an investigator.

3.1. Inter-database browser

As its name suggests, the inter-database browser allows a user to browse several JDBC-compliant databases (Oracle, Sybase, SQL Server, etc.) through a single interface. The drill-down capability to navigate through different data tables and views is implemented using tree structures. The inter-database browser extracts relationship information from the metadata of a database, which is then used to automatically construct the associated physical entity relationship diagram. To allow navigation to span several databases, the user can manually supply relationships between tables and views from different databases.
The inter-database browser uses these relationships to create connections between the entity relationship diagrams of the connected databases and display them as if they belonged to a single database. Some of the functionalities implemented in the database browser include:

- Dynamic creation of connections to JDBC-compliant databases.
- Data exploration in different tables and views.
- Creation of relationships between different databases to span across them using the drill-down capability.
- Ability to issue and persist SQL statements.
- Preparation of datasets to create data mining models using the Weka tool.

[Fig. 7 – Data mining viewer.]

Creating ARFF files

Weka's native data storage format is the Attribute-Relation File Format (ARFF). An ARFF file is simply a set of records, similar to a table in a database: it consists of a list of record instances, with attribute names and types specified at the top of the file and the attribute values of each record separated by commas. In our framework, the data obtained from any internal or external source can be automatically converted into an ARFF file; in particular, the framework provides the option of converting the results of an SQL query to an ARFF file.

Statistics explorer

The statistics explorer (depicted in Fig. 5) allows us to associate an SQL query with elegant two- or three-dimensional charts to gain deep insight into the data. Charts are constructed using the ExpressChart API, which provides a chart viewer with rich interactive functionalities. These range from simple transformations (zooming, resizing, and rotating) to switching between two- and three-dimensional views. Charts are grouped into different categories and displayed in a tree structure. The user can dynamically create new categories and new charts, which can be accessed with a simple click of the mouse.

Data mining explorer

The data mining explorer provides the capability to explore and query data mining models. Models are organized in a tree structure under three main categories: classifiers, clusters, and association rules. Data mining models can be dynamically constructed and integrated into the data mining explorer under the appropriate category. Further categories can be added dynamically to customize the organization of the models.
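Returning to the ARFF format described above, a minimal sketch of converting tabular records into an ARFF string might look like this; the relation and attribute names are illustrative, and this is not the framework's actual converter.

```python
def to_arff(relation, attributes, rows):
    """attributes: list of (name, type) pairs; rows: list of value tuples."""
    lines = ["@relation " + relation, ""]
    for name, atype in attributes:
        lines.append("@attribute %s %s" % (name, atype))
    lines += ["", "@data"]
    for row in rows:
        # one comma-separated record per line, as ARFF requires
        lines.append(",".join(str(v) for v in row))
    return "\n".join(lines)

arff = to_arff("emails",
               [("avg_word_len", "numeric"), ("author", "{S1,S2}")],
               [(2.8, "S1"), (3.1, "S2")])
```

The resulting text can be written to a .arff file and loaded directly into Weka.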
Each data mining model is labeled with a description that explains its use and the kind of decision support capability it offers. To interrogate a model, the user is prompted to enter the information required to make the prediction.

Weka submodule

To complement the implemented functionalities, we have integrated the Weka software package into our framework. Weka includes methods for most of the standard data mining problems: regression, classification, clustering, association rule mining, and attribute selection. It provides extensive support for the whole process of data mining, including preparing data,
constructing and evaluating learning algorithms, and visualizing the input data as well as the results of machine learning.

Email explorer

The email explorer allows a multi-staged analysis of emails by using social networking techniques, text mining, geographical rendering, and statistical analysis to gain an in-depth view of the underlying information. The email explorer works with a database backend for fast and convenient analysis of email data. Emails are organized in several virtual folders. The contents of each folder can be viewed separately, or jointly by merging all folders. A folder viewer displays a list of its emails. Several folder viewers can be opened at the same time, with the possibility of moving emails between them. A folder viewer offers classical functionalities, like sorting and searching, as well as advanced functionalities related to data mining and social network analysis. The advanced functionalities are presented through the following five sub-modules:

- Email details editor
- Map viewer
- Statistics viewer
- Social network viewer
- Data mining viewer

Email details editor

The email details editor (see Fig. 4) offers three sub-views of an email's contents: text, HTML, and raw format. The text sub-view displays the textual contents of an email; the HTML sub-view renders an email as a web page (provided it contains HTML tags). The raw sub-view shows the email in its original format, along with its metadata, without any processing.

Map viewer

The map viewer displays descriptive information about email flows directly on real geographic maps. For instance, when an email is selected in the folder viewer, an arrow is drawn between the geographic locations of the sender and recipient domains. Moreover, if the physical addresses of the sender and receiver are known, an arrow between these two locations is drawn as well. Other information, such as the flow of emails between participants, is rendered directly on the map.
The connecting arrows can be labeled with descriptive information (number of emails exchanged, the topic of conversation between the nodes, etc.).

Statistics viewer

We compute several statistics on email accounts and traffic and display them using appropriate charts for easy and intuitive interpretation (see Fig. 5). Statistical models, as mentioned in Section 2.1, are created in the statistics explorer and inserted automatically into the statistics viewer. A statistical model is created by specifying an appropriate SQL query to extract relevant data from an email database and associate it with an appropriate chart.

Social network viewer

In order to analyze and investigate the nature of communication between individuals and communities, we have implemented a submodule called the social network viewer. A typical output of this module is shown in Fig. 6. We compute a set of cliques and display them in the form of networks in a full-fledged graph editor. The user can explore these networks and transform them if needed to gain insight into the dynamics of email traffic. The communication among different individuals/communities and between an individual and a community, as well as the number of emails exchanged, can be seen within the viewer. Different views of email traffic can be displayed. The intensity of email flow between parties is indicated by the thickness of the communication link between them: thickness increases with the increase in email traffic. Different coloring schemes are used to identify different email classes and clusters.

Data mining viewer

The data mining viewer (shown in Fig. 7) enables us to build machine-learning models by employing several different machine-learning algorithms over different kinds of datasets. This helps a user evaluate different data mining techniques. The functionalities of the data mining viewer can be split into two categories: classification and clustering.
Classification allows us to build decision models on sets of emails that are already classified, whereas clustering is employed to identify hidden relationships and structures in an email corpus.

4. Conclusion

As a result of growing email misuse, investigators need efficient automated methods and tools for analyzing emails. In our work, we developed an email analysis framework to assist investigators in gathering clues and evidence in any investigation in which email communication is relevant. The framework offers functionalities ranging from storing, editing, searching, and querying emails to more advanced capabilities such as authorship attribution and email account localization. Extending traditional authorship identification techniques, we have proposed a new technique of mining style variation. This helps capture the change that occurs in the style of a person with respect to different contexts and recipients. To obtain more credible results, the level of cohesion and harmony among the different analysis techniques needs to be increased. Email social networks need to be further explored; they are rich sources of learning about cyber criminal activities.

References

Abbasi A, Chen H. Writeprints: a stylometric approach to identity-level identification and similarity detection in cyberspace. ACM Transactions on Information Systems March 2008;26(2).
Agrawal R, Imielinski T, Swami A. Mining association rules between sets of items in large databases. ACM SIGMOD Record June 1993;22(2).
Baayen RH, Van Halteren H, Tweedie FJ. Outside the cave of shadows: using syntactic annotation to enhance authorship attribution. Literary and Linguistic Computing 1996;2.
Bhattacharyya M, Hershkop S, Eskin E, Stolfo SJ. MET: an experimental system for malicious e-mail tracking. In: Proc. of the 2002 New Security Paradigms Workshop (NSPW-2002), Virginia Beach, VA.
Corney M, de Vel O, Anderson A, Mohay G. Gender-preferential text mining of e-mail discourse. In: Proc. 18th Annual Computer Security Applications Conference; 2002.
de Vel O. Mining e-mail authorship. In: Proc. Workshop on Text Mining, ACM International Conference on Knowledge Discovery and Data Mining (KDD).
de Vel O, Anderson A, Corney M, Mohay G. Mining e-mail content for author identification forensics. SIGMOD Record December 2001;30(4).
Farringdon JM. Analyzing for authorship: a guide to the Cusum technique. University of Wales Press.
Forsyth RS, Holmes DI. Feature-finding for text classification. Literary and Linguistic Computing 1996;11(4).
Gunopulos D, Agrawal R, Gehrke J, Raghavan P. Automatic subspace clustering of high dimensional data for data mining applications. In: Proc. of the ACM SIGMOD Conference, Seattle, WA.
Holmes DI. The evolution of stylometry in humanities. Literary and Linguistic Computing 1998;13(3).
Iqbal F, Hadjidj R, Fung BCM, Debbabi M. A novel approach of mining write-prints for authorship attribution in e-mail forensics. Digital Investigation 2008;5.
Joachims T. Text categorization with support vector machines: learning with many relevant features. In: Proc. 10th European Conference on Machine Learning (ECML).
Kulkarni A, Pedersen T. Name discrimination and e-mail clustering using unsupervised clustering and labelling of similar contexts. In: Proc. 2nd Indian International Conference on Artificial Intelligence (IICAI-05).
Li H, Shen D, Zhang B, Chen Z, Yang Q. Adding semantics to e-mail clustering; 2006.
Lippmann RP. An introduction to computing with neural networks. IEEE Acoustics, Speech, and Signal Processing Magazine 1987;4(2):4-22.
Mendenhall TC. The characteristic curves of composition. Science 1887;11(11).
Newman MEJ. The structure and function of complex networks. SIAM Review 2003;45.
Quinlan JR. Induction of decision trees. Machine Learning 1986;1(1).
Stolfo SJ, Creamer G, Hershkop S. A temporal based forensic analysis of electronic communication. In: Proc. of the ACM International Conference on Digital Government Research; 2006.
Teng J, Ma J, Lai I, Li Y. E-mail authorship mining based on SVM for computer forensics. In: Proc. Third International Conference on Machine Learning and Cybernetics, Shanghai; August.
Wei C, Sprague A, Skjellum A, Warner G. Mining spam e-mail to identify common origins for forensic application. New York, NY, USA: ACM.
Zheng R, Li J, Chen H, Huang Z. A framework for authorship identification of online messages: writing-style features and classification techniques. Journal of the American Society for Information Science and Technology February 2006;57(3).
Zheng R, Qin Y, Huang Z, Chen H. Authorship analysis in cybercrime investigation. In: Proc. 1st NSF/NIJ Symposium (ISI). Springer-Verlag.
A FUZZY BASED APPROACH TO TEXT MINING AND DOCUMENT CLUSTERING Sumit Goswami 1 and Mayank Singh Shishodia 2 1 Indian Institute of Technology-Kharagpur, Kharagpur, India [email protected] 2 School of Computer
Digital Identity & Authentication Directions Biometric Applications Who is doing what? Academia, Industry, Government
Digital Identity & Authentication Directions Biometric Applications Who is doing what? Academia, Industry, Government Briefing W. Frisch 1 Outline Digital Identity Management Identity Theft Management
DATA MINING TOOL FOR INTEGRATED COMPLAINT MANAGEMENT SYSTEM WEKA 3.6.7
DATA MINING TOOL FOR INTEGRATED COMPLAINT MANAGEMENT SYSTEM WEKA 3.6.7 UNDER THE GUIDANCE Dr. N.P. DHAVALE, DGM, INFINET Department SUBMITTED TO INSTITUTE FOR DEVELOPMENT AND RESEARCH IN BANKING TECHNOLOGY
La Cañada Unified School District Personnel Use of Technology Regulations (AR 4163.4) Also known as the Staff Technology and Internet Use Policy
LCUSD Personnel Use of Technology Regulations (AR 4163.4) Updated 08/21/08 p. 1 of 5 La Cañada Unified School District Personnel Use of Technology Regulations (AR 4163.4) Also known as the Staff Technology
Email Spam Detection Using Customized SimHash Function
International Journal of Research Studies in Computer Science and Engineering (IJRSCSE) Volume 1, Issue 8, December 2014, PP 35-40 ISSN 2349-4840 (Print) & ISSN 2349-4859 (Online) www.arcjournals.org Email
Spam Filtering Based On The Analysis Of Text Information Embedded Into Images
Journal of Machine Learning Research 7 (2006) 2699-2720 Submitted 3/06; Revised 9/06; Published 12/06 Spam Filtering Based On The Analysis Of Text Information Embedded Into Images Giorgio Fumera Ignazio
Natural Language to Relational Query by Using Parsing Compiler
Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 4, Issue. 3, March 2015,
Visualizing Threats: Improved Cyber Security Through Network Visualization
Visualizing Threats: Improved Cyber Security Through Network Visualization Intended audience This white paper has been written for anyone interested in enhancing an organizational cyber security regime
IMPROVISATION OF STUDYING COMPUTER BY CLUSTER STRATEGIES
INTERNATIONAL JOURNAL OF ADVANCED RESEARCH IN ENGINEERING AND SCIENCE IMPROVISATION OF STUDYING COMPUTER BY CLUSTER STRATEGIES C.Priyanka 1, T.Giri Babu 2 1 M.Tech Student, Dept of CSE, Malla Reddy Engineering
Categorical Data Visualization and Clustering Using Subjective Factors
Categorical Data Visualization and Clustering Using Subjective Factors Chia-Hui Chang and Zhi-Kai Ding Department of Computer Science and Information Engineering, National Central University, Chung-Li,
Data Mining on Social Networks. Dionysios Sotiropoulos Ph.D.
Data Mining on Social Networks Dionysios Sotiropoulos Ph.D. 1 Contents What are Social Media? Mathematical Representation of Social Networks Fundamental Data Mining Concepts Data Mining Tasks on Digital
Social Media Mining. Data Mining Essentials
Introduction Data production rate has been increased dramatically (Big Data) and we are able store much more data than before E.g., purchase data, social media data, mobile phone data Businesses and customers
Component visualization methods for large legacy software in C/C++
Annales Mathematicae et Informaticae 44 (2015) pp. 23 33 http://ami.ektf.hu Component visualization methods for large legacy software in C/C++ Máté Cserép a, Dániel Krupp b a Eötvös Loránd University [email protected]
How To Solve The Kd Cup 2010 Challenge
A Lightweight Solution to the Educational Data Mining Challenge Kun Liu Yan Xing Faculty of Automation Guangdong University of Technology Guangzhou, 510090, China [email protected] [email protected]
Botnet Detection Based on Degree Distributions of Node Using Data Mining Scheme
Botnet Detection Based on Degree Distributions of Node Using Data Mining Scheme Chunyong Yin 1,2, Yang Lei 1, Jin Wang 1 1 School of Computer & Software, Nanjing University of Information Science &Technology,
Environmental Remote Sensing GEOG 2021
Environmental Remote Sensing GEOG 2021 Lecture 4 Image classification 2 Purpose categorising data data abstraction / simplification data interpretation mapping for land cover mapping use land cover class
A Frequency-Based Approach to Intrusion Detection
A Frequency-Based Approach to Intrusion Detection Mian Zhou and Sheau-Dong Lang School of Electrical Engineering & Computer Science and National Center for Forensic Science, University of Central Florida,
Recurrent Patterns Detection Technology. White Paper
SeCure your Network Recurrent Patterns Detection Technology White Paper January, 2007 Powered by RPD Technology Network Based Protection against Email-Borne Threats Spam, Phishing and email-borne Malware
STATISTICA. Financial Institutions. Case Study: Credit Scoring. and
Financial Institutions and STATISTICA Case Study: Credit Scoring STATISTICA Solutions for Business Intelligence, Data Mining, Quality Control, and Web-based Analytics Table of Contents INTRODUCTION: WHAT
Vendor briefing Business Intelligence and Analytics Platforms Gartner 15 capabilities
Vendor briefing Business Intelligence and Analytics Platforms Gartner 15 capabilities April, 2013 gaddsoftware.com Table of content 1. Introduction... 3 2. Vendor briefings questions and answers... 3 2.1.
A Symptom Extraction and Classification Method for Self-Management
LANOMS 2005-4th Latin American Network Operations and Management Symposium 201 A Symptom Extraction and Classification Method for Self-Management Marcelo Perazolo Autonomic Computing Architecture IBM Corporation
How to select the right Marketing Cloud Edition
How to select the right Marketing Cloud Edition Email, Mobile & Web Studios ith Salesforce Marketing Cloud, marketers have one platform to manage 1-to-1 customer journeys through the entire customer lifecycle
Machine Learning using MapReduce
Machine Learning using MapReduce What is Machine Learning Machine learning is a subfield of artificial intelligence concerned with techniques that allow computers to improve their outputs based on previous
Using LSI for Implementing Document Management Systems Turning unstructured data from a liability to an asset.
White Paper Using LSI for Implementing Document Management Systems Turning unstructured data from a liability to an asset. Using LSI for Implementing Document Management Systems By Mike Harrison, Director,
International Journal of Computer Science Trends and Technology (IJCST) Volume 3 Issue 3, May-June 2015
RESEARCH ARTICLE OPEN ACCESS Data Mining Technology for Efficient Network Security Management Ankit Naik [1], S.W. Ahmad [2] Student [1], Assistant Professor [2] Department of Computer Science and Engineering
Hillstone T-Series Intelligent Next-Generation Firewall Whitepaper: Abnormal Behavior Analysis
Hillstone T-Series Intelligent Next-Generation Firewall Whitepaper: Abnormal Behavior Analysis Keywords: Intelligent Next-Generation Firewall (ingfw), Unknown Threat, Abnormal Parameter, Abnormal Behavior,
Chapter 6. The stacking ensemble approach
82 This chapter proposes the stacking ensemble approach for combining different data mining classifiers to get better performance. Other combination techniques like voting, bagging etc are also described
Comparison of K-means and Backpropagation Data Mining Algorithms
Comparison of K-means and Backpropagation Data Mining Algorithms Nitu Mathuriya, Dr. Ashish Bansal Abstract Data mining has got more and more mature as a field of basic research in computer science and
IC05 Introduction on Networks &Visualization Nov. 2009. <[email protected]>
IC05 Introduction on Networks &Visualization Nov. 2009 Overview 1. Networks Introduction Networks across disciplines Properties Models 2. Visualization InfoVis Data exploration
Flattening Enterprise Knowledge
Flattening Enterprise Knowledge Do you Control Your Content or Does Your Content Control You? 1 Executive Summary: Enterprise Content Management (ECM) is a common buzz term and every IT manager knows it
Mining Text Data: An Introduction
Bölüm 10. Metin ve WEB Madenciliği http://ceng.gazi.edu.tr/~ozdemir Mining Text Data: An Introduction Data Mining / Knowledge Discovery Structured Data Multimedia Free Text Hypertext HomeLoan ( Frank Rizzo
Final Project Report
CPSC545 by Introduction to Data Mining Prof. Martin Schultz & Prof. Mark Gerstein Student Name: Yu Kor Hugo Lam Student ID : 904907866 Due Date : May 7, 2007 Introduction Final Project Report Pseudogenes
Practical Data Science with Azure Machine Learning, SQL Data Mining, and R
Practical Data Science with Azure Machine Learning, SQL Data Mining, and R Overview This 4-day class is the first of the two data science courses taught by Rafal Lukawiecki. Some of the topics will be
SPATIAL DATA CLASSIFICATION AND DATA MINING
, pp.-40-44. Available online at http://www. bioinfo. in/contents. php?id=42 SPATIAL DATA CLASSIFICATION AND DATA MINING RATHI J.B. * AND PATIL A.D. Department of Computer Science & Engineering, Jawaharlal
Mining Frequent Sequences for Emails in Cyber Forensics Investigation
Mining Frequent Sequences for Emails in Cyber Investigation Priyanka V. Kayarkar NIRT, RGPV, Bhopal Prashant Ricchariya NIRT, RGPV, Bhopal Anand Motwani NIRT, RGPV, Bhopal ABSTRACT The goal of Digital
Web Mining. Margherita Berardi LACAM. Dipartimento di Informatica Università degli Studi di Bari [email protected]
Web Mining Margherita Berardi LACAM Dipartimento di Informatica Università degli Studi di Bari [email protected] Bari, 24 Aprile 2003 Overview Introduction Knowledge discovery from text (Web Content
Introduction to Pattern Recognition
Introduction to Pattern Recognition Selim Aksoy Department of Computer Engineering Bilkent University [email protected] CS 551, Spring 2009 CS 551, Spring 2009 c 2009, Selim Aksoy (Bilkent University)
Using Data Mining for Mobile Communication Clustering and Characterization
Using Data Mining for Mobile Communication Clustering and Characterization A. Bascacov *, C. Cernazanu ** and M. Marcu ** * Lasting Software, Timisoara, Romania ** Politehnica University of Timisoara/Computer
Data Mining Project Report. Document Clustering. Meryem Uzun-Per
Data Mining Project Report Document Clustering Meryem Uzun-Per 504112506 Table of Content Table of Content... 2 1. Project Definition... 3 2. Literature Survey... 3 3. Methods... 4 3.1. K-means algorithm...
Numerical Algorithms Group
Title: Summary: Using the Component Approach to Craft Customized Data Mining Solutions One definition of data mining is the non-trivial extraction of implicit, previously unknown and potentially useful
EVILSEED: A Guided Approach to Finding Malicious Web Pages
+ EVILSEED: A Guided Approach to Finding Malicious Web Pages Presented by: Alaa Hassan Supervised by: Dr. Tom Chothia + Outline Introduction Introducing EVILSEED. EVILSEED Architecture. Effectiveness of
Sentiment analysis on tweets in a financial domain
Sentiment analysis on tweets in a financial domain Jasmina Smailović 1,2, Miha Grčar 1, Martin Žnidaršič 1 1 Dept of Knowledge Technologies, Jožef Stefan Institute, Ljubljana, Slovenia 2 Jožef Stefan International
Anti Spamming Techniques
Anti Spamming Techniques Written by Sumit Siddharth In this article will we first look at some of the existing methods to identify an email as a spam? We look at the pros and cons of the existing methods
Université de Montpellier 2 Hugo Alatrista-Salas : [email protected]
Université de Montpellier 2 Hugo Alatrista-Salas : [email protected] WEKA Gallirallus Zeland) australis : Endemic bird (New Characteristics Waikato university Weka is a collection
W. Heath Rushing Adsurgo LLC. Harness the Power of Text Analytics: Unstructured Data Analysis for Healthcare. Session H-1 JTCC: October 23, 2015
W. Heath Rushing Adsurgo LLC Harness the Power of Text Analytics: Unstructured Data Analysis for Healthcare Session H-1 JTCC: October 23, 2015 Outline Demonstration: Recent article on cnn.com Introduction
Data Mining Applications in Higher Education
Executive report Data Mining Applications in Higher Education Jing Luan, PhD Chief Planning and Research Officer, Cabrillo College Founder, Knowledge Discovery Laboratories Table of contents Introduction..............................................................2
International Journal of Recent Trends in Electrical & Electronics Engg., Feb. 2014. IJRTE ISSN: 2231-6612
Spoofing Attack Detection and Localization of Multiple Adversaries in Wireless Networks S. Bhava Dharani, P. Kumar Department of Computer Science and Engineering, Nandha College of Technology, Erode, Tamilnadu,
Application of Data Mining Techniques in Intrusion Detection
Application of Data Mining Techniques in Intrusion Detection LI Min An Yang Institute of Technology [email protected] Abstract: The article introduced the importance of intrusion detection, as well as
What is Visualization? Information Visualization An Overview. Information Visualization. Definitions
What is Visualization? Information Visualization An Overview Jonathan I. Maletic, Ph.D. Computer Science Kent State University Visualize/Visualization: To form a mental image or vision of [some
Author Identification for Turkish Texts
Çankaya Üniversitesi Fen-Edebiyat Fakültesi, Journal of Arts and Sciences Say : 7, May s 2007 Author Identification for Turkish Texts Tufan TAŞ 1, Abdul Kadir GÖRÜR 2 The main concern of author identification
Technical Report. The KNIME Text Processing Feature:
Technical Report The KNIME Text Processing Feature: An Introduction Dr. Killian Thiel Dr. Michael Berthold [email protected] [email protected] Copyright 2012 by KNIME.com AG
CHAPTER 1 INTRODUCTION
1 CHAPTER 1 INTRODUCTION Exploration is a process of discovery. In the database exploration process, an analyst executes a sequence of transformations over a collection of data structures to discover useful
Author Gender Identification of English Novels
Author Gender Identification of English Novels Joseph Baena and Catherine Chen December 13, 2013 1 Introduction Machine learning algorithms have long been used in studies of authorship, particularly in
Comparison of Non-linear Dimensionality Reduction Techniques for Classification with Gene Expression Microarray Data
CMPE 59H Comparison of Non-linear Dimensionality Reduction Techniques for Classification with Gene Expression Microarray Data Term Project Report Fatma Güney, Kübra Kalkan 1/15/2013 Keywords: Non-linear
Clustering. Adrian Groza. Department of Computer Science Technical University of Cluj-Napoca
Clustering Adrian Groza Department of Computer Science Technical University of Cluj-Napoca Outline 1 Cluster Analysis What is Datamining? Cluster Analysis 2 K-means 3 Hierarchical Clustering What is Datamining?
PineApp Anti IP Blacklisting
PineApp Anti IP Blacklisting Whitepaper 2011 Overview ISPs outbound SMTP Services Individual SMTP relay, not server based (no specific protection solutions are stated between the sender and the ISP backbone)
CHAPTER 1 INTRODUCTION
21 CHAPTER 1 INTRODUCTION 1.1 PREAMBLE Wireless ad-hoc network is an autonomous system of wireless nodes connected by wireless links. Wireless ad-hoc network provides a communication over the shared wireless
Taxonomies in Practice Welcome to the second decade of online taxonomy construction
Building a Taxonomy for Auto-classification by Wendi Pohs EDITOR S SUMMARY Taxonomies have expanded from browsing aids to the foundation for automatic classification. Early auto-classification methods
ModusMail Software Instructions.
ModusMail Software Instructions. Table of Contents Basic Quarantine Report Information. 2 Starting A WebMail Session. 3 WebMail Interface. 4 WebMail Setting overview (See Settings Interface).. 5 Account
Analysis of Spam Filter Methods on SMTP Servers Category: Trends in Anti-Spam Development
Analysis of Spam Filter Methods on SMTP Servers Category: Trends in Anti-Spam Development Author André Tschentscher Address Fachhochschule Erfurt - University of Applied Sciences Applied Computer Science
BOOSTING - A METHOD FOR IMPROVING THE ACCURACY OF PREDICTIVE MODEL
The Fifth International Conference on e-learning (elearning-2014), 22-23 September 2014, Belgrade, Serbia BOOSTING - A METHOD FOR IMPROVING THE ACCURACY OF PREDICTIVE MODEL SNJEŽANA MILINKOVIĆ University
Search Result Optimization using Annotators
Search Result Optimization using Annotators Vishal A. Kamble 1, Amit B. Chougule 2 1 Department of Computer Science and Engineering, D Y Patil College of engineering, Kolhapur, Maharashtra, India 2 Professor,
Azure Machine Learning, SQL Data Mining and R
Azure Machine Learning, SQL Data Mining and R Day-by-day Agenda Prerequisites No formal prerequisites. Basic knowledge of SQL Server Data Tools, Excel and any analytical experience helps. Best of all:
Intelligent Analysis of User Interactions in a Collaborative Software Engineering Context
Intelligent Analysis of User Interactions in a Collaborative Software Engineering Context Alejandro Corbellini 1,2, Silvia Schiaffino 1,2, Daniela Godoy 1,2 1 ISISTAN Research Institute, UNICEN University,
Galaxy Morphological Classification
Galaxy Morphological Classification Jordan Duprey and James Kolano Abstract To solve the issue of galaxy morphological classification according to a classification scheme modelled off of the Hubble Sequence,
The What, Why, and How of Email Authentication
The What, Why, and How of Email Authentication by Ellen Siegel: Director of Technology and Standards, Constant Contact There has been much discussion lately in the media, in blogs, and at trade conferences
Reputation Network Analysis for Email Filtering
Reputation Network Analysis for Email Filtering Jennifer Golbeck, James Hendler University of Maryland, College Park MINDSWAP 8400 Baltimore Avenue College Park, MD 20742 {golbeck, hendler}@cs.umd.edu
Towards a Visually Enhanced Medical Search Engine
Towards a Visually Enhanced Medical Search Engine Lavish Lalwani 1,2, Guido Zuccon 1, Mohamed Sharaf 2, Anthony Nguyen 1 1 The Australian e-health Research Centre, Brisbane, Queensland, Australia; 2 The
Filtering Noisy Contents in Online Social Network by using Rule Based Filtering System
Filtering Noisy Contents in Online Social Network by using Rule Based Filtering System Bala Kumari P 1, Bercelin Rose Mary W 2 and Devi Mareeswari M 3 1, 2, 3 M.TECH / IT, Dr.Sivanthi Aditanar College
WYNYARD ADVANCED CRIME ANALYTICS POWERFUL SOFTWARE TO PREVENT AND SOLVE CRIME
WYNYARD ADVANCED CRIME ANALYTICS POWERFUL SOFTWARE TO PREVENT AND SOLVE CRIME HELPING LAW ENFORCEMENT AGENCIES SOLVE CRIMES FASTER, WITH LOWER COSTS AND FEWER RESOURCES. 1 Wynyard Group Advanced Crime
. Learn the number of classes and the structure of each class using similarity between unlabeled training patterns
Outline Part 1: of data clustering Non-Supervised Learning and Clustering : Problem formulation cluster analysis : Taxonomies of Clustering Techniques : Data types and Proximity Measures : Difficulties
Data, Measurements, Features
Data, Measurements, Features Middle East Technical University Dep. of Computer Engineering 2009 compiled by V. Atalay What do you think of when someone says Data? We might abstract the idea that data are
Ipswitch IMail Server with Integrated Technology
Ipswitch IMail Server with Integrated Technology As spammers grow in their cleverness, their means of inundating your life with spam continues to grow very ingeniously. The majority of spam messages these
