Subscriber classification within telecom networks utilizing big data technologies and machine learning




Jonathan Magnusson, Uppsala University, Box 337, 751 05 Uppsala, Sweden, jonathanmagnusson@hotmail.com
Tor Kvernvik, Ericsson Research, Färögatan 6, 164 80 Stockholm, Sweden, tor.kvernvik@ericsson.com

ABSTRACT
This paper describes a scalable solution for identifying influential subscribers in, for example, telecom networks. The solution estimates one weighted value of influence from several Social Network Analysis (SNA) metrics. The novel method for aggregating several metrics utilizes machine learning to train models. A prototype solution has been implemented on a Hadoop platform to support scalability and to reduce hardware cost by enabling the use of commodity computers. The SNA algorithms have been adapted to execute efficiently on the MapReduce distributed platform. The prototype solution has been tested on a Hadoop cluster. The tests have verified that the solution can scale to support networks with millions of subscribers. Both real data from a telecom network operator with 2.4 million subscribers and synthetic data for networks of up to 100 million subscribers have been used to verify the scalability and accuracy of the solution. The correlation between metrics has been analyzed to identify the information gain from each metric.

Categories and Subject Descriptors: []
General Terms: Algorithms, Experimentation, Performance, Verification.
Keywords: Social Network Analysis, Big data, Machine Learning, Telecommunication, Scalability

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. BigMine '12, August 12, 2012, Beijing, China. Copyright 2012 ACM 978-1-4503-1547-0/12/08 ...$10.00.

1. INTRODUCTION
As of late, telecommunication operators are facing a development where the amount of subscriber data is rapidly increasing. This provides an opportunity to gain competitive advantage by turning the data into knowledge. The case considered in this article is that of identifying influential subscribers in a telecom network, that is, subscribers that may have influence over other subscribers. Generally, the same solution can be used for other types of networks and other types of classifications.

Finding influential actors within a social network is one of the main topics considered in the field of Social Network Analysis (SNA). SNA incorporates a number of methods widely used to discover characteristics of individual actors in a network. A popular group of such methods is the centrality measures. Some metrics take only local properties of the network into account, while others implement a more global approach. For some centrality metrics, information regarding the complete graph is required to obtain an accurate result. Graphs of these types are generally very large, as some operators have more than 100 million subscribers. Computation and storage capacity can become an issue for very large graphs. A solution to the capacity challenges can be found in parallel computing. Specifically, the MapReduce-like framework of Hadoop has been used for this study.
This framework provides means by which several parts of the data can be analyzed simultaneously, with the output being the aggregated result. Although SNA has been around for some time, it has only recently gained status as a proper field of science. Thus, some parts of the research within the subject have been speculative. For instance, the accuracy of the centrality metrics as a measure of importance can be criticized, as it cannot be objectively proven. To increase the accuracy of the importance estimation, several different metrics are taken into consideration. Some of the questions to be answered in this paper are:

- What is required to enable Social Network Analysis algorithms to execute efficiently on a MapReduce platform?
- What level of performance can be achieved, both capacity- and accuracy-wise?
- What is the correlation between different SNA metrics? Are there any metrics that can be removed from the analysis to reduce the execution time without reducing accuracy?

The goal is to propose a solution that accurately classifies influential subscribers based on several SNA metrics. The solution shall be easy to manage for an operator, and it shall be possible to easily adapt the solution to support different flavors and cultural differences of influence.

2. ESTIMATION OF INFLUENTIALITY
In this section a general background is given of the technologies and platform required for the influence estimation solution.

2.1 Social Network Analysis

2.1.1 Graphs
The simplest type of graph is an undirected and unweighted graph. An undirected graph is defined by the edges always being two-way relations, and in an unweighted graph all edges are equally strong. A more descriptive type of graph is the directed and weighted graph. Directed implies that relations can exist as either one-way or two-way relations. Weighted refers to the fact that relations may be of varying importance. This is represented by a number between 0 and 1 called the weight w. For an unweighted graph, all weights are 1. To represent a graph, adjacency matrices and adjacency lists are frequently used.

2.1.2 Some Common Metrics
Analyzing graphs often involves calculating a number of metrics. One group of metrics is the measures of centrality. These are quantities that illustrate the importance of each vertex within the network. The importance can be with respect to being related to other actors within the network, or to being minimally separated from all other actors within the network. Figure 1 shows an example network where the vertices of highest centrality for each metric type have been indicated. Notice that the centrality metrics generally do not pinpoint the same vertex with the highest score.

Figure 1: A small social network graph, where the most important vertices according to different centrality measures (degree, betweenness, closeness and eigenvector centrality) have been marked. Data courtesy of David Krackhardt.

Degree Centrality
Degree centrality is simply the number of vertices that a given vertex has a relation to. Classically, this number is only relevant for unweighted graphs, but it has been extended to weighted graphs as the sum of all weights of a vertex's relations [1]. This metric is readily calculated with access to an adjacency matrix or an adjacency list. In an adjacency list, the length of the array listing the neighbors of a vertex is the degree of the vertex. This can be calculated in O(n) time. For directed graphs this metric can be expanded into two separate metrics, in-degree and out-degree. These are the number of relations into and out from the vertex, respectively.

Eigenvector Centrality
Eigenvector centrality can be applied to both weighted and unweighted graphs, as well as directed and undirected ones. It can be thought of as an iterative process, where in each step a vertex sends information to its neighbors. The amount of information sent depends on the importance of the vertex itself and the strength of the relation to the receiving vertex. As the adjacency matrix is of size n x n, and n usually is very large, it might be computationally costly to calculate eigenvectors of the matrix. A solution to this is an iterative method called the power iteration. The method consists of repeatedly multiplying the adjacency matrix of the graph with a vector b, which is updated as the result of the multiplication at each step. Additionally, the vector b is normalized at each step. The k-th iteration can be represented as follows:

b_{k+1} = \frac{A b_k}{\lVert A b_k \rVert}    (1)

The initial vector b_0 can be chosen randomly, but a common choice is to give each vertex an equal starting value, e.g. 1.
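As an illustration of equation (1), the following is a minimal sketch of the power iteration for an unweighted, undirected graph whose adjacency matrix fits in memory. The function name, tolerance and example graph are illustrative assumptions, not part of the prototype described later.

```python
import numpy as np

def eigenvector_centrality(A, tol=1e-8, max_iter=1000):
    """Power iteration (equation 1): repeatedly multiply the adjacency
    matrix A with the score vector b and re-normalize (illustrative sketch,
    assuming a connected, undirected, unweighted graph)."""
    n = A.shape[0]
    b = np.ones(n)                       # equal starting value for every vertex
    for _ in range(max_iter):
        b_next = A @ b                   # each vertex collects scores from its neighbors
        b_next /= np.linalg.norm(b_next) # normalize at every step
        if np.linalg.norm(b_next - b) < tol:
            return b_next
        b = b_next
    return b

# Example: triangle 0-1-2 with a pendant vertex 3 attached to vertex 2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(eigenvector_centrality(A))         # vertex 2 scores highest
```

For very large graphs the matrix is never materialized; section 2.3 discusses how the same per-vertex multiplication is expressed as MapReduce jobs.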
Betweenness Centrality
Betweenness centrality is a concept commonly used in the analysis of graphs. It measures to what degree actors have to go through a specific actor, via the geodesic path, in order to reach other actors. This metric is computationally heavy, as it involves calculating the shortest paths between all vertices in the graph. An algorithm for calculating betweenness centrality in sparse networks was proposed by Brandes in [2]. Brandes' algorithm incorporates breadth-first search (BFS), an algorithm for finding all shortest paths from a single source. The computational cost of this algorithm is of the order O(nm).

An approximation of this measure that requires significantly shorter computation times for large networks is the ego betweenness centrality. The ego network of a vertex, called the central vertex, is the network containing the central vertex and its closest neighbors, as well as all relations between these vertices. The ego betweenness centrality of a vertex is essentially the betweenness centrality of the vertex within its ego network. A simple way of calculating this, given the adjacency matrix of the ego network, is described in [3].
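The sketch below shows one way to carry out that adjacency-matrix calculation, assuming an unweighted, undirected ego network that includes the ego itself: for every pair of alters that are not directly connected, the ego is credited with the reciprocal of the number of length-two paths between them. The function name and toy example are illustrative, and this follows our reading of the approach attributed to [3] rather than the prototype's actual code.

```python
import numpy as np

def ego_betweenness(A_ego):
    """Ego betweenness from the ego-network adjacency matrix (undirected,
    unweighted, ego included). A_ego squared counts length-2 paths; for each
    non-adjacent pair the ego is one of those intermediaries and gets credit
    1 / (number of length-2 paths). Illustrative sketch, not the prototype."""
    A = np.asarray(A_ego, dtype=float)
    A2 = A @ A
    score = 0.0
    n = A.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if A[i, j] == 0 and A2[i, j] > 0:   # non-adjacent pair reachable in two steps
                score += 1.0 / A2[i, j]
    return score

# Example ego network: vertex 0 is the ego with alters 1, 2, 3;
# alters 1 and 2 are also connected to each other, alter 3 is not.
A_ego = np.array([[0, 1, 1, 1],
                  [1, 0, 1, 0],
                  [1, 1, 0, 0],
                  [1, 0, 0, 0]])
print(ego_betweenness(A_ego))   # pairs (1,3) and (2,3) each contribute 1.0 -> 2.0
```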

2.2 Machine Learning supported aggregation of metrics
Influence is a complex property that has different meanings depending on the context in which it is used. It can be derived from a single SNA metric, but then only one aspect of influence is taken into account. A more accurate analysis is performed by including a number of metrics. It is a challenge to know how to aggregate the metrics.

In the sections below, different methods of machine learning, so-called supervised learning methods, are described. For these types of methods, input examples with known outputs are needed to train the machine.

2.2.1 Decision trees
The decision tree algorithm constructs a set of rules that forms a tree-like structure. This rule set is used to classify examples, or instances, into one of a number of predefined outcomes. Each instance is composed of a number of attributes which characterize the instance. The tree is a set of nodes, each representing a rule, used to classify the input instances of the problem. Ideally, each instance in the training set can be classified correctly using the decision tree. However, this is not always possible for an arbitrary set of instances. Figure 2 illustrates this with an example, where subscribers are classified as influential or not influential depending on the social network analysis metrics calculated for them.

Figure 2: A simple example of a decision tree, based on the social network analysis metrics introduced in earlier sections. Yes indicates importance. DC = Degree Centrality, EBC = Ego Betweenness Centrality, iDC = In-Degree Centrality, EC = Eigenvector Centrality.

A commonly used basic algorithm for constructing a decision tree is called ID3 [4]. In a real industrial environment a few difficulties usually occur that cannot be handled by the ID3 algorithm alone. In response to this, some further developments have been made to the algorithm, resulting in a system for decision tree induction named C4.5 [5]. It increases the robustness of the data handling and adds support for numeric attributes, missing values, noisy data and generating rules from trees. In the case of noisy data, or when the training set is small, artificial patterns might be detected. This is known as overfitting. To avoid this potential problem, reduced-error pruning with a validation set is commonly used. This results in a less complex decision tree that performs at least as well as the original tree, and in many cases better. However, when training data is scarce, sparing instances for the validation data could decrease the classifying qualities of the decision tree even further [6].

2.2.2 Logistic regression
Logistic regression is the hypothesis of linear regression wrapped between the boundaries 0 and 1 with the sigmoid function. In order to fit functions of complex shape into the hypothesis model, the input could be mapped to more complex functions of the original input attributes; for instance, higher-order polynomial combinations could be created. Gradient descent can be used to optimize the parameters theta. A thorough analysis and description of different aspects of logistic regression can be found in [7].
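As a concrete illustration of section 2.2.2, the following is a minimal sketch of logistic regression trained with batch gradient descent on attribute vectors such as the SNA metrics. The learning rate, iteration count and toy data are illustrative assumptions, not values used in the prototype (which relies on Weka, section 3.3.1).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic_regression(X, y, lr=0.1, iters=2000):
    """Fit theta so that sigmoid(X @ theta) approximates the 0/1 labels y.
    X: (m, n) attribute matrix (e.g. SNA metrics), y: (m,) labels."""
    m, n = X.shape
    Xb = np.hstack([np.ones((m, 1)), X])     # add intercept column
    theta = np.zeros(n + 1)
    for _ in range(iters):
        h = sigmoid(Xb @ theta)              # hypothesis bounded to (0, 1)
        gradient = Xb.T @ (h - y) / m        # gradient of the log-loss
        theta -= lr * gradient               # gradient descent step
    return theta

def predict(theta, X):
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    return (sigmoid(Xb @ theta) >= 0.5).astype(int)

# Tiny illustrative example: two metrics per subscriber, label 1 = influential.
X = np.array([[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]])
y = np.array([1, 1, 0, 0])
theta = train_logistic_regression(X, y)
print(predict(theta, X))   # expected: [1 1 0 0]
```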
2.2.3 Neural Networks
Inspired by the way neurons in a biological brain communicate with each other through impulse signals, the neural network model consists of connected nodes. These nodes are usually arranged into layers. Common to all layered implementations of neural networks are an input layer and an output layer. To represent more complex patterns, hidden layers are sometimes placed between the input and output layers. The number of layers and the number of nodes in each layer may vary, and the optimal set-up is usually dependent on the actual case. The neural network algorithm Voted Perceptron [8] was developed by Freund and Schapire. The voted perceptron method is based on the perceptron algorithm of Rosenblatt [9]. The algorithm takes advantage of data that are linearly separable with large margins. Using voted perceptrons, one gets performance comparable to Support Vector Machines, but learning is much faster.

2.3 Social Network Analysis Algorithms on a Hadoop platform
This section describes the Hadoop platform and how it impacts the SNA algorithms. Large sets of data are becoming more common within a number of fields. In response to this, new methods of processing this data have to be developed. An important method is parallelization. MapReduce [10] is a framework for parallelizing a computation process on a cluster of computers. Hadoop is an open-source project utilizing the MapReduce concept. Hadoop is a framework which deals with the parallelization, fault tolerance and scheduling of machine communication for large tasks.

In a Hadoop cluster, one computer has a special role, the Master. It contains meta-data that controls and organizes the process, with scheduling and load-balancing as well as fault handling. First, the procedure is divided into M map tasks and R reduce tasks by the Master, where M is usually chosen to make the size of each part between 32 and 64 MB, by default. The Master then distributes the work processes among the other machines, the Workers or Slaves. Each input data item is assigned a key and a value. This is processed in the map task, and the output is a new key with a new value. This is called the intermediate data, which is the input to the reduce task.

The Master keeps track of the intermediate data and, based on its key, spreads it to the different reduce tasks. More specifically, all data with the same key is presented to the reducer as one package. The data is further processed in the reduce task, which does not start until all map tasks are finished and all intermediate data is available. During the reduce process, the records are inspected and handled one by one using an iterator, which finally gives an output key and a value.

A very important part of the MapReduce framework is the way it deals with machine failures. The Master sends signals to each Worker periodically, and if it does not receive an answer within a reasonable time the machine is deemed broken and the process it was performing is assumed to have failed. If the task is not completed, it will be re-scheduled to a different machine. If the task is completed, however, it will be treated differently depending on whether it is a map task or a reduce task. A map task will be reprocessed, as its output data is stored locally on the failed machine and is thus unreachable. A reduce task, on the other hand, stores its output in a global file system, called HDFS in the case of Hadoop. Hence, its output can still be used and it is not necessary to redo the task.

For a task to be suitable for processing in a MapReduce-like fashion, it must be dividable into independent parts. At first glance it might seem as if graph problems are not dividable into independent parts, but the fact that every vertex can be processed on its own makes the division possible. An example of this can be found in [11]. Many parallelized algorithms for graph analysis must be executed iteratively. This might increase the total time of the job, since Hadoop writes the reducer output to a file in HDFS; for very large files this might take a while. A number of projects are trying to work around this problem. Notable examples are Pregel [12], Spark [13] and Hama [14].

For the user, the job is to define a mapper and a reducer, and in addition a driver program must be written that defines the properties of each job and decides the steps between and around the map and reduce tasks. Moreover, a commonly used type of task is the combine task. A combiner is in many ways very similar to a reduce task; however, it operates locally on the output of a single map task. The aim is to reduce the output from one mapper into a single output message for each addressed reduce task. In some applications the reduce task can be used as a combine task. An example is the iterative job of finding eigenvector centrality in a graph. The same is true for PageRank, which is a modified version of eigenvector centrality. This, along with a few other optimizations, was shown to reduce the computational time of PageRank in [15].
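To make the per-vertex division concrete, the sketch below expresses one power-iteration step of unweighted eigenvector centrality as map and reduce functions, with a small local driver standing in for the Hadoop job and for the normalization step. The record layout and function names are illustrative assumptions rather than the prototype's actual implementation.

```python
from collections import defaultdict
from math import sqrt

def map_vertex(vertex, record):
    """record = (current_score, adjacency_list). Emit the graph structure back
    to the vertex itself and the current score to every neighbor."""
    score, neighbors = record
    yield vertex, ('GRAPH', neighbors)
    for n in neighbors:
        yield n, ('SCORE', score)

def reduce_vertex(vertex, values):
    """Sum the incoming scores (one row of the A*b product in equation (1))
    and re-attach the adjacency list for the next iteration."""
    neighbors, total = [], 0.0
    for kind, payload in values:
        if kind == 'GRAPH':
            neighbors = payload
        else:
            total += payload
    return vertex, (total, neighbors)

def run_iteration(state):
    """Local stand-in for one Hadoop job: group intermediate data by key,
    reduce, then normalize the score vector (the driver's responsibility)."""
    intermediate = defaultdict(list)
    for v, rec in state.items():
        for key, value in map_vertex(v, rec):
            intermediate[key].append(value)
    new_state = dict(reduce_vertex(v, vals) for v, vals in intermediate.items())
    norm = sqrt(sum(s * s for s, _ in new_state.values())) or 1.0
    return {v: (s / norm, nbrs) for v, (s, nbrs) in new_state.items()}

# Triangle 0-1-2 with a pendant vertex 3 attached to 2; every vertex starts at 1.
state = {0: (1.0, [1, 2]), 1: (1.0, [0, 2]), 2: (1.0, [0, 1, 3]), 3: (1.0, [2])}
for _ in range(30):
    state = run_iteration(state)
print({v: round(s, 3) for v, (s, _) in state.items()})  # vertex 2 scores highest
```

In Hadoop, each call to run_iteration corresponds to a full job whose reducer output is written to HDFS and read back as the next job's input, which is exactly the overhead discussed above for iterative algorithms.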
3. DESCRIPTION OF THE SOLUTION
A prototype implementation running on a Hadoop cluster was developed. The solution for performing the influence estimation is described in this chapter and will henceforth be referred to as the prototype. There is a large number of social network analysis algorithms and methods that can be used to find valuable information about the social network representing the telecommunication users, derived from mobile network data. One example is the centrality methods, section 2.1.2, which can be applied to find the most central nodes in the network.

A common approach to analyzing social networks is to consider these centrality measures separately. By doing this, only one aspect of influence is taken into account and some information existing within the network is ignored. A more accurate analysis could be performed by including as much information as possible, e.g. hidden patterns in the relations between the different flavors of importance. However, it is a challenge for the network operator to know which algorithms to use and how to evaluate the resulting measures from all the different algorithms in order to pinpoint the most influential subscribers or other interesting segments of subscribers. To deal with this, we used machine learning.

The proposed solution is to generate a social network graph from the information in a Call Data Record (CDR) file. Using this graph, a number of SNA metrics are calculated. At this point, machine learning algorithms are used to create a model for classifying each subscriber. To be able to create the model, a training set must be provided. The training set consists of a subset of the users in the network for which, for each user, the predefined attribute, i.e. the level of influence, is known. This model can then be used to derive information about the users of the complete network. Figure 3 gives a synoptic view of the prototype.

Figure 3: The prototype solution.

As this process is generally performed on very large networks, a parallel implementation is beneficial to ensure scalability. Thus, the prototype has been implemented in the Hadoop framework, described in section 2.3. For testing the complete system, the data described in section 4.1 is used as input. The prototype can be divided into three main parts, according to figure 3: Pre-Processing, SNA Algorithms and Influence Weight Estimation. Each of these parts is described in further detail in the following sections.

3.1 Pre-Processing: Filtering CDRs and creating network graphs
When access to the CDRs is established, the first part of the pre-processing involves filtering each record and keeping only the relevant information. This typically involves removing records that have missing or irregularly formatted entries. The vertices to be included in the graphs are those found in the CDRs after filtering has taken place. The filtered records are the input to the procedure of creating the network graphs: both a weighted and directed graph and an unweighted and undirected graph. To test the prototype, all records in the CDR file were filtered in order to remove all irrelevant data. When creating the unweighted and undirected graph, no threshold was applied; a single call in any direction is sufficient for the existence of a relation.
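A minimal sketch of this pre-processing step is given below. The CDR field layout (caller, callee, duration) and the duration-only weighting are simplified assumptions for illustration; the prototype derives weights from the type, number, recency and duration of the transactions (section 4.1.1).

```python
from collections import defaultdict

def build_graphs(cdr_rows):
    """Filter CDR rows and build two graphs:
    - a weighted, directed edge map  (caller, callee) -> weight
    - an unweighted, undirected adjacency set (a single call in any direction
      is enough to create a relation). Illustrative sketch only."""
    weighted_directed = defaultdict(float)
    undirected = defaultdict(set)
    for row in cdr_rows:
        # Filtering: drop records with missing or irregularly formatted entries.
        if len(row) < 3 or not row[0] or not row[1]:
            continue
        try:
            duration = float(row[2])
        except ValueError:
            continue
        caller, callee = row[0], row[1]
        # Simplified weight: only call duration is accumulated here.
        weighted_directed[(caller, callee)] += duration
        undirected[caller].add(callee)
        undirected[callee].add(caller)
    return weighted_directed, undirected

# Example CDR rows: caller, callee, duration in seconds (one malformed row).
rows = [("A", "B", "120"), ("B", "C", "30"), ("A", "B", "60"), ("", "C", "10")]
wd, ud = build_graphs(rows)
print(wd[("A", "B")])   # 180.0 (two calls aggregated into one directed edge)
print(sorted(ud["B"]))  # ['A', 'C']
```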
3.2 Calculation of centrality metrics using SNA algorithms
This part involves the determination of several of the SNA metrics described in section 2.1.2. The prototype tested has 9 algorithms implemented. These are total degree centrality, in-degree centrality, out-degree centrality and eigenvector centrality, each calculated both in the weighted and the unweighted case. Additionally, ego betweenness centrality is calculated only for the unweighted case, since it does not extend to the weighted case in a natural way. The output from this part of the prototype is a list of all subscribers with the 9 corresponding SNA metrics. This list constitutes the input for the next step, section 3.3.

3.3 Influence weight estimation using Supervised Learning
This step of the process aims to automate the aggregation of the results from the SNA metrics as far as possible, to relieve the operator of the system of this responsibility. A model is created using supervised machine learning, section 3.3.1. The step Influence Weight Estimation in figure 3 executes a trained model that aggregates the different SNA metrics in order to obtain one value of influence for each subscriber. This value is then used to identify the most influential subscribers. A list of the most influential subscribers is the output of the prototype, figure 3.

3.3.1 Training of the model
As a preparation step before the actual execution of the model, the model has to be generated. The training method used is called supervised learning. In supervised learning, a training set has to be provided with a known level of influence for each subscriber. The most natural, and most likely the best, way of obtaining an accurate training set is to let the network operator provide it. For instance, it is likely that the network operator has performed marketing campaigns in the past. Analysis of the results of such a campaign could very well yield information regarding individual subscribers' ability to spread information through the network, for instance by studying the number of purchases of the marketed service that have occurred among friends of the subscribers observed.

If this kind of labeled training set is absent, however, there are other ways to generate one. The following approach has been used during the tests of the prototype. The measure of influence chosen is exact betweenness centrality. This algorithm is very difficult to execute in reasonable time for large networks; thus, calculating betweenness centrality for a large network is not possible. A more suitable approach is to determine exact betweenness centrality for only a subset of the complete network. It is not at all guaranteed that exact betweenness centrality is an accurate measure of influence in the network, but it was assumed to be good enough to verify the prototype.

A fast way of dividing graphs into sub-graphs is fast-unfolding community detection, described in [16]. This ensures that few relations to subscribers outside the sub-graph are removed, and the value of betweenness centrality calculated within the community will be approximately the same, for the subscribers in the community, as if the calculation were made on the whole network. From the data set, section 4.1, a community of 20737 subscribers was found to be appropriately large and was used as the training set. 66% of the subscribers in the community were chosen randomly to be used for training, and the rest were used as a validation set.

Note that it is not at all guaranteed that betweenness centrality is an accurate measure of influence in the network. Rather, it is an appropriate measure to use for generating the training set, as it is not represented among the attributes for each subscriber. Thus, the machine learning algorithms will probably not be able to find any trivial decision models for classifying subscribers. The values obtained by these calculations hence act as the concept to be predicted by a combination of the other attributes, namely the 9 social network analysis metrics mentioned above. During the experiment for testing the prototype, the centrality calculated for the community was used to rank the subscribers. The top 30% of the community subscribers, with respect to betweenness centrality, are regarded as influential; the remaining subscribers are considered non-influential. The objective was then to predict the class of each specific subscriber.

When the training set is available, there are a number of machine learning methods to fit the example attributes to the given concept, see section 2.2. The three different machine learning techniques described in section 2.2 were implemented in the prototype. Decision tree generating algorithms were used to create the decision model from the labeled data, in line with the general ideas formalized in section 2.2.1. Information about the algorithms related to logistic regression and neural networks can be found in sections 2.2.2 and 2.2.3, respectively. All machine learning was performed using the Weka 3.6.5 toolbox.
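The labeling scheme used to generate the training set can be sketched as below. The community extraction and the exact betweenness computation are assumed to be done elsewhere (e.g. with fast-unfolding community detection [16] and Brandes' algorithm [2]); the function only performs the ranking, the top-30% labeling and the 66/34 split, and its name, seed and toy data are illustrative.

```python
import random

def build_training_set(betweenness, sna_metrics,
                       top_fraction=0.30, train_fraction=0.66, seed=0):
    """betweenness: subscriber -> exact betweenness within the community.
    sna_metrics:  subscriber -> vector of the 9 SNA metrics (the attributes).
    The top 30% by betweenness are labeled influential (1), the rest 0, and
    the labeled examples are split 66/34 into training and validation sets."""
    ranked = sorted(betweenness, key=betweenness.get, reverse=True)
    cutoff = int(len(ranked) * top_fraction)
    labeled = [(sna_metrics[s], 1 if i < cutoff else 0)
               for i, s in enumerate(ranked)]
    random.Random(seed).shuffle(labeled)
    split = int(len(labeled) * train_fraction)
    return labeled[:split], labeled[split:]      # (training set, validation set)

# Tiny illustrative community of 5 subscribers with placeholder metric vectors.
betweenness = {"s1": 12.0, "s2": 7.5, "s3": 0.0, "s4": 3.2, "s5": 1.1}
metrics = {s: [i, i * 2] for i, s in enumerate(betweenness)}
train, validate = build_training_set(betweenness, metrics)
print(len(train), len(validate))   # 3 2
```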

4. EXPERIMENTS
A number of tests were performed to verify the scalability and accuracy of the solution.

4.1 Datasets

4.1.1 Real traffic data
Traffic data from a telecom operator with 2.4 million subscribers was used. The traffic data was collected during one month. Both weighted and unweighted graphs were used, as several SNA metrics are designed to be applied to weighted graphs whereas others are preferably used without weights. For a weighted graph the relations are differentiated by adding a weight that reflects the strength of the relation. The weights were calculated using the type, number, recency and duration of the transactions, e.g. telephone calls, between the subscribers.

4.1.2 Synthetic data
Synthetic data was also generated to test scalability for very large networks, i.e. up to 100 million nodes. A randomly generated graph and a graph generated with the Erdős-Rényi algorithm [17] were used. The random graph generator determines the degree of each vertex in the graph according to a uniform distribution over a customizable interval. Then, for each vertex, a number of other vertices equal to the predefined degree of the vertex is chosen as neighbors. Additionally, if the graph to be created is weighted, a random weight between 0 and 1 is picked. This may result in some overlap, as identical edges could be generated twice. Should that situation occur, the edge is created only once in the unweighted case, and the weights are added if the graph generated is weighted. The Erdős-Rényi graph generating algorithm is somewhat more elaborate; a detailed definition is available in [17]. For each pair of vertices in the graph, an edge connects them with a constant probability. In practice, a vertex is created and the connection test is performed against all previously created vertices. That way every pair is tested only once and double edge creation is avoided. Both of these algorithms are readily implemented in a parallel environment, as neither of them requires any information regarding the graph except the local environment within the graph. This is important if very large networks are to be generated.
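The two generators can be sketched as below for the unweighted case. The function names, seed handling and example parameters are illustrative assumptions; the Erdős-Rényi loop mirrors the description above, where each newly created vertex is tested once against every previously created vertex.

```python
import random

def random_degree_graph(n, d_min, d_max, seed=0):
    """Each vertex gets a target degree drawn uniformly from [d_min, d_max],
    then that many neighbors are picked at random; duplicate edges are merged
    (in the weighted case their weights would instead be added)."""
    rng = random.Random(seed)
    edges = set()
    for v in range(n):
        degree = rng.randint(d_min, d_max)
        for _ in range(degree):
            u = rng.randrange(n)
            if u != v:
                edges.add((min(u, v), max(u, v)))   # store each undirected edge once
    return edges

def erdos_renyi_graph(n, p, seed=0):
    """G(n, p): when vertex v is created, it is tested once against every
    previously created vertex, so each pair is considered exactly once."""
    rng = random.Random(seed)
    edges = set()
    for v in range(1, n):
        for u in range(v):
            if rng.random() < p:
                edges.add((u, v))
    return edges

print(len(erdos_renyi_graph(1000, 0.01)))   # close to 0.01 * 1000 * 999 / 2
```

Both loops depend only on locally available information, which is why, as noted above, they lend themselves to parallel generation of very large networks.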
4.2 System setup
For the Social Network Analysis algorithms that were implemented to execute in parallel, the Hadoop 0.20.2 framework was used. The cluster consisted of 11 computers. The master had 16 processors with 4 cores each, running at a clock rate of 2.67 GHz, and a total of 48 GB of RAM. The slaves had 2 processors, each with 2 cores running at 3 GHz, and 4 GB of RAM each. The Java 6 SE environment was used, and all nodes in the cluster ran Ubuntu 10.04.2 LTS. All machine learning was performed on a laptop using the Weka 3.6.5 toolbox. The operating system of that machine was Windows 7 32-bit and the Java 6 SE environment was used. The CPU had a clock rate of 2.56 GHz and the RAM summed up to 4 GB, of which 2.98 GB was available for use.

4.3 Experiments and results
This section presents the results of the experiments on the prototype.

4.3.1 Distribution of the computational time for the prototype
Figure 4 shows the distribution of the execution times between the different steps and metrics. The number of iterations for each step can also be seen. In general, it can be noted that most of the algorithms execute in the order of one hour or less. The exceptions are the iterative jobs, eigenvector and weighted eigenvector centrality, which both needed more than 24 hours for completion. This is partly due to the Hadoop platform generating a new MapReduce job for each iteration. In terms of the Hadoop framework, this involves storing the output of a job on the distributed file system and later reading and interpreting it as input for another job. The writing/reading procedure is quite time consuming and costly in terms of hard drive space.

Process                                                        Time (s)   Iterations
Filtering CDRs and creating unweighted, undirected graph           1031            1
Calculating unweighted degree and ego betweenness centrality       3787            1
Calculating unweighted eigenvector centrality                    126789          108
Filtering CDRs and creating weighted, directed graph               1851            1
Calculating weighted degree centrality                               31            1
Calculating weighted eigenvector centrality                      176929           40
Running VotedPerceptron                                              15            1
Total time                                                       310433
Total time excluding eigenvector centralities                      6715

Figure 4: Time needed to compute the different centrality measures using the computer cluster described in section 4.2. The input data was generated in accordance with the Erdős-Rényi algorithm.

4.3.2 Results and performance of the Social Network Analysis algorithms
The prototype had in total 9 SNA algorithms implemented. They are variations of the centrality methods mentioned in section 2.1: total degree, in-degree, out-degree and eigenvector centrality, each calculated for both weighted and unweighted graphs, plus ego betweenness centrality, calculated only for the unweighted case (cf. section 3.2). The output from the SNA step, figure 3, is a list with the values of all the different SNA metrics for each subscriber.

The execution time was measured in relation to the number of processing computers and to the number of vertices. The distribution of the total time between the different phases was also analyzed. The data described in section 4.1 was used; synthetic data was used for the tests with more than 2.4 million vertices. The scalability for the different SNA metrics is shown in figure 5. Overall, all SNA algorithms except betweenness centrality scale linearly with the number of vertices in the network. Due to its iterative nature, the eigenvector algorithm showed a high complexity, has proven less suitable for the Hadoop platform, and was not tested on a network larger than 2.4 million vertices. This is further described in section 4.3.1. Both ego betweenness and all versions of degree centrality could be executed in a few minutes for a 100 million vertex network.

Figure 5: Execution time for the SNA algorithms, running on the full cluster.

Figure 6 shows the scalability related to the number of executing processors in the Hadoop cluster. Data from a 10 million vertex network, generated with the Erdős-Rényi algorithm, was used, section 4.1. The cluster relative speed-up shows that the speed-up for ego betweenness is slightly higher than linear, whereas for degree centrality it is somewhat lower. The squares connected by the black line show linear speed-up for comparison.

Figure 6: Scalability of the cluster: execution time (s) and relative speed-up versus the number of cores, for degree centrality and ego betweenness centrality, with a linear speed-up reference.

4.3.3 Aggregation of metrics using supervised Machine Learning (ML)
Three alternative supervised machine learning methods were tested: decision trees, neural networks and logistic regression. To estimate the quality of the different machine learning methods, a few standard performance measures were calculated: Accuracy, Precision, Recall and F-value. Accuracy is the percentage of correctly classified instances. Precision is the number of correctly classified instances of the positive class, in this case the influential subscribers, divided by the total number of positively classified instances. Recall is the number of correctly classified instances of the positive class divided by the total number of instances of the positive class in the validation set. Lastly, the F-value is a combination of Precision and Recall, which can be expressed as

F = \frac{2PR}{P + R}    (2)

where P is the precision and R is the recall. Figure 7 shows accuracy, precision, recall and F-value for the three tested ML methods. The results show that the different machine learning algorithms have approximately the same ability to classify the subscribers correctly. The VotedPerceptron algorithm achieved the best F-value (0.546). The result needs to be improved, but with a larger and more accurately retrieved training set the quality would probably be considerably improved.

Figure 7: Quality of the tested ML methods.

4.3.4 Correlation between SNA metrics
The information gain from each SNA metric was analyzed by checking the correlation between the different metrics, both with weighted and unweighted edges. Figure 8 shows how the 9 social network analysis metrics correlate for the CDR data set generated from a telecom network. The correlations are normalized, implying that the upper limit is 1. Note that ego betweenness centrality does not correlate well with any of the other metrics, whereas weighted degree centrality has quite a pronounced correlation with the other metrics. Additionally, a close correlation of 0.9995 can be seen between degree centrality and eigenvector centrality.

Figure 8: Correlation between the 9 different social network analysis metrics, calculated for an actual CDR data set generated from a telecommunication network. DC = Degree Centrality, EBC = Ego Betweenness Centrality, iDC = In-Degree Centrality, oDC = Out-Degree Centrality, EC = Eigenvector Centrality; w indicates weighted.

5. DISCUSSION
The conclusion from this study is that machine learning can be used efficiently to create a model for how to aggregate the different metrics into one value determining the level of influence of each subscriber. Hadoop is a suitable platform for social network analysis algorithms, with the exception of iterative algorithms. For example, the eigenvector algorithm does not scale well on Hadoop due to its iterative nature. Our tests also show that the eigenvector metric correlates closely with the degree centrality metric. This means that eigenvector centrality can be excluded without reducing the accuracy of the aggregated influence. Degree centrality and ego betweenness scale almost linearly on the Hadoop cluster.

The training set that was used was less than 1% of the entire graph. This size may be too small to be a good representation of the entire graph. The method used for generating the training set, exact betweenness, may also be replaced by a more suitable method that gives a more accurate level of influence. The tests show that the different machine learning algorithms have approximately the same ability to classify the subscribers correctly. The VotedPerceptron algorithm [8] achieved the best F-value (0.546). There are two possibilities for improvement: adding more SNA metrics and/or providing a better training set that better matches the attributes. A few examples of SNA metrics that could be added are LineRank [18], Eccentricity Centrality [19] and the Shapley Value [20].

6. RELATED WORK
Paper [21] describes a scalable solution for SNA in telecom networks. That solution focused on efficient partitioning and did not utilize MapReduce technology. Other work on distributed computation of graphs includes Pregel [12], graph twiddling in a MapReduce world [22], DisNet [23], SNAP [24] and NetworkX [25].

7. REFERENCES
[1] M. E. J. Newman, Analysis of Weighted Networks. Physical Review, 70, 056131, 2005.
[2] Ulrik Brandes, A Faster Algorithm for Betweenness Centrality. Journal of Mathematical Sociology, 25(2):163-177, 2001.
[3] Martin Everett and Stephen P. Borgatti, Ego network betweenness. Social Networks, 27:31-38, 2005.
[4] J. R. Quinlan, Induction of Decision Trees. Machine Learning, 81-106, 1986.
[5] J. R. Quinlan, C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers, 1993.
[6] Tom M. Mitchell, Machine Learning. McGraw-Hill, 1997.
[7] David W. Hosmer and Stanley Lemeshow, Applied Logistic Regression. John Wiley & Sons, 2000.
[8] Y. Freund and R. E. Schapire, Large margin classification using the perceptron algorithm. 11th Annual Conference on Computational Learning Theory, New York, NY, 209-217, 1998.
[9] F. Rosenblatt, The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65, 386-407, 1958. (Reprinted in Neurocomputing, MIT Press, 1988.)
[10] Jeffrey Dean and Sanjay Ghemawat, MapReduce: Simplified Data Processing on Large Clusters. Proceedings of OSDI '04: 6th Symposium on Operating System Design and Implementation, 2004.
[11] Course page by Jimmy Lin, Data-Intensive Information Processing Applications. Retrieved 7 November 2011, http://www.umiacs.umd.edu/jimmylin/cloud-2010-Spring/syllabus.html.
[12] Grzegorz Malewicz, Matthew H. Austern, Aart J. C. Bik, James C. Dehnert, Ilan Horn, Naty Leiser and Grzegorz Czajkowski, Pregel: A System for Large-Scale Graph Processing. PODC '09: Proceedings of the 28th ACM Symposium on Principles of Distributed Computing, 2009.
[13] Matei Zaharia, Mosharaf Chowdhury, Michael J. Franklin, Scott Shenker and Ion Stoica, Spark: Cluster Computing with Working Sets. HotCloud '10: Proceedings of the 2nd USENIX Conference on Hot Topics in Cloud Computing, 2010.
[14] Sangwon Seo and Edward J. Yoon, HAMA: An Efficient Matrix Computation with the MapReduce Framework. 2010 IEEE Second International Conference on Cloud Computing Technology and Science (CloudCom), 2010.
[15] Jimmy Lin and Michael Schatz, Design Patterns for Efficient Graph Algorithms in MapReduce. MLG '10: Proceedings of the Eighth Workshop on Mining and Learning with Graphs, 2010.
[16] Vincent D. Blondel, Jean-Loup Guillaume, Renaud Lambiotte and Etienne Lefebvre, Fast unfolding of communities in large networks. Journal of Statistical Mechanics, P10008, 2008.
[17] P. Erdős and A. Rényi, On The Evolution of Random Graphs. Publications of the Mathematical Institute of the Hungarian Academy of Sciences, 1960.
[18] U Kang, Spiros Papadimitriou, Jimeng Sun and Hanghang Tong, Centralities in Large Networks: Algorithms and Observations. SIAM / Omnipress, 119-130, 2011.
[19] Katarzyna Musiał, Przemysław Kazienko and Piotr Bródka, User Position Measures in Social Networks. SNA-KDD '09: Proceedings of the 3rd Workshop on Social Network Mining and Analysis, 2009.
[20] Ramasuri Narayanam and Y. Narahari, A Shapley Value Based Approach to Discover Influential Nodes in Social Networks. Nature Physics 8:130-147, 2011.
[21] Fredrik Hildorsson, Scalable Solutions for Social Network Analysis. DiVA portal, 2009.
[22] Cohen, Graph twiddling in a MapReduce world. Computing in Science and Engineering, vol. 11, 29-41, 2009.
[23] Ryan Lichtenwalter and Nitesh V. Chawla, DisNet: A Framework for Distributed Graph Computation. ASONAM Conference, 263-270, 2011.
[24] David A. Bader and Kamesh Madduri, SNAP, Small-world Network Analysis and Partitioning: an open-source parallel graph framework for the exploration of large-scale networks.
[25] NetworkX: data structures for graphs, along with graph algorithms, generators and drawing tools, http://www.networkx.lanl.gov.