Big Data Platform of a System Recommendation in Cloud Environment




International Journal of Software Engineering and Its Applications, Vol. 9, No. 12 (2015), pp. 133-142
http://dx.doi.org/10.14257/ijseia.2015.9.12.12

Jinhong Kim, College of Information and Communication Engineering, Sungkyunkwan University
Sung-Tae Hwang (corresponding author), College of Engineering, Hansung University
{jinhkm, cosmosth}@gmail.com

Abstract

Cloud computing is one of the emerging technologies. This paper outlines cloud computing and its features and considers cloud computing for machine learning and data mining. The goal of the paper is to develop a recommendation and search system using a big data platform in a cloud environment. The main focus is on the study of Hadoop, one of the new technologies used in the cloud for scalable batch processing, and on the HBase data model, a scalable database built on top of the Hadoop file system. Accordingly, the project covered the design, analysis and implementation phases of a search and recommendation system for staffing purposes, following an action research method.

Keywords: Cloud Computing, MapReduce, Hadoop, Recommendation Algorithm

1. Introduction

In the present-day software era, ever-growing data demands elastically scalable data centers that can be accessed conveniently, securely and with high quality. This demand has led to cloud computing becoming one of today's emerging technologies. Cloud computing moves computer services, software and data storage away from locally hosted infrastructure and into cloud-based solutions [1-3]. The ability of cloud services to provide an apparently unlimited supply of computing power on demand has caught the attention of industry as well as academia. Over the past few years, software providers have been moving more and more applications to the cloud, and several large companies such as Amazon, Google, Microsoft, Yahoo and IBM already rely on it.

Today, forward-thinking business leaders use the cloud within their enterprise data centers to take advantage of the best practices that cloud computing has established, namely scalability, agility, automation and resource sharing. By using a cloud-enabled application platform, companies can choose a hybrid approach to cloud computing that builds on an organization's existing infrastructure to launch new cloud-enabled applications. This hybrid approach allows IT departments to focus on innovation for the business, reducing both capital and operational costs and automating the management of complex technologies. Cloud services claim to provide nearly everything needed to run a project without owning any IT infrastructure [4-6]. From e-mail and Web hosting to fully managed applications, resources can be provided on demand, which helps to reduce development and hardware costs.

Large corporations are regularly faced with the problem of staffing new projects. Assigning people to projects based on their lists of competences and the project requirements calls for complex queries, and cloud computing can provide a scalable way of answering them [7-9]. The goal of this paper is to gain an understanding of cloud computing, including the Hadoop Distributed File System (HDFS), HBase data storage and MapReduce, and to develop and implement a recommendation and search system for staffing using cloud computing. The system performs competence mining and is based on cloud technology using Hadoop. The study uses Web-based competence data as the input for searching for and recommending the required profile, and considers only records that contain all the required information.

2. Conceptual Background

Cloud computing is essentially services-on-demand over the Internet. It is a natural evolution of the widespread adoption of virtualization, service-oriented architecture and utility computing [10].

1) Cloud computing is a computing capability that provides an abstraction between the computing resource and its underlying technical architecture (e.g., servers, storage, networks), enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction.

2) There is no formal definition commonly shared in industry, unlike for Web 2.0; it is broadly defined as the on-demand provisioning of applications, resources and services that can be scaled up and down.

3) Clouds are a large pool of easily usable and accessible virtualized resources (such as hardware, development platforms and/or services). These resources can be dynamically reconfigured to adjust to a variable load, also allowing for optimal resource utilization.

Cloud computing brings together a number of technologies such as virtualization, Web services, service-oriented architecture, Web 2.0 and Web mashups. One of the key new technologies used in the cloud is the scalable batch processing system [11-13]. The main reference implementation is Google MapReduce and its open-source counterpart Apache Hadoop, originally developed at Yahoo. Distribution and scalability play important roles in the cloud, and Hadoop provides both. Hadoop is a cloud computing framework created to deal with the growing demands of modern computing and the storage of massive amounts of data. Many companies, especially ones that operate demanding websites, e.g., Amazon, Facebook, Yahoo, eHarmony and eBay, use Hadoop.

3. Recommendable Search Algorithms

A recommendation algorithm analyzes the data of a particular problem to find the items a user is looking for and to produce a predicted likeliness score or a list of the top N recommended items for a given user. Different algorithms can be used depending on the use case [14]. In this paper, the K-nearest neighbor algorithm is used to find a similar replacement for a person leaving a project, and a user-based collaborative filtering algorithm is used to predict missing competences.

3.1. K-Nearest Neighbor Algorithm (KNN Algorithm)

The K-nearest neighbor algorithm (KNN) is a supervised learning method that has been used in many applications in data mining, statistical pattern recognition and other fields. KNN classifies objects based on the closest training examples in the feature space: an object is assigned the class held by the majority of its K neighbors, where K is a positive integer and the neighbors are taken from a set of objects whose correct classification is known. The Euclidean distance is usually used, although other distance measures can be used instead. One advantage of KNN is that it is well suited to multi-modal classes, because its classification decision is based on a small neighborhood of similar objects [15-18]. So even if the target class is multi-modal (i.e., it consists of objects whose independent variables have different characteristics for different subsets), it can still achieve good accuracy. A major drawback of the similarity measure used in KNN is that all features are weighted equally when computing similarities. This can lead to poor similarity measures and classification errors when only a small subset of the features is useful for classification.
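To make the neighbor-vote mechanism concrete, the following minimal Java sketch performs KNN classification with Euclidean distance and a majority vote over the K closest labeled examples. It is an illustrative sketch under assumed names (KnnClassifier, an array-based data layout), not the paper's implementation.

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

/** Minimal K-nearest-neighbor classifier sketch (illustrative only). */
public class KnnClassifier {

    /** Euclidean distance between two feature vectors. */
    static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    /** Classify a query point by a majority vote among its k nearest labeled training examples. */
    static String classify(double[][] trainX, String[] trainY, double[] query, int k) {
        // Sort training indices by distance to the query point.
        Integer[] idx = new Integer[trainX.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        Arrays.sort(idx, (a, b) -> Double.compare(distance(trainX[a], query), distance(trainX[b], query)));

        // Count class labels among the k closest neighbors and return the majority class.
        Map<String, Integer> votes = new HashMap<>();
        for (int i = 0; i < k && i < idx.length; i++) {
            votes.merge(trainY[idx[i]], 1, Integer::sum);
        }
        return votes.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .get().getKey();
    }

    public static void main(String[] args) {
        double[][] trainX = { {1, 1}, {1, 2}, {8, 8}, {9, 8} };
        String[] trainY = { "junior", "junior", "senior", "senior" };
        System.out.println(classify(trainX, trainY, new double[] {2, 1}, 3));   // prints "junior"
    }
}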

3.2. User-based Collaborative Filtering Algorithm

A user-based collaborative filtering algorithm produces a recommendation list for the target user according to the views of other users. The assumption is that users with similar preferences rate products similarly, so missing ratings for a user can be predicted by first finding a neighborhood of similar users and then aggregating the ratings of those users to form a prediction. User rating data can be represented as a matrix A(m, n), where m is the number of users and n is the number of items [19]. The element R(i, j) (at the i-th row and j-th column) is the rating of item j given by user i. Cosine similarity can be used to measure the similarity between users i and j by treating their rating vectors R_i and R_j as vectors in item space:

sim(i, j) = cos(R_i, R_j) = (R_i · R_j) / (||R_i|| * ||R_j||)

The rating of item i by user u is then predicted from the ratings given by u's nearest-neighbor set NBS_u, where P(u, i) denotes the predicted rating of item i by user u and sim(u, n) is the similarity between users u and n:

P(u, i) = ( Σ_{n in NBS_u} sim(u, n) * R(n, i) ) / ( Σ_{n in NBS_u} |sim(u, n)| )

According to these predicted ratings, the N items with the highest ratings are selected to compose a recommendation set, which is then recommended to the target user.
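As a concrete illustration of the cosine-similarity computation and the neighborhood-weighted prediction described above, the following minimal Java sketch operates on an in-memory rating matrix. It is not taken from the paper's implementation; the names (CollaborativeFiltering, cosineSimilarity, predictRating) and the use of a plain double[][] matrix with 0 denoting a missing rating are illustrative assumptions.

import java.util.Arrays;

/** Minimal user-based collaborative filtering sketch (illustrative only). */
public class CollaborativeFiltering {

    /** Cosine similarity between the rating vectors of users i and j. */
    static double cosineSimilarity(double[][] r, int i, int j) {
        double dot = 0, normI = 0, normJ = 0;
        for (int k = 0; k < r[i].length; k++) {
            dot += r[i][k] * r[j][k];
            normI += r[i][k] * r[i][k];
            normJ += r[j][k] * r[j][k];
        }
        return (normI == 0 || normJ == 0) ? 0 : dot / (Math.sqrt(normI) * Math.sqrt(normJ));
    }

    /** Predict user u's rating of an item as the similarity-weighted average over up to k neighbors. */
    static double predictRating(double[][] r, int u, int item, int k) {
        // Rank all users by similarity to u (descending).
        Integer[] users = new Integer[r.length];
        for (int n = 0; n < r.length; n++) users[n] = n;
        Arrays.sort(users, (a, b) -> Double.compare(cosineSimilarity(r, u, b), cosineSimilarity(r, u, a)));

        double weighted = 0, weightSum = 0;
        int taken = 0;
        for (int n : users) {
            if (n == u || r[n][item] == 0) continue;   // skip the user itself and non-raters of the item
            double sim = cosineSimilarity(r, u, n);
            weighted += sim * r[n][item];
            weightSum += Math.abs(sim);
            if (++taken == k) break;                   // use at most k neighbors (NBS_u)
        }
        return weightSum == 0 ? 0 : weighted / weightSum;
    }

    public static void main(String[] args) {
        // Rows are users, columns are items; 0 means "not rated".
        double[][] ratings = {
            {5, 3, 0, 1},
            {4, 0, 0, 1},
            {1, 1, 0, 5},
            {1, 0, 0, 4},
        };
        // Predict user 1's missing rating of item 1 from its 2 most similar raters.
        System.out.println(predictRating(ratings, 1, 1, 2));
    }
}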
3.3. REST Services

REST is an architectural style based on Web standards and the HTTP protocol. In a REST-based architecture everything is a resource, and a resource is accessed via a common interface based on the standard HTTP methods. A REST server provides access to the resources, and a REST client accesses and modifies them; every resource should support the standard HTTP operations. Resources are identified by global IDs, which are typically URIs. REST allows resources to have different representations, e.g., text, XML or JSON [20], and the client can ask for a specific representation via the HTTP protocol (content negotiation). The HTTP methods typically used in REST are PUT, GET, POST and DELETE.
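To make the interface concrete, the short Java sketch below issues a GET request for a single profile resource and then a DELETE request to remove it, using the standard java.net.http client available since Java 11. The host, port and resource path (/profiles/42) are hypothetical; the paper does not specify the service's URI layout.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ProfileRestClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // GET one profile resource, asking for a JSON representation (content negotiation).
        HttpRequest get = HttpRequest.newBuilder(URI.create("http://localhost:8080/profiles/42"))
                .header("Accept", "application/json")
                .GET()
                .build();
        HttpResponse<String> response = client.send(get, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + ": " + response.body());

        // DELETE the same resource.
        HttpRequest delete = HttpRequest.newBuilder(URI.create("http://localhost:8080/profiles/42"))
                .DELETE()
                .build();
        System.out.println(client.send(delete, HttpResponse.BodyHandlers.discarding()).statusCode());
    }
}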

4. Big Data Platform (BDP)

Hadoop comes with a distributed filesystem called HDFS (Hadoop Distributed Filesystem). HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware. It provides high-throughput access to application data and is suitable for applications that have large data sets [21-23]. The Hadoop filesystem is designed for storing very large files, up to petabytes of data, with streaming data access, following the idea that the most efficient data processing pattern is write-once, read-many-times. HDFS stores metadata on a dedicated server called the NameNode, while application data are stored on other servers called DataNodes. All the servers are fully connected and communicate with each other using TCP-based protocols.

4.1. Architecture of BDP

HDFS is based on a master/slave architecture. To read a file, an HDFS client first contacts the NameNode, which returns the list of addresses of the DataNodes that hold a copy of each block of the file. The client then connects directly to the closest DataNode for each block and requests the transfer of the desired block. To write a file, the HDFS client first creates an empty file without any blocks; file creation is only possible when the client has write permission and the file does not already exist in the system. The NameNode records the new file creation and allocates data blocks to a list of suitable DataNodes that will host replicas of the first block of the file. Replication arranges these DataNodes in a pipeline. When the first block is filled, new DataNodes are requested to host replicas of the next block, a new pipeline is organized, and the client sends the further bytes of the file. The choice of DataNodes is likely to be different for each block [24].
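The read and write paths described above are what the standard Hadoop FileSystem client API drives under the hood. The following sketch simply writes and re-reads a small file through that API; the configuration is taken from the classpath, and the path /staffing/competence.csv is a hypothetical example, not a path used by the paper.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReadWriteExample {
    public static void main(String[] args) throws Exception {
        // Picks up core-site.xml / hdfs-site.xml from the classpath, so fs.defaultFS points at the NameNode.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path file = new Path("/staffing/competence.csv");   // hypothetical path

        // Write path: the client asks the NameNode to create the file, then streams data to a DataNode pipeline.
        try (FSDataOutputStream out = fs.create(file, true)) {
            out.write("id,role,competence\n".getBytes(StandardCharsets.UTF_8));
        }

        // Read path: the client asks the NameNode for block locations, then reads from the closest DataNodes.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(fs.open(file), StandardCharsets.UTF_8))) {
            System.out.println(in.readLine());
        }

        fs.close();
    }
}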
An HDFS cluster consists of a single NameNode (the master) and a number of DataNodes (the slaves). The NameNode and DataNodes are pieces of software designed to run on commodity machines, which typically run a GNU/Linux operating system. Because HDFS is written in the highly portable Java language, it can be deployed on a wide range of machines. A typical deployment has a dedicated machine that runs only the NameNode software, while each of the other machines in the cluster runs one instance of the DataNode software. The architecture does not preclude running multiple DataNodes on the same machine, but in a real deployment one machine usually runs one DataNode. The existence of a single NameNode in a cluster greatly simplifies the architecture of the system: the NameNode is the arbitrator and repository for all HDFS metadata, and the system is designed so that user data never flow through the NameNode.

1) CheckpointNode - The Checkpoint node periodically downloads the latest fsimage and edits files from the active NameNode, merges them locally to create a checkpoint, and uploads the new checkpoint back to the active NameNode. This requires the same amount of memory as the NameNode, so the Checkpoint node needs to run on a separate machine. Namespace information is lost if either the checkpoint or the journal is missing, so it is highly recommended to configure HDFS to store the checkpoint and the journal in multiple storage directories [25]. The Checkpoint node uses the parameter fs.checkpoint.period to set the interval between two consecutive checkpoints, in seconds (default 3600 seconds). The edit log size that triggers a checkpoint is specified by the parameter fs.checkpoint.size (default 64 MB); a checkpoint is triggered when the edit log exceeds this size. Multiple Checkpoint nodes may be specified in the cluster configuration file.

2) BackupNode - The Backup node has the same functionality as the Checkpoint node. In addition, it maintains an in-memory, up-to-date copy of the file system namespace that is always synchronized with the active NameNode state. Along with accepting a journal stream of filesystem edits from the NameNode and persisting it to disk, the Backup node also applies those edits to its own copy of the namespace in memory, thus creating a backup of the namespace.

DataNode-NameNode connections are established by a handshake in which the namespace ID and the software version of the DataNode are verified. The namespace ID is assigned to the file system instance when it is formatted and is stored persistently on all nodes of the cluster; a node with a different namespace ID cannot join the cluster. A new DataNode without any namespace ID can join the cluster, receive the cluster's namespace ID, and register with the NameNode under a storage ID. A DataNode identifies the block replicas in its possession to the NameNode by sending a block report, which contains the block ID, the generation stamp and the length of each block replica the server hosts. The first block report is sent immediately after the DataNode registers; subsequent block reports are sent every hour and provide the NameNode with an up-to-date view of where block replicas are located on the cluster.
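The checkpoint parameters mentioned above are ordinary Hadoop configuration properties. A minimal sketch of how they might be set programmatically through the Hadoop Configuration API is shown below; in a real deployment they would normally be placed in the site configuration files instead, and the values shown are simply the defaults quoted in the text.

import org.apache.hadoop.conf.Configuration;

public class CheckpointSettings {
    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // Interval between two consecutive checkpoints, in seconds (default 3600 s = 1 hour).
        conf.setLong("fs.checkpoint.period", 3600L);

        // Edit log size that forces a checkpoint even if the period has not elapsed (default 64 MB).
        conf.setLong("fs.checkpoint.size", 64L * 1024 * 1024);

        System.out.println("fs.checkpoint.period = " + conf.get("fs.checkpoint.period"));
        System.out.println("fs.checkpoint.size   = " + conf.get("fs.checkpoint.size"));
    }
}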

4.2. Data Model of BDP

HBase uses a data model similar to that of Google BigTable. Applications keep rows of data in labeled tables. Each row has a sort key and an arbitrary number of columns; table row keys are byte arrays, and the rows of a table are sorted by the row key, which is also the table's primary key [26]. Rows of the same table can have different numbers of columns. A column name has the form "<family>:<label>", where the label can be an arbitrary string of bytes. A table is created with a set of <family> names, known as column families; all columns of a family share the same prefix. The family name must be a string, whereas the label can be any array of bytes. Columns can be updated by adding new column family members as requested by the client, and a new <label> can be used in any write operation without any previous specification. HBase stores column families physically grouped on disk, so the items in a given column family have the same read/write characteristics and contain similar data.

By default only a single row may be locked at a time. Row writes are always atomic, but a single row may also be locked explicitly in order to perform both read and write operations on that row atomically. Recent versions allow locking several rows if the option has been explicitly activated. The HBase data is modeled as a multidimensional map in which the values (the table cells) are indexed by four keys:

value = Map(TableName, RowKey, ColumnKey, Timestamp)

where: i) TableName is a string; ii) RowKey and ColumnKey are binary values (Java byte[]); iii) Timestamp is a 64-bit integer (Java long); iv) value is an uninterpreted array of bytes (Java byte[]). Binary data is encoded in Base64 for transmission over the wire [27]. The row key is the primary key of the table and is typically a string. Rows are sorted by row key in lexicographic order.
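The following sketch shows what this model looks like through the standard HBase client API: one cell write and one cell read addressed by (table, row key, column family:label, timestamp). The table name "profile" and the family/label names are hypothetical; the sketch assumes a reasonably recent HBase client (Put.addColumn / Result.getValue) and an already existing table.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseDataModelExample {
    public static void main(String[] args) throws Exception {
        try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = connection.getTable(TableName.valueOf("profile"))) {   // hypothetical table

            // Write one cell: (row key "emp-0042", column "competence:java", value "expert").
            Put put = new Put(Bytes.toBytes("emp-0042"));
            put.addColumn(Bytes.toBytes("competence"), Bytes.toBytes("java"), Bytes.toBytes("expert"));
            table.put(put);

            // Read it back; the latest timestamped version of the cell is returned by default.
            Result result = table.get(new Get(Bytes.toBytes("emp-0042")));
            byte[] value = result.getValue(Bytes.toBytes("competence"), Bytes.toBytes("java"));
            System.out.println(Bytes.toString(value));
        }
    }
}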

5. Recommendation System (RS)

In large software organizations, successful products are one of the most important aspects of organizational achievement. Team formation directly affects the performance of the team and the quality of product development, so staffing is an important issue to analyze when software development is undertaken as a value-driven business. Creating competence-based competitiveness is a great challenge: competence identification and its management help in innovation, decision support, faster processes and product quality improvement, and constitute an important input to the creation of organizational knowledge. Finding a value-driven developer-to-project assignment is a complex task, and predicting the compatibility of a person with other team members is even more complex. The existing approach only matches a person's competences against the job's competence requirements; it does not consider the person's fit with the team in terms of interpersonal compatibility, which is equally important when selecting a team member for a particular project. The overall architecture is shown in Figure 1.

Figure 1. Recommendation System Architecture

To develop such a smart, scalable recommendation and search system, a cloud cluster of four servers was configured. The Hadoop framework was chosen to provide scalability and efficient data mining, and HBase, a distributed column-oriented database on top of Hadoop, was chosen for storing data. The recommendation is based on the KNN algorithm to find a replacement for a person leaving a project. Competence data and project data were the two input files for processing; both were loaded into HDFS in CSV format. The lists of competences were grouped on the basis of target use, i.e., where the competences can be used [28]. MapReduce implements this logic of clustering the competence lists into groups of competences.

The MapReduce job reads the input files from HDFS, processes them, and creates a table in HBase to store the output data in table format. The data mining step involves finding the distance between people based on their competences. REST services provide access to the output results stored in the HBase tables and also support basic CRUD operations on the HBase table: new competence profiles can be created, existing profiles can be updated and deleted, and either all profiles or a single profile at a time can be retrieved.

Competence data and project data were saved into HBase tables by using MapReduce. The competence and project data files were loaded as CSV files into the Hadoop distributed file system (HDFS). The MapReduce job takes these files from HDFS as input for the mapper class, which maps each row to intermediate <key,value> pairs. The reducer then combines the intermediate pairs that share the same key to produce the reduced output, and finally loads this reduced output by creating a table in HBase and writing the data into it. A snippet of the source code that maps an input CSV file to intermediate <key,value> pairs in the MapReduce framework is shown below.

/*
 * This method is called once for every row of the CSV file.
 * @param key the key (byte offset of the line in the file)
 * @param line the line read from the CSV file
 * @param context the context to which the intermediate pairs are written
 * @throws IOException signals that an I/O exception has occurred
 * @throws InterruptedException the interrupted exception
 */
@Override
public void map(LongWritable key, Text line, Context context)
        throws IOException, InterruptedException {
    // rowsWritten is a field of the mapper class (not shown in the snippet).
    if (rowsWritten == 0) {
        rowsWritten = 1;
        return;   // skip the first row, as it is a "header" row
    }

    // Save the information from the CSV row into a map. The map item's
    // key is the column index; the output key is the ID value of the CSV row.
    String[] values = line.toString().split(ProjectConstants.SEPARATORCHAR);
    MapWritable map = new MapWritable();
    map.put(new IntWritable(ProjectConstants.PROJECTSTARTDATEINDEX),
            new Text(values[ProjectConstants.PROJECTSTARTDATEINDEX]));
    map.put(new IntWritable(ProjectConstants.PROJECTENDDATEINDEX),
            new Text(values[ProjectConstants.PROJECTENDDATEINDEX]));
    map.put(new IntWritable(ProjectConstants.ROLEINDEX),
            new Text(values[ProjectConstants.ROLEINDEX]));
    // (A fourth column is also written here in the original; its constant name is garbled in the source text.)

    ImmutableBytesWritable outKey = new ImmutableBytesWritable(
            Bytes.toBytes(values[ProjectConstants.IDINDEX]));
    context.write(outKey, map);
}
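The paper does not list the reducer's source, but the step it describes, combining the intermediate pairs for one key and writing them into an HBase table, is conventionally done with HBase's TableReducer. The sketch below is an assumed counterpart to the mapper above, not the authors' code: it takes each row ID with its MapWritable of column values and emits one HBase Put into a "data" column family (the family and qualifier naming are illustrative).

import java.io.IOException;
import java.util.Map;

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.MapWritable;
import org.apache.hadoop.io.Writable;

/** Reduces the mapper's (rowId, MapWritable) pairs into HBase Puts (illustrative sketch). */
public class CompetenceTableReducer
        extends TableReducer<ImmutableBytesWritable, MapWritable, ImmutableBytesWritable> {

    private static final byte[] FAMILY = Bytes.toBytes("data");   // assumed column family

    @Override
    protected void reduce(ImmutableBytesWritable rowId, Iterable<MapWritable> values, Context context)
            throws IOException, InterruptedException {
        Put put = new Put(rowId.get());
        for (MapWritable columns : values) {
            // Each map entry becomes one cell: qualifier = column index, value = column text.
            for (Map.Entry<Writable, Writable> entry : columns.entrySet()) {
                put.addColumn(FAMILY,
                        Bytes.toBytes(entry.getKey().toString()),
                        Bytes.toBytes(entry.getValue().toString()));
            }
        }
        context.write(rowId, put);   // the output table is configured in the job driver
    }
}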

6. Recommendation System Development

As mentioned with the recommendable search algorithms and the system architecture, in large software organizations successful products are one of the most important aspects of organizational achievement. Team formation directly affects the performance of the team and the quality of product development, and staffing is an important issue to analyze when software development is undertaken as a value-driven business. Creating competence-based competitiveness is a great challenge because competence identification and its management help in innovation, decision support, faster processes and product quality improvement, and constitute an important input to the creation of the firm's organizational knowledge. Finding a value-driven developer-to-project assignment is a complex task, and predicting the compatibility of a person with other team members is even more complex.

To develop such a smart, scalable recommendation and search system, a cloud cluster of four servers was configured. The Hadoop framework was chosen to provide scalability and efficient data mining, and HBase, a distributed column-oriented database on top of Hadoop, was chosen for storing data [29]. The Hadoop engine was deployed over the four-node cluster, and the Java runtime was set up to use the common Hadoop configuration, as specified by the NameNode (master node) of the cluster. The recommendation is based on the KNN algorithm to find a replacement for a person leaving a project. Competence data and project data were the two input files for processing; both were loaded into HDFS in CSV format. The lists of competences were grouped on the basis of target use, i.e., where the competences can be used, and MapReduce implements this logic of clustering the competence lists into groups of competences.

The MapReduce job reads the input files from HDFS, processes them, and creates a table in HBase to store the output data in table format. The data mining step involves finding the distance between people based on their competences. REST services provide access to the output results stored in the HBase tables and also support basic database operations on the HBase table: new competence profiles can be created, existing profiles can be updated and deleted, and either all profiles or a single profile at a time can be retrieved. Competence data and project data were saved into HBase tables by using MapReduce. As shown in Figure 2, the competence and project data files were loaded as CSV files into the Hadoop distributed file system (HDFS). The MapReduce job takes these files from HDFS as input for the mapper class, which maps each row to intermediate <key,value> pairs. The reducer then combines the intermediate pairs that share the same key to produce the reduced output, and finally loads this reduced output by creating a table in HBase and writing the data into it.

Figure 2. Competence Data Load by MapReduce
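A job set up this way is normally tied together by a small driver that points the mapper at the CSV input in HDFS and wires the reducer to the HBase output table via TableMapReduceUtil. The driver below is an assumed sketch under those conventions, not the authors' code: the class names CompetenceCsvMapper and CompetenceTableReducer, the input path and the table name "competence" are illustrative.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.io.MapWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class CompetenceLoadDriver {
    public static void main(String[] args) throws Exception {
        // The HBase configuration also carries the Hadoop configuration of the cluster.
        Job job = Job.getInstance(HBaseConfiguration.create(), "competence-csv-to-hbase");
        job.setJarByClass(CompetenceLoadDriver.class);

        // Mapper: reads CSV lines from HDFS and emits (rowId, MapWritable) pairs.
        job.setMapperClass(CompetenceCsvMapper.class);          // assumed name for the mapper shown in Section 5
        job.setMapOutputKeyClass(ImmutableBytesWritable.class);
        job.setMapOutputValueClass(MapWritable.class);
        job.setInputFormatClass(TextInputFormat.class);
        FileInputFormat.addInputPath(job, new Path("/staffing/competence.csv"));  // hypothetical path

        // Reducer: writes the combined rows into the (pre-created) HBase table "competence".
        TableMapReduceUtil.initTableReducerJob("competence", CompetenceTableReducer.class, job);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}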

7. Conclusions

Hadoop is an open-source cloud computing implementation and provides the necessary features of scalability, reliability and fault tolerance. As one of its goals, this project explored Hadoop and its architecture and how HDFS supports cloud computing features to deliver cloud services. HBase is an open-source, non-relational, distributed database, also known as the Hadoop database, which provides random, real-time read/write access to Big Data. The main goal of the project was to build a recommendation and search system for staffing using the cloud-based technology Hadoop. Hadoop was therefore installed on a four-node cluster, and HBase was installed on top of Hadoop to provide scalable data storage. In a competitive market, a software development organization's major goal is to maximize value creation for a given investment. Resource management is therefore a very crucial factor in enterprise-level value maximization, and an efficient staffing system is required to balance project technical needs against organizational constraints.

Acknowledgments

This research was financially supported by Hansung University.

References

[1] L. Zheng, Y. Wang, J. Qi and D. Liu, "Research and Improvement of Personalized Recommendation Algorithm Based on Collaborative Filtering", IJCSNS International Journal of Computer Science and Network Security, vol. 7, no. 7, (2007) July.
[2] T. Bogers and A. van den Bosch, "Collaborative and Content-based Filtering for Item Recommendation on Social Bookmarking Websites", Proceedings of the ACM RecSys '09 Workshop on Recommender Systems and the Social Web, (2009).
[3] R. Liebregts and T. Bogers, "Design and Implementation of a University-wide Expert Search Engine", in M. Boughanem et al. (eds.), Proceedings of the 31st European Conference on Information Retrieval (ECIR 2009), Lecture Notes in Computer Science, vol. 5478, (2009), pp. 587-594.
[4] T. Bogers and A. van den Bosch, "Using Language Modeling for Spam Detection in Social Reference Manager Websites", in R. Aly et al. (eds.), Proceedings of the 9th Belgian-Dutch Information Retrieval Workshop (DIR 2009), (2009), pp. 87-94.
[5] A. van den Bosch and T. Bogers, "Efficient Context-Sensitive Word Completion for Mobile Devices", MobileHCI 2008: Proceedings of the 10th International Conference on Human-Computer Interaction with Mobile Devices and Services, IOP-MMI special track, (2008), pp. 465-470.
[6] K. Hofmann, K. Balog, T. Bogers and M. de Rijke, "Integrating Contextual Factors into Topic-centric Retrieval Models for Finding Similar Experts", Proceedings of the ACM SIGIR 2008 Workshop on Future Challenges in Expert Retrieval, (2008), pp. 29-36.
[7] M. C. Puerta Melguizo, O. Muñoz Ramos, L. Boves, T. Bogers and A. van den Bosch, "A Personalized Recommender System for Writing in the Internet Age", Proceedings of the LREC 2008 Workshop on Natural Language Processing Resources, Algorithms, and Tools for Authoring Aids, (2008).
[8] J. Kimura and H. Shibasaki, "Recent Advances in Clinical Neurophysiology", Proceedings of the 10th International Congress of EMG and Clinical Neurophysiology, Kyoto, Japan, (1995) October 15-19.
[9] S. Ibrahim, "Adaptive Disk I/O Scheduling for MapReduce in Virtualized Environment", Proceedings of the 2011 International Conference on Parallel Processing (ICPP '11), Taipei, Taiwan, (2011), pp. 335-344.
[10] R. T. Kaushik and M. Bhandarkar, "GreenHDFS: Towards an Energy-conserving, Storage-efficient, Hybrid Hadoop Compute Cluster", Proceedings of the 2010 International Conference on Power Aware Computing and Systems (HotPower '10), Vancouver, BC, Canada, USENIX Association, (2010), pp. 1-9.
[11] C. Li, "Making Geo-replicated Systems Fast as Possible, Consistent when Necessary", Proceedings of the 10th USENIX Conference on Operating Systems Design and Implementation (OSDI '12), Hollywood, CA, USA, USENIX Association, (2012), pp. 265-278.
[12] S. Weil, "Ceph: A Scalable, High-Performance Distributed File System", Proceedings of the 7th Conference on Operating Systems Design and Implementation (OSDI '06), (2006).
[13] H. Wang, "Distributed Systems Meet Economics: Pricing in the Cloud", Proceedings of the 2nd USENIX Workshop on Hot Topics in Cloud Computing (HotCloud '10), Boston, MA, USENIX Association, (2010).
[14] H. Wada, "Data Consistency Properties and the Trade-offs in Commercial Cloud Storage: the Consumers' Perspective", CIDR 2011, Fifth Biennial Conference on Innovative Data Systems Research, Asilomar, CA, USA, January 9-12, 2011, Online Proceedings, (2011), pp. 134-143.
[15] G. Urdaneta, G. Pierre and M. van Steen, "Wikipedia Workload Analysis for Decentralized Hosting", Elsevier Computer Networks, vol. 53, no. 11, (2009), pp. 1830-1845.
[16] R. H. Thomas, "A Majority Consensus Approach to Concurrency Control for Multiple Copy Databases", ACM Transactions on Database Systems, vol. 4, no. 2, (1979), pp. 180-209.
[17] A. T. Tai and J. F. Meyer, "Performability Management in Distributed Database Systems: An Adaptive Concurrency Control Protocol", Proceedings of the 4th International Workshop on Modeling, Analysis and Simulation of Computer and Telecommunications Systems (MASCOTS '96), IEEE Computer Society, (1996).
[18] B. Chun and S. Lee, "A Study on Big Data Processing Mechanism & Applicability", International Journal of Software Engineering and Its Applications, vol. 8, no. 8, (2014), pp. 73-82.

[19] S. Ha, S. Lee and K. Lee, "Standardization Requirements Analysis on Big Data in Public Sector based on Potential Business Models", International Journal of Software Engineering and Its Applications, vol. 8, no. 11, (2014), pp. 165-172.
[20] S. Jeon, B. Hong, J. Kwon, Y. Kwak and S. Song, "Redundant Data Removal Technique for Efficient Big Data Search Processing", International Journal of Software Engineering and Its Applications, vol. 7, no. 4, (2014), pp. 427-436.
[21] P. Vincent, L. Badri and M. Badri, "Regression Testing of Object-Oriented Software: Towards a Hybrid Technique", International Journal of Software Engineering and Its Applications, vol. 7, no. 4, (2013), pp. 227-240.
[22] V. Gupta, D. S. Chauhan and K. Dutta, "Regression Testing based Requirement Prioritization of Desktop Software Applications Approach", International Journal of Software Engineering and Its Applications, vol. 7, no. 6, (2013), pp. 9-18.
[23] B. L. Bowerman, R. T. O'Connell and A. B. Koehler, "Forecasting, Time Series, and Regression: An Applied Approach", Brooks/Cole, (2005).
[24] R. Scheaffer, W. Mendenhall III, R. L. Ott and K. G. Gerow, "Elementary Survey Sampling", 7th Edition, Duxbury, (2011).
[25] M. Riondato, "Sampling-based Randomized Algorithms for Big Data Analytics", PhD dissertation, Department of Computer Science, Brown University, (2014).
[26] J. Lu and D. Li, "Bias Correction in a Small Sample from Big Data", IEEE Transactions on Knowledge and Data Engineering, vol. 25, no. 11, (2013), pp. 2658-2663.
[27] J. Kim and S.-T. Hwang, "Adaptive Consistency Approaches for Cloud Computing Platform", International Journal of Software Engineering and Its Applications, vol. 9, no. 7, (2015), pp. 127-134.
[28] R. Liu, A. Aboulnaga and K. Salem, "DAX: A Widely Distributed Multitenant Storage Service for DBMS Hosting", Proceedings of the 39th International Conference on Very Large Data Bases (PVLDB '13), Trento, Italy, VLDB Endowment, (2013), pp. 253-264.
[29] H. Liu, "Cutting MapReduce Cost with Spot Market", Proceedings of the 3rd USENIX Conference on Hot Topics in Cloud Computing (HotCloud '11), Portland, OR, (2011).
