Big Data Platform of a System Recommendation in Cloud Environment
Jinhong Kim 1 and Sung-Tae Hwang 2,*

1 College of Information and Communication Eng., Sungkyunkwan University
2 College of Engineering, Hansung University
{jinhkm, cosmosth}@gmail.com
* Sung-Tae Hwang is the corresponding author.

Abstract

Cloud computing is one of the emerging technologies. This paper outlines cloud computing and its features and considers cloud computing for machine learning and data mining. The goal of the paper was to develop a recommendation and search system using a big data platform in a cloud environment. The main focus was on the study and understanding of Hadoop, one of the new technologies used in the cloud for scalable batch processing, and of the HBase data model, a scalable database on top of the Hadoop file system. Accordingly, the project involved design, analysis and implementation phases for developing the search and recommendation system for staffing purposes, following mainly an action research method.

Keywords: Cloud Computing, MapReduce, Hadoop, Recommendation Algorithm

1. Introduction

In the present-day software era, ever-growing data demands elastically scalable data centers that can be conveniently accessed with high quality in a secure way. This demand led to cloud computing as one of the emerging technologies of today. Cloud computing moves computer services, computer software and data storage away from locally hosted infrastructure and into cloud-based solutions [1-3]. The ability of cloud services to provide an apparently unlimited supply of computing power on demand has caught the attention of industry as well as academia. In the past couple of years, software providers have been moving more and more applications to the cloud, and several large companies such as Amazon, Google, Microsoft, Yahoo and IBM use the cloud. Today, forward-thinking business leaders use the cloud within their enterprise data centers to take advantage of the best practices that cloud computing has established, namely scalability, agility, automation and resource sharing. By using a cloud-enabled application platform, companies can choose a hybrid approach to cloud computing that builds on an organization's existing infrastructure to launch new cloud-enabled applications. This hybrid approach allows IT departments to focus on innovation for the business, reducing both capital and operational costs and automating the management of complex technologies. Cloud services claim to provide nearly everything needed to run a project without owning any IT infrastructure [4-6]. From Web hosting to fully managed applications, resources can be provided on demand, which helps to reduce development and hardware costs. Large corporations are regularly faced with the problem of staffing new projects. Assigning people to projects based on their lists of competences and on project requirements requires complex queries, and cloud computing can provide a scalable way of answering them [7-9]. The goal of this paper is to gain an understanding of cloud computing, which includes the study of the Hadoop Distributed File System (HDFS), HBase data storage and MapReduce. The main goal is
to develop and implement a recommendation and search system for staffing using cloud computing. The system performs competence mining and is built on cloud technology using Hadoop. The study uses Web-based competence data as the input for data mining, for searching and recommending the required profiles, and considers only records that contain all the required information.

2. Conceptual Background

Cloud computing is basically services-on-demand over the Internet. It is a natural evolution of the widespread adoption of virtualization, service-oriented architecture and utility computing [10].

1) Cloud computing is a computing capability that provides an abstraction between the computing resource and its underlying technical architecture (e.g., servers, storage, networks), enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service-provider interaction.

2) There is no formal definition commonly shared in industry, unlike for Web 2.0; cloud computing is very broadly defined as on-demand provisioning of applications, resources and services that allows resources to be scaled up and down.

3) Clouds are a large pool of easily usable and accessible virtualized resources (such as hardware, development platforms and/or services). These resources can be dynamically reconfigured to adjust to a variable load, allowing for optimum resource utilization.

Cloud computing includes a number of technologies such as virtualization, Web services, service-oriented architecture, Web 2.0 and Web mashups. Among the key new technologies used in the cloud are scalable batch processing systems [11-13]. The main reference implementation here is Google MapReduce and its open-source counterpart Apache Hadoop, originally developed at Yahoo. Distribution and scalability play important roles in the cloud, and Hadoop addresses both. Hadoop is a cloud computing framework created to deal with the growing demands of modern computing and the storage of massive amounts of data. Many companies, especially ones that operate demanding websites, e.g., Amazon, Facebook, Yahoo, eHarmony and eBay, use Hadoop.

3. Recommendable Search Algorithms

A recommendation algorithm analyzes data for a particular problem to find the items a user is looking for and to produce a predicted likeliness score or a list of the top N recommended items for a given user. Different algorithms can be used depending on the use case [14]. In this paper, the K-nearest neighbor algorithm is used to find a similar replacement for a person leaving a project, and a user-based collaborative filtering algorithm is used to predict missing competences.

K-Nearest Neighbor Algorithm (KNN)

The K-nearest neighbor algorithm (KNN) is a supervised learning method that has been used in many applications in data mining, statistical pattern recognition and other fields. KNN classifies objects based on the closest training examples in the feature space: an object is classified by a majority vote of its neighbors, where K is always a positive integer and the neighbors are taken from a set of objects for which the correct classification is known. The Euclidean distance is usually used, though other distance measures can be used instead. One advantage of KNN is that it is well suited to multi-modal classes, as its classification decision is based on a small neighborhood of similar objects [15-18].
So, even if the target class is multi-modal (i.e., consists of objects whose independent variables have different characteristics for different subsets), it can still achieve good accuracy. A major drawback of the similarity measure used in KNN is that it weighs all features equally when computing similarities. This can lead to poor similarity measures and classification errors when only a small subset of the features is useful for classification.

User-based Collaborative Filtering Algorithm

A user-based collaborative filtering algorithm produces a recommendation list for the target user according to the views of other users. The assumption is that users with similar preferences rate products similarly, so missing ratings for a user can be predicted by first finding a neighborhood of similar users and then aggregating the ratings of these users to form a prediction. User rating data can be represented as a matrix A(m, n), where m is the number of users and n the number of items [19]. Element R(i, j) (at the i-th row and j-th column) is the rating of item j given by user i. A cosine similarity measure can be used to quantify the similarity between users i and j:

sim(i, j) = cos(i, j) = (R_i · R_j) / (||R_i|| * ||R_j||),

where R_i and R_j are the rating vectors of users i and j. The rating of item i by user u can then be predicted from the ratings given by u's nearest-neighbor set NBS_u:

P(u, i) = Σ_{n in NBS_u} sim(u, n) * R(n, i) / Σ_{n in NBS_u} |sim(u, n)|,

where NBS_u is the nearest-neighbor set of user u, P(u, i) is the predicted rating by user u on item i, and sim(u, n) is the similarity between users u and n. Based on the predicted ratings, the N items with the highest ratings are selected to compose a recommendation set and are recommended to the target user.
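To make the procedure concrete, the following is a minimal sketch of KNN classification by Euclidean distance and majority vote, written in Java (the language of the code snippet shown later in this paper). The class, method and variable names are illustrative only and are not taken from the system described here.

import java.util.*;

// Illustrative KNN sketch: Euclidean distance plus majority vote (not the paper's code).
public class KnnSketch {

    // Returns the majority label among the k training vectors closest to the query.
    static String classify(double[] query, List<double[]> train, List<String> labels, int k) {
        // Sort training indices by distance to the query vector.
        Integer[] idx = new Integer[train.size()];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        Arrays.sort(idx, Comparator.comparingDouble(i -> euclidean(query, train.get(i))));

        // Majority vote over the k nearest neighbors.
        Map<String, Integer> votes = new HashMap<>();
        for (int i = 0; i < k && i < idx.length; i++) {
            votes.merge(labels.get(idx[i]), 1, Integer::sum);
        }
        return Collections.max(votes.entrySet(), Map.Entry.comparingByValue()).getKey();
    }

    static double euclidean(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }
}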
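The two formulas above translate directly into code. Below is a minimal, illustrative Java sketch, assuming a dense user-item rating matrix in which a value of 0 denotes an unrated item; the names are ours and not part of the paper's implementation.

// Illustrative user-based collaborative filtering sketch with cosine similarity.
public class UserCfSketch {

    // Cosine similarity sim(i, j) between two users' rating vectors.
    static double cosine(double[] ri, double[] rj) {
        double dot = 0, ni = 0, nj = 0;
        for (int k = 0; k < ri.length; k++) {
            dot += ri[k] * rj[k];
            ni  += ri[k] * ri[k];
            nj  += rj[k] * rj[k];
        }
        return (ni == 0 || nj == 0) ? 0 : dot / (Math.sqrt(ni) * Math.sqrt(nj));
    }

    // Predicted rating P(u, item): similarity-weighted average of the ratings
    // given to 'item' by the users in u's neighbor set.
    static double predict(double[][] ratings, int u, int item, int[] neighborSet) {
        double num = 0, den = 0;
        for (int n : neighborSet) {
            double r = ratings[n][item];
            if (r == 0) continue;                       // 0 = not rated in this sketch
            double s = cosine(ratings[u], ratings[n]);
            num += s * r;
            den += Math.abs(s);
        }
        return den == 0 ? 0 : num / den;
    }
}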
REST Services

REST is an architectural style based on Web standards and the HTTP protocol. In a REST-based architecture everything is a resource, and a resource is accessed via a common interface based on the standard HTTP methods. A REST server provides access to the resources, and a REST client accesses and modifies them; every resource should support the standard HTTP operations. Resources are identified by global IDs, which are typically URIs. REST allows resources to have different representations, e.g., text, XML or JSON [20], and the client can ask for a specific representation via the HTTP protocol (content negotiation). The HTTP methods typically used in REST are PUT, GET, POST and DELETE.

4. Big Data Platform (BDP)

Hadoop comes with a distributed filesystem called HDFS (Hadoop Distributed Filesystem). HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware. It provides high-throughput access to application data and is suitable for applications that have large data sets [21-23]. The Hadoop filesystem is designed for storing files of petabyte scale with streaming data access, following the idea that the most efficient data processing pattern is write-once, read-many-times. HDFS stores metadata on a dedicated server called the NameNode, while application data are stored on other servers called DataNodes. All servers are fully connected and communicate with each other using TCP-based protocols.

Architecture of BDP

HDFS is based on a master/slave architecture. To read a file, an HDFS client first contacts the NameNode, which returns the list of addresses of the DataNodes holding copies of the blocks of the file. The client then connects directly to the closest DataNode for each block and requests the transfer of the desired block. To write a file, the HDFS client first creates an empty file without any blocks; file creation is only possible when the client has write permission and the file does not already exist in the system. The NameNode records the new file creation and allocates data blocks to a list of suitable DataNodes to host replicas of the first block of the file. Replication of data organizes the DataNodes into a pipeline. When the first block is filled, new DataNodes are requested to host replicas of the next block; a new pipeline is organized, and the client sends the further bytes of the file. Each choice of DataNodes is likely to be different [24].
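From a client's point of view, the NameNode lookup and the DataNode block transfers described above are hidden behind the Hadoop FileSystem API. The following minimal sketch reads a file from HDFS under that assumption; the file path is only an example, not the data set used in this work.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal HDFS read sketch; the NameNode lookup and DataNode block transfers
// happen inside fs.open() and the subsequent reads. The path is illustrative.
public class HdfsReadSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(fs.open(new Path("/data/competence.csv"))))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);           // each line is one CSV record
            }
        }
    }
}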
An HDFS cluster consists of a single NameNode (the master) and a number of DataNodes (the slaves). The NameNode and DataNodes are pieces of software designed to run on commodity machines, which typically run a GNU/Linux operating system (OS). The use of the highly portable Java language means that HDFS can be deployed on a wide range of machines. A typical deployment has a dedicated machine that runs only the NameNode software; each of the other machines in the cluster runs one instance of the DataNode software. The architecture does not preclude running multiple DataNodes on the same machine, but in a real deployment one machine usually runs one DataNode. The existence of a single NameNode in a cluster greatly simplifies the architecture of the system. The NameNode is the arbitrator and repository for all HDFS metadata, and the system is designed in such a way that user data never flows through the NameNode.

1) CheckpointNode - The Checkpoint node periodically downloads the latest fsimage and edits files from the active NameNode, creates a checkpoint by merging them locally, and uploads the new checkpoint back to the active NameNode. This requires the same amount of memory as the NameNode, so the Checkpoint node needs to run on a separate machine. Namespace information is lost if either the checkpoint or the journal is missing, so it is highly recommended to configure HDFS to store the checkpoint and journal in multiple storage directories [25]. The Checkpoint node uses the parameter fs.checkpoint.period to set the interval between two consecutive checkpoints; the interval is given in seconds (default 3600 seconds). The edit log size threshold is specified by the parameter fs.checkpoint.size (default 64 MB), and a checkpoint is triggered if the edit log exceeds this size. Multiple Checkpoint nodes may be specified in the cluster configuration file.

2) BackupNode - The Backup node has the same functionality as the Checkpoint node. In addition, it maintains an in-memory, up-to-date copy of the file system namespace that is always synchronized with the active NameNode state. Along with accepting a journal stream of filesystem edits from the NameNode and persisting it to disk, the Backup node also applies those edits to its own copy of the namespace in memory, thus creating a backup of the namespace.

DataNode-NameNode connections are established by a handshake in which the namespace ID and the software version of the DataNode are verified. The namespace ID is assigned to the file system instance when it is formatted and is stored persistently on all nodes of the cluster; a node with a different namespace ID cannot join the cluster. A new DataNode without any namespace ID can join the cluster, receive the cluster's namespace ID and register with the NameNode using a storage ID. A DataNode identifies the block replicas in its possession to the NameNode by sending a block report, which contains the block ID, the generation stamp and the length of each block replica the server hosts. The first block report is sent immediately after the DataNode registers; subsequent block reports are sent every hour and provide the NameNode with an up-to-date view of where block replicas are located on the cluster.

Data Model of BDP

HBase uses a data model similar to Google BigTable's. Applications keep rows of data in labeled tables. Each row has a sortable key and an arbitrary number of columns. Table row keys are byte arrays, and table rows are sorted by row key, which serves as the table's primary key [26]. Rows of one table can have a variable number of columns. A column name has the form "<family>:<label>", where <label> can be an arbitrary string of bytes. A table is created with a set of <family> names, known as column families, and all members of a column family share the same prefix. The family name must be a string, whereas the label can be arbitrary bytes. Columns can be updated by adding new column family members as requested by the client, and a new <label> can be used in any write operation without any previous specification. HBase stores column families physically grouped on disk, so the items in a given column family have the same read/write characteristics and typically contain similar data. Row writes are always atomic, and by default only a single row may be locked at a time; a single row may be locked to perform both read and write operations on that row atomically. Recent versions allow locking several rows if the option has been explicitly activated. HBase data are modeled as a multidimensional map in which values (the table cells) are indexed by four keys:

value = Map(TableName, RowKey, ColumnKey, Timestamp),

where: i) TableName is a string; ii) RowKey and ColumnKey are binary values (Java byte[]); iii) Timestamp is a 64-bit integer (Java long); iv) value is an uninterpreted array of bytes (Java byte[]). Binary data are encoded in Base64 for transmission over the wire [27]. The row key is the primary key of the table and is typically a string. Rows are sorted by row key in lexicographic order.
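The two checkpoint parameters quoted above are ordinary Hadoop configuration properties (the pre-2.x property names, as given in the text). A minimal sketch of setting them programmatically, with the default values mentioned above, might look as follows; in practice they would normally be set in the cluster configuration files instead.

import org.apache.hadoop.conf.Configuration;

// Sketch only: sets the Checkpoint node parameters quoted above
// (property names and defaults as stated in the text; older Hadoop naming).
public class CheckpointConfSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.setLong("fs.checkpoint.period", 3600L);            // seconds between checkpoints
        conf.setLong("fs.checkpoint.size", 64L * 1024 * 1024);  // edit log size that forces a checkpoint
        System.out.println(conf.get("fs.checkpoint.period"));
    }
}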
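The four-key addressing scheme above maps directly onto the standard HBase client API, where a cell is written with a row key, a column family, a label (qualifier) and an implicit timestamp. The following is an illustrative sketch assuming the current (1.x-style) HBase client API; the table name "competence" and the column "skills:java" are examples only, not the tables used in this system.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch of the (TableName, RowKey, ColumnKey, Timestamp) -> value model through
// the standard HBase client API; the table and column names are illustrative.
public class HBaseModelSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("competence"))) {

            // One cell: row key + column family "skills", label "java", value "expert".
            Put put = new Put(Bytes.toBytes("employee-0042"));
            put.addColumn(Bytes.toBytes("skills"), Bytes.toBytes("java"), Bytes.toBytes("expert"));
            table.put(put);

            // Read the cell back; the newest timestamped version is returned by default.
            Result result = table.get(new Get(Bytes.toBytes("employee-0042")));
            System.out.println(Bytes.toString(
                    result.getValue(Bytes.toBytes("skills"), Bytes.toBytes("java"))));
        }
    }
}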
5. Recommendation System (RS)

In big software organizations, successful products are one of the most important aspects of organizational achievement. Team formation directly affects the performance of the team and the quality of product development, so staffing is an important issue to analyze when software development is undertaken as a value-driven business. Creating competence-based competitiveness is a great challenge, because competence identification and, consequently, its management help in innovation, decision support, faster processes and product quality improvement, and constitute an important input to the creation of organizational knowledge. Finding a value-driven developer-to-project assignment is a complex task, and predicting the compatibility of one person with other team members is much more complex. The existing system simply matches individual competences against job competence requirements; it does not consider the fitness of a candidate with the team members in terms of interpersonal compatibility, which is equally important when finding a team member for a particular project. The overall architecture is shown in Figure 1.

Figure 1. Recommendation System Architecture

To develop such a smart, scalable recommendation and search system, a cloud cluster of four servers was configured. The Hadoop framework was chosen to provide scalability and efficient data mining, and the HBase distributed column-oriented database on top of Hadoop was chosen for storing data. The recommendation was based on the KNN algorithm to find a replacement for a person leaving a project. The competence data and project data were the two input files for processing; these two data files were loaded into HDFS in CSV format. The lists of competences were grouped on the basis of target use, i.e., where the competences can be used [28]. MapReduce implements this logic of clustering the competence lists into groups of competences.
The MapReduce function reads the input files from HDFS, processes them and creates a table in HBase to store the output data in table format. The data mining step finds the distance between persons based on their competences. REST services provide access to the output results stored in the HBase tables and also support basic database CRUD operations on the HBase table: new competence profiles can be created, existing profiles can be updated and deleted, and either all profiles or a single profile can be retrieved.

Competence data and project data were saved into HBase tables by using MapReduce. The competence data and project data files were loaded as CSV files into the Hadoop Distributed File System (HDFS). The MapReduce function takes these files from HDFS as input for the mapper class and maps them to intermediate <key, value> pairs. The reducer then combines these intermediate pairs by key to produce the reduced output and, finally, loads this reduced output by creating a table in HBase and loading the data into it. A snippet of the source code for mapping an input CSV file to intermediate <key, value> pairs in the MapReduce framework is shown below.

/**
 * This method is called as many times as there are rows in the CSV file.
 *
 * @param key the key
 * @param line the line from the CSV file
 * @param context the map to which the data from the CSV file is written
 * @throws IOException Signals that an I/O exception has occurred.
 * @throws InterruptedException the interrupted exception
 */
public void map(LongWritable key, Text line, Context context)
        throws IOException, InterruptedException {
    if (rowsWritten == 0) {
        rowsWritten = 1;
        // Skip the first row as it is a "header" row
        return;
    }
    // Save the information from the CSV row into a map. The map item's
    // key is the ID value of the CSV file's row.
    String[] values = line.toString().split(ProjectConstants.SEPARATORCHAR);
    MapWritable map = new MapWritable();
    map.put(new IntWritable(ProjectConstants.PROJECTSTARTDATEINDEX),
            new Text(values[ProjectConstants.PROJECTSTARTDATEINDEX]));
    map.put(new IntWritable(ProjectConstants.PROJECTENDDATEINDEX),
            new Text(values[ProjectConstants.PROJECTENDDATEINDEX]));
    map.put(new IntWritable(ProjectConstants.ROLEINDEX),
            new Text(values[ProjectConstants.ROLEINDEX]));
    // Note: the specific column-index constant on the next line was lost in the original text.
    map.put(new IntWritable(ProjectConstants.INDEX),
            new Text(values[ProjectConstants.INDEX]));
    ImmutableBytesWritable outKey = new ImmutableBytesWritable(
            Bytes.toBytes(values[ProjectConstants.IDINDEX]));
    context.write(outKey, map);
}
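For completeness, the reducer that loads the mapper output into HBase could look roughly like the sketch below, which uses the standard HBase TableReducer to turn each MapWritable into a Put. The column family "data" and the index-as-qualifier layout are our simplification and are not the paper's actual table schema.

import java.io.IOException;
import java.util.Map;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.MapWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;

// Illustrative reducer sketch: converts each mapper-produced MapWritable into an HBase Put.
public class ProjectTableReducer
        extends TableReducer<ImmutableBytesWritable, MapWritable, ImmutableBytesWritable> {

    private static final byte[] FAMILY = Bytes.toBytes("data");   // example family, not the paper's schema

    @Override
    protected void reduce(ImmutableBytesWritable rowKey, Iterable<MapWritable> values, Context context)
            throws IOException, InterruptedException {
        Put put = new Put(rowKey.get());
        for (MapWritable columns : values) {
            for (Map.Entry<Writable, Writable> entry : columns.entrySet()) {
                int columnIndex = ((IntWritable) entry.getKey()).get();
                Text value = (Text) entry.getValue();
                // One cell per CSV column: family "data", qualifier = column index.
                put.addColumn(FAMILY, Bytes.toBytes(Integer.toString(columnIndex)),
                        Bytes.toBytes(value.toString()));
            }
        }
        context.write(rowKey, put);   // TableOutputFormat sends the Put to the HBase table
    }
}

In the job driver, TableMapReduceUtil.initTableReducerJob would typically be used to bind such a reducer to the output table.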
6. Recommendation System Development

As discussed above for the recommendation algorithms and the system architecture, in big software organizations successful products are one of the most important aspects of organizational achievement. Team formation directly affects the performance of the team and the quality of product development, and staffing is an important issue to analyze when software development is undertaken as a value-driven business. Creating competence-based competitiveness is a great challenge, because competence identification and its management help in innovation, decision support, faster processes and product quality improvement, and constitute an important input to the creation of the firm's organizational knowledge. Finding a value-driven developer-to-project assignment is a complex task, and predicting the compatibility of one person with other team members is much more complex.

To develop such a smart, scalable recommendation and search system, a cloud cluster of four servers was configured. The Hadoop framework was chosen to provide scalability and efficient data mining, and the HBase distributed column-oriented database on top of Hadoop was chosen for storing data [29]. The Hadoop engine was deployed over the four-node cluster, and the Java runtime was set up to use the common Hadoop configuration as specified by the NameNode (master node) of the cluster. The recommendation was based on the KNN algorithm to find a replacement for a person leaving a project. The competence data and project data were the two input files for processing; these two data files were loaded into HDFS in CSV format. The lists of competences were grouped on the basis of target use, i.e., where the competences can be used, and MapReduce implements this logic of clustering the competence lists into groups of competences. The MapReduce function reads the input files from HDFS, processes them and creates a table in HBase to store the output data in table format. The data mining step finds the distance between persons based on their competences. REST services provide access to the output results stored in the HBase tables and also support basic database operations on the HBase table: new competence profiles can be created, existing profiles can be updated and deleted, and either all profiles or a single profile can be retrieved.

Competence data and project data were saved into HBase tables by using MapReduce. As shown in Figure 2, the competence data and project data files were loaded as CSV files into the Hadoop Distributed File System (HDFS). The MapReduce function takes these files from HDFS as input for the mapper class and maps them to intermediate <key, value> pairs. The reducer then combines these intermediate pairs by key to produce the reduced output and, finally, loads this reduced output by creating a table in HBase and loading the data into it.

Figure 2. Competence Data Load by MapReduce

7. Conclusions

Hadoop is an open-source cloud computing implementation and provides the necessary features of scalability, reliability and fault tolerance. As one of its goals, this project explored Hadoop, its architecture, and how HDFS supports cloud computing features to deliver cloud services. HBase is an open-source, non-relational, distributed database, also known as the Hadoop database, which provides random, real-time read/write access to Big Data. The main goal of the project was to build a recommendation and search
system for staffing using the cloud-based technology Hadoop. Hadoop was therefore installed on a four-node cluster, and HBase was installed on top of Hadoop to provide scalable data storage. In a competitive market, a software development organization's major goal is to maximize value creation for a given investment; resource management is therefore a crucial factor in enterprise-level value maximization, and an efficient staffing system is required to balance project technical needs and organizational constraints.

Acknowledgments

This research was financially supported by Hansung University.

References

[1] L. Zheng, Y. Wang, J. Qi and D. Liu, Research and Improvement of Personalized Recommendation Algorithm Based on Collaborative Filtering, IJCSNS International Journal of Computer Science and Network Security, vol. 7, no. 7, (2007) July.
[2] T. Bogers and A. van den Bosch, Collaborative and Content-based Filtering for Item Recommendation on Social Bookmarking Websites, Proceedings of the ACM RecSys 09 Workshop on Recommender Systems and the Social Web, (2009).
[3] R. Liebregts and T. Bogers, Design and Implementation of a University-wide Expert Search Engine, in M. Boughanem et al. (eds.), Proceedings of the 31st European Conference on Information Retrieval (ECIR 2009), Lecture Notes in Computer Science, vol. 5478, (2009).
[4] T. Bogers and A. van den Bosch, Using Language Modeling for Spam Detection in Social Reference Manager Websites, in R. Aly et al. (eds.), Proceedings of the 9th Belgian-Dutch Information Retrieval Workshop (DIR 2009), (2009).
[5] A. van den Bosch and T. Bogers, Efficient Context-Sensitive Word Completion for Mobile Devices, MobileHCI 2008: Proceedings of the 10th International Conference on Human-Computer Interaction with Mobile Devices and Services, IOP-MMI special track, (2008).
[6] K. Hofmann, K. Balog, T. Bogers and M. de Rijke, Integrating Contextual Factors into Topic-centric Retrieval Models for Finding Similar Experts, Proceedings of the ACM SIGIR 2008 Workshop on Future Challenges in Expert Retrieval, (2008).
[7] M. C. Puerta Melguizo, O. Muñoz Ramos, L. Boves, T. Bogers and A. van den Bosch, A Personalized Recommender System for Writing in the Internet Age, Proceedings of the LREC 2008 Workshop on Natural Language Processing Resources, Algorithms, and Tools for Authoring Aids, (2008).
[8] J. Kimura and H. Shibasaki, Recent Advances in Clinical Neurophysiology, Proceedings of the 10th International Congress of EMG and Clinical Neurophysiology, Kyoto, Japan, (1995) October.
[9] S. Ibrahim, Adaptive Disk I/O Scheduling for MapReduce in Virtualized Environment, Proceedings of the 2011 International Conference on Parallel Processing (ICPP 11), Taipei, Taiwan, (2011).
[10] R. T. Kaushik and M. Bhandarkar, GreenHDFS: Towards an Energy-Conserving, Storage-Efficient, Hybrid Hadoop Compute Cluster, Proceedings of the 2010 International Conference on Power Aware Computing and Systems (HotPower 10), Vancouver, BC, Canada: USENIX Association, (2010).
[11] C. Li, Making Geo-Replicated Systems Fast as Possible, Consistent when Necessary, Proceedings of the 10th USENIX Conference on Operating Systems Design and Implementation (OSDI 12), Hollywood, CA, USA: USENIX Association, (2012).
[12] S. Weil, Ceph: A Scalable, High-Performance Distributed File System, Proceedings of the 7th Conference on Operating Systems Design and Implementation (OSDI 06), (2006).
[13] H. Wang, Distributed Systems Meet Economics: Pricing in the Cloud, Proceedings of the 2nd USENIX Workshop on Hot Topics in Cloud Computing (HotCloud 10), Boston, MA: USENIX Association, (2010).
[14] H. Wada, Data Consistency Properties and the Trade-offs in Commercial Cloud Storage: the Consumers' Perspective, CIDR 2011, Fifth Biennial Conference on Innovative Data Systems Research, Asilomar, CA, USA, January 9-12, 2011, Online Proceedings, (2011).
[15] G. Urdaneta, G. Pierre and M. van Steen, Wikipedia Workload Analysis for Decentralized Hosting, Elsevier Computer Networks, vol. 53, no. 11, (2009).
[16] R. H. Thomas, A Majority Consensus Approach to Concurrency Control for Multiple Copy Databases, ACM Trans. Database Syst., vol. 4, no. 2, (1979).
[17] A. T. Tai and J. F. Meyer, Performability Management in Distributed Database Systems: An Adaptive Concurrency Control Protocol, Proceedings of the 4th International Workshop on Modeling, Analysis and Simulation of Computer and Telecommunications Systems (MASCOTS 96), IEEE Computer Society, (1996).
[18] B. Chun and S. Lee, A Study on Big Data Processing Mechanism & Applicability, International Journal of Software Engineering and Its Applications, vol. 8, no. 8, (2014).
[19] S. Ha, S. Lee and K. Lee, Standardization Requirements Analysis on Big Data in Public Sector based on Potential Business Models, International Journal of Software Engineering and Its Applications, vol. 8, no. 11, (2014).
[20] S. Jeon, B. Hong, J. Kwon, Y. Kwak and S. Song, Redundant Data Removal Technique for Efficient Big Data Search Processing, International Journal of Software Engineering and Its Applications, vol. 7, no. 4, (2014).
[21] P. Vincent, L. Badri and M. Badri, Regression Testing of Object-Oriented Software: Towards a Hybrid Technique, International Journal of Software Engineering and Its Applications, vol. 7, no. 4, (2013).
[22] V. Gupta, D. S. Chauhan and K. Dutta, Regression Testing based Requirement Prioritization of Desktop Software Applications Approach, International Journal of Software Engineering and Its Applications, vol. 7, no. 6, (2013).
[23] B. L. Bowerman, R. T. O'Connell and A. B. Koehler, Forecasting, Time Series, and Regression: An Applied Approach, Brooks/Cole, (2005).
[24] R. Scheaffer, W. Mendenhall III, R. L. Ott and K. G. Gerow, Elementary Survey Sampling, 7th Edition, Duxbury, (2011).
[25] M. Riondato, Sampling-based Randomized Algorithms for Big Data Analytics, PhD dissertation, Department of Computer Science, Brown University, (2014).
[26] J. Lu and D. Li, Bias Correction in a Small Sample from Big Data, IEEE Transactions on Knowledge and Data Engineering, vol. 25, no. 11, (2013).
[27] J. Kim and S.-T. Hwang, Adaptive Consistency Approaches for Cloud Computing Platform, International Journal of Software Engineering and Its Applications, vol. 9, no. 7, (2015).
[28] R. Liu, A. Aboulnaga and K. Salem, DAX: A Widely Distributed Multitenant Storage Service for DBMS Hosting, Proceedings of the 39th International Conference on Very Large Data Bases (PVLDB 13), Trento, Italy: VLDB Endowment, (2013).
[29] H. Liu, Cutting MapReduce Cost with Spot Market, Proceedings of the 3rd USENIX Conference on Hot Topics in Cloud Computing (HotCloud 11), Portland, OR, (2011).