A REVIEW: Distributed File System
Journal of Computer Networks and Communications Security, Vol. 3, No. 5, May 2015

A REVIEW: DISTRIBUTED FILE SYSTEM

Shiva Asadianfam 1, Mahboubeh Shamsi 2 and Shahrad Kashany 3

1, 3 Department of Computer Engineering, Qom Branch, Islamic Azad University, Qom, Iran
2 Department of Electrical & Computer Engineering, Qom University of Technology, Qom, Iran
1 [email protected], 2 [email protected], 3 [email protected]

ABSTRACT

Data collections around the world are growing and expanding, so it is important that the storage infrastructure be able to hold huge collections of data whose size grows every day. With the conventional methods used in data centers for capacity building, the costs of software, hardware and management become very high today. In a distributed file system architecture, the metadata is stored independently of the data itself. Distributed File Systems (DFS) are used for processing, storing and analyzing very large amounts of unstructured data. The most common distributed file systems are the Google File System (GFS) and the Hadoop Distributed File System (HDFS). In this paper, we present a review of distributed file system design. The file systems are compared by scalability, availability, compatibility, extensibility, autonomy, and other criteria.

Keywords: Distributed File System, Big Data, Design of DFS, Hadoop Distributed File System (HDFS), Google File System (GFS), Review of DFS.

1 INTRODUCTION

In recent years, an increasing number of organizations are facing the problem of data explosion, and the size of the databases used in today's enterprises has been growing at exponential rates [1]. Data is generated by many sources such as business processes, transactions, social networking sites and web servers, and it exists in structured as well as unstructured form [2]. The need for this level of scalability in data-center storage, and for sustainable growth in data volume, has led large web-based service providers to choose a different kind of storage system: distributed file management built on object-based storage media.

A Distributed File System (DFS) is, for example, a set of client and server services that allow an organization using Microsoft Windows servers to organize many distributed SMB file shares into a single distributed file system. DFS provides location transparency and redundancy to improve data availability in the face of failure or heavy load, by allowing shares in multiple different locations to be logically grouped under one folder, the DFS root [3]. For example, Chien-Ming Wang, Chi-Chang Huang and Huan-Ming Liang [7] presented a distributed file system called ASDF, which is an autonomous and scalable distributed file system. Lubos Matejka et al. [9] presented a distributed file system named KIVDFS, which is suitable for mobile devices.

In this article, we present a review of distributed file system design. The paper is structured as follows. Section 2 discusses the background information required for a better understanding of the DFSs presented in this paper: it first gives a precise definition of a Distributed File System (DFS) and then explains the most common DFS architectures. Section 3 presents the studies in DFS design. Finally, Section 4 gives the general conclusions.

2 BACKGROUND

In this section, the background of distributed file systems, the most popular DFSs, and a brief description of each of them are presented.
2.1 Design Goals of DFS

Data collections around the world are growing and expanding, so the storage infrastructure must be able to hold huge collections of data whose size grows every day. Integrating massive distributed storage resources to provide large storage space is an important and challenging issue in Cloud and Grid computing [7]. Distributed file systems may aim for transparency in a number of aspects. That is, they aim to be "invisible" to client programs, which "see" a system similar to a local file system. Behind the scenes, the distributed file system handles locating files, transporting data, and potentially providing the other features listed below [4]:

Access transparency: clients are unaware that files are distributed and can access them in the same way local files are accessed.

Location transparency: a consistent name space exists encompassing local as well as remote files; the name of a file does not reveal its location.

Concurrency transparency: all clients have the same view of the state of the file system. If one process is modifying a file, any other processes on the same system or on remote systems that access the file will see the modifications in a coherent manner.

Failure transparency: clients and client programs should operate correctly after a server failure.

Heterogeneity: file service should be provided across different hardware and operating system platforms.

Scalability: the file system should work well in small environments (one machine, a dozen machines) and also scale gracefully to huge ones (hundreds through tens of thousands of systems).

Replication transparency: to support scalability, files may be replicated across multiple servers; clients should be unaware of this.

Migration transparency: files should be able to move around without the client's knowledge.

2.2 The Most Common DFS

Google, one of the Internet giants, faced scalability problems and storage issues, and in 2003 its engineers presented a distributed file system as a solution to the problem [5]. The paper describing the Google File System (GFS) has resulted in a much broader family of distributed file system applications that are highly similar to GFS [6]; among them, the file system named the Apache Hadoop Distributed File System (HDFS) has been introduced [6].

2.2.1 Google File System [5]

Google stores very large data and application files, many of which are used as producer-consumer queues. Google keeps most details of the technical architecture of GFS confidential for obvious reasons, but in a paper published in 2003 by Sanjay Ghemawat, Howard Gobioff and Shun-Tak Leung of Google, the Google File System (GFS) is described as having been designed specifically with these priorities in mind. The aim is for GFS to turn a large number of low-cost servers and hard disk drives into a storage system in which hundreds of terabytes of data can be stored and managed, and which can cope with faults or hardware failures. The storage servers in its architecture are known as chunk servers, which hold the chunks into which each file is divided.

2.2.2 Hadoop Distributed File System [6]

The Hadoop Distributed File System (HDFS) is a distributed file system that provides fault tolerance and is designed to run on commodity hardware. HDFS provides high-throughput access to application data and is suitable for applications that have large data sets.
Hadoop provides a distributed file system (HDFS) that can store data across thousands of servers, together with a means of running work (MapReduce jobs) across those machines, running the work near the data. HDFS has a master/slave architecture: large data sets are automatically split into blocks which are managed by different nodes in the Hadoop cluster.
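As described above, large data is split into fixed-size blocks (64 MB by default, as noted in Section 3.3) whose placement on the cluster nodes is tracked by a central master. The following Python sketch is a minimal, purely conceptual illustration of that idea; it is not actual HDFS or GFS code, and all names (BLOCK_SIZE, NameNodeMetadata, split_into_blocks) are hypothetical.

```python
# Conceptual sketch of HDFS-style block management (not actual HDFS code).
# BLOCK_SIZE, NameNodeMetadata and split_into_blocks are hypothetical names.

import os
from dataclasses import dataclass, field

BLOCK_SIZE = 64 * 1024 * 1024  # 64 MB, the default block size mentioned in the text


@dataclass
class NameNodeMetadata:
    """Maps each file to the list of (block_id, data_node) pairs that hold its data."""
    block_map: dict = field(default_factory=dict)

    def register(self, path, block_id, data_node):
        self.block_map.setdefault(path, []).append((block_id, data_node))


def split_into_blocks(path, data_nodes, metadata, block_size=BLOCK_SIZE):
    """Split a local file into fixed-size blocks and assign each block to a data node."""
    size = os.path.getsize(path)
    num_blocks = max(1, (size + block_size - 1) // block_size)
    for i in range(num_blocks):
        node = data_nodes[i % len(data_nodes)]  # trivial round-robin placement
        metadata.register(path, block_id=f"{path}#blk_{i}", data_node=node)
    return num_blocks


if __name__ == "__main__":
    # Create a small demo file and use a tiny block size so the sketch runs end to end.
    with open("demo.dat", "wb") as f:
        f.write(b"x" * 1000)
    meta = NameNodeMetadata()
    n = split_into_blocks("demo.dat", ["datanode1", "datanode2", "datanode3"],
                          meta, block_size=256)
    print(n, meta.block_map["demo.dat"])
```

In a real deployment the placement decision is made by the NameNode rather than round-robin; the sketch only shows the mapping from a file to blocks and from blocks to nodes.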
3 STUDIES IN DISTRIBUTED FILE SYSTEM

Much research on various DFS designs has been performed; in this article, only several cases are presented. In this section, the focus is on work dealing with unstructured data and small-image storage, and on the advantages of these distributed file system designs. First, P2P and client/server technologies for DFS design are explained, and then DFS designs based on HDFS are presented. Table 1 summarizes the kinds of distributed file system designs and the advantages reported in the corresponding work.

3.1 Related studies on P2P technology for DFS design

Chien-Ming Wang, Chi-Chang Huang and Huan-Ming Liang [7] presented a distributed file system called ASDF. The proposed system consists of three parts: the core, the client and the services. While sharing many of the same goals as previous distributed file systems, such as scalability, availability, reliability and performance, their system is also designed with an emphasis on compatibility, extensibility and autonomy. They state that ASDF will be useful in Cloud and Grid computing.

3.2 Related studies on Client/Server technology for DFS design

The file system proposed by Tzong-Jye Liu, Chun-Yan Chung and Chia-Lin Lee [8] stores file replicas in different network areas. They also design a load-detection module for the proposed system; the module detects the load of each file server and improves the overall system performance [8].

Lubos Matejka et al. [9] presented a distributed file system named KIVDFS. The Google file system is not suitable for mobile devices, whereas KIVDFS is based on a client-server architecture that targets them. The client module allows users to work with data; in their design and implementation, this module appears in three versions: as a standalone program, as a core module of the operating system, and as a FUSE module. The client communicates with the server module and can be connected over any kind of network providing a TCP/IP stack [9]. KIVDFS increases the throughput of the system, improves the security of the data, and improves service availability in large, heavily loaded deployments [9].

The paper [10] presents a distributed file system called QFS, which consists of two major components: the management node (master) and the storage nodes (chunk servers). The management node is the central node, and its main role is load balancing and scheduling. The management node records information such as the status of groupings and chunk servers, but it does not hold the file index, so the amount of memory it occupies is relatively small. In addition, when a client (application) or a chunk server accesses the master server, the master server scans the grouping and chunk-server information in memory and then returns the answer [10].

The paper [11] presents a distributed file system for small images. In this system, a Name Server is responsible for data classification and allocation of storage, and additional Name Servers are used to support its work. Data Servers are placed under the Name Server, and each Data Server stores one type of image. The Data Servers are not related to each other and do not know what kind of image is stored on the others. To manage the images, the fundamental approach is to describe the data and then use the descriptive information to carry out operations. Raw data, basic features, low-level features and semantic features are used to describe the image data. A tree structure is used to organize the image data: the root node refers to the files in which the raw image data is stored; the middle children of the root node hold the basic features and the low-level features, where low-level features include image properties such as color, texture and shape, and basic features include attributes such as name, type, author and creation time; the leaf nodes express the semantic features, which describe, for example, the author and the topic of the image, on top of the low-level features.
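To make this layered description concrete, the following Python sketch models such an image-metadata tree. It only illustrates the structure described above; the class names and fields are hypothetical and are not taken from the cited system [11].

```python
# Illustrative sketch of the image description tree from [11]:
# root -> raw data file; middle level -> basic and low-level features; leaves -> semantic features.
# All class names and fields are hypothetical.

from dataclasses import dataclass, field
from typing import List


@dataclass
class SemanticFeatures:          # leaf nodes
    author: str
    topic: str


@dataclass
class LowLevelFeatures:          # middle level: visual properties
    color_histogram: List[float]
    texture: str
    shape: str


@dataclass
class BasicFeatures:             # middle level: file attributes
    name: str
    file_type: str
    author: str
    creation_time: str


@dataclass
class ImageRecord:               # root node points at the raw data file
    raw_data_path: str
    basic: BasicFeatures
    low_level: LowLevelFeatures
    semantics: List[SemanticFeatures] = field(default_factory=list)


if __name__ == "__main__":
    record = ImageRecord(
        raw_data_path="/data/images/cat_001.jpg",
        basic=BasicFeatures("cat_001", "jpg", "alice", "2015-05-01"),
        low_level=LowLevelFeatures([0.1, 0.4, 0.5], "smooth", "round"),
    )
    record.semantics.append(SemanticFeatures(author="alice", topic="pets"))
    print(record.basic.name, record.semantics[0].topic)
```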
3.3 Related studies on DFS design based on HDFS

Konstantin Shvachko et al. [12] presented a distributed file system called HDFS. The HDFS architecture is very close to that of GFS: it follows a three-layer structure with a single reference point. Each Apache Hadoop cluster has a reference server called the NameNode. This server is responsible for tracking the metadata and the addresses of the copies of each of the blocks, which hold 64 MB of data. The NameNode directs the replication of data in the cluster, and the DataNodes are responsible for writing and reading the data. By default, each data block is copied three times (data replication): the first replica is written on the current server, the second on another server in the same rack, and the third on a server in a different rack. However, the cluster settings can be changed so that a different number of replicas is kept.
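A minimal sketch of such a rack-aware placement policy is shown below. It is a conceptual illustration of the default three-replica rule just described, not the actual HDFS placement code, and the function and parameter names are hypothetical.

```python
# Conceptual sketch of HDFS-style three-replica placement (not actual HDFS code).
# `topology` maps rack name -> list of DataNodes; `writer_node` is the node writing the block.

import random


def place_replicas(topology, writer_node, replication=3):
    """Return the DataNodes chosen for one block, following the
    local node / same rack / different rack rule described in the text."""
    # Find the rack the writer belongs to.
    local_rack = next(rack for rack, nodes in topology.items() if writer_node in nodes)

    replicas = [writer_node]  # 1st replica: the local (writer) node

    same_rack_peers = [n for n in topology[local_rack] if n != writer_node]
    if same_rack_peers and len(replicas) < replication:
        replicas.append(random.choice(same_rack_peers))  # 2nd replica: same rack

    remote_nodes = [n for rack, nodes in topology.items() if rack != local_rack for n in nodes]
    while remote_nodes and len(replicas) < replication:
        choice = random.choice(remote_nodes)
        remote_nodes.remove(choice)
        replicas.append(choice)  # remaining replicas: other racks

    return replicas


if __name__ == "__main__":
    topology = {
        "rack1": ["dn1", "dn2", "dn3"],
        "rack2": ["dn4", "dn5"],
    }
    print(place_replicas(topology, writer_node="dn1"))
```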
This replication strategy has two major advantages. First, the data is protected against hardware failures such as a burned-out hard disk or other server hardware problems: if any of the servers drops out of the network for any reason, the data is served from the other servers. Second, there is no longer any need to use RAID technology (a strategy for data replication), so the full hard-drive space can be used.

The paper [13] presents a NameNode-cluster file system based on HDFS, called Clover. This file system exploits two critical features: an improved two-phase-commit (2PC) protocol, which ensures consistent metadata updates on multiple metadata servers, and a shared storage pool, which provides robust persistent metadata storage and supports the execution of distributed transactions [13]. Experimental results further show that their system can achieve better metadata expandability, ranging from 10% to 90% by quantized metrics when each extra server is added, while preserving similar I/O performance [13].

hatS [14] is a novel redesign of HDFS into a multi-tiered storage system. An important difference between hatS and HDFS is the design of the DataNode. In HDFS, each participating node hosts a single DataNode instance covering multiple storage devices, regardless of their characteristics such as supported I/O rates and capacity. In contrast, each participating node in hatS hosts multiple DataNode instances, where each instance represents only one type of storage device.

Liu Jiang et al. [15] proposed an optimization of HDFS that provides better performance for small files. Farag Azzedin [16] proposed an architecture similar to HDFS, but with a distributed NameNode.
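To illustrate the two-phase-commit idea that Clover [13] builds on for consistent metadata updates across multiple metadata servers, the following sketch shows a generic 2PC coordinator in Python. It is not Clover's implementation; the class and method names are hypothetical and the "improved" aspects of Clover's protocol are not modeled.

```python
# Generic two-phase-commit sketch for a metadata update across several metadata
# servers, illustrating the idea behind Clover's consistent updates [13].
# This is NOT Clover's code; MetadataServer and its methods are hypothetical.


class MetadataServer:
    def __init__(self, name):
        self.name = name
        self.committed = {}   # path -> metadata visible to clients
        self.pending = {}     # staged updates awaiting commit

    def prepare(self, path, metadata):
        """Phase 1: stage the update and vote yes/no."""
        self.pending[path] = metadata
        return True  # a real server could vote False (e.g., out of space)

    def commit(self, path):
        """Phase 2a: make the staged update durable and visible."""
        self.committed[path] = self.pending.pop(path)

    def abort(self, path):
        """Phase 2b: discard the staged update."""
        self.pending.pop(path, None)


def update_metadata(servers, path, metadata):
    """Coordinator: apply the update on all servers or on none of them."""
    votes = [s.prepare(path, metadata) for s in servers]
    if all(votes):
        for s in servers:
            s.commit(path)
        return True
    for s in servers:
        s.abort(path)
    return False


if __name__ == "__main__":
    servers = [MetadataServer(f"mds{i}") for i in range(3)]
    ok = update_metadata(servers, "/data/file1", {"size": 64 * 2**20, "replicas": 3})
    print(ok, [s.committed for s in servers])
```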
Table 1: Overview of the 10 DFS studies (article name, authors, venue, design approach, name of the DFS, and reported advantages)

1. ASDF: An Autonomous and Scalable Distributed File System. Authors: Chien-Ming Wang, Chi-Chang Huang, Huan-Ming Liang. Published in: 11th IEEE/ACM Symposium on Cluster, Cloud and Grid Computing. Design approach: P2P. Name of DFS: ASDF. Advantages: scalability, reliability, compatibility, extensibility, autonomy.

2. A High Performance and Low Cost Distributed File System. Authors: Tzong-Jye Liu, Chun-Yan Chung, Chia-Lin Lee. Published in: IEEE. Design approach: client/server. Advantages: low cost, reliability, scalability, improved overall system performance.

3. Distributed File System with Online Multi-Master Replicas. Authors: Luboš Matějka, Jiří Šafařík, Ladislav Pešička. Published in: Second Eastern European Regional Conference on the Engineering of Computer Based Systems. Design approach: client/server. Name of DFS: KIVDFS. Advantages: scalability, suitability for mobile devices, increased throughput, increased data security, improved service availability.

4. A Kind of Distributed File System Based on Massive Small Files Storage. Authors: Di Liu, Shijie Kuang. Published in: IEEE. Design approach: client/server. Name of DFS: QFS. Advantages: improved metadata management, high concurrency, storage efficiency, simple and efficient design, storage capacity, scalability.

5. Distributed file system and classification for small images. Authors: Shaojian Zhuo, Xing Wu, Wu Zhang, Wanchun Dou. Published in: IEEE Green Computing and Communications. Design approach: client/server. Advantages: reduced metadata, high-speed access.

6. The Hadoop Distributed File System. Authors: Konstantin Shvachko, Hairong Kuang, Sanjay Radia, Robert Chansler. Published in: IEEE. Design approach: HDFS itself. Name of DFS: HDFS. Advantages: scalability, improved overall availability, meets all of the DFS design goals.

7. Clover: A distributed file system of expandable metadata service derived from HDFS. Authors: Youwei Wang, Jiang Zhou, Can Ma, Weiping Wang, Dan Meng, Jason Kei. Published in: IEEE Cluster Computing. Design approach: based on HDFS. Name of DFS: Clover. Advantages: expandability, no bottleneck, no single point of failure.

8. hatS: A Heterogeneity-Aware Tiered Storage for Hadoop. Authors: Krish K.R., Ali Anwar, Ali R. Butt. Published in: 14th IEEE/ACM Symposium on Cluster, Cloud and Grid Computing. Design approach: based on HDFS. Name of DFS: hatS. Advantages: improved throughput, improved job completion time, improved utilization of the storage devices.

9. The optimization of HDFS based on small files. Authors: Liu Jiang, Bing Li, Meina Song. Published in: 3rd IEEE Broadband Network and Multimedia Technology conference. Design approach: based on HDFS. Advantages: increased read speed, reduced read requests, reduced write requests received by the NameNode.

10. Towards a scalable HDFS architecture. Author: F. Azzedin. Published in: IEEE Collaboration Technologies and Systems (CTS). Design approach: based on HDFS. Advantages: highly available, widely scalable, fault tolerant.
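As a final illustration, hatS's key departure from HDFS (Section 3.3) is that each node runs one DataNode instance per storage-device type, so placement can take device characteristics into account. The toy sketch below shows only that general idea under assumed names; it is not the policy actually implemented in hatS [14].

```python
# Toy sketch in the spirit of hatS [14]: each node hosts one DataNode instance
# per storage device type, and a block can be directed to a preferred tier.
# Hypothetical names and policy, not hatS code.

from collections import defaultdict


def build_datanode_instances(cluster):
    """cluster: node -> list of device types, e.g. {'node1': ['ssd', 'hdd']}.
    Returns tier -> list of per-device DataNode instance ids."""
    tiers = defaultdict(list)
    for node, devices in cluster.items():
        for device in devices:
            tiers[device].append(f"{node}:{device}")  # one DataNode instance per device
    return tiers


def place_block(tiers, preferred_tier, replication=3):
    """Pick DataNode instances from the preferred tier first, then fall back to others."""
    candidates = list(tiers.get(preferred_tier, []))
    for tier, instances in tiers.items():
        if tier != preferred_tier:
            candidates.extend(instances)
    return candidates[:replication]


if __name__ == "__main__":
    cluster = {"node1": ["ssd", "hdd"], "node2": ["hdd"], "node3": ["ssd"]}
    tiers = build_datanode_instances(cluster)
    print(place_block(tiers, preferred_tier="ssd"))
```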
4 RESULTS

Big data is the name used to refer to data sets that are much larger than what common software can acquire, refine, manage and process in a reasonable time. The notion of what counts as "big" is changing continuously and gradually gets larger, so a distributed file system should be able to manage and process this large volume of data. Integrating massive distributed storage resources to provide large storage space is an important and challenging issue in Cloud and Grid computing. In this article, various designs of distributed file systems were discussed, together with a brief description of how these systems can be used to achieve our goals when working with big data. All of these DFSs are used for processing, storing and analyzing very large amounts of unstructured data.

5 REFERENCES

[1] Aditya B. Patel, Manashvi Birla, Ushma Nair, "Addressing Big Data Problem Using Hadoop and Map Reduce", NIRMA University International Conference on Engineering (NUiCONE), 06-08 December.

[2] Impetus white paper, "Planning Hadoop/NoSQL Projects for 2011", Impetus Technologies, available online, March 2011.

[3] "Distributed File System (Microsoft)", Wikipedia, available online.

[4] "Clustered file system: Distributed file systems", Wikipedia, available online, December.

[5] Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung, "The Google File System", SOSP '03, Bolton Landing, New York, USA, October 19-22, 2003.

[6] Apache Software Foundation, official Apache Hadoop website, August.

[7] Chien-Ming Wang, Chi-Chang Huang and Huan-Ming Liang, "ASDF: An Autonomous and Scalable Distributed File System", 11th IEEE/ACM Symposium on Cluster, Cloud and Grid Computing.

[8] Tzong-Jye Liu, Chun-Yan Chung, Chia-Lin Lee, "A High Performance and Low Cost Distributed File System", IEEE, pp. 47-50.

[9] Luboš Matějka, Jiří Šafařík, Ladislav Pešička, "Distributed File System with Online Multi-Master Replicas", Second Eastern European Regional Conference on the Engineering of Computer Based Systems, pp. 13-18.

[10] Di Liu, Shijie Kuang, "A Kind of Distributed File System Based on Massive Small Files Storage", IEEE.

[11] Shaojian Zhuo, Xing Wu, Wu Zhang and Wanchun Dou, "Distributed file system and classification for small images", IEEE Green Computing and Communications and IEEE Internet of Things and IEEE Cyber, Physical and Social Computing, 2013.

[12] Konstantin Shvachko, Hairong Kuang, Sanjay Radia, Robert Chansler, "The Hadoop Distributed File System", IEEE, pp. 1-10.

[13] Youwei Wang, Jiang Zhou, Can Ma, Weiping Wang, Dan Meng, Jason Kei, "Clover: A distributed file system of expandable metadata service derived from HDFS", IEEE Cluster Computing.

[14] Krish K.R., Ali Anwar, Ali R. Butt, "hatS: A Heterogeneity-Aware Tiered Storage for Hadoop", 14th IEEE/ACM Symposium on Cluster, Cloud and Grid Computing.

[15] Liu Jiang, Bing Li, Meina Song, "The optimization of HDFS based on small files", Proceedings of ICBNMT 2010, 3rd IEEE Broadband Network and Multimedia Technology conference.

[16] F. Azzedin, "Towards a scalable HDFS architecture", IEEE Conference on Collaboration Technologies and Systems (CTS), 2013.