Hadoop MapReduce in Eucalyptus Private Cloud


Hadoop MapReduce in Eucalyptus Private Cloud
Johan Nilsson
May 27, 2011
Bachelor's Thesis in Computing Science, 15 credits
Supervisor at CS-UmU: Daniel Henriksson
Examiner: Pedher Johansson
Umeå University, Department of Computing Science, SE Umeå, Sweden


Abstract

This thesis investigates how a private cloud can be set up using the Eucalyptus Cloud system, along with its usability, requirements and limitations as an open-source cloud platform providing private cloud solutions. It also studies whether using the MapReduce framework, through Apache Hadoop's implementation, on top of the private Eucalyptus cloud can provide near-linear scalability in terms of runtime and the number of virtual machines in the cluster. The analysis shows that Eucalyptus is lacking in a few usability areas when setting up the cloud infrastructure, in terms of private networking and DNS lookups, yet the API that Eucalyptus provides gives benefits when migrating from public clouds like Amazon. The MapReduce framework shows an initial near-linear relation which declines as the number of virtual machines approaches the maximum capacity of the cloud infrastructure.


Contents

1 Introduction
2 Problem Description
  2.1 Problem Statement
  2.2 Goals
  2.3 Related Work
3 Virtualized cloud environments and Hadoop MapReduce
  3.1 Virtualization
    3.1.1 Networking in virtual operating systems
  3.2 Cloud Computing
    3.2.1 Amazon's public cloud service
  3.3 Software study - Eucalyptus
    3.3.1 The different parts of Eucalyptus
    3.3.2 A quick look at the hypervisors in Eucalyptus
    3.3.3 The Metadata Service
    3.3.4 Networking modes
    3.3.5 Accessing the system
  3.4 Software study - Hadoop MapReduce & HDFS
    3.4.1 HDFS
    3.4.2 MapReduce
4 Accomplishment
  4.1 Preliminaries
  4.2 Setup, configuration and usage
    4.2.1 Setting up Eucalyptus
    4.2.2 Configuring an Hadoop image
    4.2.3 Running MapReduce on the cluster
    4.2.4 The MapReduce implementation
5 Results
  5.1 MapReduce performance times
6 Conclusions
  6.1 Restrictions and limitations
  6.2 Future work
Acknowledgements
References
A Scripts and code

List of Figures

3.1 A hypervisor can have multiple guest operating systems in it
3.2 Different types of hypervisor-based server and machine virtualizations
3.3 Simplified visualization of cloud computing
3.4 Overview of the components in Eucalyptus on rack based servers
3.5 Metadata request example in Eucalyptus
3.6 The HDFS node structure
3.7 The interaction between nodes when a file is read from HDFS
3.8 The MapReduce phases in Hadoop MapReduce
4.1 Eucalyptus network layout on the test servers
4.2 The optimal physical layout compared to the test environment
4.3 Network traffic in the test environment
5.1 Runtimes on a 2.9 GB database
5.2 Runtimes on a 4.0 GB database
5.3 Runtimes on a 9.6 GB database
5.4 Map task times on a 2.9 GB database
5.5 Map task times on a 4.0 GB database
5.6 Map task times on a 9.6 GB database


Chapter 1 Introduction

By using a cloud service, a company, organization or even a private person can outsource management, maintenance and administration of large clusters of servers but still keep the benefits. While using a public cloud provider is sufficient for most tasks, concerns about bandwidth, storage, data protection or pricing might encourage companies to host a private cloud. The infrastructure to control and maintain the cloud can be proprietary, like Microsoft Hyper-V Cloud [17], VMware vCloud [21] and Citrix Open Cloud [4], but there are also a number of free and open-source solutions like Eucalyptus Cloud, OpenNebula [19] and CloudStack [5].

The cloud can provide the processing power, but the framework to actually take advantage of these distributed instances does not inherently come with the machines. Hadoop MapReduce claims to provide very high scalability and stability across a large cluster [7]. It is meant to run on dedicated servers, but nothing prevents it from running on virtual machines.

This thesis is a study performed at Umeå University, Department of Computing Science, to provide familiarity with the cloud and its related technologies in general, focusing specifically on the Eucalyptus cloud infrastructure. It demonstrates a means of setting up a private cloud, along with using the Hadoop MapReduce framework on top of the cloud, showing the benefits and requirements of running MapReduce on a Eucalyptus private cloud. As a proof of concept, a simple MapReduce test is implemented and run on the cloud to provide an analysis of the distributed computation of MapReduce.

The report contains a software study of the systems used in the thesis, followed by a description of the configuration, setup and usage of Eucalyptus and Hadoop. Finally, the results of the analysis are presented along with a short conclusion.


Chapter 2 Problem Description

This thesis is two-fold. It will provide a relatively large software study of the Eucalyptus cloud and a general overview of some of the technologies it uses. It will also study what Hadoop MapReduce is and how it can be used in conjunction with Eucalyptus.

The first part of the thesis is to analyse how to set up a Eucalyptus private cloud in a small environment: what the requirements are to run and maintain it, and what problems and/or benefits the current implementation of it has. This is a documentation and implementation of one way to configure the infrastructure to deliver virtual machines on a small scale to a private user, company or organization.

The second part is to test how well Hadoop MapReduce performs in a virtual cluster. The machines used for the cluster will be virtual machines delivered through the Eucalyptus cloud that has been set up in the course of the thesis. A (simple) MapReduce application will be implemented to process a subset of Wikipedia's articles, and the time it takes to process this, based on the number of nodes that the cluster runs on, will be measured. In a perfect environment the MapReduce framework can deliver near-linear performance [7], but that is without the extra overhead of running on small virtual machines.

2.1 Problem Statement

By first setting up a small Eucalyptus cloud on a few local servers, the thesis can answer which problems and obstacles there are when preparing the open-source infrastructure. The main priority is setting up a cloud that can deliver virtual instances capable of running Hadoop MapReduce on them, to supply a base for the analysis of the framework. Simplifying the launching of Hadoop MapReduce clusters inside the Eucalyptus cloud is of second priority, after setting up the infrastructure and testing the feasibility of MapReduce on virtual machines. This can include scripts, stand-alone programs or utilities beyond Eucalyptus and/or Hadoop.

2.2 Goals

The goal of this thesis is to do a software study and an analysis of the performance and usability of Hadoop MapReduce running on top of virtual machines inside a Eucalyptus cloud infrastructure. It will study means to set up, launch, maintain and remove virtual instances that can together form a MapReduce cluster. The following are the specific goals of this thesis:

- Demonstrate a way of setting up a private cloud infrastructure using the Eucalyptus Cloud system. This includes configuring subsystems that Eucalyptus uses, like hypervisors, controller libraries and networking systems.

- Create a virtual machine image containing Hadoop MapReduce that provides ease of use and minimal manual configuration at provisioning time.

- Provide a way to easily create and remove virtual instances inside the private cloud, adjusting the number of Hadoop worker nodes available in its cluster.

- Test the Hadoop MapReduce framework on a virtual cluster inside the private cloud. This is to show what kind of performance increase a user gains when adding more virtual nodes to the cluster, and whether it is a near-linear increase.

2.3 Related Work

Apache Whirr is a collection of scripts that has grown into a project of its own. The purpose of Whirr is to simplify controlling virtual nodes inside a cloud like Amazon Web Services [10]. Whirr handles everything from launching and removing to maintaining instances that Hadoop can then utilize in a cluster.

Another similar controller program is Puppet [14] from Puppet Labs. This program fully controls instances and clusters inside an EC2-compatible cloud (AWS or Eucalyptus, for example). It uses a program outside the cloud infrastructure that can control whether to launch, edit or remove instances. Puppet also controls the Hadoop MapReduce cluster inside the virtual cluster. Mathias Gug, an Ubuntu developer, has tested how to deploy a virtual cluster inside an Ubuntu Enterprise Cloud using Puppet. The results can be found on his blog [13].

Hadoop's commercial and enterprise offspring, Cloudera [6], has released a distribution called CDH. The current version, version 3, contains a virtual machine with Hadoop MapReduce configured, along with Apache Whirr instructions. This is to simplify launching and configuring Hadoop MapReduce clusters inside a cloud. These releases also contain extra packages for enterprise clusters, such as Pig, Hive, Sqoop and HBase. CDH also uses Apache Whirr to simplify AWS deployment.

Chapter 3 Virtualized cloud environments and Hadoop MapReduce

This in-depth study focuses on explaining some key concepts regarding cloud computing, virtualization and clustering, along with how certain specific software solutions work based on these concepts. As some of the software is used in the practical implementation of the thesis, the in-depth study naturally focuses on how it works in a practical environment.

3.1 Virtualization

The term virtualization refers to creating a virtual environment instead of an actual physical one. This enables a physical system to run different logical solutions on it by virtually creating an environment that meets the demands of the solution. By virtually creating several different operating systems on one physical workstation, the administrator can create a cluster of computers that act as if they were physical.

There are several different methods of virtualization. Network virtualization refers to creating virtual networks that can be used for segmenting, subnetworking or creating virtual private networks (VPNs), as a few examples. Desktop virtualization enables a user to access his local desktop from a remote location and is commonly used in large corporations or authorities to ensure security and accessibility. A more common form of virtualization usually encountered by a home user is application virtualization, which enables compilation of code to machine instructions running in a certain environment. Examples of this include the Java VM and Microsoft's .NET framework. In cloud computing, server & machine virtualization is extensively used to virtually create new computers that can act as a completely different operating system independent of the underlying system it runs on [26].

Without virtualization, situations would arise where machines would use only a percentage of their maximum capacity. If the server has virtualization active, enabling more operating systems to run on the physical hardware, the hardware is used more effectively.

This is why server and machine virtualization is of great benefit when creating a cloud environment: the cloud host can maximize efficiency and distribute resources without having to buy a physical server each time a new instance is needed.

The system that keeps track of the machine virtualization is called a hypervisor. Hypervisors are mediators that translate calls from the virtualized OS to the hardware and act as a security guard. The guard prevents different virtual instances from accessing each other's memory or storage areas outside their virtual bounds. When a hypervisor creates a new virtual instance (a guest OS), the hypervisor marks memory, CPU and storage areas to be used by that instance [22]. The underlying hardware usually limits how many virtual instances can be run on one physical machine.

Figure 3.1: A hypervisor can have multiple guest operating systems in it.

Depending on the type of hypervisor, it can either work directly with the hardware (called type 1 virtualization) or on top of an already installed OS (called type 2 virtualization). The type used varies based on which hypervisor, underlying OS and hardware are installed. These variations impose different requirements on each system; a hypervisor might work flawlessly on one hardware/OS setup but be inoperable in a slightly different one [26]. See figure 3.2.

Figure 3.2: Different types of hypervisor-based server and machine virtualizations.

Hypervisor-based virtualization is the most commonly used variant [22], but several others exist. Kernel-based virtualization employs specialized OS kernels, where the kernel runs a separate version of itself along with a virtual machine on the physical hardware. In practice, one could say that the kernel acts as a hypervisor, and it is usually a Linux kernel that uses this technique. Hardware virtualization does not rely on any software OS, but instead uses specialized hardware along with a special hypervisor to provide virtualization. The benefit of this is that the OS running inside the hypervisor does not have to be modified, which normal software hypervisor virtualization requires [22]. Technologies providing hardware virtualization on the CPU (native virtualization) come from the CPU vendors, such as Intel VT-x and AMD-V.

The operating systems that run in the virtual environment are called machine images. These images can be put to sleep and then stored on the hard drive with their current installation, configuration and even running processes hibernated. When requested, the images can be restored to their running state and continue what they were doing before hibernation. This allows dynamic activation and deactivation of resources.

3.1.1 Networking in virtual operating systems

With operating systems acting inside a hypervisor and not directly contacting the physical hardware, a problem arises when several instances want to communicate on the network. They do not actually have a physical Network Interface Card (NIC) connected to them, so the hypervisor has to ensure that the right instance receives the correct network packets. The way the networking is handled depends on the hypervisor. There are four techniques used to create virtual NICs [26]:

NAT Networking
NAT (Network Address Translation) is the same type of technique used in common home routers. It translates an external IP address to an internal one, which enables multiple internal IPs. The packets sent are recognized by the ports they are sent to and from. The hypervisor provides the NAT translation, and the VMs reside in a subnetwork with the hypervisor acting as the router.

Bridge Networking
Bridged networking essentially connects the virtual NIC with the physical hardware NIC. The hypervisor sets up the bridge and the virtual OS connects to it, believing it to be a physical NIC. The benefit of this is that the virtual machine shows up on the local network just like any other physical machine.

Host-only
Host-only networking is the local variant of networking. The hypervisor disables networking to machines outside of the VM host, which defeats the purpose of the VM in a cloud environment. This is mostly used on local machines.

Hybrid
Hybrid networking is a combination or variation of the networking styles mentioned.

These can connect to most of the other types of networking styles and in some ways can act as a bridge to a host-only VM.

Networking the virtual machines in a proper way is crucial when setting up a virtualized cloud. The virtual machines have to be able to connect to the cloud system network to provide resources.

3.2 Cloud Computing

Cloud computing is a type of distributed computing that provides elastic, dynamic processing power and storage when needed. In essence, it gives the user computing power when the user needs it. The term cloud refers to the typical visual representation of the Internet in a diagram: a cloud. What cloud computing means is that there is a collection of computers that can give the customer/user the amount of computational power needed, without them having to worry about maintenance or hardware [20].

Typically a cloud is hosted on a server farm with a large number of clustered computers. These provide the hardware resources. The cloud provider (the organization that hosts the servers) offers an interface for users to pay for a certain amount of processing power, storage or computers in a business model. These resources can then be increased or decreased based on demand, so the user only needs to focus on its contents whereas the provider takes care of maintenance, security and networking.

Figure 3.3: Simplified visualization of cloud computing.

The servers in the server farm are usually virtualized, although they are not required to be in order to be included in a cloud. Virtualization is a ground pillar in cloud computing; it enables the provider to maximize the processing power of the raw hardware and gives the cloud elasticity, the ability for users to scale the instances required. It also helps provide two other key features of a cloud: multitenancy, the sharing of resources, and massive scalability, the ability to have huge numbers of processing systems and storage areas (tens of thousands of systems with terabytes or petabytes of data) [16].

There are three major types of services that can be provided from a cloud. These are usually different levels of access for the user, ranging from having control of just a few components to the operating system itself [16]:

Infrastructure as a Service (IaaS)
IaaS gives the user the most freedom and access to the systems. These can sometimes be on dedicated hardware (that is, not virtualized), where the user has to install whatever they want on the system themselves.

The user is given access to the operating system, or the ability to create their own through images that they build (typically in a virtualized environment). This is used when the user wants the raw processing power of a lot of systems or needs a huge amount of storage.

Platform as a Service (PaaS)
PaaS does not give as much freedom as IaaS, instead focusing on having key applications already installed on the systems delivered. These are used to provide the user with the systems needed in a quick and accessible way. The users can then modify the applications to their needs. An example of this would be a hosted website; the tools for hosting the website (along with extra systems like databases, a web service engine, etc.) are installed and the user can create the page without having to think about networking or accessibility.

Software as a Service (SaaS)
SaaS is generally transparent to the user. It gives the user software whose processing takes place in a cloud. The user can only interact with the software itself and is often unaware that it is being processed in a cloud. A quick example is Google Docs, where users can edit documents that are hosted and processed in the Google cloud.

The cloud providers in this business model often use web interfaces to enable users to increase or decrease the instances they use. They are then billed by the amount of space required or processing power used (depending on what type of service is bought) in a pay-as-you-go system. This is the IaaS type, which for example Amazon, Proofpoint and Rightscale can provide [16]. However, a cloud does not necessarily exist only on the Internet as a business model for delivering computational power or storage. A cloud can be public, which means that the machines delivered reside on the Internet; private, which means that the cluster is hosted locally; or hybrid, where the instances are local at the start but can use public cloud services on demand if the private cloud does not have sufficient power [20].

Cloud computing is also used in many other systems. Google uses cloud computing to provide the backbone of their large systems such as Gmail, Google App Engine and Google Sites. Google App Engine provides a PaaS for the sole purpose of creating and hosting web sites. Google Apps is a SaaS cloud that is a distributed system for handling office types of files; a clouded Microsoft Office [16].

3.2.1 Amazon's public cloud service

Amazon was one of the first big companies to become a large cloud system provider [20]. Amazon is interesting as one of the first giants to provide an API to access data through their Amazon Web Services (AWS) and web pages. Since Eucalyptus uses the same API, albeit with a different and open-source implementation, a closer look at Amazon and their service is warranted. Amazon provides several different services [20], but in terms of this thesis some are of more interest:

Amazon Simple Storage Service (S3)
The Simple Storage Service is Amazon's way of providing vast amounts of storage space to the user. A user can pay for the amount of space needed, from just a few gigabytes to several petabytes. Fees also apply to the amount of data transferred to and from the storage. S3 uses buckets, which in layman's terms can be seen as folders to store data within. These buckets are stored somewhere inside the cloud and are replicated on several devices to provide redundancy. By using standard protocols such as HTTP, SOAP, REST and even BitTorrent to transfer data, the Simple Storage Service provides ease of access [3] to the user.

Amazon Elastic Compute Cloud (EC2)
The Elastic Compute Cloud is a way to provide a dynamic/elastic amount of computational power. Amazon gives the user the ability to pay for nodes. These nodes are virtualized computers that can take an Amazon Machine Image and use it as the image that runs in their virtualized environment (see section 3.1). EC2 aims at supplying large amounts of CPU and RAM to the user, but it is up to the user to write and execute the applications that use the resources [2]. These virtualized computers, nodes, are contained inside a security group, a virtual network consisting of all the EC2 nodes the user has paid for. During computation they can be linked together to provide a strong distributed computational base.

Amazon Elastic Block Store (EBS)
While S3 is focused on storage, it does not focus on speed and fast access to the data. When using the EC2 system, the data to be processed must be stored in a fast-to-access way to avoid downtime for the EC2 system. S3 does not provide that, so Amazon has created a way of attaching virtual, reliable and fast devices to EC2 instances. This is called the Elastic Block Store, EBS. EBS differs from S3 in that the volumes cannot be as small or as large as in S3 (1 GB - 1 TB on EBS compared to 1 B - 5 TB on S3 [1, 3]), but it instead has faster read-write times and is easier to attach to EC2 instances. One EBS volume can only be attached to one EC2 instance at a time, but one EC2 instance can have several EBS volumes attached to it. EBS also offers the ability to snapshot a volume and store the snapshot on a different storage medium, for example S3 [1].

As an example of using Amazon EC2, the New York Times used EC2 and S3 in conjunction to convert 4 TB of articles (around 11 million articles) stored as TIFF images to PDF format. By using 100 Amazon EC2 nodes, the NY Times converted the TIFF images to 1.5 TB of PDFs in less than 24 hours, a conversion that would have taken far longer on a single computer [18]. On a side note, the NY Times also used Apache Hadoop installed on their AMIs to process the data (see section 3.4).

3.3 Software study - Eucalyptus

Eucalyptus is a free open-source cloud management system that uses the same API as AWS. This enables tools that were originally developed for Amazon to be used with Eucalyptus, with the added benefit of Eucalyptus being free and open-source. It provides the same functionality in terms of IaaS deployment and can be used as a private, hybrid or even a public cloud system with enough hardware.

Instances running inside Eucalyptus run Eucalyptus Machine Images (EMI, cleverly named after AMI), which can either be created by the user or downloaded as pre-packaged versions. An EMI can contain either a Windows, Linux or CentOS operating system [8]. At the time of writing, Eucalyptus does not support Mac OS.

3.3.1 The different parts of Eucalyptus

Eucalyptus resides on the host operating system it is installed on. Since it uses libraries and hypervisors that are restricted to Linux, it cannot be run on other operating systems like Microsoft Windows or Apple OS. When Eucalyptus starts it contacts its different components to determine the layout and setup of the systems it controls. These components are configured using configuration files in each component. They all have different responsibilities and areas, together creating a complete system that can handle dynamic creation of virtualized instances, large storage environments and user access control. Providing the same features as Amazon in terms of computation clouds and storage, the components inside Eucalyptus have different names but equal functionality and API [8]:

Walrus
Walrus is the name of the storage container system, similar to Amazon S3. It stores data in buckets and has the same API to read and write data in a redundant system. Eucalyptus offers a way to limit access to and the size of the storage buckets through the same means as S3, by enforcing user credentials and size limits. Walrus is written in Java, and is accessible through the same means as S3 (SOAP, REST or web browser).

Cloud Controller
The Cloud Controller (CLC) is the Eucalyptus implementation of the Elastic Compute Cloud (EC2) that Amazon provides. The CLC is responsible for starting, stopping and controlling instances in the system, as this is what provides the computational power (CPU & RAM) to the user. The CLC contacts the hypervisors indirectly through Cluster Controllers (CC) and Node Controllers (NC). The CLC is written in Java.

Storage Controller
This is the equivalent of the EBS found in Amazon. The Storage Controller (SC) is responsible for providing fast dynamic storage devices with low latency and variable storage size. It resides outside the virtual CLC instances, but can communicate with them as external devices in a similar fashion to the EBS system. The SC is written in Java.

Beneath the Cluster Controller, on every physical machine, lies the Node Controller (NC). Written in C, this component is in direct contact with the hypervisor. The CC and SC talk with the NC to determine the availability, access and needs of the hypervisors. The CC and SC run on a cluster level, which means only one of each is needed per cluster. Usually the SC and CC are deployed on the head node of each cluster - that is, a machine defined as the leader of the rest of the physical machines in the cluster - but if the cloud only consists of one large cluster, the CLC, SC, CC and Walrus can all reside on the head node, the front-end node.

Figure 3.4: Overview of the components in Eucalyptus on rack based servers.

All components communicate with each other over SOAP with WS-Security [8]. To make up the entire system, Eucalyptus has more parts, which were mentioned in brief earlier. The Cluster Controller, written in C, is responsible for an entire physical cluster of machines, providing scheduling and network control of all the machines under the same physical switch/router. See figure 3.4. While the CLC is responsible for controlling most of the instances and their requests (creating, deleting, setting EMIs, etc.), it talks with both the CC and SC on the cluster level. Walrus, on the other hand, is only responsible for storage actions and thus only talks with the SC.

The front-end serves as the maintenance access point. If a user wants more instances or needs more storage allocated to them, the front-end has Walrus and the CLC ready to accept requests and propagate them to CCs and SCs. This provides the user with the transparency of the Eucalyptus cloud. The user cannot tell where and how the storage is created, only that they actually received more storage space by requesting it from the front-end.

3.3.2 A quick look at the hypervisors in Eucalyptus

To be able to create and destroy virtualized instances on demand, the Node Controller needs to talk with a hypervisor installed on the machine it is running on. Currently, Eucalyptus only supports the Xen and KVM hypervisors. To communicate with them, Eucalyptus utilizes the libvirt virtualization API and virsh. The Xen hypervisor is a Type 1 hypervisor which utilizes paravirtualization to run operating systems on it.

This requires that the guest OSs be modified to make calls to the hypervisor instead of the actual hardware [22]. Xen can also support hardware virtualization, but that requires specialized virtualization hardware; see section 3.1. The first guest operating system that Xen virtualizes is called dom0 (basically the first domain) and is automatically booted when the computer starts. When Xen runs a virtual machine, the drivers run in user space, which means that every OS runs inside the memory of a user instead of the kernel's memory space. Xen provides networking by bridging the NIC (see Section 3.1.1).

KVM, the Kernel-based Virtual Machine, is a hypervisor built into the OS kernel. This means that KVM uses calls far deeper into the OS architecture, which in turn provides greater speed. KVM is very small and built into the Linux kernel, but it cannot by itself provide CPU paravirtualization. To do that it uses the QEMU CPU emulator. QEMU is, in short, an emulator designed to simulate different CPUs through API calls. The usage of QEMU inside KVM means that KVM is sometimes referred to as qemu/kvm. When KVM runs inside kernel space, it uses calls through QEMU to interact with the user-space parts, like creating or destroying a virtual machine [22]. Like Xen, KVM also bridges the NIC to provide networking to the virtual machines.

When Eucalyptus creates EMIs to be used inside the cloud system, it requires images along with kernel and ramdisk pairs that work for the designated hypervisor. A ramdisk is not required in the beginning, since the ramdisk image defines the state of the RAM memory in the virtual machine [22], but if the image has been installed and is running when put to sleep, a ramdisk image should come with it (if there were none, all the RAM would be empty when the virtual machine resumed). Since the images might look different depending on which hypervisor created them, Xen and KVM cannot load each other's images. This raises an interesting point about Eucalyptus: if, in theory, there was an image and ramdisk/kernel pair that worked on both hypervisors, Eucalyptus could run physical machines that had either KVM or Xen installed and boot any Xen/KVM image without encountering any virtualization problems. With the current discrepancy, the machines in the cloud are forced to run a specific hypervisor so that the EMIs can be loaded on any Node Controller in the cloud.

3.3.3 The Metadata Service

Just like Amazon, Eucalyptus has a metadata service available for the virtual machines [8]. What the metadata service does is supply information to VMs about themselves. This is achieved by the VMs contacting the CLC with an HTTP request. The CLC checks which VM the call is made from and returns the requested information based on the VM which made the call. For example, a VM could make requests like:

http://<CLC IP>/latest/meta-data/<name of metadata tag>
http://<CLC IP>/latest/user-data/<name of user-defined metadata tag>

The metadata tag can be anything from standard default ones like the kernel id, security groups or the public hostname, to specific ones defined by the administration. This is a method of obtaining setup information when new instances are created or destroyed.

Figure 3.5: Metadata request example in Eucalyptus.
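As an illustration of the kind of call a VM makes, the sketch below performs such a metadata lookup from inside an instance using plain Java. The well-known EC2-style address 169.254.169.254, the public-hostname tag and the class name are assumptions made for the example; the exact endpoint reachable from a Eucalyptus instance depends on the networking mode and cloud setup.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal sketch: fetch one metadata tag from inside a running instance.
public class MetadataLookup {
    public static void main(String[] args) throws Exception {
        // The EC2-compatible metadata address is an assumption here; in some
        // Eucalyptus setups the CLC's own IP is used instead.
        String base = "http://169.254.169.254/latest/meta-data/";
        URL url = new URL(base + "public-hostname"); // any metadata tag works

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line); // e.g. the instance's public hostname
        }
        in.close();
        conn.disconnect();
    }
}
```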

The metadata calls have the exact same call names and structure as in AWS, so tools used with the AWS system work with the Eucalyptus metadata.

3.3.4 Networking modes

With Eucalyptus installed on all the machines in the cluster(s), the different components call each other with SOAP commands over the physical NIC. However, when new instances are created they need to have their networking set up on the fly. Since the physical network might have other settings regarding how the NICs retrieve their IPs, Eucalyptus has different modes to give the virtual machines access to the network. The virtual machines communicate with each other using virtual subnets. These subnets must not in any way overlap with the physical network used by the components of Eucalyptus (note the difference between components like the CLC, Walrus, NC, etc. and the virtual machines). The CC has one connection towards the virtual subnet and another bridged to the physical network [8].

The networking modes inside the Eucalyptus cloud system differ in how much freedom and connectivity the instances have. Some modes add features to the VM networks:

Elastic IPs is, in short, a way to supply the user with a range of IPs that the VMs can use. These can then be used as external, public IPs and are ideal if the user needs a persistent web server, for example.

Security groups give the user of a group of instances control over what can be done or not in terms of network traffic. For example, one security group can enforce that no ICMP calls are answered, or that no SSH connections can be made.

VM isolation, if activated, prevents VMs from different security groups from contacting each other. When running a public cloud providing IaaS, this is almost a must-have.

The different modes give different benefits and drawbacks, and some are even required under certain circumstances. There are four different networking modes in Eucalyptus. In three of the modes, the front-end acts as the DHCP server distributing IPs to the virtual machines. The fourth mode requires an external DHCP server to distribute IPs to the virtual machines [8]. In all networking modes the VMs can be connected to from an external source if they are given a public IP and the security group allows it. The different modes are the following:

SYSTEM
The only networking mode that requires an external DHCP server to serve new VM instances with IPs. This mode requires little configuration since it does not limit internal interaction. It does, however, provide no extras like security groups, elastic IPs or VM isolation. This mode is best used when the cloud is private and there are few users sharing the cloud.

STATIC
STATIC mode requires that the DHCP server on the network is either turned off or configured not to serve the specific IP range that the VMs use. The front-end has to take care of the DHCP service towards the instances, but in a non-dynamic way, by adding pairs of MAC addresses and IPs for the VMs. Just like SYSTEM, it does not provide the benefits normally associated with a public cloud like elastic IPs, VM isolation or security groups.

MANAGED
MANAGED mode gives the most freedom to the cloud. Advanced features like VM isolation, elastic IPs and security groups are available by creating virtual networks between the VMs.

MANAGED-NoVLAN
If the physical network relies on VLANs then normal MANAGED mode will not work (since several VLAN packets on top of each other will cause problems for the routing). In this mode most of the MANAGED mode features are still available, except VM isolation.

When setting up the Eucalyptus networking mode one has to consider what type of cloud it is and what kind of routing setup is made on the physical network.

3.3.5 Accessing the system

When a user wants to create, remove or edit instances, they can either contact them directly through SSH (if they have public IPs) or control the instances by using the Eucalyptus web interface. By logging in on the front-end with a username and password, the user or admin can configure settings of the system. Also, tools developed for AWS can be used for this, since Eucalyptus supports the same API calls [8]. Similarly, there is a tool called euca2ools for administration. It is a command-line interface tool used to manipulate the instances that a user has running. An admin using euca2ools has more access than an ordinary user. Euca2ools is almost mandatory when working with Eucalyptus.
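Because the API is EC2-compatible, an AWS client library can be pointed at the Eucalyptus front-end simply by changing the endpoint URL. The sketch below, using the AWS SDK for Java, is a hypothetical illustration of that idea and not the tooling used in this thesis: the endpoint host, the credentials, the key name and the EMI id are placeholders, and the exact endpoint path may differ between Eucalyptus versions.

```java
import java.util.List;

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.ec2.AmazonEC2Client;
import com.amazonaws.services.ec2.model.Instance;
import com.amazonaws.services.ec2.model.RunInstancesRequest;
import com.amazonaws.services.ec2.model.RunInstancesResult;

// Sketch: launch a few instances of a Hadoop EMI through the EC2-compatible API.
public class LaunchInstances {
    public static void main(String[] args) {
        // Access and secret keys are obtained from the Eucalyptus web interface.
        BasicAWSCredentials credentials =
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY");
        AmazonEC2Client ec2 = new AmazonEC2Client(credentials);
        // Point the client at the private cloud instead of Amazon (placeholder URL).
        ec2.setEndpoint("http://front-end.example.org:8773/services/Eucalyptus");

        RunInstancesRequest request = new RunInstancesRequest()
                .withImageId("emi-XXXXXXXX")   // placeholder EMI id
                .withInstanceType("m1.small")
                .withKeyName("mykey")          // placeholder key pair
                .withMinCount(3)
                .withMaxCount(3);

        RunInstancesResult result = ec2.runInstances(request);
        List<Instance> instances = result.getReservation().getInstances();
        for (Instance i : instances) {
            System.out.println("Started " + i.getInstanceId());
        }
    }
}
```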

3.4 Software study - Hadoop MapReduce & HDFS

Hadoop is an open-source software package from the Apache Foundation which contains different systems aimed at file storage, analysis and processing of large amounts of data, ranging from only a few gigabytes to several hundreds or thousands of petabytes. Hadoop's software is all written in Java, but the different parts are separate projects in themselves, so bindings to other programming languages exist on a per-project basis. The three major subprojects of Hadoop are the following [9]:

HDFS
The Hadoop Distributed File System, HDFS, is a specialized filesystem for storing large amounts of data across a distributed system of computers with very high throughput and multiple replicas on a cluster. It provides reliability between the different physical machines to support a base for very fast computations on a large dataset.

MapReduce
MapReduce is a programming idiom for analyzing and processing extremely large datasets in a fast, scalable and distributed way. Originally conceived by Google as a way of handling the enormous amount of data produced by their search bots [23], it has been adapted so that it can run on a cluster of normal commodity machines.

Common
The Hadoop Common subproject provides interfaces and components built in Java to support distributed filesystems and I/O. This is more of a library that has all the features that HDFS and MapReduce use to handle the distributed computation. It has the code for persistent data structures and Java RPC that HDFS needs to store clustered data [23].

While these are Hadoop's major subprojects, there are several others that are related to the Hadoop package. These are generally projects related to distributed systems which either use the major Hadoop subprojects or are related to them in some way:

Pig
Pig introduces a higher-level data-flow language and framework for doing parallel computation. It can work in conjunction with MapReduce and HDFS and has an SQL-like syntax.

Hive
Hive is a data warehouse infrastructure with a basic query language, Hive QL, which is based on SQL. Hive is designed to easily integrate and work together with the data storage of MapReduce jobs.

HBase
A distributed database designed to support large tables of data with a scalable infrastructure on top of normal commodity hardware. Its main usage is handling extremely large database tables, i.e. billions of rows by millions of columns.

Avro
By using Remote Procedure Calls (RPC), Avro provides a data serialization system to be used in distributed systems. Avro can be used when parts of a system need to communicate over the network.

Chukwa
Built on top of HDFS and MapReduce, Chukwa is a monitoring system for when a large distributed system needs to be monitored.

Mahout
A large machine learning library. It uses MapReduce to provide scalability and handling of large datasets.

ZooKeeper
ZooKeeper is mainly a service for distributed systems control, monitoring and synchronization.

HDFS and MapReduce are intended to work on commodity hardware, as opposed to specialized high-end server hardware designed for computation-heavy processing. The idea is to be able to use the Hadoop software on a cluster of not-that-high-end computers and still get a very good result in terms of throughput and reliability. An example of commodity hardware, taken from Hadoop - The Definitive Guide [24]:

Processor: 2 quad-core 2-2.5 GHz CPUs
Memory: ECC RAM
Harddrive: 4 x 1 TB SATA disks
Network: Gigabit Ethernet

Since Hadoop is designed to use multiple large hard drives and multiple CPU cores, having more of them is almost always a benefit. ECC RAM stands for Error Correction Code RAM and is almost a must-have, since Hadoop uses a lot of memory in processing and reportedly sees a lot of checksum errors on clusters without it [24]. Using Hadoop on a large cluster of racked physical machines in a two-level network architecture is a common setup.

3.4.1 HDFS

The Hadoop Distributed File System is designed to be a filesystem that gives a fast access rate and reliability for very large datasets. HDFS is basically a Java program that communicates with other networked instances of HDFS through RPC to store blocks of data across a cluster. It is designed to work well with large file sizes (which can vary from just hundreds of MBs to several PBs), but since it focuses on delivering a high volume of data between the physical machines, it has a slower access rate and higher latency [23].
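Since HDFS is accessed through its Java client library rather than mounted like an ordinary filesystem, reading a file looks roughly like the sketch below. The NameNode address and the file path are placeholders; in a real cluster they come from the Hadoop configuration (fs.default.name) rather than being hard-coded. The roles of the NameNode and DataNodes mentioned in the comments are described in the rest of this section.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: read a file from HDFS through the Java client API.
public class HdfsReadExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder NameNode URI; normally taken from core-site.xml.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode-host:9000/"), conf);

        // The client asks the NameNode for block locations and then streams
        // the blocks directly from the DataNodes that hold the replicas.
        Path file = new Path("/user/hadoop/input/articles.txt"); // hypothetical path
        FSDataInputStream in = fs.open(file);
        BufferedReader reader = new BufferedReader(new InputStreamReader(in, "UTF-8"));
        String line;
        while ((line = reader.readLine()) != null) {
            System.out.println(line);
        }
        reader.close();
        fs.close();
    }
}
```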

HDFS is split into three software parts. The NameNode is the master of the filesystem, which keeps track of where and how the files are stored in the filesystem. The DataNode is the slave in the system and is controlled by the NameNode. There is also a Secondary NameNode which, contrary to what its name says, is not a replacement for the NameNode. The Secondary NameNode is optional, which is explained later in this section.

When HDFS stores files in its filesystem it splits the data into blocks. These blocks of raw data are of configurable size (defined in the NameNode configuration), but the default size is 64 MB. This can be compared to a normal disk block, which is 512 bytes [23]. When a data file has been split up into blocks, the NameNode sends the blocks to the different DataNodes (other machines), where they are stored on disk. The same block can be sent to multiple DataNodes, which provides redundancy and higher throughput when another system requests access to the file. The NameNode is responsible for keeping track of the location of the file among the DataNodes, as well as the tree structure that the filesystem uses. The metadata about each file is also stored in the NameNode, like which original data file it belongs to and its relation to other blocks. This data is stored on disk in the NameNode in the form of two files: the namespace image and the edit log. The exact block locations on the DataNodes are not stored in the namespace image; they are reconstructed on startup by communicating with the DataNodes and then only kept in memory [23].

Figure 3.6: The HDFS node structure.

Because the NameNode keeps track of the metadata of the files and the tree structure of the file system, it is also a single point of failure. If it breaks down, the whole HDFS filesystem will be invalid, since the DataNodes only store the data on disk without any knowledge of the structure. Even the Secondary NameNode cannot work without the NameNode, since the Secondary NameNode is only responsible for validating the namespace image of the NameNode. Due to the large amounts of data that file metadata can produce, the NameNode and Secondary NameNode should be different machines (and separated from the DataNodes) in a large system [23]. However, work has begun in Hadoop to remove the Secondary NameNode and replace it with a Checkpoint Node and a Backup Node, which are meant to keep track of the NameNode and keep an up-to-date copy of it. This will work as a backup in case of a NameNode breakdown [11], lowering the risk of failure if the NameNode crashes.
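The block size and the replication factor discussed here (and just below) are per-file properties that a client can also set explicitly when writing. The following sketch uses one of the FileSystem.create overloads to do so; the NameNode URI, the path and the class name are placeholders for illustration only.

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: write a file to HDFS with an explicit replication factor and block size.
public class HdfsWriteExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode-host:9000/"), conf);

        Path file = new Path("/user/hadoop/output/example.txt"); // hypothetical path
        short replication = 3;              // keep three copies of each block
        long blockSize = 64L * 1024 * 1024; // the 64 MB default block size
        int bufferSize = 4096;

        FSDataOutputStream out =
                fs.create(file, true, bufferSize, replication, blockSize);
        out.writeBytes("hello hdfs\n");     // the NameNode assigns blocks to DataNodes
        out.close();
        fs.close();
    }
}
```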

By default, the NameNode replicates each block by a factor of three. That is, the NameNode tries to keep three copies of each block on different DataNodes at all times. This provides both redundancy and more throughput for the client that uses the filesystem. To provide better redundancy and throughput, HDFS is also rack-aware; that is, it wants to know which rack each node resides in and how far, in terms of bandwidth, each node is from the others. That way the NameNode can keep more copies of blocks on one rack for faster throughput, and additional copies on other racks for better redundancy.

Figure 3.7: The interaction between nodes when a file is read from HDFS.

DataNodes are more or less data dummies that take care of storing and sending file data to and from clients. When started, they have the NameNode's location defined as a URL in their configuration file. This is by default localhost, which needs to be changed as soon as there is more than one node in the cluster. When a user wants to read a file, a Hadoop HDFS client contacts the NameNode. The NameNode then fetches the block locations and returns them to the client, leaving the client to do the reading and merging of blocks from the DataNodes. See Figure 3.7. Since HDFS requires a special client to interact with the filesystem, it is not as easy as mounting an NFS (Network File System) and reading from it in an operating system. However, there are bindings to HTTP and FTP available, and software like FUSE, Thrift or WebDAV can also work with HDFS [23]. Using FUSE on top of HDFS would mean that one can mount it as a normal Unix userspace drive.

3.4.2 MapReduce

MapReduce is a programming idiom/model for processing extremely large datasets using distributed computing on a computer cluster. It was invented and patented by Google. The word MapReduce derives from two typical functions used within functional programming, the Map and Reduce functions [7]. Hadoop has taken this framework and implemented it, through a license from Google, to run on top of a cluster of computers that are not high-end, similar to HDFS. The purpose of Hadoop's MapReduce is to utilize the combined resources of a large cluster of commodity hardware. MapReduce relies on a distributed file system, where HDFS is currently one of the few supported.

MapReduce phases

The MapReduce framework is split up into two major phases: the Map phase and the Reduce phase. The entire framework is built around key-value pairs, and the only thing that is communicated between the different parts of the framework is key-value pairs. The keys and values can be user-implemented, but they are required to be serializable since they are communicated across the network. Keys and values can range from simple primitive types to large data types. When implementing a MapReduce problem, the problem has to be splittable into n parts, where n is at least the number of Hadoop nodes in the cluster. It is important to understand that while the different phases in a MapReduce job can be regarded as sequential, they in fact work in parallel as much as possible. The shuffle and reduce phases can start working as soon as one map task has completed, and this is usually the case. Depending on the work slots available across the cluster, each job is divided as much as possible. The MapReduce framework is built around these components:

InputFormat
Reads file(s) on the DFS, tables from a DBMS, or whatever the programmer wants it to read. This phase takes an input of some sort and splits it into InputSplits.

InputSplits
An InputSplit depends on what the input data is. It is a subset of the data read, and one InputSplit is sent to each Map task.

MAP
The Map phase takes a key-value pair generated through the InputSplit. Each node runs one map task, and the tasks run in parallel with each other. One Map task takes a key-value pair, processes it and generates another key-value pair.

Combine
The optional combine phase is a local task run directly after each map task on each node. It does a mini-reduce by combining all equal keys generated from the current map task.

Shuffle
When the nodes have completed their map tasks, the job enters the shuffle phase, where data is communicated between the nodes. Key-value pairs are passed between the nodes to be appended, sorted and partitioned. This is the only phase where the nodes communicate with each other.

Shuffle - Append
Appending the data during the shuffle phase is generally just putting all the data together. The shuffle append is done automatically by the framework.

Shuffle - Sort
The sort phase is when the keys are sorted, either in a default way or in a programmer-implemented way.

Shuffle - Partition
The partition phase is the last phase of the shuffle. It calculates how the combined data should be split out to the reducers. It can either be handled in a default way or be programmer-implemented. It should send an equal amount of data to each reducer for optimal performance.

REDUCE
Reduce is done by taking all the key-value pairs with the same key and performing some kind of reduction on the values. Each reducer takes a subset of all the key-value pairs, but will always have all the values for one key. For example, (Foo, Bar) and (Foo, Bear) will go to the same reducer as (Foo, [Bar, Bear]). If a reducer has one key, no other reducer will receive that key.

Output
Each reducer generates one output to storage. The output can be controlled by subclassing OutputFormat. By default the output generates part-files for each reducer, named part-r-00000, part-r-00001, etc. This can be controlled through an implementation of OutputFormat.

Although each InputSplit is sent to one Map task, the programmer can tell the InputFormat (through a RecordReader) to read across the boundaries of the given split. This enables the InputFormat to read a subset of data without having to combine it from two or more maps. When one InputSplit has been read across its boundaries, the latter split will begin where the former stopped. The size of the InputSplit given to a map task most often depends on the size of the data and the size of an HDFS block. Since Hadoop MapReduce is optimized for - and most often runs on - HDFS, the block size of HDFS usually dictates the size of the split if it is read from a file.

Figure 3.8: The MapReduce phases in Hadoop MapReduce.

The key-value pairs given to the mapper do not always require the keys to be meaningful. The values can be the only interesting part for the mapper, which outputs a different key and value after computation.
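To make the MAP and REDUCE components above concrete, the following is a minimal word-count style sketch using Hadoop's Java API (the org.apache.hadoop.mapreduce classes). It is an illustration only, not the MapReduce application used in this thesis, and the class names are made up for the example.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// MAP: (byte offset, line of text) -> list of (word, 1)
public class WordCountMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer tokens = new StringTokenizer(value.toString());
        while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            context.write(word, ONE); // emit one (word, 1) pair per word
        }
    }
}

// REDUCE: (word, [1, 1, ...]) -> (word, total count)
class WordCountReducer
        extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        context.write(key, new IntWritable(sum)); // one output pair per key
    }
}
```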

The general flow of key-value pairs in the MapReduce framework is the following:

map(K1, V1) → list(K2, V2)
reduce(K2, list(V2)) → list(V3)

However, when implementing MapReduce, the framework takes care of generating the lists and combining the values for one key. The programmer only needs to focus on what one map or reduce task does, and the framework will apply it n times until the whole job has been mapped. In terms of the Hadoop framework it can be regarded as:

framework in:      data → (K1, V1)
map:               (K1, V1) → (K2, V2)
framework shuffle: (K2, V2) → (K2, list(V2))
reduce:            (K2, list(V2)) → (K3, V3)
framework out:     (K3, V3) → (K3, list(V3))

When starting a Hadoop MapReduce cluster, it requires a master that contains the HDFS NameNode and the JobTracker. The JobTracker is the scheduler and entry point of a MapReduce job. The JobTracker communicates with TaskTrackers that run on other nodes in the cluster. A node that only contains a TaskTracker and an HDFS DataNode is generally known as a slave. TaskTrackers periodically ping the JobTracker and check whether a free task is ready to work on. If the JobTracker has a task ready, it is sent to the TaskTracker, which performs it. Generally, on a large cluster the master is separate from the slaves, but on smaller clusters the master also runs as a slave.
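A job is wired together and handed to the JobTracker through a driver program. The sketch below, which reuses the hypothetical WordCountMapper and WordCountReducer from the earlier sketch, shows where the InputFormat, Mapper, Combiner, Partitioner, Reducer and OutputFormat from the component list plug in; the job name, paths and reducer count are placeholders.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.mapreduce.lib.partition.HashPartitioner;

// Sketch: configure and submit the word-count job to the cluster.
public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "word count");             // newer releases use Job.getInstance(conf)
        job.setJarByClass(WordCountDriver.class);

        job.setInputFormatClass(TextInputFormat.class);    // generates the InputSplits
        job.setMapperClass(WordCountMapper.class);         // MAP phase
        job.setCombinerClass(WordCountReducer.class);      // optional local mini-reduce
        job.setPartitionerClass(HashPartitioner.class);    // shuffle partitioning (the default)
        job.setReducerClass(WordCountReducer.class);       // REDUCE phase
        job.setOutputFormatClass(TextOutputFormat.class);  // writes part-r-00000, part-r-00001, ...

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setNumReduceTasks(2);                          // number of reducers / output parts

        FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory in HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not already exist

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```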

Chapter 4 Accomplishment

This chapter describes how the configuration of Eucalyptus and Hadoop was done. It describes one way to set up a Eucalyptus cloud and one way to run Hadoop MapReduce in it. While it describes one way, it should not be regarded as the definitive way to do it. Eucalyptus can run several different networking modes on top of different OSs, which means that the following configurations are not the only solution.

4.1 Preliminaries

The hardware available for this thesis was nine rack servers running Debian 5, connected through one gigabit switch with virtual network support. Of these, one was the designated DHCP server of the subnet, only serving specific MAC addresses a specific IP address and not giving out any IP to an unknown MAC address. This server also had a shared NFS /home for the other eight servers in the subnet. Since the other eight servers depended on this server, it was ruled out of the Eucalyptus setup. Of the eight available, four supported the Xen hypervisor and four supported the KVM hypervisor. These are the hardware settings of the servers:

CPU: AMD Opteron 246 quad-core on test01-04; AMD Opteron 2346 HE quad-core on test05-08
RAM: 2 GB on test01-04; … GB on test05-08
HDD: 27 GB on /home; 145 GB per server

Initially an older Eucalyptus version was chosen for this thesis, but due to problematic configurations of the infrastructure a later version was used in the final testing; see Section 4.2.1 for further explanation.

For Hadoop MapReduce the latest version was chosen, due to the fact that this release contains a major API change and also has a large number of bug fixes compared to the earlier versions. None of the servers had Hadoop or Eucalyptus previously installed on them.

4.2 Setup, configuration and usage

The following sections contain a description of how the Eucalyptus and Hadoop MapReduce software were set up. It is divided into three different subsections: the first focuses on how to configure Eucalyptus on the servers available, the second on how to create an image with Hadoop suitable for the environment, and finally the MapReduce implementation along with how to get it running. Installing, configuring and running Eucalyptus requires a user with root access to the systems it is installed on. The configuration is based on a Debian host OS, as this was the system it was run on. This means that some packages or commands either do not exist or have a different command structure on other OSs like CentOS or RedHat Linux.

4.2.1 Setting up Eucalyptus

Compared to Xen, KVM works more out of the box since it is tightly integrated with the native Linux kernel. The choice of hypervisor was therefore KVM, to avoid any problems that might occur between the host OS and the hypervisor. This meant that four out of the eight servers could not be used as servers inside the cloud infrastructure. Eucalyptus can probably be set up to use both Xen and KVM if it loads an image adapted to the correct hypervisor, but that is out of scope for this thesis.

Installing Eucalyptus can be done using the package manager of the OS. In Debian, apt-get can be used once the Eucalyptus repository has been added to /etc/apt/sources.list. Depending on the version used, the repository location differs. To add Eucalyptus 2.0.2, edit the sources.list file and add the following line:

deb <Eucalyptus repository URL> squeeze main

Calling apt-get install eucalyptus-nc eucalyptus-cc eucalyptus-cloud eucalyptus-sc will install all the parts of Eucalyptus on the server it was called on. Starting, stopping and rebooting Eucalyptus services is done through the init.d/eucalyptus-* scripts.

The physical network setup is to have one server act as the front-end, with all the cloud, cluster and storage controllers running on it. The three other servers only run virtual machines and talk to the front-end. The FE does not run any instances at all, to avoid any issues with networking and resources. Public IPs can be booked in the CLC configuration file, and for the test environment the IP range *.*.* was set as available for Eucalyptus. This needs to be communicated with the network admin, as the Eucalyptus software must assume that these IP addresses are free and no one else is using them.

Figure 4.1: Eucalyptus network layout on the test servers, along with the services running on each server.

In terms of networking, different modes have different benefits. Due to the configuration of the subnet DHCP server, however, the SYSTEM mode is not an option. The STATIC configuration simplifies setting up new VMs, but it prevents benefits such as VM isolation and, in particular, the metadata service. The choice therefore stands between the MANAGED and the MANAGED-NOVLAN mode. By setting up a virtual network between two of the servers one can verify whether the network is able to use virtual LANs; the procedure is documented on the Eucalyptus network configuration page [8]. The network verified as VLAN clean, that is, it is able to run VLANs.

Most of the problems encountered when setting up a Eucalyptus infrastructure are related to the network. The networking inside the infrastructure consists of several layers, and with the error logs available (found in the /var/log/eucalyptus/*.log files) there is a lot of information to search through when looking for errors. The configuration file that Eucalyptus uses, /etc/eucalyptus/eucalyptus.conf, has a setting that controls the verbosity of the logging. When installing, at least the INFO level is recommended, while DEBUG can be set when there are errors whose source is hard to find.

When working with the Eucalyptus configuration file, it is important to note that the different subsystems (NC, CC, CLC, SC and Walrus) use the same configuration file but different parts of it. As an example, changing the VIRTIO_* settings on the cloud controller has no effect, as those settings are only used by the node controller. This might cause confusion in the beginning of the configuration, but the file itself is by default very well documented.

When setting up the initial Eucalyptus version, problems occur with compatibility of newer libraries. Since the host system uses libraries that are relatively new compared to that Eucalyptus version, Eucalyptus attempts to load libraries that have changed names and/or locations. It does, for example, attempt to load the Groovy library through an import call that points to a non-existent location. To remedy these library problems a newer Eucalyptus version was selected as the next one to try, which demanded a kernel and distribution upgrade from Debian 5 to 6. The newer version has better support for KVM networking and calls to newer libraries, so it should work better than the initial one; it does not run without problems, though. Booting the NCs on the three node servers (test06-08) as well as the cluster controller on the front-end (test05) works without problems, but the cloud controller silently dies a few seconds after launching it. This error cannot be found in the logs, since it is output directly to stderr by the daemon, which itself is redirected to /dev/null. By running the CLC directly from /usr/sbin/eucalyptus-cloud instead of /etc/init.d/eucalyptus-clc one can see that this version still has dependency problems with newer Hibernate and Groovy libraries. This can be solved by downgrading the libraries to earlier versions, but that can cause compatibility issues with other software running on the servers. To prevent such issues the latest version, 2.0.2, was installed on the four servers used. This proved to be a working concept.

At this point all the Eucalyptus services are running, but they are not connected to each other: the CLC is not aware of any SC, Walrus or CC running, and the CC is not aware of any nodes available. Since Eucalyptus is a web service that runs on the Apache web server, one can verify that the services are running by calling

ps auxw | grep euca

which should show an httpd daemon running under the eucalyptus user, confirming that the correct httpd daemons are running. There are two ways of connecting the different parts of the system: either the correct settings are written in the configuration file on the CLC, which is then rebooted, or the euca_conf CLI program is run. What euca_conf does is in fact change the configuration file and reboot a specific part of the CLC. The cloud controller can then connect to the NCs through REST calls (which can be seen by reading the /var/log/eucalyptus/cc.log file on the front-end). This means that the Eucalyptus infrastructure is working in one sense, but the virtual machines themselves can still be erroneously configured.
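A minimal sketch of registering the components with euca_conf, assuming the flag names of the Eucalyptus 2.0 command-line tool and using placeholder host names for the test servers:

    # Verify that the Eucalyptus web services are up; this should list httpd daemons
    # owned by the eucalyptus user
    ps auxw | grep euca

    # On the front-end, register Walrus, the cluster, the storage controller and the nodes
    # with the cloud controller (flag names are assumptions based on the 2.0 tooling)
    euca_conf --register-walrus <front-end-ip>
    euca_conf --register-cluster cluster01 <front-end-ip>
    euca_conf --register-sc cluster01 <front-end-ip>
    euca_conf --register-nodes "<node1-ip> <node2-ip> <node3-ip>"

Each registration rewrites the configuration file and restarts the corresponding part of the CLC, so the same result can be achieved by editing the file manually and rebooting the CLC.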

Network configuration

Figure 4.2: The optimal physical layout (left) compared to the test environment (right).

Configuring the right network settings for the VMs that run inside the cloud is somewhat of a trial-and-error procedure. Even though there is documentation on the network setup, it does not explain why the setup should be done in the documented way, and it assumes that the cloud has a specific physical network layout. In a perfect environment the front-end should have two NICs, with one NIC connected to the external network and one to the internal; see Figure 4.2. The front-end acts as a virtual switch in any case, which means that in a single-NIC layout the traffic passes through the physical layer to the front-end, where it is switched and then passed through the same network again, as shown in Figure 4.3. On the NCs the private and public interface settings therefore refer to the same NIC, with a bridge specified so that virsh knows where to attach new VMs. On the front-end there are no virtual machines, only a connection to the outside network (the NIC, eth0) and a connection to the internal network (here a bridge).

Figure 4.3: Network traffic in the test environment. Dotted lines indicate a connection on the same physical line.

While the documentation specifies that MANAGED should work even in the non-optimal layout of the physical network, no combination of settings on the NC and FE could make the VMs connect properly to the outside network. A faulty network configuration can be spotted by checking the public IP address of a newly created instance through euca2ools or another tool such as Hybridfox, or by reading through the logs. If the public IP shown is not a valid address from the booked range, it is generally an indication of a faulty network setting.

To properly configure the network for the VMs, the front-end and the node controllers need settings in the configuration file that match each other. Table 4.1 shows the variables shared between the NCs and the front-end, with the values that gave a working network in the test environment.

Variable             Front-end value   NC value          Comment
VNET_MODE            MANAGED-NOVLAN    MANAGED-NOVLAN    Network mode; the same on NC and FE.
VNET_PUBINTERFACE    eth0              eth0              The public interface to use.
VNET_PRIVINTERFACE   br0               eth0              The private interface towards the nodes.
VNET_BRIDGE          -                 br0               Bridge to connect the VMs to.

Table 4.1: Configuration variables and their corresponding values in the environment.

This renders a network with an internal network communicating through bridges.
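As an illustration, the relevant parts of /etc/eucalyptus/eucalyptus.conf would look roughly as follows. Only the variables from Table 4.1 are shown; additional MANAGED-NOVLAN settings such as the private subnet, netmask and DNS server are omitted here.

    # /etc/eucalyptus/eucalyptus.conf on the front-end (test05, running CLC/CC/SC/Walrus)
    VNET_MODE="MANAGED-NOVLAN"
    VNET_PUBINTERFACE="eth0"
    VNET_PRIVINTERFACE="br0"

    # /etc/eucalyptus/eucalyptus.conf on each node controller (test06-08)
    VNET_MODE="MANAGED-NOVLAN"
    VNET_PUBINTERFACE="eth0"
    VNET_PRIVINTERFACE="eth0"
    VNET_BRIDGE="br0"

After editing the file, the affected services have to be restarted through the init.d scripts for the new values to take effect.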
