MASARYKOVA UNIVERZITA
FAKULTA INFORMATIKY

Cloud backend deployment at Faculty of Informatics MU

MASTER THESIS

Ondrej Faměra

Brno, Spring 2013

Declaration

I hereby declare that this thesis is my original authorial work, which I have worked out on my own. All sources, references and literature used or excerpted during the elaboration of this work are properly cited and listed, with complete reference to the due source.

Advisor: RNDr. Jan Kasprzak

Acknowledgement

I would like to thank Yenya for technical support and feedback, Hanna Kim for support and language corrections, and my colleagues for feedback.

Abstract

This work presents approaches to building cloud storage, network and management subsystems, compares the performance of individual implementations, and discusses the best choices in this field. It also discusses the implementation of a cloud backend in the infrastructure of the Faculty of Informatics at Masaryk University.

Keywords: cloud, CEPH, GlusterFS, OpenStack, OpenNebula, fadmin, layer-bloat

Contents

1 Introduction
2 Cloud storage
  2.1 Distributed storage
    Storage layer-bloat
    Distributed object storage as VM storage
    Metadata
  2.2 GlusterFS
    Distributed, replicated and striped volumes
  2.3 CEPH
    RADOS Block Device (RBD)
  2.4 DRBD
  2.5 Other distributed file systems
    MooseFS
    SheepDog
3 Cloud networking
  3.1 VM traffic
  3.2 Management networking
    DFS traffic
    Cloud management traffic
  3.3 Inter VM network separation techniques
    Linux bridge
    VLANs
    Tunneled protocols and tunnels
    Open vSwitch
4 Cloud management
  4.1 Layer-bloat problem
  4.2 OpenStack
  4.3 OpenNebula
5 Performance testing
  Physical testing environment
  Virtual testing environment
6 Storage performance testing
  6.1 DFS capabilities requirements
    DRBD
    GlusterFS
    CEPH
    MooseFS and SheepDog
  CEPH and GlusterFS performance comparison
7 Cloud management testing
  OpenStack
  OpenNebula
8 Deployment at FI MU
  Existing infrastructure and front-end
  Current deployment state
9 Conclusion

Chapter 1 Introduction

Virtualization in IT has never seen such rapid growth as when it was hidden behind the word cloud. This concept revolutionized the area, as everything around virtualization became heavily promoted to non-IT people, with claims and promises of consolidation, cost savings, ease of use and many other over-hyped benefits. This work does not aim to introduce cloud concepts from the user perspective; rather, it shows the other side of the whole cloud ecosystem: the machinery behind it. Despite our determination to present the most recent information, this area is silently and rapidly evolving, so we describe the current state with consideration of possible future development.

First, we provide insight into cloud storage concepts, beginning with a definition of the kind of storage most suitable for the cloud. We also introduce the storage layer-bloat problem, which impacts performance expectations and planning. Further, we describe distributed file systems and their suitability for storing virtual machine data. Along with these concepts, we also discuss redundancy, reliability and robustness of distributed storage systems. As examples matching our requirements on cloud storage, we introduce two projects in more depth: GlusterFS and CEPH. At the end we briefly describe other possible file systems, with the main shortcomings that removed them from our list of options for effective deployment.

In the third chapter we take a closer look at the anatomy of network traffic in a cloud environment. We focus not only on the high-level architecture of how virtual machines in the cloud communicate, but also on the management traffic around the cloud. Recommendations for traffic separation, security and performance are also described here, with emphasis on the distributed file systems presented in the previous chapter.

The last theoretical chapter introduces cloud management systems and their main problem, which is similar to the storage layer-bloat. After a general discussion of cloud management, we introduce two candidates for implementation at the Faculty of Informatics of Masaryk University (FI MU). The first of them is a large project called OpenStack, which aims at becoming a cloud operating system. The second one is the OpenNebula project, which is less ambitious than OpenStack but focuses more on the particular area of cloud management.

Part of this work is also performance testing of selected subsystems that are crucial for cloud deployment. First, we describe the testing equipment and testing methods. We then look at storage subsystem performance and compare distributed file systems that had interesting features for deployment at FI MU. The comparisons made here are supported by graphs of the most important results to give a proper overview. Later, we discuss the suitability of the tested distributed file systems for our deployment.

One whole chapter is dedicated to the discussion of cloud management systems. We show the differences between the two selected in earlier chapters, then decide on their suitability for our deployment and describe their most important advantages.

The last chapter deals with the deployment of the systems discussed in this work to improve the virtualization environment at FI MU. Here we gather the knowledge from the preceding chapters and transform it into practical deployment and implementation suggestions.

1.1 Cloud

Before we dive deeper into cloud technologies, we need to introduce what we are actually dealing with. Cloud technology provides services at all levels of IT infrastructure, from basic virtualization services through operating system services to application-level services.[1] The goal of cloud computing is to allow users to benefit from all of these technologies without the need for deep knowledge about or expertise with each one of them.[17]

Figure 1.1: Cloud computing logical diagram[19]

The most recent accepted standardized definition of cloud computing is the one by the National Institute of Standards and Technology (NIST).[21] Cloud can be classified in many different ways, depending on which aspect we are looking at. Figure 1.1 shows, for example, a division by the services the cloud provides. This view is most important for end users. It consists of:

- Software as a Service (SaaS)
- Platform as a Service (PaaS)
- Infrastructure as a Service (IaaS)

From the deployment point of view, we can divide clouds into the following categories:

- Public cloud: run and provisioned by 3rd-party cloud providers, enabling organizations to outsource computer infrastructure in order to decrease the total running cost of IT

- Private cloud: contained inside the control domain of the cloud owner, providing better control over what is happening inside, along with the ability to completely customize it to specific requirements

- Hybrid cloud: a mix of public and private cloud, bringing the best of both worlds: low-cost services and a highly customizable own environment

This work aims at giving cloud owners more insight into building a private cloud providing IaaS and PaaS services. Our main goal is to find the optimal solution for FI MU as a cloud provider. To build up the cloud from the bottom, we should take a deeper look at:

- Cloud storage: providing storage for the whole cloud infrastructure
- Cloud networking: enabling communication not only inside the cloud but also managing connections to the rest of the world
- Cloud management: giving administrators and users the ability to keep everything in the cloud under control, and providing them with a high-level interface to work with this environment

Chapter 2 Cloud storage

Cloud storage is made up of many distributed resources, but still acts as one. It is highly fault-tolerant through redundancy and distribution of data, and highly durable through the creation of versioned copies.[22] Important aspects of cloud storage are:

- scalability: it should be easy to scale out without losing data availability
- flexibility: it should be possible to use commodity hardware as well as high-performance hardware
- redundancy: data should be protected against errors
- recovery: data should be easy to recover, or the storage should deploy a self-healing mechanism
- performance: we should take into consideration the planned usage of the storage system (distributed computations versus service hosting)

Cloud storage solutions follow two implementation paths:

- hardware path: big, powerful storage systems providing an all-in-one solution for storage; however, they are expensive and become outdated quickly as trends change.

- software path: an approach that builds distributed storage in software, lowering the cost of specialized hardware. Usually commodity hardware is used, which has a lower price than specialized all-in-one solutions.

Our work aims at providing better insight into distributed storage implemented in software, as we believe it provides a better performance/price ratio.

2.1 Distributed storage

Distributed storage (DS) is closely analogous to distributed object storage. A distributed object storage (DOS) stores objects (unified data structures) among data nodes and provides access to them. These data nodes form groups of nodes called clusters. Objects on data nodes are the building blocks for more complicated higher-level structures such as distributed file systems or virtual block devices. Before we continue, we provide some basic terminology from the DS cluster environment:

- data nodes: nodes that store the data (objects)
- monitor nodes: nodes that monitor the operation of other nodes in the cluster
- distributed file system (DFS): a file system running on top of a DOS
- metadata: extra data stored by a DFS
- metadata nodes: nodes that take care of metadata operations in a DFS
- object replication: storing the same object on multiple data nodes, providing redundancy in case of a data node failure
- self-healing: the ability to react to a data node failure, for example by allocating more objects to keep the number of replicas at the desired level
- replica: redundant data, an exact copy of another object, used to increase redundancy and so provide fault tolerance

In this work we focus on DS suitable for providing storage to virtual machines (VMs) in a virtualization environment. A typical use case of storage for VMs is non-volatile storage that holds the users' data from the VM. This can be a disk, a disk partition, a file, a loop device, etc. The goal is to provide performance close to physical hardware (bare metal).

Storage layer-bloat

In order to provide high-performance storage to VMs, we face the problem that we cannot optimize storage performance according to the underlying hardware. The reason is that there are too many levels (layers) with different characteristics (disk block size, file system block size, DFS block size, etc.). It is very hard to determine how we should work with such storage to get optimal performance.

Let us take for demonstration a virtual disk (1) stored in a file (2) on a distributed file system (3), which is created of objects (4) lying on another conventional file system (5) placed on a RAID 1 (6) spanning different machines (7). The numbers in brackets indicate the layer in which the preceding component sits. We need to remember that this kind of abstraction makes it infeasible to easily figure out optimal values for things like the disk block size to use.

Distributed object storage as VM storage

There are two possible approaches to letting a DOS become storage for VMs. One of them is the file system approach, where we do not consider VM storage to be anything special: we treat it as a regular file stored on a DFS. The problem with this approach is the unnecessary overhead of the DFS when working with the file, such as metadata look-ups and updates. The second approach is to provide a block device created on top of the DOS. Here we avoid using a DFS and access the DOS directly. The virtual block device maps directly to objects, resulting in higher performance, as there is no DFS and hence no DFS metadata.

Metadata

Similar to regular local file systems such as ext4, a DFS also needs metadata. This is mainly due to the fact that DFSs are actually file systems, and they generally require metadata to function properly. We know about three general approaches to metadata realization in a DFS:

- centralized metadata systems: Metadata of the whole DFS are stored on one centralized system. This system soon becomes a performance bottleneck and a single point of failure (SPOF).

- distributed metadata systems: Metadata are distributed among several systems and can be accessed from any of them. This requires all systems to maintain consistency of the distributed metadata in real time, which scales poorly with an increasing number of systems.

- algorithmically located metadata: Similar to distributed metadata systems, metadata are on different systems. However, their placement now follows a common function, according to which it is possible to compute where to look for them without querying the metadata systems. So clients can access metadata directly, as if they were regular data stored in the DFS whose location is known.

1. Redundant array of inexpensive disks
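The algorithmic approach can be illustrated with a toy placement function. This is a minimal sketch of the idea only, not the actual Elastic Hash or CRUSH algorithm; the node names and hash choice are arbitrary:

```python
import hashlib

def locate(name: str, nodes: list[str]) -> str:
    """Map an object name to a node purely by hashing.

    Any client that knows the node list can compute the location
    itself; there is no metadata server to query."""
    digest = hashlib.sha1(name.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

nodes = ["node-a", "node-b", "node-c"]
# Every client computes the same location for the same name.
print(locate("vm-image-042", nodes))
```

Real algorithms add replica selection and handle node additions without reshuffling most objects, but the core property shown here is the same: placement is a pure function of the name and the cluster structure.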

2.2 GlusterFS

GlusterFS is an open-source DFS capable of scaling to several petabytes and handling thousands of clients. It is based on a stackable, purely user-space design.[2] Some of its interesting advantages include:

- Scalability and performance: algorithmically located data and metadata; the Elastic Hash Algorithm allows a client to compute where (meta)data are without needing to query GlusterFS for this information

- High availability: internal replication and self-healing mechanisms

As its basic building block, GlusterFS uses a brick. A brick is a storage file system that is part of a Gluster storage volume (Gluster volume types are discussed later in section 2.2.1). A brick is a regular directory on a data node which holds the DOS objects and is built on top of an existing local file system. It is advised that a brick reside on a separate partition to prevent interference with other systems or user data. An essential requirement for the local file system is support for user extended attributes.

For locating data and metadata, GlusterFS uses only the Elastic Hash Algorithm. This allows clients themselves to calculate where data reside, based just on knowledge of the structure of the GlusterFS volume. In the case of a replicated volume, or while data re-balancing is underway, it might happen that data are not in the calculated place. To cover situations when data are not in position yet, GlusterFS uses references which temporarily point to the current location of the data.

Configuration of GlusterFS is created by interacting with command line interface (CLI) utilities. This is due to the complexity of the configuration files, which are generated through this CLI in the background. From the user's point of view this is the better approach, as the user can configure GlusterFS interactively, which is simpler than manually editing configuration files.

2.2.1 Distributed, replicated and striped volumes

As mentioned earlier, GlusterFS organizes storage into bricks, which then form storage volumes. These volumes can have special properties which improve their parameters in terms of performance, fault tolerance and distribution of data among nodes. The basic volume types are:

- distributed volume: files are spread across the cluster and do not accumulate on a single node

- replicated volume: files are replicated to other bricks; the number of replicas can be adjusted. By default the number of replicas is one, which means no replication.

- striped volume: files are split into stripes stored on different bricks; this improves performance in highly concurrent environments

These basic types can be further combined to create more specific attributes. An example is the distributed, striped and replicated volume. This combination brings the high performance of a striped volume, distributed evenly in the cluster to spread the workload, together with the fault tolerance of a replicated volume in case some nodes fail. All basic types and this example are shown in figure 2.1.

Figure 2.1: Basic GlusterFS volume types and example of their combination[15]
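The striping idea above can be sketched in a few lines. This is a toy model, not GlusterFS code; the 128 KB stripe size is an arbitrary value chosen for the example:

```python
STRIPE_SIZE = 128 * 1024  # bytes per stripe (illustrative value)

def stripe(data: bytes, bricks: int) -> list[list[bytes]]:
    """Split data into fixed-size stripes assigned round-robin,
    so consecutive chunks land on different bricks and can be
    read or written in parallel."""
    placement = [[] for _ in range(bricks)]
    for i in range(0, len(data), STRIPE_SIZE):
        chunk = data[i:i + STRIPE_SIZE]
        placement[(i // STRIPE_SIZE) % bricks].append(chunk)
    return placement

# 5 stripes over 3 bricks: brick 0 gets stripes 0 and 3,
# brick 1 gets stripes 1 and 4, brick 2 gets stripe 2.
layout = stripe(b"x" * (5 * STRIPE_SIZE), bricks=3)
print([len(b) for b in layout])  # [2, 2, 1]
```

A combined striped-replicated volume would simply apply a replication step to each chunk before placing it, storing every chunk on more than one brick.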

2.3 CEPH

CEPH is a unified, distributed storage system designed for excellent performance, reliability and scalability.[3] It is built on top of the Reliable Autonomic Distributed Object Store (RADOS), which provides the foundation for all other services provided by CEPH. The essential building blocks of a CEPH cluster are[4]:

- Object Storage Daemons (OSDs): store data as objects on storage nodes

- Monitors: maintain copies of the cluster structure, monitor data nodes and provide information about the cluster to clients

OSDs are closely analogous to bricks in GlusterFS; the apparent difference is only in their internal structure. They are likewise recommended to reside on a separate file system, and they require user extended attributes. For locating data in the cluster, CEPH uses the Controlled, Scalable, Decentralized Placement of Replicated Data (CRUSH) algorithm.

Figure 2.2: CEPH cluster services built on top of RADOS[11]

A CEPH cluster builds its services using the librados library, which is also available as a general-purpose library for accessing RADOS. RADOS internally organizes its structure into storage pools (similar to volumes in GlusterFS), which can have different attributes, such as the number of replicas or a different CRUSH map for data placement. The services that build on RADOS are (also shown in figure 2.2):

- RBD: RADOS Block Device (discussed later in section 2.3.1)

- RADOSGW/RGW: a REST2ful gateway to RADOS

- CEPH FS: a POSIX-compliant3 DFS similar to GlusterFS

Configuration of a CEPH cluster is done through configuration files, which are well documented and provide a wide variety of configuration options. All configuration options are stored in a single file. Along with configuration, CEPH has its own authentication protocol, similar to Kerberos, in order to maintain security in communication between nodes in the cluster.

2.3.1 RADOS Block Device (RBD)

The RBD service provides access to virtual block devices built on top of RADOS, for use mainly in VMs. These devices can be accessed using a Linux kernel module utilizing kernel objects (KO), or directly by the QEMU hypervisor via the librbd library. By default, RBD stripes a block device image over multiple objects to increase performance.[12]

2. REST: Representational State Transfer
3. POSIX: Portable Operating System Interface standard
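The striping of an image over objects comes down to simple offset arithmetic. The sketch below is illustrative only; the 4 MB object size is an assumption (commonly cited as the RBD default), and real RBD adds striping parameters on top of this basic mapping:

```python
OBJECT_SIZE = 4 * 1024 * 1024  # assumed object size (4 MB)

def map_io(offset: int, length: int) -> list[tuple[int, int, int]]:
    """Translate a block-device I/O request into
    (object_index, offset_in_object, length) pieces."""
    pieces = []
    while length > 0:
        idx = offset // OBJECT_SIZE
        off = offset % OBJECT_SIZE
        n = min(length, OBJECT_SIZE - off)  # stop at object boundary
        pieces.append((idx, off, n))
        offset += n
        length -= n
    return pieces

# A 6 MB read starting at the 3 MB mark touches objects 0, 1 and 2,
# so the three pieces can be fetched from different OSDs in parallel.
print(map_io(3 * 1024 * 1024, 6 * 1024 * 1024))
```

Because each object is placed independently by CRUSH, this mapping is what spreads a single virtual disk's I/O across the whole cluster.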

2.4 DRBD

For performance comparison, we also mention the Distributed Replicated Block Device (DRBD), a distributed storage system with very limited distribution capabilities. It works best in 2-node clusters. Although it is able to run in a 3-node cluster, such a configuration is very complicated and of very limited use in a scenario where we want to use tens of nodes. As the name suggests, its main purpose is to provide a network-replicated block device. Figure 2.3 shows how DRBD works in a 2-node cluster. Both nodes can see the DRBD block device, but only one can actively use it. In case one node fails, DRBD relies on an external application or manual intervention to change the access point for the DRBD block device.

Figure 2.3: DRBD operation example[16]

2.5 Other distributed file systems

The following DFSs have some potential in a cloud storage environment, but we consider them not ready for production for a number of reasons, which we discuss below. Their concepts are nonetheless interesting enough to be included in this work.

MooseFS

This DFS is similar in architecture to GlusterFS, but it implements some interesting unique features. The most interesting of them is per-file replication, which allows defining the number of replicas at the granularity of a single file (GlusterFS replication granularity is only at the volume level).[5] Other features, such as a file-system-wide trash bin which deletes files not instantly but after a selected period of time, are rather eye-candy. The list of disadvantages of this DFS starts with the existence of a metadata server, which creates a potential performance bottleneck and also introduces a SPOF.[14] As a workaround for the SPOF problem it is possible to use an external high-availability solution such as Pacemaker, but this is not as stable and reliable as mechanisms that are part of the DFS itself.

SheepDog

Another promising project, which addresses exactly the problem of distributed storage for the purpose of virtualization. It supports block devices, replication, snapshots, thin provisioning and other useful features. Currently it is limited to users of the QEMU/KVM hypervisor, and its high-availability features rely on underlying cluster management software (corosync, zookeeper, accord).[6] The biggest problem of this DFS is the immaturity of the whole project and the lack of activity that would make recent versions widely available in distributions. The road-map of the project promises a lot, but reality is somewhere else.

Chapter 3 Cloud networking

Network traffic in a cloud can be categorized into two main types:

- VM traffic: generated by the VMs running in the cloud
- management/overhead traffic: needed by the various components of the cloud to work properly together

VM traffic can be considered as pure as the traffic in an environment of bare-metal servers. This traffic is what we see arriving at and leaving the cloud, or traveling between VMs inside it. It represents the real user data in the network. On the other side there is the so-called management (or, more precisely, overhead) traffic. In a single-machine environment this would be equivalent to communication between the components of that machine. In a cluster of computers it includes DFS communication between data nodes, monitoring communication, cloud management communication, network overhead due to encapsulation, and other communication not related to VM traffic.

3.1 VM traffic

At this level we may consider VM traffic to be regular traffic, as seen in real-world networks. The difference is that it will most probably run over a virtualized environment that is hidden from our viewing perspective. Virtualization enables us to virtualize whole networks, in the sense that most devices in the network topology can be virtual. For end users this concept should be neither visible nor disruptive; how this is achieved is discussed in the section about management traffic. Apart from how the data are transferred over the network, there are numerous ways to interconnect with other networks. These concepts are taken from virtualization scenarios, and they provide the following communication models:

- Isolated networking: VMs connected to an isolated network can only communicate with each other.

- NAT-ed or routed networking: Same as isolated networking, but there is a point in the network (computer, router,...) which has connectivity to the outer network. This single point, which defines how to reach the outer network, can be limiting (network address translation (NAT)) or restricting (static routing).

- Direct networking: VMs are connected to an existing network without major limitations and act as part of that network. A typical example is a network bridge which interconnects the real and virtual networks on the virtualization server.

3.2 Management networking

A non-negligible amount of the traffic we observe in the physical network of a cloud is used for management, monitoring and subsystem communication. This category covers many different types of traffic with different requirements on network parameters such as throughput, latency or jitter.

DFS traffic

The largest amount of this traffic comes from the DFS subsystem, as it is responsible for providing DS over the network. This kind of traffic is very sensitive to network latency and jitter, which greatly influence its performance. Given the speeds of today's hard drives, a DFS has no problem utilizing a 1 Gbit network interface. Because 10 Gbit network cards are still neither widely used nor cheap, we should try to optimize performance on a 1 Gbit network.

The first possible optimization is dedicating a separate network exclusively to DFS communication. This improves processing at the switch level, where most of the traffic consists of large packets. The second recommended optimization is increasing the maximum transmission unit (MTU) to reduce the relative overhead of packet headers. For example, increasing the MTU from 1500 bytes (standard Ethernet) to 9000 bytes (jumbo frames) requires roughly 6 times fewer headers and CPU interrupts to transfer the same amount of data. This can be very beneficial for DFS traffic, as most of it is expected to be large in size.
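The roughly 6x saving can be verified with a back-of-the-envelope calculation. We assume IPv4 and TCP headers without options (20 + 20 bytes); exact numbers vary with options and VLAN tags:

```python
IP_TCP_HEADERS = 20 + 20   # IPv4 + TCP headers, no options (assumption)
PAYLOAD = 1 << 30          # move 1 GiB of application data

def frames(mtu: int) -> int:
    """Number of frames needed to carry PAYLOAD at a given MTU."""
    per_frame = mtu - IP_TCP_HEADERS
    return -(-PAYLOAD // per_frame)  # ceiling division

standard, jumbo = frames(1500), frames(9000)
# Jumbo frames cut the frame (and interrupt) count roughly 6-fold.
print(standard, jumbo, round(standard / jumbo, 2))
```

Each frame also costs a fixed amount of per-packet processing on switches and NICs, which is why the interrupt reduction matters as much as the header-byte savings.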

3.3 Inter VM network separation techniques

In inter-VM communication we face the problem of creating a communication link between two or more processes (VMs) that can reside on one or more virtualization servers. The problem is how to separate this traffic easily and dynamically. We focus on the situation where the VMs are on different virtualization servers. Both VMs that want to communicate have virtual NICs (VNICs) that are visible as network interfaces on the virtualization hosts. One possible solution is to set up static routing between the servers based on knowledge of the network between the VMs. Such an approach provides no separation at all, requires manual setup and is not flexible. Because of that, we describe better solutions here.

Linux bridge

If we have several VNICs that we need to put into the same network, we can use a Linux bridge, which groups them together. This allows us to easily create distinguishable groups on a single virtualization host. But we will very soon discover that this approach is also not flexible, as it limits us to the interfaces available on that virtualization host. It does not provide any separation in the network outside the virtualization host (except in the case where the server has as many physical NICs as VMs). To overcome this limitation, we need something that extends the separation we have on the local machine to the rest of the network.

VLANs

An answer to the problem of network separation can be Virtual Local Area Networks (VLANs), defined in IEEE 802.1Q[18]. Thanks to VLANs, we can have more than one layer 2 network on a single physical NIC. VLANs are an amendment to the standard that allows putting a small tag with an identification number on a network packet.

This identification number can take values from 1 to 4094. Using this tag, network devices can identify the network to which the packet belongs and treat it accordingly. The catch is that network devices such as switches and routers must understand this tag and route the packet to the right destination. Setting up VLANs can be done either manually or automatically using the GARP/GVRP or MVRP protocols. However, this can be problematic for security reasons, as administrators might not wish to allow network devices to be reconfigured dynamically; this can harm the rest of the network if it goes out of control. Another problem is that the number of VLANs is limited to 4094, which can easily be depleted in large deployments.

Tunneled protocols and tunnels

Another way to create separated layer 2 networks on a single physical NIC is the use of tunneling protocols, which create tunnels on top of an existing layer 3 network. This can be very useful, as it not only solves the problem of the VLAN limit, but also allows networks to be interconnected beyond the borders of the local layer 2 network. This feature comes at the price of overhead: these protocols need to package data inside a regular packet, which means they transfer less payload per packet compared to traffic that is merely VLAN-tagged. Although there are quite a few encapsulation protocols, we picked the following, which can serve the purpose of interconnecting VMs:

- NVGRE: Network Virtualization using Generic Routing Encapsulation
- STT: Stateless Transport Tunneling Protocol for Network Virtualization
- VXLAN: A Framework for Overlaying Virtualized Layer 2 Networks over Layer 3 Networks

Open vSwitch

This very interesting open-source project tries to take the best from the Linux bridge and combines it with the ability to dynamically create VLANs and various encapsulation tunnels. It is designed to be configured and monitored using standardized protocols such as OpenFlow, sFlow or NetFlow.[7] We may imagine Open vSwitch as a very intelligent Linux bridge capable of collaborating with software-defined networking (SDN).
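The encapsulation overhead mentioned above is easy to quantify. For VXLAN over IPv4, each tunneled frame gains about 50 bytes of outer headers; the sizes below are the standard header lengths, assuming no IP options and no outer VLAN tag:

```python
# Outer headers added by VXLAN-in-IPv4 encapsulation.
OUTER = {"outer ethernet": 14, "ipv4": 20, "udp": 8, "vxlan": 8}
overhead = sum(OUTER.values())
print(overhead)             # 50 bytes per tunneled frame

# Compare with plain 802.1Q tagging, which only inserts a 4-byte tag.
VLAN_TAG = 4
print(overhead / VLAN_TAG)  # 12.5x more per-frame overhead
```

This is also why deployments that tunnel VM traffic often raise the physical network's MTU, so the inner frames do not have to shrink to make room for the outer headers.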

Chapter 4 Cloud management

Cloud is a very abstract notion, so it can be a problem to define what cloud management should be responsible for. We narrow our definition to IaaS and PaaS in a private cloud, but we still refer to it as cloud management. Essentially, it should provide a unified interface for administering all cloud subsystems, such as the storage and network management discussed in earlier chapters. We also believe that this unification should make maintenance of these systems easier for administrators. Sadly, this is not always true, and sometimes it can make things more complicated.

4.1 Layer-bloat problem

In the Cloud storage chapter we mentioned the layer-bloat problem in storage management. Here we face a similar problem, which affects the way the management system is deployed and used. Many cloud management systems were created on top of other existing utilities and software which provided parts of their functionality. This in itself is not bad, as it reuses existing utilities to do the job. The problem arises when these utilities start to depend on each other, as each of them provides its own scope of configuration options and its own layer of functionality built on top of the others. The more layers of these utilities are bonded together, the more probable it is that a failure of any of them will break the whole system. An extreme example is the existence of cloud management systems for other cloud management systems, which try to make an abstraction over all the other abstractions over cloud management. The existence of such software indicates the disastrous state of this category of software.

4.2 OpenStack

"OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacenter, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface."[8]

Figure 4.1: OpenStack architecture[13]

As this description from the OpenStack homepage says, OpenStack aims at providing a solution for complete cloud management. The whole system is logically divided into three main categories, shown in figure 4.1. These categories contain projects which deal with tasks in different areas:

- OpenStack Compute (Nova)
- OpenStack Image service (Glance)
- OpenStack Networking (Quantum)
- OpenStack Object Storage (Swift)
- OpenStack Block Storage (Cinder)

Furthermore, OpenStack has projects which go beyond this categorization and work in the general scope of the whole OpenStack environment:

- OpenStack Identity (Keystone)
- Metering (Ceilometer)
- Orchestration (Heat)
- OpenStack Dashboard (Horizon)
- OpenStack Common Library (Oslo)

Figure 4.2: OpenStack subsystem logical architecture (version Grizzly)[13]

Each of these projects takes care of its own area of influence in the OpenStack environment and can possibly stand alone. This way every project can be developed on its own; it only needs to provide an interface to the rest of the OpenStack ecosystem. However, some projects might depend on each other and need to communicate. To illustrate how this communication is architected, figure 4.2 shows the newest version of OpenStack, named Grizzly. To ease configuration of the whole system, the developers provide not only manuals but also definition files for automated deployment used by various tools such as Puppet or Chef. The biggest problem may be the number of nearly independent projects that need to be configured separately to form OpenStack.

Distributions powered by OpenStack as of the beginning of 2013 include Cloudscaling, Debian, Fedora, Piston Cloud Computing, Red Hat Enterprise Linux, SwiftStack, openSUSE, Ubuntu and StackOps. OpenStack is also being integrated into software from HP, which wants to create software that would manage a whole datacenter.[20]

4.3 OpenNebula

"OpenNebula.org is an open-source project developing the industry standard solution for building and managing virtualized enterprise data centers and enterprise private clouds."[9]

The architecture of OpenNebula can be divided into three main categories by type: Tool, Component, and Interface/API. These types form the whole environment around it, as shown in figure 4.3.

Figure 4.3: OpenNebula architecture[10]

The main functionality is located in one central component, OpenNebula. Communication with other components is done through the XML-RPC API or specialized internal APIs. The specialized APIs are designed to give control to scripts and programs that communicate directly with the physical infrastructure of an OpenNebula cloud environment. The objectives these programs fulfill are:

Transfer Manager (TM) transfers configuration files and disk images between nodes in the infrastructure.
Information Manager (IM) gathers information about node operation and monitors the state of the VMs running on it.
Virtual Machine Manager (VMM) interacts with local hypervisors and manages the life cycle of VMs on nodes.
Authentication API authenticates users.

On the side of the physical infrastructure, these APIs are provisioned by scripts mostly written in Bash or Ruby. This allows great flexibility for future customization, as these scripts can be easily edited. Another special feature is that these scripts are the main part of the software needed for a node to be part of the cluster; apart from them, only a few system utilities are needed, depending on the features used. Distributions powered by OpenNebula as of the beginning of 2013 include Debian, Red Hat Enterprise Linux, CentOS, openSUSE, Ubuntu and StackOps.
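The driver-script idea behind the Transfer Manager can be illustrated with a small sketch. OpenNebula's real TM drivers are shell/Ruby scripts with their own argument conventions; the function names below are invented for illustration only, but the "host:path" endpoint format matches how such drivers address images on nodes:

```python
# Hypothetical sketch of a transfer-manager "clone" action: parse
# OpenNebula-style host:path endpoints and decide which copy command
# a driver script would execute. Names are illustrative, not real API.
def parse_endpoint(endpoint: str):
    """Split 'host:path' into (host, path); a bare path means the local node."""
    host, sep, path = endpoint.partition(":")
    return (host, path) if sep else (None, host)

def clone_command(src: str, dst: str):
    """Return the argv a driver would run to copy a disk image."""
    src_host, _ = parse_endpoint(src)
    dst_host, _ = parse_endpoint(dst)
    if src_host is None and dst_host is None:
        return ["cp", src, dst]      # both endpoints are local paths
    return ["scp", src, dst]         # at least one endpoint is remote

# Both endpoints name hosts, so the driver would fall back to scp.
print(clone_command("front-end:/srv/images/base.img",
                    "node01:/var/lib/one/datastores/0/disk.0"))
```

Because the whole driver is an ordinary script, a site can swap `scp` for anything else (shared storage, rsync, a DFS-aware copy), which is exactly the flexibility the text above describes.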

Chapter 5 Performance testing

An important step before deploying any part of a new cloud system is performance testing of the individual components, to maximize the performance of the future system. We conducted the following tests to determine the physical limits of the hardware available for our deployment and to find the combination of components that yields the best performance for our deployment scenario.

5.1 Physical testing environment

The testing was conducted on 5 nodes running the latest 64-bit version of the Fedora 18 OS. From the hardware point of view, there were 4 identical nodes (later referred to as data nodes) and one more powerful node (later referred to as the master node). The master node was used as a client of the DFSs and as the master node in the cloud management testing.

Configuration of the master node:
Processor: 8-core AMD Opteron 6134
Memory: 16 GB ECC DDR2 RAM
Disk: 2x 1 TB Seagate Barracuda
Network: 2x Gigabit Intel PRO/1000

Configuration of a data node:
Processor: 2-core Intel Core 2 Duo E8500
Memory: 8 GB ECC DDR2 RAM
Disk: 2x 1 TB Seagate Barracuda
Network: 2x Gigabit Intel PRO/1000

The nodes were interconnected using an HP ProCurve 2848 switch. Each node was connected to the network with both NICs using a 1 Gigabit full-duplex link:

NIC1 - MTU 1500, connected to the Internet
NIC2 - MTU 9000, connected to the local network on the switch

The disks were attached directly to the motherboard, without a hardware RAID controller. Each node used one disk for the OS and one disk for our tests. Before testing, the disks were checked for errors and completely overwritten with a random pattern.

5.2 Virtual testing environment

To simulate performance in a real deployment, some tests were run in a VM with the following configuration:

CPU: 1x kvm64 CPU on a KVM-enabled host
Memory: 2048 MB
System: Fedora 18 x86_64, freshly booted from a LiveCD
Disk: attached using the virtio driver with data caching disabled

The tests were conducted with SELinux in enforcing mode. Tests in VMs were run from a non-graphical terminal to avoid the performance drop caused by graphics emulation.
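The disk attachment used in the VM tests (virtio bus, raw image, host caching disabled) corresponds to a libvirt disk element along these lines; the image path is illustrative:

```xml
<disk type='file' device='disk'>
  <!-- raw image, host page cache bypassed via cache='none' -->
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/var/lib/libvirt/images/test.img'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

Disabling the host cache (`cache='none'`) matters for the benchmarks: otherwise the host page cache would absorb much of the I/O and inflate the measured throughput of the guest.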

Chapter 6 Storage performance testing

For comparable performance we chose hard drives with similar physical characteristics such as size and linear read and write speed. These values were compared with a simple test of 2000 reads and writes of 1 MB blocks to disk. The test was conducted using the dd program with the direct flag set, ensuring that both the page cache and the disk cache were bypassed for reading and writing. The difference between the disks was less than 3% during the whole testing.

Because most DFSs use an existing underlying file system on disk, we also tested the performance of two file systems (FSs). The first was ext4, which is widely used as the default FS on many Linux distributions and represents the conservative FS approach. The second was btrfs, an FS that has gathered a lot of popularity by re-thinking how modern FSs work. Both FSs were tested using the IOzone Filesystem Benchmark. All raw results of the following tests can be found in the attachment of this work. The tests were run in a way that avoids the page and disk caches.

For our work we consider the following IOzone results relevant, as they represent the extremes that are crucial for a wide range of DFS deployments:

Linear read performance
Random read performance
Linear write performance
Random write performance

On both file systems, ext4 and btrfs, we ran the test twice, once with a 200 MB and once with a 400 MB test file. As record sizes we tested powers of 2 from 128 bytes to 8192 bytes (8 kB). We consider record sizes of 512 bytes, 1 kB, 4 kB and 8 kB relevant: 512 bytes and 4 kB are common disk block sizes, and 1 kB and 8 kB are their multiples. The results of the IOzone benchmark on these two FSs are shown in figure 6.1 and figure 6.2. Please note that these graphs are scaled to show the small differences between the FSs.
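The shape of such a block-level throughput test can be sketched in Python. This simplified version times 1 MB sequential writes and reads on a temporary file; unlike the dd-based test it omits O_DIRECT handling, so caches are not bypassed and the absolute numbers are optimistic. It only illustrates the methodology:

```python
import os
import tempfile
import time

BLOCK = 1024 * 1024   # 1 MB blocks, as in the dd test
COUNT = 20            # scaled down from 2000 for illustration

def throughput_mb_s(path: str):
    """Return (write, read) throughput in MB/s for sequential 1 MB blocks."""
    buf = os.urandom(BLOCK)
    t0 = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(COUNT):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())          # force the data down to the device
    write_mb_s = COUNT / max(time.perf_counter() - t0, 1e-9)
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(BLOCK):          # read the file back sequentially
            pass
    read_mb_s = COUNT / max(time.perf_counter() - t0, 1e-9)
    return write_mb_s, read_mb_s

with tempfile.NamedTemporaryFile() as tmp:
    w, r = throughput_mb_s(tmp.name)
    print(f"write {w:.0f} MB/s, read {r:.0f} MB/s")
```

For the real measurements, dd's `oflag=direct`/`iflag=direct` and IOzone's cache-avoiding modes are the right tools; the sketch just shows why fsync and cache bypass belong in any such test.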

Figure 6.1: Linear read/write performance of ext4 and btrfs on physical disks
Figure 6.2: Random read/write performance of ext4 and btrfs on physical disks

6.1 DFS capabilities requirements

For our deployment we decided on the following requirements for a DFS:

hardware independence: it is essential that the DFS be independent of the hardware it runs on, as most of it will be commodity hardware with possibly different configurations
distribution independence: there must be at least 2 officially supported Linux distributions with support for this file system
redundancy of data: each piece of user data must be stored on at least two physical nodes to provide redundancy and fault tolerance
random read/write performance: the DFS should provide high performance in random reads and writes of small blocks (512 bytes, 1 kB, 4 kB, 8 kB)
on-line reconfiguration: the DFS should be able to change the number of data nodes without shutting down the whole DFS
self-healing: the DFS should be able to heal itself after a node failure. Data with reduced redundancy must always remain available; it is not acceptable to deny access to data because of ongoing replication.
maintenance mode: the DFS should provide a reasonable way to perform maintenance on multiple data nodes at once without a great impact on the performance of the DFS, i.e. the DFS should not re-balance abruptly

6.2 DRBD

Before moving to truly distributed DFSs, we tested the performance of DRBD, to compare it with the performance of the other DFSs. DRBD is architected to be most efficient in the scenario of disk mirroring between two nodes. We expected write and read performance very close to the native performance of the physical disks as if they were stand-alone. In the test results we were interested in the amount of overhead this networked storage brings, since the nodes need to synchronize over the network. We also wanted to compare this performance on a physical (bare metal) machine and a virtual machine (VM). The results of these tests can be seen in figure 6.3 and figure 6.4. As we can see, the overhead of the virtualization layer and its impact on the I/O (input/output) performance of the disks is marginal.

Figure 6.3: Linear read/write performance of ext4 and btrfs on a DRBD device in virtualized (virt) and physical (bare) environments
Figure 6.4: Random read/write performance of ext4 and btrfs on a DRBD device in virtualized (virt) and physical (bare) environments
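A two-node mirror of the kind tested here is described in DRBD by a resource stanza roughly like the following; the hostnames, device paths and addresses are invented for illustration, and the exact option set depends on the DRBD version in use:

```
resource r0 {
    protocol C;                 # fully synchronous replication
    device    /dev/drbd0;       # replicated block device exposed to the OS
    disk      /dev/sdb;         # the dedicated test disk on each node
    meta-disk internal;
    on node1 {
        address 192.168.100.1:7789;
    }
    on node2 {
        address 192.168.100.2:7789;
    }
}
```

With protocol C, a write completes only after both nodes have it, which is why the synchronization overhead measured above is dominated by the network round trip between the two nodes.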

6.3 GlusterFS

The first tested DFS which meets most of our requirements was GlusterFS. Performance tests were conducted using version 3.4-alpha, which was stable enough for the testing. In general, it was very easy to start using GlusterFS thanks to its rapid deployment ability: only a few commands are needed to have the DFS up and running. There was no need to deal with configuration files; everything was configured via the command line interface (CLI). Since version 3.2 the CLI is the only supported way of configuring GlusterFS, which makes some manuals found around the Internet useless. We tested GlusterFS in a configuration with a distributed, replicated and striped volume to meet the requirements of high performance and redundancy. The exact commands used for creating the volume can be found in the attachment of this work. Because GlusterFS does not yet provide block device support, we had to use files to simulate a block device; the virtual disk for the VM was in fact a RAW non-preallocated image file. The graphs below show the performance of GlusterFS attached to a VM as storage, depending on the FS used. Figure 6.5 shows the results for linear read and write operations; figure 6.6 the results for random read and write.

Figure 6.5: Linear read/write performance of ext4 and btrfs on GlusterFS in a virtualized environment

The graphs show a measurable difference in the read and write performance of GlusterFS depending on the underlying FS and the FS used in the VM. In most scenarios, however, this difference is too small to bring any benefit.

Figure 6.6: Random read/write performance of ext4 and btrfs on GlusterFS in a virtualized environment

Due to the lack of block device support, the previous tests might be unfair compared to other DFSs that have this functionality. Work on implementing it is planned for the final version 3.4 of GlusterFS; however, it will be restricted to a cluster with one node only, which is still not acceptable for deployment in a distributed environment. We believe that even so, the performance of a block device over the DFS would be better than that of an image file over the DFS, as tested here.

Another downside we encountered was the limited ability to control the failure domain, which is essential when planning an outage of several data nodes for maintenance. It is partially possible with replicated volumes, where a concept named Replica-Set exists. A Replica-Set holds a list of nodes which can be safely taken down without disrupting the whole DFS. However, this list of nodes is defined statically when the volume is created or extended; GlusterFS does not allow it to be changed dynamically, which might be needed in some cases. If we want to turn off machines in one rack that are not in the same Replica-Set, there is no way to do that, or to change the Replica-Set, without taking down the DFS.

6.4 CEPH

The second DFS we chose to test was CEPH, in version 0.56. Deployment was not as straightforward as with the previous DFS and required several stages of file copying over the network to establish a working cluster. The learning curve to figure out the right configuration was a bit steep. The tests used the RBD block device provided by CEPH, which was striped and replicated over the cluster to meet our requirements. The block device was attached to the system via a Linux kernel module, which should bring better performance in some cases but does not support some features such as copy-on-write clones. The scripts used to configure the testing environment can be found in the attachment of this work. The graphs in figure 6.7 and figure 6.8 show the performance of a CEPH RBD device used by a VM as a storage device. The good read performance of ext4 used in the VM is interesting.

Figure 6.7: Linear read/write performance of ext4 and btrfs of CEPH RBD in a virtualized environment
Figure 6.8: Random read/write performance of ext4 and btrfs of CEPH RBD in a virtualized environment

Despite its quite complex configuration, CEPH gives us more options to use. One example, kept separate from the main configuration file, is the CRUSH map. This map describes what the cluster looks like and allows us to create a logical view over it: we can organize OSDs into racks, storage rooms, datacenters or any other hierarchical structure we come up with. The CRUSH map is not only about organizing things; it also assigns a weight to each part of the tree hierarchy, which determines where data is placed. This is especially useful when we need to change the physical topology of the cluster and then want to rebalance the data distribution. It can also be used for maintenance: we can tag parts of the tree as offline for a while without rebalancing the data stored there, for example when we just want to reboot one rack of computers for maintenance.

6.5 MooseFS and SheepDog

During initial testing we also tried other DFSs, which have the major drawbacks described in earlier chapters. MooseFS was a hot candidate as a competitor to GlusterFS; however, its stability was a concern to us, because it was susceptible to crashes which led to DFS unavailability during our testing. The lack of block device support, with no plans to add it in the near future, made this DFS uninteresting for further testing. The main problem of the SheepDog project is its immaturity. It still contains some serious flaws, and recent versions are not supported by many distributions. Its high availability stands on top of other software such as corosync, which we do not think is a good solution for a DFS used for virtualization. Still, the concepts presented by this project are really interesting, as they articulate the need for a specialized DFS for virtualization. Despite all our attempts we were not able to make it run on a recent version of Fedora.

6.6 CEPH and GlusterFS performance comparison

We consider the comparison of CEPH and GlusterFS very interesting. The comparison of linear read/write speed can be seen in figure 6.9. While CEPH clearly wins in linear write speed, GlusterFS is comparable and sometimes better in linear read.

Figure 6.9: Linear read/write performance of CEPH and GlusterFS

Figure 6.10, however, shows mostly better or comparable speeds of random read/write operations for CEPH. The reason for the dramatically higher read performance in the case of ext4 as the VM file system remains a mystery to us.

Figure 6.10: Random read/write performance of CEPH and GlusterFS
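The CRUSH idea described in section 6.4, deterministic and weight-aware placement over a hierarchy with no central lookup table, can be sketched as a much-simplified hash-based chooser. The real CRUSH algorithm is considerably more involved; the cluster map and weights below are invented for illustration:

```python
import hashlib

# Toy cluster map: rack -> {host: weight}; weight models e.g. disk capacity.
CLUSTER = {
    "rack1": {"node1": 1.0, "node2": 1.0},
    "rack2": {"node3": 1.0, "node4": 2.0},  # node4 has twice the capacity
}

def _score(obj: str, item: str, weight: float) -> float:
    """Deterministic pseudo-random draw for (object, item), scaled by weight."""
    h = hashlib.md5(f"{obj}/{item}".encode()).digest()
    draw = int.from_bytes(h[:8], "big") / 2**64   # uniform in [0, 1)
    return draw ** (1.0 / weight)                 # heavier items win more often

def place(obj: str, replicas: int = 2):
    """Pick one host from each of `replicas` distinct racks for an object."""
    racks = sorted(CLUSTER, key=lambda r: _score(obj, r, 1.0), reverse=True)
    chosen = []
    for rack in racks[:replicas]:
        hosts = CLUSTER[rack]
        chosen.append(max(hosts, key=lambda h: _score(obj, h, hosts[h])))
    return chosen

print(place("rbd.obj.0001"))   # one host per rack, recomputable by any client
```

Because placement is a pure function of the object name and the map, any client can recompute it without asking a metadata server, and changing a weight or marking a subtree out is just an edit to the map; this is the property that makes CRUSH-style maps useful for the maintenance scenarios discussed above.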

Chapter 7 Cloud management testing

Our understanding of cloud management starts from the observation that it could be very similar to the libvirt library, which is widely used on the Linux platform for unified access to virtualization; as we discovered, most cloud management projects build on top of it. However, this new layer of abstraction brings a new set of problems in the way the whole system works; some of those we encountered are described in the next chapter. Our vision of cloud management was that it should be able to work with at least one of the tested DFSs and support a more advanced networking setup, using Open vSwitch or something similar.

For a consistent testing environment, we first used VMs to test the cloud management systems, as they are easily disposable. This approach saved us hours of sitting in the server room testing on real hardware, because the VMs were breaking down quite regularly. The goal of this phase was to test the feasibility of deployment and to create customized manuals for the future deployment of the selected projects in our environment. The following Linux distributions were used to host the cloud management software:

Fedora 18 x86_64
Fedora 19 x86_64
Debian 6 x86_64
CentOS 6 x86_64

The second phase was bare-metal deployment on our 4-node data cluster with one master node, where we tested the customization of the systems to our environment. Compared to the isolated virtual environment of the first phase, the biggest problems here lay in network customization, as the real network behaved differently from the simulated environment and we were interacting with a live network.


More information

Sales Slide Midokura Enterprise MidoNet V1. July 2015 Fujitsu Limited

Sales Slide Midokura Enterprise MidoNet V1. July 2015 Fujitsu Limited Sales Slide Midokura Enterprise MidoNet V1 July 2015 Fujitsu Limited What Is Midokura Enterprise MidoNet? Network Virtualization Software Coordinated with OpenStack Provides safe & effective virtual networks

More information

FOR SERVERS 2.2: FEATURE matrix

FOR SERVERS 2.2: FEATURE matrix RED hat ENTERPRISE VIRTUALIZATION FOR SERVERS 2.2: FEATURE matrix Red hat enterprise virtualization for servers Server virtualization offers tremendous benefits for enterprise IT organizations server consolidation,

More information

Business-centric Storage FUJITSU Hyperscale Storage System ETERNUS CD10000

Business-centric Storage FUJITSU Hyperscale Storage System ETERNUS CD10000 Business-centric Storage FUJITSU Hyperscale Storage System ETERNUS CD10000 Clear the way for new business opportunities. Unlock the power of data. Overcoming storage limitations Unpredictable data growth

More information

PARALLELS CLOUD STORAGE

PARALLELS CLOUD STORAGE PARALLELS CLOUD STORAGE Performance Benchmark Results 1 Table of Contents Executive Summary... Error! Bookmark not defined. Architecture Overview... 3 Key Features... 5 No Special Hardware Requirements...

More information

Storage Architectures for Big Data in the Cloud

Storage Architectures for Big Data in the Cloud Storage Architectures for Big Data in the Cloud Sam Fineberg HP Storage CT Office/ May 2013 Overview Introduction What is big data? Big Data I/O Hadoop/HDFS SAN Distributed FS Cloud Summary Research Areas

More information

Red Hat enterprise virtualization 3.0 feature comparison

Red Hat enterprise virtualization 3.0 feature comparison Red Hat enterprise virtualization 3.0 feature comparison at a glance Red Hat Enterprise is the first fully open source, enterprise ready virtualization platform Compare the functionality of RHEV to VMware

More information

cloud functionality: advantages and Disadvantages

cloud functionality: advantages and Disadvantages Whitepaper RED HAT JOINS THE OPENSTACK COMMUNITY IN DEVELOPING AN OPEN SOURCE, PRIVATE CLOUD PLATFORM Introduction: CLOUD COMPUTING AND The Private Cloud cloud functionality: advantages and Disadvantages

More information

SUSE Enterprise Storage Highly Scalable Software Defined Storage. Māris Smilga

SUSE Enterprise Storage Highly Scalable Software Defined Storage. Māris Smilga SUSE Enterprise Storage Highly Scalable Software Defined Storage āris Smilga Storage Today Traditional Storage Arrays of disks with RAID for redundancy SANs based on Fibre Channel connectivity Total System

More information

Boas Betzler. Planet. Globally Distributed IaaS Platform Examples AWS and SoftLayer. November 9, 2015. 20014 IBM Corporation

Boas Betzler. Planet. Globally Distributed IaaS Platform Examples AWS and SoftLayer. November 9, 2015. 20014 IBM Corporation Boas Betzler Cloud IBM Distinguished Computing Engineer for a Smarter Planet Globally Distributed IaaS Platform Examples AWS and SoftLayer November 9, 2015 20014 IBM Corporation Building Data Centers The

More information

Outline. Why Neutron? What is Neutron? API Abstractions Plugin Architecture

Outline. Why Neutron? What is Neutron? API Abstractions Plugin Architecture OpenStack Neutron Outline Why Neutron? What is Neutron? API Abstractions Plugin Architecture Why Neutron? Networks for Enterprise Applications are Complex. Image from windowssecurity.com Why Neutron? Reason

More information

Solaris For The Modern Data Center. Taking Advantage of Solaris 11 Features

Solaris For The Modern Data Center. Taking Advantage of Solaris 11 Features Solaris For The Modern Data Center Taking Advantage of Solaris 11 Features JANUARY 2013 Contents Introduction... 2 Patching and Maintenance... 2 IPS Packages... 2 Boot Environments... 2 Fast Reboot...

More information

KVM, OpenStack, and the Open Cloud

KVM, OpenStack, and the Open Cloud KVM, OpenStack, and the Open Cloud Adam Jollans, IBM Southern California Linux Expo February 2015 1 Agenda A Brief History of VirtualizaJon KVM Architecture OpenStack Architecture KVM and OpenStack Case

More information

HP OpenStack & Automation

HP OpenStack & Automation HP OpenStack & Automation Where we are heading Thomas Goh Cloud Computing Cloud Computing Cloud computing is a model for enabling ubiquitous network access to a shared pool of configurable computing resources.

More information

POSIX and Object Distributed Storage Systems

POSIX and Object Distributed Storage Systems 1 POSIX and Object Distributed Storage Systems Performance Comparison Studies With Real-Life Scenarios in an Experimental Data Taking Context Leveraging OpenStack Swift & Ceph by Michael Poat, Dr. Jerome

More information

SDN and Data Center Networks

SDN and Data Center Networks SDN and Data Center Networks 10/9/2013 1 The Rise of SDN The Current Internet and Ethernet Network Technology is based on Autonomous Principle to form a Robust and Fault Tolerant Global Network (Distributed)

More information

Solving I/O Bottlenecks to Enable Superior Cloud Efficiency

Solving I/O Bottlenecks to Enable Superior Cloud Efficiency WHITE PAPER Solving I/O Bottlenecks to Enable Superior Cloud Efficiency Overview...1 Mellanox I/O Virtualization Features and Benefits...2 Summary...6 Overview We already have 8 or even 16 cores on one

More information

How To Build A Cloud Stack For A University Project

How To Build A Cloud Stack For A University Project IES+Perto Project Cloud Computing Instituições de Ensino Superior Mais Perto Higher Education Institutions Closer Universities: Aveiro, Coimbra, Porto José António Sousa (UP), Fernando Correia (UP), Mário

More information

How To Install Openstack On Ubuntu 14.04 (Amd64)

How To Install Openstack On Ubuntu 14.04 (Amd64) Getting Started with HP Helion OpenStack Using the Virtual Cloud Installation Method 1 What is OpenStack Cloud Software? A series of interrelated projects that control pools of compute, storage, and networking

More information

Develop a process for applying updates to systems, including verifying properties of the update. Create File Systems

Develop a process for applying updates to systems, including verifying properties of the update. Create File Systems RH413 Manage Software Updates Develop a process for applying updates to systems, including verifying properties of the update. Create File Systems Allocate an advanced file system layout, and use file

More information

Software Define Storage (SDs) and its application to an Openstack Software Defined Infrastructure (SDi) implementation

Software Define Storage (SDs) and its application to an Openstack Software Defined Infrastructure (SDi) implementation Software Define Storage (SDs) and its application to an Openstack Software Defined Infrastructure (SDi) implementation This paper discusses how data centers, offering a cloud computing service, can deal

More information

Enabling Technologies for Distributed and Cloud Computing

Enabling Technologies for Distributed and Cloud Computing Enabling Technologies for Distributed and Cloud Computing Dr. Sanjay P. Ahuja, Ph.D. 2010-14 FIS Distinguished Professor of Computer Science School of Computing, UNF Multi-core CPUs and Multithreading

More information

Building Storage Service in a Private Cloud

Building Storage Service in a Private Cloud Building Storage Service in a Private Cloud Sateesh Potturu & Deepak Vasudevan Wipro Technologies Abstract Storage in a private cloud is the storage that sits within a particular enterprise security domain

More information

Virtual SAN Design and Deployment Guide

Virtual SAN Design and Deployment Guide Virtual SAN Design and Deployment Guide TECHNICAL MARKETING DOCUMENTATION VERSION 1.3 - November 2014 Copyright 2014 DataCore Software All Rights Reserved Table of Contents INTRODUCTION... 3 1.1 DataCore

More information

ADVANCED NETWORK CONFIGURATION GUIDE

ADVANCED NETWORK CONFIGURATION GUIDE White Paper ADVANCED NETWORK CONFIGURATION GUIDE CONTENTS Introduction 1 Terminology 1 VLAN configuration 2 NIC Bonding configuration 3 Jumbo frame configuration 4 Other I/O high availability options 4

More information

Distributed File Systems

Distributed File Systems Distributed File Systems Paul Krzyzanowski Rutgers University October 28, 2012 1 Introduction The classic network file systems we examined, NFS, CIFS, AFS, Coda, were designed as client-server applications.

More information

Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com

Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com W H I T E P A P E R O r a c l e V i r t u a l N e t w o r k i n g D e l i v e r i n g F a b r i c

More information

An Intro to OpenStack. Ian Lawson Senior Solution Architect, Red Hat ilawson@redhat.com

An Intro to OpenStack. Ian Lawson Senior Solution Architect, Red Hat ilawson@redhat.com An Intro to OpenStack Ian Lawson Senior Solution Architect, Red Hat ilawson@redhat.com What is OpenStack? What is OpenStack? Fully open source cloud operating system Comprised of several open source sub-projects

More information

WHITE PAPER Optimizing Virtual Platform Disk Performance

WHITE PAPER Optimizing Virtual Platform Disk Performance WHITE PAPER Optimizing Virtual Platform Disk Performance Think Faster. Visit us at Condusiv.com Optimizing Virtual Platform Disk Performance 1 The intensified demand for IT network efficiency and lower

More information

What s new in Hyper-V 2012 R2

What s new in Hyper-V 2012 R2 What s new in Hyper-V 2012 R2 Carsten Rachfahl MVP Virtual Machine Rachfahl IT-Solutions GmbH & Co KG www.hyper-v-server.de Thomas Maurer Cloud Architect & MVP itnetx gmbh www.thomasmaurer.ch Before Windows

More information

Using SUSE Cloud to Orchestrate Multiple Hypervisors and Storage at ADP

Using SUSE Cloud to Orchestrate Multiple Hypervisors and Storage at ADP Using SUSE Cloud to Orchestrate Multiple Hypervisors and Storage at ADP Agenda ADP Cloud Vision and Requirements Introduction to SUSE Cloud Overview Whats New VMWare intergration HyperV intergration ADP

More information

Solution for private cloud computing

Solution for private cloud computing The CC1 system Solution for private cloud computing 1 Outline What is CC1? Features Technical details System requirements and installation How to get it? 2 What is CC1? The CC1 system is a complete solution

More information

Cloud Computing for Control Systems CERN Openlab Summer Student Program 9/9/2011 ARSALAAN AHMED SHAIKH

Cloud Computing for Control Systems CERN Openlab Summer Student Program 9/9/2011 ARSALAAN AHMED SHAIKH Cloud Computing for Control Systems CERN Openlab Summer Student Program 9/9/2011 ARSALAAN AHMED SHAIKH CONTENTS Introduction... 4 System Components... 4 OpenNebula Cloud Management Toolkit... 4 VMware

More information

Dynamic Load Balancing of Virtual Machines using QEMU-KVM

Dynamic Load Balancing of Virtual Machines using QEMU-KVM Dynamic Load Balancing of Virtual Machines using QEMU-KVM Akshay Chandak Krishnakant Jaju Technology, College of Engineering, Pune. Maharashtra, India. Akshay Kanfade Pushkar Lohiya Technology, College

More information

Parallels Cloud Storage

Parallels Cloud Storage Parallels Cloud Storage White Paper Best Practices for Configuring a Parallels Cloud Storage Cluster www.parallels.com Table of Contents Introduction... 3 How Parallels Cloud Storage Works... 3 Deploying

More information

Using VMware VMotion with Oracle Database and EMC CLARiiON Storage Systems

Using VMware VMotion with Oracle Database and EMC CLARiiON Storage Systems Using VMware VMotion with Oracle Database and EMC CLARiiON Storage Systems Applied Technology Abstract By migrating VMware virtual machines from one physical environment to another, VMware VMotion can

More information

Scientific Computing Data Management Visions

Scientific Computing Data Management Visions Scientific Computing Data Management Visions ELI-Tango Workshop Szeged, 24-25 February 2015 Péter Szász Group Leader Scientific Computing Group ELI-ALPS Scientific Computing Group Responsibilities Data

More information

ovirt and Gluster Hyperconvergence

ovirt and Gluster Hyperconvergence ovirt and Gluster Hyperconvergence January 2015 Federico Simoncelli Principal Software Engineer Red Hat ovirt and GlusterFS Hyperconvergence, Jan 2015 1 Agenda ovirt Architecture and Software-defined Data

More information

EMC SCALEIO OPERATION OVERVIEW

EMC SCALEIO OPERATION OVERVIEW EMC SCALEIO OPERATION OVERVIEW Ensuring Non-disruptive Operation and Upgrade ABSTRACT This white paper reviews the challenges organizations face as they deal with the growing need for always-on levels

More information

SUSE Cloud Installation: Best Practices Using a SMT, Xen and Ceph Storage Environment

SUSE Cloud Installation: Best Practices Using a SMT, Xen and Ceph Storage Environment Best Practices Guide www.suse.com SUSE Cloud Installation: Best Practices Using a SMT, Xen and Ceph Storage Environment Written by B1 Systems GmbH Table of Contents Introduction...3 Use Case Overview...3

More information

High Availability Solutions for the MariaDB and MySQL Database

High Availability Solutions for the MariaDB and MySQL Database High Availability Solutions for the MariaDB and MySQL Database 1 Introduction This paper introduces recommendations and some of the solutions used to create an availability or high availability environment

More information

RED HAT ENTERPRISE VIRTUALIZATION & CLOUD COMPUTING

RED HAT ENTERPRISE VIRTUALIZATION & CLOUD COMPUTING RED HAT ENTERPRISE VIRTUALIZATION & CLOUD COMPUTING James Rankin Senior Solutions Architect Red Hat, Inc. 1 KVM BACKGROUND Project started in October 2006 by Qumranet - Submitted to Kernel maintainers

More information

VIDEO SURVEILLANCE WITH SURVEILLUS VMS AND EMC ISILON STORAGE ARRAYS

VIDEO SURVEILLANCE WITH SURVEILLUS VMS AND EMC ISILON STORAGE ARRAYS VIDEO SURVEILLANCE WITH SURVEILLUS VMS AND EMC ISILON STORAGE ARRAYS Successfully configure all solution components Use VMS at the required bandwidth for NAS storage Meet the bandwidth demands of a 2,200

More information

東 海 大 學 資 訊 工 程 研 究 所 碩 士 論 文

東 海 大 學 資 訊 工 程 研 究 所 碩 士 論 文 東 海 大 學 資 訊 工 程 研 究 所 碩 士 論 文 指 導 教 授 : 楊 朝 棟 博 士 以 異 質 儲 存 技 術 實 作 一 個 軟 體 定 義 儲 存 服 務 Implementation of a Software-Defined Storage Service with Heterogeneous Storage Technologies 研 究 生 : 連 威 翔 中 華 民

More information

SDN v praxi overlay sítí pro OpenStack. 5.10.2015 Daniel Prchal daniel.prchal@hpe.com

SDN v praxi overlay sítí pro OpenStack. 5.10.2015 Daniel Prchal daniel.prchal@hpe.com SDN v praxi overlay sítí pro OpenStack 5.10.2015 Daniel Prchal daniel.prchal@hpe.com Agenda OpenStack OpenStack Architecture SDN Software Defined Networking OpenStack Networking HP Helion OpenStack HP

More information

Red Hat Enterprise Linux OpenStack Platform Update February 17, 2016

Red Hat Enterprise Linux OpenStack Platform Update February 17, 2016 Red Hat Enterprise Linux OpenStack Platform Update February 17, 2016 1 Ian Pilcher Principal Product Manager Platform Business Unit AGENDA Introductions War stories OpenStack in a Minute or So.. Understanding

More information

Best Practices for Virtualised SharePoint

Best Practices for Virtualised SharePoint Best Practices for Virtualised SharePoint Brendan Law Blaw@td.com.au @FlamerNZ Flamer.co.nz/spag/ Nathan Mercer Nathan.Mercer@microsoft.com @NathanM blogs.technet.com/nmercer/ Agenda Why Virtualise? Hardware

More information

VM Image Hosting Using the Fujitsu* Eternus CD10000 System with Ceph* Storage Software

VM Image Hosting Using the Fujitsu* Eternus CD10000 System with Ceph* Storage Software Intel Solutions Reference Architecture VM Image Hosting Using the Fujitsu* Eternus CD10000 System with Ceph* Storage Software Intel Xeon Processor E5-2600 v3 Product Family SRA Section: Audience and Purpose

More information

How To Install Eucalyptus (Cont'D) On A Cloud) On An Ubuntu Or Linux (Contd) Or A Windows 7 (Cont') (Cont'T) (Bsd) (Dll) (Amd)

How To Install Eucalyptus (Cont'D) On A Cloud) On An Ubuntu Or Linux (Contd) Or A Windows 7 (Cont') (Cont'T) (Bsd) (Dll) (Amd) Installing Eucalyptus Past, Present, and Future Eucalyptus Overview Most widely deployed software platform for on-premise IaaS clouds 25,000+ cloud starts as of mid 2011 AWS-compatible, enterprise-deployed

More information

Technical Paper. Moving SAS Applications from a Physical to a Virtual VMware Environment

Technical Paper. Moving SAS Applications from a Physical to a Virtual VMware Environment Technical Paper Moving SAS Applications from a Physical to a Virtual VMware Environment Release Information Content Version: April 2015. Trademarks and Patents SAS Institute Inc., SAS Campus Drive, Cary,

More information

ZEN LOAD BALANCER EE v3.04 DATASHEET The Load Balancing made easy

ZEN LOAD BALANCER EE v3.04 DATASHEET The Load Balancing made easy ZEN LOAD BALANCER EE v3.04 DATASHEET The Load Balancing made easy OVERVIEW The global communication and the continuous growth of services provided through the Internet or local infrastructure require to

More information

High Availability with Windows Server 2012 Release Candidate

High Availability with Windows Server 2012 Release Candidate High Availability with Windows Server 2012 Release Candidate Windows Server 2012 Release Candidate (RC) delivers innovative new capabilities that enable you to build dynamic storage and availability solutions

More information

Overlay networking with OpenStack Neutron in Public Cloud environment. Trex Workshop 2015

Overlay networking with OpenStack Neutron in Public Cloud environment. Trex Workshop 2015 Overlay networking with OpenStack Neutron in Public Cloud environment Trex Workshop 2015 About Presenter Anton Aksola (aakso@twitter,ircnet,github) Network Architect @Nebula Oy, started in 2005 Currently

More information

Isilon IQ Network Configuration Guide

Isilon IQ Network Configuration Guide Isilon IQ Network Configuration Guide An Isilon Systems Best Practice Paper August 2008 ISILON SYSTEMS Table of Contents Cluster Networking Introduction...3 Assumptions...3 Cluster Networking Features...3

More information