rackspace.com/cloud/private

Rackspace Private Cloud Object Storage ( )

Copyright 2015 Rackspace. All rights reserved.

This documentation is intended for Rackspace customers who have installed an OpenStack-powered private cloud according to the recommendations of Rackspace and want to deploy OpenStack Object Storage (swift) as well.

Table of Contents

1. Overview
   1.1. System requirements for Object Storage
   1.2. Installation workflow
2. Networking for Object Storage
3. Example of Object Storage installation architecture
4. Prerequisites
5. Configure and mount storage devices
6. Configure an Object Storage deployment
   6.1. Configuring Object Storage
   6.2. Storage Policies
7. Object Storage playbooks
   7.1. Running Object Storage playbooks
8. Verify the installation
9. Integrate Object Storage with the Image Service
10. Object Storage monitoring
   10.1. Service and response
   10.2. Object Storage monitors
A. Object Storage configuration files
   A.1. swift.yml example configuration file
   A.2. user_variables.yml configuration file
Additional resources
Document Change History

List of Figures

1.1. Installation workflow
3.1. Object Storage architecture

List of Tables

5.1. Mounted devices

1. Overview

Object Storage (swift) is a multi-tenant object storage system. It is highly scalable, can manage large amounts of unstructured data, and provides a RESTful HTTP API.

Note: Before installing Object Storage v2.2.0, Rackspace Private Cloud v10 (Juno) must be installed. Object Storage integrates with some components from the v10 release of Rackspace Private Cloud (for example, keystone and keystone v3).

Object Storage includes the following components:

Proxy servers (swift-proxy-server)
    Accepts OpenStack Object Storage API and raw HTTP requests to upload files, modify metadata, and create containers. It also serves file or container listings to web browsers. The proxy server takes each request, looks up the locations for the account, container, or object, and routes the requests correctly. The proxy server also handles API requests. To improve performance, the proxy server can use an optional cache that is usually deployed with memcache.

Account servers (swift-account-server)
    Manages accounts defined with Object Storage.
    Note: Accounts are the root storage location for data.

Container servers (swift-container-server)
    Manages the mapping of containers, or folders, within Object Storage.
    Note: Containers are user-defined segments of the account namespace that provide the storage location where objects are found.

Object servers (swift-object-server)
    Manages actual objects, such as files, on the storage nodes.
    Note: Objects are the actual data stored in Object Storage (swift).

Ring
    A set of hash tables that associates each object with a specific physical device. There is one ring per type of data manipulated by Object Storage (objects, containers, and accounts). The set of rings is shared among every Object Storage node (storage and proxy). Each ring determines the physical devices (hard disks) where each object, container, and account will be stored. The number of devices on which an object is stored depends on the number of replicas (copies) specified for the Object Storage cluster.

Various periodic processes
    Perform housekeeping tasks on the large data store. The replication services ensure consistency and availability through the cluster. Other periodic processes include auditors, updaters, and reapers.

WSGI middleware
    Handles authentication (usually OpenStack Identity).

Object Storage integrates with the Compute layer and can be used as a back end for the Image Service (glance), providing a horizontally scalable store for all image and snapshot data. Object Storage focuses on storing non-transactional data.

Object Storage data (the accounts, containers, and objects) are resources that are stored on physical hardware. Object Storage proxy containers are installed on specific infrastructure nodes. The other Object Storage nodes are storage nodes and can run any combination (at a device or drive level) of object, container, or account servers. For example, given two servers with five drives each, each drive can be part of the ring for any combination of object, account, or container (including each having different storage policies within the object server).

Object Storage uses the following nodes:

- Three existing infrastructure nodes to run the swift-proxy-server processes. The proxy servers proxy requests to the appropriate storage nodes.
- Multiple storage nodes that run the swift-account-server, swift-container-server, and swift-object-server processes, which control storage of the account databases, the container databases, and the actual stored objects.

The following Object Storage features are not supported in Rackspace Private Cloud Object Storage:

- Ring management and ring storage repositories
- Drive detection, formatting, mounting, and unmounting

1.1. System requirements for Object Storage

Hardware
    Object Storage is designed to run on commodity hardware. Rackspace recommends that each node in the cluster meet the following minimum specifications:
    - 1 core per 3 TB of capacity
    - At least 6 SAS drives of at least 1 TB capacity each
    - At least 2 GB RAM, plus an additional 250 MB RAM per TB of drive capacity
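    For example, under these guidelines a storage node with six 1 TB drives would need at least 2 GB + (6 × 250 MB) = 3.5 GB of RAM.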

    The amount of disk space depends on how much can fit into the rack efficiently. At Rackspace, storage servers are generic 4U servers with 24 2 TB SATA drives and 8 cores of processing power. RAID on the storage drives is neither required nor recommended: Object Storage's disk usage pattern is unsuitable for RAID.

Operating system
    Rackspace Private Cloud Object Storage runs on Ubuntu.

Software
    Rackspace Private Cloud Object Storage requires Rackspace Private Cloud v10 Software (Juno).

Networking
    1 Gbps or 10 Gbps is suggested internally. For Rackspace Private Cloud Object Storage, an external network should connect anything external to the proxy servers. The storage network is intended to be isolated on a private network or multiple private networks.

Database
    A SQLite database is part of the Rackspace Private Cloud Object Storage container and account management process.

Permissions
    Rackspace Private Cloud Object Storage can be installed either with root permissions or as a user with sudo permissions if the sudoers file is configured to enable all the permissions (see the sketch at the end of this chapter).

1.2. Installation workflow

The following diagram shows the general workflow for an Object Storage installation.

Figure 1.1. Installation workflow
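Returning to the Permissions requirement above: a minimal sketch of a sudoers entry that grants the unrestricted sudo access the installation expects. The deployer user name is illustrative, and the file should be created with visudo:

    # /etc/sudoers.d/deployer (illustrative; create and edit with visudo -f)
    deployer ALL=(ALL) NOPASSWD: ALL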

2. Networking for Object Storage

This section offers recommendations for network resources and provides guidelines to help network administrators understand the recommended networks and public IP addresses. Bandwidth of at least 1 Gbps is suggested.

Object Storage uses the following networks:

Public network (publicly routable IP range)
    Mandatory network that connects to the proxy server. Provides public IP accessibility to the API endpoints within the cloud infrastructure.
    Minimum size: one IP address for each proxy server.

Storage network (RFC 1918 IP range, not publicly routable)
    Mandatory network that is not accessible from outside the cluster. All nodes connect to this network. Manages all inter-server communications within the Object Storage infrastructure. Must be specified in the rpc_user_config.yml file.
    Note: The used_ips section in rpc_user_config.yml must include the storage IPs of the swift storage and management networks, so they are not allocated to a container. Add swift_proxy to the group_binds variable for the storage network. (A sketch of both settings appears after this list.)
    Minimum size: one IP address for each storage node and proxy server.
    Recommended size: as above, with room for expansion to the largest cluster size. For example, 255 addresses, or CIDR /24.

Replication network (RFC 1918 IP range, not publicly routable)
    Optional network that is not accessible from outside the cluster. All nodes connect to this network. Manages replication of objects, containers, and accounts.
    Minimum size: one IP address for each storage node.
    Recommended size: as above, with room for expansion to the largest cluster size. For example, 255 addresses, or CIDR /24.
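The fragments below sketch the used_ips reservation and a storage network with swift_proxy added to group_binds. The layout follows the Juno-era rpc_user_config.yml conventions; the IP range, interface name, and queue name are illustrative, not prescriptive:

    used_ips:
      - "172.29.244.1,172.29.244.50"

    global_overrides:
      provider_networks:
        - network:
            container_bridge: "br-storage"
            container_interface: "eth2"
            ip_from_q: "storage"
            type: "raw"
            group_binds:
              - glance_api
              - cinder_volume
              - nova_compute
              - swift_proxy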

By default, all of the Object Storage services, as well as the rsync daemon on the storage nodes, are configured to listen on their STORAGE_LOCAL_NET IP addresses. When a replication network is configured in the ring, the account, container, and object servers listen on both the STORAGE_LOCAL_NET and STORAGE_REPLICATION_NET IP addresses, while the rsync daemon listens only on the STORAGE_REPLICATION_NET IP address.
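As an illustration of that split, a storage node's rsync daemon configuration would bind to the replication address. The sketch below is hand-written rather than playbook output, and the address is illustrative:

    # /etc/rsyncd.conf (sketch)
    uid = swift
    gid = swift
    # STORAGE_REPLICATION_NET IP of this node (illustrative)
    address = 172.29.248.21

    [object]
    path = /srv/node
    read only = false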

3. Example of Object Storage installation architecture

This section provides an example Object Storage installation architecture. Object Storage uses the following constructs:

Node
    A host machine that runs one or more OpenStack Object Storage services.

Proxy node
    Runs proxy services.

Storage node
    Runs account, container, and object services. Contains the SQLite databases.

Ring
    A set of mappings between OpenStack Object Storage data and physical devices.

Replica
    A copy of an object. By default, three copies are maintained in the cluster.

Zone
    A logically separate section of the cluster, related to independent failure characteristics.

Region (optional)
    A logically separate section of the cluster representing distinct physical locations such as cities or countries. Similar to zones, but regions represent the physical locations of portions of the cluster.

To increase reliability and performance, additional proxy servers can be added. The ring guarantees that every replica is stored in a separate zone; this is applicable only if the number of zones is greater than or equal to the replica count. A zone is a group of nodes that is as isolated as possible from other nodes (separate servers, network, power, even geography).

The following diagram shows one possible configuration for a minimal installation. In this example, each storage node is a separate zone in the ring. At a minimum, five zones are recommended.

Figure 3.1. Object Storage architecture

4. Prerequisites

Enable the trusty-backports repository, which is required to install Object Storage. Enable the repository in /etc/apt/sources.list and update the package list on all hosts:

    $ cd /opt/openstack-ansible/rpc_deployment
    $ ansible hosts -m shell -a "sed -r -i 's/^# *(deb.*trusty-backports.*)$/\1/' /etc/apt/sources.list; apt-get update"

Note: If using a version prior to , use the /opt/openstack-ansible/rpc_deployment directory for any new deployments. For any directories cloned prior to the Juno release, or any local clones that have not been manually updated, use the /opt/os-ansible-deployment directory.
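To confirm that the repository was enabled by the sed command above, a quick check on any target host is:

    $ grep -E '^deb.*trusty-backports' /etc/apt/sources.list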

5. Configure and mount storage devices

This section offers a set of prerequisite instructions for setting up Object Storage storage devices. The storage devices must be set up before installing Object Storage.

Procedure 5.1. Configuring and mounting storage devices

RPC Object Storage requires a minimum of three Object Storage devices with mounted storage drives. The example commands in this procedure assume that the storage devices for Object Storage are devices sdc through sdg.

1. Determine the storage devices on the node to use for Object Storage.

2. Format each device on the node used for storage with XFS. While formatting the devices, add a unique label for each device.

   Note: Without labels, a failed drive can cause mount points to shift and data to become inaccessible.

   For example, create the file systems on the devices using the mkfs command:

   $ apt-get install xfsprogs
   $ mkfs.xfs -f -i size=1024 -L sdc /dev/sdc
   $ mkfs.xfs -f -i size=1024 -L sdd /dev/sdd
   $ mkfs.xfs -f -i size=1024 -L sde /dev/sde
   $ mkfs.xfs -f -i size=1024 -L sdf /dev/sdf
   $ mkfs.xfs -f -i size=1024 -L sdg /dev/sdg

3. Add the mount locations to the /etc/fstab file so that the storage devices are remounted on boot. The following example mount options are recommended when using XFS. Finish all modifications to the /etc/fstab file before mounting the new file systems created within the storage devices.

   LABEL=sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8,nobootwait 0 0
   LABEL=sdd /srv/node/sdd xfs noatime,nodiratime,nobarrier,logbufs=8,nobootwait 0 0
   LABEL=sde /srv/node/sde xfs noatime,nodiratime,nobarrier,logbufs=8,nobootwait 0 0
   LABEL=sdf /srv/node/sdf xfs noatime,nodiratime,nobarrier,logbufs=8,nobootwait 0 0
   LABEL=sdg /srv/node/sdg xfs noatime,nodiratime,nobarrier,logbufs=8,nobootwait 0 0

4. Create the mount points for the devices using the mkdir command:

   $ mkdir -p /srv/node/sdc
   $ mkdir -p /srv/node/sdd
   $ mkdir -p /srv/node/sde
   $ mkdir -p /srv/node/sdf
   $ mkdir -p /srv/node/sdg

   The mount point is referenced as the mount_point parameter in the swift.yml file (/etc/rpc_deploy/conf.d/swift.yml).

5. Mount the storage devices:

   $ mount /srv/node/sdc
   $ mount /srv/node/sdd
   $ mount /srv/node/sde
   $ mount /srv/node/sdf
   $ mount /srv/node/sdg

To view an annotated example of the swift.yml file, see Appendix A, Object Storage configuration files.

For the following mounted devices:

Table 5.1. Mounted devices

    Device      Mount location
    /dev/sdc    /srv/node/sdc
    /dev/sdd    /srv/node/sdd
    /dev/sde    /srv/node/sde
    /dev/sdf    /srv/node/sdf
    /dev/sdg    /srv/node/sdg

The corresponding entry in the swift.yml file would be:

    drives:
      - name: sdc
      - name: sdd
      - name: sde
      - name: sdf
      - name: sdg

    mount_point: /srv/node
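Before continuing, it is worth confirming that every device is mounted where the ring expects it; for example:

    $ mount | grep /srv/node
    $ df -h /srv/node/sd*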

6. Configure an Object Storage deployment

Object Storage is configured using the /etc/rpc_deploy/conf.d/swift.yml file and the /etc/rpc_deploy/user_variables.yml file.

The group variables in the /etc/rpc_deploy/conf.d/swift.yml file are used by the Ansible playbooks when installing Object Storage. Some variables cannot be changed after they are set, while some changes require re-running the playbooks. The values in the swift_hosts section supersede values in the swift section.

To view the configuration files, including information about which variables are required and which are optional, see Appendix A, Object Storage configuration files.

6.1. Configuring Object Storage

Procedure 6.1. Updating the Object Storage configuration swift.yml file

1. Update the global override values:

   global_overrides:
     swift:
       part_power: 8
       weight: 100
       min_part_hours: 1
       repl_number: 3
       storage_network: 'br-storage'
       replication_network: 'br-repl'
       drives:
         - name: sdc
         - name: sdd
         - name: sde
         - name: sdf
       mount_point: /mnt
       account:
       container:
       storage_policies:
         - policy:
             name: gold
             index: 0
             default: True
         - policy:
             name: silver
             index: 1
             repl_number: 3
             deprecated: True

   part_power
       Set the partition power value based on the total amount of storage the entire ring will use. Multiply the maximum number of drives ever used with this Object Storage installation by 100, and round that value up to the closest power of two. For example, a maximum of six drives, times 100, equals 600. The nearest power of two above 600 is 1024, which is two to the power of ten, so the partition power is ten. The partition power cannot be changed after the Object Storage rings are built.

   weight
       The default weight is 100. If the drives are different sizes, set the weight value to avoid uneven distribution of data. For example, a 1 TB disk would have a weight of 100, while a 2 TB drive would have a weight of 200.

   min_part_hours
       The default value is 1. Set the minimum partition hours to the amount of time to lock a partition's replicas after the partition has been moved; moving multiple replicas at the same time might make data inaccessible. This value can be set separately in the swift, container, account, and policy sections, with the value in lower sections superseding the value in the swift section.

   repl_number
       The default value is 3. Set the replication number to the number of replicas of each object. This value can be set separately in the swift, container, account, and policy sections, with the value in the more granular sections superseding the value in the swift section.

   storage_network
       By default, the swift services listen on the default management IP. Optionally, specify the interface of the storage network.
       Note: If the storage_network is not set, but the storage_ips per host are set (or the storage_ip is not on the storage_network interface), the proxy server will not be able to connect to the storage services.

   replication_network
       Optionally, specify a dedicated replication network interface so that dedicated replication can be set up. If this value is not specified, no dedicated replication_network is set.
       Note: If no dedicated replication network exists, do not specify a replication network. As with the storage_network, if the repl_ip is not set on the replication_network interface, replication will not work properly.

   drives
       Set the default drives per host. This is useful when all hosts have the same drives. The drives can be overridden on a per-host basis.

   mount_point
       Set the mount_point value to the location where the swift drives are mounted. For example, with a mount point of /mnt and a drive of sdc, a drive is mounted at /mnt/sdc on the swift_host. This can be overridden on a per-host basis.

   storage_policies
       Storage policies determine on which hardware data is stored, how the data is stored across that hardware, and in which region the data resides. Each storage policy must have a unique name and a unique index. There must be a storage policy with an index of 0 in the swift.yml file to use any legacy containers created before storage policies were instituted.

   default
       Set the default value to yes for at least one policy. This is the default storage policy for any non-legacy containers that are created.

   deprecated
       Set the deprecated value to yes to turn off storage policies.

   Note: For account and container rings, min_part_hours and repl_number are the only values that can be set. Setting them in this section overrides the defaults for the specific ring.

2. Update the Object Storage proxy hosts values:

   swift-proxy_hosts:
     infra-node1:
       ip:
     infra-node2:
       ip:
     infra-node3:
       ip:

   swift-proxy_hosts
       Set the IP address of the hosts to which Ansible connects to deploy the swift-proxy containers. The swift-proxy_hosts value should match the infra nodes.

3. Update the Object Storage hosts values:

   swift_hosts:
     swift-node1:
       ip:
       container_vars:
         swift_vars:
           zone: 0
     swift-node2:
       ip:
       container_vars:
         swift_vars:
           zone: 1
     swift-node3:
       ip:
       container_vars:
         swift_vars:
           zone: 2
     swift-node4:
       ip:
       container_vars:
         swift_vars:
           zone: 3
     swift-node5:
       ip:
       container_vars:
         swift_vars:
           storage_ip:
           repl_ip:
           zone: 4
           region: 3
           weight: 200
           groups:
             - account
             - container
             - silver
           drives:
             - name: sdb
               storage_ip:
               repl_ip:
               weight: 75
               groups:
                 - gold
             - name: sdc
             - name: sdd
             - name: sde
             - name: sdf

   swift_hosts
       Specify the hosts to be used as the storage nodes. The ip is the address of the host to which Ansible connects. Set the name and IP address of each Object Storage host. The swift_hosts section is not required.

   swift_vars
       Contains the Object Storage host-specific values.

   storage_ip and repl_ip
       These values are based on the IP addresses of the host's storage_network or replication_network. For example, if the storage_network is br-storage, the host's IP address on br-storage is the address used for storage_ip. If only the storage_ip is specified, then the repl_ip defaults to the storage_ip. If neither is specified, both default to the host IP address.
       Note: Overriding these values on a host or drive basis can cause problems if the IP address that the service listens on is based on a specified storage_network or replication_network and the ring is set to a different IP address.

   zone
       The default is 0. Optionally, set the Object Storage zone for the ring.

   region
       Optionally, set the Object Storage region for the ring.

   weight
       The default weight is 100. If the drives are different sizes, set the weight value to avoid uneven distribution of data. This value can be specified on a host or drive basis; if specified at both, the drive setting takes precedence.

   groups
       Set the groups to list the rings to which a host's drive belongs. This can be set on a per-drive basis, which overrides the host setting.

   drives
       Set the names of the drives on this Object Storage host. At least one name must be specified.

   In the following example, swift-node5 shows values in the swift_hosts section that override the global values. Groups are set, which overrides the global settings for drive sdb. The weight is overridden for the host and specifically adjusted on drive sdb. Also, the storage_ip and repl_ip are set differently for sdb.

   swift-node5:
     ip:
     container_vars:
       swift_vars:
         storage_ip:
         repl_ip:
         zone: 4
         region: 3
         weight: 200
         groups:
           - account
           - container
           - silver
         drives:
           - name: sdb
             storage_ip:
             repl_ip:
             weight: 75
             groups:
               - gold
           - name: sdc
           - name: sdd
           - name: sde
           - name: sdf

4. Ensure the swift.yml file is in the /etc/rpc_deploy/conf.d/ folder.

Procedure 6.2. Allowing all Identity users to use Object Storage

1. Set swift_allow_all_users in the user_variables.yml file to True. Any user with the _member_ role (that is, all authorized Identity (keystone) users) can then create containers and upload objects to Object Storage.

   Note: If this value is False, then by default only users with the admin or swiftoperator role are allowed to create containers or manage tenants. When the back-end type for the Image Service (glance) is set to swift, the Image Service can access the Object Storage cluster regardless of whether this value is True or False.

2. Run the Identity playbook:

   $ ansible-playbook \
     playbooks/openstack/keystone.yml

6.2. Storage Policies

Storage policies allow segmenting the cluster for various purposes through the creation of multiple object rings. Using policies, different devices can belong to different rings with varying levels of replication. By supporting multiple object rings, Object Storage can segregate the objects within a single cluster.

Storage policies can be used for the following situations:

- Differing levels of replication: A provider may want to offer 2x replication and 3x replication but does not want to maintain two separate clusters. They can set up a 2x policy and a 3x policy and assign the nodes to their respective rings.

- Improving performance: Just as solid state drives (SSD) can be used as the exclusive members of an account or database ring, an SSD-only object ring can be created to implement a low-latency or high-performance policy.

- Collecting nodes into groups: Different object rings can have different physical servers so that objects in specific storage policies are always placed in a specific data center or geography.

- Differing storage implementations: A policy can be used to direct traffic to collected nodes that use a different disk file (for example, Kinetic or GlusterFS).

Most storage clusters do not require more than one storage policy. The following problems can occur when using multiple storage policies per cluster:

- Creating a second storage policy without any specified drives (that is, all drives are part of only the account, container, and default storage policy groups) creates an empty ring for that storage policy.

- A non-default storage policy is used only if specified when creating a container, using the X-Storage-Policy: <policy-name> header. After the container is created, it continues to use the storage policy it was created with; other containers continue using the default or whichever storage policy was specified when they were created.

For more information about storage policies, see: Storage Policies
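For example, a container that uses the silver policy defined earlier could be created and inspected with the python-swiftclient CLI (the container name is illustrative):

    $ swift post -H "X-Storage-Policy: silver" mycontainer
    $ swift stat mycontainer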

7. Object Storage playbooks

The Ansible Object Storage playbooks prepare the target hosts for Object Storage services and depend on the values in the swift.yml file. The playbooks consist of the following operations:

host-setup
    This play sets up the rsyslog containers on the new storage hosts, the swift-proxy containers on the infrastructure hosts, and additional initial setup of the hosts.

rsyslog-install
    This play installs the rsyslog containers on the Object Storage hosts.

rsyslog-config
    This play configures the rsyslog containers on the Object Storage hosts.

swift-all
    This play runs all the other Object Storage plays. It is used for new installs, or it can be run to propagate any changes that have been made to the swift.yml file. The following are the individual plays within swift-all:

    swift-common
        This play runs the common Object Storage setup plays (for example, general configuration for hosts, creating required directories, and installing packages).

    swift-build-rings
        This play builds the Object Storage rings (based on the rpc_inventory.json file, which is created from the swift.yml file). This play can be rerun to adjust the rings and redistribute them to the Object Storage nodes based on changes to the swift.yml file.

    swift-proxy
        This play installs the init scripts, configuration files, and service for swift-proxy. Because the swift-proxy service is not installed as an Object Storage service, it is installed on the swift proxy hosts instead of the Object Storage hosts.

    swift-storage
        This play runs the swift-account, swift-container, and swift-object plays, which install, configure, and start the services, init scripts, and configuration files for the Object Storage services. The following are the individual plays within swift-storage:

        swift-account
            This play installs the init scripts, configuration files, and service for the account server.

        swift-container
            This play installs the init scripts, configuration files, and service for the container server.

        swift-object
            This play installs the init scripts, configuration files, and service for the object server.

7.1. Running Object Storage playbooks

Before running the Object Storage playbooks, Rackspace Private Cloud v10 must be installed. Refer to the Rackspace Private Cloud v10 Installation Guide for instructions.

1. Change to the /opt/openstack-ansible/rpc_deployment directory.

2. Run the host-setup playbook:

   $ ansible-playbook \
     playbooks/setup/host-setup.yml

3. Run the rsyslog-install playbook:

   $ ansible-playbook \
     playbooks/infrastructure/rsyslog-install.yml

4. Run the rsyslog-config playbook:

   $ ansible-playbook \
     playbooks/infrastructure/rsyslog-config.yml

5. Run the swift-all playbook:

   $ ansible-playbook \
     playbooks/openstack/swift-all.yml

8. Verify the installation

These commands can be run from the proxy server or any server that has access to the Identity service.

1. Ensure that the credentials are set correctly in the /root/openrc file, and then source it:

   $ source /root/openrc

2. Run the following command:

   $ swift stat
     Account: AUTH_11b9758b d9b48f7a91ea11493
     Containers: 0
     Objects: 0
     Bytes: 0
     Content-Type: text/plain; charset=utf-8
     X-Timestamp:
     X-Trans-Id: txdcdd fb4a2d
     X-Put-Timestamp:

3. Run the following commands to upload files to a container. Create the test.txt and test2.txt test files locally if needed (they can contain anything):

   $ swift upload myfiles test.txt
   $ swift upload myfiles test2.txt

4. Run the following command to download all files from the myfiles container:

   $ swift download myfiles
   test2.txt [headers 0.267s, total 0.267s, 0.000s MB/s]
   test.txt [headers 0.271s, total 0.271s, 0.000s MB/s]

If the files download successfully, Object Storage is installed successfully.
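Listing containers and their contents is another quick check (using the myfiles container created above):

    $ swift list
    myfiles
    $ swift list myfiles
    test.txt
    test2.txt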

9. Integrate Object Storage with the Image Service

The images created by the Image Service (glance) can optionally be stored using Object Storage.

Note: If there is an existing Image Service (glance) back end (for example, Cloud Files) and you want to add Object Storage (swift) as the Image Service back end, re-add any images from the Image Service after moving to Object Storage. If the Image Service variables are changed (as described below) and you begin using Object Storage, any images in the Image Service will no longer be available.

Procedure 9.1. Integrating Object Storage with the Image Service

This procedure requires the following:

- Rackspace Private Cloud v10 (Juno)
- Object Storage v2.2.0

1. Update the glance options in the /etc/rpc_deploy/user_variables.yml file:

   Glance Options
   glance_default_store: swift
   glance_swift_store_auth_address: '{{ auth_identity_uri }}'
   glance_swift_store_container: glance_images
   glance_swift_store_endpoint_type: internalurl
   glance_swift_store_key: '{{ glance_service_password }}'
   glance_swift_store_region: RegionOne
   glance_swift_store_user: 'service:glance'

   glance_default_store
       Set the default store to swift.

   glance_swift_store_auth_address
       Set to the local authentication address using the '{{ auth_identity_uri }}' variable.

   glance_swift_store_container
       Set the container name.

   glance_swift_store_endpoint_type
       Set the endpoint type to internalurl.

   glance_swift_store_key
       Set the Image Service password using the '{{ glance_service_password }}' variable.

   glance_swift_store_region
       Set the region. The default value is RegionOne.

   glance_swift_store_user
       Set the tenant and user name to 'service:glance'.

2. Rerun the Image Service (glance) configuration plays.

3. Run the Image Service (glance) playbook:

   $ ansible-playbook \
     playbooks/openstack/glance-all.yml
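To confirm the integration, upload a test image and check that objects appear in the glance_images container. A sketch using the Juno-era glance client; the image name and source file are illustrative:

    $ glance image-create --name swift-test --disk-format qcow2 \
      --container-format bare --file ./cirros-0.3.4-x86_64-disk.img
    $ swift list glance_images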

10. Object Storage monitoring

The Rackspace Cloud Monitoring Service allows Rackspace Private Cloud (RPC) customers to monitor system performance and safeguard critical data.

10.1. Service and response

When a threshold is reached or functionality fails, the Rackspace Cloud Monitoring Service generates an alert, which creates a ticket in the Rackspace ticketing system. This ticket moves into the RPC support queue. Tickets flagged as monitoring alerts are given highest priority, and response is delivered according to the Service Level Agreement (SLA). Refer to the SLA for detailed information about incident severity levels and corresponding response times.

Specific monitoring alert guidelines can be set for the installation. These details should be arranged by a Rackspace account manager.

10.2. Object Storage monitors

Object Storage has its own set of monitors and alerts. For more information about installing the monitoring tools, see the Rackspace Private Cloud Installation Guide.

The following checks are performed on services:

- Health checks on the services on each server. Object Storage makes a request to /healthcheck for each service to ensure it is responding appropriately. If this check fails, determine why the service is failing and fix it accordingly. (A manual example appears at the end of this chapter.)

- Health check against the proxy service on the virtual IP (VIP). Object Storage checks the load balancer address for the swift-proxy-server service and monitors the proxy servers as a whole rather than individually (the individual servers are covered by the service health checks). If this check fails, it suggests that there is no access to the VIP or that all of the services are failing.

The following checks are performed against the output of the swift-recon middleware:

- md5sum checks on the ring files across all Object Storage nodes. This check ensures that the ring files are the same on each node. If this check fails, determine why the md5sums differ, decide which ring file is correct, and copy it onto the node that is causing the md5sum check to fail.

- md5sum checks on the swift.conf file across all swift nodes. If this check fails, determine why the swift.conf files differ, decide which swift.conf is correct, and copy it onto the node that is causing the md5sum check to fail.

- Asyncs pending: This check monitors the average number of async pending requests and the percentage that are put in async pending. This happens when a PUT or DELETE fails (due to, for example, timeouts, heavy usage, or a failed disk). If this check fails, determine why requests are failing and being put in async pending status, and fix accordingly.

- Quarantine: This check monitors the percentage of objects that are quarantined (objects that are found to have errors and moved to quarantine). An alert is set up against the account, container, and object servers. If this check fails, determine the cause of the corrupted objects and fix accordingly.

- Replication: This check monitors the replication success percentage. An alert is set up against the account, container, and object servers. If this check fails, determine why objects are not replicating and fix accordingly.
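The health check and the swift-recon checks described above can also be exercised manually. A sketch: the proxy address and port are illustrative (8080 is a common swift proxy port), and swift-recon is run from a host with access to the storage network:

    $ curl -i http://172.29.236.10:8080/healthcheck
    HTTP/1.1 200 OK
    ...
    OK

    $ swift-recon --md5      # ring and swift.conf md5sum checks
    $ swift-recon object -a  # async pendings
    $ swift-recon object -q  # quarantined objects
    $ swift-recon object -r  # replication stats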

Appendix A. Object Storage configuration files

Table of Contents

A.1. swift.yml example configuration file
A.2. user_variables.yml configuration file

Object Storage is configured using the /etc/rpc_deploy/conf.d/swift.yml file and the /etc/rpc_deploy/user_variables.yml file. This appendix shows both configuration files and indicates which variables are required and which are optional.

A.1. swift.yml example configuration file

---
Copyright 2015, Rackspace US, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Overview
========

This file contains the configuration for the OpenStack Ansible Deployment (OSA) Object Storage (swift) service. Only enable these options for deployments that contain the Object Storage service.

For more information on these options, see the documentation at

You can customize the options in this file and copy it to /etc/openstack_deploy/conf.d/swift.yml, or create a new file containing only the necessary options for your environment before deployment.

OSA implements PyYAML to parse YAML files and therefore supports structure and formatting options that augment traditional YAML, for example, aliases or references. For more information on PyYAML, see the documentation at

Configuration reference
=======================

Level: global_overrides (required)
    Contains global options that require customization for a deployment, for example, the ring structure. This level also provides a mechanism to override other options defined in the playbook structure.

    Level: swift (required)
        Contains options for swift.

        Option: storage_network (required, string)
            Name of the storage network bridge on target hosts. Typically 'br-storage'.

        Option: repl_network (optional, string)
            Name of the replication network bridge on target hosts. Typically 'br-repl'. Defaults to the value of the 'storage_network' option.

        Option: part_power (required, integer)
            Partition power. Applies to all rings unless overridden at the 'account' or 'container' levels or within a policy in the 'storage_policies' level. Immutable without rebuilding the rings.

        Option: repl_number (optional, integer)
            Number of replicas for each partition. Applies to all rings unless overridden at the 'account' or 'container' levels or within a policy in the 'storage_policies' level. Defaults to 3.

        Option: min_part_hours (optional, integer)
            Minimum time in hours between multiple moves of the same partition. Applies to all rings unless overridden at the 'account' or 'container' levels or within a policy in the 'storage_policies' level. Defaults to 1.

        Option: region (optional, integer)
            Region of a disk. Applies to all disks in all storage hosts unless overridden deeper in the structure. Defaults to 1.

        Option: zone (optional, integer)
            Zone of a disk. Applies to all disks in all storage hosts unless overridden deeper in the structure. Defaults to 0.

        Option: weight (optional, integer)
            Weight of a disk. Applies to all disks in all storage hosts unless overridden deeper in the structure. Defaults to 100.

Example: Define a typical deployment:

- Storage network that uses the 'br-storage' bridge. Proxy containers typically use the 'storage' IP address pool. However, storage hosts use bare metal and require manual configuration of the 'br-storage' bridge on each host.
- Replication network that uses the 'br-repl' bridge. Only storage hosts contain this network. Storage hosts use bare metal and require manual configuration of the bridge on each host.
- Ring configuration with a partition power of 8, three replicas of each file, and a minimum of 1 hour between migrations of the same partition. All rings use region 1 and zone 0. All disks include a weight of 100.

    swift:
      storage_network: 'br-storage'
      replication_network: 'br-repl'
      part_power: 8
      repl_number: 3
      min_part_hours: 1
      region: 1
      zone: 0
      weight: 100

Note: Most typical deployments override the 'zone' option in the 'swift_vars' level to use a unique zone for each storage host.

Option: mount_point (required, string)
    Top-level directory for mount points of disks. Defaults to /mnt. Applies to all hosts unless overridden deeper in the structure.

Level: drives (required)
    Contains the mount points of disks. Applies to all hosts unless overridden deeper in the structure.

    Option: name (required, string)
        Mount point of a disk. Use one entry for each disk. Applies to all hosts unless overridden deeper in the structure.

Example: Mount disks 'sdc', 'sdd', 'sde', and 'sdf' to the '/mnt' directory on all storage hosts:

    mount_point: /mnt
    drives:
      - name: sdc
      - name: sdd
      - name: sde
      - name: sdf

Level: account (optional)
    Contains 'min_part_hours' and 'repl_number' options specific to the account ring.

Level: container (optional)
    Contains 'min_part_hours' and 'repl_number' options specific to the container ring.

Level: storage_policies (required)
    Contains storage policies. Minimum one policy. One policy must include the 'index: 0' and 'default: True' options.

    Level: policy (required)
        Contains a storage policy. Define one for each policy.

        Option: name (required, string)
            Policy name.

        Option: index (required, integer)
            Policy index. One policy must include this option with a '0' value.

        Option: policy_type (optional, string)
            Defines the policy as replication or erasure coding. Accepts the 'replication' or 'erasure_coding' values. Defaults to the 'replication' value if omitted.

        Option: ec_type (conditionally required, string)
            Defines the erasure coding algorithm. Required for erasure coding policies.

        Option: ec_num_data_fragments (conditionally required, integer)
            Defines the number of object data fragments. Required for erasure coding policies.

        Option: ec_num_parity_fragments (conditionally required, integer)
            Defines the number of object parity fragments. Required for erasure coding policies.

        Option: ec_object_segment_size (conditionally required, integer)
            Defines the size of object segments in bytes. Swift sends incoming objects to an erasure coding policy in segments of this size. Required for erasure coding policies.

        Option: default (conditionally required, boolean)
            Defines the default policy. One policy must include this option with a 'True' value.

        Option: deprecated (optional, boolean)
            Defines a deprecated policy.

        Note: The following options override any values higher in the structure and generally apply to advanced deployments.

        Option: repl_number (optional, integer)
            Number of replicas of each partition in this policy.

        Option: min_part_hours (optional, integer)
            Minimum time in hours between multiple moves of the same partition in this policy.

Example: Define three storage policies: a default 'gold' policy, a deprecated 'silver' policy, and an erasure coding 'ec10-4' policy.

    storage_policies:
      - policy:
          name: gold
          index: 0
          default: True
      - policy:
          name: silver
          index: 1
          repl_number: 3
          deprecated: True
      - policy:
          name: ec10-4
          index: 2
          policy_type: erasure_coding
          ec_type: jerasure_rs_vand
          ec_num_data_fragments: 10
          ec_num_parity_fragments: 4
          ec_object_segment_size:

Level: swift_proxy-hosts (required)
    List of target hosts on which to deploy the swift proxy service. Recommend a minimum of three target hosts for these services. Typically contains the same target hosts as the 'shared-infra_hosts' level in complete OpenStack deployments.

    Level: <value> (optional, string)
        Name of a proxy host.

        Option: ip (required, string)
            IP address of this target host, typically the IP address assigned to the management bridge.

        Level: container_vars (optional)
            Contains options for this target host.

            Level: swift_proxy_vars (optional)
                Contains swift proxy options for this target host. Typical deployments use this level to define read/write affinity settings for proxy hosts.

                Option: read_affinity (optional, string)
                    Specify which region/zones the proxy server should prefer for reads from the account, container, and object services. For example, read_affinity: "r1=100" would prefer region 1, and read_affinity: "r1z1=100, r1=200" would prefer region 1 zone 1 or, if that is unavailable, region 1, otherwise any available region/zone. A lower number is a higher priority. When this option is specified, the sorting_method is set to 'affinity' automatically.

                Option: write_affinity (optional, string)
                    Specify which region to prefer when object PUT requests are made. For example, write_affinity: "r1" favours region 1 for object PUTs.

                Option: write_affinity_node_count (optional, string)
                    Specify how many copies to prioritise in the specified region on handoff nodes for object PUT requests. Requires "write_affinity" to be set in order to be useful. This is a short-term way to ensure replication happens locally; Swift's eventual consistency will ensure proper distribution over time. For example, write_affinity_node_count: "2 * replicas" would try to store object PUT replicas on up to 6 disks in region 1, assuming replicas is 3 and write_affinity = r1.

Example: Define three swift proxy hosts:

    swift_proxy-hosts:
      infra1:
        ip:
        container_vars:
          swift_proxy_vars:
            read_affinity: "r1=100"
            write_affinity: "r1"
            write_affinity_node_count: "2 * replicas"
      infra2:
        ip:
        container_vars:
          swift_proxy_vars:
            read_affinity: "r2=100"
            write_affinity: "r2"
            write_affinity_node_count: "2 * replicas"
      infra3:
        ip:
        container_vars:
          swift_proxy_vars:
            read_affinity: "r3=100"
            write_affinity: "r3"
            write_affinity_node_count: "2 * replicas"

Level: swift_hosts (required)
    List of target hosts on which to deploy the swift storage services. Recommend a minimum of three target hosts for these services.

    Level: <value> (required, string)
        Name of a storage host.

        Option: ip (required, string)
            IP address of this target host, typically the IP address assigned to the management bridge.

    Note: The following levels and options override any values higher in the structure and generally apply to advanced deployments.

        Level: container_vars (optional)
            Contains options for this target host.

            Level: swift_vars (optional)
                Contains swift options for this target host. Typical deployments use this level to define a unique zone for each storage host.

                Option: storage_ip (optional, string)
                    IP address to use for accessing the account, container, and object services if different than the IP address of the storage network bridge on the target host. Also requires manual configuration of the host.

                Option: repl_ip (optional, string)
                    IP address to use for replication services if different than the IP address of the replication network bridge on the target host. Also requires manual configuration of the host.

                Option: region (optional, integer)
                    Region of all disks.

                Option: zone (optional, integer)
                    Zone of all disks.

                Option: weight (optional, integer)
                    Weight of all disks.

                Level: groups (optional)
                    List of one or more Ansible groups that apply to this host.

                Example: Deploy the account ring, container ring, and 'silver' policy:

                    groups:
                      - account
                      - container
                      - silver

                Level: drives (optional)
                    Contains the mount points of disks specific to this host.

                    Level or option: name (optional, string)
                        Mount point of a disk specific to this host. Use one entry for each disk. Functions as a level for disks that contain additional options.

                        Option: storage_ip (optional, string)
                            IP address to use for accessing the account, container, and object services of a disk if different than the IP address of the storage network bridge on the target host. Also requires manual configuration of the host.

                        Option: repl_ip (optional, string)
                            IP address to use for replication services of a disk if different than the IP address of the replication network bridge on the target host. Also requires manual configuration of the host.

                        Option: region (optional, integer)
                            Region of a disk.

                        Option: zone (optional, integer)
                            Zone of a disk.

                        Option: weight (optional, integer)
                            Weight of a disk.

                        Level: groups (optional)
                            List of one or more Ansible groups that apply to this disk.

Example: Define four storage hosts. The first three hosts contain typical options and the last host contains advanced options.

    swift_hosts:
      swift-node1:
        ip:
        container_vars:
          swift_vars:
            zone: 0
      swift-node2:
        ip:
        container_vars:
          swift_vars:
            zone: 1
      swift-node3:
        ip:
        container_vars:
          swift_vars:
            zone: 2
      swift-node4:
        ip:
        container_vars:
          swift_vars:
            storage_ip:
            repl_ip:
            region: 2
            zone: 0
            weight: 200
            groups:
              - account
              - container
              - silver
            drives:
              - name: sdc
                storage_ip:
                repl_ip:
                weight: 75
                groups:
                  - gold
              - name: sdd
              - name: sde
              - name: sdf

A.2. user_variables.yml configuration file

---
Copyright 2014, Rackspace US, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Ceilometer Options

    ceilometer_db_type: mongodb
    ceilometer_db_ip: localhost
    ceilometer_db_port:
    swift_ceilometer_enabled: False
    heat_ceilometer_enabled: False
    cinder_ceilometer_enabled: False
    glance_ceilometer_enabled: False
    nova_ceilometer_enabled: False

Aodh Options

    aodh_db_type: mongodb
    aodh_db_ip: localhost
    aodh_db_port:

Glance Options

Set glance_default_store to "swift" if using Cloud Files or a swift back end, or to "rbd" if using a ceph back end; the latter will trigger ceph to get installed on glance.

    glance_default_store: file
    glance_notification_driver: noop

`internalurl` will cause glance to speak to swift via ServiceNet; use `publicurl` to communicate with swift over the public network.

    glance_swift_store_endpoint_type: internalurl

Ceph client user for glance to connect to the ceph cluster, and the ceph pool name for glance to use:

    glance_ceph_client: glance
    glance_rbd_store_pool: images
    glance_rbd_store_chunk_size: 8

Nova

When nova_libvirt_images_rbd_pool is defined, ceph will be installed on nova hosts.

    nova_libvirt_images_rbd_pool: vms

By default we assume you use rbd for both cinder and nova, and as libvirt needs to access both volumes (cinder) and boot disks (nova), we default to reusing the cinder_ceph_client. Only change this if you use ceph for boot disks and not for volumes.

    nova_ceph_client:
    nova_ceph_client_uuid:

This defaults to KVM; if you are deploying on a host that is not KVM capable, change this to your hypervisor type, for example "qemu" or "lxc".

    nova_virt_type: kvm
    nova_cpu_allocation_ratio: 2.0
    nova_ram_allocation_ratio: 1.0

If you wish to change the dhcp_domain configured for both nova and neutron:

    dhcp_domain:

Glance with Swift

Extra options when configuring swift as a glance back end. By default it will use the local swift install. Set these when using a remote swift as a glance back end:

    glance_swift_store_auth_address: "
    glance_swift_store_user: "OPENSTACK_TENANT_ID:OPENSTACK_USER_NAME"
    glance_swift_store_key: "OPENSTACK_USER_PASSWORD"
    glance_swift_store_container: "NAME_OF_SWIFT_CONTAINER"
    glance_swift_store_region: "NAME_OF_REGION"

Cinder

Ceph client user for cinder to connect to the ceph cluster:

    cinder_ceph_client: cinder

Ceph

Enable these if you use ceph rbd for at least one component (glance, cinder, nova):

    ceph_apt_repo_url_region: "www" (or "eu" for a Netherlands-based mirror)
    ceph_stable_release: hammer

Ceph authentication — by default cephx is true:

    cephx: true

Ceph Monitors

A list of the IP addresses for your Ceph monitors:

    ceph_mons:

Custom Ceph Configuration File (ceph.conf)

By default, your deployment host will connect to one of the mons defined above to obtain a copy of your cluster's ceph.conf. If you prefer, uncomment ceph_conf_file and customise it to avoid ceph.conf being copied from a mon.

    ceph_conf_file: |
      [global]
      fsid =
      mon_initial_members = mon1.example.local,mon2.example.local,mon3.example.local
      mon_host = , ,
      (optionally, you can use this construct to avoid defining this list twice: mon_host = {{ ceph_mons | join(',') }})
      auth_cluster_required = cephx
      auth_service_required = cephx

SSL Settings

Adjust these settings to change how SSL connectivity is configured for various services. For more information, see the openstack-ansible documentation section titled "Securing services with SSL certificates".

SSL: Keystone

These do not need to be configured unless you're creating certificates for services running behind Apache (currently, Horizon and Keystone).

    ssl_protocol: "ALL -SSLv2 -SSLv3"

Cipher suite string from

    ssl_cipher_suite: "ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS"

To override for Keystone only:
- keystone_ssl_protocol
- keystone_ssl_cipher_suite

To override for Horizon only:
- horizon_ssl_protocol
- horizon_ssl_cipher_suite

SSL: RabbitMQ

Set these variables if you prefer to use existing SSL certificates, keys, and CA certificates with the RabbitMQ SSL/TLS listener:

    rabbitmq_user_ssl_cert: <path to cert on ansible deployment host>
    rabbitmq_user_ssl_key: <path to cert on ansible deployment host>
    rabbitmq_user_ssl_ca_cert: <path to cert on ansible deployment host>

By default, openstack-ansible configures all OpenStack services to talk to


More information

Ubuntu OpenStack on VMware vsphere: A reference architecture for deploying OpenStack while limiting changes to existing infrastructure

Ubuntu OpenStack on VMware vsphere: A reference architecture for deploying OpenStack while limiting changes to existing infrastructure TECHNICAL WHITE PAPER Ubuntu OpenStack on VMware vsphere: A reference architecture for deploying OpenStack while limiting changes to existing infrastructure A collaboration between Canonical and VMware

More information

SUSE Cloud 5 Private Cloud based on OpenStack

SUSE Cloud 5 Private Cloud based on OpenStack SUSE Cloud 5 Private Cloud based on OpenStack Michał Jura Senior Software Engineer Linux HA/Cloud Developer mjura@suse.com 2 New solutions emerge: Infrastructure-as-Service Cloud = 3 SUSE Cloud Why OpenStack?

More information

CumuLogic Load Balancer Overview Guide. March 2013. CumuLogic Load Balancer Overview Guide 1

CumuLogic Load Balancer Overview Guide. March 2013. CumuLogic Load Balancer Overview Guide 1 CumuLogic Load Balancer Overview Guide March 2013 CumuLogic Load Balancer Overview Guide 1 Table of Contents CumuLogic Load Balancer... 3 Architectural Overview of CumuLogic Load Balancer... 4 How to Use

More information

CloudCIX Bootcamp. The essential IaaS getting started guide. http://www.cix.ie

CloudCIX Bootcamp. The essential IaaS getting started guide. http://www.cix.ie The essential IaaS getting started guide. http://www.cix.ie Revision Date: 17 th August 2015 Contents Acronyms... 2 Table of Figures... 3 1 Welcome... 4 2 Architecture... 5 3 Getting Started... 6 3.1 Login

More information

WP4: Cloud Hosting Chapter Object Storage Generic Enabler

WP4: Cloud Hosting Chapter Object Storage Generic Enabler WP4: Cloud Hosting Chapter Object Storage Generic Enabler Webinar John Kennedy, Thijs Metsch@ Intel Outline 1 Overview of the Cloud Hosting Work Package 2 Functionality Trust and Security Operations FI-WARE

More information

Openstack. Cloud computing with Openstack. Saverio Proto saverio.proto@switch.ch

Openstack. Cloud computing with Openstack. Saverio Proto saverio.proto@switch.ch Openstack Cloud computing with Openstack Saverio Proto saverio.proto@switch.ch Lugano, 23/03/2016 Agenda SWITCH role in Openstack and Cloud Computing What is Virtualization? Why is Cloud computing more

More information

Using SUSE Cloud to Orchestrate Multiple Hypervisors and Storage at ADP

Using SUSE Cloud to Orchestrate Multiple Hypervisors and Storage at ADP Using SUSE Cloud to Orchestrate Multiple Hypervisors and Storage at ADP Agenda ADP Cloud Vision and Requirements Introduction to SUSE Cloud Overview Whats New VMWare intergration HyperV intergration ADP

More information

Introduction to OpenStack

Introduction to OpenStack Introduction to OpenStack Carlo Vallati PostDoc Reseracher Dpt. Information Engineering University of Pisa carlo.vallati@iet.unipi.it Cloud Computing - Definition Cloud Computing is a term coined to refer

More information

Service Description Cloud Storage Openstack Swift

Service Description Cloud Storage Openstack Swift Service Description Cloud Storage Openstack Swift Table of Contents Overview iomart Cloud Storage... 3 iomart Cloud Storage Features... 3 Technical Features... 3 Proxy... 3 Storage Servers... 4 Consistency

More information

Cloud.com CloudStack Community Edition 2.1 Beta Installation Guide

Cloud.com CloudStack Community Edition 2.1 Beta Installation Guide Cloud.com CloudStack Community Edition 2.1 Beta Installation Guide July 2010 1 Specifications are subject to change without notice. The Cloud.com logo, Cloud.com, Hypervisor Attached Storage, HAS, Hypervisor

More information

Introducing ScienceCloud

Introducing ScienceCloud Zentrale Informatik Introducing ScienceCloud Sergio Maffioletti IS/Cloud S3IT: Service and Support for Science IT Zurich, 10.03.2015 What are we going to talk about today? 1. Why are we building ScienceCloud?

More information

The OpenStack TM Object Storage system

The OpenStack TM Object Storage system The OpenStack TM Object Storage system Deploying and managing a scalable, open- source cloud storage system with the SwiftStack Platform By SwiftStack, Inc. contact@swiftstack.com Contents Introduction...

More information

Cloud on TEIN Part I: OpenStack Cloud Deployment. Vasinee Siripoonya Electronic Government Agency of Thailand Kasidit Chanchio Thammasat University

Cloud on TEIN Part I: OpenStack Cloud Deployment. Vasinee Siripoonya Electronic Government Agency of Thailand Kasidit Chanchio Thammasat University Cloud on TEIN Part I: OpenStack Cloud Deployment Vasinee Siripoonya Electronic Government Agency of Thailand Kasidit Chanchio Thammasat University Outline Objectives Part I: OpenStack Overview How OpenStack

More information

Integrating Scality RING into OpenStack Unified Storage for Cloud Infrastructure

Integrating Scality RING into OpenStack Unified Storage for Cloud Infrastructure Unified Storage for Cloud Infrastructure Nicolas Trangez Jordan Pittier Björn Schuberg Contents Introduction... 1 About OpenStack... 1 About Scality... 2 Services...3 OpenStack Swift...3 Architecture...4

More information

How to manage your OpenStack Swift Cluster using Swift Metrics Sreedhar Varma Vedams Inc.

How to manage your OpenStack Swift Cluster using Swift Metrics Sreedhar Varma Vedams Inc. How to manage your OpenStack Swift Cluster using Swift Metrics Sreedhar Varma Vedams Inc. What is OpenStack Swift Cluster? Cluster of Storage Server Nodes, Proxy Server Nodes and Storage Devices 2 Data

More information

Deploying workloads with Juju and MAAS in Ubuntu 13.04

Deploying workloads with Juju and MAAS in Ubuntu 13.04 Deploying workloads with Juju and MAAS in Ubuntu 13.04 A Dell Technical White Paper Kent Baxley Canonical Field Engineer Jose De la Rosa Dell Software Engineer 2 THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES

More information

How to Deploy OpenStack on TH-2 Supercomputer Yusong Tan, Bao Li National Supercomputing Center in Guangzhou April 10, 2014

How to Deploy OpenStack on TH-2 Supercomputer Yusong Tan, Bao Li National Supercomputing Center in Guangzhou April 10, 2014 How to Deploy OpenStack on TH-2 Supercomputer Yusong Tan, Bao Li National Supercomputing Center in Guangzhou April 10, 2014 2014 年 云 计 算 效 率 与 能 耗 暨 第 一 届 国 际 云 计 算 咨 询 委 员 会 中 国 高 峰 论 坛 Contents Background

More information

RSA Security Analytics Virtual Appliance Setup Guide

RSA Security Analytics Virtual Appliance Setup Guide RSA Security Analytics Virtual Appliance Setup Guide Copyright 2010-2015 RSA, the Security Division of EMC. All rights reserved. Trademarks RSA, the RSA Logo and EMC are either registered trademarks or

More information

KVM, OpenStack, and the Open Cloud

KVM, OpenStack, and the Open Cloud KVM, OpenStack, and the Open Cloud Adam Jollans, IBM Southern California Linux Expo February 2015 1 Agenda A Brief History of VirtualizaJon KVM Architecture OpenStack Architecture KVM and OpenStack Case

More information

Installation Runbook for Avni Software Defined Cloud

Installation Runbook for Avni Software Defined Cloud Installation Runbook for Avni Software Defined Cloud Application Version 2.5 MOS Version 6.1 OpenStack Version Application Type Juno Hybrid Cloud Management System Content Document History 1 Introduction

More information

INSTALL ZENTYAL SERVER

INSTALL ZENTYAL SERVER GUIDE FOR Zentyal Server is a small business server based on Ubuntu s LTS server version 10.04 and the ebox platform. It also has the LXDE desktop installed with Firefox web browser and PCMAN File manager.

More information

KVM, OpenStack, and the Open Cloud

KVM, OpenStack, and the Open Cloud KVM, OpenStack, and the Open Cloud Adam Jollans, IBM & Mike Kadera, Intel CloudOpen Europe - October 13, 2014 13Oct14 Open VirtualizaGon Alliance 1 Agenda A Brief History of VirtualizaGon KVM Architecture

More information

vrealize Operations Management Pack for OpenStack

vrealize Operations Management Pack for OpenStack vrealize Operations Management Pack for This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more

More information

Installation Runbook for F5 Networks BIG-IP LBaaS Plugin for OpenStack Kilo

Installation Runbook for F5 Networks BIG-IP LBaaS Plugin for OpenStack Kilo Installation Runbook for F5 Networks BIG-IP LBaaS Plugin for OpenStack Kilo Application Version F5 BIG-IP TMOS 11.6 MOS Version 7.0 OpenStack Version Application Type Openstack Kilo Validation of LBaaS

More information

Corso di Reti di Calcolatori M

Corso di Reti di Calcolatori M Università degli Studi di Bologna Scuola di Ingegneria Corso di Reti di Calcolatori M Cloud: Openstack Antonio Corradi Luca Foschini Anno accademico 2014/2015 NIST STANDARD CLOUD National Institute of

More information

Building Multi-Site & Ultra-Large Scale Cloud with Openstack Cascading

Building Multi-Site & Ultra-Large Scale Cloud with Openstack Cascading Building Multi-Site & Ultra-Large Scale Cloud with Openstack Cascading Requirement and driving forces multi-site cloud Along with the increasing popularity and wide adoption of Openstack as the de facto

More information

Parallels Cloud Storage

Parallels Cloud Storage Parallels Cloud Storage White Paper Best Practices for Configuring a Parallels Cloud Storage Cluster www.parallels.com Table of Contents Introduction... 3 How Parallels Cloud Storage Works... 3 Deploying

More information

AMD SEAMICRO OPENSTACK BLUEPRINTS CLOUD- IN- A- BOX OCTOBER 2013

AMD SEAMICRO OPENSTACK BLUEPRINTS CLOUD- IN- A- BOX OCTOBER 2013 AMD SEAMICRO OPENSTACK BLUEPRINTS CLOUD- IN- A- BOX OCTOBER 2013 OpenStack What is OpenStack? OpenStack is a cloud operaeng system that controls large pools of compute, storage, and networking resources

More information

SYNNEFO: A COMPLETE CLOUD PLATFORM OVER GOOGLE GANETI WITH OPENSTACK APIs VANGELIS KOUKIS, TECH LEAD, SYNNEFO

SYNNEFO: A COMPLETE CLOUD PLATFORM OVER GOOGLE GANETI WITH OPENSTACK APIs VANGELIS KOUKIS, TECH LEAD, SYNNEFO SYNNEFO: A COMPLETE CLOUD PLATFORM OVER GOOGLE GANETI WITH OPENSTACK APIs VANGELIS KOUKIS, TECH LEAD, SYNNEFO 1 Synnefo cloud platform An all-in-one cloud solution Written from scratch in Python Manages

More information

Syncplicity On-Premise Storage Connector

Syncplicity On-Premise Storage Connector Syncplicity On-Premise Storage Connector Implementation Guide Abstract This document explains how to install and configure the Syncplicity On-Premise Storage Connector. In addition, it also describes how

More information

StorPool Distributed Storage Software Technical Overview

StorPool Distributed Storage Software Technical Overview StorPool Distributed Storage Software Technical Overview StorPool 2015 Page 1 of 8 StorPool Overview StorPool is distributed storage software. It pools the attached storage (hard disks or SSDs) of standard

More information

Ubuntu OpenStack Fundamentals Training

Ubuntu OpenStack Fundamentals Training Ubuntu OpenStack Fundamentals Training Learn from the best, how to use the best! You ve made the decision to use the most powerful open cloud platform, and now you need to learn how to make the most of

More information

GRAVITYZONE HERE. Deployment Guide VLE Environment

GRAVITYZONE HERE. Deployment Guide VLE Environment GRAVITYZONE HERE Deployment Guide VLE Environment LEGAL NOTICE All rights reserved. No part of this document may be reproduced or transmitted in any form or by any means, electronic or mechanical, including

More information

Multi Provider Cloud. Srinivasa Acharya, Engineering Manager, Hewlett-Packard rsacharya@hp.com

Multi Provider Cloud. Srinivasa Acharya, Engineering Manager, Hewlett-Packard rsacharya@hp.com Multi Provider Cloud Srinivasa Acharya, Engineering Manager, Hewlett-Packard rsacharya@hp.com Agenda Introduction to OpenStack Multi Hypervisor Architecture Use cases for Multi Hypervisor cloud Ironic

More information

1 Keystone OpenStack Identity Service

1 Keystone OpenStack Identity Service 1 Keystone OpenStack Identity Service In this chapter, we will cover: Creating a sandbox environment using VirtualBox and Vagrant Configuring the Ubuntu Cloud Archive Installing OpenStack Identity Service

More information

Deployment Guide Oracle Siebel CRM

Deployment Guide Oracle Siebel CRM Deployment Guide Oracle Siebel CRM DG_ OrSCRM_032013.1 TABLE OF CONTENTS 1 Introduction...4 2 Deployment Topology...4 2.1 Deployment Prerequisites...6 2.2 Siebel CRM Server Roles...7 3 Accessing the AX

More information

Benchmarking Sahara-based Big-Data-as-a-Service Solutions. Zhidong Yu, Weiting Chen (Intel) Matthew Farrellee (Red Hat) May 2015

Benchmarking Sahara-based Big-Data-as-a-Service Solutions. Zhidong Yu, Weiting Chen (Intel) Matthew Farrellee (Red Hat) May 2015 Benchmarking Sahara-based Big-Data-as-a-Service Solutions Zhidong Yu, Weiting Chen (Intel) Matthew Farrellee (Red Hat) May 2015 Agenda o Why Sahara o Sahara introduction o Deployment considerations o Performance

More information

insync Installation Guide

insync Installation Guide insync Installation Guide 5.2 Private Cloud Druva Software June 21, 13 Copyright 2007-2013 Druva Inc. All Rights Reserved. Table of Contents Deploying insync Private Cloud... 4 Installing insync Private

More information

SMB in the Cloud David Disseldorp

SMB in the Cloud David Disseldorp SMB in the Cloud David Disseldorp Samba Team / SUSE ddiss@suse.de Agenda Cloud storage Common types Interfaces Applications Cloud file servers Microsoft Azure File Service Demonstration Amazon Elastic

More information

Introduction to Gluster. Versions 3.0.x

Introduction to Gluster. Versions 3.0.x Introduction to Gluster Versions 3.0.x Table of Contents Table of Contents... 2 Overview... 3 Gluster File System... 3 Gluster Storage Platform... 3 No metadata with the Elastic Hash Algorithm... 4 A Gluster

More information

1.1 SERVICE DESCRIPTION

1.1 SERVICE DESCRIPTION ADVANIA OPENCLOUD SERCVICE LEVEL AGREEMENT 1.1 SERVICE DESCRIPTION The service is designed in a way that will minimize Advania s operational involvement. Advania administrates the cloud platform and provides

More information

SUSE Cloud Installation: Best Practices Using an Existing SMT and KVM Environment

SUSE Cloud Installation: Best Practices Using an Existing SMT and KVM Environment Best Practices Guide www.suse.com SUSE Cloud Installation: Best Practices Using an Existing SMT and KVM Environment Written by B1 Systems GmbH Table of Contents Introduction...3 Use Case Overview...3 Hardware

More information

IBRIX Fusion 3.1 Release Notes

IBRIX Fusion 3.1 Release Notes Release Date April 2009 Version IBRIX Fusion Version 3.1 Release 46 Compatibility New Features Version 3.1 CLI Changes RHEL 5 Update 3 is supported for Segment Servers and IBRIX Clients RHEL 5 Update 2

More information

An Introduction to OpenStack and its use of KVM. Daniel P. Berrangé <berrange@redhat.com>

An Introduction to OpenStack and its use of KVM. Daniel P. Berrangé <berrange@redhat.com> An Introduction to OpenStack and its use of KVM Daniel P. Berrangé About me Contributor to multiple virt projects Libvirt Developer / Architect 8 years OpenStack contributor 1 year

More information

w w w. u l t i m u m t e c h n o l o g i e s. c o m Infrastructure-as-a-Service on the OpenStack platform

w w w. u l t i m u m t e c h n o l o g i e s. c o m Infrastructure-as-a-Service on the OpenStack platform w w w. u l t i m u m t e c h n o l o g i e s. c o m Infrastructure-as-a-Service on the OpenStack platform http://www.ulticloud.com http://www.openstack.org Introduction to OpenStack 1. What OpenStack is

More information

SUSE Cloud. www.suse.com. Deployment Guide. February 20, 2015

SUSE Cloud. www.suse.com. Deployment Guide. February 20, 2015 SUSE Cloud 5 February 20, 2015 www.suse.com Deployment Guide Deployment Guide List of Authors: Frank Sundermeyer, Tanja Roth Copyright 2006 2015 SUSE LLC and contributors. All rights reserved. Except where

More information

StreamServe Persuasion SP5 StreamStudio

StreamServe Persuasion SP5 StreamStudio StreamServe Persuasion SP5 StreamStudio Administrator s Guide Rev B StreamServe Persuasion SP5 StreamStudio Administrator s Guide Rev B OPEN TEXT CORPORATION ALL RIGHTS RESERVED United States and other

More information

Desktop virtualization using SaaS Architecture

Desktop virtualization using SaaS Architecture Desktop virtualization using SaaS Architecture Pranit U. Patil, Pranav S. Ambavkar, Dr.B.B.Meshram, Prof. Varshapriya VJTI, Matunga, Mumbai, India. pranit_patil@aol.in Abstract - Desktop virtualization

More information

JAMF Software Server Installation and Configuration Guide for OS X. Version 9.0

JAMF Software Server Installation and Configuration Guide for OS X. Version 9.0 JAMF Software Server Installation and Configuration Guide for OS X Version 9.0 JAMF Software, LLC 2013 JAMF Software, LLC. All rights reserved. JAMF Software has made all efforts to ensure that this guide

More information

Acronis Storage Gateway

Acronis Storage Gateway Acronis Storage Gateway DEPLOYMENT GUIDE Revision: 12/30/2015 Table of contents 1 Introducing Acronis Storage Gateway...3 1.1 Supported storage backends... 3 1.2 Architecture and network diagram... 4 1.3

More information

VMware vcloud Automation Center 6.0

VMware vcloud Automation Center 6.0 VMware 6.0 Reference Architecture TECHNICAL WHITE PAPER Table of Contents Overview... 4 Initial Deployment Recommendations... 4 General Recommendations... 4... 4 Load Balancer Considerations... 4 Database

More information

Comparing Ganeti to other Private Cloud Platforms. Lance Albertson Director lance@osuosl.org @ramereth

Comparing Ganeti to other Private Cloud Platforms. Lance Albertson Director lance@osuosl.org @ramereth Comparing Ganeti to other Private Cloud Platforms Lance Albertson Director lance@osuosl.org @ramereth About me OSU Open Source Lab Server hosting for Open Source Projects Open Source development projects

More information

2) Xen Hypervisor 3) UEC

2) Xen Hypervisor 3) UEC 5. Implementation Implementation of the trust model requires first preparing a test bed. It is a cloud computing environment that is required as the first step towards the implementation. Various tools

More information

Project Documentation

Project Documentation Project Documentation Class: ISYS 567 Internship Instructor: Prof. Verma Students: Brandon Lai Pascal Schuele 1/20 Table of Contents 1.) Introduction to Cloud Computing... 3 2.) Public vs. Private Cloud...

More information

Rackspace Private Cloud Reference Architecture

Rackspace Private Cloud Reference Architecture RACKSPACE PRIVATE CLOUD REFERENCE ARCHITECTURE SOLIDFIRE Rackspace Private Cloud Reference Architecture SolidFire Legal Notices The software described in this user guide is furnished under a license agreement

More information

Mobile Cloud Computing T-110.5121 Open Source IaaS

Mobile Cloud Computing T-110.5121 Open Source IaaS Mobile Cloud Computing T-110.5121 Open Source IaaS Tommi Mäkelä, Otaniemi Evolution Mainframe Centralized computation and storage, thin clients Dedicated hardware, software, experienced staff High capital

More information

VMware vrealize Automation

VMware vrealize Automation VMware vrealize Automation Reference Architecture Version 6.0 and Higher T E C H N I C A L W H I T E P A P E R Table of Contents Overview... 4 What s New... 4 Initial Deployment Recommendations... 4 General

More information

Fuel User Guide. version 8.0

Fuel User Guide. version 8.0 Fuel version 8.0 Contents Preface 1 Intended Audience 1 Documentation History 1 Introduction to the 2 Create a new OpenStack environment 3 Create an OpenStack environment in the deployment wizard 3 Change

More information

Postgres on OpenStack

Postgres on OpenStack Postgres on OpenStack Dave Page 18/9/2014 2014 EnterpriseDB Corporation. All rights reserved. 1 Introduction PostgreSQL: Core team member pgadmin lead developer Web/sysadmin teams PGCAC/PGEU board member

More information

OpenStack Installation Guide for Red Hat Enterprise Linux, CentOS, and Fedora

OpenStack Installation Guide for Red Hat Enterprise Linux, CentOS, and Fedora docs.openstack.org OpenStack Installation Guide for Red Hat Enterprise Linux, CentOS, and Fedora (2013-06-11) Copyright 2012, 2013 OpenStack Foundation All rights reserved. The OpenStack system has several

More information

App Orchestration Setup Checklist

App Orchestration Setup Checklist App Orchestration Setup Checklist This checklist is a convenient tool to help you plan and document your App Orchestration deployment. Use this checklist along with the Getting Started with Citrix App

More information

On- Prem MongoDB- as- a- Service Powered by the CumuLogic DBaaS Platform

On- Prem MongoDB- as- a- Service Powered by the CumuLogic DBaaS Platform On- Prem MongoDB- as- a- Service Powered by the CumuLogic DBaaS Platform Page 1 of 16 Table of Contents Table of Contents... 2 Introduction... 3 NoSQL Databases... 3 CumuLogic NoSQL Database Service...

More information

OpenStack Manila File Storage Bob Callaway, PhD Cloud Solutions Group,

OpenStack Manila File Storage Bob Callaway, PhD Cloud Solutions Group, OpenStack Manila File Storage Bob Callaway, PhD Cloud Solutions Group, Agenda Project Overview API Overview Architecture Discussion Driver Details Project Status & Upcoming Features Q & A 2 Manila: Project

More information

Déployer son propre cloud avec OpenStack. GULL 18.11.2014 François Deppierraz francois.deppierraz@nimag.net

Déployer son propre cloud avec OpenStack. GULL 18.11.2014 François Deppierraz francois.deppierraz@nimag.net Déployer son propre cloud avec OpenStack GULL francois.deppierraz@nimag.net Who Am I? System and Network Engineer Stuck in the Linux world for almost 2 decades Sysadmin who doesn't like to type the same

More information

ENABLING GLOBAL HADOOP WITH EMC ELASTIC CLOUD STORAGE

ENABLING GLOBAL HADOOP WITH EMC ELASTIC CLOUD STORAGE ENABLING GLOBAL HADOOP WITH EMC ELASTIC CLOUD STORAGE Hadoop Storage-as-a-Service ABSTRACT This White Paper illustrates how EMC Elastic Cloud Storage (ECS ) can be used to streamline the Hadoop data analytics

More information

Moving Virtual Storage to the Cloud

Moving Virtual Storage to the Cloud Moving Virtual Storage to the Cloud White Paper Guidelines for Hosters Who Want to Enhance Their Cloud Offerings with Cloud Storage www.parallels.com Table of Contents Overview... 3 Understanding the Storage

More information

How To Design A Private Cloud Powered By Openstack

How To Design A Private Cloud Powered By Openstack Rackspace Private Cloud Powered By OpenStack: The Customer Experience Author: Christian Foster Director, Rackspace Private Cloud Rackspace Private Cloud Powered By OpenStack : The Customer Experience Cover

More information

Designing a Cloud Storage System

Designing a Cloud Storage System Designing a Cloud Storage System End to End Cloud Storage When designing a cloud storage system, there is value in decoupling the system s archival capacity (its ability to persistently store large volumes

More information

IaaS Cloud Architectures: Virtualized Data Centers to Federated Cloud Infrastructures

IaaS Cloud Architectures: Virtualized Data Centers to Federated Cloud Infrastructures IaaS Cloud Architectures: Virtualized Data Centers to Federated Cloud Infrastructures Dr. Sanjay P. Ahuja, Ph.D. 2010-14 FIS Distinguished Professor of Computer Science School of Computing, UNF Introduction

More information

Eucalyptus 3.4.2 User Console Guide

Eucalyptus 3.4.2 User Console Guide Eucalyptus 3.4.2 User Console Guide 2014-02-23 Eucalyptus Systems Eucalyptus Contents 2 Contents User Console Overview...4 Install the Eucalyptus User Console...5 Install on Centos / RHEL 6.3...5 Configure

More information