Self-Service Cloud Infrastructure For Dynamic IT Environments
The All-Flash Array Built for the Next Generation Data Center

A Validated Reference Design developed for an OpenStack Cloud Infrastructure using SolidFire's all-flash block storage system, Dell compute and networking, and Red Hat Enterprise Linux OpenStack Platform
Table of Contents

- Intro & Exec Summary
- Reference Architecture Scope
  - Audience
- How We Got Here
  - Validated Benefits
- The AI Advantage
  - AI for OpenStack
- Configuration Detail
- Workload/Use Case Detail
- Solution Overview
- Design Components
  - Compute / Storage Infrastructure
  - Network Infrastructure
  - Operations Environment
  - OpenStack Services
- Network Architecture
- Hardware Deployment
  - Network Switch Configuration
  - Preparing OpenStack Nodes For Automated Bare-Metal Provisioning
  - SolidFire Cluster Configuration
- OpenStack Configuration and Deployment via Foreman
  - Installing and configuring the Foreman server
  - Install Foreman Packages
  - Configuring Foreman Server and installing Foreman
  - Building an OpenStack Cloud with Foreman
- Solution Validation
  - Deployment
  - Integration/Interoperability
  - Operational Efficiency
  - Quality of Service
  - Storage Efficiency
- Conclusion
- Appendix
  - Appendix A: Bill of Materials
  - Appendix B: Support Details
  - Appendix C: How to Buy
  - Appendix D: Data Protection Considerations
  - Appendix E: Scripts & Tools
  - Appendix F: Agile Infrastructure Network Architecture Details
  - Appendix G: Agile Infrastructure Node Network Interface Configuration
  - Appendix H: Rack Configuration
Intro & Exec Summary

The agility, efficiency and scale benefits demonstrated by the move to cloud computing infrastructure have raised the bar on expectations for IT service delivery globally. This paradigm shift has established a new benchmark for delivering IT services that is as much about self-service and automation as it is about cost. To stay ahead in the face of these heightened service delivery standards, CIOs are searching for more innovative ways to deliver infrastructure resources, applications and IT services in a more agile, scalable, automated and predictable manner. Helping make this vision a reality is the promise of the Next Generation Data Center.

Closing the service delivery gap that exists between IT today and the Next Generation Data Center will not be easily accomplished using tools and technologies designed for legacy infrastructure. The challenges IT departments face today (see Figure 1) present an entirely different set of problems than those addressed by legacy vendors. The innovation currently occurring up and down the infrastructure stack, from vendors across the ecosystem, is a direct reflection of this trend.

Figure 1: The Challenges of The Next Generation Data Center

Properly harnessing these innovations into an easily consumable solution is not without its own challenges. With the many components that make up a cloud infrastructure, the ability to successfully design and deploy a functional cloud environment is often impaired by issues encountered at various stages of implementation, including setup, configuration and deployment. Introducing powerful, yet complex, tools like OpenStack into the equation can make the task even more daunting.

To help accelerate an enterprise's ability to embrace these innovations, SolidFire has aligned with leading technology partners to deliver a pre-validated design for customers looking to deploy a self-service OpenStack cloud infrastructure. The SolidFire Agile Infrastructure (AI) reference architecture for OpenStack, as shown in Figure 2, is the result of an extensive configuration design, testing and validation effort.
Figure 2: Agile Infrastructure for OpenStack

With SolidFire AI, customers can stand up a dynamic self-service cloud infrastructure in significantly less time, less space and for less money than alternative converged infrastructure offerings. AI allows enterprises to experience the benefits of Next Generation Data Center design today without creating unnecessary hardware or software lock-in. By combining best-of-breed tools and technologies into a modular, pre-validated design, AI drastically reduces the complexity of adopting OpenStack, while increasing the agility, predictability, automation and scalability of an enterprise IT infrastructure.

Relative to alternative approaches, unique attributes of the AI solution for OpenStack include:

- Best-of-Breed Design: AI has been built using best-of-breed technology across all layers of the stack to deliver a fully featured cloud infrastructure solution
- True Scale-Out: Scale-out design across compute, network and storage allows for a more flexible and cost-effective design as infrastructure expands
- No Lock-In: Modularity at each layer of the stack eliminates the threat of hardware or software lock-in
- Accelerated OpenStack Time to Value: A pre-validated solution drastically reduces the complexity of adopting OpenStack infrastructure in a Next Generation Data Center design
- Guaranteed Performance: With SolidFire's unique QoS controls, AI can easily accommodate mixed workload environments without compromising performance to any one application

The configuration used to achieve these benefits, including SolidFire's leading all-flash storage system, is described throughout the remainder of this document.
Reference Architecture Scope

This document is intended to be used as a design and implementation guide to assist enterprise IT administrators and managers in deploying a fully functional OpenStack cloud infrastructure. The reference architecture included in this document extends up to, but does not include, the service layer above the infrastructure. Specifically, the technologies outlined in this document encompass cloud management software, configuration tools, compute, networking and block storage. Services such as load balancing, firewalls, VPN and core network are outside the scope of this document.

From a sizing perspective, the Agile Infrastructure design outlines a baseline configuration from which users can expect to accommodate a certain size and scale of environment. Throughout this document there are tips to consider when evaluating variation and scale considerations that deviate from the initial configuration. For additional assistance with specific configuration or sizing details that fall outside the scope of this document, please contact SolidFire Support at [email protected] or visit

Audience

This document is intended for IT infrastructure administrators (server, virtualization, network and storage) and IT managers who have been tasked with designing enterprise-class cloud infrastructure. The detail covered in this document encompasses the necessary software and hardware components, as well as key operations and integration considerations.
How We Got Here

The SolidFire AI design was architected with a focus on modularity, flexibility and scalability. The design validation and testing performed against this infrastructure was tailored specifically to highlight the enhanced operational experience customers can expect from deploying this reference architecture in dynamic IT-as-a-Service style offerings such as Test & Development or Private Cloud.

Validated Benefits

Following a comprehensive validation process, SolidFire AI has proven to deliver a scalable OpenStack cloud infrastructure design with significantly less time, complexity and footprint than could be achieved with alternative converged infrastructure solutions. Figure 3 below outlines these specific benefits in more detail.

Figure 3: The Operational Benefits of SolidFire AI
SolidFire Agile Infrastructure (AI)

SolidFire Agile Infrastructure (AI) is a series of pre-validated reference architectures that are thoroughly tested and validated by SolidFire. Built with SolidFire's all-flash storage system at the foundation, each AI design also includes leading compute, networking and orchestration technologies that dramatically reduce the cost and complexity of deploying a cloud infrastructure for enterprise-class data centers. Each AI reference architecture is constructed with a focus on modularity. Individual components can be scaled independently of each other depending on use case as well as the density, size and scale priorities of the environment.

The AI Advantage

SolidFire AI combines industry-leading technologies into a more easily consumable reference architecture that has been tested and validated by SolidFire. The AI validated design provides the reference architecture, bill of materials, configuration details, and implementation guidelines to help accelerate your IT transformation. AI is intended to help accelerate time-to-value for operators and administrators deploying a functional cloud infrastructure. Leveraging AI, IT departments can confidently design a cloud infrastructure to help them achieve greater infrastructure agility, automation, predictability and scale.

AI for OpenStack

SolidFire AI for OpenStack is a pre-validated solution built specifically for enterprise-class environments looking to accelerate the deployment of a functional OpenStack cloud. While the flexibility of AI allows it to easily accommodate mixed workload environments, the specific OpenStack use case covered in this document focuses on building a self-service cloud infrastructure for dynamic IT-as-a-Service style offerings such as Test & Development or Private Cloud.
Configuration Detail

For this specific use case, we targeted a mid- to large-size OpenStack cloud infrastructure configuration. The specific hardware configuration (see Figure 4) was designed to accommodate up to 70 vCPUs per compute node. Across the 15-compute-node deployment used in this reference architecture, and assuming a conservative oversubscription rate of 1.5, the total vCPU count aggregates to 980. Assuming that VM provisioning in this environment adheres to the same vCPU oversubscription rate, this would translate to at least 1000 VMs within this footprint. These metrics can vary considerably depending on instance sizing and resource requirements.

Figure 4: AI Rack Configuration
Workload/Use Case Detail

To help infrastructure administrators better comprehend the usable capacity of the architecture, Figure 5 defines some sample enterprise use cases. While these examples single out specific workloads, it is important to understand that SolidFire's unique storage quality-of-service controls allow administrators to confidently mix and match almost any workload types within the shared infrastructure while still ensuring predictable performance for each individual application. This QoS mechanism affords administrators the flexibility to run many of one workload (an entire architecture dedicated to serving OLTP workloads, for example) or run any combination of block storage workloads without compromising performance.

Figure 5: Reference Workloads For AI

- Software Development Lifecycle (e.g. Test/Dev, QA Staging, Production): Creating logical segmentation between development tiers as well as QoS-enabled performance isolation between tiers
- Large Scale Web Applications (e.g. 3-Tier LAMP stack): Running presentation, middleware and database systems on the same system, without causing resource contention issues
- Persistent Block Storage / On-Premise version of Amazon Elastic Block Storage (EBS) or Provisioned IOPS (PIOPS): Leading support for OpenStack Cinder, allowing for easy deployment of persistent storage via OpenStack APIs
- Database Consolidation / Database-as-a-Service (DBaaS): Ability to run multiple database types and workloads on the same platform and eliminate storage resource contention. Database types supported include both SQL (Microsoft SQL Server, MySQL) and NoSQL (MongoDB)
- IT Consolidation: Moving from multiple, siloed, fixed-use platforms to a single, consolidated, virtualized platform allows infrastructure teams to scale better, operate more efficiently and eliminate unnecessary cost and complexity

Since workload types have varying characteristics, we have included a baseline virtual machine workload size in Figure 6. This reference workload does not represent any individual application, but is designed to be a point of reference when comparing the resource demands of different application workloads.
Figure 6: Reference Virtual Machine Specification

- Operating System: Red Hat Enterprise Linux 6
- Storage Capacity: 100GB
- IOPS: 25
- I/O Pattern: Random I/O
- Read/Write Ratio: 2:1
- vCPU: 1
- vRAM: 2GB

Using this reference VM size, Figure 7 shows the total workload capacities for each of the different system resources. Regardless of which resource you evaluate, each runs into available capacity constraints well before the storage performance does. Available vCPU in this base configuration could support up to 540 of the reference VMs, while storage capacity (GBs) and vRAM could support up to 614 and 2400 reference VMs respectively. These metrics pale in comparison to the 10,000 reference VMs that could be supported by the available IOPS in the configuration.

Figure 7: AI Per Resource Capacity Threshold (15 Compute Nodes)
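To make the shape of this comparison concrete, the back-of-the-envelope arithmetic can be reproduced from the raw resource totals in the configuration tables above. This is only a rough sketch: it ignores the host-side overhead and reserves that the published Figure 7 thresholds account for, so the exact VM counts will differ somewhat from the figures above, but the ordering of the constraints is the same.

# Naive per-resource thresholds using the reference VM profile
# (1 vCPU, 2GB vRAM, 100GB capacity, 25 IOPS), 15 compute nodes,
# and the 5-node SF3010 cluster totals.
echo "vCPU:    $(( 15 * 70 / 1 )) reference VMs"      # 70 vCPUs per compute node
echo "vRAM:    $(( 15 * 256 / 2 )) reference VMs"     # 256GB RAM per compute node
echo "Storage: $(( 60000 / 100 )) reference VMs"      # 60TB effective capacity
echo "IOPS:    $(( 250000 / 25 )) reference VMs"      # 250,000 random IOPS
# Compute and capacity limits are reached long before the IOPS limit,
# which is the headroom argument made in the surrounding text.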
The point of this exercise is to convey the amount of headroom in storage performance left in the system when the other resources are fully utilized. Using our reference VM profile from Figure 6, the utilization of the storage performance when the other resources are fully depleted is no more than 25%. Upon adding additional compute resources, either within or beyond the single-rack configuration, the excess storage performance (IOPS) can easily be used to host additional enterprise application load such as databases, VDI or virtual server infrastructure. SolidFire's ability to guarantee performance per volume allows additional applications to be hosted in parallel from the shared storage infrastructure without creating performance variability from IOPS contention.

Solution Overview

The validated design reviewed in this document was built using the Red Hat Enterprise Linux OpenStack Platform and related OpenStack configuration services, including Foreman. The hardware components included in the design are Dell compute and networking, and SolidFire's all-flash block storage system. A brief overview of each of the vendors and communities behind these technologies is included below.

OPENSTACK
Launched in 2010, OpenStack is open source software for building clouds. Created to drive industry standards, accelerate cloud adoption, and put an end to cloud lock-in, OpenStack is a common, open platform for both public and private clouds. The open source cloud operating system enables businesses to manage compute, storage and networking resources via a self-service portal and APIs on standard hardware at massive scale.

RED HAT
Red Hat is the world's leading provider of open source software solutions, taking a community-powered approach to reliable and high-performing cloud, Linux, middleware, storage and virtualization technologies. Red Hat also offers award-winning support, training, and consulting services. As the connective hub in a global network of enterprises, partners, and open source communities, Red Hat helps create relevant, innovative technologies that liberate resources for growth and prepare customers for the future of IT.

DELL
Dell is a leading provider of compute, networking, storage and software solutions for enterprise IT infrastructure. As one of the earliest vendor contributors to the OpenStack project, Dell has developed significant expertise within the OpenStack ecosystem across their hardware and software portfolio.

SOLIDFIRE
Built for the Next Generation Data Center, SolidFire is the leading block storage solution in OpenStack. For enterprise IT environments that require scalable, highly available, predictable and automated block storage, there is no better option in OpenStack. SolidFire has knowledge, integrations, partnerships and proof points within the OpenStack ecosystem unlike any other storage vendor in the market.
Design Components

The design and implementation for each OpenStack cloud deployment will vary depending on your specific needs and requirements. This reference architecture includes the components to deploy a robust, operational OpenStack infrastructure for a medium- to large-scale environment. A logical view of the AI design can be seen in Figure 8.

Figure 8: AI Design Components

The components used in this reference architecture were chosen to reduce the complexity of the initial deployment while easily accommodating future scale requirements. The specific configuration details of each infrastructure component are outlined below. While not documented in this reference architecture, a user may choose to add or change services or components, such as high-availability, advanced networking or higher density SolidFire storage options, to meet changing requirements. Outlined below are the specific components used for each layer of the AI design:
Compute / Storage Infrastructure

Provisioning / Configuration Management (Foreman): (1) Dell PowerEdge R620
- CPU: 2 x Intel Xeon CPU E GHz, 12 core
- RAM: 256GB
- Network: 2 x 1GbE, 2 x 10GbE
- RAID: Integrated
- Drives: 2 x 250GB

OpenStack Cloud Controller: (2) Dell PowerEdge R620
- CPU: 2 x Intel Xeon CPU E GHz, 12 core
- RAM: 256GB
- Network: 2 x 1GbE, 2 x 10GbE
- RAID: Integrated
- Drives: 2 x 250GB

OpenStack Cloud Compute: (15) Dell PowerEdge R620
- CPU: 2 x Intel Xeon CPU E GHz, 12 core
- RAM: 256GB
- Network: 2 x 1GbE, 2 x 10GbE
- RAID: Integrated
- Drives: 2 x 250GB

Shared Storage: (5) SolidFire SF3010
- Single cluster of 5 SF3010 nodes
- 250,000 random IOPS
- 60TB effective capacity

Network Infrastructure

Admin Connectivity / OpenStack Provisioning / SolidFire Management: (2) Dell s55 Switches
- 44 10/100/1000Base-T ports
- 4 GbE SFP ports
- Reverse airflow

General OpenStack Infrastructure Connectivity / SolidFire Storage Access: (2) Dell s4810 Switches
- 48 line-rate 10 Gigabit Ethernet SFP+ ports
- 4 line-rate 40 Gigabit Ethernet QSFP+ ports
- Reverse airflow
Operations Environment

- Host Operating System (Red Hat): Red Hat Enterprise Linux OpenStack Platform 4.0
- Configuration Management & Provisioning Server (Red Hat): Red Hat Enterprise Linux OpenStack Platform's Foreman tool is responsible for provisioning OpenStack systems and performing ongoing configuration management
- Storage Operating System (SolidFire): SolidFire Element OS

OpenStack Services

- Authentication/Identity (Keystone): The OpenStack Identity service (Keystone) provides a central directory of users mapped to the OpenStack services they can access. It acts as a common authentication system across the cloud operating system and can integrate with existing backend directory services like LDAP.
- Dashboard / UI (Horizon): The OpenStack dashboard (Horizon) provides administrators and users a graphical interface to access, provision and automate cloud-based resources. The extensible design makes it easy to plug in and expose third-party products and services, such as billing, monitoring and additional management tools.
- Block Storage (Cinder): The OpenStack Block Storage service (Cinder) provides persistent block-level storage devices for use with OpenStack compute instances. The block storage system manages the creation, attaching and detaching of block devices to servers.
- Network (Nova-Network): Core network functionality is provided by the Nova-Network service, allowing for anything from very simple network deployments to more complex, secure multi-tenant networks.
- Virtual Machine Images (Glance): The OpenStack Image service (Glance) provides discovery, registration and delivery services for disk and virtual machine images. Stored images can be used as templates to get new servers up and running quickly, allowing OpenStack users and administrators to provision multiple servers quickly and consistently.
- Compute (Nova): OpenStack Compute (Nova) provisions and manages large networks of virtual machines. It is the backbone of OpenStack's Infrastructure-as-a-Service (IaaS) functionality.
Network Architecture

SolidFire's Agile Infrastructure is designed to allow easy integration into your existing enterprise data center environment, while retaining the flexibility to address the ever-changing needs of today's data center. Leveraging best-of-breed top-of-rack (TOR) switching, and single rack-unit server and storage hardware, the architecture provides a cost-effective path to incrementally scale compute, network, and storage as your needs dictate. The density of the chosen hardware configuration allows a complete solution stack, including compute, networking and storage, to be contained within a single rack. Scaling the solution beyond a single rack is easily done by replicating the entire reference architecture design as a single building block. The network architecture is designed to provide full physical redundancy to maximize uptime and availability of all physical infrastructure components (see Figure 9).

Figure 9: Physical Topology Overview
Data Center Integration

The SolidFire AI design, as documented here, provides connectivity only at Layer 2 (L2); no Layer 3 (L3) routing exists within this design. L2 and L3 connectivity is separated at the data center network aggregation layer. Integration into the existing data center environment is achieved by establishing L2 uplinks to the aggregation layer. As additional racks are added to scale the solution, inter-rack connectivity is maintained strictly at the L2 domain boundary. L3 connectivity between existing enterprise users and/or systems, and the applications and services provided by SolidFire's Agile Infrastructure, is provided upstream by the data center aggregation or core network infrastructure. For more specific network architecture and configuration details, please refer to Appendix F.

Network Topology

The network topology consists of five separate networks to segregate various types of traffic in order to provide security, as well as to maximize performance and stability. Figure 10 depicts the network architecture used in SolidFire's Agile Infrastructure design.

Figure 10: Network Topology
It is necessary to point out that the network subnets referenced throughout this document were sized according to the reference architecture environment. It is important to properly size your networks according to your specific requirements and needs, particularly when considering future scale-out expectations. For more detail on the purpose of each network defined above in Figure 10, refer to Appendix F.

Hardware Deployment

After the installation and cabling of the infrastructure (see Appendix F for more detail), the setup of the environment described in this reference architecture is comprised of the following steps:

1. Network Switch Configuration
2. Prepare the OpenStack nodes for Automated Bare-Metal Provisioning
3. SolidFire Cluster Configuration

Network Switch Configuration

The following steps outline the key configuration settings required to set up the network infrastructure according to the design outlined in this reference architecture.

Key Considerations

Ensure that all OpenStack and SolidFire node 10G switch ports are configured to meet the following requirements:

OpenStack Node Ports
- 802.1Q VLAN tagging enabled
- MTU 9212 enabled
- Spanning-tree edge-port (portfast) enabled
- Member of the OpenStack Service, Public/DC, Private, and Storage networks

SolidFire Node Ports
- MTU 9212 enabled
- Spanning-tree edge-port (portfast) enabled
- Member of the Storage VLAN/network

Switch Configuration Steps

1. Define VLAN IDs for each of the networks defined in this reference architecture. (Refer to the Network Topology section above.)
2. Configure the s55 TOR switches in stacking mode. These are the Admin/Provisioning switches.
3. On the s55 TOR switch, configure VLANs and VLAN port membership. This will be based on the specific physical port allocations determined at the time of the physical cabling of each system. The only VLAN required on this switch should be the Administrative VLAN.
4. Configure the s4810 TOR switches in VLT mode.
5. On each s4810 switch, ensure that all physical port, VLAN, and port-channel interfaces have settings according to the requirements listed above.
6. On each s4810 switch, configure VLANs and VLAN port membership according to the network topology defined in the Network Topology section of this document. This will be based on the specific physical port allocations determined at the time of the physical cabling of each system. For OpenStack node ports, ensure that VLAN tagging is properly configured for the required VLANs (a sample port configuration sketch is provided at the end of this section). For more detail refer to Appendix G: OpenStack Node Network Interface Configuration.
7. On each s4810 switch, set up and configure the SolidFire node switch ports for LACP bonding. Sample configuration templates are as follows:

a. Create a port channel for each individual node:

interface Port-channel 31
 description SF-OS-1
 no ip address
 mtu 9212
 switchport
 vlt-peer-lag port-channel 31
 no shutdown

b. Assign the switch ports for a particular node to their respective port channel interface:

s4810 Switch A:

interface TenGigabitEthernet 0/1
 description SF-OS-1:Port 1
 no ip address
 mtu 9212
 flowcontrol rx on tx off
!
 port-channel-protocol LACP
  port-channel 31 mode active
 no shutdown

s4810 Switch B:

interface TenGigabitEthernet 0/1
 description SF-OS-1:Port 2
 no ip address
 mtu 9212
 flowcontrol rx on tx off
!
 port-channel-protocol LACP
  port-channel 31 mode active
 no shutdown

At this point the network should be ready to move on to the next step. If you are deploying this configuration to ultimately connect to the data center aggregation / core infrastructure, ensure that all VLANs are tagged on all uplink interfaces as required.
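For reference, the following is a minimal sketch of what an OpenStack node port might look like on the s4810 switches under the requirements listed above. The interface number, VLAN IDs and the exact FTOS keywords are assumptions for illustration and should be adapted to your own port map and switch software release; they are not a verified template from this validation effort.

! Hypothetical OpenStack node port (tagged member of multiple VLANs)
interface TenGigabitEthernet 0/5
 description OS-COMPUTE-1:Port 1
 no ip address
 mtu 9212
 portmode hybrid
 switchport
 spanning-tree rstp edge-port
 no shutdown
!
! Tag the port in each required OpenStack VLAN (IDs are placeholders)
interface Vlan 20
 description OpenStack-Service
 tagged TenGigabitEthernet 0/5
!
interface Vlan 30
 description Storage
 tagged TenGigabitEthernet 0/5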
Preparing OpenStack Nodes For Automated Bare-Metal Provisioning

SolidFire's Agile Infrastructure facilitates quick deployment of an OpenStack infrastructure by utilizing automated Bare-Metal Provisioning (BMP). By utilizing automated BMP, OpenStack nodes can be deployed and redeployed at the click of a button, taking a physical system from no configuration or operating system to a fully operational OpenStack node in a matter of minutes. Prior to deployment into the OpenStack environment via automated BMP, each physical system destined to be an OpenStack node needs to have certain features enabled in the System Setup configuration in order to support automated provisioning.

Note: This process simply enables the system to support the automated BMP process. Before actual provisioning of a node can be started, further configuration is necessary to register the system in the Foreman provisioning server and define system profiles. See Appendix G: Bare Metal Provisioning With Foreman to complete this process.

System Setup

Note: The following steps are based on Dell System BIOS version 2.1.3 and the corresponding iDRAC version.

1. Establish a console connection by connecting a keyboard and monitor to the system.
2. Power on the system. When the options appear, select <F2> to enter System Setup.
3. From the System Setup screen, select Device Settings. The Device Settings screen will be displayed:
From here, proceed to modify the NIC configurations for each NIC as described in the steps below.

4. For each Integrated NIC Port, configure settings as directed below. Refer to the following example configuration screens for visual reference:

Example: Main Configuration Page
Example: NIC Configuration Page

a. For Integrated NIC 1 Ports 1, 2, and 4, do the following:
   i. Select NIC Configuration
   ii. Set Legacy Boot Protocol to None
   iii. Select the Back button, or press ESC, to return to the Main Configuration Page. Select Finish to save changes for the NIC.

b. For Integrated NIC 1 Port 3, do the following:
   i. Select NIC Configuration
   ii. Set Legacy Boot Protocol to PXE
   iii. Select the Back button, or press ESC, to return to the Main Configuration Page. Select Finish. Select Yes when prompted to save changes.

5. Once all NIC configuration changes have been completed, select the Finish button or press ESC to exit the Device Settings page and return to the System Setup page. Then, from the System Setup page, select the Finish button to exit and reboot.

6. During the system reboot, press <F2> to enter System Setup again.
7. Select System BIOS, then Boot Settings, to enter the Boot Settings page.
8. Within the Boot Settings page, select BIOS Boot Settings, then on the next page, select Boot Sequence.
9. Modify the boot order such that Integrated NIC 1 Port 3 ... is first in the boot order, above Hard drive C:, then select the OK button to return to the Boot Settings page.

10. From the Boot Settings page, press the Back button to return to the System BIOS Settings page. Select the Finish button, and select Yes when prompted to save changes. You'll be returned to the System Setup Main Menu page.

11. From the System Setup page, select the Finish button to exit and reboot.

The system is now enabled for automated BMP deployment.
SolidFire Cluster Configuration

Each SolidFire storage node is delivered as a self-contained storage appliance. These individual nodes are then connected over a 10Gb Ethernet network in clusters ranging from 5 to 100 nodes. Scaling performance and capacity with a SolidFire system is as simple as adding new nodes to the cluster as demand dictates. The Agile Infrastructure design in this document consists of 5 x SF3010 SolidFire storage nodes (see Figure 11), yielding 250,000 IOPS and 60TB of effective capacity. A 100-node cluster scales to 7.5M IOPS and 3.4PB, with the ability to host more than 100,000 volumes from within a single management domain.

Figure 11: SF3010 Node Front/Rear View

IP Address Assignment and Cluster Settings

Prior to configuring the SolidFire cluster, it is important to first determine the following for each SolidFire node:

- Hostname
- Management IP
- Storage IP

In addition, it is necessary to define the SolidFire cluster name, management virtual IP address (MVIP), and storage virtual IP address (SVIP).
A dedicated management network IP and a dedicated storage network IP are assigned to each SolidFire cluster node. In our configuration, the five nodes are named SF-OS-1 through SF-OS-5. Each node is assigned a Management IP address (MIP) with its mask and gateway, and a Storage IP address (SIP) with its mask and no gateway. The cluster is named OSCI-SolidFire and is assigned a cluster MVIP and a cluster SVIP.

SolidFire Node Configuration

The Terminal User Interface (TUI) is used to initially configure nodes, assign IP addresses and prepare the node for cluster membership. While the TUI can be used to configure all the required settings, here we initially use the TUI only to configure the Management IP, then proceed to configure the remaining settings via the per-node web UI.

Initial Management IP Configuration

With a keyboard and monitor attached to the node and the node powered on, the TUI will display on the tty1 terminal. See Figure 12.

Note: DHCP-generated IP addresses or self-assigned IP addresses may be available if your network supports it. If they are available, you can use these IP addresses to access a new node in the Element UI or from the API to proceed with the remaining network configuration. All configurable TUI fields described in this section will apply when using the per-node UI to configure the node. When using this method of accessing the node, use the following format to directly access the node:
To manually configure network settings for a node with the TUI, do the following:

1. Attach a keyboard and monitor to the video console ports.
2. The Terminal User Interface (TUI) will initially display the Network Settings tab with the 1G and 10G network fields.

Figure 12: Example Terminal User Interface - Network Settings

3. Use the on-screen navigation to configure the 1G (MIP) network settings for the node. Optionally, note the DHCP IP address, which can be used to finish the initial configuration.
4. Press the 's' key on the keyboard to save the settings.

At this point, the node will have a Management IP address which can be used to configure the remaining node settings and prepare the node to be added to the SolidFire cluster.
Finalizing Node Configuration

1. Using the Management IP (MIP) of the SolidFire node, enter the management URL for the node in a web browser. The Network Settings tab is displayed automatically and opened to the Network Settings Bond1G page.

Figure 13: SolidFire Node Bond1G settings
2. Click Bond10G to display the 10G network settings.

Figure 14: SolidFire Node Bond10G settings

3. Enter the Storage IP (SIP) address for the node. The Gateway Address field will stay blank, as there is no gateway for the storage network.
4. Click Save Changes to have the settings take effect.
5. Click the Cluster Settings tab. The Cluster Settings page appears.

Figure 15: SolidFire Node Cluster Settings

6. Enter the cluster name in the Cluster field. Each node must have the same cluster name in order to be eligible for cluster membership.
7. Click Save Changes to have the settings take effect.

After this process is completed for all nodes, you are ready to create a SolidFire cluster.
SolidFire Cluster Creation

Creating a new cluster initializes a node as the communications owner for the cluster and establishes network communications for each node in the cluster. This process is performed only once for each new cluster. You can create a cluster by using the SolidFire user interface (UI) or the API.

To create a new cluster, do the following:

1. Using the Management IP (MIP) of the first SolidFire node, SF-OS-1, enter the following URL in a web browser. The Create a New Cluster window displays.

Figure 16: Create New Cluster dialog

2. Enter the Management VIP (MVIP) address and an iSCSI (Storage) VIP (SVIP) address. Note: The MVIP and SVIP cannot be changed after the cluster is created.
3. Create the Cluster Admin user account. The Cluster Admin will have permissions to manage all cluster attributes and can create other cluster administrator accounts. Note that this Cluster Admin username and password is also used by OpenStack to provision volumes on the SolidFire cluster.
   a. Enter a username to be used for authentication in the Create User Name field. The username can contain upper and lower case letters, special characters and numbers.
   b. Enter a password for future authentication in Create Password.
   c. Re-enter (confirm) the password in Repeat Password.
4. Nodes must have the same version of Element software installed as is currently installed on the node selected to create the cluster. If not, the node will show as incompatible and cannot be added to the cluster. Ensure all nodes are on the same Element OS version.
5. Select Create Cluster. The system may take several minutes to create the cluster depending on the number of nodes being added. A small cluster of five nodes, on a properly configured network, should take less than one minute.

After the cluster has been created, the Create a New Cluster window will be redirected to the MVIP URL address for the cluster, and you will be required to log in using the credentials defined in the steps above.
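For environments that prefer to script this step, the same cluster creation can in principle be driven through the per-node JSON-RPC API rather than the web UI. The sketch below is an assumption-laden illustration only: the endpoint path, API version, CreateCluster method and parameter names follow SolidFire's Element API conventions and should be checked against the API reference for the Element OS release actually deployed; all addresses and credentials shown are placeholders.

# Hypothetical example: create the cluster via the per-node API on SF-OS-1
curl -k -X POST https://<SF-OS-1-MIP>/json-rpc/6.0 \
  -H "Content-Type: application/json" \
  -d '{
        "method": "CreateCluster",
        "params": {
          "mvip": "<cluster-MVIP>",
          "svip": "<cluster-SVIP>",
          "username": "<cluster-admin-user>",
          "password": "<cluster-admin-password>",
          "nodes": ["<SF-OS-1-MIP>", "<SF-OS-2-MIP>", "<SF-OS-3-MIP>",
                    "<SF-OS-4-MIP>", "<SF-OS-5-MIP>"]
        },
        "id": 1
      }'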
Adding Drives to the Cluster

Drives can be added to the cluster after it has been created. Node and drive information is gathered when a new cluster is created and is displayed when the cluster is ready for use. A progress bar will display the progress of the drive search, and you can choose to add the drives now or at a later time.

Figure 17: Cluster Ready Window

Once drives have been added, the cluster is available for use. Refer to the SolidFire Summary Report to view the total space available on the cluster for provisioning (see Figure 18).

Figure 18: SolidFire Cluster summary screen
OpenStack Configuration and Deployment via Foreman

The Foreman provisioning server from Red Hat provides all the necessary network services, such as DHCP, PXE and TFTP, as well as any required operating system images, to support automated deployment of OpenStack nodes.

Note: The Admin network/VLAN is used for all bare-metal provisioning, as well as ongoing configuration management after deployment. Since the provisioning server provides DHCP, DNS, PXE, and TFTP services locally on the Admin network, these services do not need to be configured elsewhere in your data center environment in order for automated provisioning and deployment to work.

After preparing the bare-metal systems for the BMP process (as outlined in the Preparing OpenStack Nodes For Automated Bare-Metal Provisioning section), the next step is the provisioning and deployment of OpenStack. For this reference architecture we deployed Red Hat Enterprise Linux 6.5 on the bare-metal nodes and subscribed them to the following channels:

- Red Hat Enterprise Linux Server 6.5
- Red Hat Enterprise OpenStack 4.0

A user can opt for different operating system deployment methods if they desire. The requirement here is simply that you have Red Hat Enterprise Linux 6.5 installed with the appropriate subscriptions.

Installing and configuring the Foreman server

Preview of Foreman configuration and deployment:

- Networking topology requirements
- Local media via HTTP on the Foreman server
- Customized PXE template
- Customized Kickstart template
- Customization of Host Group parameters

There are two different modes in which Foreman can be configured to run. The first is provisioning mode, and the alternative is non-provisioning mode. Provisioning mode adds the capability to use Foreman to do bare-metal deployment of your OS. This configuration mode sets the Foreman server to act as a PXE, TFTP, DNS and DHCP server, and allows you to configure a boot and kickstart image to perform custom OS installs, all from the Foreman server. Provisioning consists of the following general steps:

1. Power on the node.
2. The node acquires its DHCP address configuration from the Foreman server, including PXE boot information.
3. The node retrieves a custom PXE configuration file from the Foreman server. This file contains instructions on which image to load for this particular node.
4. The node retrieves the custom Kickstart image and initial configuration settings from the Foreman server.
5. A reboot is issued to the node.
6. Provisioning and deployment are complete.
The provisioning process and required network topology are shown in Figure 19.

Figure 19: Bare-Metal provisioning process overview

Install Foreman Packages

Configure the firewall settings on the Foreman server, either using lokkit or by disabling the firewall altogether if your environment permits. Run the set of commands associated with your configuration as root:

Method 1 - Using lokkit

# lokkit --service http
# lokkit --service https
# lokkit --service dns
# lokkit --service tftp
# lokkit --port 8140:tcp

Method 2 - Disabling the local server firewall altogether

# service iptables save
# service iptables stop
# chkconfig iptables off
Install the Foreman packages as root:

# yum install -y openstack-foreman-installer foreman-selinux

Configuring Foreman Server and installing Foreman

To enable provisioning, as root, edit /usr/share/openstack-foreman-installer/bin/foreman_server.sh; uncomment the variables and add values as follows:

FOREMAN_GATEWAY=<gateway IP on the Admin network>
FOREMAN_PROVISIONING=true

Configuration Best Practices

By default, the Host Groups that we will configure in the next few sections set their default values via a default Ruby file. You have a choice here: modify the default settings in the Foreman seed files using a text editor of your choice, or override the parameters via the Foreman Web UI during Host Group configuration. For our deployment we chose to modify the seed files to limit the need for modifications later. The advantage of modifying the seed file is that your default Host Groups will require less modification via the web interface, and as a result you will be less prone to making mistakes in your customization. In our case we're using our Foreman server fairly heavily to deploy multiple OpenStack clouds in various topologies and configurations.

The one parameter of interest in our case was the password settings. By default, Foreman will generate a random hex string for passwords. This may be fine, but in our case we wanted to simplify things a bit and set all of our passwords to our own custom value to make access and maintenance easier. You can edit /usr/share/openstack-foreman-installer/bin/seeds.rb and set things like passwords. You can also pre-configure things such as IP ranges and specific service settings. Don't worry if you make a mistake; you can still edit this via the Web UI later. In our case we're just going to modify the passwords to make things easier to remember when we go to use our cloud. Changing the values in the seeds file is convenient because you can use an editor like vim to easily search and replace items, and it eliminates a chance of error when trying to find all of the passwords or IP entries in the Foreman web interface.

Save a copy of the original seeds file:

# cp /usr/share/openstack-foreman-installer/bin/seeds.rb seeds.orig.rb

Open the seeds file with the vi editor:

# vi /usr/share/openstack-foreman-installer/bin/seeds.rb

Replace the password and Controller private/public IP settings (vim substitutions):

:%s/SecureRandom.hex/"mypassword"/g
:%s/<default private IP>/<your Controller private IP>/g
:%s/<default public IP>/<your Controller public IP>/g
Run the openstack-foreman-installer:

# cd /usr/share/openstack-foreman-installer/bin/
# sh ./foreman_server.sh

Upon successful installation you should see messages and instructions similar to the following:

Foreman is installed and almost ready for setting up your OpenStack.
You'll find Foreman at the URL shown. The user name is admin and the default password is changeme.
Please change the password. Then you need to alter a few parameters in Foreman.
From this list, click on each class that you plan to use.
Go to the Smart Class Parameters tab and work through each of the parameters in the left-hand column.
Then copy /tmp/foreman_client.sh to your OpenStack client nodes.
Run that script and visit the HOSTS tab in Foreman.
Pick some host groups for your nodes based on the configuration you prefer.

Login and change the password

1. Point your web browser to the newly deployed Foreman server.
2. The login screen is displayed. Type admin in the Username field and changeme in the Password field. Click the Login button to log in.

Figure 20: Foreman Login Screen
3. The Overview screen is displayed. Select the Admin User > My account option in the top right-hand corner of the screen to access account settings.

Figure 21: Account Settings Navigation

4. In the resulting Edit User screen, enter your new password as prompted.
5. Click Submit to save changes.

Configuring Foreman to perform provisioning

By default, most of what is needed for provisioning is already built and configured for you. It is necessary, however, to go through the provisioning items and make some adjustments. The first step is to set up a local media distribution. You can set provisioning to use either an FTP or HTTP server as a media location. It is convenient to use the Foreman server itself for this. Foreman has already installed a web server for us, so the only thing that is required is to create the local copy of the media on the server.

The Foreman public web files are located in /var/lib/foreman/public. Add a directory there named repo. You'll need to create the entire directory structure shown here:

/var/lib/foreman/public/repo/rhel/6.5/isos/x86_64

There are a number of ways that a local media source can be set up. The most straightforward is to download a copy of the current Red Hat Enterprise Linux 6 Server DVD directly to your server, mount it and copy the contents from the mount directory. The following assumes we've downloaded an ISO named rhel-server-6.5-x86_64-dvd.iso to our user's Downloads directory:

# sudo mkdir -p /mnt/rhel-iso /var/lib/foreman/public/repo/rhel/6.5/isos/x86_64
# sudo mount -o loop /home/user/Downloads/rhel-server-6.5-x86_64-dvd.iso /mnt/rhel-iso
# sudo cp -R /mnt/rhel-iso/* \
    /var/lib/foreman/public/repo/rhel/6.5/isos/x86_64/
Now you're ready to configure the PXE, DHCP and DNS proxies in Foreman and set up your provisioning options. The following steps walk through the items that need to be modified or added using the Foreman web interface.

Architectures

Add RHEL Server 6.5 to the x86_64 architecture.

Domains

Verify the Domain fields are populated.

NOTE: If these entries are not filled in, that is an indication that your host does not have its FQDN set properly.
Hardware Models

We'll let Puppet/Foreman populate this after discovery; just skip it for now.

Installation Media
Operating System

Navigate to the Provisioning/Operating System tab and create a new OS entry for Red Hat Enterprise Linux 6.5. Verify that you've selected the Architecture, Partition table and Installation media as shown in the figure below. Then navigate to the Templates tab and set the Provision and PXELinux drop-downs.
Provisioning Templates

Provisioning templates are what allow you to specify how you would like your system configured during PXE boot deployment. For a Red Hat Enterprise Linux deployment you'll need a PXELinux boot file, as well as a Red Hat Kickstart file. In addition, Foreman provides the ability to define custom snippets here that you can apply across multiple Kickstart files. The default install includes a number of templates; the included OpenStack Kickstart and PXE templates are complete and work for many installations. In our case, however, we had some variations: the PXE boot interface on our servers was configured as em3, so we wanted to be sure to set that up, as well as make the PXE boot and Kickstart run unattended, so we set the interface in the template files ourselves. In our reference architecture we also utilized interface bonding and VLANs. We added a networking snippet to run after base configuration to set up our nodes the way we wanted, including entries for the /etc/hosts file. The templates we used can be downloaded and used as an example from the solidfire-ai git repository.

Either modify the existing templates to suit your needs from the Foreman UI or create your own. The Foreman UI provides a built-in editor as well as the option of importing the template file from your local machine.

NOTE: When using the editor, mouse clicks for actions like copy/paste are disabled; you'll need to use keyboard shortcuts (Ctrl-C/Ctrl-V).
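As a rough illustration of the kind of networking snippet described above, the fragment below shows ifcfg-style bonding and VLAN sub-interface files as they might be written out by a Kickstart %post section on RHEL 6. The interface names, bond options, VLAN ID and addressing are assumptions for illustration only; the actual snippet used in this reference architecture is the one in the solidfire-ai repository.

# Hypothetical %post fragment: LACP bond over the two 10GbE ports plus a tagged VLAN
cat > /etc/sysconfig/network-scripts/ifcfg-bond0 <<'EOF'
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=802.3ad miimon=100"
EOF

for nic in p2p1 p2p2; do
cat > /etc/sysconfig/network-scripts/ifcfg-$nic <<EOF
DEVICE=$nic
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
EOF
done

# Tagged VLAN sub-interface for the storage network (VLAN ID and IP are placeholders)
cat > /etc/sysconfig/network-scripts/ifcfg-bond0.30 <<'EOF'
DEVICE=bond0.30
ONBOOT=yes
BOOTPROTO=static
VLAN=yes
IPADDR=192.0.2.10
NETMASK=255.255.255.0
MTU=9000
EOF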
After you've made your edits or added your new template files, click on the Build PXE Default button located on the Provisioning Templates main page.

Create your subnets for provisioning

The provisioning process needs to know how you want the network configured for boot and install. To do this you'll need to set up a Subnet object in the provisioning group. In our case we're only creating a single subnet to get the system booted and on the network so that it can communicate with our Foreman server. The settings we used are shown in the figure below. Note that the machine os-2.solidfire.net in this example is the FQDN of our Foreman server.
Provisioning in Foreman is now configured and ready for use. The next part of the Foreman setup and configuration is the post-provisioning components. This is the set of configuration options you need to set up in order to deploy OpenStack on a node after you've installed an OS and booted it. Note that in Foreman we'll refer to each server or node as a Host.

Setting Host Groups in Foreman

Host Groups specify what software packages should be deployed on a system via Puppet and what role the host(s) within that Host Group will fulfill. After a Host has been provisioned and has checked in to Foreman (is managed), you can automatically deploy software packages by adding it to a Host Group. In our case we will utilize the default Controller (Nova Network) and Compute (Nova Network) Host Groups with some minor modifications.

1. Navigate to the Host Groups configuration section.
Configure Controller (Nova Network) Host Group

The following OpenStack components will be deployed on our Controller node(s):

- OpenStack Dashboard (Horizon)
- OpenStack Image service (Glance)
- OpenStack Identity service (Keystone)
- MySQL database server
- Qpid message broker
- OpenStack scheduler services
- OpenStack API services

In addition to the default services listed above, the deployment process also includes adding the cinder-volume service to the Controller node. Since we are using the SolidFire backend for our storage device, we can utilize the Controller for our Cinder resources in most cases. If you were to add volume backends like the reference LVM implementation, you should deploy your cinder-volume service on a dedicated node due to its resource requirements. Keep in mind that, as with most components of OpenStack, you can also add additional cinder-volume nodes later without difficulty.

1. In the base configuration, select the following items from the drop-downs:
   a. Environment (production)
   b. Puppet CA (hostname of our Foreman server)
   c. Puppet Master (hostname of our Foreman server)
2. Navigate to the Puppet Classes tab and set the following entries:
a. Expand the cinder class by clicking on the box to the left of the class name.
b. Add the Cinder::Volume::SolidFire class. This will allow us to enable the SolidFire driver during OpenStack deployment.
3. Navigate to the Parameters tab and modify the parameters.

The Parameters tab displays the various OpenStack configuration variables associated with the services that run on the Controller. Here you'll need to modify a few of the parameters, which are typically defined during the network design planning phase prior to deployment. Most of what we're doing here is ONLY setting the appropriate network address information and changing some passwords. Remember that in our case we already set our custom passwords in the seeds file, so the only items we need to change here are the IP addresses and the three settings that configure the SolidFire driver in Cinder.

4. Click Submit to save your changes.
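For context, the three SolidFire driver settings referenced above correspond to the standard options of the upstream Cinder SolidFire driver, which Puppet ultimately renders into cinder.conf on the node running cinder-volume. The snippet below is a sketch of roughly what that rendered configuration looks like: the option names come from the upstream driver, the values shown are placeholders, and the exact set of options written out is determined by the Puppet class rather than by this document.

# Hypothetical cinder.conf fragment produced by the Cinder::Volume::SolidFire class
[DEFAULT]
volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
san_ip = <SolidFire cluster MVIP>
san_login = <cluster admin username>
san_password = <cluster admin password>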
Configuring Compute (Nova Network) Host Group

The Compute Host Group is the main component of your OpenStack deployment. The Compute nodes deployed in this Host Group are the systems that will run our virtual machine instances. Their primary responsibility is to provide compute virtualization, and to enable network and storage access for the instances that a user creates on them. The following services will be configured and run on these nodes:

- OpenStack Compute (Nova)
- Networking (Nova-Network)

The process of updating the Compute Host Group is the same as the process we used to update the Controller Host Group. We won't repeat all of the steps here, but instead skip to the overrides page in the Host Group Parameters section.

Notice that we assigned a /21 network to our fixed range. It's important to plan ahead and have an idea of how many instances you plan to support in your OpenStack deployment. In our case we want to support at least 1000 instances, so we chose a /21 (giving us roughly 2,000 IPs) to ensure we had plenty of IP space and allowed for overhead. We're also sharing this with the same private network that we configured earlier, so there is some overhead we have to keep in mind for each host's private NIC configuration.

WARNING: If your deployment is similar to what we've done here, you MUST remember, after deploying your Host Groups, to use the nova-manage command to reserve the fixed IPs that you've used on your nodes. Since we're using FlatDHCP and we're sharing the fixed range with the IPs we've assigned to the NICs on our hosts, the used IPs need to be marked as reserved in the pool. If you don't do this, then as you launch instances they'll use the first available IPs in the range, resulting in duplicate IP assignments. The verification section below includes information on how to reserve these IPs, as well as information regarding a script available in the solidfire-ai GitHub repository.
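As a sketch of what that reservation step might look like, the loop below marks a range of host-assigned fixed IPs as reserved using nova-manage. The address range is a placeholder for the IPs actually assigned to your hosts' private NICs, and the reserve_ips.sh script in the solidfire-ai repository is the version used in this reference architecture.

# Hypothetical example: reserve the first 20 addresses of the fixed range,
# assumed here to be the hosts' private NIC addresses.
for i in $(seq 1 20); do
    nova-manage fixed reserve --address=10.10.8.$i
done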
Building an OpenStack Cloud with Foreman

Foreman is now configured and ready for use. At this point we can provision systems and deploy OpenStack on our bare-metal servers. There are now just two steps required to get from bare metal to OpenStack:

1. Provision the OS to the hosts
2. Assign Host Groups after deployment

You will notice that in our deployment model we performed the host provisioning and the OpenStack Host Group assignment in two separate steps. You could consolidate this and do the deployment and Host Group assignment in a single step. However, in our case we added some kernel modifications in our Kickstart file that required a reboot. Rather than implement reboot/updates during the Kickstart process, we chose to separate the two steps. Instead, we created a Base Host Group that didn't have any Puppet manifests associated with it. Using a Base Host Group like this is helpful because it reduces the number of fields you have to enter when provisioning a new host. Fields like Puppet Server, Subnet, etc. can all be associated automatically via the Host Group association. Some important things to keep in mind: Foreman deploys the OS for us and configures Puppet. After creation, the host will automatically check in with Foreman every 30 minutes (or on each reboot) via the local Puppet agent.

Provision a Host

In order to provision a host there are two pieces of information that you need to have:

1. The MAC address of the NIC you want to PXE boot from
2. The DHCP address you want to assign during PXE boot

Foreman will auto-select the DHCP address for you, but in our case we utilize the DHCP address to make some settings in our Kickstart file. In order to track things better, we find it helpful to keep a consistent pattern here. Rather than use the random DHCP address that Foreman provides, we'll override it and match the hostname. For example, if our host is os-1 we'll be sure to select the corresponding DHCP address.

To provision a host, we'll need to add it and mark it for Build. Navigate to the Foreman Web UI Hosts page and click on New Host in the upper right corner.
The New Host page has a number of items that need to be filled in. Most of the items here are drop-down selections, so the only pieces you really need to know here are the name, MAC address and IP, as mentioned earlier. Set the Name, Environment, Puppet CA and Puppet Master fields.
Set the MAC address for the interface that you want to PXE boot from. Foreman will create an entry for this MAC address when you mark the host for build. The DHCP and PXE servers will only respond to requests from known MAC addresses that are currently marked for build.

Set Architecture, Operating system, Build mode, Media, Partition table and Root password. Note that most of these will be auto-populated once you select an architecture. Be sure to click Provision Templates after setting all of the fields.

NOTE: If you receive an error setting DHCP during the Resolve process, it is most likely a problem with the dhcpd.leases file. We ran into this occasionally and were able to fix it by cleaning the dhcpd.leases files. It is important to make sure that there are NO hosts marked for Build when you reset these:

# service dhcpd stop
# rm /var/lib/dhcpd/dhcpd.leases; touch /var/lib/dhcpd/dhcpd.leases
# rm /var/lib/dhcpd/dhcpd.leases~; touch /var/lib/dhcpd/dhcpd.leases~
# service dhcpd start
Clicking Submit will automatically mark the host for build. This creates an entry for the MAC address in the PXE/DHCP directory and adds the host information to the dhcpd.leases file. Upon reboot, the host will receive a DHCP address from the Foreman proxy, PXE boot into our PXELinux file and run the Kickstart template. The install/provisioning is completely automated; the only step required is the reboot.
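That reboot can also be triggered remotely rather than at the console. One sketch, assuming IPMI-over-LAN is enabled on the iDRACs and using placeholder hostnames and credentials, is to power cycle each node with ipmitool:

# Hypothetical example: power cycle each OpenStack node via its iDRAC (IPMI over LAN)
for drac in os-1-drac os-2-drac os-3-drac; do
    ipmitool -I lanplus -H $drac -U root -P '<idrac-password>' chassis power cycle
done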
In our case we added all of our hosts and marked them for Build, then used our APC to power cycle all of them at once to kick off the process. You can also use the iDRAC to reboot, and to monitor the process on the first run to make sure everything goes as planned.

Upon completion of the Kickstart process, the host status may go to an error state in the Foreman UI. In our case this was because we made a network change at the end of our Kickstart file that made the host unreachable. After the reboot, the Puppet agent will check back in with the Foreman server and the host's status should update to OK. Your host is now ready for Host Group assignment and OpenStack deployment.

Assign and Deploy the OpenStack Controller via Host Group

The first node to deploy is the Controller node, via its Host Group. The Compute nodes and all other OpenStack nodes are dependent on the Controller (or Controllers, if you are doing HA).

1. Point your web browser to the IP address of your Foreman server
2. Log in using the admin credentials you set up earlier
3. Click on the Hosts tab
4. Check the selection box to the left of the host on which you want to deploy the OpenStack Controller
5. Click on Select Action from the drop-down to the upper right and select Change Group
6. Choose the Controller (Nova Network) Host Group from the selection list and click Submit

The next time the client node checks in with Foreman it will be listed as out of sync, and Foreman will deploy the Controller Host Group.

NOTE: If you don't want to wait for the next host update cycle, you can force an update from the client using the following command:

# puppet agent -t

This will force the client node to check in with the Foreman server, download the current manifest and begin deployment of the updates.

Assign and Deploy the OpenStack Compute Nodes via Host Group

Once your Controller node has been deployed and is up to date according to the Foreman Web UI, you are ready to start assigning the Compute Host Group to your nodes. This process is the same as the one you followed for the Controller Host Group; only the node selection and the Host Group selection differ.

It is STRONGLY recommended to start by deploying the Controller and just a single Compute node and then performing some basic functional testing. This way you can verify that your Host Group configs are correct and that your deployment model is usable. The steps below outline the minimum checks you should perform; in addition, it is a good idea to boot an instance, attach a volume, and verify that you can log on to the instance, create partitions, and format and mount the volume. Once you have successfully completed these checks, you are ready to assign the Compute Host Group to the rest of your nodes. Also keep in mind that you don't have to deploy everything now; you can come back and scale out your Compute nodes later if you like.
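The Change Group operation can also be scripted against the Foreman REST API instead of the Web UI, which is convenient when assigning Host Groups to many nodes at once. The commands below are a minimal sketch only: the Foreman hostname, admin password and hostgroup ID are placeholder values, and the exact API paths can vary between Foreman releases, so verify them against your installed version.

List the available Host Groups and note the ID of the one you want:

# curl -k -u admin:PASSWORD https://foreman.example.com/api/hostgroups

Assign a host to that Host Group (hostgroup_id 2 is just an example):

# curl -k -u admin:PASSWORD -H "Content-Type: application/json" -X PUT \
  -d '{"host":{"hostgroup_id":2}}' https://foreman.example.com/api/hosts/os-2.example.com

Then force an immediate check-in from the target host with puppet agent -t as described above, rather than waiting for the 30-minute update interval.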
Verifying the OpenStack deployment

You should now be able to log in to the OpenStack Dashboard. Point your web browser to the Dashboard and log in as admin (the password can be obtained from your Controller Host Group's parameters tab if needed).

Log in using the admin account, navigate to the Access & Security section, and select Download OpenStack RC File. This RC file holds the credentials that allow you to access the OpenStack service APIs over HTTP (via curl or the client utilities).
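Once the RC file has been sourced on a machine with the OpenStack clients installed, the basic functional checks described earlier (boot an instance, attach a volume) can be run from the command line. The following is a minimal sketch: the RC file name, image, flavor, keypair and volume size are placeholders, and the exact client options may differ slightly depending on the client versions shipped with your platform release.

# source ~/admin-openrc.sh
# nova image-list
# nova boot --flavor m1.small --image <image-id> --key-name default test-instance
# cinder create --display-name test-vol 10
# nova volume-attach test-instance <volume-id> /dev/vdb
# nova list

At this point you can ssh to the instance, partition and format /dev/vdb, and mount it to confirm end-to-end functionality.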
Download and run the SolidFire-AI openstack_verification_scripts

At this point we should have a functional OpenStack deployment. Upon completion of the OpenStack deployment, running the SolidFire AI verification scripts is recommended. The scripts and other useful tools are available in the solidfire-ai repository within the SolidFire, Inc. public GitHub.com organization.

Upload the credentials file you downloaded in the previous step to your newly deployed Controller host and log in via ssh. Install git and clone the SolidFire-AI repo on the host:

# git clone <solidfire-ai repository URL>

For now, we're interested in the sfai_openstack_config_scripts; cd to that directory:

# cd solidfire-ai/sfai-openstack/openstack_config_scripts

Details describing what these scripts do and the order they should be run in are available in the repo's README file, as well as in the comment/documentation headers in the scripts themselves. Be sure to take a look at these before executing the scripts to ensure they are set up for your particular environment. We will use the following scripts to test our deployment and make some configuration settings:

- reserve_ips.sh: Reserves the ranges of floating and fixed IPs that we are using on the hosts
- basic_verify.sh: Runs basic functionality exercises; creates an image, boots an instance, attaches a volume, etc.
- cloud_setup.sh: Finishes up our configuration by creating the SolidFire QoS Specifications and Volume Types (sketched at the end of this section), downloads/creates a number of images to use in our cloud, and creates a default keypair to assign to instances. The default keypair is a critical piece, since it is what allows you to log in to your instances.

Upon successfully running the scripts listed above, we should feel confident that we haven't misconfigured our deployment options. If you ran these tests after a single Compute host deployment (recommended), you can now go through and assign the remaining hosts to the Compute Host Group.

IMPORTANT: If you come across an issue such as an incorrect parameter being set in the Host Group, you can simply edit the Host Group parameters via Foreman. The next time the host checks in (30-minute interval), it will be notified that it is out of sync and the changes will be pulled onto that host. You can also force the check-in from the host using puppet agent -t. Additionally, it is possible to configure Foreman to push Puppet updates, however we chose not to implement that feature in our RA.

From the Foreman Web UI, navigate to the Hosts page and select all of the remaining hosts that you would like to deploy as Nova Compute hosts. From the Select Action drop-down in the upper right portion of the table, select Change Group, select Compute (Nova Network) from the drop-down, and then Submit the changes.
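As a point of reference, the SolidFire QoS Specification and Volume Type setup performed by cloud_setup.sh corresponds roughly to the following Cinder client calls. This is a sketch only: the names and IOP values shown are illustrative placeholders rather than the values used by the script, and the QoS keys assume the SolidFire Cinder driver's minIOPS/maxIOPS/burstIOPS settings.

# cinder qos-create high-iops-qos consumer=back-end minIOPS=2000 maxIOPS=4000 burstIOPS=8000
# cinder type-create high-iops
# cinder qos-associate <qos-spec-id> <volume-type-id>

A default keypair for instance access can be created in the same session:

# nova keypair-add default > default.pem
# chmod 600 default.pem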
At each selected host's check-in, the host will receive notification that it is out of sync and will be updated/reconfigured as specified by the indicated Host Group.

At this point we have a functional OpenStack cloud deployment. If the steps and verification scripts above completed successfully, you are ready to create tenants and users and begin using your cloud.
Solution Validation

Throughout the validation process, specific tests were conducted against the outlined configuration using industry-standard benchmarking tools in order to validate the AI design against the kind of workloads most often encountered in an IT-as-a-Service style cloud infrastructure. Each category below outlines the specific test conducted to validate the use case and highlight the key benefits of SolidFire AI.

Deployment

Through our testing we were able to validate significant acceleration of deployment times with the AI architecture. The targeted deployment for this validation was 18 nodes, composed of 2 Control nodes (HA), 15 Compute nodes and 1 Foreman node. The key achievement from this effort was the deployment from bare metal to OpenStack in approximately one hour. Specific durations of key steps in this process include:

- Provision from bare metal to operating system: 17 nodes within 30 minutes
- Install and configure OpenStack on the same 17 nodes: less than 30 minutes

More specific details from important steps in the deployment process are highlighted below.

Utilizing Foreman provisioning (bare-metal deployment) and OpenStack deployment, we were able to install, configure and register the 17 nodes into Foreman running OpenStack in less than 1 hour. For comparison purposes, a deployment without Foreman typically entails a multi-day effort that is susceptible to error due to significant manual administration. Using Foreman not only expedites the install process in terms of time, it also results in a more accurate and repeatable process.

In our setup we utilized a local mirror for OS install and provisioning, as well as customized Kickstart files for networking. The result was a provisioning time from bare metal to operating system of under 30 minutes. This process included registration with Red Hat Satellite, YUM updates, networking configuration, and installation and configuration of the Foreman/Puppet clients.

At this point, deploying OpenStack components is simply a matter of adding a host (or hosts) to the appropriate Host Group. Foreman uses Puppet to install and configure the selected OpenStack packages during an update interval; this process completed within 30 minutes of the hosts checking in with the server.

For the first step in our deployment sequence we deployed all of our bare-metal systems. Next we deployed our OpenStack Controllers and a single OpenStack Compute node. We then ran our verification scripts (see GitHub) against the resulting OpenStack cloud to ensure that everything worked correctly. Upon successful completion of the verification checks we assigned the Compute Host Group to the remaining 15 nodes. The resulting configuration amounted to 2 Controller nodes and 15 Compute nodes ready for consumption inside of OpenStack.
Integration/Interoperability

Initial integration and interoperability validation of the reference architecture deployment was performed using two standard tests: the Red Hat Enterprise Linux OpenStack Platform Certification Test Suite and the Tempest test suite.

Red Hat Enterprise Linux OpenStack Certification Test Suite

This test is provided by Red Hat and is intended to ascertain that a third-party component is compatible with Red Hat Enterprise Linux OpenStack Platform. There are two parts to the Certification Test Suite, Storage and Networking. In our reference architecture we ran the Storage Certification suite to further verify the interoperability of our deployment.

Test Type: RHEL OpenStack Certification Test Suite
Status: PASSED

Tempest Test

This pass/fail test evaluates interoperability of the majority of the functionality inside OpenStack. The items tested by Tempest include booting instances, creating/attaching volumes, creating bootable volumes, performing snapshots and much more. This particular test suite is used for the CI gating checks which run against every commit to the OpenStack code base.

Test Type: Tempest Test Suite
Status: PASSED

Operational Efficiency

To demonstrate the operational efficiency of the AI design we performed a number of test scenarios. These ranged from testing functional scale limits of the deployment and timing, to real-world simulations of various workloads running simultaneously in the AI cloud.

Scalability

This scalability test is intended to exercise the ability to scale block storage and compute instances in our AI cloud, as well as to dynamically add Compute nodes into the environment. We started this test by reprovisioning all of the hosts in our deployment. We used the same process as in the initial deployment and loaded a fresh version of the operating system on each host. To start, however, we only assigned 8 of the hosts to the Compute Host Group and left the remaining servers with just the base OS.

The first thing the test does is create a template volume of 10GB, which we make bootable by copying a Fedora 20 image onto it during creation. In addition we create a second and third template volume of the same size with Ubuntu and CentOS 6.5 images.
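Creating a bootable template volume like this by hand is a single Cinder call; the sketch below uses a placeholder Glance image ID and matches the 10GB size used in the test.

# cinder create --image-id <fedora-20-image-id> --display-name fedora20-template 10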
We then launch random_boot.py, which launches between 1 and 5 instances sequentially by cloning one of the template volumes at random, waits for the volumes to become ready and then boots an instance from each of the volumes in the set (the underlying client calls are sketched at the end of this section). The script continues this pattern until we have booted 200 instances. Once we reach 200 running instances, the test then goes through and randomly terminates instances and deletes volumes, then rebuilds them in the same manner as in the initial build-up. For this we randomly selected from a list of four flavors that we had created:

1. rboot.small: 1 VCPU and 2GB RAM
2. rboot.med: 2 VCPU and 4GB RAM
3. rboot.large: 4 VCPU and 8GB RAM
4. rboot.huge: 6 VCPU and 16GB RAM

In addition to cycling the resources deployed in our cloud, we also exercised the scalability of our OpenStack deployment by reassigning the freshly built hosts to the Compute Host Group after launching the test scripts. We went back to Foreman and added 5 of the nodes back into the configuration by assigning them to the Compute Group. The process of adding the Compute hosts to a running cloud worked exactly as expected, and within 15 minutes of assigning the hosts in Foreman they were available in our resource pool and instances were being assigned to them.

At the end of this test run, one of the interesting observations is the balanced instance distribution across the infrastructure. During the 8-hour run the system naturally rebalanced across all the nodes in the AI cloud. Figure 22 shows the instance distribution from the OpenStack Dashboard at the conclusion of the test run.

Figure 22: OpenStack Instance Deployment Distribution

The hosts that were added in after the test started are os-9 through os-15. The low number of instances allocated to os-1 and os-7 was the result of failures on those nodes early in the test. Altogether, the results from the instance distribution exercise clearly demonstrate the scalability benefits of the SolidFire OpenStack deployment and the value of the Foreman deployment tool. Upon completion of the 8-hour run of this test, using a limited subset of Compute hosts, we were able to cycle through building a total of 973 instances.
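For reference, the clone-and-boot pattern exercised by random_boot.py corresponds to the following client calls. The IDs and names are placeholders; the --volume-type option reflects the Volume Type selection used later in the Quality of Service tests and can be omitted here.

# cinder create --source-volid <template-volume-id> --volume-type med-iops --display-name clone-01 10
# nova boot --flavor rboot.small --key-name default \
  --block-device-mapping vda=<clone-volume-id>:::0 clone-01-instance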
** Note that the total build count referenced above did not include any volumes or instances that failed during the test (in the case of failures, the test script would attempt to delete the resource and replace it with another).

Resource Agility

This test was designed to measure the provisioning agility of the infrastructure by measuring the elapsed time, or completed cycles, in different virtual machine instance deployment scenarios. The instances were again provisioned with a single vCPU and 2GB of RAM. We verified that each instance could be logged in to via ssh, and performed a simple file create/delete (dd from /dev/random to a 500MB file) to ensure that it was in fact operational. Each boot volume was 100GB in size and was built from a qcow2 image of Fedora 20. Measured durations for specific deployment scenarios under this configuration included:

- Time to create and boot 1000 new persistent instances w/ 100GB root disks and key injection: 2.5 hours
- Time to delete 1000 instances: 11 minutes
- Time to clone 1000 volumes: 15 minutes
- Time to delete 1000 instances and volumes: 17 minutes

During this instance deployment process, there can be elongated creation and boot times due to the larger volume sizes. The image download process in particular experiences significant overhead in the qemu conversion of the empty blocks. However, the image-to-volume download overhead can be addressed by utilizing the SolidFire clone functionality. In this model you only have to download the image once; the cloning operations are handled internally on the SolidFire cluster.

It is also important to point out that the deployment times were not linear with respect to instance or volume count. The same test was performed with 200 instances and volumes:

- Time to create and boot 200 new persistent instances w/ 100GB root disks and key injection: 40 minutes
- Time to delete 200 instances: 11 minutes
- Time to clone 200 volumes: 6 minutes
- Time to delete 200 instances and volumes: 10 minutes

The biggest difference in the 200-instance deployment was that the bulk of the deployment time was spent in the boot process of the instances. The process of cloning the 100GB parent volumes completed within 42 seconds, as opposed to the 1000-instance test, where this took upwards of an hour.

In use cases where durable and persistent instances are desired, the advantage of using the boot-from-volume method is significant. Using bootable volumes provides more flexibility as well as the ability to customize images based on your needs, and instance migration also becomes a much easier process. For use cases where you want a higher level of availability for your instances, and you require not only higher performance but also consistent and guaranteed performance, the use of bootable volumes with SolidFire is the best way to achieve this.

** Note: throughout the 1000-instance cycle test, we encountered timing issues as the deployment eclipsed 500 instances. We believe these issues were the result of long wait times for API requests due to what appear to be database transaction delays. We suspect these performance delays will be addressed in future OpenStack release efforts.
Quality of Service

This series of tests was designed to demonstrate the quality-of-service (QoS) merits of the SolidFire storage system by showing multiple applications running in parallel on the same shared storage infrastructure. The goal here was to simulate varying use cases and workloads, and to verify that the SolidFire-resident volumes delivered consistent performance in line with their QoS settings.

We again ran the same test cycle as above, creating 200 instances and then running delete/create in the background. For this test, however, we also introduced IO generators on each of the instances. For IO generation and testing we used the Flexible I/O Tester (Fio). Fio is a versatile IO workload generator offering a high level of configurability to allow you to simulate various real-world workloads.

We created two new template volumes (Fedora 20, Ubuntu 12.04), both modified to have Fio pre-installed and configured. In addition, we set up six varying Fio profiles to simulate different use cases and workloads that would exist in our deployment. The six profiles below, representing a significant variance in IO load, were selected from the Fio package with some minor modifications (these profiles are available in the SolidFire-AI GitHub repo, and an example profile is sketched at the end of this section):

1. database
2. file/web server
3. general operating system IO
4. read-only
5. write-only
6. 80/20 mixed read/write

We then added a call to the crontab on the instance templates, so that when booted they would execute our Fio run script. The run script would then randomly choose one of the six available profiles and run the corresponding IO test. Most of these tests were set up to run for several hours, but in the case that a test completed, the script would pause, select another test and start the process over again.

Finally, in order to demonstrate consistent performance amidst this mixed workload environment, we utilized Cinder's Volume Type mapping to SolidFire QoS settings. This allows a user to specify their IO requirements for a volume at create time. The IOP settings are applied on a volume-by-volume basis and are completely dynamic; the QoS settings on a SolidFire volume can be changed at any time without interrupting IO to the volume, or doing any sort of migration or data relocation. The Volume Type definitions we used for this test are shown in the table below.

Name        Min IOPs    Max IOPs    Burst IOPs
low-iops
med-iops
high-iops
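As an illustration of the profile format, the job file below sketches the 80/20 mixed read/write case in standard Fio syntax. The parameters shown are representative placeholders; the actual profiles used during validation are the lightly modified examples from the Fio package published in the SolidFire-AI repo, and the target device will depend on where the test volume is attached inside the instance.

[global]
ioengine=libaio
direct=1
time_based
runtime=14400
filename=/dev/vdb

[mixed-80-20]
rw=randrw
rwmixread=80
bs=4k
iodepth=32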
While creating the volumes, we added an option to the clone script that would select one of the above three Volume Types and pass it in as an option to the create call. The result was a randomly distributed deployment not only of QoS settings on the volumes, but also of the IO workloads being run on each of those instances.

In order to verify that our IOP levels were performing within the bounds of our QoS settings, we viewed the performance statistics for various volumes from the SolidFire Web UI. There were two key questions to focus on:

1. Is the volume operating within the configured ranges in terms of IOP performance?
2. Is the IO performance on the volume consistent?

In order to verify that the QoS settings matched the Volume Type we assigned in OpenStack during volume creation, we first go to the Edit QoS page, shown in Figure 23 below, for the volume being checked. While this page also allows an admin to adjust QoS if desired, for this purpose it is only used for information gathering.

Figure 23: SolidFire Edit QoS Tab
The Volume Info section within the Edit QoS tab above contains the internal SolidFire ID as well as the UUID assigned to the volume in OpenStack, and it also includes the OpenStack tenant ID. The Modify Volume section shows that this volume is of Type high-iops; the settings match our assignment in OpenStack.

Next, to check the consistency of the IO performance, view the IO summary page for the specific volume. This page (Figure 24 below) shows a graphical representation of the volume's performance over time, including IOPs, bandwidth in MB/s, and latency. We expect to see a consistent performance history here, in line with the IOP boundaries set via QoS.

Figure 24: SolidFire Performance History Tab
To demonstrate the capability of SolidFire's QoS controls to support multiple IO settings across volumes on a single cluster, we also include a screenshot of the same information for a volume with the low-iops setting. As was observed in the high-iops graphic above, we can see that the performance in the figure below remains consistent within the set QoS parameters.
The results in the figures above clearly demonstrate the predictable and consistent performance provided by the SolidFire device. It is important to keep in mind that this is just a sample of two of the volumes; there were approximately 185 volumes, all handling varying IO workloads against the same system, when the sample was viewed. Each of these volumes maintained a consistent performance level within the bounds of its QoS settings.

Storage Efficiency

During our testing process we utilized the SolidFire Web UI to inspect a sample of the volumes and get a feel for the overall storage efficiency being realized by the system. Figure 25 is a screen capture of the SolidFire Web UI's Reporting page. This view provides a quick graphical depiction of the overall cluster utilization for both capacity and IOP usage. Key statistics from this system report include:

- Overall efficiency: x (including thin provisioning, deduplication and compression)
- Provisioned space: 63,039.4 GBs / Unprovisioned space: 23,317.7 GBs
- Used space: GBs / Unused space: 14,155.9 GBs
Figure 25: SolidFire Storage System Efficiency

These statistics summarize the efficiency benefits to be gained from the in-line data reduction inherent to the SolidFire architecture. In this particular use case, much of the efficiency comes from the fact that the base operating system image deployed in an OpenStack environment will, in most cases, have extreme rates of duplication.
Conclusion

The validated AI design reviewed throughout this document demonstrates how operators and administrators can more easily deploy a functional OpenStack cloud infrastructure utilizing SolidFire storage, Dell compute and networking, and the Red Hat Enterprise Linux OpenStack Platform. Using these technologies, the goal of this document was to provide guidance on how to complete the following tasks:

- Set up server, network and storage infrastructure
- Deploy the Foreman server
- Deploy Red Hat Enterprise Linux 6.5 (using Foreman)
- Deploy OpenStack (using Foreman)
- Validate the configuration

This reference architecture covers each of these topics in sufficient detail to allow users of an AI design to reproduce them in their own environments. The configuration described in this document across the various tools and technologies can be customized and expanded to meet specific requirements.

For enterprise IT administrators and managers evaluating an OpenStack-powered infrastructure, SolidFire's AI significantly reduces configuration and deployment complexity, eliminates lock-in risk and accelerates time to value. By capturing the value of robust technologies from Red Hat, Dell and SolidFire, enterprises stand to benefit from a significantly more agile, efficient, automated and predictable cloud infrastructure.

If you would like to learn more about SolidFire AI, please contact your local sales representative, email [email protected], or visit the SolidFire AI solution page for more information.
Appendix

Appendix A: Bill of Materials

Quantity   Vendor      Component
2          Dell        S55 Switch w/ stacking module, Reverse Airflow
2          Dell        S4810 Switch, Reverse Airflow
18         Dell        PowerEdge R620 Server (CPU: 2 x 12-core Intel Xeon; RAM: 256GB; Network: 2 x 1GbE, 2 x 10GbE; RAID: Integrated; Drives: 2 x 250GB)
5          SolidFire   SF3010 Storage Node
18         Red Hat     Red Hat Enterprise Linux OpenStack Platform Subscription

Note: The BOM provided above is specific to the reference design outlined in this document. Performance and scale requirements of the environment may necessitate deviation from the above design.

Appendix B: Support Details

Agile Infrastructure is a reference architecture that has been tested and validated by SolidFire. SolidFire Support will follow established cooperative support guidelines to resolve any issues that customers encounter with their Agile Infrastructure design. Each of the components within the solution is also supported by the individual vendor partners. Support contact information for each vendor is provided below.

SolidFire: For detail on the SolidFire Support offerings, visit the SolidFire Support Portal, or contact SolidFire Support via email ([email protected]) or phone.

Red Hat: For support with Red Hat it is necessary to open a support case through the Red Hat Customer Portal. Alternatively, customers in the United States and Canada can dial 888-GO-REDHAT. For customers outside of these geographies, regional support contact information can be found on the Red Hat Customer Portal.
Dell: For customers in the United States, call 800-WWW-DELL. Dell provides several online and telephone-based support and service options. Availability varies by country and product, and some services may not be available in your area. To contact Dell for sales, technical support, or customer service issues:

- Visit support.dell.com.
- Click your country/region at the bottom of the page. For a full listing of countries/regions, click All.
- Click All Support from the Support menu.
- Select the appropriate service or support link based on your need.
- Choose the method of contacting Dell that is convenient for you.

Appendix C: How to Buy

For assistance with AI solution design and/or help finding a channel partner in your region, please contact SolidFire sales at [email protected].

Appendix D: Data Protection Considerations

Remote replication: The SolidFire storage system can support both host- and array-based replication techniques. For more information on these features please contact SolidFire sales at [email protected].

Backup to Object Storage: In addition to the backup-to-Object-Storage support implemented in Cinder, SolidFire also provides the ability to back up directly to Swift or S3-compatible storage targets from its own native APIs/Web UI. SolidFire can also perform delta block backups to other SolidFire volumes that reside on a separate physical cluster. For more information on these features please contact SolidFire sales at [email protected].

Appendix E: Scripts & Tools

The SolidFire AI verification script and other useful tools are available in the solidfire-ai repository within the SolidFire, Inc. public GitHub.com organization.
Appendix F: Agile Infrastructure Network Architecture Details

The following appendix dives a bit deeper into the details of the network architecture, its design, and specific configurations and requirements.

Administrative / Provisioning TOR Access Switches

The Dell S55 switches provide 1 Gb/s connectivity for administrative in-band access to all infrastructure components, as well as dedicated connectivity for infrastructure bare-metal provisioning. These switches are configured with stacking. In the stacked configuration, the switches provide both redundancy and a single point of switch administration. Stacked switches are seen as a single switch, and thus have the added benefit of allowing the directly connected systems to utilize LACP bonding for aggregated bandwidth, in addition to providing redundant connectivity.

When connected to the data center aggregation layer, it is recommended that each Admin switch is connected to each upstream aggregation switch independently with at least two links. As shown in the diagram below, this facilitates uptime and availability during an upstream aggregation switch outage or maintenance period.

Figure 26: Admin Switch Redundant Link Configuration
Data and Storage TOR Access Switches

The Dell S4810 switches provide the high-bandwidth 10 Gb/s connectivity required to support OpenStack cluster operations and SolidFire storage access. The dual S4810 switches are configured to utilize Virtual Link Trunking (VLT). VLT facilitates enhanced high availability and greater aggregated bandwidth, both upstream to the data center aggregation/core layer and downstream to the directly connected hosts, by allowing the switch pair to be seen as one switch. This allows the formation of Link Aggregation Groups (LAGs) that combine multiple links into one logical link and aggregate bandwidth.

The VLT trunk utilizes 2 x 40GbE links to provide 80 Gb/s of total throughput between the TOR switches. One or two of the remaining 40GbE ports may be used as uplinks to the data center aggregation switches. This configuration will deliver up to 160 Gb/s of total throughput between the TOR data/storage switches as the solution scales out rack by rack. The redundant link configuration, as shown in the diagram below, facilitates high availability and maintains a minimum number of network hops through the infrastructure in the event of an upstream outage.

Figure 27: Redundant Link Configuration
Cabling Considerations

This reference architecture utilizes copper cabling throughout the design. For 10G connectivity, copper direct-attach cables, also known as TwinAx cabling, are more cost-effective than comparable optical options, and the SFPs are directly attached to the cable. Depending on port availability, connectivity to the aggregation layer switches can be via QSFP+ cables, or QSFP+ to 4 x 10GbE SFP+ breakout cables. Both options deliver up to 40 Gb/s per QSFP+ port. Fiber is also an option; however, SFP+ direct-attach cables tend to be more cost-effective than optical SFP+ fiber options.

Physical Node Connectivity

As a general design guideline, all Agile Infrastructure nodes (OpenStack and SolidFire) are dual-connected to both the administrative TOR access switches and the TOR data/storage access switches. This provides switch fault tolerance, and increased aggregated bandwidth when needed via multiple NIC bonding options. (See Figure 28: AI Node Physical Connectivity.)

Figure 28: AI Node Physical Connectivity
In this reference architecture all nodes are configured to utilize NIC bonding to facilitate minimum downtime and added performance when needed. This provides simple, consistent physical connectivity across the deployment. However, different bonding modes were used between OpenStack nodes and SolidFire storage nodes, as explained below.

OpenStack nodes are configured to use NIC bonding mode 1, or Active-Passive. In addition to providing physical NIC failover in case of a failed link, Active-Passive bonding allows for simple, predictable traffic flow over the designated active link. However, if bandwidth becomes a concern at the compute layer, especially when sustained traffic loads are nearing 10 Gb/s, changing to bonding mode 5 (TLB) is recommended to alleviate bandwidth constraints while maintaining redundancy. Either bonding mode 1 (Active-Passive) or bonding mode 5 (TLB) is recommended for OpenStack nodes because switch-side port configuration is not necessary, which keeps scaling out compute nodes simple.

SolidFire nodes are configured to use NIC bonding mode 4, or LACP bonding. This bonding mode provides both redundant switch connectivity and aggregated bandwidth; one main difference from bonding mode 5 (TLB) is that it uses the same MAC address on all outbound traffic, and traffic is load-balanced both inbound and outbound on all bonded links. LACP bonding suits the storage nodes well, giving up to 20 Gb/s of aggregated storage throughput per node.

Jumbo Frames

For maximum performance, use of jumbo frames across the 10G network infrastructure is recommended. The design presented in this reference architecture requires an MTU setting of 9212 or higher to be configured end-to-end across the network path for proper operation and performance. OpenStack nodes and SolidFire nodes have all 10G interfaces set to use an MTU of 9000. This MTU is less than that defined on the switches because of the overhead required to process jumbo frames: the switches add additional information to each frame prior to transmission, and thus the MTU must be higher than 9000 on the switch side.
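On Red Hat Enterprise Linux 6, the bonding and MTU settings described above are expressed in the interface configuration files. The fragment below is a minimal sketch of an Active-Passive (mode 1) bond over the two 10G ports of an OpenStack node; the device names (bond0, em3) and the miimon value are illustrative placeholders and should be adapted to your hardware.

/etc/sysconfig/network-scripts/ifcfg-bond0:
DEVICE=bond0
BONDING_OPTS="mode=1 miimon=100"
ONBOOT=yes
BOOTPROTO=none
MTU=9000

/etc/sysconfig/network-scripts/ifcfg-em3 (repeat for the second 10G port):
DEVICE=em3
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
MTU=9000

The mode 4 (LACP) bond used on the SolidFire nodes is configured on the storage node itself rather than through files like these, and requires a matching LACP port-channel on the connected switch ports.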
Networks Summary

The networks used in this reference architecture are described below.

Administrative Network: In addition to providing general administrative and management access, this network is used by the Foreman provisioning server to provision and manage the OpenStack node configurations. Additionally, this network is used by the Controller nodes to provision storage volumes on the SolidFire SAN.

Data Center Network: Provides access between OpenStack VMs and the rest of the corporate or data center networks. It also provides portal/API access to the Foreman provisioning server and the OpenStack cloud Controllers. Also known as the Public or Floating network; Floating IPs are assigned from this network to individual VMs to enable external connectivity to networks outside the OpenStack environment. The term public does not necessarily imply that the network is Internet accessible.

OpenStack Services Network: Provides isolated access for OpenStack cloud internal communications such as DB queries, messaging, HA services, etc.

Storage Network: Isolated network access between OpenStack nodes and the SolidFire storage cluster.

Private Network: Internal network providing connectivity between tenant VMs across the entire OpenStack cloud. Also known as a Fixed network; Fixed IPs are assigned to individual VMs.
Appendix G: Agile Infrastructure Node Network Interface Configuration

The following table details the physical and logical interface mappings for the Agile Infrastructure OpenStack and storage node systems used in this reference architecture. You may use this table as a reference for physical network cabling and VLAN port configuration on the network switches. Replace the VLAN IDs with those used in your environment.

Role                   Network              Physical Interface   VLAN ID   Logical Interface
OpenStack Controller   Admin                Port 1, Port 2                 bond1
                       Public               Port 3, Port 4       1001      bond0:vlan1001
                       Storage              Port 3, Port 4       1002      bond0:vlan1002
                       OpenStack Services   Port 3, Port 4       1003      bond0:vlan1003
OpenStack Compute      Admin                Port 1, Port 2                 bond1
                       Public               Port 3, Port 4       1001      bond0:vlan1001
                       Storage              Port 3, Port 4       1002      bond0:vlan1002
                       OpenStack Services   Port 3, Port 4       1003      bond0:vlan1003
                       Private              Port 3, Port 4       1004      bond0:vlan1004
Provisioning Server    Admin                Port 1, Port 2                 bond1
                       Public               Port 3, Port 4       1001      bond0:vlan1001
SolidFire Storage Node Admin                Port 1, Port 2                 bond1g
                       Storage              Port 3, Port 4                 bond10g
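The tagged logical interfaces in the table above map to 802.1Q sub-interface configuration files on the nodes. The bond0:vlan1001 labels are logical names; on Red Hat Enterprise Linux 6 the corresponding VLAN device is typically named bond0.1001. A minimal sketch for the Public network VLAN follows, with a placeholder address:

/etc/sysconfig/network-scripts/ifcfg-bond0.1001:
DEVICE=bond0.1001
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.0.2.11
NETMASK=255.255.255.0
MTU=9000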
Appendix H: Rack Configuration

RU   Role                  Device
42   10Gb Network          TOR Data Access Switch A - Dell S4810
41   10Gb Network          TOR Data Access Switch B - Dell S4810
40   1Gb Network           TOR Administrative Access Switch A - Dell S55
39   1Gb Network           TOR Administrative Access Switch B - Dell S55
38   Provisioning Server   Dell PowerEdge R620
37   OS Controller 1       Dell PowerEdge R620
36   OS Controller 2       Dell PowerEdge R620
35   OS Compute            Dell PowerEdge R620
34   OS Compute            Dell PowerEdge R620
33   OS Compute            Dell PowerEdge R620
32   OS Compute            Dell PowerEdge R620
31   OS Compute            Dell PowerEdge R620
30   OS Compute            Dell PowerEdge R620
29   OS Compute            Dell PowerEdge R620
28   OS Compute            Dell PowerEdge R620
27   OS Compute            Dell PowerEdge R620
26   OS Compute            Dell PowerEdge R620
25   OS Compute            Dell PowerEdge R620
24   OS Compute            Dell PowerEdge R620
23   OS Compute            Dell PowerEdge R620
22   OS Compute            Dell PowerEdge R620
21   OS Compute            Dell PowerEdge R620
RU   Role      Device
20   OPEN      OPEN
19   OPEN      OPEN
18   OPEN      OPEN
17   OPEN      OPEN
16   OPEN      OPEN
15   OPEN      OPEN
14   OPEN      OPEN
13   OPEN      OPEN
12   OPEN      OPEN
11   OPEN      OPEN
10   OPEN      OPEN
9    OPEN      OPEN
8    OPEN      OPEN
7    OPEN      OPEN
6    OPEN      OPEN
5    Storage   SolidFire SF3010
4    Storage   SolidFire SF3010
3    Storage   SolidFire SF3010
2    Storage   SolidFire SF3010
1    Storage   SolidFire SF3010

Disclaimer: The OpenStack Word Mark and OpenStack Logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation in the United States and other countries, and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation or the OpenStack community. Red Hat, Red Hat Enterprise Linux and the Red Hat Shadowman logo are registered trademarks of Red Hat, Inc. in the United States and other countries. All other trademarks referenced herein are the property of their respective owners.
Set Up Panorama. Palo Alto Networks. Panorama Administrator s Guide Version 6.0. Copyright 2007-2015 Palo Alto Networks
Set Up Panorama Palo Alto Networks Panorama Administrator s Guide Version 6.0 Contact Information Corporate Headquarters: Palo Alto Networks 4401 Great America Parkway Santa Clara, CA 95054 www.paloaltonetworks.com/company/contact-us
Remote PC Guide Series - Volume 1
Introduction and Planning for Remote PC Implementation with NETLAB+ Document Version: 2016-02-01 What is a remote PC and how does it work with NETLAB+? This educational guide will introduce the concepts
Intelligent Power Protector User manual extension for Microsoft Virtual architectures: Hyper-V 6.0 Manager Hyper-V Server (R1&R2)
Intelligent Power Protector User manual extension for Microsoft Virtual architectures: Hyper-V 6.0 Manager Hyper-V Server (R1&R2) Hyper-V Manager Hyper-V Server R1, R2 Intelligent Power Protector Main
Introduction to the EIS Guide
Introduction to the EIS Guide The AirWatch Enterprise Integration Service (EIS) provides organizations the ability to securely integrate with back-end enterprise systems from either the AirWatch SaaS environment
http://docs.trendmicro.com
Trend Micro Incorporated reserves the right to make changes to this document and to the products described herein without notice. Before installing and using the product, please review the readme files,
Introduction to VMware EVO: RAIL. White Paper
Introduction to VMware EVO: RAIL White Paper Table of Contents Introducing VMware EVO: RAIL.... 3 Hardware.................................................................... 4 Appliance...............................................................
SuperLumin Nemesis. Administration Guide. February 2011
SuperLumin Nemesis Administration Guide February 2011 SuperLumin Nemesis Legal Notices Information contained in this document is believed to be accurate and reliable. However, SuperLumin assumes no responsibility
Dell SupportAssist Version 2.0 for Dell OpenManage Essentials Quick Start Guide
Dell SupportAssist Version 2.0 for Dell OpenManage Essentials Quick Start Guide Notes, Cautions, and Warnings NOTE: A NOTE indicates important information that helps you make better use of your computer.
How To Integrate An Ipm With Airwatch With Big Ip On A Server With A Network (F5) On A Network With A Pb (Fiv) On An Ip Server On A Cloud (Fv) On Your Computer Or Ip
F5 Networks, Inc. F5 Recommended Practices for BIG-IP and AirWatch MDM Integration Contents Introduction 4 Purpose 5 Requirements 6 Prerequisites 6 AirWatch 6 F5 BIG-IP 6 Network Topology 7 Big-IP Configuration
The steps will take about 4 hours to fully execute, with only about 60 minutes of user intervention. Each of the steps is discussed below.
Setup Guide for the XenApp on AWS CloudFormation Template This document walks you through the steps of using the Citrix XenApp on AWS CloudFormation template (v 4.1.5) available here to create a fully
IronPOD Piston OpenStack Cloud System Commodity Cloud IaaS Platforms for Enterprises & Service
IronPOD Piston OpenStack Cloud System Commodity Cloud IaaS Platforms for Enterprises & Service Deploying Piston OpenStack Cloud POC on IronPOD Hardware Platforms Piston and Iron Networks have partnered
NOC PS manual. Copyright Maxnet 2009 2015 All rights reserved. Page 1/45 NOC-PS Manuel EN version 1.3
NOC PS manual Copyright Maxnet 2009 2015 All rights reserved Page 1/45 Table of contents Installation...3 System requirements...3 Network setup...5 Installation under Vmware Vsphere...8 Installation under
Team Foundation Server 2010, Visual Studio Ultimate 2010, Team Build 2010, & Lab Management Beta 2 Installation Guide
Page 1 of 243 Team Foundation Server 2010, Visual Studio Ultimate 2010, Team Build 2010, & Lab Management Beta 2 Installation Guide (This is an alpha version of Benjamin Day Consulting, Inc. s installation
A SHORT INTRODUCTION TO BITNAMI WITH CLOUD & HEAT. Version 1.12 2014-07-01
A SHORT INTRODUCTION TO BITNAMI WITH CLOUD & HEAT Version 1.12 2014-07-01 PAGE _ 2 TABLE OF CONTENTS 1. Introduction.... 3 2. Logging in to Cloud&Heat Dashboard... 4 2.1 Overview of Cloud&Heat Dashboard....
INTRODUCTION TO CLOUD MANAGEMENT
CONFIGURING AND MANAGING A PRIVATE CLOUD WITH ORACLE ENTERPRISE MANAGER 12C Kai Yu, Dell Inc. INTRODUCTION TO CLOUD MANAGEMENT Oracle cloud supports several types of resource service models: Infrastructure
