Dissecting Open Source Cloud Evolution: An OpenStack Case Study
Salman A. Baset, Chunqiang Tang, Byung Chul Tak, Long Wang
IBM T.J. Watson Research Center, Yorktown Heights, NY, USA

Abstract

Open source cloud platforms are playing an increasingly significant role in cloud computing. These systems undergo rapid development cycles. As an example, OpenStack has grown approximately 10 times in code size since its inception two and a half years ago. Confronted with such fast-paced change, cloud providers are challenged to understand OpenStack's up-to-date behavior and to quickly adapt and optimize their provisioned services and configurations to platform changes. In this work, we use a black-box technique to conduct a deep analysis of four versions of OpenStack. This is the first study in the literature that tracks the evolution of a popular open source cloud platform. Our analysis reveals important trends in SQL queries in OpenStack, helps identify precise points for targeted error injection, and points out potential ways to improve performance (e.g., by changing authentication to PKI). The OpenStack case study in this work effectively demonstrates that our automated black-box methodology aids quick understanding of platform evolution and is critical for effective and rapid consumption of an open-source cloud platform.

1 Introduction

Open source cloud management systems are being developed at a rapid pace. In one such infrastructure-as-a-service (IaaS) system, namely OpenStack [8], a new version is released every six months and incorporates numerous features and bug fixes from community developers. As shown in Table 1, Grizzly, the latest version of OpenStack, released in April 2013, comprises approximately 330 K lines of code. This is a tenfold increase in lines of code over the first release of OpenStack in October 2010, and a substantial increase over the previous release, Folsom.
Cloud providers looking to adopt the latest changes from the source trunk or a release of a rapidly evolving cloud platform such as OpenStack face a unique dilemma. On one hand, they want to adopt features and bug fixes as soon as they become available in the source trunk. On the other hand, they are wary of their limited understanding of component interactions, and of how an upgrade will impact the system's stability under normal conditions as well as failure scenarios. The plethora of options available for configuring an open source cloud system further complicates the dilemma. Without effectively understanding how OpenStack evolves from one version to the next and across versions, and the potential impact of configuration options and failures, a cloud provider may not be able to quickly consume the rapidly evolving system.

The distributed nature of cloud management systems complicates their understanding and analysis. These systems consist of many distributed components and leverage many existing tools, which sometimes interact with each other in non-intuitive ways. Currently, the choice of readily available tools for tracing and analyzing heterogeneous distributed systems in a black-box manner is limited. Black-box tracing tools offer little help when it comes to system diagnosis based on precise tracing of messages across distributed nodes, since they are mostly based on statistical techniques [4, 5]. Instrumentation-based approaches [6, 7] are also not easy to use, because source code availability and comprehension are required.

In this paper, we present a deep analysis of how OpenStack has evolved over the years. By significantly enhancing vpath [10], we analyze the complete message interaction among all OpenStack components for logical operations such as creating or deleting a virtual machine, under multiple configuration options. We perform this analysis without modifying any source code in OpenStack, and analyze four releases.
Using our analysis, we precisely reason about the data flow, understand the impact of various configuration options, and perform what-if analysis. As part of ongoing work, we are building a tool that will leverage the message interactions gathered from our analysis to inject faults at precise points and explore the robustness of the OpenStack system. As such, this analysis is not a performance study, but it can serve as a useful input to any performance evaluation.

The rest of the paper is organized as follows. In Section 2, we give an overview of OpenStack. In Section 3, we describe challenges and experiences in designing a tool for performing dynamic analysis of OpenStack releases. In Section 4, we discuss our results across OpenStack releases. In Section 5, we discuss ongoing work.

Release | Time   | Nova   | Glance | Keyst. | Cinder | Quant. | Swift  | Total
Austin  | Oct 10 | 17,288 |        |        |        |        | 12,979 | 30,627
Bexar   | Feb 11 | 27,734 | 3,629  |        |        |        | 16,014 | 47,377
Cactus  | Apr 11 | 43,947 | 4,927  |        |        |        | 16,665 | 65,539
Diablo  | Sep 11 | 66,395 | 9,961  | 12,451 |        |        | 15,591 | 91,947
Essex   | Apr 12 | 87,750 | 15,698 | 11,555 |        |        | 17,…   | …,596
Folsom  | Sep 12 | …,637  | 20,271 | 13,939 | 20,271 | 42,118 | 19,…   | …,320
Grizzly | Apr 13 | …,968  | 21,261 | 20,071 | 49,797 | 60,485 | 23,…   | …,081

Table 1: OpenStack evolution in terms of lines of code written in Python. An empty cell indicates that the component was not part of OpenStack at the time of release; … marks digits lost in the source. Keyst. and Quant. are abbreviations for Keystone and Quantum, respectively.

[Figure 1: block diagram — dashboard, nova-api, nova-scheduler, nova-conductor, nova-network, nova-compute, glance-api, glance-registry, keystone, the nova/glance/keystone databases, and AMQP; nova-conductor is marked as added in Grizzly to replace direct DB access from a compute node]

Figure 1: OpenStack Grizzly logical architecture without Quantum. The nova-conductor service was added in the Grizzly release; it runs on a controller and removes direct database operations from nova-compute, which runs on a compute node.

2 OpenStack Background

[Figure 2: block diagram — dashboard, nova-api, nova-scheduler, nova-conductor, nova-compute, glance-api, glance-registry, keystone, quantum-server, quantum-agent, quantum-dhcp, the nova/glance/keystone/quantum databases, and AMQP]

Figure 2: OpenStack Grizzly logical architecture with Quantum.
OpenStack is a fully distributed infrastructure-as-a-service software system. Its components communicate via REST, SQL, and AMQP (Advanced Message Queuing Protocol) to perform cloud operations. In this paper, we restrict our analysis to three logical operations in OpenStack, namely creating, deleting, and listing virtual machines.

OpenStack has six main components: compute (Nova), image repository (Glance), authentication (Keystone), networking-as-a-service (Quantum), volume storage (Cinder), and object storage (Swift). Figure 1 and Figure 2 show the logical architecture of OpenStack without and with Quantum, respectively. (Cinder and Swift are not discussed in this paper due to space limitations.) Quantum implements a subset of features for software-defined networking. In the Grizzly release, Quantum has been renamed OpenStack Networking.

Each component comprises one or more processes (or services). Except for nova-compute and nova-network, all services shown in Figure 1 run on one or more controller nodes. For an OpenStack deployment leveraging Quantum (Figure 2), nova-compute and quantum-agent¹ run on a compute node. Up to the Folsom release, nova-compute connected directly to the nova database and performed database operations. In the Grizzly release, these direct database operations have been factored out of the compute node and moved into a service, nova-conductor, which runs on a controller node.

Nova comprises an API server (nova-api), a scheduler (nova-scheduler), and a conductor (nova-conductor) that run on the controller, and compute and network workers (nova-compute, nova-network, quantum-agent) that run on each physical server that is part of the OpenStack compute cloud. The nova services interact with each other through an asynchronous message passing server (RabbitMQ [3]) which runs the Advanced Message Queuing Protocol (AMQP) [1]. The use of AMQP prevents nova-* components from storing state and facilitates scalability.
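The stateless, fire-and-forget "cast" pattern the nova services use over AMQP can be illustrated with a minimal sketch. Here a plain queue.Queue stands in for a RabbitMQ queue; real OpenStack uses an AMQP client library against the broker, so the names below (cast, broker, run_instance) are illustrative, not OpenStack's actual API.

```python
"""Minimal sketch of cast-style RPC over a broker queue."""
import queue

broker = {"compute": queue.Queue()}   # one queue per consuming service

def cast(topic, method, **kwargs):
    """Fire-and-forget: the caller enqueues a message and keeps no state,
    which is what lets services like nova-scheduler stay stateless."""
    broker[topic].put({"method": method, "args": kwargs})

# nova-scheduler side: pick a host, then cast run_instance to compute.
cast("compute", "run_instance", instance_id="vm-1", host="node-1")

# nova-compute side: pull the message from its queue and dispatch it.
msg = broker["compute"].get_nowait()
print(msg["method"], msg["args"]["instance_id"])
```

Because the caller never blocks on a reply, adding more nova-compute consumers on the same queue scales the system out without any coordination among callers.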
In addition, nova-* components use a database for storing persistent state. The use of a central database and AMQP is potentially a limiting factor for scalability.

Any user or service of OpenStack must be authenticated using a token issued by Keystone (Keystone queries the database during authentication). Moreover, any message from the cloud controller node containing the token must be authenticated and verified with Keystone. Thus, Keystone (and its querying of the database during authentication) is a potential bottleneck in OpenStack (as shown in Figure 1 and Figure 2). As a result, Grizzly changes the default authentication mechanism to PKI to reduce database interaction during token verification. Further details about the OpenStack architecture are available at [2].

¹In OpenStack terminology, all Quantum components except the Quantum server are referred to as agents. We use quantum-agent to refer to the component that runs on the compute node.

3 Discovering Message Flow Among Components: Challenges and Experience

Understanding application behavior and its evolution can be achieved in several ways. One approach is to manually read the source code. Another is to read easily available information, such as application logs or the output of system monitoring utilities, to infer application behavior. These approaches usually take a significant amount of time and effort, and moreover capture only bits and pieces of application execution. For a complex system comprising multiple components, such as OpenStack, significant inference effort is required to extract a clear understanding of individual system execution flows from the source code and application logs.

To avoid these difficulties, we have chosen the vpath [10] approach, which traces end-to-end message flows across application components by capturing and analyzing system and library call traces. In particular, capturing receive()/send() and read()/write() calls (and their variants) allows us to observe causal relationships among components; capturing other calls helps us understand what steps are taken to complete user-issued operations. Such dynamic interactions of components are difficult to ascertain otherwise, e.g., through methods such as tcpdump.

Path tracing consists of two steps, namely monitoring and analyzing system and library calls. Source code of the target application (OpenStack in our case) is not required in either step. The target application is handled as a black box.

Monitoring of system and library calls. We use the vpath [10] tool to intercept system and library calls. The vpath tool makes use of the LD_PRELOAD technique.
By setting this environment variable, vpath can attach a wrapper to most glibc functions. Upon intercepting these glibc functions, vpath logs the various parameters of the invocation, e.g., process name/id, thread id, port number, parameters passed, timestamp, socket number, and connection id. The original glibc functions are then called to serve the application's requests. The recorded event sequence and parameters are processed in the analysis step to construct the causal event chain.

Analysis of collected data and construction of traces. In vpath, tracking of message causality across application processes rests on the following assumption about threading behavior: once a message is received by a thread, that thread is the one that processes the data, and it continues to be the processing thread until it makes another send or receive operation. This implies that any activity of a thread following the receipt of a message can be considered to belong to the processing of the same user request. This assumption allows linking the initial incoming receive with subsequent activities and outgoing send messages. The linkage between a send and a receive across processes can be formed by pairing the TCP/IP connection's socket tuple, which is unique per connection.

Any scenario violating this assumption causes path tracing to fail. Such scenarios are observed in OpenStack messages that are exchanged among nova components through RabbitMQ running the AMQP protocol. The presence of message-passing queues allows a thread to continue execution, thus violating vpath's original assumption. Without using timestamps or application-level knowledge, it is not possible to construct end-to-end path tracing. In this paper, we use the timestamp information of queued messages to construct the path trace.
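The two linking rules above — thread continuity within a process, and socket-tuple pairing across processes — can be sketched as follows. The event format and field names are our illustration, not vpath's actual log format.

```python
"""Sketch of vpath-style causality analysis over a tiny synthetic trace."""
from collections import defaultdict

# Each logged event: (timestamp, process, thread, op, socket_tuple).
# op is "recv" or "send"; socket_tuple identifies the TCP connection.
events = [
    (1, "nova-api",       "t1", "recv", ("cli", "api")),    # request arrives
    (2, "nova-api",       "t1", "send", ("api", "sched")),  # forwarded
    (3, "nova-scheduler", "t7", "recv", ("api", "sched")),
    (4, "nova-scheduler", "t7", "send", ("sched", "comp")),
    (5, "nova-compute",   "t2", "recv", ("sched", "comp")),
]

def build_path(events):
    """Link events into one request path:
    rule 1 (thread continuity): processing events in time order per thread
    attributes a thread's work to the request it last received;
    rule 2 (socket pairing): a send and a recv sharing a socket tuple
    belong to the same request, crossing the process boundary."""
    sends = {e[4]: e for e in events if e[3] == "send"}   # rule 2 index
    path = []
    for e in sorted(events):                              # rule 1 via timestamps
        if e[3] == "recv" and e[4] in sends:
            path.append((sends[e[4]][1], "->", e[1]))
    return path

print(build_path(events))
```

On this trace the reconstructed path is nova-api -> nova-scheduler -> nova-compute; the first recv has no matching in-trace send, marking it as the request's entry point.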
Since using timestamp information across distributed nodes requires clock synchronization, we conduct our experiments by placing all OpenStack processes on a single machine. As part of ongoing work, we are instrumenting the analyzer with application-specific knowledge (i.e., message identifiers) to correlate messages that are pushed to and pulled from AMQP queues.

We use the analyzer to (i) gather aggregate statistics about operations, (ii) ask what-if questions, and (iii) analyze a subset of actions from a message flow that meet certain conditions. In the next section, we present a subset of results.

4 Experimental Setup and Results

We ran all components of OpenStack on a single machine, since our focus is on message flow/path and what-if analysis, not performance. We disabled all OpenStack timers, and started logging of system and library calls in all OpenStack components using vpath. We initially chose the command-line utility, python-novaclient, to create, delete, and list VMs, but quickly realized that it performs a multitude of other operations to support the logical operations. Consequently, we discontinued the use of this utility and instead crafted REST call messages that were initiated using curl. Further, we ran the CLI of each OpenStack component so that each service is forced to authenticate with Keystone and cache the authenticated token. The authentication for user tokens is in addition to service tokens. In steady state, OpenStack services are expected to cache tokens, barring any bugs or configuration options. For each experiment, we collected the logs generated by vpath, analyzed them offline to construct the message path, and performed detailed analysis.

                | Diablo   | Essex    | Folsom    | Folsom    | Grizzly    | Grizzly
                |          |          | nova-net  | quantum   | nova-net   | quantum
SELECT (create) | 16 (450) | 17 (95)  | 21 (409)  | 26 (560)  | 20 (139)   | 37 (343)
SELECT (delete) | 8 (37)   | 10 (36)  | 17 (63)   | 23 (241)  | 13 (36)    | 31 (192)
SELECT (list)   | 5 (31)   | 4 (12)   | 6 (24)    | 7 (25)    | 1 (1)      | 1 (1)
INSERT (create) | 4 (4)    | 4 (4)    | 8 (23)    | 9 (24)    | 10 (37)    | 13 (40)
INSERT (delete) | 0 (0)    | 0 (0)    | 1 (3)     | 1 (3)     | 3 (6)      | 4 (6)
INSERT (list)   | 0 (0)    | 0 (0)    | 0 (0)     | 0 (0)     | 0 (0)      | 0 (0)
UPDATE (create) | 2 (9)-5  | 3 (12)-5 | 7 (60)-11 | 7 (58)-12 | 8 (74)-13  | 8 (70)-16
UPDATE (delete) | 4 (6)-4  | 6 (10)-7 | 8 (22)-9  | 8 (25)-9  | 10 (31)-11 | 10 (26)-12
UPDATE (list)   | 0 (0)-0  | 0 (0)-0  | 0 (0)-0   | 0 (0)-0   | 0 (0)-0    | 0 (0)-0
DELETE (create) |          |          |           |           |            |
DELETE (delete) | 1 (1)    | 1 (1)    | 1 (1)     | 1 (1)     | 1 (1)      | 1 (1)
DELETE (list)   |          |          |           |           |            |
Total tables:   |          |          |           |           |            |
  glance        | …        | …        | …         | …         | …          | …
  keystone      | …        | …        | …         | …         | …          | …
  nova          | …        | …        | …         | …         | …          | …
  quantum       | n/a      | n/a      | n/a       | 16        | n/a        | 24

Table 2: SQL statistics for create, delete, and list VMs. The number before the parenthesis is the total number of tables touched, and the number within the parenthesis is the actual number of queries. The number after the '-' in the UPDATE rows shows the total number of tables touched across INSERT and UPDATE queries. For Grizzly, there are 55 shadow tables for nova, which archive operation information.

                | Diablo        | Essex  | Folsom   | Folsom            | Grizzly  | Grizzly
                |               |        | nova-net | quantum           | nova-net | quantum
keystone        | 30 GET        | 9 GET  | 17 GET   | 23 GET            | 3 GET    | 6 GET, 2 POST
nova-api        | 1 POST        | 1 POST | 1 POST   | 1 POST            | 1 POST   | 1 POST
nova-compute    | …             | …      | …        | …                 | …        | …
nova-conductor  | n/a           | n/a    | n/a      | n/a               | …        | …
nova-network    | …             | …      | …        | n/a               | 20       | n/a
nova-scheduler  | …             | …      | …        | …                 | …        | …
glance-api      | 2 GET, 5 HEAD | 4 HEAD | 8 HEAD   | 8 HEAD            | 8 HEAD   | 8 HEAD
glance-registry | … GET         | 4 GET  | 8 GET    | 8 GET             | 8 GET    | 8 GET
quantum-server  | n/a           | n/a    | n/a      | 44; 5 GET, 1 POST | n/a      | 62; 9 GET, 1 POST

Table 3: Creating a VM: SELECT queries issued by and HTTP requests received by different components.
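The hand-crafted REST calls in our setup can equally be built with Python's standard library in place of curl. The host, tenant id, image/flavor references, and token below are placeholders; the request body follows the Nova v2 API's "create server" shape.

```python
"""Sketch of a hand-crafted 'create server' REST call (placeholders only)."""
import json
import urllib.request

token = "PLACEHOLDER-TOKEN"    # normally obtained from keystone beforehand
url = "http://127.0.0.1:8774/v2/TENANT_ID/servers"
body = json.dumps({"server": {
    "name": "test-vm",
    "imageRef": "IMAGE_UUID",
    "flavorRef": "1",
}}).encode()

req = urllib.request.Request(
    url, data=body, method="POST",
    headers={"X-Auth-Token": token,          # every request carries the token
             "Content-Type": "application/json"})
# urllib.request.urlopen(req) would submit it; here we only build the request.
print(req.get_method(), req.full_url)
```

Crafting the request by hand keeps the trace minimal: unlike python-novaclient, nothing but the one intended API call (plus its authentication) appears in the captured message flow.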
For the Folsom and Grizzly releases, we analyzed OpenStack operation with and without Quantum.

4.1 Database and REST Call Analysis

Table 2 shows the total number of distinct SQL queries for VM creation, VM deletion, and listing all VMs. Several interesting observations can be made from this table that shed light on the evolution of OpenStack.

First, the number of SQL SELECT queries increases from Essex to Folsom, and then decreases in the Grizzly release. The decrease in the number of SQL SELECT queries in Grizzly is attributed to significant refactoring of code in Keystone, the authentication component. A large number of SELECT queries does imply a processing overhead, especially if a SELECT query includes complicated joins; a detailed performance study, which is not the focus of this paper, would be needed to evaluate the impact. In Diablo, the large number of SELECT queries is related to the HTTP GET requests that Keystone receives (30 GET requests in total; see Table 3). In Diablo, Keystone had just been integrated with the other OpenStack components, and the developers had not yet optimized Keystone's interaction with them. The increase in SELECT queries for Grizzly (quantum) is attributed to Quantum service token authentication. We expected the Quantum token to be cached (as described earlier), but we did not observe this to be the case over multiple runs. We are investigating the cause of service token verification in Quantum.

Second, the number of INSERT and UPDATE queries continues to grow across releases. For a single VM creation in Grizzly (quantum), 40 INSERT and 70 UPDATE queries are issued that altogether touch 16 tables. Since OpenStack is a distributed system, a request can fail when traversing from one component to another. If a VM creation or deletion request fails midway, only a subset of the INSERT and UPDATE queries will have been issued, and these must subsequently be processed to effectively recover from the failed request.
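Counts like those in Table 2 fall out of the captured trace by tallying SQL verbs and the tables they touch. A minimal sketch — the log lines below are made up; the real input is the SQL text captured from send() calls to the database socket:

```python
"""Tally SQL verbs and touched tables from captured query strings."""
import re
from collections import Counter

trace = [
    "SELECT * FROM instances WHERE uuid = ?",
    "SELECT * FROM services WHERE host = ?",
    "INSERT INTO instances (uuid, state) VALUES (?, ?)",
    "UPDATE instances SET state = ? WHERE uuid = ?",
    "UPDATE compute_nodes SET vcpus_used = ? WHERE id = ?",
]

# Verb counts: first keyword of each statement.
verbs = Counter(q.split()[0] for q in trace)

# Tables touched: the identifier after FROM/INTO/UPDATE.
tables = {m.group(1)
          for q in trace
          for m in [re.search(r"(?:FROM|INTO|UPDATE)\s+(\w+)", q)] if m}

print(dict(verbs))          # SELECT: 2, INSERT: 1, UPDATE: 2
print(sorted(tables))
```

Grouping the same tally by the process that issued each query (recorded alongside the call in the trace) yields the per-component breakdown of Table 3.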
Moreover, any other configuration state will also need to be removed. Our path analysis technique can precisely identify which tables and configuration state were touched by a particular component, which makes it easier for developers to write failure-recovery code and for operations personnel to zoom in on a problem. Another implication of these results concerns database sizing. Since these results are per logical operation (e.g., create VM), a system architect can easily obtain a bound on the number of newly created records as a function of the VM provisioning rate.

       | Diablo | Essex | Folsom   | Folsom  | Grizzly  | Grizzly
       |        |       | nova-net | quantum | nova-net | quantum
SELECT | …      | …     | …        | …       | …        | …
INSERT | …      | …     | …        | …       | …        | …
UPDATE | …      | …     | …        | …       | …        | …

Table 4: Creating a VM: SQL queries before a request is sent to compute. Compare results with Table 2.

Table 4 shows an aggregate of the INSERT and UPDATE queries issued by different components before nova-scheduler sends a request to nova-compute over AMQP to create a VM. From Table 2 and Table 4, it can easily be deduced that the bulk of the UPDATE operations occur after a request is sent to nova-compute. Though not shown in the paper, we discovered from our analysis that the bulk of the UPDATE queries (44 in total for Grizzly (quantum)) update statistics for the compute node on which the VM is being created, while the remaining queries are instance-related updates.

[Figure 3: message-flow diagram among API, Scheduler, RabbitMQ, Compute, and Network: (1) "create server" (4892 B to the queue; 4981 B onward); (2) "create server (on the selected node)" (7040 B / 7136 B), after which the server creation request returns a 200 OK response although the VM status must still be checked; (3) "create network" (3117 B / 3204 B); (4) "network created" (994 B / 854 B); (5) "close queue" (57 B / 197 B)]

Figure 3: Creating a VM: message flow among nova components over AMQP up to Folsom (nova-network).

Third, when deprovisioning a VM, only one or two records are deleted from the tables, while all other VM-related records are archived. Our analysis indicates that the deleted records correspond to the virtual interface and ports (for Folsom/Grizzly quantum) of the VM being deprovisioned. In addition, in the Folsom and Grizzly releases, records are inserted into the reservations table to indicate the release of VM resources.
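This insert-a-release-record pattern amounts to append-only resource accounting. A sketch of the idea follows; the table and column names are illustrative, not the actual nova schema.

```python
"""Append-only resource accounting in the style of the reservations table."""
reservations = []                          # stands in for the DB table

def reserve(resource, amount):
    """Provisioning appends a positive delta record."""
    reservations.append({"resource": resource, "delta": amount})

def release(resource, amount):
    """Deprovisioning appends a negative delta record instead of
    decrementing a counter, so the full history stays auditable."""
    reservations.append({"resource": resource, "delta": -amount})

def in_use(resource):
    """Current usage is the sum of all deltas for the resource."""
    return sum(r["delta"] for r in reservations if r["resource"] == resource)

reserve("memory_mb", 512)                  # VM provisioned with 512 MB
release("memory_mb", 512)                  # deprovisioned: a -512 row is added
print(in_use("memory_mb"), len(reservations))   # usage back to 0, 2 rows kept
```

The trade-off is table growth proportional to operation count rather than to the number of live VMs, which feeds directly into the database-sizing bound above.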
For instance, if a VM is provisioned with 512 MB of memory, a record with −512 MB will be inserted into the reservations table to indicate the release of the memory resource. From a logging and audit perspective, such a mechanism is better than simply decrementing the provisioned resource.

While the SQL query counts shown in Table 2 could also have been obtained from SQL server logs, the logs do not tell us which component issued the queries, or the message flow that triggered them. For the create VM operation, we want to understand which components issue SELECT queries. Table 3 gives a breakdown of SQL SELECT queries sent by different components. It becomes very clear from this table that token-based authentication in Keystone is responsible for the bulk of the SQL SELECT queries. If we configure PKI-based token verification, the number of SELECT queries in Grizzly (nova-network) drops to seven. Our tracing tool allows us to easily understand the impact of configuration options such as this one.

Single-byte recv(). We observed strange behavior in which components receive HTTP responses one byte at a time, by issuing a recv() call for a single byte. From studying the source code, we found that the cause is a Python library (webob) used for HTTP message communication. Although OpenStack is not the direct cause of this single-byte recv(), the use of this library may degrade performance.

4.2 AMQP and Aggregate Traffic Analysis

Figure 3 shows the message flow over AMQP among the nova-* components of OpenStack, which we deduced using our message path analysis when using nova-network. This flow is applicable up to Folsom. In Grizzly, the introduction of the nova-conductor service has increased interaction with the AMQP server by an order of magnitude. In total, five messages are exchanged among nova-* components over AMQP.
After the scheduler sends the VM creation request to the compute node, a successful (200 OK) HTTP response is returned to the caller. As shown in Table 4, our path analysis allows us to precisely determine what happens in terms of database queries before the scheduler sends the VM creation request to nova-compute over AMQP.

4.3 Flow Evolution

How does the OpenStack flow evolve from one version to the next, and how much similarity is there in the shape of the paths? We believe that analyzing flow evolution is best performed with an interactive tool (similar to those available for examining source code differences). Confined to text, we offer some insights.

First, nova-api's interaction with the databases has increased significantly over the releases. Up until Essex, only four records were inserted by nova-api in total, three of which were inserted before a request was sent to nova-compute (Table 4). For Folsom and Grizzly, the numbers of records inserted along a similar path are 10 and 22, respectively. The increase is due to the introduction of a new table, instance_system_metadata, which contains the bulk of the new records (13 for Grizzly).
Second, the code path for interacting with Glance has been relatively stable since Folsom. This is also evident from the limited change in the Glance code base from Folsom to Grizzly.

Third, while the introduction of the nova-conductor service has removed direct database interaction from the compute node, the interaction between nova-compute and the database through nova-conductor has actually increased, which manifests itself in SQL UPDATE operations. As the OpenStack code evolves, the developers are focusing on storing intermediate state from the compute node in persistent storage, which results in an increase in UPDATE queries.

5 Discussion and Ongoing Work

Tracing the OpenStack message flow for logical operations is the first step in understanding the evolution of OpenStack. The tracing, by design, is performed before a system is actually put into production, and is meant to uncover any changes in the overall code. We are currently enhancing the tool to automatically correlate log writes with the message flow and to perform systematic error injection for analyzing OpenStack's robustness against failures. The idea is to automatically insert an error (e.g., by modifying the return value of a system call) at potentially every point in the traced message flow. As part of systematic error injection, the errors inserted, the message flow resulting from each error, and the logs will be collected in a central, indexable repository. The goal is to create this repository in a small time frame (e.g., one day). A system administrator can then consult this repository when searching for a problem diagnosis. Such an automated mechanism for inserting faults can significantly increase the test coverage of a rapidly evolving code base such as OpenStack.

The tracing results alone shed light on the working of the system and allow an architect to perform a rough sizing of the components (e.g., DB disk or AMQP message flow as a function of logical operations, such as the number of VMs created).
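The systematic error-injection loop described above can be sketched as a driver over the recorded trace points. Everything here is a stand-in: run_operation simulates the "create VM" flow, and the injected "EIO" models the real tool rewriting a system call's return value.

```python
"""Sketch of the planned error-injection driver over recorded trace points."""
trace_points = ["api:recv", "api:send", "scheduler:recv",
                "scheduler:send", "compute:recv"]

def run_operation(fault_at):
    """Pretend run of 'create VM' that fails when the faulted point is hit."""
    for point in trace_points:
        if point == fault_at:
            return {"fault": fault_at, "result": "error",
                    "log": f"{point}: injected EIO"}
    return {"fault": fault_at, "result": "ok", "log": ""}

# Inject one error per trace point; the repository maps fault site -> outcome,
# so an operator can later look up "which injected fault produces this log?".
repository = {p: run_operation(fault_at=p) for p in trace_points}

print(sum(1 for r in repository.values() if r["result"] == "error"))
```

Indexing outcomes by fault site is what makes the repository consultable during diagnosis: a production log line can be matched against the logs recorded for each injected fault.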
However, tracing is performed prior to putting the system into production, without instrumenting any source code. It may also be useful to follow a Dapper [9] and Zipkin [11] like approach of collecting request progress in a distributed system through code instrumentation. A particular challenge in tracing, which we identified in earlier sections, is the use of AMQP queues, which causes path information to break. As a design principle, distributed systems making use of a similar queueing mechanism should augment messages with a unique message identifier that traverses the different components of the stack, to enable offline or online tracing.

6 Conclusion

OpenStack, an open-source cloud management system, is being widely adopted as an IaaS platform. Its rapid development and growing complexity complicate its understanding. This paper presents a deep analysis of how OpenStack has evolved over the past few years. This is the first study in the literature that tracks the evolution of a popular open source cloud platform. We leverage and enhance a path-tracing tool to automate the analysis of four versions of OpenStack by capturing and analyzing interactions among distributed components for creating and deleting a virtual machine. Our analysis reveals important trends in SQL queries, helps understand the impact of configuration options, analyzes subsets of messages to answer what-if questions, and identifies precise points for error injection. As part of ongoing work, we are leveraging the message flow to develop a framework for error injection. The techniques presented in this paper are useful for understanding other rapidly evolving open source systems, so that a provider can consume them effectively.

References

[1] AMQP. Advanced_Message_Queuing_Protocol.

[2] OpenStack architecture. grizzly/openstack-compute/admin/content/logical-architecture.html.

[3] RabbitMQ.

[4] M. K. Aguilera, J. C. Mogul, J. L. Wiener, P. Reynolds, and A. Muthitacharoen. Performance debugging for distributed systems of black boxes. In SOSP, pages 74–89. ACM, New York, NY, USA.

[5] A. Anandkumar, C. Bisdikian, and D. Agrawal. Tracking in a spaghetti bowl: monitoring transactions using footprints. In SIGMETRICS. ACM, New York, NY, USA.

[6] P. Barham, A. Donnelly, R. Isaacs, and R. Mortier. Using Magpie for request extraction and workload modelling. In OSDI. USENIX Association, Berkeley, CA, USA.

[7] M. Y. Chen, E. Kiciman, E. Fratkin, A. Fox, and E. Brewer. Pinpoint: Problem determination in large, dynamic Internet services. In DSN. IEEE Computer Society, Washington, DC, USA.

[8] OpenStack.

[9] B. H. Sigelman, L. A. Barroso, M. Burrows, P. Stephenson, M. Plakal, D. Beaver, S. Jaspan, and C. Shanbhag. Dapper, a large-scale distributed systems tracing infrastructure. Technical Report dapper-2010-1, Google, April 2010.

[10] B. C. Tak, C. Tang, C. Zhang, S. Govindan, B. Urgaonkar, and R. N. Chang. vPath: precise discovery of request processing paths from black-box observations of thread and network activities. In USENIX Annual Technical Conference, pages 19–19. USENIX Association, Berkeley, CA, USA.

[11] Twitter. Zipkin. [Online; accessed May 2013].
An Intro to OpenStack. Ian Lawson Senior Solution Architect, Red Hat [email protected]
An Intro to OpenStack Ian Lawson Senior Solution Architect, Red Hat [email protected] What is OpenStack? What is OpenStack? Fully open source cloud operating system Comprised of several open source sub-projects
Comparing Microsoft SQL Server 2005 Replication and DataXtend Remote Edition for Mobile and Distributed Applications
Comparing Microsoft SQL Server 2005 Replication and DataXtend Remote Edition for Mobile and Distributed Applications White Paper Table of Contents Overview...3 Replication Types Supported...3 Set-up &
Cloud Cruiser and Azure Public Rate Card API Integration
Cloud Cruiser and Azure Public Rate Card API Integration In this article: Introduction Azure Rate Card API Cloud Cruiser s Interface to Azure Rate Card API Import Data from the Azure Rate Card API Defining
SUSE Cloud 2.0. Pete Chadwick. Douglas Jarvis. Senior Product Manager [email protected]. Product Marketing Manager djarvis@suse.
SUSE Cloud 2.0 Pete Chadwick Douglas Jarvis Senior Product Manager [email protected] Product Marketing Manager [email protected] SUSE Cloud SUSE Cloud is an open source software solution based on OpenStack
Isabell Sippli Cloud Architect, Lab Based Services IBM Software Group 2013 IBM Corporation
1 Isabell Sippli Cloud Architect, Lab Based Services IBM Software Group Disclaimer This document represents the author's views and opinions. It does not necessarily represent IBM's position or strategy.
Software-Defined Networks Powered by VellOS
WHITE PAPER Software-Defined Networks Powered by VellOS Agile, Flexible Networking for Distributed Applications Vello s SDN enables a low-latency, programmable solution resulting in a faster and more flexible
2) Xen Hypervisor 3) UEC
5. Implementation Implementation of the trust model requires first preparing a test bed. It is a cloud computing environment that is required as the first step towards the implementation. Various tools
UZH Experiences with OpenStack
GC3: Grid Computing Competence Center UZH Experiences with OpenStack What we did, what went well, what went wrong. Antonio Messina 29 April 2013 Setting up Hardware configuration
Is OpenStack the best path forward towards successful Clouds? Cor van der Struijf Senior Cloud Advisor [email protected]
Is OpenStack the best path forward towards successful Clouds? Cor van der Struijf Senior Cloud Advisor [email protected] What is the best path forward? I was just wondering if you could help me find my
SUSE Cloud Installation: Best Practices Using a SMT, Xen and Ceph Storage Environment
Best Practices Guide www.suse.com SUSE Cloud Installation: Best Practices Using a SMT, Xen and Ceph Storage Environment Written by B1 Systems GmbH Table of Contents Introduction...3 Use Case Overview...3
Sales Slide Midokura Enterprise MidoNet V1. July 2015 Fujitsu Limited
Sales Slide Midokura Enterprise MidoNet V1 July 2015 Fujitsu Limited What Is Midokura Enterprise MidoNet? Network Virtualization Software Coordinated with OpenStack Provides safe & effective virtual networks
Design and Implementation of IaaS platform based on tool migration Wei Ding
4th International Conference on Mechatronics, Materials, Chemistry and Computer Engineering (ICMMCCE 2015) Design and Implementation of IaaS platform based on tool migration Wei Ding State Key Laboratory
can you effectively plan for the migration and management of systems and applications on Vblock Platforms?
SOLUTION BRIEF CA Capacity Management and Reporting Suite for Vblock Platforms can you effectively plan for the migration and management of systems and applications on Vblock Platforms? agility made possible
Agile Infrastructure: an updated overview of IaaS at CERN
Agile Infrastructure: an updated overview of IaaS at CERN Luis FERNANDEZ ALVAREZ on behalf of Cloud Infrastructure Team [email protected] HEPiX Spring 2013 CERN IT Department CH-1211 Genève
Building a Cloud Computing Platform based on Open Source Software. 10. 18. 2011. Donghoon Kim ( [email protected] ) Yoonbum Huh ( huhbum@kt.
Building a Cloud Computing Platform based on Open Source Software 10. 18. 2011. Donghoon Kim ( [email protected] ) Yoonbum Huh ( [email protected]) Topics I.Open Source SW and Cloud Computing II. About OpenStack
OpenStack Alberto Molina Coballes
OpenStack Alberto Molina Coballes Teacher at IES Gonzalo Nazareno [email protected] @alberto_molina Table of Contents From public to private clouds Open Source Cloud Platforms Why OpenStack? OpenStack
What Is Specific in Load Testing?
What Is Specific in Load Testing? Testing of multi-user applications under realistic and stress loads is really the only way to ensure appropriate performance and reliability in production. Load testing
Shoal: IaaS Cloud Cache Publisher
University of Victoria Faculty of Engineering Winter 2013 Work Term Report Shoal: IaaS Cloud Cache Publisher Department of Physics University of Victoria Victoria, BC Mike Chester V00711672 Work Term 3
Corso di Reti di Calcolatori M
Università degli Studi di Bologna Scuola di Ingegneria Corso di Reti di Calcolatori M Cloud: Openstack Antonio Corradi Luca Foschini Anno accademico 2014/2015 NIST STANDARD CLOUD National Institute of
Simplified Management With Hitachi Command Suite. By Hitachi Data Systems
Simplified Management With Hitachi Command Suite By Hitachi Data Systems April 2015 Contents Executive Summary... 2 Introduction... 3 Hitachi Command Suite v8: Key Highlights... 4 Global Storage Virtualization
Mobile Cloud Computing T-110.5121 Open Source IaaS
Mobile Cloud Computing T-110.5121 Open Source IaaS Tommi Mäkelä, Otaniemi Evolution Mainframe Centralized computation and storage, thin clients Dedicated hardware, software, experienced staff High capital
Towards an understanding of oversubscription in cloud
IBM Research Towards an understanding of oversubscription in cloud Salman A. Baset, Long Wang, Chunqiang Tang [email protected] IBM T. J. Watson Research Center Hawthorne, NY Outline Oversubscription
KVM, OpenStack, and the Open Cloud
KVM, OpenStack, and the Open Cloud Adam Jollans, IBM & Mike Kadera, Intel CloudOpen Europe - October 13, 2014 13Oct14 Open VirtualizaGon Alliance 1 Agenda A Brief History of VirtualizaGon KVM Architecture
Iron Chef: Bare Metal OpenStack
Rebecca Brenton Partner Alliances Manager Rob Hirschfeld Principal Cloud Architect Session Hashtags #chefconf #openstack About the Solution: http://dell.com/openstack http://dell.com/crowbak Iron Chef:
solution brief September 2011 Can You Effectively Plan For The Migration And Management of Systems And Applications on Vblock Platforms?
solution brief September 2011 Can You Effectively Plan For The Migration And Management of Systems And Applications on Vblock Platforms? CA Capacity Management and Reporting Suite for Vblock Platforms
This module provides an overview of service and cloud technologies using the Microsoft.NET Framework and the Windows Azure cloud.
Module 1: Overview of service and cloud technologies This module provides an overview of service and cloud technologies using the Microsoft.NET Framework and the Windows Azure cloud. Key Components of
Cloud Essentials for Architects using OpenStack
Cloud Essentials for Architects using OpenStack Course Overview Start Date 18th December 2014 Duration 2 Days Location Dublin Course Code SS906 Programme Overview Cloud Computing is gaining increasing
Introduction to OpenStack
Introduction to OpenStack Carlo Vallati PostDoc Reseracher Dpt. Information Engineering University of Pisa [email protected] Cloud Computing - Definition Cloud Computing is a term coined to refer
KVM, OpenStack, and the Open Cloud
KVM, OpenStack, and the Open Cloud Adam Jollans, IBM Southern California Linux Expo February 2015 1 Agenda A Brief History of VirtualizaJon KVM Architecture OpenStack Architecture KVM and OpenStack Case
CERN Cloud Infrastructure. Cloud Networking
CERN Cloud Infrastructure Cloud Networking Contents Physical datacenter topology Cloud Networking - Use cases - Current implementation (Nova network) - Migration to Neutron 7/16/2015 2 Physical network
How To Build An Openstack Cloud System
Open Cloud Networking Vision The state of OpenStack networking and a vision of things to come... Dan Sneddon Member Technical Staff Twitter: @dxs OCS 2.0 Public Cloud Benefits Private Cloud Control Open
WHITE PAPER September 2012. CA Nimsoft Monitor for Servers
WHITE PAPER September 2012 CA Nimsoft Monitor for Servers Table of Contents CA Nimsoft Monitor for servers 3 solution overview CA Nimsoft Monitor service-centric 5 server monitoring CA Nimsoft Monitor
Appendix to; Assessing Systemic Risk to Cloud Computing Technology as Complex Interconnected Systems of Systems
Appendix to; Assessing Systemic Risk to Cloud Computing Technology as Complex Interconnected Systems of Systems Yacov Y. Haimes and Barry M. Horowitz Zhenyu Guo, Eva Andrijcic, and Joshua Bogdanor Center
The Evolution of Load Testing. Why Gomez 360 o Web Load Testing Is a
Technical White Paper: WEb Load Testing To perform as intended, today s mission-critical applications rely on highly available, stable and trusted software services. Load testing ensures that those criteria
Building Multi-Site & Ultra-Large Scale Cloud with Openstack Cascading
Building Multi-Site & Ultra-Large Scale Cloud with Openstack Cascading Requirement and driving forces multi-site cloud Along with the increasing popularity and wide adoption of Openstack as the de facto
Infrastructure as a Service (IaaS)
Infrastructure as a Service (IaaS) (ENCS 691K Chapter 4) Roch Glitho, PhD Associate Professor and Canada Research Chair My URL - http://users.encs.concordia.ca/~glitho/ References 1. R. Moreno et al.,
Glassfish Architecture.
Glassfish Architecture. First part Introduction. Over time, GlassFish has evolved into a server platform that is much more than the reference implementation of the Java EE specifcations. It is now a highly
Administration Guide for the System Center Cloud Services Process Pack
Administration Guide for the System Center Cloud Services Process Pack Microsoft Corporation Published: May 7, 2012 Author Kathy Vinatieri Applies To System Center Cloud Services Process Pack This document
SUSE Cloud Installation: Best Practices Using an Existing SMT and KVM Environment
Best Practices Guide www.suse.com SUSE Cloud Installation: Best Practices Using an Existing SMT and KVM Environment Written by B1 Systems GmbH Table of Contents Introduction...3 Use Case Overview...3 Hardware
FireScope + ServiceNow: CMDB Integration Use Cases
FireScope + ServiceNow: CMDB Integration Use Cases While virtualization, cloud technologies and automation have slashed the time it takes to plan and implement new IT services, enterprises are still struggling
How To Use Openstack At Cern
IPv6 on OpenStack Feature Parity is a Tricky Question Today s Sequence Quick Review of OpenStack Is OpenStack IPv6 Ready? Case Study: CERN s use of OpenStack Takeaways Today s Sequence Quick Review of
EMC AVAMAR INTEGRATION WITH EMC DATA DOMAIN SYSTEMS
EMC AVAMAR INTEGRATION WITH EMC DATA DOMAIN SYSTEMS A Detailed Review ABSTRACT This white paper highlights integration features implemented in EMC Avamar with EMC Data Domain deduplication storage systems
Release Notes for Fuel and Fuel Web Version 3.0.1
Release Notes for Fuel and Fuel Web Version 3.0.1 June 21, 2013 1 Mirantis, Inc. is releasing version 3.0.1 of the Fuel Library and Fuel Web products. This is a cumulative maintenance release to the previously
This presentation covers virtual application shared services supplied with IBM Workload Deployer version 3.1.
This presentation covers virtual application shared services supplied with IBM Workload Deployer version 3.1. WD31_VirtualApplicationSharedServices.ppt Page 1 of 29 This presentation covers the shared
WHITE PAPER. Software Defined Storage Hydrates the Cloud
WHITE PAPER Software Defined Storage Hydrates the Cloud Table of Contents Overview... 2 NexentaStor (Block & File Storage)... 4 Software Defined Data Centers (SDDC)... 5 OpenStack... 5 CloudStack... 6
Cloud Computing for Control Systems CERN Openlab Summer Student Program 9/9/2011 ARSALAAN AHMED SHAIKH
Cloud Computing for Control Systems CERN Openlab Summer Student Program 9/9/2011 ARSALAAN AHMED SHAIKH CONTENTS Introduction... 4 System Components... 4 OpenNebula Cloud Management Toolkit... 4 VMware
OpenStack An Open Cloud for an Open Data World IBM s Contributions, Commitments & Products
IBM Cloud: Think it. Build it. Tap into it. OpenStack An Open Cloud for an Open Data World IBM s Contributions, Commitments & Products Henry Nash ([email protected]) IBM OpenStack Architect & Wild
Introduction to Windows Azure Cloud Computing Futures Group, Microsoft Research Roger Barga, Jared Jackson,Nelson Araujo, Dennis Gannon, Wei Lu, and
Introduction to Windows Azure Cloud Computing Futures Group, Microsoft Research Roger Barga, Jared Jackson,Nelson Araujo, Dennis Gannon, Wei Lu, and Jaliya Ekanayake Range in size from edge facilities
TECHNICAL PAPER. Veeam Backup & Replication with Nimble Storage
TECHNICAL PAPER Veeam Backup & Replication with Nimble Storage Document Revision Date Revision Description (author) 11/26/2014 1. 0 Draft release (Bill Roth) 12/23/2014 1.1 Draft update (Bill Roth) 2/20/2015
OpenStack Awareness Session
OpenStack Awareness Session Affan A. Syed Director Engineering, PLUMgrid Inc. Pakistan Telecommunication Authority, Oct 20 th, 2015 PLUMgrid s Mission Deliver comprehensive virtual networking solutions
How To Compare Cloud Computing To Cloud Platforms And Cloud Computing
Volume 3, Issue 11, November 2013 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Cloud Platforms
Using SUSE Cloud to Orchestrate Multiple Hypervisors and Storage at ADP
Using SUSE Cloud to Orchestrate Multiple Hypervisors and Storage at ADP Agenda ADP Cloud Vision and Requirements Introduction to SUSE Cloud Overview Whats New VMWare intergration HyperV intergration ADP
Redefining Microsoft SQL Server Data Management. PAS Specification
Redefining Microsoft SQL Server Data Management APRIL Actifio 11, 2013 PAS Specification Table of Contents Introduction.... 3 Background.... 3 Virtualizing Microsoft SQL Server Data Management.... 4 Virtualizing
InfiniteGraph: The Distributed Graph Database
A Performance and Distributed Performance Benchmark of InfiniteGraph and a Leading Open Source Graph Database Using Synthetic Data Objectivity, Inc. 640 West California Ave. Suite 240 Sunnyvale, CA 94086
Nessus or Metasploit: Security Assessment of OpenStack Cloud
Nessus or Metasploit: Security Assessment of OpenStack Cloud Aleksandar Donevski, Sasko Ristov and Marjan Gusev Ss. Cyril and Methodius University, Faculty of Information Sciences and Computer Engineering,
Building Storage as a Service with OpenStack. Greg Elkinbard Senior Technical Director
Building Storage as a Service with OpenStack Greg Elkinbard Senior Technical Director MIRANTIS 2012 PAGE 1 About the Presenter Greg Elkinbard Senior Technical Director at Mirantis Builds on demand IaaS
Towards a Fault-Resilient Cloud Management Stack
Towards a Fault-Resilient Cloud Management Stack Xiaoen Ju Livio Soares Kang G. Shin Kyung Dong Ryu University of Michigan IBM T.J. Watson Research Center Abstract Cloud management stacks have become a
Towards Smart and Intelligent SDN Controller
Towards Smart and Intelligent SDN Controller - Through the Generic, Extensible, and Elastic Time Series Data Repository (TSDR) YuLing Chen, Dell Inc. Rajesh Narayanan, Dell Inc. Sharon Aicler, Cisco Systems
Client Monitoring with Microsoft System Center Operations Manager 2007
Client Monitoring with Microsoft System Center Operations Manager 2007 Microsoft Corporation Published: December 18, 2006 Updated: December 18, 2006 Executive Summary Client monitoring is a new feature
VMware vcloud Automation Center 6.1
VMware vcloud Automation Center 6.1 Reference Architecture T E C H N I C A L W H I T E P A P E R Table of Contents Overview... 4 What s New... 4 Initial Deployment Recommendations... 4 General Recommendations...
XpoLog Center Suite Log Management & Analysis platform
XpoLog Center Suite Log Management & Analysis platform Summary: 1. End to End data management collects and indexes data in any format from any machine / device in the environment. 2. Logs Monitoring -
SUSE Cloud 5 Private Cloud based on OpenStack
SUSE Cloud 5 Private Cloud based on OpenStack Michał Jura Senior Software Engineer Linux HA/Cloud Developer [email protected] 2 New solutions emerge: Infrastructure-as-Service Cloud = 3 SUSE Cloud Why OpenStack?
Qlik Sense Enabling the New Enterprise
Technical Brief Qlik Sense Enabling the New Enterprise Generations of Business Intelligence The evolution of the BI market can be described as a series of disruptions. Each change occurred when a technology
Management-based License Discovery for the Cloud
Management-based License Discovery for the Cloud Minkyong Kim, Han Chen, Jonathan Munson, Hui Lei IBM T.J. Watson Research Center, 19 Skyline Drive, Hawthorne, NY 10532 {minkyong, chenhan, jpmunson, hlei}@us.ibm.com
Automation and DevOps Best Practices. Rob Hirschfeld, Dell Matt Ray, Opscode
Automation and DevOps Best Practices Rob Hirschfeld, Dell Matt Ray, Opscode Deploying & Managing a Cloud is not simple. Deploying to physical gear on layered networks Multiple interlocking projects Hundreds
McAfee Application Control / Change Control Administration Intel Security Education Services Administration Course
McAfee Application Control / Change Control Administration Intel Security Education Services Administration Course The McAfee University Application Control / Change Control Administration course enables
VMware vcloud Automation Center 6.0
VMware 6.0 Reference Architecture TECHNICAL WHITE PAPER Table of Contents Overview... 4 Initial Deployment Recommendations... 4 General Recommendations... 4... 4 Load Balancer Considerations... 4 Database
SharePoint 2013 Best Practices
SharePoint 2013 Best Practices SharePoint 2013 Best Practices When you work as a consultant or as a SharePoint administrator, there are many things that you need to set up to get the best SharePoint performance.
Block Storage in the Open Source Cloud called OpenStack
Block Storage in the Open Source Cloud called OpenStack PRESENTATION TITLE GOES HERE June 3, 2015 Webcast Presenters Alex McDonald, Vice Chair SNIA-ESF NetApp Walter Boring, Software Engineer, HP 2 SNIA
Hadoop in the Hybrid Cloud
Presented by Hortonworks and Microsoft Introduction An increasing number of enterprises are either currently using or are planning to use cloud deployment models to expand their IT infrastructure. Big
