Collaborative Project in Cloud Computing




Project Title: Virtual Cloud Laboratory (VCL) Services for Education
Transfer of Results: from North Carolina State University (NCSU) to IBM

Technical Description:

1. Types of services

From the end-user perspective, VCL offers a range of services:

- single-seat desktop-type offerings with access to either routine or specialized computational resources and applications
- groups of seats that can be reserved for a particular timeslot
- long-term reservation of one or more servers
- reservation of homogeneous or heterogeneous aggregates (or bundles) of resources called environments, which are the building blocks of the virtual clouds that VCL supports
- long-term reservation of research clusters
- HPC clusters and facilities

A seat can represent access to a sole-use virtual or bare-metal resource (e.g., a Windows or Linux server), which in turn can be used on its own or as a management node for a group of services or resources; or it can represent an access portal to an already preconfigured shared service or resource (e.g., an account on an IBM System z machine, or access to HPC clusters via a sole-use or shared login node). One of the primary characteristics of VCL is that it can dynamically change its configuration and move resources from one type of service to another. Two major groups of service categories can be distinguished: 1. HPC services: access to cluster-based resources controlled through an HPC scheduler such as Platform LSF (Load Sharing Facility), to shared-memory resources, and in special cases to supercomputers. 2. Other services include:

- Access to single-node bare-metal or virtual computers; in desktop mode, these resources typically come to the end-user with administrative privileges (root access).
- Ability to make a reservation for a group of desktops for a particular timeslot (e.g., for use in a class during a class period, or in a bank or office during working hours).
- Ability to reserve one or more (bare-metal or virtual) servers.
- Ability to reserve aggregations of tightly coupled or loosely coupled computers and servers, either of the same type (e.g., blades for a computational test cluster) or an integrated set of diverse components (e.g., a heterogeneous aggregate might consist of a Web server, a database server, a System z resource running applications, and a Cell cluster performing related analytics).
- Ability to use VCL to request a portal into HPC cluster resources and submit batch or real-time jobs to such an environment.

2. Types of resources

We distinguish between two types of resources: undifferentiated and differentiated.

1. Undifferentiated resources are those that can be reconfigured and reloaded at will with whatever suits the end-user. A group of blades that can be loaded from scratch (bare metal) with Linux, Windows, or some other OS and applications on short notice represents undifferentiated resources. A group of servers already loaded with a hypervisor (e.g., VMware ESX) and able to receive any virtual image of choice represents an undifferentiated resource for that type of virtual image. When a user is finished using the resource, it is returned to the pool of undifferentiated resources.

2. Differentiated resources are preconfigured but can be made available to the end-user at will or on schedule. A group of machines that may be located in a university computing laboratory and that are made available to users over the network when the laboratory is closed may be classified as differentiated if the end-user does

not have the right to reload them at will, that is, can use them only in the already configured (differentiated) state. Similarly, access to a logical partition (LPAR) or an account on an IBM System z resource may be considered access to an already differentiated resource. In this case, when the user is finished with the resource, it is returned to the pool but remains differentiated. Any Web services offered through VCL-affiliated resources fall into the same category.

VCL offers the ability to incorporate new hardware with ease, and to administer both the metadata about the hardware and the hardware itself (reload, default image, etc.). Hardware can be grouped into logical groups that reflect its properties (interconnectivity, processor type, and ownership); images can be mapped onto any hardware component or onto specific hardware groups (e.g., those tightly coupled through a high-speed interconnect) and assigned to specific user groups.

3. VCL architecture: basic VCL network configurations

One of the key features of the undifferentiated VCL resources is their networking setup, which allows for secure dynamic reconfiguration, loading of images, and isolation of individual images and groups of images. Every undifferentiated resource is required to have at least two network interfaces: one on a private network, and one on either a public or a private network, depending on the mode in which the resource operates. For full functionality, undifferentiated resources must also have a way of managing the hardware state through an external channel, e.g., through the IBM BladeCenter chassis management module (MM).

3.1. The dual-use physical network configuration

Typically, the eth0 interface of a blade is connected to a private network (the 10.1 subnet in the example), which is used to load images. Out-of-band management of the blades (e.g., power cycling) is effected through the management network (172.30.1 in the example) connected to the MM interface.
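The interaction between the resource pool model above (undifferentiated blades checked out, imaged, and returned) and the out-of-band management channel can be sketched in a few lines of Python. This is purely an illustrative model, not the actual VCL code: the class names, the addresses, and the in-memory stand-ins for image loading and power cycling are all assumptions.

```python
from typing import List, Optional
from dataclasses import dataclass

@dataclass
class Blade:
    """One undifferentiated resource with its two required channels."""
    name: str
    eth0: str  # private image-loading network (10.1 subnet in the example)
    mm: str    # out-of-band management address (172.30.1 in the example)
    image: Optional[str] = None  # None => blade sits in the undifferentiated pool

class Pool:
    """Blades are undifferentiated until an image is loaded, and return
    to the undifferentiated pool once the end-user is finished."""
    def __init__(self, blades: List[Blade]):
        self.free = list(blades)
        self.in_use: List[Blade] = []

    def provision(self, image: str) -> Blade:
        blade = self.free.pop(0)   # any undifferentiated blade will do
        blade.image = image        # real system: image pushed over eth0,
        self.in_use.append(blade)  # then the blade is power-cycled via MM
        return blade

    def release(self, blade: Blade) -> None:
        blade.image = None         # wiped, hence undifferentiated again
        self.in_use.remove(blade)
        self.free.append(blade)

pool = Pool([Blade("blade1", "10.1.1.1", "172.30.1.1"),
             Blade("blade2", "10.1.1.2", "172.30.1.2")])
seat = pool.provision("linux-desktop")
print(seat.name, seat.image)  # blade1 linux-desktop
pool.release(seat)
print(len(pool.free))         # 2
```

A differentiated resource would differ only in that release() returns it to the pool with its image intact.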
The public network is typically connected to the eth1 interface. The VCL node manager (which could be one of the blades in the cluster or an external computer) at the VCL site has

access to all three links; that is, it needs three network interfaces: public, management, and image-loading (Fig. 1).

Fig. 1 VCL dual-use physical network configuration

This VCL configuration can be used for environments where seats and services are assigned individually or in synchronized groups, or when assigning or constructing an end-user image aggregate or environment in which every node can be accessed from the public network.

3.2. VCL high-performance computing physical network setup

In this setup, the node manager is still connected to all three networks - public, management, and the image-loading/preparation private network - but eth1 is now connected (through VLAN manipulation, VLAN 5) to what has become a Message Passing Interface (MPI) network switch. This network carries the intra-cluster communications needed for the tightly coupled computing usually assigned to the HPC cloud services. Switching between non-HPC mode and HPC mode takes place electronically, through VLAN manipulation and table entries; the actual physical setup does not change. Two different VLANs are applied to eth1 to separate public (external) network access to individual blades when they are in individual-seat mode (VLAN 3) from the MPI communications network to the same blades when they are in HPC mode (VLAN 5 in configuration 2); see Fig. 2.
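The electronic mode switch just described can be sketched as a toy VLAN table. VLAN IDs 3 and 5 come from the text; everything else (the function name, the dict standing in for the switch's VLAN table) is an illustrative assumption rather than the actual VCL implementation.

```python
PUBLIC_VLAN = 3  # individual-seat mode: eth1 reachable from the public network
MPI_VLAN = 5     # HPC mode: eth1 carries intra-cluster MPI traffic

def set_mode(vlan_table: dict, blade: str, hpc: bool) -> None:
    """Retag the blade's eth1 switch port; no physical recabling occurs."""
    vlan_table[blade] = MPI_VLAN if hpc else PUBLIC_VLAN

vlan_table = {}                            # port -> VLAN, standing in for the
set_mode(vlan_table, "blade7", hpc=False)  # switch's real VLAN table entries
print(vlan_table["blade7"])                # 3 (individual seat)
set_mode(vlan_table, "blade7", hpc=True)   # blade joins the HPC cluster
print(vlan_table["blade7"])                # 5 (MPI network)
```

The point of the sketch is that only table entries change between the two modes, which is what lets VCL move blades between desktop and HPC service without touching the cabling.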

Fig. 2 VCL high-performance computing physical network setup

In this second VCL architecture, blades are assigned either to a tightly coupled HPC cluster environment or to a large overlay (virtual) cloud with relatively centralized public access and computational management.

Benefits for NCSU

NC State's intelligent provisioning system also improves its ability to flexibly allocate resources among instructional, administrative, and research activities, each of which has its own pattern of resource requirements. A key example is the near-complete drop-off of student computing activity that is typical during breaks between semesters. The new system gives NC State the ability to quickly and easily switch the bulk of its roughly 1,000 IBM BladeCenter server blades to the computationally intense requirements of researchers, such as running complex models and simulations, in the process leveraging what would otherwise have been idle server capacity to advance the goals of the university's researchers. The same capability enables NC State to shift capacity to administrative functions, such as class registration, that produce a surge in processing activity before each academic semester. All of these benefits point to how intelligent provisioning and similar cloud initiatives effect a more granular optimization of computing resources, which

enables NC State to handle the academic computing requirements of a growing student population while minimizing the growth of its infrastructure. NCSU's goal was to rethink the way it met the academic computing needs of students, instructors, and the other populations the university serves. By collaborating with IBM and transferring R&D results, NCSU was better able to deliver on that mission.

VCL Project Partners:
NCSU: North Carolina State University, US
IBM, US