The Missing Link: Putting the Network in Networked Cloud Computing




Ilia Baldine, Yufeng Xin, Daniel Evans, Chris Heerman
Renaissance Computing Institute (RENCI)

Jeff Chase, Varun Marupadi, Aydan Yumerefendi
Department of Computer Science, Duke University

1. INTRODUCTION

The backbone of IT infrastructure is evolving towards a service-oriented model, in which distributed resources, either software or hardware, can be composed into a customized IT service on demand. In particular, cloud computing infrastructure services manage a shared cloud of servers as a unified hosting substrate for diverse applications, using various technologies to virtualize servers and orchestrate their operation. Emerging cloud infrastructure-as-a-service efforts include Eucalyptus, Nimbus, Tashi, OpenCirrus, and IBM's Blue Cloud. Extending cloud hosting into the network is a crucial step to enable on-demand allocation of complete networked IT environments. This paper reports on our effort to extend cloud resource control to cloud networks with multiple substrate providers, including network transit providers. Our vision is to enable cloud applications to request virtual servers at multiple points in the network, together with bandwidth-provisioned network pipes and other network resources to interconnect them. This capability is a significant advance beyond the cloud infrastructure-as-a-service models that are generating so much excitement today. Specifically, we report on a RENCI-Duke collaboration (http://www.geni-orca.renci.org) to build a cloud network testbed for the Global Environment for Network Innovations (GENI) initiative recently launched by the National Science Foundation and BBN. GENI (http://www.geni.net) embodies an ambitious, forward-looking vision of cloud networks as a platform for research in network science and engineering.
A key goal of GENI is to enable researchers to experiment with radically different forms of networking by running experimental systems within private, isolated slices of a shared testbed substrate. A GENI slice gives its owner control over some combination of virtualized substrate resources assigned to the slice, which may include virtual servers, storage, programmable network elements, networked sensors, mobile/wireless platforms, and other programmable infrastructure components attached to the cloud network. GENI slices are built to order for the needs of each experiment. We focus on progress in building a unified control framework for a prototype GENI facility incorporating RENCI's optical network stacks on the Breakable Experimental Network (BEN). BEN is a testbed for open experimentation on dedicated optical fiber that spans the Research Triangle area and links server clusters on each campus. We have demonstrated a key milestone: on-demand creation of complete end-to-end slices with private IP networks linking virtual machines allocated at multiple sites (RENCI, Duke, and UNC). The private IP networks are configured within stitched layer-2 VLANs instantiated from the BEN metro-scale optical network and the National LambdaRail (NLR) FrameNet service. In the context of GENI, this capability enables a researcher to conduct safe, reproducible experiments with arbitrarily modified network protocol stacks on a private, isolated network that meets defined specifications for the experiment.

2. A CONTROL FRAMEWORK FOR A MULTI-LEVEL CLOUD NETWORK

Our ultimate goal is to manage the network substrate as a first-class resource that can be co-scheduled and co-allocated along with compute and storage resources, to instantiate a complete built-to-order network slice hosting a guest application, service, network experiment, or software environment.
The networked cloud hosting substrate can incorporate network resources from multiple transit providers and server hosting or other resources from multiple edge sites (a multi-domain substrate).

(This work was supported by the National Science Foundation GENI Initiative, NSF award CNS-0509408, and an IBM Faculty Award.)

Cloud networks present new challenges for the control and management software. How to incorporate diverse substrate resources into a unified cloud hosting environment? How to allocate and configure all the parts of a guest environment (a slice of the cloud network) in a coordinated way? How to stitch together interconnections among substrate resources obtained from different providers to create a seamless end-to-end slice? How to protect the security and integrity of each provider's infrastructure, and protect hosting providers from abuse by the hosted guests? How to verify that a slice built to order for a particular guest is in fact behaving as expected? How to ensure isolation of different guest slices hosted on the same substrate? How to provide connectivity across slices when connectivity is desired, and police the flow of traffic?

2.1 BEN Substrate

IP networks are often deployed as overlays on dedicated circuits provisioned from an underlying network substrate. Networks that support both IP overlays and dynamic circuit provisioning are known as hybrid or multi-layer networks. The regional Breakable Experimental Network (BEN) is an example of a multi-layer optical network. In 2008, the Triangle universities (UNC-CH, Duke, and NCSU), in collaboration with RENCI (Renaissance Computing Institute) and MCNC, began the rollout of a metro-scale optical testbed. BEN consists of dark fiber, provided by MCNC, interconnecting sites (BEN PoPs) at the three universities, RENCI, and MCNC. It gives university researchers access to a unique facility dedicated exclusively to experimentation with disruptive technologies. RENCI has installed access equipment at each of the BEN PoPs, based on Polatis fiber switches that mediate access to the shared fiber.
Above each Polatis switch, RENCI maintains a default stack of network equipment that can provision dynamic circuits between pairs of PoPs, and instantiate layer-2 VLANs and IP connectivity across those circuits. Figure 1 depicts the stack of network elements at each BEN PoP, reflecting the multiple layers of the BEN network. At the bottom of the stack is an all-optical fiber switch, in the middle an optical transport network switch (Infinera DTN), and at the top an Ethernet switch (Cisco 6509). The BEN network architecture defines adaptations at each layer; Figure 1(b) shows the functional diagram of the layer stack. The Infinera DTN is equipped with multiple 10 Gigabit Ethernet (10 GE) client-side interfaces that connect to the 10 GE line-side interfaces of the 6509, which itself exposes multiple 1 Gigabit Ethernet (1 GE) client-side interfaces. The DTN first adapts each 10 GE signal onto a wavelength, then multiplexes 10 wavelengths into an internal channel group, then multiplexes up to four channel groups onto a line-side fiber. BEN includes a secure management plane: a private IP network for communicating with control interfaces on the various network elements. These control interfaces accept management commands to provision circuits, link them together into well-formed networks, and expose them as VLANs at the BEN edge. Some of the BEN PoPs also have links to NLR FrameNet endpoints, which can be used to link VLANs through NLR's national-footprint network and connect them with the VLANs hosted on BEN.

2.2 ORCA Control Framework

Our control framework software is based on the Open Resource Control Architecture (ORCA) [Irwin et al. 2006; Chase et al. 2007; Yumerefendi et al. 2007; Chase et al. 2008; Constandache et al. 2008; Lim et al. 2009], an extensible platform for dynamic leasing of resources in a shared network infrastructure.
The ORCA platform is in open-source release as a candidate control framework for GENI, and is a basis for ongoing research on secure cloud computing and autonomic hosting systems. For this project, we developed plug-in handler extensions for ORCA to control BEN network elements by issuing commands over the secure management plane. We also developed plug-in resource control extensions to coordinate allocation of BEN circuits and VLAN tags, and to oversee VLAN linkages. Finally, we extended virtual machine handlers in ORCA to connect virtual machines to VLANs, and to configure them as nodes in an IP network overlaid on those VLANs. In this way, a guest can ask the ORCA service to allocate virtual machines at server sites adjacent to the BEN PoPs on each campus, link them to a transit network dynamically provisioned from BEN, and configure them to form a complete private IP network. Users can build a network through a Web portal interface, or using a programmed slice controller that interacts with ORCA resource servers to build and control their custom network.
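To make the division of labor concrete, the sketch below models the shape of a substrate handler plug-in: a generic setup/teardown interface, and a substrate-specific subclass that translates those actions into management-plane commands. This is an illustrative Python sketch, not the actual ORCA (Java) API; every class and command name here is hypothetical.

```python
# Hypothetical sketch of a handler plug-in: ORCA-like setup/teardown
# actions translated into substrate-specific management commands.

class ResourceHandler:
    """Generic interface a control framework invokes per resource unit."""
    def setup(self, unit):
        raise NotImplementedError
    def teardown(self, unit):
        raise NotImplementedError

class BenVlanHandler(ResourceHandler):
    """Hypothetical handler that provisions a VLAN across BEN PoPs."""
    def __init__(self, driver):
        self.driver = driver  # issues commands over the management plane
    def setup(self, unit):
        for pop in unit["pops"]:
            self.driver.send(pop, f"create-vlan {unit['vlan_tag']}")
        return "active"
    def teardown(self, unit):
        for pop in unit["pops"]:
            self.driver.send(pop, f"delete-vlan {unit['vlan_tag']}")
        return "closed"

class RecordingDriver:
    """Stand-in for a device driver: records commands instead of sending."""
    def __init__(self):
        self.log = []
    def send(self, pop, cmd):
        self.log.append((pop, cmd))

driver = RecordingDriver()
handler = BenVlanHandler(driver)
state = handler.setup({"pops": ["Duke", "UNC", "RENCI"], "vlan_tag": 101})
```

The point of the pattern is that the leasing core stays substrate-neutral: only the handler subclass knows how to talk to a particular device family.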

[Figure 1 appears here. (a) BEN PoP network element: the stack at each PoP comprises a Cisco 6509 Ethernet switch (1 GE client-side links, 10 GE uplinks), an Infinera DTN with BMM and TAM modules, and a reconfigurable fiber switch. (b) Layer adaptation functional diagram: each 10 GE client signal maps to one 10 Gb/s channel (DTF-1 through DTF-10); 10 such channels form one optical carrier group (OCG-1 through OCG-4, 100 Gb/s each); up to 4 OCGs share a line-side fiber (400 Gb/s).] Fig. 1. Network elements in each PoP of BEN, a multi-layer transport network.

2.3 A Language for Cloud Networks

One focus of the project is to advance standards and representations for describing network cloud substrates declaratively. There is a need for a common declarative language that can represent a multi-level physical network substrate, complex requests for network slices, and the virtualized network resources (e.g., linked circuits and VLANs) leased for a slice, i.e., allocated and assigned to it. Ideally, we could specify all substrate-specific details declaratively, so that we can incorporate many diverse substrates into a network cloud based on a general-purpose control framework and resource leasing core. Declarative representations are difficult in this domain because of the need to express complex relationships among components (e.g., network adjacency), properties and constraints of each network level, and constraints involving multiple levels. Our approach extends the Network Description Language (NDL [Ham et al. 2008]). NDL representations are documents in RDF (Resource Description Framework), a syntax for describing sets of objects and their properties and relationships (predicates). NDL is an ontology: a set of resource types and relationships (properties or predicates) that make up a vocabulary for describing complex networks in RDF syntax. An NDL document uses the NDL vocabulary to specify a set of resource elements and relationships among them, whose meanings are defined by NDL.
NDL has been shown to be useful for describing heterogeneous optical network substrates and identifying candidate cross-layer paths through those networks. One contribution of the project is to extend NDL with a more powerful ontology defined using OWL (the Web Ontology Language). The result is a compatible extension of NDL, which we refer to as NDL-OWL. The ultimate goal of this process is to create a representation language that is sufficiently powerful to enable generic resource control modules to reason about substrate resources and the ways that the system might share them, partition them, and combine them. Each resource control action, such as allocating or releasing resources for a slice, affects the disposition of the remaining substrate inventory. To meet our goals, the declarative representation must also capture these substrate-specific constraints on allocation and sharing. These constraints are crucial for the resource control plug-in modules in ORCA, which are responsible for allocating and configuring substrate resources for each slice. OWL is an RDF vocabulary for describing ontologies. The power of OWL derives from a rich vocabulary

for defining relationships among the resource types and among the predicates in the ontologies that it describes. In addition to hierarchical classes and predicates, OWL introduces logic-expressive capabilities, including class constraints such as disjointness, intersection, union, and complement, and property constraints such as transitivity, symmetry, inverses, and cardinality. An OWL ontology uses these capabilities to define the structure and relationships of the predicates and resource types that make up the ontology's vocabulary. Given knowledge of these relationships in an ontology, an inference engine can ingest an RDF document based on the ontology, and manipulate it or infer additional properties beyond those that are explicitly represented in the document. For example, in NDL-OWL, the hasInterface and interfaceOf properties are related in the ontology using the inverseOf property axiom in OWL: software can thus infer the property in one direction from a statement that the inverse property holds in the other direction. We use the TransitiveProperty axiom in OWL to define connectivity and adaptation properties. These features are useful for path-finding algorithms: for example, if each consecutive pair of points along a sequence is connected, an end-to-end path can be inferred. RDF and OWL were developed as core technologies for the Semantic Web, and are widely used W3C standards [Antoniou and Harmelen 2008]. They are powerful, flexible, and expressive formalisms for representing structured knowledge, and are especially suitable for modeling graph structures such as complex network clouds.
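The two inferences just described can be sketched concretely. The toy Python model below applies an inverseOf-style rule and a transitive closure over a tiny triple set; the triple store and names here are illustrative stand-ins, not the OWL reasoner our prototype actually uses.

```python
# Toy model of two OWL inferences over (subject, predicate, object) triples.
triples = {
    ("dtn1", "hasInterface", "if1"),
    ("if1", "connectedTo", "if2"),
    ("if2", "connectedTo", "if3"),
}

def infer_inverse(triples, prop, inverse):
    """owl:inverseOf: hasInterface(x, y) entails interfaceOf(y, x)."""
    return triples | {(o, inverse, s) for (s, p, o) in triples if p == prop}

def infer_transitive(triples, prop):
    """owl:TransitiveProperty: close connectedTo under composition."""
    closed = set(triples)
    changed = True
    while changed:
        new = {(a, prop, d)
               for (a, p1, b) in closed if p1 == prop
               for (c, p2, d) in closed if p2 == prop and b == c}
        changed = not new <= closed
        closed |= new
    return closed

kb = infer_transitive(infer_inverse(triples, "hasInterface", "interfaceOf"),
                      "connectedTo")
# kb now also contains ("if1", "interfaceOf", "dtn1") and
# ("if1", "connectedTo", "if3"): an end-to-end path has been inferred.
```

A production reasoner applies many such axioms at once, but the derived facts are of exactly this form.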
We have developed an ontology-based cross-layer network provisioning service system that contains the following components: (1) a suite of ontologies (NDL-OWL) that can describe various network and compute resources; (2) representations of user requests and of allocated subnetworks (slices) at multiple levels of abstraction; (3) abstraction and accounting of available and used resources, integrated with the policy controller interfaces in the ORCA control framework; and (4) common end-to-end path and virtual topology mapping and release APIs that can generate schedules of configuration actions for the network elements.

3. NDL-OWL

We emphasize a common suite of ontology elements that can describe the physical network substrate, requests for allocations of slice resources from the cloud network, and the current configuration of a partially allocated substrate after satisfying some set of requests. NDL, the basis for our work with NDL-OWL, is sufficiently powerful to express network topology and connectivity at multiple layers or levels of abstraction. NDL also models the adaptations between layers in a multi-layer network setting (see Figure 1). For example, each transport service at a layer (WDM, SONET/SDH, ATM, Ethernet, etc.) supports some set of defined adaptations, e.g., different styles of Ethernet over WDM (such as 10GBase-R) and VLAN over native Ethernet. Consistent and compatible adaptations between layers must be present to establish connectivity along a path. The fundamental classes and properties in NDL include the Interface class; the Adaptation class, which defines the adaptation relationship between layers; the connectedTo and linkTo predicates, which define connectivity between instances of Interface; and the switchedTo predicate, which defines cross-connects within a switching matrix among a group of interfaces.
A valid path between two devices normally comprises a sequence of triples with combinations of the properties hasInterface, adaptation, connectedTo, linkTo, adaptationOf, and interfaceOf.

3.1 Accounting for Dynamic Provisioning

In addition to specifying the topology of the network substrate and the adaptations between layers, an NDL-OWL model incorporates concepts necessary for dynamic service provisioning, such as the capacity of a network resource, e.g., bandwidth and QoS attributes. One important concept added by NDL is the Label: an entity that distinguishes or identifies a given connection or adaptation instance among others sharing a given network component. For example, some labels correspond to channel IDs along a physical link, e.g., a particular fiber in a conduit, a wavelength along a fiber, or a time slot in a SONET or OTN frame. Labels may be viewed as a type of resource to be allocated from a label pool associated with each component. The label range is fixed, and a particular physical channel has fixed capacity. For example, in an 802.1Q tagged Ethernet network, the VLAN ID serves as the unique resource label and has a fixed range (a 12-bit field, 0-4095). NDL-OWL generalizes the NDL concept of Label to enable dynamic accounting of network resources. We extend the Label class to associate capacity and QoS characteristics with each transport entity. NDL-OWL defines two properties, availableLabelSet and usedLabelSet, to track dynamic resource allocation. We use the

OWL collection data structures (set, list, and collection) to define various label pool constructs.

<ndl:Interface rdf:about="#UNC/Infinera/DTN/fB/1/fiber">
  <rdf:type rdf:resource="&dtn;FiberNetworkElement"/>
  <dtn:availableOCGSet rdf:resource="#UNC/Infinera/DTN/fB/1/fiber/availableOCGSet"/>
  <dtn:usedOCGSet rdf:resource="#UNC/Infinera/DTN/fB/1/fiber/usedOCGSet"/>
  <dtn:ocg rdf:resource="#UNC/Infinera/DTN/fB/1/ocgB/1"/>
  <ndl:interfaceOf rdf:resource="&ben;UNC/Infinera/DTN"/>
  <ndl:linkTo rdf:resource="&ben;UNC/Polatis/f6-22"/>
</ndl:Interface>

Fig. 2. An NDL-OWL description of a DTN line-side port.

<dtn:OCGNetworkElement rdf:about="#UNC/Infinera/DTN/fB/1/ocgB/1">
  <rdf:type rdf:resource="&ndl;Interface"/>
  <dtn:availableLambdaSet rdf:resource="#UNC/Infinera/DTN/fB/1/ocgB/1/availableLambdaSet"/>
  <dtn:usedLambdaSet rdf:resource="#UNC/Infinera/DTN/fB/1/ocgB/1/usedLambdaSet"/>
</dtn:OCGNetworkElement>

Fig. 3. An NDL-OWL description of the OCG interface of a DTN line-side port.

Figure 2 describes a line-side port of the Infinera DTN switch at a BEN PoP. Each element is an instance of one or more resource types (classes) defined by NDL-OWL, and is uniquely named by a URI. The DTN line-side port is an instance of the classes Interface and FiberNetworkElement, which is a subclass of LayerNetworkElement. The port interface has an available set of four OCG labels, accounted for by the properties dtn:availableOCGSet and dtn:usedOCGSet. Figure 3 is a snippet of the OCG interface definition. The fiber and OCG ports provide physical fiber connectivity in the substrate, and are described within the model. As lease requests arrive and network resources are provisioned and assigned to slices, virtual connectivity elements are added to or removed from the virtual interfaces at the various layers accordingly. For example, the lambda interface adapted within an OCG port, and the 10 GE port adapted onto the lambda port, are generated dynamically.
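The availableLabelSet/usedLabelSet accounting amounts to a simple pool discipline, which can be sketched as follows. This is a minimal illustration, assuming integer labels such as VLAN tags or OCG identifiers; the class and method names are ours, not part of NDL-OWL.

```python
# Minimal sketch of label-pool accounting: labels move between an
# available set and a used set as resources are allocated and released.
class LabelPool:
    def __init__(self, labels):
        self.available = set(labels)
        self.used = set()

    def allocate(self):
        if not self.available:
            raise RuntimeError("label pool exhausted")
        label = min(self.available)   # deterministic choice for the sketch
        self.available.remove(label)
        self.used.add(label)
        return label

    def release(self, label):
        self.used.remove(label)
        self.available.add(label)

ocg_pool = LabelPool(range(1, 5))     # four OCG labels on a line-side fiber
a = ocg_pool.allocate()               # -> 1
b = ocg_pool.allocate()               # -> 2
ocg_pool.release(a)                   # label 1 returns to the available set
```

In the real system the pools live in the RDF model and are manipulated through it, but the invariant is the same: available and used sets partition the fixed label range.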
Dynamically allocated resources are tracked through operations on label pools, which are specified declaratively in the NDL-OWL data model. For example, the set of available wavelengths on an OCG port (10 wavelengths per OCG) is represented by the property availableLambdaSet.

3.2 Dynamic Provisioning Using SPARQL Queries

We integrated basic network provisioning support under the policy and resource control plug-in interfaces of the Java-based ORCA control framework. The algorithmic challenges for dynamic provisioning include finding a (shortest) well-formed path between two entities (device, interface, or subnetwork), and mapping a virtual topology consisting of multiple paths among multiple entities. Once a mapping is selected, the system generates the configuration actions needed to establish the selected connectivity service along the selected paths. For example, the most general action is to command some switch along the path to cross-connect two interfaces. The command sets include ordered lists of ports to be configured, and labels selected from the available label sets by the provisioning algorithm. Similarly, when a path is torn down, the algorithm generates action lists to release its allocated labels and the other configuration state and resources held along the path. Our software implements most network management tasks as queries on in-memory data structures built from these declarative representations. Our Java-based prototype uses the Jena semantic web toolkit and SPARQL (the SPARQL Protocol and RDF Query Language). Jena is a Java programmatic environment for RDF models based on OWL ontologies. It provides basic RDF and OWL reasoning engines, a mechanism to plug in customized rule engines, and a powerful SPARQL API package for semantic queries using a familiar SQL-like query syntax.
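To convey the flavor of such queries, the sketch below implements a toy conjunctive graph-pattern matcher: bindings for variables (written "?x") are joined across a sequence of triple patterns, which is the core of SPARQL's basic graph pattern semantics. This is a Python stand-in, not Jena or SPARQL itself, and all data in it is hypothetical.

```python
# Toy graph pattern matcher over (subject, predicate, object) triples.
def match_pattern(triples, pattern, binding):
    """Yield extended bindings for a single (s, p, o) pattern."""
    for triple in triples:
        b = dict(binding)
        for term, value in zip(pattern, triple):
            if term.startswith("?"):                 # variable term
                if b.setdefault(term, value) != value:
                    break
            elif term != value:                      # constant mismatch
                break
        else:
            yield b

def query(triples, patterns):
    """Conjunctive query: join bindings across all patterns."""
    bindings = [{}]
    for pattern in patterns:
        bindings = [b2 for b in bindings
                    for b2 in match_pattern(triples, pattern, b)]
    return bindings

triples = {
    ("dtnA", "hasInterface", "ifA"),
    ("ifA", "linkTo", "ifB"),
    ("ifB", "interfaceOf", "dtnB"),
}
# "Which device is reachable from dtnA over one link?"
result = query(triples, [("dtnA", "hasInterface", "?i"),
                         ("?i", "linkTo", "?j"),
                         ("?j", "interfaceOf", "?d")])
```

Path-finding queries over an NDL-OWL substrate model are conjunctions of exactly this kind, with the ontology's inference rules enlarging the triple set first.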

SPARQL is essentially a graph pattern matching query language; the semantics and complexity analysis of SPARQL queries can be found in [W3C Recommendation 2008; Pérez et al. 2009]. Our prototype handles constrained forms of the virtual topology mapping problem, which is known to be NP-hard. The software formulates suitable queries and relies on Jena to perform the mapping operations, given NDL-OWL descriptions of the substrate and a sequence of request and release operations. The details of the topology mapping queries are beyond the scope of this paper. Since the BEN network is small, the cost of the mapping operation is not prohibitive.

4. PROJECT STATUS AND FUTURE

We have deployed the software prototype along with ORCA in the BEN metro-scale cloud network testbed. The provisioning engine runs under an ORCA network domain authority server. It emits configuration command sets to software drivers we developed for the native TL-1 interfaces of the fiber switch and the WDM DTN, and for the CLI interface of the Ethernet switch, so that cross-connects can be configured and released as needed to map slice requests onto the testbed. The BEN authority exposes a range of available VLAN tags to an ORCA broker. The broker issues tickets for VLANs and for virtual machines on server arrays adjacent to specific BEN PoPs. Once the broker issues a ticket for a VLAN on BEN, the requester (more precisely, an ORCA slice controller running on the requester's behalf) presents an NDL-OWL order specifying the requested virtual topology to the BEN transit authority. If the request can be filled, the BEN authority maps and instantiates the topology and exports it as an isolated end-to-end VLAN terminating at each of the mapped PoPs. Once it has received notification that the VLAN has been instantiated, the slice controller presents its virtual machine tickets and secure VLAN tokens to the site authorities at the ticketed PoPs. The site authorities instantiate the virtual machines and attach them to the VLAN.
As each VM instantiates, the owning slice controller connects to it over a secure socket on the BEN management network, and configures it for IP service on an IP subnet within the configured VLAN. The slice controller can also launch and control a networked application or guest environment within the configured slice. Our initial experience with the ontology-based approach has been promising. The prototype functions sufficiently well to demonstrate reliable dynamic provisioning of multiple, concurrent, isolated slices of the BEN network, in tandem with Xen virtual machines provisioned from the edge sites. We currently use canned request descriptions in NDL-OWL, and we process requests in an arbitrary sequence rather than attempting to optimize the mappings when all requests cannot be satisfied simultaneously. We are continuing to refine our approaches to topology mapping and request generation, and to enrich the abstract view of the BEN network exported to the ORCA broker.

REFERENCES

ANTONIOU, G. AND HARMELEN, F. 2008. A Semantic Web Primer. MIT Press.
CHASE, J., CONSTANDACHE, I., DEMBEREL, A., GRIT, L., MARUPADI, V., SAYLER, M., AND YUMEREFENDI, A. 2008. Controlling Dynamic Guests in a Virtual Computing Utility. In International Conference on the Virtual Computing Initiative (an IBM-sponsored workshop).
CHASE, J., GRIT, L., IRWIN, D., MARUPADI, V., SHIVAM, P., AND YUMEREFENDI, A. 2007. Beyond Virtual Data Centers: Toward an Open Resource Control Architecture. In Selected Papers from the International Conference on the Virtual Computing Initiative (ACM Digital Library).
CONSTANDACHE, I., YUMEREFENDI, A., AND CHASE, J. 2008. Secure Control of Portable Images in a Virtual Computing Utility. In First Workshop on Virtual Machine Security (VMSec).
HAM, J., DIJKSTRA, F., GROSSO, P., POL, R., TOONK, A., AND LAAT, C. 2008. A Distributed Topology Information System for Optical Networks Based on the Semantic Web. Journal of Optical Switching and Networking 5, 2-3 (June).
IRWIN, D., CHASE, J. S., GRIT, L., YUMEREFENDI, A., BECKER, D., AND YOCUM, K. G. 2006. Sharing Networked Resources with Brokered Leases. In Proceedings of the USENIX Annual Technical Conference.
LIM, H., BABU, S., CHASE, J., AND PAREKH, S. 2009. Automated Control in Cloud Computing: Challenges and Opportunities. In Proceedings of the First Workshop on Automated Control for Datacenters and Clouds (ACDC).
PÉREZ, J., ARENAS, M., AND GUTIERREZ, C. 2009. Semantics and Complexity of SPARQL. ACM Transactions on Database Systems 34, 3.
W3C RECOMMENDATION. 2008. SPARQL Query Language for RDF.
YUMEREFENDI, A., SHIVAM, P., IRWIN, D., GUNDA, P., GRIT, L., DEMBEREL, A., CHASE, J., AND BABU, S. 2007. Towards an Autonomic Computing Testbed. In Workshop on Hot Topics in Autonomic Computing (HotAC).