Advanced Network Virtualization: Definition, Benefits, Applications, and Technical Challenges
Version 1.0
January 2012
Table of Contents

1. Introduction
  1.1. Objectives of the specification
  1.2. Definition of the advanced network virtualization
  1.3. Requirements for the network virtualization
  1.4. Objectives of the network virtualization
  1.5. Benefits from the network virtualization
  1.6. Features and capabilities not achieved by existing technologies
  1.7. Existing projects and studies
  1.8. Map of requirements/features, benefits, applications, and possible business influences of the network virtualization
2. Benefits from the network virtualization
  2.1. Players category
  2.2. Benefits specific to the players - summary
  2.3. Benefits to infrastructure operators
  2.4. Benefits to service providers
  2.5. Benefits to users
3. Technical challenges for the network virtualization
  3.1. Infrastructure operator viewpoint
    3.1.1. Abstraction
      3.1.1.1. Enable evolvable schema for abstracting and naming resources
      3.1.1.2. Substrate technologies
      3.1.1.3. Organize a large number of edge devices and sensors and a large amount of resources
    3.1.2. Isolation
      3.1.2.1. Enable performance and security isolation on resource-scarce edge devices
      3.1.2.2. Substrate technologies to enable stringent isolation
      3.1.2.3. Scalability of the number of slices
    3.1.3. Elasticity
      3.1.3.1. Enable instant (in the order of seconds) allocation of resources
      3.1.3.2. Scalability for resource control
    3.1.4. Programmability
      3.1.4.1. Operation and management for system-level network programmability
      3.1.4.2. Candidate technologies for programmability
    3.1.5. Whole system
      3.1.5.1. System integration technology enabling inter-cloud interactions
      3.1.5.2. Data plane for inter-cloud interactions
      3.1.5.3. Control plane for inter-cloud interactions
  3.2. Service provider viewpoint
    3.2.1. Abstraction
      3.2.1.1. Development of service provisioning interfaces to specify service requirements
      3.2.1.2. Development of interpretation from service requirements to the corresponding resources
      3.2.1.3. Development of specialized languages for service requirements and resource allocations
    3.2.2. Isolation
      3.2.2.1. Development of performance benchmarking and SLA
    3.2.3. Programmability
      3.2.3.1. Dynamic service composition
      3.2.3.2. Controllability and manageability
    3.2.4. Elasticity
      3.2.4.1. Optimal resource assignments
      3.2.4.2. Consistent and sustainable provisioning
    3.2.5. Whole system
      3.2.5.1. Operation
      3.2.5.2. Inter-cloud networking
    3.2.6. Traffic measurement and its analysis
4. Candidate applications suitable for network virtualization
  4.1. Network architecture
    4.1.1. OpenFlow in a slice
    4.1.2. OpenTag
    4.1.3. Content-oriented switching
  4.2. Network services
    4.2.1. P2P packet cache
    4.2.2. High-definition video delivery
    4.2.3. Ad-targeting
    4.2.4. Inter-cloud interactions
    4.2.5. In-network data grid
References and glossaries
Disclaimers
This paper is contributed by NVSG (Network Virtualization Study Group); however, NVSG does not guarantee the accuracy, availability, or reliability of any material contained in the paper. Readers assume all responsibility for their use of the paper, and NVSG has no obligation for any damages arising from such use. Copyright of the paper is reserved by NVSG. Printing, copying, and free distribution are allowed, while modification, sale, public transmission, publication, translation, or adaptation without the authors' prior consent is strictly prohibited, regardless of whether the purpose is for profit or nonprofit.

Contact Information
Network Virtualization Study Group: nvlab-study@nakao-lab.org (possibly to be changed)

NVSG Members
Akihiro NAKAO (University of Tokyo)
Tomonori AOYAMA (Keio University)
Atsushi TAKAHARA (NTT)
Noriyuki TAKAHASHI (NTT)
Hideaki TANAKA (KDDI R&D Laboratories)
Kenichi OGAKI (KDDI R&D Laboratories)
Michiaki HAYASHI (KDDI R&D Laboratories)
Toshiaki KURI (NICT)
Kiyohide NAKAUCHI (NICT)
Nozomu NISHIHARA (NICT)
Toshiaki SUZUKI (NICT/Hitachi, Ltd.)
Motoo NISHIHARA (NEC Corporation Central Research Laboratories)
Yoshiaki KIRIHA (NEC Corporation Central Research Laboratories)
Daisuke MATSUBARA (Hitachi, Ltd.)
Tomohiro ISHIHARA (FUJITSU LABORATORIES LTD.)
1. Introduction
This clause briefly introduces the advanced network virtualization in terms of its definition, requirements for the fundamental technologies, objectives, benefits, related existing technologies, and possible candidate applications. The following clauses describe these topics in detail.

1.1. Objectives of the specification
To identify key issues in the network virtualization technology
As an achievement of the center of excellence on the advanced network virtualization technology in the new-generation network (NWGN) community, the specification should identify the direction of research and development of the technology, formulate the key issues to be tackled, and invigorate intensive technical discussion of them without causing misunderstanding or confusion.

To familiarize communities with the technology
The specification should help relevant academia and organizations understand the network virtualization technology and the issues identified above. The organizations include the new-generation network forum, the technical committee of network virtualization (TC-VN) in the Institute of Electronics, Information and Communication Engineers (IEICE), the Global Inter-Cloud Technology Forum (GICTF), and the Japan Cloud Consortium (JCC).

To proceed to the design principle study for assessing the technology's social and economic values
The specification should guide the community in developing the subsequent document, which specifies the basic design principles of the advanced network virtualization. That document, to which the community and industry can publicly refer, enables fair assessment of whether the technology has social and economic value and whether the new-generation networks, derived from the virtualization technology, can produce new business opportunities.

1.2.
Definition of the advanced network virtualization
This clause introduces the definition of the advanced network virtualization, which is the subject of the specification, by referring to the general terms "virtualization" and "network". The rest of the specification uses the term "network virtualization" with the advanced meaning described below.

Virtualization is an ICT technology that releases resources from their physical constraints and models them so that they can be handled in a general way, regardless of their original nature.
A network is, in general, an information and telecommunication infrastructure consisting of nodes (at which installed software programs interpret the telecommunication protocols carried by the data and select the destination node for the data) and links (which connect the nodes and transport the data).

The advanced network virtualization introduced in this specification applies the virtualization technology to networks in the sense that the networks offer extra capabilities provided by software resources in the nodes. These capabilities go beyond the basic network capabilities, i.e., links and their selection. This extra involvement enables the nodes, and the network, to provide computation and data storage capabilities to the users. The ICT infrastructure in the new-generation networks, which installs the advanced network virtualization, governs a collection of resources ranging over links, networks, and node software as a slice, and creates a virtualized network over the slice with dynamically controllable and programmable links and nodes.

1.3. Requirements for the network virtualization
The following five requirements characterize the infrastructure for the network virtualization.

(1) Abstraction
All kinds of available physical resources, such as link, computational, and storage resources, are systematically abstracted and named so that they may be manipulated through well-defined and extensible interfaces and allocated to create, modify, reclaim, and release a slice.

(2) Isolation
Resources forming one slice are isolated from those forming another, so that the slices may not mutually interfere in terms of performance, security, and namespace, and so that no single slice may cause disruptions to the entire infrastructure.
(3) Elasticity
Resources constructing a slice are elastically allocated, reclaimed, and released on demand, so that the operators may maximize the accommodation of multiple slices, optimize the usage of the resources both temporally and spatially, and also allow instantaneous and bursty resource usage as well as continuous usage of the resources.

(4) Programmability
Resources building a slice can be programmed for developing, deploying, and experimenting with new communication protocols for innovative data dissemination, and for facilitating efficient data processing within the slice.
(5) Authentication, Authorization, and Accounting
Usage of resources for creating a slice must be authenticated and authorized so as to achieve safe and secure operation of slices, preventing abuse of the resources and malicious attacks on them. It is also necessary to account for the allocated resources in the infrastructure, so that the integrity of the resources may be examined and monitored, and their usage may be optimized.

1.4. Objectives of the network virtualization
Harmonized with the new-generation network concept, the objective is to establish a meta-architecture that simultaneously accommodates multiple heterogeneous network architectures and services, by isolating resources so that new functionalities can be created without constraints.

1.5. Benefits from the network virtualization
The ICT infrastructure making use of the network virtualization will bring the following new benefits to our information society.
- Provisioning of multiple, independent networks, each configuring specialized network behaviors and operations for QoS classification, data (packet) handling, and security/privacy protection policies. They can be customized per targeted service, application, or user.
- Creation of new network capabilities in an agile manner for emerging uses and requirements of the network, improving data transport efficiency and network robustness.
- Co-existence of well-examined legacy networks and innovative advanced networks at the same time, allowing multiple migration scenarios from legacy networks to innovative ones in an evolving fashion.
- Parallel set-up and operation of trial networks alongside commercial networks over a common ICT infrastructure, which produces economical test beds.
- Exploration of new business models which may change the roles of the stakeholders in the virtualized network environment, such as infrastructure providers, service providers, and users.

1.6.
Features and capabilities not achieved by existing technologies
Lack of programmability: Existing technologies implying network virtualization, such as logical routers or virtual private network (VPN) technologies, partly provide separation of network resources. Their node programs are, however, dedicated to a particular purpose with specific protocols. This dedication prohibits examination of new protocols and, thus, creation of new networks.
Lack of isolation and elasticity: Dynamic control of network nodes has been considered under the concept of active networks, although there is no decisive definition of that concept. An active network assumes that users can put specific software, or part of it, into packets. The nodes receiving the packets interpret the content of a packet and perform extra tasks, services, or applications. In an extreme case, a small piece of source code embedded in the packet runs on the node. Even in the active network concept, slicing the network resources into independent and mutually isolated portions, and running the code over the specific portion identified by the code, have not been considered.
Note - Some preliminary proposals may already augment active networks to cover resource isolation. This specification, however, builds on matured technologies and commercially available products, establishes a clear definition and objective, and proposes practical solutions.
Lack of integrated isolation: Current technologies for clouds and application servers apply virtualization primarily to their computational and storage resources. Some cases involve a network aspect, but not in a way integrated with the servers. They do not have a comprehensive view of, and actions on, the entire infrastructure including networks.
Lack of meta-architecture: In our analysis, OpenFlow enhances the switching function within a single sliced network. The meta-architecture proposed in this specification allows multiple networks, some of which are not necessarily controlled by OpenFlow switches. The network virtualization described in the specification explores a more generalized architecture over multiple individual networks. The technology thus tackles more complex issues from a wider view than OpenFlow does.

1.7.
Existing projects and studies
GENI (Global Environment for Network Innovations)
GENI, which is driven by the National Science Foundation (NSF) in the US, is one of the most relevant projects, providing a large-scale test bed for experimenting with innovative technologies. In GENI, multiple candidate groups of technologies, i.e., clusters, are developed in parallel to explore multiple possibilities simultaneously. Their main topic is the investigation of control frameworks. At the same time, interconnection and interoperation across different clusters are studied to form a federation. In particular, the PlanetLab/OpenFlow and ProtoGENI clusters are closely related to the project in this specification. The former cluster, PlanetLab/OpenFlow, is based on the pseudo-virtualization technology of PlanetLab using Linux-VServer, which makes it hard to examine new layer 2 protocols due to the lack of programmability. As is
described in the previous sub-clause of this specification, the OpenFlow view is narrower than the one taken here. Similarly, the latter cluster, ProtoGENI, does not achieve the programmability that is the subject of this specification. Across the whole GENI environment, flexible and smooth handling of the resources is still at an initial stage; the required level of elasticity has not yet been achieved. Regarding isolation, the current nodes in GENI do not run multiple virtualized networks using different, new protocols.

FIRE (Future Internet Research and Experimentation)
In Europe, the FIRE project under the European Union (EU) framework program drives several individual projects, such as OneLab2 based on PlanetLab, FEDERICA using Juniper virtual routers, and OFELIA based on OpenFlow switches. Unfortunately, none of them has reached the target goal in terms of programmability and isolation. NOVI is another EU project, aiming to harmonize with cloud technology, which is still at an initial phase. It should be noted that the PanLab project contributes the TEAGLE federation mechanism, which provides comprehensive control across different network and computer resources to achieve elasticity.

1.8. Map of requirements/features, benefits, applications, and possible business influences of the network virtualization
Figure 1 shows the map of relationships between the technical and business aspects of network virtualization. The following clauses discuss the elements and their relationships.
[Figure 1 (diagram): maps the technical requirements (abstraction; isolation; elasticity; programmability; authentication, authorization, and accounting) through specific values and possible business drivers (creativity, customizability, efficiency, security, sustainability) to impacts on applications and to business assessment (maximizing benefits minus costs).]
Figure 1. Relationship of technical and business aspects for network virtualization
2. Benefits from the network virtualization
A summary of the features of the network virtualization is as follows:
- Abstraction, which enables resources (i.e., computation, storage, and network) to be identified and manipulated in a well-defined manner;
- Isolation, which protects the reserved resources from interfering with each other;
- Elasticity, which allows resources to be provisioned according to demand;
- Programmability, which mobilizes the reserved resources to perform different functions; and
- Authentication, Authorization, and Accounting, which govern the resource use in a secure and safe way and keep track of the use for integrity.

By means of the network virtualization, the users can enjoy safe, sound, and tailor-made services. The following sub-clauses describe specific benefits of the network virtualization from the viewpoints of 1) infrastructure operators, who provide the elementary resources; 2) service providers, who compose services from the resources; and 3) users, who enjoy the offered services.

2.1. Players category
1) Infrastructure operators, who provide the elementary resources
The infrastructure operators manage the elementary resources, which are produced from physical network, logical network, computational, and storage resources. They offer the resources to the service providers in an abstract way.
2) Service providers, who compose services from the resources
The service providers design services and applications, obtain the required elementary resources from the infrastructure operators, and provide the composed services and applications to the users. Thanks to the features of network virtualization, service providers can offer their services according to users' demand without the annoying constraints of physical resources.
3) Users, who enjoy the offered services
The users consume the services offered by the service providers.
2.2. Benefits specific to the players - summary
This sub-clause describes benefits specific to the players, i.e., infrastructure operators, service providers, and users, in terms of their functional provisioning, service provisioning, and service consumption. Table 1 lists the player-specific benefits.

Table 1. List of player-specific benefits

Creativity
- Service providers: new innovative service creation without physical constraints
Customizability
- Service providers: expansion or reduction of the services according to their demand
- Users: service reception according to the demand
Efficiency
- Infrastructure operators: no need for service-specific infrastructures; efficient operation of the infrastructure; distribution of different service loads; reduced OPEX by efficient infrastructure operation
- Service providers: optimized resource consumption from all service viewpoints; reduced OPEX by efficient service offering
- Users: optimal service consumption
Security
- Infrastructure operators: prevention of the total network shutdown caused by malicious attacks
- Service providers: security grade setting reflecting service characteristics
- Users: consumption of services without interference with other users' conditions; different security grades in accordance with SLA
Sustainability
- Infrastructure operators: business continuity by dividing affected parts of the infrastructure; step-by-step infrastructure update without affecting other services
- Service providers: prioritized service offering in response to emergency situations
- Users: continuous service use without being affected by updates of the physical infrastructure

2.3. Benefits to infrastructure operators
Efficiency
No need for service-specific infrastructures and efficient operation of the infrastructure
Abstraction, which decouples the resources from the physical infrastructure, and programmability, which allows flexible service composition from the resources, permit sharing the common infrastructure among multiple services. They reduce the amount of unused resources, and there is no need for service-specific infrastructures.
Distribution of different service loads
Abstraction, which decouples the resources from the physical infrastructure, permits flexible provisioning of the resources free from the physical constraints. Elasticity, which adjusts the required resources by manipulating the elementary resources, allows load balancing.
Reduced OPEX by efficient infrastructure operation
Abstraction, which separates the underlying resources from the services they support, allows infrastructure operators to manage solely the infrastructure, and thus optimizes the operation in a simple and efficient
manner. This leads to a reduction of operational expenditure (OPEX). Quick detection of and recovery from network failures are also possible.
Security
Prevention of the entire network shutdown caused by malicious attacks
Isolation, which separates individual service providers and their services over a common infrastructure so that they do not interfere with each other, limits the impact of malicious cyber attacks, e.g., denial of service (DoS), to each separated portion. Thus, a shutdown of the entire network can be avoided.
Sustainability
Business continuity by dividing affected parts of the infrastructure
Isolation, which makes the separated resources independent, limits the impact of a failure in one separated part on the others, and thus helps the business continuity of the other services and their service providers. Step-by-step infrastructure updates without affecting other services are also possible.

2.4. Benefits to service providers
Creativity
New innovative service creation free from physical constraints
Abstraction, which provides service providers with the network and computation resources in a logical way, helps them create new service components in a flexible manner along with the network design, and develop innovative services and network capabilities, such as routing control, operation, and maintenance, without the restrictions of the physical infrastructure.
Customizability
Expansion or reduction of the services according to their demand
Elasticity, which changes the required resources very dynamically in response to the demand for, and deletion of, required resources and capabilities, allows service providers to expand or reduce the scale of the target services.
Efficiency
Optimized resource consumption from all service viewpoints
Abstraction and elasticity, which offer the optimal amount of resources fitting the target service and its scale, contribute to running the resources in an efficient manner.
Reduced OPEX by efficient service offering
Abstraction, which offers stringent separation of the infrastructure (resources) from services, allows service providers to concentrate on their service management. This leads to a reduction of operational expenditures (OPEX) and to instant detection of and recovery from network failures through such effective and simplified operations.
Security
Security grade setting reflecting service characteristics
Abstraction, isolation, and programmability, which realize independent logical networks per service, make it possible to set different security grades according to the service requirements.
Sustainability
Prioritized service offering in response to emergency situations
Abstraction and programmability, which provide a wide range of customized resources, enable service offerings in response to the time, place, and situation (or occasion) and the priority of the given services, and thus help sustainable service creation and provisioning.

2.5. Benefits to users
Customizability
Service reception according to the demand
Elasticity, which allows dynamic addition and reduction of the resources, enables an environment in which demanded services are provided in a timely fashion.
Efficiency
Optimal service consumption
Abstraction, programmability, and elasticity, which enable service customization with logically arranged resources according to each user's individual demand, enable low-cost and just-fit services.
Security
Consumption of services without interference with other users' conditions
Isolation, which logically separates the resources associated with specific services, provides highly secure services without any interference from other users' service conditions and consumption.
Different security grades in accordance with SLA
Programmability, which allows dynamic use of computational resources, offers services with different security grades in accordance with the service level agreement between users and providers.
Sustainability
Continuous service use without being affected by updates of the physical infrastructure
Abstraction, which decouples the logical resources from the physical infrastructure, allows continuous use of the service without being affected by maintenance or even a complete replacement of the physical infrastructure.
3. Technical challenges for the network virtualization
This clause describes the challenges from the viewpoints of infrastructure operators and service providers.

3.1. Infrastructure operator viewpoint
3.1.1. Abstraction
Abstraction, which separates services from the underlying infrastructures, is one of the essential capabilities for achieving efficient resource use, service and business continuation when the infrastructure is replaced, and simple operation. Regarding the specification of resources, existing research projects do not address the impact of service level agreements on the resources. Their scope is not generalized in terms of definitions and capabilities, and this limited scope makes it hard to support unforeseen future technologies. These untouched areas should be tackled.

3.1.1.1. Enable evolvable schema for abstracting and naming resources
The definition of a resource should be general enough to handle a wide spectrum of physical resources as handy logical ones. The definition should be applicable to future network technologies in terms of capacity, throughput, performance aspects including QoS and delay, and connection types (e.g., point-to-point, point-to-multipoint, and multipoint-to-multipoint).
The definition should allow hierarchical composition of resources. Point-to-multipoint and multipoint-to-multipoint resources should be explicitly represented by the composition of multiple primitive resources, e.g., point-to-point resources and computational resources.
The definition should include operation, administration, and management functions and their interface specifications. These are for availability and continuity checking at the resource-provisioning phase, and for performance monitoring and fault localization while in service.

3.1.1.2.
Substrate technologies
The substrate technology, which is the core mechanism that models physical resources as generalized logical resources and combines the individual logical resources into aggregated ones regardless of the different nature of the original resources, should be specified. This provides logical resources made from different media.

3.1.1.3. Organize a large number of edge devices and sensors and a large amount of resources
The resource management architecture should be scalable enough to support a wide spectrum as well as a huge amount of physical resources as logical resources.
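The evolvable schema and hierarchical composition called for in sub-clause 3.1.1.1 can be illustrated by a minimal sketch. All class, field, and resource names here are hypothetical, not part of the specification: the point is that an open-ended attribute map keeps the schema extensible to unforeseen resource types, while a parent/child structure lets primitive resources compose into aggregates.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Resource:
    """Hypothetical abstracted resource: a unique hierarchical name, a
    coarse kind, an extensible attribute map, and child resources."""
    name: str                      # globally unique, hierarchical name
    kind: str                      # e.g. "link", "compute", "storage"
    attributes: Dict[str, object] = field(default_factory=dict)
    children: List["Resource"] = field(default_factory=list)

    def add(self, child: "Resource") -> None:
        # Compose primitives into an aggregate (e.g. a multipoint
        # resource built from point-to-point links).
        self.children.append(child)

    def find(self, name: str) -> Optional["Resource"]:
        # Resolve a hierarchical name to a resource in this subtree.
        if self.name == name:
            return self
        for child in self.children:
            hit = child.find(name)
            if hit is not None:
                return hit
        return None

# Usage: a multipoint link expressed as a composition of primitives.
mp = Resource("net/mp0", "link", {"topology": "multipoint"})
mp.add(Resource("net/mp0/p2p0", "link", {"bandwidth_mbps": 1000, "delay_ms": 2}))
mp.add(Resource("net/mp0/p2p1", "link", {"bandwidth_mbps": 1000, "delay_ms": 5}))
```

Because attributes are an open map rather than fixed fields, a future resource type only needs new attribute keys, not a schema change, which is the "evolvable" property the sub-clause asks for.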
3.1.2. Isolation
Isolation is the essential capability for providing logically independent resources designated for each service. The isolation capability prohibits interference and mutual impacts between the co-existing logical resources for individual services over a common infrastructure. The interference includes performance as well as security aspects. Isolation has been studied by means of logical division with virtual LANs, time division based on time slots, and wavelength division. These existing ideas should be reviewed, however, and a new approach should be established to achieve highly scalable isolation for the network virtualization. The following technologies should be established.

3.1.2.1. Enable performance and security isolation on resource-scarce edge devices
Augmented machine architectures and enhanced resource-separation technology suitable for access-network devices. These are to separate resources in a secure fashion on relatively low-performance edge devices.

3.1.2.2. Substrate technologies to enable stringent isolation
New resource isolation technologies that complement existing physical-layer isolation technologies (e.g., VLANs, time slots, and wavelength division) should be established. They should be suitable for the network virtualization.

3.1.2.3. Scalability of the number of slices
Scalability should be established in terms of not only the number of resource-isolated slices but also the number of slice setups and releases per unit interval.

3.1.3. Elasticity
Elasticity is the essential capability for optimizing the required resources in response to service demand efficiently and rapidly. There are a few discussions on interfaces for resource provisioning. It is, however, necessary to specify protocols for real-time and highly scalable resource provisioning.

3.1.3.1.
Enable instant (in the order of seconds) allocation of resources
Resource provisioning technology (including signaling protocols) should be established that responds very rapidly, from the service demand down to the relevant network equipment.

3.1.3.2. Scalability for resource control
Intelligent resource control should be designed to manage multiple elementary resources originating from different physical sources simultaneously, arrange the requested resources from them, and complete providing the requested slice on demand.
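The elastic, on-demand allocation, reclamation, and release described in sub-clause 3.1.3 can be sketched as bookkeeping over a shared resource pool. This is an illustrative assumption, not the specification's mechanism: names, resource kinds, and quantities are invented, and a real substrate would add signaling and admission control. The sketch shows the property the text requires, namely that capacity reserved for one slice returns to the pool and becomes reusable by another.

```python
class ResourcePool:
    """Hypothetical shared pool from which slices reserve capacity."""

    def __init__(self, capacity: dict):
        self.free = dict(capacity)   # remaining capacity per resource kind
        self.slices = {}             # slice id -> reserved resources

    def allocate(self, slice_id: str, demand: dict) -> bool:
        # Reserve resources for a slice; fail without side effects if
        # any resource kind cannot satisfy the demand.
        if any(self.free.get(kind, 0) < amount for kind, amount in demand.items()):
            return False
        for kind, amount in demand.items():
            self.free[kind] -= amount
        self.slices[slice_id] = dict(demand)
        return True

    def release(self, slice_id: str) -> None:
        # Return a slice's resources to the pool for reuse.
        for kind, amount in self.slices.pop(slice_id, {}).items():
            self.free[kind] += amount

# Usage: capacity freed by one slice is immediately reusable.
pool = ResourcePool({"cpu": 64, "bw_mbps": 10000})
assert pool.allocate("slice-a", {"cpu": 16, "bw_mbps": 4000})
assert not pool.allocate("slice-b", {"cpu": 60, "bw_mbps": 1000})  # insufficient CPU
pool.release("slice-a")
assert pool.allocate("slice-b", {"cpu": 60, "bw_mbps": 1000})      # now fits
```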
3.1.4. Programmability
Programmability of virtualization-enabled network equipment serves to sustain optimal network performance in accordance with service requests and to foster technological innovation of network equipment towards new services and applications.

3.1.4.1. Operation and management for system-level network programmability
Several platform technologies are available to achieve programmability, such as advanced processors adopting multi-core and hetero-core technologies, storage devices, reconfigurable devices (e.g., FPGAs), software running over the devices (OSs and real-time monitors), and software development environments. To apply programmability to network infrastructures, an integrated design process should also be established, built on the platform technologies listed above. For the total operation and management, a cyclic loop should be repeated, ranging over key technology research, design and development of devices and systems, consideration and management of core intellectual properties, installation in the infrastructure, auditing of the operation, and assessment of the isolation against interference among the relevant technologies. It is important to run this loop continuously, in a secure manner and at low cost.

3.1.4.2. Candidate technologies for programmability
Multi-core and hetero-core processors are the latest candidates for virtualization-enabled network equipment. Multi-core is the definite trend in processor technology; the issue is how to reap the rewards of its parallel processing for network-specific tasks. Intelligent operation for optimized parallel processing is another challenge. As for devices, the Field-Programmable Gate Array (FPGA) is a typical reconfigurable device and has been widely used for telecommunication equipment in general.
Wider FPGA use in network equipment depends on a couple of key factors, such as improvement of its cost/performance ratio relative to existing special-purpose processors and application-specific integrated circuits (ASICs). Power saving is another factor. The total productivity of FPGA design and development should be justified in comparison with design and development based purely on software.
3.1.5. Whole system
3.1.5.1. System integration technology enabling inter-cloud interactions
Inter-cloud interactions are emerging so rapidly that the system architecture for the network virtualization should be flexible enough to incorporate them without any technical hurdles. A new model of stacking with multiple functional planes is introduced in this sub-clause. The model allows the virtualization architecture to evolve to support various pieces of network equipment with enhanced capabilities. It also permits layer-specific changes and improvements.
3.1.5.2. Data plane for inter-cloud interactions
According to the performance required for the target application, it is necessary to arrange relevant computing processes and network connectivity by making use of available tools and technologies. For example, the required network performances (i.e., loss rate, delay, and available bandwidth) depend on the tasks of specific applications (e.g., transport of virtual machines and synchronization of databases). A suitable transport protocol (e.g., TCP/IP, UDP/IP, or UDT) should be selected according to the geographical distance and transmission latency as well as the traffic patterns specific to the applications.
3.1.5.3. Control plane for inter-cloud interactions
Technologies and their use for the control plane may vary according to the characteristics and behavior of target applications. Example technologies include naming/addressing and their resolution, identification of location, security, and authentication. These aspects should be carefully considered and addressed accordingly.
3.2. Service provider viewpoint
3.2.1. Abstraction
In order for users or service providers to be able to set up their required virtual networks according to service requirements, clear identification/specification of the service requirements, and their correct interpretation into the required resources, should be defined. Currently, many research projects including GENI are considering direct interfaces to designate primitive resources. Intelligent processes to interpret abstracted service requirements (as exemplified above) and then convert them into specific primitive resources, however, have not yet been developed.
3.2.1.1. Development of service provisioning interfaces to specify service requirements
Multiple levels of application program interfaces (APIs) should be developed.
A highly abstracted API should represent network characteristics (e.g., bandwidth, delay, security, filtering criteria, and caching availability). A direct API should allow service providers to manipulate elementary resources of virtualized nodes and links and their functionalities.
3.2.1.2. Development of interpretation from service requirements to the corresponding resources
Technologies should be developed for interpreting service requirements claimed by service providers (including Internet providers) and users, calculating the corresponding resources, and requesting them from the relevant portion of the networks.
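A minimal sketch of the two API levels and the interpretation step described above; the requirement names and the conversion rules are assumptions for illustration only (the specification does not define them):

```python
def interpret(requirements):
    """Convert abstracted service requirements into primitive
    resources; the conversion rules here are invented."""
    resources = {"links": [], "nodes": []}
    resources["links"].append(
        {"bandwidth_mbps": requirements.get("bandwidth_mbps", 10)})
    if requirements.get("max_delay_ms", 100) < 20:
        # a tight delay bound implies an edge node near the user
        resources["nodes"].append({"role": "edge", "cpu_cores": 2})
    if requirements.get("caching"):
        resources["nodes"].append({"role": "cache", "storage_gb": 50})
    return resources

# Highly abstracted API call: only service-level characteristics appear.
primitive = interpret(
    {"bandwidth_mbps": 100, "max_delay_ms": 10, "caching": True})
# A direct API would then manipulate the elements of `primitive` one by one.
```

The interpretation function is exactly the "intelligent process" the text says is still missing: the caller states characteristics, and the conversion into primitive resources happens behind the interface.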
3.2.1.3. Development of specialized languages for service requirements and resource allocations
3.2.2. Isolation
Verification of achieved performance, as benchmarking, is essential to confirm that performance isolation is properly enforced and that the agreed performance is guaranteed. At present, performance benchmarking and SLAs are available mainly for a single network. Further work for virtualized network environments should be studied, such as performance analysis at the boundary between infrastructure operators and service providers, and SLA issues in the case of federated networks.
3.2.2.1. Development of performance benchmarking and SLA
Performance analysis technology should be developed to discover performance bottlenecks in a network or in a node. The technology contributes to the clarification of responsibility among the infrastructure operators and the service providers involved.
3.2.3. Programmability
Programmability is indispensable for providing multi-dimensional services with fine granularity. The programmability refers to many aspects, such as functional components, telecommunication systems installing the components, their controlling signaling, and composed services. Programmability gives sufficient flexibility to service providers and users, without any physical constraints, for developing innovative services and for their customization or optimization by enhancing the service components in a short period.
3.2.3.1. Dynamic service composition
The service composition technology should be established which discovers the resources and functional components needed to meet the identified service requirements, reserves them on demand, and composes them into the required service.
3.2.3.2.
Controllability and manageability
The technology should be established whose API controls the configurations of the distributed resources and functional components when the target service is composed or the assumed network condition changes, and which maintains the target service level agreement (SLA) by keeping the configurations optimal.
3.2.4. Elasticity
Elasticity is of significant importance: it discovers the resources and functional components to be reserved for service composition, discovers the composed service itself, and configures the service automatically. The technology contributes to maintaining service integrity and sustainability against both internal causes (e.g., service modifications and feature changes) and external causes (e.g., network condition changes, resource availability changes, and user request changes).
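The reselection behavior that elasticity calls for can be illustrated with a minimal sketch; the SLA field, the candidate attributes, and throughput as the objective performance index are all assumptions for illustration:

```python
def best_candidate(candidates, sla):
    """Pick the candidate that satisfies the SLA and maximizes the
    service-specific performance index (here: throughput)."""
    feasible = [c for c in candidates
                if c["delay_ms"] <= sla["max_delay_ms"]]
    if not feasible:
        return None
    return max(feasible, key=lambda c: c["throughput_mbps"])

sla = {"max_delay_ms": 30}
candidates = [
    {"name": "nodeA", "delay_ms": 10, "throughput_mbps": 200},
    {"name": "nodeB", "delay_ms": 25, "throughput_mbps": 400},
    {"name": "nodeC", "delay_ms": 50, "throughput_mbps": 900},  # violates SLA
]
assert best_candidate(candidates, sla)["name"] == "nodeB"

# An external cause (e.g., congestion on nodeB) triggers reselection:
candidates[1]["delay_ms"] = 60
assert best_candidate(candidates, sla)["name"] == "nodeA"
```

The second call shows the elastic step: when the situation changes, the selection changes while the SLA stays satisfied.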
3.2.4.1. Optimal resource assignments
The technology is to select the most appropriate resources and functional components among multiple candidates, change the selection when the situation changes, and optimize the resources to be assigned. The technology includes identification/discovery of the required resources and functional components to meet the given SLA and maximization of the objective performance index specific to the service.
3.2.4.2. Consistent and sustainable provisioning
The technology includes rapid identification and discovery of the required resources and functional components, swift composition of services, and the corresponding synchronization and inheritance of the service and node status. The technology is to achieve service consistency and sustainability against either internal or external causes.
3.2.5. Whole system
3.2.5.1. Operation
The operation technology should expand to cover the service aspect, enabling efficient service rendering with integrity. The technology includes service composition, provisioning, resource reassignment, and service migration when new technologies are introduced.
3.2.5.2. Inter-cloud networking
3.2.6. Traffic measurement and its analysis
Slices provided by the network virtualization are applied to different purposes (for providing different services and applications), and thus the offered traffic and the required SLA differ from slice to slice. Specifically, required SLAs may vary depending on whether priority is put on performance (e.g., delay or throughput) or on security isolation. To meet those different requirements, the first thing to do is to monitor the offered load per slice, audit the performance of the involved resources and functional components, and judge whether their assignment is sufficient. Unacceptable results will trigger reassignment of the resources. The performance monitoring function is also necessary to identify bottlenecks and failures in providing slices.
The monitoring function should provide diagnoses clear enough to indicate whether a trouble occurs inside the slice or in the substrate. This clarifies the responsibility of the relevant players in case of failure and thus helps foster a reliable and dependable infrastructure for users. Deep packet inspection (DPI) should be applied to capture and monitor individual packets and to identify packets with a specific nature. Besides traffic measurement on individual slices, monitoring the total performance of applications, platforms, and physical infrastructures in a holistic approach is required. This is to verify the objective of the
virtualization: whether the separated configuration and isolation of individual slices are beneficial.
4. Candidate applications suitable for network virtualization
4.1. Network architecture
4.1.1. OpenFlow in a slice
In-network processing (INP) is an idea for network equipment to perform not only traditional packet routing and forwarding but also basic data processing, and to achieve efficient information processing as a networked system. Currently, the networking for INP is not flexible enough to adapt to time-varying or multi-purpose use cases. To provide flexible networking with INP, OpenFlow can be a candidate. OpenFlow is a design concept for switches (and routers) which separates route-calculation capability from switching capability, so that the route calculation and the relevant packet processing rules can be independent from the switches, and flexible and dynamic decisions, rather than a fixed route calculation rule, are possible. With OpenFlow, one can deploy a pipeline of data processing modules for INP and enable or disable any module to/from the pipeline seamlessly on demand. However, current OpenFlow switches themselves do not support scalable multiplexing. OpenFlow in a slice (OFIAS) is a software implementation that combines the programmable feature used in OpenFlow with network virtualization. This combination provides multiple slices that can implement independent OpenFlow networks over the common infrastructure and thus provides different services using INP.
4.1.2. OpenTag
OpenTag is a mechanism that allows routers (OpenTag routers) to process packets in different ways based on their natures, characteristics, or groups, which are indicated by tags attached to the packets, and creates slices with performance and security isolation. Since routers in OpenTag are assumed to perform different packet processing depending on the packet, the mechanism is a good example to be demonstrated by the network virtualization.
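As a highly simplified sketch of such tag-based dispatch: a one-byte slice ID selects the processing module for a packet. The tag format and module names are hypothetical, and the tags are plaintext here, whereas OpenTag encrypts them per packet:

```python
# Hypothetical registry: slice ID -> installed packet processing module.
SLICES = {
    1: lambda payload: ("cache-module", payload),
    2: lambda payload: ("routing-module", payload),
}

def redirect(packet: bytes):
    """Parse a one-byte slice ID tag and hand the payload to the
    module registered for that slice; unknown or untagged traffic
    falls back to ordinary forwarding."""
    slice_id, payload = packet[0], packet[1:]
    module = SLICES.get(slice_id)
    if module is None:
        return ("default-forward", payload)
    return module(payload)

assert redirect(bytes([1]) + b"data") == ("cache-module", b"data")
assert redirect(bytes([9]) + b"data") == ("default-forward", b"data")
```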
In OpenTag, (1) users are allowed to embed a per-packet slice ID tag that denotes which slice the packet belongs to and to install a packet processing module into the slice; (2) each router installs a redirector that parses the slice ID tags attached to the packets and classifies the packets into slices, enabling slice-specific packet processing; and (3) slice ID tags are encrypted on a per-packet basis to avoid malicious or unintended injection and attacks on the slices. The redirector receives all the incoming
packets to the router, decrypts the slice IDs attached to the packets, if any, and redirects the packets into the slice identified by the slice ID.
4.1.3. Content oriented switching
The increasing number and variety of services and applications reveal limitations of the existing client-server computer networks. Although peer-to-peer connectivity and dedicated content delivery networks (CDNs) have been proposed to address these limitations in terms of scalability and efficiency of the network, no fundamental solution has been given yet. To tackle the issues, give solutions, and create an innovative service environment, transparent networks, which are free from any legacy signaling and protocols (directed by specific naming/addressing/routing and their resolutions) and implementable with any new potential techniques, are essential. For example, to achieve scalable and efficient discovery and transport of contents, there is a proposal of a built-in mechanism to learn the best holders of requested contents and their locations, and to deliver the contents with the help of the holders in a tree topology. Figure 2 shows the proposal. The network virtualization will make the idea possible. The abstracted Ethernet infrastructure given by the virtualization provides programmable computational resources, which can be configured to learn MAC addresses of Ethernet packets in a different way from the existing one and to modify the switching behavior accordingly. Thus, the proposed innovative content delivery can be achieved.
Figure 2. Content oriented switching (a switch applies a hash function to a content name, learns the receivers of the content as destinations, and generates a forwarding table mapping contents to destinations and ports)
4.2. Network services
4.2.1. P2P packet cache
The continuous growth of P2P traffic imposes a large burden on ISP network operation.
To reduce P2P traffic, redundancy elimination using a P2P packet cache scheme is effective, since P2P swarms contain high redundancy.
BitTorrent is one of the most popular peer-to-peer (P2P) applications used for sharing large files, in which data files are segmented and replicated over a multitude of P2P overlay nodes. While this distributed data storage platform is robust against failures, the flash crowd of a P2P swarm significantly consumes the bandwidth of ISP networks, typically for several days. Furthermore, in the peer selection of a P2P client, the BitTorrent protocol cannot always work with the optimal arrangement of peers, because of incomplete information on the network topology, and causes excessive cross-domain traffic. A P2P packet cache exploits shared packet cache architectures to store transmitted data packets at the remote node and, for packets containing bytes redundant with previously transmitted packets, to transfer a data identifier instead of the data itself. By deploying this architecture on inter-domain links, traffic congestion due to P2P swarms is expected to be mitigated. Since the P2P packet cache receives packets, analyzes their contents, extracts an identifier of the content if the packets have already been transmitted, and sends the identifier while keeping the packets, this technology is a good candidate for demonstration of the network virtualization, which provides packet processing in addition to ordinary packet transport over multiple slices.
4.2.2. High definition video delivery
High-definition video delivery for mass users over networks and low- or middle-grade video delivery on demand are already available. Video delivery is recognized as significant in the new-generation network project. A possible target application of the advanced virtualization would be to deliver high-quality, broadband video streams in an efficient manner. An issue is how to provide the huge resources required for the delivery.
Rather than reserving the resources in advance, the network virtualization can permit on-demand reservation of them more dynamically. The data, which is high-quality content exchanged in the movie/TV industry, is huge. In addition, the exchange should be highly secured per project to protect ideas and digital rights. The network virtualization can meet this particular demand by providing a collaboration network environment in which projects are secured from each other and which is suitable for handling huge traffic. Figure 3 shows a possible use case: on-demand broadband path configuration and in-network video processing (clipping, scaling, transcoding) for multiple projects (e.g., sports, movie, video on demand) over the network virtualization infrastructure. Programmability over the network nodes can further implement codec adaptation and media handling at the nodes. This advanced task can help optimize individual receivers' conditions in terms of transport bandwidth and terminal capabilities.
Figure 3. High definition video delivery
4.2.3. Ad targeting
Collecting statistics of users and their behaviors, analyzing them, and providing attractive advertisements according to the analysis are becoming popular as ad targeting. There are several technical issues, however, to be considered in this scenario. One of them is the distance between the collecting, analyzing, and reflecting points of the data. For example, the analyzing data center is far from the vending machines showing the targeted advertisements; they are connected by networks. Another issue is the response time. Large-size rich content is hard to display in a timely fashion due to data transmission delay, and it is impossible to store all contents at vending machines in advance. The idea is to implement a content storage function in network nodes as a cache and to relate it with sensed user information, which gives a good hint about future user behavior. The cache will help the targeted advertisement (i.e., content) be displayed for potential users in time. Figure 4 illustrates an example sequence of interactions.
1. At a wicket in the station, the user's appearance (e.g., via electronic purse) is captured together with the identifier.
2. The data center estimates possible future behaviors of the user. It also selects candidate digital contents.
3. The selected contents are sent to a possible cache in the network node near candidate displays.
4. The node that received the contents retrieves a suitable advertisement from the data center.
5. On arrival of the user, the display gets the target content immediately and the user sees the suitable advertisement in time.
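The caching steps above can be sketched as follows; the node, content identifier, and content store below are hypothetical names invented for illustration:

```python
class CacheNode:
    """An in-network node offering a content cache near displays."""
    def __init__(self):
        self.cache = {}

def push(node, content_id, content):
    node.cache[content_id] = content  # step 3: pre-position content on a prediction

def show(node, content_id, data_center):
    """Step 5: serve from the in-network cache when the content was
    pre-positioned, falling back to the distant data center otherwise."""
    if content_id in node.cache:
        return node.cache[content_id], "cache"
    return data_center[content_id], "data-center"

data_center = {"adB": "rich content B"}   # hypothetical content store
node1 = CacheNode()                       # node near the candidate display
push(node1, "adB", data_center["adB"])
assert show(node1, "adB", data_center) == ("rich content B", "cache")
```

The cache hit is what makes the advertisement displayable "in time" despite the distance between the data center and the display.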
Figure 4. Ad targeting (in-network processing provides caching and just-in-time advertisement: the data center analyzes user preference, determines candidate contents, and pushes content to caches in the nodes near the displays the user moves toward)
4.2.4. Inter-cloud interactions
The potential of the cloud system is highly recognized as a new value-creating IT infrastructure. A single cloud system alone, however, may have limitations in continuing its service and business in case of large-scale failures. The failures may occur due to unexpected overloads or natural disasters. To keep high availability and acceptable performance even in such cases, collaboration among multiple cloud systems, i.e., inter-cloud interactions, is indispensable. There are several use cases that reap the benefits of inter-cloud interactions, such as:
Load distribution - a congested cloud reserves other clouds' resources, activates the same application over them, and shares the load with the other clouds.
Performance guarantee - when a cloud detects its user's performance degradation due to the user's move, the original serving cloud searches for another cloud in the same federation that is nearer to the user and delegates the service provisioning to that remote cloud, thus maintaining the original acceptable-delay performance.
Disaster and recovery - when cloud systems detect serious failures of some of the clouds in the same federation, the healthy clouds reserve their resources, activate the same applications, which have run over the damaged clouds, and provide the same applications on behalf of the damaged clouds with minimum service discontinuity.
Figure 5. Inter-cloud interactions (clouds A and B interwork via communication transfer, communication optimization, data conversion, and caching for applications such as e-Government, finance, medical service, and CDN)
In these use cases, the interworked clouds use dedicated networks for their interactions, such as control message exchange, data copy, and application migration. At present, such networks need human operations to set up new configurations; the networks themselves only work as simple connecting pipes, and the cloud systems must implement all functionalities required for the inter-cloud interactions. In the future, networks should support autonomous virtual network configuration without requiring human operations and achieve optimal configuration in terms of network performance and users' quality of service (i.e., dynamic network configuration). In addition, the networks can store data as caches to help cloud interactions, support suitable transport protocols, and convert data formats when cloud systems use different formats. Supporting those useful functionalities inside the networks accelerates the spread of inter-cloud services. Figure 5 illustrates such use cases. The network virtualization enables dynamic network configurations and additional functionalities embedded inside networks, and thus contributes to the progress of inter-cloud services as one of the key technologies.
4.2.5. In-network data grid
The upcoming majority of the traffic is assumed to be machine-to-machine (M2M) communications generated by devices and sensors, and consumer
generated media (CGM), which is captured by smart phones and uploaded by users. Smooth exchange of such traffic is the key to creating various high values in a successful network infrastructure. When it comes to this traffic, however, its behavior is unknown and hard to predict, because the machine generation is sometimes unsynchronized and sometimes highly synchronized. Discovery of the target content becomes a serious issue because of the huge amount of total information. Distributed data grid middleware is installed in servers and virtualized nodes. The middleware collects and holds data generated by sensors and smart phones. Applications can access the data in the middleware through a key/value store interface (SET/GET).
Figure 6. In-network data grid (data grid middleware in data centers, application servers, and virtualized nodes exposes a key/value store (SET/GET) interface for sensor data and CGM contents; the advanced network virtualization isolates data in different slices)
The network virtualization allows the possibility to store key elements or the entire data in the network nodes, run the indexing and search engines in a distributed manner, and control distribution of the information. These technologies will constitute an in-network data grid, which comprises a huge database with high-speed addition, update, exchange, search, and retrieval of the data. The large and dynamically updated information provided by the in-network data grid will contribute to creating an innovative service environment. Figure 6 shows such use cases.
References and glossaries
New-generation Network Promotion Forum: A forum which consists of experts from industry, academia, and government, such as carriers, telecommunication vendors, and persons with academic backgrounds.
Its purpose is to strategically and comprehensively promote initiatives concerning realization of the New-generation Network (NwGN). The Forum was established in October
2007. http://forum.nwgn.jp/english/ The new-generation network (NWGN) is a study project concept seeking networks that inherently meet fundamental future requirements, in 2015/2020, regarding large-scale capacity, maximum availability and reliability, provision of high-quality services, security, and energy saving, as well as positive contributions to social life. Since the requirements are far beyond the past ones that existing networks were designed to meet, the design approaches and candidate technologies are envisioned as a new generation, free from conventional practices and stereotypical ideas. The study project concept is now driven by the New-generation Network Promotion Forum, which closely collaborates with similar research projects around the world seeking the Future Internet and Future Networks.
IEICE (the Institute of Electronics, Information and Communication Engineers) Technical Committee on Network Virtualization: A scientific expert committee in IEICE, for a specific period of time, whose purpose is to specify issues expected to be core competency technologies in network virtualization and to facilitate R&D. It also aims to provide opportunities to discuss network virtualization technologies necessary for simultaneous implementation of multiple different logical networks; to strengthen collaboration between the network and computing industries; and to facilitate realization of a network environment providing data processing services, converted from a network merely for data transmission, along with discussion on test beds to promote experimentation with the research and technologies as well as on societal impacts of the network virtualization.
http://www.ieice.org/eng/index.html
GICTF (Global Inter-Cloud Technology Forum): A forum aiming to promote standardization of the network and the interfaces through which cloud systems interwork with each other, and to enable the provision of more reliable cloud services. The Forum was established in July 2009.
http://www.gictf.jp/index_e.html
JCC (Japan Cloud Consortium): A consortium whose purpose is to make suggestions on, share information across sectors about, and identify/resolve new issues with the various measures for the dissemination/development of cloud services by the relevant enterprises/organizations. http://www.japan-cloud.org/english/index.html
Namespace: A concept which prevents the coexistence of network addresses or URIs (Uniform Resource Identifiers) that have the same names.
Logical router: A functionality to logically segment a single physical router so that it performs as, and can be manipulated as, multiple routers.
Active Network: A network suggested by DARPA around 1995 whose structure is capable of high-level processing in network routers for each application or user and of changing network behaviors properly according to the demands of users and network administrators.
OpenFlow: A network control technology proposed by the OpenFlow Switching Consortium. It defines as a flow a series of communication identified by a combination of MAC address, IP address, and port number, and implements routing in units of the flow. http://openflow.org
GENI (Global Environment for Network Innovations): A test bed sponsored by the National Science Foundation (NSF) for introducing innovative technologies, which do not necessarily depend on IP, into networks in operation in order to experiment with new
architectures and elemental technologies.
PlanetLab: A global overlay research network for experiments to test prototypes of Internet applications and network services, in which Princeton University, UC Berkeley, and Intel play central roles, with other universities worldwide participating. http://www.planet-lab.org
ProtoGENI: An NSF-funded and GPO-funded prototype implementation and deployment of GENI, led by the Flux research group at the University of Utah and largely based on the Emulab software. It is the control framework for GENI Cluster C. http://www.protogeni.net/trac/protogeni
Linux VServer: A virtual server implemented by adding OS-level virtualization capabilities to the Linux kernel.
FIRE (Future Internet Research and Experimentation): An integrated experimental facility constructed by gradually connecting and federating test beds for Future Internet technology, in order to promote the concept of experimentally-driven research for validating new networking technologies and service paradigms. http://cordis.europa.eu/fp7/ict/fire/
OneLab2: An open and sustainable large-scale shared experimental facility which allows European industry and academia to innovate today and to design the Future Internet. The OneLab2 project leverages the original OneLab project's PlanetLab Europe test bed. http://www.onelab.eu/index.php/projects/past-projects/onelab2.html
FEDERICA: A project which has created a Europe-wide, technology-agnostic infrastructure based upon Gigabit Ethernet circuits, transmission equipment, and computing nodes capable of virtualization, to host experimental activities on new Internet architectures and protocols. http://www.fp7-federica.eu/
OFELIA: A collaborative project within the European Commission's FP7 ICT Work Programme. The project creates a unique experimental facility that allows researchers not only to experiment on a test network but also to control and extend the network itself through OpenFlow technology. http://www.fp7-ofelia.eu/
NOVI (Networking innovations Over Virtualized Infrastructures): Part of the OneLab project, a test bed for the Future Internet in Europe. NOVI concentrates on methods, algorithms, and information systems that enable users to compose and manage isolated slices, i.e., baskets of virtual resources and services provided by Future Internet (FI) platforms. http://www.onelab.eu/index.php/projects/novi.html
TEAGLE: A coordination federation that holds together Panlab partner labs and allows them to test and prototype on a large scale. It also enables the participants to browse resources provided by partner labs as well as to configure and deploy their own Private Virtual Test Lab. http://www.fire-teagle.org/
OpenTag: A mechanism which enables wide-area coordination of packet processing, traffic classification with performance isolation, and avoidance of cross-talk among slices, for wide-area coordinated packet processing such as packet caching and advanced routing.
CDN (Contents Delivery Network):
A network which enables content providers to deliver contents effectively to end users.
Name Service: A service which, in networks, resolves network addresses based on names given to resources such as hosts and files, and also provides them.
Progressive Coding: A coding format which at first transmits a full image of low definition (with low resolution, quantization level, frequency band, etc.), optionally followed by sub-images of higher definition if necessary, to be reproduced by recipients.
Transcoding: Conversion of video data compressed/encoded in a particular format into another format or into data with a different resolution.
M2M (Machine-to-Machine) Communication: A communication form in which machines connected to a network communicate with each other without human involvement.
CGM (Consumer Generated Media): Media which are generated and sent by consumers/users through the Internet.
In-network Processing: Giving a network itself functions for data processing and a self-organized network configuration.
GENI RSpec (Resource Description): A resource description (RSpec) for advertising, requesting, and describing the resources used in GENI. http://www.geni.net/
FPGA (Field-Programmable Gate Array): A gate array on which users can configure arbitrary logical circuits (a semi-custom IC with a master wafer prepared in advance, on which logical gates are arranged in a grid array).
Multi-core/Hetero-core: A technology enclosing multiple processor cores in a single chip package, mainly for improving its capability of parallel processing (multi-core), and another enclosing multiple processor cores with different architectures in a single chip package (hetero-core).
ASIC (Application Specific Integrated Circuit): An integrated circuit bundling multiple functional circuits for a specific use.
VM (Virtual Machine): A logical computer operating with its own OS in a physical computer which is segmented through virtualization of resources such as CPUs and storages.
UDT (UDP-based Data Transfer): A UDP-based data transfer protocol with window control, developed by a research group consisting of the University of Illinois at Chicago and others, for the purpose of resolving throughput limitations due to TCP congestion.
Substrate: Physical resources, such as links, processors, and storages, over which virtualization is provided.
Deep Packet Inspection: Inspection, for some purpose, of packet contents beyond the header, such as the payload, along the communication path.