Annex A (normative): NFV ISG PoC Proposal Template

A.1 NFV ISG PoC Proposal Template

A.1.1 PoC Team Members
Include additional manufacturers, operators or labs should additional roles apply.

PoC Project Name: Demonstration of multi-location, scalable, stateful Virtual Network Function
Network Operators/Service Providers: NTT. Contact: Eriko Shigeki Toshima
Manufacturer A: FUJITSU. Contact: Hideki Iwamura, Yasuhiro Seta
Manufacturer B: Alcatel-Lucent. Contact: Tetsuya Norimasa

A.1.2 PoC Project Goals
PoC Project Goal #1: This multi-stage PoC will demonstrate and verify geo-redundancy and multi-location distribution of a sample VNF (proxy server) on top of the NFVI.
PoC Project Goal #2: Demonstrate a new, sophisticated NW node development framework on top of the NFVI using a sample VNF (proxy server).

A.1.3 PoC Demonstration
Venue for the demonstration of the PoC: Bankoku Shinryokan Resort MICE Facility VIP room (Okinawa, Japan)

A.1.4 (optional) Publication
Currently not planned.

A.1.5 PoC Project Timeline
What is the PoC start date? February 7, 2014
(First) Demonstration target date: May 14, 2014
PoC Report target date: Mid June, 2014
When is the PoC considered completed? End of December, 2014
A.2 NFV PoC Technical Details (optional)

A.2.1 PoC Overview
In this PoC, we demonstrate the reliability and the scaling out/in mechanism of a stateful VNF that consists of several VNFCs running on top of a multi-location NFVI, utilizing purpose-built middleware to synchronize state across all VNFCs. As an example, we chose a proxy server as the VNF, consisting of a load balancer and distributed servers with high-availability middleware as its VNFCs.

Figure 2.1 depicts the configuration of this PoC. A proxy server system (VNF) is composed of several servers (VNFCs) and a pair of load balancers configured in a redundant way (VNFCs). The server VNFCs run on top of specialised middleware used to provide state replication and synchronization services between VNFC instances.

We demonstrate that the system scales near-linearly with the network load by flexibly allocating/removing virtual machines (VMs) on the NFVI and by installing/terminating server application instances. Additionally, we will demonstrate policy-based placement of VNFCs across multiple locations to cater for capacity and processing redundancy needs. We also demonstrate that the system can autonomously maintain its redundancy even if the number of VNFCs changes, whether because of the failure of a single VNFC, the failure of multiple VNFCs due to the failure of the NFVI in one location, or the addition or removal of VNFCs.

In this PoC, Alcatel-Lucent is providing its product CloudBand, which assumes the roles of the NFVI and the VIM, a partial role of the VNFM and the full role of the NFVO. Additionally, Alcatel-Lucent is providing the load balancer (VNFC) as part of the CloudBand product. FUJITSU is providing middleware composed of distributed processing technologies, developed jointly by NTT R&D and FUJITSU, and operation and maintenance technologies that realize the reliability and scaling out/in mechanism of the stateful VNF.
[Figure 2.1 shows the PoC configuration: a CloudBand Node (Alcatel-Lucent) hosting a distributed processing platform including NTT R&D technology (FUJITSU), with the virtual NW application deployed as VNFCs across Location #1 and Location #2; * = distributed processing middleware, O = data (original), R = data (replica).]
Figure 2.1: PoC configuration
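The original/replica arrangement of Figure 2.1 can be sketched in a few lines. This is a minimal illustration only: `Vnfc`, `StateStore` and their methods are hypothetical names, not the actual NTT R&D/FUJITSU middleware API, which is not specified in this proposal.

```python
# Sketch of geo-redundant state placement: each dialogue's state is kept
# as an original ("O" in Figure 2.1) on one VNFC and as a replica ("R")
# on a VNFC in a different location. All names are illustrative.

class Vnfc:
    def __init__(self, name, location):
        self.name = name
        self.location = location
        self.original = {}   # "O" data: state this VNFC serves
        self.replica = {}    # "R" data: backups held for other VNFCs

class StateStore:
    """Writes state to a primary VNFC and replicates it to a VNFC
    located elsewhere, giving geo-redundancy."""
    def __init__(self, vnfcs):
        self.vnfcs = vnfcs

    def write(self, key, state):
        # Least-loaded VNFC holds the original ...
        primary = min(self.vnfcs, key=lambda v: len(v.original))
        # ... and a VNFC at a *different* location holds the replica.
        backup = next(v for v in self.vnfcs
                      if v.location != primary.location)
        primary.original[key] = state
        backup.replica[key] = state
        return primary, backup

vnfcs = [Vnfc("vnfc-1", "loc1"), Vnfc("vnfc-2", "loc2")]
store = StateStore(vnfcs)
primary, backup = store.write("dialogue-42", {"caller": "a", "callee": "b"})
assert primary.location != backup.location   # geo-redundancy holds
```

The point of the sketch is only the placement invariant: an original and its replica never share a location, so the loss of one site leaves a usable copy.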
A.2.2 PoC Scenarios

Scenario 1: Automatic policy-based deployment of the VNF
The system is configured to provision VNFCs in HA mode, equally spread across all instances of the NFVI. We will demonstrate the ability of the system to automatically deploy the full VNF, as a group of load balancer (VNFC) and server (VNFC) instances, across all available instances of the NFVI, taking policy, usage and affinity rules into consideration (Figure 2.2). Once the VNFCs are placed, the middleware platform will ensure proper synchronization across all instances.

[Figure 2.2 shows VNFCs being added to/removed from the NFVI based on policy, as a new virtual NW application deployment across Location #1 and Location #2; * = distributed processing middleware.]
Figure 2.2: Policy-based deployment

Scenario 2: Autonomous maintenance of the redundancy configuration during the failure of a VNFC
Each VNFC is configured to have correspondent VNFCs that, under normal conditions, store its backup, including the state information of its dialogues, at a remote location. The VNFCs are monitored from both the MANO and the EMS perspective, to ensure VM-level operation as well as application and data consistency. When one VNFC fails, its correspondent VNFCs utilize their backup information and continue the services (Figure 2.3). At the same time, the VNFCs taking over for the failed VNFC copy their backup data to other VNFCs, which are assigned as the new backup VNFCs. The system autonomously reconfigures itself in this way and maintains its redundancy.
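The takeover and re-replication steps of Scenario 2 amount to promoting the replicas owned by the failed VNFC and then assigning new backups. The following is a minimal sketch under assumed data structures; the `fail_over` function and the replica bookkeeping are illustrative, not the PoC's actual middleware implementation.

```python
# Sketch of Scenario 2: each VNFC holds "original" data it serves and a
# "replica" map of key -> (owner, state) it backs up for other VNFCs.
# On failure, replicas of the failed VNFC are promoted and re-replicated
# to a new backup so redundancy is restored. Names are illustrative.

def fail_over(vnfcs, failed):
    """Promote replicas owned by `failed`, then assign new backups."""
    survivors = {n: v for n, v in vnfcs.items() if n != failed}
    for name, v in survivors.items():
        promoted = [k for k, (owner, _) in v["replica"].items()
                    if owner == failed]
        for key in promoted:
            _, state = v["replica"].pop(key)
            v["original"][key] = state            # take over the service
            backup = next(n for n in survivors if n != name)
            survivors[backup]["replica"][key] = (name, state)  # new backup
    return survivors

vnfcs = {
    "vnfc-1": {"original": {"d1": "s1"}, "replica": {}},
    "vnfc-2": {"original": {"d2": "s2"}, "replica": {"d1": ("vnfc-1", "s1")}},
    "vnfc-3": {"original": {}, "replica": {"d2": ("vnfc-2", "s2")}},
}
after = fail_over(vnfcs, failed="vnfc-1")
assert after["vnfc-2"]["original"]["d1"] == "s1"          # service continued
assert after["vnfc-3"]["replica"]["d1"] == ("vnfc-2", "s1")  # redundancy restored
```

After the failover, every surviving dialogue again has both an original and a replica, which is the "autonomous maintenance of redundancy" property the scenario verifies.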
[Figure 2.3 shows a VNFC replicating state to another VNFC, another VNFC continuing processing when one fails, and the subsequent data relocation and load-balancing rearrangement; O = data (original), R = data (replica).]
Figure 2.3: Autonomous maintenance of the redundancy configuration at the failure of a VNFC

Similarly, when one location fails, the VNFM and the middleware EMS will trigger a failover to the remote location, rebuilding the missing VNFC instances for capacity purposes.

Scenario 3: Auto-scaling out/in of the VNF
In this scenario, we demonstrate auto-scaling of the system by autonomously increasing/decreasing the number of server VNFCs according to the load placed on the system. Figure 2.4 shows an example in which a server (VNFC) is added to the system when the system load increases. As the system load grows, the load on each VNFC also increases. When the measured traffic at a server (VNFC) exceeds the pre-defined threshold, the EMS manager triggers the scaling function of the VNFM, which in turn adds a VM and a VNFC to the system. Once added, the new VNFC is also logically added to the pool by the EMS and replicated, and the traffic load balancing is adjusted among the server (VNFC)s to include the new member.
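The scaling trigger in Scenario 3 is essentially a threshold comparison performed on the per-VNFC traffic measurements. A minimal sketch follows; the threshold values and the `scaling_decision` function are assumptions for illustration, not figures or interfaces from the PoC.

```python
# Sketch of the Scenario 3 scaling decision: compare measured traffic at
# each server VNFC against pre-defined thresholds and decide whether the
# EMS should ask the VNFM to add or remove a VM/VNFC. Thresholds are
# illustrative assumptions, not values from the PoC.

SCALE_OUT_THRESHOLD = 0.8   # fraction of per-VNFC capacity (assumed)
SCALE_IN_THRESHOLD = 0.3    # fraction of per-VNFC capacity (assumed)

def scaling_decision(loads):
    """loads: measured traffic per server VNFC, normalised to capacity.
    Returns "out" (add a VM/VNFC), "in" (remove one), or None."""
    if any(load > SCALE_OUT_THRESHOLD for load in loads):
        return "out"    # EMS triggers the VNFM scaling function
    if len(loads) > 1 and all(load < SCALE_IN_THRESHOLD for load in loads):
        return "in"     # spare capacity across the pool: scale in
    return None

assert scaling_decision([0.5, 0.9]) == "out"
assert scaling_decision([0.1, 0.2]) == "in"
assert scaling_decision([0.5, 0.5]) is None
```

In the PoC itself this decision sits in the EMS, which then drives the VNFM; the pool membership and load-balancer weights are adjusted afterwards, as described above.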
[Figure 2.4 shows a server (VNFC) being added and the load balancing rearranged as the load increases.]
Figure 2.4: Auto-scaling out/in of the VNF

A.2.3 Mapping to NFV ISG Work
Describe how this PoC relates to the NFV ISG work:
1) Specify below the most relevant NFV ISG end-to-end concept from the NFV Use Cases, Requirements, and Architectural Framework functional blocks or reference points addressed by the different PoC scenarios:

Scenario 1
Use Case: Use Case #1 (GS NFV 001 v1.1.1; use case NFVIaaS)
Requirement: GS NFV 004 v1.1.1 Virtualisation Requirements: Section 5.9 (OaM.1, OaM.7, OaM.12)
E2E Arch: GS NFV 002 v1.1.1 NFV Architectural Framework: distribution of functionality between EMS, VNF, NFVI, VNFM/MANO and VIM
Comments: VNFC management (multi-location resource distribution and placement). Implementation on top of the foundation provided by the multi-location NFVI.

Scenario 2
Use Case: Use Case #3 (GS NFV 001 v1.1.1; use case VNPaaS)
Requirement: GS NFV 004 v1.1.1 Virtualisation Requirements: Section 5.5 (Res.1) and Section 5.7 (Cont.2, Cont.3, Cont.4)
E2E Arch: GS NFV 002 v1.1.1 NFV Architectural Framework: distribution of functionality between EMS, VNF, NFVI, VNFM/MANO and VIM: role of EMS vs. NFV MANO
Comments: Realize the continuity of service even in a disaster; use of NFVIaaS and the distributed processing platform.

Scenario 3
Use Case: Scalability is applicable to multiple GS NFV 001 use cases
Requirement: GS NFV 004 v1.1.1 Virtualisation Requirements: Section 5.4 (Elas.2, Elas.5)
E2E Arch: GS NFV 002 v1.1.1 NFV Architectural Framework: distribution of functionality between EMS, VNF, NFVI, VNFM/MANO and VIM: role of EMS vs. NFV MANO
Comments: Realize automated scaling out/in of VNFCs.

A.2.4 PoC Success Criteria
Automatic deployment of the full VNF across all available NFVI instances will be considered a successful end of Scenario 1 of this PoC.
Automatic failover and reinstallation of a failed VNFC will be considered a successful end of Scenario 2 of this PoC. (Failover of an individual VNFC is currently implemented; disaster recovery, taking an entire location down, is targeted for the end of December.)
Proper scaling triggered by an increase of the load will be considered a successful end of Scenario 3 of this PoC.

A.2.5 Expected PoC Contribution
One of the intended goals of the NFV PoC activity is to support the various groups within the NFV ISG. The PoC Team is therefore expected to submit contributions relevant to the NFV ISG work as a result of their PoC Project.

List of contributions towards specific NFV ISG groups expected to result from the PoC Project:
PoC Project Contribution #1: Geo-redundant VNF failover (NFV Group REL)
PoC Project Contribution #2: Policy-based, multi-location VNFC distribution (NFV Group MANO)
PoC Project Contribution #3: Redundancy and near-linear scalability realized by the middleware (NFV Group SWA)