Load Balancing in Recursive Networks using Whatevercast Names
E. Elahi, J. Barron, M. Crotty, Miguel Ponce de Leon and S. Davy
Telecommunication Systems and Software Group (TSSG), Waterford Institute of Technology, Ireland

Abstract — Unlike anycast and multicast, a whatevercast name comprises a set of rules that may return one or more results for a name resolution request. The objective of this paper is to highlight how whatevercast names and the Name Space Manager (NSM) can be exploited for distributed, autonomic load balancing and optimal resource utilization inside RINA (Recursive InterNetwork Architecture) based data center networks. The proposed architecture is based on the feedback principle and runs in a distributed fashion over the nodes participating in the name resolution process, without the need for any dedicated load balancer node.

Keywords: Whatevercast name, Recursive Network Architecture, Network Load Balancing

I. INTRODUCTION

In the last few years, interest has grown in Future Internet architectures [1]. This interest is mainly driven by the pragmatic concerns of large-scale ISPs, data center (DC) deployments, cloud providers and businesses that want a more adaptable, configurable, flexible and resilient network on which to build different services. The fabric of the Internet, its size and scope, is the underlying problem in this space: it has been preserved through workarounds that will not meet the requirements needed to build such services [2]. RINA is a promising architecture whose goal is to provide configurable, secure, resilient, predictable and flexible network services. RINA's communication model is based upon distributed Inter-Process Communication (IPC). As argued in [3], networking is IPC and only IPC, just like communication between two applications on the same host. Two applications on the same host can communicate with each other through the local IPC facility provided by the operating system.
The same IPC facility can be extended to allow two applications to communicate on different hosts, in the same or in different networks. This extended IPC facility is termed a Distributed IPC Facility (DIF) [2]. Unlike the five-layer model of the Internet, in RINA a DIF can be seen as a single type of layer which can be recursively repeated as many times as required, providing the same functions and mechanisms but tuned under different policies to operate over different ranges of the performance space (e.g. capacity, delay, loss). Like the Domain Name System (DNS), there is a directory-like service in RINA called the Inter-DIF Directory (IDD) or DIF allocator. It is a distributed application that contains the names of applications and the name of the DIF from which each application can be accessed. Alongside the IDD there is the Name Space Manager (NSM), which keeps information on application names, the number of instances of each application currently running, and how client applications can access them. When a client application needs to connect with a server application, it initiates an allocate request to its local IPC Manager (IPCM); if the local IPCM does not know the name of the DIF this server application is enrolled with, it sends a whatevercast name request to the IDD. The IDD looks for the current instances of the server application and their respective DIFs in consultation with the NSM and replies to the requesting application. If the IDD does not have information about the requested application, it forwards the query to its neighbour IDD. The detailed procedure of whatevercast name resolution, the NSM and the IDD is given in the next section.

*Work towards this paper was partially funded by the Commission of the European Union, FP7 Collaborative PRISTINE Project, and Waterford Institute of Technology (WIT), Ireland.
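As an illustration only, the lookup-then-forward behaviour described above (IPCM asks its local IDD; the IDD answers from its own tables, in consultation with the NSM, or forwards the query to a neighbour IDD) can be sketched in Python. The class, field and application names here are hypothetical and are not drawn from any RINA prototype API:

```python
class IDD:
    """Node-local Inter-DIF Directory: maps app name -> [(instance, DIF)]."""
    def __init__(self, name, entries=None, neighbour=None):
        self.name = name
        self.entries = entries or {}
        self.neighbour = neighbour   # peer IDD consulted on a local miss

    def resolve(self, app_name):
        # Reply from local tables if possible; otherwise forward the
        # whatevercast name query to the neighbour IDD.
        if app_name in self.entries:
            return self.entries[app_name]
        if self.neighbour is not None:
            return self.neighbour.resolve(app_name)
        return []   # unresolved: no instances known anywhere

# The client's IPCM would issue the query against its local IDD:
remote = IDD("idd-2", {"video.server": [("1", "dc.DIF"), ("2", "edge.DIF")]})
local = IDD("idd-1", neighbour=remote)
print(local.resolve("video.server"))   # found via the neighbour IDD
```

A real IDD also authenticates the requester and applies a termination condition (e.g. a TTL) to bound the forwarding; both are omitted here for brevity.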
Due to the lack of inherent QoS mechanisms, multi-homing support, self-orchestration and security frameworks in the current TCP/IP-based Internet architecture, data center networks seem to suffer the most, and could benefit from employing the RINA architecture in the data center fabric. In order to provision support for QoS, security and multi-homing, the TCP/IP protocol suite is being patched with a number of additional protocols and mechanisms such as DiffServ, IPSec and Mobile IP. However, each of these additional protocols brings further issues of its own, such as scalability, performance and adoptability. Optimal resource utilization and load balancing among the hundreds of thousands of servers in a data center are imperative to make it energy efficient and to keep operational costs to a minimum. Presently, dedicated nodes called load balancers are deployed in data centres in order to achieve energy-efficient load balancing and optimal resource utilization, thus increasing the installation and operational cost of data centres [8]. Support for multi-homing, security and QoS is implicit in RINA. Load balancing in RINA-based data centres does not need special hardware or dedicated nodes, thus minimizing the installation and operational cost of the data centres. In this paper we propose a design for load balancing in RINA-based data centers by exploiting whatevercast names and the Name Space Manager (NSM) in RINA. A whatevercast name refers to one or more members of a set of names. A Distributed Application Facility for Load Balancing (LB-DAF) is proposed: a distributed application, like the IDD, that needs to be running in every node along with the IDD and NSM. A DAF is a collection of Distributed Application Processes (DAPs) running on different nodes in the network. Each DAP of a DAF has to be registered with one or more common DIFs in order to communicate with the others. The LB-DAF monitors the current load on the server applications and shares this information with its neighboring LB-DAF processes, which are termed Distributed Application Processes for Load Balancing (LB-DAPs). The rest of the paper is organized as follows: the next section briefly introduces the process of application discovery in RINA, whatevercast names, the NSM and the DIF allocator (IDD). Section III presents the load balancing mechanism in TCP/IP-based data centers. Section IV describes how whatevercast names and the NSM can be used for load balancing and how this framework can be implemented. In Section V we discuss the future plan for implementation and thorough testing of the proposed LB-DAF within a data center scenario, and we conclude the paper in Section VI.

II. APPLICATION DISCOVERY IN RINA

Two processes in a single system discover each other using the port numbers assigned to them by the operating system. These port numbers are unique addresses within that single system. The operating system provides an IPC facility to let processes communicate with each other. RINA is built with that concept in mind and views the whole Internet as an operating system that provides Distributed IPC Facilities (DIFs) to let two or more processes communicate. There may be more than one DIF in the network, and each process needs to enroll with one or more DIFs in order to communicate with the other processes available in those DIFs. Each DIF has a unique identifier, and processes registered in a DIF have a unique name within it [4]. Each process is associated with an instance.
There may be two or more instances of the same process running in the network, enrolled with the same or different DIFs. A process can therefore be identified and discovered using its name, its instance number and the DIF on which it is enrolled. If a client process wants to connect with a server process, it needs to know the name and instance of the server application as well as the name of the DIF on which the server application is enrolled [5]. RINA introduces the whatevercast name and the Inter-DIF Directory (IDD) for the case where the client application has no information about the server instance and its relevant DIF.

A. Whatevercast Name and Inter-DIF Directory (IDD)

When a client application needs to connect with a server application and is unaware of the number of instances of the server application and the name of the DIF it is enrolled with, it generates and transmits a whatevercast name query. A whatevercast name is an identifier with some associated rules to select names from a given set [5]; it refers to one or more members of a set of names. Multicast and anycast are special cases of the whatevercast name: multicast refers to all the members of a particular set, while anycast refers to a single member of a set. The whatevercast name query, which contains the name of the server application process, is forwarded to the neighbour nodes. The Name Space Manager (NSM) at each neighbour node looks for the process instances currently running, and the DIF allocator or Inter-DIF Directory (IDD) looks for the DIFs on which these instances are enrolled.

Fig. 1. An example topology. Assuming link-state routing is used, all the IDDs in this network should have information about all the instances of the server application; however, the IDD at Node 1 should be the first node near the source to learn of the existence of all these instances.
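The rule-based definition above ("an identifier with associated rules to select names from a given set") makes multicast and anycast fall out as two particular rules. A minimal sketch, with hypothetical member names and a deliberately trivial rule interface:

```python
def whatevercast(names, rule):
    """Apply a selection rule to a set of names; returns one or more names."""
    return rule(sorted(names))   # sort only to make the result deterministic

multicast = lambda ns: ns        # special case: all members of the set
anycast   = lambda ns: ns[:1]    # special case: exactly one member
first_two = lambda ns: ns[:2]    # an arbitrary custom rule is equally valid

members = {"s1", "s2", "s3"}
print(whatevercast(members, multicast))  # ['s1', 's2', 's3']
print(whatevercast(members, anycast))    # ['s1']
```

In practice the rule would encode policy (nearest instance, least loaded instance, and so on) rather than simple list slicing; the load-based rule is exactly what the LB-DAF proposed later in the paper supplies.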
The Inter-DIF Directory (IDD) [4] is a distributed application which is part of the RINA implementation and runs on every RINA node. The IDD provides mappings between application process names and their relevant DIFs. Unlike DNS in the current Internet architecture, the IDD authenticates the requesting application process before granting access to the requested process. The IDD maintains a Search Table, a Directory Table, Naming Information (via the NSM) and a Neighbour Table [4]. When the NSM in a particular node receives a whatevercast name resolution request, it looks up the number of currently running instances of the requested application process in its own name and neighbour tables. If it finds one or more instances of that application process, it checks, with the help of the IDD, the relevant DIFs on which these instances are registered. After getting the information about the process instances and their relevant DIFs, it replies to the requesting IPC Manager (IPCM) of the client node with the instances and DIF names; otherwise it forwards the same query to a neighbouring NSM for resolution. The process of application discovery is illustrated in Figure 2. When the requesting IPCM receives a reply from an IDD, it does one of three things: 1) sends a resource allocation request to the DIF of the destination application process instance if it is already enrolled on the same DIF; 2) attempts to join the relevant DIF using the DIF extension procedure [5]; or 3) attempts to create a new DIF from source to destination using the DIF allocator.

Fig. 2. Discovery of the application: the request is forwarded between peer IDDs until the destination application is found or a pre-defined termination condition (e.g. TTL) is met. Adapted from a figure in [4].

An example topology for IDDs is shown in Figure 1 to demonstrate how whatevercast name resolution works. Suppose there is a server application process with three instances running at nodes 5, 7 and 20, as shown. If the client application process at the source node wants to connect with one particular instance, then the IPCM at this node looks up only the DIF name on which that application process instance is enrolled, so that the client can join that DIF or create a new one. On the other hand, if the client application has no preference for a particular server instance, its IPCM generates a whatevercast name resolution request for the server process. Like a DNS query in the current Internet architecture, this query is resolved according to the node location and the nearest available server instance. As can be seen from the figure, the IDD at node 1 should be the first node near the source to know about all three instances of the example server application process; therefore, when the source node transmits a whatevercast name query to nodes 1 and 31, node 1 is able to provide the most recent information, as it has the shortest route to any of the three instances. Now, if the directory table and naming information maintained by each IDD and NSM are extended so that, along with the DIF name and instance information, these tables also contain information about the load on each server instance, then the extended IDD and NSM can be used effectively for balancing the load across server instances. Section IV explains how the IDD can be extended for this purpose.

III. LOAD BALANCING IN DATA CENTRES

To balance the load between servers in a data centre, an additional entity/node called a Load Balancer (LBR) is used [7].
Like a NAT, the LBR has one or more publicly routable IP addresses, called virtual IP addresses (VIPs), and one or more servers behind it. The limitation of this model is that the servers and the LBR need to be in the same layer-2 domain. If one or more servers are not in the same layer-2 domain, then such servers cannot see the addresses of the clients they should be connected to [8]. Therefore, for the LBR to connect with a server in another layer-2 domain, the packets have to pass through a layer-3 node (router). The RINA architecture does not have this limitation: in RINA, servers can be placed anywhere. Application names are location- and layer-independent; therefore servers can always see the client's application name. The capital cost of networking in data centres is primarily concentrated in the switches, routers and load balancers [8]. Reducing the number of switches, routers and/or load balancers can significantly reduce the capital cost of data centre deployment as well as the operational cost. Introducing additional standalone intermediate nodes such as LBRs in the end-to-end path may degrade performance, specifically the delay and loss experienced by traffic flows, due to excessive processing load at the LBR. Moreover, to avoid a single point of failure and to further balance the load, redundant LBRs are normally deployed in data centres, making the load balancing solution more costly and difficult to maintain. Unlike in current Internet architectures, load balancing in RINA-based data centres is envisaged to be implemented at the DAF level rather than by deploying additional nodes. DAF-based load balancing utilises a distributed application facility operating at various nodes in the network, which coordinates resources and can redirect network traffic towards lightly loaded servers to make efficient use of resources.
Most research in the load balancing domain [9][10][11][12][13] centres on introducing ever more efficient and intelligent algorithms to run on LBRs in order to balance the load on servers in a data center. The algorithms proposed need dedicated nodes (load balancers) inside data centers. The proposed RINA-based LB-DAF, on the other hand, is not limited to a single data centre: it is distributed over the network nodes and does not need any dedicated node, thus reducing capital and operational cost, and there is no single point of failure. The RINA-based LB-DAF provides a load balancing architecture and framework; it is capable of adopting any off-the-shelf load distribution algorithm to achieve high-level business objectives.

IV. DAF BASED LOAD BALANCING

Load balancing in RINA should enable applications to connect to the most lightly loaded server. To that end, we propose an implementation of the LB-DAF (Load Balancing Distributed Application Facility), which runs at every RINA node where the IDD service is available. The processes which are part of this LB-DAF and run on IDD nodes are referred to as LB-DAPs (Load Balancing Distributed Application Processes). Each LB-DAP keeps track of the server applications running on its system, monitors their current load and shares this load information with peer LB-DAPs whenever the load changes. An abstract implementation architecture for the LB-DAF is shown in Figure 3. Each LB-DAP has connectivity to the local NSM and can update the IDD tables. When an LB-DAP receives a load update from its peers, it immediately updates the local NSM and IDD tables with the server application instances and the load on each instance, and forwards an update message to its neighbouring peers. Upon receiving an allocate request from the client application, the IPCM initiates a request to the NSM for whatevercast name resolution. The modified NSM now has information about the server instances as well as the load on each instance, so it replies to the requesting IPCM with naming information for the most lightly loaded server instance. With the help of the IDD, the IPCM also obtains and enrols with the DIF on which this server instance is currently registered and available.

Fig. 3. An abstract architecture of the proposed LB-DAF.
Fig. 4. Preliminary testing scenario for the proposed load balancing mechanism.

A. Implementation and Experimentation

A preliminary load balancing test was conducted using three virtual machines with the RINA prototype [6] installed. Two server instances of a file transfer server application were made available on two different VMs. Each VM was enrolled on a common DIF, named "normal DIF", constituted by three IPC processes, Test1.IRATI, Test2.IRATI and Test3.IRATI, each running on a different VM as shown in Figure 4. The client VM is connected to each server VM through distinct, non-overlapping virtual LAN clusters, VLAN1 and VLAN2. A shim DIF, created by shim IPC processes, facilitates hop-by-hop communication between hosts in a single VLAN, as shown. It is assumed that the client is aware of both server instances available through the normal DIF. Therefore, instead of sending an allocate request with a whatevercast name, the client sends an allocate request to the IPCM specifying the server application name along with its instance. In this way the client can connect with more than one server if it knows the instances already running, thus aggregating the bandwidth available on its different interfaces.
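The LB-DAP behaviour described in Section IV — track local load, flood changes to peers, and let the NSM pick the most lightly loaded instance — can be sketched as follows. This is a toy model under stated assumptions: the class and method names are hypothetical, peer propagation is a synchronous call rather than a CDAP message, and change suppression stands in for real duplicate-update handling:

```python
class LBDap:
    """Toy LB-DAP: one per IDD node, flooding load updates to its peers."""
    def __init__(self, node):
        self.node = node
        self.peers = []      # neighbouring LB-DAPs
        self.table = {}      # (app, instance) -> last known load

    def update(self, app, instance, load):
        key = (app, instance)
        if self.table.get(key) == load:
            return           # no change: suppress redundant flooding
        self.table[key] = load               # update the local NSM/IDD view
        for peer in self.peers:
            peer.update(app, instance, load)  # propagate to neighbours

    def lightest(self, app):
        # What the modified NSM would return to the requesting IPCM.
        cands = [(l, i) for (a, i), l in self.table.items() if a == app]
        return min(cands)[1] if cands else None

a, b = LBDap("n1"), LBDap("n2")
a.peers, b.peers = [b], [a]
a.update("ftp.server", "1", 0.7)   # load observed locally at n1
b.update("ftp.server", "2", 0.2)   # load observed locally at n2
print(a.lightest("ftp.server"))    # '2' -- the most lightly loaded instance
```

The change-suppression check is what keeps the flood from looping between peers; a deployment would additionally need staleness handling and rate limiting, which the paper leaves to policy.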
The experiment was conducted by transferring a 20 GB file placed on both server instances; a 50% gain was achieved when the file was downloaded in parallel from both instances compared to downloading from a single instance, and each server transferred 50% less data with half the connection lifetime. There are two aspects to consider for load balancing in RINA:

1. Selection of the server instance to connect to when multiple clients contend for the same server. If a client gives no server preference, e.g. it just wants to access example.com, then the NSM decides which server instance to connect to and allocates a flow. With this approach the NSM has a better view of allocations and can balance the load at the servers, so that clients eventually experience better throughput. This can be achieved by the proposed LB-DAF.

2. Re-ordering of received packets when a client connects to multiple servers and receives out-of-order data packets from them. This is the case when there are multiple servers for the same service under a single administrative domain. The client application process can choose the server(s) to connect to. For example, if two file servers each hold a specific file of size 2 GB, the client may connect to both servers and request half of the file from server 1 and the other half from server 2. The client application needs to handle the re-ordering of data packets if it intends to connect with multiple servers in order to aggregate the bandwidth available on its multiple interfaces. The client must keep track of all the data received from each server, because if the connection with one server is lost, the client application should be able to receive the remaining data from the other connection(s). This approach could reduce the load on servers, enhance throughput and aggregate bandwidth, provided each flow takes a distinct path.
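The client-side split in aspect 2 can be sketched as follows. This is an illustration only, with hypothetical helper names and servers modelled as plain callables; a real client would issue flow allocate requests and byte-range reads over RINA flows rather than function calls:

```python
def split_ranges(size, n):
    """Divide [0, size) into n contiguous byte ranges."""
    step = size // n
    return [(i * step, size if i == n - 1 else (i + 1) * step)
            for i in range(n)]

def parallel_fetch(servers, size):
    """Request one range per server, reassemble by offset, fail over on loss."""
    chunks = {}
    for srv, (lo, hi) in zip(servers, split_ranges(size, len(servers))):
        try:
            chunks[lo] = srv(lo, hi)        # fetch [lo, hi) from this server
        except ConnectionError:
            # connection lost: re-request the missing range from a survivor
            chunks[lo] = servers[0](lo, hi)
    # client-side re-ordering: join the chunks in offset order
    return b"".join(chunks[lo] for lo in sorted(chunks))

data = bytes(range(10))
ok = lambda lo, hi: data[lo:hi]             # healthy server instance
def down(lo, hi): raise ConnectionError     # instance whose connection fails
print(parallel_fetch([ok, down], len(data)) == data)  # True
```

Tracking chunks by offset is what makes both the re-ordering and the failover trivial: a lost range is simply re-requested and slotted back into place.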
If the flows pass through a common intermediate node, the available capacity at that node must be shared among the flows, which may cause performance degradation. One more point to note is that if the client application knows the number of server instances, it can choose one instance and initiate a connection with it. However, this instance may not be the most lightly loaded server, because the client application has no way to check the load on the server instances. Therefore, obtaining load statistics for each server instance through the LB-DAF and then initiating a connection with the most suitable instance could produce the desired network performance.

V. FUTURE WORK

To test the performance of the LB-DAF in a large network and data center scenario, we plan to use the Virtual Wall testbed provided by Fed4Fire [14]. The Virtual Wall testbed provides an emulation environment of more than 100 dual-processor, dual-core server nodes. All the nodes are connected to each other through a switch
having 1.5 Tbps of non-blocking switching back-plane capacity, as shown in Figure 5. The nodes are configurable and customizable to create and emulate any network topology.

Fig. 5. Virtual Wall testbed topology provided by Fed4Fire. Figure adapted from [14].
Fig. 6. An example topology to test the LB-DAF on the Virtual Wall testbed. The dotted circles show the client nodes; the solid circles represent the nodes where IDD and LB-DAF processes run, while the unfilled circles represent RINA-based forwarding nodes, which do not participate in load balancing but do participate in routing. The solid lines show active connections and the presence of at least one DIF between two interconnected nodes, while the broken lines show inactive connections and the absence of any DIF.

The topology proposed for LB-DAF testing is shown in Figure 6. There will be different server farms, virtually located in different geographical locations. Instances of a file server will run on different server machines in one or more server farms. Arbitrarily spread client nodes will initiate one or more connections with the file server by specifying only the whatevercast name of the file server. The LB-DAF, running on arbitrary nodes, will guide the clients to connect to the most lightly loaded server as per the load balancing policy. The performance parameters include, but are not limited to, the percentage utilization of each server machine, the average throughput gained for each flow, the average response time, and the energy consumption of each server machine with and without the LB-DAF.

VI. CONCLUSION

In this paper, the Load Balancing Distributed Application Facility (LB-DAF) is presented in order to balance the load on RINA-based servers in data centres.
As the LB-DAF works in a distributed manner and updates the Name Space Managers (NSMs) of its neighbouring nodes about the load on servers, it enables client nodes to initiate connections to the most lightly loaded server. The LB-DAF could remove the need for dedicated load balancing nodes in data centres without degrading service quality. Moreover, as the LB-DAF updates its peers about the current load on the server instances, it can facilitate load balancing among server instances available not only within data centers but also at distributed locations.

REFERENCES

[1] Investigating RINA as an Alternative to TCP/IP (IRATI), website [Accessed August 06, 2015].
[2] J. Day, Patterns in Network Architecture: A Return to Fundamentals, Prentice Hall, 2008.
[3] J. Day, I. Matta and K. Mattar, "Networking is IPC: a guiding principle to a better Internet", in Proceedings of the 2008 ACM CoNEXT Conference (CoNEXT '08), ACM, New York, NY, USA, Article 67, 6 pages.
[4] E. Trouva, E. Grasa, J. Day and S. Bunch, "Layer Discovery in RINA networks", IEEE 17th International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD), 2012.
[5] RINA Specifications [Accessed August 14, 2015].
[6] RINA Prototype Implementation [Accessed September 14, 2015].
[7] Cisco Systems, "Data center: Load balancing data center services", 2004.
[8] A. Greenberg et al., "The Cost of a Cloud: Research Problems in Data Center Networks", ACM SIGCOMM Computer Communication Review, Volume 39, Issue 1, January 2009.
[9] A. Singh et al., "Server-Storage Virtualization: Integration and Load Balancing in Data Centers", IEEE International Conference for High Performance Computing, Networking, Storage and Analysis.
[10] M. Randles, D. Lamb and A. Taleb-Bendiab, "A Comparative Study into Distributed Load Balancing Algorithms for Cloud Computing", IEEE 24th International Conference on Advanced Information Networking and Applications Workshops (WAINA), 2010.
[11] S. Bharti and K. K. Pattanaik, "Dynamic Distributed Flow Scheduling with Load Balancing for Data Center Networks", The 4th International Conference on Ambient Systems, Networks and Technologies (ANT 2013).
[12] D. Kashyap and J. Viradiya, "A Survey of Various Load Balancing Algorithms in Cloud Computing", International Journal of Scientific & Technology Research, Volume 3, Issue 11, November 2014.
[13] G. Gopinath and S. K. Vasudevan, "An in-depth analysis and study of load balancing techniques in the cloud computing environment", 2nd International Symposium on Big Data and Cloud Computing (ISBCC '15), Procedia Computer Science 50 (2015).
[14] Federation for Future Internet Research and Experimentation (Fed4Fire), website [Accessed September 13, 2015].
Ethernet-based Software Defined Network (SDN) Cloud Computing Research Center for Mobile Applications (CCMA), ITRI 雲 端 運 算 行 動 應 用 研 究 中 心 1 SDN Introduction Decoupling of control plane from data plane
Multidomain Network Based on Programmable Networks: Security Architecture
Multidomain Network Based on Programmable Networks: Security Architecture Bernardo Alarco, Marifeli Sedano, and Maria Calderon This paper proposes a generic security architecture designed for a multidomain
DEMYSTIFYING ROUTING SERVICES IN SOFTWAREDEFINED NETWORKING
DEMYSTIFYING ROUTING SERVICES IN STWAREDEFINED NETWORKING GAUTAM KHETRAPAL Engineering Project Manager, Aricent SAURABH KUMAR SHARMA Principal Systems Engineer, Technology, Aricent DEMYSTIFYING ROUTING
TRUFFLE Broadband Bonding Network Appliance. A Frequently Asked Question on. Link Bonding vs. Load Balancing
TRUFFLE Broadband Bonding Network Appliance A Frequently Asked Question on Link Bonding vs. Load Balancing 5703 Oberlin Dr Suite 208 San Diego, CA 92121 P:888.842.1231 F: 858.452.1035 [email protected]
Oracle SDN Performance Acceleration with Software-Defined Networking
Oracle SDN Performance Acceleration with Software-Defined Networking Oracle SDN, which delivers software-defined networking, boosts application performance and management flexibility by dynamically connecting
Chapter 3. Enterprise Campus Network Design
Chapter 3 Enterprise Campus Network Design 1 Overview The network foundation hosting these technologies for an emerging enterprise should be efficient, highly available, scalable, and manageable. This
Network Virtualization and Data Center Networks 263-3825-00 Data Center Virtualization - Basics. Qin Yin Fall Semester 2013
Network Virtualization and Data Center Networks 263-3825-00 Data Center Virtualization - Basics Qin Yin Fall Semester 2013 1 Walmart s Data Center 2 Amadeus Data Center 3 Google s Data Center 4 Data Center
Module 1: Overview of Network Infrastructure Design This module describes the key components of network infrastructure design.
SSM6435 - Course 6435A: Designing a Windows Server 2008 Network Infrastructure Overview About this Course This five-day course will provide students with an understanding of how to design a Windows Server
Extending Networking to Fit the Cloud
VXLAN Extending Networking to Fit the Cloud Kamau WangŨ H Ũ Kamau Wangũhgũ is a Consulting Architect at VMware and a member of the Global Technical Service, Center of Excellence group. Kamau s focus at
VMDC 3.0 Design Overview
CHAPTER 2 The Virtual Multiservice Data Center architecture is based on foundation principles of design in modularity, high availability, differentiated service support, secure multi-tenancy, and automated
GLOBAL SERVER LOAD BALANCING WITH SERVERIRON
APPLICATION NOTE GLOBAL SERVER LOAD BALANCING WITH SERVERIRON Growing Global Simply by connecting to the Internet, local businesses transform themselves into global ebusiness enterprises that span the
Web Application Hosting Cloud Architecture
Web Application Hosting Cloud Architecture Executive Overview This paper describes vendor neutral best practices for hosting web applications using cloud computing. The architectural elements described
Data Center Convergence. Ahmad Zamer, Brocade
Ahmad Zamer, Brocade SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA unless otherwise noted. Member companies and individual members may use this material in presentations
ITL BULLETIN FOR JANUARY 2011
ITL BULLETIN FOR JANUARY 2011 INTERNET PROTOCOL VERSION 6 (IPv6): NIST GUIDELINES HELP ORGANIZATIONS MANAGE THE SECURE DEPLOYMENT OF THE NEW NETWORK PROTOCOL Shirley Radack, Editor Computer Security Division
Network Virtualization for Large-Scale Data Centers
Network Virtualization for Large-Scale Data Centers Tatsuhiro Ando Osamu Shimokuni Katsuhito Asano The growing use of cloud technology by large enterprises to support their business continuity planning
Overview: Load Balancing with the MNLB Feature Set for LocalDirector
CHAPTER 1 Overview: Load Balancing with the MNLB Feature Set for LocalDirector This chapter provides a conceptual overview of load balancing and introduces Cisco s MultiNode Load Balancing (MNLB) Feature
Advanced Computer Networks. Datacenter Network Fabric
Advanced Computer Networks 263 3501 00 Datacenter Network Fabric Patrick Stuedi Spring Semester 2014 Oriana Riva, Department of Computer Science ETH Zürich 1 Outline Last week Today Supercomputer networking
Communications and Computer Networks
SFWR 4C03: Computer Networks and Computer Security January 5-8 2004 Lecturer: Kartik Krishnan Lectures 1-3 Communications and Computer Networks The fundamental purpose of a communication system is the
A Coordinated. Enterprise Networks Software Defined. and Application Fluent Programmable Networks
A Coordinated Virtual Infrastructure for SDN in Enterprise Networks Software Defined Networking (SDN), OpenFlow and Application Fluent Programmable Networks Strategic White Paper Increasing agility and
How To - Configure Virtual Host using FQDN How To Configure Virtual Host using FQDN
How To - Configure Virtual Host using FQDN How To Configure Virtual Host using FQDN Applicable Version: 10.6.2 onwards Overview Virtual host implementation is based on the Destination NAT concept. Virtual
FortiBalancer: Global Server Load Balancing WHITE PAPER
FortiBalancer: Global Server Load Balancing WHITE PAPER FORTINET FortiBalancer: Global Server Load Balancing PAGE 2 Introduction Scalability, high availability and performance are critical to the success
VXLAN: Scaling Data Center Capacity. White Paper
VXLAN: Scaling Data Center Capacity White Paper Virtual Extensible LAN (VXLAN) Overview This document provides an overview of how VXLAN works. It also provides criteria to help determine when and where
Achieving Real-Time Business Solutions Using Graph Database Technology and High Performance Networks
WHITE PAPER July 2014 Achieving Real-Time Business Solutions Using Graph Database Technology and High Performance Networks Contents Executive Summary...2 Background...3 InfiniteGraph...3 High Performance
A Proposed Service Broker Strategy in CloudAnalyst for Cost-Effective Data Center Selection
A Proposed Service Broker Strategy in CloudAnalyst for Cost-Effective Selection Dhaval Limbani*, Bhavesh Oza** *(Department of Information Technology, S. S. Engineering College, Bhavnagar) ** (Department
Lab 5 Explicit Proxy Performance, Load Balancing & Redundancy
Lab 5 Explicit Proxy Performance, Load Balancing & Redundancy Objectives The purpose of this lab is to demonstrate both high availability and performance using virtual IPs coupled with DNS round robin
Disaster Recovery Design Ehab Ashary University of Colorado at Colorado Springs
Disaster Recovery Design Ehab Ashary University of Colorado at Colorado Springs As a head of the campus network department in the Deanship of Information Technology at King Abdulaziz University for more
How To Provide Qos Based Routing In The Internet
CHAPTER 2 QoS ROUTING AND ITS ROLE IN QOS PARADIGM 22 QoS ROUTING AND ITS ROLE IN QOS PARADIGM 2.1 INTRODUCTION As the main emphasis of the present research work is on achieving QoS in routing, hence this
DEPLOYMENT GUIDE Version 1.1. DNS Traffic Management using the BIG-IP Local Traffic Manager
DEPLOYMENT GUIDE Version 1.1 DNS Traffic Management using the BIG-IP Local Traffic Manager Table of Contents Table of Contents Introducing DNS server traffic management with the BIG-IP LTM Prerequisites
High Performance Cluster Support for NLB on Window
High Performance Cluster Support for NLB on Window [1]Arvind Rathi, [2] Kirti, [3] Neelam [1]M.Tech Student, Department of CSE, GITM, Gurgaon Haryana (India) [email protected] [2]Asst. Professor,
AUTO DEFAULT GATEWAY SETTINGS FOR VIRTUAL MACHINES IN SERVERS USING DEFAULT GATEWAY WEIGHT SETTINGS PROTOCOL (DGW)
AUTO DEFAULT GATEWAY SETTINGS FOR VIRTUAL MACHINES IN SERVERS USING DEFAULT GATEWAY WEIGHT SETTINGS PROTOCOL (DGW) Suman Dutta 1, Shouman Barua 2 and Jishu Sen 3 1 IT Trainer, Logitrain.com.au 2 PhD research
Zscaler Internet Security Frequently Asked Questions
Zscaler Internet Security Frequently Asked Questions 1 Technical FAQ PRODUCT LICENSING & PRICING How is Zscaler Internet Security Zscaler Internet Security is licensed on number of Cradlepoint devices
TRUFFLE Broadband Bonding Network Appliance BBNA6401. A Frequently Asked Question on. Link Bonding vs. Load Balancing
TRUFFLE Broadband Bonding Network Appliance BBNA6401 A Frequently Asked Question on Link Bonding vs. Load Balancing LBRvsBBNAFeb15_08b 1 Question: What's the difference between a Truffle Broadband Bonding
MetroNet6 - Homeland Security IPv6 R&D over Wireless
MetroNet6 - Homeland Security IPv6 R&D over Wireless By: George Usi, President, Sacramento Technology Group and Project Manager, California IPv6 Task Force [email protected] Acknowledgement Reference:
Load Balancing and Maintaining the Qos on Cloud Partitioning For the Public Cloud
Load Balancing and Maintaining the Qos on Cloud Partitioning For the Public Cloud 1 S.Karthika, 2 T.Lavanya, 3 G.Gokila, 4 A.Arunraja 5 S.Sarumathi, 6 S.Saravanakumar, 7 A.Gokilavani 1,2,3,4 Student, Department
LOAD BALANCING IN WEB SERVER
LOAD BALANCING IN WEB SERVER Renu Tyagi 1, Shaily Chaudhary 2, Sweta Payala 3 UG, 1,2,3 Department of Information & Technology, Raj Kumar Goel Institute of Technology for Women, Gautam Buddh Technical
hp ProLiant network adapter teaming
hp networking june 2003 hp ProLiant network adapter teaming technical white paper table of contents introduction 2 executive summary 2 overview of network addressing 2 layer 2 vs. layer 3 addressing 2
Extreme Networks: Building Cloud-Scale Networks Using Open Fabric Architectures A SOLUTION WHITE PAPER
Extreme Networks: Building Cloud-Scale Networks Using Open Fabric Architectures A SOLUTION WHITE PAPER WHITE PAPER Building Cloud- Scale Networks Abstract TABLE OF CONTENTS Introduction 2 Open Fabric-Based
packet retransmitting based on dynamic route table technology, as shown in fig. 2 and 3.
Implementation of an Emulation Environment for Large Scale Network Security Experiments Cui Yimin, Liu Li, Jin Qi, Kuang Xiaohui National Key Laboratory of Science and Technology on Information System
The Software Defined Hybrid Packet Optical Datacenter Network SDN AT LIGHT SPEED TM. 2012-13 CALIENT Technologies www.calient.
The Software Defined Hybrid Packet Optical Datacenter Network SDN AT LIGHT SPEED TM 2012-13 CALIENT Technologies www.calient.net 1 INTRODUCTION In datacenter networks, video, mobile data, and big data
Scaling 10Gb/s Clustering at Wire-Speed
Scaling 10Gb/s Clustering at Wire-Speed InfiniBand offers cost-effective wire-speed scaling with deterministic performance Mellanox Technologies Inc. 2900 Stender Way, Santa Clara, CA 95054 Tel: 408-970-3400
Experimental evaluation of a Recursive InterNetwork Architecture prototype
Experimental evaluation of a Recursive InterNetwork Architecture prototype Sander Vrijders, Dimitri Staessens, Didier Colle (Ghent University iminds) Francesco Salvestrini, Vincenzo Maffione (Nextworks
Network Level Multihoming and BGP Challenges
Network Level Multihoming and BGP Challenges Li Jia Helsinki University of Technology [email protected] Abstract Multihoming has been traditionally employed by enterprises and ISPs to improve network connectivity.
Virtual Machine in Data Center Switches Huawei Virtual System
Virtual Machine in Data Center Switches Huawei Virtual System Contents 1 Introduction... 3 2 VS: From the Aspect of Virtualization Technology... 3 3 VS: From the Aspect of Market Driving... 4 4 VS: From
How To Make A Network Plan Based On Bg, Qos, And Autonomous System (As)
Policy Based QoS support using BGP Routing Priyadarsi Nanda and Andrew James Simmonds Department of Computer Systems Faculty of Information Technology University of Technology, Sydney Broadway, NSW Australia
New Cloud Networking Enabled by ProgrammableFlow
New Cloud Networking Enabled by ProgrammableFlow NISHIHARA Motoo, IWATA Atsushi, YUN Su-hun WATANABE Hiroyuki, IIJIMA Akio, KANOH Toshiyuki Abstract Network virtualization, network programmability, and
Request Routing, Load-Balancing and Fault- Tolerance Solution - MediaDNS
White paper Request Routing, Load-Balancing and Fault- Tolerance Solution - MediaDNS June 2001 Response in Global Environment Simply by connecting to the Internet, local businesses transform themselves
Load Balancing. Final Network Exam LSNAT. Sommaire. How works a "traditional" NAT? Un article de Le wiki des TPs RSM.
Load Balancing Un article de Le wiki des TPs RSM. PC Final Network Exam Sommaire 1 LSNAT 1.1 Deployement of LSNAT in a globally unique address space (LS-NAT) 1.2 Operation of LSNAT in conjunction with
A Simulation Study of Effect of MPLS on Latency over a Wide Area Network (WAN)
A Simulation Study of Effect of MPLS on Latency over a Wide Area Network (WAN) Adeyinka A. Adewale, Samuel N. John, and Charles Ndujiuba 1 Department of Electrical and Information Engineering, Covenant
Using IPM to Measure Network Performance
CHAPTER 3 Using IPM to Measure Network Performance This chapter provides details on using IPM to measure latency, jitter, availability, packet loss, and errors. It includes the following sections: Measuring
FAQ: BroadLink Multi-homing Load Balancers
FAQ: BroadLink Multi-homing Load Balancers BroadLink Overview Outbound Traffic Inbound Traffic Bandwidth Management Persistent Routing High Availability BroadLink Overview 1. What is BroadLink? BroadLink
Testing Software Defined Network (SDN) For Data Center and Cloud VERYX TECHNOLOGIES
Testing Software Defined Network (SDN) For Data Center and Cloud VERYX TECHNOLOGIES Table of Contents Introduction... 1 SDN - An Overview... 2 SDN: Solution Layers and its Key Requirements to be validated...
NETWORK ISSUES: COSTS & OPTIONS
VIDEO CONFERENCING NETWORK ISSUES: COSTS & OPTIONS Prepared By: S. Ann Earon, Ph.D., President Telemanagement Resources International Inc. Sponsored by Vidyo By:S.AnnEaron,Ph.D. Introduction Successful
LOAD BALANCING AND EFFICIENT CLUSTERING FOR IMPROVING NETWORK PERFORMANCE IN AD-HOC NETWORKS
LOAD BALANCING AND EFFICIENT CLUSTERING FOR IMPROVING NETWORK PERFORMANCE IN AD-HOC NETWORKS Saranya.S 1, Menakambal.S 2 1 M.E., Embedded System Technologies, Nandha Engineering College (Autonomous), (India)
Data Center Networking Designing Today s Data Center
Data Center Networking Designing Today s Data Center There is nothing more important than our customers. Data Center Networking Designing Today s Data Center Executive Summary Demand for application availability
VoIP versus VoMPLS Performance Evaluation
www.ijcsi.org 194 VoIP versus VoMPLS Performance Evaluation M. Abdel-Azim 1, M.M.Awad 2 and H.A.Sakr 3 1 ' ECE Department, Mansoura University, Mansoura, Egypt 2 ' SCADA and Telecom General Manager, GASCO,
Cisco Dynamic Multipoint VPN: Simple and Secure Branch-to-Branch Communications
Cisco Dynamic Multipoint VPN: Simple and Secure Branch-to-Branch Communications Product Overview Cisco Dynamic Multipoint VPN (DMVPN) is a Cisco IOS Software-based security solution for building scalable
Performance Evaluation of AODV, OLSR Routing Protocol in VOIP Over Ad Hoc
(International Journal of Computer Science & Management Studies) Vol. 17, Issue 01 Performance Evaluation of AODV, OLSR Routing Protocol in VOIP Over Ad Hoc Dr. Khalid Hamid Bilal Khartoum, Sudan [email protected]
Walmart s Data Center. Amadeus Data Center. Google s Data Center. Data Center Evolution 1.0. Data Center Evolution 2.0
Walmart s Data Center Network Virtualization and Data Center Networks 263-3825-00 Data Center Virtualization - Basics Qin Yin Fall emester 2013 1 2 Amadeus Data Center Google s Data Center 3 4 Data Center
Guide to TCP/IP, Third Edition. Chapter 3: Data Link and Network Layer TCP/IP Protocols
Guide to TCP/IP, Third Edition Chapter 3: Data Link and Network Layer TCP/IP Protocols Objectives Understand the role that data link protocols, such as SLIP and PPP, play for TCP/IP Distinguish among various
STATE OF THE ART OF DATA CENTRE NETWORK TECHNOLOGIES CASE: COMPARISON BETWEEN ETHERNET FABRIC SOLUTIONS
STATE OF THE ART OF DATA CENTRE NETWORK TECHNOLOGIES CASE: COMPARISON BETWEEN ETHERNET FABRIC SOLUTIONS Supervisor: Prof. Jukka Manner Instructor: Lic.Sc. (Tech) Markus Peuhkuri Francesco Maestrelli 17
Brocade Solution for EMC VSPEX Server Virtualization
Reference Architecture Brocade Solution Blueprint Brocade Solution for EMC VSPEX Server Virtualization Microsoft Hyper-V for 50 & 100 Virtual Machines Enabled by Microsoft Hyper-V, Brocade ICX series switch,
Analysis of Network Segmentation Techniques in Cloud Data Centers
64 Int'l Conf. Grid & Cloud Computing and Applications GCA'15 Analysis of Network Segmentation Techniques in Cloud Data Centers Ramaswamy Chandramouli Computer Security Division, Information Technology
Demonstrating the high performance and feature richness of the compact MX Series
WHITE PAPER Midrange MX Series 3D Universal Edge Routers Evaluation Report Demonstrating the high performance and feature richness of the compact MX Series Copyright 2011, Juniper Networks, Inc. 1 Table
Planning and Maintaining a Microsoft Windows Server Network Infrastructure
Unit 27: Planning and Maintaining a Microsoft Windows Server Network Infrastructure Learning outcomes A candidate following a programme of learning leading to this unit will be able to: Configure security
Cisco Active Network Abstraction 4.0
Cisco Active Network Abstraction 4.0 Product Overview Cisco Active Network Abstraction (ANA) is a flexible, vendor-neutral network resource management solution for a multitechnology, multiservice network
How To Make A Vpc More Secure With A Cloud Network Overlay (Network) On A Vlan) On An Openstack Vlan On A Server On A Network On A 2D (Vlan) (Vpn) On Your Vlan
Centec s SDN Switch Built from the Ground Up to Deliver an Optimal Virtual Private Cloud Table of Contents Virtualization Fueling New Possibilities Virtual Private Cloud Offerings... 2 Current Approaches
CloudLink - The On-Ramp to the Cloud Security, Management and Performance Optimization for Multi-Tenant Private and Public Clouds
- The On-Ramp to the Cloud Security, Management and Performance Optimization for Multi-Tenant Private and Public Clouds February 2011 1 Introduction Today's business environment requires organizations
