Network Infrastructure

TSL has a great deal of experience in designing and implementing secure, high-availability broadcast network systems, and in working with clients to integrate these with new or existing corporate LAN and WAN infrastructure. TSL enjoys a close relationship with vendors, including Cisco Systems, and has facilitated many proofs of concept verifying the interoperability of broadcast equipment with high-end network equipment. TSL has also undertaken many tests to derive the optimum design and implementation for what is such an important part of any project of this nature. TSL will keep all network design and implementation in house; it will not be subcontracted to any third party. TSL's in-house network team are all Cisco certified engineers with rich experience of broadcast network implementation.

The network consists of three main areas:
- Broadcast core and aggregation network at SPTN London
- Firewall, IPS and connection to WAN and existing networks
- Archive Fibre Channel network at SPTN London

Core Network

For the core and aggregation network at the main site, TSL proposes a collapsed core multi-tiered model, with dual Cisco Nexus 7010 core switches forming the broadcast core, and dual Cisco 4510 switches forming an aggregation tier. All hi-res hosts, such as playout servers, MAM, graphics, HSM and the central storage, would be connected directly to the core Nexus switches. This will ensure maximum throughput and minimum latency between all such hosts. The Nexus 7000 has a wire-speed backplane to which all 1GE and 10GE cards connect. The switch cannot be oversubscribed, which ensures that network congestion cannot occur within the core. Two Cisco 4510 aggregation switches will be provisioned to provide connectivity to remaining hosts such as modular frames, multiviewers, desktops and other ancillary devices. These 4510 switches will also be uplinked to the core via multiple 10GE links.
These switches are a high-availability platform and provide high-density aggregation for the remaining broadcast hosts. TSL has found this collapsed core design the most expedient in a broadcast environment. The overall size of the network does not warrant separation into more layers, such as access, distribution and core, as may be found in a more traditional corporate network design. The collapsed core model provides lower overall latency and a more deterministic traffic path due to a physically simpler topology. Furthermore, reconvergence in the event of a failure is faster, as fewer devices need to take part. Reducing the overall switch count also lowers ongoing support and management overhead. It is proposed to provide external connectivity via a WAN handoff to a resilient pair of multigigabit firewalls. Connectivity to external environments such as external content dropboxes, regional offices and the Singapore Playout Centre will be provided through these firewalls, along with connectivity to the existing enterprise network. An overview of the proposed core network is shown in the diagram below.
The broadcast core network is provisioned on a redundant pair of Cisco Nexus 7000 series switches as the core switching platform. The Nexus 7010 switch has many fully integrated high-availability features, such as multiple PSUs, dual supervisor engines and multiple switching fabric modules. The backplane capacity allows an uncontended throughput of up to 80Gbit/sec from each line card. TSL network specialists have tested this platform in a multichannel HD playout and production environment and found it to be more than capable in terms of throughput, and utterly stable. Additional capacity would be available in terms of both ports and throughput to allow for future expansion if required. No non-broadcast hosts will be directly connected to this core, and the configuration will be optimised for high throughput and resilience. The Nexus 7010 switch is a service-provider-scale platform, able to support multiple traffic streams at multigigabit speeds with consistent latency and throughput. All hosts moving or generating hi-res traffic, such as MAM hi-res servers, ingest and playback servers and graphics application servers, along with latency-sensitive equipment, would be directly connected to these core switches. It is proposed to deploy two Cisco 4510R-E switches for peripheral aggregation. The Cisco 4510R-E, suitably specified, provides redundant PSUs and supervisor engines, along with 10GE uplinks to the core. Each chassis is also able to accommodate up to four hundred 10/100/1000 copper Ethernet ports for ancillary device connection. All devices such as the KVM, modular frames and multiviewers would connect to these switches. TSL has designed and deployed several broadcast system cores using Cisco's 6500 platform. TSL has considered the 6500 for this project, but would recommend against its deployment.
The 6500 is a very capable, high availability platform, with a backplane capacity of 80Gbit/sec to each line card (using a Sup 2T/PFC4/DFC4 combination). With a project on the scale of SPTN, with the proposed MAM and scale of associated workflows, all potentially operating with HD formats, it is believed the potential for
switch oversubscription in normal operation is too high. It is felt that caveats and limitations in the 6500 Sup 2T platform architecture could prove to overly restrict SPTN's future development. Furthermore, no green enhancements or special features are available on this platform. As detailed above, the broadcast core will consist of dual Cisco Nexus 7010 switches, each populated with dual supervisor engines, three fabric modules and two PSUs. Port modules for both 1GE and 10GE copper will be fitted, with sufficient numbers of ports to support the broadcast systems. All high-bandwidth hosts, such as the MAM and production servers, archive HSM, edit suites and Central Storage, will be connected to these core switches. The core switches support jumbo (9000 byte) Ethernet frames. Where required by the end hosts, such as the MAM and production servers, use of jumbo frames will be configured. Multiple VLANs will be defined on each switch, and each VLAN will be assigned an equipment subnet. VLANs will be defined by equipment or LAN functionality, i.e. MAM, modular equipment, multiviewers, KVM etc. The configured VLANs will be the same across both switches. HSRP will be deployed to provide host-to-default-router resilience in the event that the primary core switch fails. Some broadcast hosts have dual Ethernet ports configured as a main and a backup; the above topology supports high-availability connectivity of such hosts. One Ethernet connection will be made to each switch, so that in the event of loss of the primary port or switch, end-host connectivity will be maintained. Some high-bandwidth hosts, such as the MAM hi-res servers and HSM, have multiple Ethernet ports which can be configured to support link aggregation. The above topology supports LACP and PAgP in order to support multiple gigabit or 10 gigabit Ethernet links in an aggregated bundle.
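As an illustration of the host-facing VLAN, HSRP and link-aggregation arrangements described above, a Cisco IOS-style configuration fragment might look as follows. This is a sketch only: all VLAN IDs, IP addresses and interface names are assumptions for illustration rather than final design values, and the equivalent Nexus NX-OS syntax differs slightly.

```
! Illustrative sketch only -- VLAN IDs, subnets and interfaces are assumptions
vlan 110
 name MAM
!
interface Vlan110
 description MAM equipment subnet
 ip address 10.1.110.2 255.255.255.0
 mtu 9000                        ! jumbo frames where required by end hosts
 standby 110 ip 10.1.110.1      ! HSRP virtual default gateway
 standby 110 priority 110       ! A-side core preferred as active router
 standby 110 preempt
!
! Aggregated bundle for a multi-port hi-res host
interface Port-channel10
 description MAM hi-res server bundle
 switchport access vlan 110
!
interface range GigabitEthernet1/1 - 2
 channel-group 10 mode active   ! 'active' selects LACP; 'desirable' would select PAgP
```

Where host NIC teaming supports it, LACP (mode active) would typically be preferred over PAgP as the standards-based option.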
As shown in the diagram on the previous page, the peripheral switches will be dual homed to each core switch via 10GE Ethernet links from the supervisor engines. These switches will be configured as traditional access layer switches, supporting multiple VLANs and performing layer two aggregation to the core switches. It is not considered necessary to configure these switches as layer three devices. Operating as layer two devices, resilience will be controlled by RSTP (Rapid Spanning Tree Protocol), and broadcast domains will be kept small, extending only to the trunk and VLAN interfaces of the core switches. This is considered the optimal configuration for the size of the broadcast network and provides the most straightforward architecture for operational support. VTP will operate in transparent mode on all switches, allowing the use of extended VLAN IDs. VTP operating in client/server mode is considered a risky implementation in a network such as this: the overall number of switches and VLANs is small and easily manageable, and the risk that a switch's VLAN database could be overwritten by the accidental connection of a misconfigured switch is considered unacceptable. All current spanning-tree switchport enhancements, such as PortFast, BPDU guard, unidirectional link detection, broadcast storm control and UplinkFast, will be enabled where required. Rapid Spanning Tree will be deployed on the uplinks between core and peripheral switches. It is not believed necessary to deploy 802.1s Multiple Spanning Tree due to the relatively low number of VLANs in use and the negligible BPDU overhead saving that would be made. Previous testing by TSL of such topologies has shown that a deployment of RSTP provides the most stable and fastest-reconverging overall architecture.
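The layer two behaviour described above could be sketched in Cisco IOS-style configuration as follows; interface and VLAN numbering are illustrative assumptions only. The per-VLAN path costs show one way of splitting odd and even numbered VLANs across the two 10GE uplinks so that both remain active under normal operation.

```
! Illustrative sketch only -- interface and VLAN numbering are assumptions
spanning-tree mode rapid-pvst        ! RSTP, per VLAN
vtp mode transparent                 ! no client/server VLAN propagation
!
interface GigabitEthernet2/1
 description host edge port
 switchport mode access
 spanning-tree portfast              ! host ports forward immediately
 spanning-tree bpduguard enable      ! err-disable if a switch is accidentally connected
!
interface TenGigabitEthernet1/1
 description uplink to A-side core
 spanning-tree vlan 111,113,115 cost 2   ! odd VLANs prefer the A side
 spanning-tree vlan 110,112,114 cost 4   ! even VLANs prefer the B side
```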
Individual VLAN path costs will be set such that the primary link for all odd numbered VLANs will be via the A side core, and for all even numbered VLANs via the B side core, in order to ensure that all 10GE links are utilised under normal operation. The use of a number of smaller, fixed-configuration or stacked, discrete switches, all connected back to the core, has been discounted. Whilst technically also a valid solution, previous experience has shown that such a setup is more difficult to support, often more expensive and has a higher overall power consumption. Having a large number of
switches, distributed across many racks, adds unnecessary complexity and scope for errors to the daily or emergency support function. It is assumed that SPTN uses RFC1918 private addressing across all sites. It is recommended that a new address range, one that can be logically summarised, be utilised for this project. Experience has shown that a Class B-sized subnet, i.e. a /16, would be of a suitable scale. This could then be subnetted to allow a /24-sized subnet to be allocated to each VLAN. An additional benefit of this scheme is that the third octet of the address could also be defined as the VLAN ID. With the IP address assignment as proposed, the complete broadcast network range can be summarised as a single route. This makes further upstream route redistribution simpler to deploy and support. All host addressing would be statically assigned; no addresses would be assigned via DHCP. It is proposed to use OSPF as the dynamic routing protocol within the broadcast network. The reference bandwidth will be set appropriately to support the multiple 10GE EtherChannels in use, and all available enhancements, such as BFD, will be configured. With the proposed system it is fully possible to include the firewalls within the OSPF domain and extend the dynamic routing domain to include the regional sites and integrate with the existing enterprise core. Alternatively, the routing can be handled through the use of static and default route statements. Either way, the system has been designed to be flexible and expandable enough to accommodate any architectural requirements placed upon it. All of the networking equipment detailed fully supports the PIM and IGMP protocols. If multicast is determined to be required, it is proposed to configure it to operate in PIM sparse mode. At this stage, multicast clients have been identified in the broadcast system only to provide confidence monitoring of the Singapore Playout operation.
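The addressing, OSPF and PIM sparse mode proposals above might be sketched in Cisco IOS style as follows; the process ID, router ID, address ranges, RP address and interface names are all assumptions for illustration, to be replaced by values agreed at detailed design.

```
! Illustrative sketch only -- all addresses and IDs are assumptions
! Addressing scheme: broadcast network 10.1.0.0/16, with VLAN n using 10.1.n.0/24
router ospf 1
 router-id 10.1.0.1
 auto-cost reference-bandwidth 100000    ! scale costs for 10GE/EtherChannel links
 network 10.1.0.0 0.0.255.255 area 0    ! whole /16 summarises as a single upstream route
 bfd all-interfaces                      ! sub-second failure detection for OSPF
!
interface TenGigabitEthernet1/1
 description transit link towards firewalls
 bfd interval 250 min_rx 250 multiplier 3
!
! Multicast (if required): PIM sparse mode with a statically defined RP
ip multicast-routing
ip pim rp-address 10.1.250.1
!
interface Vlan120
 description confidence monitoring of Singapore Playout
 ip pim sparse-mode
```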
A single multicast domain will be configured at the broadcast core with its own RP (Rendezvous Point). Should expansion of multicast services be required (such as an internal IPTV system), this configuration can be extended to include the corporate infrastructure as a separate PIM sparse mode domain with its own RP. This provides an efficient traffic path, with all multicast sources arriving at the broadcast core switches, and both the shared tree and shortest path tree taking the same path to the corporate LAN handoff layer. Furthermore, firewall configuration would be simplified, and all non-broadcast multicast clients would be kept in a PIM domain away from the broadcast core. As with unicast addressing, the multicast addressing range will be agreed between TSL and SPTN. All broadcast equipment will connect to the broadcast network core. All external connectivity will be via a resilient handoff layer to a high-availability pair of Cisco ASA5585 firewalls. The handoff layer will be provisioned as 10GE ports in transit VLANs from the core 7010 switches. It is not considered possible to precisely quantify the bandwidth requirement of the proposed system. Although detailed calculations have been made to ensure the proposed system will cope with theoretical limits, such bandwidth varies greatly with the complexity of the proposed workflows and the number of users. Sufficient bandwidth has been provisioned in the network such that all hosts can operate at wire speed, so that network congestion simply cannot occur. The use of the Nexus core switches allows for full wire-speed connectivity from all hosts. It is considered that no tangible benefit can be gained from deploying a QoS policy within the network core. QoS is a form of managed unfairness, providing differentiated services to different classes of traffic. As outlined above, the network is designed such that all hosts can operate at wire speed.
It is not considered effective to provision a QoS policy to manage bandwidth within the core LAN infrastructure.

Firewalls, IPS and External Connectivity

It is proposed that the broadcast network sit completely behind firewalls, to ensure the highest level of security for the business-critical content and services and to allow granular control and monitoring of access into and out of the broadcast system. It is therefore proposed that
all external connectivity to the SPTN broadcast system comes via a pair of broadcast firewalls. In order for this architecture to be practical, a multigigabit resilient firewall cluster is required. It is suggested that SPTN also considers integrated virus and malware protection of the broadcast network against all external threats; this functionality is included in the TSL proposal. TSL network specialists have recently undertaken a proof of concept for security appliances to be deployed in a role similar to that required by this project. The overriding requirement is low-latency multigigabit throughput, with full stateful protection and integrated IPS. The following firewalls and security appliances were tested as a group by TSL network specialists (throughput figures are sustained file transfer):

- Syphan ITC ( Gbit/sec): Excellent performance; SNORT signatures supported; active/active redundant pair. Preproduction unit only; production supplies were delayed and it was not available for production deployment.
- Fortigate MC ( Gbit/sec): Good performance; burst to 8.6Gbit/sec, sustained throughput 7.6Gbit/sec. Additional 10GE accelerators available. Signature detection via Fortinet FortiGuard signature packs.
- Palo Alto PA ( Gbit/sec): Memory leak on sustained high-rate file transfer. It is understood new software is available; not re-tested.
- Cisco ASA ( Gbit/sec): Good performance; active/active available. Integrated IPS/IDS.

After completing the above group test, TSL network specialists have recently spent further lab time at Cisco evaluating the new ASA5585. This platform is relatively new, and builds on the performance of the ASA5580 platform, but includes IPS and associated malware signature detection with no performance penalty. In comparative evaluation it is felt that the Cisco ASA5585 is the best performing platform in terms of availability and packet inspection throughput. The ASA range also supports a feature known as contexts.
Similar to the VDCs available on the Nexus switches, contexts allow multiple isolated virtual firewalls to operate within a single chassis. Contexts also allow active/active operation, whereby both main and standby firewalls can be active and forwarding traffic at once. It is proposed to deploy dual Cisco ASA5585-X SSP-40 firewalls with integrated IPS. The firewalls will be configured in a high-availability active/standby cluster, and will connect to the proposed broadcast core switches and SPTN's existing corporate core or distribution switches using either 1GE or 10GE connections (depending upon the traffic profiles and flows). Further resilient 10GE connectivity will also be provisioned to a pair of WAN handoff switches on which WAN links to external sites and content provider drop boxes will terminate. The proposed firewall solution has the flexibility, capacity and security features to be positioned either as an internet-facing firewall or as a handoff to the enterprise core for internet-bound traffic. The detail of such connectivity would be decided during detailed design sessions held between SPTN and TSL network engineers. It is anticipated that SPTN's Harris Vision scheduling system either exists on the corporate network infrastructure, or on a network segment that will be connected to another untrusted interface on the firewalls. A number of MAM browse workstations will appear on the enterprise network, which will require access to the browse material. Neither of the identified traffic flows will be of significant bandwidth. The diagram below shows a proposed configuration for the broadcast handoff layers and firewalls.
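The proposed active/standby firewall cluster might be configured along the following lines on the primary ASA. This is a sketch only: the interface names, the choice of failover link and all addresses are assumptions for illustration.

```
! Illustrative sketch only (primary unit) -- interfaces and addresses are assumptions
interface TenGigabitEthernet0/0
 nameif broadcast
 security-level 100
 ip address 10.1.254.1 255.255.255.0 standby 10.1.254.2
!
interface TenGigabitEthernet0/1
 nameif corporate
 security-level 50
 ip address 172.16.254.1 255.255.255.0 standby 172.16.254.2
!
! Dedicated failover link, also carrying stateful session replication
failover lan unit primary
failover lan interface FOLINK TenGigabitEthernet0/8
failover link FOLINK TenGigabitEthernet0/8
failover interface ip FOLINK 192.168.255.1 255.255.255.252 standby 192.168.255.2
failover
```

With stateful replication across the failover link, established sessions survive a unit failover without interruption to the broadcast services.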
The firewalls will be provisioned with intrusion prevention and advanced anti-worm services delivered by the IPS SSP modules. All traffic entering the broadcast system will be scanned for malware and viruses. It is proposed to keep the entire broadcast environment sterile by both firewalling and utilising IPS at the ingress point. TSL has been instrumental in defining security and operational policies for file-based media platforms at other broadcasters. It is proposed that we would advise SPTN on the most secure operational policies to both protect the environment from malware and viruses and protect valuable hi-res media from unauthorised access or extraction, along with the most operationally effective management and monitoring procedures. It is proposed that TSL's Network Architects would undertake a workshop with SPTN's existing Security Analysts and propose a baseline security policy that could then be defined in detail. All implementation and operation would then adhere to this policy on an ongoing basis. The policy would cover matters from ingesting media from USB sticks, to the periodicity of AV updates and operational monitoring.

WAN Handoff Switches

A resilient network infrastructure is proposed to support the connectivity of content provider drop boxes and remote site interconnection. It is proposed to deploy a pair of Cisco 4948E switches as a WAN handoff layer on which drop boxes and external connections will terminate. The switches will be connected to the external side of the broadcast firewalls via resilient 10GE links, and all onward host or circuit connectivity will be presented as either 1GE or 10GE connections directly from these switches. TSL would propose to present SPTN with a sufficient number of interfaces as 1 or 10Gb Ethernet connections. It is understood that any onward connectivity, management and preparation for WAN connectivity will be handled by SPTN. It is proposed to deploy the 4948E switches as A and B side switches.
The 4948E platform was selected over the Cisco 3750X platform due to its higher quantity of 10GE interfaces, allowing greater future expansion. It is also felt that a 3750X in a stack configuration does not give adequate demarcation between the A and B sides of the installation and could prove more difficult to support operationally. Previous deployments and testing have also shown
that the 3750 in a stack configuration takes an unacceptable amount of time to reconverge in the event of a stack master failure. It was also noted that the relatively new 4948E model is less expensive than the original 4948 model, and similarly priced to an equivalently specified 3750X. The 4948E also has a non-blocking architecture and slightly lower latency than the 3750X. It is proposed to configure a number of VLANs, or one VLAN containing multiple private VLANs, on the 4948Es to handle the multiple content distribution dropboxes. Granular access control policies will be applied at the ASA to permit only trusted access to these devices on approved protocols. The dropboxes will resiliently connect to both A and B switches, either through individual 1GE links or aggregated (LACP) GE bundles. HSRP will be configured on the 4948s to provide first hop redundancy. Supplied with Enterprise-level software, the WAN handoff switches can fully participate in dynamic routing with remote sites should this be identified as a requirement. Little is currently known about any existing or proposed WAN connectivity to remote sites. It is fully anticipated that there will be some resilient WAN connections to sites such as the Singapore Playout Centre and regional offices, and understood that any such WAN connectivity will be arranged by SPTN. TSL will present the required number of connections to SPTN WAN interfacing as 10/100/1000Mbps Ethernet on RJ45 connectors. In the unlikely event that 10GE is required to any remote sites, there is sufficient capacity on the 4948E to provide a resilient pair of connections, one from each switch, which would be presented as singlemode or multimode fibre connections. It is expected that specific detailed design sessions would be held between SPTN and TSL Network Architects to fully scope the WAN connectivity and ensure full integration with the broadcast network.
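The dropbox isolation described above might be sketched on the 4948Es as follows; the VLAN IDs, addresses and interfaces are assumptions for illustration. An isolated private VLAN keeps individual dropboxes from reaching one another directly, while all remain reachable through the firewalled gateway.

```
! Illustrative sketch only -- VLAN IDs and addressing are assumptions
vlan 300
 private-vlan primary
 private-vlan association 301
vlan 301
 private-vlan isolated           ! dropboxes cannot reach one another directly
!
interface GigabitEthernet1/10
 description content provider dropbox
 switchport mode private-vlan host
 switchport private-vlan host-association 300 301
!
interface Vlan300
 description dropbox gateway
 ip address 192.168.30.2 255.255.255.0
 private-vlan mapping 301
 standby 30 ip 192.168.30.1     ! HSRP first-hop redundancy across the A/B 4948Es
 standby 30 preempt
```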
Archive Fibre Channel Network

A resilient Fibre Channel network will be deployed to provide connectivity between the MAM, HSM servers and the archive robot. Two Cisco MDS9148 switches will be provisioned to provide the switched core of the Fibre Channel network. All hosts have dual port HBAs, and will be directly fibre attached to each of the switches. No single point of failure exists with the proposed implementation; multipath interfaces will be created on all servers so that both
fabrics can be used in the event of a failure or fault. Both switches will have an Ethernet management connection back to one of the broadcast aggregation switches. Appropriate zone sets will be configured on the Fibre Channel switches such that zones are created for all targets/initiators. The HBAs on all proposed servers and tape drives operate at 1/2/4Gbit/sec. The switches have been chosen such that all HBAs can operate at 4Gbit/sec with no buffer occupancy issues, ensuring maximum overall fabric throughput. It was not considered expedient to deploy an 8Gbit/sec fabric, as the read/write speed of LTO5 is around 1250Mbit/sec, and the maximum FC throughput on the proposed MAM servers is around 3.8Gbit/sec. It has been assumed that the tape archive will be co-located with the MAM and other equipment, and that the fibre run between them will not exceed 300 metres. If the archive is to be located some distance away from the MAM servers, an additional pair of Fibre Channel switches would be located close to the archive, and ISL links deployed utilising single mode fibre between them. A dual site archive model has been discussed during this RFP response stage; should this become a requirement, the FC and HSM architecture will change. An overview of the proposed Fibre Channel network is shown in the diagram below.

Microsoft AD Integration

The TSL proposal includes a number of MAM options. Until a final decision is reached on which MAM would be installed, it is difficult to propose exactly how the system would integrate into a Windows Active Directory domain. The following information is based upon TSL's real world experience of integrating broadcast MAM systems into Windows AD domains. While the specific detail of SPTN's integration may differ from that described below, the principles of integration will remain true regardless of MAM selection. Some broadcast systems will be integrated with SPTN's existing corporate Microsoft Windows Active Directory domain.
Whilst the general broadcast equipment will remain outside of AD for most purposes, it is anticipated that some equipment, including the MAM and desktop client machines, will integrate with AD for the purposes of user authentication and root DNS resolution. This will allow user authentication and authorisation to be performed against users' existing AD accounts. It is anticipated that the selected MAM will utilise AD user and group attributes to perform authentication and grant user privileges for all MAM access. It is proposed to deploy a pair of Windows domain controller servers on the broadcast side of the network. These would be joined to the existing SPTN AD domain, in their own site, i.e. broadcast or mam. The primary role of these servers would be to provide LDAP or
RADIUS authentication of users, and DNS service. The option also exists to authenticate RADIUS clients via IAS, i.e. equipment logins etc. All production servers and associated devices would remain outside of AD. To ease site DNS management, it is suggested that the broadcast network be a DNS-delegated subzone of the site's namespace. This will permit all broadcast DNS to be maintained on these servers, outside of the wider DNS zones, while still allowing consistent naming schemes to be used throughout. The exact configuration of these servers can only be determined as part of the detailed design, once further details of the SPTN AD and naming scheme are understood. It is proposed to configure one way replication from the corporate to the broadcast domain controllers, as all domain and user administration would continue to be performed from the existing corporate servers. The diagram below shows the proposed Active Directory solution overview.

Anti-Virus and Update Management
It is recommended that an Enterprise AV solution be considered to run within the broadcast system. This would ideally run on all clients and servers where possible. Some broadcast vendors do not support any AV system, and where this is the case those servers will have to exist outside of the AV domain. Previous experience has shown McAfee Enterprise AV to be a reliable solution, though it is acknowledged that any choice of vendor is likely to be influenced by SPTN's Security Policy and existing AV solutions. It is proposed that the broadcast AV component could be deployed for the broadcast environment only, or operate in conjunction with any existing corporate AV infrastructure, and that the AV server will also be configured for distributing OS patches. Under normal operation, it is proposed to install AV and OS patches manually, in groups. This can be performed on alternate Ops shifts or days. This procedure limits any impact that may be caused by service packs or AV updates. Clearly, for day zero type threats, automatic or bulk patching can be invoked. It is proposed that the operation and management of such updates is part of the Security Policy proposed in a previous section.

Third Party Maintenance Access

Whilst not defined in the RFP, it is felt essential that some form of VPN remote access for suppliers and other third-party engineering use be provisioned. If not available via the SPTN Internet firewalls, TSL proposes to deploy a VPN concentrator, such as an ASA5505, to allow for remote access. The overview diagram below shows the proposed VPN solution. Remote clients would be able to connect from the Internet using the Cisco VPN client or SSL VPNs. Multiple VPN groups would be defined on the ASA, each group using a different client IP address range. Client groups would be assigned to different vendors or classes of clients to restrict overall network access once connected, e.g. a SpectraLogic group would only allow
connection to the SpectraLogic tape archive, a MAM group would only allow access to the MAM servers, etc.

System Access During Factory Build

TSL has facilities for system remote access and testing at the Maidenhead factory. A dedicated 10Mbit/sec Internet link is provisioned at Maidenhead, with a universal VPN concentrator. During system build and F.A.T., remote access via VPN will be available to SPTN and any of the suppliers as required. Point to point VPNs, or VPN client access via Microsoft (PPTP), Cisco IPSec or SSL connection methods, are available. This facility can be used for activities such as remote user testing and evaluation, or temporary SPTN OSS testing. Third party supplier access, such as for Ardome, can also be provisioned for any engineering requirements. Whilst such remote access is proposed to provide the most flexible and useful remote system connectivity, a strict remote access policy will be defined and agreed by TSL and SPTN. All access will only be granted in accordance with the policy, and all connections will be monitored.

System Management and Monitoring

It is proposed to deploy a Nagios network management platform to monitor the system. Nagios is a Linux based OSS platform, with the flexibility and reporting to comprehensively manage the proposed infrastructure. Nagios is an SNMP manager, and also utilises client agents. The agents will be installed on all suitable servers, and report utilisation of resources such as disk, memory, CPU, network and available services. One of Nagios's strengths is its powerful alarm correlation. This allows rules and conditions to be defined such that a single failure does not generate multiple alarms. In order to maximise operational utility, alarm configuration and reporting will be determined in conjunction with SPTN's Operational Supervisors.
Previous TSL installations have used Nagios agents to report on the following:
- Daemon availability
- Service availability
- Transfer subsystem metrics (# active, # queued, # recently_failed)
- Transcoding subsystem metrics (# active, # queued, # recently_failed)
- Database usage

SPTN's actual report and alarm availability will be dependent upon the chosen solution. A minimum of three filtered user views will be configured to allow broadcast and operations staff visibility of a dashboard showing the status of Nexus components. Agreed SNMP enabled devices will be configured to forward traps to this and any other existing management systems, including any existing Enterprise NMS. The proposed design also provides for scalable future growth by allowing future distributed servers to manage the monitoring for a given number of devices. Should the system expand in the future, additional distributed servers can be added to take on additional load. Important links, such as the WAN links to the playout centre, will be monitored on this platform. Statistics such as circuit load can be graphed and reported on. Such configuration and reporting detail would be decided during detailed design between TSL and SPTN.
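By way of illustration, the agent-based checks and alarm correlation described above might be expressed in Nagios object configuration as below. The host, command and service names are assumptions only, with the transfer-queue metric assumed to be gathered via an NRPE agent on the monitored server.

```
# Illustrative sketch only -- host, service and command names are assumptions
define host {
    use                 linux-server
    host_name           mam-hires-01
    address             10.1.110.21
}

define service {
    use                 generic-service
    host_name           mam-hires-01
    service_description Transfer queue depth
    check_command       check_nrpe!check_transfer_queue
    notification_period 24x7
}

# Alarm correlation: suppress notifications for hosts behind a failed switch
define hostdependency {
    host_name                     agg-switch-a
    dependent_host_name           mam-hires-01
    notification_failure_criteria d,u
}
```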
Network Virtualization Network Admission Control Deployment Guide This document provides guidance for enterprises that want to deploy the Cisco Network Admission Control (NAC) Appliance for their campus
GTL-2691 Version: 1 Modules are to be ordered separately. 20 GE + 4 GE Combo SFP + 2 10G Slots L3 Managed Stackable Switch The LevelOne GEL-2691 is a Layer 3 Managed switch with 24 x 1000Base-T ports associated
CCNP SWITCH: Implementing High Availability and Redundancy in a Campus Network Olga Torstensson SWITCHv6 1 Components of High Availability Redundancy Technology (including hardware and software features)
TechBrief Introduction Leveraging Redundancy to Build Fault-Tolerant Networks The high demands of e-commerce and Internet applications have required networks to exhibit the same reliability as the public
Data Sheet Cisco Nexus 1000V Switch for Microsoft Hyper-V Product Overview Cisco Nexus 1000V Switches provide a comprehensive and extensible architectural platform for virtual machine and cloud networking.
VXLAN: Scaling Data Center Capacity White Paper Virtual Extensible LAN (VXLAN) Overview This document provides an overview of how VXLAN works. It also provides criteria to help determine when and where
Network Virtualization Petr Grygárek 1 Network Virtualization Implementation of separate logical network environments (Virtual Networks, VNs) for multiple groups on shared physical infrastructure Total
Virtualization. Consolidation. Simplification. Choice. WHITE PAPER Virtualized Security: The Next Generation of Consolidation Virtualized Security: The Next Generation of Consolidation As we approach the
Synopsis Industry adoption of EtherNet/IP TM for control and information resulted in the wide deployment of standard Ethernet in manufacturing. This deployment acts as the technology enabler for the convergence
You can read the recommendations in the user, the technical or the installation for SONICWALL SWITCHING NSA 2400MX IN SONICOS ENHANCED 5.7. You'll find the answers to all your questions on the SONICWALL
SSL-VPN Combined With Network Security Introducing A popular feature of the SonicWALL Aventail SSL VPN appliances is called End Point Control (EPC). This allows the administrator to define specific criteria
529 Hahn Ave. Suite 101 Glendale CA 91203-1052 Tel 818.550.0770 Fax 818.550.8293 www.brandcollege.edu Cisco Certified Network Expert (CCNE) Program Summary This instructor- led program with a combination
Interconnecting Cisco Networking Devices Part 2 Course Number: ICND2 Length: 5 Day(s) Certification Exam This course will help you prepare for the following exam: 640 816: ICND2 Course Overview This course
Securely Architecting the Internal Cloud Rob Randell, CISSP Senior Security and Compliance Specialist VMware, Inc. Securely Building the Internal Cloud Virtualization is the Key How Virtualization Affects
Scaling 10Gb/s Clustering at Wire-Speed InfiniBand offers cost-effective wire-speed scaling with deterministic performance Mellanox Technologies Inc. 2900 Stender Way, Santa Clara, CA 95054 Tel: 408-970-3400
Cisco Application Networking Manager Version 2.0 Cisco Application Networking Manager (ANM) software enables centralized configuration, operations, and monitoring of Cisco data center networking equipment
Leased Line + Remote Dial-in connectivity Client: One of the TELCO offices in a Southern state. The customer wanted to establish WAN Connectivity between central location and 10 remote locations. The customer
Across the globe, many companies choose a Cisco switching architecture to service their physical and virtual networks for enterprise and data center operations. When implementing a large-scale Cisco network,
IP Telephony Management How Cisco IT Manages Global IP Telephony A Cisco on Cisco Case Study: Inside Cisco IT 1 Overview Challenge Design, implement, and maintain a highly available, reliable, and resilient
Reference Architecture Brocade Solution Blueprint Brocade Solution for EMC VSPEX Server Virtualization Microsoft Hyper-V for 50 & 100 Virtual Machines Enabled by Microsoft Hyper-V, Brocade ICX series switch,
Juniper / Cisco Interoperability Tests August 2014 Executive Summary Juniper Networks commissioned Network Test to assess interoperability, with an emphasis on data center connectivity, between Juniper
Implementing Cisco Data Center Unified Fabric Course DCUFI v5.0; 5 Days, Instructor-led Course Description The Implementing Cisco Data Center Unified Fabric (DCUFI) v5.0 is a five-day instructor-led training
Secure Networks for Process Control Leveraging a Simple Yet Effective Policy Framework to Secure the Modern Process Control Network An Enterasys Networks White Paper There is nothing more important than
IP SAN BEST PRACTICES PowerVault MD3000i Storage Array www.dell.com/md3000i TABLE OF CONTENTS Table of Contents INTRODUCTION... 3 OVERVIEW ISCSI... 3 IP SAN DESIGN... 4 BEST PRACTICE - IMPLEMENTATION...
MOC 6435A Designing a Windows Server 2008 Network Infrastructure Course Number: 6435A Course Length: 5 Days Certification Exam This course will help you prepare for the following Microsoft exam: Exam 70647:
Matěj Grégr RESILIENT NETWORK DESIGN 1/36 2011 Brno University of Technology, Faculty of Information Technology, Matěj Grégr, email@example.com Campus Best Practices - Resilient network design Campus
Isilon IQ Network Configuration Guide An Isilon Systems Best Practice Paper August 2008 ISILON SYSTEMS Table of Contents Cluster Networking Introduction...3 Assumptions...3 Cluster Networking Features...3
Universal Network Access Policy Purpose Poynton Workmens Club makes extensive use of network ed Information Technology resources to support its research and administration functions and provides a variety
PRAMAK 1 Optimizing Data Center Networks for Cloud Computing Data Center networks have evolved over time as the nature of computing changed. They evolved to handle the computing models based on main-frames,
TP-LINK TM 24-Port Gigabit L2 Managed Switch with 4 SFP Slots Overview Designed for workgroups and departments, from TP-LINK provides full set of layer 2 management features. It delivers maximum throughput
Technology Concepts and Business Considerations Abstract A virtual information infrastructure allows organizations to make the most of their data center environment by sharing computing, network, and storage
Performance Evaluation of Linux Bridge James T. Yu School of Computer Science, Telecommunications, and Information System (CTI) DePaul University ABSTRACT This paper studies a unique network feature, Ethernet
ITE I Chapter 6 2006 Cisco Systems, Inc. All rights reserved. Cisco Public 1 Objectives Implement Spanning Tree Protocols LAN Switching and Wireless Chapter 5 Explain the role of redundancy in a converged
Switching in an Enterprise Network Introducing Routing and Switching in the Enterprise Chapter 3 Version 4.0 2006 Cisco Systems, Inc. All rights reserved. Cisco Public 1 Objectives Compare the types of
Q & A Cisco Small Business ISA500 Series Integrated Security Appliances Q. What is the Cisco Small Business ISA500 Series Integrated Security Appliance? A. The Cisco Small Business ISA500 Series Integrated
NEW TP-LINK 24-Port Gigabit L2 Managed Switch with 4 SFP Slots TM NEW Overview Designed for workgroups and departments, from TP-LINK provides full set of layer 2 management features. It delivers maximum
Question Number (ID) : 1 (jaamsp_mngnwi-025) Lisa would like to configure five of her 15 Web servers, which are running Microsoft Windows Server 2003, Web Edition, to always receive specific IP addresses
BDCOM S3900 Switch High Performance 10Gigabit Ethernet Switch BDCOM S3900 is a standard L3 congestion-less switch series, which are capable of multi-layer switching and wire-speed route forwarding. Its
200-101: Interconnecting Cisco Networking Devices Part 2 v2.0 (ICND2) Course Overview This course provides students with the knowledge and skills to successfully install, operate, and troubleshoot a small
Management Software AT-S106 Web Browser User s Guide For the AT-GS950/48 Gigabit Ethernet Smart Switch Version 1.0.0 613-001339 Rev. A Copyright 2010 Allied Telesis, Inc. All rights reserved. No part of
White Paper Using and Offering Wholesale Ethernet Using & Offering Wholesale Ethernet Network and Operational Considerations Introduction Business services customers are continuing to migrate to Carrier
100-101: Interconnecting Cisco Networking Devices Part 1 v2.0 (ICND1) Course Overview This course provides students with the knowledge and skills to implement and support a small switched and routed network.
N5 NETWORKING BEST PRACTICES Table of Contents Nexgen N5 Networking... 2 Overview of Storage Networking Best Practices... 2 Recommended Switch features for an iscsi Network... 2 Setting up the iscsi Network
Ahmad Zamer, Brocade SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA unless otherwise noted. Member companies and individual members may use this material in presentations
Request for Proposal MDM0031012338 Offeror s Questions for RFP for Virtual Private Network Solution (VPN) 1. How much throughput must the VPN support long-term? Answer: 10 GB firewall, 4 GB 3DES/AES VPN
Unit title: Local Area Networking technologies Unit number: 26 Level: 5 Credit value: 15 Guided learning hours: 60 Unit reference number: L/601/1547 UNIT AIM AND PURPOSE Learners will gain an understanding
Disaster Recovery Design Ehab Ashary University of Colorado at Colorado Springs As a head of the campus network department in the Deanship of Information Technology at King Abdulaziz University for more
Securing end devices Securing the network edge is already covered. Infrastructure devices in the LAN Workstations Servers IP phones Access points Storage area networking (SAN) devices. Endpoint Security
White Paper Redefine Network Visibility in the Data Center with the Cisco NetFlow Generation Appliance What You Will Learn Modern data centers power businesses through a new generation of applications,
TP-LINK JetStream 28-Port Gigabit Stackable L3 Managed Switch Overview TP-LINK s is an L3 managed switch designed to build a highly accessible, scalable, and robust network. The switch is equipped with
This chapter describes the different networking topologies supported for this product, including the advantages and disadvantages of each. Select the one that best meets your needs and your network deployment.
Whitepaper 10 Things to Know Before Deploying 10 Gigabit Ethernet Table of Contents Introduction... 3 10 Gigabit Ethernet and The Server Edge: Better Efficiency... 3 SAN versus Fibre Channel: Simpler and
CCNA R&S: Introduction to Networks Chapter 5: Ethernet 18.104.22.168 Introduction The OSI physical layer provides the means to transport the bits that make up a data link layer frame across the network media.