Data Centre Functional Areas




1. Scope
This technical paper provides guidance on Data Centre planning and design considerations, with reference to the latest technology and to widely adopted Data Centre deployment strategies. It also describes the different functional elements and design architectures that are shaping current Data Centre trends.

2. Introduction
Chief Technology Officers (CTOs), Data Centre Managers and Infrastructure Managers today face unprecedented challenges: keeping up with current business requirements, reducing environmental impact and contributing to the bottom line with fewer resources. Such a scenario requires a multilevel strategic approach, supported by high-quality skills and knowledge of the various Data Centre components and supporting systems.

Enterprises are experiencing enormous growth in the volume of data being moved and stored across the network. For instance, the deployment of high-density blade servers and storage devices to handle these workloads has resulted in exponential growth in power consumption and heat generation.

For these reasons, each component of a Data Centre and its supporting systems must be implemented to work flawlessly with the others, providing the most reliable access to resources. Most importantly, the physical infrastructure has to provide a seamless migration path able to support rapid changes in business requirements and embrace future applications. Neglecting important aspects of the design can give the illusion of cost reduction, with the long-term result of higher capital expenditure, early obsolescence and painful network disruption. This paper discusses the main concepts of the Data Centre areas and provides a foundation for creating a suitable environment, one able to maximise existing resources and prepare for the implementation of future technologies.

The two most significant standards for Data Centres, ISO/IEC 24764 and TIA/EIA-942, are both characterised by a similar hierarchical nomenclature for the cabling subsystem infrastructure. Fig. 2 shows the correlation between the two standards' hierarchical layouts. Both provide a good foundation for understanding how the different areas of a Data Centre are linked together and interoperate with one another. To avoid confusion, this technical paper refers only to ISO/IEC 24764, since this standard has an international scope.

Fig. 1: Hierarchical structure of generic cabling within a Data Centre according to ISO/IEC 24764. Distributors are in accordance with ISO/IEC 11801: ENI = External Network Interface, MD = Main Distributor, ZD = Zone Distributor, LDP = Local Distribution Point, EO = Equipment Outlet. The network access cabling system connects the ENIs to the MD, the main distribution cabling system connects the MD to the ZDs, and the zone distribution cabling system connects the ZDs through LDPs to the EOs. Note: network access cabling is also used to connect an ENI to a ZD.

Fig. 2: Correlation with the TIA-942 Data Center Standard topology: the entrance room (carrier equipment and demarcation) and access providers connect over backbone cabling to the main distribution (routers, backbone LAN/SAN switches, PBX, M13 muxes); backbone cabling runs to the horizontal distributions (LAN/SAN/KVM switches) and, via horizontal cabling, to zone distributions and equipment distributions (racks/cabinets) in the computer room, as well as to the telecom room serving offices, operations centre and support rooms.

3. The Core Elements in a Data Centre
The main purpose of a data centre is to run the applications that handle the core business and operational data of the organization. Internet and video-streaming applications such as YouTube and Facebook, together with video-gaming and e-business applications, have expanded rapidly, reaching levels of adoption beyond any initial expectation. As a result, the amount of data being processed has grown rapidly, fuelled also by a global economy that requires data access at all times, from everywhere. Data Centres are the facilities that house the equipment needed to secure, store and exchange this data. These drivers have prompted data centres to follow an evolution path that moves from back-room operations to the leading edge of today's strategic business models.

Storage (DAS/NAS/SAN), Main Distribution (MD) and Zone Distribution (ZD) can be seen as the supporting pillars of any Data Centre architecture. Different network interfaces and applications can be used to create connections between the various elements. For instance, Ethernet is adopted in the networking area contained within the Main Distribution, while Fibre Channel (FC) and Fibre Channel over Ethernet (FCoE) support communication in the storage area. It is worth underlining that these applications are not constrained to any physical medium, so fibre and/or copper can be deployed across the different areas. For more information please refer to the "DC Applications Reference Guide - Networking & Storage" technical paper.

3.1 Storage Architectures
The digital era has provided individuals and organizations alike with many different ways to exchange information. This has created more opportunities to communicate on a global level, but it has also generated a new set of challenges for Data Centres. Storage needs are constantly increasing: IDC estimates that storage consumption by enterprise organizations will grow at a compound annual growth rate of 91.8% through 2012. (1)

3.2 Direct-Attached Storage (DAS)
DAS is the most basic level of storage and consists of a typical storage device, such as a hard disk drive, attached directly to a server or workstation. The main system interfaces used in DAS are the Small Computer Systems Interface (SCSI), Serial-Attached SCSI (SAS) and Fibre Channel (FC). Although simple to design, DAS systems have some substantial downsides:

- Redundant paths are rarely deployed, which may increase the risk of downtime
- Each DAS device requires reserve capacity above the needed storage space; this ensures adequate room for data, but it also leads to low utilisation of storage capacity

For organizations that anticipate rapid data growth, it is important to keep in mind that DAS is limited in its scalability.
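The 91.8% CAGR cited above compounds quickly, which is why scalability matters; a minimal sketch of the arithmetic (the starting capacity and time horizon are hypothetical, chosen only for illustration):

```python
# Hypothetical illustration of compound annual growth in storage demand.
# A 91.8% CAGR means capacity demand nearly doubles every year.

def project_capacity(start_tb: float, cagr: float, years: int) -> float:
    """Capacity after `years` of compound growth at annual rate `cagr`."""
    return start_tb * (1 + cagr) ** years

# Assumed figures: 100 TB today, growing at the 91.8% CAGR cited above.
start = 100.0
for year in range(1, 5):
    print(f"year {year}: {project_capacity(start, 0.918, year):,.0f} TB")
```

After only two years the assumed 100 TB has grown past 360 TB, which is why a DAS estate sized for today's data is quickly outgrown.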
Storage has become probably the most critical, and yet the most vulnerable, functional element of an enterprise's data centre. For this reason organizations should take a close look at how to build and support the infrastructure that gives access to one of their most important digital assets: data.

Traditional Direct-Attached Storage (DAS) deployments were preferred in the past for their low cost of ownership. However, as applications have become more complex and the need for flexibility more relevant, there has been a migration towards more centralised approaches. For this reason Network-Attached Storage (NAS) and Storage Area Networks (SAN) have become predominant, as they also help reduce the amount of associated hardware and cabling infrastructure. These innovations are driving faster data transfer speeds such as 10GbE, 40GbE and 100GbE (2), and 2GFC, 4GFC, 8GFC and, in future, 16GFC. (3)

Let's now take a quick look at the different storage systems to better understand the advantages and disadvantages offered by each.

Fig. 3: Direct-attached storage; a disk drive interface card connects the storage device directly to a station or server.

3.3 Storage Area Network (SAN)
A SAN connects storage devices, such as disk arrays and tape libraries, allowing all clients and applications running over the network to access the storage area. SAN topologies help to increase storage capacity and simplify storage administration, since multiple servers can share space on the storage disks. SANs have become widely adopted mainly for their ability to scale as business requirements evolve. They use a protocol known as Fibre Channel (FC), although lately there has been increased adoption of Fibre Channel over Ethernet (FCoE) using switched-fabric topologies. These protocols are preferred for their ability to provide high-speed throughput with low-latency I/O. Fundamentally, a SAN adds flexibility to the network, making planning and implementation easier to achieve.

Fig. 4: SAN topology; servers connect through a LAN switch on the client side and through a SAN switch to the disk drives and RAID storage.

3.4 Network-Attached Storage (NAS)
In essence, a NAS topology consists of a regular server with minimal operating system capabilities, whose only purpose is to supply file-based data storage services, such as Network File System (NFS), Server Message Block (SMB) or Common Internet File System (CIFS), to other devices on the network. The benefit of NAS over SAN or DAS is that clients across the network can access the same data simultaneously, whereas SAN or DAS allow only a single client at a time. This makes NAS systems ideal for applications whose data is shared between multiple clients, such as Web content and e-mail storage, and an ideal choice for organizations looking for a simple and cost-effective way to achieve fast data access for multiple clients. On the other hand, the downside of NAS is that not all applications support it, because most clustering solutions are designed to run on a SAN and require a block-level storage device as opposed to file-based storage.

Fig. 5: NAS topology; departmental switches connect NAS appliances holding shared departmental data, while a main switch connects an appliance holding shared organisational data, alongside DAS drives containing the NOS and software applications.

3.5 Storage systems comparison at a glance
Choosing the right storage solution for your business can be a tricky task, often driven by personal preconceptions. In reality there is no single right answer for everyone. Instead, it is important to focus on the specific needs and long-term business goals of your organization.

(1) IDC storage report 2008
(2) GbE = Gigabit Ethernet
(3) GFC = Gigabit Fibre Channel
LAN = Local area network; RAID = Redundant array of independent disks; SAN = Storage area network
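Underlying the comparison between these systems is the block-level (DAS/SAN) versus file-level (NAS) distinction; the sketch below is only a stand-in illustration, using a local file in place of a real LUN or network share (real NAS/SAN access would go through NFS/SMB or FC/iSCSI):

```python
# Illustrative contrast between file-level (NAS-style) and block-level
# (SAN/DAS-style) access. An ordinary local file stands in for the
# "disk"; the paths and block size here are arbitrary assumptions.
import os
import tempfile

BLOCK_SIZE = 512

# --- file-level access: the storage system understands named files ---
share = tempfile.mkdtemp()                      # pretend NFS/SMB share
with open(os.path.join(share, "report.txt"), "w") as f:
    f.write("quarterly numbers")
with open(os.path.join(share, "report.txt")) as f:
    print(f.read())                             # prints "quarterly numbers"

# --- block-level access: the client sees only numbered, fixed-size blocks ---
disk_path = os.path.join(share, "disk.img")     # pretend LUN
with open(disk_path, "wb") as disk:
    disk.write(b"\0" * BLOCK_SIZE * 8)          # an 8-block "LUN"

def write_block(path: str, lba: int, data: bytes) -> None:
    """Write one 512-byte block at logical block address `lba`."""
    with open(path, "r+b") as disk:
        disk.seek(lba * BLOCK_SIZE)
        disk.write(data.ljust(BLOCK_SIZE, b"\0"))

def read_block(path: str, lba: int) -> bytes:
    """Read one 512-byte block at logical block address `lba`."""
    with open(path, "rb") as disk:
        disk.seek(lba * BLOCK_SIZE)
        return disk.read(BLOCK_SIZE)

write_block(disk_path, 3, b"raw bytes, no filesystem in sight")
print(read_block(disk_path, 3).rstrip(b"\0"))
```

A clustered database expects the second kind of access, which is why such workloads favour SAN; shared documents and mailboxes suit the first, which is NAS territory.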
Several key criteria, summarised in Table 1, should be considered; the best chance of success comes from choosing a solution that provides long-term investment protection for your organization. As noted earlier, digital assets will only continue to grow, so it is of paramount importance that your storage infrastructure supports cost-effective expansion and easy scalability. For this reason it is also important to consider structured cabling solutions that allow easy migration and expansion of your storage investment without a rip-and-replace approach. The main goal of a sound structured cabling solution is to support current applications while providing a seamless transition path to future technologies.

Table 1: storage systems comparison at a glance. The criteria compared across DAS, SAN and NAS include multiple client access, data sharing, application agnosticism, optimisation for performance and scalability, and FCoE support.
DAS = Direct-attached storage; NAS = Network-attached storage; NOS = Network operating system

3.6 Storage and IP convergence
Many data centres today still operate multiple parallel networks to support their various applications, and running multiple networks has a tremendous impact on capital expenditure. Lately, therefore, there have been several attempts at merging I/O into one consolidated physical infrastructure using Ethernet as the main application. This makes even more sense considering the emergence of 40 Gb/s and 100 Gb/s data rates, which make it clear that Ethernet is an effective, high-performance I/O consolidation platform now and well into the future.
This has been the main driver for the adoption of FCoE. The main application of FCoE is in data centre storage area networks (SANs), where it offers the advantage of reducing the complexity of design and implementation. With FCoE, network (IP) and storage (SAN) data traffic can be consolidated onto a single network. This consolidation can:

- reduce the number of network interface cards used to connect storage and IP networks
- reduce the number of switches and attachment cords
- reduce power and cooling costs

Fig. 6: FCoE adoption forecast, 2008-2013; revenue in $ billions (line, $0.0 to $2.1) plotted against port shipments in millions (bars, 0 to 6). The figure illustrates the rapid adoption of FCoE through 2013.

Table 2 shows a comparison between two hypothetical systems: one that utilises I/O consolidation and a traditional system that runs separate applications on different networks. It is assumed that, in the environment with no I/O consolidation, each server has two adapters, one for Ethernet and one for FC, and that each adapter has two cables.

Table 2: 32 servers without I/O consolidation
            FC     Ethernet   Total
Switches    3      3          6
Adapters    32     32         64
Cables      64     64         128

32 servers utilising I/O consolidation
            FC     Ethernet   Total
Switches    none   3          3
Adapters    none   32         32
Cables      none   64         64

The advantages are evident from this simple comparison: an I/O-consolidated system can bring significant savings in terms of equipment, rack space and connections. However, more data traffic now runs through the same cabling infrastructure, with the potential risk of creating bottlenecks and, in consequence, data loss. It is therefore critical to take extra care in the selection of high-performance cabling solutions. These have to be capable of withstanding high volumes of data transmission in any given environment, whilst delivering the highest signal performance.

4. Main Distribution and Zone Distribution
In line with the ISO/IEC 24764 standard, the MD houses the main cross-connect and the core routers and switches. The ZD can be seen as the main transition point between backbone and horizontal cabling, and houses the LAN and SAN switches that connect to servers and storage devices.

Fig. 7: Distributors in accordance with ISO/IEC 11801; the network access cabling subsystem connects the ENIs to the MD, the main distribution cabling subsystem connects the MD to the ZDs, and the zone distribution cabling subsystem connects the ZDs through LDPs to the EOs.

As data centres continue to face the need to expand at a rapid pace, the fundamental concerns related to Main Distribution and Zone Distribution remain constant. A properly planned Data Centre infrastructure must deliver three key strategic concepts:

- Agility, by providing optimum flexibility in design and implementation
- Availability of the network under the most stringent conditions
- Efficiency, through the highest and most reliable network performance
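The Table 2 counts can be reproduced with a short sketch; this is a toy model of the stated assumptions (one adapter per server per network, two cables per adapter, three switches per remaining fabric), not a sizing tool:

```python
# Toy model of the Table 2 comparison: per-fabric hardware counts for
# 32 servers, with and without FCoE-based I/O consolidation.
# Assumptions (from the text): one adapter per server per network,
# two cables per adapter, and three switches per network.

def hardware_counts(servers: int, networks: int, switches_per_network: int = 3):
    """Return (switches, adapters, cables) for `networks` parallel fabrics."""
    switches = switches_per_network * networks
    adapters = servers * networks          # one NIC/HBA per server per fabric
    cables = adapters * 2                  # two cords per adapter
    return switches, adapters, cables

separate = hardware_counts(32, networks=2)     # FC + Ethernet side by side
converged = hardware_counts(32, networks=1)    # single consolidated fabric

print("separate fabrics :", separate)    # (6, 64, 128)
print("consolidated     :", converged)   # (3, 32, 64)
```

Halving the adapter and cable counts is exactly the saving Table 2 claims; the same model also shows why the surviving fabric must carry twice the traffic.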

It is advisable that these concepts be taken into consideration during the planning, design and implementation of the physical infrastructure. This will help preserve and protect the initial investment, and it will safeguard your Data Centre's ability to respond to the many changes your business will face over the next ten years.

As the latest trends of virtualisation and cloud computing gain traction across the industry, it is important to create a Data Centre environment that can scale its capacity to meet tougher business requirements. One example is the emerging technology migration towards 40GbE and 100GbE; these new technologies are needed to support the constant growth of data-rate transmission across the various functional elements of modern Data Centres. Fig. 8 illustrates the penetration of 10GbE, 40GbE and 100GbE and the gradual phase-out of legacy technologies. It is interesting to observe that 10GbE will still play a major role throughout the next five years, alongside the rapid growth of 40GbE approaching the end of the decade. This underlines the importance of building, today, a network infrastructure that can support the foreseeable changes in technology. Staying on top of these changes is a vital part of a company's ability to retain its business success.

Fig. 8: Server penetration of Ethernet ports by speed (1G, 10G, 40G and 100G), as a percentage of servers, 2007-2020.

Therefore, deploying 10GbE today is more strategic than you may think. A 10GbE network does not simply have to support that application; it also has the greater task of providing a reliable migration pathway to 40GbE and 100GbE. This is based on the requirements of the latest IEEE 802.3ba standard, which sets very stringent performance values, as shown in Table 3.

Table 3: IEEE 802.3ba channel requirements over multimode fibre
Parameter                                        OM3          OM4          Unit
Effective modal bandwidth at 850 nm              2000 (a)     4700 (b)     MHz*km
Power budget (for maximum TDP)                   8.2          8.2          dB
Operating distance                               0.5 to 100   0.5 to 150   m
Channel insertion loss (c)                       1.9          1.5          dB
Allocation for penalties (for maximum TDP) (d)   6.3          6.4          dB
Unallocated margin (e)                           0            0.3          dB
Additional insertion loss allowed                0            0            dB

(a) Per IEC 60793-2-10
(b) Per TIA-492AAAD
(c) The channel insertion loss is calculated using the maximum distances specified in Table 86-2 and a cabled optical fibre attenuation of 3.5 dB/km at 850 nm, plus an allocation for connection and splice loss given in 86.102.2.1
(d) Link penalties are used for link budget calculations. They are not requirements and are not meant to be tested.
(e) This unallocated margin is not available for use.

The main goal of a successful migration strategy is to preserve most of the current investment, so that it is still possible to scale the network and accommodate the latest technologies. This has to be done with minimum disruption while delivering the performance required by the standards. So, let's consider as an example a pre-terminated 10GbE fibre solution, and demonstrate a seamless migration path to 40/100GbE based on the concepts of agility, availability and efficiency. A typical 10GbE installation is shown in Fig. 9.

Fig. 9: Typical 10GbE pre-terminated channel, with an LC-to-MTP* cassette at each end of the trunk.

One simple way to achieve the migration is to swap only a few components, preserving the rest of the physical infrastructure, as shown in Fig. 10.

* MTP is a trademark of US Conec
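Footnote (c) of Table 3 makes the channel figures easy to sanity-check; the sketch below redoes that arithmetic. Only the 3.5 dB/km attenuation figure comes from the footnote, and the budget decomposition follows the table rows; the split is indicative, not a substitute for the standard's own calculation:

```python
# Sanity-check of the Table 3 channel figures for 40/100GbE over OM3/OM4.
# Fibre attenuation of 3.5 dB/km at 850 nm is from Table 3, footnote (c).

FIBRE_ATTEN_DB_PER_KM = 3.5

def fibre_loss(distance_m: float) -> float:
    """Attenuation contributed by the fibre itself, in dB."""
    return FIBRE_ATTEN_DB_PER_KM * distance_m / 1000.0

def connector_budget(channel_loss_db: float, distance_m: float) -> float:
    """dB left for connections and splices after the fibre takes its share."""
    return channel_loss_db - fibre_loss(distance_m)

# OM3: 1.9 dB channel loss over 100 m; OM4: 1.5 dB over 150 m.
print(f"OM3: {connector_budget(1.9, 100):.3f} dB for connections")  # 1.550
print(f"OM4: {connector_budget(1.5, 150):.3f} dB for connections")  # 0.975

# The power budget decomposes as: channel insertion loss + penalties + margin.
assert abs(1.9 + 6.3 + 0.0 - 8.2) < 1e-9   # OM3 row of Table 3
assert abs(1.5 + 6.4 + 0.3 - 8.2) < 1e-9   # OM4 row of Table 3
```

Note how little loss is left for connectors, especially on the longer OM4 channel; this is why the paper insists on low-loss components in the migration path.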

Fig. 10: Cabling migrated to 40 Gb/s; the LC-to-MTP* cassettes at each end are replaced with MTP*-to-MTP* cassettes, while the trunk cabling is preserved.

So it is vital that today's physical infrastructure is built using best-in-class low-loss components and fibre cable. These must be designed to deliver the best Bit Error Rate (BER) performance with the lowest Insertion Loss (IL) and Return Loss (RL) values. Fig. 11 shows a BER eye pattern used to validate a 10GbE channel's performance against the IEEE 802.3ae standard requirements.

Fig. 11: BER eye pattern validating a 10GbE channel according to IEEE 802.3ae.

5. Conclusions
We have explored the different functional elements that are the main supporting pillars of modern Data Centres. What comes to light is that successful organizations need to focus on their business requirements and build an IT strategy that looks well into the future. When it comes to the physical infrastructure, it is of vital importance to build a network foundation that can support short-term needs while keeping track of long-term objectives. Successful migration strategies are modelled around the concepts of agility, availability and efficiency, in order to maximise current investment, minimise risk and provide a robust migration path for the adoption of future technologies.

The future of your network depends heavily on the decisions you make today. This is the time to start thinking more strategically about the physical infrastructure; this way it will be possible to minimise the effort needed to support the business objectives.

* MTP is a trademark of US Conec

TE Connectivity Enterprise Networks
Contact us: please contact TE Connectivity at one of its regional sales offices. ADC KRONE products: www.te.com/adckrone. AMP NETCONNECT products: www.ampnetconnect.com. TE Connectivity: www.te.com

TE Connectivity, TE connectivity (logo), Tyco Electronics, and TE (logo) are trademarks of the TE Connectivity Ltd. family of companies and its licensors. While TE Connectivity has made every reasonable effort to ensure the accuracy of the information in this document, TE Connectivity does not guarantee that it is error-free, nor does it make any other representation, warranty or guarantee that the information is accurate, correct, reliable or current. Specifications are subject to change without notice. Tyco Electronics Corporation, a TE Connectivity Ltd. Company. All Rights Reserved. 201658BE 6/11 Original 2011