DATA CENTER

The current Campus Data Center is located within historic Founders Hall, which is in a less-than-ideal location for a variety of reasons but is typical of an existing campus IT infrastructure that has evolved over time and makes use of available space and systems. Structural concerns aside, the Data Center is located on the 2nd floor of the facility and is fairly exposed via large windows. Its location makes maintenance and access difficult, given that its primary entrance is within a working classroom.

A seismic event that causes the local building official to prevent re-entry or occupancy has the potential to bring University IT operations to a stop. A severe windstorm or rainstorm could cause water and other physical damage, given the proximity of the Data Center's windows and IT equipment to the exterior of the building.

The Data Center does have local, rack-mounted uninterruptible power supply (UPS) systems providing power conditioning and battery backup, allowing graceful and orderly shutdown of servers within a 30-minute period. However, the Data Center lacks a backup generator to support the UPS systems and HVAC during extended outages. During a localized power outage at Founders Hall, the IT systems supported by the Data Center are unavailable locally, in other campus buildings, and at remote campus locations. This outage would include Internet access at the main campus.

Mechanical systems serving the Data Center appear to be partially redundant, allowing concurrent maintenance of mechanical cooling system components. Electrical systems serving the Data Center appear to be nonredundant and do not allow concurrent maintenance of electrical system components without a Data Center shutdown.

Uninterrupted access to data center services will become a necessity in the future. As distance learning and collaborative/hybrid teaching models evolve, reliance on the network and servers will become absolute, and a data outage will become more damaging to the University's main educational mission. The current high risk of a data center service outage, coupled with a growing dependency on those services, should cause the University to strongly consider construction of a new Data Center on or off campus.

Less typical of data centers that have evolved over a long period, the IT server, SAN, and network topologies and strategies are fairly robust, relying on virtualization and redundant communication links to outlying facilities and remote campuses. The current IT infrastructure and telecommunications cabling topology simplifies locating a new Data Center in a new campus building.
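The 30-minute battery window described above is the key sizing constraint on the existing UPS units. As a rough illustration, the Python sketch below estimates runtime from battery energy and IT load; the 5 kWh capacity, 6 kW load, and 90% inverter efficiency are hypothetical figures, not measurements from the Founders Hall installation.

    # Rough check of the 30-minute graceful-shutdown window.
    # Capacity, load, and efficiency figures are illustrative assumptions.

    def ups_runtime_minutes(battery_wh: float, load_w: float,
                            inverter_efficiency: float = 0.90) -> float:
        """Estimate UPS runtime in minutes for a given battery energy and load."""
        if load_w <= 0:
            raise ValueError("load must be positive")
        usable_wh = battery_wh * inverter_efficiency
        return usable_wh / load_w * 60.0

    SHUTDOWN_WINDOW_MIN = 30.0
    runtime = ups_runtime_minutes(battery_wh=5000, load_w=6000)
    print(f"Estimated runtime: {runtime:.1f} min")              # 45.0 min
    print("Meets shutdown window:", runtime >= SHUTDOWN_WINDOW_MIN)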

CURRENT

Current campus technology infrastructure consists largely of a central data center that provides a variety of services to on-campus and remote buildings through both local area networks and wide area networks. These networks carry both voice and data services, using a combination of fiber optic cabling (on campus) and leased telecommunications connectivity (remote campuses). The current data center server infrastructure is nearly fully virtualized, which provides scalability, flexibility, and adaptability; new services can be added quickly and easily.

The facility that houses the data center is one of the original campus buildings. As such, there are severe limitations with regard to data center growth, redundancy, maintainability, and survivability. The current on-campus telecommunications pathways are generally adequate for current needs, with the exception of the connections across Arrow Highway, which are of very limited capacity. The campus does make use of Wi-Fi but will need to plan for cabling and network upgrades to support higher-bandwidth Wi-Fi technologies.

Current classroom technology ranges from whiteboards and overhead projectors to short-throw projectors coupled to instructor-station computers. Existing classroom layouts have been modified to support this limited technology, often in ways that are less than optimal, and these installations do not effectively enhance collaboration.

SCALABILITY

The campus has a good starting point with its adoption of air-blown fiber optic cabling (ABF), which provides a scalable approach for adding both bandwidth and buildings. Single-mode fiber optic cable provides higher bandwidth capability over longer distances and should be the media of choice for intra-campus design.

FLEXIBILITY AND ADAPTABILITY

The existing air-blown fiber network should be expanded. ABF technology allows fiber strands to be removed and replaced without access to pull boxes or other intermediate points along a fiber run. Spare tubes will allow fiber to be upgraded without downtime. Intermediate tube distribution units (TDUs) enable direct point-to-point pathways to be configured between individual buildings.

RELIABILITY

Telecommunications Cabling: The system should be expanded to allow for loop or ring diversity as University operations become more dependent on the network at the main campus and the remote campus locations. Dual disparate pathways should be created from each building to the data center to increase service reliability, as the sketch below illustrates.
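The value of loop or ring diversity can be checked mechanically: with dual disparate pathways in place, every building should still reach the data center after any single link failure. The Python sketch below performs that check; the building names and link list are hypothetical, not the actual campus topology.

    # Single-failure survivability check for a campus fiber topology.
    # The ring below is a hypothetical example, not the actual campus.
    from collections import defaultdict

    def reachable(links, start, goal, failed=None):
        """Iterative graph search; 'failed' is one link treated as cut."""
        graph = defaultdict(set)
        for a, b in links:
            if failed and {a, b} == set(failed):
                continue
            graph[a].add(b)
            graph[b].add(a)
        seen, frontier = {start}, [start]
        while frontier:
            node = frontier.pop()
            if node == goal:
                return True
            for nxt in graph[node] - seen:
                seen.add(nxt)
                frontier.append(nxt)
        return False

    # Hypothetical ring: each building has two disparate paths to the DC.
    links = [("DC", "BldgA"), ("BldgA", "BldgB"),
             ("BldgB", "BldgC"), ("BldgC", "DC")]

    for cut in links:
        for bldg in ("BldgA", "BldgB", "BldgC"):
            assert reachable(links, bldg, "DC", failed=cut), \
                f"{bldg} isolated when {cut} fails"
    print("Every building survives any single link failure.")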

MASTER PLAN RECOMMENDATIONS

Campus Infrastructure

With voice over Internet Protocol (VoIP), the need for multi-pair copper cabling has been greatly reduced. Future campus infrastructure will be designed around mostly fiber optic cable, with only a small contingent of copper (50 pairs per building) for analog phone lines, referred to as POTS or Plain Old Telephone Service. Certain building services, such as elevator phones and fire alarm dialers, require these copper pairs.

To remain flexible, the fiber optic cable infrastructure should have the ability to change the number of strands and the type of fiber cable as needed. This will be achieved with air-blown fiber (ABF), which is already deployed in a limited capacity on the main campus. ABF is essentially a network of small tubes and tube distribution units (TDUs) that can be plugged and unplugged to create a dedicated air- and water-tight path between any two points on campus. Fiber strands are installed or extracted with compressed inert gas.

To achieve reliability, the physical campus infrastructure should be configured so that any one building has two physically redundant connections to the Campus Data Center. These physical connections will ideally take two disparate paths to the Data Center. Additional redundancy can be achieved with an additional Data Center.

Within new buildings, a structured cabling approach should be considered. A 4-pair Category 6 cable provides transmission bandwidth up to 1000 Mb/s and is suitable for most student or faculty workstation applications. A 4-pair Category 6A cable provides up to 10 Gb/s and is suitable for extremely high-capacity workstation applications and audio-video transmission. Wi-Fi throughout interior and exterior spaces will permit laptops, tablet devices, and smartphones to communicate at bandwidths up to 60 Mb/s. A simple media-selection sketch follows.
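The structured cabling guidance above reduces to a small decision rule. The sketch below encodes the figures quoted in this plan (Category 6 to 1 Gb/s, Category 6A to 10 Gb/s, copper limited to roughly 100 m runs, single-mode fiber beyond that); the function and its thresholds are our illustration, not a standards excerpt.

    # Illustrative media selection for one cabling run, using the
    # bandwidth figures quoted in the master plan. Thresholds are
    # simplified assumptions, not a TIA standards excerpt.

    def select_media(required_mbps: float, run_length_m: float) -> str:
        """Pick a cabling medium for a horizontal or backbone run."""
        if run_length_m > 100:              # beyond twisted-pair reach
            return "single-mode fiber (ABF tube)"
        if required_mbps <= 1_000:
            return "Category 6"
        if required_mbps <= 10_000:
            return "Category 6A"
        return "single-mode fiber (ABF tube)"

    print(select_media(1_000, 60))     # Category 6: typical workstation
    print(select_media(10_000, 80))    # Category 6A: AV / high-capacity drop
    print(select_media(10_000, 400))   # fiber: inter-building backbone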

MAIN CAMPUS

Uninterrupted access to data center services will become a necessity in the future. As distance learning and collaborative/hybrid teaching models evolve, reliance on the network and servers will become absolute, and a data outage will become more damaging to the University's main educational mission. A new Campus Data Center is recommended to support the increasing importance of connectivity and the data-centric world that already exists today.

The future data center will be located in a new campus building, purpose-built with the ability to expand or contract based on modular building blocks (space, air conditioning, and power). Data center cabinets will be standardized to a universal configuration. Accessible overhead pathways will allow easy patching between cabinets. Spare multi-strand and multi-cable assemblies will accommodate quick reconfiguration between cabinets to facilitate adding bandwidth or services in near real time.

MAINTAINABILITY

The new data center shall be concurrently maintainable where possible, allowing maintenance of mechanical or electrical systems without taking the data center offline. The new Data Center should be purpose-built to house critical telecommunications equipment. There will be a need for reliable environmental control to maintain strict operating temperature and humidity margins. Multiple computer room air conditioners (CRACs) will be used in an N+1 configuration, such that design capacity is still maintained while any one unit is turned off (or has failed); a sizing sketch appears at the end of this section. An intelligent HVAC control panel will ensure that all CRACs run periodically on a rotating basis.

Server cabinets should be arranged in an alternating hot-aisle/cold-aisle pattern, per current best practices, to maximize airflow efficiency. A hot-aisle containment system will be installed around each hot aisle to prevent supply/return air mixing, further increasing HVAC efficiency, and to comply with the 2013 California Energy Code (CEC). Additional CEC requirements include an air-side or water-side economizer to utilize outside air for Data Center air conditioning when environmental conditions permit.

The power system should be equally robust and redundant to support the telecommunications equipment as well as the HVAC equipment. The new data center should be supported by a fuel cell or diesel generator capable of providing power to IT equipment, HVAC systems, lighting, and support systems for the Data Center. An automatic transfer switch (ATS) will automatically start the generator and switch over from utility power during a power outage. The Data Center should be equipped with an N+1 UPS system to provide ride-through power during the transition from utility power to generator power. Modern servers and switching equipment are equipped with dual redundant power supplies with two or more cords and plugs, so Data Center power should be routed from redundant power sources within the Data Center. If dual (redundant) UPSs are not specified, then the redundant power should be sourced from a non-UPS circuit to mitigate the impact of a UPS failure.

The Data Center should not be located below drains or pipes containing liquid. A dry-pipe sprinkler system should be installed so that water is not expelled if a sprinkler head is accidentally knocked off. A dry-agent fire suppression system should be considered, as well as an emergency power off (EPO) system to allow IT equipment to be shut off prior to the release of any fire suppression agent or potential sprinkler activation.

The Data Center will not require a raised floor for power distribution or air circulation. An acoustical T-grid drop-tile ceiling with ducted supply will provide the most efficient method of air distribution. Power outlets should be located above the server cabinets, mounted on telecommunications-style ladder tray. In lieu of conventional power distribution, an overhead busway system such as Starline, which allows various receptacle types to snap in and out easily, should be considered.
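The N+1 CRAC requirement above is simple arithmetic: install enough units to carry the design heat load, plus one spare. The sketch below works one example; the 120 kW load and 40 kW unit capacity are hypothetical values, not figures from this plan.

    # N+1 CRAC sizing: design capacity must survive the loss (or
    # maintenance shutdown) of any one unit. Load and unit capacity
    # below are hypothetical examples.
    import math

    def crac_units_n_plus_1(design_load_kw: float, unit_capacity_kw: float) -> int:
        """Units needed to carry the design load, plus one redundant unit."""
        n = math.ceil(design_load_kw / unit_capacity_kw)
        return n + 1

    units = crac_units_n_plus_1(design_load_kw=120, unit_capacity_kw=40)
    print(f"Install {units} CRAC units")   # 3 carry the load, +1 spare = 4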

DATA CENTER RELIABILITY

The existing data center will remain the single active data center until the future redundant data center is built and dual pathways have been established to each building. The future data center will be designed with an N+1 reliability standard in mind. While not completely fault tolerant, elements of certain infrastructure systems should be made fault tolerant; examples include on-site generation (fuel cell or diesel generator) and two electrical systems feeding each data center cabinet. Dual disparate pathways should be created from the data center to both the University's service providers and the campus telecommunications infrastructure to increase service reliability; a simple availability calculation appears at the end of this section.

REMOTE CAMPUS

Each remote campus, generally a leased facility of up to 15 classrooms, should be equipped with redundant telecommunications service providers to ensure uninterrupted connectivity. Each remote campus network node/IT room should be located in a dedicated space and equipped with a minimum of two telecommunications cabinets (one for audio-visual equipment, one for network equipment and voice/data cabling). The node/IT room will serve all classrooms, offices, and Wi-Fi antennas in the building.

The existing Data Center located in Founders Hall could be decommissioned or maintained as a failover Data Center until such time as an off-site redundant Data Center or colocation facility can be established.
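The case for dual disparate pathways can be quantified with basic reliability arithmetic: two independent paths fail together only when both fail at once. The sketch below uses an assumed 99% per-path availability purely for illustration; actual figures would come from the providers' service level agreements.

    # Availability gain from dual disparate paths. The 99% per-path
    # figure is an assumption for illustration, not a measured value.

    def parallel_availability(a1: float, a2: float) -> float:
        """Availability of two independent redundant paths."""
        return 1 - (1 - a1) * (1 - a2)

    single = 0.99                                # ~3.7 days of downtime/year
    dual = parallel_availability(single, single)
    print(f"single path: {single:.4%} available")
    print(f"dual paths:  {dual:.4%} available")  # 99.99% -> ~53 min/year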