Academic and Technology Resources


VISION

INTRODUCTION
Technology should effectively support pedagogy and multiple learning modalities to facilitate the teaching and learning process. Technology resources are needed for an active and engaging learning community and should be designed to be flexible, scalable and adaptable.

FLEXIBLE
The design must allow for degrees of flexibility. This may mean more diverse types of spaces, such as small work areas adjacent to lecture rooms, or groupings of educational programs into learning communities to allow interdisciplinary work. Along with appropriate room size, University designs must address spatial configuration and program adjacency issues. The design may build in different degrees of amenities, such as technological capabilities, acoustic performance, and visibility to the outdoors, depending on what functions are needed to accommodate different learning styles.

SCALABLE
When effectively and transparently integrated into a new or renovated project, appropriate educational technology can allow students to become self-directed learners and collaborative partners. Awareness of University research expands, and faculty become mentors. The technology facilitates program flexibility, allowing any space on campus to become a learning space. The scale by which instructors and students connect can be within the traditional four walls of a single classroom, or it can reach across campus, across the region, or across the world for the enrichment of learning. The seamless integration of technology resources into every space on all campuses is the goal of new construction and renovation.

ADAPTABLE
Learning happens everywhere. The way students access information today is different from the way they were given information in the past. Providing technology-rich spaces with varied types of furniture transforms a traditional classroom layout into one that moves easily into configurations that support different educational approaches. The human interaction of both faculty and students is changing and adapting to technology and will only continue to shift toward a more collaborative and seamless approach. Research has allowed educators and designers to better understand how people work and learn best. Learning style is the way in which each individual learner begins to concentrate on, process, absorb and retain new and difficult material; creating space that is adaptable for different learning styles creates opportunities for more students to succeed.

TECHNOLOGY + VISION
The University of La Verne will be nationally recognized for its enriching and relevant educational experience, which prepares students to achieve more than they ever imagined. This vision is directly connected to the environment in which students learn and the tools which help facilitate their learning.

TECHNOLOGY
The goals outlined in the 2020 Strategic Vision illustrate the importance of preparing for a digitally integrated learning environment. This facility and technology master plan outlines the importance of connecting the Strategic Vision to the Master Plan through learning space typologies. To achieve Academic Excellence, the learning environment and space design must support future needs. In order to deliver courses, programs and services in the manner most appropriate for excellence in student learning, and in a way that is fiscally responsible, an investment in technology is key.

SPACE + PEDAGOGY
Face-to-face delivery, hybrid delivery and 100% online delivery all have technology needs. The interaction between space, technology and pedagogy will create a connected, student-centered and engaging environment.

2020 STRATEGIC VISION
Driven by four strategic initiatives and goals:
1 Achieving educational excellence (curricular & co-curricular)
2 Strengthening the human and financial resources of the University
3 Heightening reputation, visibility, and prominence
4 Enhancing appropriate, quality campus facilities and technologies

GOALS

TECHNOLOGY GOALS
The University strives to build and sustain technological infrastructure, tools and services that support student learning, faculty research, and administrative efforts with meaningful innovations.

ROBUST / SCALABLE TECHNOLOGICAL INFRASTRUCTURE
Infrastructure should support the University's growth, both wired and wireless. The infrastructure should support an increase in access points, security, speed, storage and bandwidth.

ROBUST, FAULT-TOLERANT AND CENTRALIZED DATA CENTER
The Data Center shall be a secure and robust support system for the University, with a co-location and/or cloud facility to augment continued operations.

EXPAND MOBILE TECHNOLOGY FOR STUDENTS + STAFF
Provide a system that encourages and supports student and faculty devices: early alerts, class data from a mobile device, and the ability to gain information and/or analyze student activity.

EXPAND HYBRID AND ONLINE TEACHING
Expand hybrid and online teaching capabilities and offerings to capitalize on the learning benefits, better serve student needs and more effectively utilize space.

PROVIDE TECHNICAL + NON-TECHNICAL TRAINING
Increase the ability to provide technical and non-technical training on software and hardware systems for students, faculty and staff.

TECHNOLOGY-INTEGRATED LEARNING SPACES
Design classroom spaces which model technology for students, faculty and staff. Include video conferencing, virtual and augmented reality capability, and flexible space to hold lectures, classes, etc.

ACCESS TO COMPUTER PROFILES
Provide access to faculty, staff and students' desktop computer profiles from anywhere, anytime.

IMPROVE LEARNING MANAGEMENT SYSTEM LITERACY
Expand the usage, sophistication and program offerings of the learning management system (currently Blackboard) to support more robust student and faculty engagement.

TECHNOLOGY LEARNING CENTER
Provide access to labs, equipment check-out, support resources and one-on-one guidance for software and hardware usage.

REMOTE DIGITAL ACCESS
Provide remote access to computer labs from anywhere, anytime.

ENABLE + LEVERAGE DATA-DRIVEN DECISIONS
Technology shall allow for predictive analytics and business intelligence for the University. The infrastructure should leverage current data sets. Additionally, the University shall determine which other data elements it may need or want to report on.

ENSURE A CONSISTENT EXPERIENCE
The technology shall provide a consistent experience for both students and faculty throughout the University, in various rooms and across all campuses.

TECHNOLOGY VISION

TODAY'S LEARNER
Learning spaces must reflect the needs of today's learner and prepare students, as digital natives, for their future careers. The learner is a networked individual, including mid-career and adult learners. Students expect integrated and seamless access to technology and a globally connected community of consumers. Space must support face-to-face interaction, but also allow for a hybrid of learning and teaching styles where students can learn anytime from anywhere. The University of La Verne shall leverage technology to become more interactive and collaborative. The learning environments shall allow students to connect their own devices to any display and be owners of the space. As the University incorporates more opportunities for hybrid and online courses, the use of distance learning spaces is increasingly relevant. The technology should be simple, seamless and transparent to the learning space. At the core of the Technology Vision is the user; ultimately, the spaces must accommodate more than one type of learning and teaching style to create a flexible, scalable and adaptable educational space. In line with the recommendations of the Academic Excellence Sub-committee, and to be consistent with the University's values and what is taught in the classrooms, the facilities must be sustainable: where we learn matters. Spaces shall be a balance between legacy and innovation, with quality environments.

INFRASTRUCTURE

DATA CENTER
The current Campus Data Center is located within historic Founders Hall, which is a less-than-ideal location for a variety of reasons but is typical of an existing campus IT infrastructure that has evolved over time and makes use of available space and systems. Structural concerns aside, the Data Center is located on the 2nd floor of the facility and is fairly exposed via large windows. Its location makes maintenance and access difficult, given that its primary entrance is within a working classroom. A seismic event that causes the local building official to prevent re-entry or occupancy has the potential to bring University IT operations to a stop. A severe windstorm or rainstorm could cause water and other physical damage given the Data Center's windows and the IT equipment's proximity to the exterior of the building.

The Data Center does have local, rack-mounted uninterruptible power supply (UPS) systems providing power conditioning and battery back-up, allowing graceful and orderly shutdown of servers within a 30-minute period. However, the Data Center lacks a back-up generator to support the UPS systems and HVAC during extended outages. During a localized power outage at Founders Hall, the IT systems supported by the data center are unavailable in other campus buildings and at remote campus locations. This outage would include Internet access at the main campus. Mechanical systems serving the Data Center appear to be partially redundant, allowing concurrent maintenance of mechanical cooling system components. Electrical systems serving the Data Center appear to be non-redundant and do not allow concurrent maintenance of electrical system components without a data center shutdown.

Having access to data center services at all times will become a necessity in the future. As distance learning and collaborative/hybrid teaching models evolve, reliance on the network and servers will become absolute. Suffering a data outage will become more impactful to the University's main educational mission. The current high risk of a data center service outage, coupled with a growing dependency on those services, leads this Master Plan to recommend the construction of a data center in Phase I. Less typical of data centers that have evolved over a long period, the IT server, SAN and network topologies and strategies are fairly robust, with a reliance on virtualization and redundant communication links to outlying facilities and remote campuses. The current IT infrastructure and telecommunications cabling topology simplifies locating a new Data Center in a new campus building.
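The 30-minute graceful-shutdown window supported by the rack-mounted UPSs can be expressed as a simple sizing check. The sketch below is illustrative only; the runtime and margin figures are assumptions, not measured values for the Founders Hall equipment.

```python
# Hypothetical sketch: check whether rack UPS battery runtime covers an
# orderly server shutdown. All figures are illustrative assumptions.

def shutdown_window_ok(battery_runtime_min: float,
                       shutdown_time_min: float,
                       safety_margin: float = 0.2) -> bool:
    """True if battery runtime covers the shutdown window plus a margin."""
    return battery_runtime_min >= shutdown_time_min * (1 + safety_margin)

# With the 30-minute shutdown window noted above and an assumed 20% margin,
# batteries would need at least 36 minutes of runtime.
print(shutdown_window_ok(40, 30))  # True
print(shutdown_window_ok(30, 30))  # False
```

The same check generalizes to the extended-outage case: without a back-up generator, any outage longer than the battery runtime forces a full shutdown.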

CURRENT INFRASTRUCTURE
Current campus technology infrastructure consists largely of a central data center that provides a variety of services to on-campus and remote buildings through both local area networks and wide area networks. These networks carry both voice and data services, utilizing a combination of fiber optic network (on-campus) and leased telecommunications connectivity (remote campuses). The current data center server infrastructure is nearly fully virtualized, which creates scalability, flexibility and adaptability; new services can be added quickly and easily. The facility that houses the data center is one of the original campus buildings. As such, there are severe limitations with regard to data center growth, redundancy, maintainability, and survivability. The current on-campus telecommunications pathways are generally adequate for current needs, with the exception of connections across Arrow Highway, which are of very limited capacity. The current campus does make use of Wi-Fi, but will need to plan for cabling and network upgrades to support higher-bandwidth Wi-Fi technologies. Current classroom technology ranges from whiteboards and overhead projectors to short-throw overhead projectors coupled to instructor station computers. Existing classroom layouts have been modified to support use of limited technology, often in ways that are less than optimal. Effective collaboration enhancement is not currently achieved by these limited classroom technology installations.

SCALABILITY
The campus has a good starting point with its adoption of Air-Blown Fiber optic cabling (ABF), which provides a scalable approach for both adding bandwidth and adding buildings. Use of single-mode fiber optic cable provides higher bandwidth capability over longer distances; this should be the media of choice for intra-campus design.

FLEXIBILITY AND ADAPTABILITY
The existing air-blown fiber network should be expanded. ABF technology allows fiber strands to be removed and replaced without access to pull-boxes or other intermediate points along a fiber run. Spare tubes will allow fiber to be upgraded without downtime. Intermediate Tube Distribution Units (TDUs) enable direct point-to-point pathways to be configured between individual buildings.

RELIABILITY
The telecommunications cabling system should be expanded to allow for loop or ring diversity as University operations become more dependent on the network at the main campus and the remote campus locations. Dual disparate pathways should be created from each building to the data center to increase service reliability.
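The dual-pathway requirement above can be verified programmatically: a building has two disparate paths to the data center exactly when no single fiber segment failure disconnects it (Menger's theorem). The sketch below is a minimal illustration; the building and segment names are hypothetical, not the actual campus topology.

```python
# Hypothetical sketch: verify each building keeps a path to the data center
# after any single fiber-segment failure, i.e. two edge-disjoint paths
# exist (Menger's theorem). Node names are illustrative.
from collections import deque

def reachable(adj, src, dst, skip=None):
    """BFS reachability, optionally ignoring one bidirectional edge."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in adj.get(node, ()):
            if skip in ((node, nxt), (nxt, node)):
                continue  # simulate this segment being cut
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

def survives_any_single_cut(edges, building, dc):
    """True if the building reaches the data center after any one cut."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    return all(reachable(adj, building, dc, skip=e) for e in edges)

# A ring gives every building loop diversity back to the data center.
ring = [("DC", "A"), ("A", "B"), ("B", "C"), ("C", "DC")]
print(survives_any_single_cut(ring, "B", "DC"))  # True
# A building hanging off a single spur segment does not survive a cut.
spur = ring + [("C", "D")]
print(survives_any_single_cut(spur, "D", "DC"))  # False
```

This is the design rationale behind the loop/ring diversity recommendation: the ring topology passes the single-cut test for every building, while any spur fails it.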

MASTER PLAN RECOMMENDATIONS

Campus Infrastructure
The primary recommendation is to build a new data center in Phase I. With voice over Internet Protocol (VoIP), the need for multi-pair copper cabling has been greatly reduced. Future campus infrastructure will be designed around mostly fiber optic cable, with only a small contingent of copper (50 pairs per building) for analog phone lines, referred to as POTS or Plain Old Telephone Service. Certain building services, such as elevator phones and fire alarm dialers, require these copper pairs. To remain flexible, the fiber optic cable infrastructure should have the ability to change the number of strands and type of fiber cable as needed. This will be achieved with the use of air-blown fiber (ABF), which is already deployed in a limited capacity on the main campus. The ABF is essentially a network of small tubes and tube distribution units (TDUs) that can be plugged and unplugged to create a dedicated air- and water-tight path between any (2) points on campus. Fiber strands are installed or extracted with the use of compressed inert gas. To achieve reliability, the physical campus infrastructure should be configured such that any one building will have (2) physically redundant connections to the Campus Data Center. These physical connections will ideally take (2) disparate paths to the Data Center. Additional redundancy can be achieved by adding a second Data Center. Within new buildings, a structured cabling approach should be considered. A 4-pair Category 6 cable provides transmission bandwidth up to 1000 Mb/second and is suitable for most student or faculty workstation applications. A 4-pair Category 6A cable provides up to 10 Gbit/second bandwidth and is suitable for extremely high-capacity workstation applications and audio-video transmission. WiFi throughout interior and exterior spaces will permit laptops, tablet devices, and smart phones to communicate at bandwidths up to 60 Mb/second.
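The cable categories above map cleanly to bandwidth tiers, which a designer might encode as a simple selection rule. This is a minimal sketch using only the figures stated in this plan (Category 6 up to 1000 Mb/s, Category 6A up to 10 Gb/s); the function name and thresholds are illustrative assumptions.

```python
# Hypothetical helper mapping a workstation's required bandwidth to the
# structured-cabling media named in this plan. Thresholds follow the
# figures above; the function itself is an illustrative assumption.

def pick_horizontal_cable(required_mbps: float) -> str:
    if required_mbps <= 1000:
        return "Category 6 (up to 1000 Mb/s)"
    if required_mbps <= 10_000:
        return "Category 6A (up to 10 Gb/s)"
    return "Fiber optic (beyond copper workstation cabling)"

print(pick_horizontal_cable(500))     # Category 6 (up to 1000 Mb/s)
print(pick_horizontal_cable(10_000))  # Category 6A (up to 10 Gb/s)
```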

DATA CENTER
Having access to data center services at all times will become a necessity in the future. As distance learning and collaborative/hybrid teaching models evolve, reliance on the network and servers will become absolute. Suffering a data outage will become more impactful to the University's main educational mission. A new Campus Data Center is recommended in order to support the increasing importance of connectivity and the data-centric world that already exists today. The future data center will be located in a new campus building, purpose-built with the ability to expand or contract based on modular building blocks (space, air-conditioning, and power). Data center cabinets will be standardized to a universal configuration. Accessible overhead pathways will allow easy patching between cabinets. Spare multi-strand and multi-cable assemblies will accommodate quick reconfiguration between cabinets to facilitate adding bandwidth or services in near real time.

MAINTAINABILITY
The new data center shall be concurrently maintainable where possible, allowing maintenance of mechanical or electrical systems without taking the data center off-line. The new Data Center should be purpose-built to house critical telecommunications equipment. There will be a need for reliable environmental control to maintain strict operating temperature and humidity margins. Multiple Computer Room Air Conditioners (CRACs) will be used in an N+1 configuration such that design capacity is still maintained while any one component is turned off (or failed). An intelligent HVAC control panel will ensure that all CRACs are run periodically on a rotating basis. Server cabinets should be arranged in a hot aisle / cold aisle alternating pattern per current best practices to maximize air flow efficiency. A hot aisle containment system will be installed around each hot aisle to prevent supply/return air mixing, further increasing HVAC efficiency, and to comply with the 2013 California Energy Code (CEC). Additional CEC requirements will include an air-side or water-side economizer to utilize outside air for Data Center air conditioning when environmental conditions permit.

The power system should be equally robust and redundant to support the telecommunications equipment as well as the HVAC equipment. The new data center should be supported by a fuel cell or diesel generator capable of providing power to IT equipment, HVAC systems, lighting, and support systems for the Data Center. An automatic transfer switch (ATS) will automatically start the generator and switch over from utility power during a power outage. The Data Center should be equipped with an N+1 UPS system to provide ride-through power during the transition from utility power to generator power. Modern servers and switching equipment are equipped with dual redundant power supplies with (2) or more cords and plugs. The Data Center power should be routed from redundant power sources within the Data Center. If dual (redundant) UPSs are not specified, then redundant power should be sourced from a non-UPS circuit to mitigate the impact of a UPS failure.

The Data Center should not be located below drains or pipes containing liquid. A dry sprinkler system should be installed so that water is not expelled if a sprinkler head is accidentally knocked off. A dry-agent fire suppression system should be considered, as well as an emergency power off (EPO) system to allow IT equipment to be shut off prior to release of any fire suppression agent or potential sprinkler activation. The Data Center will not require a raised floor for power distribution or air circulation. An acoustical t-grid drop-tile ceiling configuration with ducted supply will provide the most efficient method of air distribution. Power outlets should be located above the server cabinets, mounted on telecommunications-style ladder tray. In lieu of conventional power distribution, an overhead busway system such as Starline, which allows easy snap-in and snap-out of various receptacle types, should be considered.
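The N+1 rule described for the CRAC units (design capacity maintained with any one unit off or failed) reduces to a short calculation. The load and unit-capacity figures below are illustrative assumptions, not sizing data from this plan.

```python
# Hypothetical sketch of the N+1 sizing rule for the CRAC units described
# above: enough units to carry the design cooling load with any one unit
# out of service. Load and capacity figures are illustrative assumptions.
import math

def crac_units_n_plus_1(design_load_kw: float, unit_capacity_kw: float) -> int:
    """N units sized to the design load, plus one redundant unit."""
    n = math.ceil(design_load_kw / unit_capacity_kw)
    return n + 1

# e.g. an assumed 90 kW design load served by 30 kW units needs 3 + 1 = 4.
print(crac_units_n_plus_1(90, 30))  # 4
```

The same N+1 arithmetic applies to the UPS modules: any one module can be removed for maintenance while the remaining modules still carry the full critical load.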

DATA CENTER RELIABILITY
The existing data center will remain the single active data center until the future redundant data center is built and dual pathways have been established to each building. The future data center will be designed with an N+1 reliability standard in mind. While not completely fault tolerant, elements of certain infrastructure systems should be made fault tolerant; examples include the use of on-site generation (fuel cell or diesel generator) and two electrical systems feeding each data center cabinet. Dual disparate pathways should be created from the data center to both the University's service providers and the campus telecommunications infrastructure to increase service reliability.

REMOTE CAMPUS
Each remote campus, generally a leased facility of up to 15 classrooms, should be equipped with redundant telecommunications service providers to ensure uninterrupted connectivity. Each remote campus network node/IT room should be located in a dedicated space and equipped with a minimum of two telecommunications cabinets (1 for audio-visual equipment, 1 for network equipment and voice/data cabling). The node/IT room will service all classrooms, offices, and WiFi antennae in the building. The existing Data Center located in Founders Hall could be decommissioned or maintained as a fail-over Data Center until such time as an off-site redundant Data Center or Co-Location Facility can be established.