Compute Canada Technology Briefing
November 12, 2015
Introduction

Compute Canada, in partnership with regional organizations ACENET, Calcul Québec, Compute Ontario and WestGrid, leads the acceleration of research innovation by deploying state-of-the-art advanced research computing (ARC) systems, storage and software solutions. Together we provide essential digital research services and infrastructure for Canadian researchers and their collaborators in all academic and industrial sectors. Our world-class team of more than 200 experts employed by 35 partner universities and research institutions across the country provides direct support to research teams and industrial partners.

Advanced research computing accelerates research and discovery and helps solve today's grand scientific challenges. Using Compute Canada resources, research teams and their international partners work with industry giants in the automotive, ICT, life sciences, aerospace and manufacturing sectors to drive innovation and new products to market. Canadian researchers leverage their access to expert support and infrastructure to participate in international initiatives. Researchers using advanced research computing rate significantly higher in citations than the average from Canada's top research universities and any international discipline average.
Key Facts:

The investment of $75 million in funding from the Canada Foundation for Innovation (CFI) and provincial partners will address urgent and pressing needs and replace aging high performance computing systems across Canada.

Compute Canada and its regional partners have more than 18 years of experience in accelerating results from industrial partnerships in advanced research computing and Canada's major science investments.

Compute Canada currently manages more than 20 petabytes of storage and 2 petaflops of computing resources and supports all of Canada's major science investments and programs. With the implementation of this technology deployment plan, Compute Canada will manage more than 60 petabytes of storage and 13.4 petaflops of computing resources.

What this Means for Canada's Research Community

These improvements will allow Compute Canada to continue to support the wide array of excellent Canadian research identified in the proposal. The purchase of significantly more storage, deployed as part of an enhanced national storage infrastructure, will accelerate data-intensive research in Canada. The ability to purchase a single Large Parallel machine of over 65,000 cores will provide Canada's largest compute-intensive users with a new resource which far exceeds any machine in the Compute Canada fleet today.

This investment is more than an opportunity to increase the size of storage systems and a raw number of cores. The new systems replace old technology with new, and will be deployed with national services, coherent policies and a new operational model for the organization. This enhanced service level will allow more researchers to exploit the planned four new systems in an efficient and effective way.
Overview

Compute Canada is the national resource provider for advanced research computing and big data, delivering a full range of systems and services to researchers. Funding from the Canada Foundation for Innovation (CFI), with matching funds from provincial partners and from vendors in the form of in-kind contributions, will enable the significant technology refresh program described below.

This technology briefing document is intended to be circulated to Compute Canada stakeholders and suppliers. It provides status and planning for the technology refresh program resulting from CFI's cyberinfrastructure initiative, which will be implemented from 2015 through 2017. It also anticipates planning for future growth.

The total value of this capital program is $75M, to be spent mainly in 2016 and 2017. This reflects a $30M capital grant from CFI, a further $30M from provincial and institutional sources, and $15M of vendor in-kind contributions. By the end of 2017, many legacy systems will have been replaced by new computational systems and storage, totalling over 123,000 CPU cores and 60 PB of storage.
New Systems at Four National Sites

Through a formal competition among Compute Canada member institutions, four sites were selected to host the new systems and associated services. They are the University of Victoria, Simon Fraser University (SFU), the University of Waterloo, and the University of Toronto.

New Computational Systems

Planning for the new systems at the four sites has been responsive to user demand, site affinity and experience, and shifts in timing and funding. Envisioned system characteristics follow.

University of Victoria: The GP1 system will be an OpenStack cloud, with emphasis on hosting virtual machines and other cloud workloads. At least 3,000 CPU cores are anticipated by early 2016, with a 40% expansion planned in 2017.

Simon Fraser University: The GP2 system will mainly focus on a mix of batch-oriented parallel and serial workloads with several different node types. It will also have a relatively small OpenStack partition that will federate with GP1 and GP3. Node types will include some large memory nodes, as well as a number of GPU nodes. At least 18,000 CPU cores are anticipated for mid-2016, with a 40% expansion planned in 2017.

University of Waterloo: The GP3 system will have a similar design to GP2, and it is anticipated that GP2 and GP3 together will provide features for workload portability and resiliency. Plans for GP3 include at least 19,000 CPU cores in late 2016, with approximately 64 GPU nodes. A 40% expansion is planned in 2017.

University of Toronto: The LP system will be deployed by approximately mid-2017, and is anticipated to have at least 66,000 CPU cores. This will be a balanced, tightly coupled high performance computing resource, designed mainly for large parallel workloads.

National Storage Architecture

A new national storage architecture spanning the four sites will offer important benefits to users.
Compute Canada will utilize the concept of generic storage building blocks, which will use software-defined storage techniques to deploy capacity and performance in a flexible, easily expandable, highly interoperable, and cost-effective manner. In addition to providing file systems for file-based storage, there will be object storage services. Object storage services will provide ease of use and built-in features including resiliency, geo-replication, enhanced metadata, and combinations of public access and data isolation.

Approximately 20 petabytes (PB) of persistent storage is planned to be deployed across the four sites in early and mid-2016, with expansion to over 60 PB by the end of 2017. An offline/nearline tier of over 20 PB will provide lower-cost capacity for backups and hierarchical storage management. High performance parallel filesystems for GP2, GP3 and LP will also be deployed.

Notes: CPU core count equivalents are based on Intel Haswell computational capabilities. All future plans for nodes, CPUs and other specifications are intended as conservative estimates.
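As an illustrative sketch only (the class, method names, and site labels below are hypothetical, not Compute Canada's actual services), the object-storage features described above — per-object metadata, geo-replication across sites, and a public/isolated access flag — might look like this in miniature:

```python
# Toy model of the object-storage concepts described above: per-object
# metadata, geo-replication to every named site, and an access flag.
# Site names and the API are hypothetical, for illustration only.

class ObjectStore:
    def __init__(self, sites):
        # One replica map per site: {site: {key: (data, metadata)}}
        self.replicas = {site: {} for site in sites}

    def put(self, key, data, metadata=None, public=False):
        """Store an object and replicate it to every site."""
        record = (data, dict(metadata or {}, public=public))
        for site in self.replicas:
            self.replicas[site][key] = record

    def get(self, key, site):
        """Fetch an object from a specific site's replica."""
        data, meta = self.replicas[site][key]
        return data, meta

store = ObjectStore(sites=["uvic", "sfu", "waterloo", "toronto"])
store.put("results/run42.csv", b"t,x\n0,1\n", metadata={"owner": "lab-a"})
data, meta = store.get("results/run42.csv", site="sfu")
```

The point of the sketch is that replication is a property of the store, not something each user scripts by hand; a real deployment would add durability, authentication, and failure handling.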
Other Compute Canada member sites will be able to benefit from the national storage architecture, including those sites operating legacy resources. For example, users may need to migrate data to the new systems, or they might have use cases that will benefit from object storage or the larger capacity and higher performance the newer systems will offer.

Delivery Timeline

The Challenge 2 Stage 1+2 technology refresh will span two years of staged deployment. By the end of calendar year 2017, essentially all Challenge 2 Stage 1+2 funds will have been expended. The total supply at that time is forecasted to be at least 126,500 CPU cores (Haswell equivalents) and 62 petabytes of usable persistent storage. Storage does not include near-line or backup storage, nor high-speed parallel scratch space.

[Table: Challenge 2 Stage 1+2 Technology Planning. Compute is in Haswell-equivalent cores, storage is in usable petabytes. Timeframe is calendar year quarters (i.e., Q1 2016 is January-March 2016), and is approximate. The core and storage targets are estimates only.]

During the same two-year period, much of Compute Canada's existing equipment will be defunded and removed from the allocations process. Users will be moved to one of the new systems, and needed data will be migrated. Planning in 2014 for the site selection process identified 26 systems with 82,000 CPU cores from older generations (nearly 1 PF total) for retirement during this period. A schedule for the remaining systems will be developed in conjunction with planning for further technology expansion, with some of the remaining systems likely to be removed from the allocations process in the same timeframe. Much of the 15 PB of allocatable storage available in 2015 will also be defunded and removed from the allocations process during the period.
Organizational Cooperation and Planning

Planning for site selection and the ensuing technology refresh has included deep coordination among the four national host sites for all aspects of procuring, deploying, configuring, operating and supporting Compute Canada's suite of systems and services. The Compute Canada Technological Leadership Council (TLC) is responsible for developing specifications for the new systems, and will lead the procurement evaluation. The TLC includes representatives from each national site, as well as the four regional CTOs, and is led by the national CTO.

New national teams, which will draw from Compute Canada member institutions, will run the systems and services, provide user support, and engage in cross-site coordination on major themes such as monitoring, storage, cloud services, and networking. The new systems and services will share practices for security. The teams for all national systems and services will provide defined coverage and response levels.

Procurement Processes

All four sites are working with the Compute Canada team to ensure an open and fair acquisition process. Resources will be purchased and owned by each site. Formation of specifications, and evaluation of bids, will be by national teams with full engagement by site procurement officers.

Flexibility in Planning

Plans described here will be modified as needed, based on discussions among the four sites, Compute Canada, and the national and provincial funding agencies. Re-scaling of expectations for system size and capabilities, if needed, will be based on experience with vendor pricing and the influence of the Canadian dollar's exchange rate. There will also be assessment of anticipated user demand, including for new technologies or configurations. This will be via the SPARC process described below, as well as through discussions with funding agencies and their researchers.
By late 2016, updates will be considered for any needed revisions to planning for the expansion of the three GP systems, and for the scale, configuration and timing of the LP system. Alignment of supply and demand will be re-assessed for computation and storage. Planning will also be responsive to any new information concerning additional funding, the selection of additional hosting sites, shifts in Canada's digital research infrastructure strategy, or other factors.

Funding and Governance

SFU is the lead organization for the CFI capital program and is executing an inter-institutional agreement with the three other hosting institutions and Compute Canada. The Compute Canada membership will be involved in many broader aspects of organizational governance and planning. CFI retains oversight for capital spending for the technology refresh, as well as operational expenses via the Major Science Initiatives (MSI) program.
Usage and Capabilities

As Canada's national platform for advanced research computing, Compute Canada serves thousands of users in essentially every scientific discipline. Compute Canada is continually engaged in renewal and expansion of its services and its audience. Beyond Canada's academic community, this includes engagement with industry and with international partners. Some of the current and expanded services within Compute Canada are described below.

Workload Portability: Users will find it easier than before to run their jobs on any of the new systems. This will be facilitated by deploying a single HPC batch system, having a common naming scheme for software, modules, and filesystem mount points, and incorporating mechanisms for data movement with the workload manager. For projects involving a live stream of observational data, or other time-sensitive characteristics, workload portability will help to ensure the jobs run on time, wherever appropriate HPC resources are available.

Cloud Computing: Building on Compute Canada's successful early deployment of cloud systems and services, the GP1, GP2 and GP3 systems will comprise a federated cloud including single sign-on, shared data services, a common cloud scheduler, and other features for resiliency and ease of use. Additional cloud resources within Compute Canada will be able to become part of the federated cloud, simply by using the same authentication and configuration parameters.

Big Data: The storage architecture and cloud services will facilitate big data workloads, including data analytics. Storage will include database capabilities, and cloud services will support virtual machines with user-selected software and features.

National Operations and Support: The national teams will work together to provide a consistent and well-supported environment for computation and data. This will include all aspects of configuration and support.
Users will have a single point of contact to the national helpdesk, and will also be able to benefit from the expertise of on-campus support personnel.

Resource Allocations: Compute Canada will continue to allocate compute and storage resources through a fair and open process. Workload portability and the consistency of configuration and support will give users extra flexibility, when desired, in their choice of computing resources.
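To make the workload-portability idea concrete, here is a toy placement function. The capacities are taken loosely from the minimum core counts planned for the four new systems; the placement logic itself is illustrative only, not the actual workload manager:

```python
# Toy sketch of workload portability: place a job on any system with
# enough free cores. Capacities are the minimum core counts quoted in
# this briefing; the scheduling logic is illustrative only.

SYSTEMS = {"GP1": 3_000, "GP2": 18_000, "GP3": 19_000, "LP": 66_000}

def place_job(cores_needed, free_cores):
    """Return the name of any system that can host the job, else None."""
    for name, free in free_cores.items():
        if free >= cores_needed:
            return name
    return None

free = dict(SYSTEMS)            # assume idle systems for the example
site = place_job(20_000, free)  # exceeds the GP minimums -> "LP"
```

A real scheduler would also weigh queue depth, data locality, and node types, but the essential benefit is the same: the user asks for resources, not for a particular machine.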
National Services

Consultations have helped inform planning for systems and services. Service demands were articulated while consulting with applicants for CFI's Challenge 1 Stage 1, including these middleware services that were identified by multiple applicants:

Identification and Authorization Service: Provide common login across systems.
Software Distribution Service: Version-controlled software distribution to multiple sites.
Data Transfer Service: Move datasets among collaborators and their repositories.
Monitoring Service: Track uptime and availability of services and platforms.
Resource Publishing Service: Current information about available resources.

These services will be deployed beginning in 2016 for all new systems as part of the infrastructure investment. Additional services will be identified, developed, deployed and supported based on demand. It is Compute Canada's intention to provide a useful and effective set of middleware services, accessible to any user or group. These will provide a high performance and well-supported baseline upon which users or groups may build their own custom applications. Compute Canada views these tools as needed software infrastructure, and is devoting some of the Challenge 2 Stage 1+2 funds to developing that infrastructure.

Compute Canada views many of the new services identified above as essential enabling tools for Research Data Management (RDM). As data volumes grow, there is a growing demand for RDM. Compute Canada will provide a common set of middleware services for users with this need. RDM will continue to mature during the period, and will include cooperation with other digital research infrastructure providers in Canada.

Future Consultations on this Plan

In early 2016, Compute Canada will embark on a second round of SPARC consultations. SPARC2 will help to identify current and future needs, as well as to parameterize growth in user demand.
As with the previous SPARC, scientists and engineers from across Canada will be invited to submit descriptions of their research goals, and the advanced research computing capabilities and capacities required to achieve those goals.
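As a toy sketch of the bookkeeping a Monitoring Service like the one listed above might perform (the event format and function are hypothetical), availability over a time window can be computed from a service's recorded up/down transitions:

```python
# Toy availability calculation for a monitoring service: given sorted
# (timestamp, is_up) state transitions, report the fraction of a time
# window the service was up. Event format is hypothetical.

def availability(events, window):
    """events: sorted (timestamp, is_up) transitions; window: (start, end).
    The last event at or before `start` gives the initial state;
    with no such event the service is assumed down."""
    start, end = window
    up_time, state, last = 0.0, False, start
    for t, is_up in events:
        if t <= start:
            state = is_up      # establish the state at window start
            continue
        t = min(t, end)
        if state:
            up_time += t - last
        last, state = t, is_up
        if t >= end:
            break
    if state and last < end:   # account for the final open interval
        up_time += end - last
    return up_time / (end - start)

# Up at t=0, down at t=50, back up at t=75 -> 75% availability over [0, 100]
frac = availability([(0, True), (50, False), (75, True)], (0, 100))
```

A production service would collect these transitions from probes across sites and publish them; only the arithmetic is shown here.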
Projections for Future Supply and Demand

Technology Impact of Challenge 2 Stage 1+2

By the end of calendar year 2017, Compute Canada will have delivered essentially all of the new computational and storage capacity facilitated by CFI's Challenge 2 Stage 1+2 award. The $75M value of capital investment will replace most legacy systems and associated storage.

[Figure: Modernization and capacity resulting from Challenge 2 Stage 1+2. Primary disk does not include offline/near-line storage for backups. It does include a variety of disk or disk-like technologies, including object storage, block storage, storage replicas, and storage for filesystems.]
During this technology refresh program, CPUs will be replaced with the latest generation, along with more memory. New nodes will be augmented by GPUs and accelerators. A typical node in service in 2015 has dual 6- or 8-core CPUs and 16-32 GB of memory. A typical node to be deployed in 2016 will have dual 14- or 16-core CPUs, with 128 GB of memory or greater.

Challenge 2 Stage 1+2 is an important and necessary modernization of the DRI provided by Compute Canada. Sustained investment is needed to accommodate the needs of current and future users of Canada's national platform for computation and storage.

Scenarios of Increasing Demand

There are several factors affecting planning for future demand for advanced research computing:

1. Demand by users who engage in computational modelling, for additional CPU resources:
a. To increase spatial or temporal resolution;
b. To add physics or other simulation factors that were previously too slow or computationally expensive to calculate;
c. To test additional parameters or scenarios;
d. For projects and users new to Compute Canada, especially in nontraditional fields.

2. Demand for additional storage resources for computational modellers:
a. Larger input and output datasets, due to larger or more complex models;
b. The need to keep some datasets beyond the end of a computational campaign, to assist in future modelling or to support publications.

3. Demand for portals and gateways, including from new user populations:
a. May include needs for highly resilient services and systems;
b. May include needs for high-end storage subsystems for database operations;
c. Bring a user base that may be quite large, and may include the general public.

4. Demand for projects emphasizing instruments and observational data gathering and analysis:
a. May have irreplaceable or highly valuable data, which needs to have multiple copies at multiple locations;
b.
Include Compute Canada's largest storage users, many of whom have new instruments in development;
c. Require computational resources for post-processing, analysis, portals, visualization and/or reanalysis.
5. Demand by data-focused projects:
a. May require isolation of data or computation from inappropriate disclosure;
b. Includes some usage (such as personal health information) with regulatory concerns;
c. Includes emphasis on data analytic methods that are not yet generally available on Compute Canada resources.

6. Demand from projects being directed by funding agencies to consider utilizing Compute Canada resources:
a. Include a range of use cases that might not be in Compute Canada's current service catalog, but will be developed;
b. Some of these projects are very large and demanding;
c. Projects view Compute Canada as a partner, not just a resource provider.

7. Demand from industry:
a. May require isolation of data or computation from inappropriate disclosure;
b. Interested in the expertise of Compute Canada, perhaps more than the computational resources.

8. Demand from government:
a. Exists within a regulatory environment that might not be part of Compute Canada's current service catalog, but will be developed;
b. Can involve a long planning timeline.
For Challenge 2 Stage 1+2, planning emphasized modernization of the computational and storage resources. Planning has been sensitive to anticipated demand growth and changing patterns of utilization, informed via Challenge 1 and the SPARC consultations. The annual allocations process by Compute Canada is a major indicator of growth trends, since it aggregates hundreds of existing projects. Data for the 2016 allocation period are now available, and reflect the impact of Challenge 2 Stage 1+2. In 2017, further growth is anticipated, along with retirement of legacy resources.

Through the SPARC process, Compute Canada has identified the expected growth in community demand for storage (15x) and compute (7x) resources through 2020. This data has been converted into a doubling time to project future demand in equivalent core years and terabytes of storage. Demand indicators support using a doubling time of 1.8 years for computational demand, and 1.3 years for storage demand.

By forecasting demand based on these doubling times, we can project a trend into the future. For this projection, we present a range where the lower bound represents no growth in the Compute Canada user base, and the upper bound represents ongoing increases in the user base, following historical trends. Trends project a demand for 1-3 million Haswell-equivalent cores by 2020, and more than an exabyte of persistent storage. These projections may turn out to be underestimates, since some existing disciplines making extensive use of Compute Canada resources today anticipate needing over 1 million cores or 1 exabyte of data just for their own projects by 2020. It is hoped that Compute Canada will be stewards, along with members, regions and provincial partners, of the sustained capital investment that will be required to meet these demands.
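The conversion from total growth factors to doubling times can be checked directly. Assuming the 7x compute and 15x storage growth estimates span the five years from 2015 to 2020 (an assumption of this sketch, consistent with the projections above), the arithmetic reproduces the quoted doubling times:

```python
import math

def doubling_time(total_growth, years):
    """Years for demand to double, given total growth over `years`."""
    return years * math.log(2) / math.log(total_growth)

def projected_demand(current, years_ahead, t_double):
    """Project demand forward assuming a constant doubling time."""
    return current * 2 ** (years_ahead / t_double)

# SPARC-derived growth factors, assumed to span 2015-2020 (5 years)
compute_td = doubling_time(7, 5)    # about 1.8 years
storage_td = doubling_time(15, 5)   # about 1.3 years
```

Projecting forward with `projected_demand` gives the lower bound of the range above (no user-base growth); the upper bound requires the additional historical user-base trend, which is not modelled here.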
Vision 2020

Compute Canada, as a leading provider of digital research infrastructure (DRI), is taking an integrated approach to data and computational infrastructure in order to benefit all sectors of society. As a result of the technology refresh and modernization supported by CFI's Challenge 2 Stage 1+2, excellent research will benefit from modern and capable resources for computationally-based and data-focused work. Compute Canada is coordinating with government funding agencies and with other DRI providers to develop a shared vision of providing the world's most advanced, integrated and capable systems, services and support for research. Future researchers will have seamless access to DRI resources, integrated together for maximum efficiency and performance, without needing to be concerned with artificial boundaries based on different geographical locations or providers.

By 2020, Compute Canada will offer a comprehensive catalog of resources to support the full data research cycle, allowing researchers and their industrial and international partners to compete at a global scale. In cooperation with Canada's other DRI providers, Compute Canada's systems and services will facilitate workflows that easily span different resources: from the lab or campus, to national computational resources, analytical facilities, publication archives, and collaborators. Local support and engagement will remain a hallmark of delivering excellent service to all users. The pathway to this future has begun, with the modernization of Compute Canada's advanced research computing cyberinfrastructure through the CFI Challenge 2 Stage 1+2 program.
36 York Mills Road, Suite 505, Toronto, Ontario, Canada M2P 2E9
IaaS Cloud Architectures: Virtualized Data Centers to Federated Cloud Infrastructures Dr. Sanjay P. Ahuja, Ph.D. 2010-14 FIS Distinguished Professor of Computer Science School of Computing, UNF Introduction
More informationSGI HPC Systems Help Fuel Manufacturing Rebirth
SGI HPC Systems Help Fuel Manufacturing Rebirth Created by T A B L E O F C O N T E N T S 1.0 Introduction 1 2.0 Ongoing Challenges 1 3.0 Meeting the Challenge 2 4.0 SGI Solution Environment and CAE Applications
More informationA Policy Framework for Canadian Digital Infrastructure 1
A Policy Framework for Canadian Digital Infrastructure 1 Introduction and Context The Canadian advanced digital infrastructure (DI) ecosystem is the facilities, services and capacities that provide the
More informationBusiness Intelligence
Transforming Information into Business Intelligence Solutions Business Intelligence Client Challenges The ability to make fast, reliable decisions based on accurate and usable information is essential
More informationOPEN MODERN DATA ARCHITECTURE FOR FINANCIAL SERVICES RISK MANAGEMENT
WHITEPAPER OPEN MODERN DATA ARCHITECTURE FOR FINANCIAL SERVICES RISK MANAGEMENT A top-tier global bank s end-of-day risk analysis jobs didn t complete in time for the next start of trading day. To solve
More informationSoftware. Enabling Technologies for the 3D Clouds. Paolo Maggi (paolo.maggi@nice-software.com) R&D Manager
Software Enabling Technologies for the 3D Clouds Paolo Maggi (paolo.maggi@nice-software.com) R&D Manager What is a 3D Cloud? "Cloud computing is a model for enabling convenient, on-demand network access
More informationSUSE Cloud 2.0. Pete Chadwick. Douglas Jarvis. Senior Product Manager pchadwick@suse.com. Product Marketing Manager djarvis@suse.
SUSE Cloud 2.0 Pete Chadwick Douglas Jarvis Senior Product Manager pchadwick@suse.com Product Marketing Manager djarvis@suse.com SUSE Cloud SUSE Cloud is an open source software solution based on OpenStack
More informationEMC PERSPECTIVE. The Private Cloud for Healthcare Enables Coordinated Patient Care
EMC PERSPECTIVE The Private Cloud for Healthcare Enables Coordinated Patient Care Table of Contents A paradigm shift for Healthcare IT...................................................... 3 Cloud computing
More informationCEDA Storage. Dr Matt Pritchard. Centre for Environmental Data Archival (CEDA) www.ceda.ac.uk
CEDA Storage Dr Matt Pritchard Centre for Environmental Data Archival (CEDA) www.ceda.ac.uk How we store our data NAS Technology Backup JASMIN/CEMS CEDA Storage Data stored as files on disk. Data is migrated
More informationWHITEPAPER. A Technical Perspective on the Talena Data Availability Management Solution
WHITEPAPER A Technical Perspective on the Talena Data Availability Management Solution BIG DATA TECHNOLOGY LANDSCAPE Over the past decade, the emergence of social media, mobile, and cloud technologies
More informationBringing Big Data Modelling into the Hands of Domain Experts
Bringing Big Data Modelling into the Hands of Domain Experts David Willingham Senior Application Engineer MathWorks david.willingham@mathworks.com.au 2015 The MathWorks, Inc. 1 Data is the sword of the
More informationA Cloud WHERE PHYSICAL ARE TOGETHER AT LAST
A Cloud WHERE PHYSICAL AND VIRTUAL STORAGE ARE TOGETHER AT LAST Not all Cloud solutions are the same so how do you know which one is right for your business now and in the future? NTT Communications ICT
More informationOpenAIRE Research Data Management Briefing paper
OpenAIRE Research Data Management Briefing paper Understanding Research Data Management February 2016 H2020-EINFRA-2014-1 Topic: e-infrastructure for Open Access Research & Innovation action Grant Agreement
More informationMICROSOFT U.S. BUSINESS & MARKETING ORGANIZATION
MICROSOFT U.S. BUSINESS & MARKETING ORGANIZATION Marketing team aggregates and syndicates digital content on SharePoint 2010 for greater impact, efficiency, and control The Microsoft U.S. Business Marketing
More informationScaling LS-DYNA on Rescale HPC Cloud Simulation Platform
Scaling LS-DYNA on Rescale HPC Cloud Simulation Platform Joris Poort, President & CEO, Rescale, Inc. Ilea Graedel, Manager, Rescale, Inc. 1 Cloud HPC on the Rise 1.1 Background Engineering and science
More informationServer and Storage Sizing Guide for Windows 7 TECHNICAL NOTES
Server and Storage Sizing Guide for Windows 7 TECHNICAL NOTES Table of Contents About this Document.... 3 Introduction... 4 Baseline Existing Desktop Environment... 4 Estimate VDI Hardware Needed.... 5
More informationHadoop Cluster Applications
Hadoop Overview Data analytics has become a key element of the business decision process over the last decade. Classic reporting on a dataset stored in a database was sufficient until recently, but yesterday
More informationIntegrate Big Data into Business Processes and Enterprise Systems. solution white paper
Integrate Big Data into Business Processes and Enterprise Systems solution white paper THOUGHT LEADERSHIP FROM BMC TO HELP YOU: Understand what Big Data means Effectively implement your company s Big Data
More informationImplementing Oracle BI Applications during an ERP Upgrade
Implementing Oracle BI Applications during an ERP Upgrade Summary Jamal Syed BI Practice Lead Emerging solutions 20 N. Wacker Drive Suite 1870 Chicago, IL 60606 Emerging Solutions, a professional services
More informationIntroduction to AWS Economics
Introduction to AWS Economics Reducing Costs and Complexity May 2015 2015, Amazon Web Services, Inc. or its affiliates. All rights reserved. Notices This document is provided for informational purposes
More informationCloud Lifecycle Management
Cloud Lifecycle Managing Cloud Services from Request to Retirement SOLUTION WHITE PAPER Table of Contents EXECUTIVE SUMMARY............................................... 1 CLOUD LIFECYCLE MANAGEMENT........................................
More informationNASCIO EA Development Tool-Kit Solution Architecture. Version 3.0
NASCIO EA Development Tool-Kit Solution Architecture Version 3.0 October 2004 TABLE OF CONTENTS SOLUTION ARCHITECTURE...1 Introduction...1 Benefits...3 Link to Implementation Planning...4 Definitions...5
More informationCYBERINFRASTRUCTURE FRAMEWORK FOR 21 ST CENTURY SCIENCE, ENGINEERING, AND EDUCATION (CIF21) $100,070,000 -$32,350,000 / -24.43%
CYBERINFRASTRUCTURE FRAMEWORK FOR 21 ST CENTURY SCIENCE, ENGINEERING, AND EDUCATION (CIF21) $100,070,000 -$32,350,000 / -24.43% Overview The Cyberinfrastructure Framework for 21 st Century Science, Engineering,
More informationAPPENDIX 1 SUBSCRIPTION SERVICES
APPENDIX 1 SUBSCRIPTION SERVICES Red Hat sells subscriptions that entitle you to receive Red Hat services and/or Software during the period of the subscription (generally, one or three years). This Appendix
More informationHyperQ Storage Tiering White Paper
HyperQ Storage Tiering White Paper An Easy Way to Deal with Data Growth Parsec Labs, LLC. 7101 Northland Circle North, Suite 105 Brooklyn Park, MN 55428 USA 1-763-219-8811 www.parseclabs.com info@parseclabs.com
More informationBUILDING A SCALABLE BIG DATA INFRASTRUCTURE FOR DYNAMIC WORKFLOWS
BUILDING A SCALABLE BIG DATA INFRASTRUCTURE FOR DYNAMIC WORKFLOWS ESSENTIALS Executive Summary Big Data is placing new demands on IT infrastructures. The challenge is how to meet growing performance demands
More informationClodoaldo Barrera Chief Technical Strategist IBM System Storage. Making a successful transition to Software Defined Storage
Clodoaldo Barrera Chief Technical Strategist IBM System Storage Making a successful transition to Software Defined Storage Open Server Summit Santa Clara Nov 2014 Data at the core of everything Data is
More informationHigh Performance Computing OpenStack Options. September 22, 2015
High Performance Computing OpenStack PRESENTATION TITLE GOES HERE Options September 22, 2015 Today s Presenters Glyn Bowden, SNIA Cloud Storage Initiative Board HP Helion Professional Services Alex McDonald,
More informationInside Track Research Note. In association with. Enterprise Storage Architectures. Is it only about scale up or scale out?
Research Note In association with Enterprise Storage Architectures Is it only about scale up or scale out? August 2015 About this The insights presented in this document are derived from independent research
More informationAugust 2009. Transforming your Information Infrastructure with IBM s Storage Cloud Solution
August 2009 Transforming your Information Infrastructure with IBM s Storage Cloud Solution Page 2 Table of Contents Executive summary... 3 Introduction... 4 A Story or three for inspiration... 6 Oops,
More informationCloudCenter Full Lifecycle Management. An application-defined approach to deploying and managing applications in any datacenter or cloud environment
CloudCenter Full Lifecycle Management An application-defined approach to deploying and managing applications in any datacenter or cloud environment CloudCenter Full Lifecycle Management Page 2 Table of
More informationMitel Professional Services UK Catalogue for Unified Communications and Collaboration
Mitel Professional Services UK Catalogue for Unified Communications and Collaboration JUNE 2015 DOCUMENT RELEASE# 1.0 CATALOGUE SERVICES OVERVIEW... 3 TECHNICAL CONSULTING & DESIGN... 5 NETWORK ASSESSMENT...
More informationData in the Cloud: The Changing Nature of Managing Data Delivery
Research Publication Date: 1 March 2011 ID Number: G00210129 Data in the Cloud: The Changing Nature of Managing Data Delivery Eric Thoo Extendible data integration strategies and capabilities will play
More informationCisco Data Center Services for OpenStack
Data Sheet Cisco Data Center Services for OpenStack Use Cisco Expertise to Accelerate Deployment of Your OpenStack Cloud Operating Environment Why OpenStack? OpenStack is an open source cloud operating
More informationIBM Enterprise Content Management Product Strategy
White Paper July 2007 IBM Information Management software IBM Enterprise Content Management Product Strategy 2 IBM Innovation Enterprise Content Management (ECM) IBM Investment in ECM IBM ECM Vision Contents
More informationAccelerating Innovation with Self- Service HPC
Accelerating Innovation with Self- Service HPC Thomas Goepel Director Product Management Hewlett-Packard BOEING is a trademark of Boeing Management Company Copyright 2014 Boeing. All rights reserved. Copyright
More informationQuantum StorNext. Product Brief: Distributed LAN Client
Quantum StorNext Product Brief: Distributed LAN Client NOTICE This product brief may contain proprietary information protected by copyright. Information in this product brief is subject to change without
More informationDell s SAP HANA Appliance
Dell s SAP HANA Appliance SAP HANA is the next generation of SAP in-memory computing technology. Dell and SAP have partnered to deliver an SAP HANA appliance that provides multipurpose, data source-agnostic,
More informationCloud Cruiser and Azure Public Rate Card API Integration
Cloud Cruiser and Azure Public Rate Card API Integration In this article: Introduction Azure Rate Card API Cloud Cruiser s Interface to Azure Rate Card API Import Data from the Azure Rate Card API Defining
More informationON-PREMISES, CONSUMPTION-BASED PRIVATE CLOUD CREATES OPPORTUNITY FOR ENTERPRISE OUT-TASKING BUYERS
ON-PREMISES, CONSUMPTION-BASED PRIVATE CLOUD CREATES OPPORTUNITY FOR ENTERPRISE OUT-TASKING BUYERS By Stanton Jones, Analyst, Emerging Technology www.isg-one.com INTRODUCTION Given the explosion of data
More informationIBM Storwize V7000 Unified and Storwize V7000 storage systems
IBM Storwize V7000 Unified and Storwize V7000 storage systems Transforming the economics of data storage Highlights Meet changing business needs with virtualized, enterprise-class, flashoptimized modular
More informationInformation Technology: Principles and Strategic Aims
Information Technology: Principles and Strategic Aims As observed in the University of Cambridge IT Strategy, the University is a complex and diverse organization whose IT requirements vary depending on
More informationWith DDN Big Data Storage
DDN Solution Brief Accelerate > ISR With DDN Big Data Storage The Way to Capture and Analyze the Growing Amount of Data Created by New Technologies 2012 DataDirect Networks. All Rights Reserved. The Big
More informationDatabases & Data Infrastructure. Kerstin Lehnert
+ Databases & Data Infrastructure Kerstin Lehnert + Access to Data is Needed 2 to allow verification of research results to allow re-use of data + The road to reuse is perilous (1) 3 Accessibility Discovery,
More informationEUDAT. Towards a pan-european Collaborative Data Infrastructure
EUDAT Towards a pan-european Collaborative Data Infrastructure Damien Lecarpentier CSC-IT Center for Science, Finland EISCAT User Meeting, Uppsala,6 May 2013 2 Exponential growth Data trends Zettabytes
More informationEMC ISILON OneFS OPERATING SYSTEM Powering scale-out storage for the new world of Big Data in the enterprise
EMC ISILON OneFS OPERATING SYSTEM Powering scale-out storage for the new world of Big Data in the enterprise ESSENTIALS Easy-to-use, single volume, single file system architecture Highly scalable with
More informationDDN updates object storage platform as it aims to break out of HPC niche
DDN updates object storage platform as it aims to break out of HPC niche Analyst: Simon Robinson 18 Oct, 2013 DataDirect Networks has refreshed its Web Object Scaler (WOS), the company's platform for efficiently
More informationTHE EMC ISILON STORY. Big Data In The Enterprise. Copyright 2012 EMC Corporation. All rights reserved.
THE EMC ISILON STORY Big Data In The Enterprise 2012 1 Big Data In The Enterprise Isilon Overview Isilon Technology Summary 2 What is Big Data? 3 The Big Data Challenge File Shares 90 and Archives 80 Bioinformatics
More informationTransforming Accenture s core HR systems: Setting the stage for a digital Accenture
Transforming Accenture s core HR systems: Setting the stage for a digital Accenture 2 Client profile Accenture s internal IT organization is charged with driving the company s digital agenda. Building
More informationUNINETT Sigma2 AS: architecture and functionality of the future national data infrastructure
UNINETT Sigma2 AS: architecture and functionality of the future national data infrastructure Authors: A O Jaunsen, G S Dahiya, H A Eide, E Midttun Date: Dec 15, 2015 Summary Uninett Sigma2 provides High
More informationTechnology Insight Series
HP s Information Supply Chain Optimizing Information, Data and Storage for Business Value John Webster August, 2011 Technology Insight Series Evaluator Group Copyright 2011 Evaluator Group, Inc. All rights
More informationBuilding a Scalable Big Data Infrastructure for Dynamic Workflows
Building a Scalable Big Data Infrastructure for Dynamic Workflows INTRODUCTION Organizations of all types and sizes are looking to big data to help them make faster, more intelligent decisions. Many efforts
More informationWrite a technical report Present your results Write a workshop/conference paper (optional) Could be a real system, simulation and/or theoretical
Identify a problem Review approaches to the problem Propose a novel approach to the problem Define, design, prototype an implementation to evaluate your approach Could be a real system, simulation and/or
More informationThis is an RFI and not a RFQ or ITN. Information gathered will lead to possible RFQ/ITN. This is a general RFI for all proposed solutions.
Item Number 1 2 Vendor Question Are you already requesting this information from the manufactures directly? What if one manufacture can do some of what you need, and another can do the other part of what
More informationBusiness Process Validation: What it is, how to do it, and how to automate it
Business Process Validation: What it is, how to do it, and how to automate it Automated business process validation is the best way to ensure that your company s business processes continue to work as
More informationKriterien für ein PetaFlop System
Kriterien für ein PetaFlop System Rainer Keller, HLRS :: :: :: Context: Organizational HLRS is one of the three national supercomputing centers in Germany. The national supercomputing centers are working
More informationXSEDE Overview John Towns
April 15, 2011 XSEDE Overview John Towns XD Solicitation/XD Program extreme Digital Resources for Science and Engineering (NSF 08 571) Extremely Complicated High Performance Computing and Storage Services
More informationThe Data Grid: Towards an Architecture for Distributed Management and Analysis of Large Scientific Datasets
The Data Grid: Towards an Architecture for Distributed Management and Analysis of Large Scientific Datasets!! Large data collections appear in many scientific domains like climate studies.!! Users and
More informationXSEDE Service Provider Software and Services Baseline. September 24, 2015 Version 1.2
XSEDE Service Provider Software and Services Baseline September 24, 2015 Version 1.2 i TABLE OF CONTENTS XSEDE Production Baseline: Service Provider Software and Services... i A. Document History... A-
More informationDesktop Virtualization and Storage Infrastructure Optimization
Desktop Virtualization and Storage Infrastructure Optimization Realizing the Most Value from Virtualization Investment Contents Executive Summary......................................... 1 Introduction.............................................
More informationFlash Memory Arrays Enabling the Virtualized Data Center. July 2010
Flash Memory Arrays Enabling the Virtualized Data Center July 2010 2 Flash Memory Arrays Enabling the Virtualized Data Center This White Paper describes a new product category, the flash Memory Array,
More informationCiCS INFORMATION TECHNOLOGY STRATEGY 2008 2013 1. PURPOSE 2. CONTEXT. 2.1 Information Technology. 2.2 Changing Environment. 2.
CiCS INFORMATION TECHNOLOGY STRATEGY 2008 2013 1. PURPOSE This strategy replaces the previous IT strategy and is an integral part of the overall strategy being developed by Corporate Information and Computing
More informationLocal Loading. The OCUL, Scholars Portal, and Publisher Relationship
Local Loading Scholars)Portal)has)successfully)maintained)relationships)with)publishers)for)over)a)decade)and)continues) to)attract)new)publishers)that)recognize)both)the)competitive)advantage)of)perpetual)access)through)
More informationIntroduction to the Open Data Center Alliance SM
Introduction to the Open Data Center Alliance SM Legal Notice This Open Data Center AllianceSM Usage: Security Monitoring is proprietary to the Open Data Center Alliance, Inc. NOTICE TO USERS WHO ARE NOT
More informationIntegrating Big Data into Business Processes and Enterprise Systems
Integrating Big Data into Business Processes and Enterprise Systems THOUGHT LEADERSHIP FROM BMC TO HELP YOU: Understand what Big Data means Effectively implement your company s Big Data strategy Get business
More informationcompute canada Strategic Plan 2014-2019
compute canada Strategic Plan 2014-2019 Table of Contents About Compute Canada 3 Vision and Mission 5 Vision 5 Mission 5 Looking Towards 2019 6 Principles 7 What Compute Canada Provides to its Stakeholders
More information