Access, Documentation and Service Desk
Anupam Karmakar / Application Support Group / Astro Lab

Time to get answers to these questions:
- Who is allowed to use LRZ hardware?
- My file system is full. How can I get help?
- Where can I find documentation about ...x?
- I need somebody to help me parallelize my software. Can LRZ help me?
- How do I write a PRACE proposal? And what is PRACE anyway?
- How many CPU hours should I apply for, and how do I apply?
- Can I test my program at LRZ?
- How can I use the Intel Phi cluster?

HPC for Science
- I want my science to take wings.
- I need to run simulations massively in parallel.
- I need my huge data visualised.
- I need help to optimise and improve my code.
- I seek help from LRZ, and: how do I get this finished before my thesis ends?
A quick way to check that your parallel environment works is sketched below.
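Before requesting large allocations it helps to confirm that your code runs under MPI at all. Here is a minimal sanity-check sketch in Python, assuming an MPI installation and the mpi4py package are available (the launch command is illustrative; check the LRZ cluster documentation for the actual environment setup):

```python
# hello_mpi.py -- minimal MPI sanity check.
# Launch with e.g.:  mpiexec -n 4 python hello_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD               # communicator spanning all started ranks
rank = comm.Get_rank()              # this process's ID within the communicator
size = comm.Get_size()              # total number of MPI processes
node = MPI.Get_processor_name()     # hostname of the node running this rank

print(f"Hello from rank {rank} of {size} on {node}")
```

If every rank prints its line, the MPI stack and the Python bindings are wired up correctly, and you can move on to testing your actual application.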

Linux cluster access
- Open to universities and universities of applied sciences located in Bavaria.
- No specific project proposal required.
- Your "Master User" can request an account for you.
- Single point of access: request through the LRZ Servicedesk.
www.lrz.de/services/compute/linux-cluster

How many resources do you need? CPU time allocation (national):
- Large Scale Projects (> 35 million core-h/year): Gauss Centre for Supercomputing (GCS), call twice a year
- Big projects (up to 35 million core-h/year): HLRB project proposal
- Medium projects (up to 10 million core-h/year): HLRB project proposal
- Small projects (up to 3 million core-h/year): HLRB project proposal
- Test account (< 50,000 core-h): HLRB project proposal
GCS: www.gauss-centre.eu/large-scale-application
HLRB projects: www.lrz.de/services/compute/supermuc/projectproposal/
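One recurring question is how many core hours to request. The following back-of-the-envelope sketch in Python maps a planned simulation campaign onto the allocation tiers above; all numbers are hypothetical placeholders to be replaced by your own job sizes and campaign plan:

```python
# Rough core-hour budget for a compute-time proposal (illustrative numbers).
cores_per_job = 4096       # cores (MPI ranks) per production run
hours_per_job = 24         # wall-clock hours per production run
production_runs = 20       # planned runs in the science campaign
overhead_factor = 1.3      # ~30% extra for testing, restarts and analysis

core_hours = cores_per_job * hours_per_job * production_runs * overhead_factor
print(f"Requested budget: {core_hours:,.0f} core-h/year")
# ~2.6 million core-h here, i.e. a "Small project" (up to 3 million core-h/year)
```

Rounding the estimate up and justifying each factor in the proposal makes the review easier.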

Gauss Centre Large Scale Projects
- German umbrella organisation providing access to the national HPC systems.
- All are > PetaFlop systems with different hardware.
- Three world-class HPC systems are available for you: SuperMUC at LRZ, JuQueen at JSC, Hornet at HLRS.
- Extensive scientific and technical peer review.
- Biannual large-scale access calls.
More information: http://www.gauss-centre.eu

Partnership for Advanced Computing in Europe (PRACE)
- An international not-for-profit association (aisbl) with 25 member countries.
- Pan-European research infrastructure providing HPC resources.
- Rigorous peer-review process.
- Industry users can also apply if headquartered in the EU.
More info: www.prace-ri.eu

Computing Time Allocation via PRACE: CPU time allocation (EU-wide)
Call types: PRACE Large Scale Calls, DEISA DECI calls, and PRACE Preparatory Access calls.
- PRACE Tier-0 Regular Call (call twice a year): from 5 million up to 35 million core-h/year on SuperMUC; www.prace-ri.eu
- PRACE Project Access (www.prace-ri.eu/prace-project-access/): twice per year according to a fixed schedule:
  call opens in February -> allocation starts in September;
  call opens in September -> allocation starts in March of the next year.
- PRACE Preparatory Access (www.prace-ri.eu/prace-preparatory-access/), for preparing yourself for Project Access: cut-off on the 1st working day of the quarter (March, July, September, December).

But if you need a simple way to get onto SuperMUC: the LRZ project proposal for SuperMUC.
I. Test account
- Get access to the machine while you prepare a proposal.
- Immediately get 50k core-h to test your application.
II. Regular project proposal
- When you need more (up to 35 million core-h), please write a proposal.
- Reviewed by the steering committee within 2 months.
- Recommended: follow the available LRZ template.
- Open-ended, no deadline.
www.lrz.de/services/compute/supermuc/projectproposal

Hands-on with the Intel MIC cluster SuperMIC
Once you have access to SuperMUC, request SuperMIC access through the Service Desk; it should be arranged fairly quickly. For a usage guide, see the SuperMIC documentation.

But remember: nothing comes without effort. Your responsibilities:
- Proper and timely usage of computing resources.
- Get in touch immediately if you have trouble, via the LRZ Service Desk: https://servicedesk.lrz.de
- Provide us a full status report after the project ends.
- Update us on the breakthrough science done with our systems.
- ... and help us to help you better.

Documentation for Users
Most of the user documentation is available under Compute Services.
- For SuperMUC: www.lrz.de/services/compute/supermuc, section "Documentation for Users".
- For the Linux cluster: www.lrz.de/services/compute/linux-cluster, section "Documentation, Education and Training".
If you get lost, seek help from Google!

LRZ Service Desk: motivation
Your single gateway to contact us about any trouble, with a guaranteed response.
Go to: https://servicedesk.lrz.de

LRZ Service Desk: Demo
Express your problem in as much detail as possible.

KONWIHR: Bavarian funding for HPC research
The Bavarian Competence Network for Technical and Scientific High Performance Computing

KONWIHR: Bavarian funding for HPC research
- Established by the Free State of Bavaria in May 2000 to support HPC-related science.
- Special focus on porting, optimization and parallelization of application codes for the HPC resources in Bavaria.
- The expertise of the two computing centres, LRZ and RRZE, is utilized to enhance HPC research in Bavaria.
- Applications twice a year: 31 March and 30 September.
Detailed information: www.konwihr.uni-erlangen.de

Partnership Initiative in Computational Science (PICS)
Individualized services for selected scientific groups in a flagship role:
- Individual support, guidance, and targeted training & education
- Planning dependability through use-case-specific, optimized IT infrastructures
- Access to the IT competence network and to expertise in the Computer Science and Mathematics departments
Partner contribution:
- Embedding IT experts in user groups
- Joint research projects (including funding)
- Scientific partnership and joint publications
LRZ benefits:
- Insight into the (current and future) needs and requirements of the respective scientific domain
- Developing future services for all user groups
Contact: Dr. Anton Frank (frank@lrz.de)

LRZ Application Laboratories
An LRZ initiative to help researchers tackle HPC issues.
- Domain-specific high-level support for HPC.
- Collaborations on code optimisation, performance improvements, I/O de-bottlenecking, memory optimisations, etc.
- Help preparing proposals for EU, national or KONWIHR funding, etc.
Focus science areas:
- Astrophysics Lab (astro@lrz.de)
- Bioscience Lab (bio@lrz.de)
- Geoscience Lab (geo@lrz.de)
- Energy Lab (energy@lrz.de)

LRZ Astro Lab
- Dedicated support team for astrophysics and plasma physics.
- Beyond high-level support: HPC partnerships through third-party projects, KONWIHR projects, etc.
- Get in touch with us right from the beginning of your HPC project.
- Astro Lab high-level support call: the first call ended in January 2015; the next one runs during Q4/2015.
Contact: Anupam Karmakar (karmakar@lrz.de)

In a nutshell
- Access process: GCS, PRACE or LRZ project.
- Eligibility: German and EU researchers.
- Help in confusion or trouble: LRZ Service Desk.
- Documentation: www.lrz.de/services/compute, section "Documentation for Users".
- High-level support: LRZ Application Labs (astro@, bio@, geo@, energy@lrz.de).
- Guidance in writing a computing proposal: Service Desk.
- Interested in collaboration: LRZ Partnership Initiative.
- Any burning queries: Service Desk.