2016 Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures Call for Proposal of Joint Research Projects

The Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures (JHPCN) has been in operation since April 2010, with certification from the Minister of Education, Culture, Sports, Science and Technology, as a Network-Type Joint Usage/Joint Research Center comprising eight supercomputer-equipped centers affiliated with Hokkaido University, Tohoku University, The University of Tokyo, Tokyo Institute of Technology, Nagoya University, Kyoto University, Osaka University, and Kyushu University. The Information Technology Center at The University of Tokyo functions as the core institution of the Center. Since 2013, the JHPCN centers have been responsible for operating the joint research resources named the HPCI-JHPCN System as part of the system provided by the High Performance Computing Infrastructure (HPCI). Applications for research projects that use the HPCI-JHPCN System are now invited via the HPCI online application system and will be selected in line with our reviewing policy (see Section 6, Research Project Reviews).

The Network-Type Research Center aims to contribute to the advancement and permanent development of the academic and research infrastructure of Japan by implementing academic joint-usage/joint-research centers for the following fields: Earth environment, energy, materials, genomic information, web data, academic information, time-series data from sensor networks, video data, program analysis, and other fields in information technology. Using information infrastructures such as very large-scale computers as well as very large-capacity storage and networks, the Center also aims to address challenging problems that have thus far been considered extremely difficult to resolve or clarify. The Network-Type Research Center has enrolled leading researchers in the above-mentioned fields and anticipates the development of innovative research themes through collaboration among these researchers. The joint research projects under this call (for fiscal year 2016) will be implemented from April 2016 to March 2017.

1. Joint Research Fields:

This call for joint research projects will adopt interdisciplinary research projects that use very large-scale numerical computation, very large-scale data processing, very large-capacity network technology, and very large-scale information systems. Approximately 60* research projects will be adopted.
* Counted as the total number of research centers used by the adopted projects.

(1) Very large-scale numerical computation
This includes scientific and technological simulations in scientific/engineering fields such as Earth environment, energy, and materials, as well as the modeling, numerical analysis algorithms, visualization techniques, information infrastructure, etc., that support these simulations.

(2) Very large-scale data processing
This includes the processing of genomic information, web data (including data from Wikipedia as well as news sites and blogs), academic information contents, time-series data from sensor networks, and the high-level multimedia information processing needed for streaming data such as video footage, together with program analysis, access and search, information extraction, statistical and semantic analysis, data mining, etc.

(3) Very large-capacity network technology
This includes providing control and security of network quality for very large-scale data sharing.
Monitoring and management are required for constructing and operating such very large-capacity networks, as well as assessing and maintaining their safety and developing various technologies to support research.

(4) Very large-scale information systems
This combines each of the above-mentioned fields, entailing the architecture of exa-scale computers, the creation of high-performance computing infrastructure software, grid management, virtualization technology, cloud computing, etc.

Please pay special attention to the following points when applying.

1. We will only accept interdisciplinary joint research proposals that involve cooperation among researchers from a wide range of disciplines. For example, we presume that very large-scale numerical computation will take a cooperative, complementary research format between computer science and computational science. We therefore invite joint research projects between researchers solving problems in fields that use computers and researchers working in information science, for example on algorithms, modeling, and parallel processing.

2. As long as the conditions in point 1 are met, we also accept joint research projects that do not use our research resources, such as the HPCI-JHPCN System and the other equipment/resources of our centers.

3. Research projects that use computer resources at multiple locations by taking advantage of the features of the Network-Type Center, or projects carried out jointly by several researchers belonging to different research centers, will be viewed favorably. For example, project structures for consideration may include joint research projects that share computing resources between several researchers involved in widely distributed large-scale information systems; remote visualization and multi-platform implementation of applications; or cooperation between researchers at different research centers in different fields, such as simulation and visualization.

2. Categories of Joint Research Projects:

Under the premise of the interdisciplinary joint research project structure mentioned above, the joint research projects to be invited are as follows:

(1) General Joint Research Projects (approximately 80% of the total number of accepted projects)

(2) International Joint Research Projects (approximately 10% of the total number of accepted projects)
International joint research projects are conducted in conjunction with foreign researchers to address challenging problems that may not be resolved or clarified with only the help of researchers within Japan. For such projects, a certain amount of subsidy will be paid to cover travel expenses incurred for holding meetings with the foreign joint researchers during the fiscal year of commencement. For details, please contact our office once your research project has been accepted.

(3) Industrial Joint Research Projects (approximately 10% of the total number of accepted projects)
Industrial joint research projects are interdisciplinary projects focused on industrial applications.

We invite applications in each of these three categories both for research projects that use the HPCI-JHPCN System and for research projects that do not use the HPCI-JHPCN System.

If you require the assistance and cooperation of researchers and research groups in the computer science field affiliated with the research centers, please complete the corresponding sections of the application form, describing the specific research target and the items for which cooperation is required. We will, whenever possible, arrange for assistance and cooperation through the Network-Type Research Center. In the application, please designate the university (research center) with which you seek collaboration; you may also name several research centers. If it is difficult for you to designate one, we can decide on the corresponding research centers on your behalf after considering the research content, etc. Please be sure to discuss this with our contact person (listed at the end of this document) in advance.

3. Application Requirements:

Applications must be made by a Project Representative. Furthermore, the Project Representative and the Deputy Representative(s), as well as any other joint researchers, must fulfill the following conditions.

1. The Project Representative must be affiliated with an institution in Japan (university, public authority, private enterprise, etc.) and must be in a position to obtain the approval of his/her institution (or its representative head).

2. At least one Deputy Representative must be a researcher in a field different from that of the Project Representative. There may be more than one Deputy Representative; for example, HPCI face-to-face identity vetting can be handled by another Deputy Representative.

3. If a student is to participate in the project as a joint researcher, he/she must be a graduate student. If a non-resident member, as defined by the Foreign Exchange and Foreign Trade Act, is going to use the computers, a researcher affiliated with your desired joint research center must participate as a joint researcher.

International joint research projects must, in addition to conditions 1 to 3 above, fulfill the following conditions (4 and 5):

4. At least one researcher affiliated with a research institution outside Japan must be named as a Deputy Representative. Furthermore, the application must be made using the English application form.

5. A researcher affiliated with your desired joint research center must participate as a joint researcher.

Industrial joint research projects must, in addition to conditions 1 to 3 above, fulfill the following conditions (6 and 7):

6. The Project Representative must be affiliated with a private company; universities and public authorities are excluded.

7. At least one researcher affiliated with your desired joint research center must be named as a Deputy Project Representative.

4. Joint Research Period:

April 1, 2016 to March 31, 2017. Depending on possible issues while preparing computer user accounts, the commencement of computer use may be delayed.

5. Facility Use Fees:

The research resources listed in Attachment 1 can be used free of charge within the approved amount of resources.

6. Research Project Reviews:

Reviews will be conducted by the Joint Research Project Screening Committee, which comprises faculty members affiliated with each research center as well as external members, and by the HPCI Screening Committee, which comprises industry, academic, and government experts. Research project proposals will be reviewed, in both a general and a technical sense, for their scientific and technological validity, their facility/equipment requirements and the feasibility of those requirements, and their potential for development. In addition, the validity of the resources requested at the desired centers of joint research, along with the proposed cooperation/collaboration, will be subject to review. For each category of joint research project, the suitability of the proposed project to the objective of that category will also be considered. Furthermore, for projects continuing from the previous fiscal year and projects determined to have substantial continuity, an assessment of the previous year's interim report and previous usage of computer resources may be taken into account during screening.

7. Notification of Adoption:

We expect to notify applicants of the review results by mid-March 2016.

8. Application Process:

1. Please note that the application procedures differ between research projects that use the HPCI-JHPCN System and research projects that do not use the HPCI-JHPCN System (described in Attachment 1).
2. In particular, for research projects that use the HPCI-JHPCN System, the Project Representative, the Deputy Representative, and all joint researchers who will use the computers must have obtained their HPCI-IDs prior to the application.
3. For international joint research projects, the English application form must be completed.

(1) Application Procedure:
After completing the application form obtained from the JHPCN website and submitting the electronic application online, please print and press your seal onto the Research Project Proposal Application Form, and mail the completed application to the Information Technology Center, The University of Tokyo (address provided at the end of this document). For details on the application process, please consult the JHPCN website and the HPCI Quick Start Guide. A summary of the application process is given below.

I: For research projects that use the HPCI-JHPCN System
1. Download the Research Project Proposal Application Form (MS Word format) from the JHPCN website and complete it. While completing the form, the Project Representative, the Deputy Representative, and all other joint researchers who will use the HPCI-JHPCN System must obtain their HPCI-IDs. If IDs have been obtained in the past, please continue to use them. For the online entries required in step 2, it is necessary to register the HPCI-IDs of the Project Representative, the Deputy Representative, and all other joint researchers who will use the HPCI-JHPCN System. Joint researchers who are not registered at this stage will not be issued a computer user account (HPCI account, local account) and will not be able to use the computers.
2. Access "Research Projects that use the HPCI-JHPCN System" on the project application page of the JHPCN website. You will be transferred to the HPCI Application Assistance System where, after filling out the necessary items, you may upload a PDF file of the Research Project Proposal Application Form (without seal) created in step 1. For research projects that use the HPCI-JHPCN System, the project application system on the JHPCN website will not be used.
3. Print and press your seal onto the Research Project Proposal Application Form created in step 1, and mail it to the address provided at the end of this application guideline.
If your project is adopted, please follow the outline of post-adoption procedures stipulated by the HPCI. In particular, face-to-face identity vetting must be carried out by an individual who is able to assume responsibility for the entire project (the Project Representative or the Deputy Representative). The face-to-face representative must bring copies of photographic identification of all joint researchers who will use the computers (contact representatives such as secretaries are ineligible for face-to-face identity vetting).

II: For research projects that do not use the HPCI-JHPCN System
1. Download the Research Project Proposal Application Form (MS Word format) from the JHPCN website and complete it.
2. Access "Research Projects that do not use the HPCI-JHPCN System" on the project application page of the JHPCN website. After filling out the necessary items, you may upload a PDF file of the Research Project Proposal Application Form (without seal) created in step 1. A Notification of Receipt will be sent by e-mail to the address that you registered on the research project application page. Applications for research projects that do not use the HPCI-JHPCN System do not require the HPCI Application Assistance System; therefore, it is not necessary to obtain an HPCI-ID.
3. Print and press your seal onto the Research Project Proposal Application Form created in step 1, and mail it to the address provided at the end of this application guideline.

(2) Application Deadline:
Online registration (submission of the PDF application form): January 8, 2016 (Fri.), 17:00 [Mandatory]
Submission of the printed application form: January 15, 2016 (Fri.), 17:00 [Mandatory]
The above deadlines must be observed for submission of the Research Project Proposal Application Forms. However, if submission of the printed documents created in steps I-3 and II-3 of (1) above will be delayed, please contact the JHPCN office (contact details at the end of this document) in advance.

(3) Points to remember when creating the Research Project Proposal Application Form:
1. Applicants must comply with the rules and regulations of the research center whose computer resources will be used in the joint research project.
2. Research resources must be used only for the purpose of the adopted research project.
3. The proposal must be for a project with peaceful purposes.
4. The proposal should give due consideration to human rights and the protection of interests.
5. The proposal must comply with the Ministry of Education's approach to bioethics and safety.
6. The proposal must comply with the Ministry of Economy, Trade and Industry's rules concerning security trade management.
7. Applicants must complete a course in research ethics education. Programs on research ethics, including lectures, e-learning, and training workshops, are conducted at each of the affiliated organizations (including CITI Japan e-learning). If your affiliated organization does not provide any research ethics education, please contact our office. Researchers who are eligible to apply for the Japanese Grant-in-Aid for Scientific Research offered by the Ministry of Education, Culture, Sports, Science and Technology and the Japan Society for the Promotion of Science will be considered to have fulfilled this requirement if they note their Kakenhi Researcher Code in the application form.

9. Other Important Notices:

(1) Submission of a written oath:
Research groups whose projects have been adopted will be expected to submit a written oath pledging adherence to the contents of 8. Application Process, (3) Points to remember when creating the Research Project Proposal Application Form, above. The specific submission process will be communicated post-adoption, but a sample is provided on the website should applicants wish to familiarize themselves with the requirements of the oath.

(2) Regulations for use of facilities:
While using the facilities, you are expected to follow the regulations for use pertaining to the research resources as stipulated by the research center where you will work.

(3) Types of reports:
The submission of research introduction posters, interim reports, and final reports is expected, and each report must be submitted by the designated deadline. Research introduction posters and the final report, in particular, are expected to be suitable for public disclosure (see the website for details). For international joint research projects, these reports should in principle be submitted in English. Furthermore, you will be asked to report the progress of the research (through oral or poster presentation) at the JHPCN symposium, where the results of research projects from the previous year will be presented, which is expected to be held during the summer of 2016.

(4) Disclaimer:
Each of the research sites assumes no responsibility for any inconveniences that affect applicants as a consequence of this call for joint research projects.

(5) Handling of intellectual property rights:
In principle, any intellectual property that arises as a result of a research project will belong to the research groups involved. However, it is presumed that any certification of invention conducted by the practitioners of the joint research will be handled in accordance with the host university's policy concerning intellectual property rights. Please contact each of the research sites for details and for the handling of other exceptional matters.

(6) Research ethics education:
If a participant in an accepted project (excluding students) cannot be confirmed to have completed a program of research ethics education or its equivalent (for example, eligibility for the Japanese Grant-in-Aid for Scientific Research offered by the Ministry of Education, Culture, Sports, Science and Technology or the Japan Society for the Promotion of Science), this individual's use of the computers may be suspended until completion of such a program can be confirmed.

(7) Other:
1. Personal information provided in the proposal shall be used only for screening research projects and providing system access.
2. After the adoption of a research project, the project name and the name/affiliation of the Project Representative will be disclosed.
3. After the adoption of a research project, changes cannot be made to the research site or the computer that will be used.
4. If you wish to discuss your application, please contact us at the address provided below (please note that we are unable to respond to telephone-based inquiries).

Contact information (for queries about applications, etc.):
Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures Office
E-mail: jhpcn@itc.u-tokyo.ac.jp
For details pertaining to eligibility and the management of intellectual property at each research site, you may contact each center directly. You can access the websites of the research sites by selecting "Research Centers" on our website.

Mailing address for the Research Project Proposal Application Form:
Yayoi, Bunkyo-ku, Tokyo
Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures Office,
Information Technology Center, The University of Tokyo
(The abbreviations "Information Technology Center Office" or "JHPCN Office" may be used.)

Attachment 1: List of research resources available at each research center for the joint research projects
Items (1) and (2) constitute the HPCI-JHPCN System.

(1) Computational resources available for joint research projects

Information Initiative Center, Hokkaido University
1. HITACHI SR16000/M1 (max. 4 node-years per project)
(168 nodes, 5,376 physical cores, total main memory capacity 21 TB; shared with general users; MPI parallel processing of up to 128 nodes per job is possible)
Formats of use:
1) Calculation time 27,000,000 seconds; file space 2.5 TB
2) Calculation time 2,700,000 seconds; file space 0.5 TB
(Assumed total amount of resources when combining 1) and 2): 500,000,000 s and 53 TB)
2. Cloud System Blade Symphony BS2000 (max. 4 node-years per project)
(2 nodes, 80 cores) Equal to 4 units of L Server (S Server and M Server may also be used), or 1 unit of XL Server
3. Data Science Cloud System HITACHI HA8000 (max. 1 system per project; 7 node-years for 1), 2 node-years for 2))
1) Virtual server (10 cores, 800 GB hard disk, with 1 TB storage for each of a-c below): 7 units
2) Physical server (20 cores, 80 GB memory, 2 TB hard disk): 2 units
Cloud-integrated storage system:
a. Additional storage for virtual servers: max. 18 TB
b. Gfarm storage for local servers: max. 68 TB
c. Amazon S3-compatible object storage: max. 18 TB
d. WebDAV storage: max. 22 TB
Estimated number of projects adopted: items 1 and 2 combined: 10; item 3: 7 for type 1) and 2 for type 2) (max. 9)

Cyberscience Center, Tohoku University
1. Supercomputer system SX-ACE (2,560 nodes) (max. 48 node-years per project)
Theoretical peak performance 707 TFLOPS, main memory capacity 160 TB, max. 1,024 nodes; shared use with general users
2. Computing server system (68 nodes) (max. 12 node-years per project)
Theoretical peak performance 31.3 TFLOPS, main memory capacity 8.5 TB, max. 24 nodes; shared use with general users
3. Storage: 4 PB (max. 20 TB per project)
Estimated number of projects adopted: 10

Information Technology Center, The University of Tokyo
1. Fujitsu PRIMEHPC FX10 (max. resource allocation per project: 60 node-years)
Each adopted project will be provided with tokens equivalent to 12 to 60 nodes for one year. The number of priority nodes can vary depending on the number of projects adopted, with a maximum of 480 nodes.

Global Scientific Information and Computing Center, Tokyo Institute of Technology
1. Cloudy Green Supercomputer TSUBAME 2.5 and its successor model
Provides 180 units of TSUBAME 2.5 (equivalent to 540,000 node-hours on Thin nodes). A maximum of 420 nodes (CPU 64.3 TF + GPU 1.56 PF) can be used simultaneously. Computational resources are allocated by number of units: 1 unit = 3,000 node-hours (0.34 node-years). Please state the desired total number of units in the application form and clearly indicate the number of units desired for each quarter (shared with general users). Maximum resource allocation per project: 22 units (7.5 node-years). (Resources are expected to be provided through the successor model of TSUBAME 2.5 starting from December 2016.)

Information Technology Center, Nagoya University
1. Fujitsu PRIMEHPC FX100: approximately 3.2 PF (2,880 nodes)
2. Fujitsu PRIMERGY CX400/2550: approximately 447 TF (384 nodes)
3. Fujitsu PRIMERGY CX400/2570: approximately 279 TF (184 nodes + 184 Xeon Phi)
4. SGI UV2000: approximately 24.5 TF, 20 TB memory, 1,280 cores
Maximum resource allocation per project: 15 units (86 node-years). All resources are provided for shared use with general users. 1 unit = 50,000 node-hours (equivalent to 180,000 JPY of additional cost). When using large-scale storage, a conversion of 10 TB = 5,000 node-hours applies.

Academic Center for Computing and Media Studies, Kyoto University
Cray XE6 (Camphor):
1. 32 nodes, 1,024 cores (max. 32 nodes for half a year per project)
2. 128 nodes, 4,096 cores (max. 256 node-weeks per project)
(1: first period, April to end of August, shared between all groups; 2: allocated in 2-week blocks per group)
Cray XC30 (Magnolia: 2 Xeon/node):
3. 32 nodes, 896 cores (max. 16 nodes for half a year per project)
4. 128 nodes, 3,584 cores (max. 256 node-weeks per project)
(3: first period, April to end of August, shared between all groups; 4: allocated in 2-week blocks per group)
Cray XC30 (Camellia: Xeon + Xeon Phi per node):
5. 128 nodes (max. 40 node-years per project)
(5: all year, paused during September, shared between all groups)
Next-period system (provisional name, in procurement; architecture: x86_64):
6. 128 nodes, more than 7,680 cores, more than 384 TFLOPS (max. 40 nodes for half a year per project)
(6: second term, October to end of March, shared between all groups)
Estimated number of projects adopted: items 1 and 3: 5; items 2, 4, and 6: 5; item 5: 4

Cybermedia Center, Osaka University
1. Supercomputer SX-ACE (max. 22 node-years per project)
Computational nodes: max. 512 nodes (141 TFLOPS), available all year for shared use, up to 280,000 node-hours. Storage: 20 TB per project.
2. PC cluster for large-scale visualization (max. 6 node-years per project)
Computational nodes: max. 62 nodes (more than 24.08 CPU TFLOPS, plus GPUs), available all year for shared or dedicated use, up to 53,000 node-hours.

Furthermore, dedicated use of the PC cluster for large-scale visualization is required when working with the high-resolution display system. Storage: 20 TB per project.
Estimated number of projects adopted: 5

Research Institute for Information Technology, Kyushu University
Maximum resource allocation per project: 128 node-years (shared with general users)
(965 nodes, 1,930 CPUs, plus Xeon Phi)
Maximum number of simultaneously usable nodes per project: 128. Maximum number of simultaneously usable nodes per job: 128; up to 16 of the Xeon Phi-equipped nodes can be used.
Estimated number of projects adopted: 8

Advanced Software Development Environment (RENKEI-VPE): Joint research can be implemented individually through the four research sites that offer HPCI advanced software development environments (it need not involve multiple research sites); the account issuing/management work for the computational systems will be conducted by the Global Scientific Information and Computing Center, Tokyo Institute of Technology.

Information Initiative Center, Hokkaido University: physical nodes: 1 node (40 cores, 128 GB memory, 2 TB hard disk); 40 VMs per physical node (shared use with HPCI); standard VM specification: 1 core, 3 GB memory
Information Technology Center, The University of Tokyo: physical nodes: 1 node (12 cores, 48 GB memory, 20 TB hard disk) (shared); 12 VMs per physical node; standard VM specification: 1 core, 4 GB memory (provided until end of October 2016)
Global Scientific Information and Computing Center, Tokyo Institute of Technology: physical nodes: 2 nodes (more than 6 VMs per physical node); standard VM specification: 1 CPU, 3 GB memory (unit of use: 3 VMs/year)
Research Institute for Information Technology, Kyushu University: physical nodes: 2 nodes (more than 24 VMs in operation); standard VM specification: 1 CPU, 3 GB memory, 100 GB disk (unit of use: 1 VM/year)

(2) Software available for joint research:

Information Initiative Center, Hokkaido University
HITACHI SR16000/M1
[Language compilers] Optimized FORTRAN90 (Hitachi), XL Fortran (IBM), XL C/C++ (IBM)
[Libraries] MPI-2 (without the dynamic process creation function), MATRIX/MPP, MATRIX/MPP/SSS, MSL2, NAG SMP Library, ESSL, Parallel ESSL, BLAS, LAPACK, ScaLAPACK, ARPACK, FFTW 2.1.5/3.2.2
[Application software] Gaussian09
Cloud System Blade Symphony BS2000
Application server: [Language compiler] Intel Composer XE 2011; [Application software] LS-DYNA, Gaussian09, AMBER, COMSOL, AVS/Express, OpenFOAM
Project server: [OS] CentOS 5.5; [Language compilers] Intel Composer XE 2011 (XL Server only), gcc; [Libraries] Hadoop (excluding XL Server), OpenMPI, OpenMP 3.0; [Application software] Gaussian09 (XL Server only)
Advanced Software Development Environment hosting server (RENKEI-VPE): [OS] CentOS 6.3; [Application software] Gfarm, gfarm2fs, KVM; users may install their own compilers/libraries/applications

Cyberscience Center, Tohoku University
[Language compilers] FORTRAN90/SX, C++/SX, Intel Cluster Studio XE (Fortran, C, C++)
[Libraries] MPI/SX, ASL, ASLSTAT, MathKeisan, MKL, NumericFactory
[Application software] Gaussian09

Information Technology Center, The University of Tokyo
Fujitsu PRIMEHPC FX10
[Language compilers] Fortran90, C, C++
[Libraries] MPI, BLAS, LAPACK/ScaLAPACK, FFTW, PETSc, METIS/ParMETIS, SuperLU/SuperLU_DIST
[Application software] OpenFOAM, ABINIT-MP, PHASE, FrontFlow/Blue, FrontFlow/Red, REVOCAP
Advanced Software Development Environment hosting server (RENKEI-VPE): [OS] CentOS 6.x, CentOS 5.x (scheduled), Scientific Linux 5.x, 6.x (scheduled); users may install their own compilers/libraries/applications

Global Scientific Information and Computing Center, Tokyo Institute of Technology
Cloudy Green Supercomputer TSUBAME 2.5
[Language compilers] Intel C/C++/Fortran, PGI C/C++/Fortran (with accelerator support), GNU C, GNU Fortran, CUDA for C, CUDA for Fortran, HMPP
[Libraries] OpenMP, MPI (OpenMPI, MPICH2, MVAPICH2), BLAS, LAPACK, CULA
[Application software] Gaussian 09, AMBER (limited to academic use), CST MW-STUDIO (limited to industrial use)
Advanced Software Development Environment hosting server (RENKEI-VPE): [OS] CentOS 5, 6, 7 (scheduled); users may install their own compilers/libraries/applications

Information Technology Center, Nagoya University
Fujitsu PRIMEHPC FX100
[Language compilers] Fortran, C, C++, XPFortran
[Libraries] MPI, BLAS, LAPACK, ScaLAPACK, SSL II, NUMPAC, FFTW, HDF5
[Application software] Gaussian, LS-DYNA, HPC portal
Fujitsu PRIMERGY CX400
[Language compilers] Fujitsu Fortran/C/C++/XPFortran, Intel Fortran/C/C++
[Libraries] MPI (Fujitsu, Intel), BLAS, LAPACK, ScaLAPACK, SSL II (Fujitsu), MKL (Intel), NUMPAC, FFTW, HDF5
[Application software] ABAQUS, ADF, AMBER, GAMESS, Gaussian, Gromacs, HyperWorks, LAMMPS, LS-DYNA, NAMD, OpenFOAM, Poynting, STAR-CCM+, AVS/Express Dev/PCE, EnSight Gold, IDL, ENVI, ParaView, HPC portal
SGI UV2000
[Language compilers] Intel Fortran/C/C++
[Libraries] MPT (SGI), MKL (Intel), FFTW, HDF5, NetCDF
[Application software] AVS/Express Dev, EnSight DR, IDL, ENVI (SARscape), ParaView, POV-Ray, NICE DCV, JINDAIJI, 3D AVS Player

Academic Center for Computing and Media Studies, Kyoto University
[Language compilers] Fortran90/C/C++ (Cray, Intel, PGI)
[Libraries] MPI, BLAS, LAPACK, ScaLAPACK, IMSL, NAG (XE6)
[Application software] Gaussian09 (XE6)

Cybermedia Center, Osaka University
[Language compilers] FORTRAN90/SX, C++/SX, Intel Fortran Compiler, Intel C Compiler
[Libraries] ASL/SX, MPI/SX, MathKeisan
[Application software] NASTRAN, Marc, AVS, IDL, etc.

Research Institute for Information Technology, Kyushu University
HITACHI HA8000-tc/HT210
[Language compilers] Intel Cluster Studio XE (Fortran, C, C++), Hitachi Optimized Fortran
[Libraries] BLAS, LAPACK, ScaLAPACK, MKL, FFTW, HDF5, NetCDF
[Application software] Gaussian09, AMBER, EnSight, WRF, GAMESS, GROMACS, etc.
Advanced Software Development Environment hosting server (RENKEI-VPE): [OS] CentOS 6.2 (scheduled); [Application software] Gfarm, gfarm2fs, KVM; users may install their own compilers/libraries/applications

(3) Other equipment/resources available for joint research:

Information Initiative Center, Hokkaido University
- User terminals
- 3D visualization equipment (AVS/Express Viz, EnSight, FieldView)

Cyberscience Center, Tohoku University
- Large-format color printer
- 3D visualization equipment (12-screen flat stereo visualization system: (Full HD) 50-inch projection modules x 12 screens; visualization servers x 4; 3D visualization software (AVS/Express MPE); liquid crystal shutter glasses)

Global Scientific Information and Computing Center, Tokyo Institute of Technology
- Remote GUI environment: VDI (Virtual Desktop Infrastructure) system

Information Technology Center, Nagoya University
- Visualization systems: high-definition 8K display system (185-inch digital displays x 16 screens), full-HD circularly polarized stereoscopic system (150-inch screen, projector, circularly polarized glasses, etc.), remote visualization system, head-mounted display system (mixed reality, VICON, etc.), dome-type display system, and on-site use system (for data transfer)

Cybermedia Center, Osaka University
- High-definition display system: 24-screen flat stereo visualization system (Suita Main Building, Cybermedia Center, Osaka University); (Full HD) 50-inch projection modules x 24 screens; image-processing PCs: 7 units (AVS/Express MPE VR, IDL, ParaView). (Note: prioritized for users conducting visualization with the PC cluster for large-scale visualization.)
- High-definition display system: 15-screen cylindrical stereo visualization system (Umekita Center, Osaka University); 1366x768 (WXGA) 46-inch LCDs x 15 screens; image-processing PCs: 5 units (AVS/Express MPE VR, IDL, ParaView). (Note: prioritized for users conducting visualization with the PC cluster for large-scale visualization.)
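
Note on resource accounting: the centers above quote allocations in a mixture of node-hours, node-years, and center-specific "units". The short Python sketch below makes the unit arithmetic explicit. The conversion factors (1 TSUBAME 2.5 unit = 3,000 node-hours; 1 Nagoya unit = 50,000 node-hours) are taken from the tables above, while the assumption of 8,760 hours per node-year (365 days x 24 hours) and all function and variable names are illustrative additions of this note, not definitions given in the call.

    # Illustrative sketch of the unit conversions quoted in Attachment 1.
    # Assumption (not stated in the call): 1 node-year = 365 * 24 = 8,760 node-hours.

    HOURS_PER_NODE_YEAR = 365 * 24  # 8,760 node-hours

    # Conversion factors quoted in the tables above.
    NODE_HOURS_PER_UNIT = {
        "TSUBAME 2.5": 3000,  # Tokyo Institute of Technology
        "Nagoya": 50000,      # Information Technology Center, Nagoya University
    }

    def units_to_node_years(system, units):
        """Convert a center's allocation units into approximate node-years."""
        return units * NODE_HOURS_PER_UNIT[system] / HOURS_PER_NODE_YEAR

    # Maximum per-project allocations from the tables:
    print(round(units_to_node_years("TSUBAME 2.5", 22), 1))  # 7.5, matching "22 units (7.5 node-years)"
    print(round(units_to_node_years("Nagoya", 15)))          # 86, matching "15 units (86 node-years)"

Applicants should, of course, rely on the figures quoted by each center; the sketch merely reproduces the stated equivalences.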


Cloud Computing through Virtualization and HPC technologies Cloud Computing through Virtualization and HPC technologies William Lu, Ph.D. 1 Agenda Cloud Computing & HPC A Case of HPC Implementation Application Performance in VM Summary 2 Cloud Computing & HPC HPC

More information

How To Compare Amazon Ec2 To A Supercomputer For Scientific Applications

How To Compare Amazon Ec2 To A Supercomputer For Scientific Applications Amazon Cloud Performance Compared David Adams Amazon EC2 performance comparison How does EC2 compare to traditional supercomputer for scientific applications? "Performance Analysis of High Performance

More information

2016 Application Procedure for Doctoral Degree (For Doctoral Course students)

2016 Application Procedure for Doctoral Degree (For Doctoral Course students) Last update date: March, 2016 2016 Application Procedure for Doctoral Degree (For Doctoral Course students) The process for applying for Doctoral degree is as follows. Please read this notice carefully

More information

Service Partition Specialized Linux nodes. Compute PE Login PE Network PE System PE I/O PE

Service Partition Specialized Linux nodes. Compute PE Login PE Network PE System PE I/O PE 2 Service Partition Specialized Linux nodes Compute PE Login PE Network PE System PE I/O PE Microkernel on Compute PEs, full featured Linux on Service PEs. Service PEs specialize by function Software Architecture

More information

bwgrid Treff MA/HD Sabine Richling, Heinz Kredel Universitätsrechenzentrum Heidelberg Rechenzentrum Universität Mannheim 29.

bwgrid Treff MA/HD Sabine Richling, Heinz Kredel Universitätsrechenzentrum Heidelberg Rechenzentrum Universität Mannheim 29. bwgrid Treff MA/HD Sabine Richling, Heinz Kredel Universitätsrechenzentrum Heidelberg Rechenzentrum Universität Mannheim 29. September 2010 Richling/Kredel (URZ/RUM) bwgrid Treff WS 2010/2011 1 / 25 Course

More information

3rd International Symposium on Big Data and Cloud Computing Challenges (ISBCC-2016) March 10-11, 2016 VIT University, Chennai, India

3rd International Symposium on Big Data and Cloud Computing Challenges (ISBCC-2016) March 10-11, 2016 VIT University, Chennai, India 3rd International Symposium on Big Data and Cloud Computing Challenges (ISBCC-2016) March 10-11, 2016 VIT University, Chennai, India Call for Papers Cloud computing has emerged as a de facto computing

More information

bwgrid Treff MA/HD Sabine Richling, Heinz Kredel Universitätsrechenzentrum Heidelberg Rechenzentrum Universität Mannheim 24.

bwgrid Treff MA/HD Sabine Richling, Heinz Kredel Universitätsrechenzentrum Heidelberg Rechenzentrum Universität Mannheim 24. bwgrid Treff MA/HD Sabine Richling, Heinz Kredel Universitätsrechenzentrum Heidelberg Rechenzentrum Universität Mannheim 24. November 2010 Richling/Kredel (URZ/RUM) bwgrid Treff WS 2010/2011 1 / 17 Course

More information

Big Data and Cloud Computing for GHRSST

Big Data and Cloud Computing for GHRSST Big Data and Cloud Computing for GHRSST Jean-Francois Piollé ([email protected]) Frédéric Paul, Olivier Archer CERSAT / Institut Français de Recherche pour l Exploitation de la Mer Facing data deluge

More information

4. Application Documents. The following documents must be submitted by all applicants:

4. Application Documents. The following documents must be submitted by all applicants: 2015 Applicant Guidelines for Doctoral Programs (PhD) of Graduate School of Information Science, Nagoya University (October admission) Important Notice: The Japanese version of the Applicant Guidelines

More information

OpenMP Programming on ScaleMP

OpenMP Programming on ScaleMP OpenMP Programming on ScaleMP Dirk Schmidl [email protected] Rechen- und Kommunikationszentrum (RZ) MPI vs. OpenMP MPI distributed address space explicit message passing typically code redesign

More information

Estonian Scientific Computing Infrastructure (ETAIS)

Estonian Scientific Computing Infrastructure (ETAIS) Estonian Scientific Computing Infrastructure (ETAIS) Week #7 Hardi Teder [email protected] University of Tartu March 27th 2013 Overview Estonian Scientific Computing Infrastructure Estonian Research infrastructures

More information

Data-analysis scheme and infrastructure at the X-ray free electron laser facility, SACLA

Data-analysis scheme and infrastructure at the X-ray free electron laser facility, SACLA Data-analysis scheme and infrastructure at the X-ray free electron laser facility, SACLA T. Sugimoto, T. Abe, Y. Joti, T. Kameshima, K. Okada, M. Yamaga, R. Tanaka (Japan Synchrotron Radiation Research

More information

Graduate School of Medicine, Kyoto University Guide for Applying to Doctoral Program in Medical Sciences for 2015

Graduate School of Medicine, Kyoto University Guide for Applying to Doctoral Program in Medical Sciences for 2015 Graduate School of Medicine, Kyoto University Guide for Applying to Doctoral Program in Medical Sciences for 2015 Translation Disclaimer Kyoto University strives to achieve the highest possible accuracy

More information