Round 2 Final Report
Wolfgang Gentzsch and Burak Yenier, May 31, 2013



Welcome! We are proud to present the final report of this second Round of the Experiment, documenting the results of three months of hard work by the 33 teams and their members: their findings, challenges, lessons learned, and recommendations. We were amazed by the engagement with which all participants moved forward, despite the fact that this was not their day job. Their inquiring minds and the chance to collaborate with some of the brightest people and companies in the world, tackling today's challenges in accessing remote resources in HPC centers and HPC clouds, were their strongest motivators. We want to thank all participants for their continuous commitment and for their voluntary contribution to their individual teams and thus to the whole Experiment.

Round 2 of the Experiment concluded at the end of March 2013, with more than 360 participating organizations and individuals from 30 countries, working together in 33 teams. In the meantime, at the time of writing this report, 470 participants are registered, and we are now forming Team 80. The Experiment that we originally kicked off on July 20, 2012 brought together four categories of participants: the industry end-users, the computing and storage resource providers, the software providers, and the experts. We will continue to refer to these categories throughout the report as we discuss roles and responsibilities, motivations and challenges, as well as costs and benefits. As participants and organizers of this experiment, we collectively selected the end-user projects to be worked on, assigned providers and experts to each project, and found ways to overcome the hurdles we ran into. Each team's goal was not only to complete the selected end-user project, but also to chart the way around the hurdles it identified.

The aim of this Experiment was to explore the end-to-end process of accessing remote computing resources and to study and overcome the potential roadblocks. The aim of the experiment is not to perform long production runs; therefore, we restricted the free computing and software license usage time of a team to a maximum of 1,000 CPU-core hours.

At the end of Round 2 of the Experiment, we analyzed each team's contribution; we herewith document and share our findings with all of our participants. We also plan to contribute a Compendium of 25 selected team reports (case studies), which will be published by Tabor Communications in HPCwire and other media, generously sponsored by Intel's HPC division. As part of this report, you will find a brief description of how the experiment is organized, the roles and responsibilities of the participants, some project statistics, a collection of lessons learned and recommendations from our teams, and also 13 detailed team case studies. Last but not least, we want to thank John Kirkley from Kirkley Communications for his support with editing these case studies.

Enjoy reading! We'd love to hear your feedback. Wolfgang.Gentzsch@hpcexperiment.com and Burak.Yenier@hpcexperiment.com

Contents

1. Executive Summary
2. Building the Teams
3. Roadmap: How to Complete an End-User Project
4. The Teams
5. The UberCloud Exhibit and Conferences in 2013
6. Invitation to Join the UberCloud HPC Experiment

Appendix 1 - Final Team Reports
TEAM 26 - Development of Stents for a Narrowed Artery
TEAM 30 - Heat Transfer Use Case
TEAM 34 - Analysis of Vertical and Horizontal Wind Turbines
TEAM 36 - Advanced Combustion Modeling for Diesel Engines
TEAM 40 - Simulation of Spatial Hearing
TEAM 44 - CFD Simulation of Drifting Snow
TEAM 46 - CAE Simulation of Water Flow Around a Ship Hull
TEAM 47 - Heavy Duty Abaqus Structural Analysis using HPC in the Cloud
TEAM 52 - High-Resolution Simulations of Blow-off in Combustion Systems
TEAM 53 - Understanding Fluid Flow in Microchannels
TEAM 54 - Analysis of a Pool in a Desalinization Plant
TEAM 56 - Simulating Radial and Axial Fan Performance
TEAM 58 - Simulating Wind Tunnel Flow Around Bicycle and Rider

Appendix 2 - Questions & Answers

1. Executive Summary

After a fast-paced four months, Round 2 of the UberCloud Experiment (also known as the HPC Experiment) concluded in March 2013, with more than 360 participating organizations and individuals from 30 countries, working together in 33 teams. This report presents their findings, challenges, lessons learned, recommendations, and some of their use cases.

Why are we performing this experiment? The aim of the UberCloud Experiment is to explore the end-to-end process of accessing remote computing resources in HPC centers and in HPC clouds, and to study and overcome the potential roadblocks. The Experiment originally kicked off on July 20, 2012 and brought together four categories of participants: the industry end-users, the computing and storage resource providers, the software providers, and the experts. We set up end-user projects, assigned providers and experts, and tried to find ways to overcome the hurdles we ran into. Each team's goal was to complete its project and to chart the way around the hurdles it identified.

End users can achieve many benefits by gaining access to additional compute resources beyond their current internal resources (e.g. workstations). Arguably the most important two are:
- the benefit of agility, gained by speeding up product design cycles through shorter simulation run times;
- the benefit of superior quality, achieved by simulating more sophisticated geometries or physics, or by running many more iterations to look for the best product design.

Tangible benefits like these make HPC, and more specifically HPC-as-a-Service, quite attractive. But how far are we from an ideal remote use of HPC, HPC-as-a-Service (HPCaaS), or HPC in the Cloud? At this point we don't know; no one quite does. However, in the course of this experiment, following each team and monitoring its challenges and progress, we gained excellent insight into these roadblocks and into how our 25 teams tackled them.

Building the teams

The main approach for this experiment is to look at the end-user's project and select the appropriate resources, software, and expertise that match its requirements. During the four months of the experiment, we were able to build 25 teams, each with a project proposed by an industry end user. This final report, available to all of our participants, contains each team's own report, offering valuable insight in their own words.

As we gather more information through building and following the progress of the teams, we are also creating a positive feedback loop, where each team teaches us how to build a stronger team for the next project. We look forward to future rounds of the experiment, where this accumulating knowledge will yield ever more successful projects.

Roadmap to completing an end-user project

A major improvement of Round 2 was the introduction of the Basecamp collaboration platform for each team and the fine-grained partitioning of the end-to-end process of accessing and using remote resources into 22 individual steps. We recognized that every end-user project requires a slightly different approach, a variety of software and compute resources, a certain expertise to lead the end-to-end process, and a tailored schedule. However, to keep the entire experiment consistent, we asked each team to follow a common roadmap for each of their end-user projects, which has been published in each team's Basecamp collaboration platform. The expert assigned to each team is the guide in following this roadmap. The roadmap calls for communication with the organizers at certain points, although generally the teams are autonomous and make their own decisions. While in Round 1 we had defined six steps for the end-to-end process, we came up with 22 individual steps in Round 2, based on the feedback of many of the Round 1 teams. The major sets of tasks are:

Step 1. Define the end-user project. The expert and the end-user jointly define the project. Based on this information, as organizers we assign the appropriate resources to the project and ensure the availability of the assigned resources.

Step 2. Contact the assigned resources and set up the project environment. The expert contacts the resource and software providers and performs activities such as assisting in software and license installations, creation of user accounts, and configuration of the environment for the project.

Step 3. Initiate the end-user project execution. The expert assists the end-user with uploading the necessary data, code, and configuration files to the remote resource(s). The expert then works with the resource provider to queue the project up on the HPC system.

Step 4. Monitor the project. The expert remains engaged with the resource providers and at any time has up-to-date information about the status of the project.

Step 5. Review results with the end-user. The expert assists the end-user in downloading the results from the resource provider's environment and discusses the results with the end-user.

Step 6. Document findings. During the entire lifecycle of the project, hurdles, friction, and failure points occur, and the expert documents these findings.

These steps have been subdivided further into 22 smaller steps, which we explain in Chapter 3.

Roadblocks, lessons learned, and recommendations

Our team members reported the following roadblocks during the course of their team projects. The teams were also asked to provide information on how they resolved them (or did not). The main roadblocks, which are presented and discussed in the individual team reports in the Appendix, are: gigabytes of data files and slow data transfer; information security and privacy; unpredictable costs; lack of easy, intuitive self-service registration and administration; incompatible software licensing models; high expectations and disappointing results; reliability and availability of resource providers; and the need for a professional HPC cloud provider. For more details, please see the sections on Challenges, Lessons Learned, and Recommendations in the individual team reports in the Appendix.

Just like all other participants, we as the organizers treated the experiment as a learning opportunity. We learned from the shortcomings of Round 1 and improved the experiment for Round 2. To be specific, we discussed and provided solutions for the following shortcomings:
- all participants are professionals with busy schedules and the experiment is not their primary job, so they could only dedicate a few hours per week to the experiment;
- some resource providers ran into resource crunches which delayed team projects;
- some of our projects ran into long delays because the project and the resource provider weren't the best match possible;
- some resource providers struggled with the installation of an application;
- others had difficulties with providing network access through complex network connections;
- resource providers differ in their service philosophies;
- simply getting started was a challenge;
- a few teams struggled with figuring out which team member needs to do what and when;
- team forming was one of the steps which took the longest time: each team member needed to exchange significant amounts of information about their background, capabilities, expectations, availability, and commitment levels with one another before the project could even kick off;
- and finally, manual processes are just slow; they consumed days, sometimes weeks, especially because the various technology and people resources were inherently remote, each with different priorities.

We hope that our participants will extract significant value from this report. They certainly deserve to do so in return for their generous contributions, support, and participation.

2. Building the Teams

During the course of the second round of this experiment, over 200 active participants and observers registered at the experiment website, after 160 participants in Round 1. This healthy pool of participants allowed us to break the group into 26 teams working in parallel. We designed the experiment so that each team can work autonomously but follows a common methodology, a common documentation standard, and a common calendar.

Let's illustrate the inner workings of the Experiment with an example. Suppose the end-user needs additional compute resources to speed up a product design cycle, say for simulating more sophisticated geometry or physics, or for running many more simulations for a higher quality result. That suggests a specific software stack, domain expertise, and even hardware configuration. The general idea is to look at the end-user's task and select the appropriate resources, software, and expertise that match its requirements. Then, with modest guidance from the Experiment organizers, the user, resource providers, and experts implement and run the task and deliver the results. The hardware and software providers measure resource usage; the expert summarizes the steps of analysis and implementation; the end user evaluates the quality of the process and of the results, and the degree of user-friendliness the process provided. The experiment organizers analyze the feedback received. Finally, the team gets together, extracts lessons learned, and presents further recommendations as input for the corresponding case study.

Some participants, especially our compute resource and software providers, were part of multiple teams, but we kept the end-users and experts assigned to a single team. We also suggested restricting free usage of computing resources to 1,000 CPU-core hours, to avoid jeopardizing our resource partners' business.

To start, let's define what roles each stakeholder has to play to make service-based HPC come together. In this case, stakeholders consist of industrial end users, resource providers, software providers, and high performance computing experts.

The Team Expert

This group includes individuals or companies with expertise, especially in areas like cluster management or porting application code onto HPC systems. It also encompasses PhD-level domain specialists with in-depth application knowledge. In the experiment, experts worked with end users, computer centers, and software providers to help glue the pieces together. Each team has been led by a Team Expert, who guided all aspects of the project selection, its execution, and the documentation of the project. The expert followed the roadmap, presented in detail on the team's Basecamp collaboration platform, to complete the end-user project and to communicate the findings back. Experts determined when help or resources were needed from other participants within their team and had access to these resources. The expert has also been the conduit for communications with the organizers. We relied on the experts to raise the flag when the team needed assistance or additional resources from outside the team.

The End-User

A typical example is a small or medium size manufacturer in the process of designing and prototyping its next product. These users are candidates for remote HPC or HPC-as-a-Service when in-house computation on workstations has become too lengthy a process, but acquiring additional computing power in the form of HPC is too cumbersome or not in line with budgets. HPC is not likely to be the core expertise of this group. As participants and organizers of the experiment, the end-users are the group we were working to satisfy. The end-users defined their projects in detail, set success criteria, provided input data, and interpreted the outcome of the project to determine whether the success criteria had been met. The end-users were required to ensure that they had the proper authorization to bring their projects into the experiment. Although the input/output data and the results were not shared outside of the team assigned to the project, the findings regarding the hurdles and how they were resolved are shared with all participants, if requested in an anonymized form. The end-users have also been asked to select projects that are suitable for the experiment. As examples, the following kinds of projects were not considered suitable: projects requiring over 1,000 CPU-core hours; projects requiring licenses from ISVs that are not able or willing to participate in the experiment; projects whose input/output dataset contains secret information; and projects whose output will be used for anything other than experimentation purposes.

The Compute and Storage Resource Provider

This pertains to anyone who owns HPC resources, computers, and storage, and is networked to the outside world. A classic HPC center would fall into this category, as well as a standard datacenter used to handle batch jobs, a cluster-owning commercial entity that is willing to offer up cycles to run non-competitive workloads during periods of low CPU utilization, or, certainly, an HPC cloud services provider. This group contributed their compute, storage, and data transmission resources and related expertise to the experiment. The providers were responsible for completing the execution of the projects and making the results available to the end-user based on the collectively agreed schedules. Although the providers were expected to make their resources available to the participants at no cost within the scope of this experiment, they measured and reported on resource usage. Each provider has been free to define the limits of their contribution and had the right to turn down any proposed project.

The Software and Service Provider

This includes software owners of all stripes, including ISVs, public domain organizations, and individual developers. We were looking for rock-solid software with the potential to be used on a wider scale. For the purpose of this experiment, on-demand license usage has been tracked in order to determine the feasibility of using the service model as a revenue stream. Our application software and service provider participants supported the experiment in multiple ways. Besides contributing their software licenses or services to the experiment, they have been an escalation point for the experts, in case the experts ran into hurdles they couldn't cross. Similar to the compute and storage resource providers, the software and service providers made their resources available to the participants at no cost; however, they measured their resource usage.

The Team Mentor

One of the process improvements in Round 2 was the introduction of the Team Mentors. They play a key role as a guide, a supervisor, and a source of help to Experiment teams. A Team Mentor gives the team the best chance of success without getting too far into the day-to-day work of the team. A Team Mentor's tasks have proactive and reactive components. Proactive components can best be tackled in a regular weekly work phase and focus on checking the team status, helping the team members to fill in all requested information in the team documents in Basecamp, and consequently monitoring team progress; in case there has been no activity over the last 7 days, the mentor contacts the Team Expert and asks if any support is needed. Reactive components are, for example, requests from team members for help, and conveying such requests back to the Experiment organizers. The Team Mentor is NOT a hands-on project manager; that is the task of the Team Expert. The Team Mentor makes sure the Team Expert doesn't get lost or lose interest. In summary, the tasks of the Team Mentor are:

- Receive, review, and accept the assignment for his new team
- Familiarize himself with his new team members
- Introduce himself to the team members as their Team Mentor
- Check the team contact sheet and remind team members to fill it out
- Propose a kick-off meeting via telephone conference or Skype
- Remind team members to use Basecamp for communication, coordination, and To-Do tracking
- Help the Team Expert follow the project checklist and fill out the text documents
- Report back to the Experiment organizers when a step is complete
- Ensure that the team remains active
- Promote additional services from the UberCloud Exhibit
- Track project metrics to ensure compliance with terms
- Help the team with agreement-related concerns, for example SLAs and NDAs
- Provide ideas about improvement opportunities for the HPC Experiment
- Finally, help and encourage the Team Expert in writing the case study

3. Roadmap: How to Complete an End-User Project

We recognized that every end-user project requires a slightly different approach, a variety of software and compute resources, and a tailored schedule. However, to keep the entire experiment consistent, we asked each team to follow a common roadmap for each of their end-user projects, which has been published in the team's Basecamp collaboration platform. The expert assigned to each team has been the guide in following this roadmap. The roadmap called for communication with the organizers at certain points, although generally the teams have been autonomous and made their own decisions. If at any point we discovered that the roadmap didn't address the needs of multiple participants, we were fast in updating it.

The end-to-end process for an end-user and his team to get together onto cloud resources, execute the end-user's application, and bring the results back to the end-user is subdivided into a set of smaller steps published on Basecamp, which help to guide the team through this process and to avoid pitfalls and roadblocks, or at least help to resolve them. The following help text has been entered into BaseCamp for each of the To-Do's to give the teams an explanation of how to complete that specific To-Do. Each help text is expected to be self-explanatory, to avoid the need for reading the entire document to understand a specific step.

Step 1. Define the end-user project by completing the following to-do's

To-Do 1.1: Team Expert fills out "Project definition" text document with support from End User

The "Project definition" text document is where information about the project is stored, such as the project objectives, a summary of the goals to be achieved, the application software to be used, the custom code requirements, and information about post-processing. The document serves as a reference throughout the project and drives decisions such as which Software Provider to work with and which Resource Provider to select. This document should be filled out by the Team Expert with support from the End User. Please add any additional sections to the document as necessary. A kick-off meeting can be a good time to cover all sections where there is missing information.

To access the related text document, log into BaseCamp, select your project, click on the Text Documents menu (which appears right below the project name) and click on the name of the document. This document can be edited directly in BaseCamp; please click anywhere in the document to edit. The document auto-saves any changes made, and you can close the document by clicking on the name of the project in the menu when done editing.

Once the "Project definition" text document is filled out, please check the box next to this To-Do item on the home page of the project in BaseCamp.

To-Do 1.2: Organizer assigns Software Provider based on "Project definition" text document

Software Providers are assigned by the Organizers of the Experiment based on the information provided in the "Project definition" text document. Although each team is closely monitored by the Organizers and this To-Do is typically completed as quickly as possible, please post a comment into this To-Do if you face delays. You can post a comment into a To-Do by clicking on the To-Do on the project's home page in BaseCamp. The comments box will appear under the "Discuss this to-do" section. Once assigned, you can find the name of the Software Provider in the "Software resources" and "Key Contacts" text documents in BaseCamp.

To access the related text document, log into BaseCamp, select your project, click on the Text Documents menu (which appears right below the project name) and click on the name of the document. This document can be edited directly in BaseCamp; please click anywhere in the document to edit. The document auto-saves any changes made, and you can close the document by clicking on the name of the project in the menu when done editing.

To-Do 1.3: Organizer assigns Resource Provider based on "Project definition" text document

The Resource Provider is assigned by the Organizers of the Experiment based on the information provided in the "Project definition" text document. Although each team is closely monitored by the Organizers and this To-Do is typically completed as quickly as possible, please post a comment into this To-Do if you face delays. You can post a comment into a To-Do by clicking on the specific To-Do on the project's home page in BaseCamp. The comments box will appear under the "Discuss this to-do" section. Once assigned, you can find the name of the Resource Provider in the "Computing resources" text document in BaseCamp and in the "Key Contacts" text document.

To access the related text document, log into BaseCamp, select your project, click on the Text Documents menu (which appears right below the project name) and click on the name of the document. This document can be edited directly in BaseCamp; please click anywhere in the document to edit. The document auto-saves any changes made, and you can close the document by clicking on the name of the project in the menu when done editing.

To-Do 1.4: Team Expert calls for a kick-off meeting over Skype via the Doodle event scheduler

A kick-off meeting is useful for the team to come together, get to know one another, and discuss the project definition, success criteria, project timelines, software resources, and compute resources. A kick-off meeting is recommended, but not required.

Considering time zone differences and conflicting schedules, it can be difficult to find a time slot for the kick-off meeting. A free online service called Doodle (doodle.com) can be used to coordinate the meeting time. To set up a meeting, go to the Doodle website, click Schedule an Event, enter a title that contains your team name (e.g. "Team 99: Kick-off meeting") and your email address, and suggest several day/time slots. Doodle will provide a link for your event. Anyone who has the link can provide their own schedule information without a username or password. Paste the URL into BaseCamp or emails to invite team members to the Doodle poll. Once the schedule is set with Doodle, use BaseCamp to announce the schedule. You can also create an Event in BaseCamp as a reminder for your team members.

To-Do 1.5: Resource Provider fills out "Computing resources" text document

The "Computing resources" text document is where information about the resources is stored, such as the name of the cluster/datacenter, the process to request access, and the contact information for technical support. The document serves as a reference throughout the project for team members. This document should be filled out by the Resource Provider. Please add any sections to the document as necessary.

To access the related text document, log into BaseCamp, select your project, click on the Text Documents menu (which appears right below the project name) and click on the name of the document. This document can be edited directly in BaseCamp; please click anywhere in the document to edit. The document auto-saves any changes made, and you can close the document by clicking on the name of the project in the menu when done editing.

Once the "Computing resources" text document is filled out, please check the box next to this To-Do item on the home page of the project in BaseCamp.

To-Do 1.6: Software Provider fills out "Software resources" text document

The "Software resources" text document is where information about the software resources is stored, such as the name of the software to be used, the process to request license keys, and the contact information for technical support. The document serves as a reference throughout the project for team members. This document should be filled out by the Software Provider. Please add any sections to the document as necessary.

To access the related text document, log into BaseCamp, select your project, click on the Text Documents menu (which appears right below the project name) and click on the name of the document. This document can be edited directly in BaseCamp; please click anywhere in the document to edit. The document auto-saves any changes made, and you can close the document by clicking on the name of the project in the menu when done editing.

Once the "Software resources" text document is filled out, please check the box next to this To-Do item on the home page of the project in BaseCamp.

To-Do 1.7: End-User fills out "Software resources" text document if custom code is needed

The "Software resources" text document is also where information about any custom code to be supplied by the end-user is stored, such as the name of the software to be used, a short description of the custom software, how to obtain the code/binaries to be used, and the contact information for technical support. The document serves as a reference throughout the project for team members. This section should be filled out by the End User who provides the custom code. Please add any sections to the document as necessary.

To access the related text document, log into BaseCamp, select your project, click on the Text Documents menu (which appears right below the project name) and click on the name of the document. This document can be edited directly in BaseCamp; please click anywhere in the document to edit. The document auto-saves any changes made, and you can close the document by clicking on the name of the project in the menu when done editing.

Once the "Software resources" text document is filled out, please check the box next to this To-Do item on the home page of the project in BaseCamp.

To-Do 1.8: Team Expert reviews the UberCloud Exhibit and considers additional services which may be useful

The UberCloud Exhibit contains the list of products and services that can provide additional benefit to the Experiment teams. Please go to the UberCloud Exhibit website and review all available products and services. When a suitable listing is identified, click the Experiment button on the related UberCloud Exhibit page and fill out the related form. To inform the team members of the availability of the product or service, post a comment in this To-Do. You can post a comment into a To-Do by clicking on the To-Do on the project's home page in BaseCamp. The comments box will appear under the "Discuss this to-do" section.

Step 2. Contact the assigned resources and set up the project environment by completing the following to-do's

To-Do 2.1: Team Expert gets resources using "Computing resources" text document

The Team Expert is responsible for requesting access to the resources specified in the "Computing resources" text document. Please follow the instructions provided in this document to request access from the Resource Provider. If any technical support is required, contact the resources specified in the same document. In case of unresolved issues, reach out to your Team Mentor first, followed by the Experiment Organizers, for assistance.

Once access to the required resources is confirmed, please check the box next to this To-Do item on the home page of the project in BaseCamp.

To-Do 2.2: Team Expert sets up software using "Software resources" text document with Resource Provider help

The Team Expert is responsible for setting up the software specified in the "Software resources" text document, with assistance from the Resource Provider. Please follow the instructions provided in this document to request software license keys if needed (a sketch of one such license check appears after To-Do 2.4 below). If any technical support is required, contact the resources specified in the same document. In case of unresolved issues, reach out to your Team Mentor first, followed by the Experiment Organizers, for assistance.

Once setup of the required software is complete, please check the box next to this To-Do item on the home page of the project in BaseCamp.

To-Do 2.3: Team Expert sets up end-user code using "Software resources" document with Resource Provider help

The Team Expert is responsible for setting up the custom software specified in the "Software resources" text document, with assistance from the Resource Provider. Please follow the instructions provided in this document to obtain the custom code and configuration as needed. If any technical support is required, contact the resources specified in the same document. In case of unresolved issues, reach out to your Team Mentor first, followed by the Experiment Organizers, for assistance.

Once setup of the required custom software is complete, please check the box next to this To-Do item on the home page of the project in BaseCamp.

To-Do 2.4: Team Expert configures the project environment with Resource Provider's help

The Team Expert is responsible for configuring the project environment based on the team's needs, using information specified in the "Project definition", "Computing resources", and "Software resources" text documents. The project environment consists of the compute resource provided by the Resource Provider, the application software and custom code, the application's data, as well as the appropriate configuration options. It is strongly advised for the Resource Provider (e.g. for HPC computer centers) to implement a separate queue in the system's distributed resource manager (LSF, PBS, Grid Engine, etc.). This allows the Resource Provider to better monitor and control the resource usage of UberCloud Experiment compute jobs; a sketch of such a queue setup also follows below.

Once configuration of the project environment is complete, please check the box next to this To-Do item on the home page of the project in BaseCamp.
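The roadmap does not prescribe tooling for the license-key step in To-Do 2.2, but as an illustration, the sketch below shows one way a Team Expert might confirm that a FlexNet-style license server answers before installing an application. The server address and feature name are placeholder assumptions, and the lmutil binary from the FlexNet tools must be on the PATH:

    # Hedged sketch for To-Do 2.2: probe a FlexNet license server.
    # "27000@licensehost" and "abaqus" are placeholders, not values
    # taken from the Experiment.
    import subprocess

    def license_feature_available(server: str, feature: str) -> bool:
        """Ask lmstat whether the server reports the given feature."""
        result = subprocess.run(
            ["lmutil", "lmstat", "-c", server, "-f", feature],
            capture_output=True, text=True,
            timeout=30,  # raises TimeoutExpired if the server hangs
        )
        # lmstat prints "Users of <feature>: ... Total of N licenses issued"
        return result.returncode == 0 and "Total of" in result.stdout

    print(license_feature_available("27000@licensehost", "abaqus"))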
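For the separate queue recommended in To-Do 2.4, the commands differ per resource manager; the following is a minimal sketch for a Torque/PBS server, run by the Resource Provider's administrator. The queue name and limits are illustrative assumptions, not values mandated by the Experiment:

    # Hedged sketch for To-Do 2.4: create a dedicated Torque/PBS queue
    # so Experiment jobs can be monitored and capped separately.
    import subprocess

    QUEUE = "ubercloud"  # hypothetical queue name

    for cmd in [
        f"create queue {QUEUE} queue_type=execution",
        f"set queue {QUEUE} resources_max.walltime=24:00:00",  # cap runtime
        f"set queue {QUEUE} max_running=4",                    # cap concurrency
        f"set queue {QUEUE} enabled=true",
        f"set queue {QUEUE} started=true",
    ]:
        subprocess.run(["qmgr", "-c", cmd], check=True)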

To-Do 2.5: Team Expert performs a trial run

A trial run is performed once the project environment is set up, to prove that the compute resources, software resources, end-user custom code, application input data, and necessary configurations are all in place. A trial run is usually performed with a minimal-size data set. Executing the trial run is the Team Expert's responsibility. If any technical support is required, contact the support resources specified in the "Project definition", "Computing resources", and "Software resources" text documents. In case of unresolved issues, reach out to your Team Mentor first, followed by the Experiment Organizers, for assistance.

Once configuration of the project environment is complete and the trial run is successful, please check the box next to this To-Do item on the home page of the project in BaseCamp.

Step 3. Initiate the end-user project execution by completing the following to-do's

To-Do 3.1: Team Expert uploads data to the project environment with help from End User

The Team Expert is responsible for uploading the end-user's input data into the project environment, with the help of the End User. If there are information security or dataset-related concerns, please alert your Team Mentor first, followed by the Experiment Organizers, by posting a comment in this To-Do. You can post a comment into a To-Do by clicking on the To-Do on the project's home page in BaseCamp. The comments box will appear under the "Discuss this to-do" section.

Once the upload of data into the project environment is complete, please check the box next to this To-Do item on the home page of the project in BaseCamp.

To-Do 3.2: Team Expert queues the job(s) for the project with help from Resource Provider

The Team Expert is responsible for queuing the job(s) using the information specified in the "Computing resources" text document. If any technical support is required, contact the resources specified in the same document. In case of unresolved issues, reach out to your Team Mentor first, followed by the Experiment Organizers, for assistance. A combined sketch of To-Do's 3.1 and 3.2 follows below.

Once the job(s) are queued, please start to closely monitor the job(s) and check the box next to this To-Do item on the home page of the project in BaseCamp.
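Neither To-Do prescribes a transfer or submission mechanism (Team 26 in Appendix 1, for example, used SSH uploads and a PBS script). As one concrete possibility, the sketch below uploads an input file over SFTP and submits a PBS job on the login node; the host name, paths, account, PBS directives, and solver command line are all placeholder assumptions:

    # Hedged sketch for To-Do's 3.1 and 3.2, assuming SSH key access to a
    # PBS-managed cluster. Requires the third-party "paramiko" package.
    import paramiko

    HOST, USER = "login.provider.example", "enduser"  # placeholders
    REMOTE_DIR = "/home/enduser/project"

    PBS_SCRIPT = """#!/bin/bash
    #PBS -N ubercloud_job
    #PBS -q ubercloud
    #PBS -l nodes=2:ppn=8,walltime=06:00:00
    cd $PBS_O_WORKDIR
    abaqus job=model input=model.inp cpus=16 interactive
    """
    # Strip the display indentation so #PBS directives start each line.
    PBS_SCRIPT = "\n".join(line.strip() for line in PBS_SCRIPT.splitlines())

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(HOST, username=USER)  # assumes key-based authentication

    sftp = client.open_sftp()
    sftp.put("model.inp", f"{REMOTE_DIR}/model.inp")  # To-Do 3.1: upload
    with sftp.open(f"{REMOTE_DIR}/run.pbs", "w") as fh:
        fh.write(PBS_SCRIPT)
    sftp.close()

    _, stdout, _ = client.exec_command(f"cd {REMOTE_DIR} && qsub run.pbs")
    print("queued job:", stdout.read().decode().strip())  # To-Do 3.2: queue
    client.close()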

Step 4. Monitor the project by completing the following to-do's

To-Do 4.1: Team Expert monitors the job status

The Team Expert is responsible for monitoring the job(s) using the information specified in the "Computing resources" text document. Potential cost overruns and job failures should be monitored for and promptly acted on (a polling sketch appears after To-Do 4.3 below). If any technical support is required, contact the resources specified in the same document. In case of unresolved issues, reach out to your Team Mentor first, followed by the Experiment Organizers, for assistance.

Once the job(s) are completed, check the box next to this To-Do item on the home page of the project in BaseCamp.

To-Do 4.2: Team Expert re-sets parameters between runs as needed with support from End User

The Team Expert is responsible for monitoring the jobs using the information specified in the "Computing resources" text document. Some jobs may require parameter updates from one job to the next, and the Team Expert should promptly act on such requirements (a second sketch after To-Do 4.3 illustrates this). If any technical support is required, contact the resources specified in the same document. In case of unresolved issues, reach out to your Team Mentor first, followed by the Experiment Organizers, for assistance.

Once the jobs are completed, check the box next to this To-Do item on the home page of the project in BaseCamp.

To-Do 4.3: Team Expert performs post-processing, such as visualization, with help from Resource Provider

The Team Expert is responsible for performing any post-processing tasks needed, using the information specified in the "Project definition" and "Software resources" text documents. If any technical support is required, contact the resources specified in the same documents. In case of unresolved issues, reach out to your Team Mentor first, followed by the Experiment Organizers, for assistance.

Once the post-processing jobs are completed, check the box next to this To-Do item on the home page of the project in BaseCamp.
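Continuing the PBS assumption from the Step 3 sketch, monitoring in To-Do 4.1 can be as simple as polling qstat until the job leaves the queue. Everything here, including the command, the polling interval, and the parsing of qstat's output layout, is an assumption about a Torque/PBS setup:

    # Hedged sketch for To-Do 4.1: poll a PBS job until it disappears
    # from the queue, printing its state (Q = queued, R = running, ...).
    import subprocess
    import time

    def wait_for_job(job_id: str, poll_seconds: int = 300) -> None:
        while True:
            result = subprocess.run(["qstat", job_id],
                                    capture_output=True, text=True)
            if result.returncode != 0:  # qstat no longer knows the job
                print(f"{job_id}: finished or removed from the queue")
                return
            # Assumes the classic qstat table; the state is the "S" column,
            # second-to-last field of the last output line.
            state = result.stdout.splitlines()[-1].split()[-2]
            print(f"{job_id}: state {state}")
            time.sleep(poll_seconds)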
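For the parameter resets in To-Do 4.2, every solver has its own input syntax, so the fragment below only sketches the general pattern: rewrite one named value in a text input file, then resubmit. The "name = value" line format and the file and parameter names are invented for illustration:

    # Hedged sketch for To-Do 4.2: change one "name = value" line in a
    # plain-text input file between runs. Format and names are invented.
    import re

    def set_parameter(path: str, name: str, value: float) -> None:
        with open(path) as fh:
            text = fh.read()
        text = re.sub(rf"^{name}\s*=.*$", f"{name} = {value}",
                      text, flags=re.MULTILINE)
        with open(path, "w") as fh:
            fh.write(text)

    # e.g. raise a hypothetical inlet velocity before resubmitting
    set_parameter("model.inp", "inlet_velocity", 4.5)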

Step 5. Review results by completing the following to-do's

To-Do 5.1: Team Expert makes results available to the End User; if needed, repeats Steps 2-5

The Team Expert is responsible for making the results available to the End User. This may be accomplished by providing result datasets to the End User via transfer mechanisms such as FTP, or through remote visualization techniques (a hedged sketch appears at the end of this chapter). Steps 2 through 5 may need to be repeated with different settings or data to reach the success criteria of the team. If the resource utilization limit will be exceeded, please consult the Team Mentor first, followed by the Experiment Organizers, before moving forward. If any technical support is required, contact the resources specified in the related text documents. In case of unresolved issues, reach out to your Team Mentor first, followed by the Experiment Organizers, for assistance.

Once the results are made available to the End User, check the box next to this To-Do item on the home page of the project in BaseCamp.

To-Do 5.2: Team Expert removes the End User data from the project environment with Resource Provider's help

The Team Expert is responsible for removing the End User's input and output data from the project environment.

Once the results are made available to the End User and the End User data is removed from the project environment, check the box next to this To-Do item on the home page of the project in BaseCamp.

Step 6. Document findings by completing the following to-do's

To-Do 6.1: Team Expert initiates documentation using "Template for Uber-Cloud Experiment Uses Cases"

At the conclusion of their project, each team is required to document their experience in the form of a use case. This includes teams which were unable to reach their desired goals. The file "Template for Uber-Cloud Experiment Uses Cases.pdf" is provided in BaseCamp for the teams to follow. The Team Expert reviews the template in BaseCamp and distributes it to the team members to fill out the relevant sections. Team Experts can distribute the file either by pointing the team members to BaseCamp or by attaching the PDF file to an email. A clear due date must be set by the Team Expert for the team members to submit the information.

To access the related file, log into BaseCamp, select your project, click on the Files menu (which appears right below the project name) and click on the name of the file.

To-Do 6.2: Team Expert requests team members to contribute to and review the documentation

The Team Expert is responsible for gathering information from the team members to complete and final-edit the documentation of the project. Please provide the final version of the documentation to your Team Mentor or the Experiment Organizers by attaching it to BaseCamp as a file.

Thank you for your participation in the Experiment and for taking the time to summarize your experience in the form of a use case.
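To close out the roadmap: for the FTP hand-off option mentioned in To-Do 5.1, a minimal sketch is shown below. The server name, credentials, directory, and file names are placeholder assumptions; teams also used SFTP or remote visualization instead:

    # Hedged sketch for To-Do 5.1: fetch result files over FTP.
    from ftplib import FTP

    with FTP("ftp.provider.example") as ftp:     # hypothetical host
        ftp.login(user="enduser", passwd="secret")
        ftp.cwd("/project/results")
        for name in ("model.odb", "model.dat"):  # example output files
            with open(name, "wb") as fh:
                ftp.retrbinary(f"RETR {name}", fh.write)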

4. The Teams

We built 33 teams: 21 teams completed their task successfully; the other 12 teams did not finish, either because our team building was not optimal, or because the end-user dropped out due to other priorities; some of them decided to continue in Round 3. And 13 successful teams submitted their case study report at the end of their team projects:

TEAM 26 - Development of Stents for a Narrowed Artery
TEAM 27 - stalled, changed, then became TEAM 44, successful
TEAM 28 - Fluid Flow in Medical Device
TEAM 29 - Photorealistic Rendering, stalled
TEAM 30 - Heat Transfer Use Case
TEAM 31 - Simulation of the radiofrequency field distribution inside the human body, stalled
TEAM 32 - Two-phase flow simulation of separation columns, stalled
TEAM 33 - Large-scale and high-resolution weather and climate prediction, stalled
TEAM 34 - Analysis of Vertical and Horizontal Wind Turbines
TEAM 35 - Hadoop-based simulations with data from telecommunication, stalled
TEAM 36 - Advanced Combustion Modeling for Diesel Engines
TEAM 37 - Simulation of gas bubbles in a liquid mixing vessel, stalled
TEAM 38 - Analysis of the biological diversity in a geography using R scripts, stalled
TEAM 39 - Remote Visualization, stalled
TEAM 40 - Simulation of Spatial Hearing
TEAM 41 - 3-D electromagnetic simulation of physical structures
TEAM 42 - Noise and Vibration analysis, Strength and Stiffness analysis, stalled
TEAM 43 - Oxidizer flow within a Hybrid Rocket Motor
TEAM 44 - CFD Simulation of Drifting Snow
TEAM 45 - Simulating Smoke flow inside a building, stalled
TEAM 46 - CAE Simulation of Water Flow Around a Ship Hull
TEAM 47 - Heavy Duty Abaqus Structural Analysis using HPC in the Cloud
TEAM 48 - Simulation of jet mixing in the supersonic flow with shock, stalled
TEAM 49 - Simulating water flow through an irrigation water sprinkler
TEAM 50 - Numerical EMC and Dosimetry with high-res models
TEAM 51 - Simulation of water/blood flow inside rotating microchannels, stalled
TEAM 52 - High-Resolution Computer Simulations of Blow-off in Combustion Systems
TEAM 53 - Understanding Fluid Flow in Microchannels
TEAM 54 - Analysis of a Pool in a Desalinization Plant
TEAM 55 - Ensemble simulation of weather at 20km and higher resolution
TEAM 56 - Simulating Radial and Axial Fan Performance
TEAM 57 - Gas turbine gas dilution analysis
TEAM 58 - Simulating Wind Tunnel Flow Around Bicycle and Rider

The detailed case study reports from these teams are included in Appendix 1 below.

5. The UberCloud Exhibit and Conferences in 2013

Another improvement in Round 2 is the introduction of the services directory, the UberCloud Exhibit: the one-stop interactive online services directory for cloud users and service providers, with a focus on High Performance Computing, Big Data, and Digital Manufacturing. It aims at complementing well-established annual exhibitions like the International Conference & Exhibition for High Performance Computing in the US in November, the International Supercomputing Conference & Exhibition in Europe in June, and the International ISC Cloud Conference & Exhibition for HPC & Big Data in September. The UberCloud Exhibit is where resource, software, and expertise service providers showcase their services to, and interact with, the HPC and Digital Manufacturing communities worldwide, as well as collaborate directly with hundreds of UberCloud Experiment participants. As a service provider, you can join the UberCloud Exhibit today and request your exhibit space.

The benefits for our community members of using the UberCloud Exhibit, or of exhibiting here, are manifold:

- Gone are the days when you had to start with laborious and time-consuming Google searches to find a specific service, product, or technology. Here is your one-stop interactive online services exhibit targeting your specific community, with around-the-clock opening hours and free access, at your fingertips.
- The UberCloud Exhibit is an independent and unbiased directory of services; there is no vendor lock-in and no vendor preference.
- There is value in being part of the community, because that's how you find active, interested business partners. Example: software providers who are about to introduce a new product can launch it to the market through the UberCloud Exhibit because they can easily find customers and business partners here.
- Each vendor exhibit comes with an information poster followed by an interactive communication panel with a set of buttons encouraging you to take actions and triggering appropriate reactions from the service provider and from the UberCloud. You can grab an information brochure, talk to an exhibitor, get a live demo or a trial service, or participate in the UberCloud Experiment to test the service.
- Exhibitors acquire exhibit space, not content. They are free to change their exhibit content on the fly, removing old messages and adding fresh ones, thus keeping the content in their exhibit space always up to date.
- And finally, the annual fee for service providers exhibiting here is kept at a minimum so that every company, large and small, can afford to exhibit. Thus, there is no budget barrier or other reason to stay away.

We are very pleased about our first participants, who joined the UberCloud Exhibit within its first 3 months: Amazon, Bright Computing, Bull, Charity Engine, Cloud Advisory Council, CloudSoft, Cloudyn, Cycle Computing, Dacolt, Eleks, ESI, Nice Software, Saldlab Fidesys, Samplify, SGI, and Univa. Thank you!

To view the service offerings in the UberCloud Exhibit, either search by a specific keyword or select one of the tabs, which will take you to the different service areas and the corresponding service providers. And you are encouraged to get in touch with our UberCloud team to help us further improve this interactive online services exhibit.

Events at which the UberCloud HPC Experiment is presenting in 2013:

- Hartree Centre Workshop "HPC as a Service for Industry", Manchester, UK, Jan 29-30, 2013
- HPC Advisory Council Stanford Conference, Stanford, California, Feb 7-8, 2013
- Cloudscape V, Brussels, Belgium, Feb 27-28, 2013
- HPC Advisory Council Lugano Conference, Lugano, Switzerland, Mar 13-15, 2013
- Annual HPCC Conference "Supercomputing: Big Systems, Big Data, Better Products", Newport, RI, Mar 26-28, 2013
- High Performance Computing Symposium (HPC'13), San Diego, CA, Apr 10, 2013
- HPC User Forum, Tucson, Arizona, Apr 29 - May 1, 2013
- SIMULIA Community Conference, Vienna, Austria, May 22-24, 2013
- ANSYS Regional Conference, Santa Clara, CA, May 30, 2013
- Intl. Supercomputing Conference (ISC), Leipzig, Germany, Jun 16-20, 2013
- Intl. Conference on High Performance Computing and Simulation, Helsinki, Finland, July 1-5, 2013
- CONTRAIL Cloud Computing Summer School, Almere, The Netherlands, July 22-26, 2013
- Intelligent Data Acquisition and Advanced Computing Systems Conference (IDAACS), Berlin, Germany, Sept 12-14, 2013
- ISC Cloud Conference on HPC and Big Data in the Cloud, Heidelberg, Germany, Sept 23-24, 2013
- CLOUDCOMP 2013, Wuhan, People's Republic of China, Oct 17-19, 2013

6. Invitation to Join the UberCloud HPC Experiment

The 2nd round of the UberCloud HPC Experiment started on November 15th with a kick-off in the Intel Booth Theater at SC'12 in Salt Lake City, together with a live webinar broadcast to all participants around the world. The 2nd round concluded at the end of March with significant improvements over Round 1:

- More professional, more automation, more participants, more applications, more teams
- Extended application areas: HPC, CAE, and Life Sciences
- Better guidance for the teams: the end-to-end process has been broken down into 22 steps
- A common project management tool for all teams and experiment organizers: Basecamp
- A services directory (the UberCloud Exhibit) for our service providers, open, community-wide
- 3-level support: front line (team); 2nd level, team mentors; and 3rd level, organizers

The following is an open invitation to all members of the HPC, CAE, and Life Sciences communities to join us for the next round of the HPC Experiment, where we will again apply the cloud computing service model to challenging CAE workloads. With the capacity of their current workstations often unable to provide enough memory, simulations taking too long, and the number of jobs too small to get quality results, CAE engineers and their organizations are looking to increase their available computing power beyond their workstations. Should they buy or rent? Buying additional compute power leads to all kinds of challenges in the context of a high-performance compute cluster acquisition. Recently, the other option of using remote resources became more attractive with the advent of cloud computing. However, here many face additional challenges, such as security and data privacy, incompatible licensing models, moving data back and forth, and a dozen others. We believe that it's time to experiment with how to overcome these challenges and achieve the benefits of the cloud computing model.

You can participate in this experiment as an industrial End-User in need of instant additional computing power accessible remotely, as a compute Resource Provider, as a Software Provider, or as a Team Expert. Depending on the specific requirements of the industry application, we will identify the best-suited resource provider, invite the software provider to join the team, and bring on a Team Expert who helps to implement the application and data on the remote resource. There is no money involved in participating in this hands-on experiment. We are all motivated to study the end-to-end process of putting the Team of Four together, implementing and running the workload, and getting the final results back to the end-user.

To participate in one of the next rounds of the experiment, please register at the Experiment website.

Appendix 1 - Final Team Reports

TEAM 26 - Development of Stents for a Narrowed Artery

MEET THE TEAM

End User - Anonymous
Software Provider - Matt Dunbar. Dunbar is Chief Architect at SIMULIA.
Resource Providers - Tony DeVarco and Eugene Kremenetsky. DeVarco is Senior Manager for Strategic Partners and Cloud Computing at SGI. Kremenetsky is Systems Engineering Technical Lead at SGI.
HPC/CAE Experts - Scott Shaw and Gregory Shirin. Shaw is a Senior Applications Engineer at SGI. Shirin, the HPC Experiment team mentor, is a senior consultant with Grid Dynamics.

USE CASE

This project focused on simulating stent deployment using SIMULIA's Abaqus/Standard, and Remote Visualization Software from NICE to run Abaqus/CAE, on SGI Cyclone. The intent was to determine the viability of shifting similar work to the cloud during periods of full utilization of in-house compute resources.

Information on Software and Resource Providers

Abaqus from SIMULIA, the Dassault Systèmes brand for realistic simulation, is an industry-leading product family that provides a comprehensive and scalable set of Finite Element Analysis (FEA) and multiphysics solvers and modeling tools for simulating a wide range of linear and nonlinear model types. It is used for stress, heat transfer, crack initiation, failure, and other types of analysis in mechanical, structural, aerospace, automotive, bio-medical, civil, energy, and related engineering and research applications. Abaqus includes four core products: Abaqus/CAE, Abaqus/Standard, Abaqus/Explicit, and Abaqus/CFD. Abaqus/CAE provides users with a modeling and visualization environment for Abaqus analysis.

NICE Desktop Cloud Visualization (DCV) is an advanced technology that enables technical computing users to remotely access 2D/3D interactive applications over a standard network. Engineers and scientists are immediately empowered by taking full advantage of high-end graphics cards, fast I/O performance, and large memory nodes hosted in a public or private 3D cloud, rather than waiting for the next upgrade of their workstations.

SGI Cyclone is the world's first large-scale on-demand cloud computing service dedicated to technical applications. Cyclone capitalizes on over twenty years of SGI HPC expertise to address the growing science and engineering technical markets that rely on extremely high-end computational hardware, software, and networking equipment to achieve rapid results.

Current State

The end user currently has two 8-core PC workstations for pre- and post-processing with Abaqus/CAE, and a Linux-based compute server with 40 cores and 128 GB of available memory. They do not use any batch job scheduling software. The typical stent design model that they run has 2-6 million degrees of freedom (DOF). A typical job uses 20 cores and takes six hours. After the job is run, the data is transferred to the workstation for post-processing. It was agreed that SIMULIA and SGI would provide the end user with Abaqus licenses for up to 128 cores, in order to see if running a job on more cores could reduce the time to finish the job, as well as provide access to NICE DCV remote graphics software to view the results in Northern California before downloading them to the end user's office in New Hampshire.

End-To-End Process

1. Set up Cyclone account for End User.
2. SGI license server info sent to Software Provider.
3. Issuance of a 128-core temporary license of Abaqus by Software Provider.
4. End user uploads model to his home directory on Cyclone login node and sends to CAE Expert.
5. Benchmark scaling exercise to find core count sweet spot is done by CAE Expert (see the sketch after this list).
6. Results of benchmark scaling exercise sent to End User by CAE Expert.
7. Remote viz session to view data using Abaqus/CAE is set up by CAE Expert.
8. Remote viz demo via WebEx with End User.
9. PBS submission script written by CAE Expert and shared with End User.
10. End user uploads, runs, views, and downloads test case.
11. A number of days of free access is given to End User.
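As an illustration of the scaling exercise in step 5, the short sketch below computes speedup and parallel efficiency from a set of runtimes to locate a core-count sweet spot. The core counts and runtimes are invented placeholders, not the team's measurements:

    # Illustrative only: speedup and efficiency from benchmark runtimes.
    # All numbers below are made-up placeholders.
    runtimes_s = {8: 6.0 * 3600, 16: 3.4 * 3600, 32: 2.0 * 3600, 64: 1.5 * 3600}

    base = min(runtimes_s)  # smallest core count is the baseline
    for cores in sorted(runtimes_s):
        speedup = runtimes_s[base] / runtimes_s[cores]
        efficiency = speedup / (cores / base)
        print(f"{cores:3d} cores: speedup {speedup:5.2f}, "
              f"efficiency {efficiency:.0%}")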

CHALLENGES

The team met via a conference call and agreed upon the list of steps that made up the end-to-end process. Setting up the end user account and having the software licenses issued was done quickly. In order for the End User to upload their model via SSH, they needed to get permission from their internal IT group, which took some time. Once the model was uploaded, the CAE Expert ran the model at various core counts and produced a routine benchmark report for the End User to review (see results in the table below). The remote viz demo went smoothly, but when the End User tried to run the software themselves, it took both the Resource Provider's and the End User's IT network teams to open the necessary ports, which took much longer than anticipated. Once the ports were open, the remote viz post-processing experience was better than expected. Analysis output files still needed to be shipped back to the End User for future reuse, additional post-processing, etc. Data transfer via the network was found to be slow. Final results might be better transferred on an external USB hard drive via FedEx.

BENEFITS

Here are the top 3 benefits of participating in the experiment for each of the team members:

End User
1. Gained an increased understanding of what is involved in turning on and using a cloud-based solution for computational work with the Abaqus suite of finite element software.
2. Determined that shifting computational work to the cloud during periods of full utilization of in-house compute resources is a viable approach to ensuring analysis throughput.
3. Participation in the experiment allowed direct assessment of the speed and integrity of remote visualization of computational models (both pre- and post-processing) for a variety of model and output database sizes. SGI/NICE DCV provided a robust solution, which permitted fast and accurate manipulation of the models used in the study.

Software Provider
1. I was able to hear from an experienced Abaqus user that doing remote post-processing from a client machine in New Hampshire against an SGI Cyclone server in California provided a good user experience.
2. I was able to hear from an end user that managing the networking requirements (opening ports in firewalls) took some work but was manageable.
3. I have a reference point for an Abaqus user who views executing his Abaqus workflow on SGI Cyclone as a viable solution.

CAE Expert
1. Expanded my knowledge of analytical methods used in medical stent engineering with Abaqus/Standard.
2. Increased awareness of user interactions with cloud-based solutions and networking requirements.
3. The geographic distance of ~3,100 miles between the customer and the SGI Cyclone cloud resources confirms that distance is no longer a barrier in HPC and remote visualization. The Abaqus engineer commented that "the SGI Remote Visualization for cloud computing was faster and smoother than I expected."

Resource Provider
1. The ability to walk a new customer through our HPC cloud usage process.
2. Testing our remote visualization solution, which is in beta.
3. Working with a long-time CAE ISV partner to offer a joint cloud-based solution to run and view Abaqus jobs.

CONCLUSION

For an Abaqus user, SGI Cyclone is a viable solution for both compute and visualization; the visualization side was impressive.

Test model: an Abaqus model with roughly 1M nodes and 2M DOF (12 steps, 563 iterations), run on an SGI ICE 8200EX (Intel Xeon X5570, 2 x 4 cores @ 2.93 GHz, 24 GB/node, SUSE 11 SP1, InfiniBand QDR 4x fabric).

[Benchmark tables: runtime (hh:mm:ss) and speedup versus core and node count, all with NAS scratch storage and MP_MODE = MPI, once with host_split = 1 and once with host_split = 2, followed by the per-core-count performance improvement of host_split = 2 over host_split = 1. The numeric entries did not survive extraction.]

The host_split option, set in the abaqus_v6.env file, allows multiple MPI ranks per compute node to improve Abaqus/Standard message-passing performance on multi-socket compute nodes. Typically this setting benefits simulations with heavy contact and a low solver walltime per iteration. The host_split default is 1.
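As a concrete illustration of the host_split mechanism just described, the snippet below sketches how such a run might be launched. It is a minimal sketch only: the parameter name follows the Abaqus 6.x environment-file convention and should be verified against your release, and the job name and core count are hypothetical, not taken from this team's setup.

    # Sketch only: abaqus_v6.env uses Python syntax and is read from the job
    # directory, so a single assignment is appended to it (parameter name is
    # an assumption; check your Abaqus documentation).
    echo "mp_host_split = 2" >> abaqus_v6.env   # request 2 MPI ranks per node
    abaqus job=stent_model cpus=32 mp_mode=mpi interactive   # names illustrative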

Result of one of the CompBio Experiment teams: development of stents for a narrowed artery after balloon angioplasty to widen the artery and improve blood flow.

Case Study Authors - the End User, Scott Shaw, Matt Dunbar, Tony DeVarco, Eugene Kremenetsky, and Gregory Shirin.

TEAM 30 - Heat Transfer Use Case

MEET THE TEAM
Lluís M. Biscarri, Biscarri Consultoria SL, Director
Pierre Lafortune, Biscarri Consultoria SL, CAE Expert
Wibke Sudholt, CloudBroker GmbH, CTO and Managing Partner
Nicola Fantini, CloudBroker GmbH, CEO and Managing Partner
Members of the CloudBroker team for development and support
Joël Cugnoni, researcher and developer of CAELinux
Peter Råback, CSC IT Center for Science, Development Manager

Organizations Involved

Biscarri Consultoria SL (BCSL), based in Barcelona, is an SME engineering and consulting company specialized in CAE technology that offers simulation services and know-how transfer to industry. BCSL is focused on the use of open-source computational mechanics software and its application to industrial multi-physics engineering problems. The use of HPC and cloud computing hardware resources is one of BCSL's main interests as well.

CSC IT Center for Science Ltd. is administered by the Finnish Ministry of Education and Culture. CSC provides IT support and resources for academia, research institutes and companies.

CAELinux is an open-source project: a ready-to-use Linux distribution for CAE and scientific computing. The main goal of CAELinux is to promote the use of state-of-the-art open-source software in research and engineering. The current version of CAELinux is based on a 64-bit Ubuntu LTS release and includes the most popular open-source CAE applications such as OpenFOAM, Elmer FEM, Code-Aster, Code-Saturne, Calculix, Salome, Gmsh, Paraview and many more. CAELinux is available both as an installable LiveDVD image and as a virtual machine image on Amazon EC2.

CloudBroker GmbH is a spin-off company of ETH Zurich located in Zurich, Switzerland. It offers scientific and technical applications as a service in the cloud, for usage in fields such as biology, chemistry, health and engineering. Its flagship product, the CloudBroker Platform, delivers on-demand web and API access to application software on top of compute and storage resources in public or private clouds such as Amazon Web Services.

USE CASE

Background

In many engineering problems, fluid dynamics is coupled with heat transfer and other multi-physics phenomena. Simulating such problems in real cases produces large numerical models, so considerable computational power is required for simulation cycles

to be affordable. For SME industrial companies in particular, it is hard to implement this kind of technology in-house because of the investment cost and the IT specialization needed. There is great interest in making these technologies available to SME companies in the form of easy-to-use HPC platforms that can be used on demand. Biscarri Consultoria SL is committed to disseminating parallel open-source simulation tools and HPC resources in the cloud. CloudBroker offers its platform for various multi-physics, fluid dynamics, and other engineering applications, as well as life sciences, to small, medium and large corporations, along with related services. The CloudBroker Platform is also offered as a licensed in-house solution.

Current State

Biscarri Consultoria SL is exploring the capabilities of cloud computing resources for performing highly coupled computational mechanics simulations, as an alternative to acquiring new computing servers to increase the available computing power. For a small company such as BCSL, the strategy of using cloud computing resources to cover HPC needs has the benefit of not needing an IT expert to maintain in-house parallel servers, thus letting us concentrate our efforts on our main field of competence.

To address the needs of the end user, the team employed the following hardware and software resources on the provider side:
- Elmer: an open-source multi-physical simulation software package mainly developed by the CSC IT Center for Science
- CAELinux: a CAE Linux distribution that includes the Elmer software, also available as a virtual machine image in the AWS cloud
- CloudBroker Platform: CloudBroker's web-based application store, offering scientific and technical Software as a Service (SaaS) on top of Infrastructure as a Service (IaaS) cloud resources, already interfaced to AWS and other clouds
- Amazon Web Services (AWS), in particular Amazon's IaaS offerings EC2 (Elastic Compute Cloud) for compute and S3 (Simple Storage Service) for storage resources

Experiment Procedure

Technical Setup

The technical setup for the HPC Experiment was performed in several steps. These followed the principle of starting with the simplest possible solution and then growing it to fulfil more complex requirements in an agile fashion. Where possible, each step was first tested and iteratively improved before the next step was taken. The main steps were:

1. All team members were given access to the public CloudBroker Platform via their own accounts under a shared organization created specifically for the HPC Experiment. A new AWS account was opened by CloudBroker, the AWS credit loaded onto it, and the account registered in the CloudBroker Platform exclusively for the experiment team.
2. The Elmer software on the existing CAELinux AWS machine image was made available in the CloudBroker Platform for serial runs and tested with minimal test cases by CloudBroker and Joël Cugnoni. The setup was then extended to allow parallel runs using NFS and MPI.
3. Via Skype calls, screen sharing, chats, and contributions on Basecamp, the team members exchanged knowledge on how to work with Elmer on the CloudBroker Platform. The CloudBroker team gave further support for its platform throughout HPC Experiment Round 2. CloudBroker and BCSL performed corresponding validation-case runs to test the functionality.
4. The original CAELinux image was only available for normal, non-HPC AWS virtual machine instance types. Therefore, Joël Cugnoni provided Elmer 6.2 as optimized and non-optimized binaries, and the CloudBroker team deployed these on the CloudBroker Platform for the AWS HPC instance types with a 10 Gbit Ethernet network backbone, called Cluster Compute instances.
5. BCSL created a medium-sized benchmark case and performed scalability and performance runs with different numbers of cores and nodes of the Amazon Cluster Compute Quadruple and Eight Extra Large instance types and with different I/O settings. The results were logged, analyzed and discussed within the team.
6. The CloudBroker Platform setup was improved as needed. This included, for example, a better display of the number of cores in the web UI, the addition of artificial AWS instance types with fewer cores, and the ability to change the shared disk space.
7. BCSL tried to run a bigger benchmark case on the AWS instance type configuration that the scalability runs had shown to be preferable, that is, single AWS Cluster Compute Eight Extra Large instances.

Fig. 1 - The model employed in the scalability benchmark. The image on the right shows the temperature field, while the left image shows the velocity field at a certain time of the transient simulation.
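For readers unfamiliar with the serial-to-parallel extension in step 2, the commands below sketch how a partitioned Elmer run is typically driven from the shell. This is a minimal sketch under stated assumptions: the mesh directory, case file name and partition count are illustrative, not the team's actual setup.

    # Split an ElmerSolver-format mesh into 16 partitions using METIS.
    ElmerGrid 2 2 room_mesh -metis 16
    # ElmerSolver_mpi reads the case file named in ELMERSOLVER_STARTINFO.
    echo "case.sif" > ELMERSOLVER_STARTINFO
    # Run one MPI rank per mesh partition.
    mpirun -np 16 ElmerSolver_mpi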

Validation Case

First, a validation case was defined to test the whole simulation procedure. This case was intentionally simple, but had the same characteristics as the more complex problems used for the rest of the experiment. It was an idealized 2D room with a cold-air inlet on the roof (T = 23ºC, V = 1 m/s), a warm section on the floor (T = 30ºC, V = 0.01 m/s) and an outlet on a lateral wall near the floor (P = 0.0 Pa). The initial air temperature was 25ºC. The mesh was created with Salome V6 and consists of 32,000 nodes and 62,000 linear triangular elements. The solution is transient. The Navier-Stokes and heat equations were solved in a strongly coupled way. No turbulence model was used. Free-convection effects were included.

The mesh for the benchmark analysis was a much finer one on the same geometry, consisting of about 500,000 linear triangular elements. The warm section on the floor was removed and the lateral boundaries had an open condition (P = 0.0 Pa).

Job Execution

The submission of jobs to be run at AWS was done through the web interface of the CloudBroker Platform. The procedure was as follows:
- A job was created on the CloudBroker Platform, specifying job name, software, instance type and AWS region
- Case and mesh partition files were compressed, uploaded to the CloudBroker Platform, and attached to the created job
- The job was submitted to the selected AWS resource
- Result files were downloaded from the CloudBroker Platform and post-processed on a local workstation
- Scalability parameters were calculated from the job output log file data

Fig. 2 - Streamlines at the inlet section.
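The last step of the list above, turning logged wall-clock times into scalability parameters, is simple arithmetic; a sketch follows, with placeholder numbers rather than the team's measured values.

    # Speedup and parallel efficiency from two logged wall-clock times (seconds).
    T_BASE=5240   # runtime at the baseline core count (placeholder value)
    T_PAR=820     # runtime on N cores (placeholder value)
    N=8
    echo "speedup:    $(echo "$T_BASE / $T_PAR" | bc -l)"
    echo "efficiency: $(echo "$T_BASE / ($T_PAR * $N)" | bc -l)"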

CHALLENGES

End User

The first challenge for BCSL in this project was to learn whether the procedure for running Elmer jobs on a cloud computing resource such as AWS is easy enough to be a practical alternative to in-house calculation servers.

The second challenge was to determine the level of scalability of the Elmer solver running at AWS. Here we observed good scalability when the instance employed was the only computational node. When running a job on two or more computational nodes, scalability dropped dramatically, showing that communication between cores on different computational nodes slows down the process. AWS uses 10 Gbit Ethernet as its backbone network, which seems to be a limitation for this kind of simulation.

After the scalability study with the 500,000-element mesh was performed, a second scalability test was attempted with a new mesh of about 2,000,000 elements. However, the jobs submitted for this study to Cluster Compute Quadruple Extra Large and Cluster Compute Eight Extra Large instances have not been run successfully yet. Further investigations are in progress to better characterize the network bottleneck as a function of problem size (number of elements per core) and to establish whether it is related to MPI communication latency or to the NFS throughput of the results.

Resource Provider and Team Expert

On the technical side, most challenges were mastered by already existing features of the CloudBroker Platform or by small improvements. For this it was essential to follow the stepwise agile procedure outlined above, partly ignoring the stiffer framework suggested by the default HPC Experiment tasks on Basecamp.

Unfortunately, AWS HPC cloud resources are limited to a 10 Gbit Ethernet network, which was not sufficient in terms of latency and throughput to run the experiment efficiently on more than one node in parallel. The following options are possible:
1. Run the experiment on one large node only, that is, an AWS Cluster Compute Eight Extra Large instance with 16 cores
2. Run several experiment jobs independently in parallel with different parameters on the AWS Cluster Compute Eight Extra Large instances
3. Run the experiment on another cloud infrastructure that provides low latency and high throughput using technology such as InfiniBand

The CloudBroker Platform allows for all of the variants described above. Variants 2 and 3 were not part of this experiment, but would be the next reasonable steps to explore in a further experiment round. In the given time, it was also not possible to try out all the different I/O optimization possibilities, which could provide another route to improving scalability.

A further challenge of the HPC Experiment was to bring together the expertise of all the partners involved. Each of them has experience with a separate part of the technical stack that needed to be combined here (the actual engineering use case, the Elmer CAE algorithms, the Elmer software package, the CloudBroker Platform, the AWS cloud). For example, it is often difficult to say from the outset which layer causes a certain issue, or whether the issue results from a combination of layers. Here it was essential for the success of the project to stimulate and coordinate the contributions of the team members. For the future, we envision making this procedure more efficient through decoupling, for example, by the software provider directly offering an already optimized Elmer setup in the CloudBroker Platform to the end users.

Finally, a general challenge of the HPC Experiment concept is that it is a non-funded effort (apart from the AWS credit). This means that the involved partners can only provide manpower on a best-effort basis, and paid projects during the same time usually take precedence. It is thus important that future HPC Experiment rounds take realistic business and commercialization aspects into account.

BENEFITS

Concerning the ease of using cloud computing resources, we concluded that this working methodology is very friendly and easy to use through the CloudBroker Platform. The main benefits for BCSL regarding the use of cloud computing resources were:
- Having external HPC capabilities available to run medium-sized CAE simulations
- Having the ability to perform parametric studies, in which a large number of small and medium-size simulations have to be submitted
- Externalizing all the IT work necessary to run in-house calculation servers

For CloudBroker, it was a pleasure to extend its platform and services to a new set of users and to Elmer as a new software package. Through the responses and results, we were able to further improve our platform and to gain additional experience with the performance and scalability of AWS cloud resources, particularly for the Elmer software.

CONCLUSIONS AND RECOMMENDATIONS

The main lesson learned at Biscarri Consultoria SL from our participation in HPC Experiment Round 2 is that collaborative work over the Internet, using online resources like cloud computing hardware, open-source software such as Elmer and CAELinux, and middleware platforms like CloudBroker, is a very interesting alternative to in-house calculation servers.

A backbone network such as 10 Gbit Ethernet connecting the computational nodes of a cloud computing platform seems not to be suitable for computational mechanics calculations that need to run on more than one large AWS Cluster Compute node in parallel. The need for

network bandwidth for the solution of the strongly coupled equations involved in such simulations makes faster network technology, such as InfiniBand, necessary to achieve time savings when running in parallel on more than a single AWS Cluster Compute instance with 16 cores.

For CloudBroker, HPC Experiment Round 2 has provided another proof of its methodology, which combines its automated web application platform with remote consulting and support in an agile fashion. The CloudBroker Platform could easily work with CAELinux and the Elmer software at AWS. User requirements and test outcomes even resulted in additional improvements, which are now available to all platform users. On the other hand, this round has shown again that there are still needs, for example, a reduction of latency and an improvement of throughput (i.e., by using InfiniBand instead of 10 Gbit Ethernet), to be fulfilled by dynamic cloud providers such as AWS regarding highly scalable parallel HPC resources. Their cloud infrastructure is currently best suited for loosely or embarrassingly parallel jobs such as parameter sweeps, or for highly coupled parallel jobs limited to single big machines.

Finally, despite online tools, the effort necessary for a project involving several partners like this one should not be underestimated. CloudBroker expects, though, that in the future more software like Elmer can be offered directly through its platform in an already optimized way, making usage more efficient.

Case Study Authors - Lluís M. Biscarri, Pierre Lafortune, Wibke Sudholt, Nicola Fantini, Joël Cugnoni, and Peter Råback.

TEAM 34 - Analysis of Vertical and Horizontal Wind Turbines

MEET THE TEAM

End User - Henrik Nordborg
Nordborg is a professor at a university HPC center in Switzerland.

Software Provider - ANSYS and NICE
The software used was ANSYS Fluent with three ANSYS HPC Packs, chosen for its strengths in analyzing complex fluid-dynamic systems, together with NICE visualization software.

Resource Provider - Penguin Computing
Penguin provides Linux-based servers, workstations, HPC systems and clusters, and Scyld ClusterWare.

HPC/CAE Expert - Juan Enriquez Paraled
Paraled is the manager of ANALISIS-DSC, a mechanical engineering service and consultancy company specialized in fluid, structural and thermal solutions.

USE CASE

The goal was to optimize the design of wind turbines using numerical simulations. The case of vertical-axis turbines is particularly interesting, since the upwind turbine blades create vortices that interact with the blades downstream. The full influence of this can only be understood using transient flow simulations, requiring large models to run for a long time.

CHALLENGES

In order to test the performance of a particular wind turbine design, a transient simulation had to be performed for each wind speed and each rotational velocity. This led to a large number of very long simulations, even though each model might not be very large. Since the different wind speeds and rotational velocities were independent, the computations could be trivially distributed on a cluster or in the cloud (see the sketch at the end of this case study). Another important use of HPC and cloud computing for wind power is parametric optimization. Again, if the efficiency of the turbine is used as the target function, very long transient simulations have to be performed to evaluate every configuration.

BENEFITS

The massive computing power required to optimize a wind turbine is typically not available locally. Since only some steps of the design require HPC and an on-site cluster would never be fully utilized, cloud computing offers an obvious solution.

Figure 1: 2D simulation of a rotating vertical wind turbine.

CONCLUSIONS AND RECOMMENDATIONS

The problem with cloud computing for simulations using commercial tools is that the number of licenses is typically the bottleneck. Obviously, having a large number of cores does not help if there are not enough parallel licenses. In our case, a number of test licenses were provided by ANSYS, which was very helpful.

It is not feasible to transfer data back and forth between the cluster and a local workstation. Therefore, any HPC facility needs to provide remote access for interactive use. Unfortunately, this was not available in our case.

A test performed on the Penguin cluster showed an 8% increase in speed (per core) compared with our local Windows cluster. This speedup was surprisingly small, given that Penguin uses a newer generation of CPUs with better theoretical floating-point performance. This again demonstrates that simulations on an unstructured grid are bandwidth limited.

To conclude, cloud computing would be an excellent option for these kinds of simulations if the HPC provider offered remote visualization and access to the required software licenses.

Figure 2: CFD simulation of a vertical wind turbine with 3 helical rotors.

Case Study Author - Juan Enriquez Paraled
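As referenced in the Challenges section, the independent wind-speed and rotational-velocity cases lend themselves to trivial distribution. The loop below is a sketch of one way to farm them out, assuming a Grid Engine-style scheduler; the journal template, job parameters and core counts are hypothetical, not the team's actual scripts.

    # One independent Fluent batch job per wind speed (m/s values illustrative).
    for v in 6 8 10 12; do
        sed "s/WIND_SPEED/$v/" template.jou > run_v${v}.jou   # fill in the velocity
        qsub -N turbine_v${v} -cwd -pe mpi 16 -b y \
             fluent 2ddp -g -t16 -i run_v${v}.jou             # headless parallel run
    done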

TEAM 36 - Advanced Combustion Modeling for Diesel Engines

MEET THE TEAM

End User and HPC Expert - Dacolt
Dacolt, headquartered in the Netherlands, offers software and services for CFD modeling of industrial combustion applications, providing innovative tools and expertise to support customers in realizing their fuel-efficiency and pollutant-emissions design goals.

Resource Provider - Penguin On Demand (POD)
POD is Penguin Computing's on-demand HPC cloud service.

Software Provider - ANSYS, Inc.
ANSYS develops and globally markets engineering simulation software and technologies widely used by engineers and designers.

USE CASE

Modeling combustion in Diesel engines with CFD is a challenging task. The physical phenomena occurring in the short combustion cycle are not fully understood. This especially applies to the liquid spray injection, the auto-ignition and flame development, and the formation of undesired emissions such as NOx, CO and soot. Dacolt has developed an advanced combustion model named Dacolt PSR+PDF, specifically meant to address these types of challenging cases in which combustion-initiating chemistry plays a large role. The Dacolt PSR+PDF model has been implemented in ANSYS Fluent and was validated on an academic test case (documented in an SAE paper). An IC-engine validation case is the next step, tackled in the context of the HPC Experiment in the Penguin Computing HPC cloud.

CHALLENGES

The current challenge for the end user operating with just in-house resources is that the computational requirements for these simulations are significant (i.e., more than 16 CPUs and one to three days of continuous running).

BENEFIT

The benefit for the end user of using remote resources was that remote clusters allow small companies to conduct simulations that previously were only possible for large companies and government labs.

Simulation result showing the flame (red) located on top of the evaporating fuel spray (light blue in the center).

End-user findings on the provided cloud access include:
- Startup:
  - POD environment setup went smoothly
  - ANSYS software installation and licensing did as well
- System:
  - POD system OS comparable to the OS used at Dacolt
  - ANSYS Fluent version the same as used at Dacolt
- Running:
  - Getting used to POD job scheduling
  - No portability issues with the CFD model in general
  - Some MPI issues related to Dacolt's User Defined Functions (UDFs)
  - Solver crash during the injection + combustion phase, to be investigated

Overall, we experienced easy-to-use SSH access to the POD cluster. The environment and software setup went smoothly through collaboration between POD and ANSYS. The remote environment, which nearly equaled the Dacolt environment, provided a head start. The main issue encountered was that the uploaded Dacolt UDF library for Fluent did not work in parallel out of the box. It is likely that the Dacolt User Defined Functions would have to be recompiled on the remote system.

Project results

An IC-engine case was successfully run until solver divergence, to be reviewed by Dacolt with ANSYS support. Dacolt model validation looks promising.
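Concerning the UDF issue noted above, the usual remedy is the one the team suggests: rebuild the UDF library on the remote cluster so that it links against the remote compiler and MPI stack. The snippet below is only a sketch of that idea; the directory name follows the common Fluent libudf layout, and the journal file that re-issues the compile-and-load commands is hypothetical.

    # Discard UDF binaries built on the local machine (lnamd64 is the usual
    # 64-bit Linux build directory inside a Fluent libudf tree).
    rm -rf libudf/lnamd64
    # Headless Fluent session driven by a journal that recompiles and loads
    # the UDF library on the remote system (journal name is illustrative).
    fluent 3ddp -g -t16 -i compile_udf.jou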

Anticipated challenges included:
- Account setup and end-user access
- Configuring the end user's CFD environment with ANSYS Fluent v14.5
- Educating the end user in using the batch queuing system
- Getting data in and out of the POD cloud

Actual barriers encountered:
- Running the end user's UDFs with Fluent in parallel gave MPI problems

CONCLUSIONS AND RECOMMENDATIONS
- Use of POD remote HPC resources worked well with ANSYS Fluent
- Although the local and remote systems were quite comparable in terms of OS, etc., components like MPI may not work out of the box
- Local and remote network bandwidth was good enough for data transfer, but not for tunneling CAE graphics over X
- Future use of remote HPC resources depends on the availability of pay-as-you-go commercial CFD licensing schemes

Case Study Author - Ferry Tap

TEAM 40 - Simulation of Spatial Hearing

MEET THE TEAM

End User
The end user is a manufacturer of consumer products. The end-user tasks were related to planning the simulations and post-processing the simulated data.

Software Provider and HPC Expert - Antti Vanne, Kimmo Tuppurainen, Tomi Huttunen
These team members are with Kuava Ltd. Kuava provides services for computational technology and simulations. Kuava's software products are Waveller Cloud, a platform for running and visualizing simulations in the cloud, and Datain, a tool for data acquisition, analysis and storage.

HPC Experts - Ville Pulkki, Marko Hiipakka
Pulkki and Hiipakka are researchers at Aalto University. They provided technical expertise for acoustic analysis in the post-processing of the simulation results.

USE CASE

A sound emitted by an audio device is perceived by the user of the device. The human perception of sound is, however, a personal experience. For example, spatial hearing (the capability to distinguish the direction of sound) depends on the individual shape of the torso, head and pinna (the so-called head-related transfer function, HRTF). To produce directional sounds via headphones, one needs HRTF filters that model sound propagation in the vicinity of the ear. These filters can be generated using computer simulations but, to date, the computational challenges of simulating HRTFs have been enormous due to:
- the need for a detailed geometry of head and torso;
- the large number of frequency steps needed to cover the audible frequency range; and
- the need for a dense set of observation points to cover the full 3D space surrounding the listener.

In this project, we investigated the fast generation of HRTFs using simulations in the cloud. The simulation method relied on an extremely fast boundary element solver, which is scalable to a large number of CPUs. The process of developing filters for 3D audio is long, but the simulation work of this study constitutes a crucial part of the development chain. In the first phase, a sufficient number of 3D head-and-torso geometries needed to be generated. A laser-scanned geometry of a commercially available test dummy was used in these simulations. Next, acoustic simulations to characterize the acoustic field surrounding the head and torso were performed. This was our task in the HPC Experiment. The Round 2 simulations focused on the effect of the acoustic impedance of the test dummy on

the HRTFs. Finally, the filters were generated from the simulated data; they will be evaluated by a listening test. This final part was done by the end user.

Simulations were run via Kuava's Waveller Cloud simulation tool using the system described below. The number of concurrent instances ranged between 6 and 20.

Service: Amazon Elastic Compute Cloud
Total CPU hour usage: 371 h
Type: High-CPU Extra Large Instance
- 7 GiB of memory
- 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
- 1690 GB of instance storage
- 64-bit platform
- I/O performance: high

One EC2 Compute Unit provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor; this is also equivalent to an early-2006 1.7 GHz Xeon processor.

CHALLENGES

Our main challenge was to develop interactive visualization tools for simulation data stored in the cloud.

BENEFITS

The main benefit resulted from the flexible resource allocation, which is necessary for efficient acoustic simulations: a large number of instances can be obtained for a short period of time. Other benefits included not having to invest in our own computing capacity. Especially in audio simulations, the capacity is needed in short bursts for fast simulation turnaround times, and the time between the simulation bursts, while the next simulation is planned (i.e., when no computational capacity is needed), is significant.

CONCLUSIONS AND RECOMMENDATIONS

The main lessons learned during Round 2 were related to using CPU optimization when compiling the code for cloud simulations. We observed that Amazon did not support all optimization features, even though the optimization should have been available on the instances used for the simulations. The problems were solved (with the kind help of Amazon support) by disabling some of the optimizations when compiling the code.
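To make the compiler remark above concrete, the pair of commands below sketches the kind of adjustment involved: backing off CPU-specific optimizations that a virtualized instance rejects. The flags and file name are examples of the general technique, not the exact options the Kuava team used.

    # Aggressive build: -march=native may emit vector instructions that the
    # virtualized CPU does not actually expose.
    gcc -O3 -march=native -c bem_solver.c
    # Conservative fallback: disable the offending extension and tune generically.
    gcc -O3 -mno-avx -mtune=generic -c bem_solver.c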

The man-hours accumulated during the experiment were 50 h for Kuava and 5 h for the end user. Total CPU-hour usage during the experiment was 371 h on High-CPU Extra Large Instances.

Fig. 1 - Simulation model (an acoustic test dummy). The dots indicate all locations of the monopole sound sources used in the simulations; the red dots are the sound sources used in this image. The figure in the middle shows the sound pressure level (SPL) in the left ear as a function of sound direction and frequency. On the right, the SPL relative to sound sources in the far field is shown.

Case Study Author - Tomi Huttunen

TEAM 44 - CFD Simulation of Drifting Snow

MEET THE TEAM

End User/CFD Expert - Ziad Boutanios
Boutanios is a Principal Engineer with Binkz, Inc., a Canada-based CFD consultancy firm.

Resource Provider - San Diego Supercomputer Center
SDSC provides cyberinfrastructure resources to scientists who require massive compute and data-handling capabilities to conduct their research.

HPC Experts - Koos Huijssen, Jian Tao
Huijssen is a Scientific Software Engineer with VORtech B.V., a scientific software engineering firm in the Netherlands that combines in-depth mathematical knowledge and professional software development to support its customers in developing, improving and maintaining technical-scientific simulation software. The HPC Expert in the first month of the experiment was Jian Tao, Research Scientist at Louisiana State University.

USE CASE

Binkz Inc. is a Canada-based CFD consultancy firm with fewer than five employees, active in the areas of aerospace, automotive, environmental and wind engineering, as well as naval hydrodynamics and process technologies. For Binkz's consultancy activities, simulation of drifting snow is necessary in order to predict the redistribution of accumulated snow by the wind around arbitrary structures. Such computations can be used to determine the snow-load design parameters of rooftops, which are not properly addressed by building codes at present. Other applications can be found in hydrology and avalanche-effects mitigation. Realistic simulation of snow drift requires a 3D two-phase, fully coupled CFD model that easily takes several months of computing time on a powerful workstation (~16 cores), with memory requirements that can exceed 100 GB in some cases; hence the need for computing clusters to reduce the computing time. The pay-per-use model of the cloud paradigm could be ideal for a small consultancy firm, reducing the fixed costs of acquiring and maintaining a computing cluster and allowing the direct billing of the computing resources used in each project.

The snowdrift simulations were performed with a customized OpenFOAM two-phase solver. OpenFOAM is a free, open-source CFD software package developed by OpenCFD Ltd at ESI Group and distributed by the OpenFOAM Foundation. It has a large user base across most areas of engineering and science, from both commercial and academic organizations. The input data consisted of a computational mesh of several million cells and a number of ASCII input files providing the physical and numerical parameters of the simulation. Output data consisted of several files containing the values of the velocity, pressure, volume fraction and

turbulence variables for each of the air and snow phases, in every computational cell and for each required flow time. These were used to generate snapshots of the flow field and drifting snow, as well as values of snow loads where required on and around the structure being analyzed.

End-to-end process:
- The project definition was agreed upon in an online meeting between the team expert and the end user, and SDSC's compute cluster 'Triton' was selected as the hardware resource to fulfil the large memory demands (~100 GB RAM) and fast interconnect required for good scalability. An initial budget of 1,000 core hours was assigned to the project.
- OpenFOAM was downloaded into the home directories. An initial attempt to build the solver with the PGI compilers was unsuccessful. Building with the Intel compilers was successful, but subsequent computational tests ended in segmentation faults never observed on other platforms. As a last-ditch effort, a final build was done with the gcc compiler, OpenFOAM's native compiler, albeit with several non-optimal fixes to make sure the build was available in time to get some tests done before the project deadline. At that point, about 40% of the allocated CPU time had been spent.
- Limited speedup tests were done with the gcc build due to the scarcity of time and resources left. The speedup tests showed the expected scalability behavior, with one anomalous occurrence never before observed on other platforms. A thorough investigation of the anomaly was considered outside the scope of the Experiment, given the non-optimal nature of the gcc build.

Efforts invested:
- Triton support: <10 hours on build attempts, system configuration, and tracking of the build.
- End user: more than 100 hours on build attempts, solver and test-case setup, testing the builds and analyzing the test results.
- Team expert: basic support, reporting and overall experiment management.
- Resources: ~900 core hours for building the software, testing the builds and performing initial tests for running large jobs.

CHALLENGES

The main challenge during the setup of the configuration was getting a successful build of OpenFOAM on the hardware resource.
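For context on the build effort described above, a stock gcc-based OpenFOAM source build normally follows the pattern sketched below. The version number and paths are illustrative; the team's actual build needed additional fixes beyond this happy path.

    # Select the OpenFOAM build environment (sets the WM_* variables and
    # the gcc toolchain shipped with the distribution).
    source ~/OpenFOAM/OpenFOAM-2.1.x/etc/bashrc
    # Compile libraries, solvers and utilities; keep the log for debugging.
    cd ~/OpenFOAM/OpenFOAM-2.1.x
    ./Allwmake > build.log 2>&1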

The main challenge during test execution was scheduling a test MPI simulation job requiring several parallel compute nodes on queues occupied by a high number of serial runs by other users prioritized in the queuing system. This resulted in deployment waiting times that were not acceptable in the workflow of the end user. Other queues exist on Triton that could provide better prioritization and response times, but they were not tested due to the limited time frame of the experiment.

BENEFITS

The first benefit of the experiment was the learning experience of building OpenFOAM with different compilers on different platforms. Past experience in compiling OpenFOAM on other CentOS systems had led us to believe this would not be a problem on Triton. Unfortunately, it was, and in the future one should make sure in advance that an optimized OpenFOAM build exists on the target resource, or the project plan should anticipate the time and labor required to obtain a good build. In this experiment, SDSC had agreed to provide computing time only, but even so the support staff committed a significant amount of their own time to assist with the OpenFOAM build. Given enough time, it is certain that the Triton support staff would have managed to provide optimal builds of OpenFOAM with all tested compilers.

Another lesson from the experiment is that, apart from a well-fitting hardware platform (as Triton would be), it is also important for production jobs to be launched on appropriate MPI queues that do not allow high numbers of smaller serial jobs to delay large parallel MPI jobs.

CONCLUSIONS AND RECOMMENDATIONS

OpenFOAM (or, more generally, a large open-source software package such as OpenFOAM) is best built on the platform it will run on. OpenFOAM is most easily built with the third-party software provided within the distribution. For the application of snow-drift simulations, running on a public/academic resource using a standard (i.e., non-prioritized) account yields unpredictable waiting times and significant computing delays when running concurrently with a high number of serial runs by other users. In another experiment round, we would recommend testing an alternative platform/queue with a different capacity, user base, or job queuing system that better fits the end user's workflow.

Fig. 1 - Close-up of the building model with simplified roof structure. The structured mesh is 1.25 million hexahedral cells.

Case Study Authors - Ziad Boutanios and Koos Huijssen

TEAM 46 - CAE Simulation of Water Flow Around a Ship Hull

MEET THE TEAM

End User - Andrew Pechenyuk, DMT
Pechenyuk is with Digital Marine Technology (DMT). The company was established in 2002 by a group of specialists in the fields of shipbuilding, ship repair and computer technologies. Today its main activities are ship hydrodynamics (e.g., hull form design and ship propulsion calculations) and cargo stowage and seafastening projects (e.g., heavy-lift transportation projects, strength calculations, etc.).

Software Provider - Andrey Aksenov, TESIS
Capvidia/TESIS is an international company whose strategic goal is offering advanced and economically sound solutions on the market of engineering products and services. FlowVision CFD software has been developed since 1991 by a team from the Russian Academy of Sciences, viz., the Institute for Computer-Aided Design, the Institute for Mathematical Modeling, and the Computing Centre. In 1999 the team joined Capvidia/TESIS and formed the CFD department, where FlowVision is developed further and commercialized. The first commercial version of FlowVision was released in March.

Resource Provider - Jesus Lorenzana, FCSCL
The Foundation of Supercomputing Center of Castile and León (FCSCL) is a public entity created by the Regional Government of Castile and León and the University of León, whose goal is to improve the research capabilities of the university, the research centers and the companies of Castile and León.

HPC Expert - Adrian Jackson, EPCC, The University of Edinburgh
EPCC is a leading European centre of excellence in advanced research, technology transfer and the provision of high-performance computing services to academia and industry. Based at The University of Edinburgh, it is one of Europe's leading supercomputing centres.

USE CASE

The goal of this project was to run CAE simulations of water flow around the hull of a ship much more quickly than was possible using available resources. Current simulations took a long time to compute, limiting the usefulness and usability of CAE for this problem. For instance, on the resources currently available to the end user, a simulation of seconds of real-time water flow took two to three weeks of computational time. We decided to run the existing software on an HPC resource to realize whatever runtime gains might be achieved by using larger amounts of computing resources.

Fig. 1 - Wave pattern around the ship hull.

Application software requirements

This project required the TESIS FlowVision 3.08.xx software. FlowVision is already parallelized using MPI, so we expected it to be able to utilize the HPC resources. However, it does require the ability to connect to the software from a remote location while the software is running, in order to access the software licenses and steer the computation. For the license keys, see the description in "FlowVision installation and preferences on Calendula Cluster.docx" (simulation/uploads/42).

Custom code or configuration of end user

This project necessitated upgrading the operating system on the HPC system to support some libraries required by the FlowVision software: the Linux version on the system was not recent enough, and one of the main system components, the glibc libraries, was not the correct version. We also had to open a number of ports to enable the software to connect to and from specific external machines (specified by their IP addresses).

Computing Resource

Resource requirements from the end user: about 8-16 nodes of the HPC machine, used for 5 runs of 24 hours each.

Resource details: there are two processors on each node (Intel Xeon, 3.00 GHz, 4 physical cores per processor), so each compute node has 8 physical cores. Each node also has 16 GB of memory, two 1 Gb Ethernet cards and one Mellanox InfiniBand card. This experiment was assigned 32 nodes (256 cores) to use for simulations.

How to request resources: to get access to the resources, you contact the resource provider. They provide an account quickly (in around a day).

How to access resources: the front end of the resource is accessed using SSH; you will need an account on the system to do this, using a command such as:

    ssh -X hpc07_1@calendula.fcsc.es -p 2222

Once you have logged into the system, you can run jobs using the Open Grid Scheduler/Grid Engine batch system. To use that system, you need to submit a job script using the qsub command.
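To illustrate the qsub step just described, here is a minimal sketch of a Grid Engine job script. The job name, parallel-environment name, slot count and solver command line are placeholders; the actual FlowVision invocation on Calendula would come from the installation notes referenced above.

    #!/bin/bash
    # Illustrative Grid Engine job script; submit with: qsub hull_job.sh
    #$ -N hull_flow        # job name (placeholder)
    #$ -cwd -j y           # run in the submit directory; merge stdout and stderr
    #$ -pe mpi 64          # parallel environment and slot count (site-specific)
    mpirun -np $NSLOTS ./fv_solver hull_case.fvproj   # solver command is a placeholder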

CHALLENGES

Current simulations take a long time to compute, limiting the usefulness and usability of the CAE approach for this problem. For instance, on the resources currently available to the end user, a simulation of seconds of real-time water flow takes two to three weeks of computational time. To improve this time to solution, we needed access to larger computational resources than we had available.

Scientific Challenge

A simulation of the viscous flow around the hull of a ship with a free surface was performed. The object of research was the hull of a river-sea dry-cargo vessel with an extremely high block coefficient (Cb = 0.9). The hull flow included complex phenomena, e.g., the wave pattern on the free surface and fully developed turbulent flow in the boundary layer. The main purpose of the simulation was the determination of towing resistance. In general, the dependence of towing resistance on the speed of the ship is used to predict the prime mover's power at the design stage. The present case considered a test example for which reliable experimental data exist. In contrast to the conventional method of model tests, CFD simulation methods have not been fully characterized regarding the reliability of the results, the computational resources and time costs, etc. For these reasons, computational grid generation and the scalability of the solution were the focus of this research.

Resources

FCSCL, the Foundation of Supercomputing Center of Castile and León, Spain, provided HPC resources in the form of a system of 288 HP blade nodes with 8 cores and 16 GB RAM per node.

Software

FlowVision is a new-generation, multi-purpose simulation system for solving practical CFD (computational fluid dynamics) problems. Its modern C++ implementation offers the modularity and flexibility to address the most complex CFD areas. A unique approach to grid generation (geometry-fitted sub-grid resolution) provides a natural link with CAD geometry and FE meshes. The ABAQUS integration through the Multi-Physics (MP) Manager supports the most complex fluid-structure interaction (FSI) simulations (e.g., hydroplaning of automotive tires).

FlowVision integrates the 3D partial differential equations (PDEs) describing different flows, viz., the mass, momentum (Navier-Stokes), and energy conservation equations. The system of governing equations is completed by state equations. If the flow is coupled with physical-chemical processes like turbulence, free-surface evolution, combustion, etc., the corresponding PDEs are added to the basic equations. Together, the PDEs, state equations, and closure correlations (e.g., wall functions) constitute the mathematical model of the flow. FlowVision is based on the finite-volume approach to the discretization of the governing equations. An implicit velocity-pressure split algorithm is used for the integration of the Navier-Stokes equations.

FlowVision is integrated CFD software: its pre-processor, solver, and post-processor are combined into one system. A user sets the flow model(s), physical and method parameters, and initial and boundary conditions (pre-processor), performs and controls calculations (solver), and visualizes the results (post-processor) in the same window. The user can stop the calculation at any time to change the required parameters, and then continue or restart the calculation.

Additional Challenges

This project continued from the first round of the cloud experiment. In the first round we faced the challenge that the end user for this project had a particular piece of commercial simulation software they needed to use for this work. The software required a number of ports to be open from the front end of the HPC system to the end user's machines, both for accessing the licenses for the software and to enable visualization, computational steering, and job preparation for the simulations. A number of issues had to be resolved to enable these ports to be opened, including security issues for the resource provider (requiring the open ports to be restricted to a single IP address or a small range of IP addresses) and educating the end user about the configuration of the HPC system (with front-end and back-end resources and a batch system to access the main back-end resources). These issues were successfully tackled. However, another issue was encountered: the Linux version of the operating system on the HPC resources was not recent enough, and one of the main system components, the glibc libraries, was not the version required by the commercial software. The resource provider was willing to upgrade the glibc libraries to the required version; however, this impacted another team during the first round. At the start of this second round of the experiment, the problem was resolved so simulations could be undertaken.

Outcome

The dependence of the towing resistance on the resolution of the computational grid (grid convergence) was investigated. The results show that grid convergence becomes good when grids with more than 1 mln computational cells are used.

Fig. 2 - Grid convergence, speed 12.5 knots: towing resistance (kN) versus number of computational cells (ticks at roughly 1, 1.2 and 1.5 mln); the experimental value is 197 kN.

The results of the simulation, performed over a wide range of towing speeds (Froude numbers) on the grid with about 1 mln computational cells, showed good agreement with the experimental data. The CFD calculations were performed at full scale, while the experimental results were obtained in the deep-water towing tank of the Krylov State Research Centre (model scale 1:18.7). The full-scale CFD results were compared to the recalculated results of the model test. The maximum error in the towing resistance of the hull was only 2.5%.

Fig. 3 - Comparison of the CFD and experimental data in dimensionless form: residual resistance coefficient C_R versus Froude number Fn, for CFD with free surface, CFD with a double-body model, and the towing tank.

Visualization of the free surface demonstrated the wave pattern, which corresponds well with the photos of the model tests. High-quality visualization of other flow characteristics was also available.

Fig. 4 - Free surface CFD, speed 13 knots (Fn = 0.182)

Fig. 5 - Pressure distribution on the hull surface (scale in Pa)

Fig. 6 - Shear stress distribution on the hull surface (scale in Pa)

Fig. 7 - Scalability test results: speedup (relative to 16 cores) versus number of cores, for the projects with 1 mln and 2 mln cells.

CONCLUSIONS AND RECOMMENDATIONS

Using HPC clouds offers users remarkable access to supercomputer resources. CFD users, with the help of commercial software, can greatly speed up their simulation of hard industrial problems. Nevertheless, existing access to these resources has the following drawbacks:
1. The commercial software must first be installed on the remote supercomputer.
2. It is necessary to provide a license for the software, or to connect to a remote license server.
3. The user can face many problems during the installation process, e.g., incompatibility of the software with the operating system, or incompatibility of additional third-party software like MPI, TBB libraries, etc.
4. All these steps require the user to be in contact with the software vendor or cluster administrator for technical support.

From our point of view, it is necessary to overcome all these problems in order to use commercial software on HPC clouds. Commercial software packages used for simulation often have licensing and operational requirements that mean either the resources they run on need to access external machines, or software needs to be installed locally to handle licenses, etc. New users of HPC resources often require education in the general setup and use of such systems (e.g., the fact that you generally access the computational resources through a batch system rather than logging on directly). Basecamp was useful for enabling communication between the project partners, sharing information, and ensuring that no single person held up the whole project.

Communication between the client side and the solver side of modern CAE systems ordinarily uses a network protocol. Thus, organizing the work over the SSH protocol requires additional operations, including port forwarding and data translation (see the sketch below). On the other hand, when properly configured, the client interface is able to manage the solver in the same manner as on a local network.

Case Study Authors - Adrian Jackson, Jesus Lorenzana, Andrew Pechenyuk, and Andrey Aksenov.
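As a concrete illustration of the port forwarding mentioned in the conclusions, the command below tunnels client-to-solver connections over the cluster's SSH front end. The host and SSH port are taken from the access instructions earlier in this case study; the forwarded port numbers are placeholders, since the actual FlowVision and license-server ports are site- and installation-specific.

    # Forward two local ports over SSH to the cluster front end, so a local CAE
    # client can reach the remote solver and license server (ports illustrative).
    ssh -p 2222 -L 15000:localhost:15000 -L 1999:localhost:1999 \
        hpc07_1@calendula.fcsc.es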

TEAM 47 - Heavy Duty Abaqus Structural Analysis using HPC in the Cloud

MEET THE TEAM

End User - Frank Ding
Ding is the Engineering Analysis and Computing Manager at Simpson Strong-Tie in Northern California. In this experiment he represents the CAE end user, focused on creating structural products that help people build safer and stronger homes and buildings. Simpson Strong-Tie is considered a leader in structural systems research, testing and innovation, and is one of the largest suppliers of structural building products in the world.

Software Provider - Matt Dunbar
Dunbar is the Chief Architect and CAE technical specialist at Simulia, Dassault Systèmes, at its Rhode Island facility. He represents the application-level expertise in this experiment. Simulia is one of the main CAE vendors.

Resource Provider - Steve Hebert
Hebert is one of the founders and the CEO of Nimbix, located in Texas, which in this team is the provider of cloud-based high-performance computing infrastructure and applications. Rob Sherrard is the other co-founder of Nimbix and VP of Service Delivery.

HPC Expert and Team Manager - Sharan Kalwani
Kalwani is an independent HPC segment architect with DataSwing Corporation and in this project provides the overall subject matter expertise, project management, solution expertise and team leadership. Kalwani is located in Michigan.

New to the team and a critical part of Round 2:
- Antonio Arena, Solutions Architect, NICE Software; network and middleware team expert
- Cynthia Underwood, IT consultant and subject matter expert at NICE Software
- Dennis Nagy, Mentor, Principal at Beyond CAE

USE CASE

In Round 1 of the HPC cloud experiment, the team established that computational use cases could indeed be submitted successfully using the cloud API and infrastructure. The objective of Round 2 was to explore the following: How can the end-user experience be improved? For example, how could the post-processing of HPC CAE results kept in the cloud be viewed on a remote desktop? Was there any impact of the security layer on the end-user experience?

The end-to-end process remains widely dispersed: end-user demand was tested in two different geographic areas, the continental USA and Europe. The network bandwidth and latency were expected to play a major role, since they impact the workflow and user perception of the

ability to deliver cloud HPC capability, not in the compute domain but in the pixel-manipulation domain. Here is an example of the workflow:
1. Once the job finishes, the end user receives a notification email; the result files remain at the cloud facility, i.e., they are NOT transferred back to the end user's workstation for post-processing.
2. The post-processing is done using a remote desktop tool, in this case the NICE Software DCV infrastructure layer on the HPC provider's visualization node(s).

Typical network transfer sizes (upstream and downstream) were expected to be modest, and it is this impact that we hoped to measure, thus making these parameters tunable. This represented the major component of the end-user experience. The team also expanded by almost 100%, bringing in more expertise and support to tackle the last stage of the whole process and to make the end-user experience adjustable depending on several network-layer-related factors.

CHALLENGE

Fig 1. - Typical end-user screen manipulation(s)

The major challenge, now widely accepted to be the most critical, was the end-user perception and acceptance of the cloud as a smooth part of the workflow. Here remote visualization was necessary to see whether the simulation results (left remotely in the cloud) could be

viewed and manipulated as if they were local on the end user's desktop. In contrast to Round 1, and to bring real network expertise to bear on this aspect, NICE's DCV was chosen to help deliver this, as it:
- Is application neutral
- Has a clean and separate client (free) and server component
- Provides tuning parameters that can help overcome bandwidth issues

Several tests were conducted and carefully iterated over settings such as image update rate, bandwidth selection, codecs, etc. The model used for the final, successful user acceptance of the remote visualization settings is summarized below:

TABLE 1. CAST-IN-PLACE MECHANICAL ANCHOR CONCRETE ANCHORAGE PULLOUT CAPACITY ANALYSIS (FEA STATS)
- Materials: steel & concrete
- Procedure: 3D nonlinear contact, fracture & damage analysis
- Number of elements: 1,626,338
- Number of DOF: 1,937,301
- Solver: ABAQUS/Explicit in parallel
- Solving time: 11.5 hours on a 32-core Linux cluster
- ODB result file size: 2.9 GB

Setup

We made a number of end-user trials. First, DCV was installed with both a Windows and a Linux client. Next, a portal window was opened, usually at the same time as the end-user trial, to observe the demand on the serving infrastructure (see diagram). This ensured that there was sufficient bandwidth and capacity at the cloud end. The end node hosted an NVIDIA graphics accelerator card; an initial concern was whether the card version was supported or had an impact. DCV has the ability to apply a sliding scale of pixel compression, which involves skipping certain frames in order to keep the flow smooth.

The post-processing underlying infrastructure (cloud end):

Fig 2. - DCV layer setup

The post-processing underlying infrastructure (end-user space):

Fig 3. - DCV-enabled post-processing (end-user view)

Fig 4. - Ingress/egress test results/profile

Figure 4 shows that the cloud Internet measurements peaked at 12 Mbits/sec, but generally hovered at or below 8 Mbits/sec for this particular session. This profile graph is a good representation of what has been seen in the past on DCV sessions. The red line (2 Mbits/sec) marks where a consistent end-user experience for this particular graphic size was observed.

CONCLUSIONS AND RECOMMENDATIONS

Here is a summary of the key results from our Round 2 experiment (a sketch for reproducing the basic network measurements follows the observation tables at the end of this section):

- End-point Internet bandwidth variability: depending on when it is conducted, a vendor-neutral test applet result ranges from 1 Mbps to 10 Mbps. The pipe bandwidth was expected to be 20 Mbits/sec, but when it was shared by the office site running normal enterprise applications such as Exchange, Citrix, etc., such variation was not conducive to a quality end-user experience.
- Switching to another pipe (with a burst mode of 50 to 100 Mbits/sec): further testing showed that the connection was not stable, and ABAQUS/Viewer graphics-window freezes were experienced after the session had been idle for a while. This required local IT to troubleshoot the issue.
- There were no significant differences between Windows- and Linux-hosted platforms.
- NICE DCV/EnginFrame is a good platform for remote visualization if stable Internet bandwidth is available. Some of the parameters for the connection performance:
  - VNC connection line-speed estimate: 1-6 Mbps, RTT ~62 ms
  - DCV bandwidth usage: average 100 KiB to 1 MiB
  - DCV frame rate: 0-10 FPS; >5 FPS acceptable, >10 FPS smooth
- We tried Linux and Windows desktops. Because of the bandwidth randomness and variability, it was not possible to create a good baseline to compare the performance of the two desktops.
- The graphics cards did not have any impact on the end-user experience. However, the model size and graphic image pixel size may play a major role, and the current experiment did not have enough time to study and characterize this issue. The ABAQUS model used in this test case does not put much demand on the graphics card; we saw only 2% usage on the card.
- There was usually sufficient network capacity and bandwidth at the cloud serving end. The last-mile delivery capability at the end-user site was the most important, and perhaps the only, determining factor influencing the end-user experience and perception.

Beyond the cloud service provider, a local or end user IT support person with network savvy is probably a necessary part of the infrastructure team in order to deliver robust and repeatable post-processing visual delivery. This incurs a cost.

The security aspect could not be tested, as the time and effort required exceeded what was allotted.

Part of the end user experience from Round 1 was to better document the setup; this documentation can be found in the Appendix and shows a smooth, easy-to-follow flow.

Major single conclusion and recommendation
Any site that wishes to benefit from this experience needs to prioritize the last-mile issue.

End User Experience Observations & Data Tables:

Use Case | Measure | Remote thru DCV | Local Desktop
Loading ODB file | response time (seconds) | |
Copy & paste contour plots | functionality | does not work | works
Copy & paste XY data | functionality | works | works
Creating animation | response time (seconds) | 12 | 8
Model dynamic manipulation | response time | acceptable when bandwidth > 2 Mbits/sec | no delay
Bandwidth Usage | Average (KiB) / Peak (MiB) | Image Quality = | Image Quality =

Note: Image Quality specifies the quality level of dynamic images when using TCP connections. Higher values correspond to higher image quality and more data transfer; lower values reduce quality and reduce bandwidth usage.

Network latency for a round trip from the DCV remote visualization server:
Ping statistics for : Packets: Sent = 4, Received = 3, Lost = 1 (25% loss)
Approximate round trip times in milliseconds: Min = 56 ms, Max = 58 ms, Average = 56 ms

Case Study Authors - Frank Ding, Matt Dunbar, Steve Hebert, Rob Sherrard, and Sharan Kalwani.

TEAM 52 - High-Resolution Computer Simulations of Blow-off in Combustion Systems

MEET THE TEAM
End User - Combustion Science & Engineering, Inc. (CSE, USA). For more than fourteen years, CSE has been dedicated to the study, advancement, and application of combustion and fire science.
Compute Resource Provider - Bull extreme factory (Bull XF, France)
Software Provider - ESI Group (OpenFOAM)
HPC Expert - Dacolt (Netherlands)

USE CASE
The undesired blow-off of turbulent flames in combustion devices can be a very serious safety hazard; hence, it is of interest to study how flames blow off. Simulations offer an attractive way to do this. However, due to the multi-scale nature of turbulent flames, and the fact that the simulations are unsteady, these simulations require significant computer resources. This makes the use of large, remote computational resources extremely useful. In this project, a canonical test problem of a turbulent premixed flame is simulated with OpenFOAM and run on extremefactory.com.

Fig. 1 - Schematic of the bluff-body flame holder experiment. Sketch of the Volvo case: a premixed mixture of air and propane enters the left of a plane rectangular channel. A triangular cylinder located at the center of the channel serves as a flame holder.

Application software requirements
OpenFOAM can handle this problem very well; download:

Custom code or configuration of end user
OpenFOAM input files are available at: These files were used in a 3D simulation that ran OpenFOAM (reactingFoam, to be precise) on 40 cores. To get an idea of how to run these files, have a look at the section "Run in parallel" in:

Computing resource requirements: at least 40 cores.
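As a concrete illustration of that "Run in parallel" workflow, the sketch below drives a standard OpenFOAM parallel run from Python. The case directory name is hypothetical, and this is the generic OpenFOAM pattern rather than the team's actual extreme factory job script:

```python
# Minimal sketch of a standard OpenFOAM parallel run, driven from Python.
# Assumes OpenFOAM is installed and sourced in the environment; the case
# directory name "volvoCase" is hypothetical.
import subprocess

CASE = "volvoCase"
CORES = 40  # matches the 40-core runs described above

# Split mesh and fields into subdomains (reads system/decomposeParDict,
# whose numberOfSubdomains entry must equal CORES).
subprocess.run(["decomposePar", "-case", CASE], check=True)

# Run the reacting-flow solver under MPI across the subdomains.
subprocess.run(
    ["mpirun", "-np", str(CORES), "reactingFoam", "-case", CASE, "-parallel"],
    check=True,
)

# Merge the per-processor results back into a single dataset.
subprocess.run(["reconstructPar", "-case", CASE], check=True)
```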

Fig. 2 - Predicted temperature contour field for the Volvo case using OpenFOAM.

CHALLENGES
The current challenge for the end user, with just his in-house resources, is that the computational resources needed for these simulations are significant (i.e., more than 100 CPUs and 1-3 days of continuous running).

BENEFITS
Remote clusters allow small companies to conduct simulations that were previously only possible for large companies and government labs.

CONCLUSIONS
Running reactingFoam for a simulation of a bluff-body-stabilized premixed flame requires a mesh of less than a quarter of a million cells. That is not much, but the simulations need to run for a long time, and they are part of a parametric study that needs more than 100 combinations of parameters; running one or two huge simulations is not the goal here. The web interface was easy to use, so much easier than running on Amazon's EC2 that I was able to run OpenFOAM properly without even reading the instructions. Nonetheless, it was not very clear how to download all the data once the simulation ended. Apart from that, the simulation ran satisfactorily; there were some errors at the end, but these were expected.

Suggestion: a key advantage of OpenFOAM is that it allows us to tailor OpenFOAM applications to different problems. This requires making some changes in the code and compiling with wmake, which can be done on Amazon EC2; it is not clear how this can be done with the present interface. A future test might be to run myReactingFoam instead of reactingFoam.

Case Study Author - Ferry Tap

TEAM 53 - Understanding Fluid Flow in Microchannels

MEET THE TEAM
End User and Software Provider - Computational Physics and Mechanics Group of Dr. Ganapathysubramanian at Iowa State University
Resource Provider - Rutgers Discovery Informatics Institute (RDI2) at Rutgers University, and FutureGrid, XSEDE, NERSC, UCLM Spain, IHPC Singapore
HPC Experts - Rutgers Discovery Informatics Institute team

TEAM MEMBERS
- Javier Diaz-Montes, Research Associate, Rutgers Discovery Informatics Institute, Rutgers University
- Baskar Ganapathysubramanian, Assistant Professor, Dept. of Mechanical Engineering, Iowa State University
- Manish Parashar, Professor, Rutgers Discovery Informatics Institute, Rutgers University
- Ivan Rodero, Research Associate, Rutgers Discovery Informatics Institute, Rutgers University
- Yu Xie, Research Assistant, Department of Mechanical Engineering, Iowa State University
- Jaroslaw Zola, Research Associate Professor, Rutgers Discovery Informatics Institute

USE CASE
Problem Description
The end user developed a parallel MPI solver for the Navier-Stokes equations. With this solver, the end user can simulate the flow in a microchannel with an obstacle for a single configuration of the fluid speed, the microchannel size, and the obstacle geometry (see Figure 1). A single simulation typically requires hundreds of CPU-hours.

Fig. 1 - Example flow in a microchannel with a pillar.

Four variables characterize the simulation: channel height, pillar location, pillar diameter, and Reynolds number. The end user sought to construct a phase diagram of possible fluid flow behaviors to understand how the input parameters affect the flow. Additionally, the end user wanted to create a library of fluid flow patterns to enable analysis of their combinations. The problem has many significant applications in the context of medical diagnostics, bio-medical engineering, constructing structured materials, etc.
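To give a sense of the scale implied by a four-parameter sweep like this, the sketch below enumerates an illustrative grid and estimates the core-hour budget. The grid resolutions and the per-run cost are assumptions for illustration, not the team's actual values:

```python
# Illustrative sketch of the four-parameter sweep described above.
# Grid resolutions and per-simulation cost are assumptions, not the
# values Team 53 actually used.
import itertools

channel_heights = [h / 10 for h in range(1, 11)]   # 10 normalized heights
pillar_locations = [x / 10 for x in range(1, 11)]  # 10 positions along channel
pillar_diameters = [d / 10 for d in range(1, 11)]  # 10 diameters
reynolds_numbers = [10, 20, 40]                    # 3 flow regimes

grid = list(itertools.product(
    channel_heights, pillar_locations, pillar_diameters, reynolds_numbers))

CORE_HOURS_PER_RUN = 225  # assumed average ("hundreds of CPU-hours" per run)
print(f"{len(grid)} simulations, ~{len(grid) * CORE_HOURS_PER_RUN:,} core-hours")
# -> 3000 simulations, ~675,000 core-hours: far beyond a single allocation,
#    which is why the team federated multiple HPC resources.
```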

CHALLENGES
The problem was challenging for the end user, as it required thousands of MPI-based simulations, which collectively exceeded the computational throughput offered by any individual HPC machine. Although the end user had access to several high-end HPC resources, executing thousands of simulations requires complex coordination and fault tolerance, which were not readily available. Finally, the simulations are highly heterogeneous, and their computational requirements were hard to estimate a priori, adding another layer of complexity.

The Solution
To tackle the problem, the team decided to use multiple federated, heterogeneous HPC resources. The team proceeded in four stages:
1. A preparatory phase, in which the HPC experts gained an understanding of the domain problem and formulated a detailed plan to solve it; this phase included a face-to-face meeting between the end user and the HPC experts.
2. A software-hardware deployment stage, in which the HPC experts deployed the end user's software and implemented the required integration components. Here, minimal or no interaction with systems administrators was required, thanks to the flexibility of the CometCloud platform used in the experiment.
3. A computational phase, in which the actual simulations were executed.
4. Data analysis, in which the output of the simulations was summarized and post-processed by the end user.

The developed approach is based on the federation of distributed, heterogeneous HPC resources, aggregated completely in user space. Each aggregated resource acts as a worker executing simulations; any resource can join or leave the federation at any point in time without interrupting the overall progress of the computations. Each aggregated resource also acts as temporary storage for the output data. The data is compressed on the fly and transferred, using the RSYNC protocol, to a central repository for simple, sequential post-processing. In general, the computational platform used in this experiment takes the concept of volunteer computing to the next level, with desktops replaced by HPC resources. As a result, the end user's application gains cloud-like capabilities. In addition to solving an important and urgent problem for the end user, the experiment serves as a proof of concept for applying a user-oriented computational federation to solve large-scale computational problems in engineering.
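The pull-based master/worker pattern described above can be sketched in a few lines of Python. This is not CometCloud's API; it is only a minimal illustration of how workers on federated resources pull simulation tasks independently, so that a resource leaving simply stops pulling:

```python
# Minimal sketch of the pull-based master/worker pattern described above.
# This is NOT CometCloud's API; it only illustrates how workers on federated
# resources pull simulation tasks independently from a shared queue.
import queue
import threading

tasks = queue.Queue()
# Two illustrative parameter tuples: (height, pillar location, diameter, Re).
for params in [(0.5, 0.3, 0.2, 20), (0.5, 0.4, 0.2, 40)]:
    tasks.put(params)

def run_simulation(worker_name, params):
    # Stand-in for launching one MPI-based flow simulation on this resource.
    print(f"{worker_name} finished {params}")

def worker(name):
    # Each federated resource runs a loop like this. A resource that leaves
    # simply stops pulling; remaining tasks stay queued for other workers.
    # (The real platform also re-queues tasks lost mid-run; omitted here.)
    while True:
        try:
            params = tasks.get_nowait()
        except queue.Empty:
            return
        run_simulation(name, params)

threads = [threading.Thread(target=worker, args=(f"worker-{i}",)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```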

CONCLUSIONS AND RECOMMENDATIONS
Several observations emerged from the experiment:
- A good understanding of the domain-specific details by the HPC experts was important to the fluent progress of the experiment.
- Close collaboration with the end user, including face-to-face meetings, was critical for the entire process.
- Although it may at first seem counterintuitive, working within the limits set by different HPC centers (i.e., using only SSH access, without special privileges) greatly simplified the development process. At the same time, maintaining a friendly relationship with the respective systems administrators helped to shorten the response time for common operational issues.

General Challenges and Benefits of Using UberCloud
The main difficulty was obtaining a sufficient number of HPC resources that collectively would provide the throughput needed to solve the end user's problem. This challenge was solved by interacting with several HPC centers and then exploiting the elasticity offered by CometCloud to add extra resources during the experiment; for example, several machines were federated after the experiment had already been running for five days. UberCloud greatly simplified the process of obtaining computational resources. The ability to quickly contact various HPC providers was central to the success of the experiment. UberCloud provided a well-structured and organized environment to test new approaches to solving large-scale scientific and engineering problems. Following well-planned steps with clearly defined deadlines, as well as having a central message board and document repository, greatly simplified and accelerated the development process.

Experiment Highlights
The main highlights of the experiment are summarized below:
- 10 different HPC resources from 3 countries federated using CometCloud
- 16 days, 12 hours, 59 minutes, and 28 seconds of continuous execution
- 12,845 MPI-based flow simulations executed
- 2,897,390 core-hours consumed
- 400 GB of output data generated
- The most comprehensive data to date on the effect of pillars on microfluidic channel flow gathered

Case Study Authors - Javier Diaz-Montes, Baskar Ganapathysubramanian, Manish Parashar, Ivan Rodero, Yu Xie, and Jaroslaw Zola.

Acknowledgments
This work is supported in part by the National Science Foundation (NSF) via grants number IIP and DMS (RDI2 group), and CAREER and PHY (Iowa State group).

This project used resources provided by: the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by NSF grant number OCI ; FutureGrid, which is supported in part by NSF grant number OCI ; and the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy (DOE) under contract number DE-AC02-05CH . The authors would like to thank the SciCom research group at the Universidad de Castilla-La Mancha, Spain (UCLM) for providing access to Hermes, and the Distributed Computing research group at the Institute of High Performance Computing, Singapore (IHPC) for providing access to Libra. The authors would like to acknowledge the Consorzio Interuniversitario del Nord est Italiano Per il Calcolo Automatico, Italy (CINECA), Leibniz-Rechenzentrum, Germany (LRZ), Centro de Supercomputacion de Galicia, Spain (CESGA), and the National Institute for Computational Sciences (NICS) for their willingness to share their computational resources. The authors would like to thank Dr. Olga Wodo for discussions and help with the development of the simulation software, and Dr. Dino DiCarlo for discussions about the problem definition. The authors express their gratitude to all administrators of the systems used in this experiment, especially Prentice Bisbal from RDI2 and Koji Tanaka from FutureGrid, for their efforts to minimize downtime of computational resources and for their general support.

TEAM 54 - Analysis of a Pool in a Desalinization Plant

MEET THE TEAM
End User - Juan Enriquez Paraled, Manager of ANALISIS-DSC
Software Provider - ANSYS CFX and CEI EnSight Gold. We used the ANSYS CFX software and three of its ANSYS HPC Packs because of its strengths in analyzing complex fluid dynamic systems. Additionally, we used CEI EnSight Gold to visualize and analyze the CFD results without the need to download big files over the Internet.
Resource Provider - Gompute (48 dedicated cores)
HPC/CAE Expert - Henrik Nordborg, professor at the HPC center of the University of Applied Sciences Rapperswil, Switzerland

USE CASE
Many areas of the world have no available fresh water even though they are located in coastal areas. As a result, in recent years a completely new industry has been created to treat seawater and transform it into tap water. This transformation requires that the water be pumped into special equipment, which is very sensitive to cavitation. Therefore, a correct and precise water flow at the intake must be forecast before the installation is built.

The CFD analysis of air-water applications using free-surface modeling is highly complex. The computational mesh must correctly capture the fluid interface, and the number of iterations required to obtain a physically and numerically converged solution is very high. If both requirements are not met, the forecast solution will not even be close to the real-world solution.

CHALLENGES
The end user needed to obtain a physical solution in a short period of time, as the time to analyze the current design stage was limited. The time limitation mandated the use of remote

HPC resources to meet the customer's time requirements. As usual, the main problem was the transfer of result data between the end user and the HPC resources. To overcome this problem, the end user used the visualization software EnSight to look at the solution and obtain images and animations entirely over the Internet. The table below provides an evaluation of the Gompute on-demand solution:

Criteria | In-house cluster | Ideal cloud HPC | Gompute on demand
Uploading speed | 11.5 MB/s | 2 MB/s | 2-3 MB/s
Downloading speed | 11.5 MB/s | 2 MB/s | 4-5 MB/s
Ease of use | reasonable | excellent | excellent
Refresh rate | excellent | excellent | good
Latency | excellent | excellent | excellent
Command line access | possible | possible | possible
Output file access | possible | possible | possible
Run on the reserved cluster | easy | easy | easy
Run on the on-demand cluster | N/A | easy | easy
Graphical node | excellent | excellent | excellent
Using UDFs on the cluster | possible | possible | possible
State-of-the-art hardware | good | good | good
Scalability | poor | excellent | excellent
Security | excellent | excellent | good

Remote Visualization
The end user rated the Gompute VNC-based solution as excellent. It is possible to request a graphically accelerated node when starting programs with a GUI. This functionality substantially cuts virtual prototyping lead time, since all the data generated by a CAE simulation can be post-processed directly in Gompute. It also eliminates time-consuming data transfers and increases data security by removing the need to keep multiple copies of the same data at different locations, sometimes on insecure workstations. Gompute's accelerators allow the desktop to be used over links with latency above 300 ms. This allows Gompute resources to be used from locations separated by as much as 160 degrees of longitude; e.g., the user may be in India and the cluster in Detroit. Collaborative workflows are supported by the Gompute remote desktop sharing option, so two users at different geographical locations can work together on the same simulation.
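A rough propagation calculation shows why such long-haul links approach the 300 ms regime the accelerators are designed to tolerate. The distance and the overhead factor below are illustrative assumptions:

```python
# Back-of-envelope check on the long-haul latency claim above (e.g., a user
# in India working on a cluster in Detroit). Distance and overhead factor
# are rough illustrative assumptions.
FIBER_KM_PER_S = 200_000          # light in fiber travels at roughly 2/3 of c
distance_km = 13_000              # rough India-to-Detroit path length
propagation_rtt_ms = 2 * distance_km / FIBER_KM_PER_S * 1000
print(f"propagation-only RTT: {propagation_rtt_ms:.0f} ms")  # ~130 ms

# Real routes add switching, queuing, and indirect paths; a ~2x overhead
# factor puts the round trip near the 300 ms regime mentioned above.
print(f"with 2x routing overhead: {2 * propagation_rtt_ms:.0f} ms")
```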

Ease of Use
Gompute on demand provides a ready-to-use environment with an integrated repository of the requested applications, license connections, and a queuing system based on SGE. To establish the connection to the cluster, you just open ports 22 and 443 on the company's firewall. Downloading the GomputeXplorer and opening a remote desktop gives you the same user experience as working on your own in-house machine. Compared to other HPC connection modes we tested, the Gompute connections were easy to set up and use. The connection allowed connecting to and disconnecting from the HPC account to check how the calculations were progressing. As to costs, the Gompute quotation clearly described the services provided, and the technical support from Gompute personnel was good.

BENEFITS
- Compute remotely
- Pre/post-process remotely
- Gompute can be used as an extension of in-house resources
- Able to burst into Gompute On-Demand from an in-house cluster
- Accelerated file transfers
- Possible to have exclusive desktops
- Support for multiple users on each graphics node
- Applications integrated and ready to use
- GPFS storage available
- Handles high-latency links between the user and the Gompute cluster
- Facilitates collaboration with clients and support

CONCLUSIONS AND RECOMMENDATIONS
The bottleneck in using commercial CAE software is the cost of the commercial CFD licenses. There were two lessons learned:
- ANSYS has no on-demand CFD license for using the maximum number of available cores in a system, while competitor software, such as Star-CCM+, already has such a license.
- Supercomputing centers must provide analysis/post-processing tools for customers to check results without the need to download result files; otherwise, many of the advantages of using cloud computing are lost because of long data file transfer times.

The future for the wider use of supercomputing centers lies in finding a way to offer commercial CAE (CFD and FEA) licenses on demand, so that customers pay for actual software usage. Commercial software must take full advantage of current and future hardware developments for virtual engineering tools to spread more widely.

Case Study Authors - Juan Enriquez Paraled, Manager of ANALISIS-DSC; Ramon Diaz, Gompute

TEAM 56 - Simulating Radial and Axial Fan Performance

MEET THE TEAM
End User - A company specializing in the design, development, manufacturing, sales, distribution, and service of air and gas compressors.
Software Provider - Wim Slagter, ANSYS Inc., Netherlands
Resource Provider - Ramon Diaz, Gompute (Gridcore AB), Sweden
HPC Team Expert - Oleh Khoma, Eleks
Team Mentor - Dennis Nagy, HPC Experiment

USE CASE
For the end user, the aim of the exercise was to evaluate the HPC cloud service without the need to obtain new engineering insights. That is why a relatively basic test case was chosen: a case for which they already had results from the end user's own cluster, and which had a minimum of confidential content. The test case was the simulation of the performance of an axial fan in a duct, similar to those found in the AMCA standard. A single ANSYS Fluent run simulated the performance of the fan under 10 different conditions to reconstruct the fan curve. The mesh consisted of 12 million tetrahedral cells and was well suited to testing parallel scalability.

CHALLENGES
The main reason to look at HPC in the cloud is cost. The end user has a highly fluctuating simulation load, which means that their current on-site cluster rarely has the correct capacity. When it is too large, they are paying too much for hardware and licenses; when it is too small, they are losing money because the design teams are waiting for results. With a flexible HPC solution in the cloud, the end user can theoretically avoid both costs.

Evaluation
HPC as a service will only be an alternative to the current on-site solution if it manages to meet a series of well-defined criteria set by the end user.

Criteria | Local HPC | Ideal cloud HPC | Actual cloud HPC | Pass/Fail
Upload speed | 11.5 MB/s | 2 MB/s | 0.2 MB/s | Fail
Download speed | 11.5 MB/s | 2 MB/s | 4-5 MB/s | Pass
Graphical output | possible | possible | inconvenient | Fail

Quality of the image | excellent | excellent | good | Pass
Refresh rate | excellent | excellent | good | Pass
Latency | excellent | excellent | good | Pass
Command line access | possible | possible | possible | Pass
Output file access | possible | possible | possible | Pass
Run on the reserved cluster | easy | easy | easy | Pass
Run on the on-demand cluster | N/A | easy | easy | Pass
Graphical node | excellent | excellent | good | Pass
Using UDFs on the cluster | possible | possible | possible | Pass
State-of-the-art hardware | reasonable | good | good | Pass
Scalability | poor | excellent | excellent | Pass
Security | excellent | excellent | good | Pass
Hardware cost | good | excellent | N/A | N/A
License cost | good | excellent | N/A | N/A

Table 1 - Evaluation results

Cluster Access
Gridcore allows you to connect to its clusters through the GomputeXplorer, a Java-based program that lets you monitor your jobs and launch virtual desktops. Establishing the connection was actually not that easy. If the standard SSH and SSL ports (22 and 443) are open in your company's firewall, then connecting is straightforward; this is, however, rarely the case. Alternatively, you can make the connection over a VPN. Both options require the end user to make changes to the firewall. Because the end user had to wait a long time for these changes to be implemented, valuable time was lost. Only the port changes were implemented, so the VPN option was never tested.

Transfer Speed
Input files, and certainly result files, for typical calculations range from a few hundred megabytes to a few gigabytes in size. Therefore, a good transfer speed is of vital importance. The target is a minimum of 2 MB/s for both upload and download, which means it is theoretically possible to transfer 1 GB of data in 8.5 minutes. When transferring files with the GomputeXplorer, upload speeds of 0.2 MB/s and download speeds of about 4-5 MB/s were measured. When transferring the same files with a regular SSH client, the upload speed was 1.7 MB/s and the download speed 0.9 MB/s. These speeds were measured while transferring the same files several times, with the tests performed one after the other to ensure a fair comparison. These measurements show that theoretically reasonable to good transfer speeds are possible, but so far no solution was found to bring the GomputeXplorer's upload speed up to par.
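The 8.5-minute figure above follows directly from the 2 MB/s target; making the arithmetic explicit also shows why the measured 0.2 MB/s upload fails the criterion:

```python
# Transfer-time arithmetic behind the figures quoted above.
def transfer_minutes(size_mb: float, speed_mb_per_s: float) -> float:
    """Minutes needed to move size_mb megabytes at speed_mb_per_s."""
    return size_mb / speed_mb_per_s / 60

print(f"{transfer_minutes(1024, 2.0):.1f} min")  # 1 GB at the 2 MB/s target -> ~8.5 min
print(f"{transfer_minutes(1024, 0.2):.1f} min")  # at the measured 0.2 MB/s upload -> ~85 min
```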

As noted by the resource provider, most clients get speeds that depend on their bandwidth, and the low numbers measured here are quite abnormal. Several tests were performed on the system to find the root cause of the issue, but none was found. The investigation would have continued until a solution was found, but not within the time frame of the experiment. It might be more practical to wait for a new file transfer tool, planned to be rolled out shortly by Gompute, which might resolve this issue.

Graphical Output in Batch
To see how the flow develops over time, it is common practice to output images of the flow field. Fluent cannot do this from the command line alone; it requires an X window to render to. The end user was not able to make this option work on the Gompute cluster within the allocated time frame. Several suggestions (mainly different command line arguments) have been put forward to resolve this issue.

Remote Visualization
The end user used the HP Remote Graphics Software package, which gave a like-local experience. If we categorize HP RGS as excellent, the VNC-based solution of Gompute can surely be categorized as good. There was a noticeable difference between the dedicated cluster and the on-demand one with regard to the quality of the remote visualization (these are both remote Gompute clusters; the dedicated one was specifically reserved for the end user). The dedicated cluster's render quality and latency were much better. It is entirely possible to do pre- and post-processing on the cluster. It is also possible to request a graphically accelerated node when starting programs with a GUI.

Ease of Use
The Gompute remote cluster uses the same queuing system (SGE) as the end user's cluster, so the commands are familiar. The fact that you can request a full virtual desktop makes using the system a breeze. This virtual desktop allows for easy compilation of the UDFs (C code that extends the capabilities of Fluent) on the architecture of the remote cluster. Submitting and monitoring jobs is just as easy as on the local cluster. The process is also identical on the dedicated and the on-demand cluster. Apart from the billing method, there is no additional overhead when you temporarily want to expand your simulation capacity by using the on-demand cluster.
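Returning to the batch-graphics problem described under "Graphical Output in Batch" above: one direction sometimes used in similar Linux setups, though not verified by this team, is to give Fluent a virtual X server to render into. The solver version, core count, and journal file name below are illustrative assumptions:

```python
# Hypothetical workaround for batch image output from Fluent, NOT verified by
# the team: run Fluent under a virtual X framebuffer (xvfb-run) so that the
# journal's display/hardcopy commands have an X window to render into.
import subprocess

subprocess.run(
    [
        "xvfb-run", "--auto-servernum",  # provides an off-screen X display
        "fluent", "3ddp",                # 3D double-precision solver
        "-t16",                          # 16 parallel processes (illustrative)
        "-i", "make_images.jou",         # journal file name is made up
    ],
    check=True,
)
```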

Hardware
The hardware made available to the end user was less than two years old (Westmere Xeons). This was considered good; Sandy Bridge-based Xeons would have been considered excellent. The test case was used to benchmark the Gompute cluster against the end user's own aging cluster.

Fig. 1 - Comparison of run times for the test case. The time it took to run the simulation on 16 cores of the local cluster is the reference; speedup is defined relative to this time. The blue curve represents the old, local cluster and the red curve the on-demand cluster from Gompute. The green point is from a run on a workstation that has a hardware configuration similar to the Gompute cluster but runs Windows instead of Linux.

The following points can be concluded from this graph:
- The old cluster isn't performing all that badly considering its age; either that, or a larger speedup was expected from the new hardware.
- The simulation scales nicely on the Gompute cluster, but not as well on the local cluster.
- The performance of the workstation is similar to that of the Gompute cluster.

Cost
The resource provider only provides hardware; the customer is still responsible for acquiring the necessary software licenses. The cost benefit is therefore limited to hardware and support. The most likely customer base for the on-demand cluster service is companies that either rarely run a simulation or occasionally need extra capacity. In both cases they would have to pay

for a set of licenses that are rarely used. This does not seem to be a very good solution and may become a showstopper for adopting HPC in the cloud. Hopefully, ANSYS will come up with a license model that enables a service more in line with HPC in the cloud.

BENEFITS
End User
- Ease of use
- Post- and pre-processing can be done remotely
- Excellent opportunity to test the state of the art in cloud-based HPC

CONCLUSIONS AND RECOMMENDATIONS
HPC in the cloud is technically feasible. Most remaining issues are implementation related, and the resource provider should be able to solve them. The remote visualization solution was good and allowed the user to actually perform some real work; of course, it remains to be seen whether a stress test with multiple users from the same company yields the same results. The value of the HPC-in-the-cloud solution is limited by the absence of appropriate license models from the software vendors that would allow Gompute to actually sell simulation time, and not just hardware and support.

Further rounds of this experiment can be used to analyze the abnormal upload speed. File transfer might be tested over the VPN connection to rule out restrictions from the company's firewall. Also of interest is testing the new release of the Gompute file transfer tool, which implements a transfer accelerator. Different graphical node configurations can be tested to enhance the user experience.

Case Study Authors - Wim Slagter, Ramon Diaz, Oleh Khoma, and Dennis Nagy.

Note: The illustration at the top of this report shows pressure contours in front of and behind a 6-bladed axial fan.

TEAM 58 - Simulating Wind Tunnel Flow Around Bicycle and Rider

MEET THE TEAM
End User - Mio Suzuki, an Analysis Engineer (CFD, wind tunnel analysis) with Trek Bicycle Corporation. Trek is a bicycle manufacturer with approximately 1,800 employees worldwide, whose mission is to build the best bikes in the world.
Software Provider and HPC Expert - Mihai Pruna, a Software Engineer with CADNexus, developing CAD-to-CAE interoperability tools based on the CAPRI platform. CADNexus is a global provider of interoperability software for collaborative multi-CAD and CAE environments.
Resource Provider - Kevin Van Workum, PhD, Chief Technical Officer at Sabalcore Computing Inc. Sabalcore Computing has been an industry leader in HPC On-Demand services since

USE CASE
The CAPRI-to-OpenFOAM Connector and the Sabalcore HPC cloud infrastructure were used to analyze the airflow around bicycle design iterations from Trek Bicycle. The goal was to establish a strong synergy between iterative CAD design, CFD analysis, and HPC cloud environments. Trek invests heavily in engineering R&D and does extensive prototyping before settling on a final production design. CAE has been an integral part of the design process, accelerating the pace of R&D and rapidly increasing the number of design iterations. Advanced CAE capabilities have helped Trek reduce cost and keep up with the demanding product development schedule necessary to stay competitive.

Automating iterative design changes in Computer Aided Design (CAD) models, coupled with Computational Fluid Dynamics (CFD) simulations, can significantly enhance the productivity of engineers and enable them to make better decisions in order to achieve optimal product designs. Using a cloud-based or on-demand solution to meet the HPC requirements of computationally intensive applications decreases the turnaround time in iterative design scenarios and reduces the overall cost of the design.

With most of the software available today, the process of importing CAD models into CAE tools and executing a simulation workflow requires years of experience and remains, for the most part, a human-intensive task. Coupling parametric CAD systems with analysis tools to ensure reliable automation also presents significant interoperability challenges.

The upfront and ongoing costs of purchasing a high performance computing system are often underestimated. As most companies' HPC needs fluctuate, it is difficult to size a system adequately. Inevitably, this means resources will be idle for many hours and, at other times, will be inadequate for a project's requirements. In addition, as servers age and more advanced hardware becomes available, companies may recognize a performance gap between themselves and their competitors. Beyond the price of the hardware itself, a large computer cluster demands specialized power resources, consumes vast amounts of electrical power, and requires specialized cooling systems, valuable floor space, and experienced experts to maintain and manage it. Using an HPC provider in the cloud overcomes these challenges in a cost-effective, pay-per-use model.

Experiment Development
The experiment was defined as an iterative analysis of the performance of a bike. Mio Suzuki at Trek, the end user, supplied the CAD model. The analysis was performed on two Sabalcore-provided cluster accounts. The CADNexus CFD Connector, an iterative preprocessor, was used to generate OpenFOAM cases using the SolidWorks CAD model as geometry. A custom version of the CAPRI-CAE interface, in the form of an Excel spreadsheet, was delivered to the end user by the team expert, Mihai Pruna, who represented the software provider, CADNexus. The CAPRI-CAE interface was modified to allow the deployment and execution of OpenFOAM cases on Sabalcore cluster machines. Mihai Pruna also ran test simulations and provided advice on setting up the CAD model for tessellation, that is, the generation of an STL file suitable for meshing (Figure 1: CAD model tessellation prior to meshing).

Fig. 1 - Setting up the CAD model for tessellation

The cluster environment was set up by Kevin Van Workum at Sabalcore, allowing rapid and frequent access to the cluster accounts via SSH, as needed by the automation involved in copying and executing the OpenFOAM cases. The provided bicycle was tested at two speeds: 10 and 15 mph. The CADNexus CFD Connector was used to generate cutting planes and wake velocity line plots. In addition, the full simulation results were archived and provided to the end user for review using ParaView, a free tool (see the figure at the top of this report). ParaView or other graphical post-processing applications can also be run directly on Sabalcore using their accelerated remote graphical display capability. Thanks to the modular design of the CAPRI-powered OpenFOAM Connector and the flexible environment provided by Sabalcore Computing, integration of the software and the HPC provider's resources was quite simple.

CHALLENGES
General
Considering the interoperability required between several technologies, the setup went fairly smoothly. The CAPRI-CAE interface had to be modified to work with an HPC cluster: the production version was designed to work with discrete local or cloud-based Ubuntu Linux machines, so for the cluster environment some programmatically generated scripts had to be changed to send jobs to a solver queue rather than execute the OpenFOAM utilities directly (see the sketch below). The CAD model was not a native SolidWorks project but rather a series of imported bodies, and the surfaces exhibited topological errors that were picked up by the CAPRI middleware. Defeaturing in SolidWorks, as well as turning off certain consistency checks in CAPRI, helped alleviate these issues and produce quality tessellations.
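The switch from executing OpenFOAM utilities directly to submitting them to a solver queue essentially wraps each command in a batch script. A minimal sketch, assuming an SGE-style qsub; the actual scheduler, parallel environment name, and options on Sabalcore's systems may differ, and the case path, solver, and core count are hypothetical:

```python
# Minimal sketch of the "send jobs to a solver queue" change described above.
# Assumes an SGE-style qsub; the actual scheduler and options on Sabalcore's
# systems may differ, and the case directory name is hypothetical.
import subprocess
import textwrap

def submit_openfoam_job(case_dir: str, cores: int) -> None:
    """Instead of running the solver directly, wrap it in a batch script."""
    script = textwrap.dedent(f"""\
        #!/bin/bash
        #$ -N foam_{cores}c
        #$ -pe mpi {cores}
        #$ -cwd
        cd {case_dir}
        decomposePar
        mpirun -np {cores} simpleFoam -parallel
        reconstructPar
    """)
    # qsub reads the job script from stdin and queues it for execution.
    subprocess.run(["qsub"], input=script, text=True, check=True)

submit_openfoam_job("bike_10mph_case", 32)  # hypothetical case and core count
```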

Data Transfer Issues
Sometimes a certain OpenFOAM dictionary would fail to copy to the client, causing the OpenFOAM scripts to fail. This issue has not been resolved at this time; it seems to occur only with large geometry files, although it is not the geometry file itself that fails to copy. Possible solutions include zipping up each case and sending it as a single file. Retrieving the full results can take a long time. Solutions already developed involve doing some of the post-processing on the client and retrieving only the simulation results data specified by the user, as implemented by CADNexus in the Excel-based CAPRI-CAE interface, or running ParaView directly on the cluster, as implemented by Sabalcore.

End User's Perspective
CAPRI is a fantastic tool for connecting the end user's desktop environment directly to a remote cluster. As an end user, the first challenge I faced was thoroughly understanding the formatting of the Excel sheet. As soon as I was able to identify what was wrong with my Excel entries, the rest of the workflow went relatively smoothly and exactly as specified in the template workflow. I also experienced slowness in building up and running the cases. If there is a way to increase the speed at each step (synchronizing the CAD, generating cases on the server, and running), that would enhance the user experience.

BENEFITS

Figure 2: Z=0 velocity color plot generated with the CADNexus Visualizer lightweight postprocessor

The CAPRI-CAE Connector and the CAPRI-FOAM Connector dramatically simplify the generation of design-analysis iterations. The user has far fewer inputs to fill in, and the rest are generated automatically. The end user does not need to be proficient in OpenFOAM or Linux. With respect to the HPC resource provider, the environment provided to the user by Sabalcore was already set up to run OpenFOAM, which helped speed up the process of integrating the CADNexus OpenFOAM Connector with Sabalcore's services. The only required modification to the HPC environment made by Sabalcore was to allow a greater than normal number of SSH connections.
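As an illustration of the zip-each-case workaround proposed under Data Transfer Issues above, packing a case into a single archive takes only a few lines; the case directory name is hypothetical:

```python
# Minimal sketch of the "zip up each case and send it as a single file"
# workaround proposed under Data Transfer Issues above. The case directory
# name is hypothetical.
import shutil

# Pack the whole OpenFOAM case (all dictionaries, mesh, and fields) into one
# archive, so no individual dictionary can be silently missed in transfer.
archive = shutil.make_archive("bike_10mph_case", "zip", root_dir="bike_10mph_case")
print(f"single file to transfer: {archive}")

# On the receiving side, unpack before running:
shutil.unpack_archive("bike_10mph_case.zip", extract_dir="bike_10mph_case")
```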


More information

Monitor and Manage Your MicroStrategy BI Environment Using Enterprise Manager and Health Center

Monitor and Manage Your MicroStrategy BI Environment Using Enterprise Manager and Health Center Monitor and Manage Your MicroStrategy BI Environment Using Enterprise Manager and Health Center Presented by: Dennis Liao Sales Engineer Zach Rea Sales Engineer January 27 th, 2015 Session 4 This Session

More information

J U L Y 2 0 1 2. Title of Document. Here is the subtitle of the document

J U L Y 2 0 1 2. Title of Document. Here is the subtitle of the document J U L Y 2 0 1 2 Title of Document Here is the subtitle of the document Introduction to OpenText Protect Premier Anywhere Deploying and maintaining advanced Enterprise Information Management (EIM) solutions

More information

Getting Started with 20/20 Insight TRIAL VERSION

Getting Started with 20/20 Insight TRIAL VERSION Getting Started with 20/20 Insight TRIAL VERSION 20/20 Insight is a registered trademark of Performance Support Systems, Inc., Newport News, VA. Windows XP, MS Outlook, MS Word, Excel and PowerPoint are

More information

Unleash the Power of e-learning

Unleash the Power of e-learning Unleash the Power of e-learning Version 1.5 November 2011 Edition 2002-2011 Page2 Table of Contents ADMINISTRATOR MENU... 3 USER ACCOUNTS... 4 CREATING USER ACCOUNTS... 4 MODIFYING USER ACCOUNTS... 7 DELETING

More information

Unbreak ITSM: Work the Way People Do

Unbreak ITSM: Work the Way People Do Unbreak ITSM: Work the Way People Do New Pressures from the Application Economy What happened? Just yesterday your IT organization was the master of its domain. When users had a problem or request, they

More information

Professional CRM Support. Telephone: 01625 322 230 Website: www.thecrmbusiness.com Email: support@thecrmbusiness.com

Professional CRM Support. Telephone: 01625 322 230 Website: www.thecrmbusiness.com Email: support@thecrmbusiness.com Professional CRM Support Professional CRM Support Maximising the benefit from your CRM investment The CRM Business is proud to work with our clients to deliver great CRM Solutions. We understand that once

More information

The Ultimate Guide to Buying Business Analytics

The Ultimate Guide to Buying Business Analytics The Ultimate Guide to Buying Business Analytics How to Evaluate a BI Solution for Your Small or Medium Sized Business: What Questions to Ask and What to Look For Copyright 2012 Pentaho Corporation. Redistribution

More information

The Worksoft Suite. Automated Business Process Discovery & Validation ENSURING THE SUCCESS OF DIGITAL BUSINESS. Worksoft Differentiators

The Worksoft Suite. Automated Business Process Discovery & Validation ENSURING THE SUCCESS OF DIGITAL BUSINESS. Worksoft Differentiators Automated Business Process Discovery & Validation The Worksoft Suite Worksoft Differentiators The industry s only platform for automated business process discovery & validation A track record of success,

More information

Monitoring Replication

Monitoring Replication Monitoring Replication Article 1130112-02 Contents Summary... 3 Monitor Replicator Page... 3 Summary... 3 Status... 3 System Health... 4 Replicator Configuration... 5 Replicator Health... 6 Local Package

More information

Grid Scheduling Dictionary of Terms and Keywords

Grid Scheduling Dictionary of Terms and Keywords Grid Scheduling Dictionary Working Group M. Roehrig, Sandia National Laboratories W. Ziegler, Fraunhofer-Institute for Algorithms and Scientific Computing Document: Category: Informational June 2002 Status

More information

Installing and Using the vnios Trial

Installing and Using the vnios Trial Installing and Using the vnios Trial The vnios Trial is a software package designed for efficient evaluation of the Infoblox vnios appliance platform. Providing the complete suite of DNS, DHCP and IPAM

More information

INTEGRATED MARKETING AUTOMATION

INTEGRATED MARKETING AUTOMATION INTEGRATED MARKETING AUTOMATION To generate, manage and convert leads Whitepaper W W W.AC T I V ECO N V E R S I O N. C O M M A R K E T I N G I N T E L L I G E N C E F O R S A L E S info@activeconversion.com

More information

Secure Cloud Computing through IT Auditing

Secure Cloud Computing through IT Auditing Secure Cloud Computing through IT Auditing 75 Navita Agarwal Department of CSIT Moradabad Institute of Technology, Moradabad, U.P., INDIA Email: nvgrwl06@gmail.com ABSTRACT In this paper we discuss the

More information

User Manual for Web. Help Desk Authority 9.0

User Manual for Web. Help Desk Authority 9.0 User Manual for Web Help Desk Authority 9.0 2011ScriptLogic Corporation ALL RIGHTS RESERVED. ScriptLogic, the ScriptLogic logo and Point,Click,Done! are trademarks and registered trademarks of ScriptLogic

More information

Optimizing Your Database Performance the Easy Way

Optimizing Your Database Performance the Easy Way Optimizing Your Database Performance the Easy Way by Diane Beeler, Consulting Product Marketing Manager, BMC Software and Igy Rodriguez, Technical Product Manager, BMC Software Customers and managers of

More information

Getting a head start in Software Asset Management

Getting a head start in Software Asset Management Getting a head start in Software Asset Management Managing software for improved cost control, better security and reduced risk A guide from Centennial Software September 2007 Abstract Software Asset Management

More information

Top 10 Storage Headaches in the Distributed Enterprise

Top 10 Storage Headaches in the Distributed Enterprise White Paper: Top 10 Storage Headaches Top 10 Storage Headaches And What YOU Can Do To Manage Them! Summary IT directors at growing, distributed enterprises face a number of unique challenges, particularly

More information

HPC technology and future architecture

HPC technology and future architecture HPC technology and future architecture Visual Analysis for Extremely Large-Scale Scientific Computing KGT2 Internal Meeting INRIA France Benoit Lange benoit.lange@inria.fr Toàn Nguyên toan.nguyen@inria.fr

More information

A Roadmap to Total Cost of Ownership

A Roadmap to Total Cost of Ownership A Roadmap to Total Cost of Ownership Building the Cost Basis for the Move to Cloud A white paper by Penny Collen The Roadmap to Total Cost of Ownership Getting a clear and complete financial picture of

More information

PIVOTAL CRM. CRM that does what you want it to do BROCHURE

PIVOTAL CRM. CRM that does what you want it to do BROCHURE PIVOTAL CRM CRM that does what you want it to do BROCHURE THE PIVOTAL CRM PHILOSOPHY THE PIVOTAL ADVANTAGE Today s business world is a fast moving and dynamic environment one in which your teams expect

More information

For more information, contact FieldView Solutions at 732.395.6920 info@fieldviewsolutions.com, or www.fieldviewsolutions.com

For more information, contact FieldView Solutions at 732.395.6920 info@fieldviewsolutions.com, or www.fieldviewsolutions.com A FieldView White Paper How Next Generation Monitoring Improves Data Center Infrastructure Management (DCIM) Introduction According to a global survey by DatacenterDynamics (DCD), data centers in the U.S.

More information

RFID Journal LIVE! 2014

RFID Journal LIVE! 2014 RFID Journal LIVE! 2014 Exhibitor Marketing Tools and Services For more information, please contact: Kathy Roach Marketing Coordinator 212-584-9400 x3 kroach@rfidjournal.com Alan McIntosh Director of Sales

More information

White paper: Unlocking the potential of load testing to maximise ROI and reduce risk.

White paper: Unlocking the potential of load testing to maximise ROI and reduce risk. White paper: Unlocking the potential of load testing to maximise ROI and reduce risk. Executive Summary Load testing can be used in a range of business scenarios to deliver numerous benefits. At its core,

More information

Advanced Configuration Steps

Advanced Configuration Steps Advanced Configuration Steps After you have downloaded a trial, you can perform the following from the Setup menu in the MaaS360 portal: Configure additional services Configure device enrollment settings

More information

TRANSFORMING HP S SOFTWARE S CUSTOMER EXPERIENCE WITH ADVOCACY WITH ADVOCACY

TRANSFORMING HP S SOFTWARE S CUSTOMER EXPERIENCE WITH ADVOCACY WITH ADVOCACY TRANSFORMING HP S SOFTWARE S CUSTOMER CUSTOMER EXPERIENCE EXPERIENCE WITH ADVOCACY WITH ADVOCACY How HP Software s Service Portfolio Management Customer Success team engaged, educated and delighted 1,000+

More information

CLOUD MIGRATION STRATEGIES

CLOUD MIGRATION STRATEGIES CLOUD MIGRATION STRATEGIES Faculty Contributor: Dr. Rahul De Student Contributors: Mayur Agrawal, Sudheender S Abstract This article identifies the common challenges that typical IT managers face while

More information

Solution brief. HP CloudSystem. An integrated and open platform to build and manage cloud services

Solution brief. HP CloudSystem. An integrated and open platform to build and manage cloud services Solution brief An integrated and open platform to build and manage cloud services The industry s most complete cloud system for enterprises and service providers Approximately every decade, technology

More information

Trade Show Strategy Don t Leave Town Without It!

Trade Show Strategy Don t Leave Town Without It! Trade Show Strategy Don t Leave Town Without It! Prepared by: Tom Marx, President/CEO and Leslie Allen, PR Manager Whitepaper 2175 East Francisco Blvd., Suite F San Rafael, CA 94901 Phone: 415.453.0844

More information

WHITE PAPER Improving Your Supply Chain: Collaboration, Agility and Visibility

WHITE PAPER Improving Your Supply Chain: Collaboration, Agility and Visibility WHITE PAPER Improving Your Supply Chain: Collaboration, Agility and Visibility Apprise.com Improving Your Supply Chain: Collaboration, Agility and Visibility The globalization of businesses and their supply

More information

Knowledge Base Data Warehouse Methodology

Knowledge Base Data Warehouse Methodology Knowledge Base Data Warehouse Methodology Knowledge Base's data warehousing services can help the client with all phases of understanding, designing, implementing, and maintaining a data warehouse. This

More information

HPC Cluster Decisions and ANSYS Configuration Best Practices. Diana Collier Lead Systems Support Specialist Houston UGM May 2014

HPC Cluster Decisions and ANSYS Configuration Best Practices. Diana Collier Lead Systems Support Specialist Houston UGM May 2014 HPC Cluster Decisions and ANSYS Configuration Best Practices Diana Collier Lead Systems Support Specialist Houston UGM May 2014 1 Agenda Introduction Lead Systems Support Specialist Cluster Decisions Job

More information

Web Load Stress Testing

Web Load Stress Testing Web Load Stress Testing Overview A Web load stress test is a diagnostic tool that helps predict how a website will respond to various traffic levels. This test can answer critical questions such as: How

More information

Customer Case Study. Timeful

Customer Case Study. Timeful Customer Case Study Timeful Customer Case Study Timeful Benefits Improved key metrics monitoring by processing the entire production data set instead of sampling subsets More effective data-driven product

More information