CLOUD HPC IN FINANCE CLOUD BENCHMARK REPORT WITH REAL-WORLD USE-CASES
12 AUGUST 2015
Disclaimer

Techila Technologies Ltd. disclaims any and all warranties, express, implied or statutory regarding this document or the use thereof by you to the full extent permitted by law. Without limiting the generality of the foregoing, this document and any materials provided by Techila Technologies Ltd. in connection therewith are provided as-is and without warranties of any kind, including, without limitation, any warranties of performance or implied warranties of merchantability, fitness for a particular purpose, title and non-infringement. Further, Techila Technologies Ltd. does not make, and has not made, any representation or warranty that the document is accurate, complete, reliable, current, error-free, or free from harmful information.

Limitation of Liability

In no event shall Techila Technologies Ltd. or any of its respective directors, officers, employees, or agents, be liable to you or any other person or entity, under any theory, including without limitation negligence, for damages of any kind arising from or related to the application of this document or any information, content, or materials in or accessible through this document, including, but not limited to, direct, indirect, actual, incidental, punitive, special or consequential damages, lost income, revenue or profits, lost or damaged data, or other commercial or economic loss, that result from your use of, or inability to use, this document, even if any of those persons or entities have been advised of the possibility of such damages or such damages are foreseeable.

Use of this document and copyright

No part of this document may be used, reproduced, modified, or transmitted in any form or means without the prior written permission of Techila Technologies. This document and the product it describes are considered protected by copyrights and other intellectual property rights according to the applicable laws. Techila, Techila Grid, and the Techila logo are either registered trademarks or trademarks of Techila Technologies Ltd in the European Union, in the United States and/or other countries. All other trademarks are the property of their respective owners. Copyright Techila Technologies Ltd. All rights reserved.
About Techila Technologies

Techila Technologies Ltd. is a provider of distributed computing middleware and management solutions, and a pioneer in cloud-powered high-performance computing. Our financial industry customers include central and national banks, leading investment banks, asset managers, and insurance companies.

The Techila solution brings rocket speed to computing. It is designed to save the time of business users and IT experts, to solve challenges related to parallel application development and deployment, and to speed up the idea-to-deployment cycle. The solution is built around a patented autonomic computing technology, which creates a self-managing and scalable computing service and execution environment. This technology enables enterprise customers to manage the computing power available in their current and future computing servers and clusters, and even to include capacity from the company's trusted cloud providers. Techila supports the Windows and Linux operating systems. It enables the enterprise-ready, secure integration of hybrid cloud IT, and cloud bursting into the leading public cloud platforms: Amazon EC2, Google GCE, and Microsoft Azure.

Business users of Techila can deliver faster and better-quality results when solving even the most challenging and complex business-critical problems. Techila integrates directly with the customer's favorite computational tools, with no need to redesign applications and codes. It comes with productized support for MATLAB, Python, C/C++, C#, R script, and a range of other popular environments.

Techila eliminates performance barriers and improves productivity across the enterprise. It is a multi-tenant solution with a priority and policy-based framework. Built-in security features enable the establishment of a single platform to serve multiple business lines and applications, from research and development functions to the front office and all the way to applications run as a web service in an SOA environment. When using Techila, a company no longer needs to suffer from opportunities missed because its users were unable to compute fast enough. Visit us on the Internet and find out how your business can benefit from rocket-speed computing.
Table of contents

1 INTRODUCTION
2 TESTS
  2.1 TEST SUITE
  2.2 METHODOLOGY
  2.3 PLATFORMS
  2.4 COST OF COMPUTING
3 HANDS ON
  3.1 PORTFOLIO ANALYTICS, PYTHON
  3.2 MACHINE LEARNING, C++
  3.3 OPTION PRICING, C#
  3.4 BACKTESTING, MATLAB
  3.5 DERIVATIVES PRICING, MATLAB
  3.6 FITTING A PRICING MODEL, MATLAB
  3.7 DATA DRIVEN INSURANCE RISK SIMULATION, MATLAB
  3.8 ECONOMETRIC FORECASTING, DYNARE
  3.9 PORTFOLIO SIMULATION, R
  3.10 MIXED WORKLOAD
4 REMARKS
  4.1 MORE RELIABILITY
  4.2 STILL LIMITATIONS
5 CONCLUSIONS
6 GLOSSARY
Appendix A Cloud Platform Specifications
  A.1 Systems
  A.2 Prices
Appendix B Breakdown of execution times and cloud costs
  B.1 General notes
  B.2 Portfolio Analytics
  B.3 Machine Learning
  B.4 Option Pricing
  B.5 Backtesting
  B.6 Derivatives Pricing
  B.7 Fitting A Pricing Model
  B.8 Data Driven Insurance Risk Simulation
  B.9 Econometric Forecasting
  B.10 Portfolio Simulation
1 Introduction

Over the last couple of years, Techila Technologies has noticed a shift in perceptions of the use of cloud-based processing as part of enterprise IT. As both cloud service offerings and understanding of the cloud have matured, interest in clouds and the exploration of suitable uses for cloud-based processing has grown and evolved. The cloud is no longer seen as just a substitute for on-premises datacenters. It is viewed as a technology which can empower IT to support business in various scenarios and enable services that were neither feasible nor possible until now.

Techila Technologies has noted that the financial services industry, also known as the FSI, has recently been highly interested in exploring the benefits of cloud-based processing. Continuously growing computational needs may be one of the reasons for this. Regulatory requirements are a particular headache in many organizations. Similar observations have been reported by Excelian Limited. In Excelian Technology Spark, Q1 2015, Excelian's Ian King writes that general acceptance of cloud computing and the exploration of suitable uses for it have grown and evolved, particularly around burst capabilities in the risk calculation space. Techila Technologies is of the same opinion.

In this benchmark experiment, first run in 2014, Techila Technologies' goal is to provide customers with an easy-to-understand analysis and benchmark, which answers questions on the performance and cost of computing in leading clouds. Because FSI customers have been interested in exploring the possibilities of benefitting from cloud-based processing in their businesses, the current test round will focus on analyzing how well equipped the leading clouds are to respond to needs arising from computational business scenarios in the FSI, and how well clouds perform in real-world use-cases implemented in popular environments, such as Python, C/C++, C#/.NET, MATLAB and R script. The findings will be compared to a reference server environment set up on a Linux cluster provided by CSC. This benchmark experiment is not sponsored by any vendors.

A range of benchmarks and analysis reports are available on processor types and data transfer speeds. While such reports are great, based on conversations with customers Techila Technologies notes that users feel that FLOPS/USD, Gbps/USD, or memory/USD do not always translate directly into performance in real-world application scenarios. In response, our benchmark uses real-world business use-cases from the industry.

Techila Technologies would like to thank all of its partners and the companies that contributed to the development of the test suite. To hear more about how Techila can bring rocket speed to computing in your business, contact us using the contact details on Techila Technologies' website, or email us at [email protected].
2 Tests

2.1 Test Suite

Design approach

Techila Technologies has built the test suite for this benchmark test round using application use-cases from industry customers who have wanted to contribute to this cloud benchmark experiment. The customers are from leading financial institutions, investment banks, central and national banks, and insurance companies. Some of the application use-cases selected for the test suite are computationally embarrassingly parallel, and some of them utilize inter-node communication to improve the efficiency of the problem solving.

Based on Techila Technologies' experience, Python, C/C++, C#, MATLAB and R script are widely used modeling environments in the FSI. These are also the environments that the contributing industry customers have used in the development of their application use-cases.

Case | Programming Language
Portfolio Analytics | Python
Machine Learning | C++
Option Pricing | C#
Backtesting | MATLAB
Derivatives Pricing | MATLAB
Fitting A Pricing Model | MATLAB
Data Driven Insurance Risk Simulation | MATLAB
Econometric Forecasting | Dynare/MATLAB
Portfolio Simulation | R

Table 1 Test suite

Comparability of results

When Techila Technologies published the previous Cloud Benchmark Report in 2014, some of the readers asked if they could run the test suite also in their own on-premises datacenter. They were interested in seeing how the tests perform in a conventional server infrastructure, so that they could compare their existing systems with cloud infrastructures. Because the test suite is built of real business use-cases from financial institutions, Techila Technologies must treat the applications as confidential. In order to support comparability of results with on-premises infrastructures, this time Techila Technologies ran the tests also in a reference server environment provided by CSC. The specifications of the reference servers can be found in Cloud Platform Specifications, Chapter A.1 of this report.
2.2 Methodology

The test methodology was selected to support analysis of the computing performance of different cloud platforms. In Finance, as in other modern industries, many computational applications use stochastic processes. Because of this, the computational processes have a nondeterministic nature. Randomness can cause variation in results, which could make objective comparison of benchmark test results difficult. To enable objective comparison of results, Techila Technologies' test team removed the randomness from the test runs (Projects) by fixing the seed numbers. The team also designed the tests to use a fixed run order for computational tasks (Jobs). This gave the team the possibility to repeat identical tests on every platform. The test team also decided to run the tests with any multithreading functions disabled.

This report uses the terminology of the Techila distributed computing solution:

Project – A computational problem created by an End-User. In this report, a Project is an individual run of a test case.
Job – A unit of a computational Project which can be executed independently.
Server – Techila Server software running on a computer in the computing environment. The Techila Server's responsibilities include managing the available resources and scheduling the Jobs.
Worker – Techila Worker software running on a computer in the computing environment. The Techila Worker is responsible for the execution of Jobs.

For a comprehensive description of the terminology, please refer to the Techila Fundamentals document.

The test runs were orchestrated using Techila distributed computing middleware, which provides autonomic deployment and management of the computing environment. The Techila distributed computing middleware enables repeating the tests in any target environment without redesign, and its self-management features provide easy deployment of applications in large-scale distributed computing environments. Techila supports ad-hoc and variable usage patterns with several clouds and in on-premises environments, which makes it useful in all target environments: clouds and the reference servers.

If a Project required data, the data was transferred to the computing environment before the test was started. The purpose of this was to eliminate the effect of possible connection issues, other network traffic, and bandwidth between the data sources and the clouds. For the same reason, if a test application required any runtime components, Techila's automatic configuration features were used to transfer them to the cloud in advance, before starting the tests. When selecting the storage options, a local disk was preferred for the installation of the Techila Worker software. If this was not available, a network block device such as Amazon EBS (Elastic Block Store) was used as the disk.

For the reliability of test results, the test team repeated each test case several times. If the results were consistent, the results were accepted, and metrics were recorded for this report. If there was an insignificant difference in the metrics, the best metrics were recorded for this report.

The execution of computational workloads in a Techila environment is at the native level. The computation does not happen inside Techila. Techila starts and monitors the computational
process, which means that the use of Techila's execution platform does not cause overhead in the actual computing.

If the test team noted any interesting results in a test run which they were not able to explain immediately, the team performed further analysis to identify possible causes for the observed phenomena. An example of a phenomenon which required further analysis is the inconsistency between the performance of the Linux and Windows operating systems described in Chapter 3.1.

2.3 Platforms

This benchmark test round included resources from the three leading public cloud platform providers: Microsoft, Amazon Web Services, and Google. In addition to the public cloud platform providers, a reference server environment was also included in the study. The servers were set up on a Linux cluster in CSC's datacenter.

All the public cloud platforms included in this benchmark round offer a rich availability of different cloud instance types. In this benchmark, Techila Technologies' team decided to select two instance types from each vendor, whenever applicable instance types were available. This was to see the performance of high-end instances compared to other possibly interesting offerings. Specifications of the tested instance types and the specifications of the reference servers are presented in Cloud Platform Specifications, Chapter A.1.

In this report we will use the following abbreviated names for the computing platforms:

Abbreviated name | Full name
AWS | Amazon Elastic Compute Cloud EC2
Azure | Microsoft Azure
CSC | CSC - IT Center for Science, Taito cluster
GCE | Google Compute Engine

Table 2 Abbreviations used in the document
2.4 Cost of computing

In the Cloud Benchmark Report published in 2014, Techila Technologies reported complexities related to comparing the cost of computing in different clouds: cloud platform providers have different billing granularities, and the billable states of cloud services are not standardized. More detailed information about pricing in the included cloud platform environments is included in Chapter A.2 of Appendix A Cloud Platform Specifications.

As a rule of thumb, cloud platform providers bill for the time when the services are in a running state, but the definition of running varies. In HPC scenarios where the deployments consist of a large number of cloud resources, this variance gets amplified. AWS has the coarsest billing granularity of the cloud platforms included in this benchmark test round: invoicing is rounded up to a full hour. Azure has the finest granularity, where billing is implemented per minute, rounded up to the nearest minute. GCE is billed per minute, too, but the minimum billable amount is 10 minutes.

Because of the differences in the service billing models, estimating the actual cost of cloud computing requires understanding the requirements of the business. Workload patterns, Service Level Agreements (SLAs), organizational structure, and regulatory requirements set the framework in which the cost of cloud computing needs to be estimated. If the demand for computing resources is highly variable, the overheads and an unideal billing granularity, or a vendor with unideal billable states, can have a dramatic effect on the TCO. When the cloud is made an integral part of enterprise IT, and resources from a cloud become a standard building block of the infrastructure, cost differences, characteristics in service provisioning, and differences in performance will accumulate, and even small differences can have significance.

Many cloud providers also offer discounts for users who have sustained use, sign up for a capacity plan, or sign up for the service with a monetary commitment. Customers who do this can save money when they trade off some of the elasticity. The cost estimates included in this report are based on standard list prices valid at the date of this report. The prices used in this report are presented in Appendix A Cloud Platform Specifications, Chapter A.2.

Because the workload patterns and business requirements are unique to each company, Techila Technologies' team decided to focus in this report on the cost of actual computing only. This process is described in Cloud Platform Specifications, Chapter A.2. This simplifies the comparison of the cost of computing in different clouds, and the reader can quickly identify the platforms which could be worth a more detailed total cost-of-ownership (TCO) simulation. At the end of each test case there is a figure which presents the price/performance of different cloud instance types in effective computing use. An estimate of the actual TCO of cloud-based processing can be calculated by simulating the company's computational system workload model using this data, and adding possible platform-specific overheads to it. An overhead which can be worth noticing when designing cloud-powered infrastructures to support business is the time to prepare an operating system. Preparing a Windows operating system takes longer than preparing a Linux-based system.
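To make the billing granularity differences described above concrete, the minimal sketch below compares the billable time and cost of one Project under the three models: rounding up to a full hour, per-minute billing, and per-minute billing with a 10-minute minimum. The run time, instance count, and hourly price are illustrative placeholders, not the list prices used elsewhere in this report.

```python
import math

def billable_minutes(run_minutes: float, model: str) -> int:
    """Return the billable minutes for one instance under a given billing model."""
    if model == "hourly":            # rounded up to a full hour (AWS-style)
        return math.ceil(run_minutes / 60.0) * 60
    if model == "per_minute":        # rounded up to the nearest minute (Azure-style)
        return math.ceil(run_minutes)
    if model == "per_minute_min10":  # per minute with a 10-minute minimum (GCE-style)
        return max(10, math.ceil(run_minutes))
    raise ValueError(f"unknown billing model: {model}")

# Illustrative example: a 95-minute Project on 100 instances at a
# hypothetical price of 0.50 USD per instance-hour.
run_minutes, instances, usd_per_hour = 95, 100, 0.50
for model in ("hourly", "per_minute", "per_minute_min10"):
    minutes = billable_minutes(run_minutes, model)
    cost = instances * (minutes / 60.0) * usd_per_hour
    print(f"{model:>16}: {minutes:4d} billable minutes per instance, {cost:8.2f} USD total")
```

In this example the same 95-minute run is billed as 120 minutes under hourly rounding but as 95 minutes under the per-minute models, which is the kind of difference that gets amplified when a deployment consists of a large number of instances.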
In the 2014 benchmark test round, Techila Technologies' team noticed inconsistencies in the provisioning of cloud-based services. In the course of this year's test round, Techila Technologies' team noticed that these inconsistencies had largely vanished from all clouds. Despite this improved capability to meet the fluctuating demands of modern business, Techila Technologies still does not recommend considering the cloud as a source of unlimited elasticity. Facing the limits of elasticity can mean that IT cannot meet the SLA offered to the business, and it can mean reduced efficiency of system utilization, thus causing increased cost.
3 Hands on

3.1 Portfolio Analytics, Python

Portfolio Analytics is done by quantitative analysts who want to look into portfolios using performance attribution, portfolio profiling, and risk parameters. This test case is a Python code which analyses the client's portfolios and deconstructs their sources of return.

One of the key motivations for including this use-case in the test suite was to investigate the performance of different platforms in Python-based computing. Techila Technologies has noticed that the scientific Python ecosystem is maturing fast, and many customers in Finance see Python as an appealing alternative to MATLAB, because it's free, open source, and becoming increasingly powerful.

Software used in Portfolio Analytics, Python:
Python
Additional modules: math, sys, numpy, pandas, datetime, time, statsmodels, sklearn

Figure 1 Time used in computing in Portfolio Analytics, Python

When the test team ran this test case in the Python environment on different platforms, they observed dramatic differences in performance between different operating systems. This is illustrated in Figure 1 above. The time used in computing is presented in wall clock time, which is the actual amount of time taken to execute the test case. This is equivalent to timing the test case with a stopwatch.
The computing took % longer in Linux-based systems than in systems with the Windows operating system. The team was surprised by the fact that all cloud instance types with the Windows operating system, including the older Compute Optimized AWS c3.8xlarge instances, outperformed the reference servers, which were the highest-performing Linux system.

When this phenomenon was noticed, Techila Technologies' test team analyzed the detailed execution statistics for the test Projects. In some other test cases, for example in Econometric Forecasting presented in Chapter 3.8, the team observed that Jobs were not able to utilize processor resources efficiently, which reflected a bottleneck in the system's I/O handling capacity. However, in this test case the processor utilization metrics were high, which means that the performance here was not affected by possible I/O bottlenecks. The team suggested that the reason for the lower performance in Linux environments could be the Linux implementations of the Python libraries used in this application: math, sys, numpy, pandas, datetime, time, statsmodels, sklearn. If the native parts of these libraries in Linux include less efficient algorithms than their Windows implementations, performing the same operation in Linux will take more time. In large-scale simulations, even small effects can get amplified and have a significant impact on the overall performance.

The price/performance graph below displays how the performance of each instance type compares when the instance price is taken into account.

Figure 2 Price/performance of computing in Portfolio Analytics, Python
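The customer code behind this Portfolio Analytics case is confidential, so the sketch below is only a generic, hypothetical illustration of the kind of per-portfolio work the case performs: regressing a portfolio's returns on factor returns with statsmodels to decompose the sources of return. The factor names and the data are invented for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def attribute_returns(portfolio_returns: pd.Series, factor_returns: pd.DataFrame) -> pd.Series:
    """Decompose a portfolio's return series into factor contributions via OLS."""
    X = sm.add_constant(factor_returns)          # intercept captures the unexplained (alpha) part
    model = sm.OLS(portfolio_returns, X).fit()
    exposures = model.params.drop("const")       # estimated factor exposures (betas)
    # Contribution of each factor = exposure * average factor return over the period
    return exposures * factor_returns.mean()

# Hypothetical data: daily returns for one portfolio and three factors
rng = np.random.default_rng(seed=1)              # fixed seed, as in the benchmark methodology
dates = pd.bdate_range("2015-01-01", periods=250)
factors = pd.DataFrame(rng.normal(0, 0.01, (250, 3)), index=dates,
                       columns=["market", "value", "momentum"])
portfolio = 0.8 * factors["market"] + 0.2 * factors["value"] + rng.normal(0, 0.002, 250)

print(attribute_returns(portfolio, factors))
```

Running one such fit per portfolio is what makes this workload easy to distribute: each portfolio can be analyzed as an independent Job.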
3.2 Machine Learning, C++

Based on Techila Technologies' observations, many banks have developed their quantitative libraries in C/C++, and are now extending them with Machine Learning (ML) implementations. Bloomberg, for example, has developed an ML solution which addresses market demands and shows the probability of selling a specific volume at a specific price, the expected cost of liquidation and maximum volume, and the expected days to liquidate a specific volume.

Software used in Machine Learning, C++:
GCC 3.0
Microsoft Visual Studio 2010
Additional libraries: None

This ML test uses a compute-intensive code which is implemented in the C++ language. Techila enables execution of ML in an environment which is aligned with the organization's governance, risk management, and compliance requirements (IT GRC). This use-case was considered to be relevant also in this benchmark test round, as financial engineers are interested in possibilities to benefit from ML techniques in interacting with the market.

The ML algorithm is used for uncovering multivariate associations, either with classification or regression tree ensembles, from large and diverse data sets. It natively handles numerical and categorical data with missing values, and potentially large quantities of non-informative features are handled gracefully utilizing artificial contrast features, bootstrapping, and p-value estimation.

Figure 3: Time used in computing in Machine Learning, C++

In this Machine Learning case, all instance types delivered performance figures which went logically hand-in-hand with the instance types' technical specifications. The differences observed between
the Windows operating system and the Linux-based systems could be explained by the compilers' ability to optimize the code for each operating system platform.

Before running the tests, the test team would have expected the newer generation Compute Optimized AWS c4.4xlarge instance to perform significantly better than the older but still popular c3.8xlarge instances. The team was surprised when they saw that the performance of AWS c4.4xlarge was only a little better than AWS c3.8xlarge. The performance difference between the instance types was greater in Azure, despite the fact that both Azure instances were relatively new instance types.

In this test case the Linux-based cloud server instances performed well compared to the reference servers. In fact, the AWS Compute Optimized server instance c4.4xlarge with Linux was even slightly faster than the reference servers with Linux.

The price/performance graph below displays how the performance of each instance type compares when the instance price is taken into account.

Figure 4. Price/performance of computing in Machine Learning, C++
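The production ML library in this test case is proprietary C++ and is not reproduced here. As a rough illustration of the approach described above (a bootstrapped tree ensemble with artificial contrast features used to separate informative features from non-informative ones), the following Python/scikit-learn sketch may help; the p-value estimation step of the real method is replaced by a simple threshold, and all data is synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(seed=7)

# Synthetic data set: many features, most of them non-informative
X, y = make_classification(n_samples=2000, n_features=40, n_informative=5,
                           n_redundant=0, random_state=7)

# Artificial contrast features: independently shuffled copies of the real columns.
# A real feature is only considered informative if it clearly beats the contrasts.
contrasts = np.apply_along_axis(rng.permutation, 0, X)
X_aug = np.hstack([X, contrasts])

# Bootstrapped classification-tree ensemble, single-threaded as in the benchmark runs
forest = RandomForestClassifier(n_estimators=500, bootstrap=True,
                                n_jobs=1, random_state=7).fit(X_aug, y)

real_imp = forest.feature_importances_[:X.shape[1]]
contrast_imp = forest.feature_importances_[X.shape[1]:]
threshold = contrast_imp.max()   # crude cut-off; the real method uses p-value estimation
print("features beating all contrasts:", np.where(real_imp > threshold)[0])
```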
3.3 Option Pricing, C#

One of the key motivations for including this use-case in the test suite was to investigate the performance of different platforms in C# simulations. Based on Techila Technologies' observations, many banks have developed their own quantitative libraries in C/C++. A recent trend has been increasing interest towards using C#, because it is as powerful as C++, easy to integrate with Excel and database systems, and interoperable with other Microsoft products.

Software used in Option Pricing, C#:
Microsoft Visual Studio 2010
.NET Framework 4.5
Additional libraries: None

In this test case, the team decided to focus on running the tests on the Windows operating system only. There is an open source implementation of Microsoft's .NET Framework available, Mono, which would enable running C# applications also in Linux-based environments. The team decided not to use the Mono solution in this benchmark experiment, because using Mono would have added another layer to the software stack. Being able to analyze the platform's computing performance would require an in-depth analysis of the Mono layer's possible effect. This was not in the scope of this test round. For the same reason, the tests on the reference servers with Linux were not executed for the C# case either.

Figure 5. Time used in computing in Option Pricing, C#

When looking at the performance metrics, we can see that Azure A11, which is a so-called Compute Intensive instance, performed better in this test case than the Azure D14 Optimized Compute instance. This is in line with Kenaz Kwa's post on the Windows Azure blog, but the size of the gap between the instance types was a surprise to the team. In AWS, the difference between the different Compute Optimized instance types was much smaller in this test case.
Another interesting observation is that in the Machine Learning test case, which was implemented in C++ (Chapter 3.2), AWS with both Windows and Linux was faster than Azure, and GCE with Linux performed well too, but in this Option Pricing use-case the Azure A11 was a clear winner. Based on this Option Pricing test, the team suggests that Microsoft has been able to optimize its Common Language Runtime (CLR) for Azure. This would be logical, because C# was originally developed by Microsoft within its .NET initiative.

The price/performance graph below displays how the performance of each instance type compares when the instance price is taken into account.

Figure 6. Price/performance of computing in Option Pricing, C#
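The C# pricer used in this test case is confidential. For readers unfamiliar with the workload type, the sketch below shows, in Python and with invented parameters, the general shape of a Monte Carlo option pricing run: each Job prices an independent, fixed-seed batch of paths, which is what makes the problem embarrassingly parallel.

```python
import numpy as np

def price_european_call(s0, strike, rate, sigma, maturity, n_paths, seed):
    """Monte Carlo price of a European call under geometric Brownian motion."""
    rng = np.random.default_rng(seed)            # fixed seed, as in the benchmark methodology
    z = rng.standard_normal(n_paths)
    st = s0 * np.exp((rate - 0.5 * sigma**2) * maturity + sigma * np.sqrt(maturity) * z)
    payoff = np.maximum(st - strike, 0.0)
    return np.exp(-rate * maturity) * payoff.mean()

# Each "Job" prices an independent batch of paths with its own seed;
# the batch results are averaged afterwards.
batch_prices = [price_european_call(s0=100.0, strike=105.0, rate=0.02,
                                    sigma=0.2, maturity=1.0,
                                    n_paths=200_000, seed=job_id)
                for job_id in range(8)]
print("estimated price:", sum(batch_prices) / len(batch_prices))
```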
3.4 Backtesting, MATLAB

The use-case consists of a computationally intensive MATLAB code which is used in backtesting of financial models. The code was designed to run directly on MATLAB without the need for additional MATLAB Toolboxes.

Software used in Backtesting, MATLAB:
MATLAB R2013b 64bit
Additional toolboxes: None

Figure 7: Time used in computing in Backtesting, MATLAB

The reference server environment was a clear winner compared to any of the clouds, including cloud instances with Haswell processors. The statistics recorded for the reference servers in this test were impressive. Comparing the performance of the AWS, Azure and GCE Linux instances with the performance of the reference servers shows that the clouds are significantly slower in this test case, %.
Figure 8. Price/performance of computing in Backtesting, MATLAB
3.5 Derivatives Pricing, MATLAB

The goal of derivatives pricing in Finance is to determine the fair price of a given security. The securities can be plain vanilla and exotic options, convertible bonds, or something else. This test comprised solving a computationally intensive problem related to pricing exotic derivatives. The model was implemented in MATLAB using the standard libraries of MATLAB and functions from the MATLAB Parallel Computing Toolbox (PCT).

Software used in Derivatives Pricing, MATLAB:
MATLAB R2013b 64bit
Additional toolboxes: MATLAB Parallel Computing Toolbox Version 6.3

Figure 9. Time used in computing in Derivatives Pricing, MATLAB

When running the tests, Techila Technologies' team recorded performance statistics which were well aligned with the hardware specifications of the instance types within each cloud vendor's offering. An interesting observation the team made from the test results is the good performance of Azure in this test case. Azure A11 and Azure D14 use older generation Sandy Bridge processors, whereas AWS c4.4xlarge and the GCE instance use Haswell processors. Despite this, the Azure instances deliver very competitive performance in this case, with only the reference servers being faster. The team suggests that a possible reason for this can be the unsuitability of this test case for architectures using hyper-threading (HT). Internet forums include discussions where users have reported that PCT operations do not always perform ideally on hyper-threaded platforms. A possible solution
to overcome this limitation would be to rewrite the code's PCT operations with equivalent operations available in the Techila SDK.

The price/performance graph below displays how the performance of each instance type compares when the instance price is taken into account.

Figure 10. Price/performance of computing in Derivatives Pricing, MATLAB
3.6 Fitting A Pricing Model, MATLAB

This use-case is about estimating a financial multiparameter model used in the pricing of financial instruments. The problem consists of a MATLAB model for option pricing and daily data. The dataset used in this use-case consists of data from a long time period. The model is implemented in MATLAB and uses functions from MATLAB's Financial Toolbox and Statistics Toolbox. The model fitting can be done for each data point separately, which enables data parallelism.

Software used in Fitting A Pricing Model, MATLAB:
MATLAB R2013b 64bit
Additional toolboxes: MATLAB Financial Toolbox Version 5.2, MATLAB Statistics Toolbox Version 8.3

Even though the problem was data-parallelizable, it was not embarrassingly parallel. This was because of a state synchronization operation related to the process. This requires either collecting intermediate results on the user's computer and performing a synchronization procedure there, or implementing communication directly between the computational tasks. Because this benchmark experiment is implemented using Techila distributed computing middleware, the state synchronization between Jobs was easy to implement using Techila's new Interconnect feature.

The Techila Interconnect is a lightweight technology designed to enable easy implementation of parallel workloads in standard environments such as MATLAB, R, Python, C/C++, C#/.NET and Java. It offers fast communication between Jobs without the need to use the Message Passing Interface (MPI) or Jini. It also adds new features to parallel computing, such as communication between Jobs implemented in different languages, and easy Detaching and Attaching of Projects. With the help of Techila Interconnect it is possible to extend the benefits of Techila distributed computing middleware from embarrassingly parallel problems to parallel problems, and enhance the performance of interconnected problems, such as evolutionary optimization algorithms, in a distributed computing environment.
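The actual implementation of this test case is MATLAB code using the Techila Interconnect feature and is not shown here. The Python sketch below, with hypothetical function names, only illustrates the structure described above: each data point is fitted independently (the data-parallel part), and a shared state is synchronized between rounds, which is the step that keeps the problem from being embarrassingly parallel.

```python
from concurrent.futures import ProcessPoolExecutor

def fit_one_point(data_point, shared_state):
    """Hypothetical per-data-point model fit; returns a partial result."""
    # In the real use-case this would be a pricing-model calibration step.
    return (data_point - shared_state["offset"]) ** 2

def synchronize(partial_results):
    """Hypothetical state synchronization combining all partial results."""
    return {"offset": sum(partial_results) / len(partial_results)}

if __name__ == "__main__":
    data = [float(i) for i in range(32)]
    state = {"offset": 0.0}
    with ProcessPoolExecutor(max_workers=4) as pool:
        for round_no in range(3):                  # a few synchronization rounds
            partials = list(pool.map(fit_one_point, data, [state] * len(data)))
            state = synchronize(partials)          # the step the Interconnect handles between Jobs
            print(f"round {round_no}: offset = {state['offset']:.3f}")
```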
Figure 11. Time used in computing in Fitting A Pricing Model, MATLAB

In the team's opinion, the most interesting results in this Model Fitting use-case were observed in AWS with the big hyper-threaded many-core AWS c3.8xlarge instances. The team started the AWS tests by running the tests in an environment built of AWS c4.4xlarge instances. In this system, they observed that the application performance on Linux Workers was slightly better than on Workers running the Windows operating system.

After the AWS c4.4xlarge tests, the team moved to tests in a system with AWS c3.8xlarge instances. AWS c3.8xlarge is an older but still popular instance type that uses Sandy Bridge processors running at a lower clock speed than the AWS c4.4xlarge. The team expected that the performance of the older generation system would be lower than the performance of the AWS c4.4xlarge Haswell system. On the other hand, because the AWS c3.8xlarge uses processors with twice as many cores, there was an assumption that this could be an advantage to the AWS c3.8xlarge.

What the team did not foresee was that the relative gap in computing performance between the Windows operating system and Linux would be significantly different. In the AWS c4.4xlarge system the performance difference between the Windows operating system and Linux was less than 1%. But in AWS c3.8xlarge the performance difference was a stunning 54%.

When the team noticed the radical difference in performance, they retrieved the detailed execution statistics for the test Projects. The performance statistics showed that the Jobs' utilization of processor capacity in all environments was high. Based on this, the team was able to assume that the performance problem was not related to I/O access. If there were a
bottleneck in I/O access, the CPU would be idling, which would reduce the Job's ability to utilize processor capacity efficiently.

After this, the team ran the same Jobs on the same virtual machines, but set Techila to a mode where it does not assign more than 16 Jobs to each host. As presented in Appendix A Cloud Platform Specifications, Chapter A.1, AWS c3.8xlarge has 16 physical cores, which are hyper-threaded to present 32 CPU cores. In this new run, the performance of the AWS c3.8xlarge system with the Windows operating system increased to a level which aligns with the results of this Model Fitting test in other AWS systems.

Based on this test, the team assumes that the AWS c3.8xlarge system sets limitations on the scalability of the computing performance inside a physical processor. The team's guess is that there is something in the design of the AWS c3.8xlarge host which does not work well with the Windows operating system in this MATLAB use-case. Because Techila makes the processing scale across many physical processors, limiting the use of hyper-threading makes the performance constraint vanish. Whether the root cause in this case is in an operating system driver or in locking system components cannot be said for sure based on this analysis. What can be confirmed is that in this test case the AWS c3.8xlarge performs significantly better when running Linux.

Even if limiting the number of Jobs assigned to AWS c3.8xlarge instances solves the problem, this cannot be considered a very feasible solution. Limiting the number of Jobs doubles the system's cost/performance index, and would make the c3.8xlarge expensive to use.

When looking at the performance metrics and the cost/performance diagram, we can see that AWS c4.4xlarge and GCE n1-standard-16 can offer great cost/performance for users who are doing model fitting on Linux. Because they are Linux-based platforms, this could also make them interesting platforms for implementing burst capabilities for the reference servers included in this test.
Figure 12. Price/performance of computing in Fitting A Pricing Model, MATLAB
3.7 Data Driven Insurance Risk Simulation, MATLAB

This use-case is an insurance risk simulation which includes analysis of hundreds of input files comprising a so-called stochastic dataset.

Software used in Data Driven Insurance Risk Simulation, MATLAB:
MATLAB R2013b 64bit
Additional toolboxes: MATLAB Parallel Computing Toolbox Version 6.3

In this use-case the algorithm and data were both parallelizable. When executed using the Techila distributed computing middleware, the application was able to benefit from both data parallelism and task parallelism at the same time, which made the performance scale well.

The test case was organized in a way where the data files were transferred to each Worker before starting the tests, as described in Chapter 2.2. The purpose of this was to eliminate the effect of possible connection issues, other network traffic, and bandwidth between the data sources and the clouds.

Figure 13. Time used in computing in Data Driven Insurance Risk Simulation

When executing the tests on different platforms, the test team noticed that in this Data Driven Insurance Risk Simulation case the computing performance in Linux-based environments was consistently better than on the Windows operating system, but also that larger many-core instances may not be as ideal for I/O-intensive use-cases as smaller multi-core instances. This use-case also highlights the benefits of fast SSD drives for computing.

AWS c3.8xlarge uses older processors than AWS c4.4xlarge. The AWS c3.8xlarge processors use Sandy Bridge, which is also used in Azure A11. On the other hand, AWS c3.8xlarge instances have
the most processor cores of the instances included in this benchmark test round, twice as many as the other instances. The AWS c3.8xlarge has 32 hyper-threaded processor cores. Because of this, we could expect the performance of the AWS c3.8xlarge system to be relatively close to Azure A11. When looking at Figure 13 we can see that AWS c3.8xlarge performs significantly worse in this Data Driven Insurance Risk Simulation test case than the other instances.

Because this test case is an embarrassingly parallel problem processing a set of independent input files, increasing the number of simultaneous processes on an instance will also increase the number of simultaneous I/O operations. Because of the design of this test case, the I/O in this use-case consists of file access operations. When we use many-core instances with a large number of CPU cores, the increasing file access operations can increase the probability of file locks and other file system related bottlenecks. On the other hand, the reference-server-beating performance recorded on Azure D14 running Linux can be explained by its fast SSD drives and good adaptation of the operating system's drivers for them.

In this kind of data-driven simulation, increasing data parallelism starts to load the host's operating system and file system, and at some point these become a bottleneck for performance. Because of this, the team suggests that users use Techila distributed computing middleware to identify and select the ideal degree of parallelism for their infrastructure, and consider using a larger fleet of smaller CPUs rather than a smaller fleet of big and possibly hyper-threaded many-core processors.

The price/performance graph below displays how the performance of each instance type compares when the instance price is taken into account.

Figure 14. Price/performance of computing in Data Driven Insurance Risk Simulation
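The insurance risk simulation itself is confidential MATLAB code. The hedged Python sketch below only illustrates the I/O pattern discussed above: a set of independent input files processed in parallel, with the degree of parallelism capped so that file access does not become the bottleneck. The file layout and the per-file computation are placeholders.

```python
import glob
from concurrent.futures import ProcessPoolExecutor

def simulate_one_file(path: str) -> float:
    """Hypothetical stand-in for the per-file insurance risk simulation."""
    with open(path) as f:                 # each task performs its own file I/O
        values = [float(line) for line in f if line.strip()]
    return sum(v * v for v in values)     # placeholder for the actual risk metric

def run_simulation(input_glob: str, max_parallel: int) -> list:
    """Process independent input files with a capped degree of parallelism."""
    files = sorted(glob.glob(input_glob))
    # Capping max_parallel below the CPU-core count can reduce contention on the
    # file system when file access, rather than the CPU, is the bottleneck.
    with ProcessPoolExecutor(max_workers=max_parallel) as pool:
        return list(pool.map(simulate_one_file, files))

if __name__ == "__main__":
    results = run_simulation("stochastic_dataset/*.txt", max_parallel=8)
    print(f"{len(results)} input files processed")
```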
3.8 Econometric Forecasting, Dynare

This test comprised simulating a macroeconomic DSGE model (Dynamic Stochastic General Equilibrium), which was implemented using Dynare. Dynare is a software platform that runs with MATLAB to solve, simulate and estimate macroeconomic models. The DSGE methodology attempts to explain economic phenomena, such as economic growth and business cycles, and also looks at the effects of monetary and fiscal policy, on the basis of macroeconomic models derived from microeconomic principles.

Software used in Econometric Forecasting, Dynare:
Dynare
MATLAB R2013b 64bit
Additional toolboxes: MATLAB Statistics Toolbox Version 8.3

A large panel of applied mathematics and computer science techniques are internally employed by Dynare: multivariate nonlinear solving and optimization, matrix factorizations, local functional approximation, Kalman filters and smoothers, MCMC techniques for Bayesian estimation, graph algorithms, optimal control, etc. Various public bodies (central banks, ministries of economy and finance, international organizations) and some private financial institutions use Dynare for performing policy analysis exercises and as a support tool for forecasting exercises.

This Econometric Forecasting test case was implemented in Dynare and run in the MATLAB environment. In addition to MATLAB's standard functions, the test case also used methods from the MATLAB Statistics Toolbox. Because of the design of this application, each Job related to the application generates a large amount of temporary data. Because of this, the team expected the I/O capabilities of the Worker platforms to play a significant role in the overall system performance.
Figure 15. Time used in computing in Econometric Forecasting, Dynare

When executing the tests on different platforms, the test team noticed that in this Econometric Forecasting case the computing performance in Linux-based environments was consistently slightly better than on the Windows operating system. The team suggests that this difference could occur if the Dynare algorithms used in this Econometric Forecasting test case are more optimized for Linux-based platforms.

When comparing the performance of the cloud instances and the reference servers, we can see the superior performance of the reference servers. Even the best performing cloud instance type, the AWS c4.4xlarge with Linux, performed 40% slower than the reference servers in this high disk I/O use-case. Even the high-specification processors of AWS c4.4xlarge or the SSD drives on Azure D14 cannot compensate for the gap. The test team's guess is that the reference servers' excellent performance in this test case could be related to possible I/O latencies caused by the virtualization technologies used in the clouds. The team has met engineers from manufacturing companies who have run engineering simulation tools in the cloud. If the engineering tool's solver has loaded the I/O, the performance of the tools has not been as good as the hardware specifications would promise. The team suggests that the constant writing and rewriting of files in this Dynare DSGE simulation could cause a similar phenomenon.

Other interesting observations can be made by comparing the performance of the AWS c4.4xlarge and AWS c3.8xlarge, and Azure A11 and Azure D14:
- In several MATLAB use-cases in this benchmark round, there is a clear difference in the performance of different AWS instances, but in this case, which is implemented in Dynare running on MATLAB, the difference is quite small, especially on Linux. This could indicate that the algorithms of Dynare benefit more from hyper-threaded architectures than MATLAB's standard functions.

- The statistics from Azure A11 and Azure D14 show that the performance of this Econometric Forecasting use-case is better on Azure D14 instances than on compute-intensive Azure A11 instances. This is contrary to what the processor specifications presented in Appendix A Cloud Platform Specifications, Chapter A.1, would suggest. When the test team analyzed the detailed execution statistics for the test Projects, they noticed that the processor utilization was not consistent between Azure D14 and Azure A11. When the application was run in a system using Azure D14 instances, it was able to perform the Dynare DSGE simulation efficiently. In an Azure A11 environment the processor utilization statistics remained lower. Based on their analysis, the test team says that the unideal processor utilization of Azure A11 in this case can be due to its conventional drives for temporary data storage. In this Econometric Forecasting test case, Jobs generate large amounts of temporary data. Azure D14 uses fast SSD drives for temporary data. The SSD drives are able to keep up with the pace of I/O operations, whereas the conventional disks of Azure A11 become a bottleneck for the system's performance.

Techila Technologies' test team says that this test case highlights the importance of understanding the capabilities of the cloud instances. The abstraction and automation in clouds make it easy to deploy virtual machines with different processor capabilities and mount media to them. Because of the abstraction, it is easy to forget that clouds are, after all, built of hardware with physical constraints, just like the computers on the user's desk. The more disk I/O operations the use-case has, the more relevant it is to keep an eye on the system's configuration of the OS Disk, Temporary Storage Disk and Data Disks.

The price/performance graph below displays how the performance of each instance type compares when the instance price is taken into account.
Figure 16. Price/performance of computing in Econometric Forecasting, Dynare
3.9 Portfolio Simulation, R

This Portfolio Simulation test case uses a highly compute-intensive code implemented in the R programming language. This use-case was already seen in Techila Technologies' cloud benchmark experiment in 2014. The team decided to use the same R version as in 2014 also this year, because this enables comparison between the data from this test round and the 2014 test round.

Software used in Portfolio Simulation, R:
R

R script is a rapidly developing language. Because of this, version control and package management are common challenges in organizations with many R users who require high-performance computing services. If the users have downloaded their R environments from CRAN at different times, or are using features available in the various R packages available in the community, managing the configuration can become problematic. Because Techila is an autonomic computing solution, it offers self-configuration for the computing environment and built-in version control of packages, including input data. This simplifies the management of multi-user systems and the horizontal scalability of the computing environment in R scenarios.

Figure 17 Time used in computing in Portfolio Simulation, R

If we look at the performance data recorded in the 2014 tests and compare it with the computing times recorded this year, we can say that, generally speaking, Linux remains the better-performing environment for this Portfolio Simulation use-case.
The comparison also shows the increasing performance of cloud instances over the year. Because the availability of different operating systems was more limited in 2014 than in 2015, a comprehensive comparison is not possible. However, we can see that the Linux instances which represent the newest development this year with their Haswell processors perform 10-25% faster than the Linux instances with the Sandy Bridge architecture tested in 2014.

An interesting observation the test team made in the course of these tests is the good performance of Azure A11 with Linux. This instance type did not excel in the MATLAB use-cases, but in this Portfolio Simulation test case it was the fastest cloud instance, only 3% slower than the reference servers. Because of this, the test team suggests that Azure A11 with Linux could be an interesting alternative for R language users who seek the best performance.

The price/performance graph below displays how the performance of each instance type compares when the instance price is taken into account.

Figure 18. Price/performance of computing in Portfolio Simulation, R
3.10 Mixed workload

Based on Techila Technologies' conversations with customers in the industry, a common requirement for computing solutions is to establish a single platform which can serve many simultaneous users and facilitate collaboration between different business lines. Because business IT in many cases serves several business areas, limiting the focus to the individual test cases presented in this report could provide an incomplete picture. Because of this, Techila Technologies' test team thought that it would be useful to estimate a possible total cost of computing and overall performance when the workload consists of a mix of business use-cases. In this report, this was done by calculating normalized and averaged performance and cost statistics.

Techila distributed computing middleware and management solution is a multi-tenant solution with a priority and policy-based framework, which can be used to automatically assign the optimal resources to support different computational workloads. When using Techila in business IT, it is possible to deploy even a heterogeneous computing platform and let Techila automatically optimize the computing by use-cases and SLA-based priorities and execution policies. The possibility of using a heterogeneous computing platform and this optimization were not included in the analysis in this chapter.

Figure 19. Normalized time used in computing in the tests
Figure 20. Normalized price/performance of computing in the tests
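The report does not spell out the exact normalization used for Figures 19 to 21, so the sketch below shows one common approach under that assumption: normalize each test case's wall-clock time to the fastest platform for that case, then average the normalized values per platform. The times in the example are invented.

```python
import statistics

# Hypothetical wall-clock times in seconds: {test_case: {platform: time}}
times = {
    "Backtesting":         {"CSC": 100.0, "AWS c4.4xlarge": 130.0, "Azure D14": 150.0},
    "Portfolio Analytics": {"CSC": 200.0, "AWS c4.4xlarge": 170.0, "Azure D14": 160.0},
}

def normalized_scores(times_by_case):
    """Normalize each case to its fastest platform (1.0 = fastest), then average per platform."""
    per_platform = {}
    for case, by_platform in times_by_case.items():
        best = min(by_platform.values())
        for platform, t in by_platform.items():
            per_platform.setdefault(platform, []).append(t / best)
    return {p: statistics.mean(vals) for p, vals in per_platform.items()}

for platform, score in sorted(normalized_scores(times).items(), key=lambda kv: kv[1]):
    print(f"{platform:18} {score:.2f}  (lower is better)")
```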
4 Remarks

4.1 More reliability

In the Cloud Benchmark test round performed in 2014, Techila Technologies observed that not all cloud providers were able to respond consistently to requests for high-profile instances. The team suggested in 2014 that a possible reason for this behavior could have been that the providers were still building their datacenters.

Now, a year later, when the test team performed similar deployments, they did not face capacity-related problems anymore. As described in Chapter 4.2, provisioning different instance types takes different amounts of time, but now the instances used in this benchmark test round got deployed consistently. The team says that this could indicate that all the leading clouds have developed their infrastructure to a scale where the platforms can respond even to large requests quite reliably, even if the customer is using the service on an on-demand model.

This development is great news for customers. The matured cloud service offerings mean that customers get more consistent value for their money. But even more importantly for business, this means that exploring the use of cloud-based processing is becoming possible even in business areas with SLA requirements. In the worst case, unreliability of IT could impact the business and mean missed opportunities and lower profits. Because of this, the team is glad to report that this year all cloud platforms responded to resource requests consistently.

4.2 Still limitations

In the FSI, security and compliance are key concerns. All the leading public cloud providers have been active in listening to customers and implementing features that can align with enterprise customers' governance, compliance, and audit requirements. Because security and compliance can become blockers for a cloud strategy, Techila Technologies recommends putting these requirements on the table as early as possible. The leading cloud providers are very large and process-driven enterprises. If the enterprise customer's IT outsourcing processes require Service Organization Controls (SOC) reports, knowledge about the vendor's team, or the ability to walk into the datacenter to perform an audit, it is good to discuss these requirements with the vendor as early as possible. This will give the vendor and the customer time to see if the processes can be integrated.

Even if the reliability of provisioning cloud services has increased significantly over the last year, clouds are still not sources of unlimited elasticity. Based on the observations made during this test round, clouds are very capable of meeting the fluctuating demands of modern business. However, clouds are still not ready to support ad hoc demands taken to the extreme. This is especially valid when using Windows, which is a popular platform in the FSI.

When designing the integration of Windows-based cloud resources into enterprise IT, it is good to remember that the installation of Windows includes a so-called System Preparation (Sysprep) phase, which happens when creating a new virtual machine in Infrastructure as a Service (IaaS) systems and in Platform as a Service (PaaS) environments. Because of the Sysprep phase, the
deployment of Windows instances usually takes longer than the deployment of similar instances with a Linux-based system.

There has been discussion about avoiding the time required for Sysprep by leaving IaaS machines in a stopped state when they are not in use. The Sysprep time would then be incurred only in the initial deployment. When the machines are needed again, they could simply be started from their previous state. This solution is technically possible, but in the FSI, where security and compliance are key concerns, Techila Technologies wants to remind customers that leaving machines in a stopped state increases the risk of leaving data on the machine's stateful disk. If the IaaS machines are deleted, the leading cloud platforms erase the disks and secure the erased content using advanced processes.
5 Conclusions

When examining individual test cases based on this benchmark, we can see that not all clouds are built in the same way. In the real-world business environment, a cloud instance type that excels in one use-case can give a less optimal performance in another. Because of this, Techila Technologies is unable to nominate a winning cloud platform or winning instance type based on this benchmark test round; in each case the winner depends on the use-cases, the requirements of the business, and the cloud vendor's ability to integrate with the enterprise's processes.

Figure 21 Normalized price/performance statistics for each instance type

As illustrated in Figure 21, Azure A11 offers great Linux performance, and AWS c4.4xlarge has an attractive price/performance ratio. On the other hand, this benchmark test round demonstrated that in some cases the benefits of fast drive technologies are as critical as computing power. In such cases, Azure D14 can be an interesting option.

Because clouds are developing rapidly, Techila Technologies recommends that IT architects keep an eye on the availability of new resources. When cloud vendors introduce new instance types, it may be useful to run tests on them to see whether the new offerings would benefit your business. Techila makes it easy to deploy dedicated resources for such tests without interfering with production systems.

The experiences of the test team during the benchmark test round show that all clouds have been developed into very capable platforms over the last year. All the cloud platforms are currently able to respond even to high resource requests.
Cloud vendors have clearly been listening to their customers and partners, and have aligned their roadmaps with business requirements. Much attention has been paid to cloud security and support for industry-specific standards. As a result, the current cloud offerings include versatile mechanisms supporting the integration of cloud capabilities with the customer's IT processes.

A common goal among customers in the FSI is the establishment of a single platform which can support computing across an enterprise. The goal is to eliminate performance barriers and improve productivity levels from research and development all the way through to production. When a business operates in rapidly moving markets, computational workloads can fluctuate. Cloud-based infrastructures have been seen as a potential source of the required flexibility, which cannot be achieved using a conventional, on-premises data center architecture.

Techila Technologies' discussions with enterprise IT architects reveal that one of the remaining roadblocks to cloud computing in the FSI is compliance, particularly the communication of compliance. Compliance is an area of the FSI in which financial institutions cannot afford to take risks. The FSI needs to convince regulatory authorities that cloud adoption does not compromise their ability to manage risks relating to system availability and information security. Many business users and IT architects rate the mechanisms and processes of the leading cloud vendors as good overall. In most cases, the leading cloud vendors are able to support integration with the customer's IT outsourcing processes, data destruction and leakage prevention have been designed professionally, and offerings include in-region hosting with versatile Virtual Private Cloud (VPC) technologies. Because of this, some customers have said that the key remaining roadblock on the way to the cloud is achieving peace of mind in relation to this business-critical topic.

This benchmark report shows that there are many areas of finance in which cloud-based processing would benefit business; the use-cases included in this benchmark are just some examples. Other possibilities include the development of risk models such as VaR and CCR, capital modeling, tracking of the real-time performance of portfolios, volatility modeling, investigation of interdependencies and correlations, analyzing the causes of stock prices, and scalable integration of data feeds.

If an organization has not achieved full peace of mind on compliance but is still interested in the benefits of cloud-based processing, Techila Technologies recommends listing the applicable use-cases and investigating which of these deal with sensitive data sets, and which belong to a lower confidentiality category. Once the cases have been arranged into confidentiality categories, the exploration of cloud-based processing can begin with the lowest confidentiality class scenarios, and be extended to other use-cases as the organization feels more confident about the cloud. Such an incremental process offers the organization a risk-free way of learning about integrating cloud computing with a business's systems.

Despite the rapid development of cloud offerings, Techila Technologies does not believe that on-premises servers in in-house data centers will disappear.
As illustrated in Figure 21, a right-sized on-premises infrastructure can provide an organization with continuous processing power and resources, which can easily be assigned to processing even the most highly classified workloads. The offerings of the leading cloud providers are highly capable. The cloud can offer overall performance almost as good as that of the reference servers used in this benchmark, with an excellent cost/performance ratio, particularly in the case of fluctuating workloads. Because of this, a hybrid IT infrastructure combined with policy-based assignment of resources can be an attractive option for many companies in the industry.

Techila assigns computing resources according to the priorities and policies defined. With execution policies, it is possible to define the workloads and data which need to remain on on-premises servers, strictly within the organization's firewalls, regions, and other trust zones. Techila also supports the integration of hybrid IT and the easy exploration of cloud-bursting capabilities for workloads where this can be allowed.

Techila Technologies hopes that enterprises exploring uses for the cloud will find this report useful and that the data it includes supports the development of successful cloud implementations. Techila Technologies is a pioneer in cloud-powered high-performance computing and has helped many organizations to integrate cloud-based processing with their IT. Keeping business waiting is expensive. If you are interested in how Techila can bring rocket speed to your computing, contact us at
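To make the execution-policy idea above more concrete, the sketch below shows the kind of rule such a policy can express: workloads tagged as restricted stay on on-premises resources inside the organization's trust zone, while lower-classification workloads may burst to the cloud. This is a hypothetical Python illustration written for this report, not Techila configuration syntax; the pool names and classification levels are placeholder assumptions.

```python
# Hypothetical illustration only -- NOT Techila configuration syntax.
# It sketches policy-based assignment of workloads to resource pools
# based on the data classification of the workload.

POLICY = {
    # data classification -> resource pools the workload may use
    "restricted": ["on-premises"],
    "internal":   ["on-premises", "private-cloud"],
    "public":     ["on-premises", "private-cloud", "public-cloud"],
}

def assign_pool(classification, cloud_available=True):
    """Pick a resource pool for a workload, preferring cloud burst when allowed."""
    allowed = POLICY[classification]
    if cloud_available and "public-cloud" in allowed:
        return "public-cloud"          # burst to the cloud
    return allowed[0]                  # otherwise stay within the trust zone

print(assign_pool("restricted"))       # on-premises
print(assign_pool("public"))           # public-cloud
```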
Glossary

Table 3 Terminology glossary

AWS - Amazon Web Services
Azure - Microsoft Azure
CLR - Common Language Runtime
CRAN - The Comprehensive R Archive Network
CSC - CSC IT Center For Science
EBS - Elastic Block Store
FLOPS - Floating-point Operations Per Second
Gbps - Gigabits per second
GCE - Google Compute Engine
HPC - High-Performance Computing
HT - Hyper-threading
IaaS - Infrastructure as a Service
Jini - A network architecture for the construction of distributed systems, also known as Apache River
MKL - Math Kernel Library
Mono - A free and open-source project to create a standards-compliant, .NET Framework-compatible set of tools
MPI - Message Passing Interface
P2P - Peer-to-Peer
PaaS - Platform as a Service
SOC - Service Organization Controls
SSD - Solid-state drive
VPC - Virtual Private Cloud
Appendix A Cloud Platform Specifications

A.1 Systems
Although the focus of this benchmark experiment is not on comparing processor specifications, the technical specifications show interesting differences in the cloud architectures. Features that can explain differences in specific test cases include hyper-threading (HT) of virtual CPUs and the use of network disks as storage. Please see Chapter 2.2 for an explanation of how the Techila Workers installation location was chosen.
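As a practical side note, on a Linux instance it is easy to check whether the advertised vCPUs map to hyper-threads or to full physical cores, for example by inspecting the lscpu output. The minimal sketch below assumes lscpu is available on the image.

```python
# Minimal check, assuming a Linux image where the lscpu utility is available:
# if "Thread(s) per core" is 2, the advertised vCPUs are hyper-threads
# rather than full physical cores.
import subprocess

output = subprocess.run(["lscpu"], capture_output=True, text=True).stdout
for line in output.splitlines():
    if line.startswith(("Thread(s) per core", "Core(s) per socket", "Socket(s)")):
        print(line.strip())
```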
A.2 Prices

Prices on the cloud market change constantly. The cost estimates included in this report use the standard list prices which the cloud vendors have published on the Internet. The list prices valid at the date of this document for the cloud instance types included in this benchmark experiment are presented in the table below. The prices are for data centers hosted in the European time zone.

Many cloud providers offer discounts for users who have sustained use, sign up for a capacity plan, or sign up for the service with a monetary commitment. Customers who do this can save money by trading off some of the elasticity of the cloud and/or committing to a prepayment or minimum usage over the invoicing period.
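The sketch below illustrates that trade-off with placeholder numbers (the prices, discount level, and minimum commitment are assumptions for illustration, not vendor list prices): a commitment lowers the hourly rate, but at low utilization the minimum billable usage can make the effective cost higher than on-demand pricing.

```python
# Illustrative arithmetic only; the prices, discount level, and minimum
# commitment below are placeholder assumptions, not vendor list prices.

ON_DEMAND_PRICE = 1.00    # USD per instance hour, pay as you go
COMMITTED_PRICE = 0.70    # USD per instance hour with a usage commitment
COMMITTED_HOURS = 500     # minimum billable hours per invoicing period

def effective_hourly_cost(hours_used):
    """Effective USD per used hour under both pricing models."""
    on_demand = hours_used * ON_DEMAND_PRICE
    committed = max(hours_used, COMMITTED_HOURS) * COMMITTED_PRICE
    return on_demand / hours_used, committed / hours_used

for used in (200, 500, 800):
    od, com = effective_hourly_cost(used)
    print(f"{used} h used: on-demand {od:.2f} USD/h, committed {com:.2f} USD/h")
```

With these placeholder figures, at 200 hours of use the commitment costs more per used hour than on-demand pricing, which is why the right choice depends on how steady the workload is.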
Billable time in the cloud is not standardized. All participating cloud platform providers bill for the time when the instances are in a running state, but the definition of running varies. In large-scale and high-performance computing scenarios, where deployments consist of a large number of cloud resources, this variance can be amplified. Amazon has the coarsest-granularity billing model, where invoicing is rounded up to a full hour. Azure has the finest granularity, where billing is implemented per minute, rounded up to the nearest minute. GCE is billed per minute, too, but the minimum billable amount is 10 minutes.

Because of the lack of standardization in billable time, and the differences in billing models, accurate calculation of the full TCO of cloud computing is unique to each organization. This estimation is a complex task, which requires simulation of the organization's computational workload model according to the cloud-specific billing definitions. Because the goal of this report is to provide customers with an easy-to-understand analysis, the Techila Technologies team decided to focus the analysis only on the cost of actual computing. For this work, the team wrote a simplified formula which does not take into account the billing states and billing granularities of the different cloud vendors. The simplified formula takes the prices presented in the table in this chapter and assumes that each cloud vendor's billing granularity would be one second, which is finer than any of them offers in reality. This simplification makes it more feasible to compare the cost of different cloud vendors with short test runs and removes the need to simulate an artificial longer-term workload, as was done in the 2014 test round.
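The sketch below shows how the difference between the actual, granularity-aware cost and the simplified cost can be computed for a single run. The billing granularities follow the models summarized above; the per-core price, core count, and runtime are placeholder assumptions, not benchmark data.

```python
# Sketch of the cost calculation described above. The billing granularities
# follow the models summarized in this chapter; the price, core count, and
# runtime are placeholder assumptions, not benchmark results.
import math

def billed_hours(wall_clock_hours, granularity_minutes, minimum_minutes=0):
    """Round wall clock time up according to a vendor's billing granularity."""
    minutes = max(wall_clock_hours * 60.0, minimum_minutes)
    return math.ceil(minutes / granularity_minutes) * granularity_minutes / 60.0

def actual_cost(hours, usd_per_core_hour, cores, granularity_minutes, minimum_minutes=0):
    return billed_hours(hours, granularity_minutes, minimum_minutes) * usd_per_core_hour * cores

def simplified_cost(hours, usd_per_core_hour, cores):
    """The report's simplified formula: one-second granularity assumed for every vendor."""
    return hours * usd_per_core_hour * cores

hours, price, cores = 0.40, 0.05, 16   # placeholder run: 24 minutes on 16 cores

print("Billed per hour:              ", actual_cost(hours, price, cores, granularity_minutes=60))
print("Billed per minute:            ", actual_cost(hours, price, cores, granularity_minutes=1))
print("Billed per minute, 10 min min:", actual_cost(hours, price, cores, granularity_minutes=1, minimum_minutes=10))
print("Simplified (per second):      ", simplified_cost(hours, price, cores))
```

For a 24-minute run, rounding up to a full hour more than doubles the billed cost compared with the simplified formula, which is exactly the kind of vendor-specific effect the simplification deliberately ignores.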
Appendix B Breakdown of execution times and cloud costs

B.1 General notes

This chapter presents a breakdown of the execution times and the cost of cloud computing in each test case. The tests were executed according to Chapter 2.2 Methodology. The cost of cloud computing is calculated according to the principles presented in Chapter 2.4 Cost of computing. The tables in this chapter contain the following columns:

Cloud Platform - Cloud environment.
Instance Type - Name of the cloud instance in the Cloud Platform offering.
Operating System - Operating system running on the instance.
USD / CPU core hour - Cost of running the cloud instance for one (1) hour, divided by the number of processor cores on the instance.
Wall clock time / h - Actual amount of time taken to execute the test case. This is equivalent to timing the test case with a stopwatch.
Cost (USD) - Actual cost of the cloud-based resources used in performing the computational work, taking into account the billing granularity of the cloud vendor.
Simplified Cost (USD) - Cost of the cloud-based resources used in performing the computational work, computed using the simplified billing formula described in Appendix A, Chapter A.2 Prices.
B.2 Portfolio Analytics

B.3 Machine Learning

B.4 Option Pricing

B.5 Backtesting

B.6 Derivatives Pricing

B.7 Fitting A Pricing Model

B.8 Data Driven Insurance Risk Simulation

B.9 Econometric Forecasting

B.10 Portfolio Simulation