Financial Services Grid Computing on Amazon Web Services
January 2016
© 2015, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices
This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions or assurances from AWS, its affiliates, suppliers or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
Contents
Abstract
Introduction
A Definition of Grid Computing for Financial Services
Compute Grid Architecture & Performance
Benefits of Grid Computing in the Cloud
Grid Computing on AWS
Initial Implementation
Full Cloud Implementation
Closing Thoughts
Glossary
Contributors
Document Revisions
Abstract
Amazon Web Services (AWS) offers customers operating large compute grids a powerful way to run risk and pricing calculations for more scenarios, against larger datasets, in a shorter time, and at lower cost. Meeting these goals with traditional infrastructure and applications presents a number of significant challenges. This whitepaper is intended for architects and developers in the financial services sector who are looking to expand grid computation onto AWS. It outlines best practices for managing large grids on the AWS platform and offers a reference architecture to guide organizations in the delivery of these complex systems.

Introduction

A Definition of Grid Computing for Financial Services
High performance computing (HPC) allows end users to solve complex science, engineering, and business problems using applications that require a large amount of computational resources, as well as high throughput and predictable-latency networking. Most systems providing HPC platforms are shared among many users and represent a significant capital investment to build, tune, and maintain. In this paper, we focus on high performance computing as applied to the financial services industry. This may include calculating prices and risk for derivative, commodity, or equity products; the aggregate and ticking risk calculation of trader portfolios; or the calculation of aggregate position for an entire institution. These calculations may be run continuously throughout the trading day, or run at the end of the day for clearing or reporting purposes. Unlike other types of HPC workloads, financial services workloads tend to use massively parallel architectures that can tolerate some latency, with optimization focused on overall throughput. In this paper we use the term compute grid to refer to HPC clusters exhibiting these characteristics. For other use cases, such as life sciences or engineering workloads, the approach is different.
Compute Grid Architecture & Performance
Many commercial and open source compute grids use HTTPS for communication and can run on relatively unreliable networks with variable throughput and latency. However, for ticking risk applications, and in some proprietary compute grids, network latency and bandwidth can be important factors in overall performance. Compute grids typically have hundreds to thousands of individual processes (engines) running on tens or hundreds of machines. For reliable results, engines tend to be deployed in a fixed ratio of compute cores to memory (for example, two virtual cores and 2 GB of memory per engine). Throughput is measured in terms of job tasks processed per period of time; to increase the throughput of the grid, you simply add engines.

The formation of a compute cluster is controlled by a grid director or controller, and clients of the compute grid submit tasks to engines via a job manager or broker. In many grid architectures, data is sent directly between the task client and the engines, while in others it is sent via the grid broker. In some architectures (Figure 1), known as two-tier grids, the director and broker responsibilities are managed by a single component, while in larger three-tier grids a director may have many brokers, each responsible for a subset of engines. In the largest grids, a four-tier architecture may be used, where each engine has the ability to spawn further tasks onto other engines.

Figure 1: Two, Three and Four-Tier Grid Topologies

The timeframe for the completion of grid calculations tends to be minutes or hours rather than milliseconds. Calculations are partitioned among engines and computed in parallel, and thus lend themselves to a shared-nothing architecture. Communications with the client of the computation can tolerate relatively high latency and can be retried on failure.

Benefits of Grid Computing in the Cloud
With AWS, you can allocate compute capacity on demand, without upfront planning of data center, network, and server infrastructure. You have access to a broad range of instance types to meet your demands for CPU, memory, local disk, and network connectivity. Infrastructure can be run in any of a large number of global regions, without the long lead times of contract negotiation and establishing a local presence. This approach enables faster delivery, especially in emerging markets. With Amazon Virtual Private Cloud (Amazon VPC), you can define a virtual network topology that closely resembles a traditional network you might operate in your own data center. You have complete control over your virtual networking, including selection of your own IP address range, creation of subnets (VLANs), and configuration of route tables and network gateways. This allows you to build network topologies that meet demands of isolation and security for internal compliance and audit or external regulators. Because of the on-demand nature of AWS, you can build grids and infrastructure as required for isolation between business lines, or share infrastructure for cost optimization. With elastic capacity and the ability to dynamically change the infrastructure, you can choose how much capacity to dedicate based upon what you actually require at a point in time, rather than having to provision for peak utilization.
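For capacity planning, the elasticity described above reduces to simple arithmetic once tasks are independent and engines are deployed in a fixed size. The sketch below is a minimal illustration, assuming perfectly parallel (shared-nothing) tasks and an illustrative 4-engines-per-instance packing; the numbers are not from the whitepaper:

```python
import math

def engines_needed(tasks, secs_per_task, deadline_secs):
    """Engines required so a batch of independent, shared-nothing
    tasks finishes inside a deadline (ideal linear scaling assumed)."""
    return math.ceil(tasks * secs_per_task / deadline_secs)

def instances_needed(engines, engines_per_instance=4):
    """Instance count, given how many engines each machine hosts
    (the 4-engines-per-instance ratio here is illustrative)."""
    return math.ceil(engines / engines_per_instance)

# 1,000,000 valuations at 0.5 s each, to finish within a 2-hour window:
engines = engines_needed(1_000_000, 0.5, 2 * 3600)   # 70 engines
instances = instances_needed(engines)                # 18 instances
```

Because capacity is provisioned only for the job window, halving the deadline simply doubles the engine count for the same total instance-hours.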
Furthermore, black-swan events can be handled with ease by scaling up grids to accommodate the demands of business owners reacting to major market changes. The operational tasks of running compute grids of hundreds of nodes are simplified on AWS because you can fully automate provisioning and scaling tasks and treat the infrastructure as part of your codebase. AWS resources can be
controlled interactively using the AWS Management Console, programmatically using APIs or SDKs, or through third-party operational tools. You can tag instances with internal role definitions or cost centers to provide visibility and transparency at runtime and on billing statements. AWS resources are monitored by Amazon CloudWatch, providing visibility into the utilization rates of whole grids as well as fine-grained measures of individual instances or data stores. You can combine elastic compute capacity with other AWS services to minimize the complexity of the compute client. For example, Amazon DynamoDB, a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability, can capture results from the compute grid at any scale and throughput. Sensitive software artifacts such as QA libraries can be stored securely and deployed easily with AWS CodeDeploy. Compute engines can take advantage of high performance file systems such as Intel Lustre Cloud Edition (offered on the AWS Marketplace) for temporary file storage and working space. When the compute job is completed, the resources are terminated and no longer incur cost. Many organizations are now looking for new ways to perform compute-intensive tasks at lower cost. Fast provisioning, minimal administration, and flexible instance sizing and capacity, along with innovative third-party support for grid coordination and data management, make the AWS platform a compelling solution.
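As a concrete illustration of capturing engine results in DynamoDB, the sketch below shapes results as table items. The table schema and attribute names (JobId, Scenario, PV) are assumptions for illustration, not part of the whitepaper, and no API call is made here; the boto3 batch write is shown only in a comment:

```python
from decimal import Decimal

def to_dynamodb_item(job_id, scenario, tenor, pv):
    """Shape one engine result as a DynamoDB item.

    DynamoDB requires Decimal rather than float for numeric
    attributes; converting via str() avoids float artifacts.
    """
    return {
        "JobId": job_id,                    # partition key (hypothetical)
        "Scenario": f"{scenario}#{tenor}",  # sort key: scenario + tenor
        "PV": Decimal(str(pv)),
    }

# With boto3 (not invoked here), results would be written in batches:
#   table = boto3.resource("dynamodb").Table("GridResults")
#   with table.batch_writer() as batch:
#       for item in results:
#           batch.put_item(Item=item)
results = [to_dynamodb_item("eod-2016-01-15", s, "1Y", 0.25 * s)
           for s in range(3)]
```

Keying items by job and scenario lets many engines write concurrently while the client reads back a single job's results with one query.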
Grid Computing on AWS

Initial Implementation
You can build a proof of concept or initial implementation of an HPC grid on AWS by simply moving some of the compute grid onto Amazon Elastic Compute Cloud (Amazon EC2). An Amazon Machine Image (AMI) is built with the required grid management and QA software, and grid calculations reference existing dynamic data sources via a VPN connection to Amazon VPC. The VPC configuration for this architecture is very simple, comprising a single subnet for the engine instances. AWS Direct Connect is used to ensure predictable latency and throughput for the large amount of customer data being transferred to the grid engines. You could also leverage Amazon Relational Database Service (Amazon RDS) or Amazon DynamoDB for configuration or instrumentation data.
The architecture shown in Figure 2 allows for a simple extension of existing grid compute jobs onto an elastic and scalable platform. By running engines on Amazon EC2 only when compute jobs are required, you benefit from the lower cost and increased flexibility of the infrastructure used for these grids.

Figure 2: Initial Implementation with Grid Engines on AWS
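Provisioning the engines in this initial implementation comes down to launching instances from the grid AMI into the single engine subnet. The sketch below builds the parameters you would pass to boto3's `ec2.run_instances` without making the call; the AMI ID, subnet ID, instance type, and tag values are placeholders to substitute from your environment:

```python
def engine_launch_params(ami_id, subnet_id, count,
                         engine_type="c4.2xlarge"):
    """Parameters for boto3 ec2.run_instances (call not made here).

    Tagging with a role and cost center gives runtime and billing
    visibility, as described above; the values are illustrative.
    """
    return {
        "ImageId": ami_id,
        "InstanceType": engine_type,
        "MinCount": count,
        "MaxCount": count,
        "SubnetId": subnet_id,  # the single engine subnet in the VPC
        "TagSpecifications": [{
            "ResourceType": "instance",
            "Tags": [{"Key": "Role", "Value": "grid-engine"},
                     {"Key": "CostCenter", "Value": "fx-risk"}],
        }],
    }

params = engine_launch_params("ami-12345678", "subnet-abcd1234", 50)
# ec2 = boto3.client("ec2"); ec2.run_instances(**params)
```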
Full Cloud Implementation

Reference Architecture

Figure 3: Grid Computing Reference Architecture

Figure 3 shows an example reference architecture, which can also be viewed in the AWS Architecture Center, and should be considered together with the following best practices:
Security
Security of customer, trade, and market data is of paramount importance to our customers and to AWS. AWS builds services in accordance with security best practices, provides appropriate security features in those services, and documents how to use those features. In addition, AWS customers must use those features and best practices to architect and appropriately secure the application environment. AWS manages a comprehensive control environment that includes the necessary policies, processes, and control activities for the delivery of each of its web service offerings. As of the publish date of this document, AWS complies with various certifications and third-party attestations, including SOC 1 Type 2, SOC 2, and SOC 3 compliance, PCI DSS Level 1 compliance, ISO 27001 and ISO 9001 certification, and FISMA Moderate authorization. For a more complete and up-to-date list of AWS certifications and accreditations, please visit the AWS Security Center.

It is important to note that AWS operates a shared responsibility model in the cloud. While you as a customer can leverage the secure underlying infrastructure and foundation services to build secure solutions, you are responsible for designing, configuring, and managing secure operating systems, platforms, and applications, while still retaining full responsibility and control over your information security management systems, security policies, standards, and procedures. You are also responsible for the compliance of your cloud solution with relevant regulations, legislation, and control frameworks.

AWS Identity and Access Management (IAM) provides a robust solution for managing users, roles, and groups that have rights to access specific data sources. You can issue users and systems individual identities and credentials, or provision them with temporary access credentials scoped to their access requirements within a restricted timeframe using the AWS Security Token Service (AWS STS).
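When vending such temporary credentials, a scoped-down session policy limits what an engine can do during its restricted timeframe. The sketch below builds one as a plain dict; the bucket name, prefix, and the idea of restricting engines to read-only gridlib access are illustrative assumptions, and no STS call is made:

```python
def engine_session_policy(bucket, prefix):
    """A scoped-down policy for temporary engine credentials.

    Serialized with json.dumps, this could be passed as the Policy
    argument to sts.assume_role(...) to further restrict the session.
    Bucket and prefix are placeholders.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],   # read-only access to gridlibs
            "Resource": f"arn:aws:s3:::{bucket}/{prefix}/*",
        }],
    }

policy = engine_session_policy("gridlib-artifacts", "qa-lib/v42")
```

The effective permissions of such a session are the intersection of the role's policy and this session policy, so engines never exceed the role's baseline rights.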
Using standard IAM tools, you can build fine-grained access policies to meet your security requirements for cloud resources. For more information, see aws.amazon.com/iam.
Besides provisioning individual accounts or dispensing temporary credentials on an as-needed basis, you can federate user identity to existing identity and access management systems such as Active Directory or LDAP. The AWS Directory Service AD Connector allows you to connect to your existing Microsoft Active Directory so that users can seamlessly access AWS resources and applications using their existing corporate credentials, while Simple AD is a stand-alone managed directory powered by Samba 4 Active Directory Compatible Server. Besides password authentication, AWS also supports Multi-Factor Authentication (MFA) for both web console and API access. MFA provides both secure authentication and enhanced authorization for AWS resource access. For example, you could use this feature to ensure that only MFA-authenticated users can delete objects in Amazon S3, preventing accidental deletion of data. Partner solutions are also available for encrypted network overlays, privileged user access and federation, and intrusion detection systems, allowing on-premises security control frameworks and information security management systems to be extended to the AWS cloud.

Patching
While AWS, Microsoft, Red Hat, and other operating system providers release new AMIs into AWS, you must ensure that a given EC2 instance is patched to the required level based upon internal security and compliance requirements and controls. To manage this, many customers adopt a rolling AMI architecture, as depicted in Figure 4.
Figure 4: Example of a Rolling AMI Architecture

Starting with a base Amazon Machine Image of Amazon Linux, Red Hat, or Microsoft Windows, the customer does a one-time build of a current customized operating system that meets internal requirements for patching, services enabled or disabled, configuration of firewalls and other OS settings, as well as installation of intrusion detection/protection software. This AMI is then used to launch HPC grids (or specific tiers within the grid). When patching is required, AWS CloudFormation can be used to stand up a single instance from this current AMI, apply the required OS patches, and then create a new current, patched AMI from which new grid components can be launched. Once done, the AMI from Patch Level 0 is deprecated, for instance through the use of tagging policies.

Network
For financial services grids, in addition to IAM policy control, it is a best practice to use Amazon VPC to create a logically isolated network within AWS. This allows instances within the VPC to be addressed using an IP addressing scheme of your choice. You can use subnets and routing tables to enable specific routing from source data systems to the grid and client platforms on AWS, and you can use hypervisor-level firewalls (VPC security groups) to further reduce the attack surface. Your site can then be easily connected using a VPN gateway software or hardware device. You can also use static or dynamic routing protocols to provide a single routing domain and dynamic reachability information between your internal and cloud environments. Another networking feature that is ideal for grid computing is Cluster Placement Groups. These provide fully bisected 10 Gbps networking between EC2 cluster compute instances, giving reliable network latency between grid engine nodes, and are highly recommended for the grid cluster and any high performance filesystem nodes being run.

Distribution of Static Data Sources
There are several ways to distribute static data sources, such as grid management software, holiday calendars, and gridlibs, onto the instances that will be performing calculations. One option is to pre-install and bundle such components into an AMI from which an instance is launched, resulting in faster start-up times. However, as gridlibs are updated with patches and holiday dates change, these AMIs must be rebuilt, which can be time consuming from an operational perspective. Instead, consider storing static sources on an on-premises source such as a filesystem, on Amazon S3, or through the use of AWS CodeDeploy, which encrypts software assets automatically. Then install these components on instance start-up using AWS CloudFormation, shell scripts, EC2Config, EC2 Run Command, or your preferred configuration management tools.

Access to Dynamic Data Sources
Dynamic data sources such as market, trade, and counterparty data can be safely and securely stored in the AWS cloud using services such as Amazon RDS or Amazon DynamoDB, or on a high performance filesystem such as Intel Lustre. These solutions significantly reduce operational complexity for backup and recovery, scalability, and elasticity. Datasets that require high read throughput, such as random seed data or counterparty data, are ideally placed on DynamoDB, a distributed filesystem, or a datagrid, to allow for configurable read IOPS by scaling out. Datasets such as client configuration, schedules, or grid metadata may also be stored simply and reliably on Amazon S3 as properties files, XML/JSON configuration, or binary settings profiles. In situations where you must transfer large amounts of dynamic data from centralized market data systems or trade stores to compute grids during job execution, AWS Direct Connect can help to ensure predictable throughput and latency. Direct Connect establishes a dedicated network connection from internal systems to AWS. In many cases, this can reduce network costs while providing a more consistent network experience for compute grid engines. In ticking risk applications, for example, consistent performance of access to underlying data sources is vitally important.

Cluster Machine Availability & Workflow
The elasticity and on-demand nature of the AWS platform enables you to run grid infrastructure only when it is required. Unlike in a traditional data center, where grid servers are powered up and available at all times, AWS instances may be shut down, saving significant cost. This model requires that instances be started before the grid is deployed and brought online, and that instances be shut down when the grid is not active.
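The start-before-submit, stop-when-idle lifecycle above can be sketched as a small reconciliation step: compare the running engine fleet against the size the next job window requires, then start or stop the difference. The instance IDs and the decision rule are illustrative; with boto3 the resulting plan would drive `ec2.start_instances` / `ec2.stop_instances` calls, which are not made here:

```python
def scale_actions(running_ids, required):
    """Plan which engine instances to start or stop so the grid
    matches the required size before jobs are submitted.

    Returns a count of instances to start and the IDs to stop;
    which surplus instances are stopped is arbitrary in this sketch.
    """
    running = list(running_ids)
    if len(running) < required:
        return {"start_count": required - len(running), "stop": []}
    return {"start_count": 0, "stop": running[required:]}

plan = scale_actions(["i-aa", "i-bb", "i-cc"], 2)
# e.g. ec2.stop_instances(InstanceIds=plan["stop"]) once the grid is idle
```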
Many customers find that they want to manage the start-up of instances themselves with custom software components. These systems may already be in place and used to install software across very large grids, and they provide a good integration point for instance provisioning. To accomplish the start-up and availability of hundreds or thousands of instances, open source tools such as AWS CfnCluster or MIT StarCluster are available, as well as commercial AWS partner solutions such as Bright Cluster Manager or Cycle Computing CycleCloud. A great feature of these solutions is the ability to utilize Spot Instances to drive down cost with a minimum of operational involvement. Additionally, some grid computing platforms, such as Tibco Silver Fabric, are able to manage infrastructure built on Amazon EC2 via their internal provisioning workflow. Regardless of the software used, the grid must be brought online before clients submit tasks, so exposing metrics on the number of instances available and the grid composition is of high importance when building on the cloud.

Result Aggregation & Client Scaling
Grid calculations that are run on hundreds or thousands of grid compute engines can return very large data sets to the client, and it can require significant engineering effort to ensure that the client can scale to collect all the calculation results in the required timeframe. This complexity has led many financial services customers to build data grids into which calculation results are stored and where the final calculations are performed using parallel aggregation or map/reduce.

Figure 5: Task Creation and Result Aggregation Flow

Large result sets can also be exported (as shown in Figure 5) to Amazon S3 for historical access and subsequently archived to Amazon Glacier, an extremely low-cost storage service that provides secure and durable storage for data archiving and backup. You can share this data across business units, utilize it for back-testing, or enable access for external parties or regulators using IAM policies.

High Availability
AWS Regions are divided into Availability Zones, which are distinct locations that are engineered to be insulated from failures in other Availability Zones and provide inexpensive, low latency network connectivity to other Availability Zones in the same Region. It is a well-established best practice to build architectures that use multiple Availability Zones for high availability and failure isolation. Grid computing architectures are no exception, but for efficiency of grid execution it is best to run all components within a single Availability Zone. By doing so, data is not moved across multiple physical sites, and run times are reduced.
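The parallel aggregation described above follows a two-stage map/reduce shape: each data-grid node reduces its own slice of engine results, and the client (or a final reducer) merges the partials. A minimal sketch, with present values summed per tenor as an illustrative aggregate:

```python
from collections import defaultdict

def reduce_partition(part):
    """First stage: one data-grid node sums PVs per tenor
    for its own slice of engine results."""
    agg = defaultdict(float)
    for tenor, pv in part:
        agg[tenor] += pv
    return dict(agg)

def merge(partials):
    """Second stage: combine the per-node partial aggregates
    into the final risk view."""
    total = defaultdict(float)
    for p in partials:
        for tenor, pv in p.items():
            total[tenor] += pv
    return dict(total)

# Engine results arrive as (tenor, pv) pairs, split across two nodes:
node_a = [("1Y", 1.0), ("5Y", 2.0), ("1Y", 0.5)]
node_b = [("5Y", -1.0)]
risk = merge([reduce_partition(node_a), reduce_partition(node_b)])
# risk == {"1Y": 1.5, "5Y": 1.0}
```

Because the first stage runs where the results already live, only small partial aggregates cross the network to the client, which is what lets the client keep up with thousands of engines.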
You should still consider multiple Availability Zones during start-up of the grid cluster instances, as any of the available zones in a Region can be used. In the rare event of Availability Zone issues that affect a running grid, cluster management should be capable of rebuilding the grid infrastructure in an alternate Availability Zone. Once the grid is up and running again, jobs can be resubmitted for processing.

Repeatable Assembly
A best practice for any architecture built on AWS is to employ repeatable assembly of the resources in use. AWS CloudFormation enables template-driven creation of all the AWS technologies in this whitepaper, and can be used to quickly create development environments on demand, whether for new features, emergency fixes, or concurrent delivery streams. This also allows for the creation of infrastructure during performance and functional testing from automated build systems.

Compute Regions & Locations
AWS offers customers the flexibility of choosing where infrastructure is deployed across a global set of Regions and Availability Zones in North and South America, Europe, and Asia. With a global business hosting dynamic data sources in multiple locations, compute can be run near the data to minimize compute time lost to network data transfer. Alternatively, you may choose to run compute jobs in a Region that is optimal from a cost perspective. Lastly, you may choose to run compute jobs where they are best able to satisfy regulatory and compliance requirements; in fact, very large grids may be built across multiple AWS Regions for quick and inexpensive access to capacity on demand.

Instance Type Considerations
Amazon EC2 offers a variety of instance types to choose from, including very small instances for simple testing, instances with high CPU or memory ratios, I/O optimized instances including those with solid state drives (SSDs) for random I/O intensive workloads (such as databases), and cluster compute optimized instances.
Grid calculations that build large intermediate result sets may benefit from high-memory instances, while computations that are processor intensive may require a high CPU-to-memory ratio. For computations requiring a large degree of parallelism, we recommend the C4 instance type, which offers a custom Intel Xeon E5-2666 v3 (Haswell) processor running at a base speed of 2.9 GHz that can achieve clock speeds as high as 3.5 GHz with Intel Turbo Boost. C4 is part of the cluster compute family of instances, which provide high performance using fast processors and high bandwidth, low latency networking. For QA models that can benefit from the efficiency gains of GPGPU, CG1 and G2 instances combine high CPU performance with multiple GPU units comprising up to 1,536 CUDA cores and 4 GB of RAM each. And where high performance database workloads are required, HI1/I2 High I/O instances backed by SSDs can offer very high IOPS.

Spot Instances
Spot Instances are an option for scaling compute grids at lower cost on AWS. You simply bid on spare Amazon EC2 instances and run them whenever your bid exceeds the current Spot price, which varies in real time based on supply and demand. You can use Spot Instances to augment the number of engines in a grid in order to speed processing at a lower cost than would be available on demand or through Reserved Instances, in the same way that you may use scavenging on a large grid today.
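To make the bidding model concrete, the sketch below builds the parameters you would pass to boto3's `ec2.request_spot_instances` to add Spot engines alongside the on-demand core of the grid. The price, count, AMI ID, and instance type are placeholders, and the API call itself is not made:

```python
def spot_request_params(ami_id, count, max_price):
    """Parameters for boto3 ec2.request_spot_instances (call not made).

    SpotPrice is the most you are willing to pay per instance-hour;
    instances run while the market Spot price stays below it.
    """
    return {
        "SpotPrice": str(max_price),   # API expects the bid as a string
        "InstanceCount": count,
        "Type": "one-time",            # engines are disposable per job
        "LaunchSpecification": {
            "ImageId": ami_id,
            "InstanceType": "c4.2xlarge",
        },
    }

req = spot_request_params("ami-12345678", 200, 0.15)
# ec2 = boto3.client("ec2"); ec2.request_spot_instances(**req)
```

Because grid tasks are idempotent and retried on failure, losing Spot capacity to a price spike degrades throughput rather than correctness, which is what makes Spot a good fit for this workload.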
Figure 6 shows an example of a grid augmented with Spot Instances.

Figure 6: Grid Engines using On-Demand and Spot Instances

With Amazon EC2, it costs the same to run 100 instances for 1 hour as it does to run 1 instance for 100 hours. When using Spot Instances, the on-demand portion of the grid can be run for a shorter period of time; in real-world scenarios, Spot Instances have allowed a time savings of 50% with a total cost reduction of 20%. Furthermore, Spot Fleet is a feature of EC2 that allows you to describe the total capacity you require in terms of CPU and RAM, drawn from a variety of different instance types. Spot Fleet then works to automatically deliver that amount of capacity through Spot Instances with the lowest possible Spot price.

Closing Thoughts
Amazon Web Services provides a powerful mechanism for financial services organizations to expand risk and pricing grids outside of their internally managed
infrastructure. Whether through the simple expansion of a set of grid engines onto Amazon Elastic Compute Cloud, or by using AWS to host an entire compute grid, you can process more data in a shorter period of time with no infrastructure setup costs. And by utilizing services such as Amazon DynamoDB and Intel Lustre Cloud Edition, you can simplify the codebase used to manage these large scale environments, while using Spot Instances to drive down cost even further. The end result is the ability to process more data, more frequently, using a larger number of scenarios and tenors than before, and at a lower cost: better business decisions made quickly, without the traditional large investment required to innovate.

Glossary

Gridlibs
Grid libraries are the packages of executable code or static reference data that must be copied onto each compute engine in order to perform calculations. They include the application code that defines the risk or pricing model (the QA library) as well as binary data such as holiday calendars that aid in calculating instrument maturity dates. These elements typically change infrequently (2 or 3 times per quarter) and are small in size. They are referenced with very high frequency during calculation and so must be available directly on the engine.

QA library
Software that comprises the analytical models used to calculate risk or pricing for a security or instrument (QA refers to quantitative analysis).

Engine
Software component that performs calculations as part of a compute grid. Engines do not direct the flow of the overall client calculation; they only perform the required operation using supplied QA libraries and gridlibs, together with static and dynamic data sources.

Static data
Static data sources are typically small in size and change infrequently (perhaps only several times per year) but are used many times during a risk or pricing
calculation. They tend to be deployed directly to engines rather than accessed at run time.

Holiday calendar
Static data used to calculate the maturity date of a security or instrument. A holiday calendar provides information on which dates are trading days, public holidays, weekends, and so on.

Tenor
Metric indicating the time to maturity for an instrument or scenario. Risk calculations will express tenors of 1 week, 1 month, 3 months, 1 year, 10 years, and so on. Risk values are expressed at each hypothetical maturity date.

Trade data
The trade or portfolio data that is the subject of the calculation. This includes value, term, and information on which tenors (times to maturity) must be priced. This data is typically unique per calculation and per engine, and so must be supplied each time a computation is run.

Market data
Market data is supplied to calculations for the purposes of understanding market movement and forecasting risk based upon it. Most risk and pricing systems tick only periodically, and the results of the calculations change only every 30 minutes or a multiple of hours. This data can be cached for efficiency between market ticks.

Counterparty data
Counterparty data is supplied for many types of calculations when risk is calculated at a company or group level. For example, end of day P&L is often aggregated at the various levels within an organization with which positions have been traded. This data changes infrequently, but is typically very large in size.

P&L
Profit & Loss.
Random seeds
Random input data used for Monte Carlo analysis or back testing. This data set must be generated up front and distributed to all engines. It is unique for a given collection of model calculations, and is often very large in size.

Monte Carlo analysis
Monte Carlo methods are computational algorithms that rely on random numbers to compute a result. The larger the set of random numbers, the more accurate the result. This technique is used to approximate the probability of an outcome by running simulations, or trial runs, using the random numbers.

Back testing
Back testing is used to determine the effectiveness of a strategy by running it against past historical time periods and conditions to determine the expected outcome.

Grid management software
Grid management software ensures that machines and engines are available for performing calculations and controls the distribution of work. This software often takes responsibility for the distribution of gridlibs and provides a management console to change grid metadata, size, priority, and sharing settings. Grid management software is often custom built for an application by the business, but there are many third-party products available.

Grid client
The client software that orchestrates the calculation grid is central to the architecture. Written in a variety of different languages and running on any platform, this software must draw together all of the components of the architecture and ensure that the right calculation is performed against the right data at the right time. This software is unique to each organization and even business line, and has significant performance and scaling requirements.

Shared nothing (SN) architecture
An architectural pattern where each compute environment operates independently of any other compute environment. Memory and storage usage and maintenance are the responsibility of the compute environment.
A contrasting pattern is a shared disk pattern, where compute environments leverage common data storage.
Ticking risk
An application where market changes are consumed frequently, every 15 minutes to 1 hour, and positions are recalculated for every change of the market. These systems also tend to show fine-grained impacts of market movements on prices over a day or week.

Scavenging
The ability to extend a compute grid onto lower reliability or periodically unavailable infrastructure for the purposes of increased throughput. Typically used by extremely large grids, extending onto internal desktop machines that are idle outside of business hours.

Contributors
The following individuals and organizations contributed to this document:
Ian Meyers, AWS

Document Revisions
Content added to this version of the whitepaper includes:
- Changed topology option to include 2, 3, and 4-tier grids
- Removed references to EMR and added Intel Lustre instead
- Added section on the patching process
- Added references to CfnCluster for grid orchestration
- Added Tibco Silver Fabric reference
- Added C4 and G2 instance type detail
- Added contrasts to life sciences/engineering HPC