Managing Your AWS Infrastructure at Scale

Shaun Pearce
Steven Bryen

February 2015
© 2015, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Notices

This document is provided for informational purposes only. It represents AWS's current product offerings and practices as of the date of issue of this document, which are subject to change without notice. Customers are responsible for making their own independent assessment of the information in this document and any use of AWS's products or services, each of which is provided "as is" without warranty of any kind, whether express or implied. This document does not create any warranties, representations, contractual commitments, conditions or assurances from AWS, its affiliates, suppliers or licensors. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
Contents

Abstract
Introduction
Provisioning New EC2 Instances
  Creating Your Own AMI
  Managing AMI Builds
Dynamic Configuration
  Scripting Your Own Solution
  Using Configuration Management Tools
Using AWS Services to Help Manage Your Environments
  AWS Elastic Beanstalk
  AWS OpsWorks
  AWS CloudFormation
    User Data
    cfn-init
  Using the Services Together
Managing Application and Instance State
  Structured Application Data
    Amazon RDS
    Amazon DynamoDB
  Unstructured Application Data
  User Session Data
    Amazon ElastiCache
  System Metrics
    Amazon CloudWatch
  Log Management
    Amazon CloudWatch Logs
Conclusion
Further Reading
Abstract

Amazon Web Services (AWS) enables organizations to deploy large-scale application infrastructures across multiple geographic locations. When deploying these large, cloud-based applications, it's important to ensure that the cost and complexity of operating such systems does not increase in direct proportion to their size.

This whitepaper is intended for existing and potential customers, especially architects, developers, and sysops administrators, who want to deploy and manage their infrastructure in a scalable and predictable way on AWS. In this whitepaper, we describe tools and techniques to provision new instances, configure the instances to meet your requirements, and deploy your application code. We also introduce strategies to ensure that your instances remain stateless, resulting in an architecture that is more scalable and fault tolerant. The techniques we describe allow you to scale your service from a single instance to thousands of instances while maintaining a consistent set of processes and tools to manage them.

For the purposes of this whitepaper, we assume that you have knowledge of basic scripting and core services such as Amazon Elastic Compute Cloud (Amazon EC2).

Introduction

When designing and implementing large, cloud-based applications, it's important to consider how your infrastructure will be managed to ensure the cost and complexity of running such systems is minimized.

When you first begin using Amazon EC2, it is easy to manage your EC2 instances just like regular virtualized servers running in your data center. You can create an instance, log in, configure the operating system, install any additional packages, and install your application code. You can maintain the instance by installing security patches, rolling out new deployments of your code, and modifying the configuration as needed.

Despite the operational overhead, you can continue to manage your instances in this way for a long time. However, your instances will inevitably begin to diverge from their original specification, which can lead to inconsistencies with other instances in the same environment. This divergence from a known baseline can become a huge challenge when managing large fleets of instances across multiple environments. Ultimately, it will lead to service issues because your environments will become less predictable and more difficult to maintain.

The AWS platform provides you with a set of tools to address this challenge with a different approach. By using Amazon EC2 and associated services, you can specify and manage the desired end state of your infrastructure independently of the EC2 instances and other running components.
For example, with a traditional approach you would alter the configuration of an Apache server running across your web servers by logging in to each server in turn and manually making the change. By using the AWS platform, you can take a different approach by changing the underlying specification of your web servers and launching new EC2 instances to replace the old ones. This ensures that each instance remains identical; it also reduces the effort to implement the change and reduces the likelihood of errors being introduced.

When you start to think of your infrastructure as being defined independently of the running EC2 instances and other components in your environments, you can take greater advantage of the benefits of dynamic cloud environments:

- Software-defined infrastructure: By defining your infrastructure using a set of software artifacts, you can leverage many of the tools and techniques that are used when developing software components. This includes managing the evolution of your infrastructure in a version control system, as well as using continuous integration (CI) processes to continually test and validate infrastructure changes before deploying them to production.

- Auto Scaling and self-healing: If you automatically provision your new instances from a consistent specification, you can use Auto Scaling groups to manage the number of instances in an EC2 fleet. For example, you can set a condition to add new EC2 instances in increments to the Auto Scaling group when the average utilization of your EC2 fleet is high. You can also use Auto Scaling to detect impaired EC2 instances and unhealthy applications, and replace the instances without your intervention.

- Fast environment provisioning: You can quickly and easily provision consistent environments, which opens up new ways of working within your teams. For example, you can provision a new environment to allow testers to validate a new version of your application in their own, personal test environments that are isolated from other changes.

- Reduced costs: Now that you can provision environments quickly, you also have the option to remove them when they are no longer needed. This reduces costs because you pay only for the resources that you use.

- Blue-green deployments: You can deploy new versions of your application by provisioning new instances (containing a new version of the code) beside your existing infrastructure. You can then switch traffic between environments in an approach known as blue-green deployments. This has many benefits over traditional deployment strategies, including the ability to quickly and easily roll back a deployment in the event of an issue.

To leverage these advantages, your infrastructure must have the following capabilities:
1. New infrastructure components are automatically provisioned from a known, version-controlled baseline in a repeatable and predictable manner.

2. All instances are stateless, so that they can be removed and destroyed at any time without the risk of losing application state or system data.

The following figure shows the overall process:

[Figure 1: Instance Lifecycle and State Management (version control system, EC2 instance, durable storage)]

The following sections outline tools and techniques that you can use to build a system with these capabilities. By moving to an architecture where your instances can be easily provisioned and destroyed with no loss of data, you can fundamentally change the way you manage your infrastructure. Ultimately, you can scale your infrastructure over time without significantly increasing the operational overhead associated with it.

Provisioning New EC2 Instances

A number of external events will require you to provision new instances into your environments:

- Creating new instances or replicating existing environments
- Replacing a failed instance in an existing environment
- Responding to a scale up event to add additional instances to an Auto Scaling group
- Deploying a new version of your software stack (by using blue-green deployments)

Some of these events are difficult or even impossible to predict, so it's important that the process to create new instances into your environment is fully automated, repeatable, and consistent.

The process of automatically provisioning new instances and bringing them into service is known as bootstrapping. There are multiple approaches to bootstrapping your Amazon EC2 instances. The two most popular approaches are to either create your own Amazon Machine Image (AMI) or to use dynamic configuration. We explain both approaches in the following sections.
Creating Your Own AMI

An Amazon Machine Image (AMI) is a template that provides all of the information required to launch an Amazon EC2 instance. At a minimum it contains the base operating system, but it may also include additional configuration and software. You can launch multiple instances of an AMI, and you can also launch different types of instances from a single AMI.

You have several options when launching a new EC2 instance:

- Select an AMI provided by AWS
- Select an AMI provided by the community
- Select an AMI containing pre-configured software from the AWS Marketplace
- Create a custom AMI

If launching an instance from a base AMI containing only the operating system, you can further customize the instance with additional configuration and software after it has been launched. If you create a custom AMI, you can launch an instance that already contains your complete software stack, thereby removing the need for any run-time configuration. However, before you decide whether to create a custom AMI, you should understand the advantages and disadvantages.

Advantages of custom AMIs:

- Increases speed: All configuration is packaged into the AMI itself, which significantly increases the speed at which new instances can be launched. This is particularly useful during Auto Scaling events.

- Reduces external dependencies: Packaging everything into an AMI means that there is no dependency on the availability of external services when launching new instances (for example, package or code repositories).

- Removes the reliance on complex configuration scripts at launch time: By preconfiguring your AMI, scaling events and instance replacements no longer rely on the successful completion of configuration scripts at launch time. This reduces the likelihood of operational issues caused by erroneous scripts.

Disadvantages of custom AMIs:
- Loss of agility: Packaging everything into an AMI means that even simple code changes and defect fixes will require you to produce a new AMI. This increases the time it takes to develop, test, and release enhancements and fixes to your application.

- Complexity: Managing the AMI build process can be complex. You need a process that enables the creation of consistent, repeatable AMIs where the changes between revisions are identifiable and auditable.

- Run-time configuration requirements: You might need to make additional customizations to your AMIs based on run-time information that cannot be known at the time the AMI is created. For example, the database connection string required by your application might change depending on where the AMI is used.

Given these advantages and disadvantages, we recommend a hybrid approach: build static components of your stack into AMIs, and configure dynamic aspects that change regularly (such as application code) at run time. Consider the following factors to help you decide what configuration to include within a custom AMI and what to include in dynamic run-time scripts:

- Frequency of deployments: How often are you likely to deploy enhancements to your system, and at what level in your stack will you make the deployments? For example, you might deploy changes to your application on a daily basis, but you might upgrade your JVM version far less frequently.

- Reduction of external dependencies: If the configuration of your system depends on other external systems, you might decide to carry out these configuration steps as part of an AMI build rather than at the time of launching an instance.

- Requirements to scale quickly: Will your application use Auto Scaling groups to adjust to changes in load? If so, how quickly will the load on the application increase? This will dictate the speed at which you need to provision new instances into your EC2 fleet.

Once you have assessed your application stack based on the preceding criteria, you can decide which elements of your stack to include in a custom AMI and which will be configured dynamically at the time of launch. The following figure shows a typical Java web application stack and how it could be managed across AMIs and dynamic scripts.
[Figure 2: Base, Foundational, and Full AMI Models. Each model shows the same stack (OS, OS users & groups, JVM, Apache Tomcat, app frameworks, application code, and app config), split between the AMI and bootstrapping code that runs at launch.]

In the base AMI model, only the OS image is maintained as an AMI. The AMI can be an AWS-managed image, or an AMI that you manage that contains your own OS image.

In the foundational AMI model, elements of a stack that change infrequently (for example, components such as the JVM and application server) are built into the AMI.

In the full stack AMI model, all elements of the stack are built into the AMI. This model is useful if your application changes infrequently, or if your application has rapid autoscaling requirements (which means that dynamically installing the application isn't feasible). However, even if you build your application into the AMI, it still might be advantageous to dynamically configure the application at run time because it increases the flexibility of the AMI. For example, it enables you to use your AMIs across multiple environments.

Managing AMI Builds

Many people start by manually configuring their AMIs using a process similar to the following:

1. Launch the latest version of the AMI
2. Log in to the instance and manually reconfigure it (for example, by making package updates or installing new applications)
3. Create a new AMI based on the running instance (a CLI sketch of this final step follows)
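The final step is the same whether the configuration was applied by hand or by scripts. The following is a minimal sketch using the AWS CLI; the instance ID and AMI name are placeholders rather than values from this paper.

# Create a new AMI from the customized running instance.
# Note: by default, create-image reboots the instance to ensure a consistent image.
aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "my-app-ami-2015-02-01" \
    --description "Web tier AMI built from the customized instance"

# The command returns the new AMI ID (for example, ami-1a2b3c4d), which you can
# reference in launch configurations or in future builds.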
Although this manual process is sufficient for simple applications, it is difficult to manage in more complex environments where AMI updates are needed regularly. It's essential to have a consistent, repeatable process to create your AMIs. It's also important to be able to audit what has changed between one version of your AMI and another.

One way to achieve this is to manage the customization of a base AMI by using automated scripts. You can develop your own scripts, or you can use a configuration management tool. For more information about configuration management tools, see the Using Configuration Management Tools section in this whitepaper.

Using automated scripts has a number of advantages over the manual method. Automation significantly speeds up the AMI creation process. In addition, you can use version control for your scripts and configuration files, which results in a repeatable process where the change between AMI versions is transparent and auditable. This automated process is similar to the manual process:

1. Launch the latest version of the AMI
2. Execute the automated configuration using your tool of choice
3. Create a new AMI based on the running instance

You can use a third-party tool such as Packer to help automate the process. However, many find that this approach is still too time consuming for an environment with multiple, frequent AMI builds across multiple environments.

If you use the Linux operating system, you can reduce the time it takes to create a new AMI by customizing an Amazon Elastic Block Store (Amazon EBS) volume rather than a running instance. An Amazon EBS volume is a durable, block-level storage device that you can attach to a single Amazon EC2 instance. It is possible to create an Amazon EBS volume from a base AMI snapshot and customize this volume before storing it as a new AMI. This replaces the time taken to initialize an EC2 instance with the far shorter time needed to create and attach an EBS volume.

In addition, this approach makes use of the incremental nature of Amazon EBS snapshots. An EBS snapshot is a point-in-time backup of an EBS volume that is stored in Amazon S3. Snapshots are incremental backups, meaning that only the blocks on the device that have changed after your most recent snapshot are saved. For example, if a configuration update changes only 100 MB of the blocks on an 8 GB EBS volume, only 100 MB will be stored to Amazon S3.
To achieve this, you need a long running EC2 instance that is responsible for attaching a new EBS volume based on the latest AMI build, executing the scripts needed to customize the volume, creating a snapshot of the volume, and registering the snapshot as a new version of your AMI. For example, Netflix uses this technique in their open source tool called aminator. The following figure shows this process (a hedged CLI sketch of the same flow follows the list).

[Figure 3: Using EBS Snapshots to Speed Up Deployments]

1. Create the volume from the latest AMI snapshot
2. Attach the volume to the instance responsible for building new AMIs
3. Run automated provisioning scripts to update the AMI configuration
4. Snapshot the volume
5. Register the snapshot as a new version of the AMI
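The steps above map directly onto EC2 API calls. The following shell sketch assumes the AWS CLI is available on the long running builder instance; the snapshot ID, instance ID, device names, and image name are placeholders rather than values from this paper, and error handling is kept minimal.

#!/bin/bash
# Hypothetical IDs; substitute the snapshot behind your latest AMI build.
BASE_SNAPSHOT=snap-0123456789abcdef0
BUILDER_INSTANCE=i-0123456789abcdef0
AZ=eu-west-1a

# 1. Create a volume from the latest AMI snapshot
VOLUME_ID=$(aws ec2 create-volume --snapshot-id $BASE_SNAPSHOT \
    --availability-zone $AZ --query 'VolumeId' --output text)
aws ec2 wait volume-available --volume-ids $VOLUME_ID

# 2. Attach the volume to the instance responsible for building new AMIs
aws ec2 attach-volume --volume-id $VOLUME_ID \
    --instance-id $BUILDER_INSTANCE --device /dev/sdf

# 3. Mount the volume and run your provisioning scripts against it, for example:
#    mount /dev/xvdf1 /mnt/ami && chroot /mnt/ami ./configure.sh

# 4. Detach and snapshot the customized volume
aws ec2 detach-volume --volume-id $VOLUME_ID
NEW_SNAPSHOT=$(aws ec2 create-snapshot --volume-id $VOLUME_ID \
    --description "my-app build $(date +%Y%m%d)" \
    --query 'SnapshotId' --output text)
aws ec2 wait snapshot-completed --snapshot-ids $NEW_SNAPSHOT

# 5. Register the snapshot as a new version of the AMI
aws ec2 register-image --name "my-app-$(date +%Y%m%d)" \
    --architecture x86_64 --virtualization-type hvm \
    --root-device-name /dev/xvda \
    --block-device-mappings "DeviceName=/dev/xvda,Ebs={SnapshotId=$NEW_SNAPSHOT}"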
Dynamic Configuration

Now that you have decided what to include into your AMI and what should be dynamically configured at run time, you need to decide how to complete the dynamic configuration and bootstrapping process. There are many tools and techniques that you can use to configure your instances, ranging from simple scripts to complex, centralized configuration management tools.

Scripting Your Own Solution

Depending on how much pre-configuration has been included into your AMI, you might need only a single script or set of scripts as a simple, elegant way to configure the final elements of your application stack.

User Data and cloud-init

When you launch a new EC2 instance by using either the AWS Management Console or the API, you have the option of passing user data to the instance. You can retrieve the user data from the instance through the EC2 metadata service, and use it to perform automated tasks to configure instances as they are first launched.

When a Linux instance is launched, the initialization instructions passed into the instance by means of the user data are executed by using a technology called cloud-init. The cloud-init package is an open source application built by Canonical. It's included in many base Linux AMIs (to find out if your distribution supports cloud-init, see the distribution-specific documentation). Amazon Linux, a Linux distribution created and maintained by AWS, contains a customized version of cloud-init.

You can pass two types of user data, either shell scripts or cloud-init directives, to cloud-init running on your EC2 instance. For example, the following shell script can be passed to an instance to update all installed packages and to configure the instance as a PHP web server:

#!/bin/sh
yum update -y
yum -y install httpd php php-mysql
chkconfig httpd on
/etc/init.d/httpd start

The following user data achieves the same result, but uses a set of cloud-init directives:

#cloud-config
repo_update: true
repo_upgrade: all

packages:
  - httpd
  - php
  - php-mysql

runcmd:
  - service httpd start
  - chkconfig httpd on

AWS Windows AMIs contain an additional service, EC2Config, that is installed by AWS. The EC2Config service performs tasks on the instance such as activating Windows, setting the Administrator password, writing to the AWS console, and performing one-click sysprep from within the application. If launching a Windows instance, the EC2Config service can also execute scripts passed to the instance by means of the user data. The data can be in the form of commands that you run at the cmd.exe prompt or Windows PowerShell prompt.

This approach works well for simple use cases. However, as the number of instance roles (web, database, and so on) grows along with the number of environments that you need to manage, your scripts might become large and difficult to maintain. Additionally, user data is limited to 16 KB, so if you have a large number of configuration tasks and associated logic, we recommend that you use the user data to download additional scripts from Amazon S3 that can then be executed.

Leveraging EC2 Metadata

When you configure a new instance, you typically need to understand the context in which the instance is being launched. For example, you might need to know the hostname of the instance or which region or Availability Zone the instance has been launched into. The EC2 metadata service can be queried to provide such contextual information about an instance, as well as retrieving the user data. To access the instance metadata from within a running instance, you can make a standard HTTP GET request using tools such as curl or the GET command; for example, you can retrieve the host name of the instance from the metadata endpoint, as shown in the sketch below.
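The following is a minimal sketch of querying the metadata service from a Linux instance with curl; 169.254.169.254 is the standard link-local address of the EC2 metadata service.

# Retrieve the host name of the instance
curl -s http://169.254.169.254/latest/meta-data/hostname

# Other useful paths include the instance ID, the Availability Zone,
# and the user data that was passed at launch:
curl -s http://169.254.169.254/latest/meta-data/instance-id
curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone
curl -s http://169.254.169.254/latest/user-data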
Resource Tagging

To help you manage your EC2 resources, you can assign your own metadata to each instance in addition to the EC2 metadata that is used to define hostnames, Availability Zones, and other resources. You do this with tags. Each tag consists of a key and a value, both of which you define when the instance is launched. You can use EC2 tags to define further context to the instance being launched. For example, you can tag your instances for different environments and roles, as shown in the following figure.

[Figure 4: Example of EC2 Tag Usage]

  Instance     Tags (key = value)
  i-1bbb2637   environment = production; role = web
  i-f2871ade   environment = dev; role = app

As long as your EC2 instance has access to the Internet, these tags can be retrieved by using the AWS Command Line Interface (CLI) within your bootstrapping scripts to configure your instances based on their role and the environment in which they are being launched.

Putting it all Together

The following figure shows a typical bootstrapping process using user data and a set of configuration scripts hosted on Amazon S3.
[Figure 5: Example of an End-to-End Workflow. The instance launch request passes user data to the instance; the instance retrieves the user data from the EC2 metadata service, downloads and executes a base configuration script from an Amazon S3 bucket, then uses describe-tags calls against the EC2 API to retrieve its server role and environment tags and to download and execute the matching role and environment overlay scripts, at which point the bootstrap is complete.]

This example uses the user data as a lightweight mechanism to download a base configuration script from Amazon S3. The script is responsible for configuring the system to a baseline across all instances regardless of role and environment (for example, the script might install monitoring agents and ensure that the OS is patched). This base configuration script uses the CLI to retrieve the instance's tags. Based on the value of the role tag, the script downloads an additional overlay script responsible for the additional configuration required for the instance to perform its specific role (for example, installing Apache on a web server). Finally, the script uses the instance's environment tag to download an appropriate environment overlay script to carry out the final configuration for the environment the instance resides in (for example, setting log levels to DEBUG in the development environment).

To protect sensitive information that might be contained in your scripts, you should restrict access to these assets by using IAM Roles.
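A minimal sketch of user data that implements this pattern is shown below. The bucket name, script names, and the role and environment tag keys are illustrative assumptions rather than values defined by this paper, and the instance is assumed to have an IAM role that permits the S3 downloads and the ec2 describe-tags call.

#!/bin/bash
# Illustrative locations; replace with your own bucket and script names.
BUCKET=s3://my-bootstrap-bucket

# Download and run the base configuration common to every instance
aws s3 cp $BUCKET/base-config.sh /tmp/base-config.sh
bash /tmp/base-config.sh

# Discover this instance's identity and region from the metadata service
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
REGION=${AZ%?}   # strip the trailing zone letter to get the region

# Look up the role and environment tags assigned at launch
ROLE=$(aws ec2 describe-tags --region $REGION \
    --filters "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=role" \
    --query 'Tags[0].Value' --output text)
ENVIRONMENT=$(aws ec2 describe-tags --region $REGION \
    --filters "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=environment" \
    --query 'Tags[0].Value' --output text)

# Apply the role overlay (for example web.sh) and then the environment overlay
aws s3 cp $BUCKET/roles/$ROLE.sh /tmp/role.sh && bash /tmp/role.sh
aws s3 cp $BUCKET/environments/$ENVIRONMENT.sh /tmp/env.sh && bash /tmp/env.sh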
16 final configuration for the environment the instance resides in (for example, setting log levels to DEBUG in the development environment). To protect sensitive information that might be contained in your scripts, you should restrict access to these assets by using IAM Roles. 4 Using Configuration Management Tools Although scripting your own solution works, it can quickly become complex when managing large environments. It also can become difficult to govern and audit your environment, such as identifying changes or troubleshooting configuration issues. You can address some of these issues by using a configuration management tool to manage instance configurations. Configuration management tools allow you to define your environment s configuration in code, typically by using a domain-specific language. These domain-specific languages use a declarative approach to code, where the code describes the end state and is not a script that can be executed. Because the environment is defined using code, you can track changes to the configuration and apply version control. Many configuration management tools also offer additional features such as compliance, auditing, and search. Push vs. Pull Models Configuration management tools typically leverage one of two models, push or pull. The model used by a tool is defined by how a node (a target EC2 instance in AWS) interacts with the master configuration management server. In a push model, a master configuration management server is aware of the nodes that it needs to manage and pushes the configuration to them remotely. These nodes need to be pre-registered on the master server. Some push tools are agentless and execute configuration remotely using existing protocols such as SSH. Others push a package, which is then executed locally using an agent. The push model typically has some constraints when working with dynamic and scalable AWS resources: The master server needs to have information about the nodes that it needs to manage. When you use tools such as Auto Scaling, where nodes might come and go, this can be a challenge. Push systems that do remote execution do not scale as well as systems where configuration changes are offloaded and executed locally on a node. In large 4 Page 16 of 32
- Connecting to nodes remotely requires specific ports to be allowed inbound to your nodes. For some remote execution tools, this includes remote SSH.

The second model is the pull model. Configuration management tools that use a pull system use an agent that is installed on a node. The agent asks the master server for configuration. A node can pull its configuration at boot time, or agents can be daemonized to poll the master periodically for configuration changes. Pull systems are especially useful for managing dynamic and scalable AWS environments. Following are the main benefits of the pull model:

- Nodes can scale up and down easily, as the master does not need to know they exist before they can be configured. Nodes can simply register themselves with the server.

- Configuration management masters require less scaling when using a pull system because all processing is offloaded and executed locally on the remote node.

- No specific ports need to be opened inbound to the nodes. Most tools allow the agent to communicate with the master server by using typical outbound ports such as HTTPS.

Chef Example

Many configuration management tools work with AWS. Some of the most popular are Chef, Puppet, Ansible, and SaltStack. For our example in this section, we use Chef to demonstrate bootstrapping with a configuration management tool. You can use other tools and apply the same principles.

Chef is an open source configuration management tool that uses an agent (chef-client) to pull configuration from a master server (Chef server). Our example shows how to bootstrap nodes by pulling configuration from a Chef server at boot time. The example is based on the following assumptions:

- You have configured a Chef server
- You have an AMI that has the AWS command line tools installed and configured
- You have the chef-client installed and included into your AMI

First, let's look at what we are going to configure within Chef. We'll create a simple Chef cookbook that installs an Apache web server and deploys a Hello World site. A Chef cookbook is a collection of recipes; a recipe is a definition of resources that should be configured on a node. This can include files, packages, permissions, and more. The default recipe for this Apache cookbook might look something like this:
#
# Cookbook Name:: apache
# Recipe:: default
#
# Copyright 2014, YOUR_COMPANY_NAME
#
# All rights reserved - Do Not Redistribute
#

package "httpd"

# Allow Apache to start on boot
service "httpd" do
  action [:enable, :start]
end

# Add HTML template into web root
template "/var/www/html/index.html" do
  source "index.html.erb"
  mode "0644"
end

In this recipe, we install, enable, and start the HTTPD (HTTP daemon) service. Next, we render a template for index.html and place it into the /var/www/html directory. The index.html.erb template in this case is a very simple HTML page:

<h1>Hello World</h1>

Next, the cookbook is uploaded to the Chef server. Chef offers the ability to group cookbooks into roles. Roles are useful in large-scale environments where servers within your environment might have many different roles, and cookbooks might have overlapping roles. In our example, we add this cookbook to a role called webserver.

Now when we launch EC2 instances (nodes), we can provide EC2 user data to bootstrap them by using Chef. To make this as dynamic as possible, we can use an EC2 tag to define which Chef role to apply to our node. This allows us to use the same user data script for all nodes, whichever role is intended for them. For example, a web server and a database server can use the same user data if you assign different values to the role tag in EC2.

We also need to consider how our new instance will authenticate with the Chef server. We can store our private key in an encrypted Amazon S3 bucket, using Amazon S3 server side encryption, and we can restrict access to this bucket by using IAM roles. The key can then be used to authenticate with the Chef server. The chef-client uses a validator.pem file to authenticate to the Chef server when registering new nodes. We also need to know which Chef server to pull our configuration from. We can store a pre-populated client.rb file in Amazon S3 and copy this within our user data script. You might want to dynamically populate this client.rb file depending on environment, but for our example we assume that we have only one Chef server and that a pre-populated client.rb file is sufficient. You could also include these two files into your custom AMI build. The user data would look like this:

#!/bin/bash
cd /etc/chef

# Copy Chef server private key from S3 bucket
aws s3 cp s3://s3-bucket/orgname-validator.pem orgname-validator.pem

# Copy Chef client configuration file from S3 bucket
aws s3 cp s3://s3-bucket/client.rb client.rb

# Change permissions on the Chef server private key
chmod 400 /etc/chef/orgname-validator.pem

# Get the EC2 instance ID from the metadata service
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

# Get the tag with a key of "role" for this EC2 instance
ROLE_TAG=$(aws ec2 describe-tags --filters "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=role" --output text)

# Extract the value of the role tag as a string
ROLE_TAG_VALUE=$(echo $ROLE_TAG | awk 'NF>1{print $NF}')

# Create the first_boot.json file, dynamically adding the tag value as the Chef role in the run-list
echo "{\"run_list\":[\"role[$ROLE_TAG_VALUE]\"]}" > first_boot.json

# Execute the chef-client using the first_boot.json config
chef-client -j first_boot.json

# Daemonize the chef-client to run every 5 minutes
chef-client -d -i 300 -s 30

As shown in the preceding user data example, we copy our client configuration files from a private S3 bucket. We then use the EC2 metadata service to get some information about the instance (in this example, the instance ID). Next, we query the Amazon EC2 API for any tags with the key of role, and dynamically configure a Chef run-list with a Chef role of this value. Finally, we execute the first chef-client run by providing the first_boot.json options, which include our new run-list. We then execute chef-client once more; however, this time we execute it in a daemonized setup to pull configuration every 5 minutes.

We now have some re-usable EC2 user data that we can apply to any new EC2 instances. As long as a role tag is provided with a value that matches a role on the target Chef server, the instance will be configured using the corresponding Chef cookbooks.

Putting it all Together

The following figure shows a typical workflow, from instance launch to a fully configured instance that is ready to serve traffic.
[Figure 6: Example of an End-to-End Workflow. The instance launch request passes user data to the instance; the instance retrieves and processes the user data from the EC2 metadata service, downloads the private key and client.rb Chef config files from the S3 bucket, retrieves its server role tag from the EC2 API with describe-tags, configures first_boot.json to use the Chef role matching the tag value, then pulls its configuration from the Chef server and configures itself, at which point the bootstrap is complete.]
Using AWS Services to Help Manage Your Environments

In the preceding sections, we discussed tools and techniques that systems administrators and developers can use to provision EC2 instances in an automated, predictable, and repeatable manner. AWS also provides a range of application management services that help make this process simpler and more productive. The following figure shows how to select the right service for your application based on the level of control that you require.

[Figure 7: AWS Deployment and Management Services]

In addition to provisioning EC2 instances, these services can also help you to provision any other associated AWS components that you need in your systems, such as Auto Scaling groups, load balancers, and networking components. We provide more information about how to use these services in the following sections.

AWS Elastic Beanstalk

AWS Elastic Beanstalk allows web developers to easily upload code without worrying about managing or implementing any underlying infrastructure components. Elastic Beanstalk takes care of deployment, capacity provisioning, load balancing, auto scaling, and application health monitoring. It is worth noting that Elastic Beanstalk is not a black box service: you have full visibility and control of the underlying AWS resources that are deployed, such as EC2 instances and load balancers. Elastic Beanstalk supports deployment of Java, .NET, Ruby, PHP, Python, Node.js, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.

Elastic Beanstalk provides a default configuration, but you can extend the configuration as needed. For example, you might want to install additional packages from a yum repository or copy configuration files that your application depends on, such as a replacement for httpd.conf to override specific settings.
You can write the configuration files in YAML or JSON format and create the files with a .config file extension. You then place the files in a folder in the application root named .ebextensions. You can use configuration files to manage packages and services, work with files, and execute commands. For more information about using and extending Elastic Beanstalk, see the AWS Elastic Beanstalk Documentation.

AWS OpsWorks

AWS OpsWorks is an application management service that makes it easy to deploy and manage any application and its required AWS resources. With AWS OpsWorks, you build application stacks that consist of one or many layers. You configure a layer by using an AWS OpsWorks configuration, a custom configuration, or a mix of both.

AWS OpsWorks uses Chef, the open source configuration management tool, to configure AWS resources. This gives you the ability to provide your own custom or community Chef recipes. AWS OpsWorks features a set of lifecycle events (Setup, Configure, Deploy, Undeploy, and Shutdown) that automatically run the appropriate recipes at the appropriate time on each instance.

AWS OpsWorks provides some AWS-managed layers for typical application stacks. These layers are open and customizable, which means that you can add additional custom recipes to the layers provided by AWS OpsWorks or create custom layers from scratch using your existing recipes. It is important to ensure that the correct recipes are associated with the correct lifecycle events. Lifecycle events run during the following times:

- Setup: Occurs on a new instance after it successfully boots
- Configure: Occurs on all of the stack's instances when an instance enters or leaves the online state
- Deploy: Occurs when you deploy an app
- Undeploy: Occurs when you delete an app
- Shutdown: Occurs when you stop an instance

For example, the configure event is useful when building distributed systems or for any system that needs to be aware of when new instances are added or removed from the stack. You could use this event to update a load balancer when new web servers are added to the stack.
In addition to typical server configuration, AWS OpsWorks manages application deployment and integrates with your application's code repository. This allows you to track application versions and roll back deployments if needed. For more information about AWS OpsWorks, see the AWS OpsWorks Documentation.

AWS CloudFormation

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. Compared to Elastic Beanstalk and AWS OpsWorks, AWS CloudFormation gives you the most control and flexibility when provisioning resources. AWS CloudFormation allows you to manage a broad set of AWS resources. For the purposes of this whitepaper, we focus on the features that you can use to bootstrap your EC2 instances.

User Data

Earlier in this whitepaper, we described the process of using user data to configure and customize your EC2 instances (see Scripting Your Own Solution). You also can include user data in an AWS CloudFormation template, which is executed on the instance once it is created. You can include user data when specifying a single EC2 instance as well as when specifying a launch configuration. The following example shows a launch configuration that provisions instances configured to be PHP web servers (the AMI ID is a placeholder):

"MyLaunchConfig" : {
  "Type" : "AWS::AutoScaling::LaunchConfiguration",
  "Properties" : {
    "ImageId" : "ami-xxxxxxxx",
    "SecurityGroups" : [ "MySecurityGroup" ],
    "InstanceType" : "m3.medium",
    "KeyName" : "MyKey",
    "UserData" : { "Fn::Base64" : { "Fn::Join" : [ "", [
      "#!/bin/bash\n",
      "yum update -y\n",
      "yum -y install httpd php php-mysql\n",
      "chkconfig httpd on\n",
      "/etc/init.d/httpd start\n"
    ]]}}
  }
}

cfn-init

The cfn-init script is an AWS CloudFormation helper script that you can use to specify the end state of an EC2 instance in a more declarative manner. The cfn-init script is installed by default on Amazon Linux and AWS-supplied Windows AMIs. Administrators can also install cfn-init on other Linux distributions, and then include this into their own AMI if needed. The cfn-init script parses metadata from the AWS CloudFormation template and uses the metadata to customize the instance accordingly. The cfn-init script can do the following:

- Install packages from package repositories (such as yum and apt-get)
- Download and unpack archives, such as .zip and .tar files
- Write files to disk
- Execute arbitrary commands
- Create users and groups
- Enable/disable and start/stop services

In an AWS CloudFormation template, the cfn-init helper script is called from the user data. Once it is called, it will inspect the metadata associated with the resource passed into the request and then act accordingly. For example, you can use the following launch configuration metadata to instruct cfn-init to configure an EC2 instance to become a PHP web server (similar to the preceding user data example):

"MyLaunchConfig" : {
  "Type" : "AWS::AutoScaling::LaunchConfiguration",
  "Metadata" : {
    "AWS::CloudFormation::Init" : {
      "config" : {
        "packages" : {
          "yum" : {
            "httpd" : [],
            "php" : [],
            "php-mysql" : []
          }
        },
        "services" : {
          "sysvinit" : {
            "httpd" : {
              "enabled" : "true",
              "ensureRunning" : "true"
            }
          }
        }
      }
    }
  },
  "Properties" : {
    "ImageId" : "ami-xxxxxxxx",
    "SecurityGroups" : [ "MySecurityGroup" ],
    "InstanceType" : "m3.medium",
    "KeyName" : "MyKey",
    "UserData" : { "Fn::Base64" : { "Fn::Join" : [ "", [
      "#!/bin/bash\n",
      "yum update -y aws-cfn-bootstrap\n",
      "/opt/aws/bin/cfn-init --stack ", { "Ref" : "AWS::StackId" },
      " --resource MyLaunchConfig",
      " --region ", { "Ref" : "AWS::Region" }, "\n"
    ]]}}
  }
}

For a detailed walkthrough of bootstrapping EC2 instances by using AWS CloudFormation and its related helper scripts, see the Bootstrapping Applications via AWS CloudFormation whitepaper.

Using the Services Together

You can use the services separately to help you provision new infrastructure components, but you also can combine them to create a single solution. This approach has clear advantages. For example, you can model an entire architecture, including networking and database configurations, directly into an AWS CloudFormation template, and then deploy and manage your application by using AWS Elastic Beanstalk or AWS OpsWorks. This approach unifies resource and application management, making it easier to apply version control to your entire architecture.
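As a brief illustration of how a template containing resources such as the launch configurations above might be provisioned, the following AWS CLI sketch creates and later updates a stack; the template file name, stack name, and parameter are assumptions for illustration rather than values from this paper.

# Create a stack from a template stored locally (template.json is a placeholder name)
aws cloudformation create-stack \
    --stack-name my-web-stack \
    --template-body file://template.json \
    --parameters ParameterKey=KeyName,ParameterValue=MyKey

# Later, roll out changes to the same resources in an orderly, predictable way
aws cloudformation update-stack \
    --stack-name my-web-stack \
    --template-body file://template.json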
Managing Application and Instance State

After you implement a suitable process to automatically provision new infrastructure components, your system will have the capability to create new EC2 instances and even entire new environments in a quick, repeatable, and predictable manner. However, in a dynamic cloud environment you will also need to consider how to remove EC2 instances from your environments, and what impact this might have on the service that you provide to your users. There are a number of reasons why an instance might be removed from your system:

- The instance is terminated as a result of a hardware or software failure
- The instance is terminated as a response to a scale down event to remove instances from an Auto Scaling group
- The instance is terminated because you've deployed a new version of your software stack by using blue-green deployments (instances running the older version of the application are terminated after the deployment)

To handle the removal of instances without impacting your service, you need to ensure that your application instances are stateless. This means that all system and application state is stored and managed outside of the instances themselves. There are many forms of system and application state that you need to consider when designing your system, as shown in the following table.

  State                           Examples
  Structured application data     Customer orders
  Unstructured application data   Images and documents
  User session data               Position in the app; contents of a shopping cart
  Application and system logs     Access logs; security audit logs
  Application and system metrics  CPU load; network utilization

Running stateless application instances means that no instance in a fleet is any different from its counterparts. This offers a number of advantages:

- Providing a robust service: Instances can serve any request from any user at any time. If an instance fails, subsequent requests can be routed to alternative instances while the failed instance is replaced. This can be achieved with no interruption to service for any of your users.

- Quicker, less complicated bootstrapping: Because your instances don't contain any dynamic state, your bootstrapping process needs to concern itself only with provisioning your system up to the application layer. There is no need to try to recover state and data, which is often large and therefore can significantly increase bootstrapping times.
- EC2 instances as a unit of deployment: Because all state is maintained off of the EC2 instances themselves, you can replace the instances while orchestrating application deployments. This can simplify your deployment processes and allow new deployment techniques, such as blue-green deployments.

The following section describes each form of application and instance state, and outlines some of the tools and techniques that you can use to ensure it is stored separately and independently from the application instances themselves.

Structured Application Data

Most applications produce structured, textual data, such as customer orders in an order management system or a list of web pages in a CMS. In most cases, this kind of content is best stored in a database. Depending on the structure of the data and the requirements for access speed and concurrency, you might decide to use a relational database management system or a NoSQL data store. In either case, it is important to store this content in a durable, highly available system away from the instances running your application. This will ensure that the service you provide your users will not be interrupted or their data lost, even in the event of an instance failure. AWS offers both relational and NoSQL managed databases that you can use as a persistence layer for your applications. We discuss these database options in the following sections.

Amazon RDS

Amazon Relational Database Service (Amazon RDS) is a web service that makes it easy to set up, operate, and scale a relational database in the cloud. It allows you to continue to work with the relational database engines you're familiar with, including MySQL, Oracle, Microsoft SQL Server, or PostgreSQL. This means that the code, applications, and operational tools that you are already using can be used with Amazon RDS. Amazon RDS also handles time-consuming database management tasks, such as data backups, recovery, and patch management, which frees your database administrators to pursue higher value application development or database refinements. In addition, Amazon RDS Multi-AZ deployments increase your database availability and protect your database against unplanned outages. This gives your service an additional level of resiliency.

Amazon DynamoDB

Amazon DynamoDB is a fully managed NoSQL database service offering both document (JSON) and key-value data models. DynamoDB has been designed to provide consistent, single-digit millisecond latency at any scale, making it ideal for high traffic applications with a requirement for low latency data access. DynamoDB manages the scaling and partitioning of infrastructure on your behalf. When you create a table, you specify how much request capacity you require. If your throughput requirements change, you can update this capacity as needed with no impact on service.
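As a brief, hedged illustration of specifying and later changing provisioned capacity, the following AWS CLI sketch creates a table and then raises its throughput; the table name and key schema are assumptions for the example, not values from this paper.

# Create a table for customer orders with an initial provisioned capacity
aws dynamodb create-table \
    --table-name Orders \
    --attribute-definitions AttributeName=OrderId,AttributeType=S \
    --key-schema AttributeName=OrderId,KeyType=HASH \
    --provisioned-throughput ReadCapacityUnits=10,WriteCapacityUnits=5

# Raise the capacity later, with no interruption to the application
aws dynamodb update-table \
    --table-name Orders \
    --provisioned-throughput ReadCapacityUnits=50,WriteCapacityUnits=25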
Unstructured Application Data

In addition to the structured data created by most applications, some systems also have a requirement to receive and store unstructured resources such as documents, images, and other binary data. For example, this might be the case in a CMS where an editor uploads images and PDFs to be hosted on a website. In most cases, a database is not a suitable storage mechanism for this type of content. Instead, you can use Amazon Simple Storage Service (Amazon S3). Amazon S3 provides a highly available and durable object store that is well suited to storing this kind of data. Once your data is stored in Amazon S3, you have the option of serving these files directly from Amazon S3 to your end users over HTTP(S), bypassing the need for these requests to go to your application instances.

User Session Data

Many applications produce information associated with a user's current position within an application. For example, as users browse an e-commerce site, they might start to add various items into their shopping basket. This information is known as session state. It would be frustrating to users if the items in their baskets disappeared without notice, so it's important to store the session state away from the application instances themselves. This ensures that baskets remain populated, even if users' requests are directed to an alternative instance behind your load balancer, or if the current instance is removed from service for any reason. The AWS platform offers a number of services that you can use to provide a highly available session store.

Amazon ElastiCache

Amazon ElastiCache makes it easy to deploy, operate, and scale an in-memory data store in AWS. In-memory data stores are ideal for storing transient session data due to the low latency these technologies offer. ElastiCache supports two open source, in-memory caching engines:

- Memcached: A widely adopted memory object caching system. ElastiCache is protocol compliant with Memcached, which is already supported by many open source applications as an in-memory session storage platform.
- Redis: A popular open source, in-memory key-value store that supports data structures such as sorted sets and lists. ElastiCache supports master/slave replication and Multi-AZ, which you can use to achieve cross-AZ redundancy.

In addition to the in-memory data stores offered by Memcached and Redis on ElastiCache, some applications require a more durable storage platform for their session data. For these applications, Amazon DynamoDB offers a low latency, highly scalable, and durable solution. DynamoDB replicates data across three facilities in an AWS region to provide fault tolerance in the event of a server failure or Availability Zone outage. To help customers easily integrate DynamoDB as a session store within their applications, AWS provides pre-built DynamoDB session handlers for both Tomcat-based Java applications and PHP applications.

System Metrics

To properly support a production system, operational teams need access to system metrics that indicate the overall health of the system and the relative load under which it's currently operating. In a traditional environment, this information is often obtained by logging into one of the instances and looking at OS-level metrics such as system load or CPU utilization. However, in an environment where you have multiple instances running, and these instances can appear and disappear at any moment, this approach soon becomes ineffective and difficult to manage. Instead, you should push this data to an external monitoring system for collection and analysis.

Amazon CloudWatch

Amazon CloudWatch is a fully managed monitoring service for AWS resources and the applications that you run on top of them. You can use Amazon CloudWatch to collect and store metrics on a durable platform that is separate and independent from your own infrastructure. This means that the metrics will be available to your operational teams even when the instances themselves have been terminated.

In addition to tracking metrics, you can use Amazon CloudWatch to trigger alarms on the metrics when they pass certain thresholds. You can use the alarms to notify your teams and to initiate further automated actions to deal with issues and bring your system back within its normal operating tolerances. For example, an automated action could initiate an Auto Scaling policy to increase or decrease the number of instances in an Auto Scaling group.
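A hedged sketch of wiring an alarm to a scaling policy with the AWS CLI is shown below; the Auto Scaling group name, policy name, and thresholds are illustrative assumptions.

# Create a scale-out policy on an existing Auto Scaling group (name assumed)
POLICY_ARN=$(aws autoscaling put-scaling-policy \
    --auto-scaling-group-name my-web-asg \
    --policy-name scale-out-on-high-cpu \
    --scaling-adjustment 2 \
    --adjustment-type ChangeInCapacity \
    --query 'PolicyARN' --output text)

# Trigger the policy when average CPU across the group stays above 70% for 10 minutes
aws cloudwatch put-metric-alarm \
    --alarm-name web-asg-high-cpu \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --dimensions Name=AutoScalingGroupName,Value=my-web-asg \
    --statistic Average \
    --period 300 \
    --evaluation-periods 2 \
    --threshold 70 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions "$POLICY_ARN"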
By default, Amazon CloudWatch can monitor a broad range of metrics across your AWS resources. That said, it is also important to remember that AWS doesn't have access to the OS or applications running on your EC2 instances. Because of this, Amazon CloudWatch cannot automatically monitor metrics that are accessible only within the OS, such as memory and disk volume utilization.

If you want to monitor OS and application metrics by using Amazon CloudWatch, you can publish your own metrics to CloudWatch through a simple API request. With this approach, you can manage these metrics in the same way that you manage other, native metrics, including configuring alarms and associated actions. You can use the EC2Config service to push additional OS-level operating metrics into CloudWatch without the need to manually code against the CloudWatch APIs. If you are running Linux AMIs, you can use the set of sample Perl scripts provided by AWS that demonstrate how to produce and consume Amazon CloudWatch custom metrics. In addition to CloudWatch, you can use third-party monitoring solutions in AWS to extend your monitoring capabilities.

Log Management

Log data is used by your operational team to better understand how the system is performing and to diagnose any issues that might arise. Log data can be produced by the application itself, but also by system components lower down in your stack. This might include anything from access logs produced by your web server to security audit logs produced by the operating system itself. Your operations team needs reliable and timely access to these logs at all times, regardless of whether the instance that originally produced the log is still in existence. For this reason, it's important to move log data from the instance to a more durable storage platform as close to real time as possible.

Amazon CloudWatch Logs

Amazon CloudWatch Logs is a service that allows you to quickly and easily move your system and application logs from the EC2 instances themselves to a centralized, durable storage platform (Amazon S3). This ensures that this data is available even when the instance itself has been terminated. You also have control over the log retention policy to ensure that all logs are retained for a specified period of time. The CloudWatch Logs service provides a log management agent that you can install onto your EC2 instances to manage the ingestion of your logs into the log management service.
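On Amazon Linux, one way to set up the agent is sketched below; the package name (awslogs), configuration path, and log group name reflect the agent packaging at the time this paper was written and should be treated as assumptions to verify against the current documentation.

# Install and start the CloudWatch Logs agent on an Amazon Linux instance
sudo yum install -y awslogs

# Describe which log files to ship in /etc/awslogs/awslogs.conf, for example:
#   [/var/log/httpd/access_log]
#   file = /var/log/httpd/access_log
#   log_group_name = web-access-logs
#   log_stream_name = {instance_id}
#   datetime_format = %d/%b/%Y:%H:%M:%S %z

sudo service awslogs start
sudo chkconfig awslogs on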
In addition to moving your logs to durable storage, the CloudWatch Logs service also allows you to monitor your logs in near real-time for specific phrases, values, or patterns (metrics). You can use these metrics in the same way as any other CloudWatch metrics. For example, you can create a CloudWatch alarm on the number of errors being thrown by your application or when certain, suspect actions are detected in your security audit logs.

Conclusion

This whitepaper showed you how to accomplish the following:

- Quickly provision new infrastructure components in an automated, repeatable, and predictable manner
- Ensure that no EC2 instance in your environment is unique, and that all instances are stateless and therefore easily replaced

Having these capabilities in place allows you to think differently about how you provision and manage infrastructure components when compared to traditional environments. Instead of manually building each instance and maintaining consistency through a set of operational checks and balances, you can treat your infrastructure as if it were software. By specifying the desired end state of your infrastructure through the software-based tools and processes described in this whitepaper, you can fundamentally change the way your infrastructure is managed, and you can take full advantage of the dynamic, elastic, and automated nature of the AWS cloud.

Further Reading

- AWS Elastic Beanstalk Documentation
- AWS OpsWorks Documentation
- Bootstrapping Applications via AWS CloudFormation whitepaper
- Using Chef with AWS CloudFormation
- Integrating AWS CloudFormation with Puppet
Amazon EC2 Product Details Page 1 of 5
Amazon EC2 Product Details Page 1 of 5 Amazon EC2 Functionality Amazon EC2 presents a true virtual computing environment, allowing you to use web service interfaces to launch instances with a variety of
Chapter 9 PUBLIC CLOUD LABORATORY. Sucha Smanchat, PhD. Faculty of Information Technology. King Mongkut s University of Technology North Bangkok
CLOUD COMPUTING PRACTICE 82 Chapter 9 PUBLIC CLOUD LABORATORY Hand on laboratory based on AWS Sucha Smanchat, PhD Faculty of Information Technology King Mongkut s University of Technology North Bangkok
DLT Solutions and Amazon Web Services
DLT Solutions and Amazon Web Services For a seamless, cost-effective migration to the cloud PREMIER CONSULTING PARTNER DLT Solutions 2411 Dulles Corner Park, Suite 800 Herndon, VA 20171 Duane Thorpe Phone:
Migration Scenario: Migrating Backend Processing Pipeline to the AWS Cloud
Migration Scenario: Migrating Backend Processing Pipeline to the AWS Cloud Use case Figure 1: Company C Architecture (Before Migration) Company C is an automobile insurance claim processing company with
Getting Started with AWS. Computing Basics for Linux
Getting Started with AWS Computing Basics for Linux Getting Started with AWS: Computing Basics for Linux Copyright 2014 Amazon Web Services, Inc. and/or its affiliates. All rights reserved. The following
Fault-Tolerant Computer System Design ECE 695/CS 590. Putting it All Together
Fault-Tolerant Computer System Design ECE 695/CS 590 Putting it All Together Saurabh Bagchi ECE/CS Purdue University ECE 695/CS 590 1 Outline Looking at some practical systems that integrate multiple techniques
Application Security Best Practices. Matt Tavis Principal Solutions Architect
Application Security Best Practices Matt Tavis Principal Solutions Architect Application Security Best Practices is a Complex topic! Design scalable and fault tolerant applications See Architecting for
Amazon Web Services Student Tutorial
Amazon Web Services Free Usage Tier Elastic Compute Cloud Amazon Web Services Student Tutorial David Palma Joseph Snow CSC 532: Advanced Software Engineering Louisiana Tech University October 4, 2012 Amazon
Last time. Today. IaaS Providers. Amazon Web Services, overview
Last time General overview, motivation, expected outcomes, other formalities, etc. Please register for course Online (if possible), or talk to CS secretaries Course evaluation forgotten Please assign one
DO NOT DISTRIBUTE. Automating Applications with Continuous Delivery on AWS. Student Guide. Version 1.0
Automating Applications with Continuous Delivery on AWS Student Guide Version 1.0 Copyright 2014 Amazon Web Services, Inc. or its affiliates. All rights reserved. This work may not be reproduced or redistributed,
Designing Apps for Amazon Web Services
Designing Apps for Amazon Web Services Mathias Meyer, GOTO Aarhus 2011 Montag, 10. Oktober 11 Montag, 10. Oktober 11 Me infrastructure code databases @roidrage www.paperplanes.de Montag, 10. Oktober 11
Web Application Deployment in the Cloud Using Amazon Web Services From Infancy to Maturity
P3 InfoTech Solutions Pvt. Ltd http://www.p3infotech.in July 2013 Created by P3 InfoTech Solutions Pvt. Ltd., http://p3infotech.in 1 Web Application Deployment in the Cloud Using Amazon Web Services From
Getting Started with AWS. Web Application Hosting for Linux
Getting Started with AWS Web Application Hosting for Amazon Web Services Getting Started with AWS: Web Application Hosting for Amazon Web Services Copyright 2014 Amazon Web Services, Inc. and/or its affiliates.
CLOUD COMPUTING WITH AWS An INTRODUCTION. John Hildebrandt Solutions Architect ANZ
CLOUD COMPUTING WITH AWS An INTRODUCTION John Hildebrandt Solutions Architect ANZ AGENDA Todays Agenda Background and Value proposition of AWS Global infrastructure and the Sydney Region AWS services Drupal
EEDC. Scalability Study of web apps in AWS. Execution Environments for Distributed Computing
EEDC Execution Environments for Distributed Computing 34330 Master in Computer Architecture, Networks and Systems - CANS Scalability Study of web apps in AWS Sergio Mendoza [email protected]
Continuous Delivery on AWS. Version 1.0 DO NOT DISTRIBUTE
Continuous Version 1.0 Copyright 2013, 2014 Amazon Web Services, Inc. and its affiliates. All rights reserved. This work may not be reproduced or redistributed, in whole or in part, without prior written
ITIL Asset and Configuration. Management in the Cloud
ITIL Asset and Configuration Management in the Cloud An AWS Cloud Adoption Framework Addendum September 2015 A Joint Whitepaper with Minjar Cloud Solutions 2015, Amazon Web Services, Inc. or its affiliates.
The Virtualization Practice
The Virtualization Practice White Paper: Managing Applications in Docker Containers Bernd Harzog Analyst Virtualization and Cloud Performance Management October 2014 Abstract Docker has captured the attention
Overview and Deployment Guide. Sophos UTM on AWS
Overview and Deployment Guide Sophos UTM on AWS Overview and Deployment Guide Document date: November 2014 1 Sophos UTM and AWS Contents 1 Amazon Web Services... 4 1.1 AMI (Amazon Machine Image)... 4 1.2
Alfresco Enterprise on Azure: Reference Architecture. September 2014
Alfresco Enterprise on Azure: Reference Architecture Page 1 of 14 Abstract Microsoft Azure provides a set of services for deploying critical enterprise workloads on its highly reliable cloud platform.
Backup and Recovery of SAP Systems on Windows / SQL Server
Backup and Recovery of SAP Systems on Windows / SQL Server Author: Version: Amazon Web Services sap- on- [email protected] 1.1 May 2012 2 Contents About this Guide... 4 What is not included in this guide...
How To Set Up Wiremock In Anhtml.Com On A Testnet On A Linux Server On A Microsoft Powerbook 2.5 (Powerbook) On A Powerbook 1.5 On A Macbook 2 (Powerbooks)
The Journey of Testing with Stubs and Proxies in AWS Lucy Chang [email protected] Abstract Intuit, a leader in small business and accountants software, is a strong AWS(Amazon Web Services) partner
Amazon EFS (Preview) User Guide
Amazon EFS (Preview) User Guide Amazon EFS (Preview): User Guide Copyright 2015 Amazon Web Services, Inc. and/or its affiliates. All rights reserved. Amazon's trademarks and trade dress may not be used
Lambda Architecture for Batch and Real- Time Processing on AWS with Spark Streaming and Spark SQL. May 2015
Lambda Architecture for Batch and Real- Time Processing on AWS with Spark Streaming and Spark SQL May 2015 2015, Amazon Web Services, Inc. or its affiliates. All rights reserved. Notices This document
Servers. Servers. NAT Public Subnet: 172.30.128.0/20. Internet Gateway. VPC Gateway VPC: 172.30.0.0/16
.0 Why Use the Cloud? REFERENCE MODEL Cloud Development April 0 Traditionally, deployments require applications to be bound to a particular infrastructure. This results in low utilization, diminished efficiency,
Deploy Remote Desktop Gateway on the AWS Cloud
Deploy Remote Desktop Gateway on the AWS Cloud Mike Pfeiffer April 2014 Last updated: May 2015 (revisions) Table of Contents Abstract... 3 Before You Get Started... 3 Three Ways to Use this Guide... 4
Every Silver Lining Has a Vault in the Cloud
Irvin Hayes Jr. Autodesk, Inc. PL6015-P Don t worry about acquiring hardware and additional personnel in order to manage your Vault software installation. Learn how to spin up a hosted server instance
Building Success on Acquia Cloud:
Building Success on Acquia Cloud: 10 Layers of PaaS TECHNICAL Guide Table of Contents Executive Summary.... 3 Introducing the 10 Layers of PaaS... 4 The Foundation: Five Layers of PaaS Infrastructure...
Architecture Statement
Architecture Statement Secure, cloud-based workflow, alert, and notification platform built on top of Amazon Web Services (AWS) 2016 Primex Wireless, Inc. The Primex logo is a registered trademark of Primex
Hadoop & Spark Using Amazon EMR
Hadoop & Spark Using Amazon EMR Michael Hanisch, AWS Solutions Architecture 2015, Amazon Web Services, Inc. or its Affiliates. All rights reserved. Agenda Why did we build Amazon EMR? What is Amazon EMR?
Shadi Khalifa Database Systems Laboratory (DSL) [email protected]
Shadi Khalifa Database Systems Laboratory (DSL) [email protected] What is Amazon!! American international multibillion dollar electronic commerce company with headquarters in Seattle, Washington, USA.
Scaling in the Cloud with AWS. By: Eli White (CTO & Co-Founder @ mojolive) eliw.com - @eliw - mojolive.com
Scaling in the Cloud with AWS By: Eli White (CTO & Co-Founder @ mojolive) eliw.com - @eliw - mojolive.com Welcome! Why is this guy talking to us? Please ask questions! 2 What is Scaling anyway? Enabling
Introduction to Cloud Computing on Amazon Web Services (AWS) with focus on EC2 and S3. Horst Lueck
Introduction to Cloud Computing on Amazon Web Services (AWS) with focus on EC2 and S3 Horst Lueck 2011-05-17 IT Pro Forum http://itproforum.org Thanks to Open Office Impress The Cloud the Name The 90s
Razvoj Java aplikacija u Amazon AWS Cloud: Praktična demonstracija
Razvoj Java aplikacija u Amazon AWS Cloud: Praktična demonstracija Robert Dukarić University of Ljubljana Faculty of Computer and Information Science Laboratory for information systems integration Competence
DISTRIBUTED SYSTEMS [COMP9243] Lecture 9a: Cloud Computing WHAT IS CLOUD COMPUTING? 2
DISTRIBUTED SYSTEMS [COMP9243] Lecture 9a: Cloud Computing Slide 1 Slide 3 A style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet.
ArcGIS for Server in the Amazon Cloud. Michele Lundeen Esri
ArcGIS for Server in the Amazon Cloud Michele Lundeen Esri What we will cover ArcGIS for Server in the Amazon Cloud Why How Extras Why do you need ArcGIS Server? Some examples Publish - Dynamic Map Services
Cloud computing - Architecting in the cloud
Cloud computing - Architecting in the cloud [email protected] 1 Outline Cloud computing What is? Levels of cloud computing: IaaS, PaaS, SaaS Moving to the cloud? Architecting in the cloud Best practices
Background on Elastic Compute Cloud (EC2) AMI s to choose from including servers hosted on different Linux distros
David Moses January 2014 Paper on Cloud Computing I Background on Tools and Technologies in Amazon Web Services (AWS) In this paper I will highlight the technologies from the AWS cloud which enable you
Last time. Today. IaaS Providers. Amazon Web Services, overview
Last time General overview, motivation, expected outcomes, other formalities, etc. Please register for course Online (if possible), or talk to CS secretaries Cloud computing introduction General concepts
JAVA IN THE CLOUD PAAS PLATFORM IN COMPARISON
JAVA IN THE CLOUD PAAS PLATFORM IN COMPARISON Eberhard Wolff Architecture and Technology Manager adesso AG, Germany 12.10. Agenda A Few Words About Cloud Java and IaaS PaaS Platform as a Service Google
MySQL and Virtualization Guide
MySQL and Virtualization Guide Abstract This is the MySQL and Virtualization extract from the MySQL Reference Manual. For legal information, see the Legal Notices. For help with using MySQL, please visit
Preparing Your IT for the Holidays. A quick start guide to take your e-commerce to the Cloud
Preparing Your IT for the Holidays A quick start guide to take your e-commerce to the Cloud September 2011 Preparing your IT for the Holidays: Contents Introduction E-Commerce Landscape...2 Introduction
319 MANAGED HOSTING TECHNICAL DETAILS
319 MANAGED HOSTING TECHNICAL DETAILS 319 NetWorks www.319networks.com Table of Contents Architecture... 4 319 Platform... 5 319 Applications... 5 319 Network Stack... 5 319 Cloud Hosting Technical Details...
Zend Server Amazon AMI Quick Start Guide
Zend Server Amazon AMI Quick Start Guide By Zend Technologies www.zend.com Disclaimer This is the Quick Start Guide for The Zend Server Zend Server Amazon Machine Image The information in this document
Web Application Hosting in the AWS Cloud Best Practices
Web Application Hosting in the AWS Cloud Best Practices May 2010 Matt Tavis Page 1 of 12 Abstract Highly-available and scalable web hosting can be a complex and expensive proposition. Traditional scalable
CloudCenter Full Lifecycle Management. An application-defined approach to deploying and managing applications in any datacenter or cloud environment
CloudCenter Full Lifecycle Management An application-defined approach to deploying and managing applications in any datacenter or cloud environment CloudCenter Full Lifecycle Management Page 2 Table of
Eucalyptus 3.4.2 User Console Guide
Eucalyptus 3.4.2 User Console Guide 2014-02-23 Eucalyptus Systems Eucalyptus Contents 2 Contents User Console Overview...4 Install the Eucalyptus User Console...5 Install on Centos / RHEL 6.3...5 Configure
RemoteApp Publishing on AWS
RemoteApp Publishing on AWS WWW.CORPINFO.COM Kevin Epstein & Stephen Garden Santa Monica, California November 2014 TABLE OF CONTENTS TABLE OF CONTENTS... 2 ABSTRACT... 3 INTRODUCTION... 3 WHAT WE LL COVER...
Cloud-init. Marc Skinner - Principal Solutions Architect Michael Heldebrant - Solutions Architect Red Hat
Cloud-init Marc Skinner - Principal Solutions Architect Michael Heldebrant - Solutions Architect Red Hat 1 Agenda What is cloud-init? What can you do with cloud-init? How does it work? Using cloud-init
APP DEVELOPMENT ON THE CLOUD MADE EASY WITH PAAS
APP DEVELOPMENT ON THE CLOUD MADE EASY WITH PAAS This article looks into the benefits of using the Platform as a Service paradigm to develop applications on the cloud. It also compares a few top PaaS providers
A Comparison of Clouds: Amazon Web Services, Windows Azure, Google Cloud Platform, VMWare and Others (Fall 2012)
1. Computation Amazon Web Services Amazon Elastic Compute Cloud (Amazon EC2) provides basic computation service in AWS. It presents a virtual computing environment and enables resizable compute capacity.
CHEF IN THE CLOUD AND ON THE GROUND
CHEF IN THE CLOUD AND ON THE GROUND Michael T. Nygard Relevance [email protected] @mtnygard Infrastructure As Code Infrastructure As Code Chef Infrastructure As Code Chef Development Models
Amazon Web Services. 18.11.2015 Yu Xiao
Amazon Web Services 18.11.2015 Yu Xiao Agenda Introduction to Amazon Web Services(AWS) 7 Steps to Select the Right Architecture for Your Web Applications Private, Public or Hybrid Cloud? AWS Case Study
Cloud Computing with Amazon Web Services and the DevOps Methodology. www.cloudreach.com
Cloud Computing with Amazon Web Services and the DevOps Methodology Who am I? Max Manders @maxmanders Systems Developer at Cloudreach @cloudreach Director / Co-Founder of Whisky Web @whiskyweb Who are
Thing Big: How to Scale Your Own Internet of Things. Walter'Pernstecher'-'[email protected]' Dr.'Markus'Schmidberger'-'schmidbe@amazon.
Thing Big: How to Scale Your Own Internet of Things Walter'Pernstecher'-'[email protected]' Dr.'Markus'Schmidberger'-'[email protected]' Internet of Things is the network of physical objects or "things"
ITIL Event Management in the Cloud
ITIL Event Management in the Cloud An AWS Cloud Adoption Framework Addendum July 2015 2015, Amazon Web Services, Inc. or its affiliates. All rights reserved. Notices This document is provided for informational
Storage Options in the AWS Cloud: Use Cases
Storage Options in the AWS Cloud: Use Cases Joseph Baron, Amazon Web Services Robert Schneider, Think88 December 2010 Cloud Storage Use Cases To illustrate real-world usage of AWS storage options, let
AWS Service Catalog. User Guide
AWS Service Catalog User Guide AWS Service Catalog: User Guide Copyright 2016 Amazon Web Services, Inc. and/or its affiliates. All rights reserved. Amazon's trademarks and trade dress may not be used in
AIST Data Symposium. Ed Lenta. Managing Director, ANZ Amazon Web Services
AIST Data Symposium Ed Lenta Managing Director, ANZ Amazon Web Services Why are companies adopting cloud computing and AWS so quickly? #1: Agility The primary reason businesses are moving so quickly to
Amazon Relational Database Service (RDS)
Amazon Relational Database Service (RDS) G-Cloud Service 1 1.An overview of the G-Cloud Service Arcus Global are approved to sell to the UK Public Sector as official Amazon Web Services resellers. Amazon
AWS Lambda. Developer Guide
AWS Lambda Developer Guide AWS Lambda: Developer Guide Copyright 2015 Amazon Web Services, Inc. and/or its affiliates. All rights reserved. Amazon's trademarks and trade dress may not be used in connection
Oracle Database Cloud
Oracle Database Cloud Shakeeb Rahman Database Cloud Service Safe Harbor Statement The following is intended to outline our general product direction. It is intended for information purposes only, and may
Scalable Application. Mikalai Alimenkou http://xpinjection.com 11.05.2012
Scalable Application Development on AWS Mikalai Alimenkou http://xpinjection.com 11.05.2012 Background Java Technical Lead/Scrum Master at Zoral Labs 7+ years in software development 5+ years of working
AWS CodePipeline. User Guide API Version 2015-07-09
AWS CodePipeline User Guide AWS CodePipeline: User Guide Copyright 2015 Amazon Web Services, Inc. and/or its affiliates. All rights reserved. Amazon's trademarks and trade dress may not be used in connection
McAfee Public Cloud Server Security Suite
Installation Guide McAfee Public Cloud Server Security Suite For use with McAfee epolicy Orchestrator COPYRIGHT Copyright 2015 McAfee, Inc., 2821 Mission College Boulevard, Santa Clara, CA 95054, 1.888.847.8766,
Opsview in the Cloud. Monitoring with Amazon Web Services. Opsview Technical Overview
Opsview in the Cloud Monitoring with Amazon Web Services Opsview Technical Overview Page 2 Opsview In The Cloud: Monitoring with Amazon Web Services Contents Opsview in The Cloud... 3 Considerations...
Introduction to AWS in Higher Ed
Introduction to AWS in Higher Ed Lori Clithero [email protected] 206.227.5054 University of Washington Cloud Day 2015, Amazon Web Services, Inc. or its Affiliates. All rights reserved. 2 Cloud democratizes
Automated Application Provisioning for Cloud
Automated Application Provisioning for Cloud Application Provisioning in Cloud requires mechanism to automate and repeat as and when it requires. This is mainly because the building blocks of an IT infrastructure
Assignment # 1 (Cloud Computing Security)
Assignment # 1 (Cloud Computing Security) Group Members: Abdullah Abid Zeeshan Qaiser M. Umar Hayat Table of Contents Windows Azure Introduction... 4 Windows Azure Services... 4 1. Compute... 4 a) Virtual
Web Application Firewall
Web Application Firewall Getting Started Guide August 3, 2015 Copyright 2014-2015 by Qualys, Inc. All Rights Reserved. Qualys and the Qualys logo are registered trademarks of Qualys, Inc. All other trademarks
www.boost ur skills.com
www.boost ur skills.com AWS CLOUD COMPUTING WORKSHOP Write us at [email protected] BOOSTURSKILLS No 1736 1st Amrutha College Road Kasavanhalli,Off Sarjapur Road,Bangalore-35 1) Introduction &
WE RUN SEVERAL ON AWS BECAUSE WE CRITICAL APPLICATIONS CAN SCALE AND USE THE INFRASTRUCTURE EFFICIENTLY.
WE RUN SEVERAL CRITICAL APPLICATIONS ON AWS BECAUSE WE CAN SCALE AND USE THE INFRASTRUCTURE EFFICIENTLY. - Murari Gopalan Director, Technology Expedia Expedia, a leading online travel company for leisure
Enterprise IT is complex. Today, IT infrastructure spans the physical, the virtual and applications, and crosses public, private and hybrid clouds.
ENTERPRISE MONITORING & LIFECYCLE MANAGEMENT Unify IT Operations Enterprise IT is complex. Today, IT infrastructure spans the physical, the virtual and applications, and crosses public, private and hybrid
Estimating the Cost of a GIS in the Amazon Cloud. An Esri White Paper August 2012
Estimating the Cost of a GIS in the Amazon Cloud An Esri White Paper August 2012 Copyright 2012 Esri All rights reserved. Printed in the United States of America. The information contained in this document
