Continuous Version 1.0
Copyright 2013, 2014 Amazon Web Services, Inc. and its affiliates. All rights reserved. This work may not be reproduced or redistributed, in whole or in part, without prior written permission from Amazon Web Services, Inc. Commercial copying, lending, or selling is prohibited. For corrections or feedback on the course, please email us at aws-course-feedback@amazon.com. For all other questions, please email us at aws-training-info@amazon.com.
Table of Contents

Introduction
    Overview
    Technical Knowledge Prerequisites
    Topics Covered
    Sign in to the AWS Management Console
        Using qwiklabs to sign in to the AWS Management Console
Module 1: Continuous Deployment Approaches with Atlassian Bamboo
    Atlassian Bamboo
        Installing Tasks for AWS
    Creating a continuous deployment build plan
        The data sources build plan
        Defining Bamboo plan variables
        The web application build plan
    Reviewing our build plans
        Testing the web application build plan
        Bamboo plan branching
        Releasing a new application and database feature
    Troubleshooting your Bamboo build plans
    Conclusion
    End Your Lab
    Additional Resources
Introduction

Overview

In this lab you will configure a continuous deployment (CD) environment using Atlassian Bamboo to automate the process of deploying and managing your MySQL database. You'll also build a pipeline to automate testing, building, packaging, and deploying your version of the sample web application. Changes made to the example web application will be baked into an Amazon Machine Image (AMI) and deployed using AWS CloudFormation to a new discrete environment. We'll use exactly the same ideas as covered in earlier labs; we'll even use the same AWS CloudFormation template. Reuse is great. Changes made to your database will be managed using AWS CloudFormation and Liquibase. This is a powerful combination!

Technical Knowledge Prerequisites

To successfully complete this lab, you should be familiar with the following:

- Continuous integration and deployment concepts
- Bootstrapping EC2 instances using cfn-init
- AWS CloudFormation and modifying AWS CloudFormation templates
- Flask as a micro web framework for Python
- Atlassian Bamboo administration basics

Topics Covered

This lab will take you through the following topics:

- Configuring a continuous deployment plan/pipeline using Atlassian Bamboo
- Baking an AMI as an approach to containerizing an application change
- AWS CloudFormation as a mechanism for managing the release lifecycle of immutable application containers (AMIs)
- Managing your relational database lifecycle and schema via AWS CloudFormation and Liquibase

Sign in to the AWS Management Console

Using qwiklabs to sign in to the AWS Management Console

Welcome to this self-paced lab! You will sign in to the same qwiklab environment as you used in your previous lab.

1. On the lab details page, notice the lab properties.
   a. Duration - The time the lab will run before automatically shutting down.
   b. Setup Time - The estimated time to set up the lab environment.
   c. AWS Region - The AWS Region in which the lab resources are created.
Note: The AWS Region for your lab will differ depending on your location and the lab setup.

2. In the AWS Management Console section of the qwiklab page, copy the Password to the clipboard.
3. Click the Open Console button.
4. Log in to the AWS Management Console using the following steps:
   a. In the User Name field, type awsstudent.
   b. In the Password field, paste the password copied from the lab details page.
   c. Click Sign in using our secure server.

Note: The AWS account is automatically generated by qwiklab. The login credentials for the awsstudent account are provisioned by qwiklab using AWS Identity and Access Management (IAM).
Module 1: Continuous Deployment Approaches with Atlassian Bamboo

In this lab you will configure a continuous deployment (CD) environment using Atlassian Bamboo to automate the process of building and packaging an application change as an Amazon Machine Image (AMI). AWS CloudFormation will be used from the CD pipeline to manage the release lifecycle of these containers in a running application environment. We'll also build a separate CD pipeline to manage the lifecycle of our relational database and deliver database schema changes that support an application change via the same pipeline.

Atlassian Bamboo

Installing Tasks for AWS

Atlassian Bamboo can be extended via plugins. Tasks for AWS by Utoolity (http://utoolity.net/) is a plugin available on the Atlassian Marketplace that gives Bamboo excellent integration with AWS services including Amazon EC2, AWS CloudFormation, AWS Elastic Beanstalk, and Amazon Simple Storage Service (Amazon S3). For more information on Tasks for AWS, have a look at the plugin detail page on the Atlassian Marketplace: https://marketplace.atlassian.com/plugins/net.utoolity.atlassian.bamboo.tasks-for-aws

1. To install Tasks for AWS on your Bamboo server, browse to the Bamboo administration control panel. To do this, click the gear icon at the upper right of the top navigation menu, and select Add-ons.
2. Pause the Bamboo server by clicking Pause at the top of the Administration control panel.
3. We're going to install Tasks for AWS from the Atlassian Marketplace, so click Find new add-ons.
4. In the search bar, search for Tasks for AWS.
5. Click the Free Trial button. The Tasks for AWS plugin will then download to and install on your Bamboo server.
6. Enter your Atlassian ID username and password.
7. If you see a message indicating that you were unable to log in with your credentials, click the Get license button. This will take you directly to the evaluation license request form.
8. Click Generate license and, when prompted, click Apply license. This will automatically populate the evaluation license details in the Bamboo server and finish installation of the plug-in.
9. At the top of the screen, above the top navigation, click Resume server.

We can now use Tasks for AWS in our future build plans. This plugin is going to make integrating with AWS services significantly easier and more robust.

Creating a continuous deployment build plan

We are going to set up and configure two build plans in this lab. The first is responsible for deploying and managing the database used by our sample Python web application, based on the https://github.com/aws-tools/py-flask-signup-datasources repository we forked in an earlier lab. We'll use this build plan to deploy the database and also to manage it, including updating database schemas when we change the web application.

The second plan will implement the same tasks as our Web application pipeline plan (source code checkout and tests), but will also be responsible for packaging our application into an Amazon Machine Image (AMI) and deploying the application update to a new discrete environment based on the new AMI. This plan will use the https://github.com/aws-tools/py-flask-signup repository we forked in an earlier lab.

The data sources build plan

The build plan we are going to configure for our data sources repository has three distinct stages: Test, Deploy, and Teardown. The plan will look something like this, so keep this in mind as you build.
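The database changes flowing through this plan are expressed as a Liquibase changelog. As a purely illustrative example (the actual changelog in the forked repository may differ, and the table and column names here are our own invention), a Liquibase JSON changelog that adds an age column might look like this:

```json
{
  "databaseChangeLog": [
    {
      "changeSet": {
        "id": "add-age-column",
        "author": "awsstudent",
        "changes": [
          {
            "addColumn": {
              "tableName": "users",
              "columns": [
                { "column": { "name": "age", "type": "int" } }
              ]
            }
          }
        ]
      }
    }
  ]
}
```

Committing a change like this to the repository is the kind of change the Deploy stage picks up and applies via CloudFormation and Liquibase.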
The Test stage

1. Within our awslab Bamboo project, click Create in the top navigation and select Create a new plan.
2. Under Plan details:
   a. For Plan name, enter Data sources pipeline
   b. For Plan key, enter DATA
3. Configure the data sources GitHub repository you forked in a previous lab, e.g. py-flask-signup-datasources, and configure the age-collection branch.
4. For Trigger type, use Polling the repository for changes.
5. In the Polling frequency field, enter 30 (we want to poll more frequently than the default 180 second interval).
6. Click the Configure tasks button.
7. Click Add task, and choose the Command task type.
   a. For Task description, enter Build CFN template
   b. Click the Add new executable link
   c. In the Executable label field enter Make
   d. In the Path field enter /usr/bin/make
   e. Click the Add button
   f. In the Argument field, enter template
   g. Click the Save button
8. Click Add task, and choose the Amazon S3 Object task type.
   a. For Task description, enter S3 Deployment Bucket
   b. Ensure the S3 Object Action type is Upload
   c. Select the correct region for your lab environment, e.g. US-West (Oregon) or EU (Ireland)
   d. In the Artifact field ensure Local files is selected
   e. For Source Local Path, enter flask-signup-datasources.template
   f. In the Target Bucket Name field enter ${bamboo.aws.uploadbucket}
   g. Enter your credentials obtained from the qwiklabs environment
   h. Click the Save button
9. Click Add task, and choose the AWS CloudFormation Stack task type.
   a. For Task description, enter CFN Template Validate
   b. Ensure the Stack Action is Validate
   c. Select the correct region for your lab environment, e.g. US-West (Oregon) or EU (Ireland)
   d. In the Stack Name field, enter flask-signup-datasources
   e. In the Stack Template Source field, choose URL and enter https://s3.amazonaws.com/${bamboo.aws.uploadbucket}/flask-signup-datasources.template
   f. Enter your credentials obtained from the qwiklabs environment
   g. Click the Save button
10. Now that we have completed this build job definition and added all the necessary tasks, ensure the Enable this plan? checkbox is checked, and click the Create button.

We are going to rename the default stage and job to give them more meaningful names.

1. From the plan configuration screen, click the gear icon, and select Configure stage.
2. In the modal dialogue box, in the Name field enter Test.
3. Click the Save button.
4. Within the stage, you'll see a Default Job link. Click that, and select the Job details tab.
5. In the Job name field, enter CFN Syntax Check.
6. Click the Save button.

The Deploy stage

1. From the Stages screen for our Data sources pipeline plan, click the Create stage button (right hand side of the Stages screen).
2. For Stage name, enter Deploy.
3. Click the Create button.
4. This will create a Deploy stage. Under the Deploy stage, click the Add job link.
   a. Select Create a new job
   b. In the Job name field enter CFN Deploy
   c. In the Job key field enter CD
   d. Ensure the Enable this job? checkbox is checked
   e. Click the Create job button
5. Again from the Stages screen, click the newly created CFN Deploy job link.
   a. You'll find a default Source Code Checkout task. We don't want to use that, so using the cross next to the task, delete it.
6. Click Add task, and choose the AWS CloudFormation Stack task type.
   a. For Task description, enter Update Datasources Stack
   b. Ensure the Stack Action is Update
   c. Check both the Create stack, if it does not already exist and the Don't fail for no-op update checkboxes
   d. Select the correct region for your lab environment, e.g. US-West (Oregon) or EU (Ireland)
   e. In the Stack Name field, enter flask-signup-datasources
   f. In the Stack Template Source field, choose URL and enter https://s3.amazonaws.com/${bamboo.aws.uploadbucket}/flask-signup-datasources.template
   g. Enter your credentials obtained from the qwiklab environment
   h. Click the Save button

We've now completed the second stage of our data sources pipeline. This stage will run an AWS CloudFormation create or update command using the CloudFormation template we created in the earlier stage and uploaded to S3. If this is the first time the template has run and the CloudFormation stack doesn't exist, it will deploy a MySQL database environment for us, and do the initial schema configuration and an initial data import as well. If the stack already exists, the AWS CloudFormation task will run an update. This is how we can use our pipeline to update our database schema or do data import work. Because we're using Liquibase, we just need to modify the liquibase-changelog.json configuration file in our source code repository to enact our desired database changes. With great power comes great responsibility!

The Teardown stage

1. Go back to the Plan Configuration screen, and on the Stages tab again, click the Create stage button.
   a. In the Stage name field enter Teardown
   b. Ensure the Manual checkbox is checked
   c. Click the Create button to finish creating the stage
2. In the newly created Teardown stage, click the Add Job link.
   a. Select Create a new job
   b. In the Job name field enter CFN Stack Delete
   c. In the Job key field enter CSD
   d. Ensure the Enable this job? checkbox is checked
   e. Click the Create job button
3. Again from the Stages screen, click the newly created CFN Stack Delete job link.
   a. You'll find a default Source Code Checkout task. We don't want to use that, so using the cross next to the task, delete it.
4. Click Add task, and choose the AWS CloudFormation Stack task type.
   a. For Task description, enter CFN Delete
   b. Ensure the Stack Action is Delete
   c. Select the correct region for your lab environment, e.g. US-West (Oregon) or EU (Ireland)
   d. In the Stack Name field, enter flask-signup-datasources
   e. Enter your credentials obtained from the qwiklab environment
   f. Click the Save button

We've now completed the final stage of our data sources pipeline. This stage will only be run manually, and in fact we might normally disable it in the build plan and leave it disabled. This
stage will tear down our MySQL database, and would normally not be run. You can probably imagine scenarios where we would want to create cloned database environments for a new project which we would only want temporarily, e.g. for the lifetime of the development project or for proof of concept work, so the concept of teardown for our stateful application components is still very relevant.

Defining Bamboo plan variables

There is some information we want to refer to throughout our plan stages in a consistent fashion. Bamboo provides the concept of variables to share information like this. We'll be using plan variables to define things like the AWS key pair we want to use when starting EC2 instances. We're not going to configure a plan variable here, but we'll cover the steps for doing so. For now, just read through these steps and refer back to them later in the lab if you need to. We'll be defining all the plan variables for both our build plans later in the lab.

To configure a plan variable, you would do the following:

1. Click Plan Configuration on the left hand side navigation in your build plan.
2. Select the Variables tab.
3. In the Variable name field enter AWS.variablename, e.g. AWS.KeyName.
4. In the Value field enter the value, e.g. the name of the key pair provided by the qwiklabs environment.
5. Click the Add button.

The Data sources pipeline plan variables

The data sources pipeline above would fail if we tried to run it. In fact, you might have noticed that Bamboo may have run the plan when we created and enabled it, and then promptly failed. This is because we referred to undefined plan variables such as ${bamboo.aws.uploadbucket}. Refer to the section on defining plan variables above, and create the following variable for the data sources pipeline build plan:

1. AWS.UploadBucket, and use the value of DeploymentS3Bucket provided by the qwiklab environment.
Once configured, you should see something like the following on your data sources build plan:

This variable will now be available to tasks in your build jobs using the syntax ${bamboo.aws.uploadbucket}.
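In this lab's examples, the reference syntax is derived mechanically from the variable name: the name is lowercased and prefixed with bamboo. (e.g. AWS.UploadBucket becomes ${bamboo.aws.uploadbucket}). A tiny sketch of that convention (our own illustration, not a Bamboo API):

```python
def bamboo_ref(variable_name: str) -> str:
    """Build the ${bamboo.*} reference used in task fields.

    Illustration only: we lowercase the variable name to match the
    style used in this lab's examples, e.g.
    AWS.UploadBucket -> ${bamboo.aws.uploadbucket}
    """
    return "${bamboo." + variable_name.lower() + "}"

print(bamboo_ref("AWS.UploadBucket"))  # -> ${bamboo.aws.uploadbucket}
```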
Now that you've configured the requisite plan variable, you can manually run the data sources build plan.

Running the data sources build plan

We don't yet have a database environment, and in fact a relational database is a prerequisite for our web application to function correctly. We'll run the data sources build plan manually to create our database. To do this:

1. Navigate to the plan configuration screen for our Data sources pipeline.
2. Click the Run button at the top right of the Plan configuration screen and select Run plan.
3. This will initiate a manual run of our data sources pipeline.

While the plan is running, you can view the real time build summary displayed by Bamboo, and once a job is complete, you can view the logs for that job. Inspect your running and completed build plans to become familiar with this. Bamboo gives you a lot of detail about the status of the running tasks, including lots of detail about the CloudFormation stack status and any outputs, including errors, that are returned.

The Web application pipeline plan variables

The web application pipeline we created earlier also needs to refer to a set of information consistently throughout the build plan. Refer to the section on defining plan variables above, and create the following variables for the web application pipeline build plan:

1. AWS.KeyName, and use the name of the EC2 Key Pair Private Key you downloaded from the qwiklab environment, without the filename extension, e.g. qwiklabs-l491-33550.
2. AWS.AMIRegistry, and use the value of DynamoDbAmiTable provided by the qwiklab environment.
3. AWS.AMIRegistryTopic, and use the value of CustomResourceTopicArn provided by the qwiklab environment.
4. AWS.UploadBucket, and use the value of DeploymentS3Bucket provided by the qwiklab environment.

We also want our Web application pipeline plans to reference resources created by the data sources build plan we just configured and ran.
To find references to these resources, go to the AWS console and navigate to the AWS CloudFormation console. Look for New Startup SignUp Persistent Data Stores in the Description field. Select that stack, and browse to the Outputs tab. You should see several outputs, including MySqlEndpoint and SignUpSnsTopic. We want to capture both of these outputs, so create the following plan variables in Bamboo:

1. AWS.RDSEndpoint, and use the value of MySqlEndpoint in the Outputs tab.
2. AWS.SignupTopic, and use the value of SignUpSnsTopic in the Outputs tab.
You should have configured these plan variables on your web application build plan and see something like:

These variables will now be available to tasks in your build jobs using the syntax ${bamboo.aws.keyname}, ${bamboo.aws.amiregistry}, ${bamboo.aws.amiregistrytopic}, ${bamboo.aws.uploadbucket}, ${bamboo.aws.rdsendpoint}, and ${bamboo.aws.signuptopic}.

We'll now configure the additional stages for our web application build plan to define build, release, and teardown tasks.

The web application build plan

1. Navigate to the Web application pipeline build plan that we've already defined. Currently it only contains a Test stage, but we're now going to add Build, Release, and Teardown stages.

The plan we are going to build has four distinct stages. The build plan will eventually look something like this, so keep this in mind as you build.
The Build Stage

The build stage will be responsible for building our application change into a new Amazon Machine Image (AMI) and registering that AMI in a simple artifact registry based on DynamoDB. The outputs of the build stage will be used by subsequent stages to deploy our application changes. In detail, the build job tasks will:

- Check out our application code to the Bamboo server/build agent
- Build a tarball in a working directory we define on the Bamboo server/build agent
- Upload that tarball to the S3 bucket we defined earlier as a plan variable
- Launch a builder instance on EC2 using a baseline AMI or standard operating environment as the baseline for baking our application AMI
- Bootstrap the builder instance using CloudFormation cfn-init and lay down the web application tarball appropriately based on our application runtime requirements. This bootstrap step will also wire up our web application to the stateful application components correctly, including SNS and our MySQL database environment
- Image the builder instance to create an AMI which has our application updates and configuration baked into it
- Register that new AMI in our DynamoDB artifact registry for use in later deployment and release stages

This is quite a bit of work and well worth automating. In Bamboo, browse to your Web application pipeline build plan, click Actions, and select Configure plan.

1. In the stages configuration screen, click the Create stage button at the top right of the screen.
2. In the Stage name field enter Build and click the Create button.
3. In the Build stage, click Add Job, and in the modal dialog click Create a new job.
4. In the Job name field, enter Build AMI.
5. In the Job key field, enter BA.
6. Make sure this job is enabled, and click the Create job button.
7. Select the Build AMI job. This will load the Tasks screen for the job so we can define tasks to be run each time this job runs.
8. You should be able to see a default task, Source Code Checkout. We want to keep this task, so don't delete it.
9. Select the Source Code Checkout task.
   a. In Task description enter Checkout source code
   b. Use the default plan repository
   c. In the Checkout Directory enter build
   d. Ensure that the Force Clean Build checkbox is checked
   e. Click the Save button
10. Click the Add task button, and choose the Script task type.
    a. In Task description enter Tarball
    b. For Script location ensure Inline is selected
    c. In Script body enter:

       #!/bin/bash
       tar czf bundle.tar.gz *

    d. In the Working sub directory enter build
    e. Click the Save button
11. Click the Add task button, and choose Amazon S3 Object.
    a. In the Task description field enter Upload application bundle
    b. Select the correct region for your lab environment, e.g. US-West (Oregon) or EU (Ireland)
    c. In the Source Local Path field, enter build/bundle.tar.gz
    d. In the Target Bucket Name field, enter ${bamboo.aws.uploadbucket}
    e. In the Target Object Key Prefix (Virtual Directory) field, enter ${bamboo.repository.revision.number}
    f. Enter your Access Key and Secret Key information provided by qwiklabs
    g. Click the Save button
12. Click the Add task button. We're going to configure a task to create an AMI builder instance, which we'll use to bake our AMI.
    a. Select the AWS CloudFormation Stack task
    b. Make sure the stack action is Create
    c. In the Task description field enter Builder instance
    d. Select the correct region for your lab environment, e.g. US-West (Oregon) or EU (Ireland)
    e. In the Stack Name field enter BuildAMI-Stack
    f. In the Template URL field enter http://awsinfo.me.s3.amazonaws.com/services/cloudformation/templates/build-ami.template
    g. In the Template Parameters field enter:

       MySqlEndpoint=${bamboo.AWS.RDSEndpoint};SnsTopicArn=${bamboo.aws.signuptopic};ReleaseBundleUrl=https://${bamboo.aws.uploadbucket}.s3.amazonaws.com/${bamboo.repository.revision.number}/build/bundle.tar.gz

    h. Enter your Access Key and Secret Key information provided by qwiklabs
    i. Click the Save button
13. Click the Add task button again. This time we're going to configure a task to create an Amazon Machine Image from our AMI builder instance.
    a. Select the Amazon EC2 Image task
    b. In the Task description field enter Bake AMI
    c. Make sure the Image Action is Create
    d. Select the correct region for your lab environment, e.g. US-West (Oregon) or EU (Ireland)
    e. In the Instance ID field enter ${bamboo.custom.aws.cfn.stack.resources.BuildAMI-Stack.outputs.BuilderInstance}

       Make sure to use the same stack name as you did in the previous step, e.g. BuildAMI-Stack.

    f. Enter your Access Key and Secret Key information provided by qwiklabs
    g. Click the Save button
14. Add another task. This task will be responsible for tearing down the BuildAMI-Stack we created earlier. Now that we've created an AMI that has our baked application changes, we no longer need the AMI builder instance.
    a. Select the AWS CloudFormation Stack task
    b. Make sure the stack action is Delete
    c. In the Task description field enter Teardown AMI Builder
    d. Select the correct region for your lab environment, e.g. US-West (Oregon) or EU (Ireland)
    e. In the Stack Name field enter BuildAMI-Stack. It's important that this be the same stack name we used when creating the AMI builder instance task earlier
    f. Enter your Access Key and Secret Key information provided by qwiklabs
    g. Click the Save button
15.
We'll create one more task for the Build AMI job, and that will be responsible for registering the new AMI and some metadata in a DynamoDB table. This DynamoDB table will act as a simple AMI registry, and will be used in subsequent jobs when we want to deploy new discrete environments with that AMI. Having simple configuration
databases in Amazon DynamoDB like this can be very useful. We could implement external reporting tools to show an audit of all AMIs created, for example.

Add another task:

a. This time select the Command task.
b. In the Task description field enter Register AMI.
c. Add a new executable, and in Executable label enter Register AMI.
d. In the Path field, enter /usr/local/bin/amiregister.
e. Click Add.
f. In the Argument field enter:

   "eu-west-1" ${bamboo.aws.amiregistry} ${bamboo.custom.aws.ec2.image.resources} ${bamboo.planKey}-${bamboo.planRepository.branchName} ${bamboo.buildNumber}

   Make sure to use the right AWS region where you're running your lab, e.g. us-west-2 for the AWS Oregon region or eu-west-1 for the AWS Dublin region.

   These parameters are passed to an AMI registry external task on the Bamboo server. The registry task updates our AMI lookup table.

g. Click the Save button to finish building the task.
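The amiregister command maintains the registry keyed by a plan/branch hash key and a build-number range key. We don't have the script's source in this lab, but its bookkeeping can be sketched with an in-memory stand-in for the DynamoDB table (illustration only; class and method names are our own):

```python
class AmiRegistry:
    """In-memory stand-in for the DynamoDB AMI registry (illustration only)."""

    def __init__(self):
        # (hash_key, range_key) -> AMI ID
        self._items = {}

    def register(self, plan_key, branch, build_number, ami_id):
        hash_key = f"{plan_key}-{branch}"
        # Store the build-specific record, and update a 'latest' pointer so
        # later stages can deploy either a pinned build or the newest one.
        self._items[(hash_key, str(build_number))] = ami_id
        self._items[(hash_key, "latest")] = ami_id

    def lookup(self, hash_key, range_key="latest"):
        return self._items[(hash_key, range_key)]

registry = AmiRegistry()
registry.register("WEB", "master", 7, "ami-0abc1234")
print(registry.lookup("WEB-master"))       # latest -> ami-0abc1234
print(registry.lookup("WEB-master", "7"))  # pinned build -> ami-0abc1234
```

Keeping every build's record while also updating a latest pointer is what lets the Release stage choose between a specific build number and the newest AMI for a branch.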
You should have a build stage that looks like this:

The Builder instance CloudFormation template

The build-ami.template is responsible for starting an EC2 builder instance based on a standard operating environment we define (in this case the latest Amazon Linux AMI), and bootstrapping our application onto it. After the builder instance has been bootstrapped with our application, we create an image of it. To do this we simply use the EC2 API to call ec2-create-image and let EC2 do the rest of the work. We use Bamboo to trigger this, and leverage Tasks for AWS to take care of the details.

Once we have an AMI, we tag it and then store the AMI ID in a DynamoDB table. This DynamoDB table acts as a lightweight configuration management and artifact registry, and can be queried whenever we need to know what AMI to deploy, or to look at the history of application changes and AMI builds.

The following stages in our build pipeline will look up the correct AMI to deploy. Again, CloudFormation will be used to do this, and a CloudFormation custom resource will do the heavy lifting. If you are interested, have a look at http://awsinfo.me.s3.amazonaws.com/services/cloudformation/templates/custom_resource_AMI_Lookup.template. CloudFormation is powerful and can be extended like this to manage supporting infrastructure for our web application continuous deployment pipeline. If you're interested in the custom resource implementation, you can have a look at it at http://awsinfo.me.s3.amazonaws.com/services/scripts/ami-ddb-lookup.py

The Release Stage

The release stage is responsible for building a new discrete environment and using the AMI baked in the previous stage to provision a new version of our application in that environment. To create the Release stage:
1. In our Plan Configuration screen, click the Create stage button.
2. In the Stage name field enter Release and click the Create button.
3. Click Add Job in the new Release stage, and Create a new job.
4. In the Job details screen enter Release app in the Job name field.
5. In the Job key field, enter REL.
6. Make sure the job is enabled, and click the Create job button.
7. Select the Release app job.
8. You should be able to see a default task, Source Code Checkout. Using the cross icon next to it, delete this task.
9. Click the Add task button and select AWS CloudFormation Stack.
10. In Task description enter Deploy app.
11. Make sure the stack action is Create.
12. Select the correct region for your lab environment, e.g. US-West (Oregon) or EU (Ireland).
13. In the Stack Name field enter:

    DeployApp-${bamboo.planKey}-${bamboo.planRepository.branchName}-${bamboo.buildNumber}

14. In the Template URL field enter http://awsinfo.me.s3.amazonaws.com/services/cloudformation/templates/cfn-ami-lookup-flask-signup.template
15. In the Template Parameters field enter:

    HashKey=${bamboo.planKey}-${bamboo.planRepository.branchName};RangeKey=${bamboo.buildNumber};AmiLookupSnsTopicArn=${bamboo.AWS.AMIRegistryTopic}

    You can also use the latest build for any given plan and branch:

    HashKey=${bamboo.planKey}-${bamboo.planRepository.branchName};RangeKey=latest;AmiLookupSnsTopicArn=${bamboo.AWS.AMIRegistryTopic}

16. Enter your Access Key and Secret Key information provided by qwiklabs.
17. Click the Save button.
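At deploy time, the stack resolves which AMI to launch via an SNS-backed custom resource (the GetAmi resource described in the next section). A rough Python sketch of the lookup such a handler performs, using an in-memory dict in place of the DynamoDB table (illustration only; the lab's real implementation is ami-ddb-lookup.py, whose code we are not reproducing here):

```python
def handle_ami_lookup(event, table):
    """Sketch of an SNS-backed AMI-lookup custom resource handler.

    'table' stands in for the DynamoDB AMI registry: a dict mapping
    (hash, range) -> AMI ID. A real handler would HTTP PUT the returned
    JSON body to event["ResponseURL"]; here we only build the response.
    """
    props = event["ResourceProperties"]
    ami_id = table[(props["hash"], props["range"])]
    return {
        "Status": "SUCCESS",
        "PhysicalResourceId": ami_id,
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": {"AmiId": ami_id},  # available to the template via Fn::GetAtt
    }

# Example: the Release stage passed HashKey=WEB-master and RangeKey=latest.
table = {("WEB-master", "latest"): "ami-0abc1234"}
event = {
    "RequestType": "Create",
    "StackId": "arn:aws:cloudformation:us-west-2:123456789012:stack/demo/guid",
    "RequestId": "req-1",
    "LogicalResourceId": "GetAmi",
    "ResourceProperties": {"hash": "WEB-master", "range": "latest"},
}
print(handle_ami_lookup(event, table)["Data"]["AmiId"])  # -> ami-0abc1234
```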
The Release application CloudFormation template

The cfn-ami-lookup-flask-signup.template CloudFormation template makes use of a CloudFormation custom resource to look up the right AMI to use for deployment. If you're interested, have a look at the GetAmi CloudFormation resource in http://awsinfo.me.s3.amazonaws.com/services/cloudformation/templates/cfn-ami-lookup-flask-signup.template:

"GetAmi" : {
  "Type" : "Custom::AmiLookup",
  "Version" : "1.0",
  "Properties" : {
    "ServiceToken" : { "Ref" : "AmiLookupSnsTopicArn" },
    "region" : { "Ref" : "AWS::Region" },
    "table" : { "Ref" : "DynamoDbAmiTable" },
    "hash" : { "Ref" : "HashKey" },
    "range" : { "Ref" : "RangeKey" }
  }
}

This resource definition uses the custom resource SNS topic ARN as the target to which CloudFormation resource events are fired. In our case, we're passing a plan key and branch name as the hash key and a build number as the range key, and the custom resource finds the right AMI to deploy into our new discrete stack. The benefit of doing this in a CloudFormation custom resource, rather than via Bamboo directly, is that we decouple our CI environment and implementation from the artifact database and lookup services, around a clean interface defined by CloudFormation. This means we can modify each independently, and even use this AMI lookup service from another CI tool (e.g. Jenkins) if we wanted to.

To finish our web application pipeline, we'd normally include another task to update our DNS record to point to the newly created application environment running the new version of our application. In this case we won't manage DNS as part of the lab, but you could use features like weighted round robin with Route 53 to gradually move traffic over to your new application stack; see http://docs.aws.amazon.com/route53/latest/developerguide/routing-policy.html

The Teardown Stage

The Teardown stage will be responsible for removing our discrete application environment consistently using CloudFormation.
Normally, you'd hook this stage into your release process once you were satisfied that a new version of your application environment was working correctly. For example, you might release a new stack based on a new version of your application (baked into an AMI), and use DNS to update a production domain name to point to the new stack. Once you were satisfied that the new application version was working correctly, you would tear down the previous stack.
We don't have control over DNS in this lab, so we're not automating this part of the release lifecycle; however, we'll still create a Teardown stage and make it a manual step in the build plan. To create the Teardown stage:

1. In the Plan Configuration screen, click the Create stage button.
2. In the Stage name field, enter Teardown.
3. We are going to make this a manual stage, which will require user interaction to run, so make sure the Manual checkbox is checked.
4. Click the Create button.
5. Click Add Job in the new Teardown stage, and Create a new job.
6. In the Job details screen, enter Teardown app in the Job name field and TEAR in the Job key field.
7. Make sure the job is enabled, and click the Create job button.
8. Select the Teardown app job.
9. You should see a default Source Code Checkout task. Delete this task using the cross icon next to it.
10. Click the Add task button and select AWS CloudFormation Stack.
11. In the Task description field, enter Teardown Stack.
12. Make sure the stack action is Delete.
13. Select the correct region for your lab environment, e.g. US-West (Oregon) or EU (Ireland).
14. In the Stack Name field, enter:

DeployApp-${bamboo.planKey}-${bamboo.planRepository.branchName}-${bamboo.buildnumber}

15. Enter the Access Key and Secret Key information provided by qwiklabs.
16. Click the Save button.

You should now have a build plan that looks like this:
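Note that the Release and Teardown tasks interpolate exactly the same three Bamboo variables into the stack name, so every build of a given plan and branch maps to exactly one stack, which the Teardown stage can later delete by name. A quick sketch of that convention (the function name is ours, and the commented boto3 call only illustrates what the Delete stack action amounts to under the hood):

```python
def stack_name(plan_key, branch_name, build_number):
    """Mirror the Bamboo interpolation used by both the Deploy app and
    Teardown Stack tasks:
    DeployApp-${bamboo.planKey}-${bamboo.planRepository.branchName}-${bamboo.buildnumber}
    """
    return "DeployApp-{0}-{1}-{2}".format(plan_key, branch_name, build_number)

# The Teardown Stack task is then roughly equivalent to the following
# (requires boto3 and AWS credentials, so it is shown as a comment):
#
#   import boto3
#   cfn = boto3.client("cloudformation", region_name="us-west-2")
#   cfn.delete_stack(StackName=stack_name("AWSLAB-WEBAPP", "master", 42))
```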
Reviewing our build plans

We've now finished configuring two different build plans on our Bamboo server. If you navigate back to your awslab build dashboard, you should see something similar to this:

You probably won't see the same build numbers or build details, but the same general structure should be present. You also won't see the branching icon next to the web application pipeline yet; we'll configure that soon.

Testing the web application build plan

We're now going to run and test our application build plan. We'll manually start our plan to test it, and then later make some application changes via our source code repository and test that the build plan is initiated automatically by Bamboo.
Running the plan manually

1. In the top navigation in Bamboo, click Build and select All build plans.
2. Under the project awslab, you should now see two build plans. Click Web application pipeline.
3. In the Web application pipeline build plan, at the top right-hand side of the plan details page, click Run and select Run plan.
4. Running the plan performs the following:
a. Checks out the source code for our example web application
b. Runs our unit tests over the application base
c. Creates an AMI builder instance and bakes our application changes into a new AMI
d. Registers that AMI in our configuration database in DynamoDB
e. Looks up the AMI and deploys it using the release application CloudFormation template. This template in turn creates a new discrete environment with our new AMI deployed in an Auto Scaling group behind a new Elastic Load Balancer
5. The process of baking an AMI and releasing a new version of our application could take 10-15 minutes. A lot is happening behind the scenes, and typically this process would continue while development teams keep working on features. Of course, reducing the turnaround time would help us get feedback faster, but in practice, 10-15 minutes for a full web application deployment isn't usually too much of an issue.
6. Once the build plan has completed, you should be able to see a new CloudFormation stack in the AWS web console starting with DeployApp-*. Check the stack outputs for the URL of the new stack; the key for this is WebAppUrl. Browse to that URL and you should see your application running.

Bamboo plan branching

Bamboo provides the concept of plan branching to automatically create new plan branches whenever a configured plan repository is branched. When a new repository branch is detected and a new Bamboo plan branch is created, the same build plan is run over the branched source code.
This provides consistency for our continuous integration and delivery pipeline across all branches in a source code repository, and helps encourage the use of feature branching. Build plan consistency across branches is also a critical component of improving the quality and consistency of application releases across different features in development. The Bamboo documentation covers plan branching in more detail: https://confluence.atlassian.com/display/bamboo/using+plan+branches.

Plan branching needs to be activated for a build plan. To do this:

1. Navigate to the Plan configuration screen for your Web application pipeline plan.
2. Select the Branches tab.
3. Ensure the Automatically manage branches checkbox is checked.
4. In Notifications, select Use the plan's notification settings.
5. Click the Save button.
Now if we were to branch our source code repository, Bamboo would find the new branch automatically and run our previously defined build plan over it and any future commits.

Releasing a new application and database feature

We want to release a new feature that requires a database schema update. We'll make the schema change via our data sources pipeline, and then release the application component that relies on it via our web application pipeline. In this case the database change doesn't break the application, so we can make it independently. The order, however, is important: we must change the database schema before we release our application update.

Updating the database schema

Connect to your development environment again, and change to the directory hosting your cloned py-flask-signup-datasources repository, e.g.

cd $HOME/git/py-flask-signup-datasources

From this working area:

1. Get the latest version of the Liquibase change log for our new database feature:

wget -O liquibase-changelog.json https://raw.githubusercontent.com/aws-staging/py-flask-signup-datasources/micro-blog/liquibase-changelog.json

2. Add and commit the new change log to your repository and push to your remote origin, i.e. your forked repository:

git commit -a -m "New schema for micro-blog feature"
git push

3. Your data source build plan will now start and generate a new AWS CloudFormation template based on the new Liquibase changelog. The build plan will then run an AWS CloudFormation update task for that template and deploy the differences. Using the Liquibase custom resource runner, a new version of your database schema will be deployed, all without interrupting or impacting your existing application.
4. Go back to your Bamboo environment and inspect the running data source pipeline. Have a look at the logs for the build as well.
5. It's important to wait for this pipeline to finish.
If it's unsuccessful, you'll want to debug and fix it before continuing.

Updating the web application

Once the data source build has completed successfully, we'll update our web application to take advantage of the new database schema. Again, on your development environment, change to the directory hosting your cloned py-flask-signup repository, e.g.
cd $HOME/git/py-flask-signup

From this working area:

1. Create a new branch to host our new application changes:

git branch micro-blog
git checkout micro-blog

2. Get the latest version of your web application, which leverages the new database schema:

wget -O application.py https://raw.githubusercontent.com/aws-staging/py-flask-signup/micro-blog/application.py
wget -O templates/base.html https://raw.githubusercontent.com/aws-staging/py-flask-signup/micro-blog/templates/base.html
wget -O templates/blog.html https://raw.githubusercontent.com/aws-staging/py-flask-signup/micro-blog/templates/blog.html

3. Add and commit the new changes to your repository and push to your remote origin, i.e. your forked repository for the sample web application:

git add templates/blog.html
git commit -a -m "New micro-blog application feature"
git push --set-upstream origin micro-blog

4. Bamboo will see this new branch, micro-blog, in the remote repository and create a new plan branch for it. Bamboo will then run your web application pipeline in that plan branch automatically, and deploy a new version of your web application to a completely new discrete application environment, including a new Elastic Load Balancer and a new Auto Scaling group using a newly baked AMI, where the new version of your web application is connected to the recently updated database.
5. Look up the new AWS CloudFormation stack deployed by your pipeline and find the WebAppUrl output. Each stack name starts with DeployApp- but also uses the branch name and the build number the stack was built from. You should be able to browse to both stacks independently and test the feature difference. Remember, in this case, both stacks are using the same database and SNS topic, because the new database schema update doesn't introduce any conflict in the data model.
6. The new version of your application uses the new schema update we released previously as well.
To test that this is working, browse to your A New Startup web application and,
using the top navigation, click Blog. This should load your new change, which will display a very basic blog page. This uses the new blog database tables that we've pre-populated with some data, i.e. blog comments. Obviously, you'd probably invest in building a better blog page than this.
7. As you release new stacks for a given branch, you can keep more than one version running until you're comfortable, and then tear them down when you're ready. Ideally, if we had a hosted DNS zone on Amazon Route53, we'd automate even this process, taking care to only tear down the previous stack once we were satisfied with the new application version and the previous stack was no longer servicing traffic.

Having this level of integration between our CI and CD tool chain and AWS is really powerful. Both Bamboo and the insights into your AWS infrastructure that Tasks for AWS provides play a very important role here.

Troubleshooting your Bamboo build plans

Refer to the Lab 4 guide to help troubleshoot the build if it failed.

Congratulations! You have now successfully configured and implemented a continuous delivery pipeline using Atlassian Bamboo and Tasks for AWS to deploy a new discrete application environment while also referring to shared and stateful application components. At no point did we log in to a single EC2 host, configure a single AWS service by hand, or even manually run a single command line tool, even when updating our database schema. What we're doing is removing the risk of manual, human processes introducing errors.

Throughout this lab we've spent a significant amount of time configuring Atlassian Bamboo and defining and testing our build plans. Remember, this is work that you would do very infrequently.
Once you've defined CI and CD pipelines for your applications, data stores, and any other artifact you want to continually deliver and deploy, you'll re-use the same pipelines and definitions over and over again on different branches within your repository (certainly if you use Bamboo's plan branches feature). You can, and should, even reuse this work across different repositories. It's this consistency and repeatability that gives CI/CD its power, and a tool like Atlassian Bamboo, which supports and even encourages this reuse and consistency, is an important part of an effective solution.

Conclusion

Congratulations! If you are finishing the labs here, you have now successfully:

Learned how to put source code management techniques into practice to improve the way you develop and track application changes

Automated application testing such as unit tests to improve the quality of the application changes you are making

Configured and automated a continuous integration pipeline to check out your code and run your test harness on every commit in your SCM system
Configured and automated a continuous deployment pipeline to containerize your application changes into an AMI, using CloudFormation to manage the release lifecycle of those changes in a running environment

Configured and automated a continuous deployment pipeline for the relational database used by the web application, enabling full lifecycle control of the database service and managing database schema changes and data loads

There is a lot more you can do with the techniques we've covered in these labs. Hopefully you've learned some new ideas and techniques that you can reuse or modify for your purposes, to help you improve the quality of the applications you deliver to your customers. A well-implemented CI/CD pipeline can dramatically enhance your development and application teams' ability to experiment and innovate, by reducing the operational burden and associated risk of complex manual processes through automation. The choice of appropriate tools like Git and Atlassian Bamboo with Tasks for AWS, coupled with automation tools on AWS like CloudFormation, can also enrich the governance model you wrap around your application change and release management processes.

End Your Lab

1. To log out of the AWS Management Console, from the menu, click awsstudent @ [YourAccountNumber] and choose Sign out (where [YourAccountNumber] is the AWS account generated by qwiklab).
2. Close any active SSH client sessions or remote desktop sessions.
3. Click the End Lab button on the qwiklab lab details page.
4. When prompted for confirmation, click OK.
5. For My Rating, rate the lab (using the applicable number of stars), optionally type a Comment, and click Submit.
Note: The number of stars indicates the following:
1 star = Very dissatisfied
2 stars = Dissatisfied
3 stars = Neutral
4 stars = Satisfied
5 stars = Very satisfied

6. You may close the dialog if you do not wish to provide feedback.

Additional Resources

AWS Training and Certification. For feedback, suggestions, or corrections, please email: aws-course-feedback@amazon.com.