My DevOps Journey by Billy Foss, Engineering Services Architect, CA Technologies


About the author

Billy Foss is an Engineering Services Architect at CA Technologies, based in the Cary, NC office. Prior to CA Technologies, Billy spent four years at Sensus and eight years at IBM improving continuous integration. While his past jobs covered different domains (smart grid, mobile device management, network processors, and even some military simulation), they all provided opportunities to improve the speed and quality of software delivery through test automation, install packaging, and continuous integration. Billy earned an MS in Computer Science from the University of Central Florida.

I am going to take you through the journey that my team embarked on as we looked for ways to automate processes, achieve higher quality, and deliver to market faster.

Last year, I accepted a position as an Engineering Services Architect on a DevOps team at CA Technologies. I've read the articles saying you should never accept a DevOps position. I agreed, but took the position anyway! At the start, it looked like an easy task. The product had not been released, so there were no legacy code restrictions. Development sat across the hall from operations. Operations hosted one huge customer. The deployment scenarios were limited and did not require supporting multiple versions of the software. This seemed much easier than my previous positions, where software was packaged for a wide range of customers, small to large, centrally hosted or installed on site. Those customers had their own schedules for upgrades, which demanded supporting multiple releases and multiple upgrade paths at a time. The main point: the new project seemed ideal for taking my continuous integration experience and extending it into continuous delivery and deployment.
As we know, DevOps is not a role; it is a set of processes and methods designed to increase efficiency in the delivery of work products across multiple teams (primarily development and operations). Our team's goal was to guide the development team and support the operations team through the changes, ensuring value was delivered each step of the way. The purpose of this article is to share the tools and techniques we used to improve communication, efficiency, and understanding across the teams.

Pillars of the Release Bridge

Before outlining specific improvements to our process, I will describe our vision of a release bridge connecting development and operations. There are many steps across this bridge, but the foundation is built on three pillars: continuous builds, continuous integration, and continuous delivery.

Figure 1. Continuous Build

Figure 1 shows a simplified release process where source code, provided by development, is built into a package and promoted through different stages of QA until it is approved for release into production. The first pillar to be constructed is an automated build system that can reliably build the code the same way every time. Having this build system run on every source code change gives the team immediate feedback when something basic breaks. The system should build all the pieces required to deliver the code. If the database needs changes to work with the code change, then the build process should include those changes in the package. Whenever possible, the build system should also run unit tests, static analysis, and other code-checking tools that do not require significant external resources. If your build process relies on too many external resources, it can become unreliable should those resources ever be down or misconfigured. This can lead to false build failures, which can train the team to ignore build failures (a really bad habit).

Figure 2. Continuous Integration

Figure 2 shows the addition of a continuous integration pillar. With a continuous build system ensuring that all the code compiles together, the next step is to verify that it can install together and run. In today's software world, with hundreds of components working together as one, continuous integration goes beyond traditional unit testing and ensures that things work in a production-like environment. This requires a dedicated system where every build is automatically installed and some level of automated testing occurs against the running system.
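The fail-fast behavior described above can be sketched as a small build gate. This is an illustrative model, not our actual TeamCity configuration, and the Maven goals shown are hypothetical stand-ins:

```python
import subprocess

# Hypothetical stages, ordered cheapest-first so a broken commit
# fails fast before consuming expensive resources.
STAGES = [
    ("compile", ["mvn", "-B", "compile"]),
    ("unit tests", ["mvn", "-B", "test"]),
    ("static analysis", ["mvn", "-B", "checkstyle:check"]),
    ("package", ["mvn", "-B", "package"]),
]

def run_pipeline(stages, runner=subprocess.call):
    """Run stages in order; stop at the first non-zero exit code so
    the team gets immediate feedback on what broke."""
    for name, cmd in stages:
        if runner(cmd) != 0:
            return False, name
    return True, None
```

Injecting `runner` keeps the gate itself testable without invoking Maven; a CI server provides the same stop-on-first-failure behavior through ordered build steps.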

The more testing that can be automated at this integration stage, the less time QA will need to validate each build. There are many areas of testing that need to happen before you can release software: functional, system, load, and stress. Once you automate all the acceptance test cases required to deliver the release, you have built the third pillar: continuous delivery. The only remaining step is to deploy into production. Often there are business reasons for manual approval steps, such that continuous deployment is not actually the desired end goal. All this continuous stuff takes a lot of work. It takes automation engineers to build it and operational engineers to keep it running. Extensive testing environments can require significant capital investment. The next sections describe some of the steps we took to improve our release bridge.

Continuous Builds

When I started, the team had continuous builds running in a CI server (TeamCity). There were Java WAR files, database scripts, and even an XML configuration to import into the CA Process Automation server we used. Each build even had an InstallAnywhere package to install itself. So all we needed to do was run three installers, execute the database scripts, and go to the web interface to import the process flow XML. Then we could start Tomcat and everything would work fine. Wait: Tomcat? Who installed Tomcat? Oh, we were supposed to install and configure that manually. This is when we asked ourselves: do we really want to write automation around all of these manual steps? If our end user does not have access to the same automation, they will have to perform all the error-prone manual steps as well. Just designing the continuous integration steps gave us feedback that our install process was complex and flawed, and that we could do better. We took this feedback and combined the three installers into a single package. We ensured that the package could run silently.
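As a sketch of what that looks like in practice, the same package can serve both interactive users and automation. The installer file name and response file below are hypothetical; `-i` and `-f` are InstallAnywhere's standard flags for choosing the install mode and supplying recorded answers:

```python
import subprocess

INSTALLER = "./product-installer.bin"   # hypothetical combined package

def install_cmd(mode, response_file=None):
    """Build the installer command line.
    mode is "console" for interactive users or "silent" for automation."""
    cmd = [INSTALLER, "-i", mode]
    if response_file:
        cmd += ["-f", response_file]    # pre-recorded answers for silent runs
    return cmd

def run_install(mode, response_file=None):
    # CI and customers share one code path; only the mode differs.
    return subprocess.call(install_cmd(mode, response_file))
```

Because continuous integration drives the exact installer a customer would run, every automated deployment doubles as a test of the customer install scenario.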
We created a database plugin that ran the database scripts as part of the install process. We also created a plugin that called the SOAP interface to automatically import the CA Process Automation XML. This gave us a nice installer that our users could run in console mode while our automation ran silently. Because both paths share the same code, our continuous integration is not only installing the product, it is actually testing a customer usage scenario.

Continuous Integration

To get started with continuous integration, we needed three things: a package, an environment, and an automated process to deploy the package to the environment. The continuous build process supplied the package, and we reused an existing development/test environment. Our team used a combination of PowerShell and other scripts, run from our CI server, to automate pushing new builds to the environment and running the silent install. Since this was the same CI server that generated our packages, we could trigger the deployment immediately after a build. We structured the deployment jobs to be parameterized so we could reuse them to push code to our test environments too.

To validate that the install actually worked, we created a very small script to check that our web service URLs were available. The script worked well for a few rounds, until someone needed to change a Tomcat configuration. That change required manual steps in each environment, with no good way to validate that they were completed correctly. This prompted us to look at the configuration requirements going into each environment. Each environment (development, integration, QA, and production) needs to have its basic prerequisites defined somewhere. Our operations team has a set of service catalogs that describe every detail of how to install and configure the OS and prerequisite applications. In an environment with many complex enterprise applications, having detailed documentation is critical. However, long documentation with complex manual steps is very time consuming and error prone.

Figure 3. Configuration Management

We needed a configuration management tool, or at least a set of golden-image virtual machines that are hand configured once and then cloned for each new environment. This would allow many of the servers in each environment to be set up once and left in place across multiple test runs. However, we really wanted to make sure the environment running our web services had a proper install and configuration of Apache Tomcat. So we took that feedback into our build packaging and decided to bundle Apache Tomcat into the installer. This provided much more control and consistency in how

Tomcat was installed and configured. It also reduced the setup work that operations needed to perform for each new system. Operations appreciated the idea of less work, but it would have been helpful if we had figured it out before they had already prepped Tomcat manually on most of the machines. By reducing the manual effort required to install and configure environments, our continuous integration became more reliable and easier to implement (Figure 3). We also gained the bonus of simplified deployment steps once we rolled out into production.

Continuous Integration for Developers

During product development, we realized we needed a new component. This component would interact with iptables and would only run on Linux. We created the same continuous build and integration structure for this new component. Since it was Linux-only, we decided to package it as an RPM. This seemed great, except that most of our development team runs Windows, and they could not build or test the RPMs locally before checking in changes. We needed to give them an easy way to create the production-like, or even integration-like, environment locally on their laptops. Vagrant is a tool designed for this purpose. We were able to define the required configuration in the same path as the project source code. Once our team defined those steps, other team members could create the same virtual environment on their laptops with a few simple commands. We configured our Vagrantfile so that the resulting virtual machine could both build the project and run it.
The following example steps go from source code to running code:

vagrant up (boots and configures the virtual machine)
vagrant ssh (logs into the freshly running virtual machine)
cd /vagrant; mvn package (builds the RPMs locally)
rpm -i target/my-project.*rpm (installs the RPM for testing in the same VM)

Vagrant allows our developers to run their own continuous integration tests locally before checking in, without tying up shared development environments. It also helps them understand a little more of the configuration required to operate a production environment.

Continuous Integration for the Database

As mentioned earlier, we ran our database configuration scripts from the installer. These scripts were generated automatically as part of the build and updated any time the object schema changed. This worked great for rapid development, as the schema could change easily and all the related pieces would automatically update. It seemed like a nice way to bridge the gap between the Java data structures and the operational database.

The problem was that we lacked automatic support for upgrading an existing schema. We could have released the full database creation commands for release 1.0, but then we would have had to manually generate upgrade scripts for each new release. So when we got to our first upgrade release, we would have found a gap in the middle of our bridge. (The bridge illustrations accompanying this article are from Minecraft, TM & © 2009-2013 Mojang/Notch.) The continuous integration was still using the full database creation commands, so the upgrade path would only get tested manually. This greatly increased the risk of finding upgrade issues very late in the cycle; we wanted to catch them as early as possible.

Our approach was to use a database change management tool (Liquibase) to maintain a running change log of all the changes to the database. The initial set of change sets was generated from the same full database creation commands. As the schema changed, manual updates to the change sets were applied. This forced the development team to see the impact of schema changes and to think about how existing customer data might be affected, pushing the database migration tasks much earlier in the development process, where they could be tested much earlier. Our 1.0 release used the change sets out of the box, which meant that all changes to the database were tracked in the Liquibase change log table. Operations liked knowing database changes were being tracked. When our 1.1 release went out, the Liquibase tool could tell which changes had already been applied and applied only the new ones. Key to this was that the new change sets had been tested during every development and test installation, because the database install process was always exercising the upgrade path. To further test this process, we changed the automated deployment process of one integration environment to deploy the 1.0 GA code and then automatically deploy the 1.1 code in progress. Even with the version-to-version upgrade running, some database upgrade issues were only found with specific data sets. This shows the importance of testing upgrades with production-like data sets.

Looking toward Continuous Delivery

With all this continuous building, integration, and testing, you might think continuous delivery is right around the corner. Well, there is still a lot more to do. Most of these changes impacted the development team. Our QA team has an automated test suite and is expanding both framework capability and test case coverage. One of our big upcoming challenges is automating the provisioning and configuration of external applications: each environment involves 5-10 additional servers. We have used CA Release Automation to deploy some components, but there is a lot more configuration needed to be production ready. Stay tuned to read about how we accomplish continuous delivery.
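The change-log idea at the heart of the database approach above can be modeled in a few lines. This is a conceptual sketch of the mechanism, not Liquibase's actual implementation, and the change set IDs are hypothetical:

```python
# Hypothetical change set IDs accumulated across releases.
CHANGELOG = [
    "1.0-create-users-table",
    "1.0-create-orders-table",
    "1.1-add-orders-index",
]

def pending_changes(changelog, applied):
    """Return change sets not yet recorded in the change log table,
    in changelog order. A fresh install applies everything, so every
    install exercises the full upgrade path; an existing 1.0 database
    receives only the 1.1 tail."""
    return [c for c in changelog if c not in applied]
```

A fresh environment has an empty log, so all three change sets run; a 1.0 GA database already has the two "1.0-" entries recorded, so an upgrade applies only "1.1-add-orders-index".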

Connect with CA Technologies at ca.com

CA Technologies (NASDAQ: CA) helps customers succeed in a future where every business, from apparel to energy, is being rewritten by software. From planning to development to management to security, at CA we create software that fuels transformation for companies in the application economy. With CA software at the center of their IT strategy, organizations can leverage the technology that changes the way we live, from the data center to the mobile device. Learn more about CA Technologies at www.ca.com.

© 2014 CA. All trademarks, trade names, service marks and logos referenced herein belong to their respective companies. ITIL is a Registered Trade Mark of AXELOS Limited. The statements and opinions expressed in this document are those of the author(s) and are not necessarily those of CA. CA and the authors assume no responsibility for consequences resulting from the publication of or use of this document, and are not responsible for, and expressly disclaim liability for, damages of any kind.