Solution Spotlight KEY OPPORTUNITIES AND PITFALLS ON THE ROAD TO CONTINUOUS DELIVERY
Continuous delivery offers a number of opportunities and challenges for organizations. By automating the software build-test-deployment cycle, it allows teams to complete the cycle multiple times a day. This E-Guide reviews best practices and tips for overcoming the key barriers to continuous integration and delivery to accelerate software development cycles.
CONTINUOUS DELIVERY IN ALM: OPPORTUNITIES AND CHALLENGES

Nari Kannan

Continuous delivery is the automation of the software build-test-deployment cycle. Builds can be done, automated tests run and the software deployed multiple times a day. Sometimes, features can be rolled out to production as often as every ten minutes or so. A build master coordinates the automated testing and deployment of features to production. Continuous delivery also provides for feature toggling: the selective release of features to only certain users, chosen based on user characteristics. For example, the social media site Facebook uses continuous delivery to roll out certain features only to female users between the ages of 18 and 34 in the US. This enables rollout of new features only to selected users and could take the place of user acceptance testing even when they are fully deployed. Continuous delivery considers software development a pipeline, with feature definition, design, development, build, automated testing and deployment all forming parts of the pipe. The idea here is that development is not managed
with discrete milestones and releases, as in a traditional Waterfall methodology, or with a stories/sprints approach, as in Agile development. Feature rollout is a continuous process. Continuous delivery offers a number of opportunities and challenges for organizations.

CONTINUOUS DELIVERY OPPORTUNITIES

Strategic impact: Continuous delivery offers a way to roll out features faster than other methodologies. The end result is that little time is lost between the conception of a feature and its availability in production. This offers a key strategic advantage for any company that uses this methodology, as opposed to competitors who may be using methodologies that require a longer delivery cycle.

Lazy evaluation of features: Rather than doing elaborate requirements gathering, analysis and design of software features, continuous delivery enables features to be rolled out quickly, with the evaluation of how useful they are deferred until after deployment. Feedback can
be obtained after features are in production, and features can be turned off if the feedback is negative.

Feature toggling: Based on user characteristics, certain features are made available to a user or turned off. Feature toggling offers a very rapid way of doing end-user testing or even customizing the software for different kinds of users. It also provides a way of turning a feature off or on as needed at any time. This is a huge advantage for features that an organization is not sure of but wants to experiment with quickly.

Value analysis possibilities: When gathering requirements, it is hard to predict the value of each software feature to users. Toggling enables continuous delivery environments to track actual usage by feature. This enables the comparison of predicted vs. actual usage and, by inference, predicted vs. actual value of different features. Features that fail to prove their value can be turned off.

Better quality: When you do multiple build-test-deployment cycles a day, the test automation exercises the entire software many times a day,
catching any software defect that slips through, especially when unit tests are also rolled in. Better quality of software is ensured with continuous delivery.

Reduction of software backlogs: Software backlogs have been an irritant for the business within IT in many organizations. Continuous delivery has the potential to rapidly reduce software backlogs, simply because features can be deployed more rapidly than with other methodologies.

Streamlining of processes: Continuous delivery requires a high degree of discipline at every stage of the build-test-deployment automation cycle. Implementation ensures the streamlining of internal business processes to make this happen, creating efficiencies that may not have been there before.

CONTINUOUS DELIVERY CHALLENGES

High degree of development discipline: Smaller organizations with
fewer decision makers may be more nimble than large ones. The high degree of discipline and the close, quick coordination needed in a build-test-deployment cycle may not be possible in larger organizations.

Granularity of features assumed: Large features may need to be broken down into smaller ones to be accommodated in continuous delivery. Long-running features may not be suitable for continuous delivery when this breakdown is not easily possible.

Methodology leapfrogging: Many organizations are still in the process of evaluating and transitioning to Agile development methodologies, with all the attendant difficulties of leapfrogging methodologies. Continuous delivery demands a higher level of process discipline than the other methodologies and as such may be difficult for such organizations to handle.

Tool support on various platforms: Availability of automation tools for continuous delivery is best on Linux, and somewhat spotty in other environments. This is especially true for organizations with legacy
computing environments.

Time to commit, validation: Features may be deployed but still need to be validated before being committed. Validation of feature usage and utility takes more time than the earlier parts of the continuous delivery cycle. So even if a feature is developed, tested and deployed, it may take time for it to be committed. However, continuous delivery offers capabilities like toggling to speed the data collection needed for validation.

CONCLUSION

The never-ending quest for software development methodologies that can produce better-quality software, faster, has resulted in continuous delivery. It uses build-test-deployment automation to achieve its goals. It presents a number of benefits that other methodologies do not, but only if organizations are able to overcome the challenges in its adoption.
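The feature toggling described above — enabling a feature only for users with certain characteristics, with the ability to turn it off at any time — can be sketched in a few lines. This is a minimal illustration, not any particular continuous delivery tool's API; the `FeatureToggle` class and the example user attributes are assumptions made for the sketch.

```python
# Minimal sketch of feature toggling by user characteristics.
# The FeatureToggle class and the example attributes are hypothetical,
# not taken from any particular continuous delivery tool.

class FeatureToggle:
    def __init__(self):
        self._rules = {}  # feature name -> predicate over a user dict

    def register(self, feature, predicate):
        """Enable a feature only for users matching the predicate."""
        self._rules[feature] = predicate

    def disable(self, feature):
        """Turn a feature off for everyone, e.g. after negative feedback."""
        self._rules[feature] = lambda user: False

    def is_enabled(self, feature, user):
        predicate = self._rules.get(feature)
        return predicate(user) if predicate else False


toggles = FeatureToggle()
# Roll out a hypothetical "new_timeline" feature only to US users aged 18-34.
toggles.register(
    "new_timeline",
    lambda user: user["country"] == "US" and 18 <= user["age"] <= 34,
)

alice = {"country": "US", "age": 25}
bob = {"country": "DE", "age": 40}
print(toggles.is_enabled("new_timeline", alice))  # True
print(toggles.is_enabled("new_timeline", bob))    # False

toggles.disable("new_timeline")  # kill switch, usable at any time
print(toggles.is_enabled("new_timeline", alice))  # False
```

Because the rule is just a predicate over user attributes, the same mechanism supports the usage tracking mentioned above: log each `is_enabled` call and compare predicted against actual uptake per feature.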
CONTINUOUS INTEGRATION MADE SIMPLE: FIVE LESSONS YOU WON'T WANT TO MISS

Matt Heusser

It seems easy to get code to compile and test automatically; you just hook up a server to version control. Yet over the longer term, it turns out that many companies struggle to use continuous integration effectively. In this tip, I'll share a few of their mistakes and how to avoid them.

LESSON #1: HAVE A STRATEGY FOR MANAGING THE BUILD

It seems obvious, but continually integrating means that, every hour or so, you'll get a new build. If developers are continuing to check new code into that branch, the new code will be picked up by the build machine. Without discipline, your newest build could have new errors and changes that invalidate the most professional testing. There are a few ways you could manage the build process: Either have the capability to mark and 'promote' a candidate build, then perform testing on that
build; or else branch the code and, at a certain point, insist that new development occur on the branch. For example, one company I worked with had a 'master' branch; as the project approached release, we would create a project-name branch and only check fixes targeted for that release into the project branch. I recommend both strategies. The second may add a bit of overhead, as the project branch will have to be merged back to master occasionally. Modern version control tools like git can take the pain out of merges.

LESSON #2: STAMP OUT FALSE ERRORS

The integration part of continuous integration is more than a compile step; it implies a series of automated checks that stress the components in isolation (unit tests), the components with each other (integration tests) and, perhaps, some sort of customer-understandable high-level tests (acceptance tests). The higher-level the test, the more often it will fail. Some tests, especially GUI tests, may be intermittent, or prone to failure. On one project, I found our team was using a certain language about tests; you'd hear things like, "Don't worry about the search-by-tag tests; it's just that flaky indexer feature." When that happened, the value of the tests had gone negative. Not only were the failures wasting our time, but we were ignoring the
results anyway. This created an even greater risk: that we would ignore future failures that turned out to be real. When people start talking about ignoring failures or commenting out failing tests -- and they can't figure out why the tests are failing or how to make them pass -- there's a problem. Stop the process and fix the issue. Not just for one run, not just for today: find the root cause and fix it. Prevent it next time, or throw the test away.

LESSON #3: MIND THE BUILD/DEPLOY TIME

Continuous integration builds start out fast. Over time, the version control system gets heavy, the build gets more complex, developers add dependencies and third-party tools, and the automated checks take longer and longer. Within a year, a build that took five minutes can grow to an hour. For a large project, the build plus checks can run several hours -- one team I know of had complex GUI tests that took over twenty-four hours to run. With tests that long, if something goes wrong and you make a fix, it will take at least a whole business day, if not two, to find out if the tests passed. Now imagine a high-pressure business environment... and it takes three to four days for a build. This is not going to end well.
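One way to get ahead of this kind of build-time creep is to time every check and partition the suite, so that slow checks can be moved out of the per-commit build. A minimal sketch in plain Python, with no particular CI tool assumed; the one-second threshold and the sample "tests" are illustrative stand-ins.

```python
import time

# Sketch: time each check and partition the suite so slow checks can be
# moved to an overnight run, keeping the per-commit build fast.
# The threshold and the sample "tests" are illustrative assumptions.

SLOW_THRESHOLD = 1.0  # seconds; anything slower goes to the nightly suite

def test_parser():          # stands in for a fast unit test
    time.sleep(0.01)

def test_gui_end_to_end():  # stands in for a slow GUI check
    time.sleep(1.2)

def partition(tests, threshold=SLOW_THRESHOLD):
    """Run each test once, timing it, and split names into fast vs. slow."""
    fast, slow = [], []
    for test in tests:
        start = time.perf_counter()
        test()
        elapsed = time.perf_counter() - start
        (slow if elapsed > threshold else fast).append(test.__name__)
    return fast, slow

fast, slow = partition([test_parser, test_gui_end_to_end])
print("per-commit build:", fast)   # ['test_parser']
print("nightly run:", slow)        # ['test_gui_end_to_end']
```

Most CI tools and test runners can express the same split declaratively (tags, markers, separate jobs); the point is simply to measure durations rather than discover the problem when the build reaches an hour.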
Most likely, the team will start to ignore failures, if not comment out all the tests entirely. To fix this, watch your build time carefully. If you find some tests are long and slow-running, you can pull them out into an overnight end-to-end run, or look for ways to run tests in parallel. Personally, I haven't found a great deal of value in having automated GUI checks run as part of the build, unless those checks are very fast verifications that succeed every time. (See Lesson #2.)

LESSON #4: EXPLORATORY TESTING AFTER ACCEPTANCE TESTS PASS

It seems logical that passing "acceptance tests" means the code is ready for acceptance, or ready to be deployed. Unless the application is simple, clean and straightforward, it's more likely that passing acceptance tests means the code is ready for acceptance by the testers. That is to say, the moment the acceptance tests pass is when exploratory testers can shine, finding the bugs that only a human can find. Automated checks can be helpful and wonderful... as part of a balanced breakfast. Or, in a pinch, if your change is minor, you might take a little risk and "just run the checks and call it good." If you want to rely on automated checks to make sure the software is good, you'll want other safeguards in place, like an
ability to slowly roll code out to increasing user groups over time, and to roll a change back on demand.

LESSON #5: MAKE EXPECTATIONS EXPLICIT, ESPECIALLY FOR DISTRIBUTED TEAMS

It seems obvious to have a single code repository and CI system for distributed teams -- but is everyone playing the same game? Eric Landes, a solution architect with Agile Thought, pointed out some problems with such a setup. He said:

At a prior company, we outsourced a project and agreed that unit tests were required. Our CI process would run the unit tests to make sure they all passed, and collect some code coverage metrics. After the first couple of sprints, we discovered that the remote team had a different understanding of what unit tests are. To them, unit tests were what our group called integration tests. We then agreed on the following definition for unit tests (which I assume is more or less standard): Unit tests are isolated, do not run against data stores, and test business logic at the developer level. If all tests do not pass, do not check in code. The CI process will run only those types of unit tests; if they fail, then the build is broken.
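Eric's definition — isolated, no data stores, business logic at the developer level — can be made concrete with a small example using Python's standard `unittest` module. The `apply_discount` function is a hypothetical piece of business logic invented for the sketch.

```python
import unittest

# Hypothetical business logic under test (not from the article).
def apply_discount(price, percent):
    """Return price reduced by percent, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    """A unit test in Eric's sense: isolated, no database, pure business logic."""

    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# By contrast, a test that loaded prices from a live database before
# applying the discount would be an integration test: it can fail when
# the database is down even though the business logic is correct --
# exactly the false-error signal this lesson warns about.
```

Run with `python -m unittest`; because the tests touch no external resources, they pass or fail for one reason only: the logic itself.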
Eric's integration tests might fail when nothing was wrong with the code at all; the database just happened to be down. This kind of problem sends false error signals to the local team, which may spend time debugging a non-existent problem, or end up waiting twelve hours for the remote team to do so. Again, there's no problem in having a distributed CI setup -- only in having one where the different teams have a different understanding of the commit and build rules.

CONCLUSIONS

The real challenge of continuous integration isn't getting the system set up, or even getting the initial business processes defined. No, the challenge of continuous integration is keeping the system running as it grows over time into a giant blob of dependencies. Yes, we've been dealing with that for decades with "the daily build." With continuous integration, the challenge is bigger, and it's there all the time. To keep things running, you'll want to make sure the build is repeatable, fast and as simple as possible, while ensuring that the automated checks hit the sweet spot of valuable, minimal and fast. Alan Perlis, one of the designers of ALGOL, once wrote:
"Fools ignore complexity. Pragmatists suffer it. Some can avoid it. Geniuses remove it." PAGE 15 OF 16