October 2013
Daitan White Paper
Reference Model for Cloud Applications
CONSIDERATIONS FOR SW VENDORS BUILDING A SAAS SOLUTION
Highly Reliable Software Development Services
http://www.daitangroup.com
1 ABSTRACT

At Daitan, we have seen increasing demand from traditional SW solution providers for re-engineering their On-Premise applications to be deployed in the Cloud and offered under a SaaS business model. This paper consolidates our research and experience to present an architecture reference model for cloud applications. Daitan has mostly executed cloud projects in communications (call center solutions, web conferencing, unified communications, etc.) and social networking applications, but the concepts we present are generic. Typically, Daitan focuses on the software application itself and that is where our experience concentrates. We have also accumulated experience with NOSQL databases holding very large data sets, and with sizing and deploying applications on commercial cloud infrastructures (e.g. Amazon AWS).

This paper is written for traditional SW vendors facing the challenge of developing SaaS services. We will look at the business and technical challenges, and then propose a reference model that can help you evaluate your software architecture and imagine what a cloud version of your application would look like.

2 MOVING APPLICATIONS TO THE CLOUD

If you are a software vendor offering an On-Premise software solution, the need to consider a Software-as-a-Service (SaaS) model should be obvious by now. The market is quickly moving in that direction. While most new deployments of mainstream applications (CRM, ERP, email, document sharing) are already moving to the cloud, vendors of On-Premise solutions in vertical markets have been able to defend their position based on richness of features. But, as SaaS vendors accelerate innovation, that feature-set advantage will eventually fade. The need to reduce IT costs and to offer services that can scale up or down with varying demand will accelerate this trend in the coming years, and vendors that cannot make the shift will be left behind.
2.1 IT MAKES SENSE FOR CUSTOMERS

From the customer's perspective, SaaS offers multiple advantages compared to On-Premise solutions:

Time-to-Value - Implementation times for SaaS are significantly shorter than comparable On-Premise projects. While there may be situations where On-Premise can claim a competitive long-term return on investment, SaaS presents significantly lower upfront costs and risks.

Reduced IT Costs - Most companies are trying to reduce their dependence on IT teams working on systems that are not market differentiators, so they can focus their energy and resources on what makes them more competitive.

Current Software - Updating On-Premise software is both disruptive and expensive. Most companies delay upgrades as long as possible and run on software that is a few years behind the state of the art. SaaS customers are (for better or worse) always running the current version. Changes in functionality and user interface are more frequent and gradual, but generally there is no big operational disruption caused by upgrades.

Built-in Availability/Reliability - Large enterprises can afford to build reliable systems through redundancy and manage backup and security processes on their own. But for most companies, that is not the reality. By using SaaS, small and mid-sized companies gain access to shared "enterprise-class" infrastructure.

2.2 IT MAKES SENSE FOR SW VENDORS

From the SW vendor's perspective, a SaaS model provides a direct connection with customers, more opportunity to innovate, and better business:

Reduced barrier to sales - With less financial commitment, fewer IT dependencies, and shorter time-to-value, SaaS reduces barriers to sales. More predictable and recurrent revenues lead to higher company valuations.

More upgrade revenue - SaaS enables a "land and expand" strategy where an initial sale can be more readily expanded by adding seats, company divisions, and new modules compared to On-Premise software.

Lower maintenance and support costs - All resources are directed at providing the best software to all customers, without the load of supporting and keeping compatibility with prior releases.

More opportunities for innovation - The reduction of the support burden saves development costs and dramatically increases the ability of the vendor to respond to new market opportunities.

2.3 BUT THERE ARE SIGNIFICANT TECHNICAL CHALLENGES

An On-Premise vendor decides that a 'hosted' application is close enough to a SaaS solution. Allocate servers on Amazon AWS, put the good old software to run in a virtual machine, add some SaaS marketing, and they are done, right? Not really.

Moving On-Premise software to a virtual server can be a valid first step for a user organization moving towards Cloud Computing. But software vendors should not confuse a hosted application deployed on cloud infrastructure with a true cloud application. With traditional software merely hosted in the cloud, neither vendors nor customers will fully benefit from the scalability and management advantages of the cloud.
On-Premise vendors have not demonstrated a good track record of shifting their models to compete against new SaaS-only entrants. They have been technically challenged by the following:

Multi-tenancy - Products that were designed to be one-off implementations for each customer cannot be quickly changed to host a large number of separate customers in one system instance. These systems require full re-architecting of data models, business logic and UI (a sketch of tenant-scoped data access follows this list).

Scalability - On-Premise software is often designed with tightly coupled components requiring a dedicated set of computing resources. A SaaS solution needs to allocate resources from elastic pools, scale according to demand, and adapt to different profiles of use.

Integration and Customization - On-Premise software requires professional services to be customized to the specific needs of each business and to integrate with other solutions. SaaS vendors update software frequently and must offer software that is configurable and provide well-behaved APIs that allow for easier integration without the need to fork code.

Proprietary Hardware - In certain domains, traditional solutions have relied on proprietary hardware to deliver their services (e.g. in Communications, the use of DSP boards to perform media transcoding). While the deployment of non-standard hardware in the cloud is possible, it keeps the vendor from benefiting from the scale of commercial cloud infrastructure providers.

Billing and Operations Management - SaaS requires a new commercial model and the technology to support dynamic allocation of new features and feature upgrades (e.g. Free, Premium, Professional tiers).

Upgrade and Release - SaaS requires continuous integration, testing and operations with full capability for pre-release testing, upgrade deployment and rollback.
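To make the multi-tenancy point concrete, the sketch below shows one common shape a re-architected, shared-schema data model can take. It is a minimal illustration, assuming SQLAlchemy with a MySQL back end; the Account model, column names and connection string are hypothetical and not part of any specific product.

```python
# A minimal sketch of tenant-scoped data access (assumes SQLAlchemy).
# The Account model, columns and connection string are hypothetical.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Account(Base):
    __tablename__ = 'accounts'
    id = Column(Integer, primary_key=True)
    tenant_id = Column(Integer, nullable=False, index=True)  # discriminates customers sharing one schema
    name = Column(String(255), nullable=False)

engine = create_engine('mysql+pymysql://app@db.internal/appdb')  # hypothetical DSN
Session = sessionmaker(bind=engine)

def accounts_for_tenant(tenant_id):
    """Every query is filtered by the caller's tenant, so data never leaks across customers."""
    session = Session()
    try:
        return session.query(Account).filter(Account.tenant_id == tenant_id).all()
    finally:
        session.close()
```

This shared-schema style is the cheapest to operate but the least isolated; separate schemas or separate databases per tenant trade higher operational cost for stronger isolation.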
3 REFERENCE MODEL FOR CLOUD APPLICATIONS

This section consolidates some of our research and experience to recommend an architecture reference model for cloud applications.

3.1 SCALABILITY AND MULTI-TENANCY

An application deployed in the cloud typically serves multiple customers and is able to efficiently allocate resources based on real-time demand. The system should, for example, be able to seamlessly re-allocate resources (be it computing power, communication channels, bandwidth or storage) from a customer who is inactive to another who is experiencing a peak in traffic.

The previous paragraph sounds obvious, but it is not how traditional On-Premise systems were architected. Typically, those systems assume they serve a single customer with a permanently allocated set of resources. That assumption has deep architectural implications and is the most common reason why On-Premise vendors have a hard time migrating their solutions to the cloud. An application designed for the cloud should scale horizontally behind simple, stateless load balancing.

3.2 DEVOPS AND DEPLOYMENT PLATFORM

Operating a system under a SaaS model changes not only the software, but also how it is developed, deployed, and supported. There is a need for a platform that enables continuous integration, quick release cycles, live database migrations, upgrades, test automation, release management, configuration management, billing and license management, etc. A minimal example of release automation is sketched below.
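Tooling for this varies widely; as one small illustration, the sketch below uses Fabric 1.x (a Python deployment library) to roll a tagged release onto a pool of application servers. The host names, install path, service name and migration command are hypothetical placeholders, not a recommendation of any specific stack.

```python
# fabfile.py -- a minimal release-automation sketch (assumes Fabric 1.x).
# Hosts, paths, service name and migration command are hypothetical.
from fabric.api import cd, env, run, sudo, task

env.hosts = ['app1.example.com', 'app2.example.com']  # hypothetical application server pool

@task
def deploy(version):
    """Push a tagged release to each application server, one host at a time."""
    with cd('/opt/myapp'):                      # hypothetical install path
        run('git fetch && git checkout %s' % version)
        run('ve/bin/pip install -r requirements.txt')
        run('ve/bin/python manage.py migrate')  # hypothetical live schema migration step
    sudo('service myapp restart')               # hypothetical service name
```

Invoked as `fab deploy:1.4.2`, Fabric runs the task against each host in env.hosts in turn; a real SaaS pipeline would wrap this in continuous integration, pre-release testing and rollback steps.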
Such deployment platforms are significantly different from the automation platforms used in traditional software development. They need to enable the seamless integration of operations and engineering. It is not just a change in technology, but also a process and organizational change.

3.3 THE APPLICATION FRONT-END: NOT ONLY BROWSERS ANYMORE

Some traditional applications are just completing the transition from proprietary thick clients in a client-server architecture to a web-based interface. But the world is changing fast. Any business application now also needs to consider access from mobile clients interacting with the back end through an API.

One important step in decoupling the front-end from the application servers is to create a clean and manageable interface between them. In the cloud, the best practice is to do it through a RESTful, HTTP-based web services API.

3.4 CONTENT DELIVERY NETWORK (CDN)

A Content Delivery Network (CDN) is a distributed system of servers deployed in multiple locations so that they are physically and logically closer to the users. When applications need to deliver large amounts of data that are relatively static (e.g. distribution of streamed video content, graphic assets, or client software to be downloaded to a device), CDNs can improve availability and user experience and decrease the use of network bandwidth.

A CDN requires a global infrastructure and is available as a service to software solution vendors from many providers. Akamai is a well-known example of a large provider. Most commercial cloud infrastructure providers (e.g. Amazon AWS) also offer ready-to-use static content distribution solutions.

3.5 THE LOAD BALANCING LAYER

A software solution deployed in the cloud will almost always employ some form of front-end load balancing. A load balancing layer distributes traffic among two or more application servers, adding availability and fault tolerance to the system, and lets you scale the infrastructure by adding or removing servers depending on aggregate demand.

HAProxy, an Open Source Software project, is a popular choice for load balancing of HTTP and TCP applications in the cloud. Commercial cloud infrastructure providers normally also offer a ready-to-use load balancing solution (e.g. Amazon's Elastic Load Balancer on AWS). If you are able to fully decouple the front- and back-end through a RESTful API, there are also many API management platforms (e.g. Apigee, Intel/Mashery, 3Scale, SOA Software, etc.) in the market that let you load-balance and manage that interface with even more granularity.

3.6 THE APPLICATION LAYER

The application layer consists of an array of application servers servicing requests from users. The application must be designed to be aware of multi-tenancy and to be as stateless as possible. Long-lived state information should not be stored in the array nodes, but in back-end service nodes using cloud-optimized storage technologies; that avoids the need for complex data replication schemes. As each customer's demand for resources fluctuates, the system dynamically allocates resources where they are needed.
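Before moving on to automatic scaling, the fragment below is a minimal sketch of such a stateless application node, assuming Flask and the redis-py client; the conference-participant endpoint, host name and key layout are hypothetical. Because the only long-lived state lives in the shared back-end store, any node behind the load balancer can handle any request, and nodes can be added or removed without session affinity or data replication among them.

```python
# A minimal sketch of a stateless application node (assumes Flask and redis-py).
# The endpoint, host name and key layout are hypothetical illustrations.
import json
import redis
from flask import Flask, jsonify, request

app = Flask(__name__)
state = redis.StrictRedis(host='state.internal', port=6379)  # shared back-end state store

@app.route('/api/v1/conferences/<conf_id>/participants', methods=['POST'])
def join_conference(conf_id):
    participant = request.get_json()
    # Long-lived conference state is kept in the shared store, never in this process,
    # so the next request for this conference can land on any other node in the array.
    state.rpush('conference:%s:participants' % conf_id, json.dumps(participant))
    count = state.llen('conference:%s:participants' % conf_id)
    return jsonify({'conference': conf_id, 'participants': count}), 201

if __name__ == '__main__':
    app.run()  # in production this would run under a WSGI server behind the load balancer
```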
Ideally, there will be automatic scaling up/down of the aggregate system capacity, so that changes in traffic metrics can trigger both the dynamic removal of servers from the array (when demand is low) and the addition of new application servers (when demand is high). The same principle of automatic re-allocation and scaling of the pool can be applied to other resources that are specific to your application. For example, in many of the projects Daitan has executed in the unified communications space, transcoding capacity or media channels were expensive resources that could be managed as an elastic shared pool.

On-Premise applications typically use a client-server architecture. Some have already migrated to a service-oriented architecture (using domain-specific protocols such as SIP in communications, or more generic approaches such as SOAP). Applications in the cloud, more often than not, expose their services through web-based (HTTP) APIs.

For communications applications, which traditionally relied on purpose-specific hardware (e.g. DSPs for media transcoding), a necessary step before migrating to the cloud is to consider software-only methods. While deploying custom hardware in the cloud is technically possible, that would require the vendor to build its own proprietary cloud infrastructure, keeping it from benefiting from the scale of commercial infrastructure providers.

3.7 THE CACHING LAYER

If your application is database read-intensive, it can potentially benefit from a caching layer. Memcached and Redis, two Open Source Software projects, are popular choices for implementing a caching layer in a cloud-deployed software application. Commercial cloud infrastructure providers normally also offer a ready-to-use caching solution (e.g. Amazon ElastiCache on AWS, which supports both Memcached and Redis). A minimal read-through example combining a cache and a NOSQL store is sketched at the end of section 3.8.

3.8 THE BACK-END STORAGE LAYER

The permanent data store is of critical importance and should be designed based on the particular requirements of your application. If you are using a SQL database (say, for example, MySQL), you should consider using a proxy mechanism (e.g. MySQL Proxy) that lets you implement one or more slave databases, providing more reliability and availability and allowing for horizontal scaling of the database.

New database technologies have become more common as applications migrate to the cloud. NOSQL databases (such as Cassandra and MongoDB, for example) are optimized for cloud deployments, provide the ability to store vast amounts of data with a built-in distributed and redundant architecture, and can be quite efficient and scalable for applications that do not require structured SQL queries. Commercial cloud infrastructure providers will usually offer data-storage services that abstract away backup, availability and redundancy concerns (e.g. the Amazon RDS service on AWS).
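Tying the caching and storage layers together, the fragment below is a minimal read-through cache sketch, assuming the redis-py and pymongo client libraries; the host names, key layout, collection name and 60-second TTL are hypothetical. Reads are served from Redis when possible and fall back to the NOSQL store on a miss.

```python
# A minimal read-through cache sketch (assumes redis-py and pymongo).
# Host names, key layout, collection name and TTL are hypothetical.
import json
import redis
from pymongo import MongoClient

cache = redis.StrictRedis(host='cache.internal', port=6379)
db = MongoClient('mongodb://db.internal:27017')['appdb']

def get_customer_profile(tenant_id, customer_id):
    """Serve read-heavy data from the cache, falling back to the NOSQL store on a miss."""
    key = 'profile:%s:%s' % (tenant_id, customer_id)
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    doc = db.profiles.find_one({'tenant_id': tenant_id, 'customer_id': customer_id},
                               {'_id': 0})  # drop Mongo's ObjectId so the document is JSON-serializable
    if doc is not None:
        cache.setex(key, 60, json.dumps(doc))  # a short TTL keeps stale reads bounded
    return doc
```

The TTL bounds how stale a cached entry can get; write paths can also delete or update the cached key explicitly when the underlying document changes.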
ABOUT DAITAN GROUP

Daitan Group is a software development services provider focused on Communications, Mobility, and Cloud/Web solutions. We partner with technology vendors to help them develop their next software solution. In several projects, Daitan engineers have been exposed to the challenge of developing software architectures optimized for the cloud. Development of RESTful APIs, utilization of NOSQL databases such as MongoDB and Cassandra, deployment of applications on commercial cloud infrastructures such as Amazon AWS, dynamic allocation of resources, and simulation and testing of applications subject to very large volumes of transactions or connections are just a few examples of our organizational experience.

To learn more about what Daitan can do for you, please visit http://daitangroup.com