Selecting the Right Service Virtualization Tool
In recent years, the adoption of Service-Oriented Architecture (SOA) has become the solution of choice amongst IT directors for the challenges posed by antiquated, large and complex infrastructures. The business benefits are manifold: increased integration and agility, greater efficiency and reduced costs, amongst others. However, SOA can be complex to implement and requires careful management; handled badly, it adds a further layer of complexity, making your development and testing cycles more inefficient and costly.

One solution is to create virtualized services, which replicate the behaviour of live systems. These offer developers and testers a stable, isolated environment, free from the constraints of system unavailability and cross-system dependencies. High-quality development and testing requires more, however: rich sets of sophisticated, fit-for-purpose test data.

Below, we have set out a matrix containing a comprehensive list of the features you need to consider when reviewing service virtualization solutions to improve the quality, efficiency and cost-effectiveness of your SOA development and testing. In each case, we've noted how important each feature is, and how it can help you to solve the real challenges facing organizations moving to Service-Oriented Architectures.
Support for Structured and Non-Structured Messages (Mandatory)
All medium-to-large organizations have a plethora of existing integration technologies that use unstructured messages to communicate. In many cases, knowledge about the technology at either end of these messages is sparse, and sometimes non-existent. For this reason, any service virtualization tool must be able to record unstructured request and response messages, so that normally expensive and complex back-office technologies can be virtualized as part of any virtualization strategy.

Support for Multiple Platforms
The vast majority of organizations have more than one flavour of database and more than one platform. If the virtualization tool does not support one or more of the platforms required, it will be impossible to implement a virtualization strategy across the entire enterprise.

Mainframe Virtualization
Mainframes normally sit at the core of the organization and offer a number of legacy services. These services are expensive to run and test. Mainframe virtualization capability makes it possible to perform more testing of new developments against mainframe services, while saving significant costs.

Virtualization in the Cloud (Medium)
The ability to provision virtualized services in the Cloud offers a more cost-effective platform for virtualizing scarce or expensive resources. There are many advantages to running such services in the Cloud, primarily that Cloud service providers normally charge based on usage, meaning that services only cost money when they are needed.

Create Services from Definitions, Logs or Docs (Low)
It is preferable to set up services by observing and understanding the real usage of a service, and then virtualizing it. Using definitions or logs can be prone to failure, depending on the amount and type of information that can be gleaned. Using documentation can also be extremely problematic, as it is often either not maintained and up to date, or simply incorrect.
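The recording of unstructured messages described above can be sketched in a few lines of Python. This is a minimal illustration rather than a real tool: it treats every message as an opaque byte string, so no knowledge of the wire format is required, and the class and method names are invented for the example.

```python
class OpaqueServiceStub:
    """Virtualize an unknown protocol by treating messages as raw bytes.

    Because requests and responses are stored as opaque blobs, nothing
    needs to be understood about the back-office technology producing
    them; an identical request byte-for-byte replays the recorded
    response. (Names here are illustrative, not from any product.)
    """

    def __init__(self):
        self._responses = {}  # raw request bytes -> raw response bytes

    def record(self, request: bytes, response: bytes) -> None:
        """Store one observed exchange; the format stays unknown."""
        self._responses[request] = response

    def respond(self, request: bytes) -> bytes:
        """Replay the response recorded for an identical request."""
        return self._responses[request]
```

Matching on raw bytes is deliberately the simplest possible strategy; real tools layer protocol awareness on top when it is available, but this shows why it is not strictly required.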
Integration with TDM Tools (Mandatory)
When setting up a virtualized service, it is extremely difficult to ensure that recordings of the service alone will yield a comprehensive set of the responses a test may require. Recording samples of each service call, then using a test data management (TDM) tool to create a range of synthetic responses based on the sample data, results in a full set of responses for any request that may be made. The TDM tool is also essential on the request-generation side of testing, to ensure that the tool driving the tests has sufficient information to create a broad range of requests that fully and rigorously exercise the service.

Integration with other Virtualization Tools (Low)
When developing a service virtualization policy for an organization, the full needs of the organization must be assessed in advance. This leads to a set of requirements against which tools can be assessed. If a tool cannot satisfy all of the requirements needed to implement the policy, or cannot support them in a timely manner, it should not be selected. Trying to implement the policy by combining additional virtualization tools is likely to lead to conflicts and additional costs.

Integration with Test Tools
The ability to integrate the service virtualization stack with the test tools used within an organization is a must. The service virtualization tool will first identify samples of requests and responses that must be virtualized. Once available, these samples must be expanded to ensure full test coverage for the service(s). The test tools must then be able to use this information to run tests that verify whether or not full coverage has actually been achieved.

Create Basic Echo Responses (Medium)
Basic echo services can be useful for testing basic connectivity from a client to a service. This can represent the first phase in a testing strategy, but a more extensive test data coverage approach is required to ensure that a service is tested fully.
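To show how little an echo service involves, here is a minimal sketch using only Python's standard library. The HTTP transport, port handling and names are assumptions made for the example; a real virtualization tool would offer echo stubs across many protocols.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

class EchoHandler(BaseHTTPRequestHandler):
    """Return each request body unchanged, preserving the content type."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        self.send_response(200)
        self.send_header("Content-Type",
                         self.headers.get("Content-Type", "text/plain"))
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)  # echo the payload back to the client

    def log_message(self, fmt, *args):
        pass  # keep the stub quiet during connectivity checks

def start_echo_stub(port=0):
    """Run the stub on a background thread; port 0 picks a free port."""
    server = HTTPServer(("localhost", port), EchoHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server  # server.server_address[1] is the bound port
```

Pointing a client at the stub's port confirms end-to-end connectivity before any real service behaviour has been virtualized, which is exactly the "first phase" role described above.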
Pass-Through Data from One Service to Another
There are two good reasons for having a service virtualization tool that can pass data through to a real service. The first is that it provides the most comprehensive and accurate way to record a service's behaviour for later virtualization. The second is that many organizations need the ability to turn service virtualization on or off dynamically; with this capability, the configuration can easily be modified to simply pass service requests through to the real service when appropriate.

Record and Playback Message Requests and Responses
This provides the ability to record real service requests and responses, and thus the best means of understanding the semantics of the service that is to be virtualized. The ability to then play back these messages makes it easy to test standard sequences of service requests and responses.

Enhance Playback Responses with Synthetic Data
The ability to enhance playback responses with synthetic data ensures that full coverage of all test cases can be achieved. For more, see Integration with TDM Tools.

Copy Recordings and Modify the Subsequent Responses
Many organizations offer standard services both internally and externally to their customers. In some cases, these services are backed by large and expensive resources which take time to adjust to a new customer. Since the requests and responses for a given service change depending on the customer, the ability to record requests and responses, then copy and modify them, means that test data for other customers can be created from the first real set of data. This enables services to be provided to new customers more quickly and cost-effectively.

Convert Services into Tables (Medium)
Virtual services backed by real database tables provide another interesting option, as the data in the tables used by the virtual services can be populated from other resources. For example, an organization could populate the tables itself and then use them as part of the virtual services.
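The pass-through, record and playback behaviours described above can be sketched together in one toy model. Here the real service is represented by a plain Python callable, and the class name and mode strings are invented for the illustration; a real tool would sit on the network and switch modes from a console.

```python
class RecordingProxy:
    """Sketch of pass-through recording plus playback.

    In "passthrough" mode, calls are forwarded to the real service and
    the request/response pairs are recorded along the way. In
    "playback" mode, recorded responses are returned without touching
    the real service at all, so virtualization can be toggled
    dynamically.
    """

    def __init__(self, real_service):
        self.real_service = real_service
        self.recordings = {}        # request -> recorded response
        self.mode = "passthrough"

    def __call__(self, request):
        if self.mode == "passthrough":
            response = self.real_service(request)
            self.recordings[request] = response  # record while forwarding
            return response
        if request in self.recordings:           # serve from recordings
            return self.recordings[request]
        raise KeyError(f"no recording for request: {request!r}")
```

Flipping `mode` back to `"passthrough"` restores live traffic, which is the dynamic on/off switching the matrix entry calls for.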
Build Prototype Services in Place of Unready Systems
In today's agile world, projects regularly stall due to the unavailability of one or more services. The ability to create virtual copies of these unavailable services ensures that teams within an agile development project can progress at their own pace. As the real services become available, they can replace the virtual ones. Keeping the virtual services available even after the real services are ready remains useful in cases where the new service is not always available.

Use Legacy Code to Create Prototype Services (Medium)
When legacy code is required within a development project, the ability to make that functionality available quickly can help speed the project up. It can also facilitate virtualization of those interfaces, so that the legacy implementation does not always have to be available for testing.

Transact against Virtual Data
The ability to transact against virtual data brings a new level of testing capability to an organization. Extremely large databases can be simulated by creating virtual data against which the services operate.

Mask Data In-Flight
Large amounts of money are spent creating masked copies of large databases which, if the truth were told, can be out of date as soon as they have been created. Masking data in flight ensures that production data can safely be used for application testing, leading to massive cost savings by avoiding the duplication of databases and ensuring that the data being tested is 100% accurate.

Integrate with SOA Harnesses
SOA harnesses are widely used in agile development projects, and the service virtualization tool must therefore integrate well with them.

Version Virtualized Services (Mandatory)
When a service is developed, it is very rare for it to be accurate first time. It is also a fact of life that services evolve over the lifetime of a project, and even of an application. Without lifecycle governance for the virtualized services, and thus the capacity to run different versions of a service at the same time, all users of a service would have to be upgraded at the same time. Given the complexity of today's development and production applications, this is likely to be an impossible task, making a versioning concept mandatory.
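In-flight masking can be illustrated with a small Python sketch that rewrites sensitive fields in a JSON message as it passes through. The field list and the masking rule below are assumptions made for the example; a real tool would drive both from configuration and support many message formats.

```python
import json

# Fields assumed sensitive for this sketch; in practice the masking
# policy would come from configuration, not a hard-coded list.
SENSITIVE_FIELDS = {"ssn", "card_number", "email"}

def mask_value(value):
    """Replace all but the last four characters with asterisks."""
    text = str(value)
    return "*" * max(len(text) - 4, 0) + text[-4:]

def mask_in_flight(payload: str) -> str:
    """Mask sensitive fields in a JSON message on its way through.

    The production database is never copied or altered; only the
    message in transit is rewritten, so test environments see safe
    data that is as current as production itself.
    """
    message = json.loads(payload)
    for field in SENSITIVE_FIELDS & message.keys():
        message[field] = mask_value(message[field])
    return json.dumps(message)
```

Because the rewrite happens per message, the masked view can never drift out of date the way a masked database copy can.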
Maintain and Analyse Message Flows
To create sophisticated, fit-for-purpose test cases, teams need data that provides the maximum coverage. The ability to systematically monitor and analyse message flows therefore enables developers and testers to understand exactly what data they have, and how to improve their coverage.

Message Coverage Techniques
Once you understand what data you have, you need to provision test cases which test each of the required scenarios. Virtualization tools should be able to use coverage techniques (such as all-pairs generation, orthogonal arrays and other combinatorial techniques) to improve message coverage and the quality of the data used for testing.

Throttling Services (Medium)
When testing a virtualized service, it may be necessary to slow down or speed up the response times of your services to test performance. Ideally, this can be controlled from a dashboard governing the performance of each of your services.

No Coding Required
The goal of any organization is to reduce the amount of custom code wherever possible. A purely configuration-based approach will save money in the longer term, and also leads to a more consistent and easily managed set of virtualization components.

Consulting Services
Organizations initially require help to understand the capabilities of their virtualization product and to get the best out of it. The thrust of this consulting should be to ensure that the organization is in a position to take over the management and operation of the product after an initial training period.

Agile Development Support
Agile development environments place major demands on service virtualization tools, and any tool must therefore work well in an agile environment.
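All-pairs generation, one of the coverage techniques mentioned above, can be illustrated with a simple greedy sketch: keep adding test cases until every pair of values across any two message fields appears in at least one case. Production TDM tools use far more efficient algorithms; this brute-force version exists only to show why pairwise coverage needs many fewer cases than the full combinatorial set.

```python
from itertools import combinations, product

def pairs_of(case):
    """All (field-index, value) pairs covered by one test case."""
    return set(combinations(list(enumerate(case)), 2))

def pairwise_cases(parameters):
    """Greedy all-pairs sketch.

    `parameters` is a list of value lists, one per message field.
    Repeatedly picks the candidate case covering the most not-yet-seen
    value pairs, until every cross-field value pair is covered.
    """
    uncovered = set()
    for i, j in combinations(range(len(parameters)), 2):
        for a in parameters[i]:
            for b in parameters[j]:
                uncovered.add(((i, a), (j, b)))
    candidates = list(product(*parameters))  # brute force: small inputs only
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda c: len(pairs_of(c) & uncovered))
        chosen.append(best)
        uncovered -= pairs_of(best)
    return chosen
```

For three fields with 2, 2 and 3 values there are 12 exhaustive combinations, but every cross-field value pair can be covered by a handful of cases, which is the data-reduction argument behind the technique.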