Enterprise Service Buses: A Comparison Regarding Reliable Message Transfer

MIKAEL AHLBERG

Master of Science Thesis
Stockholm, Sweden 2010
Enterprise Service Buses: A Comparison Regarding Reliable Message Transfer

MIKAEL AHLBERG

Master's Thesis in Computer Science (30 ECTS credits)
at the School of Computer Science and Engineering
Royal Institute of Technology, year 2010
Supervisor at CSC was Alexander Baltatzis
Examiner was Stefan Arnborg

TRITA-CSC-E 2010:113
ISRN-KTH/CSC/E--10/113--SE
ISSN

Royal Institute of Technology
School of Computer Science and Communication
KTH CSC
SE Stockholm, Sweden
URL:
Abstract

When it comes to integration solutions, and especially the integration of systems that require a high level of reliability, perhaps even critical systems, the platform that handles the data transport has to ensure that no data disappears from the system. If a transfer error occurs, there have to be very specific rules for handling such errors, so that every message can be traced back to its origin. The task at hand has been to evaluate two comparable integration platforms to investigate what solutions they provide for upholding a high level of reliability, what has to be implemented by hand, and whether specific solutions have any shortcomings. As a base for this evaluation, a number of test scenarios were built up, based on different types of transport protocols, to keep the work connected to the real world while remaining easy to survey. The work shows that the results did not match the expectations set before the work started. The systems lost messages even though functionality intended to handle platform instability was enabled. In other words, to be able to use these platforms in a critical environment you will have to implement functions by hand to ensure reliable message transfer in all scenarios.
Referat

Enterprise Service Buses: A comparison regarding reliable message transfer

When it comes to integration solutions, and above all the integration of systems that require higher reliability, perhaps even critical systems, the platform that handles the data transfer must ensure that no data disappears from the system. Should a transfer error occur, there must therefore be clear ways of handling it, so that all messages can be traced. The task has been to evaluate two comparable integration platforms to investigate what solutions exist for maintaining high reliability, what has to be implemented by hand, and whether a specific solution has any shortcomings. As a base for this evaluation, a number of test scenarios were built up, based on different types of transfer protocols, in order to obtain work that is connected to the real world yet relatively easy to survey. It turns out that the results do not match the expectations on which the work was based. The systems lose messages even though functionality for handling platform instability is enabled. In other words, implementation by hand is required in order to use the platforms in a critical environment and at the same time be certain that no messages are lost.
Contents

Glossary

1 Introduction
   Problem definition
   Evaluation method
   Delimitations

2 Background
   The market for integration
   Early adoptions of integration solutions
      EAI and its problems
   The Enterprise Service Bus
      What is an ESB
      Techniques included in the ESB family
      Sonic ESB
      Mule ESB
   Messaging system
   Reliable message transfers

3 Study of platform functionality
   Persistent message queues
   Transactions
   Error handling
   Other functionality

4 Implementation
   Scenarios
   Message flows
      Database as sending access solution
      File as sending access solution
      Web service access solution
   Environment setup
   Sonic ESB
      System installation
      Database to database message flow
      Database to file message flow
      Database to multiple receivers message flow
      File to database message flow
      File to file message flow
      File to multiple receivers message flow
      Web service message flow
      Persistent queue setup
   Mule ESB
      System installation
      Database to database message flow
      Database to file message flow
      Database to multiple receivers message flow
      File to database message flow
      File to file message flow
      File to multiple receivers message flow
      Web service message flow
      Persistent queue setup
      Transactions

5 Results
   Receiver disconnected scenario
      Database as sending access solution
      File as sending access solution
      Web service access solution
   Receiver temporary disconnected scenario
      Database as sending access solution
      File as sending access solution
      Web service access solution
   Platform or message system crash scenario
      Database as sending access solution
      File as sending access solution
      Web service access solution
   Receiver disconnected in multi receiver flow
      Database and file access solutions to multiple receivers
   Summary of results
   Persistent delivery performance hit
      Sonic's database to database performance test
      Mule's database to database performance test

6 Discussion
   Receiver disconnected scenario
   Receiver temporary disconnected scenario
   Platform or message system crash scenario
   Receiver disconnected in multi receiver flow
   Persistent delivery performance hit

7 Conclusions
   Scenario results
   Platform comparison
      General
      Reliable messaging
      Problems that arose
      Possible solutions
      Final words on the platforms

8 Further work
   Hardware
   Organizational level
   Security and Integrity

Bibliography

Appendices

A Performance tests
   A.1 Sonic
   A.2 Mule
   A.3 Explanation
Glossary

CSV Comma-separated values, a simple text based data format.
CXF A Web service framework.
DLQ Dead Letter Queue.
ESB Enterprise Service Bus, software allowing integration of applications by providing a robust platform with a set of different types of tools and functions.
JDBC Java Database Connectivity, an API based on Java for accessing databases.
JMS Java Messaging Service, a Java based message-oriented middleware API for sending messages.
JRE Java Runtime Environment, contains the libraries and the Java Virtual Machine for executing and running Java applications.
MOM Message-oriented middleware, software handling transportation of data by providing asynchronous message transfer support.
POJO Plain Old Java Object.
RME Rejected Message Endpoint.
SOA Service-oriented architecture, a set of rules or a design pattern that emphasizes loose coupling.
SOAP An XML based protocol for exchanging data when using Web services.
SQL Structured Query Language, a database computer language.
WSDL Web Service Description Language, a model for describing a Web service.
XML Extensible Markup Language, a markup language designed for carrying data.
XPath XML Path Language, a language for selecting data from an XML document.
XSLT Extensible Stylesheet Language Transformations, used for transforming XML documents.
Chapter 1

Introduction

Enterprises today more often than not have multiple applications and systems which have been constructed to do specialized tasks. To reuse these applications for new business logic, an integration solution is built around the applications that are to be integrated. Over time, however, these applications may have been written in a number of different languages, using different communication protocols, which can make the integration task a very difficult and time consuming one. To ease the burden for the integrator or system engineer, a number of applications and platforms exist on the market for integration solutions today, each providing solutions for both large and small enterprise systems. According to the article Getting on Board the Enterprise Service Bus [14], this market has evolved over the last decade, and new tools have been developed to cope with more and more complex enterprise environments. The Enterprise Service Bus, or ESB as we will call it, is one such tool, trying to ease the task for integration specialists. These tools use standardized protocols to try to avoid vendor lock-in. Depending on the solution at hand, some integration solutions may need extra functionality to make the integration more robust. Each integration platform may have its own functions for providing this increased reliability, but they may not be the same on other platforms.

Figure 1.1. A message flow binding two applications together with the help of an ESB.
1.1 Problem definition

When these so-called ESBs are used in a more critical environment, an environment that may demand that no data is lost even if the system goes down, the ESB's functionality becomes a critical component to rely on. My work revolves around this fact and is an evaluation of two integration platforms, comparing their differences regarding reliable message transfer. As a starting point, I had access to the thesis employer Mogul AB's integration platform, which is based on the Sonic Enterprise Service Bus from Progress [16]. The alternative platform that the comparison was made against is the Mule Enterprise Service Bus. Mule ESB is an open source integration platform and is made available by Mulesoft [11].

1.2 Evaluation method

To be able to evaluate these two platforms regarding reliable message transfer in some sort of real world connected test, two test platforms were built for the task: one platform based on Sonic ESB and another based on Mule ESB, where the two test platforms performed similar tasks. At the same time, the systems were closely studied to shed light on what kind of functionality and solutions each platform could offer to increase the reliability of the total system. The solutions that were found were then tested in a number of different scenarios to see how the systems performed with and without the specific functions found during the study part of the work. The scenarios, which shed light on the differences between the systems and how they handle messages in different situations, will be explained in greater detail at the beginning of the implementation part of this report. An important factor was that these scenarios had to have some real world connection to be able to provide a reasonable picture of how the platforms may act in a real situation. Therefore, the tasks performed in the different scenarios included a number of different access solutions, such as file transfer, database transfer and Web services. In this way I could also evaluate the impact that the different access solutions had on the platform at hand.

1.3 Delimitations

Focus was put on how the two platforms differ when it comes to functionality regarding reliable messaging, and how they perform in their default mode. With reliable transfer, one can go relatively deep regarding what can have an effect on the system, down to the hardware level and so on. The work therefore had to be limited so that the covered area would not become too large. The question regarding reliability could even be asked on an organizational level: who has the task of checking possible error messages? It is all part of reliable message transfer, where even if the integration solution does not work properly, no messages should disappear unnoticed.
You have to be able to track each error message, through the use of logs or similar, so that a message can be redelivered if it did not reach its destination. Therefore, a list of questions was made before the work started, to narrow the evaluation down to just the integration platforms. These questions have been the base for the scenarios, mentioned earlier, that will be implemented later on in this report.

What will happen if the receiving system is down when data is transferred?
What will happen if the receiving system is only temporarily down during data transfer?
What will happen if the receiving system crashes during data transfer?
What will happen if the integration platform itself crashes during the processing of a message?
How will the choice of access solution affect the problem of reliable message transfer?
Does reliable messaging have an effect on the performance of the platform at hand, or does it cause other problems?
Chapter 2

Background

To give a better understanding of what an integration solution is, and how such solutions have been used and have evolved over time, this background chapter explains the history from earlier integration solutions up to what we have today in the form of the Enterprise Service Bus. This chapter also explains the technologies that revolve around ESBs, as well as the area of reliable messaging and what solutions exist for this problem.

2.1 The market for integration

Integration solutions are by no means new inventions that have appeared in recent years. They have been on the market for a long time, mostly in larger companies and enterprises. However, the way you apply your integration solutions, and which tools are available, has changed on a larger scale to meet new demands. In the books Enterprise Integration Patterns [7] and Patterns: Implementing an SOA Using an Enterprise Service Bus [9], the authors tell us why this integration market exists today and in what way it has evolved. It is not unusual for larger enterprises to have collected hundreds of applications over time. These applications can be everything from specially written programs performing a single task to larger web pages and/or Web services. Often these applications are not written in the same programming language or with the same tools, and no thought was given to integration when the program was developed. It is also rarely the case that a single application covers the whole company business, which is one explanation for the number of smaller applications being developed. Companies also often come up with new areas of use for their services and programs, and instead of writing new ones or rewriting old ones they try to reuse as many old programs as possible. This is where integration comes in: by integrating the applications which need to talk to each other, the new functionality is created. In that way, a company can continue to concentrate on its core business while at the same time increasing its portfolio of services.
Through integration they also get new products out at a faster pace than it would have taken to develop a completely new program for the task, which would probably have been a more complex undertaking.

However, even if the market for integration is apparent and has been so for a long time, there are certain difficulties regarding integration that need to be resolved. A number of aspects have to be taken into account; for example, a solution that integrates systems spread over a large physical distance must bear in mind that networks are slow and unreliable [7] compared to an internal data bus. Applications that will be integrated may get replaced or upgraded over time, which means that your integration solutions need to be checked and modified in the future. Another difficulty regarding integration is that, more often than not, you will not have control over the applications that are marked for integration. They are labeled as so-called legacy applications, and you may have to integrate these programs by sharing their data through their database access instead of rewriting the application to share information directly. This leads us to what developers have done to solve these difficulties and the four communication protocols, or access solutions, that have evolved. The four access solutions are file transfer, shared database, remote procedure invocation and messaging, according to Hohpe and Woolf [7]. These four protocols reflect the most common integration problems that arise in companies, such as information portals, data replication, shared business functionality, Service-oriented architecture (SOA), distributed business processes and business-to-business integration.

Even though today's tools have surely diminished the burden of integrating legacy applications, the impact of Web services has eased the burden even more. Web services are a part of SOA, and the advantage from an integration perspective is that you can invoke these services independently of each other, and for the most part you avoid exotic protocols which may not work over large distances. Web services use open standards such as XML [25], SOAP [19] and HTTP, and we shall later see that the integration tools rely on the use of open standards. However, sometimes it is not enough to write your own simple integration solutions with Web services, even though the use of Web services has made this task relatively simple compared to using legacy applications. Larger demands may be placed upon the integration solution than a simple handmade solution can satisfy. It can also quite easily get out of hand to just let Web services integrate with each other directly, without the help of some sort of well tested integration tool. In the next few chapters we will see how integration solutions were built and what they looked like before Enterprise Service Buses became popular on a large scale.

Finally, when speaking about integration solutions you often speak of message flows. These so-called message flows are the way data is distributed or sent in an integration solution, for example from the sender to the receiver. This keyword will be mentioned throughout this report.
2.2 Early adoptions of integration solutions

Before there was a standardized tool or framework for integration, you had to program solutions manually to sew programs together. This could lead to complex solutions which became hard to maintain, and if any part had to be upgraded or replaced you had to rewrite the original integration solution. The costs and the time spent on these integration projects were, for understandable reasons, higher than what they could have been with a more modern approach, which is also mentioned in Getting on Board the Enterprise Service Bus [14]. The solution at hand, or rather the temporary improvement, was called Enterprise Application Integration, or EAI for short [4], and was a step on the way towards the mentioned Enterprise Service Bus.

Figure 2.1. Applications talking to each other using EAI and hub-and-spoke.

EAI often used a so-called hub-and-spoke approach, where the adapters used for connecting each application to be integrated were placed at the endpoints, at the applications, see figure 2.1. These adapters needed to be modified for each application connecting to the hub. The messages, or the data traffic between the applications residing in the integration solution, went through the central hub, as the figure shows. Another improvement that helped integration was the earlier mentioned SOA. It started to be used, according to Ortiz [14], in the mid to late 1990s, and by using the SOA principles companies started to build their internal programs and services directly prepared for communication with other services. SOA also used standard protocols like XML, SOAP and HTTP, which lowered the costs for integration according to Ortiz [14]. However, SOA still used hub-and-spoke, which had its disadvantages.
EAI and its problems

Even though EAI solved some of the problems and difficulties which earlier led to higher development costs, there were still problems with the solution. In both Open Source ESBs in Action [17] and Getting on Board the Enterprise Service Bus [14] the authors explain that the two biggest problems were that point-to-point and hub-and-spoke were used for integrating the applications. It was common that you, as a developer, started with point-to-point solutions for your EAI platform, and by doing so you had to know at the development stage which applications were to communicate with each other. For every new application that was later added to the mix, the work load increased because you had to write a translator for every application. Keep in mind that the applications were rarely written in the same programming language, so the task of translating between the different protocols that were used became complicated. Rademakers and Dirksen [17] also mention that EAI used more or less closed protocols for the transport of the messages. In that way you could easily get yourself into a vendor lock-in, which is also mentioned on the Wikipedia page regarding EAI solutions [4].

2.3 The Enterprise Service Bus

At the end of 2002, Gartner published an article regarding the prospects for the Enterprise Service Bus called Enterprise Service Buses Emerge [18]. This was around the time when ESBs were relatively new and more traditional solutions were used to integrate systems. According to Gartner, ESBs would have a great breakthrough during 2003, but it would start with the smaller companies before large enterprises embraced the new technology. Later on, around 2005 and onwards, the larger enterprises would start to use the ESB technology. The reason for the embrace of this new platform was that it would simplify the task of letting SOA applications, developed in different environments, talk to each other and use asynchronous data transfer. Another plus was the ESB's modularity, where you could easily enhance the product. As mentioned, this article was published when ESBs were something new, and it is interesting to know how analysis companies predicted the use of the product. In Getting on Board the Enterprise Service Bus [14] we can see that these future visions were not completely off the chart. According to industry observers, the ESB market had started to grow in 2007 and the technology had left the pilot projects and was now used in financial as well as telecom enterprises. More companies were also starting to use this technology for their integration solutions.

What is an ESB

We have mentioned what the future visions of ESBs looked like before and during their starting period, but what is an Enterprise Service Bus, and how does it compare to the previously described EAI solution? An ESB can be seen as two things [17]: a pattern or a product which provides tools for integration.
ESBs are today's buzzword in the industry when it comes to integration, and for a product to be called an ESB it should have certain core functions to comply with the demands that enterprises place on these platforms. These core functions are location transparency, transport protocol conversion, message transformation, message routing, message enhancement, security, as well as monitoring and management [17]. Some of these ESB products are built on top of earlier EAI products that were used in the industry and which were mentioned in chapter 2.2. However, ESBs are more modular and use standard protocols like JMS [8] and XML [25], which solves some of the earlier problems around EAI. You could say that some knowledge was drawn from EAI when ESB platforms were defined. In Enterprise Service Bus [3], Chappell explains that an ESB can be looked upon as an EAI solution but without the problems that hub-and-spoke had, and much more scalable. ESBs are also much more general in the way they use their tools and are not as centralized as the earlier solutions, where everything went through the hub. The business logic is also not as tightly coupled as it could be when, for example, only a message-oriented middleware was used, according to Chappell. The platform also supports connecting systems over the Internet, where you can let your platforms link together message flows even if the business is distributed all over the world. You can let your ESB software stand at the endpoints and let the ESB take care of the data that needs to be sent between the nodes over the Internet.

One of the benefits of using an ESB platform is that it is both modular and supports a wide variety of communication protocols from the start [14]. The platform deals with the task of converting from one data format to another and routing a message; it just needs protocols for getting the applications onto the bus, so to speak. Compared with EAI, when using hub-and-spoke you, as an integration specialist, had to build translators for every new application that you wanted to connect to your integration solution. It can, however, be a time consuming task to move your solution from, for example, an EAI platform to an ESB platform, even though many ESB products, as mentioned earlier in this chapter, are built on previous products. But as integration becomes more and more important for enterprises, the move to an ESB platform can be beneficial. ESBs have also helped when working with SOA, since the platforms make the communication part of SOA easier. According to Patterns: Implementing an SOA Using an Enterprise Service Bus [9], The Enterprise Service Bus is to SOA as SOA is to e-business on demand. The authors also mention that enterprises today demand more quality of service than other techniques can offer. ESBs are described as an infrastructure that will handle all applications that follow the SOA principle, and whose main task is to transport and route data to the correct address.

Techniques included in the ESB family

Enterprise Service Buses rely more heavily on some techniques than others. One example is the XML standard [25], which is used greatly and extensively inside an ESB; it could be called one of the ESB's cornerstones.
The data, or messages, transferred through the ESB to fulfill your integration solution are typically sent as XML messages. This makes it very convenient to use XML transformations on the data if there is a need for data conversion in a message flow. Since ESBs rely on XML they also support XSLT [26] transformations out of the box, which makes XML transformations a breeze. Another benefit of using XML as the standard message protocol is that the integration specialist can easily use content based routing for messages, since the standard is open and easy to use. To send these messages inside the ESB, the systems rely on the so-called message-oriented middleware, or simply MOM, for the task. Chapter 2.4 sheds more light on what this is, but you could say that it is the software responsible for transferring the data safely. In Open Source ESBs in Action [17], Rademakers and Dirksen take up the different access types that an ESB supports, such as file transferring, Web services and Java Messaging Service. File support simply means that a file can be fetched from or delivered to a folder by the ESB engine. Java Messaging Service, JMS for short, is often used to transfer the data inside the ESB to different nodes in the message flow, but can also be used to connect to applications outside the ESB. JMS uses so-called queues or topics to deliver messages from one point to another; a more thorough explanation of queues and topics follows in chapter 2.4. The ESB also includes software for JDBC connections out of the box, which gives you the possibility to write or read data from a relational database. Other protocols that are supported are for example SMTP, POP3, or even FTP. As mentioned earlier, ESBs also include message routing techniques to make sure that a message is sent to the correct node in the flow. Typically they support a wide range of built-in routers, like fixed routers or content based routers. Some ESBs also support custom routers or other custom objects to intervene in the message flow. Rademakers and Dirksen also talk about message validation in the book [17], where messages are validated to ensure that a message does not contain errors or has not been routed to the wrong destination. That way the ESB can request a new message if it was corrupt, or notify an administrator that there is a potential problem. Last, the platforms nowadays also support hosting their own Web services. This gives software designers a more robust platform to build their critical business projects on, and they can simply let the ESB take care of any data that needs to be transferred.
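As a small, platform-neutral illustration of how such an XSLT transformation can be applied to an XML message, the following sketch uses the standard javax.xml.transform API; the message and stylesheet contents are hypothetical, and both platforms of course ship their own transformation services, so this only shows the underlying mechanism.

    import java.io.StringReader;
    import java.io.StringWriter;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    public class XsltTransformExample {
        // Applies an XSLT stylesheet to an XML message and returns the transformed message.
        public static String transform(String xmlMessage, String xsltStylesheet) throws Exception {
            Transformer transformer = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new StringReader(xsltStylesheet)));
            StringWriter result = new StringWriter();
            transformer.transform(new StreamSource(new StringReader(xmlMessage)), new StreamResult(result));
            return result.toString();
        }
    }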
Sonic ESB

Sonic ESB is developed by the company Progress [16] and is currently at version 7.6. This is the platform that is currently running on my thesis employer's servers. The system is shipped with a workbench built on top of Eclipse [5], a popular IDE for developing software. The workbench consists of easy to use tools for developing message flows, i.e. processes, and has plenty of documentation at hand. It also supports a wide variety of operating systems and contains more functionality out of the box.

To briefly describe how Sonic ESB is built, the platform uses so-called containers. The message flows that are developed for the platform run inside these containers. That way we can split our message flows over different containers, and if one container becomes unstable, the message flows that are not included in the unstable container will not be affected. A message flow is in itself built up from so-called processes and services, and it is these processes and services that run inside the containers. A message flow can consist of one or many ESB processes. These processes will typically contain nodes or endpoints that may transform a message or connect to a database, and that functionality is handled by the services. A typical service could be to convert an XML message with XSLT. Figure 2.2 shows an example of what an ESB process consists of. The services are built with Java classes, and you can use predefined services or build your own.

Figure 2.2. An ESB process runs inside a container and can contain many different services in a message flow.

The platform has a management program called the Sonic Management Console, where configurations can be made for the containers and other parts of the platform. Some of these configurations can also be made from the Sonic Workbench, which can be handy when developing ESB processes.
Mule ESB

Mule ESB is an open source platform from Mulesoft [11] and is built in a different way than, for example, the Sonic ESB platform. The software comes in two versions, an Enterprise Edition and a Community Edition, and I will focus on the Community Edition throughout this thesis. The differences can be read about on Mulesoft's homepage. It is this platform that has been chosen for comparison against the Sonic ESB platform. The platform is not shipped with a messaging system, and you will therefore have to locate and install one yourself. Since ActiveMQ [1] is used in Open Source ESBs in Action [17], we will look closer at this messaging system. Mule does not use containers as Sonic does; instead Mule is more lightweight and simply uses configuration files where you set up a message flow. These configuration files are then used together with an executable file to run the specified flow. One of Mule's cornerstones is, according to Open Source ESBs in Action [17], services. These services can be a connection to an application which will be integrated, or a component inside a message flow. These components can be built using regular and simple Java classes, so-called POJOs (Plain Old Java Objects). To get into a little more detail on how Mule is composed, we can break down the flow inside the ESB as follows; all of this is explained in greater detail in the book [17]. The application that will be integrated is connected through a channel into the ESB. A channel can be a folder where files are stored, or a JMS connection. This channel is then connected to a transport component, which takes care of the connection to the ESB and performs the transformations that are necessary. From the transport component we get into the service component, which consists of an inbound router, possible POJO components and an outbound router. The chain is then completed by another transport component and finally a channel for the receiving system. Mule is also delivered with the most common connection methods (channels), like database connectivity, file transferring and of course JMS connectivity. One of the reasons that the Mule platform was selected is that it is an open source project. Rademakers and Dirksen [17] discuss what open source could mean to an ESB project. They describe the so-called myth that open source programs are not up to the standard of corresponding commercial programs, and argue that this myth is false. The assumption, a false one according to the book, comes from the fact that open source projects tend to be developed in people's spare time, but today we know that this is not always the case. They do, however, mention that you would want an open source ESB with a reasonably active community, so that bugs that are found get fixed quickly.
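To give a feel for what such a POJO service component can look like, here is a rough, hypothetical example of a plain Java class of the kind Mule can host inside a message flow; the class and method names are made up, and no Mule-specific API is shown.

    public class AddressAssembler {
        // Assembles an address string from the user data carried in a message.
        // Mule can invoke a method like this on a POJO placed inside a service component.
        public String assemble(String firstName, String lastName) {
            return firstName + " " + lastName + ", Stockholm"; // the city is a placeholder
        }
    }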
2.4 Messaging system

Central to the Enterprise Service Bus and similar technologies, like the EAI systems explained earlier, is the messaging system, or the so-called message-oriented middleware. Its task is to make sure that the data sent between endpoints in a message flow is transferred correctly. Because the transport of data in a message flow tends to involve networks, the demands on the message-oriented middleware, or MOM, are somewhat higher than for just sending data between programs. In Enterprise Integration Patterns [7] you can, for example, read that networks are slow and can be unreliable compared to data being sent between applications on the same computer. This negative aspect leads us to the advantages of using a MOM. With a MOM you get access to asynchronous data transfer, which means that the MOM will take care of the data transfer and deliver the data or message when the receiver or the sender is ready. This way your programs do not have to wait for the receiver on the other end; you simply trust the MOM to deliver the data, and the MOM makes sure that the data is transferred sooner or later. This technique, if you may call it that, is called fire-and-forget [21].

The authors of Enterprise Integration Patterns [7] and Using Message-oriented Middleware for Reliable Web Services Messaging [21] explain that there are essentially two different ways a message can be sent through a MOM: Point-to-Point and Publish/Subscribe. In P2P a message is sent, as the name reveals, from one point or endpoint to another endpoint through a message queue. With Publish/Subscribe a message is published on a message topic from which multiple so-called subscribers can fetch the message. The bottom line is that there are queues and topics in a message-oriented middleware, where queues are one to one and topics can be viewed by multiple receivers or so-called subscribers. The connection to the MOM is handled by the ESB, and what you have to consider in your message flows or integration solutions is whether you will be needing a queue or a topic to fulfill the integration solution's purpose.
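As a minimal sketch of the point-to-point style described above, using the standard javax.jms API: the queue name is a placeholder, and any JMS-compliant MOM could stand behind the connection factory.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;

    public class QueueSendExample {
        // Sends one text message to a queue; the MOM keeps it until a consumer is ready (fire-and-forget).
        public static void send(ConnectionFactory factory) throws JMSException {
            Connection connection = factory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("testbench.incoming"); // placeholder queue name
                MessageProducer producer = session.createProducer(queue);
                producer.send(session.createTextMessage("<user><firstname>Anna</firstname></user>"));
            } finally {
                connection.close();
            }
        }
    }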
2.5 Reliable message transfers

Today, when integrations have become a larger part of companies and enterprises, downtime in these systems or business processes can lead to substantial costs. These integration solutions can also include business critical services or functions which have to work without any interruptions. As mentioned before in chapter 2.4, networks are unreliable [7] and there is always the risk of hardware failures or electrical problems. To withstand these types of problems, the integration solution needs a robust and safe transfer technique between the services that are integrated. This is where reliable message transfer enters the picture.

An important cornerstone of the Enterprise Service Bus platform regarding reliability is the message-oriented middleware system. Because it supports asynchronous message transfer, and in that case can send messages when the receiving part is available and ready, the management of messages becomes a critical point in the chain of reliability. This subject is touched upon in the article Using Message-oriented Middleware for Reliable Web Services Messaging [21], but the authors split it into three separate problems: Middleware endpoint-to-endpoint reliability, Application-to-middleware reliability and, last, Application-to-Application reliability. In the same article they discuss these problems and possible solutions to increase the reliability. One solution mentioned regarding Middleware endpoint-to-endpoint reliability is the ability to make the queues or endpoints in the message flows persistent. That way, messages are stored to disk when reaching a queue, to avoid loss of data if the platform becomes unstable. Between these endpoints or queues the messages then need to be transferred securely, for example with Java Message Service [8], which is also commonly used in MOMs. Persistent queues are also discussed in Enterprise Integration Patterns [7]; typically the messages are stored in a database on disk and not directly on the file system at hand. Problems that could arise with this type of configuration are that the speed of the platform may decrease, or that messages pile up and take up a great deal of space. In combination with store-and-forward, the guarantee for message transfer can be increased even more. Another interesting aspect regarding Application-to-Application reliability is that you can view the flow from one application to another as one transaction. If something goes wrong with the message transfer that is not in line with the rules that have been set up, both the sending and the receiving part are rolled back to their previous state. The transaction could also be split up so that the sender and the receiver belong to two different transactions. Something to keep in mind is that the messaging system can only guarantee reliable delivery inside its own system, to the endpoints. After that point, the applications or services connected to the endpoints have to do the rest.

However, there are more parts to reliable message transfer than just the transfer itself. The authors of Data provenance in SOA: security, reliability, and integrity [22] point out that you can not always attack the reliability problem the same way you do with traditional software. In an integration solution the data may have been sent through multiple nodes or endpoints, and these do not even have to be located in the same system or local network. It is enough for one of these nodes to become compromised for the whole chain in the message flow to become compromised. In traditional software, only the endpoints would need to be checked, since the rest is inside the software itself. There are also other standards, for example for Web services, that can be used in an ESB to increase reliability, such as WS-ReliableMessaging and WS-Reliability, which Are Web Services Finally Ready to Deliver? [10] discusses briefly. In that article they also mention that Web services have begun to be used more and more in business critical projects, and that protocols such as HTTP do not have any built-in support to increase the reliability of data transfer.
Most ESBs support hosting of these Web services, since they are used a lot in enterprises today and can therefore use the platform's functionality. Both Sonic ESB and Mule ESB have this Web service hosting support.

When it comes to transactions and their contribution to reliable data transfer, there can be difficulties if the message flow chain is rather long; in the worst case the nodes lie on completely different networks. The author of Web Services and Business Transactions [15] mentions these complex message flows and calls them business processes. The problem with these complex business processes and transactions arises if you are going to follow the ACID model for a transaction. ACID stands for Atomicity, Consistency, Isolation and Durability and is a well known set of rules for handling database transactions. If the message flow chain is rather long it can be difficult to lock nodes in a transaction, and the possibility of deadlocks becomes apparent if networks included in the chain are distant from each other. The article is written to present the authors' new framework, but it also shows us what kind of problems transactions can cause if they are implemented wrongly, in a way that leads to deadlocks or long waiting times. The transactions, however, have to work as if they were used against a simple database: either the transaction is committed and all changes have been performed, or the transaction is rolled back and no changes have been made. The principles of ACID and transactions are also discussed in Open Source ESBs in Action [17], where an example of how to set up transactions with Mule ESB is shown. In Mule you would configure a transaction only on the inbound endpoint, and the message will not be removed from the queue before the transaction has been committed.

Another important part of reliable messaging is how well the system handles possible errors when they appear. Rademakers and Dirksen [17] mention the so-called dead letter queue or invalid message queue. The usual approach is that when a message can not be delivered, or some other error situation occurs, the message is sent to a dead letter queue. However, you have to make sure that, as an administrator or maintenance worker, you check these queues regularly to be notified of the problems. These queues have different names depending on the platform they are used on.
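To make the transactional behaviour described above concrete, here is a minimal, platform-neutral sketch using a transacted JMS session; the queue name is a placeholder and the processing step stands in for whatever the receiving application does. On commit the message is removed from the queue, on rollback the MOM redelivers it.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Session;

    public class TransactedConsumeExample {
        // Receives one message inside a local JMS transaction.
        public static void consumeOne(ConnectionFactory factory) throws JMSException {
            Connection connection = factory.createConnection();
            try {
                connection.start();
                Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
                MessageConsumer consumer = session.createConsumer(session.createQueue("testbench.incoming"));
                Message message = consumer.receive(5000); // wait up to five seconds
                try {
                    process(message);   // hand the message over to the receiving application
                    session.commit();   // only now is the message removed from the queue
                } catch (Exception e) {
                    session.rollback(); // the message stays on the queue and is redelivered later
                }
            } finally {
                connection.close();
            }
        }

        private static void process(Message message) {
            // application specific work; throwing an exception here triggers the rollback above
        }
    }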
Chapter 3

Study of platform functionality

Before we can implement possible enhancements to keep a high level of reliability in message transfer, we first have to investigate both platforms to see what kind of functionality they provide. Earlier, in the background chapter, we talked about possible solutions and functionality for retaining high reliability throughout the platform. It remains to be seen whether these functions are available on both the Sonic ESB and the Mule ESB platform. The study of the two platforms was performed partly by reading the documentation at hand, provided by the companies behind the two platforms, and partly by implementing the test system to get a quick look at whether a function could be of any interest.

3.1 Persistent message queues

The most common functionality, and a functionality that is repeatedly mentioned in the books on the subject, is persistent message queues. It is used on the platform at hand to be able to handle platform crashes, as well as to make sure that a message is safely delivered to its endpoint. The availability of this function is tied to what kind of message system is available to the respective platform. We mentioned earlier, in the background chapter, that Mule ESB is loosely coupled from the message-oriented middleware and that you can easily choose which message software you want to handle the message transfer. In Sonic's case, the message software that gets delivered is more connected to and integrated into the ESB. The documentation for the Sonic platform mentions persistent message queues and lets us know that the functionality at least is available. By default, this functionality is turned off, which is common practice in most message handling systems. But the functionality is there, and we can establish that it is a question of configuring the message handling system and message flows rather than something that needs to be implemented by hand. When we get to Mule ESB, we have to look at the message system we will be using for our tests. Mule's Enterprise edition is shipped with IBM WebSphere; we, however, are using Apache ActiveMQ [1], which is used in the book Open Source ESBs in Action [17].
Figure 3.1. The ESB can use a database to store the messages to disk.

We also tested Sun's OpenMQ [13] as a message system, and both of these messaging systems have functionality for persistent message queues. Here too it is a matter of configuration, and nothing needs to be implemented by hand. A quick note, however, is that you can also configure individual message flows to use persistent queues, which is also the case for Sonic ESB. The ability to use an external database is available; however, the message systems for both platforms come with an internal database that stores the messages on the file system.
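As a rough illustration of what persistence as a configuration option means at the JMS level, independent of either platform's administration tooling, a producer can mark messages as persistent so that the broker writes them to its store before acknowledging the send; the queue name is a placeholder, and the broker is assumed to have a persistent store enabled.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.DeliveryMode;
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Session;

    public class PersistentSendExample {
        // Marks outgoing messages as persistent so that a broker restart does not lose them.
        public static void sendDurably(ConnectionFactory factory) throws JMSException {
            Connection connection = factory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(session.createQueue("testbench.incoming"));
                producer.setDeliveryMode(DeliveryMode.PERSISTENT); // stored to disk by the broker
                producer.send(session.createTextMessage("<user><firstname>Anna</firstname></user>"));
            } finally {
                connection.close();
            }
        }
    }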
3.2 Transactions

When it comes to transactions, which we also mentioned earlier in chapter 2.5, things seem to be trickier than for persistent queues. The documentation for the two platforms does not say much about the ability to use transactions for our message flows. By using some sort of transaction, our hope is that possible problems that occur when you have multiple receivers/endpoints can be avoided. In other words, if there is an error at one of the endpoints you may not want the other endpoints to still receive and process the message; instead you would want to roll back and resend the message. The need for transactions may not be as large in our tests where we only have a point-to-point message flow. The test results will hopefully show us in what way the transactions affect the message flows. Mule ESB seems to support two types of transactions that could be interesting: normal transactions for the SQL and JMS connectors, but also so-called XA transactions [24]. The difference between them is that SQL transactions and JMS transactions can only be used for database endpoints and JMS endpoints respectively. XA transactions, on the other hand, are a wider transaction protocol and in theory could include both database and JMS endpoints in one transaction. In that case you could bridge endpoints using different access solutions or protocols and have transaction capabilities despite sending messages to multiple endpoints. Sonic ESB, on the other hand, seems to support XA transactions only for JMS queues, but it is not mentioned much in the documentation. There is, however, a sample program that uses XA transactions shipped with the platform, and we will see if we can use this in some way.

3.3 Error handling

Error handling is also central to reliable message transfer, because a message can never be allowed to disappear unnoticed from the system. Usually something called a dead message queue or similar is used for messages that could not be delivered or when other errors occur. Both the Sonic ESB and the Mule ESB platform seem to have very good support for error handling, but it is implemented differently on the two platforms. In Sonic it is more or less a setting where you choose where you want to deliver messages that could not be sent to their destination when something goes wrong; the setting applies to the whole process running on the Sonic platform. For Mule, on the other hand, you have to configure more settings if you want error handling for both message flows and connectors. You can also have individual dead letter queues for each service or connector in a message flow.

3.4 Other functionality

Other functions than the above mentioned regarding reliability were not found for either the Mule or the Sonic platform. However, another setup possibility that increases reliability, but can not really be regarded as a function or something that needs to be implemented, is that the message flows can run in separate containers, or in Mule's case, in multiple configuration files. In that way a container or a program can crash without taking the whole platform with it. But the containers will still use the same messaging system, and if the messaging system is unstable or crashes, it will probably take the whole platform down. Another function that briefly touches the area is whether the systems have functionality for clustering of the platforms. By using clustering, one could potentially avoid losing the system if just one part of the cluster goes down. In this report, however, we concentrate only on the software part and not on the hardware, which would be a factor when using clustering.
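As a sketch of how such a dead letter queue can be watched from the outside, the following uses ActiveMQ, the message system chosen for the Mule test bench; ActiveMQ places undeliverable messages on a queue named ActiveMQ.DLQ by default, while the broker URL and the handling of found messages are only illustrative.

    import java.util.Enumeration;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.QueueBrowser;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class DeadLetterMonitor {
        // Lists the messages currently parked on ActiveMQ's default dead letter queue,
        // so that failed deliveries do not go unnoticed.
        public static void main(String[] args) throws JMSException {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            try {
                connection.start();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                QueueBrowser browser = session.createBrowser(session.createQueue("ActiveMQ.DLQ"));
                Enumeration<?> messages = browser.getEnumeration();
                while (messages.hasMoreElements()) {
                    Message failed = (Message) messages.nextElement();
                    System.out.println("Undeliverable message: " + failed.getJMSMessageID());
                    // an administrator could be notified here, or the message redelivered by hand
                }
            } finally {
                connection.close();
            }
        }
    }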
Chapter 4

Implementation

To be able to test the two platforms in a real world scenario, two similar test benches were built from scratch, one for each platform. How this was done, and what difficulties and choices had to be dealt with during development, is presented in this chapter. Some light will also be shed on the design differences between the two platforms, to give a clearer picture of how Sonic ESB and Mule ESB differ in implementation detail.

4.1 Scenarios

As mentioned, a number of scenarios need to be tested to be able to answer the questions presented in the initial chapter. That way we can see how the two platforms perform in different situations and compare the results. The following five scenarios were chosen for this test, and they are connected to the questions asked in the introduction.

Receiver disconnected scenario
Receiver temporary disconnected scenario
Platform or message system crash scenario
Receiving part disconnected in multi receiver message flow
Persistent delivery performance hit

In the first scenario a message is sent from a sending part through a message flow running on an ESB platform. At the same time, the receiving part is disconnected from that same message flow. The test will show us how the error handling is done, whether messages are lost from the system if they can not be delivered, and how you get notified by the ESB platform when a connection is dropped. For the second scenario, the disconnected receiver is reconnected after a short while to see if the platform resumes its work and if messages that were not delivered get re-sent.
The third scenario tests the platform's ability to handle a crash. As we pointed out in the previous chapter, the platforms support persistent queues, and this test will shed light on how that works. The platform is taken down while messages are being sent from one point to another in the message flow. When the platforms are restarted it will be interesting to see whether the messages are still left in the ESB system and whether the tasks are resumed. So far the scenarios have only included simple message flows, from one sender to one receiver. The fourth scenario includes message flows with multiple receivers. In that kind of message flow you often want all or none of the receivers to get a message. If an error occurs at one of the endpoints, the preferred behaviour would be that the message is thrown away for the other receivers and then re-sent to all of them. Transactions, as we mentioned in chapter 3.2, could have a large impact on this scenario. Last, we have a performance scenario where the mentioned persistent queues are tested. Since messages delivered to a persistent queue will typically be written to disk or to a database server, it is interesting to see how much of a performance hit this has on the system.

4.2 Message flows

To accommodate the scenarios we need a number of message flows to test with. Since there are many different access solutions that can be used with an ESB, the decision was taken that the following three access solutions would be implemented: file transfer, database transfer and Web services using SOAP. An application using a JMS connection was decided against within the scope of this thesis, but since most message systems use JMS internally, JMS will show up in our testing either way. In order to make the message flows more interesting, the ESB platform will transform the messages from one format to another. Typically, the message flow for our multiple receiver scenario needs transformations to let one message be delivered to multiple different receivers. The message flows that were implemented for the scenarios above follow here.

Database as sending access solution

The database will store a table called userlist including the columns first name, last name, social security number and id. When the message flow has a file endpoint as the receiving part, the first name and the last name will be fetched from the database. The data will then be transformed into an XML file which will be dropped in a folder on a USB memory device. The message flow with a database endpoint as the receiving part will also fetch the first name and the last name from the sending database. The data will then be transformed into an address by the ESB system and placed into an account table on the receiving database server.
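To make the test data concrete, the sending side could be prepared along the following lines; the exact column names, types and the sample row are assumptions, since the table is only described informally above, and the in-memory H2 URL is used purely for illustration.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class TestDataSetup {
        // Creates a userlist table like the one described above and inserts one sample user.
        public static void main(String[] args) throws SQLException {
            Connection conn = DriverManager.getConnection("jdbc:h2:mem:testbench", "sa", "");
            try {
                Statement create = conn.createStatement();
                create.execute("CREATE TABLE userlist (id INT PRIMARY KEY, firstname VARCHAR(50), "
                        + "lastname VARCHAR(50), ssn VARCHAR(13))");
                PreparedStatement insert = conn.prepareStatement(
                        "INSERT INTO userlist (id, firstname, lastname, ssn) VALUES (?, ?, ?, ?)");
                insert.setInt(1, 1);
                insert.setString(2, "Anna");
                insert.setString(3, "Andersson");
                insert.setString(4, "19800101-0000");
                insert.executeUpdate();
            } finally {
                conn.close();
            }
        }
    }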
File as sending access solution

In the folder which the ESB system will be polling for files, a comma separated value (CSV) file will be placed. The file will include user data similar to the database table above. When a file endpoint is also used as the receiving part, the CSV file will be transformed by the ESB system into an XML file (a small sketch of this kind of transformation is given at the end of this section). The file will be dropped in a folder on a USB memory device. Regarding the flow with a database as the receiving part, the data inside the CSV file will be inserted into the same userlist table that was presented for the previous message flow. The ESB will then typically need to transform the CSV format into a format suitable for inserting the data into the database server.

Web service access solution

Since, in the case of a Web service, the sender and the receiver typically are the same application, the Web service message flow will be implemented differently compared to the other flows. The message flow will host a Web service on the ESB platforms and, with the help of the tools that both platforms provide, a simple WSDL will be presented for Web service calls. The Web service will take a name as input, and the platform will then ask a database server for the address corresponding to that name. The address is delivered back to the caller if all went well.
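The following is a sketch of the CSV to XML conversion performed in the file-based flows; the field order and element names are assumptions, and in the actual test benches the platforms' built-in transformers do this work rather than hand-written code.

    public class CsvToXmlExample {
        // Turns one CSV line of user data, e.g. "1,Anna,Andersson,19800101-0000",
        // into the kind of small XML document dropped on the receiving side.
        public static String toXml(String csvLine) {
            String[] fields = csvLine.split(",");
            return "<user>"
                    + "<id>" + fields[0] + "</id>"
                    + "<firstname>" + fields[1] + "</firstname>"
                    + "<lastname>" + fields[2] + "</lastname>"
                    + "<ssn>" + fields[3] + "</ssn>"
                    + "</user>";
        }
    }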
4.3 Environment setup

The scenarios described above for testing the platforms demanded more software than the two ESBs could provide. We needed at least one database server to be able to test message flows from and towards a database source. The sending part and the receiving part could have been handled by the same database server, but the choice fell on using two different database servers. The recommendation was to use the H2 database [6] together with the Sonic platform. This database software is somewhat of a lightweight product compared to some of the bigger database servers. As database server number two, the MySQL database server was chosen. The MySQL database is commonly known and, since the evaluation of the ESB platforms is not about database software, it seemed like a good decision to choose a database that was familiar. The installation of the H2 database was straightforward, and there was no need for advanced configuration or similar. For the MySQL database, a software package called WampServer [23] was used instead of a clean installation of the MySQL server. This software includes a preconfigured web server, phpMyAdmin and a MySQL server, which minimizes the configuration burden. The MySQL version included in WampServer 2.0i was used.

When it comes to the connector drivers for the database servers, the software that the ESB platforms use to connect to the database, they have to be of the same Java version that the ESBs support. For the H2 database, the same version of the connector that was installed could be used for the Mule ESB. However, the last build before 1.1 had to be used for the Sonic platform because of the above mentioned problems. Regarding the MySQL connector, both platforms could use the same version of the connector to connect to the database. Besides the above mentioned database servers, there was also a need for an operating system to run the two platforms on. There was a choice to run the testing on physically different hardware to be able to simulate network problems, but the choice fell on testing on one computer running Windows 7. This could have been a problem, or a risk, since this operating system was relatively new and the ESB software may not have been fully tested on it. It may have been preferable to run the platforms on a more server focused operating system. Last but not least, SoapUI [20] was used to connect to the Web service message flow. This simplifies the testing, and by using SoapUI there was no need to build a testing client for this simple purpose.

4.4 Sonic ESB

As mentioned in the background chapter, the latest version of Sonic ESB is version 7.6, which was also the version supplied by Mogul AB. At the time when the work on the thesis began, an earlier version of the platform was used in operation, but the choice was made to use the latest version for this thesis.

4.4.1 System installation

The Sonic ESB platform is installed through a traditional installation program. During the installation you get the choice to use the included JRE or the system installed JRE. By recommendation the included JRE was chosen, since it had been well tested to work with the Sonic platform. The included version is 1.4, which is a bit old compared to the latest version that can be retrieved from Sun. During the work a patch for the Sonic platform also appeared, which took the platform beyond version 7.6. Some configuration had to be done after the installation to make sure that the workbench and the management console could connect to the domain manager. In figures 4.1 and 4.2 you can see an overview of the workbench and the management console.

4.4.2 Database to database message flow

First we need to make sure that the message flow polls the database server and fetches the data at a given time frequency. Since we are doing this by using built-in functionality from the platform, we use the DBService module. The DBService module is a service where you can connect to databases. This service is configured from the Sonic Management Console and not from the Workbench. The database connections that can be set up from the Workbench are only for testing parts of the message flow, so-called unit testing.
35 4.4. SONIC ESB Figure 4.1. The Sonic Workbench on top of the Eclipse IDE. Here you have the tools palette to the right and the current overview of a process in the center. To the left the containers containing the message flows and services can be seen. parts of the message flow, so-called unit testing. Our flow is going to use both the H2 database and the MySQL database and we therefore need to set up two services of the type DBService. These services will then be running inside a container, which we mentioned in the background chapter, and will be available for our ESB processes to use. One thing that still remains however is to make sure that the jar file, containing the drivers for the H2 and MySQL connectors, are available for the platform. If you do not make them available, the containers, where the services will be placed, will not start and error messages will be displayed in the log files. By opening up the configuration window for our ESB container where our services will be running inside, we add the two jar files under the tab resources. As previously mentioned version , the last version before the 1.1 build, for the H2 database is needed because of the use of Java version 1.4 for the Sonic platform. Our two 25
Figure 4.2. The Sonic Management Console where you can configure the ESB platform.
37 4.4. SONIC ESB Figure 4.3. The ESB platform collects data from a database, transforms it and routes the data to a receiving database server. new services are now named TestBench.H2Service and TestBench.MySQLService and are placed inside our container dev_esbtest. Figure 4.4. Setting up a SQL query in the Sonic Workbench. The variables are mapped by the developing tool. To get back to the Workbench where our message flow is being developed, we create two new database operations called db-to-db-getdata.esbdb and db-todbinsertdata.esbdb. In these files you can enter the SQL query that the operation will execute, as you can see in figure 4.4. The data that the operation fetches from the database is converted automatically to the XML format which makes the use of XSLT and XPath convenient. The message flow needs two transformations between fetching the data from the database and storing the data, in the form of an address, at the receiving database server. The first transformation will split up the XML message since we fetch multiple database rows from the database. In that way each user from our userlist table in our H2 database will end up in a separate XML message. Thereafter, our transformation which assembles the data to an address will start processing. Finally our message will be sent to our Test- Bench.MySQLService and placed inside the database. An overview of the message 27
38 CHAPTER 4. IMPLEMENTATION Figure 4.5. An overview of the database to database message flow. flow can be seen in figure 4.5. Both transformations are using XSLT with the use of XPath expressions. With the expressions we set up where the message should be split and what data from the original XML message part is going to be included. There are visual tools to aid you if you do not feel comfortable to write code for an XSLT file yourself, as can be seen in figure 4.6. In the figure of the message flow there is however a database operation missing, where the data should be fetched into our flow. Originally there was such a database operation but it was then discovered that if you want to poll against a database server, which we do, it had to be configured inside the Sonic Management Console and not in an ESB process. Inside the SMC where we created our two database services you can choose if the service should execute a SQL query and how often the query file should be executed. There is also the the possibility to choose a validation query which apparently is a way for the Sonic platform to test if a database connection is working. The query to be used has to return at least a row for it to work however. When it comes to SQL queries you can also add multiple queries on the same database service which could fetch the data and send it to different queues or topics, so-called endpoints. For our flow, however, we only need one query and we only 28
39 4.4. SONIC ESB Figure 4.6. Visual tool to aid creating a XSLT transformation file. You can connect variables by drawing lines between the different variables or by altering the code directly. need to set up so that the service places the fetched message inside a queue for our ESB process to pickup. When our ESB process picks up the message from its entry endpoint, where our service has sent it to, the process starts and finally our message gets inserted into our MySQL database. When a process has finished with a message the Exit Endpoint is called and we could transfer the message to another process if we want. In our case however we insert the data into the database table Database to file message flow Much from our earlier message flow in chapter can be reused for this flow, except we have to replace our last database operation with a file drop operation. We also have to replace one of the XML transformations to match the data output format. Again, we use XSLT and XPath expressions to transform our message to the correct XML format. The XML splitting part is still needed since we want a separate file for each user from our userlist table. The file service for dropping files in a folder is also something that is built into the Sonic platform and we can drag and drop a File Drop Service from our graphical developer palette. All that the file drop service needs is a configuration file with the extension *.drop for it to function properly. In this configuration file, the folder where the file is dropped is set up as is the file name for the output file. You can also set up a verification message which could be sent to a queue or a topic. 29
40 CHAPTER 4. IMPLEMENTATION Figure 4.7. The ESB platform collects data from a database, transforms it and places the data in a new file. One thing that has not been mentioned is that the Workbench has excellent tools to debug your message flows. You have the ability to set up listeners for your queues and topics and you can also listen on processes as well. With this help you can spot how far a message is being transferred before an error occurs, see figure 4.8. Figure 4.8. Message listening capabilities for the Sonic Workbench. Listeners can be added on a service or queue in a process. Any received message is displayed under Received Messages. We have also not mentioned the fact that error handling is very straightforward in the Sonic platform. All you have to do is make sure you set up the endpoints for the so-called Fault endpoint and Rejected Message Endpoint for each ESB process. When a fault occurs or a message is rejected, it should be delivered to the configured queue or topic Database to multiple receivers message flow The flow with multiple receivers is created easily thanks to the development tools for the Sonic platform. The only thing that needs to be done is using a so-called Fanout component, which will duplicate our message and send it through each fan 30
41 4.4. SONIC ESB Figure 4.9. The ESB platform collects data from a database, transforms it and routes the data to multiple receivers. which is connected to a service or process. In that way, we only have to move our two previous flows, with our services inside, to our new message flow and connect them to the Fanout as can be seen in figure File to database message flow For starters, a completely new file polling service was built with the help of tutorials shipped with the platform. But as soon as the included file polling service was discovered it was used instead. How to build your own services is mentioned further down in this chapter since we need to build one for our CSV to XML transformation later on. Back to the message flow, in the same way as we created the database message flows we here use the Sonic Management Console to create a new service from the built-in File Service. The service TestBench.FileService is created and in a similar way as previously for the file dropping service, it needs a configuration file to work. The last thing to do is making sure that the Exit Endpoint for our service is connected to the Entry Endpoint for our ESB process containing the message flow. In our case it is the queue TestBench.FilePickup. The service is then placed inside our dev_esbtest container and the files are fetched from the file system and gets delivered to our specified queue. However, the messages that gets sent to our entry endpoint for our ESB process is on the format of a CSV file and we need to make sure it has the format of an XML message instead. In the same way as for the Mule platform, handmade services are written in Java and when you choose to create a new service you get a template which helps you get into the coding part. The Java service class implements the interface XQServiceEx which contains a number of methods like init(), service(), start(), stop() to mention a few. The important method for us is the service method which is the one responsible for 31
42 CHAPTER 4. IMPLEMENTATION Figure An overview of the multi-receiving message flow. Both message flows are combined with a Fanout component. Figure A file is collected and transformed by the ESB platform and the data is then routed to a receiving database server. receiving and sending messages for our transformation class. Messages are delivered in the form of a XQEnvelope which contains XQMessages. A message that is being sent can also have multiple parts with different data, so you have to locate the correct XQPart of a message, which in our case includes the CSV data. The W3C dom package is used to create a new XML part which switches place with the CSV data in our XQPart. The XQPart is then reattached to the XQMessage and the whole Envelope is being placed in the outbox. The service is then uploaded to any 32
43 4.4. SONIC ESB container to be able to be used by an ESB process. We place it in our dev_esbtest container. Figure An overview of the file to database message flow. After that we only need to place our new CSV to XML service inside our message flow and place our XML splitter after it. Last in the message flow we have our database operation that inserts the users from the original CSV file into our H2 database. Since we have already configured the H2 database service we can use it without any problems. The final overview of the message flow can be seen in figure One thing that also needs to be mentioned is that the flows, or the ESB processes, needs to be uploaded to a container File to file message flow This message flow is easy to set up since we have completed our previous message flows. We only need to replace our last database operation, from the file to database flow, with the built-in File Drop Service and remove the transformation that split our XML message into multiple parts. 33
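Since the hand-written CSV to XML service for the file to database flow is only described in prose above, a rough sketch of its transformation step is given below. Only the conversion itself is shown, using the standard W3C DOM API; the Sonic-specific plumbing (implementing XQServiceEx, pulling the CSV text out of the XQPart and putting the XML back before the envelope is placed in the outbox) is left out. The class name, element names and CSV column order are illustrative assumptions, and the code is written against a current JDK for readability, whereas the real service had to stay within the bundled 1.4 JRE mentioned earlier.

import java.io.StringWriter;

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;

import org.w3c.dom.Document;
import org.w3c.dom.Element;

// Illustrative helper for the CSV to XML conversion; the Sonic XQServiceEx
// plumbing around it is omitted.
public class CsvToXmlConverter {

    // Converts CSV lines such as "firstname,lastname,address" into a simple
    // <userlist><user>...</user></userlist> document. Element names and the
    // column order are assumptions, not taken from the actual implementation.
    public String convert(String csv) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element root = doc.createElement("userlist");
        doc.appendChild(root);

        for (String line : csv.split("\\r?\\n")) {
            String[] fields = line.split(",");
            if (fields.length < 2) {
                continue; // skip blank or malformed lines
            }
            Element user = doc.createElement("user");
            appendField(doc, user, "firstname", fields[0]);
            appendField(doc, user, "lastname", fields[1]);
            if (fields.length > 2) {
                appendField(doc, user, "address", fields[2]);
            }
            root.appendChild(user);
        }
        return serialize(doc);
    }

    private void appendField(Document doc, Element parent, String name, String value) {
        Element field = doc.createElement(name);
        field.setTextContent(value.trim());
        parent.appendChild(field);
    }

    private String serialize(Document doc) throws Exception {
        StringWriter out = new StringWriter();
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.INDENT, "yes");
        t.transform(new DOMSource(doc), new StreamResult(out));
        return out.toString();
    }
}

In the actual service, the returned XML string replaces the CSV content of the XQPart, which is then reattached to the XQMessage before the envelope is placed in the outbox, as described above.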
44 CHAPTER 4. IMPLEMENTATION Figure A file is collected and transformed by the ESB platform and then placed in a new folder File to multiple receivers message flow Figure A file is collected and transformed by the ESB platform and then routed to two different receivers. Nothing new shows up here as it is the same procedure for this message flow as it was for the database to multiple receivers in chapter We only need to drag and drop a Fanout component from our developing tool and connect our previous flows to this Fanout Web service message flow Hosting of a Web service, or in this case expose an ESB process, is easily done with the Sonic platform. There is a setting for the ESB process called Expose as Web Service which makes the ESB process accessible from the outside like a typical Web service. In the same way you can also generate a WSDL file which will use the ESB process interface to create the needed code for the WSDL file. 34
45 4.4. SONIC ESB Figure A Web service is hosted by the ESB platform where a client can access it from the outside. For the message flow, or the ESB process itself, the Sonic platform provides a service called Unwrap SOAP. Since Web services uses SOAP it is needed to strip that information from the input data. As the figure 4.16 shows, the database operation is then used to fetch the address for the name that has been sent in as input data. However we need to make sure that the correct part of the data is sent to the database. For each database operation there is a tool called Request and Response Mapping. These are visual tools where you can specify what part of a message that should be used as input data for the database operation. On this part we can also apply an XPath expression to extract the text we need for our operation. For the Response Mapping we choose to replace the previous message completely with our new message containing the address data that have been fetched from the database. Finally we have a Web service in our message flow which will send the data back to the caller. In the same way as for the database operation there is also a mapping tool where we for this message flow can remove database specific parts and just send the address string back to the caller. More than the above is not needed to get a fully working Web service from an ESB process, but it should be mentioned that the work did not go as straightforward as described above Persistent queue setup Persistency works in the following way for the Sonic platform. You have to make all queues or topics in a message flow chain persistent by changing a configuration variable to At least once or Exactly once. The default setup is Best effort which is the same as non persistent. The settings refers to how the platform should handle message transferring when sending a message to the next in line queue or topic. In other words, if the last service in a process uses a topic which has a non persistent configuration, messages sent from this service to the Exit Endpoint will be non persistent. The queue used for the Exit Endpoint may have a persistent 35
46 CHAPTER 4. IMPLEMENTATION Figure An overview of the Web service message flow. configuration but the messages lying in that queue will be of non persistent type. The setting which you can choose for an ESB process in the Workbench is not for the entire process, and for the first message to be persistent the Entry Endpoint has to be persistent. If you use the setting Exactly once, all queues or topics in the chain have to have the same setting or messages will be sent to the dead letter queue or so-called Rejected Message Endpoint. 4.5 Mule ESB Mule gets shipped in two versions, as we described in the background chapter 2.3.4, a Community Edition and an Enterprise Edition and as also mentioned the Community Edition was chosen for the task. During the progress of the work different versions of this edition was tested since there were problems regarding the transactions. To exclude that a bug was causing the problem, different versions of the platform was installed but the problems did not get solved by changing version. The Enterprise Edition, which could be freely tested for thirty days, was also tested but the version that was used during the test was version of the Community Edition. 36
47 4.5. MULE ESB The Community Edition does not include a message system, like the Sonic platform does. The choice was taken to use Apache ActiveMQ as the messaging software, since this version was used in the literature Open Source ESBs in Action [17] that was studied for this thesis. At the time of writing this the current stable version is which was the version used in our tests. However we should point out that OpenMQ 4.3 [13] was also briefly tested before ActiveMQ was chosen System installation The installation of the Mule ESB platform is also very straightforward. Mule is shipped in the way of an archived zip file and the installation is simply to extract the files in the package to a suitable folder. After that some environment variables were configured to let you start the Mule platform regardless where you were in the folder structure at the command prompt. The Apache ActiveMQ is installed in a similar way because it also gets shipped as an archived zip file. No further configurations were needed for the ActiveMQ software but we had to make sure, for the message flows, that the Mule ESB could connect to the ActiveMQ software Database to database message flow Figure The ESB platform collects data from a database, transforms it and routes the data to a receiving database server. First the connectors, which this message flow is using, have to be set up. In our case the two database connectors and the ActiveMQ connector. By the help of Open Source ESBs In Action [17] we get a simple example on how a such configuration could look like. The configuration is done inside XML files and uses Spring to be able to set up all settings in a smooth and easy way. Early on we could see that if a connector was shut down in our message flow and then restarted it did not reconnect to the platform. This is apparently because you need something called a retry policy for your connectors. Retry policy s are not built-in for the Community Edition of Mule, you have to create one yourself. In 37
48 CHAPTER 4. IMPLEMENTATION this early stage we have to make our own Java class, which we call InfiniteRetryPolicyTemplate. The class extends AbstractPolicyTemplate and overrides the method createretryinstance which returns yet another class. This class, which also has to be created, needs to implement the interface RetryPolicy. The task for this class is only to run a thread sleep for a short amount of time. When the thread reactivates, the policyok method is called which starts the reconnection phase for the Mule platform on the specific connector. This retry policy class can then be used by all our connectors, and our final configuration for the ActiveMQ part can be seen in figure Figure The retry policy is added to the connector configuration in the form of a simple spring property. Figure A new data source which can be used by the database connectors. For our database connectors we have to set up a spring bean which handles the connection towards the database, see figure Our JDBC connector then uses this data source to execute the SQL queries which we have configured in our JDBC connector. We now have the building blocks needed to create our message flow. As described earlier in the background chapter, a Mule service consists in the simplest case of an inbound and an outbound endpoint. The outbound endpoint also has a router to determine where a message should be routed to, if there for example are multiple receivers. Since we want to use JMS queues for later testing with persistent queues, we split up a Mule service into two different services. The first service will use our polling JDBC inbound endpoint and send the message through a pass-through-router onto a JMS outbound endpoint using the queue db.storage for our ActiveMQ connector. Our other service then uses a JMS inbound endpoint and fetches the message from the db.storage and sends it through a pass-through-router onto our JDBC outbound endpoint which uses the insert SQL query defined earlier. In the figure 4.20 we can see how our second service looks like. However we also need to transform the message someway along the road to get our address from the fetched data. In Sonic s case we used XSLT but for the 38
Figure 4.20. The second part of the service component for the message flow. This example shows a database writer service.

Mule platform we instead use so-called Plain Old Java Objects, POJOs for short, to do the transformation. First we have to declare that we are going to use a custom transformer by adding the tag <custom-transformer> to our configuration file. Thereafter we can add the custom transformer to our JDBC inbound endpoint by placing the tag <transformer> and referring back to our custom transformer tag with the keyword ref.

Creating transformers in Java is easy once you have done it the first time. All that is needed is a class that extends AbstractTransformer, which is described in Open Source ESBs in Action [17] as well as in the online documentation. You then override the method doTransform, which returns an Object and takes an Object and a String as parameters. The method throws a TransformerException for error handling. Inside the method you can transform your Object message to the requested format, in our case a simple string. This is done by typecasting the Object to a Map, because the data that has been fetched from the database is a Map container. After that you can fetch the first name and the last name, attach any string and return the result as a string.

An early observation is that Mule throws away messages that cannot be delivered, for example if a connection is down. You therefore have to add some sort of error handling or exit strategy that takes care of messages that have been rejected. After browsing through the online documentation and the references that concern Mule, two exception strategies were discovered: one for the connectors and one for the entire service. With the help of figure 4.21 you can see where the exception strategies are placed and where the messages are meant to be sent if an exception occurs.

A warning regarding transformations is that when you use custom transformers on an endpoint, all automatically performed transformations disappear, such as Object to JMS or JMS to Object. We therefore have to be careful when we add our own transformations. In this flow, however, we do not have to think about it since we are not using custom transformations on our JMS connections.

One last thing to keep in mind is that we have to place the drivers for the database connectors in a suitable place and include them in the Eclipse building
50 CHAPTER 4. IMPLEMENTATION Figure An exception strategy which sends the messages to the dead letter queue specified. part. They will then be found when the configuration file is executed by Mule Database to file message flow Figure The ESB platform collects data from a database, transforms it and places the data in a new file. Large parts of the message flow above can be reused for the database to file message flow but with some minor adjustments. The largest adjustment is that for our second service we need to replace the JDBC outbound endpoint with a file outbound endpoint. We also need to set up output patterns so that all files do not get the same name. Still though, we use a pass-through-router since the message is only going to be delivered to one receiver. Something that was discovered after a while was that when we found out that exception strategies were needed for the connectors, we had to create a file connector as well. In the beginning of the development phase, we only had simple file connectors directly in our message flow. But we could not add exception strategies when we configured it up in that way. We also had to have retry polices on all connectors, except for the file connector. A new transformation class also needs to be developed since in this message flow the data that we fetch from the database will be converted to an XML file. In the 40
51 4.5. MULE ESB same way as before, we create a new Java class which extends AbstractTransformer. In the method dotransform we use the W3C dom package to create an XML document which we then fill up with the data retrieved from the database. The XML document is created by the use of the DocumentBuilderFactory class and an example on how an XML document is built up in this way can be found at W3C s XML page [25]. When we have all our data stored in our new XML document, we return it in the form of a string back to the message flow and if all goes well we end up with an XML file in our output folder Database to multiple receivers message flow Figure The ESB platform collects data from a database, transforms it and routes the data to multiple receivers. The difference with this message flow compared to the two earlier is that we now have to make sure both the file and the database endpoints receives the correct formatted data transformed from one and the same message. This is solved by first using topics instead of queues in our services. That way we can have multiple services subscribing to a topic and receive messages that are placed there. After that we then copy our previous two flows and move them inside our new message flow and make sure that they are subscribed to the JMS topic instead of the JMS queue which was used before. We now have two subscribers for a message and we can make the corresponding transformations for each service. We could probably also use a multicast router instead of the pass-through-router to send a message to multiple endpoints, but the chosen solution is easier to implement by a small margin. The transformations are not to be forgotten and they now have to be performed on the outbound endpoint or the inbound endpoint for the JMS topic. But as previously mentioned, if we use a transformation on a JMS endpoint we have to add the JMS to Object transformer since it will no longer be performed automatically. 41
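As a complement to the description above, the sketch below shows what a custom transformer of the kind used in these flows could look like: the JDBC inbound endpoint hands over each fetched row as a Map, and the transformer builds a small XML document with the W3C DOM API and returns it as a string. The package names are those of the Mule 2.x Community Edition and should be checked against the installed version; the column and element names are assumptions, and the exact TransformerException constructor may differ slightly between Mule versions.

import java.io.StringWriter;
import java.util.Map;

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;

import org.mule.api.transformer.TransformerException;
import org.mule.transformer.AbstractTransformer;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

// Sketch of a custom transformer: database row (Map) in, XML string out.
public class UserRowToXmlTransformer extends AbstractTransformer {

    protected Object doTransform(Object src, String encoding) throws TransformerException {
        try {
            // The JDBC inbound endpoint delivers each fetched row as a Map container.
            Map row = (Map) src;

            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().newDocument();
            Element user = doc.createElement("user");
            doc.appendChild(user);
            appendField(doc, user, "firstname", row.get("FIRSTNAME"));
            appendField(doc, user, "lastname", row.get("LASTNAME"));
            appendField(doc, user, "address", row.get("ADDRESS"));

            StringWriter out = new StringWriter();
            TransformerFactory.newInstance().newTransformer()
                    .transform(new DOMSource(doc), new StreamResult(out));
            return out.toString();
        } catch (Exception e) {
            // Wrapping the cause lets Mule's exception strategy route the message.
            throw new TransformerException(this, e);
        }
    }

    private void appendField(Document doc, Element parent, String name, Object value) {
        Element field = doc.createElement(name);
        field.setTextContent(value == null ? "" : value.toString());
        parent.appendChild(field);
    }
}

The transformer is then declared with a <custom-transformer> element in the configuration file and referenced from the endpoint, as described for the database to database flow above.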
52 CHAPTER 4. IMPLEMENTATION File to database message flow Figure A file is collected and transformed by the ESB platform and the data is then routed to a receiving database server. There are no surprises regarding the setup of this message flow except that we again have to create a new transformation class, which will be converting CSV data to a Map data structure. The Map can then be used to insert the data into the database by explicitly selecting data from the Map. This is done through the help of the SQL queries which you define for your connectors. See figure 4.25 for an example of how this code looks like. Figure A SQL query for inserting the data, taken from a map container, into a database server. When we fetch or read our file from the specified folder we also choose to run one of the built-in transformations. This transformation converts a byte array to a string object. This way, our CSV file will become a string containing the substance from the file which makes it convenient on our part later on. However, since we can only insert one user from the original CSV file at a time into the database we have to split our message or string object. This is done by implementing a custom router which will split our message into multiple distinct parts. As we did before with the custom transformers, we create a custom router class by extending AbstractMessageSplitter. The method getmessageparts is then overridden and split our message, which is a string, on every new line. We then make sure that the message is sent to the correct endpoint and returns the new messages. All that is left to do is to make sure that the polling frequency is not set too high and that the file connector is used, as described before, so that exception strategies can be used for the connector. One thing that we have not mentioned earlier regarding Mule and the way the configuration file works is that you have to include the keywords jms or file etc. in the Mule header. If you do not do this, Mule will not understand what a 42
53 4.5. MULE ESB file:connector is or a jms:connector. The keywords, in the header, are linked to XML documents containing information regarding the keywords File to file message flow Figure A file is collected and transformed by the ESB platform and then placed in a new folder. Another new transformation class is created for this message flow, which will convert our CSV file to an XML document. More than that will not be necessary for this flow to function and the W3C dom package is used to create our new XML document. The byte array to string transformer is also used and placed before our new transformer File to multiple receivers message flow Figure A file is collected and transformed by the ESB platform and then routed to two different receivers. Since both our message flows that have a file connector as the inbound endpoint are using the built-in transformation, byte array to string, the transformation can 43
54 CHAPTER 4. IMPLEMENTATION be placed on the inbound endpoint for this multi message flow. Other than that, we will be doing exactly as in previous multi message flows. We simply move the two flows above into the configuration file so we get three services. We then change the use of JMS queues to JMS topics so that each subscriber, the two services, can receive a message that is being sent through the chain Web service message flow Figure A Web service is hosted by the ESB platform where a client can access it from the outside. There are a number of different ways to take for this message flow which are described by Rademakers and Dirksen in Open Source ESBs in Action [17], but the road that was chosen was one of the simplest ones. Our Web service is created in Java and to host it on the Mule platform the CXF [2] connector is used. For the inbound endpoint we use a CXF endpoint where we can choose on what URL it should listen on for receiving SOAP requests to our Web service. After that, in our message flow chain, we add a component class which is linked to a spring bean where we specify which Java class we use as a component class. These component classes have been mentioned briefly in the background chapter. They can be placed between an inbound and an outbound endpoint and in this case it will be our new Web service class. However we will not be needing any outbound endpoints for this message flow since our component class takes care of the response back to the original caller. The figure 4.29 shows how little code that is needed to host a Web service in Mule. For our Web service class, we can then define methods which can be called from outside the platform. However, we must not forget that the methods should throw an Exception so that the Mule platform can take care of possible errors. This way, error messages are thrown on the dead letter queue instead of disappearing from the system. When we launch our message flow a WSDL file will automatically be generated which makes it easy to host an own Web service. 44
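The component class behind the hosted Web service is plain Java. Since figure 4.29 only shows the hosting side, a rough sketch of what such a component class could look like for the name-to-address lookup is given below. The JAX-WS annotation, the JDBC URL and the table and column names are illustrative assumptions and not taken from the actual implementation; the essential points from the text above are that the operation takes a name, asks the database for the corresponding address and lets any exception propagate so that Mule's error handling can take over.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import javax.jws.WebService;

// Sketch of the Web service component class hosted through the CXF connector.
@WebService
public class AddressLookupService {

    public String getAddress(String name) throws Exception {
        // Throwing instead of swallowing errors is what lets Mule route a
        // failed request (for example when the database is down) to its
        // error handling instead of losing it silently.
        Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost/testbench", "user", "password");
        try {
            PreparedStatement stmt = con.prepareStatement(
                    "SELECT address FROM userlist WHERE name = ?");
            stmt.setString(1, name);
            ResultSet rs = stmt.executeQuery();
            return rs.next() ? rs.getString("address") : "address unknown";
        } finally {
            con.close();
        }
    }
}

In the flows described above the database access goes through a configured data source rather than DriverManager; the direct connection here is only to keep the sketch self-contained.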
55 4.5. MULE ESB Figure Code to host a Web service in Mule using CXF. The method we create in our Web service class uses a database connection where we fetch the data, in the form of an address, and return it as a string to the caller. A name is used as input, to find the corresponding home address. We also make sure that if no address is found for a specific name, the string address unknown will be returned, but we could also throw an exception Persistent queue setup To set up persistent queues for our message flows, a setting needs to be added. On our ActiveMQ connector configuration, the keyword persistentdelivery can be used and if it is set to true the messages which ends up in a JMS queue will be stored on disk. As default the setting is false but as you can see in figure 4.30 it is a simple task to change it. More than that is not necessary to use persistent queues unless you want to use an external database for storing the messages or such. Figure Adding persistent delivery to the ActiveMQ connector in Mule Transactions For the Mule platform the ability to use transaction was looked upon. If you start a transaction on a JDBC endpoint it seems that you can only bind this transaction with other JDBC endpoints. If you have an JMS endpoint as the outbound endpoint you will get an exception. But as we discovered in chapter 3.2, Mule also supports XA transactions which can handle both JDBC and JMS endpoints. 45
56 CHAPTER 4. IMPLEMENTATION To get the XA transactions to work you have to do a number of configurations. First, transactions are set up on the inbound endpoint but the outbound endpoint still needs to have transactions available. The outbound endpoint does not have to join the transaction started on the inbound endpoint. Because of this we have to set up XA transaction support for our ActiveMQ messaging system since we use JMS transferring between our two services in our Mule message flows. We also need to have a transaction handler and the recommendation is to use the built-in JBoss transaction manager. It is initiated by adding the tag <jbossts:transaction-manager/> in our configuration file. To set up the ActiveMQ connector the only thing required is to change its tag to <jms:activemq-xa-connector>. We also have to specify which JMS specification we are going to use with the keyword specification, and we will be using version 1.1 as can be seen in figure By adding the tag <xa-transaction> for an endpoint in the Mule service, the transaction is started on that endpoint. The tag has the keyword action that needs to be set, where we can choose: NONE, ALWAYS_BEGIN, ALWAYS_JOIN, BE- GIN_OR_JOIN and JOIN_IF_POSSIBLE. We will be starting the transactions on the service which has the outbound endpoint connected to our receiving service because the receiver will be disconnected in our tests. We thereby use the setting ALWAYS_BEGIN on our inbound endpoint on that service, and select the ALWAYS_JOIN setting for our outbound endpoint. Figure XA capable configuration for the ActiveMQ connector. The database connectors also have to be looked upon, so they support XA transactions. This is done by altering the connector class being used. 46
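To make the last point more concrete: altering the connector class amounts to switching to a resource class that exposes an XAResource the transaction manager can enlist. For the JMS side, the difference between the plain and the XA-capable ActiveMQ connection factories can be illustrated with a small standalone sketch using the ActiveMQ client API, independent of the Mule configuration and assuming a broker is running on the default local URL:

import javax.jms.XAConnection;
import javax.jms.XASession;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ActiveMQXAConnectionFactory;

// Illustrates the resource-level difference behind the XA-capable connector.
public class XaFactoryExample {

    public static void main(String[] args) throws Exception {
        // Plain factory: its sessions cannot take part in a distributed (XA) transaction.
        ActiveMQConnectionFactory plain =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

        // XA-capable factory: its connections create XA sessions whose XAResource
        // a transaction manager (here the JBoss transaction manager inside Mule)
        // can enlist together with an XA-capable JDBC data source.
        ActiveMQXAConnectionFactory xaFactory =
                new ActiveMQXAConnectionFactory("tcp://localhost:61616");
        XAConnection connection = xaFactory.createXAConnection();
        XASession session = connection.createXASession();
        System.out.println("XAResource available: " + session.getXAResource());
        connection.close();

        System.out.println("Plain factory still usable for non-XA flows: " + plain);
    }
}

On the database side the corresponding step is, as stated above, to alter the connector class so that an XA-capable data source is used.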
57 Chapter 5 Results In this chapter the result from the different scenarios, which we have built up in the implementation phase, is presented. Possible discussions and conclusions drawn from these results are presented in the next chapters. 5.1 Receiver disconnected scenario Figure 5.1. The receiving part in a message flow is disconnected while the system is up and running. As mentioned earlier we disconnected the receiving part from the message flow to see how the platforms reacted. In that way we could get an overview on how the error handling worked for the two systems Database as sending access solution We begin with the Sonic platform. We can establish that the containers, which the message flows are running inside, starts directly even though a service is not connected, in this case the receiving part. Error messages do however show up inside the containers log showing that a connection can not be established to a service. When we then start to send messages through our message flow where we have a database as the receiving part and then disconnect that service, a couple of things occur. First, checking in the log files displays clear Java Exceptions which tells us that something has gone wrong. These exceptions describes that a message could not be delivered to the specified part because the connection is not online. The other 47
58 CHAPTER 5. RESULTS interesting observation is that the messages are being sent to the dead letter queue or Rejected Message Endpoint which we have configured in the implementation phase. No messages were therefore lost from the system when the receiving part was disconnected. When it comes to the database to file message flow it did not differ itself much compared to the database to database flow. An important note is however that when the folder is unreachable, or in our case when we remove the USB memory device, there are no error messages showing up in the container logs. In the database to database message flow there were clear error messages when the connection disappeared and another difference is that it takes less time for a message to go from the entry point to the exit point in our message flow. The messages do however end up in the dead letter queue when we start sending messages through our message flow, just as before. The Mule platform is more strict when it comes down to starting the system without all the services up and running. But as long as we have a retry policy on our connections, which we use in our configuration, the system starts as soon as all the connections are available. First we have our message flow with the database server as the receiving part. When the database gets disconnected nothing happens in Mule which indicates that we have a crash or disconnection scenario. There are no warnings or error messages in the logs either as there were with the Sonic platform. However, when the messages starts to be delivered through our message flow the warnings start to appear and our implemented retry policy starts to do its work by trying to reconnect to the database server. In the logs where the errors appear you can also clearly see that the default connection exception strategy gets called and sends our messages, which could not be delivered, to the dead letter queue, in our case the db.error queue. If there are multiple messages that are being sent, it will take some time before they are being processed and wind up in the dead letter queue which we have specified. Something interesting happens when we try the transaction configurations on this message flow. When we disconnect the receiving database server and send a message, instead of calling the default connection exception strategy the default service exception strategy is called. The error messages which shows up in the log are also of a different characteristics and are more closely connected to the transaction part than to our disconnected database server. The messages, ready for processing, are at a first glance looking to stay put in the queue, unprocessed, but after a while they start to disappear. However these messages do not end up in the dead letter queue which we have configured, they disappear completely from the system. Another aspect is that the messages are staying a much longer time in the queue before they disappear when comparing with the message flow where transactions are deactivated. An interesting thing did occur when we later tested persistent queues which led to retesting our earlier scenarios. When persistent queues are activated the messages which gets lost when using transactions is sent to ActiveMQ s own dead letter queue called ActiveMQ.DLQ. Regarding the database to file message flow the only thing that is different com- 48
59 5.1. RECEIVER DISCONNECTED SCENARIO paring to the database to database flow is that when a folder becomes unavailable files may already be opened for writing. These files becomes corrupted and gets on our USB memory device with a size of zero bytes. However all these messages which did not get sent properly ends up in the dead letter queue, even those which got corrupted and this must be considered as a positive thing. We also get clear and visible Java Exceptions in the logs for each message which did not get delivered, explaining that the search path was unavailable. When the transaction configuration is tested on this message flow, with a file endpoint as the receiving part, it did not differ much compared to the database to database message flow with transactions. The messages still disappear without getting sent to the dead letter queue even though we have configured one up. But as soon as we activate the persistent queues the messages are delivered to the ActiveMQ.DLQ File as sending access solution For the Sonic platform, this test did not behave any different than the previous integration solution. The messages for both the file to database and the file to file message flows ended up in the dead letter queue when the receiving part was disconnected. Error messages are also displayed in the containers logs for easy viewing. Mule s solution or outcome of this test did also not behave any different comparing to earlier or towards the Sonic platform. As soon as there is an error on the receiving part, both database or folder, the messages gets delivered to the dead letter queue file.error. They also gets delivered to the dead letter queue if there are problems with for example putting data into the database, e.g. if the primary key already exists in the database table when running the file to database message flow. Regarding the transaction message flows we are not in for any surprise compared to earlier results. Messages keeps getting lost if we have not activated persistent queues Web service access solution We begin with the test for the Sonic platform. Because Web services are both senders and receivers we decided that the database server, which is used by the message flow, gets disconnected from the ESB platform to see what will happen. When we disconnect the database and sends a request through our Web service with the help of our tool Soap UI [20], the message gets intercepted and delivered to the dead letter queue (Rejected Message Endpoint). However we are left hanging at the end, waiting on a reply from our Web service. We do not get any message back indicating that something went wrong but after a while a socket timeout occurs. The Mule platform acts differently compared to the Sonic s case. When unplugging the database server from the message flow and sending a message a Java Exception is thrown from our component class. Instead of waiting for a socket time- 49
60 CHAPTER 5. RESULTS out with our Soap UI tool we immediately get a SOAP:Fault message back with error codes describing the problem. The error code lets us determine what went wrong, that the database connection is down. As pointed out earlier we did not get this type of error message when we use the Sonic Web service, even though our message got sent to the dead letter queue. 5.2 Receiver temporary disconnected scenario Figure 5.2. The receiving part in a message flow is disconnected and then reconnected while the system is up and running. The differences between this scenario and the one above is that here we reconnect the disconnected receiver after a short while, to see if the messages eventually are re-sent Database as sending access solution For the Sonic platform we directly notice that messages which have already been sent to the dead letter queue (RME) does not get re-sent after the receiver is reconnected. It does not matter whether it is the database or file message flow that is tested. Since we also split our messages in our database to database process, this can result in that some messages or database rows gets delivered to the receiving part while others gets sent to the dead letter queue. Regarding the Mule platform, for both the database to database and the database to file message flow the same outcome as with the Sonic platform occurs. The messages which have reached the dead letter queue does not get re-sent. The other messages which have not yet been processed and sent to the dead letter queue are sent to the receiver as soon as it is reconnected. You will have to look out for corrupted files when using a folder as the receiving part since many files of size zero bytes appears in the folder. This was nothing that showed up in the database to database message flow, with corrupted rows or such. When the transaction configurations were tested for each message flow, nothing new showed up. As long as we have persistent queues activated, messages which fails to be sent are sent to the ActiveMQ.DLQ queue instead. However, an interesting aspect regarding this scenario is that there is a variable called max redelivery for ActiveMQ connections. If this variable is increased it takes much longer for a 50
message to be sent to the dead letter queue, and the message may be successfully re-sent before it has been redelivered the configured number of times.

File as sending access solution

For the Sonic platform, nothing new occurred compared to when a database server is the sending part, except that for the file to file message flow the messages were sent at a higher speed. A small downtime then results in more messages being sent to the dead letter queue, compared to our database to database message flow.

When we take the Mule platform into consideration, what the message flows with a file endpoint as the sending part have in common is that it takes less time to fetch the data into the system. It therefore takes a shorter time before the messages end up in the dead letter queue. As pointed out earlier, max redelivery only works together with the use of transactions. When transactions are activated you can notice that the max redelivery variable has an effect on the message flow even though file endpoints are not supported by XA transactions. Setting the variable max redelivery to around 200 gives two to three seconds before a message is sent to the dead letter queue. This would be enough time for a minor connection dropout.

Web service access solution

The Web service message flow for the Sonic platform works just the same as before. If an error has occurred the message is not re-sent, and we therefore have to wait for a socket timeout before we can send a new request with our tool SoapUI [20]. When we then use the Mule platform, nothing has changed compared to when the Sonic platform was tested. Because the message flow uses synchronous message transferring, the database server has to be online when the request occurs. It does not matter if it is just a short downtime, because the request is not sent again. However, with the Mule platform you at least get a SOAP:Fault message back so you can resend the request directly.

5.3 Platform or message system crash scenario

Figure 5.3. The platform is taken down while a message flow is running and sending messages through the system.
62 CHAPTER 5. RESULTS In this scenario we tested how the platforms reacted to the use of persistent queues by letting them crash during transferring of messages. The interesting part was to see if the platforms resumed their work where they were abruptly stopped Database as sending access solution First out is the Sonic platform. With the exactly once setting on all services, which makes the messages persistent, the messages should still be there after the ESB platform crashes. This also seems to be the case but the work is not resumed where it stopped upon restarting the Sonic ESB platform. It could be that the topics, which are used between the services, are marked or something that would imply that a crash has occurred and the work should therefore not resume. We can clearly see that the Sonic platform recognizes that the system was not shut down properly and takes actions. The different message flows does not seem to do any difference either. Regarding the Mule platform, if the ESB platform or the message system, in our case ActiveMQ, crashes, we instantly notice that the messages are still left where they were when we restart the platform as long as we have persistent delivery activated. However, data that has been fetched from the database server gets split up and there is a risk that they have not reached the persistent JMS queue. These messages are lost when the platform crashes. Fetching many database rows per polling case results in that many messages are lost when the system goes down. There are different problems regarding if the ESB platform crashes or the message system crashes. If the ESB platform goes down there is a possibility that a few messages gets lost because there is a chance that part of the data that were fetched has not reached a persistent JMS queue. If however the message system drops, the ESB platform continues to process and fetch data from the database. But as there are no queues to deliver these messages to, all messages are lost and only a very few gets sent to the correct part. Transactions does not seem to have any effect on how persistent queues are handled by Mule File as sending access solution As mentioned earlier, the access solution has no effect on how the ESB handles a crash for the Sonic platform. Regarding the message flows for the Mule platform with file endpoints the only difference is that it goes quicker to transfer the messages. Therefore less time is spent fetching the messages which results in that a message spends more time inside a persistent queue than anywhere else inside the message flow Web service access solution There are problems with our Web service solution when our ESB platform crashes since Web services uses synchronous message transferring. This is true for both the 52
63 5.4. RECEIVER DISCONNECTED IN MULTI RECEIVER FLOW Sonic platform and the Mule platform. If you have already sent a request to the ESB platform there will not be any response back to the caller. Instead you get a connection reset exception after a while when the platform crashes. 5.4 Receiver disconnected in multi receiver flow Figure 5.4. The sending part sends a message which is redistributed by the platform to two different receivers, where one receiver is disconnected from the ESB platform. Within this scenario the system is tested with a message flow containing multiple receivers. The preferred outcome is that either every receiver get the message or no one of the receivers get the message which were sent through the message flow Database and file access solutions to multiple receivers Because we did not have any transactions to try with regarding the Sonic platform the results were expected. When one of the receivers were disconnected the other ones still received the message. The test is also performed with the Exactly once setting which should make a roll back if something went wrong but it seems only to work if you have multiple addresses in the JMS message header part. For the Mule platform, when we disconnected one of the two receivers the other receiver still get the message. This is pretty much expected when transactions are deactivated. The messages that did not get to its receiving part ended up in the dead letter queue. We can, with the help of the dead letter queue, then easily locate which of the messages that did not get sent. With some manual implementation you could resend just that message if needed. However, when we activate the transactions, to handle cases like this, going through the same procedure as before the outcome is exactly the same. The transactions should be rolled back when a problem occur but it does not seem to be the case. 53
Regarding the file to multiple receivers message flow, something abnormal occurred. The messages got multiplied, and one message could result in up to four messages at the receiving end.

5.5 Summary of results

The table below summarizes the results from the scenarios above for both the Sonic and the Mule platform.

Scenario: Receiver disconnected
Sonic ESB: Java Exceptions in log files; No messages were lost; Failed messages found in DLQ; Corrupted files for file receiver; Socket timeout for Web service
Mule ESB: Java Exceptions only when sending data; No messages were lost except when XA transactions were used without persistent queues on; Failed messages found in DLQ; Corrupted files for file receiver; SOAP:Fault message for Web service

Scenario: Receiver temporary disconnected
Sonic ESB: No messages were lost; Failed messages found in DLQ; Messages in DLQ did not get re-sent; Socket timeout for Web service
Mule ESB: No messages were lost except when XA transactions were used without persistent queues on; Failed messages found in DLQ; Messages in DLQ did not get re-sent; Redelivery function with XA transactions; SOAP:Fault message for Web service

Scenario: Platform or message system crash
Sonic ESB: Messages outside queues were lost; Did not resume work after restart; Connection reset for Web service
Mule ESB: Messages outside queues were lost; Resumed work after restart; Connection reset for Web service

Scenario: Receiver disconnected in multi receiver flow
Sonic ESB: No messages were lost; Messages reached receiver one when receiver two was disconnected
Mule ESB: No messages were lost; Messages reached receiver one when receiver two was disconnected; No rollback on receiver one with XA transactions
5.6 Persistent delivery performance hit

Figure 5.5. The ESB stores the messages in the queues and topics to disk.

The results in this chapter should not be compared between the platforms, because the message flows are not implemented in exactly the same way and may not have been implemented in an optimal way. A more detailed view can be seen in Appendix A.

Sonic's database to database performance test

It was a little difficult to test the performance hit of persistent delivery for the Sonic platform, compared to the Mule platform, when using the database to database message flow. The database connections, or the JDBC connectors, are significantly slower when the message flow is running on the Sonic ESB platform compared to the Mule ESB platform. To reflect the slow database access, we changed our message flow to fetch 200 rows instead of the two rows we have in our database. We also tested increasing the polling interval from the low 1 ms to a more substantial 100 ms. This way, more processor time would be given to the actual process and not to the polling thread.

The message flow was first tested without persistent queues activated. With the 100 ms interval, after 60 seconds around 300 messages had reached the MySQL database and a total of circa 1300 messages had gone through all transformations and were waiting to be delivered to the database. With persistent queues activated, that number went from 300 down to 200 messages reaching our MySQL database after 60 seconds. The number of messages which went through all transformations in our ESB process was . In other words, there is a small but noticeable difference between the results. The exact numbers can be viewed in Appendix A.

Mule's database to database performance test

The problems which showed up above with the database access were not apparent when we tested the message flow on the Mule platform, because the database connections were quicker and not the bottleneck of the message flow. Here we could perform the test as originally planned, polling against the database every millisecond to fetch the two rows into our message flow.

The test started with the persistent queue settings off, and we could clearly see that the message queues grew as new data was fetched into the system at the same time as the ESB tried to process it. After a while the number of unprocessed messages started to stabilize at around 20 to 30 messages in the input queue. When 60 seconds had passed we could see that circa 4500 messages had been processed and delivered to our MySQL database. We also tested increasing the original two rows, as we did for Sonic, to see if it had an effect on the result, but we got a similar result, which indicates that the database connection was not our bottleneck in this message flow.

The test was then redone with persistent queues activated, which resulted in around 2500 messages being processed and delivered after 60 seconds. That is quite a substantial difference compared to the 4500 messages that were sent with persistent queues deactivated.
67 Chapter 6 Discussion In this chapter a short discussion around the results that were presented in the previous chapter is held to shed more light on the causes. There is also discussions around possible solutions or enhancements to certain problems. 6.1 Receiver disconnected scenario Figure 6.1. The receiving part in a message flow is disconnected while the system is up and running. The preferred way in this scenario would have been that the messages gets sent to the dead letter queue that we specified in the implementation part, and as the results shows this seemed to be the case. Although in one case it did not go as expected, when transactions were used. More on that later on. As the test result showed there were no large differences between the Sonic and the Mule platform, despite the system differences. This could be seen as a positive thing because it could facilitate a move from one platform to the other. Both platforms also clearly displayed errors in the logs if a connection which were used in the message flow was inaccessible. The Mule platform is however a little more strict concerning the start up procedure if a service in the flow is unavailable. But this is not a problem as long as the retry policy is on. By not allowing the system to go online when a connection is down, some debugging time may be saved. It is possible that the problem regarding a disconnected connection would have popped up later on if you have a large message flow with nodes rarely used. We also got 57
simple Java Exceptions with both platforms, which could ease the burden of trying to locate an error or bug in the integration solution.

In the test we also saw that an error message is not displayed if a service crashes when connected to the Mule platform. This is partly because that service is not in use at that moment. If, for example, the sending part went down we would have got an error message, because we are constantly polling against that service. In Sonic, however, we also get an instant error message in the log if the receiving part disconnects; this would have been preferred for Mule as well. Regarding the scenario where we have a file transfer as the receiving part located on a USB memory device, we would probably want some kind of error message when the USB stick is removed from the system.

When it comes to the tests with Mule's different message flows, we have the option to use XA transactions, and our results show that the outcome is a little different when transactions are used. The first thing that was noticed when the receiving part was disconnected is that instead of calling the default connection exception strategy, the default service exception strategy was called. We can assume that this is because the error occurs in the transaction handling part, a service, which has a somewhat higher priority than the connection part, even though the problem lies within the connection territory. The other thing that was noticed, and of great interest, is that messages which could not be delivered because the receiving part was disconnected did not get sent to the dead letter queue which we had specified. Instead they were all lost, even though we have a dead letter queue for both connection exceptions and service exceptions. Whether this happens because of something I have missed when configuring these dead letter queues is unclear, but I could not find any information that would suggest so. Though, it is curious to see the messages get delivered to another dead letter queue, namely ActiveMQ.DLQ, when persistent queues are activated. It is possible that the messaging system takes over the error handling when transactions are activated, since the ActiveMQ.DLQ queue is specified there, but it seems rather unlikely. This is not an optimal way of handling the problem, and it is not good that messages disappear from the system. It would have been better if the messages were sent to the specified dead letter queue or re-sent at a later time.

Another thing which was noticed is that the messages stayed much longer in the queue before they got sent to the dead letter queue or disappeared when transactions were activated. Probably the transaction handling part tries to resend these messages a number of times, or the handling of messages simply takes longer when transactions are on. We could clearly see that when the database to file message flow was tested the messages got sent straight away to the dead letter queue, but it took longer for them to get there in the database to database message flow. File endpoints do not support XA transactions, as noted earlier, and that is probably why the messages got sent to the dead letter queue at a quicker pace.

The last thing of interest was when the USB memory device was disconnected. When files were written to the file system in our database to file scenario or in our file to file scenario, some files got corrupted, ending up with a size of zero bytes. A theory
The last observation of interest concerns the disconnected USB memory device. When files were written to the file system in the database-to-file and file-to-file scenarios, some files became corrupted with a size of zero bytes. One theory is that the underlying file system is responsible for this corruption: the operating system had prepared files for writing to the USB memory device at the moment the device was disconnected, resulting in zero-byte files. If that were the whole explanation, however, the corrupted messages should not have been found in the dead letter queue, which they were. It may instead be that Sonic and Mule prepare to write files to the file system, but the data has not yet been flushed to disk when the disconnect occurs; the platform then notices an error with the destination and sends the message responsible to the dead letter queue. Regardless, it is positive that the messages are found in the queue, since corrupted messages are definitely not wanted in a message flow.
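A minimal sketch of the flush theory, in plain Java: force the payload to disk before the message is acknowledged, so a removed device cannot leave a zero-byte file behind an already acknowledged message. The target path is an assumption, and neither platform is known to write files exactly this way.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

/** Write a message payload to disk and only return once the data has reached the device. */
public class SafeFileWriter {
    public static void write(File target, byte[] payload) throws IOException {
        FileOutputStream out = new FileOutputStream(target);
        try {
            out.write(payload);
            out.getFD().sync();   // block until the operating system has flushed to the device
        } finally {
            out.close();
        }
        // Only acknowledge the message towards the ESB/queue after sync() has returned.
    }
}
```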
For the Web service message flows there were some differences in how they were implemented and how the results turned out when comparing Sonic ESB to Mule ESB. Mule ESB handles errors in a good way by sending a SOAP Fault message back to the sender when something goes wrong. With Sonic you eventually get a socket timeout, but no indication of what went wrong. If a message is sent to the dead letter queue, or the Rejected Message queue in Sonic's case, the system should be able to send a Fault message back to the sender. This could probably be implemented by hand, and that is likely the only way to accomplish it.

6.2 Receiver temporarily disconnected scenario

Figure 6.2. The receiving part in a message flow is disconnected and then reconnected while the system is up and running.

Our hope was that no messages would disappear from the system and that the message flow would resume its work when the link to the receiving part was re-established. Another hope was that messages which were not delivered while the connection was down would be redelivered. This is not the case, however: neither Sonic's nor Mule's messages are re-sent when the flow is resumed. This can easily lead to problems, as pointed out in the result chapter, since our database polling message flows split the result into several rows; some rows may have reached the destination while others ended up in the dead letter queue. In our simple case it would have been enough to re-read the rows that ended up in the dead letter queue, but in a more complex message flow, where the order of the messages is critical, a more advanced solution will probably have to be implemented by hand.

An interesting note from the result chapter is that with persistent queues activated, increasing the max redelivery variable for the ActiveMQ connection made it take longer before the messages were delivered to the ActiveMQ.DLQ queue. This happens because, when transactions are used, the messages are re-sent as many times as the max redelivery variable specifies when a failure occurs. According to Open Source ESBs in Action [17], the max redelivery variable should work with both regular transactions, i.e. JMS to JMS or JDBC to JDBC, and XA transactions, which is what we have seen here.

As pointed out earlier, the file access tests were faster than the database access solutions, so even the smallest downtime for a file connection could make you lose an entire batch of files ready for delivery. In Mule the file endpoints do not support transactions, yet with transactions activated the max redelivery variable still seems to take effect. It could be that the redelivery function is activated even though XA transactions are not supported for the file connectors.

Web services use synchronous message transfer, in contrast to the other message flows, so one could not expect everything to go smoothly if a service went down. But as mentioned earlier, a simple error message, such as the one we got with the Mule platform but not with the Sonic platform, would have been preferable.

In summary, no messages disappeared from the system, but the ones that ended up in the dead letter queue were not re-sent when the message flow was resumed. How this should preferably be handled may vary from flow to flow, but for a message flow where the order of the sent messages matters, things can get tricky. In most cases you would probably have to implement a solution by hand that controls the messages so that they are delivered in the correct order, or aborts the transfer if an error occurs. The calculated risk is that it could lead to manually correcting database tables on the receiving side if there were errors in the transfer. But as long as no messages disappear from the system, the problem can always be fixed in one way or another.
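The manual re-delivery mentioned above could, in its simplest form, look like the following hand-rolled sketch: once the receiving part is reachable again, the dead letter queue is drained back to the original destination in the order the messages arrived. The queue names and the connection factory are assumptions for the example, and ordering is only preserved to the extent that a single consumer drains the queue.

```java
import javax.jms.*;

/** Hand-rolled sketch: re-send dead-lettered messages once the receiver is back. */
public class DeadLetterRedelivery {
    public static void redeliver(ConnectionFactory factory) throws JMSException {
        Connection connection = factory.createConnection();
        connection.start();
        try {
            // A transacted session so a message is only removed from the DLQ
            // once the re-send has been committed.
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageConsumer fromDlq = session.createConsumer(session.createQueue("custom.DLQ"));
            MessageProducer toFlow  = session.createProducer(session.createQueue("flow.outbound"));

            Message message;
            while ((message = fromDlq.receive(1000)) != null) {
                toFlow.send(message);
                session.commit();   // commit per message to preserve arrival order on failure
            }
        } finally {
            connection.close();
        }
    }
}
```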
6.3 Platform or message system crash scenario

Figure 6.3. The platform is taken down while a message flow is running and sending messages through the system.

The best outcome for this scenario would have been that no messages were lost even though the platform crashed. As we have determined, both platforms support persistent queues, which should solve the problem with a crashing platform, but as the results show, this alone is not enough to reach the goal. In Sonic's case the messages are still there when the system has restarted after a crash, but the services are not resumed where they left off before the crash. Since topics are used internally between the services in an ESB process, it may be that the services do not know which messages have already been processed. In Mule's case the problem is that when large data quantities are collected from a database and the system goes down, the data may already be inside the message flow but not yet placed in a persistent queue. A simple remedy could be to fetch only one row at a time and send an acknowledgment back to the database by altering a processed column in the table, or some other manual solution (a sketch of this one-row-at-a-time approach is given at the end of this section).

Differences between a platform crash and a message system crash were also discovered. If the platform goes down, the messages which are not residing in a persistent queue are gone when the platform restarts; messages that had been fetched by the system but had not reached a persistent queue are lost. A solution could be to acknowledge a message only once it resides in a persistent queue, and not as soon as it has been fetched by the platform. With Mule the messages are split before they are delivered to the persistent queue, which gives a larger error margin, with more messages disappearing from the system. If the message system crashes, on the other hand, the platforms continue to fetch messages from the sender, but all these messages are lost, since there is no underlying message system to collect them. The messages cannot be sent to a dead letter queue either, since there is no queue when the message system is down.

Regarding the resumption of previous tasks when the platform restarted, the message flows resumed their work in Mule's case: all messages lying in the persistent queues were transferred after the restart, which was not the case for the Sonic platform. This would have been a good thing if no messages had been lost in the crash, but our tests show that messages were indeed lost and did not show up in the dead letter queue. One can argue about which is the better behaviour, to resume the work or not, but if the work is resumed you may face a time-consuming job of distinguishing which data was delivered and which is missing in a complex message flow.
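The one-row-at-a-time idea suggested above can be sketched as follows: a row is marked as processed in the source table only after it has been handed to a persistent queue, so a platform crash in between cannot lose it silently. The table, column and queue names are assumptions for the example, the producer is assumed to be configured for persistent delivery, and the LIMIT syntax assumes a database such as the MySQL or H2 servers used in the test setup.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;

/** Fetch one unprocessed row, queue it persistently, then acknowledge it in the database. */
public class AcknowledgedPoller {
    public static void pollOnce(Connection db, Session session, MessageProducer toQueue)
            throws SQLException, JMSException {
        db.setAutoCommit(false);
        PreparedStatement select = db.prepareStatement(
                "SELECT id, payload FROM outbox WHERE processed = 0 ORDER BY id LIMIT 1");
        ResultSet row = select.executeQuery();
        if (row.next()) {
            long id = row.getLong("id");
            TextMessage message = session.createTextMessage(row.getString("payload"));
            toQueue.send(message);                 // producer configured for PERSISTENT delivery
            PreparedStatement ack = db.prepareStatement(
                    "UPDATE outbox SET processed = 1 WHERE id = ?");
            ack.setLong(1, id);
            ack.executeUpdate();
            db.commit();                           // acknowledge only after the message is queued
        }
        row.close();
        select.close();
    }
}
```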
6.4 Receiver disconnected in multi receiver flow

Figure 6.4. The sending part sends a message which is redistributed by the platform to two different receivers, where one receiver is disconnected from the ESB platform.

As pointed out earlier in chapter 4.1, the preferred behaviour would have been that if an error occurs in the flow, no receiver should receive the message. The test turned out to be rather uneventful, however, because the XA transactions that were investigated were only implemented and working on the Mule platform. The results when the scenario was run on the Sonic platform were therefore rather expected: the message was delivered to the other receivers. To get around this problem you would probably have to implement, by hand, some kind of service in the process which takes responsibility for making sure that no receiver gets the message if an error occurs (a minimal sketch of such a service, for the case where both receivers are fed from JMS queues, is given at the end of this section).

The exactly once setting was tested, but since the message had already been delivered to the services responsible for the access to our database or folder before the error occurred, there was nothing that would have resulted in a rollback. This is, however, just a theory, and the exactly once setting should be investigated more closely to establish exactly how it is supposed to function.

For Mule we have transactions in place, but XA transactions with multiple endpoints did not work the way regular transactions with multiple database receivers or multiple JMS receivers would have. The file endpoints do not support XA transactions, and using transactions for these message flows gained us nothing, even though we used a common queue in our implementation. The errors that occurred when we had file endpoints in the message flow are a little harder to explain. As described in the result part, multiple copies of the same message showed up at the receiving part, and the simple answer has to be related to the fact that the file endpoints do not support XA transactions. During the implementation part a number of different versions of the Mule software were tested to see whether they had any effect on the XA transactions. There were a few differences, but they had little or no effect on the outcome of the scenario.
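As a minimal sketch of the hand-implemented all-or-nothing fan-out mentioned above: if both receivers are fed from queues on the same broker, a single local JMS transaction already gives the desired behaviour, because the commit only happens if both sends succeed. Queue names are assumptions for the example, and file or database receivers would still need XA or a compensation step.

```java
import javax.jms.*;

/** Fan out one message to two receiver queues so that both get it, or neither does. */
public class AllOrNothingFanOut {
    public static void fanOut(ConnectionFactory factory, Message message) throws JMSException {
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageProducer first  = session.createProducer(session.createQueue("receiver.one"));
            MessageProducer second = session.createProducer(session.createQueue("receiver.two"));
            try {
                first.send(message);
                second.send(message);
                session.commit();      // both receivers get the message, or neither does
            } catch (JMSException failure) {
                session.rollback();    // no partial delivery
                throw failure;
            }
        } finally {
            connection.close();
        }
    }
}
```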
6.5 Persistent delivery performance hit

Figure 6.5. The ESB stores the messages in the queues and topics to disk.

Even though one should not read too much into these numbers, since the tests used are very simple, we could see that activating persistent queues had an effect on performance. This was clearly visible when testing the Mule platform, where we did not have the same difficulties evaluating persistent queues as we had with the Sonic platform. A drop from around 4500 messages to around 2500 messages is quite a big jump. Most platforms are probably not running at the edge of their capacity and would have no problems when turning on persistent queues, but if you have message flows that require this service, the performance cost has to be taken into account. It is probably safe to say that the disk access slows things down, but then again no messages are lost from the queues when the platform fails.

Unfortunately we could not really compare the Sonic platform with the Mule platform in this case, since Sonic did not support millisecond polling with the built-in database service. We had to implement our own simple version, where the polling towards the database occurred when a message was received from a queue. However, this did not increase the load on the database servers as much as we had hoped, and we were still nowhere near the data throughput that the Mule platform achieved when fetching data from the database server. There is a possibility that something was overlooked or that Sonic and the database servers were not optimally configured.

One thing to take into consideration is that JMS queues, or rather JMS topics, are used between every service in a Sonic ESB process, whereas in Mule's case only one JMS queue was used. In other words, there was a larger number of JMS destinations in the Sonic message flows than in the Mule message flows. Another consideration is that different services in a Sonic ESB process take different amounts of CPU time. If a heavy conversion of a message is running, messages which have completed that step may not be delivered at the same speed to the next node in the message flow, since the heavy conversion is eating up the CPU time. There are probably optimizations that could be done to push the limit of the number of messages that can be delivered from the sender to the receiver.
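At the JMS level, the performance difference largely comes down to the producer's delivery mode: with persistent delivery the broker must write every message to disk before the send is acknowledged, with non-persistent delivery it does not, which is faster but loses queued messages if the broker crashes. The sketch below shows that switch in isolation; the queue name is an assumption, and the exact numbers obviously depend on the broker and disk configuration.

```java
import javax.jms.*;

/** Create a producer with either persistent (durable, slower) or non-persistent (faster) delivery. */
public class DeliveryModeExample {
    public static MessageProducer createProducer(Session session, boolean persistent)
            throws JMSException {
        MessageProducer producer = session.createProducer(session.createQueue("flow.inbound"));
        producer.setDeliveryMode(persistent ? DeliveryMode.PERSISTENT
                                            : DeliveryMode.NON_PERSISTENT);
        return producer;
    }
}
```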
Chapter 7 Conclusions

In this chapter the results and the discussion are taken into consideration and connected back to the questions asked at the beginning of the thesis. Possible recommendations, depending on the outcome of the results, are also given.

7.1 Scenario results

As discussed in the previous chapter, the results were largely as expected in most of the test cases, and the two platforms followed each other in the results that were presented. One could, however, be a little critical of the scenarios, since there are many permutations of tests that could have been performed. In retrospect, the message flows used could also have been of a more complex type, but that would probably have lengthened the time needed to complete the work. With more complex message flows it is possible that the platforms would have been pushed harder, because of more transformations running or multiple nodes between the entry and the exit endpoint.

7.2 Platform comparison

During the work, differences between the two platforms have appeared, some larger than others. The question is how important the conclusions are that can be drawn from the tests that were performed. In the introduction we stated that we wanted to determine how the platforms behave in different situations and whether functionality is included to keep a high reliability for a message flow. We also asked ourselves whether the platforms are equal regarding both performance and functionality, but let us begin with the more general differences.

7.2.1 General

The first and probably clearest difference between the platforms is how development is done. In Sonic's case we had the Workbench, which allowed us to create both complex and simple message flows through graphical tools. Through the Workbench you also get a good overview of the message flows being developed. With Mule, on the other hand, the message flows were developed as code in configuration files. Bearing in mind that the message flows developed for the tests in this thesis were rather simple, it was easy to get an overview of the code. The feeling throughout the implementation part was also that it was somewhat more cumbersome to build the message flows in Sonic compared to Mule. This could be due to the graphical tools and the many settings and configurations that you had to find by browsing the graphical environment. In Mule's case, all the settings and possibilities were gathered in the same place. The only drawback was that you had to keep the documentation close at hand to know which configurations could be used for the Mule platform. Spring configuration can also be used for the Sonic platform, in a way similar to how it is used on the Mule platform; whether that is recommended or not I will leave unsaid.

The impression is also that the Sonic platform has more prebuilt functions compared to Mule, where you have to look things up and may have to implement functions yourself. If the message flows are simple this is not a problem, since base functions such as file polling and JDBC connections are supported, but the Sonic platform also has support for further connectivity. In the Mule Community Edition, retry policies had to be implemented by hand to get connections to reconnect to endpoints in case of a failure. Building these components was, however, easy compared to building components for Sonic, since more overhead code had to be included to get a service working under the Sonic platform.
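The retry policy components that had to be written by hand for the Mule Community Edition essentially boil down to a loop of the following kind: retry a connection attempt a fixed number of times with a pause in between. This is a generic illustration rather than Mule's actual retry-policy API.

```java
import java.util.concurrent.Callable;

/** Minimal hand-rolled retry policy: retry an attempt with a fixed delay between failures. */
public class SimpleRetryPolicy {
    public static <T> T retry(Callable<T> connectAttempt, int maxAttempts, long delayMillis)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return connectAttempt.call();
            } catch (Exception failure) {
                last = failure;
                Thread.sleep(delayMillis);   // wait before the next reconnection attempt
            }
        }
        throw last;                          // give up after maxAttempts failures
    }
}
```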
The final impression regarding the more general parts leans towards the Sonic platform, because it feels slightly more robust than its counterpart. Because of its robustness, though, it is more cumbersome in some cases. The Mule platform feels a little smoother to develop for, but cannot offer exactly the same solutions as the Sonic platform does from the start.

7.2.2 Reliable messaging

When it comes to the main part, reliability, it gets even harder to compare the platforms. We had hoped that no messages would disappear from the system under any circumstances, but according to the tests that were performed, both platforms lost messages in some of the scenarios. This is, as pointed out, not a good outcome and is not acceptable at all if the platforms are to be used in a critical environment. It only occurred when the platforms crashed, but some integration solutions have to work every day, and losing data could have catastrophic consequences. In other words, manually implemented functions have to be used together with persistent queues to avoid messages getting lost. During my thesis work the platforms never crashed on their own, which may or may not be an indication that this does not happen frequently.
Beyond that, both platforms handle errors in a good way, where an administrator or similar person can easily be informed of an error by looking through the log files or the dead letter queue. Good error handling is something every system needs in order to call itself reliable, since there is always the possibility of a failure, and it needs to be covered in an error handling plan.

A positive point for the Mule platform is that transactions could easily be enabled on the message flows. In that way we got access to message resending, which meant that for the message flows where the receiving part was only temporarily disconnected, all messages could be delivered safely. Resending messages should also be available in the Sonic platform for regular message flows, but it was not something that could be found and implemented during the short period of time available.

7.3 Problems that arose

The biggest problems that occurred during the tests were all connected to the platform crash test. Messages being sent to the dead letter queue when a service fails in a message flow is of course undesirable, but it is still better than messages disappearing from the system entirely. The latter happened when the data being sent was in between the persistent queues used in the message flows. The hope that no messages would be lost from the system could, as pointed out above, be satisfied by neither Mule nor the Sonic platform with the configurations used in the tests, and there did not seem to be any extra built-in functionality that could remedy this.

The second problem occurred when the message flows included multiple receivers. Before these scenarios were implemented, the possibility of using transactions was investigated, in order to get all-or-nothing delivery to the endpoints. Sadly this turned out not to work, and our transactions could not satisfy this goal. The Mule platform could use the so-called XA transactions, but not for all types of access solutions; file endpoints were one such access solution that had no support for XA transactions and could not benefit from them. For the other scenarios the platforms worked as expected, with messages being sent to the so-called dead letter queue when problems occurred in the transfer. It is also important that the messages are still stored in this dead letter queue if the platform crashes, which was the case when persistent queues were turned on.

Another problem, if you can call it that, is the way errors were handled when Web services were tested. The Mule way of handling errors, by sending an error message back to the caller, spontaneously felt like the better solution compared to the socket timeout received from the Sonic platform. It is possible to implement a service that would do the same thing, or close to it, for the Sonic platform, but it is always good to see this kind of functionality included from the start.
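For completeness, the fault message that such a hand-implemented service would have to return can be built with the standard SAAJ API, as in the minimal sketch below. How it would be wired into a Sonic ESB service is not shown here, and the fault text is an assumption.

```java
import javax.xml.namespace.QName;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPConstants;
import javax.xml.soap.SOAPFault;
import javax.xml.soap.SOAPMessage;

/** Build a SOAP fault reply to return to the caller instead of letting the client time out. */
public class FaultReplyExample {
    public static SOAPMessage buildFault(String reason) throws Exception {
        SOAPMessage reply = MessageFactory.newInstance().createMessage();
        SOAPFault fault = reply.getSOAPBody().addFault();
        fault.setFaultCode(new QName(SOAPConstants.URI_NS_SOAP_ENVELOPE, "Server"));
        fault.setFaultString(reason);   // e.g. "Message could not be delivered"
        return reply;
    }
}
```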
The same can be said about the retry policy, which had to be added by hand for the Mule platform, whereas the Sonic platform handled it automatically.

7.4 Possible solutions

The solutions to the problems mentioned above are hard to predict, but a guess is that some sort of manually implemented component or service is needed to help the platforms reach the stated goal. Messages disappearing from the system could probably be addressed by checking or controlling every message that reaches the receiver. The problem, however, is when a message that has failed to reach its destination is no longer available at the source. This can be the case when polling a folder for files, but it could be solved by always storing messages until it has been confirmed that they have reached their destination (a minimal sketch of this idea follows at the end of this section). Regarding database servers, the data inside the database may have changed since the last transfer attempt, and this needs to be kept in mind when designing manual solutions.

Platform crashes were also of interest, and our guess that there would be no reliability problems as long as persistent queues were used turned out to be incorrect. It could be that something has been overlooked in the configurations, but it seems that persistent queues are not enough to avoid losing messages from the system. Some manual implementation is probably needed to check for missing messages or to confirm retrieved ones. As it stands now, the platforms with their current configurations are not ready for a critical environment.

In the same way, functionality would probably have to be implemented by hand when it comes to multiple receivers using different access solutions. This feels like a common scenario, something the platforms should have functionality for in the base package. But the risk is that the solution on the receiving or sending end requires something specific in order to determine whether a message has reached its destination or not, which ends up in implementing functionality by hand either way. Another question on the receiving end is how to perform a rollback if one receiver in a multi-receiver message flow has not received the message. Rollback with a database server is one thing; rollback on a folder containing files, or some other access solution, can lead to problems. As mentioned in the report, XA transactions could be a solution, provided the access solution supports the protocol. However, data rollback did not work for our message flows; whether that is due to a configuration mistake or something else is hard to say, since the documentation regarding this part was somewhat deficient.
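The store-until-confirmed idea mentioned above can be sketched as follows: keep a copy of every message until the receiver has confirmed it, and re-send whatever is still pending after a failure. A real implementation would use a persistent store rather than an in-memory map, and the String payloads and ids are assumptions for the example.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Keep messages until the receiving end confirms them, so unconfirmed ones can be re-sent. */
public class PendingMessageStore {
    private final Map<String, String> pending = new ConcurrentHashMap<String, String>();

    /** Remember the message before it is handed to the message flow. */
    public void store(String messageId, String payload) {
        pending.put(messageId, payload);
    }

    /** Called when the receiving end has acknowledged the message. */
    public void confirm(String messageId) {
        pending.remove(messageId);
    }

    /** Everything still unconfirmed, e.g. to be re-sent after a crash or disconnect. */
    public Map<String, String> unconfirmed() {
        return new ConcurrentHashMap<String, String>(pending);
    }
}
```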
7.5 Final words on the platforms

Finally, one can say that both platforms have their respective strong and weak points, but both give a good impression of being able to handle the parts they were designed for. The platforms also include much functionality from the start, which an Enterprise Service Bus product should. The fact that Mule ESB is built on open source is, based on my short experience with the platform, neither a disadvantage nor an advantage, which was also briefly mentioned in Open Source ESBs in Action [17]. We can also see from the test results that the different access solutions play a minor part in how the platforms behave in different situations; however, if you want to connect some obscure access solution to the platform bus, you might hit problems that were not shown here.

When it comes to the minor performance tests that were performed, it should again be pointed out that no big conclusions should be drawn from the results. It may look like a remarkable feat by the Mule platform to push such a high flow of messages compared to the Sonic platform when the message flows were quite similar, but the answer probably lies in the JDBC connections that were used, which slowed down the Sonic process and may not have been optimally configured.

Regarding critical environments, it is quite clear that the platforms are not ready without proper configuration and manual implementation on the side to cope with the demands of such an environment. There should, however, be no doubt that they can be used in such an environment; they just need to be adjusted for the situation with manual implementation.
Chapter 8 Further work

In this report I have, as mentioned earlier, only touched on the topic and concentrated on the software itself, the Enterprise Service Bus platform. There is, however, much more to study, and other aspects than those brought up in this report could increase the reliability of message transfer for the platform. These questions about reliability can also be taken to other levels that affect reliable message transfer.

8.1 Hardware

An interesting aspect to look closer at is the hardware side of reliable message transfer: how large an impact does the hardware have on the Enterprise Service Bus? Most ESBs have, as mentioned in the report, some form of clustering capability for handling large data quantities, but does it also work to increase stability? It would be interesting to study how the cluster is affected if, for example, a part of the cluster goes down, so-called redundancy.

- What part does hardware play in the stability of an Enterprise Service Bus platform?
- Can important/critical points in an ESB network be clustered to increase scalability and/or reliability/redundancy?
- Are there any negative aspects to clustering an ESB platform?
- Is clustering used frequently in the integration market when it comes to ESB platforms, or is it just functionality that looks good on paper?
- Do the integration implementations need any modifications to be able to take part in a clustered ESB?
8.2 Organizational level

On an organizational level there are other angles and aspects to take into consideration in a study of this problem. If an error occurs and the message is sent to a so-called dead message queue, as mentioned in the report, who has the responsibility to check these messages? Is there someone monitoring the platforms to guarantee reliability around the clock?

- Study how the organization around the integration solution is built to handle possible message or transfer failures.
- Who has the responsibility, the customer or the company that delivered the integration implementation?
- What is the technical knowledge of the integration solution if it was delivered by a third party?

8.3 Security and integrity

Security and integrity are things that eminently affect the reliability and stability of an integration platform. No one would call a platform or a message transfer reliable when messages get corrupted or modified by an external party. There are many questions that can be asked on this subject regarding Enterprise Service Buses. In Data provenance in SOA: security, reliability, and integrity [22] the authors point out that a large message flow has to be looked upon with different eyes than a small system; it is enough for one node in the flow to be compromised for the security and reliability to fail.

- What kind of functionality exists today to handle security issues around an Enterprise Service Bus?
- Is there functionality for handling the integrity of data packages as well?
- If functionality for security and integrity exists in modern ESBs, is it on by default, and does it have an effect on the platform itself?
- How should testing with a corrupt message be done, and can the ESB discover a modified package?

There are of course other areas one can study regarding reliable message transfer, but these subjects were close to the work done in this report.
Bibliography

[1] Apache ActiveMQ website.
[2] Apache CXF website.
[3] Chappell, David A., Enterprise Service Bus, O'Reilly Media, Inc., Sebastopol, California.
[4] EAI Wikipedia website.
[5] The Eclipse Foundation website.
[6] H2 Database Engine website.
[7] Hohpe, G., and Woolf, B., Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions, Addison-Wesley, Pearson Education, Inc., Reading, Massachusetts.
[8] JMS Sun website.
[9] Keen, M., Acharya, A., Bishop, S., Hopkins, A., Milinski, S., Nott, C., Robinson, R., Adams, J., and Verschueren, P., Patterns: Implementing an SOA Using an Enterprise Service Bus, International Business Machines Corporation.
[10] Leavitt, N., "Are Web Services Finally Ready to Deliver?", Computer (IEEE Computer Society), vol. 37, no. 11, November.
[11] MuleSoft website.
[12] MySQL website.
[13] OpenMQ website.
[14] Ortiz Jr., S., "Getting on Board the Enterprise Service Bus", Computer (IEEE Computer Society), vol. 40, no. 4, April.
[15] Papazoglou, M. P., "Web Services and Business Transactions", World Wide Web, vol. 6, no. 1, March.
[16] Progress Software website.
[17] Rademakers, T., and Dirksen, J., Open Source ESBs in Action, Manning Publications Co., Greenwich, CT.
[18] Schulte, R., Predicts 2003: Enterprise Service Buses Emerge, Gartner Research, December.
[19] SOAP W3C website.
[20] SoapUI website.
[21] Tai, S., Mikalsen, T. A., and Rouvello, I., "Using Message-oriented Middleware for Reliable Web Services Messaging", Web Services, E-Business, and the Semantic Web, Springer-Verlag, Berlin, Heidelberg.
[22] Tsai, W. T., Wei, X., Chen, Y., Paul, R., Chung, J., and Zhang, D., "Data provenance in SOA: security, reliability, and integrity", Service Oriented Computing and Applications, vol. 1, no. 4, December.
[23] WampServer website.
[24] XA Wikipedia website.
[25] XML W3C website.
[26] XSLT W3C website.
Appendix A Performance tests

The values reported in the tables below are rounded mean values from running the tests ten times in a row for 60 seconds each. The tests were started directly after a system cold start.

A.1 Sonic

Persistent   Frequency   Rows per cycle   Messages processed   Messages inserted
No           1 ms        2 rows
No           1 ms        200 rows
No           100 ms      2 rows
No           100 ms      200 rows
Yes          1 ms        2 rows
Yes          1 ms        200 rows
Yes          100 ms      2 rows
Yes          100 ms      200 rows

A.2 Mule

Persistent   Frequency   Rows per cycle   Messages processed   Messages inserted
No           1 ms        2 rows
No           1 ms        200 rows
No           100 ms      2 rows
No           100 ms      200 rows
Yes          1 ms        2 rows
Yes          1 ms        200 rows
Yes          100 ms      2 rows
Yes          100 ms      200 rows
A.3 Explanation

Persistent: whether persistent queues are used or not.
Frequency: the frequency with which the ESB platform polls/fetches data from the sending database server.
Rows per cycle: the number of rows with data that are fetched from the sending database server in each cycle.
Messages processed: the number of messages that have passed all transformation services inside the ESB and are waiting to be sent, or have been sent, to the receiving database server.
Messages inserted: the number of messages that have been inserted at the receiving database server, in other words, the messages that have completed the message flow.