SOAP Reply Scaling Problem Our flows are a variation of the built-in pattern called Service façade: MQ request-response
SOAP Reply Scaling Problem Basically, this pattern exposes an MQ request-reply exchange as a SOAP web service: 1. Request. 2. Backend reads the message and replies. 3. Response.
SOAP Reply Scaling Problem Scaling options include: 1. Additional instances in the same Integration Server. 2. Deploying to additional Integration Servers (execution groups) on the same Integration Node. 3. Deploying to additional Integration Servers on a different Integration Node (same or different machine). The problem is that we are currently locked out of options 2 and 3: we can't scale by deploying our .bar file to more Integration Servers (same or different Integration Nodes, same or different machines). The problem shows up as BIP3704 Message does not contain a valid SOAP Reply Identifier at all whenever an Integration Server processes a response for which it did not process the request. Continued on next slide
SOAP Reply Scaling Problem BIP3704 occurs because SOAP reply identifiers are local to an Integration Server. The documentation states: "The SOAPReply node is typically used with the SOAPInput node, which can be included in the same message flow, or a different flow in the same integration server."
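The locality of reply identifiers can be illustrated with a small model. This is not IIB code; it is a sketch of the mechanism, with hypothetical names, showing why a reply that lands on a different Integration Server than the one that stored the identifier triggers BIP3704.

```python
# Illustrative model (not IIB code) of per-server SOAP reply identifier
# tables. Each server keeps its pending HTTP connections keyed by a
# locally generated reply identifier; another server cannot resolve it.
import uuid

class IntegrationServer:
    def __init__(self, name):
        self.name = name
        self.pending_replies = {}  # reply identifier -> open HTTP connection

    def handle_soap_request(self, http_conn):
        reply_id = str(uuid.uuid4())       # identifier is local to this server
        self.pending_replies[reply_id] = http_conn
        return reply_id                     # travels with the MQ request

    def handle_mq_reply(self, reply_id, payload):
        conn = self.pending_replies.pop(reply_id, None)
        if conn is None:
            # This server never saw the request, so the identifier is unknown.
            raise LookupError("BIP3704: no valid SOAP Reply identifier")
        return f"reply '{payload}' sent on {conn}"

server1 = IntegrationServer("IS1")
server2 = IntegrationServer("IS2")

rid = server1.handle_soap_request("conn-42")
print(server1.handle_mq_reply(rid, "OK"))   # same server: reply succeeds
try:
    server2.handle_mq_reply(rid, "OK")      # different server: BIP3704
except LookupError as e:
    print(e)
```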
Scaling: Vertical (diagram). 1. SOAP request arrives at Integration Server 1 on Machine 1. 2. MQ request sent to the Backend. 3. MQ reply. 4. MQ input arrives at Integration Server 2. 5. BIP3704 Message does not contain a valid SOAP Reply Identifier at all: Integration Server 2 does not recognize a reply identifier generated by Integration Server 1.
Scaling: Horizontal (diagram). 1. SOAP request arrives via the HTTP Load Balancer at the Integration Server on Machine 1; the two machines' queue managers form an MQ Cluster. 2. MQ request sent to the Backend. 3. MQ reply. 4. MQ input arrives at the Integration Server on Machine 2. 5. BIP3704 Message does not contain a valid SOAP Reply Identifier at all: the second Integration Server does not recognize a reply identifier generated by the first.
SOAP Reply Scaling Problem Questions: Is there a simple way to scale these message flows? If not, which workarounds would you recommend?
Alternative Solutions
On further investigation, this problem falls into a category known as message affinity: certain messages can only be processed by specific servers. Message affinities should be removed whenever possible; if they cannot be removed, they can be circumvented.
Broker-wide listener One solution to the vertical scaling problem is to use the broker-wide listener: all HTTP connections and reply identifiers are then handled by a single listener shared across the node's Integration Servers. Pros: Less configuration. Cons: Lower throughput compared to several embedded listeners.
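Switching to the broker-wide listener is a configuration change, roughly as follows. This is a sketch: the node name MYNODE and server name default are placeholders, and the exact property names should be checked against the mqsichangeproperties documentation for your IIB version.

```shell
# Start the node-wide (broker-wide) HTTP listener
mqsichangeproperties MYNODE -b httplistener -o HTTPListener -n startListener -v true

# Tell SOAP nodes in a given Integration Server to use the broker-wide
# listener instead of the server's embedded listener
mqsichangeproperties MYNODE -e default -o ExecutionGroup -n soapNodesUseEmbeddedListener -v false

# Restart the node so the change takes effect
mqsistop MYNODE
mqsistart MYNODE
```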
Broker-wide listener (diagram). 1. SOAP request arrives at the broker-wide listener on Machine 1. 2. MQ request sent to the Backend by Integration Server 1. 3. MQ reply. 4. MQ input arrives at Integration Server 2. 5. SOAP reply returned through the shared broker-wide listener.
WS-Addressing With WS-Addressing, SOAP calls are made asynchronously, which allows the request to be received on one IIB node and the response to be sent from a different one. A DataPower appliance can translate consumers' synchronous SOAP calls into asynchronous WS-Addressing calls. Pros: No changes in backend or cluster; response workload is balanced. Cons: Changes in all message flows; either all consumers change to WS-Addressing, or DataPower performs the translation to avoid consumer changes; additional work is needed to correlate SOAP requests and responses.
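The correlation work comes down to a few headers. A minimal sketch of the WS-Addressing 1.0 headers involved, using the standard namespace URIs (the endpoint addresses here are made up for illustration): the request carries wsa:MessageID and wsa:ReplyTo, and the asynchronous response points back at the request via wsa:RelatesTo.

```python
# Sketch of the WS-Addressing headers a consumer (or the translating
# DataPower layer) adds, and the correlation the translator must do.
import xml.etree.ElementTree as ET

WSA = "http://www.w3.org/2005/08/addressing"
SOAP = "http://www.w3.org/2003/05/soap-envelope"

def request_headers(message_id, reply_to):
    """Headers added to the asynchronous SOAP request."""
    header = ET.Element(f"{{{SOAP}}}Header")
    ET.SubElement(header, f"{{{WSA}}}MessageID").text = message_id
    rt = ET.SubElement(header, f"{{{WSA}}}ReplyTo")
    ET.SubElement(rt, f"{{{WSA}}}Address").text = reply_to
    return header

def response_headers(request_message_id):
    """The asynchronous response correlates back via wsa:RelatesTo."""
    header = ET.Element(f"{{{SOAP}}}Header")
    ET.SubElement(header, f"{{{WSA}}}RelatesTo").text = request_message_id
    return header

req = request_headers("urn:uuid:1234", "http://datapower.example/replies")
resp = response_headers("urn:uuid:1234")

# Correlation check the translating layer must perform before turning
# the async response back into a synchronous reply:
relates = resp.find(f"{{{WSA}}}RelatesTo").text
assert relates == req.find(f"{{{WSA}}}MessageID").text
```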
WS-Addressing (diagram). 1. SOAP request from the consumer. 2. SOAP request with WS-Addressing, after translation by DataPower, routed through the HTTP Load Balancer to the Integration Server on Machine 1 (the machines' queue managers form an MQ Cluster). 3. MQ request to the Backend. 4. MQ reply. 5. MQ input on the Integration Server on Machine 2. 6. SOAP response with WS-Addressing back to DataPower. 7. SOAP reply to the consumer.
Cluster workload user exit A cluster workload user exit uses the MQMD ReplyToQMgr field to route each response message to the same IIB node that handled the request. Pros: No changes in backend or flows; simple. Cons: A custom user exit must be developed.
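A real cluster workload exit is written in C against IBM MQ's MQWXP/MQWDR structures, but the selection logic it needs is small. A Python sketch of that logic (function and queue manager names are hypothetical): pick, from the candidate destinations, the queue manager named in the request's MQMD.ReplyToQMgr.

```python
# Python sketch of the decision a cluster workload exit would make in C.
# In the real exit the chosen destination is reported back to the queue
# manager as a 1-based index into the destination array.

def choose_destination(reply_to_qmgr, destinations):
    """destinations: candidate cluster queue manager names, in the order
    the exit sees them. Returns the 1-based index of the destination that
    pins the reply to the node that handled the request."""
    for i, qmgr in enumerate(destinations, start=1):
        if qmgr == reply_to_qmgr:
            return i
    return 1  # fall back to the queue manager's default choice

# The reply to a request processed on IIB node QM.IIB1 goes back there:
idx = choose_destination("QM.IIB1", ["QM.IIB2", "QM.IIB1"])
print(idx)  # 2
```

The backend is untouched because ReplyToQMgr is already set by the requesting flow; the exit only influences where the cluster delivers the reply.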
Cluster workload user exit (diagram). 1. SOAP request via the HTTP Load Balancer to the Integration Server on Machine 1. 2. MQ request to the Backend through the MQ Cluster. 3. MQ reply, directed by the cluster workload user exit. 4. MQ input on the same Integration Server that handled the request. 5. SOAP response.
Mirror backend A second backend queue manager is created to mirror the IIB nodes. Pros: No changes in backend processes or flows. Cons: Higher resource usage: the second queue manager and its related backend processes demand more resources, and every additional IIB node requires another backend mirror.
Mirror backend (diagram). 1. SOAP request via the HTTP Load Balancer to the Integration Server on Machine 1. 2. MQ request to that machine's own Backend. 3. MQ reply. 4. MQ input on the same Integration Server. 5. SOAP response. Each machine has its own mirrored Backend instance.