Long time since the last post…but I have not been idle. I have been working on a really big double post regarding integration and Azure Service Bus that will be published in short succession. It is based on a customer scenario: what started as a best-practices discussion about Azure Service Bus led to a theoretical and conceptual discussion about cloud integrations as a whole, and in more detail about how to use Azure Service Bus as a key part of an integration scenario – in the cloud, on premise and in mixed scenarios – what the best practices are and what you should think about.
While it is really difficult to define a pattern that works for all scenarios, there are key points to discuss and things that should be considered. This is a really big topic, and as always when dealing with integrations you seldom have full freedom but have to deal with the limitations of the participating systems.
I will post two entries in this series: part 1 – this one – discusses integration options on and off premise, and the second discusses the features of Azure Service Bus in particular, with a focus on best practices. So let's get started…
Integrations in 2016
Everyone who knows me (professionally) knows that I am an eager promoter of centralized integration platforms such as BizTalk (or WebMethods, Mule or similar for that matter). The simple reason for this is that point-to-point integrations often create an awful, unmaintainable spiderweb where no one knows what is connected to what, and a change somewhere may propagate with disastrous effect for a company. Other problems that can come with this scenario are:
* Who is the source? (System B uses System A's data, and then System C accesses System B's combined data.)
* Inconsistent data in different databases.
* Maintenance hell (nobody dares to change anything due to the risks). The systems are not always aware of who consumes data from them.
A network of point-to-point integrations grows exponentially riskier the more systems you have involved, because the chain can be broken at so many points and it is difficult to predict the consequences. These integrations are typically poorly monitored, and some of them are often not (commonly) known in the enterprise.
I am not saying that an integration platform solves all integration problems, but it is easier to get an overview of and control over your data, as you can collect and distribute information to participating systems in a more structured way. You often get logging, resends and ordering functionality as part of the package. You can, for example, distribute orders to a number of recipients from a single source in a controlled and monitored way. I am aware that BizTalk (and even more so WebMethods) is quite expensive, but I believe that in most cases you should treat your integrations as business-critical functionality, aim for high availability and failover when setting this up in your company, and put a high value on the integrations. With that in mind, paying a sum every year for a structured integration platform may be acceptable.
Alternatives to (data-shuffling) Integration
First of all, you likely cannot avoid integrations. You need to communicate with others in some way in almost every scenario. And as the world moves more and more towards SaaS solutions, you cannot always control where and how you access or place your data. The purchased CRM system needs the customer information, and so does your invoicing system. So your data will need to be duplicated and synchronized in a structured and controlled way.
Some years ago, some enterprises tried to avoid data-shuffling integrations with API-/service-based system landscapes. These tried to avoid data duplication in favour of each system providing a real-time API to its data.
This approach had a few drawbacks. For one, each system could become a single point of failure for the entire enterprise (if the article system goes down, most other systems come to a halt as well). Secondly, these services were often designed with an inside-out approach, i.e. the owning system decided what should be in the services and tried to make generic services, instead of (my preferred approach) an outside-in design where services are adapted to the consumer from both a security and a payload perspective (not all consumers are allowed to see all data, and sending a huge payload with hundreds of attributes when you want one field is not very efficient). Besides, based on my experience, when you look at the consuming systems you will probably notice that they have stored the article name and price in their own system anyway after looking up the product, effectively building up their own copy of the data. Even if you believe that you are autonomous, someone will want the data mirrored into an analysis platform or fed to the invoicing system. So integrations are there whether you'd like them or not.
More modern SOA-based system landscapes (for example a well-designed Enterprise Service Bus pattern) are today usually built differently: using proper integrations with suitable technology (such as an integration platform) to support a common service layer. Delivering this requires a functioning governance setup that can maintain the needs of the enterprise. But as this section is a bit off topic, I will leave the matter here and just stick with the claim that systems today integrate with others.
Azure Service Bus
This post concentrates on how to utilize Azure Service Bus in B2B integrations. But, just to make it clear from the beginning, Azure Service Bus is not an integration platform/software as such and cannot be compared to commercial products like BizTalk etc. Azure Service Bus is a PaaS service, and its queues/topics could rather be compared to a premium cloud-based version of MSMQ or IBM MQ Series. Service Bus Relays are web services (WCF) capable of functioning through most firewall situations and can enable communication between cloud and on-premise systems in a secure way. The number of SDKs/APIs, plus the AMQP and REST protocols, makes the Service Bus a very useful component, and most modern systems should be able to post messages to or consume messages (or web services) from it.
The Service Bus PaaS service is by now very mature and could in many cases be the first choice for asynchronous message handling in the cloud. Some Service Bus components are not mentioned at all in this post, for the reason that they are not ideal for B2B communication (such as the Notification Hub and Event Hub).
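As an illustration of how accessible the REST interface is: every plain HTTP call to the Service Bus carries a Shared Access Signature token, and generating one needs nothing beyond standard-library crypto. A minimal sketch in Python; the resource URI and key values in the usage example are made up.

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(resource_uri, key_name, key, ttl_seconds=3600):
    """Build a SharedAccessSignature token for the Service Bus REST API.

    The signature is an HMAC-SHA256 over the URL-encoded resource URI
    and the expiry timestamp, signed with the shared access key.
    """
    expiry = int(time.time()) + ttl_seconds
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    string_to_sign = f"{encoded_uri}\n{expiry}".encode("utf-8")
    signature = base64.b64encode(
        hmac.new(key.encode("utf-8"), string_to_sign, hashlib.sha256).digest()
    )
    return (
        f"SharedAccessSignature sr={encoded_uri}"
        f"&sig={urllib.parse.quote_plus(signature.decode())}"
        f"&se={expiry}&skn={key_name}"
    )

# Hypothetical namespace and key, for illustration only:
token = generate_sas_token(
    "https://contoso.servicebus.windows.net/orders",
    "RootManageSharedAccessKey",
    "not-a-real-key",
)
```

The resulting token goes into the `Authorization` header of each request, which is what lets almost any partner system talk to the Service Bus with nothing more than an HTTP client.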
So think of Azure Service Bus as a capable component in your integration landscape. If you have an integration platform (and I hope you do), I suggest that you use it to process your Service Bus messages, and potentially you may want to call Service Bus Relays for synchronous needs. If you don't have a specialized integration platform, there are multiple components in Azure that you can use to perform your integrations, such as combining the Service Bus with Logic Apps, WebJobs, Automation, Azure Functions, (Worker Roles) etc.
I am no longer mentioning the standalone BizTalk Services PaaS service separately, because I don't think that component will be a critical part of new integrations. A company older than, let's say, 5 years likely has more legacy systems and more advanced integrations, and a newer company will probably build directly for the cloud using other components. BizTalk Services offered traditional VETR (Validate, Enrich, Transform, Route) operations but was a bit tricky to set up and relied on the now more or less obsolete ACS service. The BizTalk Services functionality is now available in Logic Apps, so it would be wiser to use Logic Apps and the BizTalk APIs in there in such a scenario. There is a great article on how to use the BizTalk components in Logic Apps here: https://azure.microsoft.com/en-us/documentation/articles/app-service-logic-create-eai-logic-app-using-vetr/
Returning to the main topic, Azure Service Bus also simplifies getting things working in your corporation by (most likely) not forcing any firewall, DMZ server or tunnel setup. By using queues/topics you can also perform maintenance on your own backend systems and still receive messages in the meantime. Note that the partner should probably have some resend logic on their end as well to handle internet hiccups (but if you are lucky they also have some integration software handling that for them).
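That resend logic can be as simple as an exponential-backoff loop around the send call. A sketch in Python: the send function, the error type and the delay values are assumptions for illustration, not anything the Service Bus prescribes.

```python
import random
import time

def send_with_retry(send_fn, message, max_attempts=5, base_delay=1.0):
    """Call send_fn(message); on ConnectionError, wait and retry with an
    exponentially growing delay plus a little jitter."""
    for attempt in range(max_attempts):
        try:
            return send_fn(message)
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            # 1s, 2s, 4s, 8s... plus jitter so retries don't synchronize
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
```

An integration platform usually gives you this (and dead-lettering of messages that never get through) out of the box; the point is only that the partner side needs *something* of this shape.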
The trends in integration
We integrate more and more. We collect tons of data from devices and other systems. Today we also ingest data that we didn't receive before, such as information from social networks (to our CRM system?), weather data (for our transport reliability predictions) etc.
“Be gone batch update, hail realtime!” In the old days we designed solutions to receive big chunks of data (large files through FTP) at low-peak hours, to affect bandwidth and system performance as little as possible. Hardware was too expensive to handle both peak loads and batch-updating the base data, so we decided to process data in night batches instead. In some cases we might then make decisions on old data. Do you recall the stale stock figures in the early internet webshops, which were based on the inventory of the night before (read: the last batch)? Today we want to handle data on the fly and have predictive analysis in realtime. Customers want to know whether their order actually reserved a product in stock so that they will get their goods quickly, and will not accept an order where we have to backorder from our suppliers. With cloud computing we can now scale up to handle the processing in realtime during peaks and scale down in low-usage hours depending on needs, and you pay for the capacity you use.
Azure Service Bus is a great cloud PaaS service which can handle asynchronous realtime messaging, on-premise integration and device integration (through Event Hubs), which in turn can be used for realtime analysis with Stream Analytics and realtime predictions with machine learning. It also features Notification Hubs to handle device communication.
The topic today is however not device communication so I will focus these posts on the B2B suitable components in Azure Service bus, i.e. queues, topics and relays.
Some B2B Scenarios (with Azure SB)
Partner to (mostly) On Premise Data Centers
This is the likely scenario for a company that has not come very far in its cloud transformation. I have seen many reasons for this: heavy investments in an own data center, fear of storing personal data, contractual issues and others. Using the Service Bus for temporary storage of messages in a B2B scenario, or allowing the partner to call on-premise services (without port openings), can be very useful for your solution. It provides a secure gateway accessible to both parties, compared to setting up DMZ servers (with for example FTP) on your network with all the firewalls, tunneling and routing needed to get that working.
With BizTalk 2013 (as an example) you can set up the SB-Messaging adapter to consume and post messages from/to the Service Bus and use it like any other adapter. Other products may have more problems, and in the worst case you may have to write some code to connect. But with a simple REST interface you should not have many problems with this.
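If you do end up coding against the REST interface, sending is a single authenticated POST to the queue's `/messages` path. A hedged sketch of composing that request in Python: the namespace and queue names are invented, and the returned dict would be handed to whatever HTTP client your platform uses.

```python
def build_send_request(namespace, queue, sas_token, body):
    """Compose the pieces of a Service Bus REST 'send message' call.

    Sending is a POST to https://{namespace}.servicebus.windows.net/{queue}/messages
    with a SharedAccessSignature token in the Authorization header.
    """
    return {
        "method": "POST",
        "url": f"https://{namespace}.servicebus.windows.net/{queue}/messages",
        "headers": {
            "Authorization": sas_token,  # SharedAccessSignature sr=...&sig=...
            "Content-Type": "application/json",
        },
        "body": body,
    }

# Hypothetical names, for illustration only:
request = build_send_request(
    "contoso", "orders", "SharedAccessSignature sr=...", '{"orderId": 1}'
)
```

A successful send returns HTTP 201; anything a partner needs beyond an HTTP client and the token is essentially zero, which is why "write some code in the worst case" is rarely much code.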
So why would you use Service Bus messaging for a B2B on-premise scenario? The main benefits are:
- The caller does not depend on your backend (you can upgrade your on-premise system without the risk of losing data).
- You don't have to open any firewalls/tunnels. This in turn gives a short time from idea to implementation, and you avoid potentially time-consuming network tracing.
- You can handle large amounts of messages without having to set up on-premise VMs sized for the maximum number of incoming messages.
- It is a cheap and reliable messaging component.
- It can be used by (almost) any partner, as the number of APIs/SDKs, including REST, enables most partners/systems to post or consume messages with little effort.
- It is a shared access point in the cloud: it is easy to set up a common area in which you can relay messages with little configuration.
In case you need your partners to make synchronous calls into your on-premise network (i.e. when messaging won't work in a scenario), you can utilize Service Bus Relays to create WCF services in your own datacenter that register themselves in the Service Bus and enable calls to the on-premise WCF service (I will discuss Relays in more detail later).
Partner to Azure Only
This is a more common scenario for newer companies that started after the whole cloud revolution. You went on the offensive, put your applications directly in the cloud, and have no or very few on-premise systems.
The way forward in this scenario may differ a bit. First of all, Relay Services are out of the question (for synchronous operations you would have a web app or API app directly in the cloud instead). So while you could technically still create relays and host them in the cloud, the purpose is lost as you don't have to bypass an on-premise firewall.
As far as asynchronous messaging goes, you would likely use Azure Service Bus to post/receive messages to/from your partners through queues/topics. In many cases you don't have an integration platform in Azure yet, which may be fine if you have few integrations to few systems. In such a case you will probably utilize Logic Apps, Worker Roles or WebJobs to post and process messages in the queues. I would not rule out setting up VMs in Azure with BizTalk (or similar) once the number of cloud deployments and integrations increases. Having hundreds of Logic Apps/Workers/WebJobs to monitor will likely not be the most efficient scenario and can easily grow into an unmaintainable spiderweb of point-to-point integrations (see my reasoning about point-to-point earlier).
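Topics are what keep that spiderweb at bay: one sender publishes, and each consumer gets its own filtered subscription instead of a direct line to the sender. The toy Python model below only illustrates the routing semantics; the real Service Bus expresses the rules as SQL-like filters on message properties, and the subscription names and predicates here are invented.

```python
class Topic:
    """In-memory stand-in for a Service Bus topic with filtered subscriptions."""

    def __init__(self):
        self.subscriptions = {}  # name -> (filter rule, pending messages)

    def subscribe(self, name, rule):
        """rule is a predicate over message properties (like a SQL filter)."""
        self.subscriptions[name] = (rule, [])

    def publish(self, body, **properties):
        """Deliver a copy of the message to every matching subscription."""
        for rule, inbox in self.subscriptions.values():
            if rule(properties):
                inbox.append(body)

    def receive(self, name):
        """Pop the oldest pending message for a subscription, or None."""
        _, inbox = self.subscriptions[name]
        return inbox.pop(0) if inbox else None

# Hypothetical subscriptions: one per consumer, each with its own filter.
orders = Topic()
orders.subscribe("eu-warehouse", lambda p: p.get("region") == "EU")
orders.subscribe("analytics", lambda p: True)  # sees everything
orders.publish("order-1", region="EU")
orders.publish("order-2", region="US")
```

The sender knows nothing about who consumes what: adding the analytics subscription required no change to the publisher, which is precisely the decoupling that point-to-point integrations lack.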
The benefits of the Service Bus in this scenario are basically the same as for the on-premise alternative: cheap, reliable, virtually cross-platform, handles peaks in messages, simple to get started with, etc. Your costs in this solution will not come from the Service Bus but from your other cloud services.
Partner to Mixed Landscape
This is likely the most common scenario today for existing businesses. You have a backbone of legacy systems, while you have probably started to offload some parts and some solutions to the cloud. Your integrations would affect both on-premise and cloud resources.
Here you have a more complex decision to make. You likely have BizTalk or a similar component on premise to handle integrations, but using on-premise BizTalk to pull messages from Azure SB queues down for processing, only to update a cloud resource again, will consume network bandwidth. Downloading a message from Azure just to update a database in Azure is bad usage of your limited bandwidth. Note that if you enrich the message, or consume it in other on-premise systems as well, you probably still want to do it: there is then a deliberate purpose in downloading the message.
Setting up a separate cloud BizTalk to handle cloud integrations while you keep your on-premise BizTalk could be a costly solution, as it will effectively double the license cost for your integration software (though not necessarily double: if half the payload is in the cloud environment and half on premise, you could have fewer core licenses in each location). Depending on where the majority of the integrations are performed, there are some options here:
* Large numbers of integrations in both locations. If you are big and prosperous enough to afford it, it would probably be wise to have an integration platform both on premise and in the cloud.
* For corporations with a majority of on-premise solutions, you could probably accept a few Logic Apps/WebJobs/Workers to handle the cloud-destined messages and avoid sending unneeded messages to your on-premise data center.
* If you have a majority of integrations in the cloud, you probably want your integration software in the cloud, and maybe complement it with for example Relay Services to communicate with on-premise resources.
The Windows Service Bus
It is worth mentioning that you can use the same API/syntax as Azure Service Bus to send messages on premise with the Windows Service Bus as well. In fact, in most cases the only difference is the connection string. The on-premise Windows Service Bus is a bit more complex to set up and maintain (certificate hell) than its cloud twin, but once you have set it up correctly and run it for a while, you'll notice the greatness of message-based communication with the same API for both cloud and on-premise systems. The fact that you can easily move systems to/from the cloud without having to change the integration is really neat. If you are interested in the Windows Service Bus you could start here (TechNet).
The benefits of Relay Services
A few notes about Azure Relay Services in a B2B context. The Relay services allow for authenticated partners (you can turn authentication off, but for B2B scenarios I cannot see that as a valid option). The purpose is to have web services inside your organisation (on premise), close to the actual databases and information needed, that are accessible to the outside world through Azure. The caller calls the Azure Service Bus endpoint and will not know where the actual service is hosted. You can, and should, have multiple listeners (instances of the service on premise) so that you have failover and round-robin load balancing between service instances.
The key here is that the on-premise service registers itself in Azure, i.e. the communication is initiated from inside (this is why you can bypass the firewall on the way in). The services are WCF but the endpoints in Azure are set up as either SOAP or REST. I will in the next post give some advice on how to make this as secure as possible.
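The failover and round-robin behaviour can be pictured with a small simulation. This is plain Python and only models the dispatch pattern; in reality the relay in Azure does this distribution for you across whatever listeners have registered, and the listener names here are invented.

```python
import itertools

class RoundRobinRelay:
    """Toy model of relay dispatch across registered listeners.

    Each on-premise service instance 'registers' itself; incoming calls
    are then spread over the registered listeners in round-robin order.
    """

    def __init__(self):
        self.listeners = []
        self._cycle = None

    def register(self, listener):
        """A listener is any callable handling one request."""
        self.listeners.append(listener)
        self._cycle = itertools.cycle(self.listeners)

    def call(self, request):
        """Dispatch a caller's request to the next listener in turn."""
        return next(self._cycle)(request)
```

With two registered instances, consecutive calls alternate between them; if you also drop a failed listener and retry on the next one, you get the failover behaviour the relay provides.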
Ready for next part?
So now that I have spoken so well of the Service Bus, what should you think about when setting it up and using it? You can read about that in the second part of this post, which will be available very soon. The next part will be much more hands-on after this theoretical one.