
Architecture of a Polyglot Azure Application


Introduction

I started working on a C# project that will communicate requests to several different partners, and receive feedback from them. Each partner receives requests in its own way. This means that sending requests can (currently) be done by:

  • calling into a REST service
  • preparing a file and putting it on an FTP share
  • sending a mail (SMTP)

Needless to say, the formats of these requests are never the same, which complicates things even more.
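
Each of those channels needs its own delivery code. As a taste of one variant, here is a minimal sketch of an SMTP sender using System.Net.Mail; the class name, host, and addresses are invented for the example:

```C#
using System.Net.Mail;

// Hypothetical SMTP delivery for one partner; host and addresses are invented.
public class SmtpRequestSender
{
    public void Send(string partnerFormattedBody)
    {
        using (var client = new SmtpClient("smtp.example.com"))
        using (var mail = new MailMessage(
            "orders@example.com",          // sender
            "intake@partner.example",      // partner mailbox
            "New order request",           // subject
            partnerFormattedBody))         // body, already in the partner's format
        {
            client.Send(mail);
        }
    }
}
```

The REST and FTP variants would look completely different, which is exactly the point: the channel, not only the format, varies per partner.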

Receiving feedback can also be done in different ways:

  • receiving a file through FTP. These files can be CSV, JSON, or XML files, each in its own format
  • polling by calling a web service on a schedule

So we need an open architecture that can send a request and store the feedback received for that request. This feedback consists of changes in the state of a request. I also noticed that this is a stand-alone application that can easily be moved into the cloud. We use Microsoft Azure.
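
To keep the rest of the discussion concrete, here is a minimal sketch of what such a common request and its feedback records could look like. These types and names (OrderRequest, FeedbackRecord, RequestState) are my own illustration; the article does not prescribe an actual schema.

```C#
using System;

// Hypothetical common request format, shared by all callers.
public class OrderRequest
{
    public Guid RequestId { get; set; }      // correlates feedback with the original request
    public string PartnerCode { get; set; }  // selects the partner-specific handling
    public string Payload { get; set; }      // the order data, still in the common format
    public DateTime CreatedUtc { get; set; }
}

// Feedback is nothing more than a series of state changes for a request.
public enum RequestState { Received, Sent, Accepted, Rejected, Completed }

public class FeedbackRecord
{
    public Guid RequestId { get; set; }
    public RequestState NewState { get; set; }
    public DateTime TimestampUtc { get; set; }
    public string Details { get; set; }      // optional partner-specific message
}
```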

Here is a diagram for the current specifications:

[Diagram: current specifications]

First Observations

When I analyzed this problem, I immediately noticed some things that could make our lives easier. And when I can make things simpler, I’m happy!

The Current Flow

Currently, everything is handled in the same application, which is just a plain C# solution in which a couple of the protocols are implemented. This is OK because there are currently only 2 partners, but this will be extended to 20 partners by the end of the year.

There are adapters that transform the request into the right format for the corresponding partner, and then send it through a REST service. So we already have a common format to begin with. If the “PlaceOrder” block can receive this common format, we know at least what comes in. And we know what we can store in the “Feedback Store” as well; this will be a subset of the “PlaceOrder request.”

“PlaceOrder” then has to switch per partner to know which data format to transform the request into, and send it to that partner.
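
A sketch of how that switch could be kept manageable, building on the OrderRequest type sketched earlier: hide each partner behind an adapter interface and dispatch on the partner code. The interface and class names are assumptions, not the project's actual types.

```C#
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical adapter contract: one implementation per partner.
public interface IPartnerAdapter
{
    string PartnerCode { get; }
    Task SendAsync(OrderRequest request);  // transform into the partner format and deliver (REST, FTP, SMTP, ...)
}

public class PlaceOrderDispatcher
{
    private readonly Dictionary<string, IPartnerAdapter> _adapters
        = new Dictionary<string, IPartnerAdapter>();

    public PlaceOrderDispatcher(IEnumerable<IPartnerAdapter> adapters)
    {
        foreach (var adapter in adapters)
            _adapters[adapter.PartnerCode] = adapter;
    }

    public Task DispatchAsync(OrderRequest request)
    {
        if (!_adapters.TryGetValue(request.PartnerCode, out var adapter))
            throw new KeyNotFoundException(
                $"No adapter registered for partner '{request.PartnerCode}'.");
        return adapter.SendAsync(request);
    }
}
```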

On the feedback side, we know that feedback comes in several formats, over several channel types. So in the feedback handler, we need to normalize this data so that we can work with it in a uniform way. Also, some feedback arrives as a file (SFTP) containing several feedback records, and some arrives one record at a time (for example, when polling). This needs to be handled as well.

So now, we can think about some more building blocks. The green parts are new:

[Diagram: the building blocks; new parts in green]

  • The “Initiator Service” will receive a request from the application (and, in the future, from multiple applications). All it will do is transform the request into a standard format and put it on the “Request Queue”. Some common validations can already be done here. Creating a separate service makes it easy for future applications to use this functionality as well. (A sketch of this service follows the list.)
  • We introduce the “Request Queue”, which will receive the standardized request.
  • And now, we can create the “PlaceOrder queue handler”, which wakes up when a request arrives on the queue and then handles all the messages on it.
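
As announced above, here is a minimal sketch of the “Initiator Service” core. It assumes Azure Storage Queues and the Azure.Storage.Queues client package (a Service Bus queue would work just as well); the queue name is illustrative.

```C#
using System.Text.Json;
using System.Threading.Tasks;
using Azure.Storage.Queues;  // NuGet: Azure.Storage.Queues

public class InitiatorService
{
    private readonly QueueClient _requestQueue;

    public InitiatorService(string connectionString)
    {
        // "requests" is an illustrative name for the "Request Queue".
        _requestQueue = new QueueClient(connectionString, "requests");
    }

    public async Task SubmitAsync(OrderRequest request)
    {
        // Common validations can be done here (required fields, known partner code, ...).
        string message = JsonSerializer.Serialize(request);
        await _requestQueue.SendMessageAsync(message);
    }
}
```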

Advantages of Adding Queues

  • Separation. A nice (and simple) separation between the caller (Application -> “Initiator Service“) and the callee (the “PlaceOrder Queue Handler“).
  • Synchronization. In the queue handler, we only need to deal with one request at a time. Everything is nicely synchronized for us.
  • Elasticity. When needed, we can add more Queue Handlers. Azure can handle this automatically for us, depending on the current queue depth.
  • Load isolation. Big loads will never slow down the calling applications, because all they have to do is put a message on the queue. So each side of the queue can work at its own pace.
  • Testing. Initiating the Queue Handler means putting a message on the queue, which can be done using tools such as the Storage Explorer. This makes testing a lot easier. (A sketch of such a handler follows this list.)
    • Testing the “Initiator Service”: Call the service with the right parameters, and verify that the message on the Request Queue is correct.
    • Testing the “Queue Handler”: Put a request in the correct format on the queue (for example, with the Storage Explorer) and take it from there.
    • Both are effectively decoupled now.
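
One natural way to get this behaviour is a queue-triggered Azure Function: Azure wakes it up per message and scales it with the queue depth. A minimal sketch, assuming the in-process Azure Functions programming model (function and queue names are again illustrative):

```C#
using Microsoft.Azure.WebJobs;          // NuGet: Microsoft.NET.Sdk.Functions
using Microsoft.Extensions.Logging;

public static class PlaceOrderQueueHandler
{
    // Fires once for each message arriving on the "requests" queue.
    [FunctionName("PlaceOrderQueueHandler")]
    public static void Run(
        [QueueTrigger("requests")] string message,
        ILogger log)
    {
        log.LogInformation("Handling request: {Message}", message);
        // Deserialize the standardized request here and hand it to the
        // partner-specific logic (for example, the dispatcher sketched earlier).
    }
}
```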

We can do the same for the feedback handler. Each partner application can receive feedback in its own way, and then send the feedback records one by one to the Feedback Queue in a standard format. This takes away a lot of the complexity again. The feedback programs just need to interpret the feedback from their partner and put it in the right format on the Feedback Queue. The Feedback Queue Handler just needs to handle these messages one by one.

To retrieve the feedback status, we’ll need a REST service to answer all the queries. You’ll never guess the name of this service: the “Feedback Service”. I left this out of scope for this post. In the end, it is just a REST service that talks to the data store via the “Repository Service”.

I also don’t want to access the database directly, so a repository service is created as well. Again, this is a very simple service to write.
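
How simple? A hedged sketch with ASP.NET Core gives an idea; the controller, the routes, and the IRequestStore abstraction are all made up for the example.

```C#
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public interface IRequestStore  // hypothetical data-access abstraction
{
    Task SaveAsync(OrderRequest request, RequestState initialState);
    Task UpdateStateAsync(Guid id, RequestState newState);
    Task<RequestState> GetStateAsync(Guid id);
}

[ApiController]
[Route("api/requests")]
public class RequestsController : ControllerBase
{
    private readonly IRequestStore _store;

    public RequestsController(IRequestStore store) => _store = store;

    [HttpPost]
    public async Task<IActionResult> Create(OrderRequest request)
    {
        await _store.SaveAsync(request, RequestState.Received);
        return CreatedAtAction(nameof(GetStatus), new { id = request.RequestId }, null);
    }

    [HttpPut("{id}/status")]
    public async Task<IActionResult> UpdateStatus(Guid id, [FromBody] RequestState newState)
    {
        await _store.UpdateStateAsync(id, newState);
        return NoContent();
    }

    [HttpGet("{id}/status")]
    public async Task<ActionResult<RequestState>> GetStatus(Guid id) =>
        await _store.GetStateAsync(id);
}
```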

But There is Still A Lot of Complexity

[Diagram: the “Place Order Queue Handler” with its per-partner logic]

The “Place Order Queue Handler” handles each request by formatting the message and sending it to the specific partner. Having all of this in one application doesn’t seem wise, because:

  • This application will be quite complex and large
  • When a new partner needs to receive calls, we need to update (and test, deploy) this application again.
  • This is actually what we do currently, so there would be little or no advantage in putting all this effort into it if we stopped here.

So it would be nice to find a way to extend the application by just adding some assemblies to a folder. The first idea was to use MEF for this. Using MEF, we can dynamically load the modules and use them, effectively splitting out the complexity per module. Again, each module has only one responsibility (formatting and sending the request).
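
For completeness, this is roughly what the MEF route would look like, reusing the hypothetical IPartnerAdapter interface from earlier. Adding a partner would then mean dropping a new assembly into the plugin folder:

```C#
using System.Collections.Generic;
using System.ComponentModel.Composition.Hosting;
using System.Linq;

// In each partner assembly, the adapter would be exported:
//   [Export(typeof(IPartnerAdapter))]
//   public class SomePartnerAdapter : IPartnerAdapter { ... }

public static class PartnerAdapterLoader
{
    public static IReadOnlyList<IPartnerAdapter> LoadFrom(string pluginFolder)
    {
        // Scan a folder for assemblies and compose every exported adapter.
        var catalog = new DirectoryCatalog(pluginFolder);
        var container = new CompositionContainer(catalog);
        return container.GetExportedValues<IPartnerAdapter>().ToList();
    }
}
```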

The same would apply (more or less) for the feedback part.

But thinking a bit further, I realized that this is actually nothing but a workflow application (comparable to BizTalk). And Azure provides us with Logic Apps, which are created to handle workflows. So let’s see how we can use this in our architecture…

[Diagram: the architecture using Logic Apps]

I left out the calling applications from this diagram. A couple of things have been modified:

  • DLQ. For each queue, I also added a Dead Letter Queue (DLQ), sometimes also called a poison queue. The “Initiator Service” puts a request on the queue to be handled. But if the Queue Handler has a problem (for example, the partner web service sends back a non-recoverable error code), we can’t let the Initiator Service know that. So we put those failed messages on the DLQ, to be handled by another application. One possible way to handle them is to send an e-mail to a dedicated address so that the problem can be resolved manually.
  • Logic App. The “Request Q Handler” is now a Logic App. This is a workflow that Azure executes automatically when a trigger fires. In our case, the trigger is that one or more requests are waiting on the “Request Queue”. In this post, I won’t go into detail about the contents of this Logic App, but this is the main logic:
    • Parse the content of the request message as JSON
    • Store the relevant parts of the message in the database with a “Received” status.
    • Execute the partner-specific logic using Logic App building blocks and custom-made blocks (a sketch of such a custom block follows this list).
    • Update the status of the request in the database to “Sent”
    • When something goes wrong, put the request on the DLQ.
  • Configuration. The nice thing is that all of this can be done using the available building blocks in a Logic App, so no “real” programming is needed, only configuration. Adding a new partner just requires adding a new branch in the switch and implementing the partner logic.
  • The database is accessed through a REST service, and there are Logic App actions that can call REST services. So accessing the database can be done in a very standard way.
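
Where the built-in blocks fall short, a custom-made block can simply be an HTTP-triggered Azure Function that the Logic App calls like any other REST service. A minimal sketch, again assuming the in-process Functions model, with invented names:

```C#
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class FormatPartnerRequest
{
    // The Logic App posts the standardized request to this action.
    [FunctionName("FormatPartnerRequest")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        ILogger log)
    {
        string body = await new StreamReader(req.Body).ReadToEndAsync();
        log.LogInformation("Formatting a request for a partner.");
        // Transform 'body' into the partner-specific format here.
        return new OkObjectResult(body);
    }
}
```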

The Feedback Part is Also a Bit Simpler Now

  • One Logic App will poll every hour for those partners that work like that. This means that this app will have a block per “polling partner”, which retrieves the states of the open requests, transforms them into a standard format, and puts them on the Feedback Queue. So the trigger for this Logic App is just a schedule.
  • Some partners communicate their feedback by putting a file on an FTP location. That file arriving is the trigger, and the handling is a bit different:
    • Interpret the file contents and transform them into JSON.
    • For each row in the JSON collection, execute the same logic as before.
    • Delete the file.
    • Again, these are all existing blocks in a Logic App, so no “real” programming! (For comparison, a code sketch of the same normalization follows this list.)
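
For comparison, this is roughly what that normalization does, expressed in code; the CSV column layout, the queue name, and the reuse of the FeedbackRecord sketch from earlier are all assumptions:

```C#
using System;
using System.Text.Json;
using System.Threading.Tasks;
using Azure.Storage.Queues;

public class CsvFeedbackNormalizer
{
    private readonly QueueClient _feedbackQueue;

    public CsvFeedbackNormalizer(string connectionString) =>
        _feedbackQueue = new QueueClient(connectionString, "feedback");  // illustrative name

    // Invented column layout: requestId;newState;timestamp
    public async Task NormalizeAsync(string csvContent)
    {
        foreach (var line in csvContent.Split(
            new[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries))
        {
            var fields = line.Split(';');
            var record = new FeedbackRecord
            {
                RequestId = Guid.Parse(fields[0]),
                NewState = (RequestState)Enum.Parse(typeof(RequestState), fields[1], true),
                TimestampUtc = DateTime.Parse(fields[2]).ToUniversalTime()
            };
            // One standardized message per feedback record, just like the Logic App flow.
            await _feedbackQueue.SendMessageAsync(JsonSerializer.Serialize(record));
        }
    }
}
```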

The “Feedback Q Handler” is again simple. Because the FTP Logic Apps (notice the plural!) make sure that the feedback records are stored one by one on the “Feedback Queue”, all we have to do is update the status in the database and possibly execute a callback web service.

Conclusion

Thanks to Microsoft Azure, I was able to easily split the application into several small blocks that are easy to implement and to test. In the end, we reduced a programming problem to a configuration problem. Of course, some programming remains to be done, for example the “Repository Service” and possibly some other services to cover more exotic cases.


License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


Written By
Architect Faq.be bvba
Belgium
Gaston Verelst is the owner of Faq.be, an IT consultancy company based in Belgium (the land of beer and chocolate!) He went through a variety of projects during his career so far. Starting with Clipper - the Summer '87 edition, he moved on to C and mainly C++ during the first 15 years of his career.

He quickly realized that teaching others is very rewarding. In 1995, he became one of the first MCTs in Belgium. He teaches courses on various topics:
• C, C++, MFC, ATL, VB6, JavaScript
• SQL Server (he is also an MSDBA)
• Object Oriented Analysis and Development
• He created courses on OMT and UML and trained hundreds of students in OO
• C# (from the first beta versions)
• Web development (from ASP, ASP.NET, ASP.NET MVC)
• Windows development (WPF, Windows Forms, WCF, Entity Framework, …)
• Much more

Of course, this is only possible with hands-on experience. Gaston has worked on many large-scale projects for the biggest banks in Belgium and in the automotive, printing, government, and NGO sectors. His latest and greatest project is all about extending an IoT gateway built in MS Azure.

"Everything should be as simple as it can be but not simpler!" – Albert Einstein

Gaston applies this in all his projects. By using frameworks in the best ways possible, he manages to make code shorter, more stable, and much more elegant. Obviously, he refuses to be paid by lines of code!

This led to the blog at https://msdev.pro. The articles from this blog are also available on https://www.codeproject.com/script/Articles/MemberArticles.aspx?amid=4423636. Happy reading!

When he is not working or studying, Gaston can be found on the tatami in his dojo. He is the chief instructor of Ju-Jitsu club Zanshin near Antwerp and holds high degrees in many martial arts as well.

Gaston can best be reached via https://www.linkedin.com/in/gverelst/.

