Azure Logic Apps from a developer's point of view

By Daniel Schnabel de Barros


As easy as it might sound, application integration in the cloud does require some effort, creativity and lots of coffee.

We currently see numerous videos, webcasts and blogs demonstrating how easily we can create workflows in Azure, performing integration tasks such as retrieving a flat file from an FTP server, applying a transformation and finally sending the result to an on-premises SQL table.

However, real-world client demand is for slightly more complex workflows where, for example, the same flat file must be uploaded to a SharePoint library, merged with a payload obtained from a SOAP web service, submitted to several different endpoints asynchronously, and so forth.

In addition, each stage of this workflow is susceptible to failure, which will need some sort of intervention, retry mechanism, monitoring, alerting and the like. While Microsoft Azure seems to be slowly maturing its approach, from a developer's point of view it looks like we are still a long way from replacing the well-established on-premises platforms, such as BizTalk, for complex integration solutions.

To illustrate a few of the gaps that we have identified whilst creating a proof of concept, we took the following workflow as an example:


In order to achieve the above, we decided to use Logic Apps combined with Service Bus queues for persistence. We had to break down each stage of the flow into a separate, smaller logic app, allowing the administrator to resume from a specific stage in the event of a failure.

This is how the first Logic App looks:

Workflow – Stage1

  1. The first HTTP Listener API app receives the XML payload from the client.
  2. The BizTalk XML Validator validates the XML against a schema.
  3. The second HTTP Listener returns an HTTP “200” response if the result of the validation equals “Success”.
  4. The third HTTP Listener returns an HTTP “500” response if the result of the validation does not equal “Success”.
  5. The Azure Service Bus Connector publishes the XML payload to a Service Bus queue.

By successfully completing this workflow (stage1), we can guarantee that messages published to the queue are valid XML conforming to the predefined schema. In the case of a poison message, the client application is notified immediately with an HTTP 500 error response, which it should then handle on its side.
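To make the stage1 logic concrete, here is a minimal Python sketch of the validate-then-branch behaviour. The real workflow uses the BizTalk XML Validator against an XSD schema; the sketch approximates that with a simple well-formedness check, and a plain list stands in for the Service Bus queue.

```python
# Minimal sketch of the stage1 behaviour: validate the incoming payload,
# return 200/500 accordingly, and only publish valid messages to the queue.
# NOTE: the real workflow validates against an XSD schema via the BizTalk
# XML Validator; a well-formedness check stands in for it here.
import xml.etree.ElementTree as ET


def validate_payload(xml_text: str) -> str:
    """Mimic the validator's result field: 'Success' or 'Failure'."""
    try:
        ET.fromstring(xml_text)
        return "Success"
    except ET.ParseError:
        return "Failure"


def handle_request(xml_text: str, queue: list) -> int:
    """Return the HTTP status the listener would send; enqueue valid payloads."""
    if validate_payload(xml_text) == "Success":
        queue.append(xml_text)  # stands in for the Service Bus publish
        return 200
    return 500
```

A poison message therefore never reaches the queue; the client sees the 500 immediately and can handle it on its side.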

The first obstacle we hit was implementing the authentication process, where for testing purposes we used Azure ADFS. We initially found the authentication mechanism quite difficult to understand, specifically how to obtain the ZUMO token when using silent authentication, without the Microsoft login page popping up.

Once this part was sorted, we managed to send authenticated requests to our HTTP listener that triggered the logic app.

The validation was more straightforward; however, we struggled to pass its result on to the next step. The solution was to obtain the response body from the XML Validator first, convert it to a Boolean, and use that as a condition.
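A rough Python equivalent of that workaround, assuming the validator returns a JSON body with a Result field (the field name and the “Success” value are taken from the workflow description above; the exact body shape is an assumption):

```python
# Hedged sketch: pull the result out of the validator's response body and
# coerce it to a boolean that can drive the if/else branch. The JSON shape
# ({"Result": "Success"}) is an assumption for illustration.
import json


def validation_succeeded(response_body: str) -> bool:
    """Convert the validator's response body into a boolean condition."""
    return json.loads(response_body).get("Result", "") == "Success"
```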

We eventually found the solution in an article online.

Again, the lack of documentation on how to structure conditions was a major obstacle: there are minimal instructions on the required syntax and on the operators that are available.

With this figured out, we managed to create an IF/ELSE-style condition that sends a response back to the client containing the result of the validation.

Finally, the last step of this logic app was submitting the XML payload to the Service Bus queue, where the condition for this API app to be triggered was dependent on the HTTP listener response block. With this connector, we faced an issue that took quite a while to unveil: there is a small bug in the new Azure portal that does not persist the package settings values entered when provisioning a new logic app.

When entering the required information such as the Connection String and Entity Name (queue name) and clicking the OK button, followed by the Create button, the logic app would be provisioned without any errors. However, once the connector received its first request, a “400 – Bad Request” error would be thrown, whether the request was issued via the logic app itself or through the Swagger UI for the respective API.

This also led to another issue: debugging in Azure. As it turns out, this is not an easy task, as the full stack trace is not surfaced, so identifying the source of the problem can prove tricky.

Luckily, we managed to track down the issue by checking the app settings, and found that the key holding the Service Bus connection string was always empty.

When creating a new API app, we noticed that the first time the settings are entered they are not persisted: selecting “configured settings” again opens an empty page. On the second attempt, the portal keeps the information and the API app is provisioned correctly.

When searching for this error it became clear that we were not alone, and even today the issue has not been resolved in the new portal.


Once all the above issues were finally resolved, we got the full flow working and our stage1 was complete.

The next step was to create the logic app responsible for the SharePoint file upload. The final product is illustrated below:

Workflow – stage2

  1. The first Azure Service Bus Connector is subscribed to the same Service Bus queue used in the previous logic app and acts as the trigger for this workflow.
  2. The BizTalk XPath Extractor extracts a piece of information from the XML payload, which is used as the file name for the file uploaded to SharePoint.
  3. The second connector is connected to an on-premises SharePoint library.
  4. The third connector is connected to a different Service Bus queue, responsible for storing messages that could not be submitted to SharePoint.
  5. The final Service Bus Connector is connected to a Service Bus queue responsible for storing messages that completed this flow successfully and are ready for the next step.

This logic app (stage2) receives all messages published by the first logic app (stage1) and attempts to upload the XML payload to an on-premises SharePoint library. If successful, the same XML is forwarded to a new queue, which passes the message on to the next logic app (stage3); otherwise, the message is published to a queue that stores all failed messages, from which it can be manually re-submitted to the source queue to attempt the SharePoint upload again.
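The routing just described can be sketched as follows; upload_to_sharepoint is a hypothetical callable standing in for the SharePoint connector, and plain lists stand in for the two queues:

```python
# Sketch of the stage2 routing: try the SharePoint upload, divert failures to
# an error queue for manual re-submission, and forward successes to the queue
# that feeds stage3. upload_to_sharepoint is a hypothetical stand-in.
def route_message(xml_text, upload_to_sharepoint, success_queue, failed_queue):
    """Attempt the upload and route the message to the appropriate queue."""
    try:
        upload_to_sharepoint(xml_text)
    except Exception:
        failed_queue.append(xml_text)  # held for a manual retry
        return False
    success_queue.append(xml_text)  # picked up by the stage3 logic app
    return True
```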

Another limitation we noticed is that the Service Bus connector cannot be configured to send messages to a dead-letter queue. If the workflow fails, this would avoid having to create a separate queue for failed instances.

Again, while building this logic app, we found little documentation on the syntax and operators available in code view for implementing simple logic with the connector. The idea was to concatenate part of a value contained in the XML with a timestamp to produce the filename; however, we found nothing to support this, so we used only the message body parts.
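For reference, the filename the team wanted to build is trivial in ordinary code; the sketch below extracts a value by XPath and concatenates a timestamp (the OrderId element name is purely illustrative, not from the real schema):

```python
# Build the intended filename: an XPath-extracted value plus a timestamp.
# The element name "OrderId" is a made-up example, not from the real schema.
import xml.etree.ElementTree as ET
from datetime import datetime, timezone


def build_filename(xml_text: str, xpath: str = ".//OrderId") -> str:
    """Combine an XPath-extracted value with a UTC timestamp."""
    value = ET.fromstring(xml_text).findtext(xpath, default="unknown")
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
    return f"{value}_{stamp}.xml"
```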

The last part of our solution is the following logic app that we can call stage3:

Workflow – stage3

  1. The first Service Bus Connector listens to a queue containing the messages successfully submitted by the stage2 logic app.
  2. The Transform step is used to build the HTTP request message.
  3. The HTTP connector calls a web app (which in turn is used as a proxy to an on-premises SOAP web service).
  4. The Service Bus Connector submits failed messages to a queue, to allow manual retries.

This logic app receives messages from the stage2 app, which means the SharePoint upload was successful. The message is then transformed to the selected schema using the BizTalk Transform Service connector and sent on to the HTTP connector.
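As an illustration of what the transform step produces before the HTTP connector posts it onwards, the sketch below embeds a payload in a SOAP 1.1 envelope; the SubmitOrder payload in the test is hypothetical:

```python
# Illustrative stand-in for the BizTalk Transform step: embed the transformed
# payload in a SOAP 1.1 envelope for the HTTP connector to post onwards.
SOAP_TEMPLATE = (
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
    "<soap:Body>{body}</soap:Body>"
    "</soap:Envelope>"
)


def to_soap_request(payload_xml: str) -> str:
    """Wrap the payload in a SOAP body."""
    return SOAP_TEMPLATE.format(body=payload_xml)
```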

This approach was taken because of a limitation in hybrid connections, which restricts their use to a few out-of-the-box API apps such as SQL, SharePoint and SAP, but not to custom-coded API apps. Since web apps do have this capability, we created an application that receives an HTTP POST and forwards it to a destination URL pointing to an on-premises SOAP web service.

This proxy application, called in turn by the logic app's HTTP connector, enables the on-premises integration in our solution.
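A minimal sketch of that proxy's core, assuming a plain HTTP POST relay; only the request construction is shown, and the target URL in the test is a placeholder:

```python
# Core of the hypothetical proxy web app: take the POST body received from
# the Logic App's HTTP connector and build the request relayed to the
# on-premises SOAP endpoint. The target URL is supplied by configuration.
import urllib.request


def build_forward_request(body: bytes, target_url: str) -> urllib.request.Request:
    """Construct the POST the proxy relays to the on-premises service."""
    return urllib.request.Request(
        target_url,
        data=body,
        headers={"Content-Type": "text/xml; charset=utf-8"},
        method="POST",
    )
```

Sending it would then be a single urllib.request.urlopen(req) call, with the response streamed back to the caller.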

The difficulty identified at this stage was establishing authentication from the HTTP connector to the web app, as no documentation is provided on its formatting. As a result, our chosen method of impersonation did not succeed, since we had restricted the proxy application behind the resource group shield, where only internal calls are allowed.

Finally, the last connector again submits messages that failed to reach their destination (in this case the on-premises application) to a Service Bus queue, to be manually resumed once the issue has been mitigated.

At the end of this implementation we had a fully working solution that partially achieved our integration goals. However, the complexity of separating all the stages, as well as maintaining multiple queues to control the messages, discouraged us from going forward with the implementation. This approach also created another problem: a tool would have to be built to monitor the queues and implement the logic of re-submitting messages to each respective stage, allowing manual retries.

This tool would be similar to Service Bus Explorer, modified to suit business-user interaction.

Moving on from our solution, we identified a few other generic items that could be improved in the Logic App designer:

  • Ability to package and export a logic app, allowing the developer to test it in their own subscription and, once completed, deploy it to test and production environments that could be under a completely separate subscription;
  • Add hybrid connection capability to custom API apps, allowing developers to integrate legacy applications;
  • Transactional containers, allowing logic apps to behave atomically;
  • Visual Studio tooling, allowing offline workflow design.

During development we were also tempted to provision our solution under the free tier, to reduce costs before the solution was ready for production. However, it seems impossible to run anything on the shared environment: we constantly encountered random HTTP 403 errors, as well as errors like the one pictured below when trying to open the designer:

In conclusion, we hope the promised improvements continue, such as those mentioned by Sandro Pereira in his latest presentation, where apparently Microsoft is working hard to improve the Logic Apps designer, replacing the left-to-right flow with a top-to-bottom one, and possibly including if/else and container blocks similar to a BizTalk orchestration or XAML workflow service. Alerting may also soon be implemented in the portal, so that alerts can be set at milestones and the responsible person notified in case of problems. Once these are in place, I believe we can produce a better replacement for our use case, and we may be one step closer to migrating some of our solutions to the cloud.