Last week I attended the Integrate conference once again. It was my fifth time attending the conference, and the fourth time I participated as a speaker.
But Integrate 2020 was quite a different experience from previous years. It was dubbed Integrate 2020 Remote, reflecting the fact that it was yet another event that had to adapt to the pandemic reality that has gripped the planet since the end of last year. Kovai, the company formerly known as BizTalk360, and the event organiser since its inception in 2012, took some time to decide whether the event would go ahead, but once it committed to the new format, it pulled out all the stops to get it running. With almost 1000 online attendees – impressive for a paid event – Integrate 2020 had 28 speakers, split between Microsoft Program Managers and Microsoft MVPs, and as many technical sessions distributed across three days.
This year’s MS Build was very different – we all know that. But for most of us, me included, it was the first opportunity to officially join the event. And what an event it was.
It ran across three time zones, with a mix of live sessions, Teams Live events and smaller Teams meetings, like focus groups, which allowed attendees to really interact with the product groups and advocates from Microsoft.
This year there were some interesting announcements around the Azure Integration Services technologies. I recently shared those announcements at an Auckland Azure User Group meetup and thought that, since I already had everything collated, it would be a good idea to share it with you on the blog as well. So, let’s talk about what is now available, or just around the corner, for AIS.
“It was the week before Christmas, and…” I was actually super busy! One of my main tasks that week was to implement notifications for when a legacy Dynamics AX instance, still running on-premises, had orders ready for delivery.
My solution was relatively simple (although it needed to be generic enough to accommodate other notifications later):
The notification repository provided a very simple event payload:
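The exact fields aren’t important, but it looked something along these lines (the names and values here are illustrative, wrapped in the standard Event Grid event schema):

```json
{
  "id": "7f8d62c1-3b0a-4a7e-9c2d-5e1f0a9b4c33",
  "eventType": "Notification.OrderReadyForDelivery",
  "subject": "salesorders/SO-004213",
  "eventTime": "2020-12-18T02:15:00Z",
  "dataVersion": "1.0",
  "data": {
    "entity": "SalesOrder",
    "reference": "SO-004213",
    "notificationType": "OrderReadyForDelivery"
  }
}
```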
I thought it was quite an easy setup, but I got stuck for a while configuring the Event Grid Logic App trigger. Why? I was expecting the trigger to support advanced filters out of the box in the designer experience, but that’s not the case.
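For reference, advanced filters live in the filter section of the Event Grid subscription itself; on the subscription resource they look roughly like this (the key and values are hypothetical, matching the sample payload above):

```json
"filter": {
  "includedEventTypes": [ "Notification.OrderReadyForDelivery" ],
  "advancedFilters": [
    {
      "operatorType": "StringIn",
      "key": "data.notificationType",
      "values": [ "OrderReadyForDelivery" ]
    }
  ]
}
```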
During December’s episode of Integration Downunder, Alessandro Moura showed a recap of the main features announced for Logic Apps throughout the year. If you didn’t watch it on the day, you should take a look at the webcast.
One of the features that caught my attention, and which I hadn’t seen before, was the trigger condition: the ability to fire a logic app only if a condition is met. This is great for scenarios where you don’t have control over the event that triggers the logic app (like Dynamics 365 triggers, which only allow you to execute a logic app when a record for a given entity has been created or updated) but don’t want to implement the checks within the logic app itself.
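If you open the logic app in code view, a trigger condition is just a conditions array on the trigger definition; a minimal sketch (the statuscode check is a hypothetical example):

```json
"conditions": [
  {
    "expression": "@equals(triggerBody()?['statuscode'], 1)"
  }
]
```

If the expression evaluates to false, the trigger is skipped and no workflow run is created.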
This week we’ve been preparing for a go-live in a project that integrates Dynamics 365 Field Services with an on-premises system. Part of this process was to run a series of migration scripts to get data from the on-premises system into D365.
Since D365 pushes the information into logic apps, I thought that the safest way to run the migration would be to simply disable the logic apps that would be triggered by the process, so the events would be “lost”. But to my surprise, Logic Apps triggers are more powerful than that and remember the last event processed. So when I turned my logic apps back on after the migration, I was “rewarded” with thousands of triggers being fired…
To be honest, this is quite powerful: in cases where you have to take a logic app offline because of downstream system issues or updates, upstream systems can continue to work as expected. But how do you avoid it in cases like mine, where bulk uploads (initial loads or bulk updates) are not expected to flow downstream?
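I won’t claim this is the only answer, but one simple guard is the trigger conditions feature covered earlier: gate the trigger on the record’s modified timestamp so anything older than the go-live cutover is skipped instead of creating a run. A sketch (the cutover date is a hypothetical placeholder):

```json
"conditions": [
  {
    "expression": "@greater(triggerBody()?['modifiedon'], '2021-01-15T00:00:00Z')"
  }
]
```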
It doesn’t expect the data as shown above (which is fair enough), but it also doesn’t like "AlternateEmail": null. Instead, it expects the AlternateEmail element to be dropped from the payload entirely. Trying to do this with Logic Apps components would make the workflow really hard to maintain later (and to be honest, I’m not even sure I could pull it off with out-of-the-box components like Compose and variables).
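For a single known property there is at least the removeProperty() expression function, which can do the conditional drop in one go (a sketch; the action and field names here are hypothetical), but it gets unwieldy quickly once several optional elements are involved:

```json
"Drop_null_alternate_email": {
  "type": "Compose",
  "inputs": "@if(equals(outputs('Build_payload')?['AlternateEmail'], null), removeProperty(outputs('Build_payload'), 'AlternateEmail'), outputs('Build_payload'))"
}
```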
Last week I was reviewing a logic app with one of the lead developers at Theta. It was a relatively simple logic app: it needed to call an OData endpoint on a regular basis and process the value array returned. The developer’s original design was to:
Poll the OData endpoint on the agreed interval.
Test whether the value array length was greater than zero.
If it was, process each value in the array within a for each loop.
Else, terminate the instance as cancelled, so he could distinguish between real executions and zero-length polls.
That would work, but it felt wrong for a couple of reasons:
The logic app was wasting an action testing whether the actual logic should execute at all.
Another action was being wasted just to tag the logic apps that didn’t actually “fire”.
Then it dawned on me that we should be using splitOn instead. This would avoid the check altogether. Setting up splitOn is as simple as adding this to your trigger:
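A sketch of the full trigger (the trigger name, URI and interval are placeholders); note the OData value array as the splitOn source:

```json
"triggers": {
  "Poll_OData_endpoint": {
    "type": "Http",
    "recurrence": {
      "frequency": "Minute",
      "interval": 15
    },
    "splitOn": "@triggerBody()?['value']",
    "inputs": {
      "method": "GET",
      "uri": "https://example.com/odata/Orders"
    }
  }
}
```

With splitOn in place, each item in value fires its own run, and an empty array simply doesn’t fire at all – no length check, no cancelled instances.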
This is a cautionary tale… A month or so ago, someone from support asked me why the hell a test environment had spent over a thousand NZD on Logic Apps actions. My first reaction was “Are you kidding?”… my second reaction was that pit-in-your-stomach feeling when you know something is really wrong, but you don’t know why.
Have you ever created a Logic Apps solution – maybe 10 or so logic apps – and noticed that you needed to enable basic notification alerts for all of them? I found a while ago that this was kind of a tedious process, so I ended up creating a PowerShell script for it. I then forgot that I never blogged about it, so here it goes.
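The script below is a minimal sketch of the idea, assuming the Az.LogicApp and Az.Monitor modules (the resource group, action group and threshold are placeholders you’d adjust):

```powershell
# Placeholders - replace with your own resource group and action group.
$resourceGroup = "rg-integration"
$actionGroupId = "/subscriptions/<subscription-id>/resourceGroups/rg-integration/providers/microsoft.insights/actionGroups/ag-ops"

# Alert whenever a logic app records one or more failed runs in the window.
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "RunsFailed" `
    -TimeAggregation Total -Operator GreaterThan -Threshold 0

# Create one metric alert per logic app in the resource group.
foreach ($logicApp in Get-AzLogicApp -ResourceGroupName $resourceGroup) {
    Add-AzMetricAlertRuleV2 -Name "$($logicApp.Name)-failed-runs" `
        -ResourceGroupName $resourceGroup `
        -TargetResourceId $logicApp.Id `
        -WindowSize (New-TimeSpan -Minutes 15) `
        -Frequency (New-TimeSpan -Minutes 15) `
        -Condition $criteria `
        -ActionGroupId $actionGroupId `
        -Severity 2
}
```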
I’ve been working over the last week or so on setting up a DR strategy for a solution based on API Management, Azure Functions and Service Bus. Most of the deployment to the secondary site is handled by VSTS, but one of the main issues with the proposed strategy was that the APIM instance used is the Standard tier, which doesn’t allow multi-region deployments. So, to guarantee that all APIM configuration, including users, API policies and subscriptions, is replicated, I had to leverage the backup/restore functionality available in APIM, which is based on the Management API.
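For reference, the Az PowerShell cmdlets wrap that same Management API operation, so the backup/restore round trip looks roughly like this (a sketch; the instance and storage names are placeholders):

```powershell
# Placeholder storage account key - in practice, pull this from Key Vault.
$storageKey = "<storage-account-key>"
$storage = New-AzStorageContext -StorageAccountName "stapimbackup" `
    -StorageAccountKey $storageKey

# Back up the primary APIM instance (users, policies, subscriptions, etc.).
Backup-AzApiManagement -ResourceGroupName "rg-primary" -Name "apim-primary" `
    -StorageContext $storage `
    -TargetContainerName "apim-backups" `
    -TargetBlobName "apim-primary.apimbackup"

# On failover, restore the same blob into the secondary-region instance.
Restore-AzApiManagement -ResourceGroupName "rg-secondary" -Name "apim-secondary" `
    -StorageContext $storage `
    -SourceContainerName "apim-backups" `
    -SourceBlobName "apim-primary.apimbackup"
```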