I am still working on an API Management DR scenario for a client. After automating the backup and restore process to make sure that the APIM instances are always in sync, I needed to set up Traffic Manager in priority mode to distribute the calls between the primary and secondary instances.
The Traffic Manager setup seemed quite straightforward: you just need to create a Traffic Manager endpoint for each API Management instance, using the external endpoint type.
Creating one for each instance should do the trick… Or so I thought. After the setup was complete, testing the Traffic Manager endpoint always returned 503, even though accessing each individual endpoint returned the correct result.
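The endpoint definitions themselves really are that simple. As a rough sketch (the endpoint names and hostnames are placeholders, and the property names follow the Traffic Manager REST schema for external endpoints), a priority-mode profile pointing at two APIM gateways could be described like this:

```python
# Sketch: build Traffic Manager external endpoint definitions for a
# priority-mode profile, one per API Management gateway hostname.
# Property names follow the Traffic Manager REST schema for
# "Microsoft.Network/trafficManagerProfiles/externalEndpoints".

def build_priority_endpoints(gateway_hosts):
    """gateway_hosts: APIM hostnames ordered by priority (primary first)."""
    endpoints = []
    for priority, host in enumerate(gateway_hosts, start=1):
        endpoints.append({
            "name": f"apim-endpoint-{priority}",  # illustrative name
            "type": "Microsoft.Network/trafficManagerProfiles/externalEndpoints",
            "properties": {
                "target": host,            # e.g. myapim.azure-api.net
                "endpointStatus": "Enabled",
                "priority": priority,      # 1 = primary, 2 = secondary
            },
        })
    return endpoints

endpoints = build_priority_endpoints(
    ["primary-apim.azure-api.net", "secondary-apim.azure-api.net"]
)
```

In priority mode, Traffic Manager sends all traffic to the endpoint with the lowest priority number and only fails over to the next one when health probes start failing.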
Today I’ve received a very special email – the renewal of my MVP Award for the period 2018-2019. Those who have received the award before know how cherished the moment is when you see that email in your inbox. The best part of the award is the confirmation that what you are doing is being recognized as having an impact on the community – which is the reason why you do the work in the first place. The renewal shows that you didn’t lose steam, but kept going in the right direction. But it wouldn’t be a post about the MVP Award without recognizing the support network behind me that gives me the chance to do all the community contributions I do. Continue reading “And the Cycle Starts Again”
I’ve been working during the last week or so on setting up a DR strategy for a solution based on API Management, Azure Functions and Service Bus. Most of the deployment to the secondary site is handled by VSTS, but one of the main issues with the proposed strategy was the fact that the APIM instance utilized is Standard tier, which doesn’t allow multi-region deployments. So, to guarantee that all APIM configuration – including users, API policies and subscriptions – was replicated, I had to leverage the backup/restore functionality available in APIM, based on the Management API.
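The backup operation is a single POST against the Management API that tells APIM to write a backup blob into a storage account. A minimal sketch of building that request (the resource names are placeholders, and the api-version is an assumption – any version that supports the `/backup` operation should work; restore uses the same payload against `/restore`):

```python
# Sketch: build the APIM backup request for the Azure Management API.
# POSTing this URL/body (with a valid AAD bearer token) asks the APIM
# service to write a backup blob into the given storage container.

API_VERSION = "2018-01-01"  # assumption: any api-version supporting /backup

def build_apim_backup_request(subscription_id, resource_group, service_name,
                              storage_account, access_key, container, backup_name):
    url = (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.ApiManagement"
        f"/service/{service_name}/backup?api-version={API_VERSION}"
    )
    body = {
        "storageAccount": storage_account,
        "accessKey": access_key,
        "containerName": container,
        "backupName": backup_name,
    }
    return url, body

url, body = build_apim_backup_request(
    "0000-sub-id", "dr-rg", "primary-apim",
    "drstorageacct", "<storage-key>", "apim-backups", "nightly-backup")
```

Because the operation is long-running, the response is a 202 with a location header to poll – which is what makes it automatable from a scheduled pipeline.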
A while ago, I was involved in a project that needed to push messages to a Kafka topic. I found that while the .NET code required for that implementation was relatively straightforward – thanks to Confluent’s .NET client for Kafka – configuring the server was a nightmare. The IT team at the client site was supposed to get the Kafka cluster sorted and dragged the issue out for a month or so. And I understood why when I tried to set up a cluster myself – configuring a Kafka cluster is not a walk in the park. If only we had a managed, one-click solution to implement an event streaming solution based on the Kafka protocol… 😀
When Microsoft announced Event Hubs support for the Kafka protocol last month, I thought that a great way to prove that this was really interoperable was to take part of the original code I wrote and see if I could connect to Event Hubs without any significant changes. And I was pleasantly surprised! The only changes required were some additions to the producer/consumer configuration. This post shows how I managed to get this working, and shows one of the main gotchas I found along the way. Continue reading “Accessing Event Hubs with Confluent Kafka Library”
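My original client was .NET, but the same configuration additions carry over to any Confluent Kafka client: point `bootstrap.servers` at the Event Hubs Kafka endpoint on port 9093 and authenticate with SASL PLAIN using the namespace connection string. A minimal sketch of the configuration shape, using the Python client's property names (the namespace and connection string are placeholders):

```python
# Sketch: Confluent Kafka client configuration for the Azure Event Hubs
# Kafka endpoint. Event Hubs listens on port 9093 and authenticates with
# SASL PLAIN, using the literal username "$ConnectionString" and the
# namespace connection string as the password.

def event_hubs_kafka_config(namespace, connection_string):
    return {
        "bootstrap.servers": f"{namespace}.servicebus.windows.net:9093",
        "security.protocol": "SASL_SSL",
        "sasl.mechanism": "PLAIN",
        "sasl.username": "$ConnectionString",
        "sasl.password": connection_string,
    }

conf = event_hubs_kafka_config(
    "my-eventhubs-ns",
    "Endpoint=sb://my-eventhubs-ns.servicebus.windows.net/;SharedAccessKeyName=...")

# With confluent-kafka installed, the rest of the producer code is
# unchanged from a plain Kafka setup, e.g.:
#   from confluent_kafka import Producer
#   producer = Producer(conf)
#   producer.produce("my-topic", b"hello event hubs")
```

Everything else – topic names, produce/consume calls, serialization – stays exactly as it was against a self-hosted cluster, which is the interoperability point the post makes.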
Recently I presented at Directions ASIA 2018 with my good friend and MVP Tharanga Chandrasekara, and I was exposed to a “new world” – the Business Solutions world! Coming from an enterprise integration background, I usually tend to gravitate around the enterprise integration tools and, lately, iPaaS offerings, so my initial reaction to integration will always be BizTalk Server / Logic Apps. But Microsoft Flow has evolved to be quite a reasonable option – and I would say probably the first option for integration among Office 365 / Dynamics 365 consultants, since it gives you almost the same level of functionality that Logic Apps would give – no surprises here, since behind the scenes they actually share the same engine. Continue reading “Logic Apps x Microsoft Flow – which one should I choose?”
Last Saturday, 24/03/2018, the second edition of the Global Integration Bootcamp (GIB) was in full swing around the globe, with 15 locations across 10 countries sharing a full day of hands-on labs and breakout sessions, highlighting the latest and greatest that Microsoft integration technologies have to offer.
As in the first edition, Auckland kicked off the show. And we did it justice! There were around 35 people sharing their experiences and learning from each other in the new Datacom facilities at Gaunt Street. The breakout sessions covered a wide range of topics, from Event Grid to Data Factory. We also had a great line-up of presenters, a mix of recurring ACSUG presenters and new faces, which made the event even more special. Continue reading “Global Integration Bootcamp 2018 – Auckland Recap”
Integration Down Under is a brand new Microsoft Integration & Azure webcast, in an APAC-friendly time zone! And that is an idea that has been simmering for a long time.
We all love Integration Monday, one of the most successful MS Integration webcasts ever, but it was always a recurring joke among the AU/NZ integration professionals about who would have to wake up the earliest to be able to watch. So, for a while now, we’ve been discussing how nice it would be to have a similar webcast in a time slot that didn’t require lots of caffeine and an alarm clock set too early in the morning, just to enjoy it live…
Then earlier this year Bill Chesnut rallied the troops and brought together a group of Microsoft Azure MVPs – Daniel Toomey, Martin Abbott, Rene Brauwers and myself – to start this project. After a couple of crazy nights furiously discussing details and logistics – social media, logo, topics, among lots of other things – we settled on an inaugural meeting on the 8th of Feb, which will kick off the webcast with a series of lightning talks. Here is a taste of what you can expect on the first webcast:
Bill Chesnut – API Management REST to SOAP
Martin Abbott – Azure Data Factory v2
Wagner Silveira – Azure Functions Proxy
Dan Toomey – Azure Event Grid
Rene Brauwers – A reactive integration primer
You can register for the webinar here, thanks to SixPivot, which is kindly providing the webinar facility.
So please join us on the 8th, and let us know what you think, what topics you are interested in hearing about next, and how we can keep improving!
Have you ever wanted to stop all logic apps in a resource group in one go – either for production maintenance, or maybe because that set of logic apps is eating all your resources? If so, welcome to the club… What you probably found is that there is no way to do this in the portal. Coming from a BizTalk background, where you can stop all orchestrations – or even the whole application – with a right click, in some cases you will ask “Why?”, while in others you might shout “Khaaaannn!” (I know I probably did both). Continue reading “Enable/disable all logic apps in a resource group”
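Outside the portal, though, the Management API (or the SDKs built on it) makes the bulk operation a short loop: list the workflows in the resource group, then disable (or enable) each one. A sketch assuming a client shaped like `azure.mgmt.logic.LogicManagementClient`, whose workflows operations expose `list_by_resource_group`, `disable` and `enable`:

```python
# Sketch: disable (or re-enable) every Logic App in a resource group.
# `client` is assumed to behave like azure.mgmt.logic.LogicManagementClient.

def set_all_logic_apps(client, resource_group, enabled=False):
    """Disable (default) or enable every workflow in the resource group.

    Returns the names of the workflows that were changed.
    """
    changed = []
    for workflow in client.workflows.list_by_resource_group(resource_group):
        if enabled:
            client.workflows.enable(resource_group, workflow.name)
        else:
            client.workflows.disable(resource_group, workflow.name)
        changed.append(workflow.name)
    return changed
```

With a real client, the wiring would be something like `client = LogicManagementClient(credentials, subscription_id)` followed by `set_all_logic_apps(client, "my-rg")`; re-enabling after maintenance is the same call with `enabled=True`.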