API Management Arc enabled? If you haven't been keeping up with the news from MS Ignite 2019, the title might be a bit confusing. Arc (not A.R.C, as I read it initially) is a new service in Azure that allows customers to deploy and manage Azure services anywhere – and anywhere, in this case, means any cloud, on-premises infrastructure, and the edge. You can find more about that service here.
During MS Ignite, the API Management team launched the public preview of a new feature that takes advantage of the Arc technology to deploy API Management gateways anywhere. So let's take a look at what that looks like.
What is API Management Arc Enabled?
API Management takes advantage of the Arc service to deploy a self-hosted API Gateway, which contains either all or a subset of the APIs deployed into an API Management instance, to any cloud or on-premises environment.
The Gateway is deployed as a container, which can be executed within a Kubernetes cluster or self-hosted using Docker.
The original instance acts as a control plane for the self-hosted gateway, monitoring its health (e.g. how many nodes are running) and making sure that any changes to the APIs deployed to the container are propagated to that node. Although the original instance is the central configuration location, the container is autonomous: it connects to the cloud instance to report its health and download configuration changes, but if connectivity to the cloud instance is lost, the gateway continues to run independently.
Creating your first Gateway
To create your first gateway, go to your API Management instance and open the settings section of the configuration side blade. Under that section you will find a new item called Gateways (preview). Click on it to go to the gateways management area.
At this stage, Gateways (preview) is only available on the Developer and Premium tiers.
When you click on Gateways (preview), you will be taken to the gateways management area, where you will find all the gateways already deployed. To add a new Gateway, you just need to click the + Add button at the top.
To create a new gateway you need to provide a name (1) and a region (2), and optionally a description (3) and which APIs will be deployed with the Gateway (4).
Of those items, the one that got me confused initially was the Region – since this is free text, you can create your own definition of region (location, datacenter) – anything that helps you identify where the gateway is being deployed. This property will be useful later for creating conditional policies.
Once those items are defined, you just need to click Add to create the new gateway.
At the time of this writing, the Add APIs option is a bit flaky – what is supposed to be a dropdown list was not working well. But once the gateway is created, you can edit it and add the APIs; the interface in the edit section works fine.
Once a new gateway is created, you can manage it by clicking on its name. That will show a new blade:
On the management blade you can find the Deployment configuration, the list of configured APIs, the list of configured Hostnames and the access Keys.
As in the main overview area, you can find basic analytics for the Gateway – how many nodes are deployed and running. You can also edit the Gateway; editing a gateway basically lets you change the name and the region that you defined for it.
Clicking the APIs option allows you to add or remove APIs from the Gateway definition. This way you can pick and choose which APIs are deployed on each Gateway. After that, any changes to the API definition (adding/removing operations, changing policies, association to products) will be synchronized to any gateway node that is online. Clicking + Add allows you to choose which of the existing APIs in the API Management instance will be deployed with the Gateway. Clicking the ellipsis (…) allows you to remove an API from the Gateway. Online nodes connect to the control plane every 10 seconds to synchronize the list of APIs.
The + Add experience is quite straightforward:
Once you click on Add, you can search for an API by either name or description and tick all the APIs that you want to add.
Once the APIs are chosen, simply click Select and the APIs will be added to the Gateway.
Deploying an API
Once the Gateway is all configured for deployment, you can configure and acquire the deployment script for the containerized node, ready to be deployed anywhere that has support for Docker or Kubernetes. For that, you just need to click on the Deployment (1) section of the configuration side blade.
Within Deployment, you can define the access token's expiration date/time (2) and then generate a new token (3), which is used to define your configuration file.
You can then choose your deployment script (4), depending on the technology you choose (Docker or Kubernetes).
If you choose Docker, you can find the docker run command (5). The command depends on the configuration, which includes the configuration URL and the access token. Clicking env.conf (6) will download the file, so when access tokens change you will just need a new env.conf file.
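As a rough sketch of what the portal generates (the endpoint, key, and gateway name below are placeholders, not real values, and the exact image tag may differ from what your portal shows), the env.conf file and the docker run command look roughly like this:

```shell
# env.conf – downloaded from the portal; both values below are placeholders.
# config.service.endpoint=<instance>.management.azure-api.net/.../gateways/<gateway-name>?api-version=<api-version>
# config.service.auth=GatewayKey <access-token>

# Run the self-hosted gateway container; it listens on 8080 (HTTP)
# and 8081 (HTTPS) inside the container.
docker run -d -p 80:8080 -p 443:8081 \
  --name my-gateway \
  --env-file env.conf \
  mcr.microsoft.com/azure-api-management/gateway:latest
```

Because the access token lives in env.conf rather than in the command itself, rotating the token only means downloading a fresh env.conf and restarting the container.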
If you choose Kubernetes, you will find similar guidance: a kubectl command (7), which depends on a configuration file called <APIGatewayName>.yaml.
Like the env.conf, you can also download the <APIGatewayName>.yaml file by clicking on the link provided.
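For Kubernetes, the flow is the same idea: download the generated manifest (which, as a hedged assumption, typically bundles the gateway's access token, a Deployment running the gateway container, and a Service exposing it) and apply it. A minimal sketch, keeping the placeholder file name from the portal:

```shell
# Apply the manifest downloaded from the portal.
kubectl apply -f <APIGatewayName>.yaml

# Check that the gateway pods are running – once they are,
# the node count in the Gateways (preview) blade should go up.
kubectl get pods
```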
Once the configuration files are downloaded and the command copied, the Gateway can be deployed anywhere that supports the selected technology (Docker or Kubernetes).
To get a node online, you just need to execute the script. You can see how many nodes are running for a given API Gateway in the Gateways area:
The new Arc enabled API Gateway feature in API Management extends the API Management product beyond the Azure cloud. Gateways can be configured from a central location in Azure and will keep their configuration synchronized by connecting to the control plane (the main API Management instance) at regular intervals.
This new feature opens up a number of possibilities – from on-premises API Gateway solutions, to multi-cloud configuration.