[This is part 4 of 4 in this series about securing a load-balanced API hosted on Azure VMs with Azure API Management and an Azure Virtual Network. You’ll find the table of contents at the bottom of this post.]
Create the Azure API Management Service
Let’s create a new Azure API Management service – to do this you’ll need to use the current portal (as Azure API Management isn’t exposed yet in the preview portal).
Sign in and tap the New button – you’ll find Azure API Management in the App Services tab.
Specify a URL for the service, then pick a subscription and the region in which you created your VMs. On the second page, tick the Advanced Settings checkbox so that you can choose a pricing tier on the next page – you’ll need the Premium pricing tier in order to connect to the virtual network.
Once you’ve filled out the details it can take a while before the service is ready, as quite a bit of heavy lifting happens under the hood.
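If you prefer scripting this step, the Azure PowerShell module also ships API Management cmdlets. A hedged sketch – the cmdlet and parameter names below are assumptions based on the service-management module, and the service name, organization and e-mail are placeholders, so verify them against your module version before running:

```powershell
# Assumed cmdlet from the Azure (service management) PowerShell module –
# check Get-Command *ApiManagement* in your module version first.
New-AzureApiManagement `
    -Name "contoso-apis" `            # placeholder service name (becomes contoso-apis.azure-api.net)
    -Location "West Europe" `         # pick the same region as your VMs
    -Organization "Contoso" `
    -AdminEmail "admin@contoso.com" `
    -Sku "Premium"                    # Premium is required for virtual network connectivity
```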
When your service is ready, tap into it and head to the Configure page. Turn on the VPN connection, pick the virtual network that we created earlier and choose the subnet in which the VMs and the internal load-balancer are located.
Be sure to save the settings and wait for the service to become ready again – the service’s IP addresses will change.
Create the proxy
To create our proxy we need to get to the publisher portal of the Azure API Management service. Select your service in the current portal and tap the Manage button to jump straight to the publisher portal.
While in the publisher portal – choose APIs in the side-menu and then click on the Add API button.
In the next dialog, give your API a name. As for the Web Service URL – this is where you’ll need to know the IP address of your internal load-balancer. If you forgot the static IP address that you assigned to your internal load-balancer, you can look it up with Azure PowerShell:
Get-AzureInternalLoadBalancer -ServiceName "[cloud service name]"
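For example, the IP address is exposed on the returned object, so you can grab it directly (the cloud service name below is a placeholder):

```powershell
# Look up the internal load-balancer for the cloud service and
# read out its static IP address to use as the Web Service URL
$ilb = Get-AzureInternalLoadBalancer -ServiceName "my-cloud-service"
$ilb.IPAddress   # e.g. 10.0.1.10 – the Web Service URL would then be http://10.0.1.10/
```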
Enter a Web API URL suffix to build the proxy endpoint, determine which URL schemes you’d like to enable and finally add your API to a product (I’ll be using the default Unlimited product in this case).
You need to add the API to a product in order to issue subscription keys. If you’d like to learn more about Azure API Management in general, head over to the official Azure API Management documentation.
Next up, head over to your new API in the publisher portal and map your operations, apply policies and harness the power of Azure API Management!
If you’d like to test your new proxy/API, head over to the developer portal and get a hold of your subscription key.
You can also browse to your API in the developer portal and test the operations using the web console.
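Outside the web console, you can call the proxy from PowerShell by passing your subscription key in the Ocp-Apim-Subscription-Key header. A sketch – the proxy URL, URL suffix and operation path are placeholders from this walkthrough, so substitute your own:

```powershell
# Call an operation on the proxy endpoint; APIM validates the
# subscription key before forwarding to the backend over the VPN
Invoke-RestMethod `
    -Uri "https://contoso-apis.azure-api.net/myapi/machinename" `
    -Headers @{ "Ocp-Apim-Subscription-Key" = "<your subscription key>" }
```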
Oh – don’t mind the response latency! I was deliberately simulating a ton of latency in this operation of my API. If you’d like to make sure that your internal load-balancer is working, you could use a test operation like I did here, which returns the name of the VM that served the request. Head over to the Azure portal and start/stop your VMs to verify that requests are only served by running VMs.
Alright, let’s tidy up! Make sure that your VMs are completely isolated within the cloud service – head to the Endpoints settings for each of your VMs and remove all external endpoints.
If you ever need to RDP into the VMs again (for patching etc.), you can simply re-add the endpoint. Best practice is not to leave these endpoints open on production VMs.
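Removing the endpoints can also be scripted with the classic Azure PowerShell cmdlets. A sketch, assuming placeholder service/VM names – and note the endpoint name may differ, so check the VM’s Endpoints tab first:

```powershell
# Strip the public RDP endpoint from a VM (repeat per VM);
# re-add it later with Add-AzureEndpoint if you need RDP again
Get-AzureVM -ServiceName "my-cloud-service" -Name "api-vm1" |
    Remove-AzureEndpoint -Name "RemoteDesktop" |
    Update-AzureVM
```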
In conclusion, this is what we’ve ended up with:
A load-balanced API that can only be reached through the Azure API Management proxy. The proxy communicates with the internal load-balancer for the API over the VPN connection.
Pretty sweet I’d say! Now we can enjoy the benefits of Azure API Management without having to worry that our backend API could be called directly, bypassing the proxy. Again, you can learn more about the great features in Azure API Management in the official documentation.
You can navigate within the entries of this series here:
- Creating the services and networking
- Installing IIS and Web Deploy
- Setting up the internal load-balancer
- Configuring the proxy and tightening up