
Deploying secure communication with APIm and Functions using Managed Identity

Yes I know, not the snappiest title.
In my previous post Secure communication with APIm and Functions using Managed Identity, I showed how easy it is to set up OAUTH-based authentication in front of your Azure Functions, and how to configure an APIm policy to call that function, thereby upping the security level of your Azure Function communication.

CI/CD using ARM

I like ARM and after some serious digging I found the ARM template definition for how to secure your Azure Function. I prefer to use version V2, but you should look at V1 as well, as it has better documentation.

You can find the V2 template definition here, and here is V1.

When you deploy your function app you are (hopefully) using CI/CD and ARM. In order to add the authorization setting you need to add a resource called Microsoft.Web/sites/config and set its name to yourfunctionappname/authsettingsV2. You can either add this to your existing ARM-template or put it in a separate template. If you add it to your existing ARM-template, do not forget to add the dependsOn property.
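If you add the setting to the same template that deploys the function app itself, the layout could look something like the sketch below. It is trimmed down to just the shape: the actual app and auth properties are left out for brevity, and the parameter name is assumed to match the template further down.

```json
{
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2020-12-01",
      "name": "[parameters('sites_function_name')]",
      "properties": {}
    },
    {
      "type": "Microsoft.Web/sites/config",
      "apiVersion": "2020-12-01",
      "name": "[concat(parameters('sites_function_name'), '/authsettingsV2')]",
      "dependsOn": [
        "[resourceId('Microsoft.Web/sites', parameters('sites_function_name'))]"
      ],
      "properties": {}
    }
  ]
}
```

The dependsOn entry is the important part: it makes sure the function app exists before ARM tries to apply the auth settings to it.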

Minimal ARM Template

Here is my ARM-template that will deploy the settings addressed in the other post.

    "$schema": "",
    "contentVersion": "",
    "parameters": {
      "appClientId": {
        "type": "string",
        "metadata": {
          "description": "The Client ID of the AppReg that the function is a part of."
      "sites_function_name": {
        "type": "string",
        "metadata": {
          "description": "The function app name"
      "tenantID": {
        "type": "string",
        "metadata": {
          "description": "The ID of the tenant providing auth token"
    "resources": [
        "name": "[concat(parameters('sites_function_name'), '/authsettingsV2')]",
        "type": "Microsoft.Web/sites/config",
        "apiVersion": "2020-12-01",
        "properties": {
          "platform": {
            "enabled": true
          "globalValidation": {
            "requireAuthentication": true,
            "unauthenticatedClientAction": "Return401"
          "identityProviders": {
            "azureActiveDirectory": {
              "enabled": true,
              "registration": {
                "openIdIssuer": "[concat('',parameters('tenantID'),'/')]",
                "clientId": "[parameters('appClientId')]"
            "isAutoProvisioned": false
          "login": {
            "tokenStore": {
              "enabled": true
          "httpSettings": {
            "requireHttps": true
    "outputs": {}

I will go thru some of the properties:

  • properties/platform/enabled: This enables the whole setting. You can switch it between true and false if you, for instance, have different rules for TEST and PROD. Simply turn it off for TEST and enable it for PROD.
  • globalValidation/requireAuthentication: Should be set to true, but in some cases you might want to be able to handle unauthorized calls as well.
  • globalValidation/unauthenticatedClientAction: Set this to Return401 if the functions will be called as APIs. For other scenarios, consult the documentation.
  • identityProviders/azureActiveDirectory: This object contains all the settings needed for the AAD type of authentication.
  • identityProviders/azureActiveDirectory/registration/openIdIssuer: This points to your tenant using its GUID and makes sure that the authentication token is issued by your Azure tenant. If you do not know your tenant GUID, you can find it on the Azure AD overview page in the portal.
  • identityProviders/azureActiveDirectory/registration/clientId: This is the ID of the Application Registration that was created when you set up the function authentication. If this is the first run, create an application registration first and use it with this function. I made an easy walkthru post a while back.
  • properties/httpSettings/requireHttps: Always require HTTPS.

Additional settings

Consider reading the documentation for additional settings. There are a lot of properties relating to other authentication providers, but also some settings relating to authentication and AAD that you might find useful.

  • jwtClaimChecks/allowedGroups: If you are using claims and groups/roles, you can make sure that the caller has the right credentials (I have not validated this functionality with APIm).
  • allowedClientApplications: Stop callers that are not APIm simply by adding the client ID of your APIm (I think, I have not tried this).
  • allowedAudiences: Might be useful to add other clients as audiences as well. The connected Application Registration ID (the clientId in the ARM-template) is always allowed, even if it is not added as allowed.
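I have not tried these against APIm either, but judging from the authsettingsV2 schema, this is roughly where they would live inside the azureActiveDirectory object. All the placeholder values in braces are mine, not real IDs:

```json
"identityProviders": {
  "azureActiveDirectory": {
    "enabled": true,
    "validation": {
      "jwtClaimChecks": {
        "allowedGroups": [ "{group-object-id}" ],
        "allowedClientApplications": [ "{apim-client-id}" ]
      },
      "allowedAudiences": [ "{additional-audience}" ]
    }
  }
}
```

Treat this as a sketch to start from, and verify the property paths against the current template reference before deploying.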

Next step

You should now add your version of the ARM template to your CI/CD process, and let me know how it works out for you. You can find me on Twitter.

Secure communication with APIm and Functions using Managed Identity

Using managed service identity is an easy-to-configure, out-of-the-box feature that ups your level of security within your organization.

Having an Azure Function exposed using Azure API management is very easy. So easy in fact that I will not address it here, but here is a good source if you need to know it, and here is a very basic one. In some cases, the level of security given by the Function’s out of the box functionality is too low. Just having a “code” to protect the function is not enough in my opinion. Especially since it is so easy to set this up. This post provides a walkthrough for using built-in functionality for Managed Service Identity (or MSI for short). As always, I try to show this in a simple, just make it work, way.

What is an MSI?

First, I need to explain the basics of an MSI. You want a service or a process to have an account, a way of proving its identity, and you want that identity to be protected, but you do not want to handle (and send) passwords everywhere. This is where the Managed Service Identity comes into play. It is connected to Azure AD, but the actual authentication is handled by Azure. You never need to send credentials. The authentication, and the authenticity of the service, is handled by Azure AD.

Looking at the concrete stuff, the MSI is an Enterprise Application in your AAD, valid only for that service instance.

Using MSI for other integration services

This “built-in” functionality means that you can use this way of securing not only functions but Logic Apps and Azure Service Bus as well. The service bus even allows you to combine it with RBAC.
Here is some documentation for securing Logic Apps in (almost) the same way, and here is a great post on how to setup MSI for Azure Service Bus.

What we need to make it work

  1. Your APIm instance must have a “Managed Identity” configured.
  2. Your function must have “Authentication” configured.
  3. The APIm has an updated policy.
  4. A successful test.

1: The APIm MSI

You can set that up using this documentation. Note that the screen capture in that post is a bit dated, but all the names are the same. I suggest you use System Assigned as the User Assigned is (still) in preview.
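If your APIm instance is also deployed with ARM, the system-assigned identity is just an identity block on the service resource. Here is a trimmed fragment; the parameter name is made up, and a real deployment needs sku, location and publisher properties as well:

```json
{
  "type": "Microsoft.ApiManagement/service",
  "apiVersion": "2021-08-01",
  "name": "[parameters('apim_name')]",
  "identity": {
    "type": "SystemAssigned"
  }
}
```

Once deployed, the identity shows up as an Enterprise Application in your AAD, just like described above.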

2: Configure Authentication for your function

This is easy but requires a couple of steps.

Find the authentication setting

In the left menu, you can find it under Settings.

Update Authentication Settings

Find and click Add Provider. This will set up everything you need.

Start by selecting Microsoft from the dropdown. Note that there are other IdPs available.

Fill in the rest according to this:

The name is up to you. I would just like to add that you should use an environment identifier in there somewhere.
Make sure you select 401 under Unauthenticated requests, then click Add to complete the configuration.

Back at the Authentication page, you can see that Azure created an Identity for your function. The App (client) ID will be used later. Note that you can only have one identity per Azure Function App. Every function in the App will be protected the same way.

3: The APIm Policy

We now have everything we need and can add the policy to APIm. This policy basically states that we would like to authenticate to a particular App ID using Managed Identity.

Open the Policy you need to update and add this before calling the function, in the inbound policy.

<authentication-managed-identity resource="e23a1800-0000-0000-0000-00000009a0" output-token-variable-name="msi-access-token" ignore-error="false" />
<set-header name="Authorization" exists-action="override">
    <value>@("Bearer " + (string)context.Variables["msi-access-token"])</value>
</set-header>

The GUID value for the resource attribute is the App (client) ID of your function, simply paste yours instead of mine from the example.

4: A successful test

Use your favorite way of invoking the API and call it. You do not need to change anything on the caller’s side; the API client does not need to update anything related to authentication.

If you want to make sure your authentication is working properly, remove the policy and call the API again. If everything is working properly, you should receive an authentication error.

Automation using CI/CD

I made a blog post about this. You can read it here.

Using the AppGw in front of APIm – part 2

This is the sequel to my previous post Using AppGw in front of APIm, in which I showed how to setup an Application Gateway (AppGw) in-front of an APIm instance.

In this post I will show how you should (could) configure a health probe, configure the firewall and lastly how to secure your APIm to only accept calls that have been routed thru your AppGw.

Configuring a health probe

Why do you need this? A health probe is used by the AppGw to see which services in a backend pool are healthy. If you do not have more than one instance in a backend pool, perhaps you do not need this feature. If you have multiple instances, the AppGw will only send requests to the healthy instances. This is basic load balancing.

To configure a health probe for APIm you can use the “status” service that is available in all APIm instances. You can test it by sending a GET to https://{yourAPIminstanceFQDN}/status-0123456789abcdef. If you receive 200 Service Operational, you know the service is healthy.

We can use this call to get the APIm status in our health probe.
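If you script the AppGw with ARM instead of the portal, a probe using this status endpoint could be sketched like this. It goes into the probes array of the Application Gateway resource; the probe name and host below are made up:

```json
{
  "name": "apim-status-probe",
  "properties": {
    "protocol": "Https",
    "host": "myapim.azure-api.net",
    "path": "/status-0123456789abcdef",
    "interval": 30,
    "timeout": 30,
    "unhealthyThreshold": 3,
    "pickHostNameFromBackendHttpSettings": false
  }
}
```

The interval, timeout and threshold values are the common defaults; tune them to your own SLA needs.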

Setting it up

Start by clicking “Health probes” in the left menu.

Then find and click + Add at the top of the new blade.

Set the values for the health probe like this:

  • Name: The name of the probe.
  • Protocol: HTTPS (do not use HTTP).
  • Host: The FQDN of your APIm instance, usually {yourinstancename}.azure-api.net.
  • Pick hostname from the HTTP settings: I use No; you could use Yes if you know what you are doing.
  • Pick port from the HTTP settings: I use Yes, because I know that the port is set there.
  • Path: Set to /status-0123456789abcdef

Leave the rest as is, but make sure you select the HTTP setting you created in the previous post.

End the configuration by testing it. Click Test.

It tests the setting:

And hopefully, it works.

The firewall

This is so easy; it might not even need to be documented.

If you configured the firewall according to the steps in part one, you are already protected but I will go thru the different settings and provide some experience.

This is the basic configuration:

Basic stuff

Tier should be set to WAF V2. The Standard tier is cheaper but does not contain the WAF.

Firewall status: should be Enabled, but you could disable it in DEV to find out whether a caller is locked out because of the firewall or not.

Firewall mode: should be set to Prevention. You do want to keep the bad people out, to prevent them. The other setting, Detection, will just log a bad person.


Exclusions

This is where you configure the WAF to not block calls based on “strange data”. The firewall scans the incoming request for strange data, or junk data if you will. A long string of letters, such as an auth header or a cookie, can be considered strange, and the firewall will reject the call. You might, therefore, need to exclude some things from inspection.

The official documentation gives some examples. I always add: Request Header Name, Starts with, Auth to eliminate any authorization issues.
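In ARM terms, that exclusion ends up in the exclusions array of the webApplicationFirewallConfiguration block on the AppGw resource. A sketch of just that array:

```json
"exclusions": [
  {
    "matchVariable": "RequestHeaderNames",
    "selectorMatchOperator": "StartsWith",
    "selector": "Auth"
  }
]
```

This tells the WAF to skip inspection of any request header whose name starts with Auth, which covers the Authorization header.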

Global parameters

Note that these settings are global. All incoming calls will be scanned using these settings.

Inspect request body: If you turn this on, the WAF will automatically scan incoming requests for strange data. For instance, if the incoming request is marked as Content-Type: application/json, the WAF will validate the JSON and reject it if the JSON is malformed.

Max request body size: Here is a big limitation of the WAF. If you turn on inspection, you cannot accept calls bigger than 128 KB. I recently had a project where the caller wanted to send files in excess of 20 MB as request bodies. We had to devise a workaround, to keep the WAF inspection turned on for everything else. The settings are global.

File upload limit: If you can get your caller to send the messages as file uploads instead of request bodies, the upper limit is 100 MB.
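For reference, these global parameters map to the webApplicationFirewallConfiguration block if you deploy the AppGw with ARM. Here is a sketch with the values discussed above; the OWASP rule set version is an assumption, so check what is current before using it:

```json
"webApplicationFirewallConfiguration": {
  "enabled": true,
  "firewallMode": "Prevention",
  "ruleSetType": "OWASP",
  "ruleSetVersion": "3.2",
  "requestBodyCheck": true,
  "maxRequestBodySizeInKb": 128,
  "fileUploadLimitInMb": 100
}
```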


Rules

Click the Rules tab at the top of the blade.

The basic setting is to use the latest version of the OWASP rule set to be protected from the bad people. For more information on OWASP, visit their website.

You can leave the Advanced rule configuration disabled for now. This feature is useful if your callers are running into issues and you need to disable a certain rule to allow their calls. Just find the offending rule and uncheck it.

Please be aware that these settings are global and lower your security baseline. Try to update the caller instead.

Secure your APIm

The AppGw has its own IP address, and you can use that for IP-whitelisting. You can also have the AppGw set a custom header and have APIm look for that header. That method will not be used here, but look under “Rewrites” in the left menu.

Find the IP address

To find the IP address of your AppGw, look under Overview and to the right you will find the Frontend Public IP address:

In my case it is

Update APIm global policy

Switch to your APIm instance and select APIs in the left menu. Then select All APIs (this is the global policy). Lastly select Add Policy under Inbound processing.

Choose Filter IP addresses and in the new page, choose Add IP filter.

Since we will not be using a range of IP addresses, just fill in the first box with the IP address of your AppGw.

This will only allow calls from that IP address. The policy file will look like this:

<ip-filter action="allow">
    <address>{your AppGw public IP}</address>
</ip-filter>


If someone calls your APIm directly they will get an error message. Here is an example:


There are other useful features for you to explore. For instance, you can hook up the AppGw to a Log Analytics instance and look at the data being logged. This is very useful if your callers report errors due to rules or message inspection. You can also use it to show that the firewall stops incoming malicious calls.

I have found this service a useful and welcome addition to APIm in Azure, and I encourage you to try it out in your API platform.

Using AppGw in front of APIm

Everyone knows that the internet is a scary place, and we know that our APIm instance resides on it. We also know that, despite the obvious lack of a firewall, it still works just fine. So why should you add a firewall in front of your APIm instance?

The answer is simple: Added security and features, all provided by the Azure Application Gateway, or AppGw.

Let me go thru some of the main points of this architecture, then I will show you how to implement it.

Network vs service based?

This post is meant for a scenario in which the APIm instance is not network based. You can use this together with the standard edition, and still make the network huggers happy because you can use things like IP-whitelisting. If you are using the premium version of APIm you should set it up according to some other architectural walkthrough.

The AppGw needs a network, but you do not have to use it for anything more if you do not want to.

The Azure Front door

Let us get this out of the way early. The Azure Front Door, or AFD, is a very useful router and firewall. It is a global service, and it is easy to use. Why not put that in front of your APIm instance?

According to this very handy flowchart, the AFD is meant to be a load balancer in front of several instances. The AppGw has some routing capabilities, but if you have multiple instances of APIm, I really think you should be using the APIm premium offering instead. The AppGw is more of a router and not so much a load balancer.


The communication overview of the setup looks something like this:

  • The API user calls your API to send a salesorder.
  • The call is routed to the local DNS that hosts the domain. In that DNS, there is a CNAME that points to the public IP address of the AppGw.
  • The gateway receives the call and routes it to the APIm instance’s public IP address.
  • Now the call can be sent anywhere the APIm instance has access to. Perhaps an external SaaS or an internal resource via a firewall or something else.

AppGw is a great product for placing at the very edge of your internet connected services. Here are some good reasons.


The WAF

The Web Application Firewall, or WAF, is a firewall designed to handle API, or web, calls. Besides mere routing, you can also configure it to look at messages and headers so that they conform to what is expected. One example is that it can see if the payload is valid JSON if the content-type header is set to application/json.

But the best thing is its support of rules based on the recommendations from OWASP. This organization looks at threats facing the internet and APIs, such as SQL injection or XML External Entities. Its Top 10 Security Risks is a very good place to start learning about what is out there. The only thing you need to do is to select OWASP protection from a dropdown and you are good to go. Security as a Service at its finest.

Sink routing

One popular way of setting up any kind of routing is to default all calls to a “sink”, i.e. the void, as in no answer, unless some rule is fulfilled. One such rule is a routing rule. This rule only allows paths that conform to specific patterns, and any sniffing attempt by any kind of crawler is met with a 502.

A rule that corresponds to the example above might be /orders/salesorder*. This allows all calls to salesorder but nothing else, not even /orders/.


Logging

I will not go into much detail here, but you can get access to everything that is sent thru the WAF. Each call ends up in a log which is accessible using Log Analytics, and as such you can do a lot of great things with that data.

Setting it up

There are many cool things you can do. I will show you how to setup the most basic of AppGw.

The start

You need to complete the basic settings first. Here is how I setup mine.

Make sure that you select the right region, then make sure you select WAF V2. The other SKUs are either old or do not contain a firewall, and we want the firewall.

Next, enable autoscaling. Your needs might differ, but do let this automated feature help you achieve a good SLA. It would be bad if the AppGw could not take the heat of a sudden load increase when the systems behind it can.

Firewall mode should be set to prevention. It is better to deny a faulty call and log it, instead of letting it thru and logging it.

Network is a special part of the setup, so it needs its own heading.


Network

You need to connect the AppGw to a network and a Public IP, but you do not need to use the functionalities of the network.

Configure a virtual network that has a number of IP-addresses. This is how I set it up:

Now you are ready to click Next:Frontends at the bottom.


Frontends

These are the endpoints that the AppGw will use to be callable. If you need an internal IP address, you can configure that here.

I have simply added a new Public IP and given it a name. For clarity, the picture contains settings for a private IP, but that is not needed if you only need to put it in front of APIm.

Click Next:Backends at the bottom.


Backends

It is time to add the backend pools. This can be multiple instances of APIm or another service that will be addressed in a “round robin pattern”, so load balancing yes, but in a very democratic way. Therefore, you should not really use it for those scenarios described earlier.

Just give it a name and add the APIm instance using its FQDN.

When you are done, click Next:Configuration.
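If you later want to script this step, the backend pool is a small fragment on the Application Gateway resource. A sketch with made-up names and a made-up FQDN:

```json
"backendAddressPools": [
  {
    "name": "apim-pool",
    "properties": {
      "backendAddresses": [
        { "fqdn": "myapim.azure-api.net" }
      ]
    }
  }
]
```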


Configuration

This is … tricky and filled with details. Be sure to read the instructions well and take it easy. You will get thru it.

Add a listener

  • Start by adding a routing rule. Give it a name. I will call mine apim_443.
  • Next you need to add a Listener. Give it a good descriptive name. I will call mine apim_443_listener.
  • Choose the frontend IP to be Public and choose HTTPS (you do not ever run APIs without TLS!)

This is the result

Note that there are several ways to add the certificate. The best way is to use a KeyVault reference.

Configure backend targets

Next, you shift to the Backend targets tab.

The target type is Backend Pool. Create a new backend target and call it Sink. More on this later.

Next you need to configure the HTTP setting. I know it is getting tricky, but this is as bad as it gets. Click Add new under HTTP settings.

HTTP setting

  • HTTP settings name: Just give it a name.
  • You will probably be using port 443 for your HTTPS setup.
  • Trusted root certificate: If you are using something custom, such as a custom root certificate for a healthcare organization, you can select No here and upload the custom root CA. If this is not the case, just select Yes.
  • If you need a request timeout other than the standard 20 seconds, you change it here. I will leave it unchanged. Note that some services, such as Logic Apps, can have much higher timeout values, and this needs to be reflected all the way up here.
  • I think you should override the hostname. This is simply a header for the backend. You could potentially use it as an identifier, but there is a better way to implement that.
  • Lastly, you want to create custom probes. These are health probes that check if the APIm instance is alive and well.
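The bullet points above correspond to a backendHttpSettingsCollection entry if you deploy with ARM. Here is a sketch with made-up names, wiring in a custom probe:

```json
"backendHttpSettingsCollection": [
  {
    "name": "apim-https-setting",
    "properties": {
      "port": 443,
      "protocol": "Https",
      "requestTimeout": 20,
      "pickHostNameFromBackendAddress": true,
      "probeEnabled": true,
      "probe": {
        "id": "[resourceId('Microsoft.Network/applicationGateways/probes', 'my-appgw', 'apim-status-probe')]"
      }
    }
  }
]
```

Note how the probe is referenced by resource ID, which is why the probe and the HTTP setting need to live in the same AppGw resource.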

Path based rule

This is where you set the routing rule presented above. Imagine we have an API that is hosted under a given path. We will configure that, and also add a “sink” as a catch-all, where we send the calls that do not match any API route.

Choose to use a backend pool.

Set the path to your needs for your APIs. The syntax is very simple, just use a * to indicate a “whatever they enter”. In this case I have set it to “orders/salesorders*”. This means that the API above will match the routing rule and it will target the MyAPImInstance backend target using the HTTP settings we defined earlier.

Since we defined an empty “Sink” backend earlier under “Configure backend targets”, the sink is the default target. Only when this routing rule is fulfilled will the call be sent to the APIm instance.
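If you prefer ARM over the portal wizard, the path rule and the sink default can be sketched as a urlPathMaps entry on the AppGw resource. All names here are made up:

```json
"urlPathMaps": [
  {
    "name": "apim-path-map",
    "properties": {
      "defaultBackendAddressPool": {
        "id": "[resourceId('Microsoft.Network/applicationGateways/backendAddressPools', 'my-appgw', 'Sink')]"
      },
      "defaultBackendHttpSettings": {
        "id": "[resourceId('Microsoft.Network/applicationGateways/backendHttpSettingsCollection', 'my-appgw', 'apim-https-setting')]"
      },
      "pathRules": [
        {
          "name": "salesorders-rule",
          "properties": {
            "paths": [ "/orders/salesorders*" ],
            "backendAddressPool": {
              "id": "[resourceId('Microsoft.Network/applicationGateways/backendAddressPools', 'my-appgw', 'apim-pool')]"
            },
            "backendHttpSettings": {
              "id": "[resourceId('Microsoft.Network/applicationGateways/backendHttpSettingsCollection', 'my-appgw', 'apim-https-setting')]"
            }
          }
        }
      ]
    }
  }
]
```

Anything not matching /orders/salesorders* falls thru to the empty Sink pool, which is exactly the catch-all behavior described above.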

When you are done, click Add to return to the Routing Rule setting, and then Add again to create the routing rule.

When you are back in the overview page, click Next:Tags to advance.


Tags

Add tags depending on your needs. Perhaps the owning organization, or an environment tag.

Create it

When you are done, click Create, have it validated and then create your AppGw.

Reconfiguring the DNS

The last thing you need to do in this case is to point your domain to your new AppGw. This is usually done by someone with elevated admin rights, and you might need to send out an order. What you need is the IP address of the AppGw you just created. You can easily find it either during deployment, since it is created first, or wait until the creation is done and find it on the overview page.

The person updating your DNS needs to know which CNAME (the name before the domain name in the URL) you want and which IP-number to point that to.

Before you go

You might be thinking that the AppGw can be expensive and particularly if you are using multiple instances of APIm (dev/test/prod). You do not need multiple instances of the AppGw if you use the API-path cleverly.

If you need two different hostnames, for instance one for test and one for production, you need two instances, as you can only have one Public IP per AppGw.

If you need to save a bit of money you could instead separate the environments by path on the same hostname, one path for test and one for production. The only things you need are two routing rules, one pointing to production and one pointing to test.

Next steps

In the next post I will give you pointers on how to set up the WAF, how to reconfigure the health probe to better fit APIm, and how to secure the communication between the AppGw and the APIm instance.





Documentation tools for VS Code

So, you are at the end of a project or task, and you need to document the thing that you did. I recently did enough documentation to change my title from Solution Architect to Author. Here are my tips and tricks for making a documentation experience better when using VS Code.


Markdown

Of course, you need to document in markdown, and you commit the markdown files as close to the reader as possible. If you need to document how a developer should use an API or some common framework, the documentation should be right next to the code, not in a centralized docs hub!

When I use markdown, I always keep this cheat sheet close. This is because I can never remember all the cool and nice things you can use markdown for.

VS Code and markdown

Multiple screen support

The support for markdown in VS code using extensions is great but there is one trick you need to know if you have two screens. Using this trick, you can have the markdown on one screen and the preview on the other.

  1. Open VS Code and your markdown file.
  2. Use the shortcut Ctrl+K, then press O (not zero, the letter O).
  3. This opens a new instance of VS code with your workspace open.
  4. Turn on the Preview in the new window. You can use Ctrl+Shift+V

Now, you can have the preview on one screen and the markdown on another. Every time you update and save the markdown, the preview will be updated.

Extension 1: Markdown all in one

A serious markdown tool that helps you with more than just bold and italics. It has shortcut support for those two, but you can also create a table of contents (ToC) and index your headings, aka use section numbers. It has millions of downloads and can be found here: GitHub – yzhang-gh/vscode-markdown: Markdown All in One

Extension 2: File Tree Generator

Every time you document code you seem to end up presenting folder structures, and then referring to them. Using this extension you can easily create nice-looking folder trees and copy and paste them between markdown documents.


There are other cool features and extensions for markdown. The important thing is to know whether the platform you are uploading to can support, or render, the result.

One such thing is Mermaid, which can render diagrams based on text. This makes it super duper easy to document message flows, Gantt schemas or even Git branches.
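To give you an idea, here is a minimal, made-up Mermaid flowchart describing the AppGw setup from the earlier posts. On a platform that renders Mermaid, this fenced block becomes a left-to-right diagram:

```mermaid
graph LR
    Caller[API user] --> AppGw[Application Gateway] --> APIm[API Management] --> Func[Azure Function]
```

The whole diagram is plain text, so it lives happily in Git next to your markdown and shows up in diffs like any other change.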