Bypassing cache in APIm

The good thing about caching

One very strong feature of Azure API Management is the ability to cache data. When caching is implemented, the response is picked up from an in-memory store and returned to the caller within milliseconds. It all depends on the type of data returned, but not all data needs to be fetched fresh from the backend systems every time.

Caching the response to creating an order is probably a bad idea, but a list of the company offices might be a good candidate.

There are a million articles on how to implement caching in APIm, including the official documentation.

Here is an example that stores a response for an hour, with a separate cache for each developer/caller.

<policies>
    <inbound>
        <base />
        <cache-lookup vary-by-developer="true" vary-by-developer-groups="false" />
    </inbound>
    <outbound>
        <base />
        <cache-store duration="3600" />
    </outbound>
</policies>

The trouble with caching

In some cases you need to force the call to get data from the backend system instead of the cache. One such case is during development of the backend system: if something is updated, the data is not refreshed until the cache has timed out, and you need that new data now!

I did not find any ready-made examples of how to achieve this, which is why I wrote this post.

How to control the cache

The official documentation points to using a header called Cache-Control. More information can be found here. In fact, if you test your API from the portal, the tester always sets this header to no-store, no-cache. It is up to you how to handle this header in your API.

Example of no-cache

This is what I did to implement the following case: “If someone sends a Cache-Control header with no-cache, you should get the data from the backend system and ignore the cache.”

Using the same example from above, I added some conditions.

<policies>
    <inbound>
        <base />
        <choose>
            <when condition="@(context.Request.Headers.GetValueOrDefault("Cache-Control","").Contains("no-cache") == false)">
                <cache-lookup vary-by-developer="true" vary-by-developer-groups="false" />
            </when>
        </choose>
        <!-- Call your backend service here -->
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
        <choose>
            <when condition="@(context.Request.Headers.GetValueOrDefault("Cache-Control","").Contains("no-cache") == false)">
                <cache-store duration="3600" />
            </when>
        </choose>
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>

It is very straightforward: find the header, and if it does not contain no-cache, use the cache.
On a cache hit, the cache-lookup policy in the inbound section short-circuits the request, skips the backend, and jumps straight to the outbound section, returning whatever is in store.
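The condition used in both policy sections can be sketched as a plain function. This is just an illustration of the logic, not anything APIm runs:

```python
def use_cache(headers: dict) -> bool:
    # Mirror of the policy expression above: the cache is used only when
    # the Cache-Control header does not contain "no-cache".
    cache_control = headers.get("Cache-Control", "")
    return "no-cache" not in cache_control
```

A caller that wants fresh data simply sends Cache-Control: no-cache; everyone else gets the cached response.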

Using OAUTH 2.0 in Logic Apps

OAUTH 2.0

There are many scenarios where you need to call a service that implements OAuth 2.0. It has support for roles and claims, for instance. Out-of-the-box support for OAuth 1.0 is really easy to use and there are many walkthroughs on that topic. I will show how to configure a Logic App to use OAuth 2.0.

This is also related to my earlier post Getting a bearer token from AAD using Logic Apps, where I show you how to get an OAuth 2.0 token using Logic Apps.

The scenario

Someone has set up a service that handles sensitive data. Protecting it with only an API key is considered too low a level of security. The provider has set up an Application Registration in Azure AD and provided you with a Client ID, Client Secret, and Scope. All are needed to authenticate.

How this is set up in AAD is out of scope for this post.

The solution

We decide to use Logic Apps and an HTTP connector. It has built-in support for OAuth 1.0, but we are going to use 2.0.
Here is a mock-up of the settings; let's go through them.

  • URI: The URI of the service you need to call.
  • Header: api-key: You usually need to provide an API key when calling an API. This setting is specific to the service you need to call.
  • Authentication Type: Choose Active Directory OAuth
  • Authority: Set to https://login.microsoftonline.com when using Azure AD
  • Tenant: Your Azure AD TenantID
  • Audience: Provide the Scope you have been sent. Make sure to omit the /.default at the end of the scope string, if present.
  • Client ID: The Client ID you have been sent.
  • Client Secret: The Client Secret you have been sent.
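If you open the Logic App's code view, the designer settings above end up in the HTTP action's authentication block, roughly like this (the placeholder values are mine):

```json
"authentication": {
    "type": "ActiveDirectoryOAuth",
    "authority": "https://login.microsoftonline.com",
    "tenant": "<your-tenant-id>",
    "audience": "https://api.example.com/my-service",
    "clientId": "<your-client-id>",
    "secret": "<your-client-secret>"
}
```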

That is actually it!

Some notes

Scope

The strange and hard part for me was figuring out how to configure the Scope. First off, you put the Scope in the Audience field, which feels strange. Then you must provide the base Scope. That was different from what I was used to.

When you use Postman to get an OAuth 2.0 token, you send the scope with /.default at the end of it to say “give me claims for the default scope”. When I set the property like that, I got an error. You need to remove the suffix.
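Trimming the suffix is trivial; a one-liner in Python (the scope value here is a made-up example, and removesuffix needs Python 3.9+):

```python
scope = "https://api.myhome.com/.default"       # what Postman sends to the token endpoint
audience = scope.removesuffix("/.default")      # what goes in the Logic App Audience box
```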

Authority

This is hard to find in the documentation, but the setting makes sense. If you are using standard Azure (not US Government, China, or Germany) this is always set to https://login.microsoftonline.com. You can find the other settings here.

Getting a bearer token from AAD using Logic Apps

Why?

You can use the built-in authentication when calling, for example, external APIs. It is one of the best features of Logic Apps. Sometimes you might need to get the auth token anyway, such as when you think Logic Apps does not support OAuth 2.0. (Thank you Mötz Jensen for helping me understand it.)

How?

You need to get the usual things: Client ID, Client Secret, and audience or scope, depending on the version of OAuth you need to use. For version 2.0 you use scope, which is what the examples below do. If you are on version 1.0, replace scope with audience.

In my example I will use the following settings (pseudo-real):

  • Client ID: 0352cf0f-2e7a-4aee-801d-7f27f8344c77
  • Client Secret: Th15154S3cr32t!
  • Scope: https://api.myhome.com/.default
  • Tenant ID: a2c10435-de68-4994-99b2-13fed13bdadf

Configuring a call

Create a Logic app HTTP step and set the following properties:

  • Method: POST
  • URI: https://login.microsoftonline.com/tenantID/oauth2/v2.0/token
  • Headers: Content-Type: application/x-www-form-urlencoded

The body is a bit trickier. You need to create a URL-encoded string with the login info. It is not that hard: just type the property name, an = sign, and then the property value. Between properties you put an & sign (like a URL query string).

Using my values from above, the resulting string will look like this:

client_id=0352cf0f-2e7a-4aee-801d-7f27f8344c77&client_secret=Th15154S3cr32t!&grant_type=client_credentials&scope=https://api.myhome.com/.default

Remember to add grant_type=client_credentials, and do not use any line breaks. It is just one long string.
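If you would rather build (or sanity-check) the body in code, the same string can be produced like this, using the pseudo-real values from above; a small Python sketch:

```python
from urllib.parse import urlencode

params = {
    "client_id": "0352cf0f-2e7a-4aee-801d-7f27f8344c77",
    "client_secret": "Th15154S3cr32t!",
    "grant_type": "client_credentials",
    "scope": "https://api.myhome.com/.default",
}
# urlencode joins the pairs with & and percent-encodes reserved
# characters, which is exactly what x-www-form-urlencoded expects
body = urlencode(params)
```

Letting a library do the encoding also saves you from special characters in the secret breaking the request.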

Result

Looking at this in logic apps, the step looks like this:

Using the token

To use the token in a call you need to do two things.

  1. Get the token value from the output of the Get Token step.
  2. Add the token as a header to the call you need to make.

Get the token value

To make things easy I used a Parse JSON step and provided this schema:

{
    "properties": {
        "access_token": {
            "type": "string"
        },
        "expires_in": {
            "type": "integer"
        },
        "ext_expires_in": {
            "type": "integer"
        },
        "token_type": {
            "type": "string"
        }
    },
    "type": "object"
}
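If you prefer doing the extraction in code, the same step can be sketched in Python (the response body below is a shortened, made-up example):

```python
import json

# A trimmed-down token response, shaped like the schema above
response_body = (
    '{"token_type": "Bearer", "expires_in": 3599,'
    ' "ext_expires_in": 3599, "access_token": "eyJ0eXAi..."}'
)

token = json.loads(response_body)["access_token"]
# The header to add to the outgoing call
auth_header = {"Authorization": "Bearer " + token}
```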

Use the token value

In this case I am calling an API that expects an OAUTH 2.0 token. I configured the step like this:

The short-short version

Issue a call to your AAD token endpoint with an application/x-www-form-urlencoded body, making sure the body is one long URL-encoded string.

My US Tour :-)

Not exactly. I am going on a work trip to St Louis and Nashville, and since meetup.com is an awesome, global service, I reached out to the local groups. They welcomed me, and now I have two sessions lined up. Very happy!

The session: API Management 101

The topic
An introduction to how Azure API management can help you take control of your organization’s APIs.
APIs are built and published everywhere, and they handle data, authentication, and paths differently. API Management puts a uniform front on them, making your APIs easier to use and re-use. Security, documentation, users, even sign-up can be handled directly.

Who is this aimed at?
Developers or architects working in a mid-to-large organization that uses APIs to exchange data internally or externally. You do not need any prior knowledge about Azure API management.

Friday, December 9th St Louis

Hosted by St Louis Azure User Group

Microsoft Technology Center – @4220
4220 Duncan Ave #501 · St. Louis, MO

Sign up here.

Tuesday, December 13 Nashville

Jointly hosted by Nashville .Net User Group and The Nashville Microsoft Azure Users Group

Vaco Nashville
5501 Virginia Way Suite 120 · Brentwood, TN

Sign up here

The code and labs

All the code shown in the sessions can be found in this handy GitHub repo.

Azure DiagnosticSettings and Bicep

This is not an explain-everything article, but a guide to how I usually solve the issue of DiagnosticSettings.

The basics

I will assume you know what diagnostics are within Azure and that you know how to create and deploy Bicep. This post aims at showing you how to connect Azure Diagnostics to your resources using Bicep.

The Scenario

In this scenario I will use something very specific, but the approach will work just as well for yours. I am using autoscaling for an Azure Function environment (or App Service Plan if you prefer). If the function gets a lot of traffic, the autoscaler will add an additional instance, and then remove instances when the traffic goes down.

The autoscaler can alert users whenever it fires, either by sending emails or calling a webhook. However, you also need a record of when the autoscaler triggered. That is very useful when you want to analyze traffic and response times.

Bicep and DiagnosticSettings

A diagnostic setting is different compared to, let's say, an Azure Storage account. Normally, a resource can be deployed by itself, but a diagnostic setting needs to be connected to an existing resource. This is called an extension resource.

When deploying an extension resource you simply need to tell it which other resource it is connected to. You do this using the scope property.

Here is an example from my Bicep file:

resource LogAnalyticsConnection 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
  name: 'scaling'
  scope: FunctionplanElastic_Autoscale
  properties: {...}
}

The FunctionplanElastic_Autoscale is the autoscale I have created for my function.

The problem

When deploying a diagnostic setting you might not always know which metrics are available to you, and in some cases the metrics differ between the portal and the APIs used by Azure for deployment. So copying from the portal is not the best way; you get strange errors complaining about metrics not being available.

Another problem is that diagnostic settings are not exported as a part of the resource, so finding the settings can be really tricky.

A solution

This is in two parts: Finding what resource to scope, and finding what metrics and logs are available to you.

Finding the scope

This is the easy part. When you navigate the Azure Portal you connect diagnostic settings to a resource. That is the scope you are looking for. If you need it for an Azure Storage, that storage is the scope. In my case, I needed it for an autoscaling resource which in turn is connected to an Azure function. In this case, the diagnostic settings should be connected to the autoscaler.

Finding the Logs and Metrics

There is an API for this called Diagnostic Settings – List!
If you call the API you will get the possible diagnostic settings for that particular resource, including syntax. Using the API is a little tricky but here goes:

Authenticating the API

The caller needs to have read access to the resource. I recommend you use my “Login as yourself” post to manage the API authorization.

Setting up the URI

This is the tricky part. Here is the documentation version GET https://management.azure.com/{resourceUri}/providers/Microsoft.Insights/diagnosticSettings?api-version=2021-05-01-preview.

The tricky part is the resourceUri. Here is my version from Postman.

https://management.azure.com/subscriptions/:subscriptionId/resourcegroups/:resourcegroupName/providers/:provider/:resourceName/providers/Microsoft.Insights/diagnosticSettings?api-version=2021-05-01-preview

The resourceUri has four different parts:
– subscriptionId: I think you know what this is.
– resourceGroupName: Yeah, you know this too.
– provider: The provider name of the resource type you are trying to access. The easiest way to find this is to look in the URL when you access the resource in the Azure Portal. It always contains a /, for instance Microsoft.DataFactory/factories.
– resourceName: Simply the name of the resource you are trying to access.
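Putting the four parts together, the full request URI can be assembled like this; a sketch in Python, with placeholder argument names of my own choosing:

```python
def diagnostic_settings_uri(subscription_id: str, resource_group: str,
                            provider: str, resource_name: str) -> str:
    # provider is the two-part type, e.g. "Microsoft.DataFactory/factories"
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourcegroups/{resource_group}"
        f"/providers/{provider}/{resource_name}"
        "/providers/Microsoft.Insights/diagnosticSettings"
        "?api-version=2021-05-01-preview"
    )
```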

In my scenario: https://management.azure.com/subscriptions/XXXX-YYYY-zzzz-eeee-wuertweuygfdu/resourcegroups/SYS001-Identifier-RG/providers/microsoft.insights/autoscalesettings/MyAutoscaler/providers/Microsoft.Insights/diagnosticSettings?api-version=2021-05-01-preview

This replies back with this body:

{
    "value": [
        {
            ...
                "metrics": [
                    {
                        "category": "AllMetrics",
                        "enabled": false,
                        "retentionPolicy": {
                            "enabled": false,
                            "days": 0
                        }
                    }
                ],
                "logs": [
                    {
                        "category": "AutoscaleEvaluations",
                        "categoryGroup": null,
                        "enabled": false,
                        "retentionPolicy": {
                            "enabled": false,
                            "days": 0
                        }
                    },
                    {
                        "category": "AutoscaleScaleActions",
                        "categoryGroup": null,
                        "enabled": true,
                        "retentionPolicy": {
                            "enabled": false,
                            "days": 0
                        }
                    }
                ],
                "logAnalyticsDestinationType": null
            },
            "identity": null
        }
    ]
}

Note! There are two small issues here.
1. If you have already enabled any logging, then that configuration is what shows up. To see which features are available, the option has to be unconfigured.
2. If you have not added any diagnostic settings before, the API returns an empty list.
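Pulling the available log categories out of a response shaped like the one above can be sketched like this, with the empty-list case from note 2 handled explicitly:

```python
def available_log_categories(response: dict) -> list:
    # Note 2: no diagnostic settings configured yet -> empty "value" list
    if not response.get("value"):
        return []
    setting = response["value"][0]
    return [log["category"] for log in setting.get("logs", [])]
```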

So now I know which features are available. I need both logging options, so I will add allLogs to my Bicep.

Updating the Bicep file

My finished Bicep looks like this:

param logAnalyticsResourceId string
...
resource LogAnalyticsConnection 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
  name: 'scaling'
  scope: FunctionplanElastic_Autoscale
  properties: {
    workspaceId: logAnalyticsResourceId
    logs: [
      { 
        enabled: true
        categoryGroup: 'allLogs'
        retentionPolicy: {
          days: 30
          enabled:true 
        }
      }
    ]
  }
}

Conclusion

Adding diagnostic settings needs to be done in a separate process, since they cannot be exported with the resource. You can access an API to get the settings available to you.

If you need the files used in this scenario you can find them in my GitHub Repo.