Daniel's Tech Blog

Cloud Computing, Cloud Native & Kubernetes

Deploy ARM-based container images with Azure Kubernetes Service on your Azure IoT Edge devices

In my previous blog post I showed you how to build ARM-based container images with VSTS.

-> https://www.danielstechblog.io/building-arm-based-container-images-with-vsts-for-your-azure-iot-edge-deployments/

Now we need to deploy our container applications, or in IoT Edge jargon our container modules, onto the edge devices. For the deployment we will use an AKS cluster with the IoT Edge virtual kubelet (VK) provider. The IoT Edge VK provider runs as a virtual node in the AKS cluster and connects it with the Azure IoT Hub. This enables us to leverage our Kubernetes know-how to write the deployment template files and deploy the container modules programmatically.

-> https://github.com/Azure/iot-edge-virtual-kubelet-provider

Assuming that you have an AKS cluster in place, I will guide you through the IoT Edge VK provider installation. First, we need the helm chart files. The easiest way to get them is to clone the GitHub repository.

git clone https://github.com/Azure/iot-edge-virtual-kubelet-provider.git

The helm chart for deploying the IoT Edge VK provider supports Kubernetes clusters with RBAC enabled, but RBAC support is disabled in the chart's default settings. Before we start, we must create a secret in our AKS cluster containing the connection string to the Azure IoT Hub.

kubectl create secret generic iot-edge --from-literal=hub0-cs='<CONNECTION STRING>'

Then we can kick off the IoT Edge VK provider deployment.

cd iot-edge-virtual-kubelet-provider/src/charts/iot-edge-connector/
helm install -n hub0 --set rbac.install=true .

Checking the deployment status with kubectl get pods and kubectl get nodes should always be the next step. After a successful deployment, the connector pod is running and the IoT Edge VK provider appears as a virtual node in the cluster.

Furthermore, it is possible to connect more than one IoT Hub with additional deployments of the IoT Edge VK provider to the AKS cluster. That enables us to do massive scale container module deployments onto our Edge devices with one single deployment template. Adding a second IoT Hub can be done through the following commands.

kubectl create secret generic iot-edge2 --from-literal=hub0-cs='<CONNECTION STRING>'
helm install -n hub1 --set rbac.install=true,edgeproviderimage.secretsStoreName=iot-edge2,env.nodeName=iot-edge-connector-hub1 .

The deployment template file for the container modules has some specific configurations regarding Azure IoT Edge deployments. So, let us have a look at the following example file.

...
      annotations:
        isEdgeDeployment: "true"
        targetCondition: "tags.location.building='mobile' AND tags.environment='test'"
        priority: "15"
        loggingOptions: ""
...

As you can see, every pod or deployment template needs four annotations. The first one, isEdgeDeployment, does not need further explanation: it simply defines whether this is an edge deployment or not. The targetCondition is very important, because the tags you have defined in the device twin of your edge device control which devices receive the deployment.

-> https://docs.microsoft.com/en-us/azure/iot-edge/how-to-register-device-portal
-> https://docs.microsoft.com/en-us/azure/iot-edge/how-to-install-iot-edge-linux-arm
-> https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-device-twins#tags-and-properties-format
-> https://docs.microsoft.com/en-us/azure/iot-edge/module-deployment-monitoring#target-condition

Tags can be combined with AND or OR operators to define the final target condition. Even NOT is supported to exclude edge devices with a certain tag. For example, the above deployment only targets edge devices with a matching building tag mobile and environment tag test. If an edge device has only one of the tags assigned, it is not targeted by the deployment.
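
The target condition is evaluated server-side by Azure IoT Hub, but the matching logic for the simple AND case above can be sketched as follows. This is purely illustrative; the function name is hypothetical and not part of any IoT Hub SDK.

```python
# Rough sketch of how a target condition like
# "tags.location.building='mobile' AND tags.environment='test'"
# selects devices. Illustrative only; the real evaluation happens in IoT Hub.

def matches_target(tags: dict, building: str, environment: str) -> bool:
    # Both tags must be present and match, mirroring the AND operator.
    return (
        tags.get("location", {}).get("building") == building
        and tags.get("environment") == environment
    )

device_a = {"location": {"building": "mobile"}, "environment": "test"}
device_b = {"location": {"building": "mobile"}}  # environment tag missing

print(matches_target(device_a, "mobile", "test"))  # True -> targeted
print(matches_target(device_b, "mobile", "test"))  # False -> not targeted
```

As in the text, a device carrying only one of the two tags falls out of the target set.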

Priority controls which deployment is applied. A deployment with a higher priority supersedes the previously applied deployment. If the priorities are equal, the most recent deployment is applied. As of writing this blog post, loggingOptions is left empty.
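
The selection rule can be expressed as a simple ordering: highest priority first, creation time as the tie-breaker. The following sketch is illustrative only; IoT Hub performs this evaluation, not the VK provider, and the deployment names are made up.

```python
# Sketch of the rule: highest priority wins; on a tie, the most recently
# created deployment is applied. Illustrative only.

deployments = [
    {"name": "base", "priority": 10, "created": "2018-07-01T10:00:00Z"},
    {"name": "update", "priority": 15, "created": "2018-07-02T10:00:00Z"},
    {"name": "hotfix", "priority": 15, "created": "2018-07-03T10:00:00Z"},
]

# ISO 8601 timestamps sort lexicographically, so a tuple key is sufficient.
applied = max(deployments, key=lambda d: (d["priority"], d["created"]))
print(applied["name"])  # hotfix
```

Here "update" and "hotfix" share priority 15, so the newer "hotfix" deployment wins over both.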

Before we move on to the next part, I would like to give you a general recommendation: whenever possible, use a Kubernetes deployment definition instead of a Kubernetes pod definition. Otherwise, you are not able to do massive scale deployments with the IoT Edge VK provider. That said, the replicas field under spec is the important part here.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-webapp-arm
spec:
  replicas: 2
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
...

As mentioned earlier, we deployed two IoT Edge VK providers to connect two different Azure IoT Hubs with our Azure Kubernetes Service cluster. Setting replicas to two ensures that our deployment is pushed out to both Azure IoT Hubs and then applied to every edge device that meets the target condition. So, you can roll out a deployment to a thousand or more edge devices with one single deployment template.

...
    spec:
      containers:
      - name: go-webapp-arm
        image: azstcr1.azurecr.io/go-webapp-arm:latest
      nodeSelector:
        type: virtual-kubelet
      tolerations:
      - key: azure.com/iotedge
        effect: NoSchedule
...

The next settings you have to pay attention to are nodeSelector and tolerations. The nodeSelector specifies that the deployment should only target Kubernetes nodes of the type virtual-kubelet. You must also specify tolerations, because the IoT Edge VK provider nodes use a taint. The taint ensures that no pods are scheduled on the virtual nodes themselves; daemon sets, for example, would otherwise deploy pods onto them. With the taint in place, you must use a matching toleration in your deployment templates to target those virtual nodes for the IoT Edge deployments.

In general, the special part of an IoT Edge deployment are the config maps. With config maps you can specify certain configurations for the system modules edgeagent and edgehub. For the edgeagent system module you can provide the credentials for accessing a private container registry, as well as environment variables for the edgehub container module, which are applied during the creation process. The registry credentials are now necessary, because the iotedgectl tool no longer exists in the GA version of IoT Edge. Currently, there is only one important environment variable, OptimizeForPerformance, which you must set for IoT Edge deployments where your edge devices are resource-constrained devices like a Raspberry Pi.

-> https://docs.microsoft.com/en-us/azure/iot-edge/troubleshoot#stability-issues-on-resource-constrained-devices

...
apiVersion: v1
kind: ConfigMap
metadata:
  name: edgeagent
data:
  desiredProperties: |
    {
      "runtime": {
        "settings": {
          "registryCredentials": {
            "docker": {
              "address": "REDACTED",
              "password": "REDACTED",
              "username": "REDACTED"
            }
          }
        }
      },
      "systemModules": {
        "edgeHub": {
          "env": {
            "OptimizeForPerformance": {
              "value": "false"
            }
          }
        }
      }
    }
...

For the edgehub system module we can specify the message routing and how long messages should be stored in a queue before being sent to the container modules or upstream to the IoT Hub in Azure, in case any of them are unavailable or offline.

...
apiVersion: v1
kind: ConfigMap
metadata:
  name: edgehub
data:
  desiredProperties: |
    {
      "routes": {
        "route": "FROM /* INTO $upstream",
      },
      "storeAndForwardConfiguration": {
        "timeToLiveSecs": 5
      }
    }
...
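
Since desiredProperties is embedded as a plain JSON string inside the config map, it pays off to parse it locally before deploying; this catches syntax slips such as trailing commas that would otherwise surface only at deployment time.

```python
import json

# desiredProperties is a plain JSON string inside the config map; parsing it
# locally before deploying catches syntax errors such as trailing commas.
desired = """
{
  "routes": {
    "route": "FROM /* INTO $upstream"
  },
  "storeAndForwardConfiguration": {
    "timeToLiveSecs": 5
  }
}
"""

props = json.loads(desired)
print(props["storeAndForwardConfiguration"]["timeToLiveSecs"])  # 5
```

If the string contained a trailing comma, json.loads would raise a JSONDecodeError immediately instead of the error surfacing later in the pipeline.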

For your custom module configuration, you name the config map after the container name in the container specification. For example, it is possible to expose TCP port 80, so an HTTP server is reachable from external resources.

...
apiVersion: v1
kind: ConfigMap
metadata:
  name: go-webapp-arm
data:
  status: running
  restartPolicy: always
  version: "1.0"
  createOptions: |
    {
      "HostConfig": {
        "PortBindings": {
          "80/tcp": [
            {
              "HostPort": "80"
            }
          ]
        }
      }
    }

The full set of Docker container create options is available for our custom module configuration.

-> https://docs.docker.com/engine/api/v1.24/#3-endpoints
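
Because createOptions maps directly onto the Docker container create API, it can help to generate the JSON programmatically instead of hand-editing nested structures. A minimal sketch with a hypothetical helper:

```python
import json

# Hypothetical helper that builds the createOptions JSON for a single port
# binding. The structure follows the Docker container create API (HostConfig).
def create_options_with_port(container_port: int, host_port: int,
                             proto: str = "tcp") -> str:
    options = {
        "HostConfig": {
            "PortBindings": {
                f"{container_port}/{proto}": [{"HostPort": str(host_port)}]
            }
        }
    }
    return json.dumps(options, indent=2)

print(create_options_with_port(80, 80))
```

The output matches the createOptions block shown above and can be pasted into the config map.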

One final piece of advice: even if you do not specify anything for the createOptions, you should provide an empty one. Otherwise, the deployment will throw an IoT Edge VK provider error.

Now let us have a look at the final deployment template.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-webapp-arm
spec:
  replicas: 2
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: go-webapp-arm
  template:
    metadata:
      name: go-webapp-arm
      labels:
        app: go-webapp-arm
      annotations:
        isEdgeDeployment: "true"
        targetCondition: "tags.location.building='mobile' AND tags.environment='test'"
        priority: "15"
        loggingOptions: ""
    spec:
      containers:
      - name: go-webapp-arm
        image: azstcr1.azurecr.io/go-webapp-arm:latest
      nodeSelector:
        type: virtual-kubelet
      tolerations:
      - key: azure.com/iotedge
        effect: NoSchedule
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: edgeagent
data:
  desiredProperties: |
    {
      "runtime": {
        "settings": {
          "registryCredentials": {
            "docker": {
              "address": "REDACTED",
              "password": "REDACTED",
              "username": "REDACTED"
            }
          }
        }
      },
      "systemModules": {
        "edgeHub": {
          "env": {
            "OptimizeForPerformance": {
              "value": "false"
            }
          }
        }
      }
    }
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: edgehub
data:
  desiredProperties: |
    {
      "routes": {
        "route": "FROM /* INTO $upstream",
      },
      "storeAndForwardConfiguration": {
        "timeToLiveSecs": 5
      }
    }
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: go-webapp-arm
data:
  status: running
  restartPolicy: always
  version: "1.0"
  createOptions: |
    {
      "HostConfig": {
        "PortBindings": {
          "80/tcp": [
            {
              "HostPort": "80"
            }
          ]
        }
      }
    }

The deployment uses the image of a simple web server written in Go that was created previously. Just follow the blog post on “Building ARM-based container images with VSTS for your Azure IoT Edge deployments”.

-> https://www.danielstechblog.io/building-arm-based-container-images-with-vsts-for-your-azure-iot-edge-deployments/

The VSTS release definition is the same as for a standard Kubernetes release. So, just have a look at the documentation.

-> https://almvm.azurewebsites.net/labs/vstsextend/kubernetes/

Finally, we can kick off the whole CI/CD pipeline to deploy the Go web server onto our edge devices connected to two different IoT Hubs.

When everything goes well, we can connect directly to the web server.

The deployment template is available in my GitHub repository, if you need a starting point.

-> https://github.com/neumanndaniel/kubernetes/blob/master/iotedge/templates/arm32v7/go-webapp-arm.yaml
