
Installing Helm and Azure IoT Edge on a k3s Kubernetes cluster on Raspbian

This is the third and last blog post in a series covering k3s, a new Kubernetes distribution by Rancher.

-> https://k3s.io/

In this post we focus on deploying Azure IoT Edge on Kubernetes via the package manager Helm.

-> https://docs.microsoft.com/en-us/azure/iot-edge/about-iot-edge
-> https://helm.sh/

The topic is divided into two parts. The first covers the automated container image build of Tiller, the server-side component of Helm, for the linux-arm architecture, as well as the deployment and configuration of Helm itself. The second covers the Azure IoT Edge deployment on Kubernetes via Helm.

In both parts we use Azure Pipelines to achieve our overall goal. The setup of the Azure Pipelines agent was covered in the second blog post of this series.

-> https://www.danielstechblog.io/using-an-azure-pipelines-agent-on-a-k3s-kubernetes-cluster-on-raspbian/

Helm

Let us start with the container image build and deployment of Tiller and Helm. For that we create a new project in Azure DevOps named k3s.

We need to place a couple of files into the Azure Repos of our k3s project that are required for our build and release pipelines.

[Screenshot: k3shelm00]

The necessary files can be found in my GitHub repository.

-> https://github.com/neumanndaniel/kubernetes/tree/master/k3s/helm

The files and what they do are explained throughout this blog post.

Next, download the Dockerfile from the Helm GitHub repository and place it into the Azure Repos.

-> https://github.com/helm/helm/blob/master/rootfs/Dockerfile

By default the Dockerfile builds the x64-based container image of Tiller. Because we want the ARM-based container image, we must change line 15 to FROM arm32v7/alpine:3.9 and we are done.
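If you prefer scripting this change over editing the file by hand, a one-liner like the following works; it assumes the original line 15 reads FROM alpine:3.9, so verify that before running it.

# Assumption: line 15 of the Dockerfile currently reads "FROM alpine:3.9".
sed -i 's|^FROM alpine:3.9|FROM arm32v7/alpine:3.9|' Dockerfile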

Now we are ready to set up our build pipeline, which we name k3s-tiller-image. As agent pool, select Hosted Ubuntu 1604.

[Screenshot: k3shelm01]

In the task section we add the Bash task first and link the downloadHelm.sh shell script to it. This shell script reads the required Helm version from the file HELMVERSION and downloads the ARM-based binaries of Tiller and Helm. The next step in the script unpacks the archive and moves tiller and helm to the current working directory. Afterwards, a small clean-up keeps the container image small.
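The actual script is in the linked GitHub repository; a minimal sketch of what it does might look like this, assuming HELMVERSION holds a full version string such as v2.14.1 and that the binaries come from the official Helm download location.

#!/bin/bash
# Hypothetical sketch of downloadHelm.sh - the real script lives in the GitHub repository.
HELM_VERSION=$(cat HELMVERSION)
curl -LO "https://get.helm.sh/helm-${HELM_VERSION}-linux-arm.tar.gz"
tar -xzf "helm-${HELM_VERSION}-linux-arm.tar.gz"
# Move the binaries into the build context and clean up to keep the container image small.
mv linux-arm/tiller linux-arm/helm .
rm -rf linux-arm "helm-${HELM_VERSION}-linux-arm.tar.gz"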

[Screenshot: k3shelm02]

The last task for our build pipeline is the Azure CLI task. As with the Bash task we link the containerImageBuild.sh shell script to it. The script takes one argument as input, which in our case is the build number $(Build.BuildNumber).

[Screenshot: k3shelm03]

The script itself reads the file HELMVERSION and then sends the build context to the Azure Container Registry via az acr build. The container image is tagged with the Helm version stored in the file HELMVERSION and with the build number we gave as an input.
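The real script is in the repository as well; conceptually it boils down to something like the following, where acr is a placeholder for the registry name.

#!/bin/bash
# Hypothetical sketch of containerImageBuild.sh - the real script lives in the GitHub repository.
# $1 is the build number handed in as $(Build.BuildNumber); the registry name acr is a placeholder.
BUILD_NUMBER=$1
HELM_VERSION=$(cat HELMVERSION)
az acr build --registry acr \
  --image "tiller:${HELM_VERSION}" \
  --image "tiller:${BUILD_NUMBER}" .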

The final step for our build pipeline is enabling the CI option under Triggers.

[Screenshot: k3shelm04]

Make sure to include the HELMVERSION file under path filters. We only want to start the CI process when the HELMVERSION file is updated.

Following our build pipeline, we create our release pipeline named k3s helm init.

In the Artifacts section we need to ensure that the links to our build pipeline, with the CD trigger enabled, and to Azure Repos are configured correctly.

[Screenshot: k3shelm05]

The release pipeline consists of three tasks and starts with a Bash task. The Bash task executes the shell script downloadKubectl.sh, which downloads the latest ARM-based kubectl and helm binaries to our Azure Pipelines agent running on our k3s Kubernetes cluster.
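A rough sketch of what such a script might do, assuming the Helm version is again taken from the HELMVERSION file in the linked Azure Repos artifact; the authoritative version is in the GitHub repository.

#!/bin/bash
# Hypothetical sketch of downloadKubectl.sh - the real script lives in the GitHub repository.
# Fetch the latest stable ARM build of kubectl.
KUBECTL_VERSION=$(curl -sL https://storage.googleapis.com/kubernetes-release/release/stable.txt)
curl -LO "https://storage.googleapis.com/kubernetes-release/release/${KUBECTL_VERSION}/bin/linux/arm/kubectl"
chmod +x kubectl
# Fetch the matching ARM build of the Helm client.
HELM_VERSION=$(cat HELMVERSION)
curl -LO "https://get.helm.sh/helm-${HELM_VERSION}-linux-arm.tar.gz"
tar --strip-components=1 -xzf "helm-${HELM_VERSION}-linux-arm.tar.gz" linux-arm/helm
chmod +x helm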

[Screenshot: k3shelm06]

As seen in the screenshot, make sure that the Script Path and Working Directory are set correctly. Otherwise, our release fails at the first task.

The second task is a kubectl apply task. For this task to function properly we need to create the Kubernetes service connection to our k3s Kubernetes cluster.

[Screenshot: k3shelm07]

A click on New opens a new menu. There we specify a connection name; here we use the name of our k3s master, k3s-master-0. The KubeConfig part can be retrieved with the following command.

kubectl config view -o yaml --raw

Just copy and paste the output into the KubeConfig section. The cluster context should display default automatically when your kubeconfig only contains the k3s Kubernetes cluster information.

[Screenshot: k3shelm08]

Hit OK, jump back to the release pipeline task, and select the newly created service connection. As namespace we type in kube-system and then select the rbac-tiller.yaml template to be applied.

[Screenshot: k3shelm09]

The rbac-tiller.yaml creates a ServiceAccount for Tiller and the associated ClusterRoleBinding linked to the cluster-admin ClusterRole. Furthermore, we assign the ServiceAccount the imagePullSecrets we created in part two of the series, so the Tiller container image can be pulled from ACR.
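The authoritative manifest is in the linked GitHub repository. Purely as an illustration, the same objects could be created imperatively like this, with acr-pull-secret standing in for whatever name you gave the pull secret in part two.

# Illustration only - the real definition is rbac-tiller.yaml in the GitHub repository.
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller
# Attach the ACR pull secret (placeholder name) so the Tiller image can be pulled.
kubectl patch serviceaccount tiller --namespace kube-system \
  -p '{"imagePullSecrets": [{"name": "acr-pull-secret"}]}'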

That is all for task two, and we can move on to task three, which brings Tiller onto our k3s Kubernetes cluster. It is the helm init task we need here.

[Screenshot: k3shelm10]

As service connection we select k3s-master-0 and check Upgrade Tiller and Wait, as seen in the following screenshot.

[Screenshot: k3shelm11]

In the Arguments section copy the following code.

--service-account tiller --tiller-image=acr.azurecr.io/tiller:$(Build.BuildNumber) --node-selectors "kubernetes.io/role"="master" --override "spec.template.spec.tolerations[0].key"="node-role.kubernetes.io/master" --override "spec.template.spec.tolerations[0].operator"="Equal" --override "spec.template.spec.tolerations[0].value"="true" --override "spec.template.spec.tolerations[0].effect"="NoSchedule" --force-upgrade

This ensures that Tiller runs only on the master and does not waste capacity on the worker nodes. Besides that, we tell Tiller to use our pre-created ServiceAccount and our freshly built ARM-based container image.
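For reference, run outside of the pipeline the task translates into roughly the following helm init call; the Upgrade Tiller and Wait check marks map to --upgrade and --wait, and the registry name and image tag stand in for the actual values.

# Rough local equivalent of the helm init task (helm v2); registry and tag are placeholders.
helm init --service-account tiller \
  --tiller-image=acr.azurecr.io/tiller:20190701.1 \
  --node-selectors "kubernetes.io/role"="master" \
  --override "spec.template.spec.tolerations[0].key"="node-role.kubernetes.io/master" \
  --override "spec.template.spec.tolerations[0].operator"="Equal" \
  --override "spec.template.spec.tolerations[0].value"="true" \
  --override "spec.template.spec.tolerations[0].effect"="NoSchedule" \
  --force-upgrade --upgrade --wait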

The final step in the release pipeline is to select the agent pool k3s with our Azure Pipelines agent running in our cluster.

[Screenshot: k3shelm12]

Before we continue with the Azure IoT Edge deployment, we kick off our Tiller deployment by starting the build pipeline.

[Screenshots: k3shelm13, k3shelm14, k3shelm15, k3shelm16]

On our workstation we can now type in helm version to check the Tiller deployment in our k3s Kubernetes cluster. The output should look like this.

> helm version
Client: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}

Azure IoT Edge on Kubernetes

Before we can deploy Azure IoT Edge on Kubernetes, we must complete several preparation steps.

-> https://docs.microsoft.com/en-us/azure/iot-edge/how-to-install-iot-edge-kubernetes

First, we register an IoT Edge device, here our k3s cluster, in our Azure IoT Hub. This step is important to get the connection string, which the setup requires later.

-> https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-create-through-portal
-> https://docs.microsoft.com/en-us/azure/iot-edge/how-to-register-device-portal

[Screenshot: k3shelm17]
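If you prefer the Azure CLI over the portal for this step, the registration and the connection string retrieval might look like this; it requires the Azure IoT CLI extension, and the hub and device names are placeholders.

# Placeholder hub and device names - requires the Azure IoT extension for the Azure CLI.
az iot hub device-identity create --hub-name azst-iot-hub --device-id k3s --edge-enabled
az iot hub device-identity show-connection-string --hub-name azst-iot-hub --device-id k3s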

Afterwards run the following commands locally.

helm repo add edgek8s https://edgek8s.blob.core.windows.net/helm/
helm repo update
helm fetch edgek8s/edge-kubernetes
helm inspect edge-kubernetes-0.1.6.tgz

As you might have seen, the chart output does not tell us for sure whether the container images are ARM-based. So, let us assume they are x64-based and look up the ARM tags ourselves.

The Docker Hub links to the container images are the following ones.

-> https://hub.docker.com/r/azureiotedge/iotedged
-> https://hub.docker.com/r/azureiotedge/azureiotedge-agent
-> https://hub.docker.com/r/azureiotedge/update-identity-hook
-> https://hub.docker.com/_/traefik

In the end we can identify the following ARM-based images:

  • azureiotedge/iotedged:21860920-linux-arm32v7
  • traefik:v1.7-alpine
  • azureiotedge/azureiotedge-agent:22930764-linux-arm32v7
  • azureiotedge/update-identity-hook:21860920-linux-arm32v7

Back in Azure DevOps we create a new release pipeline named k3s-iot-edge and link our Azure Repos in the Artifacts section.

The release pipeline consists of three tasks. Since the Helm tasks of Azure DevOps do not support remote Helm chart repositories that need to be added via the helm repo add command, we fall back to Bash tasks in Azure DevOps.

The first task is again a Bash task that executes the downloadKubectl.sh shell script to prepare our Azure Pipelines agent with kubectl and helm.

Our second task is also a Bash task, executing the shell script helmRepoAdd.sh. The script prepares the Azure Pipelines agent for the final task and adds the required Helm chart repository for the Azure IoT Edge on Kubernetes deployment. Do not forget to check the Script Path and Working Directory for both Bash tasks.
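As with the other scripts, the real helmRepoAdd.sh is in the GitHub repository; in essence it initializes the Helm client on the agent and adds the chart repository, roughly like this.

#!/bin/bash
# Hypothetical sketch of helmRepoAdd.sh - the real script lives in the GitHub repository.
# Initialize the Helm client side only; Tiller is already running in the cluster.
helm init --client-only
# Add and refresh the chart repository for the Azure IoT Edge on Kubernetes deployment.
helm repo add edgek8s https://edgek8s.blob.core.windows.net/helm/
helm repo update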

[Screenshots: k3shelm18, k3shelm19]

The last preparation step is to store the IoT Edge device connection string as a secret in a variable called iotEdgeConnectionString under the Variables section.

[Screenshot: k3shelm20]

Finally, everything is prepared for the helm install task to be implemented.

As service connection we select k3s-master-0. Then specify edgek8s/edge-kubernetes under Chart Name and k3s under Release Name. Next, unselect Wait. If you keep the check mark, I have experienced that the Azure IoT Edge on Kubernetes deployment, specifically the edgeAgent startup, fails.

Under Set Value copy & paste the following code to use the ARM-based container images and reference the IoT Edge device connection string from our variable created earlier.

iotedged.image.tag=21860920-linux-arm32v7,edgeAgent.image.tag=22930764-linux-arm32v7,updateIdentityHook.image.tag=21860920-linux-arm32v7,deviceConnectionString=$(iotEdgeConnectionString)
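Outside of the pipeline, the equivalent helm install would look something like the following in helm v2 syntax; the connection string is a placeholder here, since in the pipeline the variable is resolved automatically.

# Rough local equivalent of the helm install task (helm v2); the connection string is a placeholder.
helm install edgek8s/edge-kubernetes --name k3s \
  --set iotedged.image.tag=21860920-linux-arm32v7 \
  --set edgeAgent.image.tag=22930764-linux-arm32v7 \
  --set updateIdentityHook.image.tag=21860920-linux-arm32v7 \
  --set "deviceConnectionString=<your IoT Edge device connection string>"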

[Screenshots: k3shelm21, k3shelm22]

Do not forget to adjust the agent pool to k3s, the same as with the other release pipeline.

Afterwards, we kick off the release pipeline.

[Screenshot: k3shelm23]

If everything went well, the k3s Kubernetes cluster should show up as a connected IoT Edge device in the Azure IoT Hub.

[Screenshots: k3shelm24, k3shelm25]

kubectl get pods -n msiot-azst-iot-hub-k3s should show us all IoT Edge on Kubernetes pods in a running state.

> kubectl get pods -n msiot-azst-iot-hub-k3s
NAME                        READY   STATUS    RESTARTS   AGE
edgeagent-f4998864c-8tbdn   2/2     Running   0          12m
iotedged-7dc5dfdc8b-kqlf8   1/1     Running   1          12m

Next steps would be the deployment of IoT Edge modules via the Azure IoT Hub directly or via the IoT Edge Connector for Kubernetes. For the latter, I have written several blog posts you can check out here.

-> https://www.danielstechblog.io/deploy-arm-based-container-images-with-azure-kubernetes-service-on-your-azure-iot-edge-devices/
-> https://www.danielstechblog.io/introducing-breaking-changes-to-the-iot-edge-vk-provider-helm-chart-and-deployment-templates-for-kubernetes/
-> https://www.danielstechblog.io/best-practices-azure-iot-edge-deployments-with-azure-kubernetes-service/
-> https://www.danielstechblog.io/stream-analytics-on-iot-edge-deployment-changes/
