A couple of weeks ago, I wrote about how to use a custom DNS server for domain-specific name resolution with AKS.
Today I am writing about how you can leverage the newly announced Terraform OSS Azure Resource Provider for the same configuration with your existing Azure Resource Manager template know-how. The Terraform OSS RP is currently in private preview; if you would like to try it out, you can sign up for it.
During the private preview only the three Terraform providers for Kubernetes, Cloudflare and Datadog are supported. We will focus on the Kubernetes one.
Having a look at the following ARM template, you will see two different resource types: Microsoft.TerraformOSS/providerregistrations and Microsoft.TerraformOSS/resources.
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "clusterName": {
      "type": "string",
      "metadata": {
        "description": "The name of the AKS cluster."
      }
    },
    "aksResourceGroup": {
      "type": "string",
      "metadata": {
        "description": "AKS cluster resource group name"
      }
    },
    "terraformResourceName": {
      "type": "string",
      "metadata": {
        "description": "The name of the Terraform deployment."
      }
    },
    "terraformResourceType": {
      "type": "string",
      "defaultValue": "kubernetes_config_map",
      "allowedValues": [
        "kubernetes_config_map",
        "kubernetes_horizontal_pod_autoscaler",
        "kubernetes_limit_range",
        "kubernetes_namespace",
        "kubernetes_persistent_volume",
        "kubernetes_persistent_volume_claim",
        "kubernetes_pod",
        "kubernetes_replication_controller",
        "kubernetes_resource_quota",
        "kubernetes_secret",
        "kubernetes_service",
        "kubernetes_service_account",
        "kubernetes_storage_class"
      ],
      "metadata": {
        "description": "The name of the Terraform resource type."
      }
    },
    "terraformResourceProviderLocation": {
      "type": "string",
      "defaultValue": "westcentralus",
      "allowedValues": [
        "westcentralus"
      ],
      "metadata": {
        "description": "Terraform resource provider location."
      }
    },
    "dnsZone": {
      "type": "string",
      "metadata": {
        "description": "The name of the DNS zone."
      }
    },
    "dnsServerIp": {
      "type": "string",
      "metadata": {
        "description": "The DNS server ip address."
      }
    }
  },
  "variables": {
    "apiVersion": {
      "aks": "2018-03-31",
      "terraform": "2018-05-01-preview"
    },
    "deploymentConfiguration": {
      "clusterName": "[parameters('clusterName')]",
      "aksResourceGroup": "[parameters('aksResourceGroup')]",
      "terraformLocation": "[parameters('terraformResourceProviderLocation')]",
      "terraformResourceName": "[parameters('terraformResourceName')]",
      "terraformResourceType": "[parameters('terraformResourceType')]",
      "dnsZone": "[parameters('dnsZone')]",
      "dnsServerIp": "[parameters('dnsServerIp')]"
    }
  },
  "resources": [
    {
      "apiVersion": "[variables('apiVersion').terraform]",
      "type": "Microsoft.TerraformOSS/providerregistrations",
      "name": "[variables('deploymentConfiguration').clusterName]",
      "location": "[variables('deploymentConfiguration').terraformLocation]",
      "properties": {
        "providertype": "kubernetes",
        "settings": {
          "inline_config": "[Base64ToString(ListCredential(resourceId(subscription().subscriptionId,variables('deploymentConfiguration').aksResourceGroup,'Microsoft.ContainerService/managedClusters/accessProfiles',variables('deploymentConfiguration').clusterName,'clusterAdmin'),variables('apiVersion').aks).properties.kubeConfig)]"
        }
      }
    },
    {
      "apiVersion": "[variables('apiVersion').terraform]",
      "type": "Microsoft.TerraformOSS/resources",
      "name": "[variables('deploymentConfiguration').terraformResourceName]",
      "location": "[variables('deploymentConfiguration').terraformLocation]",
      "dependsOn": [
        "[concat('Microsoft.TerraformOSS/providerregistrations/',variables('deploymentConfiguration').clusterName)]"
      ],
      "properties": {
        "providerId": "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.TerraformOSS/providerregistrations/',variables('deploymentConfiguration').clusterName)]",
        "resourcetype": "[variables('deploymentConfiguration').terraformResourceType]",
        "settings": {
          "metadata": [
            {
              "name": "kube-dns",
              "namespace": "kube-system"
            }
          ],
          "data": [
            {
              "stubDomains": "[concat('{\"',variables('deploymentConfiguration').dnsZone,'\": [\"',variables('deploymentConfiguration').dnsServerIp,'\"]}\n')]"
            }
          ]
        }
      }
    }
  ],
  "outputs": {}
}
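To make the nested concat() expression in the stubDomains property more tangible, here is a small shell sketch of the JSON string it renders to. The parameter values are the ones used later in this post; note that the template additionally appends a trailing newline:

```shell
# Parameter values matching the deployment later in this post.
DNS_ZONE="azure.local"
DNS_SERVER_IP="172.16.0.4"

# The concat() expression in the template renders the stubDomains
# value to a JSON object mapping the zone to its DNS server IP:
STUB_DOMAINS="{\"${DNS_ZONE}\": [\"${DNS_SERVER_IP}\"]}"
echo "${STUB_DOMAINS}"
# prints: {"azure.local": ["172.16.0.4"]}
```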
The providerregistrations type is used to get the connection and authentication information that will be used by the resources type to deploy the described configuration to your AKS cluster. More details about that can be found in the announcement blog post by Simon Davies.
-> https://azure.microsoft.com/en-us/blog/introducing-the-azure-terraform-resource-provider/
So, when would you use such an ARM template? First, you can use it during an AKS cluster deployment to deploy Kubernetes resources to the new cluster. Second, for an existing cluster, you can use ARM templates instead of YAML configuration files to deploy Kubernetes resources to the cluster itself.
The template I provided above can be used for both scenarios. For the first one you need a nested template configuration that deploys the AKS cluster first and then applies the Kubernetes configuration. The second one is what I will walk through now.
You can find the template in my GitHub repository.
-> https://github.com/neumanndaniel/armtemplates/blob/master/terraform/aksCustomDns.json
As described in my previous blog post, I am deploying a busybox pod for the name resolution tests.
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
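Assuming the pod definition above is saved as busybox.yaml (the file name is my choice here), it can be deployed with kubectl against the AKS cluster:

```shell
# Deploy the busybox pod used for the name resolution tests
# (assumes the YAML above is saved as busybox.yaml and kubectl
# is configured against the AKS cluster).
kubectl apply -f busybox.yaml

# Check that the pod reaches the Running state before testing
# name resolution.
kubectl get pod busybox
```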
Yes, I could have used the Terraform OSS RP for it, but decided to go with a YAML configuration file instead to focus on the ARM template for the Kubernetes config map configuration.
In my scenario, I wanted name resolution for the test domain azure.local. AKS cannot resolve it by default, because .local is not a standard TLD known to the public DNS system.
Moving on to the config map definition as described in the Kubernetes documentation, I am providing the AKS cluster with the necessary information on how to contact the custom DNS server for this specific domain. The DNS server sits in another VNET in Azure that is connected to the AKS VNET via VNET peering.
The config map definition then gets deployed via the ARM template through an Azure Cloud Shell session and the following Azure CLI command.
az group deployment create \
  --resource-group terraform \
  --template-file ./aksCustomDns.json \
  --parameters clusterName=azst-aks1 aksResourceGroup=aks \
    terraformResourceName=customDnsZone \
    terraformResourceType=kubernetes_config_map \
    terraformResourceProviderLocation=westcentralus \
    dnsZone=azure.local dnsServerIp=172.16.0.4
For the template deployment we provide the AKS cluster name, the resource group the AKS cluster is sitting in, the Terraform resource type for a Kubernetes config map, the Terraform RP location, the name of the DNS zone and the DNS server IP address as parameters. The Terraform RP is only available in the Azure region West Central US right now.
As you can see above, the config map definition was successfully deployed to the AKS cluster, and we now have two resources in Azure, one for each of the two Terraform resource types.
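If you want to inspect the two Terraform RP resources yourself, a query along these lines should work; the resource group name is the one used in the deployment above, and the exact output shape may differ in the private preview:

```shell
# List the provider registration and the Terraform-managed resource
# created by the deployment in the resource group "terraform".
az resource list --resource-group terraform \
  --query "[?contains(type, 'Microsoft.TerraformOSS')].{name:name, type:type}" \
  --output table
```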
After a few seconds, we are able to resolve the domain azure.local and its records.
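You can verify the name resolution from inside the busybox pod, for example like this; the host name server1 is just a placeholder for a record that exists in my azure.local zone:

```shell
# Test name resolution from inside the busybox pod.
# "server1" is a hypothetical A record in the azure.local zone.
kubectl exec busybox -- nslookup server1.azure.local
```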
In my opinion, the Terraform OSS RP is a perfect addition to the Azure Resource Manager template capabilities we have today. If you are very comfortable with ARM templates, the Terraform OSS RP gives you additional tooling to deploy Kubernetes resources onto AKS clusters instead of using YAML configuration files, which need to be deployed via kubectl, for example. In the end you have the freedom to choose what you would like to use, and that is great in my opinion.
If you missed it in the beginning and would like to try out the Terraform OSS RP, sign up for the private preview under the following link.