Recently, I updated my Terraform AKS module, switching from an AAD service principal to the managed identity option and from the AAD v1 integration to the managed AAD v2 integration.
Other changes and improvements include:
- Private cluster support
- Managed control plane SKU tier support
- Windows node pool support
- Node labels support
- Parameterized addon_profile section
-> https://github.com/neumanndaniel/terraform/tree/master/modules/aks
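To give a rough idea of how these features map onto the underlying provider, here is a minimal sketch of the corresponding azurerm_kubernetes_cluster arguments (argument names per azurerm provider 2.x; the resource name, location, and values are placeholders, not the actual module code):

```hcl
resource "azurerm_kubernetes_cluster" "example" {
  name                = "azst-aks1"
  location            = "westeurope"
  resource_group_name = "aks-rg"
  dns_prefix          = "azst-aks1"

  private_cluster_enabled = true   # private cluster support
  sku_tier                = "Paid" # managed control plane SKU tier

  default_node_pool {
    name        = "nodepool1"
    node_count  = 3
    vm_size     = "Standard_D4_v3"
    node_labels = { "environment" = "demo" } # node labels support
  }

  identity {
    type = "SystemAssigned"
  }
}
```

Windows node pools are additionally covered by the azurerm_kubernetes_cluster_node_pool resource with os_type set to "Windows".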
Overall, the switch to managed identity and the managed AAD integration removes some operational burden, such as regular credential rotation, and makes the deployment much easier.
...
  identity {
    type = "SystemAssigned"
  }

  role_based_access_control {
    enabled = true

    azure_active_directory {
      managed = true
      admin_group_object_ids = [
        data.azuread_group.aks.id
      ]
    }
  }
...
Here is an example of how to use the module to deploy an Azure Kubernetes Service cluster using managed identity and the managed AAD integration.
module "aks" {
  source = "../modules/aks"

  resource_group_name        = module.resource_group.name
  location                   = module.resource_group.location
  container_registry_id      = data.azurerm_container_registry.aks.id
  log_analytics_workspace_id = data.azurerm_log_analytics_workspace.aks.id

  name               = "azst-aks1"
  kubernetes_version = "1.18.4"
  private_cluster    = false
  sla_sku            = "Free"
  vnet_subnet_id     = module.virtual_network.subnet_id
  aad_group_name     = "AKS-Admins"
  api_auth_ips       = []

  addons = {
    oms_agent            = true
    kubernetes_dashboard = false
    azure_policy         = false
  }

  default_node_pool = {
    name                           = "nodepool1"
    node_count                     = 3
    vm_size                        = "Standard_D4_v3"
    zones                          = ["1", "2", "3"]
    taints                         = null
    cluster_auto_scaling           = false
    cluster_auto_scaling_min_count = null
    cluster_auto_scaling_max_count = null
    labels = {
      "environment" = "demo"
    }
  }

  additional_node_pools = {}
}
The RBAC role assignments for the managed identity option differ from the ones used with a service principal.
...
resource "azurerm_role_assignment" "aks" {
  scope                = azurerm_kubernetes_cluster.aks.id
  role_definition_name = "Monitoring Metrics Publisher"
  principal_id         = azurerm_kubernetes_cluster.aks.identity[0].principal_id
}

resource "azurerm_role_assignment" "aks_subnet" {
  scope                = var.vnet_subnet_id
  role_definition_name = "Network Contributor"
  principal_id         = azurerm_kubernetes_cluster.aks.identity[0].principal_id
}

resource "azurerm_role_assignment" "aks_acr" {
  scope                = var.container_registry_id
  role_definition_name = "AcrPull"
  principal_id         = azurerm_kubernetes_cluster.aks.kubelet_identity[0].object_id
}
...
For the necessary permissions on the Virtual Network subnet, you use the AKS cluster's managed identity. To allow the AKS cluster to pull images from your Azure Container Registry, you use another managed identity that gets created for all node pools, the so-called kubelet identity.
Besides that, when you enable the Azure Monitor for containers and Azure Policy for AKS add-ons, each add-on gets its own managed identity.
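If you ever need to grant one of these add-on identities additional permissions, you can reference it via the cluster resource's exported attributes. A hedged sketch for the Azure Monitor for containers (OMS agent) identity, assuming the attribute path of azurerm provider 2.x, which may differ in other provider versions:

```hcl
# Sketch: expose the object ID of the OMS agent add-on identity
# (attribute path per azurerm provider 2.x).
output "oms_agent_identity_object_id" {
  value = azurerm_kubernetes_cluster.aks.addon_profile[0].oms_agent[0].oms_agent_identity[0].object_id
}
```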
As always, you can find the modules in my GitHub repository.
-> https://github.com/neumanndaniel/terraform/tree/master/modules
Over the next few weeks, I will update the Azure Resource Manager templates for AKS as well. Stay tuned.