Looking at Azure Container Service (AKS) – Managed Kubernetes, you may have recognized that AKS currently supports neither bring your own VNET nor private Kubernetes masters. If you need one or both of these capabilities today, you must use ACS Engine to create the necessary Azure Resource Manager templates for the Kubernetes cluster deployment.
-> https://github.com/Azure/acs-engine
Besides that, ACS Engine has advantages and disadvantages. Some of the ACS Engine advantages are support for RBAC, Managed Service Identity, private Kubernetes masters, bring your own VNET, and jump box deployment. You can even choose your favorite CNI plugin for network policies, such as the Azure CNI plugin, Calico, or Cilium. If you want, you can specify none at all, but the default option is the Azure CNI plugin.
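As an illustration, switching the plugin is a single setting in the kubernetesConfig section of the cluster definition. The snippet below is a minimal sketch with everything else omitted; the accepted values depend on your ACS Engine version.

"kubernetesConfig": {
  "networkPolicy": "calico" // default is "azure"; "cilium" and "none" are further options
}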
The main disadvantage of ACS Engine is that it creates a non-managed Kubernetes cluster. You are responsible for nearly everything to keep the cluster operational.
So, you have greater flexibility with ACS Engine, but you are responsible for far more compared to a managed solution.
After setting the context, let us now start with the two scenarios I would like to talk about.
- Private Kubernetes cluster with bring your own VNET and jump box deployment
- Private Kubernetes cluster with bring your own VNET, custom Kubernetes service CIDR, custom Kubernetes DNS server IP address and jump box deployment
For each scenario, a VNET with the address space 172.16.0.0/16 and one subnet with the address space 172.16.0.0/20 will serve as the foundation into which the Kubernetes cluster is deployed.
Starting with the first scenario and its cluster definition file kubernetes.json, have a look at the following lines.
{ "apiVersion": "vlabs", "properties": { "orchestratorProfile": { "orchestratorType": "Kubernetes", "orchestratorRelease": "1.10", "kubernetesConfig": { "useManagedIdentity": true, "networkPolicy": "azure", "containerRuntime": "docker", "enableRbac": true, "maxPods":30, "useInstanceMetadata": true, "addons": [ { "name": "tiller", "enabled": true }, { "name": "kubernetes-dashboard", "enabled": true } ], "privateCluster": { "enabled": true, "jumpboxProfile": { "name": "azst-acse1-jb", "vmSize": "Standard_A2_v2", "osDiskSizeGB": 32, "storageProfile": "ManagedDisks", "username": "azureuser", "publicKey": "REDACTED" } } } }, "masterProfile": { "count": 1, "dnsPrefix": "azst-acse1", "vmSize": "Standard_A2_v2", "osDiskSizeGB": 32, "distro": "ubuntu", "vnetSubnetId": "/subscriptions/REDACTED/resourceGroups/acs-engine/providers/Microsoft.Network/virtualNetworks/acs-engine/subnets/k8s", "firstConsecutiveStaticIP": "172.16.15.239", "vnetCIDR": "172.16.0.0/16" }, "agentPoolProfiles": [ { "name": "agentpool", "count": 3, "vmSize": "Standard_A2_v2", "osDiskSizeGB": 32, "distro": "ubuntu", "storageProfile": "ManagedDisks", "diskSizesGB": [ 32 ], "availabilityProfile": "AvailabilitySet", "vnetSubnetId": "/subscriptions/REDACTED/resourceGroups/acs-engine/providers/Microsoft.Network/virtualNetworks/acs-engine/subnets/k8s" } ], "linuxProfile": { "adminUsername": "azureuser", "ssh": { "publicKeys": [ { "keyData": "REDACTED" } ] } } } }
If your network does not overlap with the Kubernetes service CIDR, you only need to specify the VNET subnet ID for the master and agent nodes.
{ "apiVersion": "vlabs", "properties": { "orchestratorProfile": { ... }, "masterProfile": { ... "vnetSubnetId": "/subscriptions/REDACTED/resourceGroups/acs-engine/providers/Microsoft.Network/virtualNetworks/acs-engine/subnets/k8s", "firstConsecutiveStaticIP": "172.16.15.239", "vnetCIDR": "172.16.0.0/16" }, "agentPoolProfiles": [ { ... "vnetSubnetId": "/subscriptions/REDACTED/resourceGroups/acs-engine/providers/Microsoft.Network/virtualNetworks/acs-engine/subnets/k8s" } ], "linuxProfile": { ... } } }
Additionally, you should set the first consecutive IP and the VNET CIDR in the master node section. With the first consecutive IP, you define the IP address of the first master node; for details, check the ACS Engine documentation. The VNET CIDR should be set to prevent source address NATing within the VNET.
-> https://github.com/Azure/acs-engine/blob/master/docs/kubernetes/features.md#feat-custom-vnet
Our second case covers the necessary configuration steps if your network overlaps with the Kubernetes service CIDR. In that case, you want to change the Kubernetes service CIDR, the Kubernetes DNS server IP address, and the pod CIDR. Have a look at the following lines.
{ "apiVersion": "vlabs", "properties": { "orchestratorProfile": { "orchestratorType": "Kubernetes", "orchestratorRelease": "1.10", "kubernetesConfig": { "useManagedIdentity": true, "kubeletConfig": { "--non-masquerade-cidr": "172.16.0.0/20" }, "clusterSubnet": "172.16.0.0/20", "dnsServiceIP": "172.16.16.10", "serviceCidr": "172.16.16.0/20", "networkPolicy": "azure", "containerRuntime": "docker", "enableRbac": true, "maxPods":30, "useInstanceMetadata": true, "addons": [ { "name": "tiller", "enabled": true }, { "name": "kubernetes-dashboard", "enabled": true } ], "privateCluster": { "enabled": true, "jumpboxProfile": { "name": "azst-acse1-jb", "vmSize": "Standard_A2_v2", "osDiskSizeGB": 32, "storageProfile": "ManagedDisks", "username": "azureuser", "publicKey": "REDACTED" } } } }, "masterProfile": { "count": 1, "dnsPrefix": "azst-acse1", "vmSize": "Standard_A2_v2", "osDiskSizeGB": 32, "distro": "ubuntu", "vnetSubnetId": "/subscriptions/REDACTED/resourceGroups/acs-engine/providers/Microsoft.Network/virtualNetworks/acs-engine/subnets/k8s", "firstConsecutiveStaticIP": "172.16.15.239", "vnetCIDR": "172.16.0.0/16" }, "agentPoolProfiles": [ { "name": "agentpool", "count": 3, "vmSize": "Standard_A2_v2", "osDiskSizeGB": 32, "distro": "ubuntu", "storageProfile": "ManagedDisks", "availabilityProfile": "AvailabilitySet", "vnetSubnetId": "/subscriptions/REDACTED/resourceGroups/acs-engine/providers/Microsoft.Network/virtualNetworks/acs-engine/subnets/k8s" } ], "linuxProfile": { "adminUsername": "azureuser", "ssh": { "publicKeys": [ { "keyData": "REDACTED" } ] } } } }
In this case the service CIDR is part of the VNET CIDR to ensure that it does not overlap with existing networks. The DNS server IP address must be within the service CIDR space, and the pod CIDR equals the VNET subnet address space, because we are using the Azure CNI plugin. So, the pods receive an IP address directly from the VNET subnet. If you are using another CNI plugin or none, make sure to use an address space that is part of the VNET CIDR instead. The final parameter is --non-masquerade-cidr, which must be set to the VNET subnet CIDR. Here is a brief overview of all necessary settings.
{ "apiVersion": "vlabs", "properties": { "orchestratorProfile": { ... "kubernetesConfig": { ... "kubeletConfig": { "--non-masquerade-cidr": "172.16.0.0/20" //VNET subnet CIDR address space }, "clusterSubnet": "172.16.0.0/20", //VNET subnet CIDR address space "dnsServiceIP": "172.16.16.10", //IP address in serviceCidr address space "serviceCidr": "172.16.16.0/20", //CIDR address space in VNET CIDR - no overlapping in VNET ... } }, "masterProfile": { ... "vnetSubnetId": "/subscriptions/REDACTED/resourceGroups/acs-engine/providers/Microsoft.Network/virtualNetworks/acs-engine/subnets/k8s", "firstConsecutiveStaticIP": "172.16.15.239", "vnetCIDR": "172.16.0.0/16" //VNET CIDR address space }, "agentPoolProfiles": [ { ... "vnetSubnetId": "/subscriptions/REDACTED/resourceGroups/acs-engine/providers/Microsoft.Network/virtualNetworks/acs-engine/subnets/k8s" } ], "linuxProfile": { ... } } }
The next step on our way to the private Kubernetes cluster with bring your own VNET is the generation of the Azure Resource Manager templates using ACS Engine. This assumes you have downloaded the necessary ACS Engine bits and placed the kubernetes.json config in the same folder.
./acs-engine.exe generate ./kubernetes.json
The command generates the ARM templates and places them in the _output folder by default.
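The generated artifacts typically land in a subfolder named after the dnsPrefix from the cluster definition, so in this example _output/azst-acse1. The listing below is an illustrative sketch; the exact file set may vary between ACS Engine versions.

ls ./_output/azst-acse1
# typically contains, among others:
# apimodel.json  azuredeploy.json  azuredeploy.parameters.json  kubeconfig/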
Now, start the Azure Cloud Shell (https://shell.azure.com) and jump into the local _output folder. Upload azuredeploy.json and azuredeploy.parameters.json to the Cloud Shell via drag and drop. Afterwards, kick off the deployment using the Azure CLI.
az group create --name acs-engine --location westeurope

az network vnet create --name acs-engine --resource-group acs-engine --address-prefixes 172.16.0.0/16 --subnet-name K8s --subnet-prefix 172.16.0.0/20

az group deployment create --resource-group acs-engine --template-file ./azuredeploy.json --parameters ./azuredeploy.parameters.json --verbose
After the successful deployment, we can connect to the jump box and check whether all settings have taken effect.
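The jump box is the only VM reachable from the outside, so the first step is an SSH connection with the key pair referenced in the jumpboxProfile. The host below is a placeholder for the public IP address or DNS name assigned by the deployment.

ssh -i ~/.ssh/id_rsa azureuser@<jump-box-public-ip-or-dns>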
azcdmdn@azst-acse1-jb:~$ kubectl get services --all-namespaces -o wide
NAMESPACE     NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)         AGE   SELECTOR
default       azure-vote-back        ClusterIP      172.16.29.115   <none>          6379/TCP        13d   app=azure-vote-back
default       azure-vote-front       LoadBalancer   172.16.30.243   51.144.43.142   80:32627/TCP    13d   app=azure-vote-front
default       kubernetes             ClusterIP      172.16.16.1     <none>          443/TCP         13d   <none>
kube-system   heapster               ClusterIP      172.16.18.168   <none>          80/TCP          13d   k8s-app=heapster
kube-system   kube-dns               ClusterIP      172.16.16.10    <none>          53/UDP,53/TCP   13d   k8s-app=kube-dns
kube-system   kubernetes-dashboard   NodePort       172.16.28.177   <none>          80:32634/TCP    13d   k8s-app=kubernetes-dashboard
kube-system   metrics-server         ClusterIP      172.16.26.104   <none>          443/TCP         13d   k8s-app=metrics-server
kube-system   tiller-deploy          ClusterIP      172.16.29.150   <none>          44134/TCP       13d   app=helm,name=tiller
azcdmdn@azst-acse1-jb:~$ kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                             READY   STATUS    RESTARTS   AGE   IP              NODE
default       azure-vote-back-68d6c68dcc-xcfgk                 1/1     Running   0          2h    172.16.0.46     k8s-agentpool-35404701-2
default       azure-vote-front-7976b7dcd9-mt2hh                1/1     Running   0          2h    172.16.0.51     k8s-agentpool-35404701-2
default       kured-2hrwh                                      1/1     Running   5          7d    172.16.0.22     k8s-agentpool-35404701-0
default       kured-5sgdn                                      1/1     Running   4          7d    172.16.0.45     k8s-agentpool-35404701-2
default       kured-qgbsx                                      1/1     Running   4          7d    172.16.0.99     k8s-agentpool-35404701-1
default       omsagent-2d2kf                                   1/1     Running   10         8d    172.16.0.41     k8s-agentpool-35404701-2
default       omsagent-ql9xv                                   1/1     Running   18         13d   172.16.0.84     k8s-master-35404701-0
default       omsagent-sc2xm                                   1/1     Running   10         8d    172.16.0.110    k8s-agentpool-35404701-1
default       omsagent-szj6j                                   1/1     Running   9          8d    172.16.0.16     k8s-agentpool-35404701-0
default       vsts-agent-qg5rz                                 1/1     Running   1          2h    172.16.0.52     k8s-agentpool-35404701-2
kube-system   heapster-568476f785-c46r9                        2/2     Running   0          2h    172.16.0.39     k8s-agentpool-35404701-2
kube-system   kube-addon-manager-k8s-master-35404701-0         1/1     Running   6          13d   172.16.15.239   k8s-master-35404701-0
kube-system   kube-apiserver-k8s-master-35404701-0             1/1     Running   9          13d   172.16.15.239   k8s-master-35404701-0
kube-system   kube-controller-manager-k8s-master-35404701-0    1/1     Running   6          13d   172.16.15.239   k8s-master-35404701-0
kube-system   kube-dns-v20-59b4f7dc55-wtv6h                    3/3     Running   0          2h    172.16.0.44     k8s-agentpool-35404701-2
kube-system   kube-dns-v20-59b4f7dc55-xxgdd                    3/3     Running   0          2h    172.16.0.48     k8s-agentpool-35404701-2
kube-system   kube-proxy-hf467                                 1/1     Running   6          13d   172.16.15.239   k8s-master-35404701-0
kube-system   kube-proxy-n8sj6                                 1/1     Running   7          8d    172.16.0.5      k8s-agentpool-35404701-0
kube-system   kube-proxy-nb4gx                                 1/1     Running   6          8d    172.16.0.36     k8s-agentpool-35404701-2
kube-system   kube-proxy-r8wdz                                 1/1     Running   6          8d    172.16.0.97     k8s-agentpool-35404701-1
kube-system   kube-scheduler-k8s-master-35404701-0             1/1     Running   6          13d   172.16.15.239   k8s-master-35404701-0
kube-system   kubernetes-dashboard-64dcf5784f-gxtqv            1/1     Running   0          2h    172.16.0.109    k8s-agentpool-35404701-1
kube-system   metrics-server-7fcdc5dbb9-vs26l                  1/1     Running   0          2h    172.16.0.55     k8s-agentpool-35404701-2
kube-system   tiller-deploy-d85ccb55c-6nncg                    1/1     Running   0          2h    172.16.0.57     k8s-agentpool-35404701-2
azcdmdn@azst-acse1-jb:~$ kubectl get nodes --all-namespaces -o wide
NAME                       STATUS   ROLES    AGE   VERSION   EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION      CONTAINER-RUNTIME
k8s-agentpool-35404701-0   Ready    agent    13d   v1.10.0   <none>        Debian GNU/Linux 9 (stretch)   4.13.0-1014-azure   docker://1.13.1
k8s-agentpool-35404701-1   Ready    agent    13d   v1.10.0   <none>        Debian GNU/Linux 9 (stretch)   4.13.0-1014-azure   docker://1.13.1
k8s-agentpool-35404701-2   Ready    agent    13d   v1.10.0   <none>        Debian GNU/Linux 9 (stretch)   4.13.0-1014-azure   docker://1.13.1
k8s-master-35404701-0      Ready    master   13d   v1.10.0   <none>        Debian GNU/Linux 9 (stretch)   4.13.0-1014-azure   docker://1.13.1
The private Kubernetes cluster with bring your own VNET and custom network configuration is now fully operational and ready for some container deployments.
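If you want a quick smoke test from the jump box, a sample workload such as the Azure voting app (which is what produced the azure-vote services shown above) will do. The manifest URL below points to the Azure-Samples repository and may change over time.

kubectl apply -f https://raw.githubusercontent.com/Azure-Samples/azure-voting-app-redis/master/azure-vote-all-in-one-redis.yaml
kubectl get service azure-vote-front --watch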
Last but not least, I highly recommend that you secure your jump box with the Azure Security Center just-in-time VM access capability.
-> https://docs.microsoft.com/en-us/azure/security-center/security-center-just-in-time