Setting custom upstream nameservers for CoreDNS in Azure Kubernetes Service

Last year I wrote a blog post about configuring kube-dns in Azure Kubernetes Service to provide a custom nameserver for DNS name resolution.

-> https://www.danielstechblog.io/using-custom-dns-server-for-domain-specific-name-resolution-with-azure-kubernetes-service/

Since then, Kubernetes has switched its default DNS server to CoreDNS, and AKS has as well. Today I am not revisiting the topic of my previous blog post; that will follow in the next few days. Instead, I am focusing on the custom upstream nameserver configuration for CoreDNS.

Looking at the official Kubernetes and AKS docs, you might think this is simple and that you only need to apply the following ConfigMap to your AKS cluster. Be prepared, it is not quite that straightforward!

-> https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#coredns-configuration-equivalent-to-kube-dns
-> https://docs.microsoft.com/en-us/azure/aks/coredns-custom

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  Corefile.override: |
    forward . 8.8.8.8 8.8.4.4

First, you stumble across the issue that CoreDNS does not load the ConfigMap after you apply the template with kubectl apply -f configMap.yaml. This is a known issue discussed in the CoreDNS and AKS GitHub repositories. You must delete or restart the CoreDNS pods to get your custom ConfigMap settings loaded. Just run the following command and you should be fine. Note that you need at least kubectl version 1.15.0 for it.

kubectl -n kube-system rollout restart deployment coredns

If you then take a look at the CoreDNS logs with kubectl logs, you are greeted by lots of DNS name resolution errors.

[WARNING] No files matching import glob pattern: custom/*.server
.:53
2019-08-06T21:23:47.180Z [INFO] CoreDNS-1.3.1
2019-08-06T21:23:47.180Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-08-06T21:23:47.180Z [INFO] plugin/reload: Running configuration MD5 = 3d857228607ba1ff23e0d609eae89195
2019-08-06T21:23:54.410Z [ERROR] plugin/errors: 2 v1-go-webapp.default.svc.cluster.local.xbmjdg5ws0bufpxuyfmkdn5ihb.fx.internal.cloudapp.net. A: read udp 10.240.0.253:33645->8.8.8.8:53: i/o timeout
2019-08-06T21:23:54.733Z [ERROR] plugin/errors: 2 helloworld-function-figlet.default.svc.cluster.local.xbmjdg5ws0bufpxuyfmkdn5ihb.fx.internal.cloudapp.net. A: read udp 10.240.0.253:51913->8.8.8.8:53: i/o timeout
2019-08-06T21:23:55.264Z [ERROR] plugin/errors: 2 akscnicalc-function-akscnicalc.default.svc.cluster.local.xbmjdg5ws0bufpxuyfmkdn5ihb.fx.internal.cloudapp.net. A: read udp 10.240.0.253:46457->8.8.8.8:53: i/o timeout
....

The reason for that is Azure’s internal DNS name resolution in a Virtual Network.

-> https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances#name-resolution-that-uses-your-own-dns-server

By default, every pod in AKS / Kubernetes uses the ClusterFirst dnsPolicy.

-> https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
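To illustrate, a minimal pod spec with the dnsPolicy field set explicitly might look like the sketch below. ClusterFirst is also what you get when the field is omitted entirely; the pod name and image here are just placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-example        # placeholder name
spec:
  dnsPolicy: ClusterFirst  # the default; queries go to the cluster DNS (CoreDNS) first
  containers:
    - name: app
      image: nginx         # placeholder image
```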

Every request that does not match the configured cluster domain suffix is forwarded to the upstream nameserver taken from the /etc/resolv.conf file on the worker nodes. Guess what: unless you specify a custom DNS server in the Virtual Network settings, the nameserver referenced in /etc/resolv.conf is the Azure DNS virtual server 168.63.129.16, which provides DNS name resolution to the VMs in Azure. Because we have overwritten the upstream nameservers with the Google DNS servers, they do not know the domain xbmjdg5ws0bufpxuyfmkdn5ihb.fx.internal.cloudapp.net.

-> https://docs.microsoft.com/en-us/azure/virtual-network/what-is-ip-address-168-63-129-16
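For context, the stock Corefile in AKS forwards everything it cannot answer itself to the node's /etc/resolv.conf, roughly like the sketch below. This is simplified from the coredns ConfigMap in the kube-system namespace; the exact plugin list may differ between AKS and CoreDNS versions.

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf   # upstream = Azure DNS 168.63.129.16 unless overridden
    cache 30
    loop
    reload
}
```

The Corefile.override in the coredns-custom ConfigMap replaces exactly this forward behavior, which is why the internal Azure names stop resolving.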

So, to get our configuration working without flooding the CoreDNS log with DNS name resolution errors, we specify domain-specific name resolution for internal.cloudapp.net.

The following ConfigMap template contains the necessary configuration.

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  Corefile.override: |
    forward . 8.8.8.8 8.8.4.4
  azure.server: |
    internal.cloudapp.net:53 {
        errors
        cache 30
        proxy . 168.63.129.16
    }

-> https://github.com/neumanndaniel/kubernetes/blob/master/coredns/aksCoreDnsConfigMap.yaml
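One caveat worth noting: later CoreDNS releases deprecated and eventually removed the proxy plugin in favor of forward. The proxy syntax above matches the CoreDNS 1.3.1 version running here; if your cluster ships a newer CoreDNS, the equivalent server block would be the following sketch.

```
internal.cloudapp.net:53 {
    errors
    cache 30
    forward . 168.63.129.16
}
```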

Again, we run kubectl apply -f configMap.yaml && kubectl -n kube-system rollout restart deployment coredns to apply the changes to the custom CoreDNS ConfigMap object and restart the CoreDNS pods.

When you now take a look at the CoreDNS log, the output should look like this.

.:53
internal.cloudapp.net.:53
2019-08-06T21:45:01.558Z [INFO] CoreDNS-1.3.1
2019-08-06T21:45:01.558Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-08-06T21:45:01.558Z [INFO] plugin/reload: Running configuration MD5 = 3d857228607ba1ff23e0d609eae89195

Finally, CoreDNS uses the custom upstream nameservers for DNS name resolution.

In the next blog post, I will focus on the details of the data section of the custom ConfigMap object for CoreDNS in AKS.
